MCSA 70-462 Training Kit Preparation: Setting Up the Hyper-V Lab Environment

If you are planning on taking Microsoft Certification Exam 70-462, this guide provides step-by-step instructions to completely set up the Hyper-V lab environment used in the official Training Kit.

The post was published in its entirety on November 25th, 2013, at my favorite SQL Server resource destination, SQL Server Central (http://www.sqlservercentral.com). Check it out there!


Introduction

The Microsoft Press book, Exam 70-462: Administering Microsoft SQL Server 2012 Databases, requires a substantial lab environment for the reader to follow along with the instructions in the kit. It recommends utilizing Hyper-V to set up a Domain Controller (DC) and an additional five domain members, all running Windows Server 2008 R2 Enterprise. While the book covers the hardware requirements needed to support the virtual lab, it does not provide the instructions for creating it.

This article will provide step-by-step instructions for creating the Hyper-V lab environment described in the 70-462 Training Kit. It assumes the reader has the necessary materials and software. Familiarity with Windows Hyper-V is not required. The instructions provided were documented using Hyper-V with Windows 8 Enterprise as the host machine and Windows Server 2008 R2 for the Hyper-V virtual machines.

Following the steps outlined in this post should leave the reader with the lab environment necessary to work through the entire 70-462 Training Kit book.


Non-Invocable Member Cannot Be Used Like A Method

If you’re here by way of searching for the exception named in the title of this post, you might be trying to convert VB.NET (Visual Basic .NET) code to C#. The remedy for this error is usually simple. C# uses brackets, e.g. [brackets], for referencing items in a collection, whereas VB.NET uses parentheses, e.g. (parentheses).

If you are translating VB.NET code into a C# code window, you will want to pay special attention to those lines referencing items in a collection.

An image of the C# code window displaying the exception, using the Dts.Variables collection as an example:

An image of the corrected C# code:

Square brackets in C# are typically used for arrays, indexers, attributes and pointers; parentheses are usually used for method calls. VB.NET uses parentheses for both. When translating VB.NET code to C#, be sure to replace those offending parentheses with brackets.

Beware, too, that VB.NET allows parentheses to be omitted when calling parameterless functions and procedures. This will also trip up the compiler when converting VB.NET code to C#, where the empty parentheses are required.
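For illustration, here is a minimal before-and-after sketch using the Dts.Variables collection from an SSIS Script Task (as in the screenshots above); the variable name User::FileName is just a made-up example:

// VB.NET uses parentheses for both collection access and method calls:
//   Dim fileName As String = Dts.Variables("User::FileName").Value.ToString()

// C# uses square brackets for the collection indexer; method calls keep their
// parentheses, even when there are no arguments:
string fileName = Dts.Variables["User::FileName"].Value.ToString();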

Links below for additional information.

[] Operator (C# Reference)
http://msdn.microsoft.com/en-us/library/a3hd7ste.aspx

Methods (C# Programming Guide)
http://msdn.microsoft.com/en-us/library/ms173114.aspx

Force parentheses even when calling parameterless functions in VB.NET?
http://stackoverflow.com/questions/1298605/force-parentheses-even-when-calling-parameterless-functions-in-vb-net

Retrieving MAX Value Across Columns in T-SQL

I recently discovered a neat trick for quickly obtaining the max value across multiple columns in the same row. As I was mentally working out the approach, I hit up the search engine to see if, as is often the case, someone else had already crafted an elegant solution. I found the one I liked best at http://www.calsql.com/2012/10/max-function-from-multiple-columns.html.

My use of the Transact-SQL Table Value Constructor (TVC) had primarily been limited to INSERT statements, e.g. INSERT INTO SomeTable VALUES ('AAA'), ('BBB'), ('CCC');. Thinking of using it in a SELECT statement as a means to retrieve the max value across columns wasn't something I had considered. Hats off to the mysterious author of the blog post at CALSQL.

 

In a nutshell, this post will demonstrate using the MAX function in tandem with the TVC to retrieve the maximum value that exists across multiple columns in each row. This requires SQL Server 2008 or above.

 

I had a query that was returning percentages for three different attributes in each row. The percentages indicated the likelihood of a match for the Name, Address and Phone information against another source. You can see from the capture below that each row contains a percentage for each of the three attributes for each potential match.



To return the maximum of these values within each row, we add a column to the result set that combines the TVC with the MAX function.



Let’s break down what’s happening in the following statement:

(SELECT MAX(mv)
   FROM (VALUES (NameSimilarity),
                (AddressSimilarity),
                (PhoneSimilarity)) AS x(mv)) AS MaxSimilarity

 

AS x(mv) – When used as a derived table, the TVC is required to have a name; in this example the derived table is named "x". The derived table's column names are listed in parentheses following the table name; here "mv" is used (short for MaxValue).

AS MaxSimilarity – This is simply the column alias in our outer query. It is not part of the TVC and is not required for the example to work.

 

(VALUES (NameSimilarity), (AddressSimilarity), (PhoneSimilarity)) – This is the nut: the TVC derived table. It warrants some in-depth explanation:

 

“VALUES” is followed by three column names from the master query. The interesting thing to note is that it allows you to pivot these three columns into a single column (see “mv” above) with three rows in the form of a derived table. This is what allows you to leverage the MAX function. In TVC syntax, items in parentheses following the VALUES keyword are individual “rows”.

 

(SELECT MAX(mv) FROM …) – This is the MAX function operating on the derived table's "mv" column, returning the largest of the three values in that row.
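To tie it all together, here is a self-contained sketch you can run; the #Similarity table and its sample values are made up for illustration, but the column names match the ones used above:

-- Hypothetical sample data mirroring the columns discussed above
CREATE TABLE #Similarity
(
    Name              varchar(50),
    NameSimilarity    decimal(5,2),
    AddressSimilarity decimal(5,2),
    PhoneSimilarity   decimal(5,2)
);

INSERT INTO #Similarity
VALUES ('John Smith', 95.00, 80.00, 99.00),
       ('Jane Doe',   70.00, 85.00, 60.00);

-- The TVC pivots the three columns into rows so MAX can operate on them
SELECT Name,
       NameSimilarity,
       AddressSimilarity,
       PhoneSimilarity,
       (SELECT MAX(mv)
        FROM (VALUES (NameSimilarity),
                     (AddressSimilarity),
                     (PhoneSimilarity)) AS x(mv)) AS MaxSimilarity
FROM #Similarity;

DROP TABLE #Similarity;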

 

For more information about the Table Value Constructor, be sure to check out the official documentation at http://technet.microsoft.com/en-us/library/dd776382.aspx.

 


 

Implementing a Custom Change Data Capture System – Part Two

Introduction

This posting continues from my first blog Implementing a Custom Change Data Capture System. It may be helpful to review Part One before diving into Part Two.

Part One described how our team was challenged with late arriving requirement changes for an existing Extract Transform Load (ETL) solution. The new requirements dictated that changes in the source transactional system propagate to the destination transactional system. The source system had no inherent reliable means of tracking changes, and altering the source system was not an option. Part One covered the team’s proposed solutions and briefly described the one we selected, using HASHBYTES as a means to monitor and respond to changes in the source system.

Part Two will conclude the series and provide the details of the solution our team implemented. The end of this post contains a link to downloadable content that can be used to demonstrate the custom change data capture system and follow along.

Implementation

Time was of the essence, so we quickly set forth on our proof of concept. We discovered in short order that CHECKSUM was not likely to work for us, as it has some known issues; most notably, it is prone to collisions, where different inputs can produce the same checksum value. The finer points are beyond the scope of this article.

We were a little discouraged, because CHECKSUM has the advantage of easily returning a value across all columns of a given table. Fortunately, concatenating table columns for HASHBYTES did not prove too cumbersome, and it allowed us to monitor only a subset of table columns for changes.

At a high-level, here is the solution I am about to describe in detail (I lifted and modified this nifty flow diagram out of PowerPoint from my esteemed colleague, Frederick Chun):

Update Process

  1. Compare Last Hash Value: Apollo’s Vehicle entity will be monitored for changes to the data by comparing the last HASHBYTES value.
  2. Stage Updates: Any changed data will be staged in an interim table for processing.
  3. Exclude Challenger Updated Items: If the Vehicle entity has been edited by a user directly in the Challenger system since the last Apollo update, it will be excluded from the current Apollo update and all future Apollo updates. This was a business requirement that basically said: if a user updates a particular entity in Challenger, e.g. CarID 123XYZ, then the user has indicated that CarID 123XYZ should no longer receive updates from Apollo, and the Challenger edits should not be overwritten.
  4. Update Challenger: Challenger data will be overwritten with the Apollo updates that have occurred since the last Apollo update.
  5. Archive Update History: An audit trail of all updates will be persisted into an archive table.

The thing I liked most about this approach is this—we had an existing, unalterable legacy system that contained no reliable means of tracking changes to the data. With the addition of two columns to a table in another database (SourceHashValue, SourceHashDateTime), we were able to implement a full-fledged and reliable change data capture system. The legacy system had no need of any awareness about this other database.

Recall the ReferenceCar table from Part One. This table lived in the ETL team’s reference database and served a single purpose; it provided the surrogate key mapping between the cars in Apollo and the cars in Challenger.

ReferenceCar Table

The ReferenceCar table was augmented with the following columns:

  1. SourceHashValue varbinary(8000) – This persisted the HASHBYTES hash value for the Apollo Car data from one update to the next.
  2. SourceHashDateTime datetime – This stored the timestamp of the last update from Apollo to Challenger, i.e. the timestamp of when the hash value in SourceHashValue was generated.

Updated ReferenceCar Table
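If you want to mirror that change in your own environment, here is a minimal sketch, assuming the reference schema and table name used in the demo later in this post:

ALTER TABLE reference.ReferenceCar
    ADD SourceHashValue    varbinary(8000) NULL,
        SourceHashDateTime datetime        NULL;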

And let’s take a look at the source Car table from Apollo and the destination Car table from Challenger.

Apollo Car Table (Source)

Challenger Car Table (Destination)

The 3 tables above provide the framework to determine if something changed in the source system that should be propagated to the destination system. When the Challenger Car table is initially loaded with data from the Apollo Car table using SQL Server Integration Services (SSIS), HASHBYTES is used to return a hash of the columns that will be tracked for changes, in this example, Make, Model, Year and Color.
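As an illustration of that hashing step, here is a sketch of how the hash might be computed over the tracked columns; the 'SHA1' algorithm and the pipe delimiter are my assumptions here, and the actual expression lives in the demo scripts:

SELECT c.CarID,
       HASHBYTES('SHA1',
                 ISNULL(c.Make, '')  + '|' +
                 ISNULL(c.Model, '') + '|' +
                 CAST(c.[Year] AS varchar(4)) + '|' +
                 ISNULL(c.Color, '')) AS SourceHashValue,
       GETDATE() AS SourceHashDateTime
FROM apollo.Car AS c;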

This hash is stored in the reference.ReferenceCar table's SourceHashValue column, along with the time stamp of when it was taken. The ReferenceCar table also contains the mapping between Apollo and Challenger: Apollo's CarID and Challenger's CarID, respectively (review the Updated ReferenceCar Table above). Once the hash is stored with a time stamp, the next time the SSIS process runs, the hash value can be used to determine if anything changed. The time stamp can be used to compare against Challenger's LastModifiedDateTime to filter out items that were updated by users in Challenger in the interim, and to permanently flag those items as "not updateable". Lastly, a record of everything that happened is stored in an archive table.

Recall our requirements from Part One:

  • Updates to Apollo should flow to Challenger on a nightly basis
  • Once the updateable entity was edited in Challenger, updates in Apollo should be ignored
  • Providing an audit trail of updates was essential

The process described above handles these requirements. In the next section we will walk through the process from start to finish.

Walkthrough

Everything necessary to complete the walkthrough can be downloaded from this post. If you want to follow along, you will need the following sql scripts (tested on SQL Server 2008 R2):

  • 1_HashbytesDemoDatabaseSetup.sql
  • 2_HashbytesDemoDataUpdates.sql
  • 3_HashbytesDemoReviewUpdateResults.sql

and the SSIS project (compatible with Visual Studio/BIDS 2008) that contains the following files:

  • HashbytesDemo.sln
  • HashbytesDemo.dtproj
  • HashbytesDemo.dtsx

Download these items and unzip the SSIS solution, but don't open it yet; we'll need to set up the database first to avoid validation issues when opening the SSIS project.

Database Setup

Open and run the 1_HashbytesDemoDatabaseSetup.sql script on the server where you plan to run the demo. This will:

  1. Create the HashbytesDemo database
  2. Create schemas (to represent different databases for simplicity)
    1. apollo
    2. challenger
    3. reference
  3. Create tables
    1. apollo.Car
    2. challenger.Car
    3. reference.ReferenceCar
    4. reference.StageCar
    5. reference.StageCarArchive
  4. Add 100 rows of test data to apollo.Car
  5. Create the stored procedure uspApolloCarUpdatedSource

After you have run the script, the database should look like this in SQL Server Management Studio’s (SSMS) Object Explorer:

The apollo.Car table should have 100 rows of data. The other tables are empty, for now. Feel free to borrow the following sql to view the data in the tables you just created.

use HashbytesDemo
go
select * from apollo.Car
select * from challenger.Car
select * from reference.ReferenceCar
select * from reference.StageCar
select * from reference.StageCarArchive

Video Example

Note that the IDENTITY columns in apollo.Car and challenger.Car are seeded with different values to allow for easier visual identification of the mapping between the 2 systems in reference.ReferenceCar. This is strictly to aid in the demonstration.

Project Setup

The SSIS solution assumes that the HashbytesDemo database is installed on your local instance of SQL Server with the Server Name localhost. If your setup is different, you will need to update the localhost.HashbytesDemo Connection Manager in the project.

Once you are up and running with the project open, you should be looking at a screen that resembles the following:

Initial Load

Assuming your Connection Manager is set up correctly, you should be able to run the SSIS project by right clicking on the package in Solution Explorer, and choosing Execute Package.

You should see the 100 rows in apollo.Car flow through to challenger.Car. This is the initial migration and all the car data in Apollo is synchronized with the car data in Challenger.

Video Example

Running the sql to view the data in apollo.Car, challenger.Car and reference.ReferenceCar should verify the data is synchronized between both systems.

Video Example

Emulating Updates

The 2nd sql script, 2_HashbytesDemoDataUpdates.sql, will emulate user updates in both the source and target. Updates in the source (Apollo) should flow through to the destination (Challenger), unless Challenger was updated in the interim by a user or some other process.

Open and run the 2_HashbytesDemoDataUpdates.sql script on the server where you are running the demo. This will:

  1. Update a subset of data in apollo.Car to emulate user data changes in the source
  2. Update a subset of data in challenger.Car to emulate conflicting user updates in the destination
  3. Display Apollo and Challenger Car data side by side to allow you to compare the data before running the SSIS process

Video Example

Looking at the data, we see the random data elements that were updated. Notice the rows where ChallengerMake was updated; they have more recent time stamps in ChallengerLastUpdatedTime. These are of interest because they are the ones that will be flagged as "not updateable". We will see all the other Apollo data updates make their way through to Challenger.

Running the Update Process

We are going to execute the HashbytesDemo.dtsx package again. This time, we will not see any new data loaded, just updates being propagated from Apollo to Challenger.

Right-click on the HashbytesDemo.dtsx package and choose Execute Package. After the package has completed, take a moment to examine the row counts as they moved through the data flow tasks (DFTs).

Video Example

The Initial Apollo to Challenger Car Data Load DFT shows that of the 100 rows found in the Apollo Car Source, none made it through to the Challenger destination. This is correct, as there was no new data.

The Load Reference Data DFT shows the same, as it should always match its predecessor.

The next 2 DFTs are the ones to watch, as they are responsible for correctly propagating the updates. Let's look at Stage Pending Apollo Car Updates first.

Notice the flow of data through the tasks. The Apollo Car Updated Source correctly captured 20 updated rows. (Please note that the demo may not always generate exactly 20 rows.) You can review the stored procedure reference.uspApolloCarUpdatedSource if you want to know more about how it worked. In a nutshell, it compared the reference.ReferenceCar.SourceHashValue with the current hash created from HASHBYTES against the data in Apollo to determine if the data had changed.
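The core of that comparison amounts to something like the following (a simplified sketch; the real procedure is in the downloadable scripts, uses its own hash expression, and also filters out permanently flagged rows):

SELECT a.CarID, a.Make, a.Model, a.[Year], a.Color
FROM apollo.Car AS a
JOIN reference.ReferenceCar AS r
    ON r.ApolloCarID = a.CarID
WHERE HASHBYTES('SHA1',
                ISNULL(a.Make, '')  + '|' +
                ISNULL(a.Model, '') + '|' +
                CAST(a.[Year] AS varchar(4)) + '|' +
                ISNULL(a.Color, '')) <> r.SourceHashValue;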

The Lookup Challenger Car Data task looks up the corresponding data in Challenger to use for comparing the LastModifiedDateTime and for storing the “old” (prior to update) values in the audit trail.

The Challenger Updates Exist Conditional Split divides the data into 2 paths, updateable and not updateable. You can see that 15 rows were identified as updateable, i.e. they had no conflicting updates in Challenger. 5 rows were flagged as "not updateable", i.e. their Challenger LastModifiedDateTime values were more recent than the reference.ReferenceCar.SourceHashDateTime. This indicates that a user (or some other process) updated the same Car in Challenger since it was last updated with Apollo's data. Because of the requirements, these CarIDs will be flagged as permanently "not updateable" with changes from Apollo. If you look at the end of the reference.uspApolloCarUpdatedSource stored procedure, you will see where it checks the reference.StageCarArchive table for permanently flagged items.

The Challenger Updateable Derived Columns and Challenger Not Updateable Derived Columns tasks create 2 new derived columns: one for the ChallengerUpdateable flag, the other for a text description to make the audit table, reference.StageCarArchive, more readable.

Lastly, the Update Staging Destination and Update Staging Destination (Not Updateable) tasks simply insert the data into the reference.StageCar table. 15 items are updateable, 5 are not.

Next, let’s look at the Update Challenger Cars from Stage DFT.

It pulls the staged data directly from reference.StageCar and correctly finds 20 rows. The following Updateable Conditional Split simply excludes the “not updateable” rows. The Data Conversion task is not in use and can be ignored. If there were data conversions required for the destination, that is where they would go. The next 3 OLE DB Command tasks do the work.

Update Challenger Car Command performs the update to the challenger.Car table. Update StageCar ChallengerUpdateDateTime updates the StageCar table with the time stamp of the updates to Challenger. Update ReferenceCar SourceHashValue updates the reference.ReferenceCar table's SourceHashValue with the new HASHBYTES hash so that the reference.ReferenceCar table is in sync with the current value in the Apollo system. (To give context, this hash value will be used as the basis of comparison the next time the process runs. If relevant data is updated in apollo.Car in the meantime, the hash in reference.ReferenceCar won't match and the item will be selected for updates.)
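For context, the parameterized statement behind an OLE DB Command task like Update ReferenceCar SourceHashValue looks roughly like this (a sketch; the ? placeholders are mapped to input columns in the task):

UPDATE reference.ReferenceCar
SET    SourceHashValue    = ?,
       SourceHashDateTime = ?
WHERE  ReferenceCarID = ?;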

The last 2 tasks in the Control Flow, Archive Staging Data and Truncate Reference StageCar, simply copy the data from reference.StageCar and insert it "as is" into reference.StageCarArchive; StageCar is then truncated, since the data is preserved in StageCarArchive. The archive table serves 2 purposes: it provides an audit trail, and it is used to persist the CarIDs in Challenger that are permanently "not updateable". Recall the last bit of our WHERE clause on the Apollo Car Updated Source:
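Conceptually, that last bit is an exclusion along these lines; the archive column names below are my guesses, so check the downloaded script for the real ones:

SELECT a.CarID
FROM apollo.Car AS a
WHERE NOT EXISTS (SELECT 1
                  FROM reference.StageCarArchive AS sca
                  WHERE sca.ApolloCarID = a.CarID       -- hypothetical key column
                    AND sca.ChallengerUpdateable = 0);  -- permanently flagged rows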

Take a moment to run the 3_HashbytesDemoReviewUpdateResults.sql script to review the results in reference.StageCarArchive. (The screenshot below is clipped width-wise for display purposes.)

Notice the data with NULL ChallengerUpdateDateTime values. These are the ones that were flagged as “not updateable”. The side by side columns display the current value as it exists in challenger.Car, the old value (its value in Apollo and Challenger before the user updated the data in Apollo), and the new value (its value in Apollo AFTER it was updated by the user). Keep in mind the new value will not have flowed through to the current value if it was not updateable.

Summary

This concludes the series. I decided to knock it all out in 2 parts since it took me much longer than I anticipated to get back to Part 2 and finish it up. The demo can easily be restarted by simply running the 1_HashbytesDemoDatabaseSetup.sql script again. At a high level, the steps are these:

  1. Run 1_HashbytesDemoDatabaseSetup.sql
  2. Execute the HashbytesDemo.dtsx package (this will complete the initial load)
  3. Run 2_HashbytesDemoDataUpdates.sql
  4. Review the pending updates using the output from 2_HashbytesDemoDataUpdates.sql
  5. Execute the HashbytesDemo.dtsx package (this will process the updates)
  6. Run 3_HashbytesDemoReviewUpdateResults.sql to review the results

This is a demo project and does not represent production code. Shortcuts were made for expediency. For example, the OLE DB Command objects in the Update Challenger Cars From Stage DFT are parameterized in such a way that the data will be processed row by row. They would yield much better performance if they were set-based.

As the saying goes, hindsight is 20/20. Being fortunate enough to work with an amazing and talented team at Slalom and on my current project yields great results. Since I started this posting about change data capture, I have learned a few things about cleaner, simpler ways to solve the problem outlined in this series, most notably by leveraging the MERGE statement in tandem with HASHBYTES. While MERGE is a more T-SQL-intensive solution, the content in this posting relies more heavily on the built-in SSIS components and therefore has merit in that regard. At the least, I hope it provides some conceptual relevance when trying to solve similar problems.
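To give a flavor of that MERGE-plus-HASHBYTES alternative, here is a rough sketch against the demo tables; the hash expression and column list are assumptions, and for brevity it ignores the "not updateable" business rule:

MERGE challenger.Car AS tgt
USING (SELECT r.ChallengerCarID, a.Make, a.Model, a.[Year], a.Color,
              HASHBYTES('SHA1',
                        ISNULL(a.Make, '')  + '|' +
                        ISNULL(a.Model, '') + '|' +
                        CAST(a.[Year] AS varchar(4)) + '|' +
                        ISNULL(a.Color, '')) AS NewHash,
              r.SourceHashValue AS OldHash
       FROM apollo.Car AS a
       JOIN reference.ReferenceCar AS r
           ON r.ApolloCarID = a.CarID) AS src
   ON tgt.CarID = src.ChallengerCarID
WHEN MATCHED AND src.NewHash <> src.OldHash THEN
    UPDATE SET Make   = src.Make,
               Model  = src.Model,
               [Year] = src.[Year],
               Color  = src.Color;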

Feel free to follow-up with questions or comments.

Download

This zip file contains the sql scripts and SSIS files necessary to run the demo described in this post. Unzip and open the SSIS solution file. Open the sql scripts in SSMS and run when indicated in the walkthrough.

Implementing a Custom Change Data Capture System – Part One

Introduction

Coming up with a title proved to be one of the most difficult parts of beginning this posting. Implementing a Custom Change Data Capture System doesn’t really ‘capture’ what should be said. I recently found myself in a situation with a client where it was necessary to quickly implement update processes between a legacy transactional application and a new transactional system. “That’s nothing new”, you say–and you are right–except that it was late in the game, there was already an extract-transform-load (ETL) solution in place that was not architected to account for updates because it was never in the scope…until that day, and (here was the nut of the challenge) nothing reliable existed in the legacy application to capture changed data.

Our team designed and implemented a viable solution. I decided to document it in two postings for a few different reasons:

Firstly, I think it is viable, and I want to share it in the hopes that it might help some folks down the road.

Secondly, I am sure it has lots of room for improvement; I am curious to hear feedback, suggestions, etc. Trust me, you won't hurt my feelings. I don't claim to be an expert; I think good solutions are the end result of exactly this kind of process. This solution may get blown out of the water, proven irrelevant by the existence of a better solution, or evolved based on input from bright minds; any of those scenarios is okay with me.

Lastly, I need something to blog about. This would be way too long to fit into one post, hence the series. I’m not sure exactly how it will break down in terms of number of articles, but I will try and segment them logically. I will keep the Table of Contents updated so interested parties can jump directly to content they deem relevant. This first posting won’t have much technical value. I want to describe what we were confronted with and how we came up with the solution. At this juncture, it seems like it should go something like this, but I am sure it will play out differently.

  • Part 1: Introduction: Gives an overview of the problem and how the team arrived at the solution I’m about to document, including the requirements and limitations that influenced our thinking
  • Part 2: Implementation of the solution, diving in to the details
  • Part 3: Summary and Closing Comments

Problems and Solutions

Without further ado, let’s lay the groundwork for this story. Please note that I have taken liberties with irrelevant facts to protect client data and simplify conveyance of the solution.

We’ll call the legacy application and its database Apollo (think beginning, starts with an A), and we’ll call the new application and its database Challenger (think successor, C comes after A). We’ll call the ETL solution…wait…ready for it…the ETL solution. The original plan (before it was known updates would be required) was for Challenger to come online in phases. Each phase would encompass migrating the entirety of an existing customer’s data. The ETL solution had been built with the understanding that there would be no updates to Apollo’s data for the given customer once their data had been migrated. A customer’s data would be cutover on a specific date and users would only access Apollo for reference purposes after that date. The ETL solution dealt solely with a complete migration (per customer) and was concerned only with loading new data into Challenger’s database. Updates had never been considered and were not accounted for whatsoever in the existing ETL processes.

The ETL solution leveraged reference tables in a reference database to maintain the linkage between Apollo and Challenger. There were no functional applications or reports built around this reference data; it was just there in case it was needed in the future, e.g. a user wanting to find an entity in Challenger known only by its Apollo business key. The structure of these reference tables, using a fictitious Car entity as an example, was as follows:

ReferenceCar Table

ColumnName DataType
ReferenceCarID int
ApolloCarID int
ChallengerCarID int

So data in the ReferenceCar table looked like this:

ReferenceCarID ApolloCarID ChallengerCarID
100 9123 111456
101 9124 111457
102 9125 111458
103 9126 111459

Continuing on with this example, the Apollo Car table looked something like this:

CarID (Apollo) Make Model Year Color
9123 Datsun B210 1974 Brown
9124 Acura MDX 2004 Black
9125 Jeep Wrangler 2000 Yellow
9126 Infiniti Q56 2008 Black

For the sake of this article, let’s say the Challenger Car table was nearly identical to Apollo’s, with the exception of a DateLastUpdated column:

CarID (Challenger) Make Model Year Color DateLastUpdated
111456 Datsun B210 1974 Brown 12/10/11 1:45:10 PM
111457 Acura MDX 2004 Black 12/10/11 1:45:10 PM
111458 Jeep Wrangler 2000 Yellow 12/10/11 1:45:10 PM
111459 Infiniti Q56 2008 Black 12/10/11 1:45:10 PM

The important thing to note here is that the Apollo table was devoid of any columns like DateLastUpdated or ModifiedDateTime that reliably indicated something in a row had changed. Not long after I joined the team, the mandate came down that users would still need the ability to edit data in Apollo even after a customer’s data had been migrated. These post-cutover edits in Apollo would also need to be propagated to Challenger. We were under the gun to come up with a solution quickly, so the ETL team gathered at the whiteboard to consider our plight and propose potential solutions. At the end of the meeting, we had noted some requirements, some limitations and 3 proposed solutions.

Requirements/Assumptions

  • Updates to Apollo should flow to Challenger on a nightly basis.
  • Once the updateable entity was edited in Challenger, updates in Apollo should be ignored.
  • Providing an audit trail of updates was essential.

Limitations

  • Apollo had no reliable means to tell when data changed.
  • Apollo was on SQL 2005 so Change Data Capture (CDC) in SQL Server 2008 was not an option.
  • Apollo was a heavily used production system, and any risk of operational disruption would not be well-received by the client; in other words, changing the existing Apollo database schema was off the table.

Proposed Solutions

  1. Save a snapshot of the Apollo database and use it for comparison on a nightly basis to determine changes to the Car table.
  2. Add a DateLastUpdated datetime column to the necessary tables and add triggers to these tables that would refresh the DateLastUpdated columns whenever changes were made to a row.
  3. Use Checksum or Hashbytes to determine if any changes were made to a row in the Car table.

Deciding on the Solution

We nixed the first option pretty quickly. Its advantage was that it would not require any updates to the Apollo system. Beyond that, it seemed clunky. The overhead involved in backing up and restoring snapshots, in addition to having to write the comparison code to determine what changed, was discouraging.

The second option was my suggestion and the one I was most comfortable using. I had used this same method with SQL Server (prior to 2008) for many years. It has the advantage of being “update agnostic”, i.e. it isn’t dependent upon any external application code; it is all contained in the database. It doesn’t care if a column was updated by a web service, the primary operational application, or Joe User plunking around in SQL Server Management Studio (SSMS). The trigger handles refreshing the trusty DateLastUpdated column and can even be tuned to only respond to a subset of columns on the table. Unfortunately, I was used to having some influence over the source databases and their operational systems. In this case, I had none. The big, glaring disadvantage to this solution was the fact that it could impact the production system. Granted, adding a column on the end of a table and hanging a trigger off of it can be incredibly benign (if one knows what one is doing and is familiar with the applications accessing the database), but there are no guarantees. Who is to know if some devil-may-care developer has a SELECT * query bound to a component that might break with the new column, etc., etc.? We had to look hard at our third option.
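For reference, the kind of trigger I had in mind looks something like this; it is a generic sketch against the fictitious Car table from the example above (the dbo schema is an assumption), not something we ever deployed:

ALTER TABLE dbo.Car ADD DateLastUpdated datetime NULL;
GO

CREATE TRIGGER trg_Car_DateLastUpdated
ON dbo.Car
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Only respond to the subset of columns we care about
    IF UPDATE(Make) OR UPDATE(Model) OR UPDATE([Year]) OR UPDATE(Color)
    BEGIN
        UPDATE c
        SET    DateLastUpdated = GETDATE()
        FROM   dbo.Car AS c
        JOIN   inserted AS i
            ON i.CarID = c.CarID;
    END
END
GO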

I had never used Hashbytes in my SQL programming. I was familiar with checksums from the application development world, but had not used them in SQL, either. The proposed solution was that Hashbytes or Checksum could be used to monitor the changes in Apollo. Using one of these functions, a subset of data from Apollo could generate a value that is then used for comparison in subsequent intervals, e.g. the next day the new Hashbytes or Checksum value is compared with the previous day's value. If a change is detected, update Challenger. This was new territory for me, and it took me a little bit to get my head around the paradigm shift. We decided to go with option 3, provide a proof of concept, and get approval from the stakeholders.

Coming Soon

In the next post, I will dive into how we implemented the solution, extending the examples above. I will provide sample code and a download sample project that should contain everything needed to see it in action. If you have any questions or input in the meantime, please feel free to leave a comment.

FindControl in the ASP.NET Gridview Control

Just thought I’d drop a quick note to record a solution to the less than intuitive process of referencing a control that’s tucked into an EditItemTemplate on an ASP.NET GridView control while the control is in edit mode.

I came across the problem while trying to access a FileUpload control I had in an EditItemTemplate. I naturally assumed GridView.SelectedRow.FindControl was my ticket because indeed, my row was selected. My first course of action was Google, but I found surprisingly sparse results when searching for GridView EditItemTemplate FindControl.

After digging a little and determining that GridView.SelectedRow was returning Nothing (VB) in debug mode, I realized that for reasons I’m sure are well documented somewhere, GridView.SelectedRow didn’t apply when I was in edit mode. I’m going to venture a guess that it’s because SelectedRow applies to rows that are selected via the row’s Select command, so maybe EditItemTemplate has nothing to do with it. Anyway, I was able to access the control by referencing the row via the GridView.EditIndex. Example below:

(On the click event wired up to my Upload button)

Dim thisRow As GridViewRow = gvwGalleryDetails.Rows(gvwGalleryDetails.EditIndex)

' FindControl returns a Control, so cast it to the type you expect.
Dim thisFup As FileUpload = CType(thisRow.FindControl("fupGalleryDetailPicture"), FileUpload)

If thisFup IsNot Nothing Then

    If thisFup.HasFile Then

        ' SaveAs needs a physical path, so map the relative folder first.
        thisFup.SaveAs(Server.MapPath("images/" & thisFup.FileName))

        Dim thisLabel As Label = CType(thisRow.FindControl("lblUploadStatus"), Label)
        If thisLabel IsNot Nothing Then
            thisLabel.Text = "File Uploaded: " & thisFup.FileName
        End If

        thisLabel = CType(thisRow.FindControl("lblGalleryPicturePath"), Label)
        If thisLabel IsNot Nothing Then
            thisLabel.Text = "images/" & thisFup.FileName
        End If

    Else

        Dim thisLabel As Label = CType(thisRow.FindControl("lblUploadStatus"), Label)
        If thisLabel IsNot Nothing Then
            thisLabel.Text = "No File Uploaded!"
        End If

    End If

End If