Wednesday, December 09, 2015

StackTrace Does The Detective Work

I had a really weird bug that I just could not track down. I have 3 Windows Service applications that “talk” to each other, passing data back and forth and logging bits of that data in each of their databases. In this particular case, the 3 apps all have the same code, just different config files (in real life there would be more differences between them, such as additional DLLs, but for testing there aren’t). The bug wasn’t causing any exceptions to be thrown; an empty string was simply being stored in the database for one particular “transaction,” when that bit of data should have had some content.

This is a rather tough situation to debug in Visual Studio, because only one of the apps runs from VS while the other two run from their EXEs (all as Console apps for testing, rather than as Windows Services). The one running from VS is not the one causing the issue, so I couldn’t easily debug without re-arranging all the config files so that the problem app could run in VS. Instead of doing that, I wrote the data out to XML files at various points throughout the process so I could inspect them to find the problem. Unfortunately, that didn’t work this time because all the data looked fine!

Consequently, I had no idea where this was going wrong. I only knew that right before the data was saved to the database (from a common Save method that is called throughout the app), the Agency column was empty. Since this method is called from many places, it was going to be difficult to find which caller caused the problem without debugging. Then I hit on an idea … maybe I could use StackTrace while the code was running! I didn’t know if that was possible, but it turns out that it is, and it helped solve my perplexing bug. Here’s what I did:

In the DataAccess class method that saves the data, I check if the Agency string is null or empty and if so, I then call the following TroubleShootEmptyAgency() method:

private void TroubleShootEmptyAgency(string Agency, string Xml)
{
    StringBuilder sb = new StringBuilder();
    if (Agency == null)
        sb.AppendLine("Null Agency");
    else
        sb.AppendLine("Empty Agency");

    sb.AppendFormat("*** DataSet Content ***\r\n{0}\r\n***\r\n", Xml);

    // Now, the fun part! The StackTrace!
    // (passing true captures file names and line numbers, when .pdb files are present)
    var stacktrace = new System.Diagnostics.StackTrace(true);
    var frames = stacktrace.GetFrames();
    if (frames != null)
    {
        sb.AppendLine("StackTrace:");
        foreach (var frame in frames)
        {
            sb.AppendFormat("{0}\n", frame.ToString());
        }
    }
    // Write it to a File
    System.IO.File.WriteAllText("EmptyAgency.xml", sb.ToString());
}

The StackTrace enabled me to see exactly which chain of calls led up to the attempt to save the data with an empty Agency string. Consequently, I knew exactly which class and method were causing the problem (and then I clearly saw why that Agency string was empty there).

We all know that StackTrace can be very useful, but perhaps not everyone knows that it can be obtained from a running application. I didn’t know it until I tried it. Keep this in your bag of tricks; it just might come in handy!
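By the way, if you just need the stack as text and don’t care about the individual frames, Environment.StackTrace gives it to you in one call. A tiny sketch:

```csharp
using System;

// Environment.StackTrace returns the current call stack as a single string,
// one "   at Namespace.Class.Method(...)" line per frame -- no StackTrace
// object or frame loop needed.
string stack = Environment.StackTrace;
Console.WriteLine(stack);
```

This is handy for quick logging; use the StackTrace class (as above) when you want to inspect frames individually.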

Happy coding!

Wednesday, November 25, 2015

Information Exchange With Transformations

In the industry that I work in, Public Safety, there is a standard called NIEM, the National Information Exchange Model (https://en.wikipedia.org/wiki/National_Information_Exchange_Model). It is an XML-based standard for sharing information between law enforcement agencies, first responders and other organizations. The various exchange scenarios are documented as XML schemas (.xsd files), which are very complex.

I have been using simple Typed DataSets since the early days of .NET (1.0 was buggy, 1.1 was much better). They are easy to fill from a database and easy to pass around between the layers/tiers of any application that I have written. However, to pass data from a law enforcement application to a fire department application, for example, there needs to be a common schema between them. Clearly, the simple Typed DataSet I use for a Police application will look totally different from the Typed DataSet I use for a Fire application. The way to solve this dilemma is to transmit the data with a common schema … NIEM to the rescue!

Transformations are used to Transform the Police DataSet to the common NIEM schema, then transmit the data as XML in the NIEM format to the Fire application, which then Transforms the NIEM XML data it has received into the schema for the Fire DataSet. That’s the process, in a nutshell.

In this blog post, I will show how to utilize Transformations to accomplish this. I am not going to show any of our actual Transformation schemas (.xslt) because that’s proprietary to our company. The NIEM schemas aren’t proprietary, but our own schemas are.

Also, I have to say that creating the .xslt files without a tool is impossibly hard; I don’t recommend it at all. I have used Altova’s MapForce, and its graphical interface makes it incredibly easy; I highly recommend it. Here is a link to information about MapForce, http://www.altova.com/mapforce.html, along with a link to what they call their Mission Kit, a whole toolbox of XML tools: http://www.altova.com/xml_tools.html (the Mission Kit includes MapForce and other goodies). They have free trial downloads if you want to try before you buy, or just to experiment with. MapForce Basic Edition is $249. I do not work for Altova, I just use their MapForce product. There are probably other products on the market that will do the same thing, but you’d have to look for them yourself if you’re interested.

OK, now that that’s out of the way, let me show you some code. This is a simplified structure, just to show you the basic concepts. In reality, I have many more Transformations and store them in a Dictionary. For the example I’m showing you here, I have two Typed DataSets (PoliceDataSet and FireDataSet) and a common schema that I’m calling FakeCommonInformation (which would be something like the NIEM schema I mentioned above, if this were a real application).

Here are the two Transform variables and two methods to call to do the Transformations that I’ll be using in this sample code:

// be sure you have using directives for System.IO, System.Xml and System.Xml.Xsl
private XslCompiledTransform txToCommon;
private XslCompiledTransform txToLocal;

public string TransformToCommon(string Xml)
{
    using (XmlReader xr = XmlReader.Create(new StringReader(Xml)))
    using (StringWriter sw = new StringWriter())
    using (XmlTextWriter xw = new XmlTextWriter(sw))
    {
        xw.Formatting = Formatting.Indented;
        this.txToCommon.Transform(xr, xw);
        return sw.ToString();
    }
}

public string TransformToLocal(string Xml)
{
    using (XmlReader xr = XmlReader.Create(new StringReader(Xml)))
    using (StringWriter sw = new StringWriter())
    using (XmlTextWriter xw = new XmlTextWriter(sw))
    {
        xw.Formatting = Formatting.Indented;
        this.txToLocal.Transform(xr, xw);
        return sw.ToString();
    }
}

Here’s the code to initialize those Transform variables. Notice that they are loaded from individual .xslt files.

// Instantiate and Load Transformations
this.txToCommon = new XslCompiledTransform();
this.txToCommon.Load(@"..\..\PoliceDataSet-to-PartsCommon.xslt");

this.txToLocal = new XslCompiledTransform();
this.txToLocal.Load(@"..\..\PartsCommon-to-FireDataSet.xslt"); // path and filename
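Incidentally, Load will accept any XmlReader, not just a file path, which is handy for quick experiments. Here’s a self-contained sketch with a throwaway stylesheet inlined as a string (purely illustrative, not one of the real MapForce-generated ones):

```csharp
using System;
using System.IO;
using System.Xml;
using System.Xml.Xsl;

// A throwaway stylesheet inlined as a string, purely for illustration --
// the real stylesheets are the MapForce-generated .xslt files.
string xslt =
    @"<xsl:stylesheet version=""1.0"" xmlns:xsl=""http://www.w3.org/1999/XSL/Transform"">
        <xsl:template match=""/"">
          <Out><xsl:value-of select=""Root/Item""/></xsl:template>
      </xsl:stylesheet>"
    .Replace("</xsl:template>", "</Out></xsl:template>");  // keep Out closed

XslCompiledTransform tx = new XslCompiledTransform();
using (XmlReader styleReader = XmlReader.Create(new StringReader(xslt)))
{
    tx.Load(styleReader);   // same Load, no .xslt file on disk needed
}

// Run it against a tiny input document.
StringWriter sw = new StringWriter();
using (XmlReader input = XmlReader.Create(new StringReader("<Root><Item>hello</Item></Root>")))
using (XmlWriter xw = XmlWriter.Create(sw))
{
    tx.Transform(input, xw);
}
string result = sw.ToString();
Console.WriteLine(result);
```

The same pattern works for loading stylesheets from embedded resources or a database, which is useful once the number of .xslt files grows.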

There are ways to actually compile all your .xslt files into one DLL, but I think that I’ll leave that for the next post I write.

All these schemas are not really Police, Fire or NIEM … it’s just that I had these test files lying around and re-used them for this post. Let’s just pretend that the columns and data contained in the DataSets actually have something to do with Public Safety. And here’s the scenario we’ll be mimicking with this sample code:

  • A Police application gets data from its database and fills the PoliceDataSet.
  • The application now needs to share this information with the Fire application.
  • The txToCommon Transformation is used to transform the data in the PoliceDataSet into an XML string that represents the Common format.
  • That Common XML string is transmitted over the wire (via a Web Service or other mechanisms) to the Fire application.
  • The Fire application uses the txToLocal Transformation to transform the Common XML into an XML string that represents the FireDataSet; that new XML is then used to fill the FireDataSet with data it can use.

Now, I’m going to show you the schemas I used to do all this, so that you can take the pieces and put them in your own test application to see how well it works.

First, here’s the PoliceDataSet.xsd :

<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="PoliceDataSet">
    <xs:complexType>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element name="Part">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="code" type="xs:string"/>
              <xs:element name="name" type="xs:string" minOccurs="0"/>
              <xs:element name="type" type="xs:integer" minOccurs="0"/>
              <xs:element name="deleted" type="xs:integer" minOccurs="0"/>
              <xs:element name="cost" type="xs:decimal" minOccurs="0"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:choice>
    </xs:complexType>
  </xs:element>
</xs:schema>

Next, the FireDataSet.xsd:

<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="FireDataSet">
    <xs:complexType>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element name="Part">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="FireCode" type="xs:string"/>
              <xs:element name="FireName" type="xs:string" minOccurs="0"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:choice>
    </xs:complexType>
  </xs:element>
</xs:schema>

Notice the difference between the two DataSets. The Police DataSet has 5 columns, whereas the Fire DataSet has only 2. In the Police DataSet, two of the columns are “code” and “name”, whereas in the Fire DataSet, those same two entities are called “FireCode” and “FireName”. The Transformations take care of matching those up.

And, of course, we can’t forget the Common schema (the actual file is called PartsCommon.xsd):

<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="FakeCommonInformation">
    <xs:annotation>
      <xs:documentation>Hello World</xs:documentation>
    </xs:annotation>
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Parts" minOccurs="0">
          <xs:annotation>
            <xs:documentation>Parts Needed</xs:documentation>
          </xs:annotation>
          <xs:complexType>
            <xs:sequence maxOccurs="unbounded">
              <xs:element name="Part">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element name="code" type="xs:string"/>
                    <xs:element name="name" type="xs:string" minOccurs="0"/>
                    <xs:element name="type" type="xs:integer" minOccurs="0">
                      <xs:annotation>
                        <xs:documentation>0 (Zero) for single ingredient and 1 (One) for recipe</xs:documentation>
                      </xs:annotation>
                    </xs:element>
                    <xs:element name="deleted" type="xs:integer" minOccurs="0"/>
                    <xs:element name="cost" type="xs:decimal" minOccurs="0"/>
                    <xs:element name="messageID" type="xs:string" minOccurs="0"/>
                    <xs:element name="quantityUnitsId" type="xs:integer" minOccurs="0"/>
                    <xs:element name="stockUnit" type="xs:string" minOccurs="0"/>
                    <xs:element name="UnitofMeasure" type="xs:string" minOccurs="0"/>
                    <xs:element name="tolerance" minOccurs="0" maxOccurs="1">
                      <xs:complexType>
                        <xs:sequence>
                          <xs:element name="toleranceType" type="xs:integer"/>
                          <xs:sequence>
                            <xs:element name="banding" minOccurs="1" maxOccurs="unbounded">
                              <xs:complexType>
                                <xs:choice>
                                  <xs:sequence>
                                    <xs:element name="lowerTolerance" type="xs:decimal" minOccurs="1" maxOccurs="1"/>
                                    <xs:element name="upperTolerance" type="xs:decimal" minOccurs="1" maxOccurs="1"/>
                                    <xs:element name="band" type="xs:decimal" minOccurs="1" maxOccurs="1"/>
                                  </xs:sequence>
                                </xs:choice>
                              </xs:complexType>
                            </xs:element>
                          </xs:sequence>
                        </xs:sequence>
                      </xs:complexType>
                    </xs:element>
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
        <xs:element name="Combinations" minOccurs="0">
          <xs:annotation>
            <xs:documentation>Mix combinations of Parts.</xs:documentation>
          </xs:annotation>
          <xs:complexType>
            <xs:sequence maxOccurs="unbounded">
              <xs:element name="Combination">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element name="partCode" type="xs:string">
                      <xs:annotation>
                        <xs:documentation>Validated against a Part entry that represents a Combination</xs:documentation>
                      </xs:annotation>
                    </xs:element>
                    <xs:element name="locationCode" type="xs:string" minOccurs="0">
                      <xs:annotation>
                        <xs:documentation>Must previously have been entered using</xs:documentation>
                      </xs:annotation>
                    </xs:element>
                    <xs:element name="yield" type="xs:integer" minOccurs="0"/>
                    <xs:element name="minProducts" type="xs:integer" minOccurs="0"/>
                    <xs:element name="maxProducts" type="xs:integer" minOccurs="0"/>
                    <xs:element name="addToStock" type="xs:integer" minOccurs="0"/>
                    <xs:element name="expiryDays" type="xs:double" minOccurs="0"/>
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
      <xs:attribute name="version" type="xs:decimal" use="required" fixed="2.1"/>
    </xs:complexType>
  </xs:element>
</xs:schema>

As you can see, this schema looks absolutely nothing like a typical DataSet schema. It’s much more complex and more hierarchical than simple tables with rows of columns.

So, what do the Transformation .xslt’s look like?

Here’s the PoliceDataSet-to-PartsCommon.xslt:

<?xml version="1.0" encoding="UTF-8"?>
<!--
This file was generated by Altova MapForce 2012r2sp1

YOU SHOULD NOT MODIFY THIS FILE, BECAUSE IT WILL BE
OVERWRITTEN WHEN YOU RE-RUN CODE GENERATION.

Refer to the Altova MapForce Documentation for further details.
http://www.altova.com/mapforce
-->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xs">
  <xsl:output method="xml" encoding="UTF-8" indent="yes"/>
  <xsl:template match="/">
    <FakeCommonInformation>
      <xsl:attribute name="xsi:noNamespaceSchemaLocation" namespace="http://www.w3.org/2001/XMLSchema-instance">PartsCommon.xsd</xsl:attribute>
      <Parts>
        <xsl:for-each select="PoliceDataSet/Part">
          <Part>
            <code>
              <xsl:value-of select="string(code)"/>
            </code>
            <xsl:for-each select="name">
              <name>
                <xsl:value-of select="string(.)"/>
              </name>
            </xsl:for-each>
            <xsl:for-each select="type">
              <type>
                <xsl:value-of select="string(number(string(.)))"/>
              </type>
            </xsl:for-each>
            <xsl:for-each select="deleted">
              <deleted>
                <xsl:value-of select="string(number(string(.)))"/>
              </deleted>
            </xsl:for-each>
            <xsl:for-each select="cost">
              <cost>
                <xsl:value-of select="string(number(string(.)))"/>
              </cost>
            </xsl:for-each>
          </Part>
        </xsl:for-each>
      </Parts>
    </FakeCommonInformation>
  </xsl:template>
</xsl:stylesheet>

And the PartsCommon-to-FireDataSet.xslt:

<?xml version="1.0" encoding="UTF-8"?>
<!--
This file was generated by Altova MapForce 2012r2sp1

YOU SHOULD NOT MODIFY THIS FILE, BECAUSE IT WILL BE
OVERWRITTEN WHEN YOU RE-RUN CODE GENERATION.

Refer to the Altova MapForce Documentation for further details.
http://www.altova.com/mapforce
-->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xs">
  <xsl:output method="xml" encoding="UTF-8" indent="yes"/>
  <xsl:template match="/">
    <FireDataSet>
      <xsl:attribute name="xsi:noNamespaceSchemaLocation" namespace="http://www.w3.org/2001/XMLSchema-instance">FireDataSet.xsd</xsl:attribute>
      <xsl:for-each select="FakeCommonInformation/Parts/Part">
        <Part>
          <FireCode>
            <xsl:value-of select="string(code)"/>
          </FireCode>
          <xsl:for-each select="name">
            <FireName>
              <xsl:value-of select="string(.)"/>
            </FireName>
          </xsl:for-each>
        </Part>
      </xsl:for-each>
    </FireDataSet>
  </xsl:template>
</xsl:stylesheet>

And now, the code to run through the above scenario:

1) A Police application gets data from its database and fills the PoliceDataSet (I’m just going to add 2 rows manually):

// Fill the DataSet (from database or wherever);
PoliceDataSet dsPolice = new PoliceDataSet();
var row = dsPolice.Part.NewPartRow();
row.code = "S000510";
row.name = "My New Part";
row.type = 0;
row.cost = 5.5000M;
dsPolice.Part.AddPartRow(row);

row = dsPolice.Part.NewPartRow();
row.code = "S000610";
row.name = "Your New Part";
row.type = 1;
row.cost = 6.6660M;
dsPolice.Part.AddPartRow(row);

2) Now the Police application needs to share this information with the Fire application, so we’ll use the txToCommon Transformation to transform the data in the PoliceDataSet into an XML string that represents the Common format:

string xmlCommon = this.TransformToCommon(dsPolice.GetXml());

3) That xmlCommon string can now be transmitted over the wire (via a Web Service or other mechanisms) to the Fire application.

4) The Fire application uses the txToLocal Transformation to transform the Common XML that it has received remotely into an XML string that represents the FireDataSet; that new XML is then used to fill the FireDataSet with data the Fire application can now use.

// Transform the Common XML string that you receive "over the wire" 
// to an XML string that represents the FireDataSet schema.
string xmlFire = this.TransformToLocal(receivedCommonXml);

// Use the Xml to fill a new FireDataSet, so your Fire application can now use the data
FireDataSet dsFire = new FireDataSet();
StringReader sr = new StringReader(xmlFire);
dsFire.ReadXml(sr, XmlReadMode.Fragment);
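If you want an extra safety net, you can validate the transformed XML against FireDataSet.xsd before filling the DataSet. A self-contained sketch (the schema is inlined as a string so it runs standalone; in a real app you’d add the .xsd file to the schema set instead):

```csharp
using System;
using System.IO;
using System.Xml;
using System.Xml.Schema;

// FireDataSet.xsd inlined as a string so this sketch runs standalone;
// in the real application you would use settings.Schemas.Add(null, "FireDataSet.xsd").
string xsd =
    @"<xs:schema xmlns:xs=""http://www.w3.org/2001/XMLSchema"">
        <xs:element name=""FireDataSet"">
          <xs:complexType>
            <xs:choice minOccurs=""0"" maxOccurs=""unbounded"">
              <xs:element name=""Part"">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element name=""FireCode"" type=""xs:string""/>
                    <xs:element name=""FireName"" type=""xs:string"" minOccurs=""0""/>
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:choice>
          </xs:complexType>
        </xs:element>
      </xs:schema>";

string xmlFire = "<FireDataSet><Part><FireCode>S000510</FireCode>" +
                 "<FireName>My New Part</FireName></Part></FireDataSet>";

XmlReaderSettings settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
settings.Schemas.Add(null, XmlReader.Create(new StringReader(xsd)));

bool valid = true;
settings.ValidationEventHandler += (s, e) => valid = false;  // collect instead of throwing

using (XmlReader vr = XmlReader.Create(new StringReader(xmlFire), settings))
{
    while (vr.Read()) { }   // reading the document drives the validation
}
Console.WriteLine(valid ? "valid" : "invalid");
```

If validation fails, you know the Transformation (or the sender) produced something the FireDataSet can’t safely consume, before ReadXml ever runs.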

Take a look at the string returned from dsFire.GetXml() once you’re done and you’ll see that the code column has now become FireCode and the name column has now become FireName. It should look like this:

<FireDataSet>
  <Part>
    <FireCode>S000510</FireCode>
    <FireName>My New Part</FireName>
  </Part>
  <Part>
    <FireCode>S000610</FireCode>
    <FireName>Your New Part</FireName>
  </Part>
</FireDataSet>

Your Transformation is now complete and you’ve come over to the Dark Side (a lame attempt at Star Wars humor).

Happy coding!  :0)

Monday, September 28, 2015

More Windows 10 Developer Goodies!

A month ago, I posted links to various Developer resources for getting started with developing apps for Windows 10 (see my post: Windows 10 Is Here! Let's Develop Some Apps!). I now have an additional list with more great resources:

And lastly, this isn’t specifically a Windows 10 thing, but hey, it’s Scott Guthrie! … it’s worth reading just for that alone!  ;0)  http://weblogs.asp.net/scottgu/announcing-great-new-sql-database-capabilities-in-azure?WT.mc_id=dx_MVP4024627

Saturday, September 12, 2015

AzureCon September 29th


I plan to attend #AzureCon, Sept. 29th. It's a free online all-day event. Hear from the experts about the latest Azure innovations. Enjoy a live Q&A with the architects and engineers who are building the latest features. There are more than 50 sessions to choose from:
https://azure.microsoft.com/en-us/azurecon/?WT.mc_id=dx_MVP4024627 

Seriously, it sounds like it will be quite interesting!

Thursday, August 27, 2015

Windows 10 Is Here! Let’s Develop Some Apps!

Now that Windows 10 has launched and the latest Visual Studio 2015 has been released, are you ready to start taking advantage of the latest products and technology for writing some cool Apps? Here are some resources to help you get started:

This free course from MVA (Microsoft Virtual Academy) is aimed more at IT Professionals than for Developers, but some of you may be interested in this as well: https://www.microsoftvirtualacademy.com/en-US/training-courses/getting-started-with-windows-10-for-it-professionals-10629/?Wt.mc_ic=dx_MVP4024627

And, of course, there’s always Channel 9: https://channel9.msdn.com/windows/?Wt.mc_ic=dx_MVP4024627

Developers, Developers, Developers!!!   ;0)

Friday, July 31, 2015

SQL Server - Chunking Large Deletes

I’ve gotten a bit tired of one of our implementation guys always asking what to do when the database log file gets full and the nightly purge job fails because of it. This happens occasionally during that purge job, which deletes a day’s worth of old rows of data that we don’t need (usually data from a week to 30 days ago; the cutoff is configurable as a parameter to the Stored Procedure).

Normally this is not a problem … our databases are set up with Simple Recovery, so the log file gets re-used and will usually not out-grow its allocated size. It only became a problem on heavy-use days, when the deletes were using up more than the log file’s allocated size. It also occasionally happened when the nightly purge didn’t run for some reason; the next night there would be twice as many rows of data to delete, and then it became an endless cycle of failures … especially when the implementation guys don’t notice it happening and a week goes by without anything getting purged!!! Arrgghh!

Well, there is a solution to the problem, and that is to chunk the deletes into more byte-size pieces (pun intended!) … in other words, “bulk” delete a smaller subset of the data within a transaction, and continue in a loop until the entire set of data has been deleted. Here’s the beauty of this methodology: with a Simple Recovery database, all you have to do is issue the CHECKPOINT command after the COMMIT TRANSACTION, and the log file usage gets set back to the beginning of the log file. This can also be done with a Full Recovery database, but you issue a BACKUP LOG command instead.

Here’s a sample of the T-SQL that you’d put into your Stored Procedure. Note that I’m deleting all rows earlier than a specified date (and with a nightly job, that amounts to deleting one day’s worth of data).

-- @delay is passed into the SP as a parameter
IF @delay > 0
    SET @delay = -@delay

IF @delay < 0
BEGIN
    DECLARE @time datetime
    DECLARE @rc INT

    SET @time = DATEADD(day, @delay, GETDATE())

    -- delete from message
    SET @rc = 1
    WHILE @rc > 0
    BEGIN
        BEGIN TRANSACTION
        DELETE TOP (100000) FROM message WHERE saveddatetime < @time
        SET @rc = @@ROWCOUNT
        COMMIT TRANSACTION
        CHECKPOINT
    END

    -- I have a bunch more deletes from other tables, each in the same format
    SET @rc = 1
    -- etc...
END
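The control flow of that loop, sketched generically in C# (the delete is simulated with an in-memory list, and the batch size is shrunk, purely so the pattern is visible end-to-end):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simulate a table of 2,500 rows; in T-SQL this is the DELETE TOP (N) loop.
List<int> table = Enumerable.Range(0, 2500).ToList();
const int batchSize = 1000;   // the real SP uses 100000
int batches = 0;

int rowCount = 1;             // mirrors SET @rc = 1
while (rowCount > 0)          // mirrors WHILE @rc > 0
{
    // BEGIN TRANSACTION / DELETE TOP (batchSize) / COMMIT TRANSACTION / CHECKPOINT
    rowCount = Math.Min(batchSize, table.Count);   // mirrors @@ROWCOUNT
    table.RemoveRange(0, rowCount);
    if (rowCount > 0)
        batches++;
}

Console.WriteLine("Deleted in " + batches + " batches");   // 3 batches for 2,500 rows
```

The key property is that each iteration is its own small transaction, so the log never has to hold more than one batch’s worth of changes at a time.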

I didn’t figure this stuff out by myself. I found a really great post by Aaron Bertrand here: http://sqlperformance.com/2013/03/io-subsystem/chunk-deletes (he posts a few nice charts comparing various combinations).

After reading that post, you might notice that the time it takes to perform these DELETEs could take a little longer using this method, but if all you care about is the size of the log file, it might not matter to you. One way to speed it up is to disable the indexes, do the deletes, enable the indexes. This is mentioned in the following blog post: http://shades-of-orange.com/post/2014/05/28/Delete-a-Large-Number-of-Rows-from-a-Table-in-SQL-Server  The only caveat that I can think of when messing with the indexes is that, depending on how many indexes you have to enable after you’re done, it may take a bit of time to re-index the data. In the blog post, the writer says it took an extra minute, but YMMV. I opted not to mess with the indexes … for me, it was all about the log file size, not the time it was taking.

Tuesday, June 30, 2015

DataAccess - Revisiting Yet Again

Almost 6 years ago, I wrote a 3-part series of blog posts about DataAccess. You can find them here: Part I, Part II and Part III.  These posts are still relevant, even though they are old. But, I’ve been meaning to add two things to the BBDataAccess class that I posted in Part III and I just keep forgetting to do it. Well, today is the day …

Implementing IDisposable

First, I wanted to make the class implement IDisposable. Here’s the addition that needs to be added to the BBDataAccess class:

public class BBDataAccess : IDisposable
{
    // declarations & methods from Part III's previously published example
    // ...
    // ...

    #region IDisposable Members

    public void Dispose()
    {
        if (this.oConnection != null)
        {
            if (this.oConnection.State != ConnectionState.Closed)
                this.oConnection.Close();
            this.oConnection.Dispose();
        }
    }

    #endregion
}

When calling methods on this class before we implemented IDisposable, you would have done something like this:

// Without IDisposable
CustomerDataAccess da = new CustomerDataAccess();
CustomerDataSet ds = da.GetCustomer(1);
// rest of your code

Now that you’ve implemented IDisposable, you’d call it like this:

// When the base BBDataAccess class implements IDisposable
using (CustomerDataAccess da = new CustomerDataAccess())
{
    CustomerDataSet ds = da.GetCustomer(1);
    // rest of your code
}

Transactions via TransactionScope

I also wanted to introduce Transactions using TransactionScope. You can make many things transactional, not just database access, so TransactionScope can be quite useful. Before we begin, take a look at my blog post about TransactionScope.
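The Utils.GetTransactionScope helper from that post boils down to something like this (a sketch; the isolation level and timeout shown are illustrative choices, not the only sensible ones):

```csharp
using System;
using System.Transactions;

// A sketch of a GetTransactionScope helper. TransactionScope's default
// isolation level is Serializable, which is usually too aggressive for
// database work; ReadCommitted is a saner default.
static TransactionScope GetTransactionScope()
{
    TransactionOptions options = new TransactionOptions
    {
        IsolationLevel = IsolationLevel.ReadCommitted,
        Timeout = TimeSpan.FromMinutes(1)
    };
    // Required means: join the ambient transaction if one exists,
    // otherwise start a new one.
    return new TransactionScope(TransactionScopeOption.Required, options);
}

using (TransactionScope scope = GetTransactionScope())
{
    // ... database work would go here ...
    scope.Complete();
}
Console.WriteLine("committed");
```

Centralizing the scope creation in one helper keeps the isolation level and timeout consistent everywhere.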

First, let’s just show you an example of how you might use TransactionScope in your calling code:

// The Utils class is from my TransactionScope blog post ...
// you *did* read it already, didn't you? ;0)
bool IsOK = false;
using (TransactionScope scope = Utils.GetTransactionScope())
{
    using (CustomerDataAccess da = new CustomerDataAccess())
    {
        CustomerDataSet ds = da.GetCustomer(1);

        // Do some stuff with the CustomerDataSet.
        // Then call a method to create an Order.
        // And then call another method that creates an Invoice.
        // Then let's get the changes and save everything.

        CustomerDataSet dsChanged = (CustomerDataSet)ds.GetChanges();
        CustomerDataSet dsDeleted = (CustomerDataSet)ds.GetChanges(DataRowState.Deleted);

        IsOK = da.SaveCustomerOrder(dsChanged, dsDeleted);
        if (IsOK)
            IsOK = da.SaveCustomerInvoice(dsChanged, dsDeleted);

        if (IsOK)
            scope.Complete();
    }
}

We could also add TransactionScope to the CustomerDataAccess and/or to the base BBDataAccess class from Part III. If we did that, you would end up with nested TransactionScopes, each one enlisting in the ambient Transaction. This is usually a good thing.
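You can see that enlisting behavior without any database at all; the inner scope joins the outer scope's ambient transaction rather than starting a new one:

```csharp
using System;
using System.Transactions;

string outerId, innerId;
using (TransactionScope outer = new TransactionScope())
{
    outerId = Transaction.Current.TransactionInformation.LocalIdentifier;

    // TransactionScopeOption.Required (the default) enlists in the
    // ambient transaction if one exists -- so this is the SAME transaction.
    using (TransactionScope inner = new TransactionScope(TransactionScopeOption.Required))
    {
        innerId = Transaction.Current.TransactionInformation.LocalIdentifier;
        inner.Complete();
    }
    outer.Complete();
}

Console.WriteLine(outerId == innerId);   // True -- one shared transaction
```

Note that both scopes must call Complete(); if the inner one doesn’t, the whole shared transaction rolls back.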

For the CustomerDataAccess class, you could add the TransactionScope in each method like this:

public bool SaveCustomerOrder(CustomerDataSet dsChanged, CustomerDataSet dsDeleted)
{
    this.DoIt = delegate()
    {
        // To use TransactionScope, be sure to add the System.Transactions namespace.
        // If there is already a current ("ambient") TransactionScope, this one
        // will enlist in the same transaction.
        using (TransactionScope scope = Utils.GetTransactionScope())
        {
            this.SaveTable(dsChanged.Customer, "csp_CustomerPut");
            this.SaveTable(dsChanged.Orders, "csp_OrderPut");

            if (dsDeleted != null)
            {
                this.SaveTable(dsDeleted.Orders, "csp_OrderDelete");
                this.SaveTable(dsDeleted.Customer, "csp_CustomerDelete");
            }

            scope.Complete();
        }
    };
    this.UndoIt = delegate()
    {
        dsChanged.RejectChanges();
        if (dsDeleted != null)
            dsDeleted.RejectChanges();
    };

    return this.Start();
}

You could also utilize TransactionScope in the Start method of the base class, like this:

public virtual bool Start()
{
    bool success = false;

    // Enter the retry loop
    while (!success)
    {
        try
        {
            // Start the transaction and execute the methods. Any failures from here
            // to the Complete() may result in an exception being thrown and caught, which
            // may lead to a retry or an abort depending on the type of error.
            using (TransactionScope scope = Utils.GetTransactionScope())
            {
                this.m_DoIt();

                success = true;

                scope.Complete();
            }
        }
        catch (Exception e)
        {
            // Typically we'll want a Retry if the database error was the
            // result of a deadlock, but you can Retry for any reason you wish.
            if (this.RetryRequested(e) && this.attempt < this.maxRetries)
            {
                // Try to undo any changes before we retry.
                // If we can't, just abort.
                if (!this.UndoChanges())
                {
                    this.AddToErrorMessage(e);
                    this.ForceCleanup();
                    break;
                }

                this.attempt++;

                // Might want to sleep for a random period
                //System.Random rand = new System.Random();
                //System.Threading.Thread.Sleep(rand.Next(MinSleepTime, MaxSleepTime));
            }
            else
            {
                // Max number of retries exceeded or an unrecoverable error
                // - fail and return the error message
                this.AddToErrorMessage(e);
                this.ForceCleanup();
                break;
            }
        }
    }

    return success;
}


That’s it for now. I hope this all makes sense to you, dear Reader. If not, please don’t hesitate to ask questions in the comments!