Sunday, May 27, 2018

(A)Synchronicity

Back in the day, "The Police" sang a song about Synchronicity. Today, I'm writing a blog about Asynchronicity.  ;0)

To be more precise, this blog post will explore whether or not there are performance differences between the various ways to write asynchronous code. I will be looking specifically at the code needed for a multi-threaded application, where the threads just do their thing, and we don't care what they're doing (in other words, kind of a Fire-And-Forget type of thing). I will explore these three methodologies:

  • System.Threading.Thread.Start()
  • System.Threading.ThreadPool.QueueUserWorkItem()
  • System.Threading.Tasks.Task.Run()

I'm betting that the latter two methods are simply syntactic sugar on top of the original Thread.Start() ... let's see if we can get some numerical evidence of that. And, just for the heck of it, I'm going to throw in a freebie ... I'm going to compare the Task.Run() both without and with a .ConfigureAwait(false) ... (not that it matters for this discussion). So, here's the code I used for testing this (called from a Button Click so I could easily run it again):

private System.Diagnostics.Stopwatch stopWatchThread;
private System.Diagnostics.Stopwatch stopWatchQueueUserWork;
private System.Diagnostics.Stopwatch stopWatchTask;
private System.Diagnostics.Stopwatch stopWatchTaskAwaitFalse;

private void CompareThreeAsyncMethods()
{
    this.stopWatchThread = new Stopwatch();
    Console.WriteLine("Start Thread at {0}", DateTime.Now);
    this.stopWatchThread.Start();
    ThreadStart threadStart = new ThreadStart(RunThread);
    Thread myThread = new Thread(threadStart);
    myThread.Start();

    this.stopWatchQueueUserWork = new Stopwatch();
    Console.WriteLine("Start QueueUserWork at {0}", DateTime.Now);
    stopWatchQueueUserWork.Start();
    ThreadPool.QueueUserWorkItem(WorkToDo, new WorkSettings() { stopWatch = stopWatchQueueUserWork, WorkName = "QueueUserWork" });

    this.stopWatchTask = new Stopwatch();
    Console.WriteLine("Start Task at {0}", DateTime.Now);
    stopWatchTask.Start();
    Task.Run(() =>
    {
        WorkToDo(new WorkSettings() { stopWatch = stopWatchTask, WorkName = "Task" });
    });

    this.stopWatchTaskAwaitFalse = new Stopwatch();
    Console.WriteLine("Start Task AwaitFalse at {0}", DateTime.Now);
    stopWatchTaskAwaitFalse.Start();
    Task.Run(() =>
    {
        WorkToDo(new WorkSettings() { stopWatch = stopWatchTaskAwaitFalse, WorkName = "TaskAwaitFalse" });
    }).ConfigureAwait(false);

    Console.WriteLine("Show results as threads complete ... {0}\r\n", DateTime.Now);
}

public class WorkSettings
{
    public System.Diagnostics.Stopwatch stopWatch { get; set; }
    public string WorkName { get; set; }
}

// Need this only for the Thread option
private void RunThread()
{
    this.WorkToDo(new WorkSettings() { stopWatch = this.stopWatchThread, WorkName = "Thread" });
}

private void WorkToDo(object obj)
{
    WorkSettings work = (WorkSettings)obj;
    for (int i = 0; i < 2147483647; i++) // largest int 2,147,483,647
    {
        //Thread.Sleep(500);
    }

    work.stopWatch.Stop();
    Console.WriteLine("{0} Stopped at {1}. Elapsed Time {2} milliseconds\r\n", work.WorkName, DateTime.Now, work.stopWatch.ElapsedMilliseconds);
}

By starting a Stopwatch before I start running each asynchronous method, I take into account any "prep" work that might be involved in starting the call to the method. If there is any, it should be reflected in the overall time to complete a method. Here are the results:

Start Thread at 5/26/2018 5:13:38 PM
Start QueueUserWork at 5/26/2018 5:13:38 PM
Start Task at 5/26/2018 5:13:38 PM
Start Task AwaitFalse at 5/26/2018 5:13:38 PM
Show results as threads complete ... 5/26/2018 5:13:38 PM

Thread Stopped at 5/26/2018 5:13:44 PM. Elapsed Time 5669 milliseconds
Task Stopped at 5/26/2018 5:13:44 PM. Elapsed Time 5775 milliseconds
TaskAwaitFalse Stopped at 5/26/2018 5:13:44 PM. Elapsed Time 5836 milliseconds
QueueUserWork Stopped at 5/26/2018 5:13:44 PM. Elapsed Time 5859 milliseconds

// last (10th) time
Thread Stopped at 5/26/2018 5:18:14 PM. Elapsed Time 5890 milliseconds
TaskAwaitFalse Stopped at 5/26/2018 5:18:15 PM. Elapsed Time 5978 milliseconds
QueueUserWork Stopped at 5/26/2018 5:18:15 PM. Elapsed Time 5982 milliseconds
Task Stopped at 5/26/2018 5:18:15 PM. Elapsed Time 6008 milliseconds

// Average
Average for Thread: 5986 milliseconds
Average for Task: 5964 milliseconds
Average for TaskAwaitFalse: 5973 milliseconds
Average for QueueUserWork: 6048 milliseconds

Judging by those results, I think the minuscule differences might be attributed to all four methods running at once and competing for CPU resources. But we can test that as well. I'm going to run them individually: run one, let it complete, run another, let it complete. I wonder if the times will be *identical* then! Let's re-work the code to look like this:

// The CompareAsync enum wasn't shown above ... it drives which method runs next:
private enum CompareAsync
{
    Begin,
    Thread,
    QueueUserWork,
    Task,
    TaskAwaitFalse,
    Done
}

private CompareAsync methodIs = CompareAsync.Begin;
private CompareAsync currentMethodRunning = CompareAsync.Begin;

private void CompareThreeAsyncMethodsIndividually()
{
    // Note: this loop busy-waits until all four methods have completed,
    // which is fine for a quick test, but don't block a thread like this in production
    while (methodIs != CompareAsync.Done)
    {
        if (methodIs == CompareAsync.Begin)
            methodIs = CompareAsync.Thread;
        if (methodIs == currentMethodRunning)
            continue;
        switch (methodIs)
        {
            case CompareAsync.Thread:
                currentMethodRunning = methodIs;
                this.stopWatchThread = new Stopwatch();
                Console.WriteLine("Start Thread at {0}", DateTime.Now);
                this.stopWatchThread.Start();
                ThreadStart threadStart = new ThreadStart(RunThread);
                Thread myThread = new Thread(threadStart);
                myThread.Start();
                break;
            case CompareAsync.QueueUserWork:
                currentMethodRunning = methodIs;
                this.stopWatchQueueUserWork = new Stopwatch();
                Console.WriteLine("Start QueueUserWork at {0}", DateTime.Now);
                stopWatchQueueUserWork.Start();
                ThreadPool.QueueUserWorkItem(WorkToDo, new WorkSettings() { stopWatch = stopWatchQueueUserWork, WorkName = "QueueUserWork" });
                break;
            case CompareAsync.Task:
                currentMethodRunning = methodIs;
                this.stopWatchTask = new Stopwatch();
                Console.WriteLine("Start Task at {0}", DateTime.Now);
                stopWatchTask.Start();
                Task.Run(() =>
                {
                    WorkToDo(new WorkSettings() { stopWatch = stopWatchTask, WorkName = "Task" });
                });
                break;
            case CompareAsync.TaskAwaitFalse:
                currentMethodRunning = methodIs;
                this.stopWatchTaskAwaitFalse = new Stopwatch();
                Console.WriteLine("Start Task AwaitFalse at {0}", DateTime.Now);
                stopWatchTaskAwaitFalse.Start();
                Task.Run(() =>
                {
                    WorkToDo(new WorkSettings() { stopWatch = stopWatchTaskAwaitFalse, WorkName = "TaskAwaitFalse" });
                }).ConfigureAwait(false);
                break;
            default:
                break;
        }
    }
}


And added an extra bit at the end of the WorkToDo() method such that it would be able to determine which method would run next:

private void WorkToDo(object obj)
{
    WorkSettings work = (WorkSettings)obj;
    for (int i = 0; i < 2147483647; i++) // largest int 2,147,483,647
    {
        //Thread.Sleep(500);
    }
    work.stopWatch.Stop();
    Console.WriteLine("{0} Stopped at {1}. Elapsed Time {2} milliseconds\r\n", work.WorkName, DateTime.Now, work.stopWatch.ElapsedMilliseconds);

    // ADDED THIS
    // Set it to the next one
    switch (methodIs)
    {
        case CompareAsync.Thread:
            methodIs = CompareAsync.QueueUserWork;
            break;
        case CompareAsync.QueueUserWork:
            methodIs = CompareAsync.Task;
            break;
        case CompareAsync.Task:
            methodIs = CompareAsync.TaskAwaitFalse;
            break;
        case CompareAsync.TaskAwaitFalse:
            methodIs = CompareAsync.Done;
            break;
    }
}

And now the results are this:

Start Thread at 5/26/2018 5:29:32 PM
Thread Stopped at 5/26/2018 5:29:37 PM. Elapsed Time 4877 milliseconds

Start QueueUserWork at 5/26/2018 5:29:37 PM
QueueUserWork Stopped at 5/26/2018 5:29:42 PM. Elapsed Time 4850 milliseconds

Start Task at 5/26/2018 5:29:42 PM
Task Stopped at 5/26/2018 5:29:47 PM. Elapsed Time 4918 milliseconds

Start Task AwaitFalse at 5/26/2018 5:29:47 PM
TaskAwaitFalse Stopped at 5/26/2018 5:29:52 PM. Elapsed Time 4978 milliseconds

// last time (10th)
Thread Stopped at 5/26/2018 5:32:37 PM. Elapsed Time 4910 milliseconds
QueueUserWork Stopped at 5/26/2018 5:32:42 PM. Elapsed Time 4821 milliseconds
Task Stopped at 5/26/2018 5:32:46 PM. Elapsed Time 4894 milliseconds
TaskAwaitFalse Stopped at 5/26/2018 5:32:51 PM. Elapsed Time 4924 milliseconds

// Average
Average for Thread: 4986 milliseconds
Average for QueueUserWork: 4975 milliseconds
Average for Task: 4966 milliseconds
Average for TaskAwaitFalse: 4962 milliseconds

Well now ... there's still a difference between the 4 methods, isn't there? However, it is still teeny. Running each of these individually (instead of starting all 4 methods at the same time, as I did in the first test) produces pretty much the same results, the difference being that each is about 1000 milliseconds faster than in the first test.

The spread between the fastest and slowest averages in the second test is only about 24 milliseconds, or roughly 0.5%! I'd say that these various methodologies for running asynchronous work aren't really any different (statistically speaking). Hey, I just like to mess around with numbers.  ;0)
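As a footnote: since two of the tests already use Tasks, the whole harness could be collapsed into one awaited method instead of polling a flag. This is just a sketch of how I might time the same kind of CPU-bound busy-work today (the TimingSketch/TimeAllAsync names are made up for illustration):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class TimingSketch
{
    // The same kind of CPU-bound busy-work used in the tests above
    private static void BusyWork()
    {
        for (int i = 0; i < 100_000_000; i++) { }
    }

    // Start one Task per label, time each with its own Stopwatch,
    // and await them all instead of spinning on a state flag
    public static async Task<long[]> TimeAllAsync(params string[] labels)
    {
        var tasks = new Task<long>[labels.Length];
        for (int i = 0; i < labels.Length; i++)
        {
            string label = labels[i];
            tasks[i] = Task.Run(() =>
            {
                var sw = Stopwatch.StartNew();
                BusyWork();
                sw.Stop();
                Console.WriteLine("{0}: {1} ms", label, sw.ElapsedMilliseconds);
                return sw.ElapsedMilliseconds;
            });
        }
        return await Task.WhenAll(tasks).ConfigureAwait(false);
    }
}
```

Note that here the ConfigureAwait(false) is attached to a Task that is actually awaited, which is the only situation where it does anything (it tells the continuation not to resume on the captured context). On a fire-and-forget Task.Run, like in the tests above, the returned awaitable is simply discarded, so it has no effect at all.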

Happy coding!  =0)

Sunday, February 25, 2018

Parent/Child DataBinding

I have seen a few questions lately on the MSDN Forums about synchronizing parent/child data on a Windows Form. I thought that I had written a blog post about this years ago, but apparently I'm mis-remembering (probably remembering all the Forum posts that I've answered in the past)! This particular post will cover parent/child with DataSets, in one of two ways: using DataRelations or using the Filter property of the child BindingSource.

First, let's take a look at using DataRelations ... this is a pretty straightforward process (especially if you are already using DataRelations).

// First, you'll need two BindingSources:
private BindingSource bsParent;
private BindingSource bsChild;

You obviously need to have a DataRelation between two DataTables in your DataSet. If you don't already know how to set that up, it's pretty straightforward:

this.dsCust.Relations.Add("CustomerOrdersRelation",
    this.dsCust.Tables["Customer"].Columns["CustID"],
    this.dsCust.Tables["Orders"].Columns["CustID"]);

I usually recommend that DataBinding be set up in the Form's Load event handler. You'd have code like this (this would be after you've already set up a DataRelation):

this.bsParent = new BindingSource();
this.bsChild = new BindingSource();
 
this.bsParent.DataSource = this.dsCust;
this.bsParent.DataMember = "Customer";
this.bsChild.DataSource = this.bsParent;
this.bsChild.DataMember = "CustomerOrdersRelation";

There are two key things to note here:
  1. The DataSource for the bsChild is the bsParent.
  2. The DataMember for the bsChild is the Relationship, rather than a DataTable. This is what makes the whole thing work.
Now, we can use the BindingSources to DataBind parent/child DataGridViews and/or TextBoxes or other Controls, depending on the functionality you need:

this.oGridCustomer.DataSource = this.bsParent;
this.oGridOrders.DataSource = this.bsChild;
this.txtOrderDescription.DataBindings.Add("Text", this.bsChild, "description");

There *is* one little "gotcha" that you sometimes need to watch out for. That's the subject of bsChild.EndEdit(). The potential problem and solution is described in my blog post: https://geek-goddess-bonnie.blogspot.com/2010/03/uncommitted-child-table-changes.html

However, if you *can't* (or don't want to) use a DataRelation, then you can do this with a Filter on the child BindingSource. It's only slightly different: we have to utilize the bsParent.PositionChanged event handler to reset the Filter.

The bsParent is set up the same way as it was above. It's the bsChild that will be different:

bsChild.DataSource = this.dsCust.Tables["Orders"];
bsChild.Filter = string.Format("CustID = '{0}'", this.dsCust.Tables["Customer"].Rows[bsParent.Position]["CustID"]);
bsParent.PositionChanged += bsParent_PositionChanged;

And, for the Parent PositionChanged event handler, we just reset the Child Filter:

private void bsParent_PositionChanged(object sender, EventArgs e)
{
    bsChild.Filter = string.Format("CustID = '{0}'", this.dsCust.Tables["Customer"].Rows[bsParent.Position]["CustID"]);
}

And, obviously, since the two grids and the TextBox are set up using the bsParent and bsChild, nothing needs to change there.

In either scenario, as the user moves through the rows in the Customer grid, the corresponding child rows will be all that's displayed in the Order grid and the corresponding Order description in the textbox.
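Incidentally, the DataRelation mechanism works without any UI at all ... DataRow.GetChildRows() gives you the same parent/child filtering that the child BindingSource performs behind the scenes. A minimal console sketch (with made-up customer/order data) shows it:

```csharp
using System;
using System.Data;

public static class RelationDemo
{
    // Build a tiny Customer/Orders DataSet and return the child rows
    // for the first Customer via the DataRelation
    public static DataRow[] GetOrdersForFirstCustomer()
    {
        var ds = new DataSet("dsCust");

        var customer = ds.Tables.Add("Customer");
        customer.Columns.Add("CustID", typeof(int));
        customer.Columns.Add("Name", typeof(string));
        customer.Rows.Add(1, "Acme");
        customer.Rows.Add(2, "Globex");

        var orders = ds.Tables.Add("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("CustID", typeof(int));
        orders.Rows.Add(10, 1);
        orders.Rows.Add(11, 2);
        orders.Rows.Add(12, 1);

        // Same relation as in the Form code above
        ds.Relations.Add("CustomerOrdersRelation",
            customer.Columns["CustID"],
            orders.Columns["CustID"]);

        // The child BindingSource effectively does this for the current parent row
        return customer.Rows[0].GetChildRows("CustomerOrdersRelation");
    }
}
```

GetOrdersForFirstCustomer() returns only the two orders belonging to "Acme", which is exactly what the child grid displays as you move through the parent rows.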


Tuesday, January 30, 2018

Use and Generate PUT Stored Procedures

To me, the "best practice" for DataAccess to a database is to always utilize Stored Procedures, if your database server supports them. Microsoft SQL Server, of course, supports Stored Procs as does Microsoft Access. I use SQL Server exclusively, so this blog post will be limited to that database server. It's easy enough to take the concepts shown here and apply the same methodology to an Access database, or to any other database that allows Stored Procs.

However, if you Google for information about the best practices involving Stored Procedures, you will, of course, find a gazillion conflicting opinions ... some agreeing with my opinion and others not. But still, I think it's an important best practice.

OK, now I'll get down off my soap box and on to the topic of my post!

I have a base DataAccess class that can be used directly, or sub-classed for a specific domain (for example, a large inventory system may have an OrdersDataAccess, an InventoryDataAccess and a CustomerDataAccess ... all subclassed from the BaseDataAccess). I have a series of DataAccess posts here in my blog, and I will show bits and pieces of it here in this post to illustrate my point. If you'd like to read those additional posts, you can use this search: http://geek-goddess-bonnie.blogspot.com/search?q=DataAccess Some of the earlier posts are incomplete, code-wise, but get "fleshed out" in later posts.

I see people on the Forums struggling with simple CRUD activity, trying to utilize the SqlCommandBuilder (usually incorrectly) and having all kinds of problems. SqlCommandBuilder is not the answer, in fact, I never use it ... there is an easier way to Insert and/or Update from a DataTable and do it by using the same Stored Proc (one for each table).  It's what I refer to as PUT procs, and the PUT Stored Proc will either INSERT or UPDATE depending on whether or not the Primary Key (PK) is included in the Parameters.

One caveat here ... I use the DataSet/DataTable paradigm for my DTOs (Data Transfer Objects). The same concepts can apply to POCO, with just different methods in the DataAccess class needed. (POCO means "Plain Old CLR Object which is defined here in Wikipedia: https://en.wikipedia.org/wiki/Plain_old_CLR_object as "a POCO does not have any dependency on an external framework and generally does not have any attached behaviour." ... but go ahead and read WikiPedia if you are unfamiliar with the term).
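The POCO flavor of those DataAccess methods isn't shown in this post, but as a hypothetical sketch: a little reflection can turn a POCO's public properties into the same name/value pairs that a SetAllParameters()-style method iterates (property names standing in for column names, and nulls mapped to DBNull so the PUT proc's ISNULL() defaults kick in):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public static class PocoParms
{
    // Hypothetical helper: map a POCO's public instance properties to
    // parameter names/values, the way SetAllParameters() walks DataRow columns
    public static Dictionary<string, object> ToParameterMap(object poco)
    {
        var map = new Dictionary<string, object>();
        foreach (PropertyInfo prop in poco.GetType().GetProperties(
                     BindingFlags.Public | BindingFlags.Instance))
        {
            // Null property values become DBNull, so the Stored Proc
            // leaves those columns alone on UPDATE
            map["@" + prop.Name] = prop.GetValue(poco) ?? DBNull.Value;
        }
        return map;
    }
}
```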

Here's a simple example of a PUT Stored Proc. I will show some methods you can use to create a Utility for generating such Procs automatically at the end of this blog post:

CREATE PROCEDURE [dbo].[bsp_StatusPut]
    @statuskey             bigint = NULL OUTPUT,
    @statusdatetime        datetime = NULL,
    @status                varchar(max) = NULL,
    @agency                varchar(100) = NULL,
    @messagetype           varchar(128) = NULL,
    @id                    uniqueidentifier = NULL
AS
    SET NOCOUNT ON
    SET XACT_ABORT ON

    BEGIN TRANSACTION
    IF (@statuskey <= 0 OR @statuskey IS NULL)
    BEGIN
        -- Insert Values into the Status table
        INSERT Status
            (
            statusdatetime,
            status,
            agency,
            messagetype,
            id)
        SELECT 
            @statusdatetime,
            @status,
            @agency,
            @messagetype,
            @id
 
        SELECT @statuskey = SCOPE_IDENTITY()
    END
    ELSE
    BEGIN
        -- Update the Status table
        UPDATE Status
        SET 
            statusdatetime = ISNULL(@statusdatetime, statusdatetime),
            status = ISNULL(@status, status),
            agency = ISNULL(@agency, agency),
            messagetype = ISNULL(@messagetype, messagetype),
            id = ISNULL(@id, id)
        WHERE statuskey = @statuskey
    END
 
    COMMIT TRANSACTION
    RETURN 0
GO


Now, as you can see, this SP doesn't have many parameters, and it would be easy enough to add them one-by-one. However, there's an easier way when you have a lot of columns in your table. In my DataAccess class, I have a set of methods for dealing with Parameters. Here are the relevant bits of code from a BaseDataAccess class to go along with my example:

// This is not the entire class, see blog posts for more of the class
public class BaseDataAccess : IDisposable
{
    private IDbConnection oConnection;
    private IDbCommand oCommand;
    private IDbDataAdapter oAdapter;
    private IDataParameterCollection oParms;
    
    public string ConnectionString = "";
 
    public BaseDataAccess(string connectionString)
    {
        this.ConnectionString = connectionString;
        this.oConnection = new SqlConnection(connectionString);
        this.oCommand = new SqlCommand();
        this.oAdapter = new SqlDataAdapter((SqlCommand)this.oCommand);
        this.oCommand.Connection = this.oConnection;
        this.oCommand.CommandType = CommandType.StoredProcedure;
    }
 
    protected void SetAllParameters(DataRow Row)
    {
        this.ClearParameters();
        for (int i = 0; i < Row.Table.Columns.Count; i++)
        {
            this.AddParms(Row.Table.Columns[i].ColumnName, Row[i]);
        }
    }
    public void AddParms(string ParmName, object ParmValue)
    {
        if (ParmName.StartsWith("@") == false)
            ParmName = "@" + ParmName;

        if (ParmValue != DBNull.Value)
        {
            if (this.oCommand.Parameters.IndexOf(ParmName) > -1)
                ((IDbDataParameter)this.oCommand.Parameters[ParmName]).Value = ParmValue;
            else
            {
                // IDbCommand has no AddWithValue(), so create the parameter explicitly
                IDbDataParameter parm = this.oCommand.CreateParameter();
                parm.ParameterName = ParmName;
                parm.Value = ParmValue;
                this.oCommand.Parameters.Add(parm);
            }
        }
        else
            this.RemoveParm(ParmName);
    }
    protected void ClearParameters()
    {
        this.oCommand.Parameters.Clear();
    }
    public void RemoveParm(string ParmName)
    {
        if (ParmName.StartsWith("@") == false)
            ParmName = "@" + ParmName;
        if (this.oCommand.Parameters.IndexOf(ParmName) > -1)
            this.oCommand.Parameters.RemoveAt(ParmName);
    }
    public void SetParmDirection(string ParmName, ParameterDirection direction)
    {
        if (ParmName.StartsWith("@") == false)
            ParmName = "@" + ParmName;
        if (this.oCommand.Parameters.IndexOf(ParmName) > -1)
            ((IDataParameter)this.oCommand.Parameters[ParmName]).Direction = direction;
    }
    
    public void ExecuteCommand()
    {
        this.ExecuteCommand(this.oCommand.CommandText);
    }
    public void ExecuteCommand(string StoredProcName)
    {
        this.OpenConnection();
        this.oCommand.CommandText = StoredProcName;
        try
        {
            this.oCommand.ExecuteNonQuery();
        }
        catch (SqlException ex)
        {
            // Process the SqlException with your normal logging methodology
        }
    }
 
    public void OpenConnection()
    {
        try
        {
            if (this.oConnection.State != ConnectionState.Open)
            {
                this.oConnection.Open();
            }
        }
        catch (Exception ex)
        {
            // Process the Exception with your normal logging methodology
        }
    }
 
    public void Dispose()
    {
        if (this.oConnection != null)
        {
            if (this.oConnection.State != ConnectionState.Closed)
                this.oConnection.Close();
            this.oConnection.Dispose();
        }
    }
}


Here is how you might use these in your own DataAccess classes, for example, if you've created a StatusDataAccess class, sub-classed from BaseDataAccess. My example uses a Typed DataSet, but it could just as easily be used with a plain old DataSet (this is based on my SP example above):

public void UpdateStatus(StatusDataSet ds)
{
    using (DataAccess da = new DataAccess(this.MyConnectionString))
    {
        da.SetCommand("bsp_StatusPut");
        foreach (StatusDataSet.StatusRow call in ds.Status)
        {
            da.SetAllParameters(call);
            da.SetParmDirection("statuskey", ParameterDirection.InputOutput);
            da.ExecuteCommand();
        }
    }
}


OK, as promised earlier, here is a set of methods you can use to automatically generate the SQL Script to create such PUT Stored Procedures for every table in your database. Write yourself a database utility application, if you don't already have one, and include these methods in it:

#region Generating PUT Stored Procs
 
StringBuilder ProcBuilder;
public string GeneratePutStoredProc(string DatabaseName, string TableName)
{
    DataTable dtColumns = new DataTable();
    string ErrorMsg = "";
    this.ProcBuilder = new StringBuilder();
 
    // First, let's fill a DataTable with the information for each column in the database Table
    using (SqlConnection conn = new SqlConnection(this.ConnectionStringStatus))
    {
        try
        {
            conn.Open();
            dtColumns = conn.GetSchema("Columns", new[] { DatabaseName, null, TableName });
        }
        catch (Exception ex)
        {
            ErrorMsg = ex.Message + "\r\n";
            if (ex.InnerException != null && ex.InnerException.Message != ex.Message)
                ErrorMsg += ex.InnerException.Message + "\r\n";
        }
    }
    // Get error-handling out of the way
    if (dtColumns.Rows.Count == 0 && ErrorMsg != "")
        ErrorMsg += string.Format("No columns retrieved for {0}/{1}.\r\n", DatabaseName, TableName);
    if (ErrorMsg != "")
    {
        ErrorMsg += "Check your database and/or connection";
        MessageBox.Show(ErrorMsg);
        return "";
    }
 
    // One of our database conventions is that the name of the PK column is always TableName + "Key"
    string KeyName = TableName + "Key";
 
    // Then start building the SQL script for the creation of the Stored Proc,
    // calling a few methods to do the heavy lifting
    this.BuildHeader(TableName, "PUT");
    this.BuildParameterList(dtColumns, KeyName);
 
    this.ProcBuilder.Append("\r\nAS\r\n");
    this.ProcBuilder.Append("\tSET NOCOUNT ON\r\n");
    this.ProcBuilder.Append("\tSET XACT_ABORT ON\r\n\r\n");
    this.ProcBuilder.Append("\tBEGIN TRANSACTION\r\n");
 
    this.ProcBuilder.AppendFormat("\tIF (@{0} <= 0 OR @{0} IS NULL)\r\n", KeyName);
    this.ProcBuilder.Append("\tBEGIN\r\n");
    this.BuildInsert(dtColumns, KeyName, TableName);
    this.ProcBuilder.Append("\tEND\r\n\tELSE\r\n\tBEGIN\r\n");
    this.BuildUpdate(dtColumns, KeyName, TableName);
    this.ProcBuilder.Append("\tEND\r\n\r\n\tCOMMIT TRANSACTION\r\n\tRETURN 0\r\n");
 
    this.BuildFooter();
 
    return this.ProcBuilder.ToString();
}
 
protected void BuildHeader(string TableName, string ProcType)
{
    string prefix = "bsp";
    string fullProcName = String.Format("{0}_{1}{2}", prefix, TableName, ProcType);
 
    this.ProcBuilder.Append("\r\n");
    this.ProcBuilder.Append("---------------------------------------------------------------\r\n");
    this.ProcBuilder.AppendFormat("-- Stored Procedure {0}\r\n", fullProcName);
    this.ProcBuilder.AppendFormat("-- Basic {1} Procedure for {0} \r\n", TableName, ProcType);
    this.ProcBuilder.AppendFormat("-- Automatically generated {0}\r\n", DateTime.Now);
    this.ProcBuilder.Append("---------------------------------------------------------------\r\n");
    this.ProcBuilder.AppendFormat("IF EXISTS(SELECT * FROM sys.objects WHERE name LIKE '{0}' and type = 'P')\r\n", fullProcName);
    this.ProcBuilder.AppendFormat("\tDROP PROCEDURE {0}\r\n", fullProcName);
    this.ProcBuilder.Append("GO\r\n");
    this.ProcBuilder.Append("SET QUOTED_IDENTIFIER OFF \r\n");
    this.ProcBuilder.Append("GO\r\n");
    this.ProcBuilder.Append("SET ANSI_NULLS ON \r\n");
    this.ProcBuilder.Append("GO\r\n\r\n");
    this.ProcBuilder.AppendFormat("CREATE PROCEDURE {0}\r\n", fullProcName);
}
protected void BuildParameterList(DataTable dtColumns, string KeyName)
{
    StringBuilder sb = new StringBuilder();
    foreach (DataRow row in dtColumns.Rows)
    {
        if (sb.Length > 0)
            sb.Append(",\r\n");
        // Character types need their length appended (the schema table reports
        // -1 for (max)); a bare "varchar" would default to varchar(1) in the proc
        string dataType = row["DATA_TYPE"].ToString();
        if (row["CHARACTER_MAXIMUM_LENGTH"] != DBNull.Value)
        {
            int length = Convert.ToInt32(row["CHARACTER_MAXIMUM_LENGTH"]);
            dataType += string.Format("({0})", length == -1 ? "max" : length.ToString());
        }
        sb.AppendFormat("\t@{0}\t\t{1} = NULL", row["COLUMN_NAME"], dataType);
        if (row["COLUMN_NAME"].ToString().ToUpper() == KeyName.ToUpper())
            sb.Append(" OUTPUT");
    }
    this.ProcBuilder.Append(sb.ToString());
}
protected void BuildInsert(DataTable dtColumns, string KeyName, string TableName)
{
    this.ProcBuilder.AppendFormat("\t\t-- Insert Values into the {0} table\r\n", TableName);
    this.ProcBuilder.AppendFormat("\t\tINSERT {0}\r\n", TableName);
 
    StringBuilder sbColumns = new StringBuilder();
    StringBuilder sbParms = new StringBuilder();
    foreach (DataRow row in dtColumns.Rows)
    {
        if (row["COLUMN_NAME"].ToString().ToUpper() == KeyName.ToUpper())
            continue;
        if (sbColumns.Length > 0)
            sbColumns.Append(",\r\n");
        if (sbParms.Length > 0)
            sbParms.Append(",\r\n");
        sbColumns.AppendFormat("\t\t\t{0}", row["COLUMN_NAME"]);
        sbParms.AppendFormat("\t\t\t@{0}", row["COLUMN_NAME"]);
    }
    this.ProcBuilder.Append("\t\t\t(\r\n");
    this.ProcBuilder.AppendFormat("{0})\r\n", sbColumns.ToString());
    this.ProcBuilder.Append("\t\tSELECT\r\n");
    this.ProcBuilder.AppendFormat("{0}\r\n\r\n", sbParms.ToString());
    this.ProcBuilder.AppendFormat("\t\tSELECT @{0} = SCOPE_IDENTITY()\r\n", KeyName);
}
protected void BuildUpdate(DataTable dtColumns, string KeyName, string TableName)
{
    this.ProcBuilder.AppendFormat("\t\t-- Update the {0} table\r\n", TableName);
    this.ProcBuilder.AppendFormat("\t\tUPDATE {0}\r\n", TableName);
    this.ProcBuilder.Append("\t\tSET\r\n");
 
    StringBuilder sb = new StringBuilder();
    foreach (DataRow row in dtColumns.Rows)
    {
        if (row["COLUMN_NAME"].ToString().ToUpper() == KeyName.ToUpper())
            continue;
        if (sb.Length > 0)
            sb.Append(",\r\n");
        sb.AppendFormat("\t\t\t{0} = ISNULL(@{0}, {0})", row["COLUMN_NAME"]);
    }
    this.ProcBuilder.Append(sb.ToString());
    this.ProcBuilder.AppendFormat("\r\n\t\tWHERE {0} = @{0}\r\n", KeyName);
}
protected void BuildFooter()
{
    this.ProcBuilder.Append("\r\n");
    this.ProcBuilder.Append("GO\r\n");
    this.ProcBuilder.Append("SET QUOTED_IDENTIFIER OFF \r\n");
    this.ProcBuilder.Append("GO\r\n");
    this.ProcBuilder.Append("SET ANSI_NULLS ON \r\n");
    this.ProcBuilder.Append("GO\r\n");
}
 
#endregion


That's it. Happy coding!!  =0)

Saturday, December 02, 2017

Putting Up The Tree

From the subject title, I bet you thought this was going to be a post about Christmas trees (seeing as it's the start of the Holiday Season). But no, this is still a geeky blog.  ;0)
 
A friend of mine recently asked me for help with TreeViews on Windows Forms. I did a lot of WinForm work back in the early days of .NET, but not very much recently. In fact, I don't do much UI stuff anymore, except for a little test application I keep adding to, which helps with answering questions on the Forums (and writing some of my blog posts) ... but there was nothing in my test app with a TreeView. Consequently, I had to really dig around to be able to help him.
 
I remembered the concept I wanted to tell him about easily enough ... it deals with constructing a tree with data that uses self-referencing keys/ids. So, you only need one database table, and it would contain the data for multiple TreeViews. Forget about multiple tables (parent, child, grandchild ... where does it end?) ... one table will handle it all and it's not all that complicated.  The part I couldn't remember was the code to actually fill a TreeView with that data, but it didn't take too long to come up with it. Bear in mind that this concept of self-referencing columns is a concept that can be used for anything that is hierarchical in nature, not just for a TreeView on a Windows Form!!
 
First, let's look at the database table structure.  It should be something like this:

PK    Description       ID            IsRoot    ParentID        RootID

It's probably best to use self-referencing IDs rather than keys ... unless your IDs are not unique. For example, if your data is for an inventory system, then most likely all your Part IDs would be unique across all of your inventory. In that case, use those Part IDs for self-referencing.

PK    Description       ID            IsRoot    ParentID        RootID
0     Display Assembly  101-A045      1         NULL            101-A045
1     Camera            P101-0024     0         101-A045        101-A045
2     Tape              P100-0004     0         P101-0024       101-A045
3     3M Tape           P100-0124     0         P101-0024       101-A045
4     Display Panel     P101-A045     0         P101-0024       101-A045
5     LCD Cable         P101-0046     0         P101-A045       101-A045
6     X Tape            P100-0084     0         P101-A045       101-A045
7     Graphic Board     P101-A023     0         P101-0024       101-A045
8     Screws 4          P100-0110     0         P101-A023       101-A045

The tree would look like this (you can display the ID if you wish, this particular example shows the ID) :

+Display Assembly [101-A045]
    +---Camera [P101-0024]
        |---Tape [P100-0004]
        |---3M Tape [P100-0124]
        +---Display Panel [P101-A045]
        |    |---LCD Cable [P101-0046]
        |    |---X Tape [P100-0084]
        +---Graphic Board [P101-A023]
        |    |---Screws 4 [P100-0110]
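As I said above, this self-referencing idea works for anything hierarchical, not just a TreeView. Purely as a sketch (a plain DataTable and recursion, no UI, with hypothetical HierarchyDemo names), here's how those ParentID links turn the flat rows into an indented hierarchy:

```csharp
using System;
using System.Data;
using System.Text;

public static class HierarchyDemo
{
    // Recursively walk the self-referencing ParentID column, indenting
    // one level per generation (assumes IDs contain no single quotes)
    public static void AppendChildren(DataTable table, string parentId,
                                      int depth, StringBuilder output)
    {
        foreach (DataRow row in table.Select(
                     string.Format("ParentID = '{0}'", parentId)))
        {
            output.AppendLine(new string(' ', depth * 4)
                + string.Format("{0} [{1}]", row["Description"], row["ID"]));
            AppendChildren(table, (string)row["ID"], depth + 1, output);
        }
    }

    public static string BuildTree(DataTable table)
    {
        var output = new StringBuilder();
        // Roots are the rows with no parent (ParentID is NULL)
        foreach (DataRow root in table.Select("ParentID IS NULL"))
        {
            output.AppendLine(string.Format("{0} [{1}]",
                root["Description"], root["ID"]));
            AppendChildren(table, (string)root["ID"], 1, output);
        }
        return output.ToString();
    }
}
```

The exact same recursion drives a TreeView: instead of appending indented strings, you'd add a TreeNode to the parent node's Nodes collection at each step.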

But, if your data is of the type code/description, where code is not unique because different *kinds* of codes might utilize the same number/symbols for the code, then you'll have to use the keys instead. I'm thinking of something like a system that integrates information from several different organizations, each organization having its own set of codes and descriptions. Those codes could easily be similar amongst the organizations. Note that in the following example, I'm going to show data for two different Organizations (the Organization is not something stored in this particular data, but depending on your application, you *could* add another column for an Organization name to your database table, or just use the Root's Description for that).

These are codes for imaginary Fire and Police departments. Note that each department has its own Root, so that each department can display its own hierarchy of codes. Note also that there *are* some codes that are the same in both Departments:

PK    Description        ID            IsRoot    ParentID        RootID
0     MyTown Fire Dept   FIRE          1         NULL             0
1     Resource Type      RSRC          0         0                0
2     Battalion          BAT           0         1                0
3     BLS Ambulance      BA            0         1                0
4     Command            CMD           0         1                0
5     Engine             ENG           0         1                0
6     Helicopter         COPT          0         1                0
7     Truck              TRK           0         1                0
8     Call Type          CALL          0         0                0
9     Brush Fire         BRUSH         0         8                0
10    EMS                EMS           0         8                0
11    River Rescue       RR            0         8                0
12    Structure Fire     STRUCT        0         8                0
13    Traffic Accident   TA            0         8                0
                    
14    MyTown Police Dept POLICE        1         NULL              14
15    Unit Type          UNIT          0         14                14
16    Ambulance          MEDIC         0         15                14
17    Bicycle            BIKE          0         15                14
18    Chief              CHIEF         0         15                14
19    Helicopter         COPT          0         15                14
20    Mobile Command     CMD           0         15                14
21    Patrol Car         CRUISER       0         15                14
22    Swat Team          SWAT          0         15                14
23    Incident Type      CALL          0         14                14
24    Break & Enter      B&E           0         23                14
25    Burglary           BURG          0         23                14
26    EMS                EMS           0         23                14
27    Homicide           HOM           0         23                14
28    Robbery            ROBR          0         23                14
29    Traffic Accident   TRAFFIC       0         23                14

Now, because we have data for more than one organization, we could start out getting data from the database like this:

SELECT * FROM MyCodesTable WHERE IsRoot = 1

That will show the organizations that we have data for.
 
Then, if we're displaying data for the Fire Department, we do:

SELECT * FROM MyCodesTable WHERE RootID = 0

and for the Police Department:

SELECT * FROM MyCodesTable WHERE RootID = 14
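If you're issuing these queries from code, it's safer to parameterize the RootID than to concatenate it into the SQL string. Here's a minimal sketch, assuming SQL Server and the MyCodesTable name used above (the method name and connection handling are illustrative, not from a real application):

```csharp
using System.Data;
using System.Data.SqlClient;

public DataTable GetCodesForRoot(string connectionString, int rootID)
{
    DataTable dt = new DataTable();
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT * FROM MyCodesTable WHERE RootID = @RootID", conn))
    {
        // Parameterize rather than string-concatenate, to avoid SQL injection
        cmd.Parameters.AddWithValue("@RootID", rootID);
        using (SqlDataAdapter da = new SqlDataAdapter(cmd))
            da.Fill(dt);
    }
    return dt;
}
```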

Here is how our two trees would look: 

+MyTown Fire Dept [FIRE]
    +---Resource Type [RSRC]
        |---Battalion [BAT]
        |---BLS Ambulance [BA]
        |---Command [CMD]
        |---Engine [ENG]
        |---Helicopter [COPT]
        |---Truck [TRK]
    +---Call Type [CALL]
        |---Brush Fire [BRUSH]
        |---EMS [EMS]   
        |---River Rescue [RR]    
        |---Structure Fire [STRUCT]
        |---Traffic Accident [TA]


 
+MyTown Police Dept [POLICE]    
    +---Unit Type [UNIT]
        |---Ambulance [MEDIC]
        |---Bicycle [BIKE]
        |---Chief [CHIEF]  
        |---Helicopter [COPT]
        |---Mobile Command [CMD]  
        |---Patrol Car [CRUISER]
        |---Swat Team [SWAT]
    +---Incident Type [CALL]
        |---Break & Enter [B&E]
        |---Burglary [BURG]   
        |---EMS [EMS]  
        |---Homicide [HOM]   
        |---Robbery [ROBR]   
        |---Traffic Accident [TRAFFIC]

Now on to the last bit: how to code this sucker! First, we'll need some data. Here is some XML you can use (copy/paste, save as an XML file, Tree.xml).

<?xml version="1.0" standalone="yes"?>
<TreeDataSet>
  <TreeNode>
    <PK>1</PK>
    <Description>Display Assembly</Description>
    <ID>101-A045</ID>
    <IsRoot>1</IsRoot>
    <ParentID></ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>2</PK>
    <Description>Camera Chassis</Description>
    <ID>P101-0024</ID>
    <IsRoot>0</IsRoot>
    <ParentID>101-A045</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>3</PK>
    <Description>Tape VHB</Description>
    <ID>P100-0004</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-0024</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>4</PK>
    <Description>3M Foam Tape</Description>
    <ID>P100-0124</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-0024</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>5</PK>
    <Description>Display Touch Panel Assembly</Description>
    <ID>P101-A045</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-0024</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>6</PK>
    <Description>LCD Display Ribbon Cable</Description>
    <ID>P101-0046</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-A045</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>7</PK>
    <Description>S-11730 Kapton Tape</Description>
    <ID>P100-0084</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-A045</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>8</PK>
    <Description>G5 Board</Description>
    <ID>P101-A023</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-0024</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>9</PK>
    <Description>Screw 4-24x0.375</Description>
    <ID>P100-0110</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-A023</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
</TreeDataSet>

The code is pretty straightforward. Assuming that you know the RootID already, call the FillTheTree() method passing it the instance of the TreeView on your Form along with the RootID:

this.FillTheTree(this.oTree, "101-A045");

And now, the two FillTheTree() methods. The second method calls itself recursively:

private void FillTheTree(TreeView tree, string RootID)
{
    DataSet dsTree = new DataSet();
    // Here's where you'll actually retrieve the data from the database with this:
    // SELECT * FROM MyCodesTable WHERE RootID = @RootID
    dsTree.ReadXml("Tree.xml");
    tree.Nodes.Clear();
    DataView dvTree = new DataView(dsTree.Tables[0]);
    dvTree.RowFilter = string.Format("ID = '{0}'", RootID);
    this.FillTheTree(tree, dvTree, RootID);
}
// Calls itself recursively
private void FillTheTree(TreeView tree, DataView dvTree, string parentNodeID, TreeNode ParentNode = null)
{
    string nodeText, ID;
    TreeNode ChildNode = null;
    foreach (DataRowView dvRow in dvTree)
    {
        ID = dvRow["ID"].ToString();
        nodeText = string.Format("{0} [{1}]", dvRow["Description"], ID);
        if (ParentNode != null)
        {
            ChildNode = ParentNode.Nodes.Add(nodeText);
            ChildNode.Tag = ID;
        }
        else
        {
            // The ParentNode will only initially be NULL when starting, meaning we're adding the root
            ParentNode = tree.Nodes.Add(nodeText);
            ParentNode.Tag = ID;
        }
    }
    // Now, loop through the child nodes of parent node, if there are any
    // Calling recursively to continue traversing the tree.
    if (ParentNode.Nodes.Count > 0)
    {
        foreach (TreeNode node in ParentNode.Nodes)
        {
            dvTree.RowFilter = string.Format("ParentID = '{0}'", node.Tag.ToString());
            this.FillTheTree(tree, dvTree, node.Tag.ToString(), node);
        }
    }
    else if (ParentNode.NextNode == null && ParentNode.Parent == null)
    {
        dvTree.RowFilter = string.Format("ParentID = '{0}'", parentNodeID);
        this.FillTheTree(tree, dvTree, parentNodeID, ParentNode);
    }
    else
        return;
}
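Since each node's Tag holds the ID, you can react to user selections without parsing the display text. Here's a quick sketch of a hypothetical AfterSelect handler (the handler name and what you do with the ID are up to you; wire it up via the designer or in the Form's constructor):

```csharp
// Assumes this.oTree is the TreeView populated by FillTheTree()
private void oTree_AfterSelect(object sender, TreeViewEventArgs e)
{
    if (e.Node.Tag != null)
    {
        string id = e.Node.Tag.ToString();
        // Look up the code however your application needs to;
        // here we just show it in the title bar as an illustration
        this.Text = string.Format("Selected code: {0}", id);
    }
}
```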

That's all there is to it!  Happy coding!  =0)

Friday, October 27, 2017

REST with SSL

I wrote a post a few years ago about how to create a generic Proxy class to make it easy to consume REST web services:

http://geek-goddess-bonnie.blogspot.com/2014/06/proxy-class-for-consuming-restful.html

As I became aware of more functionality that I needed to add to my ProxyBaseREST class (see above post for the code for the class, abbreviated but useful as is), I realized that I should post the changes I made in a 2nd post:

http://geek-goddess-bonnie.blogspot.com/2014/08/revisiting-rest-proxy.html

In that post, I had added functionality for passing credentials (username/password) in the header of the call to the REST service, like this:

WebClient WC = new WebClient();
WC.Credentials = new NetworkCredential(this.UserName, this.Password);

This will work fine, but you may have noticed that I'm using WC.Credentials. The upshot of using Credentials like this is that .NET will make two calls to the REST service for every request, which can dramatically increase your network traffic. Why does .NET do this? See this for an explanation:

https://stackoverflow.com/questions/6338942/why-my-http-client-making-2-requests-when-i-specify-credentials

But then I solved that particular dilemma by using Basic Authorization ... of course, there's a little more to it than that!  Isn't that *always* the case?  ;0)  So, I added a static field to my ProxyBaseREST class (which you can change in an appSetting in your config file, depending on whether or not you need BasicAuthorization for any specific REST service implementation):

protected static bool UseBasicAuthorization = false;

public ProxyBaseREST(string baseAddress)
{
this.BaseAddress = baseAddress;
this.ReadAppSettings();
}
public ProxyBaseREST(string baseAddress, string userName, string password) : this(baseAddress)
{
this.UserName = userName;
this.Password = password;
}

protected void ReadAppSettings()
{
string setting = ConfigurationManager.AppSettings.Get("UseBasicAuthorization");
if (setting.IsNotNullOrEmpty() && setting.ToLower() == "true")
UseBasicAuthorization = true;
}

And, now we can handle our calls like this:

protected string GetJSON()
{
string json = "";
using (WebClient WC = new WebClient())
{
WC.Headers["Content-type"] = "application/json";
json = this.OpenWebClientAndRead(WC);
}
return json;
}
private string OpenWebClientAndRead(WebClient WC)
{
try
{
if (this.UserName.IsNotNullOrEmpty() && this.Password.IsNotNullOrEmpty())
{
if (UseBasicAuthorization)
WC.Headers[HttpRequestHeader.Authorization] = "Basic " + Convert.ToBase64String(Encoding.ASCII.GetBytes(this.UserName + ":" + this.Password));
else
WC.Credentials = new NetworkCredential(this.UserName, this.Password);
}

string json;
using (Stream stream = WC.OpenRead(this.BaseAddress + this.Parameters))
using (StreamReader reader = new StreamReader(stream))
{
json = reader.ReadToEnd();
}

if (json == "[]")
json = "";
return json;
}
catch (Exception ex)
{
// Log your error
return "";
}
}

But, sometimes there are problems with certificates, so I needed a way to ignore those certificate errors (such as a self-signed certificate, an expired certificate, or other possible problems with the certificate). This can be done by setting up a validation callback. So, to do that, I needed to add two more static fields, and change the constructor yet again:

protected static bool IgnoreSslErrors = true;
protected static RemoteCertificateValidationCallback sslFailureCallback = null;

public ProxyBaseREST(string baseAddress)
{
this.BaseAddress = baseAddress;
if (sslFailureCallback == null) // static property, so this is only done once
{
sslFailureCallback = new RemoteCertificateValidationCallback(delegate { return true; });
this.ReadAppSettings();
if (IgnoreSslErrors)
ServicePointManager.ServerCertificateValidationCallback = sslFailureCallback;
}
}
protected void ReadAppSettings()
{
string setting;

setting = ConfigurationManager.AppSettings.Get("IgnoreSslErrors");
if (setting.IsNotNullOrEmpty() && setting.ToLower() == "false")
IgnoreSslErrors = false;

setting = ConfigurationManager.AppSettings.Get("UseBasicAuthorization");
if (setting.IsNotNullOrEmpty() && setting.ToLower() == "true")
UseBasicAuthorization = true;
}
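For reference, the corresponding appSettings entries in your App.config (or Web.config) would look something like this; both keys are optional and fall back to the defaults shown in the static fields above:

```xml
<configuration>
  <appSettings>
    <!-- defaults: IgnoreSslErrors = true, UseBasicAuthorization = false -->
    <add key="IgnoreSslErrors" value="false" />
    <add key="UseBasicAuthorization" value="true" />
  </appSettings>
</configuration>
```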

I've set this up very simply, in that I'm ignoring any errors (by simply returning true from the ServerCertificateValidationCallback, any certificate will be allowed). There are better ways to handle this, but this simple code satisfied my needs. Here is what the MSDN documentation says:

https://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.servercertificatevalidationcallback(v=vs.110).aspx

An application can set the ServerCertificateValidationCallback property to a method to use for custom validation by the client of the server certificate. When doing custom validation, the sender parameter passed to the RemoteCertificateValidationCallback can be a host string name or an object derived from WebRequest (HttpWebRequest, for example) depending on the CertificatePolicy property.

When custom validation is not used, the certificate name is compared with the host name used to create the request. For example, if Create(String) was passed a parameter of "https://www.contoso.com/default.html", the default behavior is for the client to check the certificate against www.contoso.com.

And, whether or not you need something more complicated than I've done, here is a StackOverflow post that may be helpful:

https://stackoverflow.com/questions/20914305/best-practices-for-using-servercertificatevalidationcallback
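If blanket acceptance is too permissive for your situation, the callback can inspect the SslPolicyErrors value it's handed and tolerate only specific failures. Here's a sketch of one possible policy (the class and method names, and the choice of which error to allow, are mine, not part of the proxy class):

```csharp
using System.Net.Security;

public static class CertPolicy
{
    // Accept fully valid certificates; additionally tolerate *only*
    // a host-name mismatch. Everything else is rejected.
    public static bool AllowNameMismatchOnly(SslPolicyErrors errors)
    {
        if (errors == SslPolicyErrors.None)
            return true; // certificate passed normal validation
        return errors == SslPolicyErrors.RemoteCertificateNameMismatch;
    }
}

// Wired up in place of the "always true" delegate:
// ServicePointManager.ServerCertificateValidationCallback =
//     (sender, cert, chain, errors) => CertPolicy.AllowNameMismatchOnly(errors);
```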

Friday, September 29, 2017

TransactionScope and Database Queues

Are you familiar with using a database as a data queue? Basically, you have a table that acts as a queue ... you pop a row out of the table and use the information in the row to do some processing. You delete the row from the table after you're done with the processing. This can be done one row at a time, or the table can be accessed from multiple threads simultaneously (which will obviously speed up your queue processing). Of course, whether you can use multiple threads depends on whether the order in which you process the data is important.
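The queue table itself can be quite minimal. Here's a sketch in SQL Server syntax, using the column names that appear in the code later in this post (adjust types, and add columns such as a timestamp or retry count, as your processing requires):

```sql
CREATE TABLE incoming
(
    incomingkey INT IDENTITY(1,1) PRIMARY KEY,  -- ORDER BY 1 pops the oldest row first
    message     VARCHAR(MAX) NOT NULL           -- serialized MyMessage
);
```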

We have a server-side application (multi-threaded and runs as a Windows Service) that uses this queue processing mechanism quite extensively. The main use is to receive messages from other servers (usually via http, but could also be via tcp) and then immediately write the message to a queue database table for incoming messages. Then, on other threads, we pop the data off the queue and call various methods to process it.  Another way we use queuing is in an opposite operation, where during our processing of data, we determine that we need to send data out to another server. Rather than have the method that's doing the processing also handle sending it out to another server (which could compromise the method's processing speed), the method simply writes it to another queue database table used for outgoing messages.

Here is some simple code for getting the data from your database queue table ... let's call the table "incoming". The actual data is contained in a class, let's call it the MyMessage class, that is serialized and stored in a column in the incoming table, let's call the column "message" and define it as varchar(max). Typically the class that contains the following code, will run it in a loop, possibly with multiple threads, I didn't include that part of the code:

// see my blog post for this static Utils method: https://geek-goddess-bonnie.blogspot.com/2010/12/transactionscope-and-sqlserver.html
using (TransactionScope scope = Utils.GetTransactionScope())
{
MyMessage message = null;
try
{
// see my blog for several posts about DataAccess classes
// https://geek-goddess-bonnie.blogspot.com/search?q=DataAccess+class
using (DataAccess da = new DataAccess(this.MyConnectionString))
{
// In my real application, I use Typed DataSets, but a plain DataTable is fine for this blog post
DataTable dt = this.GetMessageFromQueue(da);

// Look for a row in the dataset
// This is a TOP 1 query and will only return a single row
// No results means there's nothing "on this queue"
if (dt.Rows.Count > 0)
{
// we can delete it right away, it won't actually be deleted
// until the scope.Complete() executes
this.RemoveMessageFromQueue(dt, da);
// Your MyMessage class should be marked [Serializable] and will
// need static methods to serialize (GetString) and deserialze (GetObject)
message = MyMessage.GetObject(dt.Rows[0]["message"]);
// Send the message to the ProcessMessage() method, which gets overridden in derived classes
if (this.ProcessMessage(message))
scope.Complete();

this.Found = true;
}
else
{
// In my real application, as I mentioned, I run this code in a loop, possibly with multiple threads.
// Don't sleep the thread if the last time returned a result, but otherwise sleep to keep from spinning endlessly
if (this.Found == true)
this.Found = false;
else
Thread.Sleep(100);
}
}
}
}
// the da.commands below are part of the sample DataAccess class described
// in Part 2 and Part 3 of my blog post series
protected virtual DataTable GetMessageFromQueue(DataAccess da)
{
// although I typically use Stored Procs, for this example, I am not.
da.IsStoredProc = false;

DataTable dt = new DataTable();
da.ClearParameters();
string query = "SELECT TOP 1 incomingkey, message FROM incoming " +
"WITH (UPDLOCK, READPAST) ORDER BY 1";
da.FillData(dt, query);
return dt;
}
protected virtual void RemoveMessageFromQueue(DataTable dt, DataAccess da)
{
da.ClearParameters();
da.AddParm("@incomingkey", dt.Rows[0]["incomingkey"]);
da.ExecuteCommand("DELETE FROM incoming WHERE incomingkey = @incomingkey");
}
// This must be overridden in the sub-class
protected virtual bool ProcessMessage(MyMessage message)
{
throw new NotImplementedException();
}
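As I mentioned, in the real application this popping code runs in a loop, possibly on several threads. That loop isn't shown above, so here's one hedged sketch of what it might look like; PopOneMessage() is a hypothetical method wrapping the TransactionScope block shown earlier, and the thread count and stop mechanism are purely illustrative:

```csharp
private volatile bool stopRequested = false;

public void StartQueueWorkers(int threadCount)
{
    for (int i = 0; i < threadCount; i++)
    {
        Thread worker = new Thread(() =>
        {
            while (!this.stopRequested)
            {
                // PopOneMessage() wraps the TransactionScope code above:
                // it pops a row (if any), processes it, and sleeps briefly
                // when the queue comes up empty
                this.PopOneMessage();
            }
        });
        worker.IsBackground = true; // don't keep the process alive on shutdown
        worker.Start();
    }
}

public void StopQueueWorkers()
{
    this.stopRequested = true;
}
```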

Notice the virtual ProcessMessage() method. Different sub-classes of this class could implement anything thing they want in this overridden method. That could mean using another TransactionScope (or not) ... and if you *do* use another TransactionScope, you have the option of having it participate in the current Transaction (by using TransactionScope.Required), or not (by using TransactionScope.RequiresNew or TransactionScope.Suppress).

Now, on to the part where TransactionScope becomes really handy. In most cases, we'll want whatever we do in the ProcessMessage() method to participate in that active Transaction. If there are problems in the ProcessMessage, we return false and everything gets "undone", because we don't execute the scope.Complete() ... that means the Message is *not* removed from the queue and is available to be popped off and tried again (you may need to implement a way to only retry a certain number of times before logging an error and returning true).

There is something to be aware of in all of this ... and that is escalated Transactions. If you are creating any additional database connections in your ProcessMessage() method (which is quite likely and we do all the time in our application), then the Transaction will be escalated to a distributed transaction and you will need to be sure you have enabled Distributed Transactions (MSDTC). It's not too hard to do. I don't have a blog post about that, but here is an excellent blog post (that I reference a lot if I'm setting up a new computer that needs DTC):

http://yrushka.com/index.php/sql-server/security/configure-msdtc-for-distributed-transactions/

It's an old post (from 2011), but it's still valid and useful. If you use Evernote, you should "Clip this page", rather than "Clip bookmark" or add to your Favorites. That way you'll have all the details in Evernote, just in case the blog ever disappears.

I think that this is enough to get you started with this concept. If you have any questions about any of this, please leave me a comment.

Happy Coding!  =0)