Saturday, December 02, 2017

Putting Up The Tree

From the subject title, I bet you thought this was going to be a post about Christmas trees (seeing as it's the start of the Holiday Season). But no, this is still a geeky blog.  ;0)
 
A friend of mine recently asked me for help with TreeViews on Windows Forms. I used to do a lot of WinForms work back in the early days of .NET, but not so much recently. In fact, I don't do much UI work at all anymore, except for a little test application I keep adding to, which I use to help answer questions on the Forums (and to write some of my blog posts) ... but there was nothing in my test app with a TreeView. Consequently, I had to really dig around to be able to help him.
 
I remembered the concept I wanted to tell him about easily enough ... it deals with constructing a tree from data that uses self-referencing keys/IDs. That way, you only need one database table, and it can contain the data for multiple TreeViews. Forget about multiple tables (parent, child, grandchild ... where does it end?) ... one table will handle it all, and it's not all that complicated. The part I couldn't remember was the code to actually fill a TreeView with that data, but it didn't take too long to come up with it. Bear in mind that self-referencing columns can be used for anything that is hierarchical in nature, not just for a TreeView on a Windows Form!!
 
First, let's look at the database table structure.  It should be something like this:

PK    Description       ID            IsRoot    ParentID        RootID
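In T-SQL, a minimal version of that table might look something like this (I'll call it MyCodesTable, since that's the name used in the queries later on; the column types and lengths here are just my assumptions ... adjust them to suit your own data):

CREATE TABLE MyCodesTable
(
    PK          INT IDENTITY(1,1) PRIMARY KEY,
    Description VARCHAR(100) NOT NULL,
    ID          VARCHAR(20)  NOT NULL,   -- the self-referencing identifier (a Part ID, a code, etc.)
    IsRoot      BIT          NOT NULL DEFAULT 0,
    ParentID    VARCHAR(20)  NULL,       -- the ID (or PK) of this row's parent; NULL for root rows
    RootID      VARCHAR(20)  NOT NULL    -- the ID (or PK) of the root this row belongs to
);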

It's probably best to use self-referencing IDs rather than keys ... unless your IDs are not unique. For example, if your data is for an inventory system, then most likely all your Part IDs would be unique across all of your inventory. In that case, use those Part IDs for self-referencing.

PK    Description       ID            IsRoot    ParentID        RootID
0     Display Assembly  101-A045      1         NULL            101-A045
1     Camera            P101-0024     0         101-A045        101-A045
2     Tape              P100-0004     0         P101-0024       101-A045
3     3M Tape           P100-0124     0         P101-0024       101-A045
4     Display Panel     P101-A045     0         P101-0024       101-A045
5     LCD Cable         P101-0046     0         P101-A045       101-A045
6     X Tape            P100-0084     0         P101-A045       101-A045
7     Graphic Board     P101-A023     0         P101-0024       101-A045
8     Screws 4          P100-0110     0         P101-A023       101-A045

The tree would look like this (you can display the ID if you wish; this particular example does):

+Display Assembly [101-A045]
    +---Camera [P101-0024]
        |---Tape [P100-0004]
        |---3M Tape [P100-0124]
        +---Display Panel [P101-A045]
        |    |---LCD Cable [P101-0046]
        |    |---X Tape [P100-0084]
        +---Graphic Board [P101-A023]
        |    |---Screws 4 [P100-0110]

But, if your data is of the code/description type, where the code is not unique because different *kinds* of codes might use the same numbers/symbols, then you'll have to use the keys instead. I'm thinking of something like a system that integrates information from several different organizations, each organization having its own set of codes and descriptions. Those codes could easily be similar amongst the organizations. Note that in the following example, I'm going to show data for two different Organizations (the Organization is not stored in this particular data, but depending on your application, you *could* add another column for an Organization name to your database table, or just use the Root's Description for that).

These are codes for imaginary Fire and Police departments. Note that each department has its own Root, so that each department can display its own hierarchy of codes. Note also that there *are* some codes that are the same in both Departments:

PK    Description        ID            IsRoot    ParentID        RootID
0     MyTown Fire Dept   FIRE          1         NULL             0
1     Resource Type      RSRC          0         0                0
2     Battalion          BAT           0         1                0
3     BLS Ambulance      BA            0         1                0
4     Command            CMD           0         1                0
5     Engine             ENG           0         1                0
6     Helicopter         COPT          0         1                0
7     Truck              TRK           0         1                0
8     Call Type          CALL          0         0                0
9     Brush Fire         BRUSH         0         8                0
10    EMS                EMS           0         8                0
11    River Rescue       RR            0         8                0
12    Structure Fire     STRUCT        0         8                0
13    Traffic Accident   TA            0         8                0
                    
14    MyTown Police Dept POLICE        1         NULL              14
15    Unit Type          UNIT          0         14                14
16    Ambulance          MEDIC         0         15                14
17    Bicycle            BIKE          0         15                14
18    Chief              CHIEF         0         15                14
19    Helicopter         COPT          0         15                14
20    Mobile Command     CMD           0         15                14
21    Patrol Car         CRUISER       0         15                14
22    Swat Team          SWAT          0         15                14
23    Incident Type      CALL          0         14                14
24    Break & Enter      B&E           0         23                14
25    Burglary           BURG          0         23                14
26    EMS                EMS           0         23                14
27    Homicide           HOM           0         23                14
28    Robbery            ROBR          0         23                14
29    Traffic Accident   TRAFFIC       0         23                14

Now, because we have data for more than one organization, we could start out getting data from the database like this:

SELECT * FROM MyCodesTable WHERE IsRoot = 1

That will show the organizations that we have data for.
 
Then, if we're displaying data for the Fire Department, we do:

SELECT * FROM MyCodesTable WHERE RootID = 0

and for the Police Department:

SELECT * FROM MyCodesTable WHERE RootID = 14

Here is how our two trees would look: 

+MyTown Fire Dept [FIRE]
    +---Resource Type [RSRC]
        |---Battalion [BAT]
        |---BLS Ambulance [BA]
        |---Command [CMD]
        |---Engine [ENG]
        |---Helicopter [COPT]
        |---Truck [TRK]
    +---Call Type [CALL]
        |---Brush Fire [BRUSH]
        |---EMS [EMS]   
        |---River Rescue [RR]    
        |---Structure Fire [STRUCT]
        |---Traffic Accident [TA]


 
+MyTown Police Dept [POLICE]    
    +---Unit Type [UNIT]
        |---Ambulance [MEDIC]
        |---Bicycle [BIKE]
        |---Chief [CHIEF]  
        |---Helicopter [COPT]
        |---Mobile Command [CMD]  
        |---Patrol Car [CRUISER]
        |---Swat Team [SWAT]
    +---Incident Type [CALL]
        |---Break & Enter [B&E]
        |---Burglary [BURG]   
        |---EMS [EMS]  
        |---Homicide [HOM]   
        |---Robbery [ROBR]   
        |---Traffic Accident [TRAFFIC]

Now on to the last bit: how to code this sucker! First, we'll need some data. Here is some XML you can use (copy/paste, save as an XML file, Tree.xml).

<?xml version="1.0" standalone="yes"?>
<TreeDataSet>
  <TreeNode>
    <PK>1</PK>
    <Description>Display Assembly</Description>
    <ID>101-A045</ID>
    <IsRoot>1</IsRoot>
    <ParentID></ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>2</PK>
    <Description>Camera Chassis</Description>
    <ID>P101-0024</ID>
    <IsRoot>0</IsRoot>
    <ParentID>101-A045</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>3</PK>
    <Description>Tape VHB</Description>
    <ID>P100-0004</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-0024</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>4</PK>
    <Description>3M Foam Tape</Description>
    <ID>P100-0124</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-0024</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>5</PK>
    <Description>Display Touch Panel Assembly</Description>
    <ID>P101-A045</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-0024</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>6</PK>
    <Description>LCD Display Ribbon Cable</Description>
    <ID>P101-0046</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-A045</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>7</PK>
    <Description>S-11730 Kapton Tape</Description>
    <ID>P100-0084</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-A045</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>8</PK>
    <Description>G5 Board</Description>
    <ID>P101-A023</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-0024</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
  <TreeNode>
    <PK>9</PK>
    <Description>Screw 4-24x0.375</Description>
    <ID>P100-0110</ID>
    <IsRoot>0</IsRoot>
    <ParentID>P101-A023</ParentID>
    <RootID>101-A045</RootID>
  </TreeNode>
</TreeDataSet>

The code is pretty straightforward. Assuming that you know the RootID already, call the FillTheTree() method passing it the instance of the TreeView on your Form along with the RootID:

this.FillTheTree(this.oTree, "101-A045");

And now, the two FillTheTree() methods. The second method calls itself recursively:

private void FillTheTree(TreeView tree, string RootID)
{
    DataSet dsTree = new DataSet();
    // Here's where you'll actually retrieve the data from the database with this:
    // SELECT * FROM MyCodesTable WHERE RootID = @RootID
    dsTree.ReadXml("Tree.xml");
    tree.Nodes.Clear();
    DataView dvTree = new DataView(dsTree.Tables[0]);
    dvTree.RowFilter = string.Format("ID = '{0}'", RootID);
    this.FillTheTree(tree, dvTree, RootID);
}
// Calls itself recursively
private void FillTheTree(TreeView tree, DataView dvTree, string parentNodeID, TreeNode ParentNode = null)
{
    string nodeText, ID;
    TreeNode ChildNode = null;
    bool addedRoot = false;
    foreach (DataRowView dvRow in dvTree)
    {
        ID = dvRow["ID"].ToString();
        nodeText = string.Format("{0} [{1}]", dvRow["Description"], ID);
        if (ParentNode != null)
        {
            ChildNode = ParentNode.Nodes.Add(nodeText);
            ChildNode.Tag = ID;
        }
        else
        {
            // The ParentNode will only be null on the very first call, meaning we're adding the root
            ParentNode = tree.Nodes.Add(nodeText);
            ParentNode.Tag = ID;
            addedRoot = true;
        }
    }
    // Nothing was found and nothing was passed in (e.g., a bad RootID) ... nothing more to do
    if (ParentNode == null)
        return;
    // Now, loop through the child nodes of the parent node, if there are any,
    // calling recursively to continue traversing the tree.
    if (ParentNode.Nodes.Count > 0)
    {
        foreach (TreeNode node in ParentNode.Nodes)
        {
            dvTree.RowFilter = string.Format("ParentID = '{0}'", node.Tag.ToString());
            this.FillTheTree(tree, dvTree, node.Tag.ToString(), node);
        }
    }
    else if (addedRoot)
    {
        // We just added the root node, so switch the filter to its children and recurse again
        // (checking addedRoot, rather than the node's position, also avoids endless recursion for a childless root)
        dvTree.RowFilter = string.Format("ParentID = '{0}'", parentNodeID);
        this.FillTheTree(tree, dvTree, parentNodeID, ParentNode);
    }
}
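One last note: the code above reads from Tree.xml just to keep the example self-contained. If you're pulling the data from the database instead, the first method might look something like this (a sketch only ... I'm assuming a plain SqlDataAdapter and a this.ConnectionString property; and since the query already limits the rows to a single RootID, we can simply filter on IsRoot to find the root row):

private void FillTheTree(TreeView tree, string RootID)
{
    DataSet dsTree = new DataSet();
    using (SqlConnection conn = new SqlConnection(this.ConnectionString))
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM MyCodesTable WHERE RootID = @RootID", conn))
    {
        cmd.Parameters.AddWithValue("@RootID", RootID);
        using (SqlDataAdapter da = new SqlDataAdapter(cmd))
        {
            da.Fill(dsTree);   // Fill() opens and closes the connection for us
        }
    }
    tree.Nodes.Clear();
    DataView dvTree = new DataView(dsTree.Tables[0]);
    // Start with the root row of this particular tree
    // (if your IsRoot column comes through as an integer rather than a bit, use "IsRoot = 1" instead)
    dvTree.RowFilter = "IsRoot = true";
    this.FillTheTree(tree, dvTree, RootID);
}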

That's all there is to it!  Happy coding!  =0)

Friday, October 27, 2017

REST with SSL

I wrote a post a few years ago about how to create a generic Proxy class to make it easy to consume REST web services:

http://geek-goddess-bonnie.blogspot.com/2014/06/proxy-class-for-consuming-restful.html

As I became aware of more functionality that I needed to add to my ProxyBaseREST class (see above post for the code for the class, abbreviated but useful as is), I realized that I should post the changes I made in a 2nd post:

http://geek-goddess-bonnie.blogspot.com/2014/08/revisiting-rest-proxy.html

In that post, I had added functionality for passing credentials (username/password) in the header of the call to the REST service, like this:

WebClient WC = new WebClient();
WC.Credentials = new NetworkCredential(this.UserName, this.Password);

This will work fine, but you may have noticed that I'm using WC.Credentials. The downside of using Credentials like this is that .NET will make two calls to the REST service for every request (the first gets back a 401 challenge, the second re-sends with the credentials), which effectively doubles your network traffic. Why does .NET do this? See this for an explanation:

https://stackoverflow.com/questions/6338942/why-my-http-client-making-2-requests-when-i-specify-credentials

But then I solved that particular dilemma by using Basic Authorization ... of course, there's a little more to it than that!  Isn't that *always* the case?  ;0)  So, I added a static field to my ProxyBaseREST class (which you can change via an appSetting in your config file, depending on whether or not you need Basic Authorization for any specific REST service implementation):

protected static bool UseBasicAuthorization = false;

public ProxyBaseREST(string baseAddress)
{
    this.BaseAddress = baseAddress;
    this.ReadAppSettings();
}
public ProxyBaseREST(string baseAddress, string userName, string password) : this(baseAddress)
{
    this.UserName = userName;
    this.Password = password;
}

protected void ReadAppSettings()
{
    string setting = ConfigurationManager.AppSettings.Get("UseBasicAuthorization");
    if (setting.IsNotNullOrEmpty() && setting.ToLower() == "true")
        UseBasicAuthorization = true;
}

And, now we can handle our calls like this:

protected string GetJSON()
{
    string json = "";
    using (WebClient WC = new WebClient())
    {
        WC.Headers["Content-type"] = "application/json";
        json = this.OpenWebClientAndRead(WC);
    }
    return json;
}
private string OpenWebClientAndRead(WebClient WC)
{
    try
    {
        if (this.UserName.IsNotNullOrEmpty() && this.Password.IsNotNullOrEmpty())
        {
            if (UseBasicAuthorization)
                WC.Headers[HttpRequestHeader.Authorization] = "Basic " + Convert.ToBase64String(Encoding.ASCII.GetBytes(this.UserName + ":" + this.Password));
            else
                WC.Credentials = new NetworkCredential(this.UserName, this.Password);
        }

        Stream stream = WC.OpenRead(this.BaseAddress + this.Parameters);

        StreamReader reader = new StreamReader(stream);
        string json = reader.ReadToEnd();
        stream.Close();

        if (json == "[]")
            json = "";
        return json;
    }
    catch (Exception ex)
    {
        // Log your error
        return "";
    }
}

But, sometimes there are problems with certificates, so I needed a way to ignore those certificate errors (such as a self-signed certificate, an expired certificate, or other possible problems with the certificate). This can be done by setting up a validation callback. So, to do that, I needed to add two more static fields, and change the constructor yet again:

protected static bool IgnoreSslErrors = true;
protected static RemoteCertificateValidationCallback sslFailureCallback = null;

public ProxyBaseREST(string baseAddress)
{
    this.BaseAddress = baseAddress;
    if (sslFailureCallback == null) // static field, so this is only done once
    {
        sslFailureCallback = new RemoteCertificateValidationCallback(delegate { return true; });
        this.ReadAppSettings();
        if (IgnoreSslErrors)
            ServicePointManager.ServerCertificateValidationCallback = sslFailureCallback;
    }
}
protected void ReadAppSettings()
{
    string setting;

    setting = ConfigurationManager.AppSettings.Get("IgnoreSslErrors");
    if (setting.IsNotNullOrEmpty() && setting.ToLower() == "false")
        IgnoreSslErrors = false;

    setting = ConfigurationManager.AppSettings.Get("UseBasicAuthorization");
    if (setting.IsNotNullOrEmpty() && setting.ToLower() == "true")
        UseBasicAuthorization = true;
}
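For reference, the two corresponding appSettings might look something like this in your app.config or web.config (the key names come from the code above; the values shown are just examples):

<appSettings>
    <!-- send a Basic Authorization header instead of using WC.Credentials -->
    <add key="UseBasicAuthorization" value="true"/>
    <!-- set to "false" if you do NOT want to ignore SSL certificate errors -->
    <add key="IgnoreSslErrors" value="true"/>
</appSettings>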

I've set this up very simply, in that I'm ignoring any errors (by simply returning true from the ServerCertificateValidationCallback, any certificate will be allowed). There are better ways to handle this, but this simple code satisfied my needs. Here is what the MSDN documentation says:

https://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.servercertificatevalidationcallback(v=vs.110).aspx

An application can set the ServerCertificateValidationCallback property to a method to use for custom validation by the client of the server certificate. When doing custom validation, the sender parameter passed to the RemoteCertificateValidationCallback can be a host string name or an object derived from WebRequest (HttpWebRequest, for example) depending on the CertificatePolicy property.

When custom validation is not used, the certificate name is compared with the host name used to create the request. For example, if Create(String) was passed a parameter of "https://www.contoso.com/default.html", the default behavior is for the client to check the certificate against www.contoso.com.

And, whether or not you need something more complicated than I've done, here is a StackOverflow post that may be helpful:

https://stackoverflow.com/questions/20914305/best-practices-for-using-servercertificatevalidationcallback

Friday, September 29, 2017

TransactionScope and Database Queues

Are you familiar with using a database as a data queue? Basically, you have a table that acts as a queue ... you pop a row out of the table and use the information in the row to do some processing. You delete the row from the table after you're done with the processing. This can be done one row at a time, or the table can be accessed from multiple threads simultaneously (which will obviously speed up your queue processing). Of course, using multiple threads would depend on whether or not the order in which you process the data is important.

We have a server-side application (multi-threaded and runs as a Windows Service) that uses this queue processing mechanism quite extensively. The main use is to receive messages from other servers (usually via http, but could also be via tcp) and then immediately write the message to a queue database table for incoming messages. Then, on other threads, we pop the data off the queue and call various methods to process it.  Another way we use queuing is in an opposite operation, where during our processing of data, we determine that we need to send data out to another server. Rather than have the method that's doing the processing also handle sending it out to another server (which could compromise the method's processing speed), the method simply writes it to another queue database table used for outgoing messages.

Here is some simple code for getting the data from your database queue table ... let's call the table "incoming". The actual data is contained in a class, let's call it the MyMessage class, that is serialized and stored in a column in the incoming table ... let's call the column "message" and define it as varchar(max). Typically, the class that contains the following code will run it in a loop, possibly with multiple threads; I didn't include that part of the code.
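First, just to make that table concrete, here's a minimal T-SQL sketch of what the incoming table might look like (the column names come from the queries further down; the exact types are my assumption):

CREATE TABLE incoming
(
    incomingkey  BIGINT IDENTITY(1,1) PRIMARY KEY,
    message      VARCHAR(MAX) NOT NULL   -- the serialized MyMessage object
);

And now, the code for popping a message off the queue: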

// see my blog post for this static Utils method: https://geek-goddess-bonnie.blogspot.com/2010/12/transactionscope-and-sqlserver.html
using (TransactionScope scope = Utils.GetTransactionScope())
{
    MyMessage message = null;
    try
    {
        // see my blog for several posts about DataAccess classes
        // https://geek-goddess-bonnie.blogspot.com/search?q=DataAccess+class
        using (DataAccess da = new DataAccess(this.MyConnectionString))
        {
            // In my real application, I use Typed DataSets, but a plain DataTable is fine for this blog post
            DataTable dt = this.GetMessageFromQueue(da);

            // Look for a row in the dataset
            // This is a TOP 1 query and will only return a single row
            // No results means there's nothing "on this queue"
            if (dt.Rows.Count > 0)
            {
                // we can delete it right away, it won't actually be deleted
                // until the scope.Complete() executes
                this.RemoveMessageFromQueue(dt, da);
                // Your MyMessage class should be marked [Serializable] and will
                // need static methods to serialize (GetString) and deserialize (GetObject)
                message = MyMessage.GetObject(dt.Rows[0]["message"]);
                // Send the message to the ProcessMessage() method, which gets overridden in derived classes
                if (this.ProcessMessage(message))
                    scope.Complete();

                this.Found = true;
            }
            else
            {
                // In my real application, as I mentioned, I run this code in a loop, possibly with multiple threads.
                // Don't sleep the thread if the last time returned a result, but otherwise sleep to keep from spinning endlessly
                if (this.Found == true)
                    this.Found = false;
                else
                    Thread.Sleep(100);
            }
        }
    }
    catch (Exception ex)
    {
        // Log your error
    }
}
// the da.commands below are part of the sample DataAccess class described
// in Part 2 and Part 3 of my blog post series
protected virtual DataTable GetMessageFromQueue(DataAccess da)
{
    // although I typically use Stored Procs, for this example, I am not.
    da.IsStoredProc = false;

    DataTable dt = new DataTable();
    da.ClearParameters();
    string query = "SELECT TOP 1 incomingkey, message FROM incoming " +
                   "WITH (UPDLOCK, READPAST) ORDER BY 1";
    da.FillData(dt, query);
    return dt;
}
protected virtual void RemoveMessageFromQueue(DataTable dt, DataAccess da)
{
    da.ClearParameters();
    da.AddParm("@incomingkey", dt.Rows[0]["incomingkey"]);
    da.ExecuteCommand("DELETE FROM incoming WHERE incomingkey = @incomingkey");
}
// This must be overridden in the sub-class
protected virtual bool ProcessMessage(MyMessage message)
{
    throw new NotImplementedException();
}
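By the way, the MyMessage class itself isn't shown here, since it will contain whatever your messages need. Just as a rough sketch (and only a sketch ... I'm assuming XML serialization, and the two properties are made up purely for illustration), the GetString()/GetObject() helpers mentioned in the comments above could look something like this:

// requires System.IO and System.Xml.Serialization
[Serializable]
public class MyMessage
{
    // hypothetical properties, just for illustration
    public string MessageType { get; set; }
    public string Payload { get; set; }

    // Serialize a MyMessage into a string suitable for the varchar(max) "message" column
    public static string GetString(MyMessage message)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(MyMessage));
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, message);
            return writer.ToString();
        }
    }
    // Deserialize the string stored in the "message" column back into a MyMessage
    public static MyMessage GetObject(object columnValue)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(MyMessage));
        using (StringReader reader = new StringReader(columnValue.ToString()))
        {
            return (MyMessage)serializer.Deserialize(reader);
        }
    }
}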

Notice the virtual ProcessMessage() method. Different sub-classes of this class could implement anything they want in this overridden method. That could mean using another TransactionScope (or not) ... and if you *do* use another TransactionScope, you have the option of having it participate in the current Transaction (by using TransactionScopeOption.Required), or not (by using TransactionScopeOption.RequiresNew or TransactionScopeOption.Suppress).
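For example, a derived class might do something along these lines (just a sketch ... QueueProcessor is my made-up name for the base class containing the queue-reading code above, and the work inside the scope is obviously up to you):

public class IncomingOrderProcessor : QueueProcessor
{
    protected override bool ProcessMessage(MyMessage message)
    {
        try
        {
            // TransactionScopeOption.Required (the default) joins the ambient Transaction created
            // by the queue-reading code, so this work commits or rolls back along with the DELETE
            using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
            {
                // ... do whatever database work this message requires ...
                scope.Complete();
            }
            return true;   // the queue code will then Complete() its own scope and the row stays deleted
        }
        catch (Exception ex)
        {
            // Log the error ... returning false means nothing is committed and the message stays on the queue
            return false;
        }
    }
}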

Now, on to the part where TransactionScope becomes really handy. In most cases, we'll want whatever we do in the ProcessMessage() method to participate in that active Transaction. If there are problems in the ProcessMessage, we return false and everything gets "undone", because we don't execute the scope.Complete() ... that means the Message is *not* removed from the queue and is available to be popped off and tried again (you may need to implement a way to only retry a certain number of times before logging an error and returning true).

There is something to be aware of in all of this ... and that is escalated Transactions. If you are creating any database connections in your ProcessMessage() method (which is quite likely, and we do it all the time in our application), then the Transaction will be escalated to a distributed transaction and you will need to be sure you have enabled Distributed Transactions. It's not too hard to do. I had originally posted a link here to another person's blog about how to configure MSDTC, but I discovered 3 years after writing this that that post got hijacked or disappeared. Luckily, 3 years later, I have been able to recreate that post in my own blog and have updated this to reflect the new link:

https://geek-goddess-bonnie.blogspot.com/2020/10/configure-msdtc-for-distributed.html

I think that this is enough to get you started with this concept. If you have any questions about any of this, please leave me a comment.

Happy Coding!  =0)

Thursday, August 31, 2017

OneDrive Problems and Windows 10 Creator Update


Another departure from my normal blog posts (which are usually developer-related about .NET, C# and database stuff). The title of this post is a bit misleading ... there is probably absolutely nothing wrong with the Creator Update and OneDrive, but I panicked and over-reacted. Imagine that!

First, just in case you're lazy and don't want to bother reading my lovely narrative below ...

TL;DR 
How to get your OneDrive "re-set" if things get screwed up:
  1. Exit OneDrive from the right-click options.
  2. Go to the C:\Users\YourUserName\AppData\Local\Microsoft\OneDrive\settings. Make a copy of the settings folder somewhere, just in case it gets screwed up.
  3. Delete everything in the folder (so, you still want the settings folder, but it should be empty).
  4. Start OneDrive back up again in one of two ways:
    • Search for OneDrive in the Start Menu and run it, which I didn't do.
    • Go to C:\Users\YourUserName\AppData\Local\Microsoft\OneDrive\Update and run the OneDriveSetup.exe. This is what I did (since it was the first thing suggested to me). I'm guessing that running OneDrive.exe would probably run the OneDriveSetup.exe anyway, since it knows that there's nothing in the settings folder!
  5. Once OneDrive has started:
    • It will ask you for your OneDrive login credentials.
    • It will ask you where your local OneDrive folder is (don't worry, it didn't lose it or wipe it out or anything). None of your files get lost.
    • Then, you get to the part where you specify the folders to sync.
    • And everything works fine after that!
OK, now that we got *that* out of the way, here's what happened to cause this whole mess:

I had updated (unknowingly) to the Creator Update over the weekend. It's a totally misleading description for the update, because it says "Update for Windows 10 Version 1607 for x64-based Systems (KBxxxxxxx)".  The Creator Update is version 1703, so you would *think* that the description would be something like "Update for Windows 10 to Version 1703" ... but Noooo.   I don't remember the KB number for that particular update (I subsequently got another Update today, Monday, that is apparently also the Creator Update, which I have not yet applied ... it's KB4033637, but good luck finding anything about it).

The Update totally got my One Drive all confused (or so I thought), and rather than try and figure out how to fix it, I panicked and just went back to my previous Windows version. You can easily do that by going to Settings | Update and Security | Recovery and then choose Get Started under the heading "Go back to an earlier build" (be careful with this one, though ... because if it's not done soon after the update, any changes to your system subsequent to that will be lost when it reverts to the earlier build). Everything looked fine, so I just went on about my business (well, messing around with my computer on a Saturday isn’t quite business).

Later in the day, I got an email from OneDrive about a lot of files in my Recycle Bin.  I went to OneDrive online and saw that at least 800 files had been plopped in the online Recycle Bin!!! These are photos (I take a lot of pictures, I'm a photography hobbyist).  Well, I sure knew that it had happened because of the hijinks of the Creator Update and so, silly me, I moved them out of the Recycle Bin and back where they belonged. And then it was time for dinner and a movie.

Several hours later, I checked it before I went to bed and OneDrive was still downloading files. I went to look at the files locally and saw that I was getting dangerously low on C disk drive space. What happened?!?  It was downloading *EVERYTHING*!!!!!!  And I had *NOT* had it set up that way previously. Under my Photography folder, of the almost 6000 files, in 38 folders, I had OneDrive set up to only sync this year’s photos (less than 800 files). I immediately Paused the Sync and went to bed. Deal with it in the morning …

So now it's Sunday morning ... I figured that I could just go to the Settings and put things back the way they were (only syncing the current year's photo folders).  Nope!!  I couldn’t access Settings while Syncing was paused. OK, let’s try again.  Un-paused the Sync, went to the Settings and tried to uncheck one lousy folder … but the OK button stubbornly remained disabled. I made sure that I did this on one of the folders that had already synced overnight, just in case that was the problem. It still didn’t work. OK, let's go to File Explorer and access the Settings via right-clicks. But, still no joy ... there were absolutely *NO* OneDrive related *anything* in File Explorer (such as “View on OneDrive.com”, etc. … nothing!!).

I finally got tired of Googling for an answer and asked around on a few MVP-related resources. I almost instantly got a reply from one of my fellow MVPs, Mike Halsey (@MikeHalsey on Twitter … or Google him if you're interested, you'll get a lot of hits). He suggested I try the following steps and *it worked*!!! And, I'll repeat it again, even though it's at the top of the post, I added to it just a bit:
  1. Exit OneDrive from the right-click options.
  2. Go to the C:\Users\YourUserName\AppData\Local\Microsoft\OneDrive\settings. Make a copy of the settings folder somewhere, just in case it gets screwed up.
  3. Delete everything in the folder (so, you still want the settings folder, but it should be empty).
  4. Start OneDrive back up again in one of two ways:
    • Search for OneDrive in the Start Menu and run it, which I didn't do.
    • Go to C:\Users\YourUserName\AppData\Local\Microsoft\OneDrive\Update and run the OneDriveSetup.exe. This is what I did (since Mike suggested that first). I'm guessing that running OneDrive.exe would probably run the OneDriveSetup.exe anyway, since it knows that there's nothing in the settings folder!
  5. Once OneDrive has started:
    • It will ask you for your OneDrive login credentials.
    • It will ask you where your local OneDrive folder is (don't worry, it didn't lose it or wipe it out or anything). None of your files get lost. This is the part that freaked me out when the Creator Update started asking me this right off the bat. I really thought that all was lost and, like I said, I panicked. Silly me.
    • Then, you get to the part where you specify the folders to sync.
    • And everything works fine after that!
Pretty easy, although it was a bit nerve-wracking. ;0)  Thanks, Mike!!

I will probably go ahead and try the Creator Update again, because I *really* want the OneDrive Placeholders, like we had back in Windows 8.1. I think the weird stuff I was seeing really wasn't a problem. Although, I won't swear to that!

Sunday, May 28, 2017

Accessing Oracle Databases in .NET

We recently went through hell trying to get an Oracle provider installed that works and is not a huge set of files that are impossible to install. We are primarily a SQL Server "shop" and hadn't had to use Oracle in a long, long time. So, this was quite an ordeal. I've been using SQL Server for 17 years (no expert, but I know my way around) ... but Oracle always mystifies me. It's just so different!

Anyway, I was not the one looking for a valid provider, it was my husband (we work together) ... and he finally found something that worked. And, luckily, he found it before he pulled out all his hair (and he *does* have a lot of it)!  You'll need to use this NuGet package (it's only a 2.5 MB download) and quite easy to install (as most NuGet packages are):

https://www.nuget.org/packages/Oracle.ManagedDataAccess/

If you went to the above link, you'll see that all you have to do is install the package from the NuGet Package Manager Console command line. The Package Manager Console has been built into Visual Studio (2012 and later). Find it under Tools | NuGet Package Manager | Package Manager Console. If, for some reason, you do not see the NuGet stuff under Tools (either because you have a VS earlier than 2012 or it just wasn't installed when you installed VS), then it can be installed manually as shown here:

https://docs.microsoft.com/en-us/nuget/guides/install-nuget#nuget-package-manager-in-visual-studio
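For reference, the command you type in the Package Manager Console is simply this:

PM> Install-Package Oracle.ManagedDataAccess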

I should also mention that when you add the NuGet Oracle.ManagedDataAccess package, it'll add a section to the config file that contains a sample connection string:

<oracle.manageddataaccess.client>
    <version number="*">
        <dataSources>
            <dataSource alias="SampleDataSource" descriptor="(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL)))"/>
        </dataSources>
    </version>
</oracle.manageddataaccess.client>

We added the following <connectionStrings> setting in the config:

<connectionStrings>
    <add name="Oracle" connectionString="DATA SOURCE=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=XXX.XXX.XXX.XXX)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=MyDatabase)));User id=MyID;Password=MyPwd;"/>
</connectionStrings>

Obviously, change the XXX.XXX.XXX.XXX to your server's IP address, as well as changing MyDatabase, MyID and MyPwd to your own settings.
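And, just to round things out, here's a minimal sketch of actually reading some data with the managed provider, using the named connection string above (the table name MyTable is made up; the Oracle classes come from the Oracle.ManagedDataAccess.Client namespace and the rest is plain ADO.NET):

// requires System.Configuration, System.Data and Oracle.ManagedDataAccess.Client
string connString = ConfigurationManager.ConnectionStrings["Oracle"].ConnectionString;
using (OracleConnection conn = new OracleConnection(connString))
using (OracleCommand cmd = new OracleCommand("SELECT * FROM MyTable", conn))
using (OracleDataAdapter da = new OracleDataAdapter(cmd))
{
    // Fill() opens and closes the connection for us
    DataTable dt = new DataTable();
    da.Fill(dt);
    Console.WriteLine("Rows returned: {0}", dt.Rows.Count);
}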

That should do the trick! Happy coding!  =0)

Friday, March 31, 2017

Fire and Forget

Lately, I've run across a few questions in the Forums about being able to spin off a thread that might contain a long-running process that doesn't need to be monitored in any way. It's what's called a "Fire and Forget" process. This is useful for processes where the caller doesn't need to have any direct feedback from the process, it just needs to initiate that process and then move on. I can see where this could be quite useful for running some SQL scripts or Stored Procedures on a database.

For example, say that you have a UI (WinForms or WPF) and a button click (or whatever) to start the Processes. You don't want to block the UI thread, so that's an important thing to keep in mind. Let's also say that we have another class we use that contains all the processes that we want to Fire and Forget.  An excellent way to deal with all this is to initiate Fire and Forget threads using Task.Run().

In order to simulate this, I'm going to write to a TextBox from the UI thread, and use a Console.WriteLine() in the Fire and Forget processes to show the progress of each call. When running your application from Visual Studio, you can see the output from the Console.WriteLine() in the Output window (look for it under "Debug | Windows | Output" if you don't already have it show up when you're debugging).

So, first look at this code:

Utils util = new Utils();
Console.WriteLine("Run Tasks in 'for' loop ...");
for (int i = 0; i < 10; i++)
{
    Task.Run(() =>
    {
        util.FireAndForget(i);
    });

    this.TextBox1.Text += string.Format("Started Thread {0} ...\r\n", i);
}

And here is the FireAndForget() method in the Utils class:

public void FireAndForget(int i)
{
    Console.WriteLine("Starting Task #{0}", i);
    // Simulate long running thread, but make them random time periods
    var rand = new Random();
    Thread.Sleep(rand.Next(10000)); // 10 seconds or less
    Console.WriteLine("Completed Task #{0}", i);
}

Running this code, you can see in the UI that the TextBox sequentially lists all 10 threads as having started, and if you watch the Output window, you'll see that the TextBox lists all 10 before the Output window does (in other words, the tasks really are running on separate threads). And they are Completed at different times.

But, wait ... something is wrong!!  Notice in the Output window that you will often see Tasks with duplicate numbers! And, if you comment out the line for setting the TextBox1.Text, you will see every single Task has the number 10!! Why is this happening?

It has to do with something called Closures. The lambda passed to Task.Run(() => { util.FireAndForget(i); }) captures the variable i itself, not the value i had when the delegate was created (0 thru 9), so by the time many of the delegates actually run, the loop has already finished and i is 10. "Closures close over variables, not over values." That's a quote from Eric Lippert's blog post, which will probably explain things a lot better than I can. See it here (and note that he has a link to Part 2 also): https://blogs.msdn.microsoft.com/ericlippert/2009/11/12/closing-over-the-loop-variable-considered-harmful/

In case you were wondering: the reason that setting the TextBox.Text in the UI seems to lessen the problem is simply that updating the UI slows the loop down, so more of the tasks happen to start before i reaches 10.

There are two ways around this:

for (int i = 0; i < 10; i++)
{
    // By creating a new variable each time through the for loop
    // and using that instead, you can avoid the problem
    int ii = i;
    Task.Run(() =>
    {
        util.FireAndForget(ii);
    });

    this.TextBox1.Text += string.Format("Started Thread {0} ...\r\n", i);
}

If you've looked at the link to Eric's blog post, you'll see that Microsoft decided to fix this issue in C# 5, but *only* in the foreach, not in the for. So, in the foreach version, you could do it like this with no problems:

//Console.WriteLine("Run Tasks in 'foreach' loop ...");
int[] iArray = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
foreach (int i in iArray)
{
Task.Run(() =>
{
util.FireAndForget(i);
});
this.TextBox1.Text += string.Format("Started Thread {0} ...\r\n", i);
}

So, I diverged a bit from the original intent of this post, showing both the Fire And Forget process, and the little "gotcha" that you might have encountered had you used a for instead of a foreach.

Happy coding!  =0)

Sunday, February 26, 2017

It's All About The Data

Every application has to deal with *some* kind of data. Where it's stored externally and how it's used in the application can vary widely ... but in this blog post, I will deal with SQL Server database storage and reading the data into either a DataSet, DataTable or a List of objects. I will *not* talk about Entity Framework nor LINQ-to-SQL, mainly because I pretty much only use DataSets.

I have seen many posts on Forums, blogs and elsewhere recommending the use of the Load() method of a DataSet or DataTable. The Load() method takes an IDataReader parameter (so, in this case, I'd use a SqlDataReader). According to the many times that I've seen this recommended, it is supposed to be the fastest way to do this.

Personally, I have always used a SqlDataAdapter and its Fill() method to put the result set(s) returned from a SQL call into a DataTable or a DataSet. And I always thought that this was the fastest (although I had never tested that).

A couple of days ago, I saw this Load() method recommended again, several times ... and again I wondered about the performance of the Load() vs the Fill(), so I decided that it's about time to run some performance tests to put the question to rest, once and for all. And, while I was at it, I also decided to throw in a test putting data into a List<T> in addition to the DataSet/DataTable tests (not for my benefit, since I seldom use List<T> in this manner; but for you, Dear Reader).

So, first, here is the benchmarking code:

private void TestLoadvsFillvsList()
{
    DataTable dtGlobal;
    DataSet dsGlobal;
    decimal LoadMilli;
    decimal FillMilli;
    decimal ListMilli;

    Stopwatch oWatch = new Stopwatch();

    // The DataTable.Load(IDataReader) method:
    using (SqlConnection conn = new SqlConnection(this.ConnectionString))
    {
        SqlCommand sc = new SqlCommand("select * from Logdata", conn);
        conn.Open();
        dtGlobal = new DataTable();

        oWatch.Start();
        SqlDataReader dr = sc.ExecuteReader();
        dtGlobal.Load(dr);
        oWatch.Stop();

        // If you have multiple SELECTs in your SqlCommand, you can put the multiple result sets
        // into a DataSet with syntax similar to the following examples:
        //dsGlobal.Load(dr, LoadOption.OverwriteChanges, dtLogData, dtMessage);
        //dsGlobal.Load(dr, LoadOption.OverwriteChanges, "Table1", "Table2");
    }
    LoadMilli = oWatch.ElapsedMilliseconds;
    oWatch.Reset();
    Console.WriteLine("Load Time {0} milliseconds, Row Count, {1}", LoadMilli, dtGlobal.Rows.Count);

    // The DataAdapter.Fill(DataSet/DataTable) method:
    using (SqlConnection conn = new SqlConnection(this.ConnectionString))
    {
        SqlCommand sc = new SqlCommand("select * from Logdata", conn);
        conn.Open();
        dsGlobal = new DataSet();

        oWatch.Start();
        SqlDataAdapter da = new SqlDataAdapter(sc);
        da.Fill(dsGlobal); // could also Fill(dtGlobal)
        oWatch.Stop();
    }
    FillMilli = oWatch.ElapsedMilliseconds;
    oWatch.Reset();
    Console.WriteLine("Fill Time {0} milliseconds, Row Count, {1}", FillMilli, dsGlobal.Tables[0].Rows.Count);

    // The while (dr.Read()) method:
    List<LogData> logList = new List<LogData>();
    using (SqlConnection conn = new SqlConnection(this.ConnectionString))
    {
        SqlCommand sc = new SqlCommand("select * from Logdata", conn);
        conn.Open();

        LogData oLog;
        oWatch.Start();
        SqlDataReader dr = sc.ExecuteReader();
        while (dr.Read())
        {
            oLog = new LogData();
            oLog.logdatakey = (long)dr["logdatakey"];
            oLog.logdatetime = (DateTime)dr["logdatetime"];
            oLog.message = dr["message"].ToString();
            oLog.category = dr["category"].ToString();
            logList.Add(oLog);
        }
        oWatch.Stop();
    }
    ListMilli = oWatch.ElapsedMilliseconds;
    oWatch.Reset();
    Console.WriteLine("List Time {0} milliseconds, Row Count, {1}", ListMilli, logList.Count);
    Console.WriteLine("------------------------------------------------");

    if (LoadMilli > FillMilli)
        Console.WriteLine("Fill is {0:0.00} faster than Load", LoadMilli / FillMilli);
    else
        Console.WriteLine("Load is {0:0.00} faster than Fill", FillMilli / LoadMilli);

    Console.WriteLine("A List of objects is faster than either one!");
    Console.WriteLine("List is {0:0.00} times faster than Load", LoadMilli / ListMilli);
    Console.WriteLine("List is {0:0.00} times faster than Fill", FillMilli / ListMilli);
    Console.WriteLine("------------------------------------------------");
}
public class LogData
{
    public long logdatakey { get; set; }
    public DateTime logdatetime { get; set; }
    public string message { get; set; }
    public string category { get; set; }
}

OK, so now notice in the last set of Console.WriteLine() statements above where I state that using a List<T> is *always* faster than using a DataSet/DataTable (and by a lot, as I'll get to in a minute). What this tells me is that if you have no use for DataSets at all, then you'll do just fine using a SqlDataReader to add your data to a List<T>, and then use your List elsewhere in your processing.

I used a sample size of 425,376 rows in the database table. The average times for the 3 methods were approximately as follows, in milliseconds:

Load 4040
Fill 3030
List 2020

So, doing the math:
The Fill is about 1.33 times faster than the Load.
The List is about 2 times faster than the Load.
The List is about 1.5 times faster than the Fill.

I feel vindicated! I can now safely reply to these forum posts that, indeed, the .Fill() is significantly faster than the .Load()!!

Happy coding!! :0)

Tuesday, January 17, 2017

A Lot of Misbehaving - Bad Computer!!

Last week, both of my computers decided to misbehave at almost the same time. Windows Defender was the first to misbehave by hogging 40% of my CPU, which left my fan running constantly and my computer getting pretty warm ... this was on my Surface Pro 4, running Windows 10. Next, Windows Update did nearly the same thing, again hogging CPU and causing a constantly running fan and excess heat … this was on my Dell Venue 11 Pro, running Windows 8.1. Luckily for me, the Dell didn’t start having problems until after I had resolved the problems on the Surface, but I only got a day of breathing room. Talk about a frustrating week!!! So, how did I fix these naughty computers? Read on …

Windows Defender

The weird thing about this problem was that this computer wasn’t even idle at the time … Defender isn’t supposed to be running any kind of Scan while the computer is in use, and I was *definitely* using it when this mess started! But, I left it running, assuming it would finish up soon … 4 hours later, it was still running!  I realized later (when trying a Quick Scan on my other computer), that my scans only take about 3-8 minutes, not hours and hours and hours!! That should have been my first clue. I Googled and saw several suggestions, none of which helped:

  • I tried rebooting, didn’t help.
  • I set it to exclude scanning the file MsMPEng.exe, didn’t help.
  • I tried scanning with sfc (System File Check, using /verifyonly rather than /scannow), it told me I had no problems.

I hibernated overnight, because I didn’t want it overheating during the night. Obviously, when I brought it back from hibernation the next morning, it continued running at 40% CPU, as expected, and I shut it down again.

Later that day, I ran Defender from its UI on my *other* computer and I noticed that it shows a progress bar and a running count of the files it is scanning (I guess I never bothered to run it from the UI before). So, I decided to try a Quick Scan on *this* computer, even though Defender was still scanning (the 40% CPU). Lo and behold, the UI showed that it got stuck after scanning 12,471 files … number 12,472 just stayed up there and now my CPU usage rose to nearly 80%!  Yikes!!  I shut it down again, and worked the rest of the day from my other computer (I do all of my development work on VM’s that are hosted somewhere else on our network, so I could actually work on stuff from either computer).

Over the course of the day, I periodically started it up just to check my email or to try something I found while Googling (and hoping that just the act of restarting it might eventually magically make it stop), and then would shut it down again because no magic happened and the CPU was still running high. The only thing different that I did before shutting it down that night was to Disable the Scheduled Scan in the Task Scheduler, so it would at least not keep trying to run a scan.

The next morning after startup, it continued misbehaving and I continued shutting it down and starting it back up again to check my email. After a couple of rounds of this up and down, finally magic *did* happen! All of a sudden it started working. I think that Disabling the Scheduled Scan definitely helped. But the real kicker was my discovery from a fellow MVP (Derek Knight) as to *why* I probably had the problem in the first place.

Back in the day (ending with Windows 7), a full ShutDown was always better than a Restart because, basically, when you started back up from a full ShutDown it was starting up totally “from scratch”, like a cold boot. A Restart didn’t shut everything down before it came back up and didn’t do all of the “checks” that a full start up did. But now, starting with Windows 8, it’s totally the opposite behavior! And the problem that I was having was most likely related to always doing a ShutDown. Derek says that “Defender has an annoying habit of getting hung up on scanning a non existent driver that doesn't exist any longer. A full Restart loads a new instance of the drivers. That normally solves it. This can also happen when you shut down, if Fast Startup is enabled. After 3 or 4 start ups, some existing drivers don't fully load so Defender sees them as corrupt. They are in memory but not physically able to be scanned.”

Fast Startup was most likely the culprit … just prior to all hell breaking loose with my Defender problems, I had been having a lot of issues/warnings about a corrupted bluetooth mouse driver. Restarting seemed to solve the problem, but sometimes it didn’t (probably when I ShutDown instead of Restarted). Anyway, read this very interesting article from “How-To Geek” about Fast Startup. I turned mine off and have not had any problems since:

http://www.howtogeek.com/243901/the-pros-and-cons-of-windows-10s-fast-startup-mode/

Windows Update

No sooner had I fixed the Defender problem on my Surface, then my Dell Venue started misbehaving with Windows Update. Arrrrgggghhhh!!!   =0(   Apparently, Update was getting stuck while “Checking for Updates”. Maybe it was stuck on a file download or maybe it was something else, I don’t know what. I only know that it would run up my CPU to about 25-30% and never stop!

So, I ended up shutting off Windows Update until I could find a solution. To turn it off, go to Control Panel | System and Security | Windows Update | Change Settings and choose “Never check for updates”.

I had found a few suggestions to try while Googling, like running the built-in Update Troubleshooter, which you can find by typing “troubleshooter” in the Start | Search box and clicking the Troubleshooter result. Then start Troubleshooting by clicking on System and Security | Fix Problems With Windows Update.

It didn’t work for me, but it’s a good starting place.

Next, it was “How-to Geek” to the rescue again:

http://www.howtogeek.com/247380/how-to-fix-windows-update-when-it-gets-stuck/

The How-To Geek offers several things you can try to fix the problem. I had already tried one or two of them that I had found from my previous Googling. I decided that I was so sick and tired of trying every little suggestion I found when none of them ever worked, that I was going to skip to the end of the article and try the sure-fire, easy way (according to How-To Geek, it has always worked for them in the past when none of the other solutions did) … and that is using the third-party tool called WSUS Offline Update. The link to download it is provided in the above article, as well as instructions as to what to do with it once you’ve downloaded it. What WSUS Offline Update does is download a bunch of Windows Updates, depending on which Windows version you’ve got and which downloads you’ve already installed. Be forewarned that it may take a while to download if you’re way behind on updates. Mine took all night to download (but, my Internet connection stinks … so YMMV). Installing the Updates took a *lot* less time than downloading them. But, it’s the end result that matters and when all is said and done, this *worked*!!!! No more stalling when it’s trying to Check for Updates!

Oh, and don’t forget to change the Windows Updates settings back to “Install Updates Automatically”.