You’ve heard all kinds of good things about the various .TryParse() methods, and rightly so. They’re a good way to convert a string to the appropriate data type, such as int, long, decimal, DateTime, etc., without having to worry about the conversion throwing an exception if the string cannot be converted to that data type. For example, you could get an exception if you simply tried to cast the object:
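The original snippet isn’t reproduced here, but a minimal illustration might look like this (the string value is my own, chosen just to force the failure):

```csharp
using System;

// Illustrative only: suppose an API hands back an object that actually
// contains a string rather than a DateTime.
object result = "2024-05-01";

// This compiles fine, but the cast fails at runtime:
DateTime lastStatusDate = (DateTime)result;   // throws InvalidCastException
```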
or if you used the Convert methods:
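Again, a minimal illustration (the string value is mine):

```csharp
using System;

// Convert.ToDateTime() throws rather than returning a default
// when the value can't be converted:
object result = "not a date";
DateTime lastStatusDate = Convert.ToDateTime(result);   // throws FormatException
```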
But, the .TryParse() methods avoid all that and simply do the conversion if possible. If the string can be converted to the data type, the value is put into the out parameter and the method returns true. If it cannot be converted, a default value is put into the out parameter and the method returns false.
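For example, a quick sketch of the basic TryParse() pattern (the literal values are mine):

```csharp
using System;

DateTime parsed;

bool ok = DateTime.TryParse("2024-05-01 09:31:45", out parsed);
// ok == true, and parsed now holds the date

bool bad = DateTime.TryParse("not a date", out parsed);
// bad == false, and parsed == DateTime.MinValue (the default value)
```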
Now, let me say that DateTime.TryParse() works fine too, but there is one gotcha to look out for. If the DateTime value that you wish to parse comes from a database and you need the entire precision of that value (in other words, more precision than down to the whole second), then you can’t use .TryParse(). Let’s look at the time portion of a DateTime value that I retrieve from a SQL Server database table, which was put into a column via GETDATE() in a SQL stored procedure: 09:31:45.333. Note that this is a bit more precise than just down to 45 seconds; there’s that decimal value of .333 after it. Why would you care about that?
Here’s the scenario that bit me: rows are entered in a database table every time an activity occurs, which could be as often as every 30 seconds. Periodically, I need to retrieve the data for the most recent activity. So, I use “SELECT MAX(StatusTime) FROM etc...”, which returns the most recent DateTime from the above-mentioned table, and I retrieve it by running command.ExecuteScalar(). The only data type that .ExecuteScalar() returns is an object, so in order to make use of this newly retrieved date, I need to convert it to a DateTime variable. I had been so enamored of using TryParse() for this kind of thing that I just went ahead and used it here too.
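My actual code isn’t reproduced here, but the pattern was essentially this (the DateTime literal stands in for the object that ExecuteScalar() returns from the database):

```csharp
using System;

// Stand-in for: object result = command.ExecuteScalar();
object result = new DateTime(2024, 5, 1, 9, 31, 45, 333);

// The pattern I had been using everywhere — TryParse() wants a string,
// hence the .ToString() call. It compiles, runs, and returns true...
DateTime lastStatusDate;
DateTime.TryParse(result.ToString(), out lastStatusDate);
```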
The only time that the value returned from the query will be DBNull is at the very start, when the app is first used and the database contains no data or when it’s the first time using that particular ActivityID. In either case, I was ok with the default value, which is DateTime.MinValue.
So, in my testing, when retrieving activity within a date range, I never got the very last set of activity records. I would get everything before 09:31:45.000, but I knew that there was also activity “posted” at 09:31:45.333 which wasn’t being retrieved.
The culprit? The .ToString()!! The time portion of my date went from being 09:31:45.333 in the database to being 09:31:45 in my LastStatusDate variable, because the default .ToString() format doesn’t include the fractional seconds. Consequently, any activity that had the time of 09:31:45.333 was not being retrieved!!
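You can see the loss with a quick sketch (the literal value is mine):

```csharp
using System;

DateTime fromDb = new DateTime(2024, 5, 1, 9, 31, 45, 333);

// Default ToString() format has no fractional seconds:
string s = fromDb.ToString();

DateTime roundTripped;
DateTime.TryParse(s, out roundTripped);

Console.WriteLine(fromDb.Millisecond);        // 333
Console.WriteLine(roundTripped.Millisecond);  // 0 — the .333 is gone
```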
The obvious solution to this dilemma is to check that the object returned from .ExecuteScalar() is of the DateTime data type and then cast the object to my LastStatusDate variable. I could see that I might want to use this functionality in more than just this one place, so I decided to write an extension method for the object data type to handle this. I actually wrote two of them, and they work perfectly!
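The original methods aren’t reproduced here, but a sketch of what such a pair of extension methods might look like (the names TryParseDateTime and ToDateTime are my own illustrative choices):

```csharp
using System;

public static class ObjectExtensions
{
    // TryParse-style: true only if the boxed object really is a DateTime.
    // A direct cast (no ToString() round trip) keeps the full precision.
    public static bool TryParseDateTime(this object value, out DateTime result)
    {
        if (value is DateTime)
        {
            result = (DateTime)value;
            return true;
        }
        result = DateTime.MinValue;   // same default that TryParse() gives you
        return false;                 // also covers null and DBNull.Value
    }

    // Convenience version for when you don't care about the boolean.
    public static DateTime ToDateTime(this object value)
    {
        DateTime result;
        value.TryParseDateTime(out result);
        return result;
    }
}
```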
The reason I wrote two different extension methods is that sometimes you need to test the boolean return value of a TryParse() kind of method and sometimes you don’t. Here’s how to use both extension methods:
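With the boolean check, assuming a TryParse-style extension named, say, TryParseDateTime (a compact copy of the method is repeated here so the snippet compiles on its own, and the DateTime literal stands in for the ExecuteScalar() result):

```csharp
using System;

// Stand-in for: object result = command.ExecuteScalar();
object result = new DateTime(2024, 5, 1, 9, 31, 45, 333);

DateTime lastStatusDate;
if (result.TryParseDateTime(out lastStatusDate))
{
    Console.WriteLine(lastStatusDate.Millisecond);   // 333 — precision intact
}

static class ObjectExtensions
{
    public static bool TryParseDateTime(this object value, out DateTime result)
    {
        if (value is DateTime) { result = (DateTime)value; return true; }
        result = DateTime.MinValue;
        return false;
    }
}
```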
Or, if you don’t wish to check the boolean, such as maybe adding a date to a List<DateTime>:
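Something like this, assuming the no-boolean extension is named, say, ToDateTime (again, a compact copy is included so the snippet stands alone):

```csharp
using System;
using System.Collections.Generic;

// Stand-in for: object result = command.ExecuteScalar();
object result = new DateTime(2024, 5, 1, 9, 31, 45, 333);

var statusDates = new List<DateTime>();
statusDates.Add(result.ToDateTime());   // no boolean to check

static class ObjectExtensions
{
    public static DateTime ToDateTime(this object value)
    {
        return value is DateTime ? (DateTime)value : DateTime.MinValue;
    }
}
```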
Or just setting the variable:
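Using the same hypothetical ToDateTime name:

```csharp
using System;

// Stand-in for: object result = command.ExecuteScalar();
object result = new DateTime(2024, 5, 1, 9, 31, 45, 333);

DateTime lastStatusDate = result.ToDateTime();

static class ObjectExtensions
{
    public static DateTime ToDateTime(this object value)
    {
        return value is DateTime ? (DateTime)value : DateTime.MinValue;
    }
}
```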
Obviously, these two extension methods can be used with any type of object (for example, a DataRow column in a DataTable); I just showed them here using the same object (the result returned from the ExecuteScalar() method) for consistency in my example.
They are pretty handy extension methods … I highly recommend adding them to your bag of tricks!