I ran into an interesting issue when doing some math with a double rather than a decimal. I’m maintaining some existing code and since part of the original code used doubles, I continued writing my new code using doubles as well. The original author had created a string.ToDouble(defaultVal) extension method to use for such purposes (data is being retrieved from a TextBox.Text, even though it’s numeric data), but NO string.ToDecimal(defaultVal) extension method existed. So I didn’t really give it any thought … and just continued to use the double.
I know that there’s a difference as to when you should use double and when you should use decimal … but it seems that I can never remember which is which. And, all along I thought one was floating point and one was not … but that’s not quite true either. It seems that double (and float also) are floating binary point types, whereas decimal is a floating decimal point type.
I found a really good reply that explains this quite nicely in a StackOverflow thread: Difference Between Decimal Float and Double. The very first reply is the one I’m referring to, and the gist of it is the following:
A double represents a number like this: 10001.10010110011
Whereas a decimal represents a number like this: 12345.65789
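A quick way to see that difference in action (just a throwaway C# snippet, nothing from my actual project):

```csharp
using System;

class Program
{
    static void Main()
    {
        // double stores the value in binary, so 0.1, 0.2 and 0.3 are only approximations
        Console.WriteLine(0.1 + 0.2 == 0.3);      // False (0.1 + 0.2 is actually 0.30000000000000004)

        // decimal stores the decimal digits directly, so the same values are exact
        Console.WriteLine(0.1m + 0.2m == 0.3m);   // True
    }
}
```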
As for what to use when:
- For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.
- For values which are more artifacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.
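And a little throwaway example of why that matters for money-type values (again, not from my project; the totals in the comments are the approximate results you’d see):

```csharp
using System;

class Program
{
    static void Main()
    {
        // Add ten cents a thousand times -- a "naturally exact decimal" kind of value.
        double doubleTotal = 0;
        decimal decimalTotal = 0;
        for (int i = 0; i < 1000; i++)
        {
            doubleTotal += 0.10;
            decimalTotal += 0.10m;
        }

        Console.WriteLine(doubleTotal);    // roughly 99.9999999999986 -- the binary rounding error piles up
        Console.WriteLine(decimalTotal);   // 100.00 exactly
    }
}
```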
Read the reply in the linked thread, which has a little more to it than what I excerpted above.
Luckily for me, in my testing I used numbers that would cause this difference to be apparent; otherwise I might never have noticed it, which would have been bad once it was in production! Here’s where I went wrong:
Using Double.TryParse("14.19", out Large) and Double.TryParse("10.00", out Small) yields the correct numbers if you look just at Large (14.19) and Small (10.0). However, if you do math on the two, such as Large - Small, you don't get what you'd expect. You'd expect 4.19, but what you get is 4.1899999999999995. Usually that's not a problem, because it would most likely get rounded along the way … and, in fact, it does correctly display in the TextBox as 4.19. No harm, no foul … unless you happen to be checking whether 4.19 <= Large - Small. That should be true (if rounded), but it isn't, because nothing gets rounded at that point (4.19 is definitely NOT less than or equal to 4.1899999999999995)!!! But cast Large and Small to decimal before even doing the math, and everything is fine.
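Boiled down to a few lines, the scenario looks like this (Large and Small are the names from my code above; exactly how the double displays can vary by runtime, which is the "it rounds when displayed" part):

```csharp
using System;

class Program
{
    static void Main()
    {
        // Both parses succeed and the individual values look right.
        Double.TryParse("14.19", out double Large);
        Double.TryParse("10.00", out double Small);

        double difference = Large - Small;
        Console.WriteLine(difference);             // 4.1899999999999995 on modern .NET (older runtimes may display 4.19)

        // The comparison from above: false, because nothing gets rounded here.
        Console.WriteLine(4.19 <= difference);     // False

        // Cast to decimal before doing the math and the comparison behaves as expected.
        decimal decimalDifference = (decimal)Large - (decimal)Small;
        Console.WriteLine(decimalDifference);          // 4.19
        Console.WriteLine(4.19m <= decimalDifference); // True
    }
}
```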
I immediately wrote my own string.ToDecimal(defaultVal) extension method.
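Something along these lines … this is just a sketch of the idea, not the exact implementation (the real one mirrors whatever the existing ToDouble(defaultVal) helper does):

```csharp
public static class StringExtensions
{
    public static decimal ToDecimal(this string value, decimal defaultVal)
    {
        // Return the parsed value, or the supplied default when the text
        // isn't a valid number -- same fallback pattern as ToDouble(defaultVal).
        return decimal.TryParse(value, out decimal result) ? result : defaultVal;
    }
}
```

So the TextBox code can now do something like AmountTextBox.Text.ToDecimal(0m) (AmountTextBox being whatever TextBox holds the value) and work with a decimal from the start.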
Maybe after this little fiasco I won’t forget anymore. ;0)
Good article. Thanks for sharing.
Thanks for reading. =0)