Precision in Date/Time Data Types

There is a general misconception, common across many programming and scripting languages, regarding Date/Time data types, their precision, and how that precision differs in theory versus practice.  For the sake of this article, I will be discussing the following data types: DATETIME in T-SQL, DATETIME2(7) in T-SQL, and DateTime in C#.NET.

Data Type Precision (Theoretical Precision)

Data Type          Theoretical Precision
SQL DATETIME       3.33 ms
SQL DATETIME2(7)   100 ns
C# DateTime        100 ns

When viewing the information presented above, the natural assumption is that each of these data types will give you a unique value at intervals matching its precision.  For example, if you were using a SQL DATETIME and queried the time every 1 ms, you would expect to see a unique value roughly once out of every three queries.  The same would be expected of the higher-precision data types, if they were queried at comparably fine intervals.

This, unfortunately, is not how Date/Time data types work.  The theoretical precision represents only what the data type's structure is physically capable of holding, not what the variable will ever practically contain.
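
To see the theoretical side in isolation, here is a minimal sketch (the literal value is arbitrary) that assigns one instant to both SQL types; DATETIME2(7) keeps every fractional digit, while DATETIME rounds to its 1/300-second increments:

-- DATETIME2(7) retains all seven fractional digits; DATETIME rounds to 1/300 s
DECLARE @Fine DATETIME2(7)
SET @Fine = '2017-05-17 12:00:00.0050000'
SELECT @Fine AS FineValue,                    -- 2017-05-17 12:00:00.0050000
       CAST(@Fine AS DATETIME) AS CoarseValue -- 2017-05-17 12:00:00.007 (nearest .000/.003/.007)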

Operating System Precision (Practical Precision)

The important factor that most programmers forget is that every program, query, and script runs in the context of some Operating System, and that Operating System in turn runs on a processor and RAM.  The processor, while very fast, is busy with the many different actions required to handle a multitasking load, and thus cannot dedicate a large percentage of its cycles to updating a clock.  Processors get around this issue by updating a portion of memory with the current date and time at some predetermined interval.  The Operating System exposes this data through any one of several counters.  When a program requests the current time from the OS, the OS gives the program the latest reading of the “current” time.  In this way, the counter can be said to be highly, but not perfectly, accurate, and precise only to within some threshold (which we will see next).

When a program requests the current date and time from the operating system, it does so through a built-in function/method.  SQL DATETIME can get its data from GETDATE(), SQL DATETIME2(7) should get its data from SYSDATETIME(), and C# DateTime can get its data from DateTime.Now.  GETDATE() hits an OS counter that has a precision of approximately 15 ms (on Windows 2000), while SYSDATETIME() and DateTime.Now hit an OS counter that has a precision of approximately 1 ms.
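
As a quick side-by-side illustration (the values shown in the comments are hypothetical, not captured output):

-- Compare the two source functions in one query
SELECT GETDATE()     AS FromGetDate,      -- e.g. 2009-01-01 12:00:00.013
       SYSDATETIME() AS FromSysDateTime   -- e.g. 2009-01-01 12:00:00.0131247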

Consequently, when you call GETDATE(), you will receive a value that, while capable of holding a time down to a precision of 3.33 ms, will actually only be precise to about 15 ms (in Windows 2000).  In our previous example of querying every 1 ms, you would only see a unique value roughly once out of every 15 queries, as opposed to the one in three we previously would have assumed.

Demonstration

In order to see the effect we have discussed, the following code captures the number of unique values observed and the average time between unique values.

-- SQL GETDATE
DECLARE @TimeStart DATETIME
DECLARE @Time DATETIME
DECLARE @TimeEnd DATETIME
DECLARE @I INT
DECLARE @Count INT
 
SET @I = 0
SET @Count = 0
SET @TimeStart = GETDATE()
SET @Time = @TimeStart
 
-- Spin in a tight loop, counting how many distinct values GETDATE() returns
WHILE @I < 10000000
      BEGIN
            SET @TimeEnd = GETDATE()
            IF @TimeEnd != @Time
                  BEGIN
                        SET @Count = @Count + 1
                        SET @Time = @TimeEnd
                  END
            SET @I = @I + 1
      END
 
-- Average elapsed time between distinct values
PRINT CAST(@Count AS VARCHAR) + ' unique values'
PRINT CAST(DATEDIFF(millisecond, @TimeStart, @TimeEnd) / CAST(@Count AS REAL) AS VARCHAR) + ' milliseconds'
GO
 
-- SQL SYSDATETIME
DECLARE @TimeStart DATETIME2
DECLARE @Time DATETIME2
DECLARE @TimeEnd DATETIME2
DECLARE @I INT
DECLARE @Count INT
 
SET @I = 0
SET @Count = 0
SET @TimeStart = SYSDATETIME()
SET @Time = @TimeStart
 
-- Same tight loop, now sampling SYSDATETIME() into DATETIME2 variables
WHILE @I < 10000000
      BEGIN
            SET @TimeEnd = SYSDATETIME()
            IF @TimeEnd != @Time
                  BEGIN
                        SET @Count = @Count + 1
                        SET @Time = @TimeEnd
                  END
            SET @I = @I + 1
      END
 
PRINT CAST(@Count AS VARCHAR) + ' unique values'
PRINT CAST(DATEDIFF(microsecond, @TimeStart, @TimeEnd) / CAST(@Count AS REAL) AS VARCHAR) + ' microseconds'
// C# DateTime (snippet from a console app; assumes using System;)
DateTime timeStart;
DateTime time;
DateTime timeEnd = DateTime.Now;  // initialized so it is definitely assigned after the loop
 
int i = 0;
int count = 0;
timeStart = DateTime.Now;
time = timeStart;
 
// Spin, counting how many distinct values DateTime.Now returns
while(i < 10000000)
{
       timeEnd = DateTime.Now;
       if(timeEnd != time)
       {
              count++;
              time = timeEnd;
       }
       i++;
}
 
Console.WriteLine("{0} unique values",count);
Console.WriteLine("{0} microseconds", ((timeEnd.Ticks - timeStart.Ticks) / count) / (TimeSpan.TicksPerMillisecond / 1000));

From executing these I found the following results:

Data Type                   Theoretical Precision   Practical Precision
SQL DATETIME (on Win2000)   3.33 ms                 15.5556 ms
SQL DATETIME (on Win2K3)    3.33 ms                 3.33333 ms
SQL DATETIME2(7)            100 ns                  1003.71 µs
C# DateTime                 100 ns                  1000.21 µs

From these results, we can discern a couple of things.  First, the counter that SQL Server accesses in Windows 2000 has a practical precision of approximately 15 ms, while the counter that SQL Server accesses in Windows 2003 (and beyond) has a practical precision of approximately 3.33 ms, which is essentially the same as the theoretical precision.  Second, in the case of both SQL DATETIME2(7) and C# DateTime, the practical precision is approximately 1 ms, which falls woefully short of the 100 ns theoretical precision.

Conclusion

Developers should be aware of what their data types and functions are doing under the surface.  In the case of SQL DATETIME2 and C# DateTime, developers must keep in mind that the system date and time retrieved into those data types is only precise to about 1 ms, regardless of what the type itself can store.
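
As a final sketch of the practical impact, two back-to-back SYSDATETIME() reads will often land inside the same ~1 ms OS tick and compare equal, despite the 100 ns theoretical precision:

-- Two consecutive reads frequently fall within the same OS counter update
DECLARE @A DATETIME2(7)
DECLARE @B DATETIME2(7)
SET @A = SYSDATETIME()
SET @B = SYSDATETIME()
SELECT CASE WHEN @A = @B THEN 'same tick' ELSE 'different tick' END AS Result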

  1. Thanks for posting this. It’s been a pleasure to read 🙂

  2. SYSDATETIME() is not the only way to get data into a DATETIME2 column. INSERT also works, and can use data coming from a more granular source.

    • Of course, you could get the data that you are storing in DATETIME2 from any source, and that source could be more precise than the Operating System’s counter. I did not mean to insinuate that the lower-order bits were useless, only that people sometimes misunderstand the meaning of DATETIME2’s ability to store down to 100 ns. I only hoped to clarify for some developers any unexpected behavior they may be experiencing due to misunderstanding the SYSDATETIME() data source. Thank you for your comment.
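
    For instance, a minimal sketch (the temp table and literal are hypothetical) of inserting a more granular, externally sourced value:

    -- Hypothetical table; the 100 ns detail comes from the inserted literal, not SYSDATETIME()
    CREATE TABLE #Readings (ReadingTime DATETIME2(7))
    INSERT INTO #Readings (ReadingTime) VALUES ('2017-05-17 12:00:00.0000001')
    SELECT ReadingTime FROM #Readings  -- the full 100 ns granularity is preserved
    DROP TABLE #Readings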

  3. Another take-away is to always use DATETIME2 for date-times in SQL Server so you don’t have any truncation problems storing/retrieving into C# DateTime variables.

    • This is true, as long as you need that much precision. If you only need DATETIME2(4) or less, you can save yourself 1 or 2 bytes over the traditional 8-byte DATETIME. But, in that scenario, you will, of course, have to make sure that your .NET app doesn’t need deeper precision. And if you need even less precision, SMALLDATETIME and DATE take only 4 and 3 bytes, respectively.
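
    If you want to verify these storage sizes on your own server (SQL Server 2008 or later), DATALENGTH reports the bytes a given value occupies:

    -- Check the storage size of each type directly
    SELECT DATALENGTH(CAST(SYSDATETIME() AS DATETIME2(7)))  AS DT2_7,  -- expect 8
           DATALENGTH(CAST(SYSDATETIME() AS DATETIME2(4)))  AS DT2_4,  -- expect 7
           DATALENGTH(CAST(SYSDATETIME() AS SMALLDATETIME)) AS SDT,    -- expect 4
           DATALENGTH(CAST(SYSDATETIME() AS DATE))          AS D       -- expect 3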
