
SQL Server's `datetime` type is stored as 8 bytes: the first four bytes are the number of days since Jan 1, 1900, and the other four bytes are the number of ticks since midnight, where one tick is 1/300 of a second.
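
For illustration, casting to `binary(8)` makes the two halves visible (a quick sketch; the column aliases are just my own labels):

```sql
DECLARE @d   datetime  = '2014-02-17 10:48:00.000';
DECLARE @raw binary(8) = CAST(@d AS binary(8));

SELECT
    CAST(SUBSTRING(@raw, 1, 4) AS int) AS days_since_1900,       -- first four bytes
    CAST(SUBSTRING(@raw, 5, 4) AS int) AS ticks_since_midnight,  -- second four bytes
    CAST(SUBSTRING(@raw, 5, 4) AS int) / 300.0 AS seconds_since_midnight;
```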

I'm wondering why that is. Where did that 1/300 come from? There must be some historical reason for it.

Stas
  • I've often wondered why the closest to midnight you could get was 23:59:59.997. – Jonathan Feb 17 '14 at 10:48
  • I've often wondered why the closest to midnight you could get was 23:59:59.997. – Jonathan Feb 17 '14 at 10:48
  • @Jonathan - Which is usually a good indication that you should be using an exclusive endpoint (i.e. midnight of the following day) when working with datetime data and wanting a complete day's worth - if you've hardcoded the `997` or used other tricks to calculate it, you'll have to change all of your code if you shift to `datetime2` (see the sketch after these comments). – Damien_The_Unbeliever Feb 17 '14 at 10:56
  • It goes back to Sybase days. The maximum 32-bit signed int would allow precision of up to 24,855 ticks per second, so I wondered about that too (the arithmetic is sketched below). – Martin Smith Feb 17 '14 at 11:03
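
Two of the comments above lend themselves to quick sketches in T-SQL: the exclusive-endpoint pattern, and the arithmetic behind the 24,855 figure (the table and column names below are hypothetical):

```sql
-- Exclusive endpoint: captures all of Feb 17 at any tick resolution,
-- with no hardcoded .997 to fix if the column moves to datetime2.
SELECT COUNT(*)
FROM Orders                          -- hypothetical table and column
WHERE OrderDate >= '20140217'
  AND OrderDate <  '20140218';

-- Back-of-the-envelope: a signed 32-bit int holds 2^31 - 1 = 2,147,483,647
-- ticks, and a day has 86,400 seconds.
SELECT 2147483647 / 86400 AS max_whole_ticks_per_second;  -- 24855
```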

1 Answer


Yes, there is a historical reason: UNIX!

For details, read this excellent article by Joe Celko.

Here is the detail you're looking for:

Temporal data in T-SQL used to be a prisoner of UNIX system clock ticks and could only go to three decimal seconds with rounding errors. The new ANSI/ISO data types can go to seven decimal seconds and include true DATE and TIME data types. Since they are new, most programmers are not using them yet.
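
To see that three-decimal rounding in action, here is a small sketch contrasting the old and new types; the rounding of `datetime` to 1/300-second increments is what produces the `.997` ceiling mentioned in the comments:

```sql
SELECT
    CAST('2014-02-17 23:59:59.999' AS datetime)  AS old_type,  -- rounds up to 2014-02-18 00:00:00.000
    CAST('2014-02-17 23:59:59.997' AS datetime)  AS old_max,   -- .997 is the latest representable value
    CAST('2014-02-17 23:59:59.999' AS datetime2) AS new_type;  -- kept as 23:59:59.9990000
```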

Kenan Zahirovic
  • Is he saying that Unix also only had 300 ticks per second? Or just that both use the concept of a tick? – Martin Smith Feb 17 '14 at 10:57
  • It is about the resolution of the early Unix internal clocks - it has nothing to do with how they ticked, just what their time resolution was. SQL Server has quite a lot of old Unix (Sybase) ancestry; just check the timestamp column, which has no timestamp at all ;) – TomTom Feb 17 '14 at 11:12