Replies: 3 comments
Here are my thoughts...

Background

Barring the use of an atomic clock, no RTC (and thus no timestamp) will have nanosecond accuracy, even if it has nanosecond precision. So it only makes sense to compare nanosecond-precision timestamps for data samples coming from the same device; they aren't portable. When comparing samples between devices, millisecond precision will be more than sufficient.

Proposal

Separate wall clock and monotonic time. Wall clock times would have millisecond precision, while monotonic times would have nanosecond precision.
@bminer thanks for the input -- still processing. Do you know why so many formats (Go, NTP, etc.) separate seconds and nanoseconds into two variables? What is the benefit over just using 64-bit nanoseconds? I suppose int64 seconds would practically never run out (Go time.Time). Maybe 584 years is not enough for some people, but I can't imagine software deployed today still running in 584 years... 32 bits' worth of seconds is only 136 years, so 64-bit nanoseconds would generally be better than 32-bit seconds + 32-bit nanoseconds (what NTP does). SQLite -- is there any harm in storing nanosecond precision? Space would be a few more bytes per point. I agree that for comparing timestamps for CRDT operations, ms is plenty, as times will not be synced between systems any more closely than that. However, since points also represent events in time, I hesitate to throw away precision that may be useful in rules or other algorithms. InfluxDB offers the following suggestion:
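The year figures quoted above can be checked with a few lines of Go (a sketch; the spans use an average 365.25-day year, so treat the results as approximate):

```go
package main

import "fmt"

func main() {
	const nsPerYear = 365.25 * 24 * 3600 * 1e9 // average year in nanoseconds
	const secPerYear = 365.25 * 24 * 3600

	// Full 64-bit nanosecond range (±2^63 around the epoch): ~584 years total.
	spanNs := float64(1<<63) * 2 / nsPerYear
	// 32-bit unsigned seconds (the NTP era length): ~136 years.
	spanSec := float64(1<<32) / secPerYear

	fmt.Printf("64-bit ns span: ~%.0f years\n", spanNs)
	fmt.Printf("32-bit s span:  ~%.0f years\n", spanSec)
}
```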
I think some of the rationale is historical in nature. Storing UNIX timestamps in seconds as an int32 was a common standard for a long time. For example, some file systems still use int32 to store UNIX timestamps in seconds. But most importantly, I think that comparing nanosecond-precision timestamps is an uncommon use case because of clock drift.
Yeah, pretty much. But since SQLite doesn't store the high-precision time series data anyway, I think it's okay to truncate to the nearest ms.
I could be wrong, but I don't think we need high precision for syncing the CRDT. Upstream nodes could have clocks that differ by hundreds or even thousands of ms.
Doing a little research on how timestamps are stored:
https://github.com/simpleiot/simpleiot/blob/master/docs/adr/4-time.md
... and thinking through whether we should make any changes -- especially to the timestamps stored in SQLite. A single value of ns since 1970 makes sense to me. Is there ever a time when having seconds separate is beneficial? It seems like splitting out seconds and ns just makes everything harder -- comparisons, math, etc.
cc @bminer