Despite that, this is actually an excellent way to do things: make the epoch a type parameter, like C++'s time_point. That keeps flexibility while still preventing things like subtracting a unix-epoch point from a when-the-machine-started point.
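A minimal sketch of that idea (the TimePoint name and the UnixEpoch/Boot marker types are made up for illustration): with the epoch as a zero-sized type parameter, subtracting points with different epochs simply doesn't type-check.
use std::marker::PhantomData;
use std::ops::Sub;
use std::time::Duration;

// Hypothetical epoch markers.
struct UnixEpoch;
struct Boot;

struct TimePoint<E> {
    since_epoch: Duration,
    _epoch: PhantomData<E>,
}

// Subtraction is only defined between points that share an epoch.
impl<E> Sub for TimePoint<E> {
    type Output = Duration;
    fn sub(self, rhs: Self) -> Duration {
        self.since_epoch - rhs.since_epoch
    }
}

fn main() {
    let a = TimePoint::<UnixEpoch> { since_epoch: Duration::from_secs(100), _epoch: PhantomData };
    let b = TimePoint::<UnixEpoch> { since_epoch: Duration::from_secs(40), _epoch: PhantomData };
    println!("{:?}", a - b); // fine: same epoch
    // `TimePoint::<UnixEpoch> - TimePoint::<Boot>` would not compile.
}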
In my opinion, a type signature like this:
fn as_millis(&self) -> u64
immediately says one thing very clearly: the output type is u64
. If one needs to handle (almost) billion-year timescales at millisecond precision, then one should definitely evaluate the type requirements, and it should be obvious that this method is inappropriate. For anyone else who wants non-negative millisecond numbers this is fine, so just panic on overflow (just as integer arithmetic does on overflow in debug builds).
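As a rough sketch of that behaviour, here is a free function (not the std API) that produces a u64 millisecond count and panics on overflow, built from Duration's existing as_secs/subsec_millis accessors:
use std::time::Duration;

// Sketch: a u64 millisecond count that panics on overflow.
fn millis_u64(d: Duration) -> u64 {
    d.as_secs()
        .checked_mul(1_000)
        .and_then(|ms| ms.checked_add(u64::from(d.subsec_millis())))
        .expect("duration overflows u64 milliseconds")
}

fn main() {
    println!("{}", millis_u64(Duration::from_secs(3))); // 3000
}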
Should we base Duration on i128, since time machines might eventually lead to negative duration in a distant, but perhaps not astronomically distant, future?  That would also simplify the comparison of Durations and computations about relative Durations.
Maybe a little off topic, but a more realistic use case for a negative Duration would be for SystemTime::duration_since
(or something similar) to return a possibly negative Duration directly instead of a Result.
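For reference, a sketch of how a signed offset can be recovered under today's API, since the SystemTimeError from duration_since carries the magnitude when the ordering is reversed (the i128-nanosecond return type is just for illustration):
use std::time::{Duration, SystemTime};

// Sketch: signed offset between two SystemTimes using the current API;
// the Err branch of duration_since holds the magnitude when `earlier`
// is actually the later of the two.
fn signed_offset_nanos(later: SystemTime, earlier: SystemTime) -> i128 {
    match later.duration_since(earlier) {
        Ok(d) => d.as_nanos() as i128,
        Err(e) => -(e.duration().as_nanos() as i128),
    }
}

fn main() {
    let now = SystemTime::now();
    let past = now - Duration::from_millis(5);
    println!("{}", signed_offset_nanos(past, now)); // negative value
}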
Ambiguity: should ms
then refer to milliseconds or microseconds?
The SI unit for 10^-3 seconds is “ms”; the SI unit for 10^-6 seconds is “µs”, which when limited to ASCII (i.e., Unicode not enabled) is conventionally written in the programming community as “us”.
Of the various SI unit prefixes, “µ” is the only one that is not in the ASCII character set. Note that the conventional substitutes for “µ” when limited to seven-bit ASCII are “u” or “mc”. (The latter is commonly used for mass, such as for pharmaceuticals.)
Precisely. Therefore I think the chrono authors made a conscious decision to not be inconsistent in naming those methods, and pay for it with the current, longer-than-a-single-prefix-letter option. And to be honest, I don’t really see the problem with it.
One thing that might be forgotten with a u128 implementation is that u128 is not free ergonomically: you are still extremely likely to need an as u64 or a try_into::<u64>() when you want to do anything with the returned value. Neither of those is particularly nice. Returning a Result<u64, TooBig> or just a plain u64 is likely to be more ergonomic in essentially every case.
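To make that concrete, assuming an as_millis that returns u128, a caller who actually wants a machine-word value ends up with something like this:
use std::convert::TryFrom;
use std::time::Duration;

fn main() {
    let d = Duration::from_secs(90);

    // Either truncate silently…
    let lossy = d.as_millis() as u64;

    // …or handle a conversion failure that essentially never happens.
    let checked = u64::try_from(d.as_millis()).expect("duration too large for u64 ms");

    println!("{} {}", lossy, checked); // 90000 90000
}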
That said, I don’t know what the use case is for millisecond precision out to the heat death of the universe. Are there people out there who actually want that, or is this more a matter of philosophical purity?
Personally, I’d just like to be able to write:
println!("{} ns", duration.as_nanos());
For most other operations, Duration already supports arithmetic, so I don't really see much point in converting to a u128/u64/f64.
Emscripten seems like one that might matter
Ok, back of envelope… 2^64 ≈ 10^20, and 1 century ≈ 10^10/3 seconds, so 2^64 milliseconds is on the order of 10^9 years?
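Spelling that out (assuming a Julian year of roughly 3.156×10^7 seconds):
fn main() {
    // How long until a u64 millisecond counter overflows?
    let ms_per_year = 1_000.0 * 31_557_600.0; // Julian year ≈ 3.156e7 s
    let years = u64::MAX as f64 / ms_per_year;
    println!("u64 ms overflows after ~{:.1e} years", years); // ~5.8e8 years
}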
If you're trying to track that long a timespan to millisecond precision, I'm pretty sure you're either implementing a duration library yourself or your code is buggy. I don't know why we'd want to double the size of the return type on this function just to let buggy code run. Something like 100% of the people calling it should be calling unwrap on it anyway. (Unless there's a use case I'm not seeing here?)
And if that isn’t good enough, we could always add as_millis_checked
or as_millis_widening
for those who want to be really sure.
+1 to panic on overflow, but its name should have a to_ prefix, not as_, since it can panic.
I personally use either u32 for both time and duration, measured in seconds with a resolution of 1s and a span of 136 years (e.g., for crypto key expiration time relative to Unix/Posix time zero), or a mixed-radix (u32, u32) pair for a resolution of 1 ns and a span of 136 years. I do sometimes need to use TAI rather than UTC, to avoid the bump when leap-seconds are added to the calendar, because I sometimes work with automating continuous processes where derivative computations (e.g., PID) matter.
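A sketch of that kind of representation (the CompactStamp name, its epoch, and its API are purely illustrative):
use std::convert::TryFrom;

// Sketch: a 64-bit (u32 seconds, u32 nanoseconds) timestamp relative to
// an application-chosen epoch; ~136-year span at 1 ns resolution.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct CompactStamp {
    secs: u32,
    nanos: u32, // invariant: nanos < 1_000_000_000
}

impl CompactStamp {
    fn checked_add_nanos(self, delta: u64) -> Option<CompactStamp> {
        let total = u64::from(self.secs) * 1_000_000_000 + u64::from(self.nanos);
        let total = total.checked_add(delta)?;
        Some(CompactStamp {
            secs: u32::try_from(total / 1_000_000_000).ok()?,
            nanos: (total % 1_000_000_000) as u32,
        })
    }
}

fn main() {
    let t = CompactStamp { secs: 1, nanos: 999_999_999 };
    println!("{:?}", t.checked_add_nanos(2)); // Some(CompactStamp { secs: 2, nanos: 1 })
}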
I’ve never had an application requirement that mandates wasting 4B (u64) or 8B or 12B (u128) for time and duration measurements that are totally pointless in those applications. Granted that simulations dealing with cosmological time or quantum physics may require extreme durations and ultra-fine time resolutions that do necessitate u128 (or even better, f128); I’m unlikely to encounter them in the programs that I write or maintain.
In micro-benchmarking, it can be useful to accumulate nanosecond measurements over several seconds and then divide by the number of iterations. That is the main use I can think of, personally, for a u64 duration.
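That usage looks roughly like the following sketch (the workload and iteration count are placeholders):
use std::time::Instant;

fn main() {
    const ITERS: u32 = 1_000_000;

    let start = Instant::now();
    for i in 0..ITERS {
        // Stand-in workload; black_box keeps it from being optimised away.
        std::hint::black_box(i.wrapping_mul(2_654_435_761));
    }
    let total = start.elapsed();

    // Nanoseconds accumulated over seconds easily exceed u32, which is
    // where a wide integer return type comes in handy.
    let per_iter_ns = total.as_nanos() / u128::from(ITERS);
    println!("~{} ns per iteration", per_iter_ns);
}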
High-resolution timestamps are easier to justify, as non-volatile storage systems often need both to know precisely when files were created (for programs like Make or synchronization tools) and to store those timestamps for a long time.
Clearly there is an occasional need for either high resolution or long duration, though probably not both simultaneously. My problem is that choosing a type representation that satisfies both needs at once leads either to the cumulative imprecision of f64 or to the (comparatively) large storage of u128.
Much of my work is on small IoT devices for industrial automation systems, where I often need timestamps on database records. The proposed increased storage size of Duration
impacts database memory use, as well as code size on SoCs that do not have native support for u64
and u128
. For me the change of Duration
to u128
simply means that I will seldom use methods that require a Duration
. Others working in the IoT space may come to similar conclusions.
Might this not be a good use of const generics, which would allow picking the storage size and resolution of the Duration type? Something like:
let x = Duration::<u32>::new();
let y = Duration::<u16>::new();
let z = Duration::<u128>::new();
Then, have the API take/return the appropriate storage type in milliseconds? Though, the more I think about it, the less I think that solves anything???
As far as I know, Duration already uses 96-bit (u64 + u32) storage internally. The proposed change only aims to better expose this internal precision in the interface.
Search for Duration
in the docs, then click [src]:
pub struct Duration {
secs: u64,
nanos: u32, // Always 0 <= nanos < NANOS_PER_SEC
}
As far as I can tell, it has this layout because that's the layout libc
on Linux uses for time, which I believe is higher accuracy than what Windows uses. So it's the size it is to avoid lossy conversion.
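For comparison, a sketch of reading that layout directly on a POSIX target (assumes the libc crate): clock_gettime hands back the same seconds-plus-nanoseconds split.
// Sketch (POSIX targets, `libc` crate): the timespec filled in by
// clock_gettime() is a whole-seconds field plus a sub-second
// nanosecond field, the same shape Duration stores.
fn monotonic_now_raw() -> (i64, i64) {
    let mut ts: libc::timespec = unsafe { std::mem::zeroed() };
    let rc = unsafe { libc::clock_gettime(libc::CLOCK_MONOTONIC, &mut ts) };
    assert_eq!(rc, 0, "clock_gettime failed");
    (ts.tv_sec as i64, ts.tv_nsec as i64)
}

fn main() {
    let (secs, nanos) = monotonic_now_raw();
    println!("{} s + {} ns since the monotonic clock's start point", secs, nanos);
}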
So, if someone wanted a more space-efficient, higher-resolution, or longer period “Duration” could this possibly be made generic?
let small_duration_with_low_resolution = Duration::<u8, u8>::new();
let small_duration_with_high_resolution = Duration::<u8, u128>::new();
let large_duration_with_low_resolution = Duration::<u128, u8>::new();
let standard_current_duration = Duration::<u64, u32>::new();
and then define “Duration” as:
type Duration = Duration<u64,u32>;
or something like that. Would that not allow best-case usage by the client to select the desired storage/length of time/resolution to the best of their needs? Would this create a mess?
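Purely as a sketch of what that could look like (GenericDuration is an invented name, and nothing here is an actual proposal for std):
// Hypothetical: a duration parameterised over its seconds and
// sub-second storage types; today's std layout would be the
// <u64, u32> instantiation.
#[derive(Debug)]
struct GenericDuration<S, N> {
    secs: S,
    nanos: N,
}

type StdLikeDuration = GenericDuration<u64, u32>;

fn main() {
    let tiny = GenericDuration::<u8, u8> { secs: 250, nanos: 0 };
    let standard: StdLikeDuration = GenericDuration { secs: 3, nanos: 500_000_000 };
    println!("{:?} {:?}", tiny, standard);
}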
This seems wildly unnecessary; there’s no point in configuring the nanosecond portion to anything other than u32
, because that’s how much space you need to store subsecond nanoseconds. For this to work, you’d need to selectively reduce the accuracy to… what, larger powers of ten? Treat it as binary fixed point with no leading digits?
And I'm not sure there's any reason to want to shrink the seconds field below u64, because a 32-bit signed count only gives you ~68 years which, if you're measuring anything relative to the UNIX epoch, runs out in 2038.
This only makes sense if you’ve got a structurally different type that lets you select both resolution and size (as a single integer type), but at that point it’s got nothing to do with Duration
, and there’s no need for it to be in the standard library.
Duration
is supposed to be the relative counterpart to Instant
and SystemTime
, whose accuracy is determined by the host environment. In that context, I don’t see any reason to change any of them.
I agree that this is a useful case. But Duration already implements Div<u32>
, so you should be able to do this without converting at all. I'm not 100% sure why there isn't also a Div<u64>
implementation, but that could probably be added if there was demand.
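For example, staying entirely within Duration:
use std::time::Duration;

fn main() {
    // Div<u32> for Duration is already in std, so averaging never leaves Duration.
    let total = Duration::from_secs(5) + Duration::from_millis(250);
    let iters: u32 = 1_000;
    let per_iter = total / iters;
    println!("{:?} per iteration", per_iter); // 5.25ms
}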