Right now there is no easy way to get the extents of representable values for Instant and SystemTime. It is often useful to know the extents of the types, or to have a value that is <= or >= all other values of these types. For example, I would like to implement saturating arithmetic on these types, and it would be very easy if I could just write something like this (a sketch, assuming hypothetical Instant::MIN/Instant::MAX constants):
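```rust
use std::time::{Duration, Instant};

// Hypothetical: Instant::MIN and Instant::MAX do not exist today. With
// them, saturating arithmetic falls out of the existing checked methods.
fn saturating_add(t: Instant, d: Duration) -> Instant {
    t.checked_add(d).unwrap_or(Instant::MAX)
}

fn saturating_sub(t: Instant, d: Duration) -> Instant {
    t.checked_sub(d).unwrap_or(Instant::MIN)
}
```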
It is especially important to have these as part of the API because the representable values are documented as system-dependent, so they can't be hardcoded.
You can binary-search the extents with the checked_add and checked_sub methods, but that is pretty ugly. So by that argument they are already technically part of the public API.
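For example, something like this sketch (it assumes checked_add does eventually fail, i.e. that Instant is actually bounded):

```rust
use std::time::{Duration, Instant};

// Binary-search the maximum representable Instant via checked_add.
fn probe_max() -> Instant {
    let base = Instant::now();
    let mut lo = Duration::ZERO; // base + lo is known to succeed
    let mut hi = Duration::MAX;  // base + hi is assumed to fail
    while hi - lo > Duration::from_nanos(1) {
        let mid = lo + (hi - lo) / 2;
        match base.checked_add(mid) {
            Some(_) => lo = mid,
            None => hi = mid,
        }
    }
    // lo is the largest offset known to succeed, so this can't panic.
    base.checked_add(lo).unwrap()
}
```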
For SystemTime this might be reasonable to have, but the values wouldn't be portable. I think Instant would be more hazardous, because even on a single platform the values would not denote stable points in time. If you tried to calculate durations relative to an Instant::MIN you would get different results after every reboot, because the point in time denoted by that value shifts as the boot-time initial value gets set to some random value, or boot time is zero, or whatever the OS chooses.
But this is already the case with Instants in general. Their values are meaningless across reboots; in fact they aren't even guaranteed to be meaningful across process boundaries. I don't think having Instant::MIN would change that at all. It would make it easier to estimate when the reference time is, but I don't think that changes anything about what that reference time is.
By saying the word "technically", you've activated my rules-lawyer brain!
It's possible I've missed something, but I don't see anything in the Instant or Duration docs that prevents them from being bigints, where checked_add and checked_sub would always succeed (until you run out of RAM). So I think the existence of a minimum/maximum is technically not part of the public API yet!
Instants currently do not provide any fixed reference point that could tempt anyone to construct a distance from it. Adding Instant::MIN would be the first such thing.
This assumes that there is a fixed reference time for Instant, which isn't technically required at the moment. You can meet Instant's current contracts by tracking whether there are any extant instances of Instant, starting the reference timer at a random number the moment the first instance is created, and stopping it when the last instance is destroyed.
If you do this, you get an implementation of Instant that meets all of Rust's API guarantees but saves a tiny amount of power when no time is being tracked, since it can forget the reference time every time the last instance of Instant is destroyed and choose a new reference time when the first new instance is created.
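(For concreteness, a toy sketch of that scheme; the hw-timer hooks and random source are hypothetical and stubbed so it compiles, and a real version would also have to hook Clone:)

```rust
use std::sync::Mutex;

// Hypothetical platform hooks, stubbed out for the sketch.
fn start_hw_timer() {}
fn stop_hw_timer() {}
fn read_hw_timer() -> u64 { 0 }
fn random_u64() -> u64 { 0 }

struct ClockState { instants: usize, reference: u64 }
static CLOCK: Mutex<ClockState> =
    Mutex::new(ClockState { instants: 0, reference: 0 });

struct LazyInstant(u64);

impl LazyInstant {
    fn now() -> Self {
        let mut c = CLOCK.lock().unwrap();
        if c.instants == 0 {
            // First live instance: pick a fresh random reference time
            // and power the timer up.
            c.reference = random_u64();
            start_hw_timer();
        }
        c.instants += 1;
        LazyInstant(c.reference.wrapping_add(read_hw_timer()))
    }
}

impl Drop for LazyInstant {
    fn drop(&mut self) {
        let mut c = CLOCK.lock().unwrap();
        c.instants -= 1;
        if c.instants == 0 {
            // Last live instance gone: stop the clock and forget the
            // reference time entirely.
            stop_hw_timer();
        }
    }
}
```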
Getting off-topic for fun: I don't think you can do that.
Methods like Duration::as_secs() give a maximum size for Duration, and the methods on Instant and SystemTime imply that the distance between any two of them can be represented by a Duration. Duration::MAX also says this explicitly.
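Concretely (both of these hold per the documented definition of Duration::MAX):

```rust
use std::time::Duration;

fn main() {
    // Duration is visibly bounded: as_secs returns a u64, and the docs
    // define Duration::MAX as u64::MAX seconds plus 999_999_999 ns.
    assert_eq!(Duration::MAX.as_secs(), u64::MAX);
    assert_eq!(Duration::MAX.subsec_nanos(), 999_999_999);
}
```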
(Continuing the off-topic for fun) Hmm, I see the difficulty… But perhaps you could still make Instant a type that has a lower bound but no representable minimum, by making it attain higher and higher precision as it gets close to the lower bound.
If really needed it could be Instant::min_value(). Then, as long as you don't hold a value, you can stop the clock. Different calls to min_value() don't necessarily represent the same point in time, just a time smaller than any other time in existence and the minimum representable value.
I would guess that something like this is probably not needed. On a platform like that it would make more sense to use a platform-specific API where you can pick the tradeoff between the cost of reference counting and the cost of running the clock, rather than hoping that std made the right choice for your application and hardware.
You've already made the representation of Instant more complicated here, because it needs to be both "a time smaller than any other times in existence" and "the minimum representable value". It is reasonable to implement Instant as "read the 64-bit time stamp counter register on the CPU, which is synchronised across all CPU harts, initialized to a random value, and counts at a convenient reference clock rate unless all CPU cores are in deep sleep (when it stops)".
The minimum representable value here would be 0u64; but because the TSC in this definition is initialized to a random value, it's possible for 0u64 to be a time in the future (e.g. if the TSC came up at 0u64.wrapping_sub(100_000_000_000) at boot, and the reference clock is 100 MHz, then any application that takes an Instant in the first 1,000 seconds after boot sees one where 0u64 is in the future).
And this makes what you want quite hard to implement: I can have a minimum representable value, but that might represent a time in the future. Or I can let you handle it by having you call Instant::now early in the program; that's then your minimum value, and guaranteed to be earlier than any other Instant because you grabbed it early in execution. This also has the nice side effect of working with the other scheme I described, where the time source only runs while an Instant exists.
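(A sketch of the grab-it-early approach, using OnceLock to hold a process-wide floor; names are illustrative:)

```rust
use std::sync::OnceLock;
use std::time::{Duration, Instant};

// An Instant taken at startup is <= any Instant observed later in the
// process, so it can serve as a saturation floor.
static PROCESS_MIN: OnceLock<Instant> = OnceLock::new();

fn process_min() -> Instant {
    *PROCESS_MIN.get_or_init(Instant::now)
}

fn main() {
    process_min(); // initialize as early as possible
    // Later: saturate backwards arithmetic at the floor instead of panicking.
    let t = Instant::now();
    let back = t
        .checked_sub(Duration::from_secs(3600))
        .map_or(process_min(), |e| e.max(process_min()));
    assert!(back >= process_min());
}
```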
Note that none of this affects SystemTime::MIN, which is reasonable, since there's a reference timestamp there.
Thanks, that is a good overview. Having the minimum value be randomly initialized would be tricky. In that case the runtime would indeed probably need to take some form of reference timestamp at startup and use that as the minimum value. That would rule out having Instant::MIN be const, but I don't think it makes Instant::min_value() a bad idea.
> The minimum representable value here would be 0u64
That isn't true: if this value is considered greater than other values, it isn't the minimum. It is the minimum bit pattern, but that is irrelevant. Instant doesn't have wrapping behaviour in its API, so something built on a clock like this would have to manage its own wrapping internally to provide the Instant API.
I don't see how you could implement the Instant API with pure wrapping and no reference timestamp internally. Instant::duration_since demands that the implementation know which instant came first, so blind wrapping subtraction isn't good enough (see the sketch below). So there probably needs to be some sort of reference point defined, or some sort of assumption about how long a process can run (for example, assuming that the time between two Instants is less than u64::MAX/2). But I don't think that assumption is safe in the face of the Instant math operations; they can make arbitrary amounts of time fly by and invalidate it. Instant::checked_add and Instant::checked_sub also require the implementation to know what that rollover point is, and it needs to be common between all Instants. So, as I said in my original post, I think some "rollover point" is part of the API anyway.
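(A sketch of why raw wrapping subtraction can't decide ordering on its own; the half-range assumption here is exactly the one the math ops can violate:)

```rust
// With raw wrapping counters there is no intrinsic ordering: the wrapping
// distance is symmetric, so deciding which reading came first requires an
// assumption such as "readings are never more than half the range apart".
fn is_not_after(a: u64, b: u64) -> bool {
    // True iff `a` is taken to be no later than `b` under that assumption.
    b.wrapping_sub(a) < u64::MAX / 2
}

fn main() {
    let now = 5u64;
    let earlier = now.wrapping_sub(10); // wrapped below zero
    assert!(is_not_after(earlier, now));
    // Shift `earlier` back by more than half the range and the inferred
    // ordering silently inverts:
    let shifted = earlier.wrapping_sub(u64::MAX / 2);
    assert!(!is_not_after(shifted, now));
}
```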
But this isn't true in the case of subtraction: I can easily subtract some time from an Instant and come up with a time that is less than that early value.
The OS holds whatever internal state is necessary to do the offsetting. We just get an unanchored timestamp. The POSIX (non-)guarantee is:
> CLOCK_MONOTONIC
> A nonsettable system-wide clock that represents monotonic time since—as described by POSIX—"some unspecified point in the past". [...] All CLOCK_MONOTONIC variants guarantee that the time returned by consecutive calls will not go backwards, but successive calls may—depending on the architecture—return identical (not-increased) time values.
SystemTime::MIN and SystemTime::MAX seem reasonable to me. Although just adding the saturating arithmetic routines to SystemTime itself also seems reasonable to me. We could do both, or just one.
Instant::MIN and Instant::MAX seem fraught, for reasons already discussed. Can we just add the saturating arithmetic operations to Instant itself?
And popping up a level, @kevincox, can you say more about the motivation here? Why do you want saturating arithmetic on Instant?
If we are doing this, we may as well add ::min_value() and ::max_value(), since you can compute them anyway with Instant::now().saturating_{add,sub}(Duration::MAX).
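E.g. (a sketch; saturating_add/saturating_sub are the proposed methods, not existing API):

```rust
use std::time::{Duration, Instant};

// If Instant grew the proposed saturating methods, the extents would be
// recoverable in one line each, so they'd be part of the API de facto.
fn min_value() -> Instant {
    Instant::now().saturating_sub(Duration::MAX)
}

fn max_value() -> Instant {
    Instant::now().saturating_add(Duration::MAX)
}
```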
> the motivation here
I am implementing token bucket rate limiting, and you end up shifting timestamps forwards and backwards to track the current fill level of the rate limit. Moving a timestamp forwards is probably not a problem in practice, as any reasonable rate limit should not overflow (though even then, knowing that you won't panic is nice), but moving it backwards can be a problem, especially for Instant, as you never know how long ago your minimum point is. For my use case it is preferable to saturate.
The support for Instant is pretty new, and I probably want to rework things a bit to account for the fact that Instant::MIN can be arbitrarily recent. (Right now the "start fill" time is recorded, but it is probably better to store the "full at" time instead; see the sketch below.) But even with those changes ::MAX is still helpful, and in all cases I would rather saturate than panic.
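(For concreteness, a minimal sketch of that "full at" formulation; all names are illustrative:)

```rust
use std::time::{Duration, Instant};

// Token bucket tracked via a "full at" timestamp: the instant at which
// the bucket would be completely refilled.
struct Bucket {
    full_at: Instant,    // when the bucket is (or was) full again
    per_token: Duration, // time to refill one token
    capacity: Duration,  // per_token * bucket size
}

impl Bucket {
    fn try_take(&mut self) -> bool {
        let now = Instant::now();
        // The bucket can never be "more than full".
        if self.full_at < now {
            self.full_at = now;
        }
        // Spending a token pushes full_at forward. These adds are exactly
        // where saturating (rather than panicking) behaviour near the
        // type's extents would be wanted.
        let new_full_at = self.full_at + self.per_token;
        if new_full_at > now + self.capacity {
            false // empty: taking another token would overdraw
        } else {
            self.full_at = new_full_at;
            true
        }
    }
}
```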
That assumption is very likely to be practically safe. With a 64-bit integer and a 100 MHz reference clock, you're not going to run into the limits of the assumption until the program has been running for over 2,900 years (2^63 ticks of a 100 MHz clock). On the other hand, defining a reference point constrains the platform considerably, or forces extra arithmetic on every Instant::now operation to adjust to the reference time.
They don't need to know what that rollover point is; merely whether their arithmetic crosses one.
As a trivial example, you could say that you're using a 64-bit TSC fed by a 133 MHz reference clock, and you're simply going to fail operations that land more than 1,000 years from Instant::now. That prevents you from crossing a rollover point, but doesn't require knowing where the rollover point is: you're instead saying that getting too far from now is a failure.
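(A sketch of that rule over raw ticks; the constants and names are illustrative:)

```rust
use std::time::Duration;

// checked_add over raw 64-bit ticks that fails when the result lands
// "too far" from the current reading, without ever naming the rollover
// point itself.
const TICK_HZ: u64 = 133_000_000; // 133 MHz reference clock
const LIMIT_TICKS: u64 = 1_000 * 365 * 24 * 3600 * TICK_HZ; // ~1,000 years

fn checked_add_ticks(t: u64, d: Duration, now: u64) -> Option<u64> {
    let d_ticks = d.as_secs().checked_mul(TICK_HZ)?; // whole seconds only
    let result = t.wrapping_add(d_ticks);
    // Wrapping distance from `now`, in whichever direction is shorter.
    let dist = result.wrapping_sub(now).min(now.wrapping_sub(result));
    if dist > LIMIT_TICKS { None } else { Some(result) }
}
```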