Duration as milliseconds


Would you explain the “~70 years”?

2^64 > 1.844E+19. There are 60 * 60 * 24 * 365.24 = 31_556_736 seconds in an average year. That means that a u64 can represent > 5.84E+11 years at 1 s resolution, or > 584 years at 1 ns resolution, and a u32 can represent > 136 years at a 1 s resolution. Where does the 70 years come from?

Edit: Clarified the resolution for the u32 case.
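To make the back-of-envelope figures above concrete, a quick check in Rust (using the same 31_556_736-second average year):

```rust
fn main() {
    let secs_per_year: u64 = 31_556_736; // 60 * 60 * 24 * 365.24
    // u64 at 1 ns resolution: a bit over 584 years
    assert_eq!(u64::MAX / (secs_per_year * 1_000_000_000), 584);
    // u32 at 1 s resolution: a bit over 136 years
    assert_eq!(u32::MAX as u64 / secs_per_year, 136);
}
```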


Working on Planetary Annihilation, I was tracking down a server crash. Long story short: an integer was overflowing, which many steps later led to a bad array access and kaboom.

The overflow worked like this. Building a factory has some amount of WorkLeft that is being done at some Rate. WorkLeft / Rate is TimeLeft. This was in floating point seconds and then converted to 32-bit signed integer milliseconds. 2^31 milliseconds is 24.86 days which should be more than enough for a game that only lasts a couple of hours.

But of course it wasn’t. The problem is that if you have a very slow rate (available material is less than a trickle) combined with a very large amount of work left (asteroid-propelling engine), then TimeLeft can be a hilariously large number. In our case this happened pretty rarely because the rate was typically either zero or a normal number. Because this was in C++, where signed integer overflow is undefined behavior, I couldn’t let the int overflow and had to detect it beforehand. I wrote a whole blog post about this.
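A minimal Rust sketch of the kind of pre-conversion overflow check described above (the original was C++; the name `time_left_ms` and the exact behavior are illustrative, not the actual game code):

```rust
// Convert WorkLeft / Rate (floating-point seconds) to i32 milliseconds,
// detecting overflow up front instead of letting the int wrap.
fn time_left_ms(work_left: f32, rate: f32) -> Option<i32> {
    if rate <= 0.0 {
        return None; // no progress being made; "time left" is unbounded
    }
    let ms = (work_left / rate) as f64 * 1000.0;
    if ms.is_finite() && ms <= i32::MAX as f64 {
        Some(ms as i32)
    } else {
        None // would overflow i32 milliseconds
    }
}

fn main() {
    assert_eq!(time_left_ms(1000.0, 10.0), Some(100_000)); // 100 s of work
    assert_eq!(time_left_ms(1.0e30, 1.0e-10), None);       // trickle rate: overflow
    assert_eq!(time_left_ms(40.0, 0.0), None);             // stalled
}
```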

It’s surprisingly easy to divide a reasonably large number by a reasonably small number and produce an outrageously large number. For example, the circumference of the Earth (~40 million meters) divided by the speed of a slug (0.0028 meters per second) is about 14.3 billion seconds.
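As a quick sanity check of that arithmetic (values approximate):

```rust
fn main() {
    let circumference_m = 40_000_000.0_f64; // ~Earth's circumference
    let slug_speed_mps = 0.0028_f64;        // a slug's pace
    let seconds = circumference_m / slug_speed_mps;
    let millis = seconds * 1000.0;
    // ~1.43e10 s, i.e. ~1.43e13 ms: far past i32's 2^31 - 1 ms (~24.86 days)
    assert!(millis > i32::MAX as f64);
}
```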

Do I think people need u128 bits of millisecond precision? No. But do I think it’s very plausible to encounter extremely large numbers when working with far more normal orders of magnitude? Yes.

I’m not experienced enough with Rust yet to have a strong vote as to what should be done. My current vote is a Result, because I’ve already written overflow detection code, and without a Result I’ll probably have to write it again.


If you go below 64-bits, you get 32-bits. The 32-bit overflow date is around 2040. The epoch is 1970. That’s a roughly 70 year span, which is conspicuously half of your “> 136 years at a 1 s resolution” remark. The one thing I forgot to consider is whether the overflow date was because of signed or unsigned integers.


So, for the record in this thread: the overflow date for an i32 with 1 s resolution, relative to the 1970 Unix epoch base point, occurs in 2038, while for the u32 under discussion in this thread it would be 2106. Earlier in this thread I suggested that Duration might be useful as a signed quantity. If others take that view, then i64 is the minimum reasonable representation for Duration in seconds, with i128 or (i64, u32) needed for Duration in nanoseconds.


For a negative-capable tuple duration, I think you’d need i32 for the nanoseconds, unless you want to represent -1 nanosecond as (-1 second, +999_999_999 nanoseconds).
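A sketch of that normalized “borrow” convention, assuming a hypothetical signed (seconds, nanoseconds) pair built from a total nanosecond count:

```rust
// Normalize a signed total-nanoseconds value into (seconds, nanos) with
// nanos always in 0..1_000_000_000, borrowing from the seconds as needed.
fn normalize(total_nanos: i128) -> (i64, u32) {
    let secs = total_nanos.div_euclid(1_000_000_000);
    let nanos = total_nanos.rem_euclid(1_000_000_000);
    (secs as i64, nanos as u32)
}

fn main() {
    // -1 ns comes out as (-1 second, +999_999_999 nanoseconds)
    assert_eq!(normalize(-1), (-1, 999_999_999));
    assert_eq!(normalize(1_500_000_000), (1, 500_000_000));
}
```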


I strongly object to Duration being a signed quantity; it’s conceptually nonsense: a duration is an amount of time between two events. I can’t think of a single ‘amount’ measure that can be meaningfully negative: see distance or volume for examples.

Durations are often used as offsets when calculating times, and offsets being relative measurements can be negative, but I think conflating the two concepts is a good way to make it unclear what is actually meant, and possibly introduce subtle bugs as a result. I much prefer the current way SystemTime::duration_since() handles negative offsets, with it returning a Result, as that is a good signal saying “hey, have you properly considered this?”.
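For reference, that std API really does force the caller to consider the backwards case:

```rust
use std::time::{Duration, SystemTime};

fn main() {
    let earlier = SystemTime::now();
    let later = earlier + Duration::from_secs(5);
    // Forward difference: Ok(Duration)
    assert_eq!(later.duration_since(earlier).unwrap(), Duration::from_secs(5));
    // Backward difference: Err(SystemTimeError) -- the "hey, have you
    // properly considered this?" signal
    assert!(earlier.duration_since(later).is_err());
}
```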

Also, I can’t vote in the poll for some reason, it just shows the results, but I’d vote for u128, though if that’s a problem for some platforms I’d be happy with the Result option too. @forrestthewoods makes a good point about huge numbers popping out of perfectly normal arithmetic, so I don’t think panicking is the right thing to do, and I can see clamping the value leading to all sorts of WTF moments.


I strongly object to Duration being a signed quantity; it’s conceptually nonsense: a duration is an amount of time between two events.

I mostly agree in principle. But in practice I use negative values with time all the, ahem, time. For example, calculating intersection points of a line (or ray) with a sphere, or any type of work involving a timeline. In Planetary Annihilation we could actually render replays in reverse! Rewinding is a pretty cool effect.

Having typesafe, convertible types for time values is of immense value. We’ve all run into bugs where a value was meant to be seconds but we assumed it was milliseconds. However, negative time values are also something I need as part of normal arithmetic.


Sure, negative time values are very useful (I added support for them to the filetime crate in a PR because they’re useful), let’s just please not allow Duration to have them.


Huh? What’s negative January 15th, 2003? What’s negative 5 days, 13 hours, 21 minutes, 11 seconds, 986 milliseconds?

Seems like the second (a duration) actually has a sensible interpretation of negation to me, whereas the former (a date) does not.


Sorry, “time values” is imprecise, I mean that it’s useful to express “negative 5 days” for arithmetical purposes, but that’s not a duration (I call it an offset, because it’s only meaningful relative to some point in time).

My point is that I’d like to keep being able to distinguish the difference between “there are 5 days between the 5th and the 10th” (duration) and “the 5th is 5 days before the 10th” (offset) in Rust.

Interestingly, I did come across a previous thread discussing signed vs. unsigned Duration in Rust, and RFC #1040, so it looks like Duration was signed at some point before 1.0 but stabilized as unsigned? The motivation given for that choice in the RFC is “unsigned types remove a number of caveats and ambiguities”, but it doesn’t detail them…


This is the distinction between SystemTime, which nobody is suggesting should be negatable, and Duration, where an additive inverse is totally reasonable.

C++ does a good job of this, where a time_point is a duration after(/before) a phantom epoch type.
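Rust’s std draws the same point-vs-span line; a small illustration using Instant:

```rust
use std::time::{Duration, Instant};

fn main() {
    let start = Instant::now();
    let later = start + Duration::from_millis(250); // point + span = point
    assert_eq!(later - start, Duration::from_millis(250)); // point - point = span
    // The reversed subtraction would need a negative span; the checked API
    // returns None instead:
    assert_eq!(start.checked_duration_since(later), None);
}
```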


Yes, that is what I said; my point exactly. In fact, the person I was replying to suggested that Duration didn’t make sense to negate, but that negative time values did. I was pointing out how that didn’t hold together. It all comes down to whether Duration should be a scalar or a vector.


Just a random thought: if the type invariant of Duration is meant to be “just a span between two events” and that can’t be negative, maybe we should have another type, Offset that explicitly marks a negative or positive time offset relative to some Instant. Consider the analogy to usize and isize!
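One possible shape for that hypothetical Offset type (the name and design are purely illustrative):

```rust
use std::time::Duration;

// An explicitly signed offset built on top of an unsigned Duration,
// analogous to how isize relates to usize.
#[derive(Debug, PartialEq)]
enum Offset {
    Forward(Duration),
    Backward(Duration),
}

impl Offset {
    // Negation just flips the direction; the magnitude stays a Duration.
    fn negate(self) -> Offset {
        match self {
            Offset::Forward(d) => Offset::Backward(d),
            Offset::Backward(d) => Offset::Forward(d),
        }
    }
}

fn main() {
    let ahead = Offset::Forward(Duration::from_secs(5));
    assert_eq!(ahead.negate(), Offset::Backward(Duration::from_secs(5)));
}
```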


I might get lynched for this, but why not an f64, which can’t overflow, only gets more inaccurate with larger numbers? Expressing the Duration as milliseconds is going to be inaccurate anyway, since the nanoseconds / microseconds get rounded up or down. This is usually not what I want: I do want the 10.2 ms, but I don’t want to do the “seconds * 1_000 + nanoseconds as f64 / 1_000_000” dance.

Oftentimes I need the “Duration as milliseconds” for things like frame times. So if you add milliseconds as an integer, it’s not going to be of much use (for me), since it doesn’t include the sub-millisecond part. I still have to do the annoying “nanoseconds + seconds” thing, because otherwise it’s simply going to be inaccurate (showing only “16 ms” instead of “16.25 ms”).

I mean, when do you need Durations in milliseconds? For frame times in gaming / graphics work, or for measuring server response times. Those are the top two applications I can think of, and in neither do you want milliseconds as integers. While representing the Duration as an integer is technically correct, for me an f64 would be the pragmatic choice.
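The manual dance versus integer truncation, spelled out for an assumed 16.25 ms frame (Duration::as_secs_f64 is a float accessor std has since stabilized):

```rust
use std::time::Duration;

fn main() {
    let frame = Duration::new(0, 16_250_000); // a 16.25 ms frame
    // The manual dance:
    let ms = frame.as_secs() as f64 * 1_000.0
        + frame.subsec_nanos() as f64 / 1_000_000.0;
    assert_eq!(ms, 16.25);
    // Integer milliseconds drop the sub-millisecond part:
    assert_eq!(frame.as_millis(), 16);
    // The since-stabilized float accessor:
    assert!((frame.as_secs_f64() * 1_000.0 - 16.25).abs() < 1e-9);
}
```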


Prior Art: .Net does this with TotalSeconds/TotalMilliseconds/etc properties.

I don’t think any of the as_* methods should return floating point, but I’d definitely be in favour of a convenient method to get a duration in floating point, maybe .floating_seconds() or .total_seconds() or some better name (to contrast with subsec/as) that I can’t come up with right now.


Why not go fully generic and specialise for:

-> Seconds<f64>


Well, full generic is TimeSpan<f64, Ratio<1, 1000>> instead of .total_millis() :upside_down_face:

Edit: And this full generality is great for interop with other things. That way GetTickCount can use <u32, Ratio<1, 1_000>> while GetTickCount64 uses <u64, Ratio<1, 1_000>>, and a FILETIME can be directly converted to a <u64, Ratio<1, 10_000_000>>. (Though those are TimePoints, not just TimeSpans, since they have an epoch.)
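A rough const-generics sketch of that fully generic shape (illustrative only; a real design would presumably make the representation type generic too, as in the <u32, Ratio<…>> examples above):

```rust
// A span measured in ticks of NUM/DEN seconds, à la C++ std::ratio.
struct TimeSpan<const NUM: u64, const DEN: u64> {
    ticks: u64,
}

impl<const NUM: u64, const DEN: u64> TimeSpan<NUM, DEN> {
    // ticks * (NUM/DEN) seconds, expressed in nanoseconds.
    fn as_nanos(&self) -> u128 {
        self.ticks as u128 * NUM as u128 * 1_000_000_000 / DEN as u128
    }
}

fn main() {
    // GetTickCount-style milliseconds: ticks of 1/1_000 s
    let uptime = TimeSpan::<1, 1_000> { ticks: 1_234 };
    assert_eq!(uptime.as_nanos(), 1_234_000_000);
    // FILETIME-style 100 ns ticks: ticks of 1/10_000_000 s
    let ft = TimeSpan::<1, 10_000_000> { ticks: 10_000_000 };
    assert_eq!(ft.as_nanos(), 1_000_000_000); // one second
}
```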