Pre-RFC: Move SystemTime and Duration to libcore

Proposal

In the future, we might move std::time::Instant to libcore. This is out of the scope of this proposal.

In the future, we might provide some pluggable interface so that an application and/or platform port can provide its own implementation of std::time::SystemTime::now(). This is out of the scope of this proposal.

This proposal does not add any new functionality; it just exposes from libcore some functionality that libstd already exposes.

Motivation

Some crates want to be able to compare times without depending on libstd, so that they can work in #![no_std] environments. Exposing the functionality for comparing times from libcore is helpful for this. Concrete examples include my webpki crate and some other crates I am developing. I believe this would eventually be useful for rustls and many other crates. Rust operating systems might even be able to use these types as their native time/duration types.

libcore/libstd doesn’t necessarily know how to get the current time (in UTC) on every platform, and libstd might not be available. But, the application may have a way of getting the current system time (in UTC), in which case it can construct a SystemTime itself. (Consider, for example, a network of IoT devices where only one trusted device knows what time it is, and other devices on the network fetch the time from it.) Or, the application may not need to do time comparisons based on the current time, but rather only on two explicitly-given times, in which case now() and elapsed() are not needed. This is why they would stay libstd-only. For now, it is expected that #![no_std] applications would construct a SystemTime by adding some Duration to UNIX_EPOCH, where the Duration is constructed by some non-libcore/libstd API that returns the current time as an integer (milliseconds or whatever) relative to the (or some) epoch.
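As a sketch of the construction described above: a timestamp obtained out-of-band (from a trusted device, a network message, etc.) can be turned into a SystemTime without ever calling now(). Today this code needs std; under this proposal the same code could compile against core alone.

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Build a SystemTime from a millisecond timestamp obtained out-of-band,
/// without ever calling SystemTime::now().
fn time_from_unix_millis(millis: u64) -> SystemTime {
    UNIX_EPOCH + Duration::from_millis(millis)
}

fn main() {
    let t = time_from_unix_millis(1_000_000);
    // Round-trips: the offset from the epoch is exactly what we put in.
    assert_eq!(
        t.duration_since(UNIX_EPOCH).unwrap(),
        Duration::from_millis(1_000_000)
    );
}
```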

This subset of the API is a good fit for libcore because it is self-contained. In particular, it doesn’t depend on the memory allocator or other things that aren’t normally present in libcore. Since the code is platform-independent, this proposal doesn’t make it harder to port libcore. Applications that don’t use this functionality won’t be (negatively) affected by it.

Drawbacks

libstd would still need to provide std::time::SystemTime::now() and std::time::SystemTime::elapsed() for backward compatibility, which means that exposing std::time::SystemTime wouldn’t be as simple as pub use core::time::SystemTime. However, I assume there is already some mechanism for doing this that is considered acceptable for other uses.

Alternatives

A crate that doesn’t want to depend on libstd could, in theory, use traits and/or other abstraction mechanisms to provide an API that accepts std::time::SystemTime or some other kind of time. Then #![no_std] uses of the crate would use the trait. I experimented with doing this in one crate (webpki), and it can be made to work, but it isn’t as clean as just having SystemTime always available.
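A minimal sketch of that trait-based approach (the trait and function names here are illustrative, not webpki's actual API): the crate accepts anything that can report a timestamp, and libstd users plug SystemTime in.

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Hypothetical trait a #![no_std] crate could accept instead of
/// requiring std::time::SystemTime directly.
pub trait TimeSource {
    /// Seconds since the UNIX epoch.
    fn unix_seconds(&self) -> u64;
}

// libstd users can plug SystemTime straight in.
impl TimeSource for SystemTime {
    fn unix_seconds(&self) -> u64 {
        self.duration_since(UNIX_EPOCH)
            .map(|d| d.as_secs())
            .unwrap_or(0) // pre-epoch times clamp to 0 in this sketch
    }
}

/// Example crate API written against the trait rather than SystemTime.
fn is_expired(not_after_unix: u64, now: &impl TimeSource) -> bool {
    now.unix_seconds() > not_after_unix
}

fn main() {
    let t = UNIX_EPOCH + Duration::from_secs(100);
    assert!(is_expired(50, &t));
    assert!(!is_expired(200, &t));
}
```

This works, but every caller has to route its time through the trait, which is the extra friction the paragraph above alludes to.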

Thank you

Thank you for taking the time to review this proposal.


Right now only libstd can construct a SystemTime, so for your idea to work, it would need to expose a public way of constructing SystemTime. The representation of SystemTime however, depends heavily on the platform's method of acquiring the current time, so you'd have an API in libcore that depends on system libraries even though libcore is supposed to be independent of that. What would the representation of SystemTime be on a platform where libstd doesn't know how to get the current time? What happens if libcore stabilizes on a specific representation for that platform and then libstd wants to use a different API with a different representation to get the time?

There is already such a way: Add a Duration to UNIX_EPOCH.

The representation of SystemTime however, depends heavily on the platform's method of acquiring the current time, so you'd have an API in libcore that depends on system libraries even though libcore is supposed to be independent of that. What would the representation of SystemTime be on a platform where libstd doesn't know how to get the current time? What happens if libcore stabilizes on a specific representation for that platform and then libstd wants to use a different API with a different representation to get the time?

It's pretty much always going to be a 64-bit integer offset from some epoch that would be a constant Duration away from UNIX_EPOCH, right? Maybe the units would vary?

I actually did a very similar thing in mozilla::pkix; see lib/pkixtime.cpp in briansmith/mozillapkix on GitHub. There, we didn't need sub-second resolution, so it was a bit easier.

(Compare the unix vs. windows implementations in libstd.) The main difference between the representations is the level of precision and the range of times that can be represented.


AFAICT, there could be a single representation of SystemTime as a Duration (thus nanosecond resolution) since the UNIX epoch, and we can say that times before the UNIX epoch are not necessarily supported. (I think this is kind of implied already.) Or we could even define SystemTime as a number of nanoseconds since the UNIX epoch.

SystemTime necessarily depends on what sort of times the system can return. Windows can handle system times that are older than the UNIX epoch. What would libstd do if its SystemTime was simply nanoseconds since the UNIX epoch and it received such a time from Windows?
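For reference, std's existing API already surfaces pre-epoch times without panicking: duration_since returns an Err when a time precedes its argument, and the error carries the (positive) gap. A small demonstration, assuming the platform can represent a time before the epoch (checked_sub returns None otherwise):

```rust
use std::time::{Duration, UNIX_EPOCH};

fn main() {
    // A time one hour before the epoch, if the platform can represent it.
    if let Some(pre_epoch) = UNIX_EPOCH.checked_sub(Duration::from_secs(3600)) {
        // duration_since reports pre-epoch times as an Err carrying the gap.
        let err = pre_epoch.duration_since(UNIX_EPOCH).unwrap_err();
        assert_eq!(err.duration(), Duration::from_secs(3600));
    }
}
```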


Maybe the same thing as when you're reading an NTFS filesystem on Linux and the last modified time of a file is earlier than the Unix epoch? A user of SystemTime can't expect times earlier than the Unix epoch to work. If a person needs the actual timestamp of a file (or whatever) that might be earlier than the Unix epoch, then an operating-system-specific API should be used.

In that case it is the operating system's problem, not Rust's problem.

SystemTime is that operating system specific API. It is supposed to be able to precisely represent any time that the OS returns so it can be given back to the OS without any loss of information. That is why it is called SystemTime and not UnixTime.

I propose this:

In libcore, SystemTime only needs to be specified to be able to store the result of adding any Duration to UNIX_EPOCH; i.e. times with nanosecond resolution on or after 1970-01-01 and before whatever would cause a 64-bit overflow with nanosecond resolution. Thus, the default representation can indeed be a Duration relative to UNIX_EPOCH.
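A sketch of what that default libcore representation could look like (names and derives here are illustrative, not a proposed final API): a newtype over Duration, with UNIX_EPOCH as the zero offset.

```rust
use core::ops::Add;
use core::time::Duration;

/// Sketch of the proposed default libcore representation: a Duration
/// offset from the UNIX epoch. Times before the epoch are not representable.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct SystemTime(Duration);

pub const UNIX_EPOCH: SystemTime = SystemTime(Duration::from_secs(0));

impl Add<Duration> for SystemTime {
    type Output = SystemTime;
    fn add(self, d: Duration) -> SystemTime {
        SystemTime(self.0 + d)
    }
}

fn main() {
    let t = UNIX_EPOCH + Duration::from_secs(1_000_000_000);
    assert!(t > UNIX_EPOCH);
}
```

Platforms whose libstd needs a wider range (e.g. pre-epoch times) would substitute their own representation, as described above.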

In libstd, in addition to handling any result of adding a Duration to UNIX_EPOCH, SystemTime also needs to be able to handle “any time the OS returns”. This might call for a specialized representation for libstd; the same specialized representation should be used for libcore on that same platform.

About 585 years after the epoch, which is why Duration is not simply 64-bit nanoseconds, but rather 64-bit seconds with a 32-bit nanosecond part.
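The arithmetic behind that figure, as a quick check: a u64 nanosecond counter holds at most about 1.8 × 10^19 ns, which is roughly 585 years.

```rust
/// u64::MAX nanoseconds expressed in (Julian, 365.25-day) years.
fn max_u64_nanos_in_years() -> f64 {
    (u64::MAX as f64) / 1e9 / (365.25 * 24.0 * 3600.0)
}

fn main() {
    // A 64-bit nanosecond counter overflows roughly 585 years after its
    // epoch, hence Duration's (64-bit seconds, 32-bit nanoseconds) split.
    println!("{:.1} years", max_u64_nanos_in_years());
}
```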

I imagine the "Sealed traits" part is because of copying another pre-RFC?

Yes! I edited that out now.

I like the idea of the RFC (and am generally in favor of moving things into core), but I understand and sort of agree with the sentiment that SystemTime is platform-specific. I’d be more interested in seeing a generic Date/Time API that can work in core. There should be conversions between that and SystemTime.

There is one, Instant. However, it has the undesirable property of promising to be monotonically increasing, which requires it to be less efficient in various ways. And, it doesn't provide any functionality to ground it into a calendar.

We could define a third time type, to go along with Instant and SystemTime. But, IMO, it wouldn't improve the situation for any system where libstd already implements SystemTime, which is almost every platform. Adding a new type would be pure increased complexity for those platforms. My idea of generalizing SystemTime so it can work in libcore is to avoid adding any new complexity.

It is interesting that Duration has nanosecond resolution but SystemTime doesn't necessarily. I guess that means that laws of addition don't apply; e.g. this assertion may fail: assert_eq!(s + d - s, d). If this is allowed, it may not be a good idea to use Duration as the internal storage of a generic SystemTime implementation, since it would be storing the nanoseconds component, which is generally unnecessary.
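To make the failing addition law concrete, here is a hypothetical SystemTime-like type with only one-second resolution (not std's actual SystemTime), showing how (s + d) - s can differ from d when the time type is coarser than Duration:

```rust
use std::time::Duration;

/// Hypothetical time type with only one-second resolution.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct CoarseTime {
    secs: u64,
}

impl CoarseTime {
    fn add(self, d: Duration) -> CoarseTime {
        // The sub-second part of `d` is silently dropped.
        CoarseTime { secs: self.secs + d.as_secs() }
    }

    fn sub(self, earlier: CoarseTime) -> Duration {
        Duration::from_secs(self.secs - earlier.secs)
    }
}

fn main() {
    let s = CoarseTime { secs: 1_000 };
    let d = Duration::new(2, 500_000_000); // 2.5 s
    // The half second vanishes: (s + d) - s == 2 s, not 2.5 s.
    assert_ne!(s.add(d).sub(s), d);
    assert_eq!(s.add(d).sub(s), Duration::from_secs(2));
}
```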

In any case, if the libs team would rather have a separate type like UnixTime, then I can define this in my own crate, I think.

