This seems wildly unnecessary; there’s no point in configuring the nanosecond portion to anything other than u32, because the subsecond count ranges over 0..10^9, which already needs 30 bits, so u32 is the smallest standard integer that fits. For this to work, you’d need to selectively reduce the precision to… what, larger powers of ten? Treat it as binary fixed point with no leading digits?
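For context, a minimal sketch of the layout this already implies (the field names are illustrative; std’s actual fields are private):

```rust
// Rough sketch of the representation Duration already implies.
// Subsecond nanoseconds range over 0..1_000_000_000, which needs 30 bits,
// so u32 is already the smallest standard integer that can hold them.
struct MyDuration {
    secs: u64,  // whole seconds
    nanos: u32, // always < 1_000_000_000
}

fn main() {
    // 10^9 - 1 needs 30 bits; u16 (max 65_535) clearly can't hold it.
    let max_nanos: u32 = 999_999_999;
    println!("bits needed: {}", 32 - max_nanos.leading_zeros()); // 30
}
```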
And I’m not sure there’s any reason to want to reduce the seconds field below u64, because 32 bits only gives you ~68 years signed, which, if you’re measuring anything relative to the UNIX epoch, runs out in 2038 (the classic Year 2038 problem); even unsigned, you only get to 2106.
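A back-of-the-envelope check of those numbers:

```rust
fn main() {
    const SECS_PER_YEAR: f64 = 365.25 * 24.0 * 3600.0;
    // Signed 32-bit seconds overflow ~68 years after 1970 (the Year 2038
    // problem); unsigned 32-bit buys ~136 years, i.e. until 2106.
    println!("i32: ~{:.0} years", i32::MAX as f64 / SECS_PER_YEAR); // ~68
    println!("u32: ~{:.0} years", u32::MAX as f64 / SECS_PER_YEAR); // ~136
}
```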
This only makes sense if you’ve got a structurally different type that lets you select both resolution and size (as a single integer type), but at that point it’s got nothing to do with Duration, and there’s no need for it to be in the standard library.
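If it helps, here’s a rough sketch of the kind of standalone type I mean; `Ticks` and its parameters are made up for illustration, not a proposal for std:

```rust
// Hypothetical "pick your own resolution and width" type: a bare tick count,
// structurally unrelated to std::time::Duration. Both the tick size
// (NANOS_PER_TICK) and the backing integer are chosen by the user.
#[derive(Copy, Clone, Debug)]
struct Ticks<T, const NANOS_PER_TICK: u64> {
    count: T,
}

impl<const NANOS_PER_TICK: u64> Ticks<u32, NANOS_PER_TICK> {
    fn as_nanos(self) -> u64 {
        self.count as u64 * NANOS_PER_TICK
    }
}

fn main() {
    // e.g. millisecond resolution in 32 bits: ~49.7 days of range.
    let t: Ticks<u32, 1_000_000> = Ticks { count: 1500 };
    println!("{} ns", t.as_nanos()); // 1_500_000_000 ns = 1.5 s
}
```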
Duration is supposed to be the relative counterpart to Instant and SystemTime, whose precision is determined by the host environment. In that context, I don’t see any reason to change any of them.
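For reference, Duration already falls out of the absolute types like this (all real std APIs):

```rust
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};

fn main() {
    // Duration is what you get back when you subtract the absolute types;
    // the precision of the result is whatever the host clock provides.
    let start = Instant::now();
    let elapsed: Duration = start.elapsed();

    let since_epoch: Duration = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the UNIX epoch");
    println!("{:?} elapsed, {:?} since the epoch", elapsed, since_epoch);
}
```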