For clarity, I meant having it as an opaque wrapper type, not a type alias. The semantics would not change from current.
Really overengineered idea:
type u8 = Wrapping<UInt<0..=255>>;
The knock-on effects of this don't sit well with me. What is the type of Wrapping<UInt<A..=B>> + Wrapping<UInt<C..=D>>? How does that mesh with the return type of UInt<A..=B> + UInt<C..=D>?
A type error, just like u8 + u32. I'm not saying it's necessarily a good idea (I have no concrete proposal), just an interesting one.
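For reference, here's what the "type error, just like u8 + u32" behavior looks like in today's Rust: mixed-width addition doesn't compile, and you must convert explicitly. Presumably the same rule would apply to differently-ranged UInt types (a sketch; UInt is the hypothetical type from this thread, not anything in std).

```rust
fn main() {
    let a: u8 = 200;
    let b: u32 = 100_000;

    // let c = a + b; // error[E0308]: mismatched types -- u8 + u32 is a type error

    // Today you convert explicitly; UInt<A..=B> + UInt<C..=D> would
    // presumably require the same kind of explicit widening.
    let c = u32::from(a) + b;
    println!("{c}"); // 100200
}
```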
The biggest issue is, well, u8 isn't actually Wrapping<UInt<0..=255>>, because that's Wrapping<u8>. u8 is instead WrappingOrPanicking<UInt<0..=255>>. And since Wrapping<u8> works today, we'd need to support both Wrapping<UInt<0..=255>> and Wrapping<WrappingOrPanicking<UInt<0..=255>>>, or do some sort of magic to make Wrapping<u8> not do that nesting, even in the face of generics.
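A quick check of the semantics that cause the nesting problem: std::num::Wrapping<u8> exists today and wraps modulo 256, so under the alias above it would secretly be a doubly-wrapped type. (WrappingOrPanicking and UInt are the hypothetical names from this thread, not real std items.)

```rust
use std::num::Wrapping;

fn main() {
    // Wrapping<u8> works in std today: arithmetic wraps modulo 256.
    let x = Wrapping(250u8) + Wrapping(10u8);
    assert_eq!(x.0, 4); // 250 + 10 = 260, and 260 mod 256 = 4

    // Under the proposed alias, this same Wrapping<u8> would really be
    // Wrapping<WrappingOrPanicking<UInt<0..=255>>> (hypothetical types)
    // unless some magic collapses the nesting.
    println!("{}", x.0);
}
```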
Interesting idea, but probably not a very viable one. Thus why I called it overengineered.
Or we just make UInt<A..=B> have the conditionally-wrapping-or-panicking modular arithmetic behavior, then add e.g. Growing<UInt<A..=B>> for the growing-bounds behavior. I still don't really like it, but that at least seems like a possible solution.
I agree about this not being the greatest path, but thinking about this some more… is this something we'd be OK with?
let a: Wrapping<10..=20> = 15;
let b = a + 7;
assert_eq!(b, 11); // originally had 12, but `20` eats up a step, so it should be 11
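To spell out the arithmetic in that snippet: wrapping over an inclusive range lo..=hi is modular arithmetic with a modulus of hi - lo + 1 values. A minimal sketch, using a hypothetical helper wrap_into (assumes x >= lo; not anything proposed in the thread, just the math):

```rust
// Hypothetical helper: wrap `x` back into the inclusive range lo..=hi,
// treating the range as a modulus of (hi - lo + 1) values.
// Assumes x >= lo, so the subtraction cannot underflow.
fn wrap_into(x: u64, lo: u64, hi: u64) -> u64 {
    let len = hi - lo + 1;
    lo + (x - lo) % len
}

fn main() {
    // The example above: a Wrapping<10..=20> holding 15, plus 7.
    // 22 overshoots 20; stepping past 20 lands on 10, so 21 -> 10 and 22 -> 11.
    let b = wrap_into(15 + 7, 10, 20);
    assert_eq!(b, 11);
}
```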
Maybe instead of a completely free choice of range, there should be Biased<Bias, Size>, an integer type which can represent Size distinct values, offset from 0 by Bias. I would suppose that something like type UInt<A..=B> = Biased<A, Len<A..=B>::SIZE>; might be possible with enough const magic. It'd make 1-based indexing APIs (e.g., Lua or Julia bindings) easier to work with, I suppose…
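A rough sketch of what Biased<Bias, Size> could look like with today's const generics (all names here are hypothetical illustrations of the idea, not a worked-out design): store a zero-based raw value and add the bias on the way out, which makes the 1-based-indexing case fall out naturally.

```rust
// Hypothetical sketch: Biased<BIAS, SIZE> stores a zero-based raw value
// and represents the SIZE values BIAS, BIAS + 1, ..., BIAS + SIZE - 1.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Biased<const BIAS: i64, const SIZE: u64> {
    raw: u64, // invariant: raw < SIZE
}

impl<const BIAS: i64, const SIZE: u64> Biased<BIAS, SIZE> {
    fn new(value: i64) -> Option<Self> {
        let raw = value.checked_sub(BIAS)?;
        (0..SIZE as i64).contains(&raw).then(|| Self { raw: raw as u64 })
    }

    fn value(self) -> i64 {
        BIAS + self.raw as i64
    }
}

fn main() {
    // A 1-based index with 10 valid values (1..=10), as a Lua binding might want.
    type LuaIndex = Biased<1, 10>;
    let i = LuaIndex::new(1).unwrap();
    assert_eq!(i.value(), 1);
    assert!(LuaIndex::new(0).is_none()); // 0 is out of range for 1-based indexing
    assert!(LuaIndex::new(11).is_none());
}
```

The expressible ranges are the same as UInt<A..=B>'s; the difference is purely which parameters the user writes down (a start plus a count, instead of two endpoints).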
(not to anyone in particular)
I didn't mean to derail this thread; it should probably be split off to a new thread if @moderators wouldn't mind.
For the primitive integers, it's honestly probably best to avoid discussion on that, as realistically it's not a blocker in any way and may never change.
I'd just like to quickly drop this: if the range of an N-bit signed integer is [-2^(N-1), 2^(N-1)-1], then that would imply the only value of a 0-bit integer is -1/2. A perfect compromise between -1 and 0.
Except for the fact that -1/2 is not an integer...
Anyway, I think this whole "0-bit integer should be 0 or -1" argument is just a distraction from the more interesting questions in this thread. I suggest that people who really want to discuss it create a new thread dedicated to the numerical value of a 0-bit integer.
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.