What's a "zero-bit integer"?

Overflow is a gotcha for all fixed-size types:

let x: u<N> = 7;

only works if N >= 3. So you always run into the problem of constraining how many bits you need.
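
A quick sketch of that constraint in today's Rust (the `bits_needed` helper is hypothetical; it just counts significant bits, i.e. the minimal N such that a value would fit in a u&lt;N&gt;):

```rust
// Hypothetical helper: minimal N such that x would fit in a u<N>.
// (The u<N> syntax itself is speculative; this only counts bits.)
fn bits_needed(x: u128) -> u32 {
    128 - x.leading_zeros()
}

fn main() {
    assert_eq!(bits_needed(7), 3); // 7 = 0b111 needs at least u3
    assert_eq!(bits_needed(8), 4); // 8 = 0b1000 would overflow u3
    assert_eq!(bits_needed(0), 0); // 0 fits even in u0
    println!("ok");
}
```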

Wouldn't -1 be the multiplicative identity in i1? At least assuming wrapping arithmetic...

Well, I guess with wrapping arithmetic, i1 is no different from u1 though. (Except of course for how it converts to other, bigger, signed integer types.)
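
A quick check in current Rust, using u8/i8 as stand-ins: the wrapping arithmetic agrees bit-for-bit, and only the widening conversion differs:

```rust
fn main() {
    let a: u8 = 0xFF; // as an unsigned value: 255
    let b = a as i8;  // same bit pattern, signed value: -1

    // wrapping multiplication produces the same bit pattern either way:
    assert_eq!(a.wrapping_mul(3), b.wrapping_mul(3) as u8);

    // but projection into a 16-bit type does not:
    assert_eq!(a as u16, 255);
    assert_eq!(b as i16, -1);
    println!("ok");
}
```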

Yes - it's true that -1 is the multiplicative identity within i1, if you interpret it as the finite field with 2 elements. (Similarly, the singular value of i0 would be the multiplicative identity within i0!) But it doesn't agree with the multiplicative identity of any larger iN you project it into, which could potentially be a problem.
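
This can be simulated in current Rust by truncating to one bit and sign-extending (a sketch, since i1 doesn't actually exist yet):

```rust
// i1 would have the two values 0 and -1; simulate its wrapping
// multiplication by keeping the low bit, then sign-extending.
fn i1_mul(a: i8, b: i8) -> i8 {
    let low = a.wrapping_mul(b) & 1; // multiply mod 2
    if low == 1 { -1 } else { 0 }    // sign-extend the single bit
}

fn main() {
    assert_eq!(i1_mul(-1, -1), -1); // -1 * -1 wraps back to -1
    assert_eq!(i1_mul(-1, 0), 0);   // so -1 acts as the identity
    assert_eq!(i1_mul(0, 0), 0);
    println!("ok");
}
```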


Right... I forgot that wrapping multiplication always works the same between uN and iN. This means that projection into larger integer types is actually the only thing (besides overflow conditions¹) that differentiates uN from iN. And that demonstrates the problem with i0: while all values of uN map to nonnegative numbers when projected into larger integer types, for iN half of them map to negative numbers and half to nonnegative numbers. Since you can't split the set of values of i0 in half, there's a problem! (~~u8~~ u0 is the only integer type with an odd number of values.) No splitting-in-half is required for uN, so u0 is not a problem.

¹ But those can be defined in terms of projections, too. Overflow is when projecting first (into a sufficiently large integer type) vs. adding/multiplying first gives different results.
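
That definition can be written out directly (a sketch using u8 and u16 as the narrow/wide pair):

```rust
// Overflow as "project first vs. operate first disagree":
fn add_overflows_u8(a: u8, b: u8) -> bool {
    let wide = a as u16 + b as u16;        // project, then add
    let narrow = a.wrapping_add(b) as u16; // add, then project
    wide != narrow
}

fn main() {
    assert!(!add_overflows_u8(100, 100));
    assert!(add_overflows_u8(200, 100));
    // agrees with the standard library's overflow detection:
    assert_eq!(add_overflows_u8(200, 100), u8::checked_add(200, 100).is_none());
    println!("ok");
}
```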


Why does u8 have an odd number of variants? I don't think it does.


It has exactly 1 value, the value 0. And 1 is odd :slight_smile:

Edit: Oh! I said u8; I meant u0!! Stupid typos due to habit :sweat_smile:


0b101.(0) = 0b100.(1), and the choice that values of integer types represent the lower bound of (n, n + 1) instead of the upper bound is to some extent arbitrary.

On a more practical note, the RISC-V extension B (bitmanip) draft includes slo and sro instructions, which do exactly that. (The stated reason is that it's meant as a replacement for shift-through-carry. I can see some other applications too.)

Cool! I've actually had reasons to want this operation in the past (in my case, for doing math about binary-aligned bounding boxes in collision detection code)
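
For reference, slo/sro are easy to express in plain Rust by complementing around an ordinary shift (a sketch of the operation, not the actual instruction encoding):

```rust
// "Shift left/right ones": shift, but fill the vacated bits with
// ones instead of zeros, by shifting the complement and flipping back.
fn slo(x: u32, n: u32) -> u32 {
    !(!x << n)
}

fn sro(x: u32, n: u32) -> u32 {
    !(!x >> n)
}

fn main() {
    assert_eq!(slo(0b101, 1), 0b1011);
    assert_eq!(slo(0b101, 2), 0b10111);
    assert_eq!(sro(0b1000, 1), 0x8000_0004); // ones come in from the top
    println!("ok");
}
```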

This doesn't change the result of a left shift:

0b101 << 1 = 0b100.111111... << 1 = 0b1001.111111... = 0b1010

I disagree; integers don't represent ranges, they represent single numbers.

You assume that the implied fractional part of an integer value is an infinite sequence of zero bits. But it's equally self-consistent to assume it is an infinite sequence of one bits: in other words, to take 0b101 as representing the real number 0b101.(1) instead of 0b101.(0). In this model, shifting in ones is equivalent to moving the fractional point to the right. (Of course, this changes the representable value range from [0, 2^N) to (0, 2^N], and arithmetic instructions would have to be appropriately adjusted as well. It's also harder to explain how this works with signed types. But it's possible to do.)
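
A small sanity check of that model in current Rust (the `ones_value` helper is just this thread's x.(1) = x + 1 reading, nothing standard): under it, shifting in ones really is multiplication by 2.

```rust
// Trailing-ones model: bit pattern x represents the real number
// x.(1) = x + 1.
fn ones_value(x: u32) -> u64 {
    x as u64 + 1 // e.g. 0b101.(1) = 0b110 = 6
}

// Shift left, filling the vacated bits with ones.
fn slo(x: u32, n: u32) -> u32 {
    !(!x << n)
}

fn main() {
    let x = 0b101u32;
    assert_eq!(ones_value(x), 6);
    // shifting in a one doubles the represented value:
    assert_eq!(ones_value(slo(x, 1)), 2 * ones_value(x)); // 0b1011.(1) = 12
    println!("ok");
}
```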

What you're saying is that a sequence of bits (b3, b2, b1, b0), instead of representing 8 * b3 + 4 * b2 + 2 * b1 + 1 * b0, could be used to represent an infinite sum 8 * b3 + 4 * b2 + 2 * b1 + 1 * b0 + 1/2 * 1 + 1/4 * 1 + 1/8 * 1 + ... = 8 * b3 + 4 * b2 + 2 * b1 + 1 * b0 + 1.

I suppose you could do that, but that's not what we are already doing with u32 and i32, so why would that have any bearing on what we should do with u0, i0?

The infinite sum is a more complicated math formula, so I don't see that as an equally good, symmetrical idea. Nobody does that in practice. Also it would mean 0 can't be represented in u32, and it would change the meaning of bitwise operators like & and |.

This needs more visibility. The number of possible states for N bits is 2^N, so for an uninhabited type, 2^N = 0 holds. The (hand-wavy) solution for that equation is N = -inf, however, we can't represent that usefully in actual code/hardware, hence, uninhabited types are zero-sized.

(Still, I can't wait for the day when I can magically obtain arbitrary amounts of free memory by allocating uninhabited types!)
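
For what it's worth, today's compiler already agrees that uninhabited types are zero-sized:

```rust
use std::convert::Infallible;
use std::mem::size_of;

enum Void {} // no variants, so no possible values

fn main() {
    assert_eq!(size_of::<Void>(), 0);
    assert_eq!(size_of::<Infallible>(), 0);
    // a Vec of a zero-sized type never touches the allocator:
    let v: Vec<Void> = Vec::with_capacity(1 << 30);
    assert_eq!(v.len(), 0);
    println!("ok");
}
```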


Isn't binary 0.11111... = 1, similar to how decimal 0.9999... = 1? I feel like we make fewer outrageous assumptions by expecting operations on numbers to be consistent with the mathematical values they represent. In that case, it only makes sense for there to be an infinite sequence of zeros to the right of the fractional point; otherwise the operation 0b0 + 0b0 => 0b0 equates to the expression 1 + 1 = 1.

If I've missed something, let me know, I just don't currently see how assuming an infinite sequence of ones is internally consistent at all.
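
The 0b0.(1) = 1 identity can at least be checked numerically: the partial sums 1/2 + 1/4 + ... are exact in f64 up to the last mantissa bit, and past that the running sum rounds to exactly 1.0.

```rust
fn main() {
    // partial sums 1 - 2^-n are exactly representable in f64 here:
    let partial: f64 = (1..=52).map(|k| 0.5f64.powi(k)).sum();
    assert_eq!(partial, 1.0 - 0.5f64.powi(52));

    // once the tail drops below f64 resolution, the sum is exactly 1.0:
    let full: f64 = (1..=60).map(|k| 0.5f64.powi(k)).sum();
    assert_eq!(full, 1.0);
    println!("ok");
}
```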


I actually have this in lccc's IR/intermediate system. uint(0) can store the value 0 and takes up no space. In fact, I use this fact in a few places, and uint nonzero(0) is the canonical uninhabited type in xir, as the only possible value, 0, is invalid for the type.

You can do this already. You need to actually store a value in the allocation, though, not just allocate it :smirk:*.

*The as-if rule excludes programs that result in undefined behaviour, thus an implementation need not actually allocate that memory if you produce a value of an uninhabited type to store.


The assumption that the fraction point is followed by zeroes is embedded into the pencil-and-paper addition algorithm as well, in that the initial value of the carry digit (applied to the least-significant digit of the result) is 0. If you start with a carry of 1 instead, you get the correct result that 0.(9) + 0.(9) = 1.(9), or 0b0.(1) + 0b0.(1) = 0b1.(1). Just like multiplying by 2 has a different interpretation in terms of bit shifts, addition also acts differently on such bit representations.
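
Sketching that adjusted algorithm in Rust (with u8 standing in for an N-bit pattern, and `ones_value` for this thread's x.(1) = x + 1 reading):

```rust
fn ones_value(x: u8) -> u16 {
    x as u16 + 1 // pattern x stands for x.(1) = x + 1
}

fn ones_add(a: u8, b: u8) -> u8 {
    // ordinary wrapping add, but with an initial carry of 1
    // coming in from the infinite run of ones below the point
    a.wrapping_add(b).wrapping_add(1)
}

fn main() {
    // 0b0.(1) + 0b0.(1) = 1 + 1 = 2 = 0b1.(1):
    assert_eq!(ones_add(0b0, 0b0), 0b1);
    // and in general the represented values add correctly:
    assert_eq!(ones_value(ones_add(3, 5)), ones_value(3) + ones_value(5));
    println!("ok");
}
```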