Pre-RFC: Generic integers v2

Does this feature make sense from a performance perspective?

Presumably u42 (etc.) is slower than u64 for many operations. What should the documentation story be for this? Not all users (especially those coming from high-level languages) may have the knowledge required to make suitable judgement calls on when to use this.

(I have a C/C++ background and even I don't know exactly what the performance overhead will be!)

Also, for big types (as may be used for bignums, crypto, and suchlike), are the underlying LLVM types and algorithms as efficient as what dedicated bignum libraries (GMP etc.) provide?

The size and alignment of uint<N> and int<N> should be rounded up to a power of two.

Shouldn't the alignment be unspecified, so that it can be platform-specific according to the architecture's native alignment for integers of that size?

Probably not materially so. For operations in locals, LLVM will widen the intermediates, then just mask once at the end if it's actually necessary.
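
A minimal sketch of what that lowering looks like, assuming a hypothetical u42 carried in the low bits of a u64 (the names and the explicit mask are illustrative, not from the RFC):

```rust
/// Mask selecting the low 42 bits; everything above is padding.
const U42_MASK: u64 = (1 << 42) - 1;

/// Sum three 42-bit values, each stored in the low bits of a u64.
/// The intermediate additions run at full u64 width; a single mask at
/// the end restores the 42-bit invariant.
fn sum3_u42(a: u64, b: u64, c: u64) -> u64 {
    a.wrapping_add(b).wrapping_add(c) & U42_MASK
}
```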

I think we should have uint<N> and sint<N>/int<N> where N is a usize, specifically because const generics don't support casting between u32/usize and don't look like they will for a long time. usize would be needed both for portable-simd bitmasks Mask<iN, LEN> (iN is ignored for bitmasks, LEN is usize) and probably for things like BitSlice, where indexing and length all use usize.

That makes sense, and has alleviated some of my concerns.

Though I still think it might be hard to teach (it will be the "too many types of strings" issue all over again: something that I, with my C/C++/Erlang/Haskell/Python background, had no issues with, but which I have seen trip people up time and again on Reddit and URLO).

Should it be presented as an advanced topic (mostly for FFI and packing structs into less memory)? Something on the level of repr(C)/repr(packed)?

Or is this intended to be taught alongside the other numeric types? How will the difference between this and pattern types (if we do get those) be taught? That in particular could be really confusing for new users (and for me too; it seems pattern types would be strictly more powerful than this).

You say Option<uint<7>> will be one byte in size, but also that the padding bits for uint will always be zero. This is contradictory.

I've been thinking of it as not that different from what happens today. Basically, we already don't teach u16 specifically at all. It shows up in https://doc.rust-lang.org/book/ch03-02-data-types.html?highlight=u16#integer-types but nowhere else in the entire book. If you know you need it, you can use it, but most people just don't.

So if Unsigned<24> now exists too, and works in a perfectly analogous way, then that might be entirely fine.

The main difference is subtyping. You can pass a &(u32 is (..10)) to something that wants a &u32, but you can't pass a &u16 to something that wants a &u32.

Adding &u24 to that doesn't change things.

It isn't contradictory; the Some variant will always have a zero in the padding bit, and the None variant doesn't have any integer payload, so its value can set that bit to a nonzero value.
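
Stable Rust already does the analogous thing with other niches, so the layout is easy to demonstrate by analogy (my example, not from the RFC):

```rust
use std::mem::size_of;
use std::num::NonZeroU8;

fn main() {
    // bool only uses the bit patterns 0 and 1, so None can occupy 2..=255.
    assert_eq!(size_of::<Option<bool>>(), 1);
    // NonZeroU8 reserves 0, which Option uses for None.
    assert_eq!(size_of::<Option<NonZeroU8>>(), 1);
}
```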

Also, to just blanket-reply to the folks talking about bounded integers: I'm not sure how many of you have actually read the RFC, but I would appreciate some more direct feedback on it.

My general opinion is that:

  1. Ranged integers would ultimately require variable-bit-length integers under the hood anyway, given the way that compilers work
  2. Ranged integers make size/alignment even more of an open question, since it's unclear whether they should always have the smallest possible size, or whether different sizes/alignments should be allowed too. (Another issue: is bounded_int<256..=256> zero-sized, or does it take up two bytes of memory?)
  3. Ranged integers are compatible with pattern types, which effectively "solves" the problem: you specify an integer of the appropriate number of bits, then use a pattern type to narrow down its value.

However, it's still worth discussing whether this is the right set of assumptions.

In terms of whether the parameter should be a usize, I think my final answer will be: yes, it will be a usize, but it will be limited to u32::MAX on 64-bit or higher systems. This solves both the problem of ludicrously large integers crashing the compiler, and of N overflowing values like uint<N>::BITS. (Although personally, I think that this constant may be deprecated after this change is done.)
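
To illustrate the BITS overflow (my example): `as` casts truncate rather than check, so a width just past u32::MAX would report nonsense:

```rust
fn main() {
    // A hypothetical uint<N> with N = u32::MAX + 1 could not represent its
    // own width in a `BITS: u32` constant; the cast silently truncates to 0.
    let n: u64 = u32::MAX as u64 + 1;
    assert_eq!(n as u32, 0);
}
```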


In terms of the size/alignment being explicitly defined: we actually got bitten by this before, with u128 being aligned only to a u64 on some systems despite the C ABI explicitly defining it to have 128-bit alignment. But! I don't know what C23 plans to do for _BitInt, and maybe we should follow that. I assume the spec explicitly leaves it undefined, but compiler authors will probably define it explicitly.

The size and alignment of uint<N> and int<N> should be rounded up to a power of two.

I think we should specifically not require the size to be a power of two, since that would be hugely pessimizing for large integer types (there's no good reason to waste a whole bunch of space once you're beyond the size that can reasonably be handled by 1-2 load/store instructions). For reference, IIRC _BitInt(N) on x86-64 Unix uses the following algorithm to figure out its size:

```rust
fn bit_int_size_in_bytes(bits: usize) -> usize {
    match bits {
        0 => panic!("C doesn't permit 0-bit integers, sadly"),
        ..=8 => 1,   // 8-bit int
        ..=16 => 2,  // 16-bit int
        ..=32 => 4,  // 32-bit int
        _ => bits.div_ceil(8).next_multiple_of(8), // 64, 128, or an array of 64-bit limbs
    }
}
```
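
A few sample values for concreteness (my additions, applying the function above):

```rust
fn main() {
    assert_eq!(bit_int_size_in_bytes(24), 4);   // rounds up to a 32-bit int
    assert_eq!(bit_int_size_in_bytes(65), 16);  // two 64-bit limbs
    assert_eq!(bit_int_size_in_bytes(200), 32); // four 64-bit limbs
}
```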

How exactly does this affect alignment? Since we know that u128 is aligned to 128 bits, does that mean that all larger integers are aligned to 128 bits even if their size is a multiple of 64, or does the alignment depend on whether it's an odd or even multiple of 64?

(For now I'm going to say it caps at an alignment of 128 bits, but I'm open to changing this depending on what makes the most sense.)

Currently, _BitInt(N) types are aligned to 64 bits if N > 32, but since that's not compatible with __int128, I hope they change that; it would be really annoying to have two different 128-bit types with different alignments. I think they should change just 65- to 128-bit integers to have 128-bit alignment, with larger integers keeping 64-bit alignment. @workingjubilee opened an issue about that: Concerns about _BitInt(128) alignment (#11) · Issues · x86 psABIs / x86-64 psABI · GitLab


Hmm, that makes sense. I think I'll keep my current definition since we're still in the draft phase, and we can change it depending on what they decide. Ultimately, I think it would be ideal to be ABI-compatible with whatever C23 decides if possible, but as I mention in the RFC, we can still have some wrapper aliases that mess with the ABI if that's not the case.

In that case I think it's better to just delete the size/alignment requirements from the draft, since the power-of-two size is imho a terrible idea, and saying alignments are a power of two is redundant.

It's kind of hard to track these in the internals thread, but the latest change has made it so that the guide-level section is still vague about size/alignment (expect them not to match when N isn't a power of two) and the reference-level section now says:

If N <= 128, then uint<N> and int<N> should have a size/alignment rounded up to a power of two. Past that point, the alignment should remain at 128 bits and the size should be a multiple of 64 bits. These guarantees may be changed in the future depending on the development of a standard ABI for _BitInt, and how people use these types.
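
Concretely, my reading of that rule is something like this sketch (sizes in bytes; my code, not text from the RFC):

```rust
/// Returns (size, alignment) in bytes for a hypothetical uint<BITS>/int<BITS>.
fn draft_layout(bits: usize) -> (usize, usize) {
    let bytes = bits.div_ceil(8);
    if bits <= 128 {
        let pow2 = bytes.next_power_of_two();
        (pow2, pow2) // e.g. uint<24> is 4 bytes, uint<42> is 8 bytes
    } else {
        // Alignment stays at 128 bits; size rounds up to a multiple of 64 bits.
        (bytes.next_multiple_of(8), 16) // e.g. uint<200> is 32 bytes
    }
}
```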

You're right that alignment being a power of two is redundant, but it's rounded up to a power of two, which is what's relevant here. I guess it's still kind of vague, but eh, I have already done enough editing for now.

You might check in with the portable-simd folks, because they have the same decision to make about what Simd<u32, 3> should be -- they're having the same questions about whether that should pad up to 4 elements or just have the same alignment as 1 element. And you have exactly the same problem here with u24.

My inclination here is that you should just say that u<N> is always ⌈N/8⌉ bytes exactly. Because LLVM can still use wider intermediates internally anyway, for things in local variables, and if you wanted your u210 to be more aligned you could just use u256, or put it in a repr(align)ed wrapper.
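
The wrapper trick already works today; a sketch using a 27-byte array as a stand-in for a u210 payload (my names):

```rust
/// A 210-bit payload (27 bytes) forced up to 32-byte alignment.
#[repr(align(32))]
struct Aligned210([u8; 27]);

fn main() {
    assert_eq!(std::mem::align_of::<Aligned210>(), 32);
    assert_eq!(std::mem::size_of::<Aligned210>(), 32); // padded out to the alignment
}
```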

Strictly speaking, not necessarily, but I concur that that's the most reasonable manner of implementation.

Imo this should be unspecified at first.

Here I disagree. In previous discussions, my recollection is that people generally preferred that pattern types be ABI-compatible with the "original" type. This necessarily rules out niche-value optimization. And that's not even mentioning int<A, B> + int<C, D> = int<{A+C}, {B+D}>, which I can't imagine working with only pattern types.
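
To spell out the arithmetic (my sketch, reading int<A, B> as "an integer known to lie in A..=B"):

```rust
/// The range of a sum is the sum of the ranges. Ranged-integer arithmetic
/// needs a type-level version of this, producing a new type; a pattern type
/// over a fixed-width integer has no way to express that on its own.
const fn add_bounds(a: (i128, i128), b: (i128, i128)) -> (i128, i128) {
    (a.0 + b.0, a.1 + b.1)
}
```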

I did ✋ That's why I initially raised the question about the type name; my intent wasn't to have yet another discussion on ranged integers (though that's where it seems to have wound up). I think having generic integers in this manner, particularly when it's done by bit and not by byte, would be useful.

Overflow semantics

[…]

The compiler, or at least backends like LLVM, should be able to optimise series of operations to perform these conversions less often, but it should be noted that they *must always occur*, even in release mode.

(emphasis mine)

It's not obvious to me why these conversions must always occur. Can you expand on that?

My understanding is that existing operations (such as <u32 as Add>::add) assume they won't overflow. When that assumption is violated, weird (but defined) things happen. If one wants wrapping behavior, the wrapping_* operations are available.

I would have expected that behavior to extend to generic integer types. Although, the resulting weirdness of an unexpected overflow would dwarf anything we have today. And wrapping_* operations would still need the additional conversions, of course.
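
For instance, a sketch of what wrapping_mul on a hypothetical uint<42> (stored in a u64) has to do; the final mask is the "conversion" in question:

```rust
const U42_MASK: u64 = (1 << 42) - 1;

/// The wide multiply alone is not enough: the result must be re-truncated
/// to 42 bits, even in release mode.
fn wrapping_mul_u42(a: u64, b: u64) -> u64 {
    a.wrapping_mul(b) & U42_MASK
}
```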

Regarding alignment and size:

I also don't think requiring the size and alignment to be a power of two is wise. All bigint implementations I've seen so far effectively boil down to a [u64; N], and thus have the alignment of one of the existing integer types. While this is a restriction that comes from how they're implemented, I think it does make sense: there is little to no benefit in really large alignment requirements. (The largest one I've seen on x86 so far was a 512-byte alignment, but that was for an entire data structure, not for numbers.)

What do you think about having the alignment be based on the size of usize (which already has an architecture-dependent alignment and size)?

```rust
// Alignment in bytes: the natural power-of-two alignment for the width,
// capped at X times the alignment of usize (X = 1, 2, or 4; see below).
fn alignment_in_bytes(bits: usize, x: usize) -> usize {
    let natural = bits.div_ceil(8).next_power_of_two();
    natural.min(std::mem::align_of::<usize>() * x)
}
```

With X being 1, 2, or 4. This way there is a direct connection to the underlying hardware (should we ever get 128-bit hardware), and thanks to X we can eliminate problems with the existing alignment of u128 and help out SIMD hardware, which may have stricter alignment requirements than that of usize.

With the size being the smallest multiple of the alignment that fits the data.
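
Or, as a one-liner (my sketch, in bytes):

```rust
/// The smallest multiple of `align` that fits `bits` bits of data.
fn size_in_bytes(bits: usize, align: usize) -> usize {
    bits.div_ceil(8).next_multiple_of(align)
}
```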

The biggest downside/issue I can think of is on 8- to 32-bit CPUs (embedded), where the alignment of a u64 would be smaller than it is now (please correct me if I'm wrong), so perhaps the above calculation is too simple.

Another option would be to limit uint<N> to N <= 128, with size and alignment matching the existing next-power-of-two uN types, and handle bigints in some other way, e.g. by providing intrinsics for bigint libraries.