Forbidden/niche values

I'm almost certain that I have seen this discussed on IRLO before, but I'm unable to find the previous thread(s).

Rust has "niche value optimization" for certain situations, such as Option<&T> and NonZeroX. Currently, these are implemented at the language level. I'd like to see this moved to a lib implementation, such that any user can declare forbidden values for their types.

A trait could be defined as follows. Making it unsafe ensures that the implementer is aware that a mistake can lead to undefined behavior, and making the method a const fn ensures that the compiler is able to evaluate it lazily at compile time.

pub unsafe trait ForbiddenValues {
    const fn forbidden_values() -> impl FusedIterator<Item = Self>;
}

This trait would rely on a large number of unstable features, including GATs and the not-yet-RFC-accepted const fn in traits.

For obvious reasons, this method cannot be called by anyone. Ever. This is consistent with Drop, whose drop method likewise cannot be called directly. Presumably a similar language mechanism could be used to prevent calling this new trait's method. The trait would exist solely to aid the compiler.

I am not sure what mechanism Rust actually uses to perform niche value optimization, but I presume that the compiler would be able to use an arbitrary byte sequence (which is fundamentally what any type is, of course) for the optimization. The byte sequence would be knowable at compile time, given that the method is a const fn.
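For reference, the existing optimization is already observable today through mem::size_of (everything below is stable std):

```rust
use std::mem::size_of;
use std::num::NonZeroU8;

fn main() {
    // The forbidden value 0 is reused as `None`'s representation:
    assert_eq!(size_of::<Option<NonZeroU8>>(), size_of::<NonZeroU8>());
    // Likewise, the null pointer encodes `None` for `Option<&T>`:
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
    // Without a niche, `Option` needs an extra discriminant byte:
    assert_eq!(size_of::<Option<u8>>(), 2);
}
```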


The proposed trait would replace the existing #[rustc_layout_scalar_valid_range_start(1)] attributes on NonZeroU8 and similar. In its place could be the following trivial trait impl.

unsafe impl ForbiddenValues for NonZeroU8 {
    const fn forbidden_values() -> impl FusedIterator<Item = Self> {
        yield NonZeroU8(0);
    }
}

This example obviously makes some assumptions about the future of the Generator trait and its interaction with Iterator, but it could be implemented via a ZST that implements Iterator: I'm just representing it this way for simplicity and potential future viability.
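On today's stable Rust, the single-value case can already be expressed with std::iter::once, whose return type implements FusedIterator. (Wrapped is a hypothetical stand-in here, since NonZeroU8's field is private, and the function is an ordinary non-const one.)

```rust
use std::iter::FusedIterator;

// Hypothetical stand-in for a NonZero-style payload.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Wrapped(u8);

// The generator body above, as a plain function returning a fused iterator.
fn forbidden_values() -> impl FusedIterator<Item = Wrapped> {
    std::iter::once(Wrapped(0))
}

fn main() {
    let values: Vec<Wrapped> = forbidden_values().collect();
    assert_eq!(values, vec![Wrapped(0)]);
}
```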

As to real-world usage, I have the UtcOffset struct in the time crate. Its definition is incredibly simple.

pub struct UtcOffset {
    hours: i8,
    minutes: i8,
    seconds: i8,
}

There are invariants that are already enforced internally and assumed to hold:

  • hours is in -23..=23
  • minutes is in -59..=59
  • seconds is in -59..=59
  • The signs of all nonzero field values must match.

I suspect most users will find it simpler to filter the valid values out of the full set rather than directly generating the invalid ones. This would be simple to implement, even for a situation like this.

unsafe impl ForbiddenValues for UtcOffset {
    const fn forbidden_values() -> impl FusedIterator<Item = Self> {
        const fn is_valid(offset: &UtcOffset) -> bool {
            (-23..=23).contains(&offset.hours)
                && (-59..=59).contains(&offset.minutes)
                && (-59..=59).contains(&offset.seconds)
        }

        for hours in i8::MIN..=i8::MAX {
            for minutes in i8::MIN..=i8::MAX {
                for seconds in i8::MIN..=i8::MAX {
                    let value = UtcOffset { hours, minutes, seconds };
                    if !is_valid(&value) {
                        yield value;
                    }
                }
            }
        }
    }
}

Note that this implementation doesn't even consider the final bullet above — it really doesn't need to. With just the first three bullets, there are already over sixteen million niche values. No reasonable program needs that many, of course. If one does, it will probably be fine dealing with the slightly larger struct size.
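Counting them out (a quick check, treating the three fields as plain i8s and ignoring the sign rule):

```rust
fn main() {
    let total: u32 = 256 * 256 * 256; // every (i8, i8, i8) bit pattern
    let valid: u32 = 47 * 119 * 119;  // |-23..=23| * |-59..=59| * |-59..=59|
    assert_eq!(total, 16_777_216);
    assert_eq!(total - valid, 16_111_649); // candidate niche values
}
```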

Does this design seem feasible? What issues might come up?


What about leap seconds?

That's wholly irrelevant to the topic at hand.


Even if this is a magic function that can't be called at runtime, it still feels morally wrong to produce invalid (as in UB) values in the method implementation alongside other code that can't make those values.

Additionally, this would require two-phase compilation and/or fixpoint iteration, AIUI. The crate would have to be compiled once (at least through const evaluation) to const evaluate the trait impl and discover the niches, and then scrapped and recompiled with the new information. mem::size_of is const, so const code can observe the results of niche optimizations.

The compiler has basic cycle detection for const evaluation and trait resolution, but this would make it much more difficult to track in the best case.

This specific case doesn't even need the full generality of marking arbitrary combinations of members as invalid, as each member is individually constrained.

The compiler is currently only able to niche optimize using a single contiguous niche range (as of the last time I learned anything about it). Honestly, I feel that this kind of type metadata belongs at the attribute level. Eventually #[rustc_layout_scalar_valid_range_start(1)] should probably be replaced with something more usable, maybe something like #[repr(niche(..0))] (on an integral primitive field). But cramming invalid value specification into the trait system feels like an improper mixing of stages. (And we have enough fixpoint-style solvers mandated in our compilation semantics already, with macro expansion and name resolution intermixed.)


That's probably the shot behind the barn to this idea, honestly. Ah well, still worth posting :sweat_smile: I'm sure someone can come up with a good design eventually!

With regard to each field being range-restricted, note the fourth bullet point. There can be other invariants even if the fields weren't individually range-restricted.

As another example of a type that would have non-trivial per-field range restrictions: a UTF-8 encoded char type stored in a [u8; 6].
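For instance, some byte values can never appear anywhere in well-formed UTF-8, while others are only invalid in certain positions, so the forbidden set isn't one contiguous range:

```rust
fn main() {
    // 0xFF never appears in UTF-8 at all:
    assert!(std::str::from_utf8(&[0xFF]).is_err());
    // 0x80 is a valid continuation byte, but not as a leading byte:
    assert!(std::str::from_utf8(&[0x80]).is_err());
    // A well-formed two-byte sequence ("é"):
    assert!(std::str::from_utf8(&[0xC3, 0xA9]).is_ok());
}
```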

It certainly feels like attributes are better here.

I'm wondering: once you have placed an attribute on your newtype, how does the compiler help you make sure you uphold the limitation? Say, how does the compiler help you avoid accidentally putting a 0 into an int you've said will never be zero?

The compiler should probably force you to write unsafe somewhere?

  • As mentioned, the type for niches shouldn't be Self, but rather, something like [u8; size_of::<Self>()] (ignoring the fact that such types are currently not supported when in a generic context (for concrete / monomorphized types this is fine)).

    • Note that using u8 is not even general enough, in the case of a padded type having niched bits, for instance. Technically we'd need to yield a [Option<u8>; size_of::<Self>()] to also express / encode which backing bytes may be / remain uninit. I won't go that far in this post.

    This also gets rid of that odd "uncallable trait" pattern.

  • Also, the Iterator needs to be not only Fused but also finite, and I guess a bunch of other things. Rather than yielding and involving abstract const iteration, I suggest, for starters, that this use a const array of values.

  • Finally, while yielding individual values for enum layout optimizations is nice, another possible future extension would be to fully encode extra bits within a type. Imagine, for instance, (bool, EvenU32), or ([bool; 4], &'_ AtLeast16Aligned), or even (u8, &'_ AtLeast256Aligned) (that last one could be especially interesting for small pointers to "small slices").

All this leads to the following construction:

Design

// Attach an unoverridable type alias.
trait BackingBits : Sized { type BackingBits; }
impl<T : Niched> BackingBits for T {
    type BackingBits = [u8; mem::size_of::<Self>()];
}

unsafe trait Niched : BackingBits {
    // This takes care of enums with `NICHES_COUNT` extra field-less variants.
    const NICHES_COUNT: usize;
    const NICHES: [Self::BackingBits; Self::NICHES_COUNT];

    // This takes care of packing extra bits within the type
    const FULLY_NICHED_BITS: usize; // could be a `u8`

    const
    fn pack_bits (self: Self, _: [bool; Self::FULLY_NICHED_BITS])
      -> Self::BackingBits
    ;
    const
    fn unpack_bits (_: Self::BackingBits)
      -> (Self, [bool; Self::FULLY_NICHED_BITS])
    ;
}

Some examples

#[repr(transparent)]
struct NonMaxU32(u32);

unsafe impl Niched for NonMaxU32 {
    const NICHES_COUNT: usize = 1;
    const NICHES: [[u8; 4]; 1] = [ [0xff, 0xff, 0xff, 0xff] ];

    const FULLY_NICHED_BITS: usize = 0; /* maybe add the dummy bit packers, then */
    /* or split that behavior within another trait? */
}
#[repr(transparent)]
struct EvenU32(u32);

unsafe impl Niched for EvenU32 {
    // I reckon this does not read very well;
    // naming ought to be improved to express that this is the count
    // of niches *not including* those stemming from the fully niched bits.
    const NICHES_COUNT: usize = 0; /* … */

    const FULLY_NICHED_BITS: usize = 1;

    const
    fn pack_bits (self: Self, [bit]: [bool; 1])
      -> Self::BackingBits
    {
        let u32 = self.0 | (if bit { 0b1 } else { 0b0 });
        unsafe { transmute(u32) }
    }
        
    const
    fn unpack_bits (bits: Self::BackingBits)
      -> (Self, [bool; 1])
    {
        let u32: u32 = unsafe { transmute(bits) };
        (Self(u32 & !0b1), [(u32 & 0b1) == 0b1])
    }
}

While there may be more complex algorithms to have a type yield, say tens of thousands of niches (enough to make the NICHES_COUNT approach not be worth it), without necessarily leading to a nice mapping of pack-unpack bits logic, I think this is fine: more complex logic would be hard for the compiler to exploit. Heck, Rust does not even optimize (bool, &'_ u16) yet, so we may be getting way ahead of ourselves, here :grinning_face_with_smiling_eyes:
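The (bool, &'_ u16) case can indeed be checked today: the bool can't yet be packed into the reference's unused alignment bit, so the tuple takes two pointer-sized slots:

```rust
use std::mem::size_of;

fn main() {
    // One word for the reference, one (mostly padding) for the bool:
    assert_eq!(size_of::<(bool, &u16)>(), 2 * size_of::<&u16>());
}
```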


I even think the design could be simplified to express that niched bits must always map to an actual bit position within the backing bits, so that the whole FULLY_NICHED_BITS thing could lead to the simpler (for the compiler to use) API:

enum BitNiche {
    NonZero,
    NonOne,
}

// unsafe trait Niched …
const BACKING_BITS: [Option<BitNiche>; size_of::<Self>() * 8];
  • For instance, for EvenU32, that would be an array of None, …, None, Some(NonOne), and for OddU32, it would be …, Some(NonZero).

  • Granted, this may feel a BitNiche :stuck_out_tongue:


The niche representation can be stored as MaybeUninit<Self>. Note that uninitializedness is not an issue and might solve the exhaustiveness problem. We rely on the compiler to inspect the niche iterator during const eval in any case; we might as well require that it interpret uninitialized bytes as meaning 'anything'. This also idiomatically solves the problem of Self not actually being a valid Self.

struct WithNiche(u16);
unsafe impl Niched for WithNiche {
    const NICHES: &'static [MaybeUninit<Self>] = &[
        // The byte pattern 0x00 0x00 is a niche,
        unsafe { mem::transmute::<[u8; 2], _>([0, 0]) },
        // AND: all byte patterns 0xff ?? are, too.
        unsafe { mem::transmute::<[MaybeUninit<u8>; 2], _>([
            MaybeUninit::new(0xff),
            MaybeUninit::uninit(),
        ]) },
    ];
}

The annoying thing about using byte arrays here is dealing with endianness. NonZeroUX and NonMaxUX (where X is an integer type) are easy since 0 and max are endian independent (all bits 0 or all bits 1). But as soon as you get into more interesting niches you run into endianness issues.
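A quick illustration of why those two patterns are the easy cases:

```rust
fn main() {
    // The same value has different byte patterns per endianness...
    assert_eq!(1u32.to_le_bytes(), [1, 0, 0, 0]);
    assert_eq!(1u32.to_be_bytes(), [0, 0, 0, 1]);
    // ...but all-zeros and all-ones are byte-order independent:
    assert_eq!(0u32.to_le_bytes(), 0u32.to_be_bytes());
    assert_eq!(u32::MAX.to_le_bytes(), u32::MAX.to_be_bytes());
}
```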

A possible benefit of modelling this as a trait would be to allow using this information at run time as well. For example, I could imagine a safe transmute function that takes a binary buffer, checks if it falls into the allowed range (using this trait), and transmutes the buffer into another type. (We would still need another trait to mark this transmutation safe, of course.)
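A concrete, non-generic sketch of that check for a hypothetical non-max u32 (checked_non_max is an illustrative name; a generic version would read the forbidden patterns from the trait instead of hard-coding them):

```rust
// Reject the single niche pattern, then reinterpret the bytes.
fn checked_non_max(bytes: [u8; 4]) -> Option<u32> {
    let value = u32::from_ne_bytes(bytes);
    if value == u32::MAX { None } else { Some(value) }
}

fn main() {
    assert_eq!(checked_non_max([0xFF; 4]), None);
    assert_eq!(checked_non_max(5u32.to_ne_bytes()), Some(5));
}
```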

I'd be wary of returning an array rather than an iterator. Why does it need to be finite? The compiler would lazily call it, not eagerly evaluate it. Eager evaluation would also likely have a notable compile time impact, as many people would likely generate a large number of forbidden values.

Given that Rust doesn't even have a stable ABI, using an array of bytes would be impossible, I think.

But either way I think what @CAD97 said is sufficient to kill this idea as posted.


Couldn't agree more.

I apologise if I'm missing the point here, but would a simpler design that lets you specify exactly one niche value be more practical?

I think it might be useful when an enum has more than 2 variants.


Having an implementation that isn't forward-compatible with a large number of niche values is a non-starter to me. We wouldn't want to be supporting two systems forever.

Are attributes the right place for this information, either? With a #[repr(niche)] attribute it would be necessary to make all writes to that field unsafe. I feel that if a field has a constraint on its values like "must be in the range 0..5", such a constraint should be part of the field's type.

I guess I'm suggesting a generalization of the NonZeroU8 etc. types along the lines of this:

pub struct ConstrainedU8<const MIN: u8, const MAX: u8> {
    // MIN <= n <= MAX
    // niched by compiler magic
    n: u8,
}

// impls as for NonZeroU8:
impl<const MIN: u8, const MAX: u8> ConstrainedU8<MIN, MAX> {
    pub fn new(n: u8) -> Option<Self> {
        (MIN..=MAX).contains(&n).then(|| Self { n })
    }
    pub unsafe fn new_unchecked(n: u8) -> Self { Self { n } }
    pub fn get(self) -> u8 { self.n }
}

That does only allow one range of valid values (I think specifying the valid range would make it easier to reason about than specifying the niche/invalid range), but possibly the compiler could be made to understand that an enum like

enum MultipleRanges {
    Range1(ConstrainedU8<0, 5>),
    Range2(ConstrainedU8<10, 15>),
    Range3(ConstrainedU8<20, 25>),
}

only needs one byte of space.


Would it be possible to use an already defined function in an attribute indicating the next niche? For example (following @dhm's example), here is a function that finds the representation of the next odd u32 number.

fn unpacked_u32_next_odd(x: [u8; 4]) -> Option<[u8; 4]> {
    let value: u32 = unsafe { transmute(x) };
    // Step to the next odd value, or `None` once the space is exhausted.
    let next = value.checked_add(if value & 1 == 1 { 2 } else { 1 })?;
    Some(unsafe { transmute(next) })
}

And then use it to state that in the type of even numbers the next niche is the next odd number.

#[next_niche_function(unpacked_u32_next_odd)]
struct EvenU32(u32);

And being careful when implementing EvenU32 to disallow creating those values.

#[repr(niche)]
struct Fancy(pub i32); // compilation error: can't use pub

  ... Fancy(18); // compilation error: unsafe to call

?

This would be solved with unsafe fields (which would be unsafe to write to, such as the .len field of a Vec).
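Until then, the usual workaround is to keep the field private and funnel writes through an unsafe method (the module, Fancy wrapper, and set_raw name below are all illustrative):

```rust
mod fancy {
    pub struct Fancy(i32); // field is private outside this module

    impl Fancy {
        pub fn new(v: i32) -> Self {
            Fancy(v) // a real type would validate its invariant here
        }
        /// # Safety
        /// The caller must uphold whatever invariant the field carries.
        pub unsafe fn set_raw(&mut self, v: i32) {
            self.0 = v;
        }
        pub fn get(&self) -> i32 {
            self.0
        }
    }
}

fn main() {
    let mut f = fancy::Fancy::new(18);
    // Direct writes like `f.0 = 42` won't compile outside the module;
    // the only write path is explicitly unsafe.
    unsafe { f.set_raw(42) };
    assert_eq!(f.get(), 42);
}
```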