Note: I think there are currently more important tasks than implementing these optimisations. Nevertheless I will write them down for later use.
Note: Issue 1230 overlaps with this entry.
The NonZero type specifies that a given value (e.g. a pointer) can't be zero. This is used, for example, to optimise Option by representing None as 0.
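The effect can be checked directly with the standard library's NonZeroU8; a minimal sketch:

```rust
use std::mem::size_of;
use std::num::NonZeroU8;

fn main() {
    // Option<NonZeroU8> needs no extra tag byte: None is encoded as 0,
    // which NonZeroU8 guarantees is never a valid value.
    assert_eq!(size_of::<Option<NonZeroU8>>(), size_of::<u8>());
    // A plain Option<u8> needs a separate tag byte, doubling the size.
    assert_eq!(size_of::<Option<u8>>(), 2);
}
```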
Besides the NonZero case, there are also many cases where a number cannot take certain values; e.g. the valid range for an optional number in a field might be 0…128. Currently a NonZero-like optimisation can be applied by wrapping access to the field in a +1/-1 operation, shifting the range from 0…128 to 1…129, which is unneeded overhead. Additionally it would be possible to have more than one unused value, so enums with one sized and more than one zero-sized variant could be optimised too.
- The in-code representation has to be different from the current NonZero
- To allow the encoding of arbitrary invalid values for a given number type, having numbers as generic parameters is preferable. The other option is to implement marker traits for the most common values, mainly 0 and the minimal and maximal values of u8, i8, …, u64, i64.
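The +1/-1 workaround described above can be sketched by hand; Shifted is a hypothetical name, and the standard library's NonZeroU8 stands in for the NonZero wrapper:

```rust
use std::num::NonZeroU8;

/// Hypothetical wrapper: stores a value from the range 0..=128 shifted
/// into 1..=129, so that 0 stays free as the `None` niche.
#[derive(Copy, Clone, Debug)]
struct Shifted(NonZeroU8);

impl Shifted {
    fn new(v: u8) -> Option<Self> {
        if v <= 128 {
            // +1 on the way in: 0..=128 becomes 1..=129, never 0.
            Some(Shifted(NonZeroU8::new(v + 1)?))
        } else {
            None
        }
    }
    fn get(self) -> u8 {
        // -1 on the way out: the unneeded overhead this note complains about.
        self.0.get() - 1
    }
}

fn main() {
    // The niche optimisation applies, but every access pays the shift.
    assert_eq!(std::mem::size_of::<Option<Shifted>>(), 1);
    assert_eq!(Shifted::new(0).unwrap().get(), 0);
    assert!(Shifted::new(129).is_none());
}
```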
Currently I am not sure whether these optimisations will be worth the additional complexity. Until an implementation of the given optimisations is decided on, it makes no sense to discuss naming. I will use IsNotValue for now.
In-code representation
NonZero is currently represented as follows:
pub struct NonZero<T: Zeroable>(T);
Staying close to the current NonZero implementation, this could be extended as follows:
pub struct IsNotValue<T: Restrictable, Value: StaticNumber>(T);
type NonZero<T: Restrictable> = IsNotValue<T, 0>;
StaticNumber is the type for number generic parameters, or stands for a number of preimplemented possible values. Note that Restrictable is, like Zeroable, a marker trait implemented for the same types.
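With today's const generics, the Value parameter could be expressed without a StaticNumber trait. A hypothetical sketch (Restrictable is omitted, the invariant is only checked at runtime, and the actual niche layout would still need compiler support):

```rust
/// Hypothetical sketch: a u8 that is guaranteed to never equal VALUE.
/// A real implementation would need compiler support to actually use
/// VALUE as a niche for Option's None.
#[derive(Copy, Clone, Debug, PartialEq)]
struct IsNotValue<const VALUE: u8>(u8);

impl<const VALUE: u8> IsNotValue<VALUE> {
    fn new(v: u8) -> Option<Self> {
        if v == VALUE { None } else { Some(IsNotValue(v)) }
    }
    fn get(self) -> u8 {
        self.0
    }
}

/// NonZero falls out as the special case VALUE = 0.
type NonZero = IsNotValue<0>;

fn main() {
    assert!(NonZero::new(0).is_none());
    assert_eq!(NonZero::new(7).unwrap().get(), 7);
    assert!(IsNotValue::<128>::new(128).is_none());
}
```

A nice side effect of `const VALUE: u8` is that an out-of-range excluded value (like 300 for u8) cannot even be written down, since the const parameter has the same type as the wrapped number.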
Note: the type system has no good way to prevent e.g. a usage of IsNotValue<u8, 300>, even though 300 is not in the range of u8. Stating that a u8 cannot be 300 should be perfectly fine (it is trivially true), but it leads to additional complexity, because the compiler has to check whether the given unused value is in the possible range before applying the optimisation.
Having multiple disallowed values might lead to ABI problems if they are applied at ABI boundaries and it is not clear which of the unused values represents a given enum variant. A simple solution would be to use lexical ordering of the enum variant identifiers, BUT then the ABI could break if an enum variant is merely renamed… Nevertheless, this problem only occurs if the optimisation is used at ABI boundaries (I don't know whether it currently is, but there is no reason why it shouldn't be, e.g. when linking in dynamic Rust libraries).
Other similar optimisations
Possible similar optimisations would be
NonNaN<T: AnyFloat>(T) for floats and
UnusedBits<T: SomeMarker, B: SomeBitMaskLikeType>(T) for the same types that
Zeroable is implemented for.
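The NonNaN idea can be sketched by hand; the many NaN bit patterns of an f32 would be free to serve as niches, analogous to NonZero's 0. The sketch below only checks the invariant at runtime; the layout optimisation itself would again need compiler support:

```rust
/// Hypothetical NonNaN wrapper: guarantees the wrapped f32 is never NaN,
/// so a NaN bit pattern could in principle encode Option's None.
#[derive(Copy, Clone, Debug)]
struct NonNaN(f32);

impl NonNaN {
    fn new(v: f32) -> Option<Self> {
        if v.is_nan() { None } else { Some(NonNaN(v)) }
    }
    fn get(self) -> f32 {
        self.0
    }
}

fn main() {
    assert!(NonNaN::new(f32::NAN).is_none());
    assert_eq!(NonNaN::new(1.5).unwrap().get(), 1.5);
    // Without compiler support, Option<NonNaN> still pays for a tag:
    assert_eq!(std::mem::size_of::<Option<NonNaN>>(), 8);
}
```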
While UnusedBits allows additional optimisations, it also brings more complexity with it, including the additional bit operations needed to add and remove extra information from given values. E.g. an enum with two variants, each containing a number that does not use a bit at the same position (e.g. the first),
can encode the tag in that bit. The same holds for up to 4 variants with 2 shared unused bits. But this is not only quite complex, it also (might) come with a performance penalty because of the additional bit operations needed when extracting the number. So if implemented, it should probably be an opt-in optimisation, which leads to a situation where it is only used in special cases (e.g. if RAM usage has to be optimised in embedded programming). Therefore I don't think the UnusedBits optimisation is worth the effort (yet).
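The tag-in-an-unused-bit scheme can be simulated by hand. A sketch, assuming two variants whose payload fits in 31 bits so the lowest bit is free to hold the tag (Small, encode and decode are hypothetical names):

```rust
/// Two variants whose payload fits in 31 bits; the lowest bit of the
/// packed representation holds the tag.
#[derive(Debug, PartialEq)]
enum Small {
    A(u32), // payload must be < 2^31
    B(u32), // payload must be < 2^31
}

fn encode(v: &Small) -> u32 {
    match v {
        Small::A(n) => *n << 1,       // tag bit 0 for A
        Small::B(n) => (*n << 1) | 1, // tag bit 1 for B
    }
}

fn decode(bits: u32) -> Small {
    // The extra shift/mask on every access is exactly the performance
    // penalty discussed above.
    let n = bits >> 1;
    if bits & 1 == 0 { Small::A(n) } else { Small::B(n) }
}

fn main() {
    for v in &[Small::A(42), Small::B(7)] {
        assert_eq!(decode(encode(v)), *v);
    }
    // The packed form fits in 4 bytes; the derived enum layout needs 8.
    assert_eq!(std::mem::size_of::<u32>(), 4);
    assert_eq!(std::mem::size_of::<Small>(), 8);
}
```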