Exploit the padding?

No we mean having two slightly different stdlibs, and picking between them based on target triplets.

In particular, having ptr::read be aware of padding in one of them.

It should lower RAM usage by half. The performance hit is more than worth it. It'll be more than made up for by not hitting swap.

(would definitely be interesting if gcc-rust had a GNU extension for packed structs. the -ffast-math of Rust layout optimizations.)

I posted a concrete example of code that copies data one byte at a time, with no knowledge of the 'actual' type. How can that existing code be compiled to make your proposal work?
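For reference, a sketch in the same spirit as what I mean (not the exact code from my earlier post): a type-oblivious copy that moves every one of `size_of::<T>()` bytes, padding included. `ptr::copy_nonoverlapping` is documented as an untyped, bytewise copy, so code like this is legal today, and it would clobber any neighboring data that a padding-reusing layout had stashed inside `T`'s padding.

```rust
use std::mem::size_of;
use std::ptr;

/// Copy a value by blindly moving all of its bytes, padding included.
/// Any scheme that stores a neighbor's data in T's padding breaks here,
/// because this copy overwrites those padding bytes at `dst`.
unsafe fn bitwise_copy<T>(src: *const T, dst: *mut T) {
    ptr::copy_nonoverlapping(src.cast::<u8>(), dst.cast::<u8>(), size_of::<T>());
}

fn main() {
    let a: (u32, u8) = (0xDEAD_BEEF, 7);
    let mut b: (u32, u8) = (0, 0);
    // Safe here: distinct, valid, properly aligned allocations.
    unsafe { bitwise_copy(&a, &mut b) };
    assert_eq!(b, a);
    println!("copied {:?}", b);
}
```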

Where are you getting 'half' from?

I would be very surprised if any browser frequently required swap due to exhausting RAM. Do you have any concrete data about how often this happens?

You break it. Opt-in, ofc.

Yeah we don't have much RAM. The swap is almost always at least 25% full. At least the system isn't entirely unusable because of it, after a lot of tuning. But still not great.

Do you have profiles showing that a large fraction of real-world web browser RAM usage is going to struct padding?


Leaving aside the fact that if you end up with a type like that you probably have bigger problems: can't you rewrite your program so that it uses an equivalent type without padding? A bit of tuning on the most-used structs should take much less effort than rewriting the stdlib and every other crate to be compatible with your proposal. Not to mention the ecosystem split, and the double effort people will have to make to support both ways.


Would I be right to summarize the thinking as:

  • Option8 - group discriminants together
  • Zero padding - use niches
  • size/stride - remove padding


I.e., three very different but related ideas?

This one is different from all of those tbh. None of those handle (((u32, u8), u16), u8) the way we want.
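Concretely, here's where that type's overhead comes from under today's rustc: each nesting layer rounds its size up to its alignment, so trailing padding piles up, while a flat spelling gets its fields reordered. (None of these sizes are guaranteed by the language; they're just what current rustc produces.)

```rust
use std::mem::size_of;

fn main() {
    // Nested: every layer pads its size up to a multiple of 4 (u32's
    // alignment), so the trailing padding compounds at each level.
    let nested = size_of::<(((u32, u8), u16), u8)>();
    // Flat: rustc is free to reorder the fields to (u32, u16, u8, u8).
    let flat = size_of::<(u32, u8, u16, u8)>();
    println!("nested = {nested}, flat = {flat}"); // 16 vs 8 on current rustc
    assert!(flat <= nested);
}
```

That 16-vs-8 gap is exactly the kind of 2x the thread is arguing about; the disagreement is over who is responsible for flattening.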

That's fine, but why must you spell your type (((u32, u8), u16), u8)? Is there really no other spelling for that type that makes sense?

Rust already has repr(C, packed), which has that behaviour. I'd hope that gcc-rs would support that, rather than some random extension to achieve the same.

Also, if this applies to builtin types like tuples, this would also affect impls that provide stable ABI guarantees, such as lccc (the Lightning Creations Compiler Frontend). Either this has to become the default (which is a breaking change, as mentioned above), or the ABI changes with compiler flags/options outside of the two ABI-control options -Z repr-rust-layout and -Z build-abi. Both of these are well beyond what I would want to implement, especially since it would likely apply to things like TokenStream, which crosses the boundary between the compiler and compiled Rust code, potentially breaking the knowledge I otherwise possess about the guaranteed layout of the type.

Because that's how you'd usually lay out your types, unless you happen to only ever use integers/floats directly, or objects with guaranteed no padding.

You wouldn't write them literally like that, but you would build them up like that, with generics, etc.

I still feel like #[repr(C, packed)] is the way to go if you truly need layout control. Rust still reserves the right to reorder and repack struct and enum types. I'm not sure if there's a guarantee for tuple types or if they're isomorphic to a struct with integer-named members with the matching types.
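A quick sketch of the trade-off. For `#[repr(C)]` and `#[repr(C, packed)]` these sizes *are* guaranteed by the layout rules; the catch is the restriction on references, which is why packed isn't a drop-in answer:

```rust
// Same four fields, declaration order fixed by repr(C).
#[repr(C)]
struct Unpacked {
    a: u32, // offset 0
    b: u8,  // offset 4, then 1 byte of padding
    c: u16, // offset 6
    d: u8,  // offset 8, then 3 bytes of trailing padding
}

#[repr(C, packed)]
struct Packed {
    a: u32,
    b: u8,
    c: u16,
    d: u8,
}

fn main() {
    assert_eq!(std::mem::size_of::<Unpacked>(), 12); // padding included
    assert_eq!(std::mem::size_of::<Packed>(), 8);    // 4 + 1 + 2 + 1, no padding
    // let p = Packed { a: 1, b: 2, c: 3, d: 4 };
    // let r = &p.c; // rejected: reference to a (potentially unaligned)
    //               // field of a packed struct is a hard error on recent rustc
}
```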

No guarantees (but mind the disclaimer).

repr(C, packed) isn't gonna make someone else's padding magically go away. It also doesn't work (nor is it meant to work) with references to fields.

#[repr(packed)] isn't what matters here; you just need to put the fields all in one struct. Even one tuple (u32, u8, u16, u8) will do. Then rustc will be free to reorder the fields to reduce space overhead, because doing so doesn't affect program behavior. In fact, it already does that (though nothing is guaranteed).


Why do you want to be able to change someone else's padding without editing their code? If you need to do that, either make a pull request for improvements or consider forking the project to apply and maintain the changes yourself. Trying to decide the layout of an external crate's type without actually taking responsibility of it is bound to create problems.

That's why it should be the compiler's responsibility to take care of it.

We really do hope GCC Rust gets this feature, even if at the expense of a slightly non-compliant ptr::copy/etc. It will be okay, it shouldn't break too much.

Y'know, composable programming is a thing nowadays. You never really make your own structs, you just combine a bunch of other people's. But that composes badly for layout, and that should be fixed. Should you add generics to all your structs so that the user can bring their own things into your padding? That could be an alternative, but it's not a nice one.
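A toy illustration of how layered composition accumulates padding under today's rustc (`Wrapper` is made up for the example; the exact sizes aren't language-guaranteed):

```rust
use std::mem::size_of;

// Each layer adds one byte of its own next to someone else's type, and
// each layer's size gets rounded up to the alignment of `T` (4, here),
// so the trailing padding compounds instead of being reused.
#[allow(dead_code)]
struct Wrapper<T> {
    inner: T,
    tag: u8,
}

fn main() {
    let one = size_of::<Wrapper<u32>>();                     //  5 -> 8
    let two = size_of::<Wrapper<Wrapper<u32>>>();            //  9 -> 12
    let three = size_of::<Wrapper<Wrapper<Wrapper<u32>>>>(); // 13 -> 16
    println!("{one} {two} {three}");
    // 7 bytes of payload (a u32 plus 3 tags) end up occupying 16 bytes.
    assert!(three >= two && two >= one);
}
```

Flattening by hand (one struct with a `u32` and three `u8`s) would get this back down to 8 bytes, which is the manual fix the rest of the thread keeps suggesting.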

Don't hold your breath. gcc-rs is striving for full compatibility with rustc, and adding new features is a non-goal. lccc will be the same (though only compatibility with stable rustc is a goal), though adding features behind unstable feature gates that aren't in conflict with stable features is permitted.


This is yet another time where you should be giving concrete examples. You say that you are looking at code where the optimization you want will reduce memory consumption by 50%, that this makes the difference between fitting into RAM and not, and that it involves composing structs where you don't control all of the type definitions. OK. I'm actually willing to believe that if you show us the actual code with the problem. You might even be able to convince me that your suggested change really is the only change that will solve the problem!

But if you just keep saying "hey, (((u32, u8), u16), u8) could take less space therefore Rust must make it take less space, compatibility and type system constraints be damned," then I and everyone else here are going to write this thread off as yet another time Soni came in with an impractical proposal and wouldn't work with us to turn it into a practical one.



Alright, can we start with getting memory profilers to add a "padding-to-data ratio" to their output? Unfortunately there don't seem to be any tools that output this kind of information.