I believe Soni is referring to the compiler choosing to compile generic code not via monomorphization (generating N separate functions for `foo<i32>`, `foo<MyStruct>`, etc.) but as one function that takes its generic parameter as a runtime argument wrapped in some sort of dynamic dispatch machinery (which would make it conceptually similar to a trait object). "Idea: polymorphic baseline codegen" is one past thread on the subject.
In the context of this thread, ignoring all the implementation challenges and the cases where it would be semantically incorrect anyway, I just don’t know whether the compiler could reliably tell when dynamically dispatching generic code is a win for size. It obviously isn’t an unconditional win: monomorphization enables all sorts of other optimizations, so it’s entirely possible that all the monomorphizations you care about optimize down to almost nothing and end up both faster and smaller than the dynamic dispatch machinery would be. I suspect it’s something a human would have to opt in to.
I’d be curious to hear from any embedded developers who have actually used trait objects instead of generics in real code to reduce binary size.
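For anyone unfamiliar with the trade-off being discussed, here’s a minimal sketch (function names are mine, purely illustrative) of the two shapes of the same code. The generic version gets a separate copy compiled per concrete type; the `dyn` version is compiled once and pays for a vtable call instead:

```rust
use std::fmt::Display;

// Monomorphized: the compiler emits one copy of this function
// for every distinct T it's called with (i32, &str, ...).
fn describe_generic<T: Display>(value: T) -> String {
    format!("value: {}", value)
}

// Dynamically dispatched: one compiled copy serves every Display
// type, at the cost of an indirect call through the fat pointer.
fn describe_dyn(value: &dyn Display) -> String {
    format!("value: {}", value)
}

fn main() {
    // Two instantiations of describe_generic, but only one
    // copy of describe_dyn no matter how many types flow in.
    assert_eq!(describe_generic(42), "value: 42");
    assert_eq!(describe_generic("hi"), "value: hi");
    assert_eq!(describe_dyn(&42), "value: 42");
    assert_eq!(describe_dyn(&"hi"), "value: hi");
}
```

Whether the `dyn` version actually ends up smaller is exactly the open question: if each monomorphized copy inlines and constant-folds down to a few instructions, N copies can still beat one copy plus vtables and un-inlinable call sites.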