> I think being able to downcast to arbitrary traits would probably be a misfeature.
While it’s true that there are sometimes better options, I don’t think it’s fair to call it a misfeature. The thread you linked to is mostly weighing up the benefits of parametricity, and yet Rust has never had runtime parametricity, and downcasting does not break compile-time parametricity (which Rust has also explicitly opted out of with specialization).
A case where this is useful is when a library needs to take ownership of some objects whose types it doesn’t know. If they’re all the same type, then you can often use generics, but if they’re heterogeneous, you have to use `Box`/`Rc` with a trait object.
Now the user of your library has no control over which traits they expect these objects to implement: there’s no way for the library to say “I need these objects to be cloneable”, for the user to say “I need these objects to implement `Debug`”, and for both to get a sensible trait object that meets those two requirements. In practice, the library author has to decide which traits they’re going to require, and the end user is on their own if they need anything else.
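To make that concrete, here’s a minimal sketch (the `Registry` type and its methods are hypothetical, not from any real library). The author has picked `Debug` as the bound, and the user has no way to widen it:

```rust
use std::fmt::Debug;

// Hypothetical library type: the author decided the stored objects
// must implement `Debug`, so the trait object is `dyn Debug`.
struct Registry {
    items: Vec<Box<dyn Debug>>,
}

impl Registry {
    fn new() -> Self {
        Registry { items: Vec::new() }
    }

    fn add(&mut self, item: Box<dyn Debug>) {
        self.items.push(item);
    }
}

fn main() {
    let mut r = Registry::new();
    r.add(Box::new(42u32));
    r.add(Box::new("hello"));

    // The user gets exactly what the library chose: `Debug` and nothing
    // more. If they also need, say, `Display`, there is no way to write
    // `Box<dyn Debug + Display>` — a trait object allows only one
    // non-auto trait.
    for item in &r.items {
        println!("{:?}", item);
    }
}
```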
The minimum set of pieces that Rust would need to solve this problem the “proper” way is:
- HKT, so that the library’s API allows the traits to be specified by the user
- Multiple-trait trait objects, so that `&(X + Y + Z)` is a valid trait object
- Trait object upcasting, so the library can use an `&(X + Y + Z)` as an `&(X + Y)`
Alternatively, you just support trait object downcasting, and now the library can specify its own bounds at compile time (e.g. `Any + Debug`) and have that be statically verified, while the user of the library always has an “out”: they can access additional functionality on the object at runtime, if it exists.