Sure. This is true any time you select between different implementations of the same algorithm, though. The problematic part is what the selection depends on, and thus what can affect your debugging when the implementations turn out not to be interchangeable.
I really want to avoid a situation where user code somewhere in the dependency tree has a logic bug, you try to debug it, and find that disabling optimizations (to get better debug info and decrease the chance of miscompilations) affects whether the bug occurs. Optimization level is not expected to, and not supposed to, affect program semantics (and the bugs that do in practice depend on the optimizer are of a special kind – mostly UB or compiler bugs). If the bug depends on which features you enabled in your dependency, that’s much easier to identify.
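To make it concrete, this is the kind of selection I mean (an illustrative sketch, not taken from any real crate): a build script reads the `OPT_LEVEL` value that Cargo passes to build scripts and emits a made-up `big_tables` cfg, so flipping `-Copt-level` while debugging silently flips which implementation you are actually looking at.

```rust
// build.rs — illustrative sketch only; `big_tables` is a made-up cfg name.
fn main() {
    // Declare the custom cfg so rustc's check-cfg lint knows about it
    // (recent Cargo syntax).
    println!("cargo::rustc-check-cfg=cfg(big_tables)");

    // Cargo sets OPT_LEVEL for build scripts: "0".."3", "s", or "z".
    let opt_level = std::env::var("OPT_LEVEL").unwrap_or_default();

    // Treat debug and size-oriented builds as "small"; everything else
    // gets the large precomputed tables.
    if !matches!(opt_level.as_str(), "0" | "s" | "z") {
        println!("cargo::rustc-cfg=big_tables");
    }
}
```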
Thanks for the example.
I explained above why tying decisions to optimization flags worries me. I also have some reservations about the argument that this is something people will want to tune solely via the optimization level. If the difference is big (e.g., a megabyte of tables), you may want to tune it independently of optimization level: you may find the performance difference from the larger table negligible, but the performance difference between -Copt-level=3 and -Copt-level=z unacceptable.
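For contrast, here is a minimal sketch of what tuning it via a feature could look like (the `large-tables` feature name and the toy lookup function are hypothetical): the dependency declares `large-tables = []` under `[features]` in its Cargo.toml, and a downstream crate opts in explicitly, independently of whatever `-Copt-level` it builds with.

```rust
// lib.rs — hypothetical `large-tables` feature; declared in Cargo.toml as:
//   [features]
//   large-tables = []

// Precomputed table, only compiled in when the feature is enabled.
// (64 KiB here; the real thing might be the megabyte mentioned above.)
#[cfg(feature = "large-tables")]
static TABLE: [u8; 1 << 16] = {
    let mut t = [0u8; 1 << 16];
    let mut i = 0;
    while i < t.len() {
        t[i] = (i as u8).wrapping_mul(31); // toy placeholder computation
        i += 1;
    }
    t
};

/// Table-driven fast path.
#[cfg(feature = "large-tables")]
pub fn lookup(x: u16) -> u8 {
    TABLE[x as usize]
}

/// Equivalent slow path computed on the fly, used when the feature is off.
#[cfg(not(feature = "large-tables"))]
pub fn lookup(x: u16) -> u8 {
    (x as u8).wrapping_mul(31)
}
```

A downstream crate would then ask for it explicitly, e.g. `dep = { version = "1", features = ["large-tables"] }` (with `dep` standing in for the hypothetical crate), so the choice is visible in Cargo.toml rather than implied by the optimization flags.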