To expand on what Centril said, the lykenware/gll crate can not only detect an ambiguous parse, but produce a parse forest instead of a single parse tree, giving us more tooling flexibility at a lower initial grammar-annotation cost.
Personally, I trust GLL more because it’s a lot like “backtracking recursive descent”, or a parser combinator approach you might use in a pure functional language, but with sharing optimizations (like memoization) and more control over the nondeterministic execution (i.e. A | B can execute A before B, or A interleaved with B).
There’s a real possibility of writing a computer-checked formal proof that a GLL implementation’s optimizations can’t result in the wrong forest (I can’t remember if I’ve seen this done for other implementations, but it’s possible).
Meanwhile, (G)LR is more like a stack-based programming language, and comes with the associated complexity of transforming a grammar into that computational model.
I’ve also, sadly, not yet heard of a GLR implementation that produces a parse forest (tree-sitter doesn’t, AFAIK).
But none of this should really matter, since we plan to produce a CFG first and foremost, which means any CFG parsing algorithm (not just GLL or GLR) should work.