Proposal: Grammar Working Group

First, regarding how GLL deals with left recursion - the answer is "quite nicely, thank you", but the actual mechanism ties back into the memoization. Essentially, before attempting to match a rule, a pre-memoization is recorded for the rule. If the rule then attempts to invoke itself at the same position, the pre-memoization is promoted to a full memoization of a cyclic result, which then appears in the SPPF. The same technique can be seen (modulo some excessively dense notation) in Pierre Boullier's paper (PS.GZ) that first introduced range concatenation grammars.
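
For concreteness, the troublesome case is a directly left-recursive rule such as this toy expression rule (hypothetical, not taken from any particular grammar):

expr := expr '+' term
expr := term

A naive recursive-descent parser loops forever on expr; with the pre-memoization recorded first, the inner invocation of expr at the same position finds that entry and yields the cyclic result instead of recursing.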

The paper @eddyb referred to (PDF) is an instance of a general technique called "Guided Parsing", where one uses a simplified approximation of a grammar to guide parsing with the full grammar.

Some background - I presume you're familiar with the most-discussed levels of the Chomsky hierarchy, namely the regular languages, the context-free languages, and the context-sensitive languages.

In between context-free and context-sensitive, there are a number of intermediate classes of grammars - some fascinating ones include the conjunctive grammars, the boolean grammars, linear context-free rewriting systems, positive range concatenation grammars, and "full" range concatenation grammars.

To explain these, I'll first introduce an alternate syntax for CFG, and then explain the rest in terms of that.

In classic BNF, you have rule := subrule other_subrule. This can be seen as a form of logic program in which the arguments are implicit. Making them explicit gives rule(segment1 segment2) := subrule(segment1), other_subrule(segment2) - the rule takes a single argument (which is pattern-matched into two variables covering adjacent, disjoint ranges of the input) and delegates to the subrules. Disjunction is represented, as in BNF, by the existence of multiple rules with the same name. The syntactic restrictions necessary to permit only CFGs are:

  1. Rules take only a single argument
  2. Every variable in the head of the rule must occur exactly once in the body of the rule

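As a tiny worked example (the rule names here are mine, purely for illustration), the balanced-parentheses language looks like this in plain BNF:

parens := '(' parens ')' parens
parens := ""

and like this in the explicit-argument form, with a single argument per rule and each head variable used exactly once in the body:

parens('(' a ')' b) := parens(a), parens(b)
parens("") := true
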
The conjunctive grammars (usually defined as "CFG with conjunction") may be defined as the relaxation of rule 2, to allow variables to be used multiple times. LCFRS may be defined as the relaxation of rule 1, to allow rules of higher arity, though that arity must be the same for all rules that share a name.
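
To make both relaxations concrete, here are two small sketches in the same notation (all predicate names are mine). Re-using a variable gives intersection, so a conjunctive grammar for a^n b^n c^n just intersects "equal a's and b's" with "equal b's and c's":

S(x) := EqAB(x), EqBC(x)
EqAB(x y) := AB(x), Cs(y)
AB('a' x 'b') := AB(x)
AB("") := true
Cs('c' x) := Cs(x)
Cs("") := true
EqBC(x y) := As(x), BC(y)
As('a' x) := As(x)
As("") := true
BC('b' x 'c') := BC(x)
BC("") := true

Allowing higher arity while still using each head variable exactly once gives an LCFRS; the classic example is the copy language { w w }:

S(x y) := Copy(x, y)
Copy('a' x, 'a' y) := Copy(x, y)
Copy('b' x, 'b' y) := Copy(x, y)
Copy("", "") := true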

Positive range concatenation grammars (PRCGs), then, apply both of these relaxations, plus an additional extension: the ability to concatenate variables so long as the head pattern-matched them from adjacent segments of input.
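
As a small sketch that leans on both relaxations at once (predicate names mine again): the non-context-free language a^(2^n), where a run of a's is accepted when it is a single 'a' or splits into two equal-length halves whose first half is itself accepted:

Power('a') := true
Power(x y) := Power(x), EqLen(x, y)
EqLen('a' x, 'a' y) := EqLen(x, y)
EqLen("", "") := true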

The addition of negation, then, affords one the "full" range concatenation grammars.
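
Continuing the hypothetical sketch above, negation makes complements trivial - the runs of a's whose length is not a power of two are just:

As('a' x) := As(x)
As("") := true
NotPower(x) := As(x), !Power(x)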

An alternate name for LCFRS, used mostly when they are expressed in this syntax (which is, in fact, RCG syntax), is "Simple RCG" or "SRCG". In addition, a k-RCG is an RCG whose predicates have arity at most k.

One can then define the other grammars as various trivial cases of RCG:

  • CFG is 1-SPRCG
  • conjunctive grammars are 1-PRCG
  • boolean grammars are 1-RCG
  • LCFRS are SPRCG
  • etc.

(As an interesting aside, CFG can be characterized as "regular languages plus least fixpoints" in a formal sense (PDF))

This use of a common syntax makes the primary contribution of the paper much more tractable - a sound and complete algorithm for generating a 1-PRCG approximation (which accepts a superset of the original language) from an arbitrary PRCG. From there, a 1-SPRCG approximation is relatively trivial (dropping all but one side of each conjunction), and existing techniques can even take you all the way down to the regular languages.
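
I won't try to reproduce the paper's construction here, but to give a rough feel for what a superset approximation looks like (this is only an illustration, not the paper's actual algorithm): take the 2-ary Copy predicate from the LCFRS sketch above and track each argument with its own unary predicate. The coupling between the two halves is lost, so the resulting 1-PRCG accepts all of {a,b}* - a strict superset of { w w }:

Copy1('a' x) := Copy1(x)
Copy1('b' x) := Copy1(x)
Copy1("") := true
Copy2('a' x) := Copy2(x)
Copy2('b' x) := Copy2(x)
Copy2("") := true
S(x y) := Copy1(x), Copy2(y)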

Another paper (PDF) points out that the 1-PRCGs have worst-case performance O(|G|n^3), the same as the CFGs, affording a nice increase in power without a loss of performance. Work elsewhere by Alexander Okhotin (PDF) has shown that this result extends to the boolean grammars, which as noted above are equivalent to the 1-RCGs.

In addition, (P)RCG is a nice formalism in its own right - it characterizes the PTIME-recognizable languages.

EDIT: Did up a table of the relationships of the grammar formalisms:

Formalism     Degree   Positive   Simple
CFG           1        yes        yes
Conjunctive   1        yes        no
???           1        no         yes
Boolean       1        no         no
LCFRS         k        yes        yes
PRCG          k        yes        no
???           k        no         yes
RCG           k        no         no

If you want more info on the topic, I highly recommend Laura Kallmeyer's "Parsing Beyond Context-Free Grammars", ISBN 978-3-642-14845-3, DOI 10.1007/978-3-642-14846-0

Last-minute stinger: A 2-RCG for Rust raw strings:

RawBytes('b' remainder) := RawString(remainder)
RawString('r' lhs '"' body '"' rhs) := Hashes(lhs, rhs), !ContainsDelimiter('"' rhs, body)
Hashes('#' a, '#' b) := Hashes(a, b)
Hashes("", "") := true
ContainsDelimiter('"' needle, prefix '"' middle suffix) := Hashes(needle, middle)

This can be transformed into a PRCG, though that is left as an exercise for the reader :stuck_out_tongue:
