This follows on from another Q&A thread, but goes into details that are out of scope for the original question.
I am generating a parser for a context-sensitive grammar that accepts the following subset of symbols:
,
[
]
{
}
m/[a-zA-Z_][a-zA-Z_0-9]*/
m/[0-9]+/
The grammar can take in the string { abc[1] }, } and parse it as the four tokens ("{", "abc[1]", "},", "}").
Another example would be to take { abc[1] [, } and parse it as ("{", "abc[1]", "[,", "}").
This is similar to the grammar used in Perl for the qw() syntax. The braces indicate that the contents are to be whitespace-tokenized, and a closing brace must stand on its own to mark the end of the whitespace-tokenized group. Can this be done with a single lexer/tokenizer, or would a separate tokenizer be necessary when parsing this group?
Your grammar has ambiguities that make it impossible to know what to do with, say, the letter a without context. In your case, the string abc can have two interpretations: it can be an identifier (I’m assuming that’s what your first m// defines), or it can be part of a string literal quoted in your { ... } notation (I’ll call that a “quoted list”). Lexical analyzers (tokenizers) aren’t smart enough to handle that kind of ambiguity, because their concept of context is very simplistic. Parsers, on the other hand, can understand context at very deep levels.*
Language designers sometimes add sigils to their identifiers (e.g., $abc) to make them easier to tokenize. This is why you can have a Perl variable named $for even though bare-naked for has special meaning. For similar reasons, C lexers tokenize /"[^"]*"/ into a string literal: it has a context-independent syntax that doesn’t appear anywhere else in the language.
Back to your problem: prematurely tokenizing a string of alphanumerics into an IDENTIFIER would mean the quoted list { abc[1]xyz } gets fed to the parser as { IDENTIFIER [ NUMBER ] IDENTIFIER }. That’s useful if those were the chunks you needed, but otherwise your quoted-list grammar would have to accept every possible combination of those tokens, and you’d then have to reassemble them into string literals. If you haven’t guessed by now, that would get complex and ugly very quickly. But because parsers understand context, putting that wisdom there makes it clean and easy.
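To see the problem concretely, here is a rough Python sketch of such a premature tokenizer (the token names and regex details are only illustrative):

import re

# A naive, context-free tokenizer. It has no way of knowing that the
# alphanumerics inside { ... } were supposed to stay glued together.
TOKEN_RE = re.compile(r"""
      (?P<IDENTIFIER>[a-zA-Z_][a-zA-Z_0-9]*)
    | (?P<NUMBER>[0-9]+)
    | (?P<PUNCT>[][{},])
    | (?P<WS>\s+)
""", re.VERBOSE)

def naive_tokens(text):
    for m in TOKEN_RE.finditer(text):
        if m.lastgroup != "WS":        # whitespace is silently dropped
            yield (m.lastgroup, m.group())

print(list(naive_tokens("{ abc[1]xyz }")))
# [('PUNCT', '{'), ('IDENTIFIER', 'abc'), ('PUNCT', '['), ('NUMBER', '1'),
#  ('PUNCT', ']'), ('IDENTIFIER', 'xyz'), ('PUNCT', '}')]

Gluing abc, [, 1, ], and xyz back into the single string abc[1]xyz is exactly the busywork you’d otherwise be pushing into the grammar.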
For what you’re doing, there shouldn’t be much of a tokenizer at all, because so much of it is context-sensitive, and that’s all parser territory. Whitespace doesn’t seem to matter except in the context of a quoted list, so you could tokenize that, as well as things that aren’t ambiguous, like LETTER and DIGIT.
// NOTE: This code doesn't handle the case where whitespace is
// interspersed with the tokens. See the comments.

quoted-list ::= '{' quoted-list-item-set '}'

quoted-list-item-set ::=
      <nothing>
    | string-of-non-whitespace-characters
    | string-of-non-whitespace-characters WHITESPACE quoted-list-item-set

// These end up being things you have to put together and return,
// so that eventually you end up with a single string.
string-of-non-whitespace-characters ::=
      non-whitespace-character
    | non-whitespace-character string-of-non-whitespace-characters

non-whitespace-character ::= <anything in the set '!'..'~'>

identifier ::= LETTER alphanumeric-string

alphanumeric-string ::=
      <nothing>
    | alphanumeric alphanumeric-string

alphanumeric ::= LETTER | DIGIT

// ...etc...

// This prevents the parser from barfing on whitespace in any other context.
things-that-get-ignored ::= WHITESPACE
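To make the division of labor concrete, here is one way the quoted-list rules above could look as a hand-written parser. This is a rough Python sketch (the function name and error handling are my own inventions), not a drop-in implementation:

def parse_quoted_list(text, pos=0):
    # quoted-list ::= '{' quoted-list-item-set '}'
    assert text[pos] == '{'
    pos += 1
    items = []
    while True:
        # things-that-get-ignored: skip whitespace between items
        while pos < len(text) and text[pos].isspace():
            pos += 1
        # string-of-non-whitespace-characters: a run of '!'..'~'
        start = pos
        while pos < len(text) and '!' <= text[pos] <= '~':
            pos += 1
        word = text[start:pos]
        if word == '}':            # a '}' standing alone closes the list
            return items, pos
        if not word:
            raise SyntaxError("unterminated quoted list")
        items.append(word)

print(parse_quoted_list("{ abc[1] }, }")[0])   # ['abc[1]', '},']
print(parse_quoted_list("{ abc[1] [, }")[0])   # ['abc[1]', '[,']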
*This is why you should use a parser to interpret something complex like XML and not fall into the trap of trying to understand it with regular expressions.
Yes, it’s certainly possible to create a single tokenizer which can handle that.
I can easily create a context-free, regular tokenizer which will correctly tokenize your language.
However, many popular tools may make it hard and/or impossible to tokenize your input the way you want, even if your tokenizer could be described by a regular grammar. Using different tools, you may find it extremely easy to tokenize your input any way you want.
What approach you take to solve this problem may be largely dictated to you by your choice of tool.
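For instance, one possible shape for such a tokenizer, sketched in Python, is to match the entire quoted list as a single regular lexeme and split it on whitespace afterwards. The regex details and token names here are my own choices; the lookahead after the closing brace is what enforces the “closing brace on its own” rule:

import re

MASTER = re.compile(r"""
      (?P<QLIST>\{\s+(?:\S+\s+)*?\}(?=\s|$))    # a whole quoted list
    | (?P<IDENTIFIER>[a-zA-Z_][a-zA-Z_0-9]*)
    | (?P<NUMBER>[0-9]+)
    | (?P<PUNCT>[][{},])
    | (?P<WS>\s+)
""", re.VERBOSE)

def tokenize(text):
    pos = 0
    while pos < len(text):
        m = MASTER.match(text, pos)
        if m is None:
            raise SyntaxError("bad input at position %d" % pos)
        if m.lastgroup == "QLIST":
            # re-split the captured lexeme: '{', the items, '}'
            yield from m.group().split()
        elif m.lastgroup != "WS":
            yield m.group()
        pos = m.end()

print(list(tokenize("{ abc[1] }, }")))    # ['{', 'abc[1]', '},', '}']
print(list(tokenize("{ abc[1] [, }")))    # ['{', 'abc[1]', '[,', '}']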
From what I see, all it takes is making the lexer recognize the opening brace and switch into a greedy state. In that state you define a new set of patterns which are, well, greedy except for whitespace, and a single right brace on its own pops the state, going back to whatever was before.
The described approach would correctly tokenize both of your examples.
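In Python pseudocode (a stand-in for flex-style start conditions; the state and token names are mine, not from any particular tool), the idea looks roughly like this:

import re

# Default state: the ordinary tokens of the language.
DEFAULT = [("LBRACE",     re.compile(r"\{")),
           ("PUNCT",      re.compile(r"[][},]")),
           ("IDENTIFIER", re.compile(r"[a-zA-Z_][a-zA-Z_0-9]*")),
           ("NUMBER",     re.compile(r"[0-9]+")),
           ("WS",         re.compile(r"\s+"))]

# Greedy state: everything is a run of non-whitespace, except a
# right brace standing on its own, which pops the state.
GREEDY  = [("RBRACE",     re.compile(r"\}(?=\s|$)")),
           ("WORD",       re.compile(r"\S+")),
           ("WS",         re.compile(r"\s+"))]

def tokenize(text):
    states, pos = [DEFAULT], 0
    while pos < len(text):
        for name, pattern in states[-1]:
            m = pattern.match(text, pos)
            if m:
                break
        else:
            raise SyntaxError("bad input at position %d" % pos)
        pos = m.end()
        if name == "LBRACE":
            states.append(GREEDY)      # '{' pushes the greedy state
        elif name == "RBRACE":
            states.pop()               # lone '}' pops back
        if name != "WS":
            yield (name, m.group())

print(list(tokenize("{ abc[1] }, }")))
# [('LBRACE', '{'), ('WORD', 'abc[1]'), ('WORD', '},'), ('RBRACE', '}')]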
As I understand it, lexers are non-discriminatory: they break up tokens without context. Thus the first example cannot be parsed using that tokenizer, as the closing brace is not tightly bound to the comma. The second example cannot be parsed that way either, even if you tried to paste tokens back together from within the parser (adding overhead and complexity), because there would be no way to know where the first token ends and the next begins.
Both could be achieved if whitespace were not ignored, but that would add more complexity to the grammar/parser for all the other rules.
So yes, it can be done. But it depends on whether the added overhead and complexity are worth it.