Archive
Ambiguities in Xtext grammars – part 2
In this continuation of the previous instalment, we’re going to take an ambiguous grammar and resolve its ambiguity.
As an example, consider the situation that we have an (arguably slightly stupid) language involving expressions and statements, two of which are variable declaration and assignment. (Let’s assume that all other statements start off by consuming an appropriate keyword token.) So, the following is valid, Java-like syntax (SomeClass is the identifier of a class thingy defined elsewhere):
SomeClass.SomeInnerClass localVar := ...
localVar.intField := 42
Now, let’s implement a “naive” Xtext grammar fragment for this:
Variable:
    name=ID;

Statement:
    VariableDeclaration | Assignment;

VariableDeclaration:
    typeRef=ClassRef variable=Variable (':=' value=Expression)?;

ClassRef:
    type=[Class] tail=FeatureRefTail?;

Assignment:
    lhs=AssignableSite ':=' value=Expression;

AssignableSite:
    var=[VariableDeclaration] tail=FeatureRefTail?;

FeatureRefTail:
    '.' feature=[Feature] tail=FeatureRefTail?;
Here, Class and Feature are quite standard types that both have String-valued ‘name’ features and have corresponding syntax elsewhere. Expression references an expression sub language which is at least able to do integer literals. Note that a Variable is contained by a VariableDeclaration so you can refer to a variable without needing to refer to its declaration. (You can find this grammar on GitHub.)
Now, let’s run this through the Xtext generator:
error(211): ../nl.dslmeinte.xtext.ambiguity/src-gen/nl/dslmeinte/xtext/ambiguity/parser/antlr/internal/InternalMyPL.g:415:1: [fatal] rule ruleStatement has non-LL(*) decision due to recursive rule invocations reachable from alts 1,2. Resolve by left-factoring or using syntactic predicates or using backtrack=true option.
error(211): ../nl.dslmeinte.xtext.ambiguity.ui/src-gen/nl/dslmeinte/xtext/ambiguity/ui/contentassist/antlr/internal/InternalMyPL.g:472:1: [fatal] rule rule__Statement__Alternatives has non-LL(*) decision due to recursive rule invocations reachable from alts 1,2. Resolve by left-factoring or using syntactic predicates or using backtrack=true option.
Even though Xtext itself doesn’t warn us about any problem (upfront), ANTLR spits two errors back at us and flat-out refuses to generate a parser, after which the Xtext generation process crashes completely. The problem is best illustrated with the example DSL prose: its first line corresponds to a token stream ID-Keyword(‘.’)-ID-ID-Keyword(‘:=’)-… while the second line corresponds to a stream ID-Keyword(‘.’)-ID-Keyword(‘:=’)-INT(42). (Note that whitespace is usually irrelevant and therefore typically hidden, which is Xtext’s default anyway.) Both lines start by consuming an ID token and because of the k=1 lookahead, the parser doesn’t stand a chance of distinguishing the variable declaration parser rule from the assignment one: only the fourth token reveals the distinction, ID vs. Keyword(‘:=’). Note that since the nesting can be arbitrarily deep, any finite lookahead wouldn’t suffice, meaning that we’d have to switch on backtracking – one could think of this as setting k=∞.
To recap the situation with the token streams in comments:
SomeClass.SomeInnerClass localVar := ...   // ID-Keyword('.')-ID-[WS]-ID-[WS]-Keyword(':=')-[WS]-...
localVar.intField := 42                    // ID-Keyword('.')-ID-[WS]-Keyword(':=')-INT(42)
So, how do we deal with this ambiguity? One answer is to left-factor(ize) the grammar – as is already suggested by the ANTLR output. The trade-off is that our grammar becomes more complicated and we might have to do some heavy lifting outside of the grammar. But that is only to be expected since the grammar deals first and foremost with the syntax – what Xtext provides extra has everything to do with inference of the Ecore meta model (to which the EMF models conform) and only marginally so with semantics, by means of the default behavior for lazily-resolved cross-references.
Analogous to the left-factorized pattern for expression grammars, we’re going to implement the lookahead manually and rewrite nodes in the parsing tree to have the appropriate type. First note that our statements always begin with an ID token which either equals a variable name or a class name. After that any number of Keyword(‘.’)-ID sequences follow (we don’t care about whitespace, comments and such for now) until we either encounter an ID-Keyword(‘:=’) sequence or a Keyword(‘:=’) token, in both cases followed by an expression of sorts.
So, the idea is to first parse the ID-(Keyword(‘.’)-ID)* token sequence (which we’ll call the head) and then rewrite the tree according to whether we encounter an ID or the Keyword(‘:=’) token first. In Xtext, there’s a distinction between parser and type rules but only type rules give us code completion through scoping out-of-the-box, so we would like to use a type rule for the head. The head starts with either a reference to a Class or to a VariableDeclaration. Unfortunately, we can’t distinguish between these two at parse level so we have to have a common super type:
HeadTarget: Class | Variable;
However, due to the way that Xtext tries to “lift” or automatically refactor identical features (having the same name, type, etc.), we need to introduce an additional type (that’s used nowhere) to suppress the corresponding errors:
Named: Class | Variable | Attribute;
Now we can make the Head grammar rule, reusing the FeatureRefTail rule we already had:
Head: target=[HeadTarget] tail=FeatureRefTail?;
And finally, the new grammar rule to handle both Assignment and VariableDeclaration:
AssignmentOrVariableDeclaration:
    Head (
          ({VariableDeclaration.typeRef=current} variable=Variable (':=' value=Expression)?)
        | ({Assignment.lhs=current} ':=' value=Expression)
    );
This works as follows:
- Try to parse and construct a Head model element without actually creating a model element containing that Head;
- When the first step is successful, determine whether we’re in a variable declaration or an assignment by looking at the next tokens;
- Create a model element of the corresponding type and assign the Head instance to the right feature.
This is commonly referred to as “tree rewriting”, but in the case of Xtext that’s actually slightly misleading, as no trees are rewritten. (In fact, Xtext produces models which are only trees as long as there are no unresolved references.)
To complete the example, we have to implement the scoping (which can also be found on GitHub). I’ve already covered that (with slightly different type names) in a previous blog post, but I will rephrase that here. Essentially, scoping separates into two parts:
- Determining the features of the type of a variable. This type is specified by the typeRef feature (of type Head) of a VariableDeclaration. This is actually a type system computation, as the Head instance in the VariableDeclaration should already be completely resolved.
- Determining the features of the previous element of a Head instance as possible values of the current FeatureRefTail.feature. For this we only want the “direct features” since we’re actively computing a scope.
(The scoping implementation uses a type SpecElement which is defined as a super type of Head and FeatureRefTail, but this is merely for convenience and type-safety of said implementation.)
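To give an impression of how this fits together, here is a minimal, simplified sketch using Xtext’s declarative scope provider. The class name, the directFeaturesOf helper, the generated package name and the getFeatures() accessor are all assumptions made for this illustration – they are not the actual implementation on GitHub:

import java.util.Collections;

import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.xtext.scoping.IScope;
import org.eclipse.xtext.scoping.Scopes;
import org.eclipse.xtext.scoping.impl.AbstractDeclarativeScopeProvider;

import nl.dslmeinte.xtext.ambiguity.myPL.Feature;      // assumed generated package
import nl.dslmeinte.xtext.ambiguity.myPL.FeatureRefTail;
import nl.dslmeinte.xtext.ambiguity.myPL.Head;

public class MyPLScopeProvider extends AbstractDeclarativeScopeProvider {

    // Part 2: the candidates for FeatureRefTail.feature are the direct features
    // of whatever the previous element in the head chain refers to.
    IScope scope_FeatureRefTail_feature(FeatureRefTail context, EReference ref) {
        EObject container = context.eContainer();
        if (container instanceof Head) {
            return Scopes.scopeFor(directFeaturesOf(((Head) container).getTarget()));
        }
        if (container instanceof FeatureRefTail) {
            return Scopes.scopeFor(directFeaturesOf(((FeatureRefTail) container).getFeature()));
        }
        return IScope.NULLSCOPE;
    }

    // Part 1, the "type system computation": determine the direct features of the
    // element we navigated to. getFeatures() on Class is an assumed accessor.
    private Iterable<Feature> directFeaturesOf(EObject target) {
        if (target instanceof nl.dslmeinte.xtext.ambiguity.myPL.Class) {
            return ((nl.dslmeinte.xtext.ambiguity.myPL.Class) target).getFeatures();
        }
        // For a Variable or a Feature we would first resolve its type via the Head
        // of the corresponding VariableDeclaration -- elided in this sketch.
        return Collections.<Feature>emptyList();
    }
}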
In conclusion, we’ve rewritten an ambiguous grammar as an unambiguous one, so we didn’t need to use backtracking with all its associated disadvantages: worse performance, ANTLR no longer reporting warnings about unreachable alternatives, “magic”, etc. We also found that this didn’t really complicate the grammar: it expresses intent and mechanism quite clearly and doesn’t feel like a kludge.
Ambiguities in Xtext grammars – part 1
In this blog in two instalments, I’ll discuss a few common sources of ambiguity in Xtext grammars in the hopes that it will allow the reader to recognise and fix these situations when they arise. This instalment constitutes the theoretical bit, while the next one will discuss a concrete example.
By default (at least: the default for the Itemis distro) Xtext relies on ANTLR to produce a so-called LL(k) parser, where LL(k) stands for “Left-to-right Leftmost-derivation with lookahead k“, with k a positive integer or * = ∞ – see the Wikipedia article for more information (with a definite “academic” feel, so be warned). This means that grammars which are not LL(k) yield ANTLR warnings during the Xtext generation stating that certain alternatives of a decision have been switched off. These are actual errors: the generated parser quite probably does not accept the full language as implied by the grammar (disregarding the fact that it’s not actually LL(k)) because the parser doesn’t follow certain decision paths to try and parse the input. We say that this grammar is ambiguous.
Xtext and “LL(1) conflicts”
Remember first of all that the parser tokenizes its input into a linear stream of tokens (by default: keywords, IDs, STRINGs and all kinds of white space and comments) before it parses it into an EMF model (in the case of Xtext; a general parser would produce an AST). The problem is that an Xtext grammar specifies more than just the tokenizing and parsing behavior: it also specifies cross-references whose syntax often introduces an ambiguity at the parsing level by consuming the same token (ID, by default). The second parsing phase (still completely generated by ANTLR from an ANTLR grammar) only uses information on the token type, but doesn’t use additional information, such as a symbol table it may have built up. Such a strategy wouldn’t work with forward references anyway, as the parser is essentially one-pass: references are resolved lazily, only after parsing. This means that we have to beware especially of language constructs which start off by consuming the same token types (such as IDs) left-to-right but whose following syntax is totally different. This is a typical example of the FIRST/FIRST conflict type for a grammar; see also the Wikipedia article. The other non-recursive conflict type is FIRST/FOLLOW; it is a tad more subtle, but it can be dealt with in the same way as the FIRST/FIRST conflict: by left-factorization.
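To make the FIRST/FIRST case concrete, here is a toy grammar fragment (all rule names invented for this illustration) in which both alternatives start by consuming an ID token:

Statement:
    Assignment | Invocation;

Assignment:
    variable=ID ':=' value=INT;

Invocation:
    function=ID '(' ')';

An LL(1) parser cannot decide between the two alternatives on the first token alone. (ANTLR’s LL(*) analysis happens to cope with this particular toy because the distinguishing token follows immediately, but as soon as the common prefix can grow arbitrarily long – as in the worked example of the next instalment – it cannot.) A left-factorization consumes the common prefix once and defers the decision to the first distinguishing token, for instance like this:

Statement:
    Target ( {Assignment.target=current} ':=' value=INT
           | {Invocation.target=current} '(' ')' );

Target:
    name=ID;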
Left-recursion
A grammar is left-recursive if (and only if) it contains a parser rule which can recursively call itself without first consuming a token. A left-recursive grammar is incompatible with LL(k) technology, and Xtext or ANTLR will warn you about your grammar being left-recursive: Xtext detects left-recursion at the parser rule level, while ANTLR detects left-recursion at the token level. Expression languages provide excellent examples of left-recursive grammars when implemented naively in Xtext. For expression languages there’s a special pattern (ultimately also based on left-factorization) that deals with the left-recursion, the precedence levels and the creation of a usable expression tree all at the same time: see Sven’s oft-referenced blog and two of my blogs.
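As a minimal illustration (type names invented for this sketch), the first version of Expression below is left-recursive because it can invoke itself without consuming a single token first; the second version, following the pattern from the posts referenced above, first consumes a Primary and then loops, rewriting the tree as it goes:

// left-recursive – ANTLR will refuse to generate a parser for this:
Expression:
    left=Expression '+' right=Expression | value=INT;

// a left-factored equivalent:
Expression:
    Addition;

Addition returns Expression:
    Primary ({Addition.left=current} '+' right=Primary)*;

Primary returns Expression:
    {IntLiteral} value=INT;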
Backtracking
There are several ways to deal with ambiguities, one of which is enabling backtracking in the ANTLR parser generator. To understand what backtracking does, have a look at the documentation: essentially, it introduces a recovery strategy to try other alternatives of a parser rule, even if the input already matched one alternative based on the specified lookahead. (See also this blog for some examples of the backtracking semantics and the difference with the grammar being LL(*).) I’m not enamored of backtracking because ANTLR doesn’t report any errors anymore during grammar analysis, so while it may resolve some ambiguities, it will not warn you about other/new ones. (It also tends to cause a bit of a performance hit, unless memoization is switched on using the memoize option.) In case you really do need backtracking, you should have a good language testing strategy in place, with both positive and negative tests, to check whether the parser accepts the intended language.
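For reference, this is roughly what switching backtracking (and memoization) on looks like in the language’s MWE2 workflow; the fragment names below are those of the standard generated workflow of the Xtext 1.x/2.x era and may differ in your setup:

// in GenerateMyDsl.mwe2, inside the language = { ... } block:
fragment = parser.antlr.XtextAntlrGeneratorFragment {
    options = {
        backtrack = true
        memoize = true    // counters part of the performance hit of backtracking
    }
}
// ...and the same options on parser.antlr.XtextAntlrUiGeneratorFragment
// for the content-assist parser.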
It’s my experience that very few DSLs actually require backtracking. In fact, if your DSL does really need it, chances are that you’re actually implementing something of a GPL which you should think about twice anyway. A quite common case requiring backtracking is when your language uses the same delimiter pair for two different semantics, e.g. expression grouping and type casting in most of the C-derived languages. Using different delimiters is an obvious strategy, but you might as well think hard about why you actually need to push something as unsafe as type casting on your DSL users.
Configuring the lookahead
To manually configure the lookahead used by the generated ANTLR parser (instead of relying on ANTLR’s defaults), you’ll have to do a bit of hacking: either create a suitable custom implementation of the corresponding Xtext generator fragment (because MWE2 doesn’t have syntax for integer literals) or revert to using MWE(1) to configure that. I’ll present a good illustration of this in the worked example in the next instalment, which contains nested type specifications (or “path expressions”, as I called them in an earlier blog) that can have an arbitrary nesting depth. Using left-factorization we can rewrite the grammar to be LL(1), at the cost of some extra indirection structure in the meta model and some extra effort in implementing scoping and validation.
The Dangling Else-problem
A(nother) common source of ambiguity is known as The Dangling Else-problem (see the article for a definition), which is a “true” ambiguity in the sense that it doesn’t fall into one of the LL(1) conflict categories described above. The only way to deal with this type of ambiguity in Xtext 1.0.x is to have a language (unit) test which checks whether the dangling else ends up in the correct place – “usually”, that means as the else-part of the innermost if. Note that Xtext 2.0 has (some) support for syntactic predicates, which allow you to deal with this declaratively in the grammar.
Next time, a concrete, worked example!
Using syntactic predicates in Xtext, part 2
This blog is a continuation of the previous one about how to use syntactic predicates in Xtext. As promised, I’ll provide a few more examples, most of which come from the realm of GPL-like languages.
But first, a little summary is in order. As stated in the previous blog, a syntactic predicate is an annotation in an Xtext grammar which indicates to the ANTLR parser generator how a (potential) ambiguity should be resolved, namely by picking the (first) alternative which is decorated with ‘=>‘. The annotation can be applied to the following (a toy illustration of each follows after the list):
- a(n individual) keyword (such as ‘else‘),
- a rule call (unassigned or as part of an assignment) and
- a grouped parse expression, i.e. a parse expression between parentheses.
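Here is a toy grammar fragment showing the three placements in that order (the rule names are invented for this illustration and only the default ID terminal is assumed):

// 1. on an individual keyword (cf. the dangling-else example of the previous blog):
KeywordExample:
    'if' condition=ID 'then' then=ID (=>'else' else=ID)?;

// 2. on an assignment:
AssignmentExample:
    'return' (=>expr=ID)?;

// 3. on a grouped parse expression (including its cardinality):
GroupExample:
    =>(params+=ID (',' params+=ID)* ':')? body=ID;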
One thing to keep in mind – not only for syntactic predicates but in general – is that an Xtext grammar has at least three and often four responsibilities:
- defining the lexing behavior through definition and inclusion of terminals;
- defining the parsing behavior through parser rules which determine how tokens are matched and consumed;
- defining how the model is populated;
- (when not using an existing Ecore model) defining the meta model.
Syntactic predicates influence the second of these but not the others. It is, after all, a syntactic predicate, not a semantic one – which Xtext doesn’t have in any case. With or without syntactic predicates, parsing behavior is not influenced by how the model is populated: it is governed solely by the types of the tokens the parser receives from the lexer. This is easily forgotten when you’re trying to write grammars with cross-references like this:
SomeParserRule:
    Alternative1 | Alternative2;

Alternative1:
    ref1=[ReferencedType1|ID];

Alternative2:
    ref2=[ReferencedType2|ID];
In this case, the parser will always consume the ID token as part of Alternative1, even if its value is the (qualified) name of something of ReferencedType2. In fact, ANTLR will issue a warning that alternative 2 is unreachable and has been disabled. For a workaround for this problem, see this older blog: it uses a slightly different use case as motivation but the details are the same. The only thing a syntactic predicate can do here is to explicitly favor one alternative over the other.
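The gist of that workaround, in a simplified sketch (rule names invented here, and assuming ReferencedType1 and ReferencedType2 are defined by parser rules elsewhere), is to introduce a common supertype, use a single cross-reference to it, and leave the actual disambiguation to scoping and linking rather than to the parser:

ReferencedType:
    ReferencedType1 | ReferencedType2;

SomeParserRule:
    ref=[ReferencedType|ID];

Which concrete type a given name resolves to is then decided during linking, where – unlike in the parser – more than just the token type is available.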
Some examples from Xbase
The Xtend and the Xbase languages that Xtext ships with both use plenty of syntactic predicates to avoid ambiguities in their grammars and to avoid having to use backtracking altogether. This already indicates that syntactic predicates are a necessary tool, especially when creating GPL-like or otherwise quite expressive DSLs. Note again that syntactic predicates are typically found near/inside optional parts of grammar rules since optionality automatically implies an alternative parsing route.
A good example can be found in the Xbase grammar in the form of the XReturnExpression rule: see GitHub. It uses a syntactic predicate on an assignment to force the optional XExpression following the ‘return‘ keyword to be parsed as part of the XReturnExpression rather than being an XExpression all on its own – which would have totally different semantics, but could be a viable interpretation considering Xtend doesn’t require separating/ending semi-colons.
The Xbase grammar also shows that syntactic predicates are an effective way to disambiguate the use of pairs of parentheses for denoting a list of arguments to a function call from that for grouping inside an expression: once again, see GitHub – here, the syntactic predicate applies to a grouped parse expression, i.e. everything between the parentheses pair starting just before the ‘=>‘.
Unforeseen consequences
Even if you don’t (have to) use syntactic predicates yourself, it’s important to know of their existence. As an example, the other day I was prototyping a DSL which used the JvmTypeReference type rule from Xbase followed by an angled bracket pair (‘<‘, ‘>’) which held ID tokens functioning as cross-references. I was momentarily surprised to see parse errors arise in my example along the lines of “Couldn't resolve reference to JvmType 'administrator'.” The stuff between the angled brackets was being interpreted as a generic type parameter!
It turns out that the JvmTypeReference parser rule uses a syntactic predicate on the angled bracket pair surrounding generic type parameters. This explains both the behavior and the lack of warnings from ANTLR about grammar ambiguities. You’d probably have a hard time figuring out this behavior before finding an innocuous ‘=>‘ here. In the end, I changed “my” angled brackets to square brackets to resolve this. This shows that syntactic predicates, just like backtracking, can be a double-edged sword: they can solve some of your problems, but you have to really know how they work to be able to understand what’s going on.
I hope that this was useful for you: please let me know whether it was! I’m not planning a third instalment, but you never know: a particularly enticing use case might just do the trick.
Using syntactic predicates in Xtext, part 1
Xtext 2.x comes with the possibility to define syntactic predicates in the grammar. But what exactly are these syntactic predicates and how can they be used to avoid or resolve ambiguities in your grammar? The reference documentation is characteristically succinct on the subject. This might mean that it’s either very simple or very complicated 😉
In short: syntactic predicates provide a way to force the parser to make certain choices by annotating the grammar using a ‘=>‘.
Fortunately, it’s actually quite simple but you have to dive a little deeper into the parsing technology used by Xtext to really understand it. Xtext uses ANTLR* ‘under the hood’ to generate the lexer and recursive-descent parser. To leverage ANTLR, Xtext generates an** ANTLR grammar from an Xtext one. As such, it is ANTLR that does most of the heavy lifting while the Xtext runtime sort-of piggybacks on the ‘stuff’ ANTLR generates to build a full model from the parsed text and provide the functionality that ANTLR doesn’t.
During the generation of lexer and parser, ANTLR performs a thorough analysis of the grammar generated by Xtext to check for non-LL(*) behavior (i.e., left-recursion) and nondeterminism (i.e., ambiguities) in the grammar. The former it deals with by reporting an error “[fatal] rule xxx has non-LL(*) decision due to recursive rule invocations reachable from alts n, m, …. Resolve by left-factoring or using syntactic predicates or using backtrack=true option.” for every left-recursive rule and quitting the process, leaving you with a broken Xtext project. Left-recursion usually originates from trying to implement an expression language along the lines of
Expression:
      Literal
    | '(' Expression ')'
    | left=Expression op=('+'|'-'|'*'|'/') right=Expression
There’s a string of material (see here, here and here) detailing the ‘right’ patterns for writing such languages in a non-left-recursive manner in Xtext which also takes care of precedence and associativity. Since those patterns don’t use syntactic predicates (well, they can but it’s not essential), I won’t talk about these any more here.
Switching on backtracking should really be the very last option you try, as it isn’t guaranteed to solve the problem your grammar has, but it is guaranteed to obscure any problem, simply by not reporting any – even the ones that are easy to fix. Furthermore, backtracking ‘merely’ tries all the possible options, picking the first one that works: in essence it’s a ‘precognitive’ syntactic predicate, but at the expense of time and memory. If we can tweak our grammar with syntactic predicates so that no backtracking is required, we get a parser that performs better and more predictably, if only because we’ve documented part of its behavior in the grammar.
The perfunctory example: the dangling else-problem
The most well-known application of syntactic predicates is also the simplest. Consider this grammar (header stuff omitted):
Model:
    statement+=IfStatement*;

IfStatement:
    'if' condition=Expression 'then' then=Expression
    ('else' else=Expression)?;

Expression:
    IfStatement | {ValueReference} name=ID;
When having Xtext generate the language infrastructure for this grammar, you’ll get a warning from ANTLR saying “Decision can match input such as “‘else'” using multiple alternatives: 1, 2. As a result, alternative(s) 2 were disabled for that input“. This means that there is an ambiguity in the grammar. ANTLR detects this and makes a choice for you, because otherwise it would have to return a forest of parse trees instead of just one per parse, or roll a die to cope with the nondeterminism. We’ll see in a minute that a syntactic predicate allows you to make that choice yourself, instead of having to rely on ANTLR to pick the right one – with the chance of your luck running out.
Of course, we were already expecting this behavior, so let’s fire up ANTLRWorks on the InternalMyDsl.g file in the principal/non-UI Xtext project (easily findable using the Ctrl/Cmd-Shift-R shortcut) to see how we might use that in general. First, ask ANTLRWorks to perform the same analysis ANTLR itself does during parser generation, through Ctrl/Cmd-R. Then, click the ruleIfStatement rule (conveniently marked in red) to see the Syntax Diagram for it. This will look like this:
Since ANTLR already reported that it only uses alternative 1, this is the way the if-statement will be parsed: the optional else-part will be matched as part of the current invocation of the IfStatement rule. For the canonical example input “if a then if b then c else d”, it means that the parse will be equivalent to “if a then (if b then c else d)”, i.e. the else-part belongs to the second, inner if-statement and not to the first, outer if-statement. This is usually the result we want, since it complies with most existing languages, and also because the else-part is visually closer to the inner if, so it’s more natural that it binds to that one rather than to the outer if.
By unchecking alternative 1 and checking alternative 2, we get the following:
Now, these ‘faulty’ diagrams in ANTLRWorks are usually a bit funky to interpret because the arrows don’t really seem to start/end in logical places. In this case, we should read this as: the optional else-part can also be matched as part of the invocation of the IfStatement rule invoking the IfStatement rule for a second time – it’s probably convenient to think of the outer, respectively, inner invocation. For our ubiquitous example input, it would mean that the parse is equivalent to “if a then (if b then c) else d” – with the else-part belonging to the first, outer if-statement and not the inner if-statement.
Note that it’s a bit hard to implement a recursive-descent parser with this behavior, since the execution of the inner IfStatement rule would somehow have to decide to leave the matching and consumption of the following ‘else‘ keyword to the execution of an outer IfStatement rule (not necessarily its direct caller!). ANTLR tends to favor directly matching and consuming tokens as soon as possible, by the currently-called parser rule, over a more circuitous approach.
You can influence the alternative-picking behavior by placing syntactic predicates in the Xtext grammar. One advantage is that you make the choice explicit in your grammar, which both serves to document it and eradicates the corresponding warning(s). Another advantage might be that you make a different choice from the one ANTLR would make: in fact, you can ‘trigger’ a syntactic predicate not only from a single token but also from a series of tokens – more on that in the next blog. Note that syntactic predicates favor the quickest match as well – by design.
Syntactic predicates in an Xtext grammar consist of a ‘=>‘ keyword in front of a keyword, rule call, assignment (i.e., an assigned rule call) or a grouped parse expression (including any cardinality). So, in our case the IfStatement rule becomes:
IfStatement: 'if' condition=Expression 'then' then=Expression (=>'else' else=Expression)?;
The ‘=>‘ now forces ANTLR to not consider the second alternative path at all and always match and directly consume an ‘else‘ and an ensuing Expression, which happens to match the situation without a syntactic predicate – but now this behavior is clearly intentional and not a happenstance.
Since this blog already runs to some length, I’m deferring some more examples, insights and hints & tips to a next blog. One of the examples will revolve around some common GPL-like language features which can be difficult to implement without syntactic predicates but are blissfully uncomplicated with them.
*) Not entirely by default, but it’s thoroughly recommended: see this explanation for more details on that matter.
**) Actually, Xtext generates two ANTLR grammars: one for full parsing, and one which extracts just enough information to provide the content assist functionality with. They’re essentially the same as far as the pure ANTLR part is concerned.
Pre- and postfix operators in Xtext
While revisiting some Xtext DSLs I made earlier, I came across one with some ambiguities (reported by ANTLR) due to the use of a prefix operator in an expression sub-language. Fixing that stymied me enough to warrant this blog. (It’s actually a sort of addendum to Sven Efftinge’s excellent blog on expression parsing. I’ll do a blog later on to recap everything.)
For the sake of minimizing effort for all parties involved, we’ll extend the Arithmetics example that’s shipped with Xtext with a unitary minus as prefix operator and the signum function (which returns -1, 0, or 1 depending on whether the operand is negative, zero or positive) as postfix operator. (I chose the signum function because it’s an instance method of the BigDecimal class which is used for the interpreter.)
If you’ve had a good, hard look at the grammar of the Arithmetics example or Sven’s blog, you might have realized that the patterns there amount exactly to implementing a classical recursive-descent parser, right in Xtext’s grammar definition language. The rules of the expression language form a (strictly-ordered) sequence, each of which starts off by calling the next rule in the sequence before trying to match anything else (and doing a tree rewrite with the result of the rule call). The net effect is that the sequence order equals the precedence order, with the first rule corresponding to the lowest precedence and the last rule in the sequence corresponding to the highest precedence, typically consisting of the ubiquitous parentheses and possibly other things like literals, variable references and such.
We’re going to extend that pattern to deal with pre- and postfix operators as well. The relevant section of the Arithmetics grammar consists of lines 46-52 of the Arithmetics.xtext file:
Multiplication returns Expression:
    PrimaryExpression (({Multi.left=current} '*' | {Div.left=current} '/') right=PrimaryExpression)*;

PrimaryExpression returns Expression:
    '(' Expression ')' |
    {NumberLiteral} value=NUMBER |
    {FunctionCall} func=[AbstractDefinition] ('(' args+=Expression (',' args+=Expression)* ')')?;
We’re going to add two rules, called UnitaryMinus and Signum, in between the Multiplication and PrimaryExpression rules, so we have to change the Multiplication rule:
Multiplication returns Expression:
    UnitaryMinus (({Multi.left=current} '*' | {Div.left=current} '/') right=UnitaryMinus)*;
Matching the unitary minus prefix operator is simple enough:
UnitaryMinus returns Expression: '-' expr=UnitaryMinus;
Since this rule always consumes at least one token, the ‘-‘ character, from the input, we can recursively call UnitaryMinus without causing left-recursion. The upshot of this is that ‘--37’ is parsed as -(-(37)). Unfortunately, the rule (as it is) would break the sequence of rules, so that we’d lose the higher levels of precedence altogether. To prevent that, we also call the next rule, Signum, as an alternative:
UnitaryMinus returns Expression: Signum | ({UnitaryMinus} '-' expr=UnitaryMinus);
(We need the {UnitaryMinus} action here to make sure the Xtext generator generates a corresponding type in the Ecore model which holds the parsed info.)
Implementing the postfix operator is a matter of calling the next rule (PrimaryExpression) and performing a tree rewrite in case the postfix operator can be matched:
Signum returns Expression: PrimaryExpression ({Signum.expr=current} 's')?;
This is all that’s required for the grammar. Now, we only have to fix the Calculator class and add a couple of test cases in the CalculatorTest class. The Calculator class uses the PolymorphicDispatcher class I wrote about earlier which means we just have to add methods corresponding to the new UnitaryMinus and Signum types:
protected BigDecimal internalEvaluate(UnitaryMinus unitaryMinus, ImmutableMap<String, BigDecimal> values) {
    return evaluate(unitaryMinus.getExpr(), values).negate();
}

protected BigDecimal internalEvaluate(Signum signum, ImmutableMap<String, BigDecimal> values) {
    return new BigDecimal(evaluate(signum.getExpr(), values).signum());
}
We add a couple of test cases, specifically for the new operators:
public void test_unitary_minus_and_signum() throws Exception {
    check(-1, "1 + -2");   // == 1 + (-2)
    check(1, "-1 + 2");    // == (-1) + 2, not -(1 + 2)
    check(-1, "-3.7s");    // == -(3.7s) == -1
    check(0, "1 + -7s");   // == 1 + -(7s) == 1 + -(1) == 0
    check(2, "--2");       // == -(-2)
}
Now go out and add your pre- and postfix operators to your DSL today! 🙂
When to solve stuff with a DSL
One of the great benefits of DSLs (textual or graphical alike) is that they allow you to separate essential complexity, i.e. complexity which is intrinsic to the problem you’re trying to solve, from incidental complexity, i.e. complexity which is caused by the approach chosen to solve that problem. This is certainly the case for external DSLs, since these often evolve from a ‘technical clean slate’. Whether you’re actually able to achieve that separation depends on a couple of things:
- the extent to which you understand the problem space or domain;
- the extent to which you understand the solution space;
- your skill as a DSL/language designer.
But first and foremost, it depends on your ability to recognize a situation in which ‘death by incidental complexity’ is likely to occur or even already occurring which in turn depends on the quality of your communication with the project (team). During one of my recent projects, I found myself in the situation that I didn’t (allow myself to) recognize such a situation until it was almost too late and a lot of effort had been wasted.
A tale from the crypt
This particular project entailed building a custom middleware Web application, which we were luckily able to do rather successfully and efficiently using Model-Driven Software Development. The central use case for this application consisted of quite a lot of complicated screens, with one screen being extremely complex…and becoming more and more complex as time progressed, due to a liberal amount of change requests (scope creep, anyone?). The screen was complex for a number of reasons:
- it was big: the number of possible input fields, buttons, etc. almost ran to the triple digits;
- it was highly dynamic: depending on values of certain input fields, check boxes and such, other input fields were or were not visible and/or editable;
- no roundtrips to the server were allowed for performance and usability reasons, which led to duplication of logic in both the server code (Java) as well as the client code (JavaScript);
- the mapping from the object model to values on the screen and (especially) back, as well as roundtripping to other screens, was tedious;
- testability was problematic and only partially automated.
To make matters worse: because it was one of the few screens which didn’t fit the modeling language used (UML2 + custom profiles), almost nothing of the screen’s functionality could be modeled, so everything had to be hand-coded and hand-integrated with the rest of the application. In the process, the requirements for this screen were separated from the application model and documented in the usual amalgam of freeform Word and Excel documents.
So, over a period of about half a year a developer colleague of mine slaved away over this gargantuan screen for what must have been at least a full workload, guided and assisted much of the time by a business analyst colleague who was maintaining said Excel documents and who had previously come to like and rely on the rigor of modeling quite a lot. All this happened within eyeshot…which unfortunately didn’t actually prompt me to take some interest and see why this damned screen was taking a lot more time to complete than initially expected. To further implicate myself: I had implemented the initial version of the screen when it wasn’t very dynamic yet, and I didn’t really mind not being part of the actual development effort. As karma usually has it: the developer colleague decided to continue his career with another company and ‘The Beast’ was handed back to me, together with a rather hefty bunch of change requests 😉 Despite being knowledgeable about our architecture, I couldn’t really make heads or tails of the implementation, which suffered from the essential and incidental complexities essentially being multiplied instead of added up – always a bad sign in (non-generated) code, as is an almost-exponential effort curve for implementing changes.
So there I was with a few hundred K’s of (not so homogeneous) Java and JSP code serving a Web page with a lot of JavaScript code, plus some change requests which I had very little idea of how to implement in said code base. The first thing I usually do in these circumstances is to refactor away. After about a week, this refactoring had hardly made a dent in reducing the total complexity and had only increased the quality of the JavaScript code a bit. Time for plan B, which was to mentally throw the existing solution out of the window and think about how a DSL solution (which was plausible given that we already were an MDSD shop) would look.
The Solution
For a couple of days I feverishly but happily worked on implementing a textual DSL (using Xtext, obviously) to capture the screen’s requirements and generate a Web page mockup from it, as that would allow the business analyst to validate her DSL instance functionally at the push of a button (literally). In total, it took me about 1.5 weeks to implement the DSL, a half-working mockup generator (with the mockups already being quite close to an actual implementation inside the application’s architecture), a DSL instance capturing about 80% of the screen in all its dynamic and purely essential detail, plus some additional stuff to be able to interact with the entity model (in UML). When I showed what I had to my colleagues, the business analyst said ‘Please move over, I have a screen to finish!’, which is by far the nicest response I’ve ever had to anything I’ve created. Unfortunately, the project was cancelled before we could progress from here, for reasons beyond our control, but not before having obtained a full GO from the project manager for replacing the existing implementation with a DSL-based one.
The most important thing I learned from this is that I should have taken an interest in what was going on a mere meter from my desk; that would have allowed me to recognize, at an earlier stage, the potential of a DSL to help get a grip on the complexity of the things at hand, which could have saved a lot of effort and headaches. It is interesting that it took me only a few days to come up with a quite complex and rather mature DSL (having a sub-DSL for Boolean expressions, references to features of an entity model using path expressions, and various other not-so-trivial constructs), even though it was my first DSL built with Xtext version 0.7.2 and it required not-so-trivial scoping – future posts will discuss and detail some of the features mentioned. This means that the ramp-up of a complete DSL+generator solution in these or similar circumstances can be measured in weeks rather than in months or longer. Also, progress was quite linear in the number and complexity of features – no exponential curve or 20/80 rule in sight.
Addendum
Ron Kersic had already argued that ‘incidental complexity’ is a much better name than ‘accidental complexity’, since the primary meaning of incidental is ‘happening in connection with or resulting from something more important’ while that of accidental is ‘occurring by chance, unexpectedly, or unintentionally’. Although I agreed with him, ‘accidental complexity’ was a reasonably established concept…until Fred Brooks chose to forgo it and use ‘incidental complexity’ instead. So, I happily replaced all occurrences of ‘accidental’ with ‘incidental’ in the body of the blog. And thanks again to Ron for pointing this out 🙂