Archive for the ‘DSLs’ Category

Xtext tip: “synthetic” parser rules

December 22, 2011 2 comments

This is a quick one to share a simple trick which may come in handy when creating an Xtext grammar.

Let’s say your grammar has a type rule T1 (i.e., a rule which corresponds to an EClass in the Ecore meta model). Let’s also say that some other type rule T2 composes that type somehow, i.e., it has a feature someT1 to which something of type T1 is assigned. Let’s say that you want to limit the syntactic possibilities for the composition of a T1 instance somewhat, e.g. in the case that T1 is a group of alternatives but a few alternatives are invalid when used inside T2.

This is a wholly legitimate situation because Xtext grammars usually have a number of responsibilities at the same time, amongst which are defining (1) a mapping to an Ecore meta model and defining (2) the syntax of the DSL.

Let’s sum this situation up in some grammar code:

T1: A1 | A2 | A3;

T2: 'a-t2' someT1=T1;

Let’s say that we would want to exclude A3 from the possible T1’s in any T2. We could do this via a validation which simply checks the someT1 feature of any T2, reporting an error if it’s an A3. But that means that the parser itself still allows an A3 at that spot, which could open up a whole can of smelly worms – e.g., left-recursion or some ambiguity. Also, the out-of-the-box content assist will make syntax suggestions for A3.

Hence, we would like to inform the parser about the restricted syntax. One possibility would be:

T1: T1WithoutA3 | A3;

T1WithoutA3: A1 | A2;

T2: 'a-t2' someT1=T1WithoutA3;

This works perfectly, but it also ‘pollutes’ the meta model a bit. Since the meta model is mostly consumed by downstream clients like interpreters and code generators, this would only cause confusion. But more importantly: if we re-use an existing Ecore meta model (by means of the returns clause) this solution is not possible, since we would have to add a super type T1WithoutA3 to the A1 and A2 types which are sealed inside the re-used Ecore meta model – Xtext will issue an error as soon as we try it.

The clean solution consists of using something which I’ve termed a “synthetic parser rule” and has the following form:

T1:  A1 | A2 | A3;

T1WithoutA3 returns T1: A1 | A2;

T2: 'a-t2' someT1=T1WithoutA3;

Now there’s no pollution of the meta model, but the syntax will be restricted as we’d like it. Note that this is very much part of the standard Xtext repertoire, but the trick works especially well in the face of type hierarchies and re-used Ecore meta models or inheriting Xtext grammars.

Using syntactic predicates in Xtext, part 2

December 20, 2011 8 comments

This blog is a continuation of the previous one about how to use syntactic predicates in Xtext. As promised, I’ll provide a few more examples, most of which come from the realm of GPL-like languages.

But first, a little summary is in order. As stated in the previous blog, a syntactic predicate is an annotation in an Xtext grammar which indicates to the ANTLR parser generator how a (potential) ambiguity should be resolved by picking the (first) one which is decorated with ‘=>‘. The annotation can be applied to:

  • a(n individual) keyword (such as ‘else‘),
  • a rule call (unassigned or as part of an assignment) and
  • a grouped parse expression, i.e. a parse expression between parentheses.

One thing to keep in mind – not only for syntactic predicates but in general – is that an Xtext grammar has at least three and often four responsibilities:

  1. defining the lexing behavior through definition and inclusion of terminals;
  2. defining the parsing behavior through parser rules which determine how tokens are matched and consumed;
  3. defining how the model is populated;
  4. (when not using an existing Ecore model) defining the meta model.

Syntactic predicates influence the second of these but not the others. It is, after all, a syntactic predicate, not a semantic one – which Xtext doesn’t have in any case. Just as without using syntactic predicates, parsing behavior is not influenced by how the model is populated: instead, it is governed solely by the types of the tokens it receives from the lexer. This is easily forgotten when you’re trying to write grammars with cross-references like this:

SomeParserRule: Alternative1 | Alternative2;
Alternative1: ref1=[ReferencedType1|ID];
Alternative2: ref2=[ReferencedType2|ID];

In this case, the parser will always consume the ID token as part of Alternative1, even if its value is the (qualified) name of something of type ReferencedType2. In fact, ANTLR will issue a warning about alternative 2 being unreachable, so it is disabled. For a workaround for this problem, see this older blog post: it uses a slightly different use case as motivation, but the details are the same. The only thing a syntactic predicate can do here is to explicitly favor one alternative over the other.
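The gist of that workaround can be sketched as follows (a hypothetical sketch, not the actual grammar from that blog – the common super type ReferableElement is my own invention): have the parser reference a common super type of both target types, and determine the concrete type during scoping.

```xtext
// Hypothetical sketch: ReferencedType1 and ReferencedType2 are assumed to
// share a common super type ReferableElement in the Ecore meta model.
// The parser now consumes the ID unambiguously; a custom scope provider
// decides which concrete elements are actually visible for the reference.
SomeParserRule: ref=[ReferableElement|ID];
```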

Some examples from Xbase

The Xtend and the Xbase languages that Xtext ships with both use plenty of syntactic predicates to avoid ambiguities in their grammars and to avoid having to use backtracking altogether. This already indicates that syntactic predicates are a necessary tool, especially when creating GPL-like or otherwise quite expressive DSLs. Note again that syntactic predicates are typically found near/inside optional parts of grammar rules since optionality automatically implies an alternative parsing route.

A good example can be found in the Xbase grammar in the form of the XReturnExpression rule: see GitHub. It uses a syntactic predicate on an assignment to force the optional XExpression following the ‘return‘ keyword to be parsed as part of the XReturnExpression rather than being an XExpression all on its own – which would have totally different semantics, but could be a viable interpretation considering Xtend doesn’t require separating/ending semi-colons.
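Paraphrased, that rule looks roughly like the following (simplified from the actual Xbase grammar, whose details may differ between Xtext versions):

```xtext
XReturnExpression returns XExpression:
    // The predicate forces an ensuing expression to be parsed as the
    // return value, rather than as a new, separate XExpression.
    {XReturnExpression} 'return' (=>expression=XExpression)?;
```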

The Xbase grammar also shows that syntactic predicates are an effective way to disambiguate the use of pairs of parentheses for denoting a list of arguments to a function call from that for grouping inside an expression: once again, see GitHub – here, the syntactic predicate applies to a grouped parse expression, i.e. everything between the parentheses pair starting just before the ‘=>‘.
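In heavily simplified form (the real Xbase rule has more alternatives and slightly different details), that disambiguation looks something like this:

```xtext
XFeatureCall returns XExpression:
    {XFeatureCall} feature=[types::JvmIdentifiableElement|ValidID]
    // The predicate applies to the whole group: a '(' directly following
    // the feature name is committed to being an argument list, so it
    // cannot be mistaken for a parenthesized (grouped) expression.
    (=>explicitOperationCall?='('
        (featureCallArguments+=XExpression (',' featureCallArguments+=XExpression)*)?
    ')')?;
```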

Unforeseen consequences

Even if you don’t (have to) use syntactic predicates yourself, it’s important to know of their existence. As an example, the other day I was prototyping a DSL which used the JvmTypeReference type rule from Xbase followed by an angled bracket pair (‘<‘, ‘>’) which held ID tokens functioning as cross-references. I was momentarily surprised to see parse errors arise in my example along the lines of “Couldn't resolve reference to JvmType 'administrator'.” The stuff between the angled brackets was being interpreted as a generic type parameter!

It turns out that the JvmTypeReference parser rule uses a syntactic predicate on the angled bracket pair surrounding generic type parameters. This explains both the behavior and the lack of warnings from ANTLR about grammar ambiguities. You’d probably have a hard time figuring out this behavior before finding an innocuous ‘=>’ here. In the end, I changed “my” angled brackets to square brackets to resolve this. This shows that syntactic predicates, just like backtracking, can be a double-edged sword: they can solve some of your problems, but you have to really know how they work to be able to understand what’s going on.

I hope that this was useful for you: please let me know whether it was! I’m not planning a third installment, but you never know: a particularly enticing use case might just do the trick.

Using syntactic predicates in Xtext, part 1

December 5, 2011 5 comments

Xtext 2.x comes with the possibility to define syntactic predicates in the grammar. But what exactly are these syntactic predicates and how can they be used to avoid or resolve ambiguities in your grammar? The reference documentation is characteristically succinct on the subject. This might mean that it’s either very simple or very complicated 😉

In short: syntactic predicates provide a way to force the parser to make certain choices by annotating the grammar using a ‘=>‘.

Fortunately, it’s actually quite simple but you have to dive a little deeper into the parsing technology used by Xtext to really understand it. Xtext uses ANTLR* ‘under the hood’ to generate the lexer and recursive-descent parser. To leverage ANTLR, Xtext generates an** ANTLR grammar from an Xtext one. As such, it is ANTLR that does most of the heavy lifting while the Xtext runtime sort-of piggybacks on the ‘stuff’ ANTLR generates to build a full model from the parsed text and provide the functionality that ANTLR doesn’t.

During the generation of lexer and parser, ANTLR performs a thorough analysis of the grammar generated by Xtext to check for non-LL(*) behavior (i.e., left-recursion) and nondeterminism (i.e., ambiguities) in the grammar. The former it deals with by reporting an error “[fatal] rule xxx has non-LL(*) decision due to recursive rule invocations reachable from alts n, m, …. Resolve by left-factoring or using syntactic predicates or using backtrack=true option.” for every left-recursive rule and quitting the process, leaving you with a broken Xtext project. Left-recursion usually originates from trying to implement an expression language along the lines of

    Expression:
        '(' Expression ')'
        | left=Expression op=('+'|'-'|'*'|'/') right=Expression;

There’s a string of material (see here, here and here) detailing the ‘right’ patterns for writing such languages in a non-left-recursive manner in Xtext which also takes care of precedence and associativity. Since those patterns don’t use syntactic predicates (well, they can but it’s not essential), I won’t talk about these any more here.
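For reference, the usual non-left-recursive pattern looks something like this (a sketch of the well-known arithmetic example; the rule and type names are mine, and the precedence levels are collapsed to just two):

```xtext
Expression: Addition;

Addition returns Expression:
    // Left-factored: parse a left operand first, then fold each operator
    // application into a new object via an action, keeping left associativity.
    Multiplication (({Plus.left=current} '+' | {Minus.left=current} '-') right=Multiplication)*;

Multiplication returns Expression:
    Primary (({Times.left=current} '*' | {Div.left=current} '/') right=Primary)*;

Primary returns Expression:
    '(' Expression ')' | {IntLiteral} value=INT;
```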

Switching on backtracking should really be the very last option you try, as it isn’t guaranteed to solve the problems your grammar has, but it is guaranteed to obscure them, simply by not reporting any – even the ones that are easy to fix. Furthermore, backtracking ‘merely’ tries all the possible options, picking the first one that works: in essence it’s a ‘precognitive’ syntactic predicate, but at the expense of time and memory. If we can tweak our grammar with syntactic predicates so that no backtracking is required, we get a parser that performs better and more predictably, if only because we’ve documented part of its behavior in the grammar.

The perfunctory example: the dangling else-problem

The most well-known application of syntactic predicates is also the simplest. Consider this grammar (header stuff omitted):


    IfStatement:
        'if' condition=Expression 'then' then=Expression
        ('else' else=Expression)?;

    Expression:
        IfStatement | {ValueReference} name=ID;

When having Xtext generate the language infrastructure for this grammar, you’ll get a warning from ANTLR saying “Decision can match input such as “‘else'” using multiple alternatives: 1, 2. As a result, alternative(s) 2 were disabled for that input“. This means that there is an ambiguity in the grammar. ANTLR detects this and makes a choice for you, because otherwise it would have to return a forest of parse trees instead of just one per parse, or roll a die to cope with the nondeterminism. We’ll see in a minute that a syntactic predicate allows you to make this choice yourself, instead of having to rely on ANTLR to pick the right one – with the chance of your luck running out.

Of course, we were already expecting this behavior, so let’s fire up ANTLRWorks on the InternalMyDsl.g file in the principal/non-UI Xtext project (easily findable using the Ctrl/⌘-Shift-R shortcut) to see how we might use that in general. First, ask ANTLRWorks to perform the same analysis ANTLR itself does during parser generation through Ctrl/⌘-R. Then, click the rule IfStatement (conveniently marked in red) to see the Syntax Diagram for it. This will look like this:

Since ANTLR already reported to only use alternative 1, this is the way that the if-statement will be parsed: the optional else-part will be matched as part of the current invocation of the IfStatement rule. For the canonical example input “if a then if b then c else d”, it means that the parse will be equivalent to “if a then (if b then c else d)”, i.e. the else-part belongs to the second, inner if-statement and not the first, outer if-statement. This result is what we usually would want since it complies with most existing languages and also because the else-part is visually closer to the inner if so it’s more natural that it binds to that instead of the outer if.

By unchecking alternative 1 and checking alternative 2, we get the following:

Now, these ‘faulty’ diagrams in ANTLRWorks are usually a bit funky to interpret because the arrows don’t really seem to start/end in logical places. In this case, we should read it as: the optional else-part can also be matched as part of an outer invocation of the IfStatement rule, i.e., the one that invoked the IfStatement rule a second time – it’s probably convenient to think in terms of the outer versus the inner invocation. For our ubiquitous example input, it would mean that the parse is equivalent to “if a then (if b then c) else d” – with the else-part belonging to the first, outer if-statement and not the inner if-statement.

Note that it’s a bit hard to implement a recursive-descent parser with this behavior, since the execution of the inner IfStatement rule should somehow decide to leave the matching and consumption of the following ‘else‘ keyword to the execution of an (not necessarily the direct caller rule!) outer IfStatement rule. ANTLR tends to favor direct matching and consuming tokens as soon as possible, by the currently-called parser rule, over a more circuitous approach.

You can influence the alternatives-picking behavior by placing syntactic predicates in the Xtext grammar. One advantage is that you make the choice explicit in your grammar, which serves both to document it and to eradicate the corresponding warning(s). Another advantage is that you can make a different choice from the one ANTLR would make: in fact, you can ‘trigger’ a syntactic predicate not only from a single token but also from a series of tokens – more on that in the next blog. Note that syntactic predicates favor the quickest match as well – by design.

Syntactic predicates in an Xtext grammar consist of a ‘=>‘ keyword in front of a keyword, rule call, assignment (i.e., an assigned rule call) or a grouped parse expression (including any cardinality). So, in our case the IfStatement rule becomes:

    IfStatement:
        'if' condition=Expression 'then' then=Expression
        (=>'else' else=Expression)?;

The ‘=>‘ now forces ANTLR to not consider the second alternative path at all and always match and directly consume an ‘else‘ and an ensuing Expression, which happens to match the situation without a syntactic predicate – but now this behavior is clearly intentional and not a happenstance.

Since this blog already runs to some length, I’m deferring some more examples, insights and hints & tips to a next blog. One of the examples will revolve around some common GPL-like language features which can be difficult to implement without syntactic predicates but are blissfully uncomplicated with them.

*) Not entirely by default, but it’s thoroughly recommended: see this explanation for more details on that matter.
**) Actually, Xtext generates two ANTLR grammars: one for full parsing, and one which extracts just enough information to provide the content assist functionality with. They’re essentially the same as far as the pure ANTLR part is concerned.

Implementing existing DSLs with Xtext – a case study, part 1

November 28, 2011 4 comments

Unlike the previous installment, which held no technical details whatsoever, this one’s going to get a tad greasier, but not much.

Obtaining the language specs

First item on the agenda is obtaining the specifications of the language, if they exist. If they don’t exist, you want to get hold of as many people as you can who might have the specification in their heads. In both cases, you also want to get hold of ‘prose’, i.e., actual content written using the language. It’s important to verify whether that prose is complete and valid, and if not: what error messages are expected. Also, pre-existing tooling can and should be used to validate your own efforts by comparing output from both on the same input.

In the case of CSS it’s relatively easy: all specifications can be found on the W3C web site. However, here it’s already clear that it’s not going to be a walk in the park: the specification is divided into numerous modules which have different status (recommendation vs. official) and varying applicability (through profiles/media: HTML, printing, hearing assistance, etc.).

For Less it’s slightly more difficult: the web site provides a lot of examples but it doesn’t constitute a spec. In fact, careful inspection of the less.js parser sources and unit tests shows that the spec is a bit wider than the examples with respect to selectors of mixins.

This showcases an important non-technical aspect of endeavors such as these:

Customer expectation management

Since the DSL already exists, it’s quite probable that it has domain users who have been using it and have certain expectations towards the newer implementation. Also, management will have its reasons to greenlight something that already exists in some shape or form and for which they already paid.

It’s important to get social with these customers, gauge their expectations and to set goals for your implementation accordingly. The golden rule here is to provide something that’s good enough to make it worth changing from the existing tooling to the newer tooling. By providing this value as early as possible, you will have an army of testers to battle-harden your implementation and iron out any bugs or incompatibilities.

The silver rule is not to overdo it: pick the initial scope responsibly, get a green light on the scope from users and management, deliver the value and harvest feedback.

In the case of an open-source endeavor like Less/CSS, this is quite a bit trickier: there are plenty of users (certainly of CSS) but probably none of them share a manager with you, so you have to rely on social media to get people to know your stuff, try it out and provide you with feedback. But even here you have to manage expectations. Note that you can also get some feedback by measuring download numbers and page traffic on the GitHub repository.

For Less/CSS, I’ve set the following initial scope after careful consideration:

Be able to completely parse all CSS and Less unit tests in the GitHub repo, giving errors only where the content really is invalid.

For CSS, this means that I’m ‘vaguely’ going to support CSS3. Obviously, this leaves quite a number of aspects out to dry. To name just a few:

  • Compliance to a documented sub set of the CSS specs.
  • Knowledge about actual CSS properties.
  • Validation of Less expressions (type safety) – this requires implementing an evaluation engine.
  • More sensible highlighting than the defaults provide.
  • Formatting.
  • Generation of CSS from Less.

However, I feel that basic compliance with the CSS and Less syntax – together with the niceties that Xtext provides out-of-the-box based on the grammar and a minimal implementation of scoping – reflects the minimum that would make the Less editor useful, in the sense that it improves the development experience over eliciting a response from the less.js parser.

Of course, a good CSS editor already exists in the Eclipse WTP project, so I only intend to support CSS to the point that it makes the Less editor more useful. The CSS editor is therefore currently nothing more than a runtime dependency of the Less editor. In fact, it’s more of a development convenience than anything else, at the moment. I might roll the CSS language into the Less language somewhere along the line – probably when I start generating the CSS-intrinsic parts of the grammar.

Future ‘releases’ (i.e., the occasional new download) will gradually provide more functionality, depending on the amount of time I allocate for this project which will also depend on the amount of feedback I get. So, if you like it, be sure to drop me a line saying that and also what you would like to see in the plug-ins.

Finally, a word on:

Harvesting pre-existing implementations

It’s rare, however, that pre-existing tooling can actually be used or even harvested to create an Xtext version, for numerous reasons. For example, the parsing technology used might be sufficiently different from Xtext’s LL(*) attribute grammar language to make a pre-existing grammar useless: it may be LALR(k), it may not be an attribute grammar (meaning that parsing and subsequent model creation are separate phases), or it may use things like semantic predicates and symbol tables to resolve parsing issues. Even an ANTLR grammar is usually quite insufficient to transform into an Xtext grammar.

So, whenever someone (probably a manager) suggests this approach, just point him/her to this paragraph 😉 On the other hand, having the source code of a pre-existing implementation really is very useful, especially as supplement or even substitute of a real language spec.

Please leave a comment if you are finding this useful. In case you’re not able to duplicate my efforts for your own DSL(s) or lack the time to do so yourself: you can hire me as a consultant.

Categories: DSLs, The How, Xtext

Path expressions in entity models, revisited

November 23, 2011 6 comments

Some 15 months ago, I wrote a blog detailing how to implement path-like expressions in Xtext DSLs using a custom scope provider. A lot has changed in the Xtext ’verse since then, and a comment on that blog triggered me to update it. Also, it seems to be one of my more popular posts, and I can refer to it every now and then on the Eclipse Xtext forum.

Most of what I wrote then which wasn’t particular to the ‘Xtext Domain-Model Example’ is still true: the scope provider API hasn’t changed (apart from IGlobalScopeProvider) and the way that scoping is triggered from the generated parser is fundamentally the same. However, the domain model example itself has changed a lot: it now serves to show (off) a lot of features that were introduced with Xtext 2 and Xtend(2). In particular, it relies heavily on Xbase by extending the grammar from Xtype (a sub language of Xbase) and extending the custom scope provider (in Xtend) and validator implementations (in Java) from their Xbase versions, meaning the DSL gets a lot of functionality essentially for free.

This also means that the grammar has changed enough from the original to make it rather delicate to adapt for my original intentions. In particular, the notions of Attribute and Reference have been merged into the notion of Property, which directly references a JVM type. To adapt the example I’d have to rely on classes like JvmFeatureDescriptionProvider in order not to re-invent the wheel, but I fear that all that extra machinery would get in the way of the idea I was trying to get across.

So, instead of that, I’ve opted for the easy way by starting from the original grammar and custom scope provider implementation, porting those over to Xtext 2.1(.0 or .1) and adapting those. In the process, I’ve tried to make the most of the latest, newest features of Xtext 2.x and especially Xtend: all custom code is done in Xtend, rather than Java.

For your convenience, I packed everything in ZIPs and uploaded them to my GitHub repository: complete Eclipse projects and only the essential files – I’m assuming you’ll find out how to interpret the latter in an Xtext context. For even more convenience, I exposed three of the essential files through GitHub’s gist feature: see below, in the running text.

It should be clearly discernible what I’ve added to the grammar to implement the path expressions. I’ve also added a PathElement convenience super type, just to make life a little easier (read: more statically-typed) on the Java/Xtend side of things.

I rewrote the scope provider implementation completely in Xtend. To be able to replace the generated class completely with DomainModelScopeProvider.xtend, you’ll have to add “generateStub = false” to the configuration of the scoping generator fragment in the MWE2 workflow file – otherwise, you’ll get two generated DomainModelScopeProvider classes after running the workflow. Note that the implementations of scope_Reference_opposite and scope_PathTail_feature are much more succinct and readable than they originally were, because of Xtend’s collection extensions and built-in polymorphic dispatch feature.

Also note that I made use of a separate Xtend file DomainModelExtensions implementing derived properties of model elements in a domain model. Xtend’s extension mechanism allows me to use these extensions transparently with no boilerplate at all apart from the extension declaration.

I also re-implemented the validator in Xtend. Because the Java validator generator fragment doesn’t have a generateStub parameter like the scoping fragment, we have to use a bit of boilerplate in the form of the DomainModelJavaValidator Java class extending the DomainModelXtendValidator Xtend class.

Lastly, I’ve implemented a couple of simple unit tests in Xtend using Xtext 2’s new JUnit4-ready unit testing capabilities. Note that it’s quite simple to do parameterized tests on a fairly functional level: see the ScopingTest Xtend class (in the .tests project) for details.

Categories: DSLs, The How, Xtend(2), Xtext

Implementing existing DSLs with Xtext – a case study, part 0

November 3, 2011 5 comments

This is the first installment in a series of blog posts on how to implement a “typical” existing DSL using Xtext. It’s intended as an informal guide showing how I tend to go about implementing a DSL, solving often-occurring problems and making design decisions. I’m actually going to implement two DSLs: CSS (version 3, on average) and Less. These are technical DSLs rather than domain DSLs, but I’ve only got a finite amount of time per week so live with it, alright?

The case of existing DSLs is both an interesting and a common one: we don’t have to design the language which means we can focus purely on the implementation and don’t have to bother with everything that goes into designing a good language. On the other hand, we are often forced to deal with “features” which might be cumbersome to implement in the “parsing tech du jour”. In the green field case, you’d have the freedom to reconsider having that feature in the language at all, but not here.

For this case study, I chose Less because it appealed to me: I kind-of loathe working with CSS directly. Although Less provides a way of getting feedback on syntactical and semantic validity through less.js, and highlighting/language assist modules have been developed for various editors, I’d rather have a full-fledged editor for the Less language instead of a tedious “change-compile-fix type” development cycle. Also, it seemed a rather good showcase for Xtext outside of the now-popular JVM language envelope.

It became apparent early on that I’d need a full implementation of CSS as well, since Less extends CSS; this gives me a chance to show how to do language composition as well as grammar generation with Xtext. Other aspects which will be addressed are: expression sub languages, type systems, evaluation, and code generation using model-to-model transformation.

This post doesn’t have any technical details yet: sorry! You can follow my efforts on/through Github, which holds the sources as well as a downloadable ZIP with deployable Eclipse plug-in JARs.

Categories: The How, Xtext

To mock a…DSL?

October 3, 2011 Leave a comment

Even with all these language workbenches and DSL frameworks which make creating a DSL a lot easier than it used to be, it’s still not a trivial matter – hence, it takes a non-trivial effort to implement even a rough first draft of a DSL. This also means that it makes sense to get an idea of what your DSL should actually look like before you really start hacking away at the implementation. This is especially true for graphical DSLs which are generally harder to build than textual DSLs, so you can save a lot of wasted efforts by investing in a well-executed mockup phase.

Very rarely will you have an actual language specification upfront. The situation that you’re creating a completely new DSL is both much more common and also much more interesting: it is first and foremost an adventure into Domain Land, in which you get to meet new people, see interesting things and discover hoards of hidden knowledge, often buried in caches of moldy Word or Excel documents. Often, a language-of-sorts is already around, but without a clearly-defined syntax, let alone a formal definition, and without any tooling. It’s our job, as DSL builders, to bring order to this chaos.

Get to know the domain

That’s why the first item on the agenda is getting to know the domain, and the people living it, really well. Us geeks have a tendency to cut corners here: after all, we managed to ace the Compiler and Algorithm Analysis courses, so what could possibly be difficult about, say, refrigerator configurations, right? But bear in mind that a DSL is, well, domain-specific, so you’d better make sure it fits into the heads of the people who make up your domain – and it had better fit well, making them more productive even taking into account that they need to learn new tools and probably a new way of working as well.

If the fit is sub-optimal, they’re likely to bin your efforts as soon as they’ve come across the first hurdle – which probably means you’ve lost your foot in the door regarding model-driven/DSL solutions. (Another way of saying the same thing is that a domain is defined by the group of people who are considered part of it, not the other way around.)

Make mockups

Therefore, the second item on the agenda is coming up with a bunch of mockups. The intent of these mockups is (1) for your domain users to get an idea of their DSL by gauging what actual content would look like expressed in it, and (2) for you to gain feedback from that to validate and guide your DSL design and implementation. It’s important that you use actual content the domain users are really familiar with for these mockups: introducing a DSL tends to be seen as a disruptive innovation even by the most innovation-prone people (and we all know that organizations are rife with that kind…) so your domain users must be able to see where things are going for them.

You don’t achieve that by using something that is too generic/unspecific (e.g., of the ‘Hello world!’ type), too small (which probably means you’re not even beginning to touch corner cases where it really matters what the DSL looks like and how much expressivity it allows) or not broad enough (i.e., overly focusing on a particular aspect the DSL addresses instead of achieving a good spread).

It’s also good to have a few variants of the DSL. There are a lot of parameters you can tweak in a DSL, many of them fairly continuously-valued:

  1. Is the syntax overly verbose or on-the-verge-of-cryptic minimal?
  2. How is the language factored: one big language or several small ones, each addressing one specific aspect?
  3. How is the prose (i.e., content expressed in the DSL) modularized: one big model, or spread over multiple files?
  4. Is there a way to make content re-usable or to make abstractions?
  5. How is visibility (scoping) and naming (e.g., qualified names with local imports) organized?

Each of these parameters (and also the many more I didn’t list) determine how good the fit with the people in the domain and their way of working is going to be. Also, the eventual language design is going to influence the engineering and the complexity of the implementation directly. The mockup phase is as good a time as any to find out how much leeway you have in the design to lighten the load and optimize implementation efforts and proposing variants is a good way of exploring the space.

What to make mockups with

What do you use for mockups? Anything that allows you to approximate the eventual syntax of the language. For textual DSLs, you can use anything that allows you to mimic the syntax highlighting (i.e., font style and color – I consider highlighting part of the syntax). For graphical DSLs, anything besides Paint could possibly work. In both cases, you’d best pick something that both you and your domain users are already familiar with so that it is easy for everyone involved to contribute to the process by changing stuff and even coming up with their own mockups. Chances are OpenOffice or any of its commercial rivals provide you with more than enough.

Obviously, you’re going to miss out on the tooling aspect: a good editing experience (content assist, navigation of references, outlines, semantic search/compare, etc.) makes up a large part of any modern DSL. Keep in mind though that the DSL prose is going to be read much more often than it is going to be written (i.e., modified or created), and the use of mockups reflects that. Martin Fowler coined the term ‘business-readable DSLs’ because of this and the fact that domain users who are really able to write DSL prose seem to be relatively rare. In any case, you should verify that your domain users will actually be able to create and modify prose, using only the proposed syntax and no tooling.


Having arrived at a good consensus on the DSL’s syntax and design, you can start hammering away at an implementation, knowing that the first working draft should not come as a surprise to your domain users. In true Agile fashion, you should present and share working but incomplete versions of the DSL implementation as soon and as often as possible and elicit feedback. This also counteracts the traditional “MDSD as a magical black box” thinking which is often present in the uninitiated.

To conclude: making mockups of your DSL before you start implementing is useful and saves you a lot of wasted implementation effort later on.

Categories: DSLs, The How

Using Xtext’s Xtend(2) language

September 19, 2011 4 comments

The 2.0 release of Xtext also saw the introduction of the Xtend language. “Wait a minute, didn’t that already exist in Xtext 1.0 and openArchitectureWare 4.x before that?” I hear you saying. Indeed, the new language should actually be called Xtend2 but since it’s meant to replace both Xtend(1) and Xpand, the makers have opted to drop the 2 suffix and assume that you’d know the difference. In any case, the languages differ in the file extensions used: .xtend for Xtend(2) and .xpt/.ext for Xpand/Xtend(1). Xpand and Xtend(1) are still part of the Xtext distribution, apparently for convenience, backwards compatibility and ease of migration, although these languages and their execution engines are no longer supported. I also noted that the Xtext generator still relies heavily on Xpand/Xtend(1).

I’ve been using Xtend (i.e., Xtend2, but I’m going to drop the ‘2’ from now on as well) for some time now as a replacement for Xpand/Xtend -and even JSP- and I wanted to share my experiences and impressions -mostly good- with you and discuss a number of differences between Xpand/Xtend and Xtend.

The good

Xtend provides a decidedly functional programming-flavored abstraction over Java. Xtend files are compiled down to Java source code on-the-fly (courtesy of an Xtext builder participant – more on that later). The pros of this approach are performance, integration and debuggability using ye ole’ Java debugger. The generated Java code is fairly readable and 1-to-1 with the original Xtend code so tracing back to that is not really difficult, although full traceability would have been a boon here. I’ve never gotten into the groove of Xtend(1)’s custom debugger, preferring println()-style debugging over it. Compilation also means that you can refer to the compiled Java classes from outside of Xtend. It’s even possible to use Xtend as a type-safe replacement language for JSP.

The rich strings are by far the biggest improvement over the old Xpand/Xtend combination: the intelligent white space handling is nothing short of brilliant. They bring templates into the functional realm: a template is now “just” a function, so you can freely mix templates and “regular” code as it makes sense. Don’t forget that a juxtaposition of rich strings evaluates all but only returns the last, though – there’s no automagical concatenation.

It’s extremely easy to hook into the Xtext builder mechanism using a builder participant. In fact, this is the out-of-the-box situation, so you only have to open the generated <MyDSL>Generator.xtend and implement the doGenerate function to have your UI language plug-in automagically fire up that generation on projects with the Xtext nature. Since the Xtext builder works in an incremental manner, the generation is only triggered for the files which have been touched or which (transitively) depend on them, whereas it used to be “the whole shebang”. If you factored and modularized your language in a sensible way, this means that turnaround for the users of your generator is much, much quicker.

The extension mechanism works just as in Xpand/Xtend. Both the dispatch and create semantics are just a bit more general than their Xpand/Xtend counterparts, which is good. I also like the as-syntax for casts: because of type inference, the end of an expression feels like a better place for the cast and it supports the usual thought process (“Oh wait, now I have to cast this too!”) better as well.

The bad^H^H^Hnot entirely fortunate

To be fair: I’ve only found a few things less fortunate and they are definitely not show stoppers – they all have reasonable workarounds or are quite minor to begin with. But, since this is the interWebs I’m going to share them with you regardless 😉 You might run into them yourself, after all.

The biggest thing is that Xtend derives its type system directly from the Java type system. Xpand/Xtend had a type system which allowed you to compose various type systems, with the EMF and JavaBeans type systems being the most widely used after the built-in type system which came with a whole lot of convenience for the common base and collection types. In Xtend, you essentially only have an improved JavaBeans type system so you’ll have to rely on what Java and its libraries offer you or else you’ll have to knock it out yourself.

In particular, the following useful features of Xpand/Xtend are missing:

  • The «EXPAND … FOREACH …» construct. This means that you find yourself typing «FOR item : items»«item.doSomething»«ENDFOR» quite often, especially since it’s often quite natural to factor out the contents of the loop. The workaround for this consists of a utility function taking a closure, but the result is slightly less aesthetically pleasing than it used to be.
  • Syntactic sugar (shorthand) for property collection: “collection.property” was Xpand/Xtend shorthand for “collection.collect(i|i.property)”. In Xtend neither is possible and you’ll have to use .fold or .reduce to achieve the same, and the equivalent code is nowhere near as readable as its Xpand/Xtend counterpart.
  • Using this as a parameter name. It was perfectly alright to use this as a parameter name in the definitions of Xtend functions and Xpand DEFINEs. It was even OK to say «FOREACH items AS this»…«ENDFOREACH». Since “property” is shorthand for “this.property” (it still is), this allowed you to create extremely succinct code. The User Guide mentions that it should be possible to re-define this but I couldn’t get that to work, so you’ll have to qualify all object access with the name of the parameter or variable.
On the IDE side of things, I miss Java’s automatic ordering and cleaning of imports. Also, the content assist on Java types in the Xtend editor doesn’t do the “capital trick” where, e.g., “DCPF” expands to “DefaultConstructionProxyFactory”.
Lastly, Xtend doesn’t offer control over visibility of features in the generated Java and also doesn’t support abstract, static or (non-default) constructors. This has led me to use Java for building APIs and implementing the typical abstract/support part of it and Xtend for the “final” implementation – which tends to benefit most from the succinct syntax and features such as closures and rich strings.

The ugly

Again: I’ve only found a few of these and none of them are show stoppers.

First of all, the compilation to Java does not always yield compiling code. (This is in full compliance with Joel Spolsky’s Law of Leaky Abstractions, since Xtend is an abstraction of Java.) Although the User’s Guide mentions that uncaught, checked exceptions thrown from your code are automatically wrapped in unchecked exceptions, this is not the case: instead, they are added to the throws-clause of the containing method (corresponding to an Xtend function). This can break an override function or it can wreak havoc on code using this function. I’ve found myself writing a lot of try-catches to cope, which detracted from the otherwise quite succinct syntax quite a bit. Obviously, I would have had to write these anyway if I were using Java, but that’s not the point of Xtend, I think. (To compound the matter: you can’t explicitly declare a throws-clause in Xtend.)

Also, generic types involving wildcards (‘?’) are not watertight, although it’s fair to say this is a major problem with the Java type system in general and often enough extremely hard to get right. Not using wildcards is almost always possible, so that’s the obvious workaround.


All in all, and despite my few, slight misgivings, Xtend is quite an improvement over Xpand/Xtend. I’d heartily recommend starting to use it, both as a replacement for Xpand/Xtend (and JSP) and for Java itself.

Categories: Xtend(2)

Annotation-based dispatch for scope providers

One of the slightly awkward aspects of Xtext is that the org.eclipse.xtext.scoping.impl.AbstractDeclarativeScopeProvider class essentially relies on a naming convention to dispatch scoping to the correct method. Such a strategy is quite brittle when you’re changing your grammar (or the Ecore meta model) as it doesn’t alert you to the fact that method names should change as well.

In an earlier blog, I already gave a tip to help deal with both this as well as with knowing which method to actually implement, but that doesn’t make this a statically-checked enterprise yet. The challenge here is to come up with a suitable compile-time Java representation of the references and types (or in grammar-terms: features and parser rules) involved, otherwise it wouldn’t be static checking, right? Unfortunately, the rather useful Java representation of references provided by the <MyDSL>PackageImpl class is a purely run-time representation.

Instead, I chose to make do with the IDs defined for types and their features in the <MyDSL>Package class to come up with an implementation of an annotation-based strategy for scope provider specification. It turns out that this is rather easy to do by extending the AbstractDeclarativeScopeProvider class. I’ve pushed my implementation -aptly called AbstractAnnotationBasedDeclarativeScopeProvider (which would probably score a triple word value in a game of Scrabble)- to my open-source GitHub repository: the source and the plugin JAR.


Usage is quite simple (as it should be): have your scope provider class extend AbstractAnnotationBasedDeclarativeScopeProvider and add either a ScopeForType or ScopeForReference annotation (both conveniently contained in the AbstractAnnotationBasedDeclarativeScopeProvider class) to the scoping methods. The ScopeForType annotation takes the class(ifier) ID of the EClass to scope for. The ScopeForReference annotation also takes the feature ID of the reference (contained by the EClass specified) to scope for. Both these IDs are found in the <MyDSL>Package class as simple int constants. Note that it’s not checked whether these IDs actually belong together (in the second case) as the <MyDSL>Package class doesn’t actually encode that information.

As long as you use the <MyDSL>Package class to obtain IDs, the annotation is checked at compile-time, so that when something changes, chances are good that the annotation breaks and you’re alerted to the fact that you have to change something.

As an example, consider the DomainmodelScopeProvider scope implementation class for the Domain Model example project shipped with Xtext 1.0.x: have that class extend AbstractAnnotationBasedDeclarativeScopeProvider and add the following annotation to the scope_Reference_opposite method to use the annotation-based strategy.

@ScopeForReference(classId=DomainmodelPackage.REFERENCE, featureId=DomainmodelPackage.REFERENCE__OPPOSITE)

The current implementation is nice enough to tell you (as a warning in the log4j log) when it encounters a method which seems to comply with the naming-based strategy, but it nevertheless refuses to call that method, to avoid mixed-strategy behavior. I might change this behavior in the future to remove this nicety or to make it configurable (and non-default), though.
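To give a feel for the mechanism, here is a simplified, self-contained Java sketch of the annotation-based dispatch. Everything in it (names, ID values, return type) is a stand-in, not the actual implementation: the real scoping methods return an IScope and take context and reference arguments, and the real annotations live in AbstractAnnotationBasedDeclarativeScopeProvider.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class ScopeDispatchSketch {

    // Stand-in for the ScopeForReference annotation: it carries the two
    // int IDs from the generated <MyDSL>Package class.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface ScopeForReference {
        int classId();
        int featureId();
    }

    // Stand-ins for DomainmodelPackage.REFERENCE and
    // DomainmodelPackage.REFERENCE__OPPOSITE; the real values come from
    // the generated package class.
    public static final int REFERENCE = 7;
    public static final int REFERENCE__OPPOSITE = 3;

    @ScopeForReference(classId = REFERENCE, featureId = REFERENCE__OPPOSITE)
    public String scope_Reference_opposite() {
        return "the scope for Reference.opposite";
    }

    // The dispatch itself: scan the provider's methods for an annotation
    // matching the (classId, featureId) pair, instead of matching on the
    // method name.
    public static Method findScopingMethod(Class<?> provider, int classId, int featureId) {
        for (Method method : provider.getMethods()) {
            ScopeForReference ann = method.getAnnotation(ScopeForReference.class);
            if (ann != null && ann.classId() == classId && ann.featureId() == featureId) {
                return method;
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        Method method = findScopingMethod(ScopeDispatchSketch.class, REFERENCE, REFERENCE__OPPOSITE);
        // prints: the scope for Reference.opposite
        System.out.println(method.invoke(new ScopeDispatchSketch()));
    }
}
```

The point of the construction is that the method body is found via the annotation’s ID pair rather than via its name, so renaming the method cannot silently break the dispatch.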

Design decisions

I could have used a Class<? extends EObject> instance to specify the EClass (or rather, the Java type of the instances of that). The example given above would then look as follows.

@ScopeForReference(class=Reference.class, featureId=DomainmodelPackage.REFERENCE__OPPOSITE)

However, you still need the right feature ID to specify the EReference, so I chose to stick to using two IDs which clearly belong together, as that communicates a little more clearly. I also thought it better to use standard Ecore infrastructure throughout and not to rely on the particular way Ecore maps to actual Java classes.

Let me know…

…what you think of this approach! Is it useful? Is the notation clear? Do you have a preference for the use of the Class<? extends EObject>-style? Be sure to drop me a line.

Categories: The How, Xtext

Checklist for Xtext DSL implementations

April 8, 2011 1 comment

Currently I’m in the US, working with a team that’s building a number of DSLs with Xtext and has been doing that for some time already. The interesting thing is that this team is quite proficient at doing that and at tackling all sorts of gnarly problems (coming either from a legacy language which they have to emulate to some level or from requirements of the end users), even though most of them have only been working with Xtext for a few months. However, during the past week I realized that I unconsciously use a collection of properties which I check/look for in Xtext DSLs, and since I use it unconsciously I wasn’t really aware of the fact that not everyone was using the same thing. In effect, the team had already run into problems which they had solved, either completely or partly, in places downstream from the root causes of the problem. The root causes generally resided at the level of the grammar or the scope provider implementation and would (for the most part) have been covered by my unconscious checklist. Had the team had my checklist, they would probably have saved both time and headaches.

Since existing sources (i.e., the Xtext User Guide and, e.g., Markus Völter’s “MD* Best Practices” paper) are either reference-typed or quite general and somewhat hard to map to daily Xtext practice, I figured I’d better make this list explicit. I divvied the checklist up into three sections: one concerning the Generate<MyDsl>.mwe2 file, one concerning the grammar file and one concerning the Java artifacts which augment the grammar.

Generator workflow

  1. Do the EPackages imported in the grammar file correspond 1:1 with the referencedGenModels in the workflow?
  2. Do you know/understand what the configured fragments (especially pertaining to naming, scoping, validation) provide out-of-the-box?
  3. Is backtracking set to false (default) in the options configuration for the XtextAntlrGeneratorFragment? Backtracking is rarely needed and, unless it is, enabling it introduces quite a performance hit. More importantly, it might hide ambiguities in the grammar (i.e., they don’t get reported during the generation phase) at a point where you didn’t need the backtracking anyway.
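For reference, the relevant snippet in the Generate<MyDsl>.mwe2 workflow looks roughly like this (backtracking is shown explicitly only for illustration; leaving the options block out altogether keeps the false default):

```
fragment = parser.antlr.XtextAntlrGeneratorFragment {
    options = {
        backtrack = false
    }
}
```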

To expand a little on the second item, here’s a list of the most important choices you’ve got:

  • naming: exporting.SimpleNamesFragment versus exporting.QualifiedNamesFragment
  • scoping: scoping.ImportURIScopingFragment versus scoping.ImportNamespacesScopingFragment
  • validation.JavaValidatorFragment has two composedChecks by default: ImportUriValidator which validates importURI occurrences (only useful in case you’ve configured the ImportURIGlobalScopeProvider in the runtime module, either manually or by using ImportURIScopingFragment), and NamesAreUniqueValidator (which checks whether all objects exported from the current Resource have unique qualified names).


Grammar

  1. Any left-recursion? This should be pretty obvious, since the Xtext generator breaks anyway and leaves the DSL projects in an unbuildable state.
  2. No ambiguities (red error messages coming from the ANTLR parser generator)? Ambiguities generally come either from ambiguities at the token level (e.g., a choice ‘|’ whose alternatives consume the same token types) or from overlapping terminal rules (somewhat rarer, since creating new terminal rules and/or changing existing ones is fortunately not that common).
  3. Does the grammar provide semantics which are not syntactical in nature? Generally: grammar is for syntax, the rest (scope provision, validation, name provision, etc.) is for semantics.
  4. Did you document the grammar by documenting the semantics of each of the rules, also specifying aspects such as naming, scoping, validation, formatting, etc. (unfortunately, in comment-form only)? Since the grammar is the starting point of the DSL implementation, it’s usually best to put as much info in there as possible…
  5. Did you add a {Foo} unassigned action to the rules which do not necessarily assign to the type? (Saves you from unexpected NullPointerExceptions.)
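To illustrate the last item with a made-up rule: if the parser can finish a rule without performing a single assignment, no object is created at all and you end up with an unexpected null. An unassigned action forces instantiation regardless:

```
// without the {ConfigSection} action, 'config' followed by zero entries
// would produce null instead of an empty ConfigSection object
ConfigSection:
    {ConfigSection} 'config' (entries+=Entry)*;
```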

To expand on the second item pertaining to ambiguities:

  • Most ambiguities of the first kind are introduced by an incorrect setup of an expression sub-language. Make sure you use the pattern(s) described in Sven’s and two of my blog posts.
  • Favor recursive over linear structures in the context of references into recursive structures. This makes implementing the scope provider all the easier (or even: possible). For an example of this, see this blog post.
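As a minimal, made-up illustration of an ambiguity at the token level: both alternatives below consist of a single ID token (a cross-reference is parsed as an ID by default), so the parser cannot decide between them and ANTLR will report the choice as ambiguous:

```
Usage:
    'use' (ref=[Element] | name=ID);
```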

Java artifacts

First some checks which pertain to implementation of the custom local scope provider:

  1. Are you using the “narrow” form (signature: IScope scope_<Type>_<Reference>(<ContextType> context, EReference ref), where Reference is a feature of Type) as much as possible?
  2. Are you using the “wide” form (signature: IScope scope_<Type>(<ContextType> context, EReference ref)) where it makes sense?
  3. Have you chosen the ContextType (see previous items) to be convenient, so you don’t need to travel up the containment hierarchy?

For the rest of the Java artifacts:

  1. Is your custom IQualifiedNameProvider implementation bound in the runtime module?
  2. Does the bound IQualifiedNameProvider implementation compute a qualified name for the model root? (Important in case you’re using the org.eclipse.xtext.mwe.Reader class.)
  3. Have you implemented value converters (see §5.7) for all the data type rules in the grammar?
  4. Have you bound the value converter class in the runtime module?
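To sketch what the last two items involve, here is the conversion logic for a hypothetical quoted-string data type rule, stripped of all framework wiring so it stands alone. The class and rule names are made up; the real thing would implement org.eclipse.xtext.conversion.IValueConverter<String> and be bound via the value converter service in the runtime module.

```java
// Framework-free sketch of the two conversion directions a value
// converter provides: parsed text -> model value, and model value ->
// text (used when serializing the model back to its textual form).
public class QuotedStringConverterSketch {

    // text -> model value: strip the surrounding quotes, unescape
    // embedded quotes
    public String toValue(String text) {
        String inner = text.substring(1, text.length() - 1);
        return inner.replace("\\\"", "\"");
    }

    // model value -> text: escape embedded quotes, wrap in quotes
    public String toText(String value) {
        return "\"" + value.replace("\"", "\\\"") + "\"";
    }

    public static void main(String[] args) {
        QuotedStringConverterSketch converter = new QuotedStringConverterSketch();
        // prints: say "hi"
        System.out.println(converter.toValue("\"say \\\"hi\\\"\""));
    }
}
```

The two methods should be exact inverses of each other; forgetting a converter for a data type rule is a classic source of models that serialize differently from how they were parsed.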
Categories: DSLs, The How, Xtext