Archive

Archive for the ‘The How’ Category

Using syntactic predicates in Xtext, part 1

December 5, 2011

Xtext 2.x comes with the possibility to define syntactic predicates in the grammar. But what exactly are these syntactic predicates and how can they be used to avoid or resolve ambiguities in your grammar? The reference documentation is characteristically succinct on the subject. This might mean that it’s either very simple or very complicated 😉

In short: syntactic predicates provide a way to force the parser to make certain choices by annotating the grammar using a ‘=>’.

Fortunately, it’s actually quite simple but you have to dive a little deeper into the parsing technology used by Xtext to really understand it. Xtext uses ANTLR* ‘under the hood’ to generate the lexer and recursive-descent parser. To leverage ANTLR, Xtext generates an** ANTLR grammar from an Xtext one. As such, it is ANTLR that does most of the heavy lifting while the Xtext runtime sort-of piggybacks on the ‘stuff’ ANTLR generates to build a full model from the parsed text and provide the functionality that ANTLR doesn’t.

During the generation of lexer and parser, ANTLR performs a thorough analysis of the grammar generated by Xtext to check for non-LL(*) behavior (i.e., left-recursion) and nondeterminism (i.e., ambiguities) in the grammar. The former it deals with by reporting an error “[fatal] rule xxx has non-LL(*) decision due to recursive rule invocations reachable from alts n, m, …. Resolve by left-factoring or using syntactic predicates or using backtrack=true option.” for every left-recursive rule and quitting the process, leaving you with a broken Xtext project. Left-recursion usually originates from trying to implement an expression language along the lines of

Expression:
      Literal
    | '(' Expression ')'
    | left=Expression op=('+'|'-'|'*'|'/') right=Expression

There’s a string of material (see here, here and here) detailing the ‘right’ patterns for writing such languages in a non-left-recursive manner in Xtext which also take care of precedence and associativity. Since those patterns don’t use syntactic predicates (well, they can but it’s not essential), I won’t talk about these any more here.
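For reference though, the canonical non-left-recursive shape looks roughly like this (a sketch only, with assumed rule and type names; the linked material covers precedence and associativity properly):

Expression:
    Addition;

Addition returns Expression:
    Multiplication ({BinaryExpression.left=current} op=('+'|'-') right=Multiplication)*;

Multiplication returns Expression:
    Primary ({BinaryExpression.left=current} op=('*'|'/') right=Primary)*;

Primary returns Expression:
    '(' Expression ')' | Literal;

The tree-rewrite actions ({BinaryExpression.left=current}) build the left-associative tree structure without requiring left-recursion in the parser rules.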

Switching on backtracking should really be the very last option you try: it doesn’t guarantee to solve the problems your grammar has, but it does guarantee to obscure them, simply by not reporting any, even the ones that are easy to fix. Furthermore, backtracking ‘merely’ tries all the possible options, picking the first one that works: in essence it’s a ‘precognitive’ syntactic predicate, but at the expense of time and memory. If we can tweak our grammar with syntactic predicates so that no backtracking is required, we get a parser that performs better and more predictably, if only because we’ve documented part of its behavior in the grammar.

The perfunctory example: the dangling else-problem

The most well-known application of syntactic predicates is also the simplest. Consider this grammar (header stuff omitted):

Model:
    statement+=IfStatement*;

IfStatement:
    'if' condition=Expression 'then' then=Expression
    ('else' else=Expression)?;

Expression:
    IfStatement | {ValueReference} name=ID;

When having Xtext generate the language infrastructure for this grammar, you’ll get a warning from ANTLR saying “Decision can match input such as “‘else’” using multiple alternatives: 1, 2. As a result, alternative(s) 2 were disabled for that input”. This means that there is an ambiguity in the grammar. ANTLR detects this and makes a choice for you, because otherwise it would have to return a forest of parse trees instead of just one per parse, or roll a die to cope with the nondeterminism. We’ll see in a minute that a syntactic predicate allows you to make this choice yourself, instead of having to rely on ANTLR to pick the right one – with the chance of your luck running out.

Of course, we were already expecting this behavior, so let’s fire up ANTLRWorks on the InternalMyDsl.g file in the principal/non-UI Xtext project (easily found using the Ctrl/⌘-Shift-R shortcut) to see how we might use that in general. First, ask ANTLRWorks to perform the same analysis ANTLR itself does during parser generation, through Ctrl/⌘-R. Then, click the rule IfStatement (conveniently marked in red) to see its Syntax Diagram. This will look like this:

Since ANTLR already reported it will only use alternative 1, this is the way the if-statement will be parsed: the optional else-part will be matched as part of the current invocation of the IfStatement rule. For the canonical example input “if a then if b then c else d”, this means that the parse will be equivalent to “if a then (if b then c else d)”, i.e., the else-part belongs to the second, inner if-statement and not to the first, outer if-statement. This is usually what we want, since it complies with most existing languages, and also because the else-part is visually closer to the inner if, so it’s more natural that it binds to that instead of to the outer if.

By unchecking alternative 1 and checking alternative 2, we get the following:

Now, these ‘faulty’ diagrams in ANTLRWorks are usually a bit funky to interpret because the arrows don’t really seem to start/end in logical places. In this case, we should read this as: the optional else-part can also be matched as part of the invocation of the IfStatement rule invoking the IfStatement rule for a second time – it’s probably convenient to think of the outer, respectively, inner invocation. For our ubiquitous example input, it would mean that the parse is equivalent to “if a then (if b then c) else d” – with the else-part belonging to the first, outer if-statement and not to the inner if-statement.

Note that it’s a bit hard to implement a recursive-descent parser with this behavior, since the execution of the inner IfStatement rule should somehow decide to leave the matching and consumption of the following ‘else’ keyword to the execution of an outer IfStatement rule (not necessarily the direct caller rule!). ANTLR tends to favor directly matching and consuming tokens as soon as possible, by the currently-called parser rule, over such a circuitous approach.

You can influence the alternative-picking behavior by placing syntactic predicates in the Xtext grammar. One advantage is that you make the choice explicit in your grammar, which both serves to document it and eradicates the corresponding warning(s). Another advantage is that you can make a different choice from the one ANTLR would make: in fact, you can ‘trigger’ a syntactic predicate not only from a single token but also from a series of tokens – more on that in the next post. Note that syntactic predicates favor the quickest match as well – by design.

Syntactic predicates in an Xtext grammar consist of a ‘=>’ in front of a keyword, rule call, assignment (i.e., an assigned rule call) or a grouped parse expression (including any cardinality). So, in our case the IfStatement rule becomes:

IfStatement:
    'if' condition=Expression 'then' then=Expression
    (=>'else' else=Expression)?;

The ‘=>’ now forces ANTLR to not consider the second alternative path at all and to always match and directly consume an ‘else’ and the ensuing Expression. This happens to match the situation without a syntactic predicate – but now the behavior is clearly intentional rather than happenstance.

Since this post already runs to some length, I’m deferring some more examples, insights and hints & tips to the next one. One of the examples will revolve around some common GPL-like language features which can be difficult to implement without syntactic predicates but are blissfully uncomplicated with them.

*) Not entirely by default, but it’s thoroughly recommended: see this explanation for more details on that matter.
**) Actually, Xtext generates two ANTLR grammars: one for full parsing, and one which extracts just enough information to drive the content assist functionality. They’re essentially the same as far as the pure ANTLR part is concerned.


Implementing existing DSLs with Xtext – a case study, part 1

November 28, 2011

Unlike the previous installment, which held no technical details whatsoever, this one’s going to get a tad greasier, but not much.

Obtaining the language specs

First item on the agenda is obtaining the specifications of the language, if they exist. If they don’t, you want to get hold of as many people as you can who might have the specification in their heads. In both cases, you also want to get hold of ‘prose’, i.e., actual content written using the language. It’s important to verify whether that prose is complete and valid, and if not: which error messages are expected. Also, pre-existing tooling can and should be used to validate your own efforts, by comparing the output of both on the same input.

In the case of CSS it’s relatively easy: all specifications can be found on the W3C web site. However, here it’s already clear that it’s not going to be a walk in the park: the specification is divided into numerous modules which have different status (recommendation vs. official) and varying applicability (through profiles/media: HTML, printing, hearing assistance, etc.).

For Less it’s slightly more difficult: the web site provides a lot of examples but it doesn’t constitute a spec. In fact, careful inspection of the less.js parser sources and unit tests shows that the spec is a bit wider than the examples with respect to selectors of mixins.

This showcases an important non-technical aspect of endeavors such as these:

Customer expectation management

Since the DSL already exists, it’s quite probable that it has domain users who have been using it and have certain expectations towards the newer implementation. Also, management will have its reasons to greenlight a re-implementation of something that already exists in some shape or form and that they already paid for.

It’s important to get social with these customers, gauge their expectations and to set goals for your implementation accordingly. The golden rule here is to provide something that’s good enough to make it worth changing from the existing tooling to the newer tooling. By providing this value as early as possible, you will have an army of testers to battle-harden your implementation and iron out any bugs or incompatibilities.

The silver rule is not to overdo it: pick the initial scope responsibly, get a green light on the scope from users and management, deliver the value and harvest feedback.

In the case of an open-source endeavor like Less/CSS, this is quite a bit trickier: there are plenty of users (certainly of CSS) but probably none of them share a manager with you, so you have to rely on social media to get people to know your stuff, try it out and provide you with feedback. But even here you have to manage expectations. Note that you can also get some feedback by measuring download numbers and page traffic on the GitHub repository.

For Less/CSS, I’ve set the following initial scope after careful consideration:

Be able to completely parse all CSS and Less unit tests in the GitHub repo, giving errors only where the content really is invalid.

For CSS, this means that I’m ‘vaguely’ going to support CSS3. Obviously, this leaves quite a number of aspects out to dry. To name just a few:

  • Compliance to a documented sub set of the CSS specs.
  • Knowledge about actual CSS properties.
  • Validation of Less expressions (type safety) – this requires implementing an evaluation engine.
  • More sensible highlighting than the defaults provide.
  • Formatting.
  • Generation of CSS from Less.

However, I feel that basic compliance with the CSS and Less syntax, combined with the niceties Xtext provides out-of-the-box based on the grammar and a minimal implementation of scoping, is the minimum that makes the Less editor useful, in the sense that it improves the development experience over eliciting a response from the less.js parser.

Of course, a good CSS editor already exists in the Eclipse WTP project, so I only intend to support CSS to the point that it makes the Less editor more useful. The CSS editor is therefore currently nothing more than a runtime dependency of the Less editor. In fact, it’s more of a development convenience than anything else at the moment. I might roll the CSS language into the Less language somewhere along the line – probably when I start generating the CSS-intrinsic parts of the grammar.

Future ‘releases’ (i.e., the occasional new download) will gradually provide more functionality, depending on the amount of time I allocate for this project which will also depend on the amount of feedback I get. So, if you like it, be sure to drop me a line saying that and also what you would like to see in the plug-ins.

Finally, a word on:

Harvesting pre-existing implementations

It’s rare that pre-existing tooling can actually be re-used or even harvested to create an Xtext version, though, for numerous reasons. For example, the parsing tech used might be sufficiently different from Xtext’s LL(*) attribute grammar language to make a pre-existing grammar useless: e.g., it may be LALR(k), it may not be an attribute grammar (meaning that parsing and subsequent model creation are separate phases) or it may use things like semantic predicates and symbol tables to resolve parsing issues. Even an ANTLR grammar is usually quite insufficient to transform into an Xtext grammar.

So, whenever someone (probably a manager) suggests this approach, just point him/her to this paragraph 😉 On the other hand, having the source code of a pre-existing implementation really is very useful, especially as a supplement to or even substitute for a real language spec.

Please leave a comment if you are finding this useful. In case you’re not able to duplicate my efforts for your own DSL(s) or lack the time to do so yourself: you can hire me as a consultant.

Categories: DSLs, The How, Xtext

Path expressions in entity models, revisited

November 23, 2011

Some 15 months ago, I wrote a blog post detailing how to implement path-like expressions in Xtext DSLs using a custom scope provider. A lot has changed in the Xtext ’verse since then, and I was triggered to update that post by a comment on it. Also, it seems to be one of my more popular posts and I can refer to it every now and then on the Eclipse Xtext forum.

Most of what I wrote then which wasn’t particular to the ‘Xtext Domain-Model Example’ is still true: the scope provider API hasn’t changed (apart from IGlobalScopeProvider) and the way that scoping is triggered from the generated parser is fundamentally the same. However, the domain model example itself has changed a lot: it now serves to show (off) a lot of features that were introduced with Xtext 2 and Xtend(2). In particular, it relies heavily on Xbase by extending the grammar from Xtype (a sub language of Xbase) and extending the custom scope provider (in Xtend) and validator implementations (in Java) from their Xbase versions, meaning the DSL gets a lot of functionality essentially for free.

This also means that the grammar has changed enough from the original to make it rather delicate to adapt for my original intentions. In particular, the notions of Attribute and Reference have been merged into the notion of Property, which directly references a JVM type. To adapt the example I’d have to rely on classes like JvmFeatureDescriptionProvider in order not to re-invent the wheel, but I fear that bringing in all that extra machinery would get in the way of the idea I was trying to get across.

So, instead of that, I’ve opted for the easy way: starting from the original grammar and custom scope provider implementation, porting those over to Xtext 2.1(.0 or .1) and adapting them. In the process, I’ve tried to make the most of the newest features of Xtext 2.x and especially Xtend: all custom code is written in Xtend, rather than Java.

For your convenience, I packed everything in ZIPs and uploaded them to my GitHub repository: complete Eclipse projects and only the essential files – I’m assuming you’ll find out how to interpret the latter in an Xtext context. For even more convenience, I exposed three of the essential files through GitHub’s gist feature: see below, in the running text.

It should be clearly discernible what I’ve added to the grammar to implement the path expressions. I’ve also added a PathElement convenience super type, just to make life a little easier (read: more statically-typed) on the Java/Xtend side of things.
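To give a rough impression of the shape of such rules, a hypothetical sketch (not the repository’s actual grammar; the scope_PathTail_feature method mentioned below does imply a PathTail rule with a feature cross-reference):

PathExpression:
    head=[Feature] tail=PathTail?;

PathTail:
    '.' feature=[Feature] tail=PathTail?;

The recursive tail makes it straightforward to scope each feature reference against the type of the preceding path element; PathElement then serves as the common super type of both rules’ types.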

I rewrote the scope provider implementation completely in Xtend. To be able to replace DomainModelScopeProvider.java completely with DomainModelScopeProvider.xtend, you’ll have to add “generateStub = false” to the configuration of the scoping generator fragment in the MWE2 workflow file – otherwise, you’ll get two generated DomainModelScopeProvider classes after running the workflow. Note that the implementations of scope_Reference_opposite and scope_PathTail_feature are much more succinct and readable than they originally were, because of Xtend’s collection extensions and built-in polymorphic dispatch feature.
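A sketch of the relevant bit of the workflow (fragment and property names as in the default generated workflow of that era; details may differ per Xtext version):

// in Generate<MyDsl>.mwe2, within the language configuration:
fragment = scoping.ImportNamespacesScopingFragment {
    generateStub = false
}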

Also note that I made use of a separate Xtend file DomainModelExtensions, implementing derived properties of model elements in a domain model. Xtend’s extension mechanism allows me to use these extensions transparently, with no boilerplate at all apart from the extension declaration.
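In Xtend, using such an extension file boils down to a single injected declaration; a sketch (the derived property allFeatures is made up for illustration):

class DomainModelScopeProvider extends AbstractDeclarativeScopeProvider {

    // all methods of DomainModelExtensions become available as extension methods
    @Inject extension DomainModelExtensions

    // scoping methods can now write, e.g., entity.allFeatures
    // instead of extensions.allFeatures(entity)
}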

I also re-implemented the validator in Xtend. Because the Java validator generator fragment doesn’t have a generateStub parameter like the scoping fragment, we have to use a bit of boilerplate in the form of the DomainModelJavaValidator Java class extending the DomainModelXtendValidator Xtend class.
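That boilerplate amounts to nothing more than an empty subclass (class names as mentioned above):

// exists only so the generated infrastructure finds a Java validator class with the expected name
public class DomainModelJavaValidator extends DomainModelXtendValidator {}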

Lastly, I’ve implemented a couple of simple unit tests in Xtend using Xtext 2’s new JUnit4-ready unit testing capabilities. Note that it’s quite simple to do parameterized tests on a fairly functional level: see the ScopingTest Xtend class (in the .tests project) for details.

Categories: DSLs, The How, Xtend(2), Xtext

Implementing existing DSLs with Xtext – a case study, part 0

November 3, 2011

This is the first installment in a series of blog posts on how to implement a “typical” existing DSL using Xtext. It’s intended as an informal guide showing how I tend to go about implementing a DSL, solving often-occurring problems and making design decisions. I’m actually going to implement two DSLs: CSS (version 3, on average) and Less. These are technical DSLs rather than domain DSLs, but I’ve only got a finite amount of time per week, so live with it, alright?

The case of existing DSLs is both an interesting and a common one: we don’t have to design the language which means we can focus purely on the implementation and don’t have to bother with everything that goes into designing a good language. On the other hand, we are often forced to deal with “features” which might be cumbersome to implement in the “parsing tech du jour”. In the green field case, you’d have the freedom to reconsider having that feature in the language at all, but not here.

For this case study, I chose Less because it appealed to me: I kind of loathe working with CSS directly. Although Less provides a way of getting feedback on syntactic and semantic validity through less.js, and highlighting/language assist modules have been developed for various editors, I’d rather have a full-fledged editor for the Less language than a tedious “change-compile-fix” development cycle. Also, it seemed a rather good showcase for Xtext outside of the now-popular JVM language envelope.

It became apparent early on that I’d need a full implementation of CSS as well, since Less extends CSS. This gives me a chance to show how to do language composition as well as grammar generation with Xtext. Other aspects which will be addressed are: expression sub languages, type systems, evaluation and code generation using model-to-model transformation.

This post doesn’t have any technical details yet: sorry! You can follow my efforts on/through GitHub, which holds the sources as well as a downloadable ZIP with deployable Eclipse plug-in JARs.

Categories: The How, Xtext

To mock a…DSL?

October 3, 2011

Even with all these language workbenches and DSL frameworks which make creating a DSL a lot easier than it used to be, it’s still not a trivial matter – hence, it takes a non-trivial effort to implement even a rough first draft of a DSL. This also means that it makes sense to get an idea of what your DSL should actually look like before you really start hacking away at the implementation. This is especially true for graphical DSLs, which are generally harder to build than textual DSLs, so you can save yourself a lot of wasted effort by investing in a well-executed mockup phase.

Very rarely will you have an actual language specification upfront. The situation where you’re creating a completely new DSL is both much more common and much more interesting: it is first and foremost an adventure into Domain Land, in which you get to meet new people, see interesting things and discover hoards of hidden knowledge, often buried in caches of moldy Word or Excel documents. Often, a language-of-sorts is already around, but without a clearly-defined syntax, let alone a formal definition, and without any tooling. It’s our job, as DSL builders, to bring order to this chaos.

Get to know the domain

That’s why the first item on the agenda is getting to know the domain, and the people living it, really well. Us geeks have a tendency to cut corners here: after all, we managed to ace the Compiler and Algorithm Analysis courses, so what could possibly be difficult about, say, refrigerator configurations, right? But bear in mind that a DSL is, well, domain-specific, so you’d better make sure it fits into the heads of the people who make up your domain – and it had better fit well, making them more productive even taking into account that they need to learn new tools and probably a new way of working as well.

If the fit is sub-optimal, they’re likely to bin your efforts as soon as they’ve come across the first hurdle – which probably means you’ve also lost your foot in the door regarding model-driven/DSL solutions. (Another way of saying the same thing is that a domain is defined by the group of people which are considered part of it, not the other way around.)

Make mockups

Therefore, the second item on the agenda is coming up with a bunch of mockups. The intent of these mockups is (1) for your domain users to get an idea of their DSL by gauging what actual content would look like expressed in it, and (2) for you to gain feedback from that to validate and guide your DSL design and implementation. It’s important that you use actual content the domain users are really familiar with for these mockups: introducing a DSL tends to be seen as a disruptive innovation even by the most innovation-prone people (and we all know that organizations are rife with that kind…) so your domain users must be able to see where things are going for them.

You don’t achieve that by using something that is too generic/unspecific (e.g., of the ‘Hello world!’ type), too small (which probably means you’re not even beginning to touch corner cases where it really matters what the DSL looks like and how much expressivity it allows) or not broad enough (i.e., overly focusing on a particular aspect the DSL addresses instead of achieving a good spread).

It’s also good to have a few variants for the DSL. There are a lot of, often fairly continuously-valued parameters you can tweak in a DSL:

  1. Is the syntax overly verbose or on-the-verge-of-cryptic minimal?
  2. How is the language factored: one big language or several small ones, each addressing one specific aspect?
  3. How is the prose (i.e., content expressed in the DSL) modularized: one big model, or spread over multiple files?
  4. Is there a way to make content re-usable or to make abstractions?
  5. How is visibility (scoping) and naming (e.g., qualified names with local imports) organized?

Each of these parameters (and the many more I didn’t list) determines how good the fit with the people in the domain and their way of working is going to be. Also, the eventual language design is going to influence the engineering and the complexity of the implementation directly. The mockup phase is as good a time as any to find out how much leeway you have in the design to lighten the load and optimize implementation effort, and proposing variants is a good way of exploring that space.

What to make mockups with

What do you use for mockups? Anything that allows you to approximate the eventual syntax of the language. For textual DSLs, you can use anything that allows you to mimic the syntax highlighting (i.e., font style and color – I consider highlighting part of the syntax). For graphical DSLs, anything besides Paint could possibly work. In both cases, you’d best pick something that both you and your domain users are already familiar with so that it is easy for everyone involved to contribute to the process by changing stuff and even coming up with their own mockups. Chances are OpenOffice or any of its commercial rivals provide you with more than enough.

Obviously, you’re going to miss out on the tooling aspect: a good editing experience (content assist, navigation of references, outlines, semantic search/compare, etc.) makes up a large part of any modern DSL. Keep in mind though that the DSL prose is going to be read much more often than it is going to be written (i.e., modified or created), and the use of mockups reflects that. Martin Fowler coined the term ‘business-readable DSLs’ because of this and because domain users who are really able to write DSL prose seem to be relatively rare. In any case, you should test whether your domain users will actually be able to create and modify prose, using only the proposed syntax and no tooling.

Conclusion

Having arrived at a good consensus on the DSL’s syntax and design, you can start hammering away at an implementation, knowing that the first working draft should not come as a surprise to your domain users. In true Agile fashion, you should present and share working but incomplete versions of the DSL implementation as soon and as often as possible and elicit feedback. This also counters the traditional “MDSD as a magical black box” thinking which is often present in the uninitiated.

To conclude: making mockups of your DSL before you start implementing is useful and saves you a lot of wasted implementation effort later on.

Categories: DSLs, The How

Annotation-based dispatch for scope providers

One of the slightly awkward aspects of Xtext is that the org.eclipse.xtext.scoping.impl.AbstractDeclarativeScopeProvider class essentially relies on a naming convention to dispatch scoping to the correct method. Such a strategy is quite brittle when you’re changing your grammar (or the Ecore meta model) as it doesn’t alert you to the fact that method names should change as well.

In an earlier blog, I already gave a tip to help deal with both this as well as with knowing which method to actually implement, but that doesn’t make this a statically-checked enterprise yet. The challenge here is to come up with a suitable compile-time Java representation of the references and types (or in grammar-terms: features and parser rules) involved, otherwise it wouldn’t be static checking, right? Unfortunately, the rather useful Java representation of references provided by the <MyDSL>PackageImpl class is a purely run-time representation.

Instead, I chose to make do with the IDs defined for types and their features in the <MyDSL>Package class to come up with an implementation of an annotation-based strategy for scope provider specification. It turns out that this is rather easy to do by extending the AbstractDeclarativeScopeProvider class. I’ve pushed my implementation, aptly called AbstractAnnotationBasedDeclarativeScopeProvider (which would probably score a triple word value in a game of Scrabble), to my open-source GitHub repository: the source and the plugin JAR.

Usage

Usage is quite simple (as it should be): have your scope provider class extend AbstractAnnotationBasedDeclarativeScopeProvider and add either a ScopeForType or ScopeForReference annotation (both conveniently contained in the AbstractAnnotationBasedDeclarativeScopeProvider class) to the scoping methods. The ScopeForType annotation takes the class(ifier) ID of the EClass to scope for. The ScopeForReference annotation also takes the feature ID of the reference (contained by the EClass specified) to scope for. Both these IDs are found in the <MyDSL>Package class as simple int constants. Note that it’s not checked whether these IDs actually belong together (in the second case), as the <MyDSL>Package class doesn’t actually encode that information.

As long as you use the <MyDSL>Package class to obtain IDs, this notation is checked at compile-time so that when something changes, chances are good that the annotation breaks and you’re alerted to the fact you have to change something.

As an example, consider the DomainmodelScopeProvider scope implementation class for the Domain Model example project shipped with Xtext 1.0.x: have that class extend AbstractAnnotationBasedDeclarativeScopeProvider and add the following annotation to the scope_Reference_opposite method to use the annotation-based strategy.

@ScopeForReference(classId=DomainmodelPackage.REFERENCE, featureId=DomainmodelPackage.REFERENCE__OPPOSITE)
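In context, that looks something like this sketch (placeholder body; note that with annotation-based dispatch the method name itself no longer carries meaning, though keeping the naming convention doesn’t hurt):

public class DomainmodelScopeProvider extends AbstractAnnotationBasedDeclarativeScopeProvider {

    @ScopeForReference(classId=DomainmodelPackage.REFERENCE, featureId=DomainmodelPackage.REFERENCE__OPPOSITE)
    public IScope scope_Reference_opposite(Reference reference, EReference ref) {
        return IScope.NULLSCOPE; // placeholder: compute the actual scope here
    }
}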

The current implementation is nice enough to tell you (as a warning in the log4j log) when it encounters a method which seems to comply with the naming-based strategy, but it nevertheless refuses to call that method, to avoid mixed-strategy behavior. I might change this behavior in the future to remove this nicety or to make it configurable (and non-default), though.

Design decisions

I could have used a Class<? extends EObject> instance to specify the EClass (or rather, the Java type of the instances of that). The example given above would then look as follows.

@ScopeForReference(class=Reference.class, featureId=DomainmodelPackage.REFERENCE__OPPOSITE)

However, you’d still need the right feature ID to specify the EReference, so I chose to stick to using two IDs which clearly belong together, as that communicates a little more clearly. I also thought it better to use standard Ecore infrastructure throughout and not to rely on the particular way Ecore maps to actual Java classes.

Let me know…

…what you think of this approach! Is it useful? Is the notation clear? Do you have a preference for the Class<? extends EObject>-style? Be sure to drop me a line.

Categories: The How, Xtext

Checklist for Xtext DSL implementations

April 8, 2011

Currently I’m in the US, working with a team that’s been building a number of DSLs with Xtext for some time already. The interesting thing is that this team is quite proficient at that, tackling all sorts of gnarly problems (coming either from a legacy language which they have to emulate to some level or from requirements coming from the end users), even though most of them have only been working with Xtext for a few months. However, during the past week I realized that I unconsciously use a collection of properties which I check/look for in Xtext DSLs, and since I use it unconsciously I wasn’t really aware of the fact that not everyone was using the same thing. In effect, the team had already run into problems which they had solved, either completely or partly, in places downstream from the root causes. The root causes generally resided at the level of the grammar or the scope provider implementation and would (for the most part) have been covered by my unconscious checklist. Had the team had my checklist, they’d probably have saved both time and headaches.

Since existing sources (i.e., the Xtext User Guide and, e.g., Markus Völter’s “MD* Best Practices” paper) are either reference-style or quite general and somewhat hard to map to daily Xtext practice, I figured I’d better make this list explicit. I divvied the checklist up into three sections: one concerning the Generate<MyDsl>.mwe2 file, one concerning the grammar file and one concerning the Java artifacts which augment the grammar.

Generator workflow

  1. Do the EPackages imported in the grammar file correspond 1:1 with the referencedGenModels in the workflow?
  2. Do you know/understand what the configured fragments (especially pertaining to naming, scoping, validation) provide out-of-the-box?
  3. Is backtracking set to false (the default) in the options configuration for the XtextAntlrGeneratorFragment? I find that backtracking is rarely needed and, unless it is, enabling it introduces quite a performance hit. More importantly, it might hide ambiguities in the grammar (i.e., they don’t get reported during the generation phase) at a point where you didn’t need the backtracking anyway. (A configuration sketch follows below.)

To expand a little on the second item, here’s a list of the most important choices you’ve got:

  • naming: exporting.SimpleNamesFragment versus exporting.QualifiedNamesFragment
  • scoping: scoping.ImportURIScopingFragment versus scoping.ImportNamespacesScopingFragment
  • validation.JavaValidatorFragment has two composedChecks by default: ImportUriValidator which validates importURI occurrences (only useful in case you’ve configured the ImportURIGlobalScopeProvider in the runtime module, either manually or by using ImportURIScopingFragment), and NamesAreUniqueValidator (which checks whether all objects exported from the current Resource have unique qualified names).
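Regarding the third item, this is where that option lives; a sketch of the fragment configuration in the Generate<MyDsl>.mwe2 file (names as in the default generated workflow):

fragment = parser.antlr.XtextAntlrGeneratorFragment {
    options = {
        backtrack = false // the default; only switch this on as a very last resort
    }
}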

Grammar

  1. Any left-recursion? This should be pretty obvious, since the Xtext generator breaks anyway and leaves the DSL projects in an unbuildable state.
  2. No ambiguities (red error messages coming from the ANTLR parser generator)? Ambiguities generally come either from ambiguities at the token level (e.g., alternatives of a choice (‘|’) which consume the same token types) or from overlapping terminal rules (somewhat rarer, since creating new terminal rules and/or changing existing ones is fortunately not that common).
  3. Does the grammar provide semantics which are not syntactical in nature? Generally: grammar is for syntax, the rest (scope provision, validation, name provision, etc.) is for semantics.
  4. Did you document the grammar by documenting the semantics of each of the rules, also specifying aspects such as naming, scoping, validation, formatting, etc. (unfortunately, in comment form only)? Since the grammar is the starting point of the DSL implementation, it’s usually best to put as much info in there as possible…
  5. Did you add a {Foo} unassigned action to the rules which do not necessarily assign to the type? (Saves you from unexpected NullPointerExceptions; a sketch follows this list.)
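Regarding the fifth item, a minimal sketch (rule and type names made up): without the unassigned action, parsing an empty document would yield a null model root.

Model:
    {Model}            // forces creation of a Model instance, even when no elements are parsed
    elements+=Element*;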

To expand on the second item pertaining to ambiguities:

  • Most ambiguities of the first kind are introduced by an incorrect setup of an expression sub language. Make sure you use the pattern(s) described in Sven’s and two of my blog posts.
  • Favor recursive over linear structures in the context of references into recursive structures. This makes implementing the scope provider all the easier (or even: possible at all). For an example of this, see this blog post.

Java artifacts

First some checks which pertain to implementation of the custom local scope provider:

  1. Are you using the “narrow” form (signature: IScope scope_<Type>_<Reference>(<ContextType> context, EReference ref), where Reference is a feature of Type) as much as possible? (Sketched below.)
  2. Are you using the “wide” form (signature: IScope scope_<Type>(<ContextType> context, EReference ref)) where it makes sense?
  3. Have you chosen the ContextType (see the previous items) to be convenient, so you don’t need to travel up the containment hierarchy?
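To illustrate the first two forms, a sketch with type names loosely borrowed from the domain model example (placeholder bodies):

// "narrow" form: invoked only to scope the 'opposite' feature of a Reference
IScope scope_Reference_opposite(Reference context, EReference ref) {
    return IScope.NULLSCOPE; // placeholder
}

// "wide" form: invoked for any cross-reference to an Entity from within the given context type
IScope scope_Entity(Domainmodel context, EReference ref) {
    return IScope.NULLSCOPE; // placeholder
}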

For the rest of the Java artifacts:

  1. Is your custom IQualifiedNameProvider implementation bound in the runtime module?
  2. Does the bound IQualifiedNameProvider implementation compute a qualified name for the model root? (Important in case you’re using the org.eclipse.xtext.mwe.Reader class.)
  3. Have you implemented value converters (see §5.7) for all the data type rules in the grammar? (Sketched below.)
  4. Have you bound the value converter class in the runtime module?
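To illustrate the last two items, a sketch using the Xtext 2.x value converter APIs (the Decimal data type rule and all class names are made up; imports from org.eclipse.xtext.conversion and org.eclipse.xtext.nodemodel omitted):

public class MyDslValueConverters extends DefaultTerminalConverters {

    @ValueConverter(rule = "Decimal")
    public IValueConverter<BigDecimal> Decimal() {
        return new IValueConverter<BigDecimal>() {
            public BigDecimal toValue(String string, INode node) {
                return new BigDecimal(string); // textual representation -> value
            }
            public String toString(BigDecimal value) {
                return value.toPlainString(); // value -> textual representation
            }
        };
    }
}

…and the corresponding binding in the runtime module:

public Class<? extends IValueConverterService> bindIValueConverterService() {
    return MyDslValueConverters.class;
}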
Categories: DSLs, The How, Xtext