Implementing existing DSLs with Xtext – a case study, part 1

November 28, 2011

Unlike the previous installment, which held no technical details whatsoever, this one’s going to get a tad greasier, but not much.

Obtaining the language specs

First item on the agenda is obtaining the specifications of the language, if they exist. If they don’t exist, you want to get hold of as many people as you can who might have the specification in their heads. In both cases, you also want to get hold of ‘prose’, i.e., actual content written using the language. It’s important to verify whether that prose is complete and valid, and if not: what error messages are expected. Also, pre-existing tooling can and should be used to validate your own efforts, by comparing the output of both on the same input.

In the case of CSS it’s relatively easy: all specifications can be found on the W3C web site. However, even there it’s already clear that it’s not going to be a walk in the park: the specification is divided into numerous modules with differing maturity levels (working draft vs. official recommendation) and varying applicability (through profiles/media: HTML, printing, hearing assistance, etc.).

For Less it’s slightly more difficult: the web site provides a lot of examples, but these don’t constitute a spec. In fact, careful inspection of the less.js parser sources and unit tests shows that the spec is a bit wider than the examples with respect to the selectors of mixins.

This showcases an important non-technical aspect of endeavors such as these:

Customer expectation management

Since the DSL already exists, it’s quite probable that it has domain users who have been using it and have certain expectations towards this newer implementation. Also, management will have had its reasons for greenlighting a re-implementation of something that already exists in some shape or form and for which they already paid.

It’s important to get social with these customers, gauge their expectations and to set goals for your implementation accordingly. The golden rule here is to provide something that’s good enough to make it worth changing from the existing tooling to the newer tooling. By providing this value as early as possible, you will have an army of testers to battle-harden your implementation and iron out any bugs or incompatibilities.

The silver rule is not to overdo it: pick the initial scope responsibly, get a green light on the scope from users and management, deliver the value and harvest feedback.

In the case of an open-source endeavor like Less/CSS, this is quite a bit trickier: there are plenty of users (certainly of CSS) but probably none of them share a manager with you, so you have to rely on social media to get people to know your stuff, try it out and provide you with feedback. But even here you have to manage expectations. Note that you can also get some feedback by measuring download numbers and page traffic on the GitHub repository.

For Less/CSS, I’ve set the following initial scope after careful consideration:

Be able to completely parse all CSS and Less unit tests in the GitHub repo, giving errors only where the content really is invalid.

For CSS, this means that I’m ‘vaguely’ going to support CSS3. Obviously, this leaves quite a number of aspects out to dry. To name just a few:

  • Compliance to a documented subset of the CSS specs.
  • Knowledge about actual CSS properties.
  • Validation of Less expressions (type safety) – this requires implementing an evaluation engine.
  • More sensible highlighting than the defaults provide.
  • Formatting.
  • Generation of CSS from Less.

However, I feel that basic compliance to the CSS and Less syntax, with the niceties that Xtext provides out-of-the-box based on the grammar and a minimal implementation of scoping, constitutes the minimum that would make the Less editor useful, in the sense that it improves the development experience over eliciting a response from the less.js parser.

Of course, a good CSS editor already exists in the Eclipse WTP project, so I only intend to support CSS to the point that it makes the Less editor more useful. The CSS editor is therefore currently nothing more than a runtime dependency of the Less editor. In fact, it’s more of a development convenience than anything else, at the moment. I might roll the CSS language into the Less language somewhere along the line – probably when I start generating the CSS-intrinsic parts of the grammar.

Future ‘releases’ (i.e., the occasional new download) will gradually provide more functionality, depending on the amount of time I allocate for this project, which will in turn depend on the amount of feedback I get. So, if you like it, be sure to drop me a line saying so, and also what you would like to see in the plug-ins.

Finally, a word on:

Harvesting pre-existing implementations

However, for numerous reasons it’s rare that pre-existing tooling can actually be used, or even harvested, to create an Xtext version. For example, the parsing tech used might be sufficiently different from Xtext’s LL(*) attribute grammar language to make a pre-existing grammar useless: it may be LALR(k), it may not be an attribute grammar (meaning that parsing and subsequent model creation are separate phases) or it may use things like semantic predicates and symbol tables to resolve parsing issues. Even an ANTLR grammar usually needs so much reworking that transforming it into an Xtext grammar is hardly worth it.

So, whenever someone (probably a manager) suggests this approach, just point him/her to this paragraph 😉 On the other hand, having the source code of a pre-existing implementation really is very useful, especially as a supplement to, or even a substitute for, a real language spec.

Please leave a comment if you are finding this useful. In case you’re not able to duplicate my efforts for your own DSL(s) or lack the time to do so yourself: you can hire me as a consultant.

Categories: DSLs, The How, Xtext

Path expressions in entity models, revisited

November 23, 2011

Some 15 months ago, I wrote a blog detailing how to implement path-like expressions in Xtext DSLs using a custom scope provider. A lot has changed in the Xtext ‘verse since then, and I was triggered to update that blog by a comment on it. Also, the blog seems to be one of my more popular ones and I refer to it every now and then on the Eclipse Xtext forum.

Most of what I wrote then that wasn’t particular to the ‘Xtext Domain-Model Example’ is still true: the scope provider API hasn’t changed (apart from IGlobalScopeProvider) and the way that scoping is triggered from the generated parser is fundamentally the same. However, the domain model example itself has changed a lot: it now serves to show (off) a lot of features that were introduced with Xtext 2 and Xtend(2). In particular, it relies heavily on Xbase by extending the grammar from Xtype (a sub language of Xbase) and by extending the custom scope provider (in Xtend) and validator implementation (in Java) from their Xbase versions, meaning the DSL gets a lot of functionality essentially for free.

This also means that the grammar has changed enough from the original to make it rather delicate to adapt for my original intentions. In particular, the notions of Attribute and Reference have been merged into the notion of Property, which directly references a JVM type. To adapt the example I’d have to rely on classes like JvmFeatureDescriptionProvider in order not to re-invent the wheel, but I fear that bringing in all that extra machinery would get in the way of the idea I was trying to get across.

So, instead of that, I’ve opted for the easy way by starting from the original grammar and custom scope provider implementation, porting those over to Xtext 2.1(.0 or .1) and adapting those. In the process, I’ve tried to make the most of the latest, newest features of Xtext 2.x and especially Xtend: all custom code is done in Xtend, rather than Java.

For your convenience, I packed everything in ZIPs and uploaded them to my GitHub repository: complete Eclipse projects and only the essential files – I’m assuming you’ll find out how to interpret the latter in an Xtext context. For even more convenience, I exposed three of the essential files through GitHub’s gist feature: see below, in the running text.

It should be clearly discernible what I’ve added to the grammar to implement the path expressions. I’ve also added a PathElement convenience super type, just to make life a little easier (read: more statically-typed) on the Java/Xtend side of things.

I rewrote the scope provider implementation completely in Xtend. To be able to replace DomainModelScopeProvider.java completely with DomainModelScopeProvider.xtend, you’ll have to add “generateStub = false” to the configuration of the scoping generator fragment in the MWE2 workflow file – otherwise, you’ll get two generated DomainModelScopeProvider classes after running the workflow. Note that the implementations of scope_Reference_opposite and scope_PathTail_feature are much more succinct and readable than they originally were, because of Xtend’s collection extensions and built-in polymorphic dispatch feature.

Also note that I made use of a separate Xtend file DomainModelExtensions implementing derived properties of model elements in a domain model. Xtend’s extension mechanism allows me to use these extensions transparently with no boilerplate at all apart from the extension declaration.

I also re-implemented the validator in Xtend. Because the Java validator generator fragment doesn’t have a generateStub parameter like the scoping fragment does, we have to use a bit of boilerplate in the form of a DomainModelJavaValidator Java class extending the DomainModelXtendValidator Xtend class.
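
For illustration, here’s a minimal sketch of that boilerplate, assuming the real @Check methods live in the Xtend-written superclass (class names as in the example project):

// Intentionally empty: this class only exists because the generator
// fragment is hard-wired to a Java validator class with this name;
// all actual @Check methods live in the Xtend-written superclass.
public class DomainModelJavaValidator extends DomainModelXtendValidator {
}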

Lastly, I’ve implemented a couple of simple unit tests in Xtend using Xtext 2’s new JUnit4-ready unit testing capabilities. Note that it’s quite simple to do parameterized tests on a fairly functional level: see the ScopingTest Xtend class (in the .tests project) for details.

Categories: DSLs, The How, Xtend(2), Xtext

On documentation (or lack thereof)

November 16, 2011

There are probably enough books, articles, blogs and tweets on documentation to fill a library, so there’s absolutely no need for me to vent my thoughts. Fsck it, I’m going to do so anyway, spurred on by a recent tweet “discussion” and a general lack of Javadoc on classes of one of my favorite frameworks. And don’t worry: I’ll make it personal and not based on any existing literature or body of knowledge whatsoever, so it’s completely fresh and unadulterated 😉

My initial grumbling tweet read “Hmmm, why do devs confuse beta/experimental with ‘no need to document’.” To which I got the reply from @markijbema: “Most documentation is still too far away from coding. I’m willing to document if it’s DRY and away from my normal workflow.” (Note that this post is not a rant about that tweet: the tweet just happened to trigger me to write up the beliefs I’ve had for some time now.)

Now, I completely agree with the DRY-part. After all, we’ve all seen code like this:

/**
 * Increments the given integer with 1.
 *
 * @return The given integer, incremented with 1.
 * @param i
 *            - the integer to be incremented.
 */
public int increment(int i) {
  return ++i; // increment i with 1 and return
}

Good for your LoC count, bad for everything else. This complies with Mark’s remark about documentation (still) being too far away from the code. However, good documentation is not too close to the code either: documentation should augment the code but let the code speak for itself as much as possible. To give one good reference, despite all my promises in the first paragraph: go read Kent Beck’s Implementation Patterns.

I completely disagree with Mark’s assertion that documenting shouldn’t be part of a normal workflow, though. There are a couple of reasons for this. First of all, if it isn’t part of a normal workflow, it’s not going to happen, period. Now, a lack of documentation could very well be more desirable than bad documentation, since it can’t put anyone on the wrong foot and doesn’t cost any effort to produce.

Alas, the real world often insists on some documentation, which usually means it will be done poorly: by someone other than the original coder, that someone being a non-coder or a very bad one since he/she isn’t allowed to touch the code anymore, and about 5 minutes before the last deadline in about as many minutes. (This goes some way in explaining why 70% of all software never makes it to Production.)

Secondly, the effort it takes to write good documentation is an inverse measure of the understandability or the engineering quality of the code: if it’s difficult to describe what some piece of code does, then I’ll bet that the code is difficult to understand, hence difficult to use, to maintain and to change. If it’s easy to write good documentation, then you’re done in no time at all and there’s no reason why you shouldn’t have taken the effort 🙂

How I do it (or at least: try to)

For myself, I tend to document my code on the ‘module’ level (which can mean anything from a single Java package to a coherent set of Eclipse projects – whatever constitutes a sensible, functional/technical breakup of the code base), on the class level and on the feature level (meaning fields or methods). Also, I tend to document design decisions in the code, right at the point where I made them.

On the module level, I stick to high-level stuff (what am I trying to achieve, what are the dependencies in terms of infrastructure, libraries and development environment, what does the road map look like and how can someone influence it, how can one contribute), possibly combined with pointers to ‘hooks’ in the code base which act as starting points from which you can traverse the code base in a meaningful direction.

On the class level I tend to document aspects like the class’ responsibility and usage patterns (What is this class intended to achieve? How can this class be used by other classes to that end?) and nothing more, apart from design decisions and such. Note that the usage patterns tend to include dependencies.

I’ve often noticed that documenting the class’ responsibility in a high-level manner is useful for yourself, but it’s especially important for other people who try to understand your code base. As an example: I know quite a bit about Xtext, but its code base generally lacks JavaDoc, which means I have to second-guess the function of a class and how to use it. This also means I’m not going to notice gems like o.e.x.util.OnChangeEvictingCache until I stumble across them and see someone else use them.

On the feature level, I try to be as succinct as possible, especially with private features, but I don’t restrict documentation to public features. After all, visibility should be about separation of concerns, coherence and loose coupling and doesn’t pertain to the code’s understandability as it transcends the boundaries of visibility. I think quite a bit about good method names, but also about parameter names, even in languages that only have positional instead of named parameters (like Java) since these do show up in the documentation and can be used to convey meaning. Or the other way around: badly chosen names can cause more confusion than documentation can cure.

Overall, I try to tie the documentation as closely as possible to the actual code by use of JavaDoc or similar DSLs, since that gives me a kind of limited static type safety on the documentation. As a rule, I reserve the “how” for in-code comments (e.g., Java single or multi-line comments) and the “what” and “why” for the ‘official documentation’ (e.g., JavaDoc).
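
A made-up Java example of that division of labor (names and logic are purely illustrative):

import java.util.ArrayList;
import java.util.List;

public class ReportUtil {

  /**
   * Collapses runs of consecutive, equal entries into a single entry
   * (the "what"), so that downstream reporting code never sees the same
   * entry twice in a row (the "why").
   */
  public static List<String> collapseRuns(List<String> entries) {
    List<String> result = new ArrayList<String>();
    for (String entry : entries) {
      // the "how": compare against the entry added last, so only
      // consecutive duplicates are dropped
      if (result.isEmpty() || !result.get(result.size() - 1).equals(entry)) {
        result.add(entry);
      }
    }
    return result;
  }
}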

Categories: SD in general

Implementing existing DSLs with Xtext – a case study, part 0

November 3, 2011

This is the first installment in a series of blog posts on how to implement a “typical” existing DSL using Xtext. It’s intended as an informal guide showing how I tend to go about implementing a DSL, solving often-occurring problems and making design decisions. I’m actually going to implement two DSLs: CSS (version 3, on average) and Less. These are technical DSLs rather than domain DSLs, but I’ve only got a finite amount of time per week so live with it, alright?

The case of existing DSLs is both an interesting and a common one: we don’t have to design the language which means we can focus purely on the implementation and don’t have to bother with everything that goes into designing a good language. On the other hand, we are often forced to deal with “features” which might be cumbersome to implement in the “parsing tech du jour”. In the green field case, you’d have the freedom to reconsider having that feature in the language at all, but not here.

For this case study, I chose Less because it appealed to me: I kind-of loathe working with CSS directly. Although Less provides a way of getting feedback on syntactical and semantic validity through less.js, and highlighting/language assist modules have been developed for various editors, I’d rather have a full-fledged editor for the Less language instead of a tedious “change-compile-fix” development cycle. Also, it seemed a rather good showcase for Xtext outside of the now-popular JVM language envelope.

It became apparent early on that I’d need a full implementation of CSS as well, since Less extends CSS; this gives me a chance to show how to do language composition as well as grammar generation with Xtext. Other aspects which will be addressed are: expression sub languages, type systems, evaluation and code generation using model-to-model transformation.

This post doesn’t have any technical details yet: sorry! You can follow my efforts on/through GitHub, which holds the sources as well as a downloadable ZIP with deployable Eclipse plug-in JARs.

Categories: The How, Xtext

To mock a…DSL?

October 3, 2011

Even with all these language workbenches and DSL frameworks which make creating a DSL a lot easier than it used to be, it’s still not a trivial matter – hence, it takes a non-trivial effort to implement even a rough first draft of a DSL. This also means that it makes sense to get an idea of what your DSL should actually look like before you really start hacking away at the implementation. This is especially true for graphical DSLs, which are generally harder to build than textual DSLs, so a well-executed mockup phase can save you a lot of wasted effort.

Very rarely will you have an actual language specification upfront. The situation that you’re creating a completely new DSL is both much more common and much more interesting: it is first and foremost an adventure into Domain Land, in which you get to meet new people, see interesting things and discover hoards of hidden knowledge, often buried in caches of moldy Word or Excel documents. Often, a language-of-sorts is already around, but without a clearly-defined syntax, let alone a formal definition, and without any tooling. It’s our job, as DSL builders, to bring order to this chaos.

Get to know the domain

That’s why the first item on the agenda is getting to know the domain, and the people living it, really well. Us geeks have a tendency to cut corners here: after all, we managed to ace the Compiler and Algorithm Analysis courses, so what could possibly be difficult about, say, refrigerator configurations, right? But bear in mind that a DSL is, well, domain-specific, so you’d better make sure it fits into the heads of the people who make up your domain – and it had better fit well, making them more productive even taking into account that they need to learn new tools and probably a new way of working as well.

If the fit is sub-optimal, they’re likely to bin your efforts as soon as they’ve come across the first hurdle – which probably also means that you’ve lost your foot in the door regarding model-driven/DSL solutions. (Another way of saying the same thing is that a domain is defined by the group of people which are considered part of it, not the other way around.)

Make mockups

Therefore, the second item on the agenda is coming up with a bunch of mockups. The intent of these mockups is (1) for your domain users to get an idea of their DSL by gauging what actual content would look like expressed in it, and (2) for you to gain feedback from that to validate and guide your DSL design and implementation. It’s important that you use actual content the domain users are really familiar with for these mockups: introducing a DSL tends to be seen as a disruptive innovation even by the most innovation-prone people (and we all know that organizations are rife with that kind…) so your domain users must be able to see where things are going for them.

You don’t achieve that by using something that is too generic/unspecific (e.g., of the ‘Hello world!’ type), too small (which probably means you’re not even beginning to touch corner cases where it really matters what the DSL looks like and how much expressivity it allows) or not broad enough (i.e., overly focusing on a particular aspect the DSL addresses instead of achieving a good spread).

It’s also good to have a few variants of the DSL. There are a lot of parameters, often fairly continuously-valued, that you can tweak in a DSL:

  1. Is the syntax overly verbose or on-the-verge-of-cryptic minimal?
  2. How is the language factored: one big language or several small ones, each addressing one specific aspect?
  3. How is the prose (i.e., content expressed in the DSL) modularized: one big model, or spread over multiple files?
  4. Is there a way to make content re-usable or to make abstractions?
  5. How is visibility (scoping) and naming (e.g., qualified names with local imports) organized?

Each of these parameters (and the many more I didn’t list) determines how good the fit with the people in the domain and their way of working is going to be. Also, the eventual language design is going to directly influence the engineering and the complexity of the implementation. The mockup phase is as good a time as any to find out how much leeway you have in the design to lighten the load and optimize implementation efforts, and proposing variants is a good way of exploring that space.

What to make mockups with

What do you use for mockups? Anything that allows you to approximate the eventual syntax of the language. For textual DSLs, you can use anything that allows you to mimic the syntax highlighting (i.e., font style and color – I consider highlighting part of the syntax). For graphical DSLs, anything besides Paint could possibly work. In both cases, you’d best pick something that both you and your domain users are already familiar with, so that it is easy for everyone involved to contribute to the process by changing stuff and even coming up with their own mockups. Chances are OpenOffice or any of its commercial rivals provides you with more than enough.

Obviously, you’re going to miss out on the tooling aspect: a good editing experience (content assist, navigation of references, outlines, semantic search/compare, etc.) makes up a large part of any modern DSL. Keep in mind, though, that the DSL prose is going to be read much more often than it is going to be written (i.e., modified or created), and the use of mockups reflects that. Martin Fowler coined the term ‘business-readable DSLs’ because of this and the fact that domain users who are really able to write DSL prose seem to be relatively rare. In any case, you should test whether your domain users will actually be able to create and modify prose, using only the proposed syntax and no tooling.

Conclusion

Having arrived at a good consensus on the DSL’s syntax and design, you can start hammering away at an implementation, knowing that the first working draft should not come as a surprise to your domain users. In true Agile fashion, you should present and share working-but-incomplete versions of the DSL implementation as soon and as often as possible and elicit feedback. This also counters the traditional “MDSD as a magical black box” thinking which is often present in the uninitiated.

To conclude: making mockups of your DSL before you start implementing is useful and saves you a lot of wasted implementation effort later on.

Categories: DSLs, The How

Using Xtext’s Xtend(2) language

September 19, 2011

The 2.0 release of Xtext also saw the introduction of the Xtend language. “Wait a minute, didn’t that already exist in Xtext 1.0 and openArchitectureWare 4.x before that?” I hear you saying. Indeed, the new language should actually be called Xtend2, but since it’s meant to replace both Xtend(1) and Xpand, the makers have opted to drop the 2 suffix and assume that you know the difference. In any case, the languages differ in the file extensions used: .xtend for Xtend(2) and .xpt/.ext for Xpand/Xtend(1). Xpand and Xtend(1) are still part of the Xtext distribution, apparently for convenience, backwards compatibility and ease of migration, although these languages and their execution engines are no longer supported. I also noted that the Xtext generator itself still relies heavily on Xpand/Xtend(1).

I’ve been using Xtend (i.e., Xtend2, but I’m going to drop the ‘2’ from now on as well) for some time now as a replacement for Xpand/Xtend -and even JSP- and I wanted to share my experiences and impressions -mostly good- with you and discuss a number of differences between Xpand/Xtend and Xtend.

The good

Xtend provides a decidedly functional programming-flavored abstraction over Java. Xtend files are compiled down to Java source code on-the-fly (courtesy of an Xtext builder participant – more on that later). The pros of this approach are performance, integration and debuggability using ye olde Java debugger. The generated Java code is fairly readable and 1-to-1 with the original Xtend code, so tracing back to that is not really difficult, although full traceability would have been a boon here. I’ve never gotten into the groove of Xtend(1)’s custom debugger, preferring println()-style debugging over it. Compilation also means that you can refer to the compiled Java classes from outside of Xtend. It’s even possible to use Xtend as a type-safe replacement language for JSP.

The rich strings are by far the biggest improvement over the old Xpand/Xtend combination: the intelligent white space handling is nothing short of brilliant. They bring templates into the functional realm: a template is now “just” a function, so you can freely mix templates and “regular” code as it makes sense. Don’t forget that a juxtaposition of rich strings evaluates all but only returns the last, though – there’s no automagical concatenation.

It’s extremely easy to hook into the Xtext builder mechanism using a builder participant. In fact, this is the out-of-the-box situation, so you only have to open the generated <MyDSL>Generator.xtend and implement the doGenerate function to have your UI language plug-in automagically fire up that generation on projects with the Xtext nature. Since the Xtext builder works in an incremental manner, the generation is only triggered for the files which have been touched or which (transitively) depend on them, whereas it used to be “the whole shebang”. If you factored and modularized your language in a sensible way, this means that turnaround for the users of your generation is much, much quicker.
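
To give an idea, here’s a minimal sketch of that hook, written out in Java for clarity (the generated stub actually is an Xtend class implementing the same IGenerator interface; the DSL name and output file are made up):

import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.xtext.generator.IFileSystemAccess;
import org.eclipse.xtext.generator.IGenerator;

// The builder participant calls doGenerate for every resource of your
// language that was touched by a build (or transitively depends on one).
public class MyDslGenerator implements IGenerator {

  public void doGenerate(Resource input, IFileSystemAccess fsa) {
    // generate one (dummy) artifact per model file; by default, output
    // ends up in the src-gen folder of the containing project
    fsa.generateFile(input.getURI().trimFileExtension().lastSegment() + ".txt",
        "generated from " + input.getURI().lastSegment());
  }
}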

The extension mechanism works just as in Xpand/Xtend. Both the dispatch and create semantics are just a bit more general than their Xpand/Xtend counterparts, which is good. I also like the as-syntax for casts: because of type inference, the end of an expression feels like a better place for the cast, and it supports the usual thought process (“Oh wait, now I have to cast this too!”) better as well.

The bad^H^H^Hnot entirely fortunate

To be fair: I’ve only found a few things less fortunate and they are definitely not show stoppers – they all have reasonable workarounds or are quite minor to begin with. But, since this is the interWebs I’m going to share them with you regardless 😉 You might run into them yourself, after all.

The biggest thing is that Xtend derives its type system directly from the Java type system. Xpand/Xtend had a type system which allowed you to compose various type systems, with the EMF and JavaBeans type systems being the most widely used after the built-in type system which came with a whole lot of convenience for the common base and collection types. In Xtend, you essentially only have an improved JavaBeans type system so you’ll have to rely on what Java and its libraries offer you or else you’ll have to knock it out yourself.

In particular, the following useful features of Xpand/Xtend are missing:

  • The «EXPAND … FOREACH …» construct. This means that you find yourself typing «FOR item : items»«item.doSomething»«ENDFOR» quite often, especially since it’s often quite natural to factor out the contents of the loop. The workaround for this consists of a utility function-replacement using closures, but the result is slightly less aesthetically pleasing than it used to be.
  • Syntactic sugar (shorthand) for property collection. “collection.property” was Xpand/Xtend shorthand for “collection.collect(i|i.property)”. In Xtend neither is possible: you’ll have to spell out the equivalent (e.g., with .map, .fold or .reduce), and the resulting code is nowhere near as readable as its Xpand/Xtend counterparts.
  • Using this as a parameter name. It was perfectly alright to use this as a parameter name in the definitions of Xtend functions and Xpand DEFINEs. It was even OK to say «FOREACH items AS this»…«ENDFOREACH». Since “property” is shorthand for “this.property” (it still is), this allowed you to create extremely succinct code. The User Guide mentions that it should be possible to re-define this, but I couldn’t get that to work, so you’ll have to qualify all object access with the name of the parameter or variable.
On the IDE side of things, I miss Java’s automatic ordering and cleaning of imports. Also, the content assist on Java types in the Xtend editor doesn’t do the “capital trick” where, e.g., “DCPF” expands to “DefaultConstructionProxyFactory”.

Lastly, Xtend doesn’t offer control over the visibility of features in the generated Java, and it also doesn’t support abstract or static members, or (non-default) constructors. This has led me to use Java for building APIs and implementing the typical abstract/support parts of them, and Xtend for the “final” implementation – which tends to benefit most from the succinct syntax and features such as closures and rich strings.

The ugly

Again: I’ve only found a few of these, and none of them are show stoppers.

First of all, the compilation to Java does not always yield compiling code. (This is in full compliance with Joel Spolsky’s Law of Leaky Abstractions, since Xtend is an abstraction of Java.) Although the User’s Guide mentions that uncaught, checked exceptions thrown from your code are automatically wrapped in unchecked exceptions, this is not the case: instead, they are added to the throws-clause of the containing method (corresponding to an Xtend function). This can break an overriding function or it can wreak havoc on code using this function. I’ve found myself writing a lot of try-catches to cope, which detracted quite a bit from the otherwise succinct syntax. Obviously, I would have had to write these anyway if I were using Java, but that’s not the point of Xtend, I think. (To compound the matter: you can’t explicitly declare a throws-clause in Xtend.)

Also, generic types involving wildcards (‘?’) are not watertight, although it’s fair to say this is a major problem with the Java type system in general and often extremely hard to get right. Not using wildcards is almost always possible, so that’s the obvious workaround.

Conclusion

All in all, and despite my few, slight misgivings, Xtend is quite an improvement over Xpand/Xtend. I’d heartily recommend starting to use it, both as a replacement for Xpand/Xtend (and JSP) as well as for Java itself.

Categories: Xtend(2)

My favorite quote

June 15, 2011

It’s about time for a new blog, seeing that the last one was more than a month ago, and this time it’s going to be as non-technical as I can make it.

“It’s easier to ask for forgiveness than it is to get permission.” – attributed to Grace Hopper

has become one of my favorite quotes during my years working in the software industry, to the extent that it has become something of a personal mantra. I’ve used it a fair number of times in the face of managers or processes that only had short-term incentives or the optimization of measurable-but-less-than-sensible metrics in mind, rather than long-term, actual goals. In fact, the end result usually was forgiveness being extended rather than retaliation being dealt.

In each of the cases where I decided on not obtaining permission, I went with what I genuinely considered to be the most sensible approach to a real hard problem but which was met by the usual counter-“arguments” like “too much risk and not enough short-term Return-On-Investment” – these particularly crop up when trying to introduce model-driven development, obviously. One such situation I described in my first blog (which in hindsight was much too long).

My conviction to stand up to superiors (and sometimes peers as well) was fueled in part by my Frisian genes -which tend to provide a propensity for mule-headed stubbornness- but more so by the realization that people with MBAs are generally the people with the least suitable background (in terms of education and experience relevant to software engineering) to weigh the pros and cons lying on the table and come up with the Right Decision™.

It’s interesting to note that Grace Hopper was an officer in the US Navy. To me this proves that even in an organization as hierarchical as the military, it is sometimes possible, or even required, to do something that’s not actually approved or sanctioned by the powers that be, in order to answer the call of duty and achieve the important goals. In her case it certainly did lead to forgiveness -at least in the sense that it outweighed the severity of the retaliation that did occur- and even commendation: Hopper eventually rose to the lofty rank of Rear Admiral.

Finally, it’s interesting to note that Hopper had a large hand in the first COBOL language implementation. I’m sure she must have thought at the time that COBOL was A Good Thing™. I wonder what she would think of our present IT world, which apparently runs for a good 80% on good ol’ COBOL, with millions of new lines of it being written every day (maybe even on weekend days!). Maybe we need some more boldness-without-permission to go and rid the world of it.

Categories: SD in general

Annotation-based dispatch for scope providers

One of the slightly awkward aspects of Xtext is that the org.eclipse.xtext.scoping.impl.AbstractDeclarativeScopeProvider class essentially relies on a naming convention to dispatch scoping to the correct method. Such a strategy is quite brittle when you’re changing your grammar (or the Ecore meta model) as it doesn’t alert you to the fact that method names should change as well.

In an earlier blog, I already gave a tip to help deal with this, as well as with knowing which method to actually implement, but that doesn’t make this a statically-checked enterprise yet. The challenge here is to come up with a suitable compile-time Java representation of the references and types (or in grammar terms: features and parser rules) involved – otherwise it wouldn’t be static checking, right? Unfortunately, the rather useful Java representation of references provided by the <MyDSL>PackageImpl class is a purely run-time representation.

Instead, I chose to make do with the IDs defined for types and their features in the <MyDSL>Package class to come up with an implementation of an annotation-based strategy for scope provider specification. It turns out that this is rather easy to do by extending the AbstractDeclarativeScopeProvider class. I’ve pushed my implementation -aptly called AbstractAnnotationBasedDeclarativeScopeProvider (which would probably score a triple word value in a game of Scrabble)- to my open-source GitHub repository: the source and the plugin JAR.

Usage

Usage is quite simple (as it should be): have your scope provider class extend AbstractAnnotationBasedDeclarativeScopeProvider and add either a ScopeForType or a ScopeForReference annotation (both conveniently contained in the AbstractAnnotationBasedDeclarativeScopeProvider class) to the scoping methods. The ScopeForType annotation takes the class(ifier) ID of the EClass to scope for. The ScopeForReference annotation also takes the feature ID of the reference (contained by the EClass specified) to scope for. Both these IDs are found in the <MyDSL>Package class as simple int constants. Note that it’s not checked whether these IDs actually belong together (in the second case), as the <MyDSL>Package class doesn’t actually encode that information.
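
To give a feel for the mechanism, here’s a plausible reconstruction of roughly what the nested ScopeForReference annotation looks like (the actual definition lives in AbstractAnnotationBasedDeclarativeScopeProvider on GitHub; this sketch is mine):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Retained at run-time so the scope provider can reflectively match
// scoping methods against the EClass and EReference being scoped.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface ScopeForReference {
  int classId();   // classifier ID of the EClass, from the <MyDSL>Package class
  int featureId(); // feature ID of the EReference, from the same class
}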

As long as you use the <MyDSL>Package class to obtain IDs, this notation is checked at compile-time so that when something changes, chances are good that the annotation breaks and you’re alerted to the fact you have to change something.

As an example, consider the DomainmodelScopeProvider scope implementation class for the Domain Model example project shipped with Xtext 1.0.x: have that class extend AbstractAnnotationBasedDeclarativeScopeProvider and add the following annotation to the scope_Reference_opposite method to use the annotation-based strategy.

@ScopeForReference(classId=DomainmodelPackage.REFERENCE, featureId=DomainmodelPackage.REFERENCE__OPPOSITE)

The current implementation is nice enough to tell you (as a warning in the log4j log) that it’s encountered a method which seems to comply with the naming-based strategy, but it nevertheless refuses to call that method, to avoid mixed-strategy behavior. I might change this in the future to remove this nicety or to make it configurable (and non-default), though.

Design decisions

I could have used a Class<? extends EObject> instance to specify the EClass (or rather, the Java type of the instances of that). The example given above would then look as follows.

@ScopeForReference(type=Reference.class, featureId=DomainmodelPackage.REFERENCE__OPPOSITE)

However, you still need the right feature ID to specify the EReference, so I chose to stick to using two IDs, which clearly belong together and communicate a little more clearly. I also thought it’s better to use standard Ecore infrastructure throughout and not to rely on the particular way Ecore maps to actual Java classes.

Let me know…

…what you think of this approach! Is it useful? Is the notation clear? Do you have a preference for the use of the Class<? extends EObject>-style? Be sure to drop me a line.

Categories: The How, Xtext

Checklist for Xtext DSL implementations

April 8, 2011

Currently I’m in the US, working with a team that’s building a number of DSLs with Xtext and has been doing so for some time already. The interesting thing is that this team is quite proficient at it, tackling all sorts of gnarly problems (coming either from a legacy language which they have to emulate to some level, or from requirements coming from the end users), even though most of them have only been working with Xtext for a few months. However, during the past week I realized that I unconsciously use a collection of properties which I check/look for in Xtext DSLs, and since I use it unconsciously I wasn’t really aware of the fact that not everyone was using the same thing. In effect, the team had already run into problems which they had solved, either completely or partly, in places which were downstream from the root causes. The root causes generally resided at the level of the grammar or the scope provider implementation and would (for the most part) have been covered by my unconscious checklist. Had the team had my checklist, they’d probably have saved both time and headaches.

Since existing sources (i.e., the Xtext User Guide and, e.g., Markus Völter’s “MD* Best Practices” paper) are either of the reference type or quite general and somewhat hard to map to daily Xtext practice, I figured I’d better make this list explicit. I divvied the checklist up into three sections: one concerning the Generate<MyDsl>.mwe2 file, one concerning the grammar file and one concerning the Java artifacts which augment the grammar.

Generator workflow

  1. Do the EPackages imported in the grammar file correspond 1:1 with the referencedGenModels in the workflow?
  2. Do you know/understand what the configured fragments (especially pertaining to naming, scoping, validation) provide out-of-the-box?
  3. Is backtracking set to false (the default) in the options configuration for the XtextAntlrGeneratorFragment? I find that backtracking is rarely needed, and unless it is, enabling it introduces quite a performance hit. More importantly, it might hide ambiguities in the grammar (i.e., they don’t get reported during the generation phase) at a point where you didn’t need the backtracking anyway.

To expand a little on the second item, here’s a list of the most important choices you’ve got:

  • naming: exporting.SimpleNamesFragment versus exporting.QualifiedNamesFragment
  • scoping: scoping.ImportURIScopingFragment versus scoping.ImportNamespacesScopingFragment
  • validation.JavaValidatorFragment has two composedChecks by default: ImportUriValidator, which validates importURI occurrences (only useful in case you’ve configured the ImportURIGlobalScopeProvider in the runtime module, either manually or by using the ImportURIScopingFragment), and NamesAreUniqueValidator, which checks whether all objects exported from the current Resource have unique qualified names.

Grammar

  1. Any left-recursion? This should be pretty obvious, since the Xtext generator breaks anyway and leaves the DSL projects in an unbuildable state.
  2. No ambiguities (red error messages coming from the ANTLR parser generator)? Ambiguities generally come either from ambiguities at the token level (e.g., having a choice ‘|’ whose alternatives consume the same token types) or from overlapping terminal rules (somewhat rarer, since creating new terminal rules and/or changing existing ones is fortunately not that common).
  3. Does the grammar provide semantics which are not syntactical in nature? Generally: grammar is for syntax, the rest (scope provision, validation, name provision, etc.) is for semantics.
  4. Did you document the grammar by documenting the semantics of each of the rules, also specifying aspects such as naming, scoping, validation, formatting, etc. (unfortunately, in comment-form only)? Since the grammar is the starting point of the DSL implementation, it’s usually best to put as much info in there as possible…
  5. Did you add a {Foo} unassigned action to the rules which do not necessarily assign to the type? (Saves you from unexpected NullPointerExceptions.)

To expand on the second item pertaining to ambiguities:

  • Most ambiguities of the first kind are introduced by an incorrect setup of an expression sub language. Make sure you use the pattern(s) described in Sven’s and two of my blog posts.
  • Favor recursive over linear structures in the context of references into recursive structures. This makes implementing the scope provider all the easier (or even: possible at all). For an example of this, see this blog post.

Java artifacts

First, some checks which pertain to the implementation of the custom local scope provider (a sketch of both method forms follows the list):

  1. Are you using the “narrow” form (signature: IScope scope_<Type>_<Reference>(<ContextType> context, EReference ref), where Reference is a feature of Type) as much as possible?
  2. Are you using the “wide” form (signature: IScope scope_<Type>(<ContextType> context, EReference ref)) where it makes sense?
  3. Have you chosen the ContextType (see the previous items) to be convenient, so you don’t need to travel up the containment hierarchy?
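
Here’s a hedged sketch of both forms, using hypothetical Model, Entity and superType names (what matters are the method shapes, which the declarative scope provider dispatches on):

import org.eclipse.emf.ecore.EReference;
import org.eclipse.xtext.scoping.IScope;
import org.eclipse.xtext.scoping.Scopes;
import org.eclipse.xtext.scoping.impl.AbstractDeclarativeScopeProvider;

public class MyDslScopeProvider extends AbstractDeclarativeScopeProvider {

  // "narrow" form: only called to scope the superType reference of an
  // Entity; choosing Entity as context type saves us a trip up the tree
  IScope scope_Entity_superType(Entity context, EReference ref) {
    Model model = (Model) context.eContainer();
    return Scopes.scopeFor(model.getEntities());
  }

  // "wide" form: called to scope any cross-reference of type Entity
  IScope scope_Entity(Model context, EReference ref) {
    return Scopes.scopeFor(context.getEntities());
  }
}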

For the rest of the Java artifacts (a sketch of the runtime-module bindings follows the list):

  1. Is your custom IQualifiedNameProvider implementation bound in the runtime module?
  2. Does the bound IQualifiedNameProvider implementation compute a qualified name for the model root? (Important in case you’re using the org.eclipse.xtext.mwe.Reader class.)
  3. Have you implemented value converters (see §5.7) for all the data type rules in the grammar?
  4. Have you bound the value converter class in the runtime module?
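
A hedged sketch of what those two bindings typically look like in the runtime module (all MyDsl names are placeholders):

import org.eclipse.xtext.conversion.IValueConverterService;
import org.eclipse.xtext.naming.IQualifiedNameProvider;

// Guice module; Xtext picks up methods following the bind<Interface>() naming convention.
public class MyDslRuntimeModule extends AbstractMyDslRuntimeModule {

  public Class<? extends IQualifiedNameProvider> bindIQualifiedNameProvider() {
    return MyDslQualifiedNameProvider.class;
  }

  // binds the value converters implemented for the grammar’s data type rules
  public Class<? extends IValueConverterService> bindIValueConverterService() {
    return MyDslValueConverterService.class;
  }
}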

Categories: DSLs, The How, Xtext

Deploying plugins using ANT

April 3, 2011

Whenever you’re developing DSLs in the form of Eclipse plugins, you’ll have to come up with a means of deploying these plugins to the Eclipse installations of your language’s users. On several occasions, I’ve used a simple ANT script to do just that:

<?xml version="1.0"?>
<project name="DSL deployment" default="deploy">

	<target name="check" unless="eclipse.home">
		<echo message="Property {eclipse.home} not set (run this ANT script inside the same JRE as Eclipse)." />
	</target>

	<target name="deploy" depends="check" if="eclipse.home">
		<echo message="Deploying plugin JARs to Eclipse installation (${eclipse.home})" />
		<copy todir="${eclipse.home}/dropins">
			<fileset dir="${basedir}/lib/plugins" />
		</copy>
		<echo message="Please restart Eclipse to activate/update the plugins." />
	</target>

</project>

You simply put all the plugins to be deployed in the lib/ directory of the containing Eclipse project and run the ANT script inside the same JRE as Eclipse, using the settings on the JRE panel of the ANT Run Configuration. The script checks for this and will signal if it’s not run in the proper way. After deployment, you’ll have to (have) Eclipse restarted to activate the plugins. The plugins are placed in the dropins directory rather than the plugins directory, which allows you to easily distinguish them from “regular” plugins.

This setup has the advantage that you can have Eclipse export the plugins to the lib/ directory of the containing Eclipse project, by pointing the Export wizard to that directory on its Destination tab (the exported JARs then end up in lib/plugins, which is where the script picks them up). In case of errors, the logs.zip file gets dumped in the lib/ directory as well.

Categories: DSLs, The How