Archive
The practical value of category theory (and art)
After my presentation “A category-theoretic view of model-driven” (slides forthcoming through the conference web site) at this year’s Code Generation conference, I was asked what the practical value of category theory is. In the heat of the moment, I chanced upon an analogy with art, which very soon found its way to Twitter as:
This got some retweets and also some responses to the tune of me being completely wrong. Although the above is what I said, my actual opinion on this is a bit more nuanced. (As an aside: Félienne was kind enough to write a live-blog post about my presentation.)
First of all, art has quite some practical value: it improves people’s lives in myriad ways by triggering deeply felt emotional responses. Category theory (or CT, for short) doesn’t quite work like that in general, although for some of its practitioners it might come close. CT certainly has value in providing its practitioners with a mental framework that guides them towards the sort of abstractions that work really well in mathematics and, increasingly, in functional programming, and that helps them think and reason more effectively.
What didn’t quite make it to the tweet was my assertion that, in practice, CT does not relieve you of the hard work. Or to put it another way: there’s no substitute for thought. Framing something in the CT language and framework doesn’t “magically” give you the answers to the questions you might have. It can certainly help in reaching those answers more efficiently and phrasing them (much) more elegantly – which is precisely what proper abstraction should achieve. But at the same time, once you have these answers, it’s perfectly possible to “desugar away” from the use of CT. At the end of the day, everyone has to consider their ROI on learning enough CT to be able to take this route.
For the same reason I have a problem with the tendency in certain circles to outright declare adequate knowledge of CT a criterion for admittance to those circles. (I hinted somewhat obliquely and humorously at this in my presentation.) It’s an entirely artificial entry barrier, and those enforcing it do so for not entirely altruistic reasons.
In conclusion: yes, CT can certainly have practical value, but achieving that value requires the right context and considerable upfront effort, and knowledge of CT is by no means a prerequisite.
Radio silence?
Don’t worry, I’m still alive 😉
This past week I’ve been busy playing guitar with Relocator (http://www.relocator-project.com/) at http://www.generation-prog.com/. It’s been a blast and the gig went quite well.
But: next week more DSLy stuff again.
A New Hope
(Sorry about the thoroughly geeky title: I couldn’t resist :))
Yesterday was my last tenured day at my former employer (go and check LinkedIn to find out which one it was) and today is the first day of my new life as an independent. This represents a huge challenge for me, but one I simply had to take on in order to have a chance of getting my message out to the business world – my former employer wasn’t really helping with that.
Obviously, my focus will be model-driven software development (MDSD) and domain modeling in a mix of consultancy, a bit of DSL evangelization (e.g. through this blog) and a bit of creation of/contribution to open-source projects. Apart from that, I’ll be getting my hands dirty with some mobile app development.
But for the next two weeks I’ll be practicing my calluses off for a gig with an international instrumental progressive band called Relocator: check here for more info about the gig. That also means I’ll be somewhat undercover/incommunicado, simply because I’ll be locked away with a guitar in my hands. Don’t worry, I’ll be back 😉
A pure-Java arithmetics parser
Well, it’s been over a month since the last post… I’ll try to post more often in the near future.
Anyway, one of the things I did in the past month was write a parser for arithmetic expressions by hand. This was mainly inspired by Sven Efftinge’s blog and by the workshop I gave, which showed that explaining such a parser is actually quite hard. Sven’s movie, which actually shows the call stack, is rather more enlightening than any verbal account of the matter, so I started thinking about how to do the same thing, but interactively (and without the need to do a voice-over, which I don’t like doing since I hate hearing my own voice). In the end, I figured that Eclipse’s debugger already provides that functionality, so a parser (and lexer) hand-written in Java should do the trick.
The parser can be found alongside the Xtext workshop material I published earlier on Google Code. Remarks:
- It is a regular Eclipse project which doesn’t use anything beyond a standard Java JRE (>=5) and JUnit4.
- The project contains three parser classes: ParserWithLeftAssociativity, ParserWithSomeRightAssociativity and ParserWithSomeNonAssociativity – the correspondence (to Sven’s blog) should be obvious 😉
- The parsers share a common lexer implementation, and the latter two parsers derive from the first one, both for convenience and for the sake of clarity – again, the correspondence should be obvious.
- I matched the parser implementation with the structure of the Xtext grammar definition and also (to a limited extent) with the structure of a generated Antlr-based parser. Comments refer to bits of the grammar which are being matched at that point in the code.
- Arithmetic expressions can contain identifiers (matching /[a-zA-Z]+/), which are essentially defined on-the-fly. I added this to make the lexer slightly more interesting.
- I didn’t try to match the structure of the lexer to that of an Xtext-generated one. Instead, it’s rather free-form and doesn’t follow any particular lexer pattern I know (e.g., from the “Dragon book”): I’ve gone for a “whatever works and communicates” approach – I hope it communicates to you as well 😉
- Error detection, reporting and recovery are basic, at best. (The parser behaves slightly better than the lexer.)
- The unit tests are not too good (and the lexer tests don’t actually assert anything…).
To see the parser and its rule call stack in action, simply set a breakpoint at Parser#ruleExpression and run a unit test in debug mode. Let me know if you find bugs, have possible enhancements or think this is useful (or not!).
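For those who’d rather see code than prose first: below is a deliberately minimal sketch of the recursive-descent style used, not the published ParserWithLeftAssociativity. All names are made up, it only handles ‘+’/‘-’ over integer literals, and it builds a parenthesized string instead of a proper AST, but it shows the rule-per-grammar-rule structure and the left-associative folding.

// Minimal sketch (not the published code): left-associative '+'/'-' over integer literals.
public class TinyExpressionParser {

    private final String input;
    private int pos = 0;

    public TinyExpressionParser(String input) {
        this.input = input;
    }

    // Expression: Term (('+' | '-') Term)* ;  -- the loop folds to the left
    public String ruleExpression() {
        String left = ruleTerm();
        while (peek() == '+' || peek() == '-') {
            char operator = next();
            String right = ruleTerm();
            left = "(" + left + " " + operator + " " + right + ")";
        }
        return left;
    }

    // Term: INT ;  -- kept trivial on purpose
    private String ruleTerm() {
        StringBuilder digits = new StringBuilder();
        while (Character.isDigit(peek())) {
            digits.append(next());
        }
        if (digits.length() == 0) {
            throw new IllegalArgumentException("expected a digit at position " + pos);
        }
        return digits.toString();
    }

    private char peek() {
        return pos < input.length() ? input.charAt(pos) : '\0';
    }

    private char next() {
        return input.charAt(pos++);
    }

    public static void main(String[] args) {
        // prints ((1 + 2) - 3): the parenthesization makes the left associativity visible
        System.out.println(new TinyExpressionParser("1+2-3").ruleExpression());
    }
}

Set a breakpoint in ruleExpression of this toy class and the debugger’s call stack tells essentially the same story as for the real parsers, just on a smaller scale.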
The Xtext grammar language and its invisible infix operator
The nice thing about the Xtext framework is that it eats its own dog food: its grammar definition language (also called Xtext, of course: the framework suffers slightly from name reuse) is created using Xtext – there’s a wondrous principle called bootstrapping at work here 😉
Anyway, this means you can simply inspect the .xtext file and the other Java customizations directly in the plugin in case you don’t believe the User Guide or find that it’s lacking. One such case (at least for me) was the construction of unordered groups and why the following grammar fragment didn’t produce the result I expected:
Entity: transient?='transient'? & abstract?='abstract'? 'entity' name=ID;
The result I expected was to be able to parse things like “transient entity Foo” and “abstract entity Bar”. It does that, but it doesn’t accept “abstract transient entity Foo” while it does accept “abstract entity Bar transient”, which certainly didn’t check out with what I intended!
After looking into the grammar, I was a little surprised to find that the part of the language for defining parser rules actually uses an expression language for everything between the ‘:’ and the ‘;’. (I shouldn’t have been, of course, since there’s an explicit reference to EBNF expressions, and you can use parentheses to group what we can now in all certainty call sub-expressions.)
This expression language sports the following infix operators, ordered in increasing precedence:
- ‘|’ for the regular alternative operator,
- ‘&’ for the unordered group operator and
- concatenation of “abstract tokens”, meaning assignments, keywords, rule calls, actions and everything parenthesized, not separated by anything other than (optional!) whitespace.
The last item says that token concatenation is an invisible infix operator… spooky! So,
transient?='transient'? & abstract?='abstract'? 'entity' name=ID
actually means the same as
transient?='transient'? & ( abstract?='abstract'? 'entity' name=ID )
because the invisible token concatenation operator has higher precedence than the unordered group operator. This explains why “abstract entity Bar transient” is accepted (and yields the same AST as “transient abstract entity Bar”) but “abstract transient entity Foo” is not (the transient keyword separates the abstract keyword from the entity keyword, which the second operand of ‘&’ doesn’t allow). The fix is easy enough: just enclose the entire unordered group in parentheses, just like you would with a group of alternatives.
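Applied to the rule at the top of this post, that fix would presumably read as follows (the parentheses turn the whole unordered group into a single operand of the concatenation):
Entity: (transient?='transient'? & abstract?='abstract'?) 'entity' name=ID;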
I also noticed that (lone) keywords and rule calls can have cardinality postfixes as well, at least syntax/grammar-wise – I haven’t checked what happens at and after generation and whether the semantics are what you’d intuitively expect. It’s certainly something I haven’t seen used in any grammar so far!
Configuring local and global scope providers at the same time
This post is about how to take control of the configuration of the local and global scope providers through a custom scoping fragment.
In the .mwe2 workflow file (I’m assuming you’re on Xtext 1.0.1 and Eclipse Helios) the scoping is configured through an appropriate Xtext generator fragment. Xtext is shipped with an org.eclipse.xtext.generator.scoping.AbstractScopingFragment support class and two implementations (in the same Java package): ImportNamespacesScopingFragment and ImportURIScopingFragment.
AbstractScopingFragment binds the custom scope provider implementation MyDslScopeProvider (which, by default, inherits from org.eclipse.xtext.scoping.impl.AbstractDeclarativeScopeProvider) in the runtime (Guice module) configuration. It also provides two abstract methods through which you declare which class to bind as the global scope provider and which class to bind as a delegate or ‘fall-back’ local scope provider; the latter is injected into the custom scope provider (in the field AbstractDeclarativeScopeProvider#delegate). This scoping delegate is invoked whenever the custom scope provider (as long as it still inherits from AbstractDeclarativeScopeProvider, which would typically be the case) happens to return a null scope. (Note that ‘local’ means nothing more than ‘confined to the current Resource’, with ‘global’ being the opposite.)
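To make that delegation a bit more concrete, here is a rough sketch of what a declarative scope method in MyDslScopeProvider could look like. The Entity type and its superType cross-reference are made up for the occasion; the point is merely that a null result is what hands control to the injected delegate:

import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.xtext.scoping.IScope;
import org.eclipse.xtext.scoping.impl.AbstractDeclarativeScopeProvider;

public class MyDslScopeProvider extends AbstractDeclarativeScopeProvider {

    // Invoked reflectively for the (hypothetical) Entity.superType cross-reference,
    // following the scope_<ContainingType>_<reference>(context, reference) convention.
    IScope scope_Entity_superType(EObject context, EReference reference) {
        // ...compute and return a custom scope here...
        // Returning null means: let the injected delegate (the 'delegate' field
        // mentioned above) provide the scope instead.
        return null;
    }
}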
The following figure attempts to sum up the default situation which you get out-of-the-box after creating an Xtext project (with the familiar language name ‘MyDsl’, in this case) and running the MWE2 workflow:
The MWE2 workflow file (in the 1st editor) specifies the ImportNamespacesScopingFragment as the (only) scoping fragment. The source of that fragment (2nd editor) shows that in this case the delegate local scope provider is to be an instance of ImportedNamespaceAwareLocalScopeProvider and the global scope provider an instance of DefaultGlobalScopeProvider. Indeed, the 3rd editor shows that IScopeProvider is bound to the custom local scope provider, the delegate field (which is appropriately tagged using a Guice Named annotation) to ImportedNamespaceAwareLocalScopeProvider, and IGlobalScopeProvider to the DefaultGlobalScopeProvider class.
As you can already guess from the fragments’ source in the example above, the two default implementations of AbstractScopingFragment mentioned above each declare both a global scope provider class and a local delegate scope provider class, so when you configure both fragments in the DSL generator workflow, the bindings override each other and the order becomes important. This is particularly annoying in case you want both the namespace-based imports and the global URI imports functionality – after all, these don’t necessarily conflict. Also, you have to configure at least one scoping fragment in order to have the custom scope provider bound at all, which may mean that your DSL ends up with unwanted scoping properties.
The solution is simply to roll your own AbstractScopingFragment implementation and configure exactly what you need. Note that this is only possible since Xtext 1.0.1 (aka SR1): before that, the org.eclipse.xtext.generator.scoping package this class resides in was not exported from the plugin, so the class and the Xpand and Xtext files it needs were not accessible to the class loader. Any implementation of AbstractScopingFragment should be accompanied by an Xpand file with the exact same name as the implementing class, residing in the same package. This Xpand file allows you to execute additional templates, either as separate files or as “add-ins”. We just need to execute the one for the custom Java scope provider, so we can simply copy the ImportURIScopingFragment.xpt file from within the plugin and rename it appropriately. The following screenshot sums things up:
After modifying the Xtext generator workflow and running it again, the workflow file and binding situation now look as follows:
This is indeed what we wanted. Note that the two standard scoping fragments will usually suffice, since you can override basically anything in the custom scope provider. But for full-blown language development, as well as for language modularization, it sure comes in handy to have ultimate control over the scoping architecture.
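For completeness, a custom scoping fragment along these lines could look roughly as follows. The class name is made up for this example, and the exact signatures of the two abstract methods may differ slightly between Xtext versions (the fragment sources shipped with Xtext are the authoritative reference), but the intent is to combine the namespace-aware local delegate with the URI-based global scope provider:

import org.eclipse.xtext.generator.scoping.AbstractScopingFragment;
import org.eclipse.xtext.scoping.IGlobalScopeProvider;
import org.eclipse.xtext.scoping.IScopeProvider;
import org.eclipse.xtext.scoping.impl.ImportUriGlobalScopeProvider;
import org.eclipse.xtext.scoping.impl.ImportedNamespaceAwareLocalScopeProvider;

// Hypothetical fragment combining namespace-based local scoping with URI-based global scoping.
// Don't forget the accompanying CustomScopingFragment.xpt (a renamed copy of
// ImportURIScopingFragment.xpt) in the same package.
public class CustomScopingFragment extends AbstractScopingFragment {

    @Override
    protected Class<? extends IScopeProvider> getLocalScopeProvider() {
        return ImportedNamespaceAwareLocalScopeProvider.class;
    }

    @Override
    protected Class<? extends IGlobalScopeProvider> getGlobalScopeProvider() {
        return ImportUriGlobalScopeProvider.class;
    }
}

In the .mwe2 workflow, this fragment then takes the place of the standard scoping fragment(s).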