Continuous deployment: the case for instant gratification through model interpretation
The concept of continuous deployment is gaining a lot of traction these days, seemingly as much among tech-heads as “cloud” does with the infrastructure and more managerial types. In case you didn’t know: continuous deployment is the practice of deploying a new version of your software to production at the touch (usually literally) of a button, presumably after it’s been tested sufficiently, of course. It means instant gratification at the customer level, since new features and/or fixes for bugs (which might have been introduced along with yesterday’s new features ;)) are pushed to production “often”, with “often” regularly meaning several times a day. It also means near-instant gratification at the producer level, since validation of the new features (or new bugs) happens almost instantaneously; this is the reason someone like Eric Ries is pushing this practice.
Another very good reason to implement continuous deployment is that it forces you to automate the full build and deployment cycle. We’ve all been in software projects in which a full build and deployment was more like a magician’s spiel than the deterministic and quick process it should have been, right? That means that you’re not only wasting a lot of time on things other than adding features or fixing bugs, you’re also exposing yourself to a very long feedback cycle, not to mention a lot of frustration.
In conclusion: continuous deployment would already be very useful for the development practice on its own. Tacking MDSD onto the concept is as simple as firing up the generation during the automated build, provided that you consider both the models and everything behind the Generation Gap to be part of your source, and that generation is automated as well. But you already did that, didn’t you?
But can we go even further? Can we find things which would benefit from a feedback cycle that’s as short as possible (preferably in the seconds range)? After all, instant feedback means that we can validate our ideas and correct mistakes as efficiently as possible. One thing immediately springs to mind (at least, to mine): the generation step. Especially with a large model, generation regularly takes several minutes. And after generation, you still have to wait for the IDE to finish re-building, fix any errors in the code behind the Generation Gap, and re-start the application server. This discourages making changes to the model and encourages “fixing” the non-generated code instead of actually fixing the model, which leads to deterioration of model quality: details which fit the abstraction level of the model end up hidden behind the Generation Gap. In turn, this deteriorates the overall effectiveness of the model, so why were you doing MDSD again?
The same is even more true in MDSD contexts in which the business analysis/requirements gathering practice (I’ll leave it up to you to decide whether those are separate practices or not) produces the model: usually these practitioners are not tech-savvy at all, so they’re continually groping in the dark; admittedly less so than if they were only producing Word documents instead of models, but still. This means that any lack of feedback on their part propagates through the development practice to the testers, causing a lot of wasted work in between.
Wouldn’t it be nice if you could immediately see the effects of a model change? Well, that’s where model interpretation comes in: create a tool which directly “runs” the model by reading in the local copy the dev/BA is working on and providing a simple rendition of the modeled situation, e.g. as a (locally-running) Web app. I don’t propose to provide full interpretation which covers everything the official generator does; instead, cover those aspects which together provide a workable impression. The essence is to skip any generation and build steps and go directly to execution.
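To make the idea of skipping generation concrete, here is a minimal sketch (not the actual tool; the class and the model shape are made up for illustration): the “model” is just screens plus the flows between them, and “running” it means rendering a screen as HTML on demand, with one link per outgoing flow.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of direct model interpretation (all names are illustrative):
// no code is generated and nothing is compiled; a request for a screen is
// answered by walking the in-memory model and emitting HTML straight away.
class ScreenFlowInterpreter {

    // screen name -> names of the screens reachable from it
    private final Map<String, List<String>> flows = new LinkedHashMap<>();

    void addScreen(String name, List<String> targets) {
        flows.put(name, targets);
    }

    // Render a screen straight from the model: a heading plus one link per outgoing flow.
    String render(String screen) {
        if (!flows.containsKey(screen)) {
            return "<p>Unknown screen: " + screen + "</p>";
        }
        StringBuilder html = new StringBuilder("<h1>" + screen + "</h1>\n<ul>\n");
        for (String target : flows.get(screen)) {
            html.append("<li><a href=\"/").append(target).append("\">")
                .append(target).append("</a></li>\n");
        }
        return html.append("</ul>").toString();
    }
}
```

The point of the sketch is the shape of the execution path: model in, HTML out, with nothing in between to wait for.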
Take Web applications as an example: I’m sure you agree that it would be very useful for use case writers to be able to validate their use case realizations and even converse with the customer using a mockup of the screens and the flow between them. At the same time, having fully working Data and Service layers underneath this is less useful: it may even be more useful to expose the explicit dependencies of a screen (flow) on data not coming from the Presentation layer through “extra” input fields.
To test this notion, I’ve created a simple Web server which polls a local model file (a textual DSL created with Xtext, of course) for changes and exposes a Web app whose request URLs and subsequent HTML responses are completely determined by the parsed model, which deals solely with screens and the flow between them. After saving the model file, a simple browser refresh is all that’s required to see the changes, even inside a running flow. This was actually not as much work as I thought: it’s perfectly possible to use Xpand as the template language for the HTML content (much better than JSP!). Most importantly: it seemed to gain immediate traction with developer types as well as with business analysis folks, especially the more tech/MDSD-savvy ones. On the other hand, people on the requirements gathering side of things, or people who were used to tools like Pega, were much less impressed: “This looks an awful lot like programming and we’re not having any of that,” seems to sum up their reaction.
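The polling side of such a server can be sketched as follows. This is not the actual implementation (which used an Xtext parser and Xpand templates); the class name and the toy “screen -> target, target” line format are made up so the reload logic stands on its own. Every request asks for the current model; the loader re-parses only when the file’s modification time has changed, which is exactly why a browser refresh after saving is enough.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a model loader that re-parses on demand (illustrative names;
// the toy line format "screen -> targetA, targetB" stands in for a real DSL).
class PollingModelLoader {

    private final Path modelFile;
    private long lastSeen = -1L;
    private Map<String, String[]> model = new LinkedHashMap<>();

    PollingModelLoader(Path modelFile) {
        this.modelFile = modelFile;
    }

    // Called on every HTTP request: re-parse only if the file changed since the
    // last request, so saving the model + refreshing the browser shows the new version.
    Map<String, String[]> current() throws IOException {
        long modified = Files.getLastModifiedTime(modelFile).toMillis();
        if (modified != lastSeen) {
            model = parse(Files.readString(modelFile));
            lastSeen = modified;
        }
        return model;
    }

    // Toy parser: each non-blank line is "screen -> targetA, targetB".
    static Map<String, String[]> parse(String text) {
        Map<String, String[]> screens = new LinkedHashMap<>();
        for (String line : text.split("\n")) {
            if (line.isBlank()) continue;
            String[] parts = line.split("->");
            screens.put(parts[0].trim(), parts[1].trim().split("\\s*,\\s*"));
        }
        return screens;
    }
}
```

Serving the parsed model as HTML is then a matter of wiring `current()` into any embedded HTTP server and rendering the requested screen from the returned map.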
Of course, having a model interpreter of sorts alongside a full generator means that you’ll have to update another artifact with changes to both the syntax and the semantics of the meta model/DSL, so it’s good to carefully examine which aspects of the model really need that instant gratification. Also, the performance of loading the model plays a key role here: parsing a huge model text remains relatively expensive, so a sensible division of the model into separate files and “smart reloading” (i.e., only re-parse changed model files) will have an immediate pay-off. In fact, both concerns combine to encourage you to come up with a good factoring of your domain into aspects.
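The “smart reloading” part boils down to bookkeeping per model file. A minimal sketch (illustrative names; timestamps are passed in rather than read from disk, so the logic is easy to test, but in practice they’d come from `File.lastModified()` or the file system watcher):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of "smart reloading": remember a timestamp per model file and report
// only the files that are new or changed since the last check, so only those
// need to be re-parsed instead of the whole model.
class StaleFileTracker {

    private final Map<String, Long> seen = new HashMap<>();

    // Returns the files whose timestamp changed (or that are new) since the
    // previous call, and records the current timestamps for the next call.
    List<String> staleFiles(Map<String, Long> currentTimestamps) {
        List<String> stale = new ArrayList<>();
        for (Map.Entry<String, Long> e : currentTimestamps.entrySet()) {
            Long previous = seen.put(e.getKey(), e.getValue());
            if (previous == null || !previous.equals(e.getValue())) {
                stale.add(e.getKey());
            }
        }
        return stale;
    }
}
```

On the first pass everything is stale and gets parsed; afterwards, saving a single model file means re-parsing just that file, which keeps the refresh cycle in the seconds range even for a large, well-factored model.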
All in all, I’m convinced that we need to reduce the length of the feedback cycle as much as possible and as early in the process as possible, preferably starting at the customers’ domain stakeholders. Domain modeling and MDSD provide a very good way of achieving this, as they allow everyone involved to focus on the essential complexity at the right level of understanding, while current advances on the technological side make continuous deployment and model interpretation/execution completely feasible, as opposed to efforts in the past like “Executable UML”.
But what do you think?