
Continuous deployment: the case for instant gratification through model interpretation

The concept of continuous deployment is gaining a lot of traction these days, among tech-heads seemingly as much as “cloud” does with the infrastructure and more managerial types. In case you didn’t know: continuous deployment is the practice of deploying a new version of your software to production at the touch (usually literally) of a button -presumably after it’s been tested sufficiently, of course. It means instant gratification at the customer level, since new features and/or fixes for bugs (which might have been introduced along with yesterday’s new features ;)) are pushed to production “often”, with “often” regularly meaning several times a day. It also means near-instant gratification at the producer level, since validation of the new features (or new bugs) happens (almost) instantaneously as well -this is the reason someone like Eric Ries is pushing the practice.

Another very good reason to implement continuous deployment is that it forces you to automate the full build and deployment cycle. We’ve all been in software projects in which a full build and deployment was more like a magician’s spiel than the quick, deterministic process it should have been, right? In that situation you’re not only wasting a lot of time on things other than adding features or fixing bugs, you’re also saddling yourself with a very long feedback cycle, not to mention a lot of frustration.

In conclusion: continuous deployment would already be very useful for the development practice by itself. Tacking MDSD onto the concept is as simple as firing up the generation during the automated build, provided that you consider both the models and everything behind the Generation Gap to be part of your source, and that generation is automated as well -but you already did that, didn’t you?
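As a minimal sketch of what “firing up the generation during the automated build” could look like, assuming the DSL and generator are Xtext-based: the setup class, model file name and output path below are hypothetical placeholders for whatever your own language infrastructure provides.

```java
// Headless generator run that can be called from any automated build.
// MyDslStandaloneSetup, the model path and the output path are placeholders.
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.xtext.generator.IGenerator;
import org.eclipse.xtext.generator.JavaIoFileSystemAccess;
import org.eclipse.xtext.resource.XtextResourceSet;

import com.google.inject.Injector;

public class HeadlessGeneration {
    public static void main(String[] args) {
        // Standalone setup registers the DSL with EMF outside of the IDE.
        Injector injector = new MyDslStandaloneSetup().createInjectorAndDoEMFRegistration();

        XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
        Resource model = resourceSet.getResource(URI.createFileURI("model/screens.mydsl"), true);

        // Write the generated artifacts into the source tree the build compiles.
        JavaIoFileSystemAccess fsa = injector.getInstance(JavaIoFileSystemAccess.class);
        fsa.setOutputPath("src-gen/");

        injector.getInstance(IGenerator.class).doGenerate(model, fsa);
    }
}
```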

But can we go even further? Can we find things which would benefit from a feedback cycle that’s as short as possible (preferably in the range of seconds)? After all, instant feedback means that we can validate our ideas and correct mistakes as efficiently as possible. One thing immediately springs to mind (mine, at least): the generation step. Especially with a large model, generation regularly takes several minutes. And after generation, you still have to wait for the IDE to finish re-building, fix errors in the code behind the Generation Gap and re-start the application server. This discourages making changes to the model and encourages “fixing” the non-generated code instead of actually fixing the model, which leads to a deterioration of model quality: details which belong at the abstraction level of the model end up hidden behind the Generation Gap. In turn, this deteriorates the overall effectiveness of the model, so why were you doing MDSD again?

The same is even more true for MDSD contexts in which the business analysis/requirements gathering practice (I’ll leave it up to you to decide whether those are separate practices or not) produces the model: usually these practitioners are not tech-savvy at all, so they’re continually groping in the dark -admittedly less so than if they were only producing Word documents instead of models, but still. This means that any lack of feedback on their part is propagated through the development practice to the testers, causing a lot of wasted work in between.

Wouldn’t it be nice if you could immediately see the effects of a model change? Well, that’s where model interpretation comes in: create a tool which directly “runs” the model by reading in the local copy the dev/BA is working on and providing a simple rendition of the modeled situation, e.g. as a (locally-running) Web app. I don’t propose to provide full interpretation which covers everything the official generator does; instead, cover those aspects which together provide a workable impression. The essence is to skip any generation and build steps and go directly to execution.

Take Web applications as an example: I’m sure you agree that it would be very useful for use case writers to be able to validate their use case realizations, and even converse with the customer, using a mockup of the screens and the flow between them. At the same time, having fully working Data and Service layers underneath this is less useful: it may even be more useful to expose a screen’s (or flow’s) explicit dependencies on data not coming from the Presentation layer through “extra” input fields.
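To make that last idea a bit more concrete, here is an illustration-only sketch of how a screen’s dependencies on data from outside the Presentation layer could surface as extra, editable input fields in the mockup; the Screen record and its fields are hypothetical stand-ins for whatever the real metamodel provides.

```java
// Illustration only: a screen's external data dependencies become visible inputs
// in the mockup instead of being backed by a faked Service layer.
import java.util.List;

public class ScreenMockupRenderer {

    // Hypothetical, minimal screen description.
    record Screen(String name, List<String> ownFields, List<String> externalData) {}

    static String render(Screen screen) {
        StringBuilder html = new StringBuilder("<form><h1>" + screen.name() + "</h1>");
        for (String field : screen.ownFields()) {
            html.append("<label>").append(field)
                .append(" <input name='").append(field).append("'/></label>");
        }
        // "Extra" input fields for data the screen needs from outside the Presentation layer.
        for (String dependency : screen.externalData()) {
            html.append("<label><em>").append(dependency)
                .append("</em> <input name='").append(dependency).append("'/></label>");
        }
        return html.append("<input type='submit'/></form>").toString();
    }

    public static void main(String[] args) {
        Screen order = new Screen("Order overview",
                List.of("customerName"), List.of("creditLimit"));
        System.out.println(render(order));
    }
}
```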

To test this notion, I’ve created a simple Web server which polls a local model file (a textual DSL created with Xtext, of course) for changes and exposes a Web app whose request URLs and HTML responses are completely determined by the parsed model, which deals solely with screens and the flow between them. After saving the model file, a simple browser refresh is all that’s required to see the changes -even inside a running flow. This was actually less work than I expected: it’s perfectly possible to use Xpand as the template language for the HTML content (much better than JSP!). Most importantly: it seemed to gain immediate traction with developer types as well as with business analysis folks, especially the more tech/MDSD-savvy ones. On the other hand, people on the requirements gathering side of things, or people who were used to tools like Pega, were much less impressed: “This looks an awful lot like programming and we’re not having any of that” seems to sum up their reaction.
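For the curious, a rough sketch of the shape such a polling server could take, again assuming an Xtext-based DSL: MyDslStandaloneSetup and the model file path are placeholders, and the generic HTML listing merely stands in for the Xpand templates the actual prototype uses.

```java
// Sketch of a model-interpreting Web server: poll the model file, re-parse it when
// it changed, and answer every request with HTML rendered from the current model.
import java.io.File;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.xtext.resource.XtextResourceSet;

import com.google.inject.Injector;
import com.sun.net.httpserver.HttpServer;

public class ModelInterpreterServer {

    private static final File MODEL_FILE = new File("model/screens.mydsl"); // placeholder path
    private static long lastLoaded = -1;
    private static Resource model;

    public static void main(String[] args) throws Exception {
        Injector injector = new MyDslStandaloneSetup().createInjectorAndDoEMFRegistration();

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            reloadIfChanged(injector); // poll the model file on every request
            byte[] html = render(model).getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "text/html; charset=utf-8");
            exchange.sendResponseHeaders(200, html.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(html);
            }
        });
        server.start();
    }

    // Re-parse the model only when the file on disk is newer than the last parse.
    private static synchronized void reloadIfChanged(Injector injector) {
        long modified = MODEL_FILE.lastModified();
        if (modified > lastLoaded) {
            XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
            model = resourceSet.getResource(URI.createFileURI(MODEL_FILE.getPath()), true);
            lastLoaded = modified;
        }
    }

    // Stand-in for the Xpand templates: list the top-level model elements generically.
    private static String render(Resource resource) {
        StringBuilder html = new StringBuilder("<html><body><ul>");
        for (EObject element : resource.getContents().get(0).eContents()) {
            html.append("<li>").append(element.eClass().getName()).append("</li>");
        }
        return html.append("</ul></body></html>").toString();
    }
}
```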

Of course, having a model interpreter of sorts alongside a full generator means that you’ll have to update another artifact whenever the syntax or the semantics of the meta model/DSL changes, so it’s good to carefully examine which aspects of the model really need that instant gratification. The performance of loading the model also plays a key role here: parsing a huge model text remains relatively expensive, so a sensible division of the model into separate files and “smart reloading” (i.e., only re-parsing changed model files) has an immediate pay-off. In fact, both concerns encourage you to come up with a good factoring of your domain into aspects.
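One possible shape for such “smart reloading”, assuming the model is split over several files in one directory (the .mydsl extension is again a placeholder): keep the parsed resources in one shared resource set and only re-parse the files whose timestamps changed since the previous poll.

```java
// Only re-parse model files whose timestamps changed since the last refresh.
import java.io.File;
import java.util.HashMap;
import java.util.Map;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.xtext.resource.XtextResourceSet;

public class SmartModelReloader {

    private final XtextResourceSet resourceSet;
    private final Map<File, Long> lastParsed = new HashMap<>();

    public SmartModelReloader(XtextResourceSet resourceSet) {
        this.resourceSet = resourceSet;
    }

    /** Re-parses only the model files that changed since the last call. */
    public void refresh(File modelDirectory) {
        File[] files = modelDirectory.listFiles((dir, name) -> name.endsWith(".mydsl"));
        if (files == null) {
            return; // directory missing or unreadable; nothing to reload
        }
        for (File file : files) {
            Long previous = lastParsed.get(file);
            if (previous == null || file.lastModified() > previous) {
                URI uri = URI.createFileURI(file.getPath());
                Resource stale = resourceSet.getResource(uri, false);
                if (stale != null) {
                    stale.unload();                 // drop the outdated parse tree
                }
                resourceSet.getResource(uri, true); // parse the current file contents
                lastParsed.put(file, file.lastModified());
            }
        }
    }
}
```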

All in all, I’m convinced that we need to shorten the feedback cycle as much as possible, and as early in the process as possible -preferably starting with the customer’s domain stakeholders. Domain modeling and MDSD provide a very good way of achieving this, as they allow everyone involved to focus on the essential complexity at the right level of understanding, while current advances on the technological side make continuous deployment and model interpretation/execution completely feasible -as opposed to past efforts like “Executable UML”.

But what do you think?

  1. October 24, 2010 at 11:43 pm

    I am not a proponent of model interpretation because its application is too narrow: it only works for a very specific kind of scenario. Web development comes to mind…

    But what about embedded development? Real-time? Automotive? Avionics? Model interpretation is too limited. Yes, it brings instant gratification, just like developing in PHP/JavaScript/Python/Lua/etc., but only if you happen to work in a domain where model interpretation is possible.

    Instead, my approach is to have model-driven code generation, but to make the model as “alive” as possible -in particular, a dynamic model where changes propagate in real time. Here, instant gratification comes from having a “live” model.

    See my projects, ABSE (http://www.abse.info) and AtomWeaver (http://www.atomweaver.com). ABSE is a generative model-driven development approach, and AtomWeaver is an IDE that brings ABSE to life. In ABSE, in fact, the model is “executed”, but in this case, to get the generated artifacts.

    • October 25, 2010 at 4:30 pm

      The important thing is to close the feedback loop and get the shortest feedback cycle possible, so I don’t give a cr*p whether you generate, interpret or execute. I simply chose the term ‘interpretation’ because that still seems to have enough traction, but without the negative connotations of ‘model execution’. Model interpretation doesn’t work in every scenario per se, but the same goes for model-driven development as such, so I think it’s too narrow(-minded) to say “it’s too narrow”. It’s not so much that you have to happen to work in a domain where something like this is possible: you have to find out which parts of your development process would benefit from it, just as you have to find out which parts benefit from a model-driven approach in the first place.

