
Abstraction sucks!

No, I’ve not gone mad: having obtained a Master’s Degree in Mathematics precludes me from saying such things in earnest. However, the title does seem to represent a sentiment that’s present throughout a large part of both the software development community (especially, but not exclusively, the non-techie folks) and our own community as a whole, a sentiment which might be summed up as “abstraction means ‘more difficult’, right?”.

Wikipedia defines abstraction as “a process by which higher concepts are derived from the usage and classification of literal (“real” or “concrete”) concepts, first principles and/or other abstractions”. As a consequence, abstraction does not inherently add complexity to a situation; quite the opposite: it reduces complexity (usually the incidental part of it, thankfully) by condensing existing usage patterns into more compact wording which carries the same content and level of detail. So, when we do some Refactoring and reduce the overall size of a code base, chances are that we’re abstracting at the same time. Even when the code base grows in size, it might be because you’re introducing classes (e.g., through “Extract Interface”) which represent those higher concepts. In my own programming, I tend to search for usage patterns and other commonality which I can abstract/Refactor away so I can understand the domain better. The benefits of abstraction for software creation are: (1) a better understanding of the domain, as well as concepts for the ubiquitous language, and (2) productivity gains through “doing more in less code”, and more maintainable code at that.
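As an illustrative sketch (my own hypothetical example, not from the original post), “Extract Interface” condenses a repeated usage pattern across two concrete classes into one higher concept, so client code can be written once against the abstraction:

```python
from abc import ABC, abstractmethod
import json

class Exporter(ABC):
    """The extracted abstraction: anything that can render records."""

    @abstractmethod
    def render(self, records: list[dict]) -> str: ...

class CsvExporter(Exporter):
    def render(self, records: list[dict]) -> str:
        header = ",".join(records[0].keys())
        rows = [",".join(str(v) for v in r.values()) for r in records]
        return "\n".join([header, *rows])

class JsonExporter(Exporter):
    def render(self, records: list[dict]) -> str:
        return json.dumps(records)

def export(exporter: Exporter, records: list[dict]) -> str:
    # Client code is now written once, against the abstraction,
    # instead of being duplicated per concrete exporter.
    return exporter.render(records)

records = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
print(export(CsvExporter(), records))
```

Note that the code base got slightly bigger (the `Exporter` class is new), but every call site shrank to a single pattern: exactly the trade described above.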

Now, it seems that there are 10 types of people: those who like abstraction and those who abhor it. That dichotomy seems to extend to the world of MDSD tools as well: on the one hand we have tools like Intentional, Xtext, MPS and Acceleo which allow you to sculpt your modeling language (or in the case of Intentional: domain) to the n-th degree, at the cost of meta-programmers’ hours; on the other hand there are tools like Mendix, BeInformed, Pega and all the BPMS, CASE and 4GL tools of old (and new) which come with their own pre-determined set of concepts and a lot of out-of-the-box functionality (the semantics of those concepts plus a technical architecture), and which seem to have gained quite a bit of popularity, especially among the more business analysis-oriented folks. Having worked with some of these tools myself, I’m always struck by their inability to introduce new abstractions on top of the existing ones. You’re basically given the one modeling language, and when you happen to see a usage pattern: well, you’d better make sure you stick to it. While I’ve reasoned earlier that architecture does have a place in modeling, I didn’t have this slightly mind-numbing variety in mind…

At the same time, there are arguments against abstraction which we’d better know in order to understand why “the regular folks” like those fixed-modeling-language MDSD tools more than anything “us folks” care to cook up. The first one is that an abstraction has to be really, really good in order to be useful in practice. Joel Spolsky formulated the “Law of Leaky Abstractions”, which says that however good your abstraction is, at some point you’re going to have to know what’s going on “underneath”, i.e. the concepts the abstraction was derived from, in order to understand the abstraction itself. This is especially true in software, where a minute leak can, and thus will, cause a bug which forces you to step down an abstraction level and learn about everything there before you’re able to solve the bug, which incidentally might very well be harder since you’ll have to solve it “through” the abstraction. (It’s also the reason why I’m not overly fond of internal DSLs: the host language will “shine through” at some point anyway. Using a subset of UML + profiles also falls prey to this problem.)
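To make that “shining through” concrete, here is a minimal, hypothetical sketch (in Python, my own example) of an internal DSL for filter predicates whose host language leaks out the moment it is misused:

```python
# A tiny internal DSL for building filter predicates over dict records.
class Field:
    def __init__(self, name: str):
        self.name = name

    def equals(self, value):
        # Returns a predicate: a function from record to bool.
        return lambda record: record.get(self.name) == value

def where(*predicates):
    # Combines predicates with logical AND.
    return lambda record: all(p(record) for p in predicates)

matches = where(Field("status").equals("open"), Field("owner").equals("bob"))
print(matches({"status": "open", "owner": "bob"}))   # True

# The leak: writing the natural-looking Field("status") == "open"
# (== instead of .equals) is silently accepted by the host language and
# yields a plain bool, so where(...) only fails later with a host-language
# TypeError ("'bool' object is not callable"), phrased in Python terms
# rather than in the DSL's terms.
```

Diagnosing that error requires stepping down to the host-language level, which is exactly the leak described above.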

The second argument is that, while abstraction can reduce the overall complexity, it does produce a bump in the learning curve: it’s one more term to learn about and remember, especially how it relates to all the other concepts, which makes this a bit of a quadratic process. People generally like to stay in their comfort zones, so it’s only natural that they don’t like being dragged out time and time again. The fixed-modeling-language tools excel in providing concepts and constructs which are at the right level of abstraction for a good deal of the business domains they’re aiming at; once you’re intimate with those, there’s a lot of business value to be created with only that knowledge, so there’s actually no need to be dragged out anyway.

The third argument is that it takes a special kind of software creator (developer/programmer) to go search for abstractions, find the right ones and reify them (implementation in the modeling language, education of modelers, adaptation of code generators/model interpreters, etc.), especially in a running project: this amounts to the software equivalent of open-heart surgery where you’re also cobbling together the heart-lung machine on the spot.

The question is whether it’s bad that such different flavors of MDSD tools exist and whether either one should prevail. In my opinion, it’s unfortunate that it’s hard to get users of the fixed-modeling-language tools to acknowledge that the fixed set of constructs and concepts is hurting them. Often the 80/20 rule applies: 80% of the target application can be created in 20% of the time through modeling, while the remaining 20% of the application has to be created in a different way and at great cost, since the tool doesn’t allow that part to be comfortably modeled. On the other hand, it’s at least as hard to get fans of non-fixed MDSD tools to acknowledge that these are actually not that simple to use.

To me, that is an incentive to see whether I can come up with something that enables easier meta programming (DSL creation, code generator/model interpreter implementation) and is capable of delivering those capabilities to a large audience. Stay tuned…

Categories: MDSD, SD in general, The Why
  1. Dave Orme
    January 21, 2011 at 8:01 pm

    Yes, it _is_ hard to find the right abstraction for a range of problems, and it takes time. And in the process multiple people will create competing attempts to solve the abstraction problem and succeed to varying degrees, until we finally collectively settle on something that “works” for “most of us”.

    I also agree with your vision that “architecture in a box” works best when the architecture itself is extensible to handle unforeseen cases.

    What part of this problem are you interested in? It’s a problem I’m particularly interested in, especially in the RCP space.

    Dave Orme

    • January 23, 2011 at 12:12 pm

      What I find important is that we don’t try to find the right level of abstraction across a range of problems, but rather that we do it separately for each instance, since each will be unique anyway, if only in terms of the people involved and the outside requirements and context. (Of course, it is helpful to see and evaluate attempts at solutions for each instance.) In general, I find the word “architecture” a bit too vague to be usable, and I don’t really understand how you read an “architecture in a box” into my blog. For modeling, I’d rather confine myself to concepts like “meta model”, “syntax”, etc.; in fact, setting up a solution (i.e., modeling tools and environment) should be simple enough not to warrant a “real” (instance-specific) architecture.

  2. January 31, 2011 at 10:56 pm

    I completely agree: finding the right abstractions is hard and time-consuming.
    In a sense, that’s why good DSLs do not come out of the blue every day. And although I can be a fan of language design and open DSLs, I think this is fine as it is.
    The final goal of MDSD is not to make every developer a language designer. The goal is to have *good* DSL designers that build *good* DSLs for the right audience.
    Our experience with visual DSLs in the Web context (WebML, see http://www.webml.org) is that it took *years* (together with tens of industrial experiences and thousands of users) to refine and polish a good language. I’m not convinced that a process like this can be entrusted to just any developer or designer.
    Then, I agree with you that a good DSL should be open for extension and closed for modification (ahh, the good old open-closed principle).
    But once this is set, the abstractions must be intuitive and make life easier, not harder.
    Finally, about tools: good tools must be paired with good languages. And I think that domain-specific architectures (shall we call them DSAs?) should be paired with DSLs. Just as we say that one language does not fit all needs, I would say that one architecture may not fit all languages. This could give you the impression that I’m somewhat skeptical with respect to generic model-based tool/editor generators, and that’s actually true. Tools, design interfaces and publishing architectures should be tailored to the users (and this comes from our experience too, see: http://www.webml.org), otherwise we fall back to the problems you raise in your excellent analysis.

    • February 1, 2011 at 8:28 am

      Hi Marco,

      You’re making one of my points better than I do 🙂 The reason those “fixed” tools I mentioned are popular (and have value) is exactly because they carry a good DSL that’s reusable across a large (enough) audience.

      On the other hand, getting to that point, either within the scope of one or a few closely-related projects or with the goal of creating “the new WebML”, is still quite tedious and difficult. This means it takes a lot of focus and determination to get there in the first place. In case you happen to have a domain which isn’t catered for off-the-shelf, that often means you don’t try it at all…

  3. February 22, 2011 at 9:11 pm

    DSLs and Abstractions do not have to be heavy.

    In fact, when DSL creation and maintenance is trivialized, the abstractions can target really tiny areas. And when the DSL is source-controlled in sync with the rest of the project, the DSL can target other DSLs; that is, each abstraction covers its own responsibility area and targets other abstractions.


    A DSL can then be made in less than a day, by a single person, and maintained as required whenever the abstraction needs modifications.

