New Year’s Resolution: the ASmAP Principle (part 1)

Happy New Year, everyone!

Seeing that we’ve safely progressed into 2012, it’s time for New Year’s Resolutions. As an NYR for software engineering, I’d like to suggest following the

ASmAP-principle, where ASmAP == As Small As Possible

Ever since I started to program on the MSX2, I’ve been wondering where all the extra computing power, memory and disk space that my subsequent workhorses have been endowed with went to. Obviously, the amount of pixel estate has grown quite a bit (from a measly 0.1Mpixel in 4-bit color to over 1.2Mpixel in 32-bit color on the quite modest screen of my MBP 15″), but that doesn’t really explain why my Eclipse typically reports around 200MB of heap space in use: I sincerely doubt that the size of my source code (or even the sheer bytecode size of my compiled classes, for a fairer comparison) has grown by a factor of 7000 from the roughly 24KB of memory that was available for MSX BASIC. It also doesn’t explain why whole slews of applications aren’t really more responsive these days than they were in “ye olden days”.

Of course, I know that running a JVM and OO languages on it is intrinsically more memory-intensive than assembler or what are essentially token streams (the in-memory representation of an MSX BASIC program). But I simply refuse to accept that we need so many more bytes to write even the simplest of programs on our modern-day OSs, VMs and IDEs. I think a large part of this growth stems precisely from Moore’s Law in action: because we have a lot of memory and extra CPU cycles per second, we tend to use them – not necessarily for more functionality, nicer graphics and such, but simply to make programming easier for ourselves, without actually delivering more business value or quality.

I’ve seen my fair share of “programming by copy-paste” and the consequences of that: bloated code bases with a lot of (usually inconsistent) duplication, a build process that takes ages to finish and is brittle, a development environment that takes days to set up (possibly after sacrificing a few kittens in the process, just to get it to work at all) and an enormous number of dependencies (“Maven makes this so easy to manage”) consisting of all kinds of stop-gap frameworks which never seem to solve a business problem and only ever solve half of a technical one. The end result is a gargantuan and monolithic Thing™ that’s hard to fix, hard to alter and hard to hand off to anyone, so you’re stuck with it until eternity. Sound familiar? 🙂

I think it’s this tendency to copy-paste our way around development, without taking the time to properly Refactor and reflect on our solutions, which accounts for the enormous gap between a typical application’s footprint and the value it delivers. As a solution, I propose to optimize something that’s relatively easy to measure: the footprint of your code base, in terms of pure source size, but also taking into account the amount of re-used “stuff” such as libraries/frameworks, the time needed to build the project and deploy the application, and the number of dependencies. In other words: make your project As Small As Possible. (Note that size is a lot easier to quantify than, say, complexity or quality.)
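To make that footprint concrete, here’s a minimal sketch of the source-size part of the measurement – a hypothetical helper, not an existing tool. The “.java” suffix, UTF-8 sources and a single source tree are my own assumptions for illustration; build time and the number of dependencies would need separate probes.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Minimal footprint meter: counts source files, lines and bytes under a directory.
public class FootprintMeter {

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        long files = 0, lines = 0, bytes = 0;
        try (Stream<Path> walk = Files.walk(root)) {
            for (Path p : (Iterable<Path>) walk::iterator) {
                if (!Files.isRegularFile(p) || !p.toString().endsWith(".java")) {
                    continue; // only plain source files; the suffix is an assumption
                }
                files++;
                bytes += Files.size(p);
                try (Stream<String> content = Files.lines(p)) { // assumes UTF-8 sources
                    lines += content.count();
                }
            }
        }
        System.out.printf("%d source files, %d lines, %d bytes%n", files, lines, bytes);
    }
}
```

Run against a project root (e.g. java FootprintMeter src/), the three numbers give a crude baseline to drive down over time; tracking them per build makes creeping growth visible early.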

The overriding motivation for doing this is something which hasn’t changed in the last few millennia: the amount of information our brains can interpret, process and store (in a limited amount of time). It is this characteristic which is the prime limitation in any project, especially those involving more than one developer – i.e., the typical ones. The amount of information in a software project is a lot bigger than the sheer code size conveys, because there are all kinds of connections between various parts of the code base which you need to understand before you can add or change anything. It’s my impression that the complexity is roughly a function of the code base’s size multiplied by the number of layers/aspects in the architecture.
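Stated as a back-of-the-envelope formula (my own notation, not a validated metric): complexity ≈ code base size × number of layers/aspects. The multiplicative form is the point: halving either factor roughly halves what a brain has to hold.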

So, the larger a project’s footprint, the harder it is to understand: not only for your co-workers (who are presumably stupid gits who couldn’t program their way out of a tin can), but even for yourself. The less you understand about your project, the less effective and productive you’re going to be. In other words: you need to build up a mental model of the project (which is something I’ve blogged about earlier) and hope it will fit in your brain in the short time you have. Given that you and your team mates can’t grow extra brains, you’ll have to utilize the available mental processing power as efficiently as possible. Note that adding extra brains to the project doesn’t provide a solution either, because each individual brain will not be able to grok the entire project if you couldn’t do so yourself to begin with.

In the next installment, I’ll discuss some simple approaches to follow the ASmAP-principle.

  1. January 2, 2012 at 2:21 pm

    Happy new year Meinte!
    I’m not sure that code size is by itself a complexity factor. As far as code volume is concerned, I’d bet rather on http://en.wikipedia.org/wiki/Halstead_complexity_measures (a small sketch of the volume formula follows this exchange).
    You’re right about the number of technical layers, but “business design” is also a source of complexity.

    • January 2, 2012 at 2:31 pm

      Ah, but I never said “minimize the complexity” 🙂 I don’t believe in (purely quantitative) metrics, since “if it’s a metric, you can game it”. Also, I’d say that complexity coming from “business design” is usually part of the essential complexity: by its very nature, you’re not going to get around having it. On the other hand, incidental complexity, coming from “implementation practices”, choice of architecture and so on, usually bears a direct relationship to the code base.
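    A minimal sketch of the Halstead “volume” measure mentioned above: V = N · log2(n), where n is the number of distinct operators and operands and N the total number of their occurrences. The counts in the example are made up purely for illustration.

    ```java
    // Halstead volume V = N * log2(n): n = distinct operators + operands,
    // N = total occurrences of operators + operands.
    public class HalsteadVolume {

        static double volume(long distinctOperators, long distinctOperands,
                             long totalOperators, long totalOperands) {
            long vocabulary = distinctOperators + distinctOperands; // n
            long length = totalOperators + totalOperands;           // N
            return length * (Math.log(vocabulary) / Math.log(2));
        }

        public static void main(String[] args) {
            // made-up counts for a small function, purely for illustration
            System.out.printf("V = %.1f%n", volume(10, 7, 33, 24));
        }
    }
    ```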

  2. January 3, 2012 at 8:35 am

    Ok, what you advocate is more or less the KISS principle (http://en.wikipedia.org/wiki/KISS_principle).
    About code SLOC, however: a hundred- or thousand-screen app is no more complex than a 10-screen app, provided that all screens follow the same architecture/programming rules. If these rules are complex, it costs time to truly get them in mind, but once that is done one can embrace the whole code, whatever the size. On the other hand, if each screen has a specific micro-design, 1000 occurrences are surely more complex than 10.
    About architecture choice, I agree with you that we ought to choose the necessary and sufficient ones (there may still be several possibilities). But necessary and sufficient for what? My guess: for the non-functional requirements, which are seldom explicitly described. A good read around that point: http://caminao.wordpress.com/2011/12/30/ahead-with-the-new-year/. And unfortunately, it can happen that the right-sized detailed architecture does not fit into one single brain 😉 But I agree that in many cases it should…

    • January 3, 2012 at 8:53 am

      You’re right: it’s the KISS principle. Let’s see how it is taken up with a different name 😉

      If an app has a lot of screens (which happen to be implemented homogeneously), then the essential complexity apparently is large. There’s no point in trying to “balance” that with an overly elaborate architecture which individual devs have to really know before they can do anything: it’s your job as an architect to ensure that all devs can be productive and contribute business value, not the other way around. If devs have trouble embracing the whole code, then you didn’t do a very good job of architecting it, did you? (There’s an obvious benefit for DSLs in here.)

      Sorry if this turns into a rant against architects, but I’ve spent way too much time in situations where Architects (mind the capital) thought they were God, couldn’t be bothered with low-level concerns like “is this ever going to work?” and wanted to stay as high-level as possible with lots of hand-waving, while they couldn’t program their way out of the Towers of Hanoi. It’s far too easy to pin the entire software crisis on “if only non-functional requirements were explicitly described”, IMHO. And lastly: if the “right-sized detailed architecture” doesn’t fit into one single brain, then it didn’t fit in the brain of the architect to begin with, so what does he/she know!? 😉 (Divide-and-conquer is obviously a good strategy here.)

  3. January 3, 2012 at 9:39 am

    Just one last thing before I let you go on with your initial intent: I don’t like any title with a capital letter in front, be it Architect or Programmer or Analyst. But there is actually a need to analyze, design and program (in that order, even if it should always be an iterative process). And in the end, we shouldn’t write a single line of code without being sure that it increases business value and/or optimizes maintenance costs. So each line of code (or component) should be challenged against business needs and/or architecture/programming rules. Note that the business-value goal implies that the code must work, be reliable, meet expected response times, etc.

    • January 3, 2012 at 9:45 am

      True, but this should be the responsibility of everyone involved, not just an architect/Architect – division of responsibilities makes it too darn easy to say “not my problem/fault” and point fingers. Also, it’s often very hard to verify whether business needs are going to be met without writing any code – that’s why agile/Agile was invented. Lastly, removing code is just as important as writing code – in fact, adding code is a sure way of increasing maintenance costs.

  4. January 3, 2012 at 5:23 pm

    By the way, Extreme Programming (1996) was the first success of the Agile trend. I tried it back then on some projects, with the help of the man who is now the French Agile guru (Laurent Bossavit), but couldn’t manage to sell the idea to customers at the time. It was focused on user stories and direct implementation, with as much refactoring as the developers thought good (and team meetings and continuous integration and so on…).
    Nowadays people begin to understand that an agile way of conducting a project, say with Scrum, is one thing, and that it may be applied to any kind of project, not only in the IT field. Building sustainable software is another kind of thing. Merging the two is a good way to succeed in application development.
    BUT taking into account just business needs (without analysis) and programming-level needs (without architecture) is a road to nowhere, IMHO. At least for projects which have to outlive their developers’ availability.
    Modeling is a means to do analysis and architecture design. I am for the, not yet existing, Agile Modeling way of life! 😉

    • January 3, 2012 at 5:33 pm

      Good idea, also because I can’t see a real distinction between analysis, design and implementation, or between architecture and programming, anyway – and I have yet to meet the person who was really good at any one of those without having a good grasp of _all_ of the others as well.

  5. January 12, 2012 at 4:40 pm

    Meinte, I remember the obfuscated C code contest, where the goal was to create As Small As Possible C programs. Quite a challenge, but not a recommendation for real life :-).

    • January 12, 2012 at 8:54 pm

      You should know to expect better than that from me, Jos 😉
      Part 1 already stresses the need for nimbleness because of the limited bandwidth into our brains, and obfuscation just doesn’t work on that particular channel. Just wait for part 2+ for some concrete strategies.

  6. January 13, 2012 at 9:19 am

    Of course I do expect better from you and I am looking forward to the concrete strategies.
