New Year’s Resolution: the ASmAP Principle (part 1)
Happy New Year, everyone!
Seeing that we’ve safely progressed into 2012, it’s time for New Year’s Resolutions. As an NYR for software engineering, I’d like to suggest following the
ASmAP principle, where ASmAP == As Small As Possible
Ever since I started programming on the MSX2, I’ve been wondering where all the extra computing power, memory and disk space of my subsequent workhorses actually went. Obviously, the amount of pixel estate has grown quite a bit (from a measly 0.1Mpixel in 4-bit color to over 1.2Mpixel in 32-bit color on the quite modest screen of my MBP 15″), but that doesn’t really explain why my Eclipse typically reports around 200MB of heap space in use: I sincerely doubt that the size of my source code (or even the sheer bytecode size of my compiled classes, for a fairer comparison) has grown by the factor of roughly 8500 that would account for it, starting from the 24KB of memory that was available to MSX BASIC. It also doesn’t explain why whole slews of applications aren’t really more responsive these days than they were in ye olden days.
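As an aside: that heap figure Eclipse shows is simply the JVM’s used heap, which any Java program can query for itself. A minimal sketch (the class name is mine, purely for illustration):

```java
/** Minimal sketch: print the JVM's current heap usage,
    essentially the figure Eclipse shows in its heap status widget. */
public class HeapUsage {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // used heap = heap currently claimed from the OS, minus the free part of it
        long usedBytes = rt.totalMemory() - rt.freeMemory();
        System.out.println("Heap in use: " + (usedBytes / (1024 * 1024)) + "MB");
    }
}
```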
Of course, I know that running a JVM and OO languages on top of it is intrinsically more memory-intensive than assembler or what are essentially token streams (the in-memory representation of an MSX BASIC program). But I simply refuse to accept that we need so many more bytes to write even the simplest of programs on our modern-day OSs, VMs and IDEs. I think a large part of this growth stems precisely from Moore’s Law in action: because we have a lot of memory and extra CPU cycles per second, we tend to use them – not necessarily for more functionality, nicer graphics and such, but to make programming easier for ourselves without actually delivering more business value or quality.
I’ve seen my fair share of “programming by copy-paste” and its consequences: bloated code bases with a lot of (usually inconsistent) duplication, a build process that takes ages to finish and is brittle, a development environment that takes days to set up (possibly after sacrificing a few kittens in the process, just to get it to work at all) and an enormous number of dependencies (“Maven makes this so easy to manage”) consisting of all kinds of stop-gap frameworks which never seem to solve a business problem and only ever solve half of a technical problem. The end result is a gargantuan and monolithic Thing™ that’s hard to fix, hard to alter and hard to hand off to anyone, so you’re stuck with it until eternity. Sound familiar? 🙂
I think it’s this tendency to copy-paste our way around development, without taking the time to properly refactor our stuff and reflect on our solutions, that accounts for the enormous gap between a typical application’s footprint and the value it delivers. As a solution, I propose to optimize something that’s relatively easy to measure: the footprint of your code base – its pure source size, but also the amount of re-used “stuff” such as libraries/frameworks, the time needed to build the project and deploy the application, and the number of dependencies. In other words: make your project As Small As Possible. (Note that size is a lot easier to quantify than, say, complexity or quality.)
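To make “easy to measure” concrete, here’s a minimal sketch of such a footprint measurement. The class name FootprintReport and the assumption that you point it at your source root are mine, not part of any existing tool; it just counts Java source files and their total line count using the (then-new) Java 7 file-walking API:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

/** Sketch: report the crudest of footprint metrics for a code base -
    the number of .java files and their total line count under a source root. */
public class FootprintReport {

    private static long files = 0;
    private static long lines = 0;

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : "src");
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                if (file.toString().endsWith(".java")) {
                    files++;
                    // count the lines of this source file
                    try (BufferedReader reader = Files.newBufferedReader(file, StandardCharsets.UTF_8)) {
                        while (reader.readLine() != null) {
                            lines++;
                        }
                    }
                }
                return FileVisitResult.CONTINUE;
            }
        });
        System.out.println(files + " Java source files, " + lines + " lines in total");
    }
}
```

Running this before and after a clean-up gives you a crude but honest trend line; build time and the number of dependencies can be tracked in the same low-tech spirit.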
The overriding motivation for doing this is something which hasn’t changed over the last few millennia: the amount of information our brains can interpret, process and store (in a limited amount of time). This characteristic is the prime limitation in any project, and especially in those involving more than one developer – i.e., the typical ones. The amount of information in a software project is a lot bigger than the sheer code size conveys, because there are all kinds of connections between various parts of the code base which you need to understand in order to add or change anything. It’s my impression that the complexity is roughly a function of the code base’s size multiplied by the number of layers/aspects in the architecture.
So, the larger a project’s footprint, the harder it is to understand: not only for your co-workers (who are presumably stupid gits who couldn’t program their way out of a tin can), but even for yourself. The less you understand about your project, the less effective and productive you’re going to be. In other words: you need to build up a mental model of the project (something I’ve blogged about earlier) and hope it will fit in your brain in the short time you have. Given that you and your team mates can’t grow extra brains, you’ll have to utilize the available mental processing power as efficiently as possible. Note that simply adding more brains to the project doesn’t solve this, because each individual brain will not be able to grok the entire project if you couldn’t do so yourself to begin with.
In the next installment, I’ll discuss some simple approaches for following the ASmAP principle.