The ASmAP Principle (part 2)

April 23, 2012 Leave a comment

To recount the gist of the previous blog: the ASmAP-principle strives to minimize the footprint of your project/application in terms of code base size, dependencies and configuration complexity. (In fact, it’s a particular re-statement of the KISS-principle.)

What’s the use?

The main reason to follow this principle is that projects/applications tend to grow quickly to a point where it’s quite impossible to fit a good mental model of them into your brain in any reasonable amount of time. In turn, this inhibits you from becoming productive on those projects and from making the right coding decisions. By constantly making an effort to reduce incidental complexity, you’ll make development on the project more effective and productive – not only in the long run, but also in the short run. You should see a decrease in time-to-market, bug count and maintenance costs after a short time, while hooking up extra devs to the project and effectively communicating with your customer becomes easier.

The ASmAP Principle is not about being as smart as possible in as few lines of code as possible; it’s about achieving the simplest possible solution, but not simpler – freely adapted from Albert Einstein. It also pertains to documentation: provide just enough text to clarify what your code is doing beyond what the code already clearly expresses on its own. This is also an incentive to write your code in a way which makes documentation largely unnecessary. I usually get away with some explanation about a class as a whole and some usage patterns.

Now, I’m not advocating premature optimization (of the performance of implemented functionality) here. I do think that it is the root of at least a lot of evil and that any optimization effort should be based on profiling. Moreover, optimization is usually the source of a lot of incidental complexity, since it doesn’t expand functionality as such: it only improves the performance of the implementation. As an example, consider caching: it requires adding a layer of indirection, thinking about invalidation, and usually comes with a bunch of assumptions and runtime behaviors which are easy to get wrong and hard to reason about.

Oh, and I certainly don’t advocate minification or obfuscation :)

Kill your dependencies

Dependencies on external libraries, frameworks, components, runtimes, environments and what-have-you -i.e., everything that is within your direct sphere of influence but which you didn’t actually create yourself- are often the biggest source of incidental complexity. Each of these dependencies adds its own dependencies, but more importantly, they all add to the information you need to carry around in your head: what’s the library’s API like, how do I include the library correctly, how do I instantiate the component, what are the peculiarities of this runtime, what do I need to configure in that environment to have it switch between dev/test/acc/prod?

Obviously, you can document much of this, but documentation has to be kept up-to-date and you’ll always need to remember a non-trivial amount of information to even be able to get things working at all, let alone productively.

Furthermore, dependencies tend to solve only half of a problem that is technical rather than functional in nature – in other words: it falls squarely in the category of incidental complexity. My favorite example is Hibernate: it gives you (un-)marshalling of Java objects from/to a relational database, but at the price of having to write XML mapping files (well, in the pre-@annotation days, at least) which you need to keep in sync with both the Java classes and the DB schema. Were I to generate the latter, I could just as well generate the necessary (un-)marshalling methods. (With annotations, most of the metadata resides in the Java class, at least.)

To gauge whether you have too many dependencies in your Java project, I propose the following:

The Maven Rule: if your project uses Maven for dependency management, you have too many dependencies. Period.

While Maven undeniably makes the whole dependency management problem more bearable, it generally worsens the underlying problem by making it just too friggin’ easy to drag in more dependencies without ever really wondering whether that’s a good idea. Think about what you really need from dependency #37 that you put in your project. Some questions you could ask yourself:

  1. Is the same functionality already implemented (more-or-less equivalently) somewhere else in your project (typically using another dependency-of-ill-repute)? – If so, Refactor and remove all but the most-used dependency. Sometimes, you’ll find that you’re better off writing your own implementation without using external dependencies to snugly fit your needs.
  2. Does it solve your entire technical problem? If not, are better frameworks available?
  3. Does it provide a complete paradigm/way-of-thinking/-working (such as the Google Guice or Apache Wicket frameworks) or does it “only” provide some utility functions (such as Google Guava)? In the latter case, see the first point.

Note that I’m not advocating NIH here, but mind that there’s also no point in shoehorning a problem into a particular shape so that it happens to fit some “reusable” code: the fit is often quite bad and the resulting primary code not very aesthetically pleasing. That’s a general problem I have with reuse: all code makes a lot of implicit assumptions and unless these assumptions are met by your problem (domain) on a grand scale, things are not going to fly. The most rewarding uses of reusable components are often those where the component either matches the problem domain perfectly or is completely functionally orthogonal.

Not only can you reduce the sheer number of artifacts on which you’re dependent, you can also make sure that the number of direct dependencies (i.e., part X makes direct use of dependency Y by explicit reference such as an import) is as small as possible. This is analogous to your project architect insisting that application layers only interact in very controlled ways (and preferably through DTOs). Often, you can hide a large number of dependencies in a specific subsystem which then governs the use of and access to its dependencies.

Categories: SD in general

Language Workbench Challenge 2012

March 31, 2012 Leave a comment

Apologies for the radio silence: it’s been way too long since I’ve done some blogging.

In my defense, the last couple of months have been pretty busy: I’ve been working a lot on my language workbench (my own startup) and I also got heavily involved with another startup. Last week I was in Cambridge, UK for the Code Generation 2012 conference and the co-located Language Workbench Challenge during which I presented said language workbench for the first time – more on that in a minute.

As it turns out, the next couple of months are going to be slightly less busy and should allow me to write some more blogs. I got some material pent up already that’s good to go after a little spit-shine.

By the way: I was thrilled to notice that views have not really dropped all that much during the hiatus since my last blog post! :)

Language Workbench Challenge 2012

The #lwc2012 is an event that is co-located with the Code Generation conference and takes place a day before it. The essence of the #lwc2012 -at least: to me- is to challenge the various language workbench creators with new ideas, levels of maturity, etc. The sheer variance among the “contenders” is so big that it’s quite impossible to judge them on any objective and/or quantitative scale – this explains why the moniker Competition would be quite unjust. For more information on the event itself, I’m going to refer to the official site. This will probably be updated soon by organizers Angelo Hulshout and Paul Zenden with a nice summary of the event and possibly even videos of the various presentations (including mine).

My primary goal was to gather feedback on my ideas around, and implementation of, Más: a cloud-based language workbench that makes it possible to create domain-specific languages and to use these to model “stuff”. The feedback I got during the challenge was quite positive, in general. I already knew that the UI and the editing behavior still leave much to be desired, but I was positively surprised by the fact that people other than myself were able to use it to do parts of the extension assignment. The fact that you can do graphical modeling with nothing more than regular HTML, CSS, JavaScript plus a bit of HTML5 Canvas seemed to surprise plenty of people.

On the whole, the assignment -creating a modeling environment for the Piping & Instrumentation domain, including code generation and preferably doing or triggering simulation- itself was somewhat cumbersome for several reasons.

First of all, the reference implementation made use of a rather old-fashioned piece of proprietary Windows software which didn’t really provide a very clear basis for the code generation and the triggering of the simulation. To get around that, I simply took the MetaEdit+ implementation which the nice people of MetaCase were good enough to share with the world at large, ran their code generation against an equivalent model and re-implemented that. I didn’t bother with the Windows thing beyond that.

Secondly, the two “domain experts” (or at least, the two people most knowledgeable on the domain, being Paul Zenden and Juha-Pekka Tolvanen) on site were also two challengers. Especially the extension assignment could have benefited from a clarification by unbiased domain stakeholders. Angelo and I have already exchanged some ideas on how to do that differently next year – in particular: it would be nice to be able to consult real, on-site domain stakeholders who would also be available during the preparation of the assignment. The extension assignment also didn’t really address the workbenches’ capability to really extend the language.

Overall, the event was quite inspiring to me and has provided me with encouragement to continue with the development of Más as well as with a couple of ideas I didn’t have beforehand. Stay tuned for more on that in the future :)

Categories: DSLs, MDSD

New Year’s Resolution: the ASmAP Principle (part 1)

January 2, 2012 14 comments

Happy New Year, everyone!

Seeing that we’ve safely progressed into 2012, it’s time for New Year’s Resolutions. As a suggestion for a NYR in software engineering, I’d like to suggest following the

ASmAP-principle, where ASmAP == As Small As Possible

Ever since I started to program on the MSX2, I’ve been wondering where all that extra computing power, memory and disk space that all my subsequent workhorses have been endowed with went to. Obviously, the amount of pixel estate has grown quite a bit (from a measly 0.1Mpixel in 4-bit color to over 1.2Mpixel in 32-bit color on the quite modest screen of my MBP 15″), but that doesn’t really explain why my Eclipse typically reports around 200MB of heap space in use: I sincerely doubt that the size of my source code (or even the sheer bytecode size of my compiled classes, for a fairer comparison) has grown by a factor of 7000 from the roughly 24KB of memory that was available for MSX BASIC. It also doesn’t explain why whole slews of applications aren’t really more responsive these days than they were in “ye’ olden days”.

Of course, I know that running a JVM and OO languages on it is intrinsically more memory intensive than assembler or what are essentially token streams (the in-memory representation of an MSX BASIC program). But I simply refuse to accept that we need so many more bytes to write even the simplest of programs on our modern-day OSs, VMs and IDEs. I think a large part of this growth stems precisely from Moore’s Law in action: because we have a lot of memory and extra CPU cycles per second, we tend to use them – not per se for more functionality, nicer graphics and such, but also to make programming easier for ourselves without actually delivering more business value or quality.

I’ve seen my fair share of “programming by copy-paste” and the consequences of that: bloated code bases with a lot of (usually inconsistent) duplication, a build process that takes ages to finish and is brittle, a development environment that takes days to set up (possibly after sacrificing a few kittens in the process, just to get it to work at all) and an enormous number of dependencies (“Maven makes this so easy to manage”) consisting of all kinds of stop-gap frameworks which never seem to solve a business problem and only seem to solve half of a technical problem. The end result is a gargantuan and monolithic Thing™ that’s hard to fix, hard to alter and hard to hand off to anyone, so you’re stuck with it until eternity. Sounds familiar? :)

I think it’s this tendency to copy-paste our way around development, without taking the time to properly Refactor your stuff and reflect on your solutions, which accounts for the enormous gap between a typical application’s footprint and the value it delivers. As a solution, I propose to optimize something that’s relatively easy to measure: the footprint of your code base, in terms of pure source size but also taking into account the amount of re-used “stuff” such as libraries/frameworks, the time needed to build your project and deploy the application and the number of dependencies. In other words: make your project As Small As Possible. (Note that size is a lot easier to quantify than, say, complexity or quality.)

The overruling motivation for doing this is something which hasn’t changed the last few millennia: the amount of information our brains can interpret, process and store (in a limited amount of time). It is this characteristic which is the prime limitation in any project, and especially those involving more than one developer – i.e., the typical ones. The amount of information in a software project is a lot bigger than the sheer code size conveys, because there are all kinds of connections between various parts of the code base which you need to understand in order to add or change anything in the code. It’s my impression that the complexity is roughly a function of the code base size multiplied by the number of layers/aspects in the architecture.

So, the larger a project’s footprint, the harder it is to understand: not only for your co-workers (who are presumably stupid gits who couldn’t program their way out of a tin can), but even for yourself. The less you understand about your project, the less effective and productive you’re going to be. In other words: you need to build up a mental model of the project (which is something I’ve blogged about earlier) and hope it will fit in your brain in the short time you have. Given that you and your team mates can’t grow extra brains, you’ll have to utilize the available mental processing power as efficiently as possible. Note that simply adding extra brains to the project doesn’t provide a solution, because each individual brain will not be able to grok the entire project if you couldn’t do so yourself to begin with.

In the next installment, I’ll discuss some simple approaches to follow the ASmAP-principle.

Xtext tip: “synthetic” parser rules

December 22, 2011 2 comments

This is a quick one to share a simple trick which may come in handy when creating an Xtext grammar.

Let’s say your grammar has a type rule T1 (i.e., a rule which corresponds to an EClass in the Ecore meta model). Let’s also say that some other type rule T2 composes that type somehow, i.e., it has a feature someT1 to which something of type T1 is assigned. Let’s say that you want to limit the syntactic possibilities for the composition of a T1 instance somewhat, e.g. in the case that T1 is a group of alternatives but a few alternatives are invalid when used inside T2.

This is a wholly legitimate situation because Xtext grammars usually have a number of responsibilities at the same time, amongst which are defining (1) a mapping to an Ecore meta model and defining (2) the syntax of the DSL.

Let’s sum this situation up in some grammar code:

T1: A1 | A2 | A3;

T2: 'a-t2' someT1=T1;

Let’s say that we would want to exclude A3 from the possible T1‘s in any T2. We could do this via a validation which simply checks the someT1 feature of any T2, reporting an error if it’s an A3. But that means that the parser itself still allows an A3 at that spot which could open up a whole can of smelly worms – e.g., left-recursion or some ambiguity. Also, the content assist that comes out-of-the-box will make syntax suggestions for A3.

Hence, we would like to inform the parser about the restricted syntax. One possibility would be:

T1: T1WithoutA3 | A3;

T1WithoutA3: A1 | A2;

T2: 'a-t2' someT1=T1WithoutA3;

This works perfectly, but it also ‘pollutes’ the meta model a bit. Since the meta model is mostly consumed by downstream clients like interpreters and code generators, this would only cause confusion. But more importantly: if we re-use an existing Ecore meta model (by means of the returns clause) this solution is not possible, since we would have to add a super type T1WithoutA3 to the A1 and A2 types which are sealed inside the re-used Ecore meta model – Xtext will issue an error as soon as we try it.

The clean solution consists of using something which I’ve termed a “synthetic parser rule” and has the following form:

T1:  A1 | A2 | A3;

T1WithoutA3 returns T1: A1 | A2;

T2: 'a-t2' someT1=T1WithoutA3;

Now there’s no pollution of the meta model, but the syntax will be restricted as we’d like it. Note that this is very much part of the standard Xtext repertoire, but the trick works especially well in the face of type hierarchies and re-used Ecore meta models or inheriting Xtext grammars.

Using syntactic predicates in Xtext, part 2

December 20, 2011 2 comments

This blog is a continuation of the previous one about how to use syntactic predicates in Xtext. As promised, I’ll provide a few more examples, most of which come from the realm of GPL-like languages.

But first, a little summary is in order. As stated in the previous blog, a syntactic predicate is an annotation in an Xtext grammar which indicates to the ANTLR parser generator how a (potential) ambiguity should be resolved by picking the (first) one which is decorated with ‘=>‘. The annotation can be applied to:

  • a(n individual) keyword (such as ‘else‘),
  • a rule call (unassigned or as part of an assignment) and
  • a grouped parse expression, i.e. a parse expression between parentheses.

One thing to keep in mind -not only for syntactic predicates but in general- is that an Xtext grammar has at least three and often four responsibilities:

  1. defining the lexing behavior through definition and inclusion of terminals;
  2. defining the parsing behavior through parser rules which determine how tokens are matched and consumed;
  3. defining how the model is populated;
  4. (when not using an existing Ecore model) defining the meta model.

Syntactic predicates influence the second of these but not the others. It is, after all, a syntactic predicate, not a semantic one – which Xtext doesn’t have in any case. Just as without using syntactic predicates, parsing behavior is not influenced by how the model is populated: instead, it is governed solely by the types of the tokens it receives from the lexer. This is easily forgotten when you’re trying to write grammars with cross-references like this:

SomeParserRule: Alternative1 | Alternative2;
Alternative1: ref1=[ReferencedType1|ID];
Alternative2: ref2=[ReferencedType2|ID];

In this case, the parser will always consume the ID token as part of Alternative1, even if its value is the (qualified) name of something of ReferencedType2. In fact, ANTLR will issue a warning about alternative 2 being unreachable, so it is disabled. For a workaround for this problem, see this older blog: it uses a slightly different use case as motivation but the details are the same. The only thing a syntactic predicate can do here is to explicitly favor one alternative over the other.
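To sketch that workaround (assuming, hypothetically, that ReferencedType1 and ReferencedType2 share a common supertype, here called ReferencedType): use a single cross-reference to the supertype and leave the disambiguation to scoping and/or validation:

    SomeParserRule: ref=[ReferencedType|ID];

The parser then only has to consume an ID token; deciding whether that name resolves to a ReferencedType1 or a ReferencedType2 instance is deferred to the linking phase, which is semantic rather than syntactic territory.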

Some examples from Xbase

The Xtend and the Xbase languages that Xtext ships with both use plenty of syntactic predicates to avoid ambiguities in their grammars and to avoid having to use backtracking altogether. This already indicates that syntactic predicates are a necessary tool, especially when creating GPL-like or otherwise quite expressive DSLs. Note again that syntactic predicates are typically found near/inside optional parts of grammar rules since optionality automatically implies an alternative parsing route.

A good example can be found in the Xbase grammar in the form of the XReturnExpression rule: see GitHub. It uses a syntactic predicate on an assignment to force the optional XExpression following the ‘return‘ keyword to be parsed as part of the XReturnExpression rather than being an XExpression all on its own – which would have totally different semantics, but could be a viable interpretation considering Xtend doesn’t require separating/ending semi-colons.
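In simplified form (the rule and type names below match Xbase, but the rule body is trimmed down to its essence; see the linked grammar for the real thing), the predicate sits on the assignment like this:

    XReturnExpression returns XExpression:
        {XReturnExpression} 'return' (=>expression=XExpression)?;

The ‘=>‘ tells the parser: whenever an expression can follow the ‘return‘ keyword, always match it as part of this XReturnExpression, instead of leaving it to be parsed as a stand-alone expression.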

The Xbase grammar also shows that syntactic predicates are an effective way to disambiguate the use of pairs of parentheses for denoting a list of arguments to a function call from that for grouping inside an expression: once again, see GitHub – here, the syntactic predicate applies to a grouped parse expression, i.e. everything between the parentheses pair starting just before the ‘=>‘.
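In a hypothetical, much-reduced form (all rule and feature names below are made up for illustration; the real Xbase rules are more involved), disambiguating a closure’s parameter list from a parenthesized expression could look like this:

    ClosureOrGroup returns Expression:
        =>({Closure} '(' params+=ID (',' params+=ID)* '|') body=Expression ')'
        | '(' Expression ')';

Both alternatives start with ‘(‘, so a single token of lookahead doesn’t suffice; the predicate on the grouped parse expression makes the parser commit to the closure alternative exactly when it can see a parameter list followed by ‘|‘ – a series of tokens rather than a single one.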

Unforeseen consequences

Even if you don’t (have to) use syntactic predicates yourself, it’s important to know of their existence. As an example, the other day I was prototyping a DSL which used the JvmTypeReference type rule from Xbase followed by an angled bracket pair (‘<’, ‘>’) which held ID tokens functioning as cross-references. I was momentarily surprised to see parse errors arise in my example along the lines of “Couldn't resolve reference to JvmType 'administrator'.” The stuff between the angled brackets was being interpreted as a generic type parameter!

It turns out that the JvmTypeReference parser rule uses a syntactic predicate on an angled bracket pair surrounding generic type parameters. This explains both the behavior and the lack of warnings by ANTLR about grammar ambiguities. You’d probably have a hard time figuring out this behavior before finding an innocuous ‘=>‘ here. In the end, I changed “my” angled brackets to square brackets to resolve this. This shows that syntactic predicates, just like backtracking, can be a double-edged sword: they can solve some of your problems, but you have to really know how they work to be able to understand what’s going on.
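A reduced version of the situation (the rule below is made up for illustration; only JvmTypeReference is real) looked something like this:

    Grant:
        type=JvmTypeReference '<' users+=[User|ID] (',' users+=[User|ID])* '>';

For input like “Session<administrator>”, the predicate inside JvmTypeReference makes it greedily claim the ‘<…>‘ part as a generic type argument list, so ‘administrator‘ is looked up as a JvmType – hence the linking error – instead of as a cross-reference to a User.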

I hope that this was useful for you: please let me know whether it is! I’m not planning on a third installment but you never know: a particular enticing use case might just do the trick.

Using syntactic predicates in Xtext, part 1

December 5, 2011 5 comments

Xtext 2.x comes with the possibility to define syntactic predicates in the grammar. But what exactly are these syntactic predicates and how can they be used to avoid or resolve ambiguities in your grammar? The reference documentation is characteristically succinct on the subject. This might mean that it’s either very simple or very complicated ;)

In short: syntactic predicates provide a way to force the parser to make certain choices by annotating the grammar using a ‘=>‘.

Fortunately, it’s actually quite simple but you have to dive a little deeper into the parsing technology used by Xtext to really understand it. Xtext uses ANTLR* ‘under the hood’ to generate the lexer and recursive-descent parser. To leverage ANTLR, Xtext generates an** ANTLR grammar from an Xtext one. As such, it is ANTLR that does most of the heavy lifting while the Xtext runtime sort-of piggybacks on the ‘stuff’ ANTLR generates to build a full model from the parsed text and provide the functionality that ANTLR doesn’t.

During the generation of lexer and parser, ANTLR performs a thorough analysis of the grammar generated by Xtext to check for non-LL(*) behavior (i.e., left-recursion) and nondeterminism (i.e., ambiguities) in the grammar. The former it deals with by reporting an error “[fatal] rule xxx has non-LL(*) decision due to recursive rule invocations reachable from alts n, m, …. Resolve by left-factoring or using syntactic predicates or using backtrack=true option.” for every left-recursive rule and quitting the process, leaving you with a broken Xtext project. Left-recursion usually originates from trying to implement an expression language along the lines of

    Expression:
        '(' Expression ')'
        | left=Expression op=('+'|'-'|'*'|'/') right=Expression;

There’s a string of material (see here, here and here) detailing the ‘right’ patterns for writing such languages in a non-left-recursive manner in Xtext, which also take care of precedence and associativity. Since those patterns don’t use syntactic predicates (well, they can, but it’s not essential), I won’t talk about them any more here.
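For completeness, a minimal instance of that pattern (a simplified sketch with invented type names, not tailored to any particular language):

    Expression:
        Addition;

    Addition returns Expression:
        Primary ({BinaryOperation.left=current} op=('+'|'-'|'*'|'/') right=Primary)*;

    Primary returns Expression:
        '(' Expression ')' | {IntLiteral} value=INT;

No rule invokes itself as its first step anymore, so the left-recursion is gone; precedence would be encoded by layering more rules between Addition and Primary, and the {BinaryOperation.left=current} tree-rewrite action keeps the resulting model left-associative.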

Switching on backtracking should really be the very last option you try, as it doesn’t guarantee to solve the problem your grammar has, but it does guarantee to obscure any problem, simply by not reporting any – even the ones that are easy to fix. Furthermore, backtracking ‘merely’ tries all the possible options, picking the first one that works: in essence it’s a ‘precognitive’ syntactic predicate, but at the expense of time and memory. If we can tweak our grammar with syntactic predicates so that no backtracking is required, we get a parser that performs better and more predictably, if only because we’ve documented part of its behavior in the grammar.

The perfunctory example: the dangling else-problem

The most well-known application of syntactic predicates is also the simplest. Consider this grammar (header stuff omitted):


IfStatement:
    'if' condition=Expression 'then' then=Expression
    ('else' else=Expression)?;

Expression:
    IfStatement | {ValueReference} name=ID;

When having Xtext generate the language infrastructure for this grammar, you’ll get a warning from ANTLR saying “Decision can match input such as “‘else’” using multiple alternatives: 1, 2 As a result, alternative(s) 2 were disabled for that input“. This means that there is an ambiguity in the grammar. ANTLR detects this and makes a choice for you, because otherwise it would have to return a forest of parse trees instead of just one per parse, or roll a dice to cope with the nondeterminism. We’ll see in a minute that a syntactic predicate allows you to choose yourself, instead of having to rely on ANTLR to pick the right choice – with the chance of your luck running out.

Of course, we already were expecting this behavior, so let’s fire up ANTLRWorks on the InternalMyDsl.g file in the principal/non-UI Xtext project (easily findable using the Ctrl/⌘-Shift-R shortcut) to see how we might use that in general. First, ask ANTLRWorks to perform the same analysis ANTLR itself does during parser generation through Ctrl/⌘-R. Then, click the rule IfStatement (conveniently marked in red) to see the Syntax Diagram for it. This will look like this:

Since ANTLR already reported that it only uses alternative 1, this is the way the if-statement will be parsed: the optional else-part will be matched as part of the current invocation of the IfStatement rule. For the canonical example input “if a then if b then c else d”, it means that the parse will be equivalent to “if a then (if b then c else d)”, i.e. the else-part belongs to the second, inner if-statement and not to the first, outer if-statement. This result is what we usually want, since it complies with most existing languages and also because the else-part is visually closer to the inner if, so it’s more natural that it binds to that one instead of to the outer if.

By unchecking alternative 1 and checking alternative 2, we get the following:

Now, these ‘faulty’ diagrams in ANTLRWorks are usually a bit funky to interpret because the arrows don’t really seem to start/end in logical places. In this case, we should read it as: the optional else-part can also be matched as part of an outer invocation of the IfStatement rule, i.e. one which invokes the IfStatement rule a second time – it’s probably convenient to think in terms of the outer, respectively, inner invocation. For our ubiquitous example input, it would mean that the parse is equivalent to “if a then (if b then c) else d” – with the else-part belonging to the first, outer if-statement and not to the inner if-statement.

Note that it’s a bit hard to implement a recursive-descent parser with this behavior, since the execution of the inner IfStatement rule should somehow decide to leave the matching and consumption of the following ‘else‘ keyword to the execution of an outer IfStatement rule (not necessarily the one belonging to the direct caller!). ANTLR tends to favor directly matching and consuming tokens as soon as possible, by the currently-called parser rule, over a more circuitous approach.

You can influence the alternatives-picking behavior by placing syntactic predicates in the Xtext grammar. One advantage is that you make the choice explicit in your grammar, which both serves to document it as well as to eradicate the corresponding warning(s). Another advantage is that you can make a different choice from the one ANTLR would make: in fact, you can ‘trigger’ a syntactic predicate not only from a single token but also from a series of tokens – more on that in a next blog. Note that syntactic predicates favor the quickest match as well – by design.

Syntactic predicates in an Xtext grammar consist of a ‘=>‘ keyword in front of a keyword, rule call, assignment (i.e., an assigned rule call) or a grouped parse expression (including any cardinality). So, in our case the IfStatement rule becomes:

IfStatement:
    'if' condition=Expression 'then' then=Expression
    (=>'else' else=Expression)?;

The ‘=>‘ now forces ANTLR to not consider the second alternative path at all and always match and directly consume an ‘else‘ and an ensuing Expression, which happens to match the situation without a syntactic predicate – but now this behavior is clearly intentional and not a happenstance.

Since this blog already runs to some length, I’m deferring some more examples, insights and hints & tips to a next blog. One of the examples will revolve around some common GPL-like language features which can be difficult to implement without syntactic predicates but are blissfully uncomplicated with them.

*) Not entirely by default, but it’s thoroughly recommended: see this explanation for more details on that matter.
**) Actually, Xtext generates two ANTLR grammars: one for full parsing, and one which extracts just enough information to provide the content assist functionality with. They’re essentially the same as far as the pure ANTLR part is concerned.


December 1, 2011 6 comments

As a follow-up of sorts to a previous blog, I’m going to rant a bit now about the situation that there is decent documentation, but people refuse to read it.

In ye’ olden days, I have tried to study books like this as part of my Master’s studies in Mathematics, books which are really hard because they deal with very abstract topics and do so in a comprehensive but still quite compact manner so you have to ‘embellish’ the reasoning by interpolating between the factual statements given in order for you, mere mortal, to be able to understand it completely.

And by ‘hard’ I mean ‘time-consuming’ as much as ‘headache-inducing’, the juxtaposition of which precludes that “Googling for 5 hours” will get you anywhere: it may be time-consuming (it certainly is) and might be ‘frustration-inducing’, but your brain is not engaged enough by that type of effort to actually attain a bit of knowledge about something from which you can come up with a solution to your problem. It also precludes finding solutions to problems which nobody ever encountered before, or which have been worded sufficiently differently.

Going back to math books: I always told the (business/psychology/A.I.) students I had while being a student assistant teacher that if they worked their way through one page of mathematics in less than a quarter of an hour, they could be 100% certain that they hadn’t understood one bit of it and would therefore fail their examinations. I’ve never been challenged about the truth of that statement; only the minimum amount of time varies per student and per book/course: usually it takes a lot more than 15 minutes to really grok the material. For the book I cited earlier, I could substitute ’15 minutes’ with ’8 hours’. In the end, I gave up on that particular book. Luckily, that didn’t stop me from getting my MSc.

The reason that learning takes time is that the human brain is surprisingly good at a great number of things, but remembering factual statements rigorously and executing their semantics in a totally reliable manner isn’t one of them. I read once that “the human brain is very good at avoiding being eaten by a mountain lion and distinguishing the nourishing berries from the poisonous ones, but we aren’t that well equipped for thinking about 11-dimensional spaces”. We are exceedingly good at pattern recognition and are able to deal with incomplete or faulty ‘input’ very well -ftr ll, y cn prbbly rd ths sntnc jst fn- but logical thought takes a bit of a backseat, comparatively speaking.

To do ‘intellectual stuff’ like reasoning about parser implementations and such, we have to build up a mental model which we can hold in our brains. This mental model holds all sorts of knowledge: factual knowledge, remembered verbatim, but also vague stuff which we could file under ‘intuition’, as well as ‘executive knowledge’ which can be fed input and which helps you form thought processes. All this will hopefully enable you to solve problems; if not, you have to adapt and expand the mental model. The mental model is unique for each combination of a particular human brain and a particular domain, and is not readily expressed in any natural language, which also partly explains why teaching is such an art: you have to find a modus to tap into the brain and the relevant mental model of the people you’re teaching.

(Disclaimer to the previous paragraphs: I’m not a psychologist nor a neurologist, so what I’m saying here is essentially my mental model of how learning works.)

Building up the mental model takes time, and appreciably more time than just reading some text book: your brain has got to remember a lot of ‘stuff’, make connections between that stuff and validate all new inputs (the next sentence) against the mental model that’s being formed. And to add insult to injury: often enough it’s very difficult to say that your mental model has actually expanded at the end of the day, because once you really understand something, it tends to become trivial and the mental model feels ‘compressed’.

I realize that I have yet to start my rant for real…

In stark contrast to the former, this day and age apparently is one where all answers should be one Google search away. This is unrealistic for many reasons, the most important being that it would imply that all answers are already known, which would be a depressing thought in itself. However, the most demeaning aspect of this school of ‘thought’ (or lack thereof) is the implication that knowledge is unnecessary and that only facts matter. But facts in themselves are essentially inert matter which you have to bring to life by embedding them in a body of knowledge: a.k.a. the mental model. Without that, we would be the proverbial infinite number of monkeys hammering away at typewriters, trying to come up with a Shakespeare play.

Also annoying to the max is the idea that communities (such as the plethora of Eclipse fora) exist to do the work for you, rather than to help you do it by guiding you in acquiring more knowledge on the subject. It seems that homework is often done these days by posting the problems to some Q&A site and expecting, nay, demanding that someone else solve them. Instead of being glad that someone who has already gone to the trouble of attaining the required knowledge frees up some of his/her time to try and help a complete stranger with hints, tips and useful pointers, abuse seems to be in order…

For myself, I decided some time ago to only help those people who show at least some willingness to learn and to put in the (right kind of) effort themselves. In fact, I tend to try and help everyone, but stop helping as soon as my help isn’t taken seriously, no effort is being expended by the other party, I receive abuse, or any combination thereof. There’s no point in feeding the troll, after all: he/she can go gnaw some rocks, as far as I’m concerned…


Funny case in point: how to post on xda.

