Archive

Author Archive

An inconvenient dichotomy: specific vs. generic

July 29, 2014 5 comments

Recently, I had a bit of a realisation: the LISPers of this world have a point.

But before explaining that, a question: how much of your application is really specific to your business domain? The meat of the realisation is that the answer typically is: not all that much.

This is because by the nature of applications and how they are embedded in their environments, they are not purely descriptions of the business domain but rather transformations of that domain interspersed with more “mundane” details like UX (design, style, layout, navigation, etc.), authentication, authorisation, validation, persistence, external interfaces, internationalisation, concurrency (these days: asynchrony), error handling, etc. So, however you measure the size of your application’s code base (lines of code, zipped megabytes, etc.), either a very small portion of it will actually represent your business domain authoritatively or the details of the business domain are duplicated and spread out across the entire code base.

Consider as an example a Web application to sell financial products. Such products typically come with a lot of details and business rules, such as applicability (essentially validations on who can buy a product), their actual semantics (i.e., a specification of what should happen when someone buys that product), etc., adorned with “stuff” that exists primarily for people to read. The majority of the Web application does not pertain to any of that, but is rather about guiding prospective customers in some way to the product they want/need, providing general information about the company, points of contact, etc. As for the financial products themselves: the semantics will only be executed on a mainframe backend, while the applicability would translate into some kind of questionnaire checking for that.

So, on the one hand modeling the financial products provides all kinds of benefits, since you can derive the questionnaire and semantics from the model. Having a DSL to do that modeling makes sense, since each financial business domain typically uses very specific concepts and jargon. On the other hand, creating a Web application is hardly domain-specific in any way: the number of concepts involved is quite large, but they hardly vary in essence over a broad range of applications. Of course, how these concepts are applied in a specific application should be quite restricted by means of a uniform/house style and an application architecture.

In my experience, it does not pay to create app-specific DSLs capturing this generic range of concepts. (*gasp* Did I just say: “Don’t build DSLs.”?! Yes, I did.) It is simply a lot of work: you have to determine which concepts to capture, taking into account possible extensions, changes to the technology stack, etc. You typically also need an underlying type system and a powerful expression sub-language. Then you need a generator or interpreter tying in with the underlying architecture, technology stack and application environment. This is not trivial stuff and certainly not for the general developer, even with the capabilities of modern language workbenches.

Instead, creating applications should be done through a generic application language which supports raising levels of abstraction essentially continuously. And this is where LISP comes in: not that LISP is this GAL (or should be), but we could, and should, be inspired by the concept of macros. (I say levels of abstraction, since I also realised that there is not one level to abstract over. Rather, at various points in your code there are various axes to abstract over.)

A lot of constructs in general purpose languages are essentially about raising that level of abstraction, or can be used in that way. E.g., (non-polymorphically called) methods/functions are just code blocks with a name to them, inheritance is another way to avoid duplicating code, etc. These constructs force you to raise the level of abstraction by reasoning from the GPL constructs towards the intended, or naturally-suited level of abstraction.

LISP-style macros, on the other hand, work in the opposite direction: they are effectively templates transforming some code into code that exhibits a lower level of abstraction. Provided that you can chain such transformations together to arrive at code that’s actually executable, you can vary the level of abstraction incrementally and essentially continuously. Note that I say “code”, not “data”: making a distinction there would force us to choose where data ends and where code starts – not only is that a hard topic, it’s also not relevant for our purposes!

To go back to our example: the financial products could be modeled as GAL code which is then transformed into parts of the Web application. You can view the modeling GAL code as an internal DSL. The GAL tooling could even help to expose this internal DSL as an external DSL.

So, is LISP this GAL? No. LISP is a very nice language, but no amount of tooling is going to enable the general developer to become productive with it while also keeping the LISP community itself on board. In a next blog post, I’ll try to say more about what this GAL and its tooling should look like.

In conclusion: specific vs. generic is a false dichotomy. Rather, we need to be able to raise levels of abstraction in a meaningful way, i.e. incrementally and essentially continuously. That way, we can leverage genericity while still embracing specificity.

 

Categories: MDSD

Object algebras in Xtend

June 29, 2014 Leave a comment

I finally found/made some time to watch the InfoQ video shot during Tijs van der Storm‘s excellent presentation on object algebras during the Dutch Joy of Coding conference 2014. Since Tijs uses Dart for his code and I’m a bit of an Xtend junkie, I couldn’t resist porting his code. Happily, this results in code that’s at least as short and readable as the Dart code, because of the use of Xtend-specific features.

Object algebras…

In my understanding, object algebras are a way to capture algebraic structures in object-oriented programming languages in a way that allows convenient extensibility both in the algebra (structure) itself and in the behavior/semantics of that algebra – at the same time, even.

Concretely, Tijs presents an example where he defines an initial algebra, consisting of only integer literals and + operations. This allows you to encode simple arithmetic expressions inside of the host programming language – i.e., as an internal DSL. Note that these expressions result in object trees (or ASTs, if you will). Also, they can be seen as an example of the Builder Pattern. For this initial algebra, Tijs provides the typical print and evaluation semantics. Next, he extends the algebra with multiplication and also provides the print and evaluation semantics for that.
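To give a flavor of the mechanics, here is a hedged sketch in Xtend – type and member names are assumed, not Tijs’ actual code. The carrier of a semantics is a one-method interface, and the algebra abstracts over the carrier via a type parameter:

interface Eval {
    def int eval()
}

interface ExpAlg<E> {
    def E lit(int n)
    def E plus(E left, E right)
}

class EvalExpAlg implements ExpAlg<Eval> {

    // one (possibly anonymous) class per combination of algebra concept and semantics:
    override lit(int n) {
        new Eval() { override eval() { n } }
    }

    override plus(Eval left, Eval right) {
        new Eval() { override eval() { left.eval + right.eval } }
    }

}

Extending the algebra then amounts to extending ExpAlg (say, a MulAlg adding a mul factory function) and providing the corresponding semantics classes.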

All of this comes at a cost. In fact, two costs: a syntactical one and a combinatorial one. The syntactical cost is that “1 + 2” is encoded as “a.plus(a.lit(1), a.lit(2))” – at least in the Dart code example. Luckily, Xtend can help here – see below. The combinatorial cost (which seems to be intrinsic to object algebras) is that for every combination of algebra concept (i.c.: literal, plus operation, multiplication operation) and semantics (i.c.: print, evaluation) we need an individual class – although these can be anonymous if the language allows that.

Despite the drawbacks, object algebras do the proverbial trick in case you’re dealing with object trees/builders/ASTs in an object-oriented, statically-typed language and need extensibility, without needing to resort to the “mock extensibility” of the Visitor Pattern/double dispatch.

…in Xtend

…it looks like this. Go on, click the link – I’ll wait :) Note that GitHub does not know about Xtend (yet?) so the syntax coloring is derived from Java and not entirely complete – most notably, the Xtend keywords def, override and extension are not marked as such.

To start with the biggest win, look at lines 80 and 142. Instead of the slightly verbose “a.plus(a.lit(1), a.lit(2))” you see “lit(1) + lit(2)”. This is achieved by marking the function argument of type ExpAlg/MulAlg with the extension keyword. This has the effect that the public members of ExpAlg/MulAlg are available to the function body without needing to dereference them as “a.”. In general, Xtend’s extension mechanism is really powerful especially when used on fields in combination with dependency injection. In my opinion, it’s much better than e.g. mucking about with Scala’s implicit magic, precisely because of the explicitness of Xtend’s extension.
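Condensed to its essence, the win looks something like this (names assumed again – the real code is in the linked repo):

// A variant of the algebra whose factory function for + is named operator_plus,
// which is the hook that Xtend's operator overloading uses:
interface ExpAlg<E> {
    def E lit(int n)
    def E operator_plus(E left, E right)
}

class Client {
    // The extension keyword puts the algebra's members in scope without an "a."
    // prefix, so "+" resolves to the algebra's operator_plus:
    def <E> E onePlusTwo(extension ExpAlg<E> alg) {
        lit(1) + lit(2)
    }
}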

Another win is the use of operator overloading, so we can redefine the + and * operators in the context of ExpAlg/MulAlg, even using the actual tokens: see lines 18 and 105. Further nice features are:

  • The use of the @Data annotation on classes to promote these to Value Objects, with suitable getters, constructors and equals/hashCode/toString methods generated automatically. Unfortunately, the use of the @Data annotation does not play nice with the anonymous classes which were introduced in Xtend 2.6. So in this case, the trade-off would be to have fewer explicit classes versus more code in each anonymous class. In the end, I chose to keep the code close to the Dart original.
  • No semicolons ;)
  • Parentheses are not required for no-args function calls such as constructor invocations; e.g., see lines 39, 45 and 90.
  • Nice templating using decidedly non-Groovy syntax that is less likely to require escaping and also plays nice with indentation; see e.g. line 39.

All in all, even though I liked the Dart code, I like the Xtend version more.

Addendum: now with closures

As Tijs himself pointed out on Twitter, we can also use closures to do away with classes, whether they are explicit or anonymous depending on your choice of implementation language or style. This is because closures and objects are conceptually equivalent and concretely because the Xtend compiler does three things:

  • It turns closures into anonymous classes.
  • It tries to match the type of the closure to the target type, i.e.: it can coerce to any interface or abstract class which has declared only one abstract method. In our case that’s the print and eval methods of the respective interfaces.
  • It declares all method arguments to be final to match the functional programming style. As a result, the parameters of the factory methods are effectively captured by the closure.

(Incidentally, I’ve made examples of this nature before.)
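In terms of the sketch given earlier (same caveats apply), the evaluation semantics then collapses to:

class EvalExpAlg implements ExpAlg<Eval> {

    // Each closure is coerced to the single-method Eval interface; n, left and
    // right are captured from the enclosing scope:
    override lit(int n) {
        [| n ]
    }

    override plus(Eval left, Eval right) {
        [| left.eval + right.eval ]
    }

}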

The resulting code can be found here. It completely does away with explicit and anonymous classes apart from the required factory classes, saving 40 lines of code in the process. (The problem with the @Data annotation naturally disappears with that as well.) Note that we have to make explicit that the closures take no explicit arguments, only arguments captured from the scope, by using the “[| ]” syntax (nothing before |) or else Xtend will infer an implicit argument of type Object – see e.g. line 31.

A slight drawback of the closure approach is that it not only seals the details (i.e., the properties’ values – this is a good thing) but also hides them, and that it limits extensibility to behavior that can be expressed in exactly one method. E.g., to do introspection on the objects, one has to define a new extension: see lines 124-. Note that this makes good use of the @Data annotation after all: both the constructor and a useful toString method are generated.

Categories: DSLs, Xtend(2)

The practical value of category theory (and art)

April 12, 2014 2 comments

After my presentation “A category-theoretic view of model-driven” (slides forthcoming through the conference web site) during this year’s Code Generation conference, I was asked what the practical value of category theory is. In the heat of the moment, I chanced upon an analogy with art which very soon found its way to Twitter as:

[Image: Félienne’s tweet]

This got some retweets and also some responses to the tune of me being completely wrong. Although the above is what I said, my actual opinion on this is a bit more nuanced. (As an aside: Félienne was kind enough to write a live-blog post about my presentation.)

First of all, art has quite some practical value: it improves people’s lives in myriad ways by triggering deeply-felt emotional responses. Category theory (or CT for short) doesn’t quite work like that in general, although for some of its practitioners it might come close. CT certainly has value in providing its practitioners with a mental framework which guides them to the sort of abstractions that work really well in mathematics and, increasingly, in functional programming, helping them think and reason more effectively.

What didn’t quite make it to the tweet was my assertion that in practice CT does not relieve you of the hard work. Or to put it another way: there’s no substitute for thought. Framing something in the CT language and framework doesn’t “magically” give you the answers to the questions you might have. It can certainly help in reaching those answers more efficiently and phrasing them (much) more elegantly – which is precisely what proper abstraction should achieve. But at the same time, once you have these answers, it’s perfectly possible to “desugar away” from the use of CT. At the end of the day, everyone has to consider his/her ROI on learning enough CT to be able to take this route.

For the same reason I have a problem with the tendency in certain circles to outright declare adequate knowledge of CT a criterion for admittance to those circles. (I hinted somewhat obliquely and humorously at this in my presentation.) It’s an entirely artificial entry barrier, with those enforcing it doing so for not entirely altruistic reasons.

In conclusion: yes, CT can certainly have practical value but it requires the right context and considerable upfront effort to achieve that and does not constitute a prerequisite.

Categories: Uncategorized

Using Xtend with Google App Engine

December 6, 2012 5 comments

I’ve been using Xtend extensively for well over a year – ever since it came out, basically.

I love it as a Java replacement that allows me to succinctly write down my intentions without my code getting bogged down in and cluttered with syntactic noise – so much so that my Xtend code is for a significant part organized as 1-liners. The fact that you actually have closures together with a decent syntax for those is brilliant. The two forms of polymorphic dispatch allow you to cut down on your OO hierarchy in a sensible manner. Other features like single-interface matching of closures and operator overloading are the proverbial icing on the cake. The few initial misgivings I had for the language have either been fixed or I’ve found sensible ways to work around them. Over 90% of all my JVM code is in Xtend these days, the rest mostly consisting of interfaces and enumerations.

One of the things I’ve been doing is working on my own startup: Más – domain modeling in the Cloud, made easy. I host that on Google App Engine, so I have some experience of using Xtend in that context as well. Sven Efftinge recently wrote a blog on using Google Web Toolkit with Xtend. Using Xtend in the context of GWT requires a recent version of Xtend because of the extra demands that GWT makes on the (transpiled) Java code and Java types, in order to ensure that transpilation to JavaScript is possible and that objects are serializable. When just using Xtend on the backend of a Google App Engine application, you don’t need a recent version. However, you do need to “unsign” a couple of Xtend-related JAR files, because otherwise the hosted/deployed server will trip over the signing meta data in them.

To use Xtend in a Google Web Application, do the following:

  1. After creating the Web Application project, create an Xtend class anywhere in the Java src/ folder.
  2. Hover over the class name to see the available quickfixes. Alternatively, use the right mouse click menu on the corresponding error in the Problems view.
  3. Invoke the “Add Xtend libs to classpath” quickfix by selecting it in the hover or by selecting the error in the Problems view, pressing Ctrl/Cmd 1 and clicking Finish.
  4. At this point, I usually edit the .classpath file to have the xtend-gen/ Java source folder and Xtend library appear after the src/ folder and App Engine SDK library entry.
  5. Locate the com.google.guava, org.eclipse.xtext.xbase.lib and org.eclipse.xtend.lib JAR files in the plugins/ folder of the Eclipse installation.
  6. Unsign these (see below) and place the unsigned versions in the war/WEB-INF/lib/ folder of the Web Application project.

Now, you’re all set to use Xtend in a GAE project and deploy it to the hosted server. In another post, I’ll discuss the benefits of Xtend’s rich strings in this context.

Unsigning the Xtend libs

The following (Bourne) shell script (which kinda sucks because of the use of the cd commands) will strip a JAR file of its signing meta data and create a new JAR file which has the same file name but with ‘-unsigned’ postfixed before the extension. It takes one argument: the name of the JAR file without file extension (.jar).

#!/bin/sh
unzip "$1.jar" -d tmp_jar/
cd tmp_jar
# remove the Eclipse signature files:
rm META-INF/ECLIPSE_.*
# strip the Name:/SHA1-Digest: sections from the manifest; manifest lines may end
# in \r, so /^\r?$/ is needed to also skip the "empty" lines:
cp META-INF/MANIFEST.MF tmp_MANIFEST.MF
awk '/^\r?$/ {next} match($0, "(^Name:)|(^SHA1-Digest:)") == 0 {print $0}' tmp_MANIFEST.MF > META-INF/MANIFEST.MF
rm tmp_MANIFEST.MF

zip -r "../$1-unsigned.jar" .
cd ..
rm -rf tmp_jar/

# Run this for the following JARs (with version and qualifier):
#    com.google.guava
#    org.eclipse.xtend.lib
#    org.eclipse.xtext.xbase.lib

Note that using the signed Xtend libs (placing them directly in war/WEB-INF/lib/) isn’t a problem for the local development server, but it is for the (remote) hosted server. It took me a while initially to figure out that the cryptic, uninformative error message in the logs (available through the dashboard) actually means that GAE has a problem with signed JARs.

And yes, I know the unsign shell script is kind-of ugly because of the use of the cd command, but hey: what gives…

Categories: Xtend(2)

In the trenches: MDSD as guerilla warfare

October 1, 2012 4 comments

However much I would’ve liked for things to be different, adoption of MDSD is still quite cumbersome, even though it has been around for quite some time in different guises and under various names. Particularly, the MDA variant may have caused more damage than it brought goodwill, because of its rigidity and its reliance on UML which, frankly and paradoxically, is both unusably large and largely semantics-free at the same time.

Therefore, it’s “somewhat” unlikely that you’ll be able to ride the silver bullet train and introduce MDSD top-down as a project- or even company-wide approach to doing software. This means you’ll have to dig in and resort to guerilla tactics, so be prepared to have a lot of patience as well as to suffer frustration, boredom and even a number of casualties – doing all this in the knowledge that if/when you succeed, you will have furthered the MDSD Cause and made the world an ever-so-slightly better place.

Remarkably, opposition to MDSD has – at least in my experience – come from non-techies and techies alike, but with very different concerns. From the managers’ viewpoint, you’re introducing yet another “technology”, or more accurately: methodology, so you’ll have to convince them of the gains (e.g., increase of productivity, longer-term reliability) and placate them regarding the risks (e.g., ramp-up costs, specialty knowledge). That is to be expected, and entirely reasonable.

Your colleague developers, on the other hand, are a tougher crowd to convince, and any manager or team/project lead worth his or her salt is going to take a team’s gut feeling into account when deciding yay or nay on adopting MDSD in a given project. What struck me, at first, as extremely odd is the reluctance of the “average developer” to use MDSD. This was at a point in my career when I had yet to find out that most developers are much more interested in prolonging the usefulness of their current skill set than in acquiring new skills, and are even less keen on seeing that same skill set lose value because of something that’s not a buzzword (which MDSD isn’t). From a personal economic viewpoint this makes eminent sense, as it makes for a steady and predictable longer-term income.

“Unfortunately,” that’s not the way I work: I have a fair bit of Obsessive Compulsive Disorder which leaves me quite incapable of not continuously improving a code base and encourages me to find ever better ways of succinctly expressing intent in it. I stand by Ken Thompson’s quote: “One of my most productive days was throwing away 1000 lines of code.” In hindsight, it is no surprise that I immediately gravitated towards MDSD when I came to know it.

But enough about me: what would the Average Developer™ think about MDSD? As already mentioned: it can easily be perceived as a threat, because adoption usually means that less of the total developer time is spent on using an existing skill set (never mind that that skill set comes with dull, tedious and repetitive tasks!), which raises the question of what the remainder of the developer time is spent on. The answer can be: on something which requires a new skill set, or it is not spent at all because the work can be done in fewer hours – which amounts to another blow to one’s job security.

The other reason that our AD doesn’t like MDSD is that they are, well… average, meaning that it’s probably a bit of a leap for him/her to get used to it and especially to acquire some of the required skill set to actively participate in it. Meta modeling comes naturally to some of us, but those people tend to occupy the far right of the bell curve. So, MDSD is also a threat to one’s occupational pride and confidence.

A useful approach to convincing the AD that MDSD might hold some sway and can make life easier, is by “doing it from the inside out”: instead of converting the entire project to an MDSD approach, convert a small part of it, focusing on a piece of low-hanging fruit that comes with a lot of daily pain for the developers involved. A fruit that usually hangs within grabbing range is the data model, together with all its derived details in the various application layers: DB schema, ORM configuration, DTOs, views, etc. It’s convenient that it’s already called a data model, so it’s not a big leap to formalize/”modelize” its description sufficiently, e.g. using a DSL, and to write a generator to generate the tedious bits of code.

It’s important to pick your battles intelligently: don’t try and subjugate entire regions of the existing code base to an MDSD regime, even if you already could with the current state-of-the-art of the model. This is for two reasons: any conversion will have impact on the project and have some unforeseen consequences, but more importantly, you’re effectively throwing away someone’s code and replacing it with generated code – most developers are uneasy with that and if you do this too harshly you will not gain traction for MDSD.

An important ingredient of guerilla tactic in general is to appease the local population: “The people are the key base to be secured and defended rather than territory won or enemy bodies counted”. For MDSD it’s no different which means it’s crucial to keep responsibility in the same place. As Daniel Pink explains, autonomy, mastery and purpose are crucial to anyone’s motivation. So, make the original developer of the replaced code responsible for the MDSD solution as well: show him/her how to change the model, the generator and (even) the DSL and demonstrate how this makes life easier.

Once small parts of a project are model-driven, you can stimulate growth in two directions: (1) “conquer” more areas of the project, and (2) glue parts together. E.g., if at first you only generate the DB schema from the data model, then start generating the DTOs/POJOs as well, after which you can naturally generate the ORM configuration too. If you’ve also got a model for the backend services, you could now start writing down contracts for those services in terms of the data model.

Again, you’ll have to keep track of how your fellow programmers are doing, where you can help them with MDSD (and where you can’t), whether you can “hide” the ramp-up costs enough to not be considered a cost center, and whether they are adopting the MDSD way as their own.

A trick for speeding up Xtend building

September 24, 2012 Leave a comment

I love Xtend and use it as much as possible. For code bases which are completely under my control, I use it for everything that’s not an interface or something that really needs to have inner classes and such.

As much as I love Xtend, the performance of the compilation (or “transpilation”) to Java source is not quite on the level of the JDK’s Java compiler. That’s hardly surprising, given the amount of effort that has gone into the Java compiler over the years and the fact that the team behind Xtend is only a few FTE (who have to take care of Xtext as well). Nevertheless, things can get out of hand relatively quickly and leave you with a workspace which needs several minutes for a full build, and tens of seconds for an incremental build triggered by a change in one Xtend file.

This performance (or lack thereof) for incremental builds is usually caused by a lot of interdependencies between Xtend sources. Xtend is an Xtext DSL and, as such, is aware of the fact that a change in one file can make it necessary for another file to be reconsidered for compilation as well. However, Xtend’s incremental build implementation is not (yet?) always capable of deciding when this is the case and when not, so it chooses to add all dependent Xtend files to the build, and so forth – a learned word for this is “transitive build behavior”.

A simple solution is to program against interfaces. You’ve probably already heard this as a best practice before and outside of the context of Xtend, so it already has merits outside of compiler performance. In essence, the trick is to extract a Java interface from an Xtend class, “demote” that Xtend class to an implementation of that interface and use dependency injection to inject an instance of the Xtend implementation class. This works because the Java interface “insulates” the implementation from its clients, so when you change the implementation, but not the interface, Xtend doesn’t trigger re-transpilation of Xtend client classes. Usually, only the Xtend implementation class is re-transpiled.

In the following I’ll assume that we’re running inside a Guice container, so that the Xtend class is never instantiated explicitly – this is typical for generators and model extensions, anyway. Perform the following steps:

  1. Rename the Xtend class to reflect that it’s the implementation class, by renaming both the file and the class declaration itself, without using the Rename Refactoring. This will break compilation for all the clients.
  2. Find the transpiled Java class corresponding to the Xtend class in the xtend-gen/ folder. This is easiest through the Ctrl/Cmd-Shift-T (Open Type) short cut.
  3. Invoke the Extract Interface Refactoring on that one and extract it into a Java interface in the same package, but with the original name of the Xtend class.
  4. Have the Xtend implementation class implement the Java interface. Compilation should become unbroken at this point.
  5. Add a Guice annotation to the Java interface:
    @com.google.inject.ImplementedBy(...class literal for the Xtend implementation class...)
    

Personally, I like to rename the Xtend implementation class to have the Impl postfix. If I have more Xtend classes together, I tend to bundle them up into an impl sub-package.
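Put together, the end result might look like this – a sketch with hypothetical names, not prescriptive in any way:

// The extracted Java interface, carrying the Guice binding:
//
//     @com.google.inject.ImplementedBy(GeneratorImpl.class)
//     public interface Generator {
//         CharSequence generateFor(Object model);
//     }

// The demoted Xtend implementation class:
class GeneratorImpl implements Generator {
    override generateFor(Object model) '''generated artifact for «model»'''
}

// Clients reference only the interface, so a change to GeneratorImpl alone no
// longer triggers their re-transpilation:
class Client {
    @com.google.inject.Inject Generator generator
}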

Of course, every time the effective interface of the implementation class changes, you’ll have to adapt the corresponding Java interface as well – prompted by the compilation errors popping up. I tend to apply this technique only once the build times become a hindrance.

Categories: Xtend(2) Tags: , ,

A (slightly) better switch statement in JavaScript

September 8, 2012 2 comments

The switch statement in JavaScript suffers from the usual problem associated with C-style switch statements: fall-through. This means that each case guard needs to be expressly closed with a break statement to avoid falling through to the first executable code after it – no matter which case that code belongs to. Fall-through has been the source of very many bugs. Unfortunately, the static code analysis tools for JavaScript (JSLint, JSHint and Google’s Closure Compiler) do not check for potential fall-through (yet?).
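To illustrate with hypothetical functions: in the snippet below, the missing break means that handling 'x' silently continues into the 'y' case:

switch (kind) {
case 'x':
    handleX();
    // no break here - execution falls through into the next case!
case 'y':
    handleY();
    break;
}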

Today I thought I could improve the switch statement slightly with the following code pattern:

var result = (function(it) {
    switch (it) {
        case 'x': return 1;
        case 'y': return 2;
        /* ... */
        default: return 0;
    }
})(my_it);


The advantage of using return statements is two-fold:

  1. it exits the switch statement immediately,
  2. it usually comes right after the case guard, making visual inspection and verification much easier than hunting for a break either in- or outside of a nice pair of curly braces.

This approach also has a definite functional programming flavor, as we’ve effectively turned the switch statement into an expression, since the switch statement is executed as part of a function invocation.

Postscript

Yes, I do write JavaScript from time to time. I usually don’t like the experience very much, mostly because of inadequate tool support and the lack of static typing (and the combination thereof: e.g. the JS plug-ins for Eclipse often have a hard time making sense of the code at all). But we do what we can to get by ;)

Groovy-type builders and JSON initializers in Xtend

September 3, 2012 Leave a comment

One of the nicer features of dynamic languages like Ruby, Groovy, etc. is the possibility to easily implement builders, which are constructs to build up tree-like structures in a very succinct and syntactically noise-free way. You can find some Groovy examples here – have a special look at the HTML example. Earlier, Sven Efftinge wrote a blog on implementing the same type of builders using Xtend. My blog post will expand a little on his post by providing the actual code and another example.

The main reason that builders come naturally in dynamic languages is that metaprogramming allows adding “keywords” to the language without the need to actually define them in the form of functions. In the case of HTML, these “keywords” are the tag names. Statically-typed languages like Java or Xtend do not have that luxury (or “luxury” as the construct can easily be misused) so we’ll have to do a little extra.

The HTML example

You’ll find Sven’s original example reproduced in HtmlDocumentExample. Note that because of the point mentioned above, we need to have compile-time representations of the HTML DOM elements we’re using. Sven has written these manually, but I’m afraid that I’m too lazy to do that, so I opted for a generative approach. Apart from that, the example works exactly the same, so I’ll refer to his original blog for the magic details – some of which I’ll re-iterate for the JSON example below.

Generation

Some basic HTML DOM element types are provided by the BaseDomElements file. Note two nice features of Xtend 2.3+: one file can hold multiple Xtend classes, and the use of the @Data annotation as a convenient way to define POJOs – or should those be called POXOs? ;) The POJOs for the other DOM elements are generated by the GenerateDomInfrastructure main class: note they are generated as POXOs which are then transpiled into Java. After running the GenerateDomInfrastructure main class and refreshing the Eclipse project, the HtmlDocumentExample class should compile.

A JSON example

You can find the JSON example in JsonResponseExample. For convenience and effect, I’ll reproduce it here:

class JsonResponseExample {

    @Inject extension JsonBuilder

    def example() {
        object(
            "dev"        => true,
            "myArray"    => array("foo", "bar"),
            "nested"     => object("answer" => 42)
        )
    }

}

To be able to compile this example, you’ll need my fork of Douglas Crockford’s Java JSON library with the main differences being that it’s wrapped as an Eclipse plug-in/OSGi bundle and it’s (as properly as manageable) generified. In addition to the GitHub repo, you can also directly download the JAR file.

As with the HTML example, the magic resides in the line which has a JsonBuilder Xtend class Guice-injected as an extension, meaning that you can use the functions defined in that class without needing to explicitly refer to it. This resembles a static import but without the functions/methods needing to be static themselves. The JsonBuilder class has two factory functions: object(..) builds a JSONObject from key-value pairs and array(..) builds a JSONArray from the given objects.

The fun part lies in the overloading of the binary => operator by means of the operator_doubleArrow(..) function, which simply returns a Pair object suitable for consumption by the object(..) factory function. This allows us to use the “key => value” syntax demonstrated in the example. Note that the => operator has no pre-existing meaning in Xtend – the makers of Xtend have been kind enough to provide hooks for a number of such “user-definable” operators: see the documentation.
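For reference, here is a hedged sketch of what JsonBuilder might boil down to – the actual class lives in the repo linked above, and the org.json package name is assumed from the library fork:

import org.json.JSONArray
import org.json.JSONObject

class JsonBuilder {

    def JSONObject object(Pair<String, ?>... members) {
        val o = new JSONObject
        members.forEach[ o.put(key, value) ]
        o
    }

    def JSONArray array(Object... items) {
        val a = new JSONArray
        items.forEach[ a.put(it) ]
        a
    }

    // Returns a Pair, enabling the "key => value" syntax; the built-in ->
    // operator creates a Pair as well (cf. the postscript below):
    def <T> Pair<String, T> operator_doubleArrow(String key, T value) {
        key -> value
    }

}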

I’m still trying to find a nice occasion to use the so-called “spaceship” or “Elvis” operators: now that would be positively…groovy ;)

Postscript

I failed to notice earlier that Xtend already has an -> operator which does exactly the same thing that our => operator does. Also, the “Elvis” ?: operator already has a meaning: “x ?: y” returns x if it’s non-null and y otherwise. This makes for a convenient way to set up default values. You’ll find both overloading definitions in the org.eclipse.xtext.xbase.lib.ObjectExtensions class.

Categories: Xtend(2)

Polymorphic dispatch in Xtend

August 3, 2012 1 comment

Polymorphic dispatch (or multiple dispatch – see http://en.wikipedia.org/wiki/Multiple_dispatch – or multimethods, as it’s also called) is a programming language construct which chooses a code path based on runtime types instead of types inferred at compile time. The poor man’s method of achieving such behavior would be to litter your code with conditionals like this:

if( x instanceof TypeA ) { (x as TypeA).exprA }
else if( x instanceof TypeB ) { (x as TypeB).exprB }
else ...

Xtend 2.x offers two much better constructs to do polymorphic dispatching:

  1. Through the use of the dispatch modifier for function defs – this construct is Xtend’s “official” polymorphic dispatch.
  2. Through the use of the switch statement, referring to types instead of constants in the cases.

These are better than the poor man’s method because they are declarative, i.e.: they express intent much more clearly and succinctly. Though both constructs have a lot in common, there are some marked differences, as well as best practices for safeguarding type safety, which I’ll discuss in this blog.

The example

Consider the following Xtend code – note that the syntax coloring is lacking a bit, because WordPress doesn’t fully understand Xtend… yet. Also note that since Xtend 2.3 you can have more than one Xtend class in one file.

class CommonSuperType { ... }
class TypeA extends CommonSuperType { ... }
class TypeB extends CommonSuperType { ... }
class TypeC extends CommonSuperType { ... }
class UnrelatedType { ... }
class Handler {

  def dispatch foo(TypeA it) { it.exprA }
  def dispatch foo(TypeB it) { it.exprB }

  def bar(CommonSuperType it) {
    switch it {
      TypeA: it.exprA
      TypeB: it.exprB
    }
  }
}

For simplicity’s sake, let’s assume that exprA and exprB both return an int. The Xtend compiler generates two public methods in the Java class Handler – one for foo, one for bar. Both of these have the same signature: int f(CommonSuperType), where f = foo or bar. In addition, for each foo dispatch function, Xtend generates a public method with signature int _foo(t), where t = TypeA or TypeB – note the prefixed underscore. The actual polymorphic dispatch then happens in the “combined” foo(CommonSuperType) method, actually through the previously demonstrated poor man’s method.
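Cleaned up a little, the generated combined method looks roughly like this – a sketch of the generated Java, not a verbatim copy:

public int foo(final CommonSuperType it) {
    if (it instanceof TypeA) {
        return _foo((TypeA) it);
    } else if (it instanceof TypeB) {
        return _foo((TypeB) it);
    } else {
        throw new IllegalArgumentException("Unhandled parameter types: " + it);
    }
}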

By the way: a user-friendly way to inspect the “combined” method is the Outline which will group the dispatch functions belonging together under the combined signature.

Note that the foo and bar methods end up with CommonSuperType as the type of their parameter. This is because CommonSuperType is the most specific common super type of TypeA and TypeB – deftly implied by the name – and Xtend infers that as the parameter’s type for the “combined” method. In general, Xtend will compute the most specific common super type across all dispatch functions, on a per-argument basis. In the case of the bar method, we declared the parameter type ourselves.

As demonstration, add the following code to the Handler class and see what happens:

  def dispatch foo(TypeC it) { it.exprC }

(Assume that exprC again returns an int.)

The generated foo and bar methods are functionally nearly identical, the difference being that foo explicitly throws an IllegalArgumentException mentioning the unhandled parameter type(s) in its message, in case you called it with something that is a CommonSuperType but neither a TypeA nor a TypeB. The bar method does no such thing and simply falls through the switch, returning the appropriate default value: typically null, but 0 in our int case. To remedy that, you’ll have to add a default case which throws a similar exception, like so:

  def bar(CommonSuperType it) {
    switch it {
      TypeA: it.exprA
      TypeB: it.exprB
      default:
        throw new IllegalArgumentException("don't how to handle sub type " + it.^class.simpleName)
    }
  }

In case you already have a sensible default case, you’re basically out of luck.

Potential mistakes

Both approaches have their respective (dis-)advantages, which I’ll list comprehensively below. In both cases, though, it’s relatively easy to make mistakes. The most common and obvious ones are:

  1. The parameter type of the “combined” foo method is inferred, so if you add a dispatch function having a parameter type which does not extend CommonSuperType, then the foo method will wind up with a more general parameter type – potentially Object. This means that the foo method will accept a lot more types than usually intended and failing miserably (through a thrown IllegalArgumentException) on most of them. This is especially dangerous for public (which happens to be the default visibility!) function defs.
  2. Xtend will not warn you at editor/compile time about the “missing” case TypeC: it’s a sub type of CommonSuperType but not of TypeA nor of TypeB. At runtime, the bar method will simply fall through and return 0.
  3. The return type of the combined method is also inferred as the most specific common super type of the various return types – again, potentially Object. This is usually much less of a problem because that inferred type is checked against the parameter type of clients of the combined method.

This shows that these constructs require us to do a little extra to safeguard the type safety we so appreciate in Xtend.

Advantages and disadvantages of both constructs

We list some advantages and disadvantages of both constructs. Advantages of the dispatch construct:

  • Provides more visual code space. This is useful if the handling of the separate types typically needs more than 1 line of code.
  • Explicit handling of unhandled cases at runtime.

Disadvantages of the dispatch construct:

  • Automagically infers the parameter types of the “combined” method as the most specific common super types. In case of a programmer error, this may be (much) too wide.
  • Takes up more visual code space/more syntactic noise.

Advantages of the switch construct:

  • Takes up less visual code space. This is useful if the handling of the separate types doesn’t need more than one line of code.
  • It’s a single expression, so you can use it as such inside the function it’s living in. Also, you can precompute “stuff” that’s useful for more than one case.

Disadvantages of the switch construct:

  • Fall-through of unhandled cases at runtime, resulting in an (often) non-sensical return value. You have to add an explicit default case to detect fall-through.

Mixing polymorphic and ordinary dispatch

Since Xtend version 2.3, you are warned about dispatch functions having a signature compatible with a non-dispatch function and vice versa. As an example, consider the following addition to the Handler class:

  def foo(TypeC it) { it.exprC  }
  def foo(UnrelatedType it) { it.someExpr }

Here, exprC again returns an int, but someExpr may return anything. Note that both functions are not of the dispatch persuasion.

The first line is flagged with the warning “Dispatch method has same name and number of parameters as non-dispatch method”, which is a justified warning in my book. However, this warning is also given for the second line, as well as for the first two foo functions. (Note that the warnings are also given with only one of these extra functions present.) In that case, it’s not always a helpful warning, but it does riddle your code file with warnings.

To get rid of the warnings, I frequently make use of the following technique:

  • “Hide” all dispatch functions by giving them (and only these) an alternate name. My personal preference is to postfix the name with an underscore, since the extra _ is visually inconspicuous enough not to dilute the intended meaning. Also, give them private visibility to prevent prying eyes.
  • Create an additional function with the same signature as the “combined” method for the dispatch functions, calling those.

The net result is that you get rid of the warnings, because there’s no more mixture of dispatch and non-dispatch functions with compatible signatures. Another upshot is that the signature of the “combined” method is now explicitly checked by the additional function calling it – more type safety, yeah! Of course, a disadvantage is that you need an extra function but that typically only is one line of code.

In the context of our example, the original two foo functions are replaced by the following code:

  def foo(CommonSuperType it) { foo_ }

  def private dispatch foo_(TypeA it) { it.exprA }
  def private dispatch foo_(TypeB it) { it.exprB }
Categories: The How, Xtend(2)

The ASmaP Principle (part 2)

April 23, 2012 Leave a comment

To recount the gist of the previous blog: the ASmaP principle strives to minimize the footprint of your project/application in terms of code base size, dependencies and configuration complexity. (In fact, it’s a particular re-statement of the KISS principle.)

What’s the use?

The main reason to follow this principle is that projects/applications tend to grow fast to a point where it’s quite impossible to fit a good mental model of them into your brain in any reasonable amount of time. In turn, this inhibits you from becoming productive on those projects and from making the right coding decisions. By constantly making an effort to reduce incidental complexity, you’ll make development on the project more effective and productive – not only in the long run, but also in the short run. You should see a decrease in time-to-market, bug count and maintenance costs after a short time, while hooking up extra devs to the project and communicating effectively with your customer become easier.

The ASmaP principle is not about being as smart as possible in as few lines of code as possible; it’s about obtaining/using/achieving the simplest possible solution, but not simpler – to paraphrase Albert Einstein. It also pertains to documentation: provide just enough text to clarify what your code is doing beyond what the code already clearly expresses on its own. This is also an incentive to write your code in a way which makes documentation largely unnecessary. I usually get away with some explanation about a class as a whole and some usage patterns.

Now, I’m not advocating premature optimization (of the performance of implemented functionality) here. I do think that it is the root of at least a lot of evil, and that any optimization effort should be based on profiling. Moreover, optimization is usually the source of a lot of incidental complexity, since it doesn’t expand functionality as such, only the performance of the implementation. As an example, consider caching: it requires adding a layer of indirection, thinking about invalidation, and usually comes with a bunch of assumptions and runtime behaviors which are hard to reason about.

Oh, and I certainly don’t advocate minification or obfuscation :)

Kill your dependencies

Dependencies on external libraries, frameworks, components, runtimes, environments and what-have-you – i.e., everything that is within your direct sphere of influence but which you didn’t actually create yourself – are often the biggest source of incidental complexity. Each of these dependencies adds its own dependencies, but more importantly, they add to the information you need to carry around in your head: what’s the library’s API like, how to include the library correctly, how to instantiate the component, what are the peculiarities of this runtime, what do I need to configure in that environment to have it switch between dev/test/acc/prod?

Obviously, you can document much of this, but documentation has to be kept up-to-date and you’ll always need to remember a non-trivial amount of information to even be able to get things working at all, let alone productively.

Furthermore, dependencies tend to solve only half of a problem that is technical rather than functional in nature – in other words: it falls squarely in the category of incidental complexity. My favorite example is Hibernate: it gives you (un-)marshalling of Java objects from/to a relational database, but at the price of having to write XML mapping files (well, in the pre-@annotation days, at least) which you need to keep in sync with both the Java classes and the DB schema. Were I to generate the latter, I could just as well generate the necessary (un-)marshalling methods. (With annotations, most of the meta data resides in the Java class, at least.)

To gauge whether you have too many dependencies in your Java project, I propose the following:

The Maven Rule: if your project uses Maven for dependency management, you have too many dependencies. Period.

While Maven undeniably makes the whole dependency management problem more bearable, it generally expands the same problem by making it just too friggin’ easy to drag in more dependencies without ever really wondering whether that’s a good idea. Think about what you really need from dependency #37 that you put in your project. Some questions you could ask yourself:

  1. Is the same functionality already implemented (more-or-less equivalently) somewhere else in your project (typically using another dependency-of-ill-repute)? – If so, Refactor and remove all but the most-used dependency. Sometimes, you’ll find that you’re better off writing your own implementation without using external dependencies to snugly fit your needs.
  2. Does it solve your entire technical problem? If not, are better frameworks available?
  3. Does it provide a complete paradigm/way-of-thinking/-working (such as the Google Guice or Apache Wicket frameworks) or does it “only” provide some utility functions (such as Google Guava)? In the latter case, see the first point.

Note that I’m not advocating NIH here, but mind that there’s also no point in shoehorning a problem into a particular shape just so that it happens to fit some “reusable” code: the fit is often quite bad and the primary code not very aesthetically pleasing. That’s a general problem I have with reuse: all code makes a lot of implicit assumptions, and unless these assumptions are met by your problem (domain) on a grand scale, things are not going to fly. The most rewarding uses of reusable components are often those where the component either matches the problem domain perfectly or is completely functionally orthogonal.

Not only can you reduce the sheer number of artifacts on which you’re dependent, you can also make sure that the number of direct dependencies (i.e., part X makes direct use of dependency Y by explicit reference such as an import) is as small as possible. This is analogous to your project architect insisting that application layers only interact in very controlled ways (and preferably through DTOs). Often, you can hide a large number of dependencies in a specific sub system which then governs use of and access to its dependencies.

Categories: SD in general