Archive for the ‘software engineering’ Category

Digesting Microservices at muCon

On Friday, I had the privilege of presenting at the very first Microservices conference – muCon. In my talk, Engineering Sanity into Microservices, I spoke about the technical issues surrounding state in distributed systems as a whole, how these become a bigger problem as the number of deployed services goes up, and a few suggested patterns that will help you stay sane. The video is now available on the Skillsmatter site (registration required).

MuCon was a really enjoyable single-topic conference; the talks ranged from high-level CTO-type overviews all the way down to the gory details and war stories. It will be interesting to turn up next year to hear more of the latter.

My biggest takeaway was from Greg Young’s presentation The Future of Microservices, where he spoke about Conway’s Law. As a reminder:

Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations
— M. Conway

The topical corollary to which he explained as (I paraphrase):

Siloed organizations will never be able to get the benefits of a microservice architecture, as it does not correspond to their communication structures.

Read that again, and really let it sink in.

I will put a layer of interpretation on this: SOA is absolutely not dead. It is a useful tool for highly compartmentalized organizations. Microservices is not its replacement. They are two different tools for different organizational types.

That insight alone was worth turning up for.

Deep testing of integrations with Camel

One of the things that often comes up in client conversations about developing integration code with Camel is what test support is available – and, more to the point, appropriate – for testing integrations. There is a spectrum of test types that can be performed, ranging from fully automated unit tests to full-blown multi-system, user-driven “click and see the flow-through effects” tests. Camel has had comprehensive test support baked in from its very inception, but the mechanisms that are available can be used to go way beyond the standard unit test.

Unit tests

Without wanting to get academic about it, let’s define a unit test as being one that tests the logic encapsulated within a block of code without external side effects. Unit testing straightforward classes is trivial. If you want to make use of external service classes, these can be mocked using your favourite mocking library and injected into the class under test. Camel routes are a little different, in that what they define isn’t executed directly, but rather builds up a set of instructions that are handed to the Camel runtime for execution.

Camel has extensive support for testing routes defined using both the Java DSL as well as the Spring/Blueprints XML DSL. In general the pattern is:

  1. instantiate a RouteBuilder or Spring context containing the routes with a CamelContext, and start the context (this is handled for you by CamelTestSupport or CamelSpringTestSupport – see Camel testing). These should contain direct: endpoints as the inputs to the routes (consumers) and mock: endpoints as the outputs (producers).
  2. get hold of the mock endpoints, and outline the expectations. A MockEndpoint itself uses a directed builder DSL to allow you to define a suite of comprehensive expectations, ranging from checking the number of messages received to the details of an individual message. You can make full use of Camel expressions in these tests as well.
  3. create messages that you want to feed in to the route and send them to the direct: endpoint at the top of the route under test using a ProducerTemplate.
  4. assert that the mock endpoints received the expected messages.
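Pulled together, the four steps above might look something like the following minimal sketch. This assumes camel-test-junit4 on the classpath; the route and endpoint names are illustrative, and the route is inlined for brevity rather than coming from a production RouteBuilder:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class FileCopyRouteTest extends CamelTestSupport {

    @Override
    protected RouteBuilder createRouteBuilder() {
        // Step 1: the route under test, with a direct: consumer and mock: producer
        // so that no real external endpoints are touched
        return new RouteBuilder() {
            public void configure() {
                from("direct:fileCopyRoute.in").to("mock:fileCopyRoute.out");
            }
        };
    }

    @Test
    public void messageFlowsThroughRoute() throws Exception {
        // Step 2: grab the mock endpoint and outline expectations
        MockEndpoint out = getMockEndpoint("mock:fileCopyRoute.out");
        out.expectedMessageCount(1);
        out.expectedBodiesReceived("hello");

        // Step 3: feed a message in via the ProducerTemplate provided by CamelTestSupport
        template.sendBody("direct:fileCopyRoute.in", "hello");

        // Step 4: assert the mock received what we expected
        out.assertIsSatisfied();
    }
}
```

In a real test the RouteBuilder would be your production one, with its endpoint URIs injected as described below.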

An example of this approach can be seen in the RssConsumerRouteBuilderTest in the horo-app I blogged about yesterday.

There are a couple of things that you need to do to employ this approach successfully. If using Java, the RouteBuilder class that defines your routes should allow the route endpoint URIs, and any beans that touch external resources, to be injected into it – see RssConsumerRouteBuilder. The external beans can then easily be mocked, as in a standard unit test.

Using the Spring DSL, we can still employ the same general approach, but we need to jump through a couple of hoops to do it. Consider what you would need to do to achieve the equivalent. A simple route might be defined via:

    <route id="fileCopyRoute">
        <from uri="file:///some/directory"/>
        <to uri="file:///some/other/directory"/>
    </route>

You can externalise any URIs using Spring’s property support:

    <route id="fileCopyRoute">
        <from uri="${fileCopyRoute.input}"/>
        <to uri="${fileCopyRoute.output}"/>
    </route>

You could then define a PropertyPlaceholderConfigurer with a properties file that defines these properties.
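For example, a properties file (the file name here is illustrative) matching the placeholders above might contain:

```properties
# fileCopyRoute.properties – production endpoint URIs
fileCopyRoute.input=file:///some/directory
fileCopyRoute.output=file:///some/other/directory
```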


The definition of this class should be in a Spring context file separate from that of your route definitions. For testing, you would run the routes with another test XML file that defines a PropertyPlaceholderConfigurer pointing to a test properties file containing the test URIs.
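As a sketch (file names illustrative), the test-time context might swap the production properties like so:

```xml
<!-- spring-context-test.xml: replaces the production PropertyPlaceholderConfigurer -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:fileCopyRoute-test.properties"/>
</bean>

<!-- fileCopyRoute-test.properties would then contain:
     fileCopyRoute.input=direct:fileCopyRoute.in
     fileCopyRoute.output=mock:fileCopyRoute.out -->
```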

This is usually why Spring DM/Blueprints based bundle projects split the config across (a minimum of) two context files. One (META-INF/spring/spring-context-osgi.xml) contains all of the beans that touch the OSGi runtime including the properties mechanism, and the other (META-INF/spring/spring-context.xml) contains your physical routes. When testing you can easily switch out the OSGi bits via another config file. This allows you to inject in other bits during a unit test of the XML-based routes, or when using the camel-maven-plugin in order to run those routes off the command line without an OSGi container like ServiceMix.

Embedded integration tests

Sometimes, testing just the route logic isn’t enough. When I was building out the horo-app, I happily coded up my routes, tested them and deployed, only to have them blow up immediately. What happened? The objects that I was expecting to receive from the RSS component didn’t match those the component actually sent out. So I changed tack. To engage the component as part of the route, I needed a web server to serve the file that fed the test.

Integration testing is usually pretty problematic in that you need an external system servicing your tests – and when you are in an environment where the service changes, you can break the code of the other people working against the same system. But there is a solution! Sun’s Java 6 comes with an embeddable web server that you can start up as part of your integration tests.

The approach that I used was to spin up this server at the start of my test, and configure it programmatically to serve up a response suitable for my test when a certain resource was consumed. The server was started on port 0, which means that it’s up to the runtime to assign an available port on the machine when the test runs. This is very important as it enables multiple instances of the same test to run at the same time, as is often the case on CI servers. Without it, tests would trip over each other. Similar approaches are possible using other embeddable server types, such as LDAP via ApacheDS, messaging via ActiveMQ, or databases via H2 or Derby.
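A minimal sketch of this technique using the JDK’s built-in server (class and resource names are illustrative; the response body stands in for a real RSS feed):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class EmbeddedHttpServerDemo {
    public static void main(String[] args) throws Exception {
        // Port 0 asks the OS for any free port, so parallel test runs never collide
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/feed.rss", exchange -> {
            byte[] body = "<rss version=\"2.0\"/>".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        int port = server.getAddress().getPort(); // the port actually assigned

        // A test would now inject this URI into the route under test;
        // here we just fetch the resource to show the server is live
        URL feed = new URL("http://localhost:" + port + "/feed.rss");
        try (InputStream in = feed.openStream()) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
        server.stop(0);
    }
}
```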

Tests that require an external resource often start failing on large projects without any changes on the programmer’s side due to this exact reason – the underlying system dependencies changing. By embedding the server to test your integration against, you decouple yourself from that dependency.

The routes in your test are then injected with the URI of the embedded resource. In my case, I whipped up an integration-test version of the original unit test (RssConsumerRouteBuilderITCase) to do exactly this. Integration tests can be wired into a separate part of the Maven build lifecycle using the maven-failsafe-plugin, and use a different naming convention (*ITCase as opposed to *Test).

Usually, the way that you structure your tests to avoid duplicating the lifecycle management of these embedded backends ends up relying on a test class hierarchy, which may end up looking like:

  • CamelTestSupport
    • CamelTestSupportWithDatabase
    • CamelTestSupportWithWebserver

which I don’t really like, as you inevitably end up requiring two kinds of resource in a test. A much better option is to manage these extended resources using JUnit’s @Rule annotation. This treats any object that extends the org.junit.rules.ExternalResource base class as an aspect of the test, starting and stopping it as part of the test’s lifecycle. As such, you can compose your test of as many of these dependencies as you like – all without a rigid class hierarchy.
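A sketch of such a rule, wrapping the embedded web server from earlier (assumes JUnit 4 on the classpath; class names are illustrative):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import org.junit.rules.ExternalResource;

// Manages the embedded server's lifecycle as part of the test's lifecycle,
// so any test can compose it in without a rigid class hierarchy.
public class EmbeddedServerRule extends ExternalResource {
    private HttpServer server;

    @Override
    protected void before() throws Throwable {
        server = HttpServer.create(new InetSocketAddress(0), 0); // port 0: OS-assigned
        server.start();
    }

    @Override
    protected void after() {
        server.stop(0);
    }

    public int getPort() {
        return server.getAddress().getPort();
    }
}
```

A test then composes as many rules as it needs, e.g. `@Rule public EmbeddedServerRule web = new EmbeddedServerRule();` alongside a similar rule for an embedded database.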

This approach allows you to test your integration code against a physical backend, without requiring that backend to be shared between developers. This decouples your development from the rest of the team and allows your integration tests to be run in a CI server. A huge win, as only tests which are deterministic end up being run and maintained in the long term.


Give your DB the love it deserves

Pity the poor database. As critical to most apps as a foundation to a building. And as interesting as an accounting seminar at a nudist colony.

Not sexy enough for the attentions of the senior dev, or considered to be “well understood”, DB work frequently ends up getting handed off to the junior guys on the team. Who promptly make all the mistakes the senior guys have learned not to make. Mistakes which end up as massive hunks of sub-optimal compensating code in the layers above. Then they write some more code off the back of that. Voilà! Instant technical debt.

Cue the self-perpetuating “relational databases aren’t web scale“, “normalised schemas aren’t performant”, “you don’t have these problems with NoSQL”.

Senior guys often don’t have the time to deal with it. DBAs aren’t seen as being responsive enough for JFDI/iterative development. Peer review at the end is too late.

So what’s the fix? Just enough design. Back of napkin. Whack together a schema, and talk through with someone (else) who knows what they’re doing. Then code. It’s not rocket science.


Read this and chuckled.

“Our industry, the global programming community, is fashion-driven to a degree that would embarrass haute couture designers from New York to Paris. We’re slaves to fashion. Fashion dictates the programming languages people study in school, the languages employers hire for, the languages that get to be in books on shelves. A naive outsider might wonder if the quality of a language matters a little, just a teeny bit at least, but in the real world fashion trumps all.”

Original from the Foreword to The Joy of Clojure.

Bored with software?

What’s interesting right now in software isn’t the new shiny thing. We already have the tools to do most of what we want. What’s interesting is scale and change.

You build a system. Then you realize you need to break out and share functionality via modules. Then you want to manage them independently in live environments. And not take the system down. And have the old transactions finish on the old code while the new work hits the new code.

You build logic. It grows to the point where your original hand-crafted solution is too unwieldy. You need a rules engine, or workflow. Your code needs to keep running. A rewrite is not an option. Rework, refactor, augment, migrate. But don’t break what’s there.

You just wanted to integrate to that one external system. Web services behind a facade. Now another, this time via messaging. All of a sudden it’s 12. Integration framework? ESB? You’re in a cluster, shared network memory, processes that can only run in one place at a time. What’s the last straw, the tipping point to your next upgrade? Where to from here?

That’s what’s interesting.

Get Functional

That was the message that was coming through the Devoxx conference presentations this year. The idea that it will help your code run in the brave new world of multi everything (multi-core, multi-thread etc.) is one that’s widely touted, but rarely the primary driver for its use. Instead, it’s about less code, that’s more easily understood. When you do get to scaling it, it won’t do any harm either.

As Guillaume Laforge tweeted, of the 800 Java developers in his session, only 10 knew/used Scala, 3 Clojure, 20 Ruby, and 50 were on Groovy – which gives a nice gentle introduction to some of the constructs for those looking to wade in. Good stats to cut through the hype. So what of the roughly 90% slogging on without closures – does this mean that they have to miss out on this fun?

Quite simply, no. There’s a heap of drop-in libraries that you can add to a Java project for all manner of functional goodness, and which don’t change the syntax of the language. LambdaJ, for example, gives a nice functional way of dealing with collections. To steal an example directly from the website, the following typical Java code:

List<Person> sortedByAgePersons = new ArrayList<Person>(persons);
Collections.sort(sortedByAgePersons, new Comparator<Person>() {
        public int compare(Person p1, Person p2) {
           return Integer.valueOf(p1.getAge()).compareTo(p2.getAge());
        }
});

is replaced with:

List<Person> sortedByAgePersons = sort(persons, on(Person.class).getAge());

Fancy a bit of map-reduce without a grid? Well, it comes stock-standard with the Fork Join (JSR166y) framework that will be added to the concurrency utilities in JDK 7. If you don’t fancy waiting until September 2010 (the latest expected date for the GA release), it’s downloadable here. As an aside, Doug Lea has written a really good paper on the FJ framework.
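A taste of the Fork/Join style against the API as it eventually shipped in java.util.concurrent in JDK 7 (the jsr166y preview jar used a different package name). The task below recursively splits an array sum until the chunks are small enough to add up directly:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums a slice of an array, splitting the work until chunks fall under a threshold.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] values;
    private final int from, to;

    SumTask(long[] values, int from, int to) {
        this.values = values;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += values[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(values, from, mid);
        SumTask right = new SumTask(values, mid, to);
        left.fork();                     // run the left half asynchronously
        long rightSum = right.compute(); // compute the right half in this thread
        return left.join() + rightSum;   // join the forked half and combine
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long[] values = new long[10_000];
        for (int i = 0; i < values.length; i++) values[i] = i + 1;
        long sum = new ForkJoinPool().invoke(new SumTask(values, 0, values.length));
        System.out.println(sum); // 1 + 2 + ... + 10000 = 50005000
    }
}
```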

Don’t fancy loops in loops in loops to filter, aggregate, do set operations with all the null checking that Java programming typically entails? Well, the Google Collections library (soon to be integrated into Guava, a set of Google’s core libs), contains predicates and transform functions that make all of this a lot easier to write and reason about. Dick Wall had a great presentation about this showing just how much code can be reduced (heaps).
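As a fragment sketching the style (assuming the Google Collections jar is on the classpath, and reusing the hypothetical Person bean from the LambdaJ example above):

```java
import com.google.common.base.Function;
import com.google.common.base.Predicate;
import com.google.common.collect.Collections2;
import com.google.common.collect.Lists;
import java.util.Collection;
import java.util.List;

// Filter without hand-rolled loops and null checks...
Collection<Person> adults = Collections2.filter(persons, new Predicate<Person>() {
    public boolean apply(Person person) {
        return person != null && person.getAge() >= 18;
    }
});

// ...then project each survivor down to a single field
List<String> names = Lists.transform(Lists.newArrayList(adults), new Function<Person, String>() {
    public String apply(Person person) {
        return person.getName();
    }
});
```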

A thing I heard a number of times outside the sessions was, “I don’t know about all this stuff; surely as we get further from the metal, performance suffers”. Sure, it gets harder to reason about timings as the abstractions get weirder, but the environment gets better all the time, and the productivity gains more than outweigh performance in all but the most perf-intensive environments. Brian Goetz spoke about how the JVM supports this new multi-language world. Not something that I had ever really given much thought to, but the primary optimizations aren’t at the language compiler level (javac, scalac, groovyc etc.) – they’re all done at runtime, when the JVM compiles the bytecode. The number of optimizations in HotSpot is massive (there was a striking slide showing 4 columns of individual techniques in a tiny font). Multiple man-centuries of effort have gone into it, and each new release tightens it up. If you’re not sure, then profile it and make up your own mind. JDK 7 will also see the VM gain some goodness that will make dynamic languages really fly.

One thing that still sticks out like a sore thumb is closures support in Java. It’s not a candidate for inclusion in JDK 7, and the proposed syntax shown at the conf by Mark Reinhold looks pretty ugly when compared to other langs (see the proposal by Neal Gafter). Either way, not a sniff of actual implementation. I understand there’s some serious work needed on the VM to make any of this possible, regardless of the syntax. Not holding my breath. [Closures will actually be in JDK7 – thanks Neal.]

All up, I’m pretty excited by all this, and can’t wait to get my hot little hands on some of these tools. The functional style yields code that’s much easier to read and reason about, and the fact that it’s essentially all Java syntax, means that there’s no reason not to apply it. If you’re already comfortable with using EasyMock on your team, you won’t find it a huge mind shift.

The Church of the One True Language

I stumbled upon an interview from JAOO 2007 with Joe Armstrong and Mads Torgersen discussing Erlang, concurrency and program structure (objects versus interrelated processes). It was really interesting to see how similar yet different their points of view were. I’m not going to paraphrase, as it’s worth listening in on it.

Two points came out of the conversation that are worth talking about – the fallacy of the silver-bullet language, and the right tools for the future.

The premise of The Church of the One True Language goes something like this: “you can do anything in my language”. Write servers, build databases, write accounting apps etc. But if you think of languages as having their sweet spots and use cases, much like libraries, that pretty much falls apart. I, for one, wouldn’t want to be writing hugely parallel software in Java, just like I wouldn’t be writing web services in Erlang. Sure it’s doable, but probably not the best way to go about it. So it makes sense that, unless you want to be working in the one problem domain for the duration of your career, it pays to diversify. Right tool for the job and all that.

This leads me to something that I have been thinking about for a while. The multi-core era is upon us, and we don’t have the right tools for the job.

OO programming makes it easy to design by component, and organise and compose the pieces to desired effect. The problem is that those same concepts break down when you think of system-level services like threads, and the interaction of your pieces with the platform. Should a thread really be an object that can be controlled by the programmer? I think probably not.

Writing multi-threaded software is really hard. After reading Java Concurrency In Practice, I realised the nuances of just how hard it really is, and how easy it is to do the wrong thing. Even really smart people get it wrong. The core of the problem is shared mutable state, and any language that does not sufficiently separate the effects among threads can, and probably will, end up doing the wrong thing. Erlang’s message passing model is quite cool in that it separates processes, yet it falls over on the front of modelling entities and the relationships between them. Not surprising given its design philosophy.
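To make the shared-mutable-state point concrete, here is a small standalone example (names illustrative): two threads bump an unguarded int and an AtomicInteger the same number of times. The plain increment is a read-modify-write that the threads can interleave, losing updates; the atomic one cannot.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedStateDemo {
    static int unguarded = 0;                                // shared mutable state, no synchronization
    static final AtomicInteger atomic = new AtomicInteger(); // safely shared

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unguarded++;              // not atomic: two threads can interleave and lose updates
                atomic.incrementAndGet(); // atomic increment
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("unguarded: " + unguarded); // frequently less than 200000
        System.out.println(atomic.get());              // always 200000
    }
}
```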

This seems to be the crux of the problem – the next generation of apps will have to deal easily with breaking up problems in an easily concurrent manner, but at the same time model the world in the “this object is a bank account that belongs to that guy over there” abstractions that we have become used to thinking in. Those abstractions seem to be at odds with each other using current development paradigms. You can stick actor libraries on existing languages, but they still don’t make it impossible to mess things up. This is ultimately what the next-gen programming environment needs to address: it should be really difficult to mess things up. Threading should be thought of like garbage collection in a modern VM (i.e. you should know how it works in case things go wrong, but can pretty much depend on it to work correctly the rest of the time).

Maybe the correct approach isn’t to duct-tape these concepts together in one syntax, but rather to have different abstractions in a language that model each world-view separately. This would be a bit like using floor plans in combination with elevations in building design. Or, for that matter, class and sequence diagrams in UML. Each represents a facet of the whole, but neither is fully complete on its own.

Either way, it pays to diversify. Languages have their own particular sweet spots, and problems that they address well. Even if we do manage to marry an object view of the world with transparent multi-threading, that will just highlight a different class of problems that cannot be easily solved using that approach.

A fire-side chat about programming

Every once in a while I go through a period of introspection where I pose questions like “why am I solving the same stuff all the time?”, “is there a better way to be doing this?” and “what’s around the corner?”. I think it’s pretty healthy, and I prefer to give it a good two weeks of thought straight rather than to constantly be going through that process (which I find pretty distracting at the 10k foot level). As part of that I have been reading an awesome book in the last week called “Secrets of the Rock Star Programmers“. It’s a collection of interviews with some of the biggest/loudest names in programming, and contains the sorts of conversations that you would have down at the pub with these guys. I think that it’s quite an introspective, passing-on-wisdom type of book in the vein of “The Pragmatic Programmer” (TPP), but for the Java/.Net generation. Unlike TPP, it covers subjects around the meta-level stuff like keeping up to date versus trend chasing, and work-life balance amongst the day-to-day grind of pending deadlines. The really interesting thing is the common threads coming out despite the personalities and differences in approach. The book’s style is very different to TPP’s in that it is not prescriptive, but rather lets you draw your own conclusions. It has been an interesting read that I think I will keep coming back to, and one that I think I would not have gotten as much out of at the beginning of my career. I strongly recommend it, especially if you happen to be going through a “so, what’s it all about, then?” stage and don’t happen to have your favourite rock star around to chat to.

Be a Better Developer

I came across 91 Surefire Ways to Become an Even Better Developer while looking for programming resources similar to Project Euler (the best way to learn a new language). Dozens of links and ideas for when you feel that work is not stretching the brain as much as it could. My favourite? Get your boss to get you a massage.

What can you learn from the guys at Google?

Anyone whose coding work tends to lean towards the more advanced or low-level should check out Google Code University. Topics covered in this series of presentations include language corner cases, web security, distributed systems and AJAX. Good stuff, worth taking a look at.