Posts Tagged ‘software engineering’

The Church of the One True Language

I stumbled upon an interview from JAOO 2007 with Joe Armstrong and Mads Torgersen discussing Erlang, concurrency and program structure (objects versus interrelated processes). It was really interesting to see how similar yet different their points of view were. I’m not going to paraphrase it here, as it’s worth listening to in full.

Two points came out of the conversation that are worth talking about – the fallacy of the silver-bullet language, and the right tools for the future.

The premise of The Church of the One True Language goes something like this: “you can do anything in my language”. Write servers, build databases, write accounting apps, and so on. But if you think of languages as having their sweet spots and use cases, much like libraries, that premise pretty much falls apart. I, for one, wouldn’t want to be writing hugely parallel software in Java, just like I wouldn’t be writing web services in Erlang. Sure, it’s doable, but probably not the best way to go about it. So it makes sense that, unless you want to work in the one problem domain for the duration of your career, it pays to diversify. Right tool for the job and all that.

This leads me to something that I have been thinking about for a while. The multi-core era is upon us, and we don’t have the right tools for the job.

OO programming makes it easy to design by component, and organise and compose the pieces to desired effect. The problem is that those same concepts break down when you think of system-level services like threads, and the interaction of your pieces with the platform. Should a thread really be an object that can be controlled by the programmer? I think probably not.

Writing multi-threaded software is really hard. After reading Java Concurrency in Practice, I realised the nuances of just how hard it really is, and how easy it is to do the wrong thing. Even really smart people get it wrong. The core of the problem is shared mutable state, and any language that does not sufficiently isolate effects between threads can, and probably will, end up doing the wrong thing. Erlang’s message-passing model is quite cool in that it isolates processes from one another, yet it falls down when it comes to modelling entities and the relationships between them. Not surprising, given its design philosophy.
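To make the shared-mutable-state point concrete, here is a minimal Java sketch (the class and field names are my own invention): two threads bump an unsynchronised int and an AtomicInteger the same number of times. The plain increment is really a read-modify-write sequence, so updates can be silently lost; the atomic version cannot lose any.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Two threads racing on shared state: a plain int vs an AtomicInteger.
public class LostUpdate {
    static int plain = 0;                              // unsynchronised shared state
    static AtomicInteger atomic = new AtomicInteger(); // atomic shared state

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                  // not atomic: load, add, store
                atomic.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // atomic is always 200000; plain usually comes up short.
        System.out.println("plain=" + plain + " atomic=" + atomic.get());
    }
}
```

On most runs the plain counter loses updates, and only under load – exactly the kind of bug that hides until production.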

This seems to be the crux of the problem – the next generation of apps will have to break problems apart so they can run concurrently, while at the same time modelling the world in the “this object is a bank account that belongs to that guy over there” abstractions that we have become used to thinking in. Those abstractions seem to be at odds with each other under current development paradigms. You can bolt actor libraries onto existing languages, but they still don’t make it impossible to mess things up. This is ultimately what the next-gen programming environment needs to address: it should be really difficult to mess things up. Threading should be thought of like garbage collection in a modern VM (i.e. you should know how it works in case things go wrong, but can pretty much depend on it to work correctly the rest of the time).
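As a rough illustration of the actor idea grafted onto an OO language, here is a toy Java sketch (all names are mine, and this is not any particular actor library): the counter’s state is owned by exactly one thread, and other threads may only send it messages through a queue, so there is no shared mutable state left to corrupt.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

// A toy Erlang-style "process" in Java: one thread owns the state,
// everyone else communicates via messages.
public class CounterActor {
    // A message is a closure the actor thread runs against its own state.
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    private long count = 0; // only ever touched by the actor thread

    public CounterActor() {
        Thread actor = new Thread(() -> {
            try {
                while (true) mailbox.take().run(); // process messages in order
            } catch (InterruptedException e) {
                // interrupted: shut down quietly
            }
        });
        actor.setDaemon(true);
        actor.start();
    }

    public void increment() {
        mailbox.add(() -> count++); // runs on the actor thread, never here
    }

    public CompletableFuture<Long> get() {
        CompletableFuture<Long> reply = new CompletableFuture<>();
        mailbox.add(() -> reply.complete(count));
        return reply;
    }

    public static void main(String[] args) throws Exception {
        CounterActor c = new CounterActor();
        Thread[] senders = new Thread[4];
        for (int i = 0; i < senders.length; i++) {
            senders[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            senders[i].start();
        }
        for (Thread t : senders) t.join();
        System.out.println(c.get().join()); // prints 40000
    }
}
```

Note what the sketch buys and what it costs: the count can never be corrupted, but the “bank account belongs to that guy” relationships now have to be expressed as message protocols rather than plain object references – which is exactly the tension described above.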

Maybe the correct approach isn’t to duct-tape these concepts together in one syntax, but rather to have different abstractions in a language that model each world-view separately. This would be a bit like using floor plans in combination with elevations in building design. Or, for that matter, class and sequence diagrams in UML. Each represents a facet of the whole, but neither is fully complete on its own.

Either way, it pays to diversify. Languages have their own particular sweet spots, and problems that they address well. Even if we do manage to marry an object view of the world with transparent multi-threading, that will just highlight a different class of problems that cannot be easily solved using that approach.

Poorly Formatted Code Costs You Money

After nearly 10 years of working on complex systems, I think I have finally nailed down why poorly formatted code annoys me so much: it wastes time. Complex logic needs whitespace for the reader to make sense of it, in the same way that sentences need punctuation. If the whole thing looks like a dog’s breakfast, it is that much harder to understand.

When a person approaches poorly laid out code, they have two choices:

  • battle through it
  • clean it up and make sense of it

The first one results in an exercise in frustration, the second… well, that’s a beast unto itself.

A long time ago, in my first job, I was working for a consultancy at a major telecoms company on a very large system. The system was used to activate telecoms products on individual lines and talked to telephone exchanges across the network. The project was in its tenth year and had fallen into a steady routine of releases. Regression tests had been written years earlier, but had long since fallen by the wayside. A new project manager came in with an agenda of improvement, and the process to get the tests running again began in earnest.

I was given the task of redeveloping telephone exchange simulators that the tests made use of. These Perl daemon servers would listen on a pipe, take some text in, interpret it and spit out what was expected of an actual telephone exchange of the appropriate manufacturer and version.

I had never worked on anything like this before and asked whether there was any documentation. The response as I remember it was “Bwahahaha! Documentation?”. OK, maybe not quite that dramatic, more along the lines of… “Nothing concrete but it has a lot of comments”.

Understatement of the millennium. Just a sample:

$i++; # add 1 to the value of i

Apparently, the project had taken on a contractor years previously who wasn’t particularly good. Rather than getting rid of him (they didn’t care as they were being paid by the hour for the bum on the seat), they got him to comment the code. Obviously annoyed that he was being sidelined, he commented every single line out of pure spite.

The code wasn’t great to begin with, but in this state it was unreadable! The first thing I did was strip out the redundant comments (some 20,000 lines’ worth) and check in a clean copy. The next day, one of the senior programmers and the version control manager gave me a very stern talking-to!

It seems that even though no one could argue with my intentions and everyone agreed that it was the right thing to do, it played havoc with the merge tracking. Everything had changed and it would now be impossible to see what my actual code changes were!

The same issue arises with non-standardised code. On a large project there are a lot of people working against the same code base. Some will be good, others not so much. Everyone iterates through each other’s classes, making changes as warranted. Now imagine the scenario above, but with numerous people working on various branches of code that all have to be merged back together.

Your programmers now have the same choice:

  • do they slowly battle with illegible code while dealing with the task at hand, or
  • do they reformat it, and take up someone else’s time as that person struggles to work out which of the multiple versions needs to go into the final release?

Not a pretty choice. But there is hope!

Actually apply a coding standard.

Give anyone who does not apply it a good talking-to. You could establish your own standard from current naming structures, layouts, team consensus and so on, but you will probably end up making life more difficult for yourself. Getting code formatters to behave just the way you want, and then getting those changes out to everyone on the team, takes time – and in a project situation, that’s a rare commodity.

Using Java? Use the Sun standard. ALT-SHIFT-F will automatically format Java code to it by default in NetBeans, and CTRL-SHIFT-F does the same in Eclipse. Weird naming conventions are great for your pet project, but just use the defaults in real life. Personal preference has little relevance in reality. The curly brackets debate happened a long time ago, and no one won. I have used Jalopy in the past as a custom formatter where some weird conventions were dictated. Even though it was supposedly a standard, I realised that few others on the project team did the same, because it took too much time to set up and they didn’t know what the big deal was anyway… *sigh*

Use standard coding conventions. Keep a close eye on anyone who checks in nonsense because poor formatting is often an indicator of poor quality code in other ways, and it will take time to clean up their mess. Time that could be better spent bringing your project in on budget.

Unit Testing the Database Tier

Unit testing database code is a bit of a funny problem. Most developers can pretty easily get their heads around unit testing a piece of Java code using interfaces and mock objects, but when it comes to database code and DAOs, it suddenly becomes particularly difficult. Why? What is so hard about testing against the database? Surprisingly enough, the answer has little to do with coding or any particular framework, although these do play their parts. It comes down to a complex web of human interaction, version control and managing environments. Let me explain.

The standard unit test has three basic phases:

  • Setup (@Before)
  • Test (@Test)
  • Tear down (@After)

The first sets the test environment into an expected state, the second runs the test and checks that the outcome is as expected, while the final one clears up any test resources.
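Stripped of the JUnit machinery, the three phases look something like this in plain Java (the class and its list-backed fixture are made up purely for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// A hand-rolled illustration of the setup / test / tear-down lifecycle.
public class WidgetStoreTest {
    List<String> widgets; // stand-in for the test resource

    void setUp() {                    // @Before in JUnit 4
        widgets = new ArrayList<>();  // put the fixture into a known state
        widgets.add("sprocket");
        widgets.add("gadget");
    }

    void testRemoveShrinksStore() {   // @Test
        widgets.remove("sprocket");
        if (widgets.size() != 1) throw new AssertionError("expected 1 widget");
    }

    void tearDown() {                 // @After
        widgets = null;               // release the test resource
    }

    public static void main(String[] args) {
        WidgetStoreTest t = new WidgetStoreTest();
        t.setUp();                    // every test starts from the same state
        try {
            t.testRemoveShrinksStore();
        } finally {
            t.tearDown();             // clean up even if the test fails
        }
        System.out.println("ok");
    }
}
```

The whole argument that follows hinges on that setUp step: for in-memory objects it is trivial, but for a database it means getting a whole schema into a known state.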

How does this relate to database testing? Let’s say that we have a DAO that performs a particular select statement. Our test should be to retrieve a particular number of records from a known set. Easy enough. The precondition, of course, is that you have a known set to begin with.

It’s ALL about the environment.

Most large development projects go like this: The database guys update the schema. The developers write the code. The developers need a particular data set to exercise the various use cases so they add it to the schema. It all becomes a bit messy.

Eventually, very complex data sets are set up by everyone concerned in a primary schema that keeps getting updated. The database schema is generally not version controlled, as it is constantly being redefined by DDL statements run by the DBAs. Most of the time you will be lucky to get a backup of the schema, with all of the data truncated, as the schema and supporting code (i.e. the application) move between environments.

Getting back to the test: you set up your data by hand in the master schema so that there are three items in the widgets table where some condition holds true. You write your test, it runs against the schema, pulls out the expected three widgets, and everything is great. You check in the tests. A week later your colleague, Bob, adds another widget to satisfy his test condition. Your test suddenly returns four items and breaks.

Of course, Bob didn’t actually run your test because he was too busy with his own and the test suite isn’t clean anyway because everyone is falling over each other.

Sound familiar?

What about inserts? The precondition: no sprockets are in the table; the test: insert a sprocket; the postcondition: a sprocket is in your table. Kind of hard to test under the above conditions, isn’t it? For one thing, data identical to the test sprocket may already be in the table, so checking by value may give you false positives, while deleting it may remove more records than you wanted. And what about concurrent tests? With a group of developers running the same tests, they start tripping over each other very quickly and the whole effort becomes an exercise in frustration. At this point the development manager throws up his hands, declares that this automated testing thing is a load of bollocks, and tells everyone to get back to work because nobody dealt with all this when he was doing VB. Somewhere else, Kent Beck sheds a tear…

Let’s examine what goes on in Ruby on Rails. One of the best ideas popularised by this framework was its approach to database unit testing. A developer’s workspace has multiple environments by default – development, test and production. You develop against the development schema, designing table structures and playing with the user interface to your heart’s content. When you run unit tests, the following happens – the schema from development is copied into the test database with no data in it. The framework imports version-controlled sets of test data (saved as YAML files) into this new schema. Whenever a test is run, it is guaranteed that the database will be in this state. Any changes a test makes are visible only within the scope of that one test. The tear-down step cleans out your changes. This makes life so much simpler, especially if you have been working in the nightmare scenario above.
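The guarantee Rails makes can be caricatured in a few lines of Java (the fixture and all names here are mine): the version-controlled fixture is the only source of truth, and every test runs against a fresh copy of it, so no test ever sees another test’s leftovers.

```java
import java.util.HashMap;
import java.util.Map;

// A toy model of per-test database isolation: table name -> row count.
public class FixtureIsolation {
    // Stand-in for the checked-in fixture file (e.g. a YAML data set).
    static final Map<String, Integer> FIXTURE = Map.of("widgets", 3, "sprockets", 0);

    // "Reload the fixture": every test gets its own fresh copy.
    static Map<String, Integer> freshSchema() {
        return new HashMap<>(FIXTURE);
    }

    public static void main(String[] args) {
        Map<String, Integer> db1 = freshSchema();
        db1.put("widgets", db1.get("widgets") + 1); // test 1 inserts a widget

        Map<String, Integer> db2 = freshSchema();
        // Test 2 still sees exactly three widgets, whatever test 1 did.
        System.out.println(db2.get("widgets")); // prints 3
    }
}
```

The point of the sketch is the freshSchema call: Bob can insert as many widgets as he likes in his test, and your “expect three widgets” test still passes, because his changes never reach your copy.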

So how do we get the same sort of effect in a corporate development environment?

You need multiple database schemas in order to unit test your db code.

Pause and re-read that line. It’s not negotiable. You probably need two per developer: one with sample data to use while you work on the user interface, the other a temporary one for unit testing. A whole development team using the one schema does not work. Most projects do it, but that doesn’t mean it’s a good idea.

Some suggestions for how to manage this: the DBAs have their own schemas. The full DDL for the database is kept in version control. After each change, the full database DDL is dumped and checked in. No ad hoc ALTER TABLE statements against shared schemas. Ever. This way you are guaranteed that if you ever want to get a baseline of your system, you can also rebuild the database as it existed at that time. I worked on a very large telecoms project with a huge development team, and this worked. Well.

The test data for your environments is stored in version control – at the very least, as dumps of insert statements. For unit testing purposes, a dedicated unit test framework is beneficial. DBUnit performs the same task in Java as described above for Rails – it loads test data from dumps (a number of formats are supported), and guarantees that the test database exists in the expected state when each test is run.
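For the curious, a DBUnit flat XML data set is about as simple as test data gets – each element names a table and each attribute a column. A hypothetical fixture for the widgets example might look like this (table and column names are invented):

```xml
<!-- widgets.xml: a made-up fixture; DBUnit loads it before each test -->
<dataset>
  <widgets id="1" name="left-handed widget" in_stock="true"/>
  <widgets id="2" name="right-handed widget" in_stock="true"/>
  <widgets id="3" name="reversible widget" in_stock="false"/>
</dataset>
```

Because the file lives in version control, Bob’s extra widget becomes a visible diff in a code review rather than a silent change to a shared schema.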

To test your database code, refresh your test schema with the one from version control – typically using your chosen build system. Ant tasks are generally pretty good for this. Now run your test cases. Gorgeous! No tripping over other people, and your tests are guaranteed to work the same each time. No excuses for a red bar.
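As a sketch of what that refresh step might look like in Ant (the paths, driver and credentials here are invented), the built-in sql task can replay the version-controlled DDL and test data before the test run:

```xml
<!-- Hypothetical target: rebuild the unit-test schema from version control.
     Assumes the JDBC driver jar is on Ant's classpath. -->
<target name="refresh-test-db">
  <sql driver="oracle.jdbc.OracleDriver"
       url="jdbc:oracle:thin:@localhost:1521:dev"
       userid="unittest"
       password="unittest">
    <transaction src="db/schema.sql"/>
    <transaction src="db/test-data.sql"/>
  </sql>
</target>
```

Wire a target like this in as a dependency of your test target and a red bar can only ever mean one thing: the code is wrong.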

So why is unit testing databases so difficult if it doesn’t have to be? Most of the time it involves process change and getting out of bad habits, not just a tool. And change means convincing people. Generally, managers do not understand what benefit there is in multiple database schemas, as it is seen to increase complexity and therefore risk, and DBAs like to have full control over what is going on on their servers. The topic of databases and processes is also a great one for religious zeal.

The process outlined above should explain the hows and whys to the individuals involved. The changes above mean a little bit more setup initially, but a saner development process.

A nice side effect is how easy upgrading databases through your environments can become. Run the latest DDL against a fresh schema, get the differences between it and your target environment using a database compare tool, and fire it off. Beautiful.