The Safety Net That Wasn’t

The other day, I wasted time debugging some Java code. When I say “wasted”, I’m not complaining about debugging per se — debugging is part of my life as a developer. The time was wasted because debugging should not have been necessary in this case. Let me explain…

It just so happened that I called a method but violated a constraint on one of its parameters. Within the called method, the constraint was properly enforced with an assertion, just like in this example:
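My real code looked different, of course; the following stripped-down sketch (class, method and parameter names are invented) shows the pattern:

    public class Scheduler {

        /**
         * Runs some housekeeping task after the given delay.
         * The delay must not be negative.
         */
        public void schedule(long delayMillis) {
            assert delayMillis >= 0;   // the kind of constraint I violated
            // ... actual work omitted ...
        }
    }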

Normally, my violation of the method’s contract would have been reported immediately and I wouldn’t have had to hunt for the bug at all. Normally, yes, but not in this case, as I forgot to run my program with assertions enabled. So instead of
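(‘MyProgram’ is just a stand-in for the real main class.)

    $ java -ea MyProgram      # '-ea' enables assertions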

I wrote what I had written thousands of times before:
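    $ java MyProgram          # assertions silently disabled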

Silly me, silly me, silly me! That’s what I thought initially. But then I was reminded of the words of Donald A. Norman. In his best-selling book “The Design of Everyday Things” he observes that users frequently — and falsely — blame themselves for their mistakes, when in fact it is the designer who failed to prevent such mistakes in the first place. Is it possible that Java’s assertion facility is ill-designed? After having thought about it for some time, I’m convinced it is.

Assertions first appeared in the C programming language, and they came with two promises: first, assertions are enabled by default (that is, until you explicitly define NDEBUG), and second, they incur no overhead once turned off. These two properties are essential, and Java’s implementation misses both of them.

The violation of the first principle means that you cannot trust your assertion safety net: it is just too easy for you, your teammates or your users to forget the ‘-ea’ command-line switch. If you don’t trust a feature, you won’t use it. What use is an anti-lock braking system that you have to enable manually every time you start your car?

Efficiency has always been a major concern for developers. If you execute your Java code with assertions disabled (which is, as we know, unfortunately the default), you will most likely not notice any speed penalty. What you will notice, however, is the additional footprint of your assertions, which always travels with your Java program. There is no way to compile assertions out. Take a look at this C example:
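The names below are invented and the search itself is kept minimal; what matters is the assertion block at the top:

    #include <assert.h>
    #include <stddef.h>

    /* Returns the index of 'key' in the sorted array 'values', or -1 if absent. */
    int binary_search(const int values[], size_t n, int key)
    {
    #ifndef NDEBUG
        /* Supporting debug code: check the precondition that 'values' is sorted.
           In a release build (compiled with -DNDEBUG) the whole loop disappears. */
        for (size_t i = 1; i < n; ++i) {
            assert(values[i - 1] <= values[i]);
        }
    #endif
        size_t lo = 0;
        size_t hi = n;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (values[mid] < key) {
                lo = mid + 1;
            } else if (values[mid] > key) {
                hi = mid;
            } else {
                return (int)mid;
            }
        }
        return -1;
    }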

A prerequisite of any binary search implementation is that the input values are sorted, so why not assert it? Since we need to iterate over all elements, a simple assert expression is not sufficient. Unlike in Java, this is not a problem in C and C++: the code for the assert as well as the for-loop will be removed from the release build, thanks to the preprocessor.

While assertions — especially non-trivial assertions that require supporting debug code — already waste memory, you can do worse if you use the kind of assertion that allows you to specify a string to be displayed when an assertion fails:
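Again, the names are invented; it is the message string that matters here:

    class Lookup {
        static int elementAt(int[] values, int index) {
            assert index >= 0 && index < values.length
                : "index must lie within the bounds of the values array";
            return values[index];
        }
    }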

This string is of little use. If a programmer ever sees it, they will have to look at the surrounding code anyway (easily located via the file name/line number pairs in the stack trace), since it is unlikely that such an assertion message provides enough context on its own. But, hey, I wouldn’t really mind the string if it came at no cost; in my view, however, wasting dozens of additional bytes on it is not justified. I prefer the traditional approach, that is, an explanation in the form of a comment:
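The same check as above (same invented names), with the explanation moved into a comment, where it costs nothing at run time:

    // Precondition: 'index' lies within the bounds of 'values'.
    assert index >= 0 && index < values.length;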

Assertions are like built-in self-tests and one of the cheapest and most effective bug-prevention tools available; this has been confirmed once again in a recently published study by Microsoft Research. If developers cannot rely on them (because someone forgot to pass ‘-ea’, or because surrounding code inadvertently swallowed the resulting AssertionError by catching ‘Throwable’ or ‘Error’), or if they always have to worry about assertion code bloat, they won’t use them. This is the true waste of Java assertions.

Personal Scrum

Even though I’ve never participated in a Scrum project, I’m a big Scrum fan. I’m convinced that a feedback-enabled, quantitative project management approach, one which puts the customer in the driver’s seat, is key to avoiding delays and frustration.

The concept of time-boxing is especially powerful: the Scrum team sets its own goals that it wants to achieve within a given period of time. In Scrum, this period of time — or iteration — is called a “sprint” and usually lasts two to four weeks. Because the sprint deadline is in the not-so-distant future, developers stay on track and the likelihood of procrastination and gold-plating is fairly low.

But there is even more time-boxing in Scrum: every day, at the “Daily Scrum Meeting”, the team comes together and everyone says what they have achieved and what they plan to achieve by the next daily scrum. In practice, that’s another 24-hour (or eight work-hour) time-box.

Still, getting things done is not easy. If you are like me, you are distracted dozens of times every day. While hacking away, you are suddenly reminded of something else. Maybe it’s a phone call that you have to make. Or you want to check out the latest news on “Slashdot”. Maybe a colleague pops by to tell you about the weird bug he just discovered in the GNU C++ compiler…

If you give in to these interruptions, you won’t get much done in a day. You won’t get into what psychologists call “flow”: a highly productive state where you are totally immersed in your work.

Is there a way to combat such distractions? There is, but let me first tell you what doesn’t work: quiet hours. Quiet hours are team-agreed, fixed periods of time during which you must not be interrupted, say, from 9.00 to 11.00 in the morning and from 14.00 to 16.00 in the afternoon. Every team member is expected to respect these hours. Sounds like a nice idea, but it fails miserably in practice. Especially in large projects, people depend on each other, and productivity drops if developers are blocked because they cannot ask for help for two hours. Every team I’ve been on that tried quiet hours abandoned them shortly after introducing them.

The solution is to make the period of highly focused work much shorter, say 25 minutes. If interruptions occur, you make a note of them in your backlog and carry on with your task. When the time expires, you take a quick break (usually 5 minutes), check your backlog and decide what to do next: either continue with your original task or handle one of your queued interrupts. In any case, you start another highly focused 25-minute period, and after four such iterations you take a bigger break (15–30 minutes). That’s the Pomodoro Technique in a nutshell.

The Pomodoro Technique (pomodoro is Italian for tomato) was invented by Francesco Cirillo, a student who had problems focusing on his studies. He wanted to find a method that allowed him to study effectively — even if only for 10 minutes — without distractions. He used a mechanical kitchen timer in the shape of a tomato to keep track of time, and hence named his technique after it. He experimented with different durations, but finally came to the conclusion that iterations of 25 minutes (so-called “Pomodoros”) work best.

I like to think of the Pomodoro Technique as “Personal Scrum”. To me, a 25-minute time-box is just perfect: it’s enough time to get something done, yet short enough to ensure that important issues that crop up are not delayed for too long. In his freely available book, Francesco writes that while software Pomodoro timers are available, a mechanical kitchen timer usually works best — and I definitely agree. The act of manually winding up the timer is a gesture of committing to a task, and the ticking sound helps you stay focused, since you are constantly reminded of time. However, mechanical timers are a clear no-no if you share your office with others: the ticking and especially the ringing would be too annoying.

When I’m all by myself, I prefer a mechanical kitchen timer, but if I share a room with someone else, I prefer something softer. I’ve asked the folks at AudioSparx to create a Pomodoro kitchen-timer MP3 for me: 25 minutes of ticking, followed by a gentle 10-second ring (yes, you can download it — it’s USD 7.95 and no, I don’t get a commission). I listen to it on my PC’s MP3 player wearing headphones, which has two additional benefits: first, headphones shut out office noise, and second, they signal to others that I wish to be left alone, so they only interrupt me if it is really, really urgent.

“I have a deadline. I’m glad. I think that will help me get it done.”
–Michael Chabon

Get into ‘Insert’ Mode

Here I am, trying to write something. I’m sitting at my desk, staring at my screen, and it looks like this:


It is empty. I just have no clue how to even start.

Are you familiar with such situations? Among writers, this is a well-known phenomenon and it’s called “writer’s block”. But similar things happen in all creative fields: sooner or later, people hit a massive roadblock and don’t know where to start. A painter sits in front of a blank canvas, an engineer in front of a blank piece of paper and a programmer in front of an empty editor buffer.

Is there any help? Sure. You can use a technique called “free writing”, which means you just write down whatever comes to your mind, regardless of how silly it looks. It’s important that you don’t judge what you write and don’t pay attention to spelling or layout; your only job is to produce a constant stream of words — any words. This exercise will warm up your brain and hopefully remove the block. Applied to programming: you set up a project, you write a “main” routine (even if it only prints out “Hello, World, I don’t know how to implement this freaking application”) and a test driver that invokes it.

The next thing you do is write a “shitty first draft”, as suggested by Anne Lamott. You probably know the old saying: the perfect is the enemy of the good. By looking for the perfect solution, we often end up achieving nothing, because we cannot accept temporary uncertainty and ugliness. That’s really, really sad. Instead, write a first draft, even if it is a lousy one. Then put it aside and let it mature, but make sure you revisit it regularly. You will be amazed at how new ideas and insights emerge. Experienced programmers are familiar with this idea, but they call it prototyping: they jot down code, they scribble and sketch without paying attention to things like style and error-handling, often in a dynamic language like Perl or Python.

So if you have an idea that you think is worth implementing, start it. Start somewhere — anywhere — even if the overall task seems huge. Get into ‘insert’ mode (if you are using the ‘vi’ editor, press the ‘i’ key). Remember the Chinese proverb: “The hardest part of a journey of a thousand miles is leaving your house”.

Greyface Management

“On arrival we will stay in dock for a seventy-two hour refit, and no one’s to leave the ship during that time. I repeat, all planet leave is cancelled. I’ve just had an unhappy love affair, so I don’t see why anybody else should have a good time. Message ends.”
(Prostetnic Vogon Jeltz, Hitchhiker’s Guide to the Galaxy)

Grey is not just a color — it’s an attitude. There is a management style that I refer to as “Greyface Management”. The term is loosely based on the “Curse of Greyface”, an important concept of Discordianism.

Greyface Management is characterized by a total absence of fun. Everything is prohibited: free speech, sarcasm and parties. And there is no praise for good work, either. Never. In fact, a Greyface Manager’s motto is: “Praise is the absence of punishment”. A Greyface Manager typically wears a grey suit (mentally, at least) and an annoyed look on his face — he is a humorless bureaucrat, akin to a member of the Vogon race.

The presence of Greyface Management is not just unpleasant — it is a sign of serious trouble. A manager who uses this kind of management style in a software shop openly confesses that he doesn’t have a clue about software development in general and “Peopleware” (that is, developers) in particular. Now, it is a well-known fact that most software managers can’t manage (a subject well worth exploring; I will certainly revisit it in future posts), but many software managers are aware of their limitations and find ways to keep productive work possible under their reign. A Greyface Manager, on the other hand, hasn’t reached that level of sophistication and uses the worst possible approach: oppression.

Humor is very important for software developers, especially “creative” humor that requires out-of-the-box thinking — that’s the very reason why programmers usually love Monty Python and Dilbert. Sarcasm and inside jokes help keep the team knit together, so it’s not always a bad sign if developers make jokes about testers and sales people (and vice versa). And, dear Greyface Manager, what use are conforming yes-men who merely work to rule, anyway?

Intended Use vs. Real Use

Often, things are invented to solve a particular problem, but then the invention is used for something completely different.

Take Post-it® Notes, for instance. In 1970, Spencer Silver at 3M’s research laboratories was looking for a very strong adhesive, but what he found was much weaker than what was already available at his company: it stuck to objects, but could easily be lifted off. Years later, a colleague of his, Arthur Fry, dug up Spencer’s weak adhesive — the rest is history.

Another example is the discovery of the little blue pill called Viagra®. Pfizer was looking for a medication to treat heart disease, but the desired effects of the drug were minimal. Instead, male subjects reported completely different effects — again, the rest is history.

In 1991, a team of developers at Sun were working on a new programming language called “Oak” — the goal was to create a language and execution platform for all kinds of embedded electronic devices. They changed the name to “Java” and it has become a big success: You can find it almost everywhere, except — big surprise — in embedded systems.

I would never have guessed how minute Java’s impact on embedded systems was until I read Michael Barr’s recent article, provocatively titled “Real men program in C”, in which he presents survey results showing the usage statistics of various programming languages in embedded systems projects.

The 60–80% dominance of C didn’t surprise me — C is the lingua franca of systems programming: high-level enough to support most system-level programming abstractions, yet low-level enough to give you efficient access to hardware. If it is fine for the Linux kernel (roughly 10 million source lines of code, SLOC, not counting comments), it should be fine for your MP3 player as well.

Naturally, at least to me, C++ must be way behind C — Barr reports a 25% share. C++ is a powerful but difficult language. It is more or less built on top of C, so it is “backwards-efficient”. Alas, to master it, you need to read at least 10 books by Bjarne Stroustrup, Scott Meyers, Herb Sutter et al. and practice for five years — day and night. But the biggest problem with C++ is that it somehow encourages C++ experts to endlessly tinker with their code, using more and more advanced and difficult language features until nobody else understands the code anymore. (Even days after everything is already working they keep polishing — and if people complain that they don’t understand their template meta-programming gibberish, they turn away in disgust.)

But how come Java is only at 2%? Barr, who mentions Java only in his footnotes (maybe to stress its insignificance even more), has this to say: “The use of Java has never been more than a blip in embedded software development, and peaked during the telecom bubble — in the same year as C++.”

Compared to C++, Java has even more weaknesses when it comes to embedded systems programming. First of all, there is no efficient access to hardware, so Java code is usually confined to the upper layers of the system. Second, Java, being an interpreted language, cannot be as fast as compiled native code, and JIT (just-in-time) compilation is only feasible on larger systems with enough memory and computational horsepower. As for footprint, it is often claimed that Java code is leaner than native code. Obviously, this is true, as the instruction set of the JVM is more “high-level” than the native instruction set of the target CPU. However, for small systems, the size of the VM and the Java runtime libraries has to be taken into account, and this “overhead” only amortizes in larger systems. But two more properties of Java frequently annoy systems programmers: the fact that all memory allocation goes via the heap (i.e. you cannot allocate objects on the stack or pass them by value), and the fact that the ‘byte’ data type is signed, which can be quite a nuisance if you want to work with unsigned 8-bit data (something that happens rather frequently in embedded systems; see the little snippet below). Finally, if C++ seduces programmers into over-engineering their code by using every obscure feature the language has to offer, Java seduces programmers into over-objectifying their code — something that can lead to a lot of inefficiency by itself.
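To make the ‘byte’ nuisance concrete, here is a tiny made-up example: a status byte with the bit pattern 0xF0 read from some device comes out as -16 unless you mask it back into the unsigned range yourself:

    public class UnsignedByteDemo {
        public static void main(String[] args) {
            byte status = (byte) 0xF0;       // bit pattern 1111 0000, i.e. -16 as a signed Java byte
            int unsigned = status & 0xFF;    // 240 -- the value the hardware actually meant
            System.out.println(status + " vs. " + unsigned);   // prints "-16 vs. 240"
        }
    }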

I don’t think that the embedded world is that black and white. I’m convinced that for small systems (up to 20 KSLOC) C is usually the best choice — maybe sprinkled with some assembly language in the device drivers and other performance-critical areas. Medium-sized systems can, and large systems definitely will, benefit from languages like C++ and Java, but only in upper layers such as application/user-interface frameworks and internal applications. Java clearly wins if external code (e.g. applets, plug-ins) will be installed after the system has been deployed: in such cases, Java has proven to be a reliable, secure and portable framework for dynamically handling applications. For the rest, that is, the “core” or “kernel” of a larger system, C is usually the best and most efficient choice.

I, Engineer

I could hardly wait for my new Linux PC to arrive. When it finally did, I ripped the cardboard box open, connected everything, pressed the power button and … was utterly disappointed.

I didn’t want an off-the-shelf PC, partly to avoid the usual Microsoft tax (a.k.a. pre-installed Windows Vista), but mostly because I wanted a quiet PC. All the components (including a passively cooled graphics card) were selected with this goal in mind. Still, my PC sounded like a freaking lawn mower.

One option would have been to send everything straight back, but that would have been rather cumbersome; the other was to take care of the problem myself.

I used to be a big fan of “MacGyver”, hero of the eponymous 1980s action series. “Mac” was a wonderful person: a good-looking daredevil who avoids conflicts and doesn’t carry firearms; instead, he always carries a Swiss Army Knife and duct tape. He knows how to defuse bombs, how to hotwire cars and how to fix everything with everyday stuff like paper clips. In short, he is a great problem solver, a great hacker and a great role model.

MacGyver would not have sent the PC back — he would have taken care of the problem himself. So I opened the case and found out that — even though I had a passively cooled graphics card — there were four fans in my case: a power supply fan, two case fans (one mounted on the front and a larger one mounted on the back) and a CPU fan.

It turned out that the manufacturer had saved a couple of bucks by using really cheap fans, so I ordered ultra-silent replacements; yet for my taste the CPU fan was still too loud. I measured the current that ran through it and did a quick calculation to find out which resistor I needed to slow it down to 1000 rpm. Alas, I only had two resistors that could sustain the amount of current flowing through the fan: one that was too big (which prevented the fan from starting up) and one that was too small (the fan still sounded like a lawn mower). I could have ordered the perfect resistor, but that would have meant waiting a couple of days and paying 10 EUR for shipping and handling. The right “hack” was of course to connect them in parallel, which yielded a resistance very close to the one I had calculated. After a little bit of soldering I protected the solder joints with heat-shrink tubing and — voilà! — I had a decently quiet PC!

Too many programmers I’ve met are not able to cope with everyday situations. Maybe they know how to optimize SQL queries, but they can’t fix a dripping tap. That’s rather unfortunate, as it means that such folks are forever dependent on others. On the other hand, I’ve often observed that principles from other fields can be applied to software development as well, for instance to build better metaphors. Such metaphors play a major role in gaining a deeper understanding of software development and are very useful for explaining software issues to non-technical people. (As an example, I personally like comparing refactoring to gardening: if you don’t constantly take care of your garden by weeding, fertilizing, watering and mowing, it will require a huge investment of time and money later.)

So step out of the computer-nerd zone and be a jack-of-all-trades instead; try to be a true engineer, a person who is able to solve all kinds of technical problems with technical and scientific knowledge and creativity — for your own benefit as well as the benefit of the people around you, but also for fun and profit.

[update 2009-10-29: Alert reader Jörg (obviously a true engineer) discovered an embarrassing mistake: if you connect resistors in parallel, the resulting resistance is of course smaller than the smallest of the individual resistances, which means that part of my story just doesn’t make sense. Damn! I checked whether the resistors in my PC are really connected in parallel — which they are. I tried hard to recall what the real story was but ultimately gave up. The hack works — that’s what counts in the end, doesn’t it? ;-)
— end update]