I was so proud: after I had gotten rid of some minor compile-time issues (i.e. typos), my unit tests ran over my newly written code without any errors. Granted, the changes I made comprised fewer than 500 lines, but still, it meant something to me. Feeling happy and content, I hummed R. Kelly’s “The World’s Greatest” while carrying on with other work.
A few days later, I wrote some black-box tests and — to my great surprise — got a couple of “fails”. After some debugging, I found more than five bugs in the code that had passed my unit tests so nicely. I was completely puzzled. What had gone wrong? Why hadn’t my unit tests caught these trivial bugs?
As it turned out, I forgot to register my test code with the CppUnit test framework, so my tests were not executed at all! Once I had added the missing line
```cpp
CPPUNIT_TEST(TestDemodulationCoupling);
```
to my test suite class, all five bugs surfaced in an instant. I was so angry! My first reaction was to curse CppUnit: with JUnit this would not have happened. I would have used the @Test annotation and my test would have been auto-registered — unless I had forgotten to tag it with the @Test annotation…
Later that day, I realized that the actual mistake was a violation of Steve Maguire’s powerful principle: “Step through every line of code that you added or changed with your debugger”. Had I set a breakpoint in my code, I would have seen that it was never executed.
Years ago, I used to be a passionate follower of this principle, but somehow unlearned it, largely — I presume — due to the rising unit testing hype. Don’t get me wrong: I think that unit testing is great (and black-box testing is great, too), but it is no replacement for single-stepping through your code.
Reviewing your own code is good, but actually stepping through it is much cooler. The cursor showing the next statement to be executed focuses your attention, and you really experience the program flow instead of having to guess at it. Further, you have all the data available and you can even modify it. You can invoke functions from your debugger (e.g. ‘call myfunc()’ in gdb), play with different combinations of parameters, member variables and the like, and re-execute code you have just executed, without restarting the debugger, by setting the “next statement to execute” a couple of lines up. Probably the biggest benefit is that you gain a deeper understanding of your code: maybe you step over a library call that works as expected but takes two seconds to execute; or you observe that you unnecessarily visit the remaining elements of a collection after you have found what you were looking for — no unit test would give you this kind of insight.
Often, it is difficult to unit test for certain failure causes, like malloc() returning NULL on out-of-memory conditions:
```cpp
if ((p = (char*)malloc(2048)) == NULL) {
    // Handle out-of-memory.
    ...
}
```
How would you unit test that? Such error handling code is usually left untested and is the reason why so much software crashes under heavy load. While you’re in a debugger, testing is easy: just set the “next statement to execute” to the error-handling code (right before stepping over the call to malloc), step through it and convince yourself that it works as expected. Again, how would you unit test that? Answer: factor out the error-handling code:
```cpp
void HandleOutOfMemory(/* context */) {
    ...
}

if ((p = (char*)malloc(2048)) == NULL) {
    HandleOutOfMemory(/* context */);
}
```
Now, you can call your error handling code from your unit tests. Still, testing the code by using the debugger is easier, doesn’t require any context set-up and gives more insight.
It helps, of course, if you write your code such that debugging is as painless as possible. A line like this is perfectly fine:
```cpp
double convReading = convertSensorReading(Sensor::current().reading(), scalingFactor());
```
but writing it like this is (probably) more readable and you can inspect (and alter) intermediate values in your debugger:
```cpp
Sensor currSensor = Sensor::current();
double currReading = currSensor.reading();
double convReading = convertSensorReading(currReading, scalingFactor());
```
If you think this is too much typing, get better at typing and/or get yourself a better editor. If you think this wastes code, bear in mind that we don’t live in the 1970s anymore. If you think that you can always step inside convertSensorReading() and inspect/change the parameters there, you are right, at least as long as you have access to the source code of the function you want to step into.
Macros are bad since you cannot step into them. Use them only if you have no other choice; prefer (inline) functions and function templates instead: they come with the same efficiency advantages, and you get type-safety and debuggability as a bonus.
And, speaking of the preprocessor, stop using #define’d symbolic constants: preprocessor symbols are substituted textually during the preprocessing phase, and I don’t know of any debugger that can resolve their values. Instead, use enums or, even better, const variables:
```cpp
#define MIN_COUNT 23       // bad
const int MAX_COUNT = 42;  // good
...
if (MIN_COUNT <= count && count <= MAX_COUNT)
```
Mouse over MIN_COUNT in your debugger and you will see nothing; mouse over MAX_COUNT and you will get “the answer” ;-)
Automated unit tests are great, but stepping through your code gives quick feedback and a lot of insight into what is happening at run-time. Sometimes, hard-to-write unit tests can be avoided by consistently following the “step through all of your code” paradigm. As a simple guideline, write unit tests — if you like, even before starting with the implementation. Then single-step through your code by executing your unit tests in a debugger, and watch your step.