For now I'm writing tidbits about computer programming, the plan is to actually blog at some point in time.

Wednesday, April 09, 2008

TDD and code coverage tools

Arguments about how important the use of code coverage tools is are not scarce. I don't want to participate in any of those. I merely want to show that using a coverage tool while doing TDD will increase the probability of leaving no stone unturned.

The generally recommended way to do TDD looks like something along these lines:

1 - write a failing test case
2 - write the simplest code modifications that will make the test pass
3 - we are done if we can't think of more test cases to write
4 - go back to (1)
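As a concrete run through one cycle of the loop above, here is a toy example (the class and method names are illustrative, not from any real project). Plain if/throw checks stand in for JUnit assertions so the sketch is self-contained:

```java
// One cycle of the TDD loop, using a hypothetical Csv.join method.
public class Csv {

    // Step 2: the simplest code that makes the checks below pass.
    public static String join(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(parts[i]);
        }
        return sb.toString();
    }

    // Step 1: the test, written first (it failed to compile,
    // then failed, then passed once join above was written).
    public static void main(String[] args) {
        check(join(new String[] {"a", "b"}).equals("a,b"));
        check(join(new String[] {}).equals(""));
        System.out.println("all tests pass");
    }

    private static void check(boolean ok) {
        if (!ok) throw new AssertionError("test failed");
    }
}
```

At this point step 3 asks: can we think of another test case (say, a single element)? If yes, back to step 1.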

If we stick absolutely to the recommended way, chances are that our test cases cover the totality of the code we write to make them pass. Unfortunately, I would venture to say that in step 2 we tend to write more code than we should, and as a result we end up with code that is not tested.
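A minimal sketch of how this happens (hypothetical names): in step 2 the developer adds a speculative null check that no test case demanded, so that branch is never executed and a coverage tool such as EclEmma would show the class below 100%.

```java
// Illustrates untested code produced by an over-eager step 2.
public class Greeter {

    public static String greet(String name) {
        if (name == null) {
            // Speculative branch: no test case drives this line.
            return "Hello, stranger";
        }
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // The only test written so far; the null branch stays red.
        if (!greet("Ada").equals("Hello, Ada")) {
            throw new AssertionError("test failed");
        }
        System.out.println("test passes, yet coverage is below 100%");
    }
}
```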

Ending up with code that is not tested is precisely one of the things TDD tries to make sure we avoid.

The good thing is that, with the help of a code coverage tool, it is very easy to make sure we don't have untested code. All we have to do is modify step 3 to look like:

3 - we are done if we can't think of more test cases to write, and the code coverage of the class we are actually writing is 100% when all the tests for that class are run.

---
EclEmma, a Java Code Coverage Plugin For Eclipse

3 comments:

Anonymous said...

using a coverage tool while doing TDD will increase the probability of not leaving stones unturned.

Indeed, but this reminds me of Dijkstra's famous quote: "Program testing can be used to show the presence of bugs, but never to show their absence."

In Step 2 we wrote the simplest thing that could possibly work, but we need to be wary of "complexity creep" - the natural tendency towards doing the easiest thing that can possibly work.

Introducing complexity metrics in addition to code coverage could be an interesting enhancement. So, step 3 becomes: "we are done if we can't think of more test cases to write, the code coverage of the class we are actually writing is 100% when all the tests for that class are run, and the complexity of the class is below the watermark our team has agreed upon", where watermark is a value less than 25.
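For what it's worth, a tool like Checkstyle (not mentioned in the post, so this is a swapped-in suggestion) can already enforce such a watermark via its CyclomaticComplexity check; a fragment of its XML configuration might look like:

```xml
<!-- Hypothetical Checkstyle fragment enforcing the agreed
     complexity watermark (25, per the comment above). -->
<module name="TreeWalker">
  <module name="CyclomaticComplexity">
    <property name="max" value="25"/>
  </module>
</module>
```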

Alexei Guevara said...

I like your enhancement. My concern, though, is that checking the code coverage metric plus the complexity metric might be too much for a developer, and it might interfere with the normal flow of coding, since developers would have to verify themselves that the metrics are within the values the team has agreed on (expecting the developer to check the code coverage metric alone might be too much already).

Having said that, the solution here might be to create better tooling support rather than giving up altogether.

I'm outlining below the workflow a hypothetical tool based on Mylyn would have to support.

- the developer marks a Mylyn task as active
- the tool starts listening for newly created classes that are not test cases (this should be easy enough if JUnit is used for unit testing). (1)
- every time the code is saved/compiled, all the test cases whose target is one of the newly created classes discovered in (1) are executed, and code coverage and code complexity metrics are collected.
- code coverage and code complexity violations are made visible to the developer (using the red bar/indicator metaphor), and we might want to go as far as not allowing commits until these violations are fixed.

note:
- finding newly created classes that are not test cases will be much easier if, for example, the project layout follows the one recommended by Maven, where test cases are located in a specific folder, separate from the one where the main code lives.
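Under that assumption, telling test sources from main code reduces to a path check; a minimal sketch (the helper and its name are hypothetical, not part of any real tool):

```java
// Distinguishes test classes from main code purely by source path,
// relying on the Maven standard directory layout:
//   production code under src/main/java, tests under src/test/java.
public class SourceKind {

    public static boolean isTestSource(String path) {
        // Normalize Windows separators before matching the layout.
        return path.replace('\\', '/').contains("/src/test/java/");
    }

    public static void main(String[] args) {
        if (!isTestSource("demo/src/test/java/FooTest.java")
                || isTestSource("demo/src/main/java/Foo.java")) {
            throw new AssertionError("test failed");
        }
        System.out.println("ok");
    }
}
```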

Anonymous said...

Well said.