If you measure test coverage, aim for 100%

When it comes to unit testing, there are three schools of thought that I'm aware of:

  1. We don't do unit testing, since it is overrated/too hard/takes too much time/whatever.
  2. We only test critical/error-prone parts of our code, and bugs that have been fixed.
  3. We want a significant part of our code to be covered by tests.

I'm usually in the last camp: I think that automated tests are one of the most effective ways to ensure code quality on a long-running project.

This strategy, however, is often accompanied by a policy of maintaining at least a certain level of code coverage. And since there are usually some parts of the codebase where the benefits of bringing them under test are considered not to outweigh the effort, unit test policies usually stipulate a minimum code coverage of 80%, 90%, or 95%.

I'd like to argue that the best practice is to require code coverage analysis to report 100% coverage.

Why not <100%?

When a measure becomes a target, it ceases to be a good measure.

Goodhart's Law

The risk of requiring a certain level of coverage is that achieving that level becomes the goal, rather than writing proper tests. And what is the easiest way to achieve 80% code coverage? Simple: write tests for the 80% of your code that is easiest to test.

However, you do not want to test the code that is easiest to test: you want to test the code that is most important to test. In other words, whether a certain line of code is worth the effort of writing a test should be a conscious decision, rather than certain parts going untested simply because the other parts were enough to meet the required coverage metric.

Reaching 100% without losing your mind

Code coverage reports should offer guidance, not a goal to meet. More specifically, they can help you find out which parts of your code you actually forgot to test. It regularly happens that there are unhappy paths that I did not think of when writing tests, but that obviously needed to be handled when writing the code. When anything below 100% coverage is configured to be insufficient, my testing tools will point out that those parts still need a test.
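As a concrete illustration, here is a minimal sketch of such a configuration, assuming a JavaScript project tested with Jest (most other coverage tools offer an equivalent option):

```js
// jest.config.js -- a minimal sketch, assuming a Jest-based project.
// With these thresholds, `jest --coverage` fails as soon as any
// statement, branch, function or line is left uncovered, pointing
// out exactly which code you forgot to test.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 100,
      branches: 100,
      functions: 100,
      lines: 100,
    },
  },
};
```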

Of course, it is still likely that there are parts of your code for which the benefits of testing do not outweigh the costs. However, rather than lowering my coverage target by some arbitrary number, I mark those parts as irrelevant for the coverage report, with a comment explaining the reasoning behind not testing them.
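For example, Istanbul (the coverage provider Jest uses under the hood) supports ignore hints for exactly this. A sketch, where the `istanbul ignore` pragma is real Istanbul syntax and the surrounding function is hypothetical:

```js
/* istanbul ignore next -- only reachable when the OS misreports
   free disk space; reproducing that in a test is not worth the effort. */
function handleCorruptDiskState() {
  throw new Error("Disk state is corrupt; please retry after a reboot.");
}
```

The trailing text after the pragma has no effect on the tooling; it simply documents, right next to the exclusion, why this code was consciously left untested.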

In other words: I don't want 100% of my code to be covered by unit tests, but I at least want 100% of my code to have been considered for unit tests. That way, what to test is still left to the programmer's best judgment, and code coverage analysis becomes helpful rather than yet another tool to satisfy.

Now, I haven't seen the above described as a best practice before, so I'm very much interested in what people have to say about it. That said, regardless of the testing strategy you use, I think developer buy-in is always a good starting point. The tooling we use should help us achieve our goals, rather than set them for us.

License

This work by Vincent Tunru is licensed under a Creative Commons Attribution 4.0 International License.

