The 100% test coverage topic


When it comes to testing, I like 100% code coverage. Not everyone does, which can lead to discussions. So I decided to write down a few notes on the topic, as an easy intro and reference for those who are interested.

There are many reasons why 100% coverage reports are good. Far more than I can cover here. In this post, I focus on some of the basics.

Usual arguments (some of them)

Here are some of the arguments I often hear in this context.

Having good test coverage doesn’t mean your code is well tested

One of the most common arguments, and yes, I know. Everyone knows.

It’s up to the people responsible for the tests to ensure they explore good and bad code paths and the appropriate edge cases.

Full test coverage does not replace manual testing / whatever

Also a common argument, and yes, I know. Everyone knows.

No one I know has ever said that it is enough to have 100% coverage. The 100% coverage report is just one additional QA tool. IMHO, and for some people, an important one.

Do you really test all your getters and setters?

Another argument that is often brought up.

If your getters and setters never do anything and never will, why even have them? Use public member variables. Debuggers can break when a variable changes its value, so "I need it for breakpoints" is not an argument.

If getters and setters are meant to be future-proof, they are part of a contract with expected behavior, and that contract should be tested. If they do more than just set and read values, they should have tests in any case.

This may be C++ specific, but it can also apply to other languages. Do you check that default-constructed objects have proper or expected initial values? I have been bitten by that, and it hurt.

There might be more reasons why the 5 to 10 minutes per class it takes to test setting and getting property values is not a bad investment. But if it’s really too much work, just exclude them from coverage reports. More on that later.

Some code is not reachable via unit test

Yes, of course. And I agree that this can be a challenging topic, and there is no single answer or argument for all types of projects.

For some code parts to be reached, you need to run integration tests, or even manual tests.

But if there is no way to reach some code, it might be a good idea to think about removing those parts.
As a side note: Sometimes it is interesting how much code can be found in old legacy projects that do nothing other than confuse the people working with the project and cause huge costs.

There are always exceptions, and there are different types of code. GUI code is different from library code: it can be difficult to test in an automated way and to get good coverage reports for. For library code, if it’s not possible to reach 100% of the implementation via the public interface, there might be a problem.

It can be difficult to reach full coverage, even with the possibility of merging coverage reports from different testing stages. But that should not be a hindrance to aiming for as high a coverage as possible, and often that can be 100%.

I need to ship yesterday

Yes, sometimes some projects or customers require quick and dirty solutions. Write once, ship, and pray for the best.

I feel kind of sorry for people who are in these types of projects, where this is the norm and not the exception. But I can understand that such projects and situations exist and that they have special rules.

What 100% code coverage means

After having explored some of the existing discussion points, here is something very essential:
What does 100% code coverage mean?

The following is my interpretation, and opinions may vary. But I think I can share some common points.

The secret: 100% might not really be 100%

To me, 100% code coverage means: I want all code that I think should be reachable via tests covered. That is often 100% of a source file, but not always.

Most good coverage tools have options to ignore files, lines or blocks of code from coverage reports. (hello go tool cover 😉)

Therefore, for me the actual meaning of 100% code coverage is having an all green coverage report!

In terms of C++, for example, gcov creates coverage data and lcov generates coverage reports. I am focused on the lcov output, and that can be tweaked.

What I want is an all green lcov report at the end of the testing pipeline. (Hint: It is possible to merge coverage reports, so you can see the complete coverage of unit and integration tests in one report, and some other languages have similar tools)
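The capture-and-merge flow can be sketched with a few lcov commands. The directory and file names here are illustrative, not from a real project; the flags (`--capture`, `--add-tracefile`, `genhtml`) are standard lcov usage:

```shell
# Capture coverage data (gcov output) from two separate test stages.
lcov --capture --directory build/unit --output-file unit.info
lcov --capture --directory build/integration --output-file integration.info

# Merge both tracefiles into one combined tracefile.
lcov --add-tracefile unit.info --add-tracefile integration.info \
     --output-file merged.info

# Render the (hopefully all green) HTML report.
genhtml merged.info --output-directory coverage-html
```

The merged report then shows what unit and integration tests cover together, which is exactly the end-of-pipeline view described above.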

But, what’s that all about then?

Now you might wonder, if 100% is not always 100%, why is there even some discussion on that topic? Why even bother?

To me, it is a mental question and a question of motivation. If I write tests with a goal of 100%, I will reach better results than with no goal at all. And more often than not, 100% test coverage is possible. I have often found possible improvements, in the implementation and/or the tests, through aiming for 100%. It’s due to these experiences that I have this point of view.

All green coverage reports mean there is at least some test coverage for most of the code, which is better than nothing. That is good to know, definitely better than not knowing what is tested and what is not. And having some metrics is better than having none.

Seeing in code review (reading code) which parts can or should be ignored and which need tests is meaningful. And when testing someone else’s code, having the 100% goal can be a great help and motivator.

Reaching code via tests can bring ideas for more meaningful tests. And it can make you productive. Start a Monday with "let’s raise coverage!" That can trigger additional actions that would not have been triggered otherwise.

Maybe it’s hard for code authors to see the value of testing 'trivial' code parts. But there might be huge value for the next developer who has to work with that code base. Always think of the developer after you; good tests and good coverage will simplify their life.

Every additional QA tool in the pipeline is a win. Aiming for high test coverage is a QA tool.

Working on full test coverage can lead to surprising findings, some might save your day - or more.

And so on. There are more reasons why some people like full test coverage.

That all might not convince you, and that is OK

I am aware that this will not convince anyone on a team who thinks good coverage is not worth it.

I started to take that topic seriously after experiencing firsthand how aiming for complete coverage improved my code and design. And saved me from shipping problematic code. Several times. It was not a fast learning experience, but it was worth it.

There is, of course, also the chance that there is a type of developer who always writes perfect code, and whom full test coverage cannot improve much. It’s a small chance, but it might exist.

For all other developers, it might take some time, and maybe different types of projects, to see why some people prefer this way of working. Consistently applied, aiming for full coverage will give you some great moments. It will improve code quality, and maybe teach you something new about programming.

It can be like starting test-driven development: it might take a while until it stops feeling like a burden that makes development more expensive. It can take time until it feels natural and the return on investment becomes noticeable.

Just another QA and development methodology

To me, the 100% test coverage topic sums up to: I always want to have it; sometimes I cannot have it, but let’s try our best.

Like all tools and methodologies, used correctly it will bring value; used wrongly, it will most likely create frustration. It’s good to follow as a guideline, but not as a religious practice. It is a valuable goal to aim for, and very often easy to reach. So let’s just do it.

If you are reading this post because I gave you the link while we were talking about testing and coverage, I hope it helps you understand why I say: let’s try to reach 100% 😉.