SOFTWARE ENGINEERING blog & .lessons_learned
manuel aldana

February 6th, 2011 · 4 Comments

Unit-Testing: Situations when NOT to do it

I am a big fan and practitioner of automated unit-testing, but throughout the years I have learned my lessons. Starting with an “everything has to be automatically tested” attitude, over the years I experienced situations where unit-testing is not the optimal approach.

The presented sections go along with my favorite test-smells:

  1. Brittle tests: Though the functionality hasn’t changed, the test fails. The test should show green but in fact shows red (a false positive).
  2. Inefficient tests: The effort of writing the automated test doesn’t pay off at all; the benefit/cost ratio (short + long term) is extremely low.
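A brittle test typically pins down implementation details instead of observable behaviour. A minimal Python sketch (class and method names are hypothetical, mocking done with the standard library’s unittest.mock):

```python
from unittest import mock

# Hypothetical collaborator and service under test.
class AuditLog:
    def record(self, event):
        pass

class Transfer:
    def __init__(self, log):
        self.log = log

    def run(self, amount):
        self.log.record(f"start {amount}")
        self.log.record("done")
        return amount

# Behaviour-oriented test: asserts only the observable result,
# so it survives internal refactorings of run().
def test_run_returns_amount():
    assert Transfer(AuditLog()).run(10) == 10

# Brittle test: pins the exact number and order of internal log calls.
# Merging the two record() calls into one keeps the behaviour identical,
# yet this test turns red -- a false positive.
def test_run_logs_exact_calls():
    log = mock.Mock()
    Transfer(log).run(10)
    assert log.record.call_args_list == [mock.call("start 10"),
                                         mock.call("done")]
```

Both tests pass today, but only the first one keeps passing after harmless refactorings.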

Unit-Test little scripts/tools

It often makes no sense to write unit-tests for little scripts or tools which are one- or two-liners. The script content is already so declarative, short and compact that the code is too simple to break. Furthermore, stubbing or mocking the dependencies is often tough (e.g. writing to stdout/a file, shutting down a machine, doing an HTTP call). You can end up writing an external-system emulator, which is overkill in this situation. Surely testing is important, but here I go the manual way (executing the script and smoke-testing / sanity-checking the outcome).
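As a hypothetical illustration, a “tool” of this size is already too simple to break; a unit test would mostly consist of capturing stdout and faking the clock, i.e. more code than the tool itself:

```python
import datetime

# Hypothetical two-liner tool: print how many days are left in the year.
# The logic is a single declarative expression; the only "dependencies"
# are the system clock and stdout, both awkward to stub for a unit test.
today = datetime.date.today()
days_left = (datetime.date(today.year, 12, 31) - today).days
print(days_left)
```

A manual smoke-test (run it, eyeball the number) covers this faster than any mock setup would.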

Unit-Test high level orchestration services

Orchestration services have many dependencies and chain-call lower services. The effort of writing such unit-tests is very high: stubbing/mocking all these outgoing dependencies is tough, and the test setup logic can get very complex and make your test-code hard to read and understand. Furthermore, these tests tend to be very brittle, e.g. minor refactorings of production code will break them. The main reason is that the test-code has to encode a lot of implementation-detail knowledge to make stubbing/mocking work. You can argue that having many fan-out/outgoing dependencies is a bad smell and you should refactor from the start. This is true in some cases, but higher-level services often have the nature of orchestrating lower ones, so refactoring won’t change/simplify much and may even make the design more complicated. In the end, for such high-level services I much prefer to cover them with automated or non-automated acceptance tests.
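A sketch of the problem, with a hypothetical CheckoutService orchestrating three lower services: every outgoing call has to be stubbed, and the test ends up mirroring the call chain, i.e. the implementation:

```python
from unittest import mock

# Hypothetical orchestration service: chain-calls three lower services.
class CheckoutService:
    def __init__(self, inventory, payment, mailer):
        self.inventory = inventory
        self.payment = payment
        self.mailer = mailer

    def checkout(self, order):
        self.inventory.reserve(order["items"])
        receipt = self.payment.charge(order["total"])
        self.mailer.send_receipt(order["customer"], receipt)
        return receipt

# The unit test must mock every dependency and knows the exact call
# chain -- reordering or merging internal calls breaks it even though
# checkout() still behaves correctly from the outside.
def test_checkout():
    inventory, payment, mailer = mock.Mock(), mock.Mock(), mock.Mock()
    payment.charge.return_value = "receipt-42"
    service = CheckoutService(inventory, payment, mailer)
    order = {"items": ["book"], "total": 10, "customer": "ada"}
    assert service.checkout(order) == "receipt-42"
    inventory.reserve.assert_called_once_with(["book"])
    mailer.send_receipt.assert_called_once_with("ada", "receipt-42")
```

With three dependencies the setup is already half the test; real orchestration services have many more, which is why an acceptance test against the whole chain is usually the better deal.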

Test-first during unclear Macro-Design

When implementing a feature from scratch, the macro-design is often blurry; I like to call this phase “diving-in”. During diving-in development or quick prototyping you get a feeling for which design fits and which doesn’t. In this phase class structures/interactions change a lot; sometimes even big chunks of code are thrown away and you restart. Such wide code changes/deletions will often break your tests, and you have to adapt or even delete them. In these situations the test-first approach doesn’t work for me; writing test-code even distracts me and slows me down. Yes, unit-tests and the test-first approach can and should guide your design, but in my experience this counts more once the bigger design decisions have been settled.

100% Code-Coverage

I can’t overstate this: Code-Coverage != Test-Coverage. The code-coverage of unit-tests is a nice metric to spot untested code, but it is by far not enough. It just tells you that the code has been executed and says nothing about the assert part of your test. Without proper asserts, which check the side-effects of your production code against the expected behaviour, a test gives zero value. You can reach 100% code-coverage without having tested anything at all. In the end this false feeling of security is much worse than having no tests at all! Furthermore, 100% code-coverage is inefficient, because you will test a lot of code which is “too simple to break” (e.g. getters/setters, simple constructors + factory-methods).


The above points shouldn’t give you the impression that I speak against automated unit-tests; I think they are great: they guide you to write incremental changes and help you to focus, you get an affinity for green colors ;), they are cheap to execute, and regression testing gives you the security of not breaking things and more courage to refactor. Still, the attitude that you have to go for 100% code-coverage and test every code snippet will kill the testing culture and end up in red-green color blindness.

Tags: Continuous Integration · Software Engineering · Software Maintenance

4 responses

  • 1 Major Unit-Testing Pitfalls and Anti-Patterns // Feb 6, 2011 at 1:54 pm

    [...] I became more pragmatic and experienced major test pitfalls. The presented pitfalls go… [full post] [...]

  • 2 Sandeep // Feb 10, 2011 at 3:54 pm

    Do you have some link/info about what should and what should not be tested? I mean, should the functionality be tested or the methods of the classes?

  • 3 manuel aldana // Feb 11, 2011 at 6:16 pm

    You should always test the functionality of the implementation and NOT the entry point, aka the method (therefore I also think test-generation tools which generate one test-method per production-code method are nonsense). Methods internally “spawn” the real behaviour and the code paths to be tested.

    For a much deeper reference I can recommend

  • 4 Software Development // May 13, 2011 at 12:53 am

    Good post on testing. I happen to prefer manual testing as well. I think that test automation is not always accurate and some important aspects can be missed. Testing can also be handed to users with little or no knowledge of programming, as they can break things easily, giving the developer the opportunity to see where some wrong keystrokes can break their code. Sometimes this can be the only way to find some bugs. Thanks for the post.
