SOFTWARE ENGINEERING blog & .lessons_learned
Manuel Aldana

June 16th, 2008

Avoiding xUnit test-errors (false positives, false negatives)

You use unit-tests to ensure that production code works as defined or specified from the class-level view. This way you get feedback on whether your implementation works as wanted (green bar = success) or not (red bar = fail). Unfortunately tests are hand-crafted work too and can contain bugs. The following article shows what kinds of test-errors exist and what preventive actions can be taken.

Annoyance of test-errors

Test-errors are very annoying because your tests should be the impartial authority that says whether your production code works or not. If you cannot rely on them and they are constantly lying to you, you quickly form the opinion that tests don’t help you but merely slow down your work. In fact I have noticed that many developers new to unit-tests get frustrated by test-errors and stop writing tests altogether. This way they lose the advantages of a good test-suite and of Test Driven Development, which is a shame. Test-errors roughly occur in two forms: false negatives and false positives.

False positive test

A false positive test gives you a failure though the feature behaves alright. You change or create some code, everything compiles, but the corresponding test gives you a failure. After debugging for a while you see that the test made wrong assumptions about the feature, so your test is inconsistent with the specified behaviour.

Very simplified example:

public static int sum(int x, int y){
  return x + y;
}

public void testSum(){
  assertEquals(3, sum(1, 1)); // fails though class under test is alright
}

I sometimes experience false positive tests when using mock-frameworks (like EasyMock), because when injecting mocks into the class under test you expose bits of its implementation details to the test by recording behaviour on the mock. After changing the implementation of the class under test slightly, the calls to your mocks can change as well. Without adjusting your mocks correspondingly, your test will most likely fail.
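To make this concrete, here is a minimal hand-rolled sketch of the problem (EasyMock itself is left out; UserStore, MockUserStore and UserService are invented names for illustration). The mock's expectation records which method the implementation calls, so a behaviour-preserving refactoring breaks the test:

```java
// A hand-rolled "mock" whose expectation encodes an implementation
// detail: it insists that exactly load(42) is called.

interface UserStore {
    String load(int id);
    String loadAll();          // alternative way to get at the same data
}

class MockUserStore implements UserStore {
    public String load(int id) {
        if (id != 42) throw new AssertionError("unexpected id: " + id);
        return "alice";
    }
    public String loadAll() {
        // the mock was never told to expect this call, so it fails
        throw new AssertionError("unexpected call: loadAll()");
    }
}

class UserService {
    private final UserStore store;
    UserService(UserStore store) { this.store = store; }
    // If this is refactored to use loadAll() instead, behaviour stays
    // correct but the mock above makes the test fail: a false positive.
    String userName() { return store.load(42); }
}

public class Main {
    public static void main(String[] args) {
        UserService service = new UserService(new MockUserStore());
        System.out.println(service.userName()); // prints "alice"
    }
}
```

With EasyMock the recording would look similar in spirit (expect a specific call, replay, verify); the brittleness is the same either way.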

Another frequent cause for a false positive is a wrong test setup, e.g. you pass wrong instances (or none at all) to the class under test, so it behaves differently than expected from the test case's point of view. Example: you pass an implementation to the class under test which connects to a file-database and wants to read data. The database is not available and the test fails. Here you made a test setup mistake: you actually should have passed a persistence stub which returns appropriate values for the test.
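As a hedged sketch of the correct setup (Persistence, StubPersistence and Greeter are made-up names), the test passes a stub instead of the real file-database implementation:

```java
// The class under test depends on an abstraction, so the test can
// supply a stub with canned values instead of the file-database.

interface Persistence {
    String findName(int id);
}

class FileDbPersistence implements Persistence {
    public String findName(int id) {
        // would open a file-database here; unavailable in a unit test
        throw new IllegalStateException("database not available");
    }
}

class StubPersistence implements Persistence {
    public String findName(int id) { return "alice"; } // canned value
}

class Greeter {
    private final Persistence persistence;
    Greeter(Persistence p) { this.persistence = p; }
    String greet(int id) { return "Hello " + persistence.findName(id); }
}

public class Main {
    public static void main(String[] args) {
        // Correct setup: stub, not the real database implementation.
        Greeter greeter = new Greeter(new StubPersistence());
        System.out.println(greeter.greet(1)); // prints "Hello alice"
    }
}
```

Passing `new FileDbPersistence()` instead would make the test fail for a reason that has nothing to do with the greeting logic.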

Furthermore your design could be too tightly coupled, so that your class under test has many dependencies on other classes. Many other instances are then called indirectly and it is difficult to isolate your “real” test case. A change in a dependent class makes the test fail, though the class under test itself did not change and still behaves fine.

False negative test

A false negative test reports success although things are in fact broken. All your test expectations (= asserts) pass, but the production code has a bug.

Same simple example:

public static int sum(int x, int y){
  return x + 1; // bug: should be x + y
}

public void testSum(){
  assertEquals(2, sum(1, 1)); // passes though class under test has a bug
}

What I often see in false negative tests is that assert statements are formulated too weakly and demand too little from the class under test. In many test cases you find a rather meaningless assertNotNull(returnObject), and the properties of returnObject are not checked in any more detail.

Preference of test-error

The lesser evil of the two is definitely the false positive. Here I get instantly notified that there IS a test-error. With a false negative test I don’t get notified at all, and the overall ‘good’ feeling of having tests in place is very deceptive: you attribute more quality to your production code than it actually has.

What to do about test-errors

Awareness of both test-error categories is a good first step, but how can you avoid test-errors in general?

Suggestions to avoid false positives

An obvious cause for a false positive can be your test's assert statements: maybe they are just plain wrong because requirements have changed and the class under test has been adjusted accordingly. To avoid such assert statement mistakes you should develop test driven: when a requirement changes, first check whether the feature has a corresponding test case and adjust it; after that, change your production code. This way you avoid specification inconsistencies inside your test cases.

When looking at your production code, maybe your design is too tightly coupled and your tests cover just too many classes? In that case you should consider introducing dependency-injection to make it possible to pass alternative test implementations. Or maybe you have a monster method, so that “feature-isolation” is not possible either? Here you should consider extract refactorings (method, interface, class) to achieve isolation and loose coupling and to make stub/mock injection possible.
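The dependency-injection suggestion can be sketched as follows (TaxRates and PriceCalculator are invented names; the "before" variant is shown only in the comment):

```java
// Before (too tightly coupled): the class constructs its collaborator
// itself, so every test exercises the real one:
//
//   class PriceCalculator {
//       private final TaxRates rates = new RemoteTaxRates(); // hard-wired
//   }

interface TaxRates {
    int ratePercentFor(String country);
}

class PriceCalculator {
    private final TaxRates rates;
    // After: the collaborator is injected, so a test can pass a stub.
    PriceCalculator(TaxRates rates) { this.rates = rates; }
    int grossCents(int netCents, String country) {
        return netCents + netCents * rates.ratePercentFor(country) / 100;
    }
}

public class Main {
    public static void main(String[] args) {
        // Test-side stub via lambda: fixed 19% rate, no remote call.
        PriceCalculator calc = new PriceCalculator(country -> 19);
        System.out.println(calc.grossCents(10000, "DE")); // prints 11900
    }
}
```

Now the test isolates the price calculation itself; a change in the real TaxRates implementation can no longer turn this test red.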

Suggestions to avoid false negatives

Most likely your assert statements are too weak and you don’t use enough test-data. Consider adding asserts and invoking the class under test with alternative input values. The assertNotNull(returnObject) assert is often a good start to check whether the returnObject has been initialized at all, but if so, stronger asserts should follow. Always remember: apart from your application throwing an unexpected Exception inside the test-run, your test case will only fail if one of your asserts evaluates to false!
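A small sketch of weak versus strong asserts (parseUser and User are hypothetical; plain Java asserts stand in for xUnit ones, so run with java -ea to enable them):

```java
// Class under test: parses a "name:age" string into a User.
class User {
    final String name;
    final int age;
    User(String name, int age) { this.name = name; this.age = age; }
}

public class Main {
    static User parseUser(String raw) {
        String[] parts = raw.split(":");
        return new User(parts[0], Integer.parseInt(parts[1]));
    }

    public static void main(String[] args) {
        User u = parseUser("alice:30");
        // Weak: only checks that something was returned at all.
        assert u != null;
        // Stronger: pin down the actual properties...
        assert u.name.equals("alice");
        assert u.age == 30;
        // ...and exercise the class under test with alternative input.
        User v = parseUser("bob:7");
        assert v.name.equals("bob") && v.age == 7;
        System.out.println("all asserts passed");
    }
}
```

A buggy parseUser that always returned `new User("alice", 30)` would slip past the weak assert but be caught by the second input value.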

Tags: Continuous Integration · Software Engineering
