I work on a small/medium-sized Java application. Over time, I’ve tried to ensure that the tests I write are “good” tests, and that there are plenty of them. Hence I’ve been looking into various, more rigorous testing methodologies, such as boundary value analysis and combinatorial testing.
However, most of these present examples that are very dry and don’t map well to my domain, since they mostly deal with functions of numerical data. My functions are almost never concerned with numerical data; they’re about enforcing business rules.
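For what it’s worth, combinatorial testing can be applied to categorical business-rule inputs rather than numbers: enumerate the cross product of small parameter domains and check each combination against an independent, hand-maintained oracle. A minimal sketch, where the rule, class names, and domains are all made up for illustration:

```java
import java.util.Set;

public class FreeShippingCombinationTest {

    enum CustomerType { GUEST, MEMBER, VIP }
    enum Region { DOMESTIC, INTERNATIONAL }

    // Illustrative rule under test: free shipping for VIPs anywhere,
    // and for members shipping domestically.
    static boolean freeShipping(CustomerType type, Region region) {
        return type == CustomerType.VIP
            || (type == CustomerType.MEMBER && region == Region.DOMESTIC);
    }

    public static void main(String[] args) {
        // Independent oracle: a hand-maintained table of the combinations
        // the business says qualify, kept separate from the implementation.
        Set<String> expectedFree = Set.of(
            "VIP/DOMESTIC", "VIP/INTERNATIONAL", "MEMBER/DOMESTIC");

        int checked = 0;
        for (CustomerType type : CustomerType.values()) {
            for (Region region : Region.values()) {
                boolean expected = expectedFree.contains(type + "/" + region);
                if (freeShipping(type, region) != expected) {
                    throw new AssertionError(
                        "rule broken for " + type + "/" + region);
                }
                checked++;
            }
        }
        System.out.println("checked " + checked + " combinations");
    }
}
```

With two or three small enums the full cross product is cheap; pairwise selection only becomes worthwhile once the number of parameters grows.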
The application is Hibernate/Spring, and my testing strategy is as follows:
- Every single class gets a unit test. Collaborators of each class are entirely mocked away at this point, and as far as is reasonable, the contract of that class is exercised. Usually with hand-picked test cases. These tests often run in seconds.
- A full Spring context is fired up by TestNG. All of a class’s collaborators are therefore present, and even the database access layer is active (with a real, on-disk database that is trashed at the end of the test). Once again, the system is exercised using hand-selected test cases. The start-up time for these tests often stretches to 20–30 seconds apiece, and there are enough of them to seriously impact the build time.
- The system is booted up, usually against a blank database, and then a copy of an end-user database. A developer “clicks around” each booted system in an attempt to elicit a failure. This takes a while, easily over 30 minutes.
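To make the first tier concrete, here is what one of those fully-mocked unit tests might look like, with the collaborator replaced by a hand-rolled stub so the business rule can be exercised in isolation (all class names and the rule itself are illustrative, not taken from the actual application):

```java
// Illustrative collaborator interface; in the real application this
// might be a Spring bean mocked with a framework like Mockito.
interface CreditCheckService {
    boolean isCreditworthy(String customerId);
}

class OrderService {
    private final CreditCheckService creditCheck;

    OrderService(CreditCheckService creditCheck) {
        this.creditCheck = creditCheck;
    }

    // Illustrative business rule: orders over 1000 require a credit check.
    boolean approve(String customerId, int amountCents) {
        if (amountCents > 1000) {
            return creditCheck.isCreditworthy(customerId);
        }
        return true;
    }
}

public class OrderServiceTest {
    public static void main(String[] args) {
        // Stub the collaborator as a lambda: credit always rejected.
        OrderService service = new OrderService(id -> false);

        if (!service.approve("c1", 500)) {   // small orders bypass the check
            throw new AssertionError("small order should be approved");
        }
        if (service.approve("c1", 5000)) {   // large orders need credit
            throw new AssertionError("large order should be rejected");
        }
        System.out.println("ok");
    }
}
```

Because the collaborator is a single-method interface, a lambda serves as the stub; tests like this run in milliseconds, which is what keeps the first tier fast.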
I worry that my testing regimen is insufficient, and that my tests are not as effective as their metrics would imply (it’s quite easy to get close to 100% coverage with this scheme). There is no formal guidance on how to select test cases, aside from “what the developer thinks are good tests.”
I’ve considered trying to express the behaviour of the system in something roughly like Z notation, and using that specification to drive test case production, but that is likely to be very costly in terms of time. Similarly, convincing my colleagues that there will be a reasonable return on investment of such an activity would be a very hard sell, and even I’m not convinced it’s worth it for our system.
There have been discussions about randomised tests, but they’ve been vague and very general. We rarely run into runtime errors/exceptions (NullPointerException, ArrayIndexOutOfBoundsException, etc.); it’s usually that the behaviour of the system is unacceptable to the client (i.e. in corner/degenerate cases the system behaves in a counter-intuitive, but completely consistent, manner).
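Randomised testing can still target that kind of behavioural defect if the tests assert a domain invariant rather than merely “no exception thrown.” A minimal sketch, where the rule and all names are hypothetical:

```java
import java.util.Random;

public class DiscountPropertyTest {

    // Hypothetical rule under test: 5% discount per loyalty year,
    // capped at 30%, and the price must never go negative.
    static double discountedPrice(double price, int loyaltyYears) {
        double rate = Math.min(0.30, loyaltyYears * 0.05);
        return price * (1.0 - rate);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed, so failures reproduce
        for (int i = 0; i < 10_000; i++) {
            double price = rnd.nextDouble() * 10_000;
            int years = rnd.nextInt(50);
            double result = discountedPrice(price, years);

            // Invariants: never negative, discount never exceeds 30%.
            if (result < 0 || result < price * 0.70 - 1e-9) {
                throw new AssertionError(
                    "rule violated for price=" + price + ", years=" + years);
            }
        }
        System.out.println("10000 random cases passed");
    }
}
```

The value here is not crash-hunting but sweeping far more of the input space than hand-picked cases would, while the invariant encodes what “acceptable to the client” means; libraries such as jqwik industrialise this pattern for Java.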
I would like to get the most “bang for my buck” from my tests; that is, how does one design test cases that are likely to expose defects like the ones described above?
Further to class-level unit tests, what is, in my opinion, a more grounded and pragmatic approach is to exercise all of the critical/high-value execution paths.
Identify your end users’ minimum required set of critical functionality and create a metric of criticality for it, then invoke those functions/operations at a (much) higher level than the individual class. Your test is a comparison of the resulting state transition against the expected state. For example, the state transition could be a change in a database table, an addition to a message queue, and so on: whatever the expected end result of the functionality under test is.
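As a sketch of what such a state-transition test might look like, here an in-memory queue stands in for a real message broker, and all the names are illustrative rather than taken from any particular system:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class ShipmentFlowTest {

    // Illustrative high-level service; in a real system this would sit
    // behind a Spring context and publish to an actual broker.
    static class ShipmentService {
        final Queue<String> outbox = new ArrayDeque<>();

        void ship(String orderId) {
            // ... the business logic of the critical path would run here ...
            outbox.add("SHIPPED:" + orderId);
        }
    }

    public static void main(String[] args) {
        ShipmentService service = new ShipmentService();

        // Invoke the operation well above the level of a single class.
        service.ship("order-42");

        // The test IS the state comparison: exactly one message,
        // with the expected payload, in the expected place.
        if (service.outbox.size() != 1
                || !"SHIPPED:order-42".equals(service.outbox.peek())) {
            throw new AssertionError("unexpected end state: " + service.outbox);
        }
        System.out.println("state transition as expected");
    }
}
```

The same shape works when the observable state is a database row instead of a queue entry: run the operation, then query and compare.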
Unit tests are good, but they are also expensive. Writing, and certainly maintaining, them in a fast-evolving product environment carries a serious and significant cost. Developers can also go to extremes testing at the unit level and, in the process, end up losing sight of the bigger picture.
100% coverage is a very subjective term as you already know.
I have always found that putting tests into the running code, even in production (ones that do not degrade performance), gives you more results for your testing money than the heavily isolated unit tests that are currently in fashion. The tests I speak of are based on the design of each method and class. In each important, externally accessed method:
- Test preconditions.
- Test postconditions.
- Test class invariants.
If you don’t know what the preconditions, postconditions, or invariants should be, then the problem lies in the design phase. In that case, I recommend reading more about these concepts and why they are important.
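A minimal sketch of such contract checks embedded in the running code, using a made-up account class (the names, amounts, and exception choices are all illustrative):

```java
public class Account {
    private long balanceCents;

    public Account(long openingBalanceCents) {
        this.balanceCents = openingBalanceCents;
        checkInvariant();
    }

    public void withdraw(long amountCents) {
        // Precondition: a positive amount the account can cover.
        if (amountCents <= 0 || amountCents > balanceCents) {
            throw new IllegalArgumentException(
                "bad withdrawal: " + amountCents);
        }
        long before = balanceCents;
        balanceCents -= amountCents;

        // Postcondition: the balance dropped by exactly the amount.
        if (balanceCents != before - amountCents) {
            throw new IllegalStateException("postcondition violated");
        }
        checkInvariant();
    }

    // Class invariant: the balance is never negative.
    private void checkInvariant() {
        if (balanceCents < 0) {
            throw new IllegalStateException(
                "invariant violated: " + balanceCents);
        }
    }

    public long balanceCents() {
        return balanceCents;
    }
}
```

Because the checks are plain conditionals rather than `assert` statements, they stay active in production without depending on the `-ea` JVM flag; if the cost matters, the postcondition and invariant checks can be gated behind a configuration switch.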
While this breaks the ideal that some strive for (isolated tests), any test of a higher-level component also exercises the lower-level components, so you get more testing accomplished overall. Specifying preconditions, etc., is also good for design in general. It is never good to ship code to customers without understanding its limitations.