In his talk “TDD, Where Did It All Go Wrong”, Ian Cooper champions Kent Beck’s original intention behind unit testing in TDD (to test behaviours, not the methods of classes specifically) and argues for avoiding coupling tests to the implementation.
In the case of a behaviour like “save X to some data source” in a system with a typical set of services and repositories, how can we unit test the saving of some data at the service level, through the repository, without coupling the test to implementation details (like calling a specific method)? Or is avoiding this kind of coupling actually not worth the effort, or even bad in some way?
2
Your specific example is a case you usually have to test by checking whether a certain method was called, because “saving X to a data source” means communicating with an external dependency, so the behavior you have to test is that this communication occurs as expected.
However, this isn’t a bad thing. The boundary interfaces between your application and its external dependencies are not implementation details; in fact, they are defined in the architecture of your system, which means such a boundary is unlikely to change (or if it must, it will be the least frequent kind of change). Thus, coupling your tests to a repository interface should not cause you too much trouble (if it does, consider whether the interface is stealing responsibilities from the application).
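As a minimal sketch of such an interaction test, assuming a hypothetical `OrderService` that fulfils “save X” by delegating to an `OrderRepository` port (all names here are invented for illustration), using JUnit 5 and Mockito:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

    // Hypothetical boundary interface to the external data source.
    interface OrderRepository {
        void save(String orderId);
    }

    // Hypothetical service that "saves X" by delegating to the repository port.
    static class OrderService {
        private final OrderRepository repository;

        OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        void place(String orderId) {
            repository.save(orderId);
        }
    }

    @Test
    void placingAnOrderSavesItThroughTheRepository() {
        OrderRepository repository = mock(OrderRepository.class);
        OrderService service = new OrderService(repository);

        service.place("order-42");

        // The behavior under test is the communication across the boundary:
        // the service asked its repository to save the order.
        verify(repository).save("order-42");
    }
}
```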
Now, consider only the business rules of an application, decoupled from UI, databases and other external services. This is where you must be free to change both the structure and the behavior of the code. This is where coupling tests to implementation details will force you to change more test code than production code, even when the overall behavior of the application has not changed. And this is where testing State instead of Interaction helps us go faster.
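By contrast, a state-based test of a pure business rule needs no mocking tool at all. For example, with a hypothetical `ShoppingCart` (my own invention, not from the talk or the question):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class ShoppingCartTest {

    // A minimal, hypothetical business object; no UI, database or external service involved.
    static class ShoppingCart {
        private final List<Integer> pricesInCents = new ArrayList<>();

        void add(String name, int priceInCents) {
            pricesInCents.add(priceInCents);
        }

        int total() {
            return pricesInCents.stream().mapToInt(Integer::intValue).sum();
        }
    }

    @Test
    void totalReflectsAddedItems() {
        ShoppingCart cart = new ShoppingCart();

        cart.add("book", 2000); // prices in cents
        cart.add("pen", 150);

        // Assert on observable state only; the internal representation
        // of the cart can be restructured without touching this test.
        assertEquals(2150, cart.total());
    }
}
```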
PS: It’s not my intention to claim that either testing by State or testing by Interaction is the one true way of doing TDD; I believe it’s a matter of using the right tool for the right job.
4
My interpretation of that talk is:
- test components, not classes.
- test components through their interface ports.
It’s not stated in the talk, but I think the assumed context for the advice is something like:
- you are developing a system for users, not, say, a utility library or framework.
- the goal of testing is to successfully deliver as much as possible within a competitive budget.
- components are written in a single, mature, probably statically typed, language like C#/Java.
- a component is of the order of 10,000-50,000 lines; a Maven or VS project, OSGi plugin, etc.
- components are written by a single developer, or a closely integrated team.
- you are following the terminology and approach of something like the hexagonal architecture.
- a component port is where you leave the local language, and its type system, behind, switching to HTTP/SQL/XML/bytes/…
- wrapping every port are typed interfaces, in the Java/C# sense, which can have implementations switched out to switch technologies.
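For instance, a port for document storage might be wrapped in a typed interface like the following (the names are my own sketch, not from the talk); the SQL-speaking adapter lives behind it and can be swapped out without touching the rest of the component:

```java
// A minimal document type for the sketch.
record Document(String id, String text) {}

// The port, expressed as a typed interface in the component's own language.
interface DocumentStore {
    void save(Document document);
    Document load(String id);
}

// One implementation crosses the boundary and speaks SQL;
// switching technologies means switching this class, not its callers.
class JdbcDocumentStore implements DocumentStore {
    @Override
    public void save(Document document) {
        // INSERT/UPDATE via JDBC (details omitted)
    }

    @Override
    public Document load(String id) {
        // SELECT via JDBC (details omitted)
        return null;
    }
}
```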
So testing a component is the largest possible scope in which something can still be reasonably called unit testing. This is rather different from how some people, especially academics, use the term. It’s nothing like the examples in the typical unit test tool tutorial. It does, however, match its origin in hardware testing; boards and modules are unit tested, not wires and screws. Or at least you don’t build a mock Boeing to test a screw…
Extrapolating from that, and throwing in some of my own thoughts:
- every interface is going to be either an input, an output, or a collaborator (like a database).
- you test the input interfaces; call the methods, assert the return values.
- you mock the output interfaces; verify the expected methods are called for a given test case.
- you fake the collaborators; provide a simple but working implementation.
If you do that properly and cleanly, you barely need a mocking tool; it only gets used a few times per system.
A database is generally a collaborator, so it gets faked rather than mocked. This would be painful to implement by hand; luckily such things already exist.
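When the port is as narrow as the `DocumentStore` sketched above, a hand-rolled fake can be only a few lines (for a full SQL surface you would indeed reach for an existing in-memory or embedded database instead):

```java
import java.util.HashMap;
import java.util.Map;

// A simple but working implementation of the port, kept in the test code.
class InMemoryDocumentStore implements DocumentStore {
    private final Map<String, Document> documents = new HashMap<>();

    @Override
    public void save(Document document) {
        documents.put(document.id(), document);
    }

    @Override
    public Document load(String id) {
        return documents.get(id);
    }
}
```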
The basic test pattern is: do some sequence of operations (e.g. save and reload of a document) and confirm it works. This is the same as for any other test scenario; no (working) implementation change is likely to cause such a test to fail.
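With the fake collaborator injected behind a hypothetical `DocumentService` (another invented name), such a round-trip test might look like this:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DocumentRoundTripTest {

    @Test
    void savedDocumentCanBeReloaded() {
        // Hypothetical component under test, with the fake collaborator injected.
        DocumentService service = new DocumentService(new InMemoryDocumentStore());

        service.save(new Document("doc-1", "hello"));
        Document reloaded = service.load("doc-1");

        // Only observable behaviour is asserted; a working implementation
        // change inside the component cannot make this test fail.
        assertEquals("hello", reloaded.text());
    }
}
```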
The exception is where database records are written but never read by the system under test, e.g. audit logs or similar. These are outputs, and so should be mocked. The test pattern is: do some sequence of operations and confirm the audit interface was called with the methods and arguments specified.
Note that even here, provided you are using a type-safe mocking tool like Mockito, renaming an interface method cannot cause a test failure. If you use an IDE with the tests loaded, the mock will be refactored along with the method rename. If you don’t, the test won’t compile.
10
My suggestion is to use a state-based testing approach:
GIVEN
We have the test DB in a known state
WHEN
The service is called with arguments X
THEN
Assert that the DB has changed from its original state to the expected state by calling read-only repository methods and checking their returned values
That way, you don’t rely on any internal algorithm of the service and are free to refactor its implementation without having to change the tests.
The only coupling here is to the service method call and the repository calls needed to read data from the DB, which is fine.
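A sketch of what that could look like, with invented names (`AccountService`, `AccountRepository`, `Account`) and an in-memory fake standing in for the test DB:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class AccountServiceTest {

    // Hypothetical collaborators: a repository backed by a test database
    // (or an in-memory fake) and the service under test.
    private AccountRepository repository;
    private AccountService service;

    @BeforeEach
    void givenTheTestDbIsInAKnownState() {
        repository = new InMemoryAccountRepository();
        repository.save(new Account("acc-1", 100));
        service = new AccountService(repository);
    }

    @Test
    void depositIsPersisted() {
        // WHEN the service is called with arguments X
        service.deposit("acc-1", 50);

        // THEN the DB has changed to the expected state, observed only
        // through read-only repository calls.
        assertEquals(150, repository.findById("acc-1").balance());
    }
}
```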