So let’s say that I have a test:
import org.junit.Assert;
import org.junit.Test;

@Test
public void MoveY_MoveZero_DoesNotMove() {
    Point p = new Point(50.0, 50.0);
    p.MoveY(0.0);
    Assert.assertEquals(50.0, p.Y, 0.0);
}
This test then causes me to create the class Point:
public class Point {
    double X;
    double Y;

    public Point(double x, double y) { X = x; Y = y; }

    public void MoveY(double yDisplace) {
        throw new UnsupportedOperationException(); // not implemented yet
    }
}
Ok. It fails. Good. Then I remove the exception and I get green. Great, but of course I need to test that the value actually changes, so I write a test that calls p.MoveY(10.0) and checks that p.Y equals 60.0.
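A sketch of that second test, assuming the same JUnit setup as above (the test name is my own choice):
@Test
public void MoveY_MoveTen_Moves() {
    Point p = new Point(50.0, 50.0);
    p.MoveY(10.0);
    Assert.assertEquals(60.0, p.Y, 0.0);
}
It fails, so then I change the function to look like so: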
public void MoveY(double yDisplace) {
    Y += yDisplace;
}
Great, now I have green again and I can move on. I've tested not moving and moving in the positive direction, so naturally I should test a negative value. The only problem with that test is that, if I write it correctly, it doesn't fail at first, which means I haven't followed the principle of "Red, Green, Refactor."
Of course, this is a first-world problem of TDD, but getting a failure first is helpful in that it shows that your test can fail. Otherwise, a seemingly innocent test that passes for the wrong reasons could fail later because it was written incorrectly. That might not be a problem if it happened five minutes later, but what if it happens to the poor sap who inherits your code two years from now? All he knows is that MoveY does not work with negative values, because that is what the test is telling him. But it could actually work fine, and the failure could just be a bug in the test.
I don't think that would happen in this particular case because the code sample is so simple, but in a large, complicated system it might. It seems crazy to say that I want my tests to fail, but that is an important step in TDD, and for good reasons.
Then don’t write it.
You have already come to a point where you
- Have the simplest code which passes all tests.
- Are unable to think of edge cases that aren't covered by the code.
Time to stop worrying about that method.
If you're writing tests for edge cases that you know are already covered, then you're worrying more about the testing part of TDD than the design part. Unfortunately, that is not a very productive thing to do.
Why do you need to know that your method works with negatives? How far do you take that? Do you then test fractions? Do you test odd and even numbers? Each factor of ten? Every possible value of double?
If not, then why test negatives? They're not going to act any differently from positives.
First, I wouldn't cause my production code to fail by throwing an exception, which leads to my second point: my first failing implementation would use a magic number, either zero or something else. The test for non-movement would then have a small, but non-zero, chance of passing when it wasn't supposed to, which is fine; that's in part why you have more than one test.
So, if your implementation had been
public void MoveY(double yDisplace) {
    Y = 0;
}
Your test would have failed. Further tests would depend upon you seeing a way to break the existing code, i.e., you identify a problem not covered by the existing tests or code. In this case, the point doesn't actually move, so the next test, Move_Moves, would check that it does; you get your red light and then fix the code. The next step would be to realize that your initial test is really unnecessary; it's not even a special case of your Move_Moves test: you would be testing that beforeY + yDistance = afterY, which covers zero just as effectively as 1.2. So you delete the test.
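A sketch of what that Move_Moves test might look like, checking the beforeY + yDistance = afterY property directly (the displacement value of 10.0 is arbitrary):
@Test
public void Move_Moves() {
    Point p = new Point(50.0, 50.0);
    double beforeY = p.Y;
    p.MoveY(10.0);
    // beforeY + yDistance = afterY covers zero just as well as any value
    Assert.assertEquals(beforeY + 10.0, p.Y, 0.0);
}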
Only if you have a more complex function, where zero could possibly be treated differently than 1.2 or 42, would you need to test both that it moves the right amount and that it doesn't move for zero. Unproductive tests should be removed just like any other code.
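For instance, in a hypothetical variant where Y is clamped to some range (the bounds below are made up for illustration), the zero-displacement test earns its keep, because a buggy clamp could move a point even when yDisplace is 0:
public void MoveY(double yDisplace) {
    // Hypothetical rule: Y is clamped to [0.0, 100.0]. MoveY(0.0) is now
    // a genuine special case: a faulty clamp could move a point that
    // starts outside the range even though the displacement is zero.
    Y = Math.max(0.0, Math.min(100.0, Y + yDisplace));
}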
Which is my final point: tests are ways to identify how things can go wrong and to make sure that your production code has those ways covered, both now and in the future. The reason to write the test first is to help you identify the edge cases before you put the code into production and to verify that your solution handles them correctly; you keep the test around to avoid regression. If a test won't help with either of those, don't write it.
Your first test case came from your specification (which was possibly just in your head). Now, what does your specification say about negative values? Does it include them explicitly? If so, write a test. If not, don’t.
The Red-Green part of the cycle tests the test. If you have a hunch that a particular edge case needs to be considered, or just to show that it’s been accounted for, I’d argue it’s legitimate to artificially force a failure to prove that a test locks the implementation into a correct behavior.
In the example case, it's probably unnecessary. But if you ever need a more complicated MoveY implementation that accounts for … weird rules, barriers, bounds, and the like, it is potentially helpful to ensure that any additional rules and bounds-checking treat 0 as expected (that is, they don't move the point at all).
So … write the code incorrectly:
public void MoveY(double yDisplace) {
    Y += 100; // to test the test
    Y += yDisplace;
}
Run the test to verify that the relevant MoveY test(s) fail. And then ensure that removing the bad code causes the test(s) to pass:
public void MoveY(double yDisplace) {
    Y += yDisplace;
}
If your project has been rigidly TDD from the start, you usually won't need to do this sort of thing. But even on largely TDD'd code, I've encountered transient bugs that I couldn't be confident I'd solved without temporarily simulating a failure in down-the-line code in a similar fashion.
Similarly, in the process of porting code from an old system to a new one, I've written tests that would have failed on the OLD code but not on the NEW code, simply because the new code was far simpler. So manually failing those tests gives me confidence that future complexity won't reintroduce the bugs that the OLD complexity originally produced: my test has been tested to protect against them!