I am a newbie to TDD (this is my first project following TDD practices). I have a fairly basic interface, IProfiler, and an implementation, Profiler.
interface IProfiler
{
    bool IsBusy { get; }
    long Elapsed { get; }
}
A simple test:

IProfiler profiler = default(Profiler);

[TestInitialize]
public void Initialize()
{
    profiler = new Profiler();
}

[TestMethod]
public void ProfilerInitializationTest()
{
    Assert.IsFalse(profiler.IsBusy);
    Assert.AreEqual(default(long), profiler.Elapsed);
}
The question is: as I will have more state, the number of asserts will increase. Should I keep a separate Assert to test each field, should I include them all here, or should I group them into 3-4 similar asserts?
Stick to one assertion per test, but not necessarily one Assert.
In my experience, the best way to get the right balance is to name each test method Should…
In your example, that would be ShouldInitializeWithCorrectDefaults(). That is a single assertion, regardless of how many Assert methods you call. As soon as I run into a situation where I’m struggling to not use “And” in my method name, I suspect I’m testing too much.
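For illustration, a single logical assertion built from several Assert calls might look like this (a sketch assuming the Profiler from the question and the MSTest framework):

```csharp
[TestMethod]
public void ShouldInitializeWithCorrectDefaults()
{
    // One logical assertion: "a freshly created profiler is in its default state".
    IProfiler profiler = new Profiler();

    // Several Assert calls, but they all verify the same behaviour,
    // so the test still reads as one specification.
    Assert.IsFalse(profiler.IsBusy);
    Assert.AreEqual(default(long), profiler.Elapsed);
}
```

If you ever feel the need to add an Assert here that checks an unrelated behaviour, that is the signal to start a new Should… method instead.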
Also, try to keep in mind the question, “if I introduce a bug here, how many tests will break?” The ideal (but not always practical) answer is 1. If I introduce a bug which stops the class initializing correctly, the ShouldInitializeWithCorrectDefaults() test should break, because the behaviour it specifies no longer holds.
Unfortunately, initialization is one of those examples where you’ll probably break every test in the class. Which raises the question, why bother testing it by itself? Sometimes there are good reasons, but often you just go in and think, “Well, I’ll know that if every test breaks, it’s something common to every test, which is probably class instantiation. So does it need its own test?”
Dan North answered this question with BDD in 2006; this might interest you:
http://dannorth.net/introducing-bdd/
To sum up the idea: we need a paradigm shift, and should stop thinking about tests and start thinking about specifications/behaviour.
When we write a test, we first want to write a specification of the object we are going to implement.
By following this philosophy we quickly find that our “test” method names should actually be sentences describing an expected behaviour. That’s because we want to read a specification, not tests.
We also want to keep our “test” method names short, because that is easier to read and reinforces the idea that the object under specification should have a single responsibility (SRP).
That’s why we encourage the one assert per test guideline.
As an example, if we were to write a specification for your Profiler, the test method names could be:
Profiler
ShouldNotBeBusyWhenCreated
ShouldHaveNotSpentAnyTimeInProfilingWhenCreated
Which reads nicely as a specification. If you were to put all your asserts in one test, the specification would read as one long sentence, like
Profiler
ShouldNotBeBusyAndHaveNotSpentAnyTimeInProfilingWhenCreated
Which is not really nice; in particular, when the test fails you wouldn’t be able to figure out precisely why it is failing.
If you come back to your specification weeks later, you should be able to understand what the object is supposed to do just by reading your “test” method names.
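A sketch of what that specification could look like in code, assuming the Profiler from the question, MSTest, and a hypothetical ProfilerSpecification test class:

```csharp
[TestClass]
public class ProfilerSpecification
{
    private IProfiler profiler;

    [TestInitialize]
    public void CreateProfiler()
    {
        profiler = new Profiler();
    }

    [TestMethod]
    public void ShouldNotBeBusyWhenCreated()
    {
        Assert.IsFalse(profiler.IsBusy);
    }

    [TestMethod]
    public void ShouldHaveNotSpentAnyTimeInProfilingWhenCreated()
    {
        Assert.AreEqual(0L, profiler.Elapsed);
    }
}
```

With one behaviour per method, a failing test name tells you exactly which part of the specification was violated, without reading the test body.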
Isn’t TDD short for Test-Driven Development, where you write tests that describe your requirements? If you are talking about that TDD, then you should write enough tests to cover all your requirements, and not mix them up with unit tests, where you test a small unit of code rather than a requirement (that doesn’t mean you can’t, or shouldn’t, have both; unit tests are very helpful in development).