> This is something I will let evolve as I add more tests. It seems like the pain your coworker was feeling could be down to not refactoring the test itself. Instantiating the class under test multiple times in a test is what I would consider to be duplication.
In some ways you're exactly correct. He could have used a factory method to extract the duplication in the creation code. But at other times, no: he would sometimes write 15 tests for a method and then add a new parameter to the method under test, resulting in the need to modify all 15 tests. I believe he was doing too much iceberg testing -- that is, he was trying to test an "iceberg" through a small hole rather than testing the individual methods. He hadn't yet internalized how SRP and small units improve the overall TDD experience.
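To make the factory-method idea concrete, here's a rough sketch (JUnit 4; the ReportGenerator class and its constructor parameter are invented for illustration -- this isn't his actual code):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, shown only so the sketch compiles.
class ReportGenerator {
    private final String template;

    ReportGenerator(String template) {
        this.template = template;
    }

    String generate(String input) {
        return input; // trivially echoes input for the sake of the sketch
    }
}

public class ReportGeneratorTest {
    // Single creation point: when the constructor gains a parameter,
    // only this factory method changes instead of every test.
    private ReportGenerator createGenerator() {
        return new ReportGenerator("default-template");
    }

    @Test
    public void emptyInputProducesEmptyReport() {
        assertEquals("", createGenerator().generate(""));
    }

    @Test
    public void singleLineInputIsEchoed() {
        assertEquals("line", createGenerator().generate("line"));
    }
}
```

With this in place, adding a new constructor parameter means changing the factory method once rather than editing all 15 tests.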
> Again I would put that down to duplication. It's extreme, but if I call a method from 15 tests I see that as something that should be extracted.
1) Assuming each test has some setup -- each of those setup lines needs to be "tested" in its own right. In other words, we should be able to see a failure indicating that the line is necessary. If we have never seen that failure, it's possible that we have made an error in the setup, which increases the likelihood that we'll have to debug the test.
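As a hypothetical illustration of what I mean by each setup line being "tested" (the Account class is made up):

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, shown only so the sketch compiles.
class Account {
    private int balance;

    void deposit(int amount)  { balance += amount; }
    void withdraw(int amount) { balance -= amount; }
    int getBalance()          { return balance; }
}

public class AccountTest {
    @Test
    public void withdrawingPastTheBalanceGoesNegative() {
        Account account = new Account(); // without this line the test doesn't compile
        account.deposit(50);             // remove it: balance becomes -75 and the assertion fails
        account.withdraw(75);            // remove it: balance stays at 50 and the assertion fails
        assertEquals(-25, account.getBalance());
    }
}
```

Each comment records the failure that proved the line necessary; a setup line whose removal has never produced a red bar is a line we've taken on faith.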
> I'm not sure I'm following you here. Don't the failing test's diagnostics provide this information/guarantee?
2) When we write the whole test without having supported it with the necessary production code, the production code, though testable, will not have had the same opportunity to evolve as it would have, had it been co-developed with the tests. To put this another way, and to refer to J.B. Rainsberger's Queueing Theory post: we've lost the feedback loop between each line in the test and the production code. IMO, when we write the entirety of the test before writing any production code, we're more likely to mentally create a design and go with it rather than letting it evolve fluidly.
3) When we don't co-develop the production code with the tests, we've also lengthened the time between writing a unit test and having the corresponding implementation pass it, which again increases the likelihood that we'll need to debug something or attempt a jump that's too big.
> I think both of these come down to how big your tests are and how much functionality they're attempting to test in a single go. To give an example, imagine I have a class that converts object A into object B - I will have one test for each and every field and incrementally add the conversion as I go.
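A minimal sketch of that field-by-field style, assuming invented Customer/CustomerDto types and JUnit 4:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical types, shown only so the sketch compiles.
class Customer {
    final String name, email;
    Customer(String name, String email) { this.name = name; this.email = email; }
}

class CustomerDto {
    final String name, email;
    CustomerDto(String name, String email) { this.name = name; this.email = email; }
}

class CustomerConverter {
    CustomerDto convert(Customer c) {
        return new CustomerDto(c.name, c.email);
    }
}

public class CustomerConverterTest {
    private final CustomerConverter converter = new CustomerConverter();

    @Test
    public void convertsName() {  // first test: drove the initial convert() implementation
        Customer customer = new Customer("Ada", "ada@example.com");
        assertEquals("Ada", converter.convert(customer).name);
    }

    @Test
    public void convertsEmail() { // second test: drove copying one more field
        Customer customer = new Customer("Ada", "ada@example.com");
        assertEquals("ada@example.com", converter.convert(customer).email);
    }
}
```

Each test drives exactly one line of the conversion, so a failure points straight at the field in question.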