
[TDD] Three Laws of TDD - to strictly follow, or not?


 

Do you strictly follow the three laws of TDD as literally written?

No. But I don't follow anything strictly!
What step size do you typically have when going back and forth between tests and code?

This varies greatly depending on what I'm working on and how comfortable I am with everything. When working with a new language, framework, etc., my steps will usually be very small, as I'm trying to figure things out as I go along. If I'm in vanilla Java doing things I've done before, the steps will be bigger because I'm more comfortable. Kent described this in his book but referred to it as "gears", not steps.

How do you decide the appropriate step size?

See above, plus some gut feeling.
When writing a test, do you define a method or constructor with all its expected parameters?

This is something I will let evolve as I add more tests. It seems like the pain your coworker was feeling could be down to not refactoring the test itself. Instantiating the class under test multiple times in a test is what I would consider to be duplication.
Do you consider, "an incomplete but green test is at best misleading and at worst wrong"?

Yes, but only for a certain degree of "incomplete" :)

You said, "I disagree with the comment because it allows multiple unnecessary lines of code to be written and doesn't provide a progression that guarantees that all production code is effectively 'tested'/covered." I'd like to ask why you think this. Each of my tests is small and covers essentially at most one path through the code.

Thank you for the feedback.

--Kaleb




--
Maybe she awoke to see the roommate's boyfriend swinging from the chandelier wearing a boar's head.

Something which you, I, and everyone else would call "Tuesday", of course.


 

When writing a test, do you define a method or constructor with all its expected parameters?

> This is something I will let evolve as I add more tests. It seems like the pain your coworker was feeling could be down to not refactoring the test itself. Instantiating the class under test multiple times in a test is what I would consider to be duplication
In some ways you're exactly correct. He could have used a factory method to extract the duplication in the creation code. But at other times, no. He would sometimes write 15 tests for a method and then add a new parameter to the method under test, resulting in the need to modify all 15 tests. I believe he was doing too much iceberg testing -- that is, he was trying to test an "iceberg" through a small hole rather than testing the individual methods. He hadn't yet internalized how SRP and small units improve the overall TDD experience.
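For illustration, the factory-method refactoring mentioned above might look like the following minimal sketch (all class and method names are invented, not from our actual codebase): every test obtains the class under test from one factory method, so a new constructor parameter means one edit instead of fifteen.

```java
// Hypothetical SUT: a calculator whose constructor may later grow parameters.
class InvoiceCalculator {
    private final int taxPercent;

    InvoiceCalculator(int taxPercent) {
        this.taxPercent = taxPercent;
    }

    long totalCents(long netCents) {
        return netCents + netCents * taxPercent / 100;
    }
}

public class InvoiceCalculatorTest {
    // The one place that knows how to build the SUT. If the constructor
    // later grows a new parameter, only this method changes -- not 15 tests.
    static InvoiceCalculator newCalculator() {
        return new InvoiceCalculator(20);
    }

    public static void main(String[] args) {
        // Each "test" asks the factory for the SUT instead of calling `new` itself.
        InvoiceCalculator calc = newCalculator();
        if (calc.totalCents(1000) != 1200)
            throw new AssertionError("expected 1200, got " + calc.totalCents(1000));
        System.out.println("ok");
    }
}
```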

> You said, "I disagree with the comment because it allows multiple unnecessary lines of code to be written and doesn't provide a progression that guarantees that all production code is effectively 'tested'/covered." I'd like to ask why you think this. Each of my tests is small and covers essentially at most one path through the code.

I'll answer that in parts:

1) Assuming each test has some setup -- each of those lines in the test needs to be "tested". In other words, we should be able to see a failure indicating that line in the test is necessary. If we have not seen the "failure", it's possible that we have made an error in the setup, which increases the likelihood that we'll have to debug the test.

2) When we write the whole test without having supported it with the necessary production code, the production code, though testable, will not have had the same opportunity to evolve as it would have had it been co-developed with the tests. To put this another way and refer to J.B. Rainsberger's Queueing Theory post, we've lost the feedback loop between each line in the test and the production code. IMO, when we write the entirety of the test before writing any production code we're more likely to mentally create a design and go with it rather than letting it evolve fluidly.

3) When we don't co-develop the production code with the tests, we've also elongated the time between having a passing unit test and the corresponding implementation which again increases the likelihood that we'll need to debug something or attempt a jump that's too big.
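To make the co-development loop concrete, here is an invented, minimal example (a counter-style Stack, not anything from our codebase). Each numbered comment is one red/green step; only the end state is shown as compiling code.

```java
// 1. Test calls new Stack()        -> compile failure: no Stack class. Create it.
// 2. Test calls push(7)/size()     -> compile failure: no such methods. Stub them.
// 3. Run: assertion fails (0 != 1) -> make push() increment a count.
// 4. Green. Refactor if needed, then add the next assertion line.
class Stack {
    private int size = 0;

    void push(int value) {
        size++; // just enough production code to pass step 3
    }

    int size() {
        return size;
    }
}

public class StackTest {
    public static void main(String[] args) {
        Stack stack = new Stack();
        stack.push(7);
        if (stack.size() != 1)
            throw new AssertionError("expected size 1, got " + stack.size());
        System.out.println("ok");
    }
}
```

Each line of the test earns its place by first failing, and each line of production code exists only because a test line demanded it.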

Thank you for your response and questions, Colin. Overall, I feel pretty much the same way. Much of what I do is gut feel. Sometimes I will indeed write a whole test before writing the production code, but at other times I'll follow the three laws of TDD to the letter.

My hope was that by asking these questions and eliciting feedback I could move from tacit knowledge to explicit knowledge and thereby be better at TDD and better able to help others.

--Kaleb



On Tue, Aug 26, 2014 at 10:04 AM, Colin Vipurs zodiaczx6@... [testdrivendevelopment] <testdrivendevelopment@...> wrote:




 

Answers inline.

> This is something I will let evolve as I add more tests. It seems like the pain your coworker was feeling could be down to not refactoring the test itself. Instantiating the class under test multiple times in a test is what I would consider to be duplication

> In some ways you're exactly correct. He could have used a factory method to extract the duplication in the creation code. But at other times, no. He would sometimes write 15 tests for a method and then add a new parameter to the method under test, resulting in the need to modify all 15 tests. I believe he was doing too much iceberg testing -- that is, he was trying to test an "iceberg" through a small hole rather than testing the individual methods. He hadn't yet internalized how SRP and small units improve the overall TDD experience.

Again I would put that down to duplication. It's extreme, but if I call a method from 15 tests I see that as something that should be extracted.

> 1) Assuming each test has some setup -- each of those lines in the test needs to be "tested". In other words, we should be able to see a failure indicating that line in the test is necessary. If we have not seen the "failure", it's possible that we have made an error in the setup, which increases the likelihood that we'll have to debug the test.

I'm not sure I'm following you here. Doesn't the failing test's diagnostics provide this information/guarantee?

> 2) When we write the whole test without having supported it with the necessary production code, the production code, though testable, will not have had the same opportunity to evolve as it would have had it been co-developed with the tests. To put this another way and refer to J.B. Rainsberger's Queueing Theory post, we've lost the feedback loop between each line in the test and the production code. IMO, when we write the entirety of the test before writing any production code, we're more likely to mentally create a design and go with it rather than letting it evolve fluidly.

> 3) When we don't co-develop the production code with the tests, we've also elongated the time between having a passing unit test and the corresponding implementation, which again increases the likelihood that we'll need to debug something or attempt a jump that's too big.


I think both of these come down to how big your tests are and how much functionality they're attempting to test in a single go. To draw an example, imagine I have a class that converts object A into object B - I will have one test for each and every field, and incrementally add the conversion as I go.
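To make that converter example concrete, here is a minimal sketch (names invented): each field has its own test, and the corresponding assignment in the converter is added only when that field's test demands it.

```java
// Hypothetical "object A": an immutable source record.
class CustomerRecord {
    final String name;
    final String email;

    CustomerRecord(String name, String email) {
        this.name = name;
        this.email = email;
    }
}

// Hypothetical "object B": the conversion target.
class CustomerDto {
    String name;
    String email;
}

class CustomerConverter {
    CustomerDto convert(CustomerRecord in) {
        CustomerDto out = new CustomerDto();
        out.name = in.name;   // added to make the "name" test pass
        out.email = in.email; // added later, to make the "email" test pass
        return out;
    }
}

public class CustomerConverterTest {
    public static void main(String[] args) {
        CustomerDto dto = new CustomerConverter()
                .convert(new CustomerRecord("Ada", "ada@example.com"));
        // Test 1: the name field is converted.
        if (!"Ada".equals(dto.name)) throw new AssertionError("name not converted");
        // Test 2: the email field is converted.
        if (!"ada@example.com".equals(dto.email)) throw new AssertionError("email not converted");
        System.out.println("ok");
    }
}
```

Each test is tiny and covers one field, so both the feedback loop and the step size stay small.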


 


On Wed, Aug 27, 2014 at 8:23 AM, Colin Vipurs zodiaczx6@... [testdrivendevelopment] <testdrivendevelopment@...> wrote:

> This is something I will let evolve as I add more tests. It seems like the pain your coworker was feeling could be down to not refactoring the test itself. Instantiating the class under test multiple times in a test is what I would consider to be duplication

> In some ways you're exactly correct. He could have used a factory method to extract the duplication in the creation code. But at other times, no. He would sometimes write 15 tests for a method and then add a new parameter to the method under test, resulting in the need to modify all 15 tests. I believe he was doing too much iceberg testing -- that is, he was trying to test an "iceberg" through a small hole rather than testing the individual methods. He hadn't yet internalized how SRP and small units improve the overall TDD experience.

> Again I would put that down to duplication. It's extreme, but if I call a method from 15 tests I see that as something that should be extracted.

I had never thought about calling a single method on the SUT multiple times as duplication. I'll have to spend some more time thinking about that.

> 1) Assuming each test has some setup -- each of those lines in the test needs to be "tested". In other words, we should be able to see a failure indicating that line in the test is necessary. If we have not seen the "failure", it's possible that we have made an error in the setup, which increases the likelihood that we'll have to debug the test.

> I'm not sure I'm following you here. Doesn't the failing test's diagnostics provide this information/guarantee?

I responded to another of your e-mails a little while ago that I think explains this better, but I'll describe it again here.

If the entire test is written up front and a new class is under development, that test will fail because of a failure to compile. The author could, in theory, write lots of production code and then see the test "pass" without ever having seen it fail for the expected reasons. I would hope that wouldn't happen, but without having seen the full development progression taken, it's a concern that I have.

--Kaleb


 

> I had never thought about calling a single method on the SUT multiple times as duplication. I'll have to spend some more time thinking about that.

Perhaps I should write this up somewhere. The usual example I give when teaching is using Java and moving a method from a static call to a non-static call. The test suite now needs to be updated in multiple places, hence the duplication.

> If the entire test is written up front and a new class is under development, that test will fail because of a failure to compile. The author could, in theory, write lots of production code and then see the test "pass" without ever having seen it fail for the expected reasons. I would hope that wouldn't happen, but without having seen the full development progression taken, it's a concern that I have.


I get you now. In my head I see this as two forms of failure: compilation failure and runtime failure. I won't even run the test if the compiler is complaining*. The key here really is to always see the test fail /for the right reason/, i.e. the diagnostic output of my assert. It's good practice to ensure your diagnostic messages convey all the information you want them to at this point anyway.

* At least in a language/IDE that offers good support?
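As a sketch of what a useful diagnostic looks like, here is a hand-rolled assertion whose message carries both the expected and actual values (roughly what JUnit's assertEquals gives you; all names here are invented):

```java
public class DiagnosticsDemo {
    // A failure from this assertion tells you what was expected and what
    // actually happened -- enough to diagnose without reaching for a debugger.
    static void assertEquals(String context, Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError(
                context + ": expected <" + expected + "> but was <" + actual + ">");
    }

    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        assertEquals("add(2, 3)", 5, add(2, 3));
        System.out.println("ok");
    }
}
```

Seeing this fail once with the expected message is how you know the test fails /for the right reason/ before the production code makes it pass.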


 


On Thu, Aug 28, 2014 at 8:53 AM, Colin Vipurs zodiaczx6@... [testdrivendevelopment] <testdrivendevelopment@...> wrote:
> I had never thought about calling a single method on the SUT multiple times as duplication. I'll have to spend some more time thinking about that.

> Perhaps I should write this up somewhere. The usual example I give when teaching is using Java and moving a method from a static call to a non-static call. The test suite now needs to be updated in multiple places, hence the duplication.

Please do. Feel free to e-mail me or post a link to the writeup once it's done :).

--Kaleb