
How would you respond?


 

I just came across this blog post. The title is clickbait. Overall I think the blog post is good and accurate.
However, the comments scare me.
What are your thoughts?
https://medium.com/madhash/rip-tdd-or-are-we-just-thinking-about-it-wrong-32ef36b9c5


 


Comments not visible to me. I do not subscribe to Medium (and do not plan to). Maybe that's why the comments didn't show up.

Good general advice: never read your reviews, never read the comments. :)

I thought the article was pretty weak but aside from the click-bait and artificial argumentation, I didn't see much to object to.

R

On Aug 29, 2020, at 6:25 PM, Avi Kessner <akessner@...> wrote:



Ron Jeffries
An Agile method is a lens. The work is done under the lens, not by the lens.


 

Reposting the main comment, which got the most likes.


"You¡¯re arguing against what would classically be described as unit tests (without ever actually using that term) while arguing in favor of something else you never ascribed any name to. By your examples, that something else seems to be more on the integration test, behavior-driven test, or end-to-end test end of the spectrum. I think it¡¯s critically important to understand that many different kinds of tests exist, they have established names, and they?complement?one another.

You¡¯re right that there is tremendous value in tests that operate from (or closer to) the perspective of a user, but that¡¯s one?kind?of test that really should be supplemented with other kinds of test as well. You usually cannot cover all your bases with just one type of test.

Someone told me once that behavior driven tests assert you¡¯ve ¡°built the right thing,¡± whilst unit tests assert you ¡°built the thing right.¡± That¡¯s an important distinction. Unit tests?can, indeed, be brittle, but that should be expected, because they are intimately and inextricably related to the most?low-level?bits of your code. At that level, you can assert things that you cannot assert from several layers of abstraction up the stack. At the end of the day, it¡¯s the only way to catch a certain class of bug that other forms of testing will never find.

A thought on the mechanic analogy ¡ª mechanics perform maintenance. Software engineers do as well, but that role is largely incidental. ¡°Engineer¡± is in our title. We design. We build. Mechanics don¡¯t do that. It¡¯s not unfair to say that comparing software engineers to those who maintain automobiles is a less apt comparison than one to those who?design?them ¡ª or to those who design individual automotive parts even. I can assure you that the automotive industry doesn¡¯t rely solely on testing of fully assembled vehicles to assert things work as they should. Before your car¡¯s HVAC system was designed and eventually assembled, there was a radiator that was known to generate heat, a compressor that was known to compress, coolant that was known to cool, and fans that were known to blow ¡ª all within their own individual specifications. Bringing this analogy back to software, one could ask what is point of integration testing a suite of components that aren¡¯t individually known to meet their own specifications. When that system inevitably fails, it¡¯s anyone¡¯s guess which component was responsible, and that¡¯s when you start to wish you¡¯d unit tested."


Then follow many comments saying things like: you aren't describing TDD, you are describing BDD; or, this only applies to integration tests; or, TDD has to cover every line of code, so it can only work on implementation details; etc.


On Sun, 30 Aug 2020, 15:04 Ron Jeffries, <ronjeffriesacm@...> wrote:


 

Weak - yes. Didn't see the comments either, but this line made me wonder about the author's grasp of the big picture:

"By the third round, we developers simply throw our hands up towards testing our code and decide to just go agile instead."


 

Huh, I'm curious why others can't see the comments. I don't have a Medium account either. I wonder if it's because I got the link through AMP.
brought to you by the letters A, V, and I
and the number 47


On Mon, Aug 31, 2020 at 4:11 PM Tom <rossentj@...> wrote:


 

I think my response would be somewhat similar to that of the comments. (FYI, to make the comments visible, click the link that says "25 responses" below the article.)

For starters, it's worthwhile remembering that different people mean different things by TDD: at the coarsest level of granularity, there are the "Chicago"/"inside-out"/"bottom-up"/"classical" and "London"/"outside-in"/"top-down"/"mockist" approaches. Given the same set of requirements, each of these approaches would lead not only to very different test suites but likely to different code as well, so I'm not a fan of anyone claiming that "this is TDD, and people who don't do this don't understand TDD."

That being said, I'm not sure adherents of *either* school would necessarily be entirely happy with the thrust of the article. London school proponents might begin by writing down an acceptance test of the kind described by the article, but then are very explicit about using unit tests (in the purest possible sense of the term) to drive 'internal code quality'. They would explicitly reject the premise of the article that tests should not dictate the design of the code.

Chicago style proponents might also start with an acceptance test. Unlike the London school, I guess it is possible in principle to test-drive an application entirely through system-level tests, written in terms that would make sense to the end user ("click on this to see a modal with this text", etc.). However, in his book on the subject Kent Beck is very explicit that he would still practise TDD on smaller units even if such a comprehensive suite of acceptance tests were being written. The reasons he gives for this are short feedback times between writing code and running the tests, and simplifying the internal design. This reflects my understanding of what TDD is and the advantages it offers.

The only caveat I would express here is that Beck was writing in an era of Smalltalk GUI apps, under the assumption that without "programmer tests" (tests of small units of code, of the kind the article cautions against) the time taken to write a system-level test and get it passing is measured in days. If I think about a modern Rails app, for some simple functionality it might take an hour or less to write an integration test and the code to make it pass, so this might change that calculus. Nevertheless, this is a professional judgement call. I don't think that the decision to write unit-level tests in this situation would reflect a lack of understanding of TDD on Beck's part.
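For concreteness, the kind of quick Rails integration test I have in mind might look something like this -- a minimal sketch in Minitest style, where the Post model, the posts_path route, and the view markup are all assumed and purely illustrative:

require "test_helper"

# Sketch: an integration test that exercises routing, controller, and view
# together, asserting only on what the user would see.
class PostsFlowTest < ActionDispatch::IntegrationTest
  test "index lists existing posts" do
    Post.create!(title: "Hello")          # hypothetical model
    get posts_path                        # drive the full stack via the route
    assert_response :success
    assert_select "li", text: "Hello"     # assert on rendered output, not internals
  end
end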

Rob


 

Some quotes from Kent Beck.

I call them "unit-tests," but they don't match the accepted definition of unit test very well.

-- Kent Beck, Test Driven Development by Example (2003)

Structure-invariant tests require a particular style of programming as well as a particular style of design. I frequently see tests that assert, "Assert that this object sends this message to that object with these parameters and then sends this other message to that other object." An assertion like this is basically the world's clumsiest programming language syntax. If I care about the order of operations, I've designed the system wrong.
...
  • Respond to behavior changes.
  • Not respond to structure changes.


When refactoring causes tests to fail, that's when you learn you are writing your tests wrong. The only way to avoid that is to focus on the behaviour you are expecting.
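To illustrate the distinction with an invented sketch (RSpec syntax; Cart and PriceCalculator are made-up names, not from Beck): the first test below restates the implementation as mock expectations, while the second asserts only behaviour.

require "rspec"
require "rspec/autorun"

# Invented example classes for the sketch.
class PriceCalculator
  def total_for(items)
    items.sum
  end
end

class Cart
  def initialize(calculator)
    @calculator = calculator
    @items = []
  end

  def add(item)
    @items << item
  end

  def total
    @calculator.total_for(@items)
  end
end

RSpec.describe Cart do
  # Structure-sensitive: restates the implementation as mock expectations,
  # so it breaks if Cart stops delegating, even when the total stays correct.
  it "sends total_for to the calculator" do
    calculator = instance_double(PriceCalculator)
    expect(calculator).to receive(:total_for).with([10]).and_return(10)
    cart = Cart.new(calculator)
    cart.add(10)
    cart.total
  end

  # Structure-invariant: asserts only the observable behaviour, so it survives
  # any refactoring that preserves the answer.
  it "totals the items in the cart" do
    cart = Cart.new(PriceCalculator.new)
    cart.add(10)
    expect(cart.total).to eq(10)
  end
end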

brought to you by the letters A, V, and I
and the number 47


On Tue, Sep 1, 2020 at 12:47 PM <rold9888@...> wrote:

I think my response would be somewhat similar to that of the comments. (FYI, to make the comments visible click the link that says "25 responses" below the article.)

For starters, it's worthwhile remembering that different people mean different things by TDD- at the coarsest level of granularity, there's the "Chicago"/"inside-out"/"bottom-up"/"classical" and "London"/"outside-in"/"top-down"/"mockist" approaches. Each of these approaches given the same set of requirements would not only lead to very different test suites, but likely different code as well, so I'm not a fan of anyone claiming that "this is TDD, and people who don't do this don't understand TDD."?

That being said, I'm not sure adherents of *either* school would necessarily be entirely happy with the thrust of the article. London school proponents might begin by writing down an acceptance test of the kind described by the article, but then are very explicit about using unit tests (in the purest possible sense of the term) to drive 'internal code quality'. They would explicitly reject the premise of the article that tests should not dictate the design of the code.

Chicago style proponents might also start with an acceptance test. Unlike the London school, I guess it is possible in principle to test-drive an application for correct working entirely through system level tests, written in terms that would make sense to the end user ("click on this to see a modal with this text", etc.). However, in his book on the subject Kent Beck is very explicit that he would still practise TDD on smaller units even if such a comprehensive suite of acceptance tests were being written. The reasons he gives for this are short feedback times between writing code and running the tests, and to simplify the internal design. This would reflect my understanding of what TDD is and the advantages it offers.?

The only caveat I would express here is that Beck was writing in an era of Smalltalk GUI apps, under the assumption that without "programmer tests" (tests of small units of code, as the article cautions against) the time taken to write a system-level test and get it passing is measured in days. If I think about a modern Rails app, for some simple functionality it might take an hour or less to write an integration test and the code to make it pass, so this might change that calculus. Nevertheless, this is a professional judgement call. I don't think that the decision to write unit-level tests in this situation would reflect a lack of understanding of TDD on Beck's part.

Rob


 

So the biggest difference between the two schools of TDD I mentioned is in their use of mock objects, but there's broad agreement that mocks act as an impediment to some refactorings. People who use mocks extensively do so because they think the trade-offs are worth that particular price. Beck wrote the foreword to the book "Growing Object-Oriented Software, Guided by Tests", which is the canonical reference for London-school TDD, and was still very complimentary about the book whilst acknowledging that it isn't how he personally practices TDD. To be clear, I don't disagree that structure-insensitivity of tests is a highly desirable property, just with the claim that anyone who doesn't exclusively test an application from the perspective of an end user "doesn't understand" TDD.

Let's now forget completely about the London-school approach. The section of 'TDD by Example' that you quote is exactly the same one that I referred to before making my previous post, entitled 'Can you drive development with application-level tests?', and it answers that question (to my reading) in the negative. I quote:

Another aspect of [Application test-driven development] is the length of the cycle between test and feedback. If a customer wrote a test and ten days later it finally worked, you would be staring at a red bar most of the time. I think I would still want to do programmer-level TDD, so that

  • I got immediate green bars
  • I simplified the internal design

I take this section, including the excerpt you quoted, to mean that Beck doesn't have a problem calling a test a unit test if (for example) it tests, say, three classes working together (whereas a purist might insist on isolating dependencies from an object with test doubles before calling it a true unit test). However, this paragraph seems unambiguous in saying that he would write tests at some more granular level than the whole application, which can reasonably be described as 'unit tests'.

As soon as you write such a test, you expose yourself to the possibility that a refactoring can break that test. As a trivial but concrete example, say you're working on a web app backend with an MVC structure. Your controller handles anything to do with the web layer, including params and rendering a response, but delegates all business logic to SomeClass#some_instance_method.

A 'unit test' of SomeClass#some_instance_method might still be fairly structure-insensitive (to whether the method involves any dependencies, private methods of SomeClass, whatever). But that unit test cares, at the very least, about:
  • The class name SomeClass
  • The method name some_instance_method
  • The method signature.

And likely also how to instantiate an instance of the class or set its dependencies, unless you're using some sort of factory or an IoC container like Java Spring. Changing any of these things, together with how they are invoked in the controller action, is a refactoring that the user does not care about, but it will force you to make the corresponding change in your test.
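A tiny sketch of that coupling (Minitest; the doubling logic is an invented stand-in for the real business logic):

require "minitest/autorun"

class SomeClass
  def some_instance_method(amount)
    amount * 2                    # invented stand-in business logic
  end
end

class SomeClassTest < Minitest::Test
  def test_doubles_the_amount
    # The test bakes in the class name, method name, signature, and construction.
    assert_equal 42, SomeClass.new.some_instance_method(21)
  end
end

# Renaming SomeClass or some_instance_method, or adding a constructor argument,
# changes nothing the user can see, but forces a matching edit in this test.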

There's a tradeoff. Testing smaller units of functionality makes your tests faster to run, easier to write, likely easier to read, more focussed as bug detectors, and helps you manage the internal structure of your code. Your tests being sensitive to things the user doesn't care about is the price you pay for these benefits. Different people value these costs and benefits differently, to the extent that the Chicago and London schools can be recognised as distinct approaches, but even within each school people make their own choices about this compromise (and others) on a smaller scale every time they write a test, IMHO.


 

I very much agree about the tradeoffs. I disagree about attributing all the tradeoffs to mocks. Sometimes mocks are the most expedient way to purely unit test an object in isolation from the objects it calls.


On Tue, Sep 1, 2020 at 8:37 AM <rold9888@...> wrote:



 

Sure, it wasn't my intention to claim either that the tradeoffs are entirely due to mocks, or that mocks aren't a good way of isolating a test subject from its dependencies. Sorry if I was unclear.


 

I think there is a distinction between testing behavior vs. testing implementation, and testing the end-user experience / acceptance tests, that is getting muddled here.


On Tue, 1 Sep 2020, 19:36 , <rold9888@...> wrote:


 

Note that if you're using automated refactoring tools, they will refactor the test code and the application code at the same time. It's a big win to think of editing in terms of concepts instead of lines of code or characters.

- George

On 9/1/20 11:37 AM, rold9888@... wrote:
A 'unit test' of SomeClass#some_instance_method might still be fairly structure-insensitive (to whether the method involves any dependencies, private methods of SomeClass, whatever). But that unit test cares, at the very least, about:
* The class name SomeClass
* The method name some_instance_method
* The method signature.
And likely also how to instantiate an instance of the class or set its dependencies, unless you're using some sort of factory or an IoC container like Java Spring. Changing any of these things, together with how they are invoked in the controller action, is a refactoring that the user does not care about, but it will force you to make the corresponding change in your test.
--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


 

Quite possibly. I'd trace this back to the article, with a couple of paragraphs quoted by way of example:

"test if x will be returned when a and b are added together"

The above language is not useful to anyone, unless you're building a calculator. But this kind of language is present in a lot of test code that specifically targets methods, thereby coupling it to how a particular class may work.


"test if confirmation modal appears when submit button is clicked"

This test case is geared towards frontend-facing software. The above test doesn't care how you got there, only that you got there. This makes it a flexible test case to implement, as any changes behind the scenes won't break the testing suites.

When software is test driven, it means that the development workflow is focused on producing a set of particular results that emulate the needs of its consumers. It's about fulfilling requirements -- not creating requirements for how to structure and produce your code ... At its simplest, proper TDD is an abstraction of user requirements.


And the final paragraph:

This is basically TDD -- testing to see if your software produces an expected, higher level and clearly specified expected result. The test merely catches the break -- it doesn't and shouldn't dictate the design...

That's basically the core principle behind TDD.

Given that there are people out there who would argue that TDD is a design activity *before* it concerns itself with having tests to catch regressions, this last point is in my view an inappropriately strong claim to make, and represents my biggest problem with the article. In particular, if tests don't drive the design of the code, and aren't being used to anchor down the behaviour of small units, then I'm not sure what argument the author offers in favour of writing the tests before the code, which is pretty much the only thing that unites TDD practitioners of all stripes AFAICT.


 

I can see how those paragraphs can be ambiguous. I didn't originally read them the way you are reading them now.

As for the tests dictating the design: I think this is where most people get stuck (e.g. they start writing posts about how TDD is an antipattern).
The tests help guide the design, and help you see if your design is working. But the art in TDD is finding the next test which forces the code to move towards the design you want. TDD doesn't dictate the design by forcing the implementation; it guides the design by enforcing the desired behaviors, and helps you check that the design is correct.

On Wed, 2 Sep 2020, 01:08 , <rold9888@...> wrote:



 


But the art in TDD is finding the next test which forces the code to move towards the design you want.
I think this difficulty is more or less the driving motivation behind Justin Searls' London-style approach to TDD. Here's a choice quote:
The first test will result in some immediate problem-solving implementation code. The second test will demand some more. The third test will complicate your design further. At no point will the act of TDD per se prompt you to improve the intrinsic design of your implementation by breaking your large unit up into smaller ones.
Preventing your code's design from growing into a large, sprawling mess is left as an exercise to the developer. This is why many TDD advocates call for a "heavy refactor step" after tests pass, because they recognize this workflow requires intervention on the part of the developer to step back and identify any opportunities to simplify the design.

Refactoring after each green test is gospel among TDD advocates ("red-green-refactor", after all), but in practice most developers often skip it mistakenly, because nothing about the TDD workflow inherently compels people to refactor until they've got a mess on their hands.

Some teachers deal with this problem by exhorting developers to refactor rigorously with an appeal to virtues like discipline and professionalism. That doesn't sound like much of a solution to me, however. Rather than question the professionalism of someone who's already undertaken the huge commitment to practice TDD, I'd rather question whether the design of my tools and practices are encouraging me to do the right thing at each step in my workflow.

The full thing is well worth a read.


 


Wow, that's depressing.

For me, this is a major portion of the disconnect:

Even after the refactor is completed successfully, more work remains! To ensure that every unit in your system is paired with a well-designed (I call it "symmetrical") unit test, you now have to design a new unit test that characterizes the behavior of the new child object.

I had to read that multiple times and read on to make sure that he was actually saying what it sounded as though he was saying. It (and in fact his whole approach) sounds to me a lot like design-code-test rather than the test-code-refactor approach I've been using. I confess that I hadn't actually paid all that much attention to this "London-style" approach. I hate the term "Mockist" here, as it seems to imply that the Detroit approach eschews test doubles. I presume that other people don't hear it that way? I'm gathering now that what it means simply is that you use test doubles only when the unit test requires them, rather than as a way to force everything to be decoupled.

But he does raise a concern that I have recognized: most people don't refactor because they don't sense the code smells. Most developers don't know what good code looks like. I have often used Martin Fowler's Video Store sample to demonstrate refactoring, and when I ask for opinions of the original code, most people say it looks good; they tend to judge code primarily by how much effort is involved in trying to understand it.

That suggests that the most important lesson to be taught is the recognition of bad code, probably by teaching why code smells matter.

In Uncle Bob's "Clean Code," he notes the rule about avoiding "magic numbers." That advice is hardly new; I have books from the 1970s making the same recommendation, and yet programmers routinely violate it. Yourdon and Constantine wrote about coupling and cohesion in 1975, and yet most code I see shows that even very senior programmers fail to internalize those rules. In some ways, we aren't making a lot of progress.
-----------------
Author, Getting Started with Apache Maven
Author, HttpUnit and SimpleStub

Have you listened to Edict Zero? If not, you don't know what you're missing!





On Sep 2, 2020, at 4:02 AM, rold9888@... wrote:





 



On Wed, 2 Sep 2020 at 15:28, Russell Gold <russ@...> wrote:
Even after the refactor is completed successfully, more work remains! To ensure that every unit in your system is paired with a well-designed (I call it "symmetrical") unit test, you now have to design a new unit test that characterizes the behavior of the new child object.

I had to read that multiple times and read on to make sure that he was actually saying what it sounded as though he was saying. It (and in fact his whole approach) sounds to me a lot like design-code-test rather than the test-code-refactor approach I've been using.

Note that the context here is not greenfield TDD but bringing existing code that starts entirely without tests into testing, refactoring along the way, initially via characterization tests etc. The comment above should not be read without that background context, as it could otherwise be misleading. Given you start with code without tests, there are going to be some tests written after the fact. Even still, you could argue that one should start TDD'ing at the point where you have your characterisation tests and start refactoring towards a new object (say), which, if done in that way, would mean you should not then end up having to write further "symmetrical" unit tests after the fact. Perhaps that was your point and I'm just slow catching on. Still, for the benefit of others, I'm mentioning the context, as I initially read this and got slightly the wrong idea without it.


 

On Thu, Sep 3, 2020 at 12:31 AM, Walter Prins wrote:
Note that the context here is not greenfield TDD but bringing existing code that starts entirely without tests into testing, refactoring along the way, initially via characterization tests [...]

I don't actually think this is what he's saying here. He's describing a situation in which a class has been developed with TDD, and has good coverage, but it's growing too large and has taken on too many responsibilities. In the "refactor" step of TDD, the solution to this problem is to extract a class that is responsible for one of those responsibilities.

Whereas I'd probably be comfortable treating this extracted class (he calls it a "child", at the risk of terminology colliding with inheritance) as an implementation detail of the "parent", and not writing a unit test for it, he seems to take it as axiomatic that it requires its own "symmetrical" test. (Exactly what he means by symmetrical is documented on his wiki. That wiki is a whole other source of interesting reading.) This is why that section is called "Characterization Tests of Greenfield Code" -- because the child class has not itself been developed by TDD.

Having said that I wouldn't immediately bother writing a dedicated test for the child, I think you really see the value of his approach when the child develops a second client (or more) and then a change is required in the behaviour of that child. If the behaviour of the child is tested through its clients, then each test of those clients that depends on that behaviour is going to need modifying, which is drudgery. If, by contrast, you write a test at the level of the coordination between the child and its clients, then those tests don't need to be affected by the change in the child's behaviour. This approach, however, obviously depends on the child's real functionality being covered by its own test.
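A sketch of that arrangement (RSpec; ReportFormatter as the extracted "child" and ReportPublisher as one of its clients -- both invented names):

require "rspec"
require "rspec/autorun"

class ReportFormatter               # the extracted "child"
  def format(data)
    data.map(&:to_s).join(", ")
  end
end

class ReportPublisher               # one of potentially several clients
  def initialize(formatter)
    @formatter = formatter
  end

  def publish(data)
    @formatter.format(data)         # coordination only; formatting lives in the child
  end
end

RSpec.describe ReportFormatter do
  # The child's real behaviour is covered in exactly one place...
  it "joins the items with commas" do
    expect(ReportFormatter.new.format([1, 2])).to eq("1, 2")
  end
end

RSpec.describe ReportPublisher do
  # ...while each client's test pins down only the coordination, so a change
  # to the formatting behaviour means one test to update, not one per client.
  it "delegates formatting to its formatter" do
    formatter = instance_double(ReportFormatter, format: "1, 2")
    expect(ReportPublisher.new(formatter).publish([1, 2])).to eq("1, 2")
  end
end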


 

I think this talk is a great place to understand how Justin approaches mocks and TDD:



His style, though, is an iteration on the London-school approach that's sufficiently different that he separates it out in his discussion of the different approaches on that wiki:


 



On Thu, 3 Sep 2020 at 13:32, <rold9888@...> wrote:

Apologies, you're correct; it's I who misread (skim-read and then skim re-read, and managed to get the wrong end of the stick :rolleyes:) -- "too much hurry"... Thanks for pointing that out. :)

Walter