Wow, that's depressing.
For me, this is a major portion of the disconnect:
Even after the refactor is completed successfully, more work remains! To ensure that every unit in your system is paired with a well-designed (I call it "symmetrical") unit test, you now have to design a new unit test that characterizes the behavior of the new child object.
I had to read that multiple times and read on to make sure that he was actually saying what it sounded as though he was saying. It (and in fact his whole approach) sounds to me a lot like design-code-test rather than the test-code-refactor approach I've been using. I confess that I hadn't actually paid all that much attention to this "London-style" approach. I hate the term "Mockist" here, as it seems to imply that the Detroit approach eschews test doubles. I presume that other people don't hear it that way? I'm gathering now that what it means is simply that you use test doubles only when the unit test requires them, rather than as a way to force everything to be decoupled.
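To illustrate the distinction as I understand it, here is a minimal sketch (the `Invoice`/`TaxTable` names and numbers are hypothetical, not from the article): a Detroit-style test uses the real collaborator when it is cheap and deterministic, while a London-style test doubles every collaborator to isolate the unit completely.

```python
from unittest.mock import Mock

class TaxTable:
    """A pure, fast collaborator -- no inherent need to double it."""
    def rate_for(self, amount):
        return 0.2 if amount > 100 else 0.1

class Invoice:
    def __init__(self, tax_table):
        self.tax_table = tax_table

    def total(self, amount):
        return amount * (1 + self.tax_table.rate_for(amount))

# Detroit style: double only what the test requires; here, nothing.
def test_total_detroit():
    invoice = Invoice(TaxTable())
    assert invoice.total(200) == 240.0

# London style: double the collaborator to decouple the unit under test.
def test_total_london():
    table = Mock()
    table.rate_for.return_value = 0.2
    invoice = Invoice(table)
    assert invoice.total(200) == 240.0
    table.rate_for.assert_called_once_with(200)
```

Both tests pass against the same code; the difference is only in how much of the object graph each is willing to exercise.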
But he does raise a concern that I have recognized: most people don't refactor because they don't sense the code smells. Most developers don't know what good code looks like. I have often used Martin Fowler's Video Store sample to demonstrate refactoring, and when I ask for opinions of the original code, most people say it looks good; they tend to judge code primarily by how much effort is involved in trying to understand it.
That suggests that the most important lesson to be taught is the recognition of bad code, probably by teaching why code smells matter.
In Uncle Bob's "Clean Code," he notes the rule about avoiding "magic numbers." That advice is hardly new; I have books from the 1970s making the same recommendation, and yet programmers routinely violate it. Yourdon and Constantine wrote about coupling and cohesion in 1975, and yet most code I see shows that even very senior programmers fail to internalize those rules. In some ways, we aren't making a lot of progress.
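For readers who haven't seen the rule in action, a small before-and-after sketch (the rates and fees here are invented for illustration, not taken from any of the books mentioned):

```python
# Before: the meaning of 1.0825, 168, and 12 lives only in the author's head.
def invoice_total_bad(subtotal, hours):
    return subtotal * 1.0825 + max(0, hours - 168) * 12

# After: named constants carry the intent the bare literals hid.
SALES_TAX_RATE = 0.0825        # hypothetical local tax rate
HOURS_PER_WEEK = 168           # 24 * 7
OVERAGE_FEE_PER_HOUR = 12      # hypothetical surcharge

def invoice_total(subtotal, hours):
    taxed = subtotal * (1 + SALES_TAX_RATE)
    overage_hours = max(0, hours - HOURS_PER_WEEK)
    return taxed + overage_hours * OVERAGE_FEE_PER_HOUR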
-----------------
Author, Getting Started with Apache Maven <>
Author, HttpUnit <> and SimpleStub <>
Now blogging at <>
Have you listened to Edict Zero <>? If not, you don't know what you're missing!
But the art in TDD is finding the next test which forces the code to move towards the design you want.
I think this difficulty is more or less the driving motivation behind Justin Searls' London-style approach to TDD. Here's a choice quote:
The first test will result in some immediate problem-solving implementation code. The second test will demand some more. The third test will complicate your design further. At no point will the act of TDD per se prompt you to improve the intrinsic design of your implementation by breaking your large unit up into smaller ones.
Preventing your code's design from growing into a large, sprawling mess is left as an exercise to the developer. This is why many TDD advocates call for a "heavy refactor step" after tests pass, because they recognize this workflow requires intervention on the part of the developer to step back and identify any opportunities to simplify the design.
Refactoring after each green test is gospel among TDD advocates ("red-green-refactor," after all), but in practice most developers skip it, because nothing about the TDD workflow inherently compels people to refactor until they've got a mess on their hands.
Some teachers deal with this problem by exhorting developers to refactor rigorously with an appeal to virtues like discipline and professionalism. That doesn't sound like much of a solution to me, however. Rather than question the professionalism of someone who's already undertaken the huge commitment to practice TDD, I'd rather question whether the design of my tools and practices is encouraging me to do the right thing at each step in my workflow.
The full thing is well worth a read: <>.