
Re: TDD in type-checked FP languages

 

On Tue, Feb 9, 2021 at 11:03 AM Arnaud Bailly <arnaud.oqube@...> wrote:

This is a subject dear to my heart

I know. :)

[...] The first (and most important) thing I have noticed is that types play in part the role tests would have in other languages, in the sense that I use the compiler to provide feedback when I code, for example using so-called "holes" that can be filled somewhat automatically by the compiler. In essence, the compiler's output is just like another output of the dev environment, like tests, leading me to needed changes. This is particularly true when refactoring: changing a type, or a type's structure, leads to compiler errors to fix, then to test errors, then to more tests to introduce behavior.

This matches my experience. We had this discussion back in Rochegude about the desire to write tests for the pieces, but then wanting to trust composition to glue those pieces together.

The way I write code is usually very outside-in, starting from a high-level function with some type, and then zooming in, refining the type. I go back and forth between types and tests in this process. This might be supported in different ways depending on the language, and it's one of the reasons I have grown interested over the past few years in Idris (and dependently typed languages in general), as they promise to offer more opportunity to design code incrementally through a dialogue with the compiler.

This conforms to my mental model that says that the type checker runs mandatory microtests that I would otherwise probably not spend the time to write.

Another way types have changed the way I do TDD is that I try hard to write property-based tests instead of example-based tests. Writing a property first, then letting the failing examples drive the implementation provides a great feedback loop that has the advantage of forcing you to cover a lot of cases you wouldn't usually care to cover.

I noticed, when working in Elm, that I would begin with examples, then refactor towards property-based tests. It might be that I merely don't yet think "natively" in terms of properties.
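
A tiny illustration of the direction of that refactoring, as a Haskell sketch with a toy function (elm-test's fuzzers play the QuickCheck role):

-- example-based: a few hand-picked cases
examples :: Bool
examples = reverse [1, 2, 3] == [3, 2, 1] && reverse "ab" == "ba"

-- the property those examples were groping toward
prop_reverseInvolution :: [Int] -> Bool
prop_reverseInvolution xs = reverse (reverse xs) == xs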

Also, in Haskell especially, having a strict separation of pure and impure code has a strong effect on the way I design and write code: because the compiler has a much easier task when working on pure functions, I try very hard to stay in the pure world, which means pushing effects back to the edges of the code, which of course leads naturally to a ports-and-adapters-like "architecture".

Removing duplication from tests, especially of irrelevant details, tends to push the programmer to isolate pure code from impure code; at worst, they have a pure core slightly contaminated with collecting parameters entirely in memory. When working in Elm or Purescript, I push myself to stay in pure code more ruthlessly.

Last but not least, working in an expressive type system makes it possible to have your code's language closer to your domain's, expressing "constraints" from the domain directly in the types rather than laboriously test-driving some encoding. Here is an example I have used recently at a conference, which you might find interesting (unfortunately in French):

No worries, as you already know...

[...] It's about representing a French SSN. I have tried to provide a strong case for better types by comparing a String-based representation with a strongly-typed one, with associated tests.

I'll read this happily.

Over Christmas I spent some time trying to do the Advent of Code problems in Purescript. I learned a lot in only 2 weeks. I merely need more practice.
--
J. B. (Joe) Rainsberger :: :: ::


--
J. B. (Joe) Rainsberger :: :: ::
Teaching evolutionary design and TDD since 2002


Swift UI for iOS Apps - What I'm learning...

 

Hello folks,

I just came across this Groups.io - thanks JB. Since the best TDD minds in the Universe are here, maybe I can get some help learning...


My area of study these last few months has been Swift for iOS Apps. I've written lots of example code - but that only takes your learning so far (and it's not far enough!) - so I ventured beyond tutorials, wrote an App for some colleagues, and published it in the Apple App Store. Now that required lots of learning that is not in the tutorials. See InspireMe! Cards - Real Advice for Agilists

I wrote that App without any automated tests. I decided that was NOT the way to go in the future. So I needed to dust off the old Java TDD skills and apply them to Swift.

Xcode has wonderful integration for unit testing & even UI testing & performance testing.

A card-flipping App doesn't have much business logic or object model - but it has lots of views in the SwiftUI world. So testing the View layer is the primary need. Trying to learn the UI testing layer inside of Xcode is very difficult... I can't seem to find much documentation. All the examples of course WORK... but my attempts do not... so learning from "canned examples" is not fruitful.

I was invited to and jumped at that opportunity - he's a rock star in the TDD community and I highly recommend his awesome TDD course. Even if you are NOT an embedded C/C++ programmer you will learn lots from James. And to get a glimpse at his workflow - well, it is MIND-BLOWING. Do not think you know what TDD is until you've seen a master in action... to generalize: YOU are taking WAY too BIG of a STEP!

To solve this problem - I teamed up with the good folks at TDD.Academy and Lance Kind. He's mentoring me in TDD on SwiftUI with iOS App dev on his Live Coding Stream at TDD.Academy - shameless plug!

Yet I've still got many, many questions. And I've learned lots in the last several months. I now consider myself a TDD beginner and perhaps competent in SwiftUI testing... which means I can spend hours banging my head on testing a simple iOS app and fix it the next day - Competent.

So what have I learned? (maybe this is a good place for a list)
- Doing TDD is way easier with a mentor (programming pair) - I still forget to write the test first.
- Testing the Zero (NULL) case first
- Using the Empty-Class pattern
- Learning to crawl - before I attempt to RUN.
- Tons of Swift & SwiftUI stuff
- SwiftUI toolbars don't play well with the testing ID accessibilityIdentifier() - so you can quit banging your head on this one - my head is flat.
- UI tests are NOT fast... 3 orders of magnitude slower than a unit test.
- You MUST speed this up... so you go get ViewInspector and learn to unit test the View layer.
- Even View Inspector has problems with toolbars... so you go back to head banging...
- Apple has a big war chest of money - and they are not spending it on programmer documentation
- You need to know your View hierarchy - so in a test, "print(app.debugDescription)" is your savior

- You could try the dev tool Accessibility Inspector - but you will return to the print statement.
- You have to GROK the XCUI element query stuff... and no one will talk about it in the examples/tutorials
- You guessed it: Apple doesn't understand it either, so they did not document XCUIElementQuery...
- so something like app.buttons["my_button"].tap() might just poke your buttons


I'm journaling my experiments and learning opportunities; you are invited to come along with me.



Re: TDD in type-checked FP languages

 

Hi,
My own experience isn't that interesting, as Caml was one of the first languages I learnt (25 years ago), and I can't tell whether the changes I notice are due to the language itself or to the paradigm.
So I'm curious too: what have you noticed, JB?


On Tue, Feb 9, 2021 at 12:44, J. B. Rainsberger <me@...> wrote:
Hi, folks. I'm curious about your experiences in test-driving in type-checked FP languages. I've worked a little in Elm and I'm playing around with Purescript. I have noticed a few things, but I'm curious about what you've noticed.

Does anyone here have industrial-strength experience doing evolutionary design in one of these languages? What did you notice about how you practised TDD?

Thanks.
--
J. B. (Joe) Rainsberger :: :: ::


--
J. B. (Joe) Rainsberger :: :: ::
Teaching evolutionary design and TDD since 2002


Re: TDD in type-checked FP languages

 


Not what you asked, but when I was writing two (relatively simple) front ends in Elm, I deliberately didn't write tests, relied on the compiler. I was disappointed in the results, as I found more problems after I was "done" than I'm used to. Most of them seemed to be faults of omission - "code not complex enough for the problem". In theory, TDD isn't about those, so shouldn't cause me to realize them more quickly. In practice, it seems to, for me.

I worked a little on a Haskell project, mainly as a product owner and writing some tests for untested code. Not enough to have any conclusions, except that Haskell is a really nice language for writing tabular-style tests because of (1) relative lack of parentheses, (2) infix two-argument functions using backticks, and (3) precedence rules that didn't seem to work against me. I forget the details, but I had a number of tests like:

f 1 2 3 `gives` 5

(Maybe that would require parens?)
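
(It wouldn't, actually: function application binds more tightly than any backticked infix operator, so the line parses as (f 1 2 3) `gives` 5.) A self-contained sketch of the style, with gives as a hypothetical alias for Hspec's shouldBe:

import Test.Hspec

gives :: (Eq a, Show a) => a -> a -> Expectation
gives = shouldBe

-- stand-in function for the table
f :: Int -> Int -> Int -> Int
f x y z = x + y + z - 1

main :: IO ()
main = hspec $
  describe "f" $
    it "matches the table" $ do
      f 1 2 3 `gives` 5
      f 0 0 1 `gives` 0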

Elixir is OK for tabular tests because of the pipe operator, but it's not as good.



On Feb 9, 2021, at 5:43 AM, J. B. Rainsberger <me@...> wrote:

Hi, folks. I'm curious about your experiences in test-driving in type-checked FP languages. I've worked a little in Elm and I'm playing around with Purescript. I have noticed a few things, but I'm curious about what you've noticed.

Does anyone here have industrial-strength experience doing evolutionary design in one of these languages? What did you notice about how you practised TDD?

Thanks.
--
J. B. (Joe) Rainsberger :: :: ::


--
J. B. (Joe) Rainsberger :: :: ::
Teaching evolutionary design and TDD since 2002


Re: TDD in type-checked FP languages

 

Hello J.B.,

This is a subject dear to my heart, and I have had some experience doing various forms of TDD in Haskell in the past years, after more than 10 years trying to practise it in Java.
I even proposed a session on this very topic, entitled "Type and Test Driven Development", at this year's DDDEurope, but had to pull it back for personal reasons.

The first (and most important) thing I have noticed is that types play in part the role tests would have in other languages, in the sense that I use the compiler to provide feedback when I code, for example using so-called "holes" that can be filled somewhat automatically by the compiler. In essence, the compiler's output is just like another output of the dev environment, like tests, leading me to needed changes. This is particularly true when refactoring: changing a type, or a type's structure, leads to compiler errors to fix, then to test errors, then to more tests to introduce behavior.
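
For a concrete (if tiny) illustration of the holes workflow, with an invented function: writing an underscore where the body should go makes GHC report the hole's type, and its valid-hole-fit suggestions (on by default in recent GHCs) list in-scope bindings that would fill it.

pairUp :: [a] -> [b] -> [(a, b)]
pairUp = _
-- GHC: Found hole: _ :: [a] -> [b] -> [(a, b)]
--      Valid hole fits include: zip, pairUp, ...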

The way I write code is usually very outside-in, starting from a high-level function with some type, and then zooming in, refining the type. I go back and forth between types and tests in this process. This might be supported in different ways depending on the language, and it's one of the reasons I have grown interested over the past few years in Idris (and dependently typed languages in general), as they promise to offer more opportunity to design code incrementally through a dialogue with the compiler.

Another way types have changed the way I do TDD is that I try hard to write property-based tests instead of example-based tests. Writing a property first, then letting the failing examples drive the implementation provides a great feedback loop that has the advantage of forcing you to cover a lot of cases you wouldn't usually care to cover.
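
For instance, a property-first session might start from a minimal QuickCheck sketch like this one (Data.List.insert stands in here for the implementation that the failing cases would normally drive out):

import Data.List (insert, sort)
import Test.QuickCheck

isSorted :: [Int] -> Bool
isSorted xs = and (zipWith (<=) xs (drop 1 xs))

-- the property comes first; its counterexamples drive the implementation
prop_insertKeepsSorted :: Int -> [Int] -> Bool
prop_insertKeepsSorted x xs = isSorted (insert x (sort xs))

main :: IO ()
main = quickCheck prop_insertKeepsSorted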

Also, in Haskell especially, having a strict separation of pure and impure code has a strong effect on the way I design and write code: because the compiler has a much easier task when working on pure functions, I try very hard to stay in the pure world, which means pushing effects back to the edges of the code, which of course leads naturally to a ports-and-adapters-like "architecture".
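
In the small, the shape is always the same (a minimal sketch with invented names): a pure core that the compiler and the tests can both get a firm grip on, and a thin impure shell that merely wires it to the world.

import Data.Char (toUpper)

-- pure core: trivially testable, fully checked by the compiler
shout :: String -> String
shout s = map toUpper s ++ "!"

-- impure edge: effects pushed to the boundary
main :: IO ()
main = interact shout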

Last but not least, working in an expressive type system makes it possible to have your code's language closer to your domain's, expressing "constraints" from the domain directly in the types rather than laboriously test-driving some encoding. Here is an example I have used recently at a conference, which you might find interesting (unfortunately in French): it's about representing a French SSN. I have tried to provide a strong case for better types by comparing a String-based representation with a strongly-typed one, with associated tests.
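
The gist, heavily simplified (the real French SSN has more structure than a digit count, so treat this as a sketch): keep the constructor abstract and expose only a validating smart constructor, so every function that receives an SSN can rely on the invariant without re-testing it.

import Data.Char (isDigit)

newtype SSN = SSN String
  deriving (Eq, Show)

-- in a real module, the SSN constructor would not be exported
mkSSN :: String -> Maybe SSN
mkSSN raw
  | length digits == 15 && all isDigit digits = Just (SSN digits)
  | otherwise                                 = Nothing
  where digits = filter (/= ' ') raw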

--?
Arnaud Bailly - @dr_c0d3


On Tue, Feb 9, 2021 at 12:44 PM J. B. Rainsberger <me@...> wrote:
Hi, folks. I'm curious about your experiences in test-driving in type-checked FP languages. I've worked a little in Elm and I'm playing around with Purescript. I have noticed a few things, but I'm curious about what you've noticed.

Does anyone here have industrial-strength experience doing evolutionary design in one of these languages? What did you notice about how you practised TDD?

Thanks.
--
J. B. (Joe) Rainsberger :: :: ::


--
J. B. (Joe) Rainsberger :: :: ::
Teaching evolutionary design and TDD since 2002


TDD in type-checked FP languages

 

Hi, folks. I'm curious about your experiences in test-driving in type-checked FP languages. I've worked a little in Elm and I'm playing around with Purescript. I have noticed a few things, but I'm curious about what you've noticed.

Does anyone here have industrial-strength experience doing evolutionary design in one of these languages? What did you notice about how you practised TDD?

Thanks.
--
J. B. (Joe) Rainsberger :: :: ::


--
J. B. (Joe) Rainsberger :: :: ::
Teaching evolutionary design and TDD since 2002


FYI: Downgraded group to free plan

 

Hi, folks. I've downgraded this group to the free plan on groups.io, so if you notice anything lost or broken as a result of this change, then please let me know. As far as I can tell, nothing important will be lost.
--
J. B. (Joe) Rainsberger :: :: ::


Re: Tests more complex than the solution they drive

 

It depends: query handlers tend to have a repository interface injected, and the repository encapsulates the EF LINQ in a read-only way. Command handlers will generally perform operations on EF collections using methods exposed on the Domain Entities and then save the changes. Orchestrating handlers, in theory, shouldn't need to modify entities; their job is to pass query results to command handlers.


Re: Tests more complex than the solution they drive

 


Is the design such that handlers manipulate EF collections directly?

On Sep 7, 2020, at 3:10 AM, paulnmackintosh <pnm@...> wrote:


I am currently working on what you could categorise as a workflow automation product.

The product is comprised of independently deployed front and back end components.


The core domain is realised by a single .Net Core solution. The front end interacts with it via a REST API and it interacts with further components (and occasionally itself) using messaging.


Two core aspects of the internal design are Entity Framework for data access and extensive use of the mediator pattern.

Using mediator, every element of the solution is broken down into sets of 3 classes: request, handler, and result. Requests and results are POCOs, whilst handlers implement behaviour and require few dependencies. Often, in cases where the only job to do is orchestrate the sending and receiving of requests and results, the sole dependency is the mediator type itself. Other requests fall into the two categories of query or command.


All told, a well-thought-out and pleasant stack to work in. Mediator leads to few dependencies, which in turn leads to simple classes that are easy to drive with tests.


Except for the fact that nothing is ever quite as simple as one would like.


There is a problem writing tests in this solution: to test a simple isolated query, the test setup must replay all the steps the application would theoretically take in order to reach the point where the query can return sensible results when issued against Entity Framework over an in-memory database.

This is achieved by gathering and then issuing a collection of steps, where each step is a high-level mediator message, as the "Given"s and "When"s of a test scenario. During the "Then", assertions are made against the result POCOs that were observed by a stub implementation of the mediator type, e.g. assert that the last created entity of type "A" should have such and such an "A.Name".


I believe the goal of the approach is to use the real application code to avoid making assumptions about the state of the system when writing tests; however, the more steps required to reach the prerequisite snapshot of state, the greater the reliance on developers writing new tests to comprehend the set of steps that should be replayed in order to reach the point from which they want to proceed.


It feels to me as though what has gone wrong here is that although the application code has a nicely decoupled architecture, the database and/or the Entity Framework domain model layer does not, and as a result entities depend on other entities in a way that makes them intrinsically co-dependent. This interdependency is revealed by tests which cannot drive simple code without themselves being complicated.


I thought perhaps I could lean on the experience in this group to gain some suggestions on how to reduce this kind of test friction?


Paul


Re: Tests more complex than the solution they drive

 

¿ªÔÆÌåÓý

Is it possible to place an object in a desired state without all of the builder steps? If so, you could then give it stimuli and test just single state transitions. I would be concerned, if I worked on a system like this, that the testing time would become rather problematic.

On Sep 7, 2020, at 7:17 AM, paulnmackintosh <pnm@...> wrote:

The in-solution 'framework' that has evolved for these tests does have a bit of a system-test feel to it, although it is being leant on to test-drive new classes with new behaviours.
Here's an example in pseudo-code in case it helps:

[Fact]
public async Task ExampleFact()
{
    await TestScenario
        .Given(builder => builder
            .UserCreatesEntityA("Entity A")
            .UserCreatesEntityB("Entity B", new[] { new DetailForB { Name = "Detail B" } })
            .SystemCreatesEntityC()
            .SystemActsOnAAndBAndC())
        .When(builder =>
            builder.ExerciseNewUnitUnderTest())
        .Then(results =>
        {
            // use results to assert
        });
}



Re: Tests more complex than the solution they drive

 

Yes, I can see a fair number of tests that are currently using common setup steps. These could be encapsulated at a higher level of abstraction, and doing so will remove duplication in the test code. I wonder if this will lead to slow test execution over time, though, as it will manage complexity consistently rather than reduce it.


Re: Tests more complex than the solution they drive

 

Do you have common use cases where you can use the builder pattern to create the context? Or is the set of steps different in each scenario?
brought to you by the letters A, V, and I
and the number 47


On Mon, Sep 7, 2020 at 4:55 PM Rob Park <robert.d.park@...> wrote:
For me, it's great that you have 2 independently deployable pieces, though I would ensure I also had independently runnable tests, with the exception of contract tests that check your expectations of the backend interactions. This will have the benefits of being easier to reason about (at least once you start to feel comfortable with the concept), faster test suites, and less brittle maintenance.

In a similar setup to what you have, I've had end-to-end system tests in the past, but I literally only had 1 or 2 validating that the most important use case still worked after each deploy (of either side).

Good luck!


On Mon, Sep 7, 2020 at 7:17 AM paulnmackintosh <pnm@...> wrote:

The in-solution 'framework' that has evolved for these tests does have a bit of a system-test feel to it, although it is being leant on to test-drive new classes with new behaviours.
Here's an example in pseudo-code in case it helps:

[Fact]
public async Task ExampleFact()
{
    await TestScenario
        .Given(builder => builder
            .UserCreatesEntityA("Entity A")
            .UserCreatesEntityB("Entity B", new[] { new DetailForB { Name = "Detail B" } })
            .SystemCreatesEntityC()
            .SystemActsOnAAndBAndC())
        .When(builder =>
            builder.ExerciseNewUnitUnderTest())
        .Then(results =>
        {
            // use results to assert
        });
}


Re: Tests more complex than the solution they drive

Rob Park
 

For me, it's great that you have 2 independently deployable pieces, though I would ensure I also had independently runnable tests, with the exception of contract tests that check your expectations of the backend interactions. This will have the benefits of being easier to reason about (at least once you start to feel comfortable with the concept), faster test suites, and less brittle maintenance.

In a similar setup to what you have, I've had end-to-end system tests in the past, but I literally only had 1 or 2 validating that the most important use case still worked after each deploy (of either side).

Good luck!


On Mon, Sep 7, 2020 at 7:17 AM paulnmackintosh <pnm@...> wrote:

The in-solution 'framework' that has evolved for these tests does have a bit of a system-test feel to it, although it is being leant on to test-drive new classes with new behaviours.
Here's an example in pseudo-code in case it helps:

[Fact]
public async Task ExampleFact()
{
    await TestScenario
        .Given(builder => builder
            .UserCreatesEntityA("Entity A")
            .UserCreatesEntityB("Entity B", new[] { new DetailForB { Name = "Detail B" } })
            .SystemCreatesEntityC()
            .SystemActsOnAAndBAndC())
        .When(builder =>
            builder.ExerciseNewUnitUnderTest())
        .Then(results =>
        {
            // use results to assert
        });
}


Re: Tests more complex than the solution they drive

 

The in-solution 'framework' that has evolved for these tests does have a bit of a system-test feel to it, although it is being leant on to test-drive new classes with new behaviours.
Here's an example in pseudo-code in case it helps:

[Fact]
public async Task ExampleFact()
{
    await TestScenario
        .Given(builder => builder
            .UserCreatesEntityA("Entity A")
            .UserCreatesEntityB("Entity B", new[] { new DetailForB { Name = "Detail B" } })
            .SystemCreatesEntityC()
            .SystemActsOnAAndBAndC())
        .When(builder =>
            builder.ExerciseNewUnitUnderTest())
        .Then(results =>
        {
            // use results to assert
        });
}


Re: Tests more complex than the solution they drive

 


I'm a bit confused by your description, as it sounds as though you are doing system tests. That's the most obvious case in which you'd need to "replay all the steps the application would theoretically take in order to reach the point whereby the query can return sensible results." One of the advantages of unit tests is that you don't do that, as you are testing each small part of the system.

Is the problem that even the small parts have to be brought to a state via complex steps?

Russ
-----------------
Author, Getting Started with Apache Maven <>
Author, HttpUnit <> and SimpleStub <>
Now blogging at <>

Have you listened to Edict Zero <>? If not, you don't know what you're missing!





On Sep 7, 2020, at 6:10 AM, paulnmackintosh <pnm@...> wrote:

I am currently working on what you could categorise as a workflow automation product.

The product is comprised of independently deployed front and back end components.

The core domain is realised by a single .Net Core solution. The front end interacts with it via a REST API and it interacts with further components (and occasionally itself) using messaging.

Two core aspects of the internal design are Entity Framework for data access and extensive use of the mediator pattern.
Using mediator, every element of the solution is broken down into sets of 3 classes: request, handler, and result. Requests and results are POCOs, whilst handlers implement behaviour and require few dependencies. Often, in cases where the only job to do is orchestrate the sending and receiving of requests and results, the sole dependency is the mediator type itself. Other requests fall into the two categories of query or command.

All told, a well-thought-out and pleasant stack to work in. Mediator leads to few dependencies, which in turn leads to simple classes that are easy to drive with tests.

Except for the fact that nothing is ever quite as simple as one would like.

There is a problem writing tests in this solution: to test a simple isolated query, the test setup must replay all the steps the application would theoretically take in order to reach the point where the query can return sensible results when issued against Entity Framework over an in-memory database.
This is achieved by gathering and then issuing a collection of steps, where each step is a high-level mediator message, as the "Given"s and "When"s of a test scenario. During the "Then", assertions are made against the result POCOs that were observed by a stub implementation of the mediator type, e.g. assert that the last created entity of type "A" should have such and such an "A.Name".

I believe the goal of the approach is to use the real application code to avoid making assumptions about the state of the system when writing tests; however, the more steps required to reach the prerequisite snapshot of state, the greater the reliance on developers writing new tests to comprehend the set of steps that should be replayed in order to reach the point from which they want to proceed.

It feels to me as though what has gone wrong here is that although the application code has a nicely decoupled architecture, the database and/or the Entity Framework domain model layer does not, and as a result entities depend on other entities in a way that makes them intrinsically co-dependent. This interdependency is revealed by tests which cannot drive simple code without themselves being complicated.

I thought perhaps I could lean on the experience in this group to gain some suggestions on how to reduce this kind of test friction?

Paul


Tests more complex than the solution they drive

 

I am currently working on what you could categorise as a workflow automation product.

The product is comprised of independently deployed front and back end components.


The core domain is realised by a single .Net Core solution. The front end interacts with it via a REST API and it interacts with further components (and occasionally itself) using messaging.


Two core aspects of the internal design are Entity Framework for data access and extensive use of the mediator pattern.

Using mediator, every element of the solution is broken down into sets of 3 classes: request, handler, and result. Requests and results are POCOs, whilst handlers implement behaviour and require few dependencies. Often, in cases where the only job to do is orchestrate the sending and receiving of requests and results, the sole dependency is the mediator type itself. Other requests fall into the two categories of query or command.


All told, a well-thought-out and pleasant stack to work in. Mediator leads to few dependencies, which in turn leads to simple classes that are easy to drive with tests.


Except for the fact that nothing is ever quite as simple as one would like.


There is a problem writing tests in this solution: to test a simple isolated query, the test setup must replay all the steps the application would theoretically take in order to reach the point where the query can return sensible results when issued against Entity Framework over an in-memory database.

This is achieved by gathering and then issuing a collection of steps, where each step is a high-level mediator message, as the "Given"s and "When"s of a test scenario. During the "Then", assertions are made against the result POCOs that were observed by a stub implementation of the mediator type, e.g. assert that the last created entity of type "A" should have such and such an "A.Name".


I believe the goal of the approach is to use the real application code to avoid making assumptions about the state of the system when writing tests; however, the more steps required to reach the prerequisite snapshot of state, the greater the reliance on developers writing new tests to comprehend the set of steps that should be replayed in order to reach the point from which they want to proceed.


It feels to me as though what has gone wrong here is that although the application code has a nicely decoupled architecture, the database and/or the Entity Framework domain model layer does not, and as a result entities depend on other entities in a way that makes them intrinsically co-dependent. This interdependency is revealed by tests which cannot drive simple code without themselves being complicated.


I thought perhaps I could lean on the experience in this group to gain some suggestions on how to reduce this kind of test friction?


Paul


Re: How would you respond?

 

My only problem with the repo is that it presents a 'straw man' argument to the Detroit School. For example, a lot of London School practice uses Clean Architecture/Ports & Adapters and tests at the level of the Port, just as the Detroit School does. It does not have to test all the way at the outside, as the author implies.

The mistake there is to believe that TDD needs to deliver 'complete' coverage; it doesn't. It may be sensible to use the Detroit School to drive the Port and Application, and ignore the Adapter layer altogether for TDD. That adapter layer can be replaced with test doubles; the rules are: does it prevent tests from running together (shared fixture), or does it make them slow? I would also suggest that the Detroit School is clear about when to use a test double. Now you might want automated tests of your adapter layer, but those are not required to be created by TDD.
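
To make "replaced with test doubles" concrete, here is a sketch of a port and an in-memory double (in Haskell for brevity, with invented names; the idea is the same in C#):

import Data.IORef

-- the port: what the application needs from the outside world
newtype Mailer = Mailer { sendMail :: String -> IO () }

-- test double: records what would have been sent instead of doing real IO
inMemoryMailer :: IO (Mailer, IO [String])
inMemoryMailer = do
  sent <- newIORef []
  pure ( Mailer (\msg -> modifyIORef sent (msg :))
       , reverse <$> readIORef sent )

-- application code depends only on the port, never on SMTP
notifyUser :: Mailer -> String -> IO ()
notifyUser mailer name = sendMail mailer ("Hello, " ++ name)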

By providing false evidence of bad practice, and then criticizing the approach for things it does not do, the author undermines their own argument.

Both schools have their own trade-offs. FWIW, I side with Kent that tests should depend on requirements, not structure.




Re: How would you respond?

 



On Thu, 3 Sep 2020 at 13:32, <rold9888@...> wrote:
On Thu, Sep 3, 2020 at 12:31 AM, Walter Prins wrote:
Note that the context here is not greenfield TDD but bringing existing code that starts entirely without tests into testing, and refactoring along the way, initially via characterization tests etc. The comment above should not be read without that background context, as it could otherwise be potentially misleading. Given you start with code without tests, there are going to be some tests written after the fact; though even then, you could argue that one should start TDD'ing at the point where you have your characterisation tests and start refactoring towards a new object (say), which, if done that way, would mean you should not then end up having to write further "symmetrical" unit tests after the fact. Perhaps that was your point and I'm just slow catching on. Still, for the benefit of others, just mentioning the context, as I initially read this and got slightly the wrong idea without it.

I don't actually think this is what he's saying here. He's describing a situation in which a class has been developed with TDD, and has good coverage, but it has grown too large and taken on too many responsibilities. In the "refactor" step of TDD, the solution to this problem is to extract a class that is responsible for one of those responsibilities.

Whereas I'd probably be comfortable treating this extracted class (he calls it a "child", at the risk of terminology colliding with inheritance) as an implementation detail of the "parent", and not writing a unit test for it, he seems to take it as axiomatic that it requires its own "symmetrical" test. (Exactly what he means by symmetrical is documented on his wiki; that wiki is a whole other source of interesting reading.) This is why that section is called "Characterization Tests of Greenfield Code" - because the child class has not itself been developed by TDD.

Having said that I wouldn't immediately bother writing a dedicated test for the child, I think you really see the value of his approach when the child develops a second client (or more) and then a change is required in the behaviour of that child. If the behaviour of the child is tested through its clients, then each test of those clients that depends on that behaviour is going to need modifying, which is drudgery. If, by contrast, you write a test at the level of the coordination between the child and its clients, then those tests don't need to be affected by the change in the child's behaviour. This approach, however, obviously depends on the child's real functionality being covered by its own test.

Apologies, you're correct; it's I who misread (skim-read and then skim re-read and managed to get the wrong end of the stick :rolleyes:) -- "too much hurry"... Thanks for pointing that out. :)

Walter


Re: How would you respond?

 

I think this talk is a great place to understand how Justin approaches mocks and TDD:



His style, though, is an iteration on the London-school approach that is sufficiently different that he separates it out in his discussion of the different approaches on that wiki:


Re: How would you respond?

 

On Thu, Sep 3, 2020 at 12:31 AM, Walter Prins wrote:
Note that the context here is not greenfield TDD but bringing existing code that starts entirely without tests into testing, and refactoring along the way, initially via characterization tests etc. The comment above should not be read without that background context, as it could otherwise be potentially misleading. Given you start with code without tests, there are going to be some tests written after the fact; though even then, you could argue that one should start TDD'ing at the point where you have your characterisation tests and start refactoring towards a new object (say), which, if done that way, would mean you should not then end up having to write further "symmetrical" unit tests after the fact. Perhaps that was your point and I'm just slow catching on. Still, for the benefit of others, just mentioning the context, as I initially read this and got slightly the wrong idea without it.

I don't actually think this is what he's saying here. He's describing a situation in which a class has been developed with TDD, and has good coverage, but it has grown too large and taken on too many responsibilities. In the "refactor" step of TDD, the solution to this problem is to extract a class that is responsible for one of those responsibilities.

Whereas I'd probably be comfortable treating this extracted class (he calls it a "child", at the risk of terminology colliding with inheritance) as an implementation detail of the "parent", and not writing a unit test for it, he seems to take it as axiomatic that it requires its own "symmetrical" test. (Exactly what he means by symmetrical is documented on his wiki; that wiki is a whole other source of interesting reading.) This is why that section is called "Characterization Tests of Greenfield Code" - because the child class has not itself been developed by TDD.

Having said that I wouldn't immediately bother writing a dedicated test for the child, I think you really see the value of his approach when the child develops a second client, (or more) and then a behaviour change is required in the behaviour of that child. If the behaviour of the child is tested through its clients, then each test of those clients that depends on that behaviour is going to need modifying, which is drudgery. If, by contrast, you write a test at the level of the coordination between the child and its clients, then they don't need to be affected by the change in the child's behaviour. This approach, however, obviously depends on the child's real functionality being covered by its own test.