Re: TDD in type-checked FP languages
On Tue, Feb 9, 2021 at 11:03 AM Arnaud Bailly <arnaud.oqube@...> wrote:
I know. :)
This matches my experience. We had this discussion back in Rochegude about the desire to write tests for the pieces, but then wanting to trust composition to glue those pieces together.
This conforms to my mental model that says that the type checker runs mandatory microtests that I would otherwise probably not spend the time to write.
I noticed, when working in Elm, that I would begin with examples, then refactor towards property-based tests. It might be that I merely don't yet think "natively" in terms of properties.
Removing duplication from tests, especially of irrelevant details, tends to push the programmer to isolate pure code from impure code; at worst, they end up with a pure core slightly contaminated with collecting parameters kept entirely in memory. When working in Elm or Purescript, I push myself to stay in pure code more ruthlessly.
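A minimal sketch of that "pure core with an in-memory collecting parameter" shape (in TypeScript for illustration rather than Elm or Purescript; all names below are mine, not from the post):

```typescript
// Pure core: each step returns a new value plus an updated in-memory "log"
// (the collecting parameter). No I/O happens anywhere in here.
function step(x: number, log: string[]): [number, string[]] {
  return [x * 2, [...log, `doubled ${x}`]];
}

function runPure(x0: number): [number, string[]] {
  const [x1, l1] = step(x0, []);
  const [x2, l2] = step(x1, l1);
  return [x2, l2];
}

// Impure shell: the only place that touches the console, pushed to the edge.
const [result, log] = runPure(3);
log.forEach((line) => console.log(line));
console.log(result); // 12
```

Tests can then exercise `runPure` directly and assert on the returned log, with no test doubles and no I/O.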
No big deal, as you already know...
I'll read this happily. Over Christmas I spent some time trying to do the Advent of Code problems in Purescript. I learned a lot in only 2 weeks. I merely need more practice.

--
J. B. (Joe) Rainsberger :: Teaching evolutionary design and TDD since 2002
Swift UI for iOS Apps - What I'm learning...
Hello folks, I just came across this Groups.io - thanks JB. Since the best TDD minds in the universe are here, maybe I can get some help learning...

My area of study these last few months has been Swift for iOS apps. I've written lots of example code - but that only takes your learning so far (and it's not far enough!) - so I ventured beyond tutorials, wrote an app for some colleagues, and published it in the Apple Store. Now that required lots of learning that is not in the tutorials. See InspireMe! Cards - Real Advice for Agilists.

I wrote that app without any automated tests. Decided that was NOT the way to go in the future. So I needed to dust off the old Java TDD skills and apply them to Swift. Xcode has wonderful integration for unit testing, and even UI testing and performance testing. A card-flipping app doesn't have much business logic or object model - but it has lots of views in the SwiftUI world. So testing the view layer is the primary need. Trying to learn the UI testing layer inside Xcode is very difficult... I can't seem to find much documentation. All the examples of course WORK... but my attempts do not... so learning from "canned examples" is not fruitful.

I was invited to James's TDD course and jumped at that opportunity - he's a rock star in the TDD community and I highly recommend his awesome course. Even if you are NOT an embedded C/C++ programmer, you will learn lots from James. And to get a glimpse at his workflow - well, it is MIND-BLOWING. Do not think you know what TDD is until you've seen a master in action. To generalize: YOU are taking WAY too BIG a STEP!

To solve this problem, I also teamed up with the good folks at TDD.Academy and Lance Kind. He's mentoring me in TDD on SwiftUI with iOS app dev on his live coding stream at TDD.Academy (shameless plug).

Yet I've still got many, many questions. And I've learned lots in the last several months. I now consider myself a TDD beginner and perhaps competent in SwiftUI testing... which means I can spend hours banging my head on a test of a simple iOS app and fix it the next day - Competent.

So what have I learned? (Maybe this is a good place for a list.)
- You could try the dev tool Accessibility Inspector - but you will return to the print statement.
Re: TDD in type-checked FP languages
Hi,

My own experience isn't interesting, as OCaml was one of the first languages I learnt (25 years ago), and I can't tell whether the changes I notice are due to the language itself or to the paradigm. So I'm curious too: what have you noticed, JB?

On Tue, Feb 9, 2021 at 12:44 PM, J. B. Rainsberger <me@...> wrote:
Re: TDD in type-checked FP languages
Not what you asked, but when I was writing two (relatively simple) front ends in Elm, I deliberately didn't write tests and relied on the compiler. I was disappointed in the results, as I found more problems after I was "done" than I'm used to. Most of them seemed to be faults of omission - "code not complex enough for the problem". In theory, TDD isn't about those, so it shouldn't cause me to realize them more quickly. In practice, it seems to, for me.

I worked a little on a Haskell project, mainly as a product owner and writing some tests for untested code. Not enough to have any conclusions, except that Haskell is a really nice language for writing tabular-style tests because of (1) the relative lack of parentheses, (2) infix two-argument functions using backticks, and (3) precedence rules that didn't seem to work against me. I forget the details, but I had a number of tests like:

f 1 2 3 `gives` 5

(Maybe that would require parens?) Elixir is OK for tabular tests because of the pipe operator, but it's not as good.
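For readers unfamiliar with Haskell's backtick syntax, the tabular style above can be approximated in TypeScript, though it loses the infix elegance. Both `gives` and `f` here are illustrative stand-ins, not the poster's real definitions:

```typescript
// `gives` throws on mismatch, so each "row" of the table is a self-checking test.
function gives<T>(actual: T, expected: T): void {
  if (actual !== expected) {
    throw new Error(`expected ${expected} but got ${actual}`);
  }
}

// Placeholder three-argument function so the table has something to exercise.
const f = (x: number, y: number, z: number): number => x + y * z;

// The "table": one row per case.
gives(f(1, 2, 3), 7);
gives(f(2, 0, 9), 2);
console.log("all rows passed");
```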
Re: TDD in type-checked FP languages
Hello J.B.,

This is a subject dear to my heart, and I have had some experience doing various forms of TDD in Haskell in the past years, after more than 10 years trying to practice it in Java. I even proposed a session on this very topic, entitled "Type and Test Driven Development", at this year's DDDEurope, but had to pull it back for personal reasons.

The first (and most important) thing I have noticed is that types play in part the role tests would have in other languages, in the sense that I use the compiler to provide feedback when I code, for example using so-called "holes" that can be filled somewhat automatically by the compiler. In essence, the compiler's output is just like another output of the dev environment, like tests, leading me to needed changes. This is particularly true when refactoring: changing a type, or a type's structure, leads to compiler errors to fix, then to test errors, then to more tests to introduce behaviour.

The way I write code is usually very outside-in, starting from a high-level function with some type, and then zooming in, refining the type. I go back and forth between types and tests in this process. This might be supported in different ways depending on the language, and it's one of the reasons I have grown interested over the past few years in Idris (and dependently typed languages in general), as they promise to offer more opportunity to design code incrementally through a dialogue with the compiler.

Another way types have changed how I do TDD is that I try hard to write property-based tests instead of example-based tests. Writing a property first, then letting the failing examples drive the implementation, provides a great feedback loop that has the advantage of forcing you to cover a lot of cases you wouldn't usually care to cover.
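The property-first loop described above can be sketched without any property-testing library (a hand-rolled check in TypeScript; in Haskell one would typically reach for QuickCheck). The round-trip property and the encode/decode pair are invented for illustration:

```typescript
// A toy encode/decode pair that the property constrains: encoding to code
// points and back should be the identity on strings.
const encode = (s: string): number[] => [...s].map((c) => c.codePointAt(0)!);
const decode = (ns: number[]): string =>
  ns.map((n) => String.fromCodePoint(n)).join("");

// The property comes first; any input that falsifies it is a failing test
// that drives the implementation.
const propRoundTrip = (s: string): boolean => decode(encode(s)) === s;

// Hand-rolled "generator": a fixed list of cases standing in for random input.
const cases = ["", "a", "hello", "déjà vu"];
console.log(cases.every(propRoundTrip)); // true
```

A real property-based tool would generate and shrink inputs automatically; the point here is only the shape of the loop: state the invariant, then let counterexamples drive the code.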
Also, in Haskell especially, having a strict separation of pure and impure code has a strong effect on the way I design and write code: because the compiler has a much easier task when working on pure functions, I try very hard to stay in the pure world, which means pushing effects back to the edges of the code, which of course leads naturally to a ports-and-adapters-like "architecture".

Last but not least, working in an expressive type system makes it possible to have your code's language closer to your domain's, expressing "constraints" from the domain directly in the types rather than laboriously test-driving some encoding. Here is an example I have used recently in a conference, which you might find interesting (unfortunately in French): it's about representing a French SSN. I have tried to provide a strong case for better types by comparing a String-based representation with a strongly-typed one, with associated tests.

--
Arnaud Bailly - @dr_c0d3

On Tue, Feb 9, 2021 at 12:44 PM J. B. Rainsberger <me@...> wrote:
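A heavily simplified sketch of the "constraints in the types" idea, using a branded type in TypeScript. The real French SSN has far more structure (including a control key), so the "exactly 13 digits" rule here is only a placeholder:

```typescript
// A branded type: the only way to obtain an SSN value is through mkSSN,
// so every SSN in the program is known to satisfy the domain rule.
type SSN = { readonly brand: "SSN"; readonly value: string };

function mkSSN(raw: string): SSN | null {
  // Placeholder domain rule: exactly 13 digits.
  return /^[0-9]{13}$/.test(raw) ? { brand: "SSN", value: raw } : null;
}

console.log(mkSSN("1850712123456") !== null); // true
console.log(mkSSN("not-a-number")); // null
```

Downstream functions can then take `SSN` instead of `string` and need no defensive re-validation, which is the contrast with the String-based representation described in the talk.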
TDD in type-checked FP languages
Hi, folks. I'm curious about your experiences in test-driving in type-checked FP languages. I've worked a little in Elm and I'm playing around with Purescript. I have noticed a few things, but I'm curious about what you've noticed. Does anyone here have industrial-strength experience doing evolutionary design in one of these languages? What did you notice about how you practised TDD? Thanks.

--
J. B. (Joe) Rainsberger :: Teaching evolutionary design and TDD since 2002
Re: Tests more complex than the solution they drive
It depends. Query handlers tend to have a repository interface injected; the repository encapsulates the EF LINQ in a read-only way. Command handlers will generally perform operations on EF collections using methods exposed on the domain entities and then save the changes. Orchestrating handlers, in theory, shouldn't need to modify entities; their job is to pass query results to command handlers.
Re: Tests more complex than the solution they drive
On Sep 7, 2020, at 3:10 AM, paulnmackintosh <pnm@...> wrote:
Re: Tests more complex than the solution they drive
Is it possible to place an object in a desired state without all of the builder steps? If so, you could then give it stimuli and test just single state transitions. I would be concerned, if I worked on a system like this, that the testing time would become rather problematic.
Re: Tests more complex than the solution they drive
Yes, I can see a fair number of tests that are currently using common setup steps. These could be encapsulated at a higher level of abstraction, and doing so would remove duplication in the test code. I wonder if this will lead to slow test execution over time, though, as it will manage the complexity consistently rather than reduce it.
Re: Tests more complex than the solution they drive
Do you have common use cases where you can use the builder pattern to create the context? Or is the set of steps different in each scenario? brought to you by the letters A, V, and I and the number 47 On Mon, Sep 7, 2020 at 4:55 PM Rob Park <robert.d.park@...> wrote:
Re: Tests more complex than the solution they drive
Rob Park
For me, it's great that you have 2 independently deployable pieces, though I would ensure I also had independently runnable tests, with the exception of contract tests to check your expectations of the backend interactions. This will have the benefits of being easier to reason about (at least once you start to feel comfortable with the concept), faster test suites, and less brittle maintenance. In a similar setup to what you have, I've had end-to-end system tests in the past, but I literally only had 1 or 2 validating that the most important use case still worked after each deploy (of either side). Good luck!

On Mon, Sep 7, 2020 at 7:17 AM paulnmackintosh <pnm@...> wrote:
Re: Tests more complex than the solution they drive
The in-solution 'framework' that has evolved for these tests does have a bit of a system-test feel to it, although it is being leant on to test-drive new classes with new behaviours.

[Fact]
    .Given(builder => builder
      .UserCreatesEntityA("Entity A")
      .UserCreatesEntityB("Entity B", new[] { new DetailForB { Name = "Detail B" }, })
Re: Tests more complex than the solution they drive
I'm a bit confused by your description, as it sounds as though you are doing system tests. That's the most obvious case in which you'd need to "replay all the steps the application would theoretically take in order to reach the point whereby the query can return sensible results." One of the advantages of unit tests is that you don't do that, as you are testing each small part of the system. Is the problem that even the small parts have to be brought to a state via complex steps?

Russ
-----------------
Author, Getting Started with Apache Maven
Author, HttpUnit and SimpleStub
Have you listened to Edict Zero? If not, you don't know what you're missing!
Tests more complex than the solution they drive
I am currently working on what you could categorise as a workflow automation product. The product comprises independently deployed front-end and back-end components.
Using a mediator, every element of the solution is broken down into sets of 3 classes: request, handler and result. Requests and results are POCOs, whilst handlers implement behaviour and require few dependencies. Often, in cases where the only job to do is orchestrate the sending and receiving of requests and results, the sole dependency is the mediator type itself. Other requests fall into the two categories of query or command.
This is achieved by gathering and then issuing a collection of steps, where each step is a high-level mediator message, as the "Given"s and "When"s of a test scenario. During the "Then", assertions are made against the result POCOs that were observed by a stub implementation of the mediator type, e.g. assert that the last created entity of type "A" should have such and such an "A.Name".
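A rough sketch of that step-collecting style (in TypeScript for brevity; the original is a C# mediator setup, and every name below is invented for illustration):

```typescript
// A stub mediator records every message it is asked to send, so the "Then"
// phase can assert against what was observed rather than against real effects.
type Message = { kind: string; payload: unknown };

class StubMediator {
  readonly sent: Message[] = [];
  send(m: Message): void {
    this.sent.push(m);
  }
  lastOfKind(kind: string): Message | undefined {
    return [...this.sent].reverse().find((m) => m.kind === kind);
  }
}

// The scenario gathers "Given"/"When" steps, then issues them all at once.
class Scenario {
  private steps: Message[] = [];
  constructor(private mediator: StubMediator) {}
  given(m: Message): this {
    this.steps.push(m);
    return this;
  }
  when(m: Message): this {
    this.steps.push(m);
    return this;
  }
  run(): void {
    this.steps.forEach((m) => this.mediator.send(m));
  }
}

const mediator = new StubMediator();
new Scenario(mediator)
  .given({ kind: "CreateEntityA", payload: { name: "Entity A" } })
  .when({ kind: "QueryLastCreated", payload: {} })
  .run();

// "Then": assert against what the stub observed.
console.log(mediator.lastOfKind("CreateEntityA")?.payload);
```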
Re: How would you respond?
My only problem with the repo is that it presents a 'straw man' argument against the Detroit School. For example, a lot of London School practice uses Clean Architecture/Ports & Adapters and tests at the level of the Port, just as the Detroit School does. It does not have to test all the way at the outside, as the author implies. The mistake there is to believe that TDD needs to deliver 'complete' coverage; it doesn't. It may be sensible to use the Detroit School to drive the Port and Application, and ignore the Adapter layer altogether for TDD. That adapter layer can be replaced with test doubles; the rules are: does it prevent tests from running together (shared fixture), or does it make them slow? I would also suggest that the Detroit School is clear about when to use a test double. Now, you might want to use automated tests of your adapter layer, but those are not required to be created by TDD. By providing false evidence of bad practice, and then criticizing the approach for things it does not do, the author undermines their own argument. Both schools have their own trade-offs. FWIW, I side with Kent that tests should depend on requirements, not structure.
Re: How would you respond?
On Thu, 3 Sep 2020 at 13:32, <rold9888@...> wrote:

On Thu, Sep 3, 2020 at 12:31 AM, Walter Prins wrote:

Apologies, you're correct; it's I who misread (skim-read, then skim re-read, and managed to get the wrong end of the stick :rolleyes:) -- "too much hurry"... Thanks for pointing that out. :)

Walter
Re: How would you respond?
On Thu, Sep 3, 2020 at 12:31 AM, Walter Prins wrote:
Note that the context here is not greenfield TDD, but bringing existing code that starts entirely without tests into testing and refactoring along the way, initially via characterization tests etc. The comment above should not be read without that background context, as it could otherwise be potentially misleading.

--

Given you start with code without tests, there are going to be some tests written after the fact, though even then you could argue that one should start TDD'ing at the point where you have your characterisation tests and start refactoring towards a new object (say), which, if done that way, would mean you should not then end up having to write further "symmetrical" unit tests after the fact. Perhaps that was your point and I'm just slow catching on. Still, for the benefit of others, I'm mentioning the context, as I initially read this and got slightly the wrong idea without it.

I don't actually think this is what he's saying here. He's describing a situation in which a class has been developed with TDD, and has good coverage, but it's growing too large and has taken on too many responsibilities. In the "refactor" step of TDD, the solution to this problem is to extract a class that takes over one of those responsibilities.

Having said that I wouldn't immediately bother writing a dedicated test for the child, I think you really see the value of his approach when the child develops a second client (or more) and then a change is required in the behaviour of that child. If the behaviour of the child is tested through its clients, then each test of those clients that depends on that behaviour is going to need modifying, which is drudgery. If, by contrast, you write a test at the level of the coordination between the child and its clients, then those tests don't need to be affected by the change in the child's behaviour. This approach, however, obviously depends on the child's real functionality being covered by its own tests.