On Mar 30, 2022, at 3:29 PM, J. B. Rainsberger <me@...> wrote:
On Tue, Mar 29, 2022 at 5:31 PM Russell Gold <russ@...> wrote:
On Mar 29, 2022, at 2:58 PM, J. B. Rainsberger <me@...> wrote:
On Tue, Mar 29, 2022 at 1:57 PM Russell Gold <russ@...> wrote:
I've never heard of the "Mikado method," but that's pretty much the way I learned to do TDD.
Saff's technique seems interesting, and I will keep it in mind for the future; I'm not sure yet exactly where it would help. Most of the problems I deal with are errors that wind up being caught in integration testing because we missed a unit test. There, the problem is that it is not immediately obvious which test is missing: we haven't even reproduced the problem other than in the integration test, and that test doesn't directly call the production code, so we cannot inline anything useful.
That's _exactly_ what the Saff Squeeze does: it starts with a failing, bigger unit test and produces a missing, smaller unit test. It merely uncovers that test systematically instead of relying on your intuition to imagine it.
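A minimal, self-contained sketch of one squeeze step, assuming JUnit 5; OrderProcessor, Order, and the deliberate discount bug are invented for illustration and are not from this thread:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class SaffSqueezeSketchTest {

        // Production code under test, with a deliberate bug: the discount is never applied.
        record Order(double amount, String coupon) {}

        static class OrderProcessor {
            double process(Order order) {
                double discount = applyDiscount(order);
                return order.amount() - discount;
            }

            double applyDiscount(Order order) {
                return 0.0; // bug: "SAVE10" should yield 10.0
            }
        }

        // Step 0: the failing test against the bigger unit.
        @Test
        void totalIncludesDiscount() {
            assertEquals(90.0, new OrderProcessor().process(new Order(100.0, "SAVE10")));
        }

        // Step 1: inline process() into the test, assert earlier, prune what follows.
        // The earlier assertion now fails first, pointing at applyDiscount().
        @Test
        void totalIncludesDiscount_squeezedOnce() {
            OrderProcessor processor = new OrderProcessor();
            Order order = new Order(100.0, "SAVE10");

            double discount = processor.applyDiscount(order); // copied from the body of process()
            assertEquals(10.0, discount);                      // new, earlier assertion
            // Pruned, because the assertion above already fails:
            // assertEquals(90.0, order.amount() - discount);
        }
    }

Each squeeze repeats the same move: inline the next suspect call, assert on its result, and prune everything after the first failing assertion, until the smallest failing test names the defect directly.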
But I don't have a failing unit test. I have a failing integration test (and one that typically takes close to an hour to run through all setup and initialization each time).
I infer that you're using the term "integration test" to mean "unit test for a bigger unit". It's still a unit, even if it's a large one. :) (Yes, I'm trying to rehabilitate the original meaning of "unit": any independently-inspectable part of the system. And yes, that means that the entire system is a unit.)
Ah, yes, terminology problems.
As my current team uses it, a unit test tests code by making calls. It does no I/O and does not interact with system services, including timers. What we're calling an integration test is one that interacts with the entire system in a way that simulates how a user or an external system would do it. In this case, it sets up Kubernetes and Docker, builds new images, and uses HTTPS calls to make things happen. That is not a "bigger unit" in this sense. It's particularly slow because the testers wrote it incrementally: each test adds to the previous environment before running its own logic, so you pretty much have to run the entire suite. We've had discussions on this point.
I've also heard these called "acceptance tests" and "functional tests" and "system tests." We're using the Maven Failsafe plugin to run these, so…
When I get a failing unit test, I generally just revert the change that caused it and try again. If that's not practical for some reason, I will try this.
You're assuming that you know the change that caused it. The whole point of this technique is to help find the cause of a defect when we don't know the cause of the defect by other means.
Correct; since we run the unit tests after each change, we know which one caused it.
That obviously doesn't work for learning tests, where the change was the creation of the unit test itself. I'll have to keep this technique in mind for the next time I write one of those.
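A minimal sketch of a learning test, assuming JUnit 5 and using java.time only as a stand-in for whatever third-party API is being learned; the scenario is invented for illustration:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.time.LocalDate;
    import org.junit.jupiter.api.Test;

    class LocalDateLearningTest {

        // Pins down our understanding of someone else's code, not our own production code.
        @Test
        void plusMonthsClampsToTheEndOfAShorterMonth() {
            // Does adding a month to Jan 31 overflow into March? No: it clamps to Feb 29.
            assertEquals(LocalDate.of(2024, 2, 29),
                         LocalDate.of(2024, 1, 31).plusMonths(1));
        }
    }

Because such a test exercises someone else's code, there is no earlier change of ours to revert when it fails; squeezing the test down is about the only systematic option.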
-----------------
Author, HttpUnit <> and SimpleStub <>
Now blogging at <>
Have you listened to Edict Zero <>? If not, you don't know what you're missing!