Re: Changeset Evolution as an aid to Test Driven Development.
It's not the pretending so much as making the commit history tell an easy to understand (well organized) story about what changes were being made to the code. The extreme opposite is a commit history full of commits with commit messages like "oops", "crap!", "forgot this bit in the previous commit", "dammit", "I hope this fixes things", etc.
This last one is another common usage for me: when I struggle to integrate with Someone Else's Stuff, I often interleave writing a bit of my domain code with writing a bit of the technology integration code. My domain code is usually right (because it's harder to get wrong and lies more within my direct control) and my technology integration code is sometimes guesswork converging to working. I find it helpful to be able to push corrections to the technology integration back in time to simulate what would have happened if I'd understood better from the beginning. Since I usually keep technology integration isolated from domain code in my designs, I can move technology integration code back in time relatively conflict-free.
I rarely want a record of my struggles to integrate Their Stuff, unless I'm specifically writing an article about that.
-- J. B. (Joe) Rainsberger :: :: :: Teaching evolutionary design and TDD since 2002
|
Re: Changeset Evolution as an aid to Test Driven Development.
Do you actually edit / add to anything but the tip of your branch? If so, why? If not, what advantage do you get from pushing some change backward in time?
I do. I mostly amend the most-recent commit because "Oh, shit, I meant to do that, too", such as cleaning up import statements or realizing that I made a change in 3 places and missed the 4th.
Not infrequently, I fail to notice this "Oh, shit" moment for hours or days. When I find it, I realize that I had meant to do this other thing, but now 39 commits ago. Since my commits are usually quite small and independent, it's usually relatively easy to push my amendment 39 commits back in time, so I do it. My current favorite git client, lazygit, makes this really quite easy if sometimes slow: Ctrl+J, no conflict, Ctrl+J, no conflict, ... until I get there or see a conflict.
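For anyone on plain git rather than lazygit, a rough equivalent is the fixup/autosquash dance. This is only a sketch; abc1234 stands in for the old commit the correction belongs to, and it assumes the correction is already staged.
    git commit --fixup=abc1234            # record the correction, marked as a fixup of abc1234
    git rebase -i --autosquash abc1234^   # replay the branch with the fixup folded into abc1234
Mercurial users with the absorb extension can often skip even that and run "hg absorb", which folds uncommitted hunks into whichever draft changesets last touched those lines.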
Of course, I do this as long as I haven't published changes to anyone else. Often, I'm the only one on here, so that's rarely a problem. And even if I have published to my offsite backup, I retain the authority to push again with force.
This allows me to keep a codebase history that better reflects what I intended, rather than remaining prisoner to small mistakes.
-- J. B. (Joe) Rainsberger :: :: :: Teaching evolutionary design and TDD since 2002
|
Re: Changeset Evolution as an aid to Test Driven Development.
I'm still trying to grok all this, despite having no earthly reason ever to do it ... I think I don't get how you use it as well as why. I'll ask essentially the same question each time: how does this editing of the past help you accomplish the benefit listed?
The powerful growing observability and diffing logs debug technique is an excellent reason.
What can you observe and diff more easily this way than without?
The poster child environment for this trick is embedded / real time / multithreaded. The further you are away from that, the less benefit it provides.
On an embedded target, events are firing from the many different hardware subsystems at whatever time and in whatever order the hardware pleases, and are handled in whatever order the scheduler pleases.
Defect-free code for _a_ module will cope with that, and sequence its behaviour into something sensible and orderly. i.e. Logs for _all_ modules mixed together will be pretty chaotic; for a well behaved module, usually pretty orderly.
The most difficult, subtle, horrible bugs in a multithreaded / real time environment arise from racy behaviour. i.e. The code hasn't been written to be robust against different ordering or timing of events.
All else being the same, refactorings of all and every sort will alter the ordering and timing of events, which in defect free code will make no difference...
So if a refactoring, somewhere in your pile of refactorings, mixed up with a pile of behaviour changes, did something / triggered something nasty, your pain is immeasurable.
If you had adequate logging you would at least see where the behaviour changed, but you don't. So you add logging, as per tradition at the end of your branch, and yes, you can see something changed somewhere in your pile. You have no clue as to which changeset, so you have to start as if you know nothing.
Which is where this technique becomes a superpower... go back to the start of the branch... add logging for the module of interest, record the real time behaviour.
Rebase the pile of refactorings on top, record the new real time behaviour... if it is a true refactoring, nothing interesting will have changed.
If something noxious and subtle is happening (perhaps even due to a preexisting bug), you can see exactly where the behaviour changed. Often that is enough to narrow down the exact changeset. Otherwise bisect to find it. Quite possibly that refactoring merely _exposed_ racy behaviour rather than introduced it.
In which case, extend a unit test to catch it (inserting it at the end of the unit-test region of the branch), fix the bug, check that the test did catch it, and then decide where you want to drop the fix.
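A rough command sketch of that loop with Mercurial (the revision names and test/build commands are placeholders, not anything from this thread):
    hg update -r BASE_OF_BRANCH            # go back to the start of the branch
    # ... add logging for the module of interest, then:
    hg commit -m "logging: trace module events"
    # run on target and keep the log as the "before" recording
    hg rebase -s FIRST_REFACTORING -d .    # put the pile of refactorings back on top
    # run on target again, keep the "after" log, and diff the two;
    # if something changed, let bisect name the changeset:
    hg bisect --reset
    hg bisect --good LOGGING_CHANGESET
    hg bisect --bad tip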
As I said, the poster child is embedded / real time / multithreaded, but value from this trick can even be gained within unit tests when the class under test is too complex to understand, and a refactoring broke something that the unit test coverage didn't catch.
It tells you where precisely coverage is needed.
But hey, why refactor if everything is easy and simple to understand? It's the gnarliest and worst modules with the most debt that most need refactoring.
Having strong proofs, that just keep getting stronger, that refactorings are indeed pure refactorings is an excellent reason.
How is it that the proofs get stronger? How is it different from putting the improved proofs at the tip of the spear? Are you building old versions just to run new tests on them?
If you are greenfield and the code has never been released... you get nothing.
If the code has man-years of testing and man-decades or man-centuries of use in the field... you need fairly strong evidence that you are making things better, rather than just changing things and maybe making them worse. So extending the tests _after_ behavioural changes loses that valuable oracle.
Remember, tests are code, hence tests have bugs; the tests test the code and the code tests the tests.
So expanding tests on known working code, and adding precondition asserts, provides a strong oracle to prove that the tests are correct and working.
And as you grow the tests, they turn around and provide a strong oracle to prove your refactorings are correct and are indeed pure refactorings.
The "keeping up with the herd of cats" to avoid Bing Bang integrations is an excellent reason.
How does this help?
It breaks things down into small manageable integrations... and instead of finding out later that you are conflicting with weeks' worth of work from another cat...
You find out after only a day's worth of conflicts, and know to walk over and work _with_ the other cat. Yup, it shouldn't ever happen, good teamwork and all that... blah blah, but shit and schedule pressure happen.
Hanging out on a branch whilst man-years are going into the mainline is going to be bad no matter what, but this trick slightly decreases the urgency to close.
Being able to drop the extended unit tests into the mainline to stop the herd of cats accidentally breaking things, even if the rest of the branch isn't ready, is an excellent reason.
How does this help? Wouldn't all your unit tests be available at the tip anyway?
Interleaving your behavioural changes with behavioural changes from other cats makes review and defect isolation hard. So if you're doing this mix of things, it helps to be able to drop the zero / low risk items first, until you can build coverage and confidence. In the realm of excellent unit test coverage, not so much of a problem; in browner fields, deeply embedded, multithreaded, more of a problem.
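A hedged sketch of that "drop the zero/low-risk items first" move in git (the branch and commit names are invented for illustration):
    git checkout -b tests-only origin/main        # fresh branch from the mainline
    git cherry-pick TEST_COMMIT_1 TEST_COMMIT_2   # pick only the test-extension changesets
    # push tests-only for review and merge; the riskier refactorings stay on the private branch
In Mercurial much the same can be done by grafting the test-only changesets onto the mainline with "hg graft".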
Floods of microcommits of "obviously correct" tiny refactorings is an excellent reason.
How does this help you do microcommits?
Review. It makes it much, much easier, whether by yourself, a reviewer, or a pair. Bundle twenty or so refactorings together... and it becomes damn hard to reason about.
I have great confidence in myself in making correct refactorings... so I can blithely pile change upon change upon change.
Alas, history shows that confidence is woefully unfounded.
Using coverage analysis to see where you're starting to walk on shaky ground, and then retroactively brace with unit tests _before_ all your changes is an excellent reason.
Again, are you running/testing old versions? If not, how is this different from adding tests wherever you are?
Your unit tests are what give you the confidence to bravely refactor, knowing you aren't causing regressions.
Sadly, looking at coverage analysis, I know that I and all other cats are way overly optimistic about what unit tests do in fact test. Again, it's hard to assert negatives. It's easy to assert behaviour, but hard to assert the absence thereof. (Mutation testing sounds like a very good thing; anyone know of a good C/C++ mutation testing framework?)
If a branch you have just refactored is showing red in coverage analysis... your tests shouldn't be giving you _any_ confidence about the change. That doesn't mean you need to abandon the change, just that you have to go back and brace it first.
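For C/C++ built with gcc, one common (though certainly not the only) way to get that coverage picture is gcov/lcov. The flags, make target and paths below are illustrative assumptions about the build, not a prescription:
    make CFLAGS="--coverage" LDFLAGS="--coverage" check   # build and run the unit tests instrumented
    lcov --capture --directory . --output-file coverage.info
    genhtml coverage.info --output-directory coverage-html
    # if the lines you just refactored show up uncovered, go back earlier in the
    # branch and add the bracing tests there, before the refactorings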
|
Re: Changeset Evolution as an aid to Test Driven Development.
> Ideally you would have a magic CI system that if, for example, you rebase on top of the mainline, it would rebuild and check _every_ changeset. But that's a lot of compute power and tooling.
That system exists:
Arnaud
|
Re: Changeset Evolution as an aid to Test Driven Development.
I'm still trying to grok all this, despite having no earthly reason ever to do it ... I think I don't get how you use it as well as why. I'll ask essentially the same question each time: how does this editing of the past help you accomplish the benefit listed?
The powerful growing observability and diffing logs debug technique is an excellent reason.
What can you observe and diff more easily this way than without?
Having strong proofs, that just keep getting stronger, that refactorings are indeed pure refactorings is an excellent reason.
How is it that the proofs get stronger? How is it different from putting the improved proofs at the tip of the spear? Are you building old versions just to run new tests on them?
The "keeping up with the herd of cats" to avoid Bing Bang integrations is an excellent reason.
How does this help?
Being able to drop the extended unit tests into the mainline to stop the herd of cats accidentally breaking things, even if the rest of the branch isn't ready, is an excellent reason.
How does this help? Wouldn't all your unit tests be available at the tip anyway?
Floods of microcommits of "obviously correct" tiny refactorings is an excellent reason.
How does this help do microcommits?
Using coverage analysis to see where you're starting to walk on shaky ground, and then retroactively brace with unit tests _before_ all your changes is an excellent reason.
Again, are you running/testing old versions? If not, how is this different from adding tests wherever you are?
I'm clearly missing something ...
Ron Jeffries
Isn't testing quality in a lot like weaving straw into gold? -- George Cameron
|
Re: Changeset Evolution as an aid to Test Driven Development.
On Wed, Jun 16, 2021 at 4:11 AM Avi Kessner <akessner@...> wrote: However, and this is where the tooling comes in, it often takes me more time and concentration to figure out when I want to roll back to, or which version of the file I want to work with, than it does to just roll everything back to master and either rewrite or copy paste (taking advantage of the IDE to not hot reload the file when I do a checkout) my changes back into the "new codebase". So what kinds of tools are available to find that safe point in time for you?
Partly the invariant... the rule for playing this game is keep _every_ point on the branch green.
Ideally you would have a magic CI system that if, for example, you rebase on top of the mainline, it would rebuild and check _every_ changeset. But that's a lot of compute power and tooling.
Usually I do something _much_ simpler: I check the CI for a point on the mainline that it has _just_ proven to be sound. Rebase on top of that. Then move to the tip of my branch, build and run tests _just_ for the module(s) I'm working on.
If it's broken, the breakage must be somewhere in between; using "hg bisect" or "git bisect", home in very rapidly on the first changeset that broke it, then fix, amend and evolve. Repeat until all green again. Usually at most one fix-and-amend is required. Bisect is a very nifty, very fast, very easy tool to use.
Alternatively, I just need to look at the commit log messages; they always indicate whether a changeset is an observability change, a comment-only change, a test extension, a refactoring or a behaviour change, so it's easy to see where I want to add, say, a test extension.
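In Mercurial terms, the rebase-and-bisect version looks roughly like this (the revision name and make target are placeholders):
    hg pull
    hg rebase -d LAST_CI_GREEN_REV    # rebase the branch onto a point CI has just proven
    hg update -r tip
    make check-my-module              # build and run just the tests for the module(s) of interest
    # if the tip is broken, let bisect find the first bad changeset on the branch:
    hg bisect --reset
    hg bisect --good LAST_CI_GREEN_REV
    hg bisect --bad tip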
-- John Carter, Tait Electronics, New Zealand
|
Re: Changeset Evolution as an aid to Test Driven Development.
I see the benefit of being able to take a subset of changes which matches when you made your changes and push those changes safely up the stack. You really need to be working with a number of teams to appreciate this.
However, and this is where the tooling comes in, it often takes me more time and concentration to figure out when I want to roll back to, or which version of the file I want to work with, than it does to just roll everything back to master and either rewrite or copy paste (taking advantage of the IDE to not hot reload the file when I do a checkout) my changes back into the "new codebase". So what kinds of tools are available to find that safe point in time for you?
Is it a matter of running a suite of tests against each commit in the history until it goes green/red? I'm not quite understanding that part.
|
Re: Changeset Evolution as an aid to Test Driven Development.
It's not the pretending so much as making the commit history tell an easy to understand (well organized) story about what changes were being made to the code. The extreme opposite is a commit history full of commits with commit messages like "oops", "crap!", "forgot this bit in the previous commit", "dammit", "I hope this fixes things", etc.
I suppose if you work within an organization (and I have in the past) where merges to the main branch are required to be squash merges, then none of this storytelling, whether it's bad or good, matters much to anyone but oneself, but I prefer to write a well organized story even for myself -- the future self who has long since forgotten most of what he once knew about this portion of the code and will appreciate an easy to read/understand exposition of it rather than a stream of consciousness or play-by-play retelling of the actual events as they occurred.
Al
|
Re: Changeset Evolution as an aid to Test Driven Development.
> I'm with Ron on this. I want the history to be clean, for sure, but even more I want it to be correct. That means it should reflect what I did in the order that I did it.
I admit I used to do that: I wanted the commit log to be static so that it reflected the experimentation and testing I had done, and hence I resisted using this tool.
However, I can still get that by following the predecessor/successor nodes in Mercurial if I want... and I find in practice I seldom want to.
But as I discovered the advantages of sequencing the types of commit to achieve more useful purposes... I have willingly abandoned that without a backward glance.
Admittedly I'm now dreaming of a CI system that will wake up on every commit or evolve, rather than on a push, and will prove the invariant (after every evolution every changeset on the branch passes).
It should also be able to automagically isolate the transformation that broke it...
That is something that should be automatable.
Yup, it will require some hefty compute power (and/or cunning algorithms) and I'm quietly brewing plans for that in the back of my head.
Pretending I wasn't so stupid in the past is by far the weakest reason for doing this. (A nice warm fuzzy feeling of absolution maybe, takes some of the bite out of imposter syndrome maybe, but a lousy reason).
The powerful growing observability and diffing logs debug technique is an excellent reason.
Having strong proofs, that just keep getting stronger, that refactorings are indeed pure refactorings is an excellent reason.
The "keeping up with the herd of cats" to avoid Bing Bang integrations is an excellent reason.
Being able to drop the extended unit tests into the mainline to stop the herd of cats accidentally breaking things, even if the rest of the branch isn't ready, is an excellent reason.
Floods of microcommits of "obviously correct" tiny refactorings is an excellent reason.
Using coverage analysis to see where you're starting to walk on shaky ground, and then retroactively brace with unit tests _before_ all your changes is an excellent reason.
-- John Carter, Tait Electronics, New Zealand
|
Re: Changeset Evolution as an aid to Test Driven Development.
I'm with Ron on this. I want the history to be clean, for sure, but even more I want it to be correct. That means it should reflect what I did in the order that I did it. For example, maybe I got carried away in refactoring and made real (behavioral) changes, which broke some tests. If I haven't yet committed, I can fix that locally. But if I have, I'll leave it in and make a correction. Some of those corrections may be a bit embarrassing, but that will help me remember to do it right next time.
One thing I sometimes do, which is similar to what the OP suggests: When working on a script, which _only runs on a particular CI server_ (e.g. AppVeyor), it may take me several tries to get it right, especially if I don't fully understand how the CI server works. In that case, I'll eventually do a rebase, eliminating all my failed attempts. So sue me!
Charlie
|
Re: Changeset Evolution as an aid to Test Driven Development.
Hi Al, sent from iPad, probably via Mars. Errors, if any, are not mine. ronjeffries@... is a better address for me, maybe. On Jun 14, 2021, at 9:22 AM, Al Chou via groups.io <hotfusionman@...> wrote:
I can't remember or quickly find the blog post where I first saw it espoused to treat your private "feature" branch (in Git Flow terminology) as your own to rewrite commit history as you wish. I do it very frequently to better-organize the content of commits (e.g., pretend I wrote those tests at the same time as the production code and glom them together into the same new commit by squishing the two commits together, which has the advantage of making them an atomic package). I admit I only skimmed John's original post in the current thread, but I got the impression that's the kind of thing he was saying he uses the ability to rewrite VCS history for.
Yes, sounds similar. Why do you want to pretend those things? Do you go back and read the revised history or something? I wonder, because I always work on the tip of the spear, and only very rarely check out a prior version, generally to figure out when something changed. When I do feel the need to do that, I usually consider it a bit of a failure.
I'm about what the code is now, so I truly wonder why folks care about the order of things in the past.
Thanks,
R
|
Re: Changeset Evolution as an aid to Test Driven Development.
I can't remember or quickly find the blog post where I first saw it espoused to treat your private "feature" branch (in Git Flow terminology) as your own to rewrite commit history as you wish. I do it very frequently to better-organize the content of commits (e.g., pretend I wrote those tests at the same time as the production code and glom them together into the same new commit by squishing the two commits together, which has the advantage of making them an atomic package). I admit I only skimmed John's original post in the current thread, but I got the impression that's the kind of thing he was saying he uses the ability to rewrite VCS history for.
Al
|
Re: Changeset Evolution as an aid to Test Driven Development.
Hi Edwin, On Jun 12, 2021, at 2:55 AM, Edwin Castro < egcastr@...> wrote:
This was an interesting formalism of what I, and most of my co-workers, already do with our VCS of choice. I think the "A-ha!" moment for me was when I realized that pushing my own branch does not make my change "public" as we still work in a centralized fashion. We only integrate with the mainline and changes are not published until they are merged to the mainline. Modern DVCS give me the power to interact with the evolving codebase as I wish I had done had I been psychic and known everything I would learn along the way. At the time I figured everybody else already worked this way.
Do you actually edit / add to anything but the tip of your branch? If so, why? If not, what advantage do you get from pushing some change backward in time?
Thanks,
Ron Jeffries
Sometimes you just have to stop holding on with both hands, both feet, and your tail, to get someplace better.
Of course you might plummet to the earth and die, but probably not: you were made for this.
|
Re: Changeset Evolution as an aid to Test Driven Development.
Thanks. Can you add a small running example alongside the post, so it will be more realistic and concrete?
|
Re: Changeset Evolution as an aid to Test Driven Development.
I fail to understand what this is all about. Do you have a concrete example, with actual code, somewhere? --
Arnaud Bailly - @dr_c0d3
On Fri, Jun 11, 2021 at 4:12 AM John Carter via <john.carter= [email protected]> wrote: So there is a new tool on the block that permits new approaches to software development.
This is my first attempt at describing what can be done with it. Ultimately this will grow into a blog post and maybe a training course.
I'm working with the Mercurial Distributed Version Control System, which makes a lot of things a lot easier and simpler than git, but I believe everything I say here can also be emulated in git.
Here is the documentation, but it's not needed to understand what I'm saying.
The core idea is that changesets or commits to your version control system become mutable until you choose to publish them.
You can split them, join them, reorder them, rebase them on top of other commits.
Thus the "when" you do a change, becomes decoupled from "where" in the evolution sequence of the codebase. ie. You no longer have to make changes only at the end of an ever growing branch. You can make changes anywhere within your branch and at any time.
Declaring a change "fit for public consumption" is decoupled from "committing to version control".
This post is about what use you might make of this decoupling.
Now keep in mind the definition of the word "Refactor". It means "Improving the code WITHOUT changing its externally visible behaviour".
If you refactor the code and a test breaks.... you are either not doing a refactoring or your tests are not just testing behaviour, but are coupled to implementation details. More on that later.
Now in the game of TDD, you have a number of moves you can make...
1. Refactor a test.
2. Extend a test. i.e. Provide more test coverage of existing code.
3. Refactor the code.
4. Change the test to check for new behaviour.
5. Change the code behaviour.
6. Add observability. (Logging, tracepoints etc.)
7. Add inner checks (precondition asserts documenting my beliefs and assumptions about the code).
Refactoring a test or extending a test should NOT require a change of code. If it does, something is wrong. Either it was not a refactoring or extension, or the extension uncovered a preexisting bug in the code.
Number 6 is interesting and not usually mentioned in the context of TDD. It's sort of orthogonal to unit testing; in fact, unless it's a requirement like an audit trail, I'd explicitly strongly recommend you _don't_ unit test logging, as it should NOT change the behaviour of the code whether it's turned on or off! However, as you will see later, it becomes a powerful additional tool in your armoury!
The traditional mantra of TDD is never write a line of code unless you have a breaking test.
Note that this imposes a timewise ordering on activities.
- Write a test. Watch it break.
- Implement the matching code. Watch it go green.
- Refactor. Keep it green.
Now the point with changeset evolution is we don't care _when_ we do those things. We care about the order in the evolution of the codebase in which they occur.
For example, implementing a change in behaviour of the code should result in a test breaking (the tests are verifying the behaviour). If it doesn't our tests are insufficient. We should extend our tests.
Or conversely, if we write a test in anticipation of the next step, the implementation step, and it doesn't break, we again have something wrong.
Furthermore, most of us are not sufficiently lucky to always and only work with a fully TDD'd codebase with excellent coverage.
So when working with "legacy" code, there is a zeroth step... extend the test coverage.
So how much coverage is "enough"? Must we first get 100% coverage of everything we touch? What are we testing? Mostly that it just does whatever it does which we barely have a clue about.
The entire point of refactoring is to improve the internal qualities like readability, understandability and simplicity of the code.
Conversely, a bundle of code ridden with technical debt is obtuse, and odds-on you don't really have a clue what it does.
So how do you even start with debt-ridden legacy code?
I start with observability. I turn on logging, I add logging, I run the code to get a clue.
Commit!
I then add a "Hello World" unit test. Simplest dumbest stupidest test in the world. Starts the unit, shuts it down, cleans up and resources, nothing else.
Commit!
I then look at my coverage. It's lousy, almost zero.
But now between my logging and my coverage and my debugger, I can see an in, I can see where the happy path goes.
Add a test that goes startup, one step on the happy path, check it succeeded, teardown.
Commit!
But I can't make head or tail of what it's really doing, it's too complex.
So I sprinkle a few precondition assert checks to executably document what I believe about the system, run tests, whoops, one assumption was wrong, I've learnt something, remove or alter that check. Run, it's green.
Commit!
I can make the code simpler by some low risk, very "Tiny Step" refactorings that are "Obviously Correct & Better". Maybe early return pattern, maybe reduce scope of variables. Tiny tiny tiny step.
Commit!
Repeat several times. Commit! Commit! Commit!
Run up on target / system test. Oh Shit! It's broken! I did something stupid! Options?
Diff the logs before and after my changes. Aha! The behaviour diverged there! Oh dear, the logging is too sparse and coarse grained to tell me where.
MOVE BACK TO START OF THE BRANCH! BEFORE ALL TESTS! BEFORE ALL CHANGES!
Add more logging!
Run up and record log.
Rebase everything on top of the additional logging.
Run up and record log.
Diff the logs! Aha! Exactly there is the change!
Use "hg bisect" or "git bisect" to identify the changeset that broke it.
Hmm. Why didn't my unit test catch it? Look at coverage, oh dear, I don't cover that branch, or check that, oh dear, oh dear!
Extend the test! Does it catch it now? Yes!
MOVE BACK TO WHERE THE UNIT TESTS WERE ADDED! AND COMMIT THE TEST EXTENSION THERE!
Rebase everything. Go to the changeset that broke it.
The test now fails.
Fix that changeset.
AMEND that changeset, so the tests now pass. Evolve / rebase everything on top of that.
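A hedged Mercurial sketch of that fix / amend / evolve step, assuming the evolve extension is enabled (the revision name is a placeholder):
    hg update -r BROKEN_CHANGESET   # check out the changeset that bisect blamed
    # ... make the fix in the working copy, then:
    hg amend                        # fold the fix into that changeset
    hg evolve --all                 # rebuild the descendant changesets on top of the amended one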
So you see what we are growing here...
At the base of the branch....
* Only extensions to observability, only non-behaviour changing logging, you could drop all this stuff into the mainline right now. Zero risk.
* Followed by only additions of, or extensions to, unit tests, so you could drop all this stuff into the mainline right now. Zero risk.
* Followed by an ever growing pile of tiny tiny tiny PURE refactorings.
* Followed by a cluster of small neat changes in behaviour.
The real time WHEN we add logging or test cases or refactorings, is decoupled from WHERE in the branch we insert them.
IMPORTANT RULE! At _every_ commit, everything always compiles, links, and all tests run successfully. (Except maybe a Work In Progress changeset at the very end)
If at any stage the tip is broken, you can bisect to the breaking changeset and fix and amend and evolve.
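In git, one mechanical way to check that invariant is the rebase --exec trick; Mercurial users without equivalent tooling can approximate it by scripting "hg update" plus a test run over each revision of the branch. The base revision and test command here are placeholders:
    git rebase --exec "make check" BRANCH_BASE   # replay every commit, running the tests after each one;
                                                 # the rebase stops at the first commit whose tests fail
    # fix that commit, "git commit --amend", then "git rebase --continue"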
And so you go, improving the code, extending the test coverage, improving the observability.
At some point, you reach the objective.... You now understand the code. You can see where to add the new feature / change of behaviour. It's clean and easy to add..
So you add a test at the end of your branch that tests for the existence of the new behaviour. It breaks. Add the code to implement. It passes.
Commit.
Now you need to add more, you start to do that...
Add test.... it breaks. Start to add code. Damn. I need to clean up more. Commit as a "Work In Progress".
MOVE BACK BEFORE THE BEHAVIOUR CHANGE.
Clean up. Commit! Evolve! MOVE FORWARD TO TIP.
Add code, it's a small change now. The test passes. Amend the changeset.
On looking back at your implementation of the first feature, you spot an improvement.
MOVE BACK.
Refactor. Commit! Evolve!
Hmm. That first try was so hideous, I'm embarrassed it exists. No problem. I can fold the original implementation and my refactoring into one. Red face gone.
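The fold itself is a small operation in either tool; a sketch with placeholder revisions (hg fold comes from the evolve extension):
    hg fold --from UGLY_FIRST_TRY       # fold that changeset through the working copy parent into one
    # or, in git, squash the pair during an interactive rebase:
    git rebase -i REV_BEFORE_THE_PAIR   # mark the later commit as "squash" (or "fixup") in the todo list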
Keep doing everything above until....
- The code is well factored, understandable, testable and tested clean code running in production.
- You completely understand the code as proven by...
- Executable precondition checks documenting your assumptions.
- Well designed unit tests that read as "executable documentation" of the subsystem's behaviour.
- Observable behaviour both under test and in production.
- All required behaviour is implemented and tested in a clean manner.
But oh dear! All this is taking longer than a day. The rest of the Herd of Cats is pouring code into the mainline at one man-day's worth of code per cat per day... you are heading for a Big Bang integration nightmare!
No problem. An hour or so before home time, get to a clean point, pull the mainline, move to a point already proven by the CI system....
AND REBASE YOUR ENTIRE BRANCH ON THAT POINT!
Go to the tip of your branch, does it compile and run? No. Bisect, fix, amend, evolve until it does.
Push to the Mercurial or git server and go home. Your stuff is backed up, and the CI system will wake up and prove (or otherwise) your branch.
Come to work in the morning, fix and amend anything the CI system complained about.
Carry on until you are done, reviewed, CI's happy. Rebase one last time and drop it into the mainline and push.
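Because the branch history has been rewritten along the way, the overnight push of a git branch has to be a force push, and --force-with-lease is the safer spelling (remote and branch names are placeholders). With Mercurial's evolve extension and a non-publishing server, pushing rewritten draft changesets normally just exchanges obsolescence markers instead.
    git push --force-with-lease origin my-feature   # refuses to clobber anything someone else pushed meanwhile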
So what have we gained?
- A powerful new debug tool. Growing and Diffing the logs. Especially effective in multithreaded apps. (ps: Also works on logs from unit tests and/or logs from full system tests!)
- Mitigation of the risks of Big Bang integrations.
- Mitigation of the risks of large refactorings.
- An effective strategy for learning, covering, refactoring and changing legacy code.
- Executable evidence of our understanding of the code, and executable documentation of what we have learnt.
- A strategy to simplify review. Each changeset is tiny tiny tiny and obviously correct. You can review by changeset, or end to end. Your choice.
- A clear separation between observability changes, test extensions, true refactorings and behaviour changes.
- A gradation of risk shading from zero to some risk, allowing you to focus your test, review, debug efforts where they count.
- A rapid strategy (bisection) for finding and fixing breaking changes.
- A system to avoid interleaving changes from multiple programmers, to avoid broken mainlines, and to make it easy to pinpoint breaking changes once the branch is published.
-- John Carter, Tait Electronics, New Zealand
|
Re: Changeset Evolution as an aid to Test Driven Development.
This was an interesting formalism of what I, and most of my co-workers, already do with our VCS of choice. I think the "A-ha!" moment for me was when I realized that pushing my own branch does not make my change "public" as we still work in a centralized fashion. We only integrate with the mainline and changes are not published until they are merged to the mainline. Modern DVCS give me the power to interact with the evolving codebase as I wish I had done had I been psychic and known everything I would learn along the way. At the time I figured everybody else already worked this way.
-- Edwin G. Castro
|
Re: Changeset Evolution as an aid to Test Driven Development.
As with many things, this technique becomes more or less compelling depending on your situation.
The larger the herd of cats you're working with, the more compelling. i.e. If you hang off the mainline for one day and several man-days of work get poured in... you really really do need to keep up.
The more technical debt that needs cleaning up and the less coverage you have to start with, the more compelling.
Real time / multithreaded / embedded / resource constrained / bare metal C/C++ environments can be _very_ hard to reason about. Testing & debug on target, compared to unit testing on a development machine, is extraordinarily expensive in time. (It can be several orders of magnitude compared to desktop or web dev.)
So when refactoring something that already has had one or more test and release cycles, the logging "oracle" is more and more important, and the grow-logging-and-diff technique more useful. (Yup, that's my day job: real time, multithreaded, embedded, resource constrained, near bare metal.)
Of course if you're interleaving with the changes poured in by the rest of the herd of cats, and the coverage isn't at the level you want, and you are in real time / multithreaded.... reasoning about who broke what when can get _very_ hard.
Again, interleaving changes makes end to end review very hard. Yes, review change by change is good, but sometimes the end to end view is also a very useful way of doing review to check if something got lost along the way.
At the "well factored, full coverage, no debt, small change, single developer, desktop app" end of the scale.... Ignore this technique. It doesn't apply.
At the "high technical debt, low coverage, large refactoring, major change, part of a busy team, real time / multithreaded / embedded / resource constrained / bare metal / end of the scale", this technique is very valuable.
And somewhere in between bits and pieces are useful.
eg. Adding precondition asserts to document one's assumptions and understanding of the code is _always_ useful.
Adding observability is always useful. (The great weakness of unit testing and TDD? Proving negative assertions. i.e. The tests enforce some of what the unit does, but cannot enforce that it doesn't do anything else! Observability is great for triggering those very very useful "WTF!? Why's it doing THAT!?" moments.)
Accumulating all test coverage extensions and observability changes at the start of the branch, _before_ any refactorings, is always useful. If you aren't doing any refactorings... don't bother. If you find you're doing a bunch of little tweaks to coverage and logging and refactorings, it's very useful.
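Shuffling the test and observability changesets down to the base of the branch is ordinary history editing; a sketch with placeholder revisions:
    hg histedit BASE_OF_BRANCH    # with the histedit extension: reorder the changesets so logging and tests apply first
    # or, in git:
    git rebase -i BASE_OF_BRANCH  # move the test and logging "pick" lines to the top of the todo so they apply first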
It's another tool in the toolbox. Choose the right one for your current job in hand.
-- John Carter, Tait Electronics, New Zealand
|
Re: Changeset Evolution as an aid to Test Driven Development.
John,
I confess I'm not seeing the shine on this. I freely grant that I'm not doing any version control with a team of people, so that could be the whole reason.
However, it seems to me that if the team always works from the front of the version control queue, always pulling from and pushing to the head, this editing of the past has no discernible effect: head is always as good as we can make it.
What am I missing?
Thanks,
R
So there is a new tool on the block that permits new approaches to software development.
This is my first attempt at describing what can be done with it. Ultimately this will grow into a blog post and maybe a training course.
-- Ron Jeffries
An Agile method is a lens. The work is done under the lens, not by the lens.
|
Changeset Evolution as an aid to Test Driven Development.
So there is a new tool on the block that permits new approaches to software development.
This is my first attempt at describing what can be done with it. Ultimately this will grow into a blog post and maybe a training course.
I'm working with the Mercurial Distributed Version Control System, which makes a lot of things a lot easier and simpler than git, but I believe everything I say here can be emulated in git.
Here is the documentation, but it's not needed to understand what I'm saying.
The core idea is that changesets or commits to your version control system become mutable until you choose to publish them.
You can split them, join them, reorder them, rebase them on top of other commits.
Thus the "when" you do a change becomes decoupled from the "where" in the evolution sequence of the codebase. ie. You no longer have to make changes only at the end of an ever-growing branch. You can make changes anywhere within your branch and at any time.
Declaring a change "fit for public consumption" is decoupled from "committing to version control".
This post is about what use you might make of this decoupling.
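To make that concrete, here is roughly what the setup and the basic moves look like on the Mercurial side (a sketch, not a tutorial; rebase and histedit ship with Mercurial but must be switched on, and evolve is a separately installed extension):

    # ~/.hgrc
    [extensions]
    rebase =
    histedit =
    evolve =

    hg commit --amend            # rewrite the changeset you are sitting on
    hg histedit ROOT             # reorder, fold or drop the changesets from ROOT up to here
    hg rebase -s REV -d DEST     # move a changeset (and its descendants) elsewhere
    hg evolve                    # rebuild descendants after you rewrite an ancestor
    hg phase --public -r REV     # publish: from here on that changeset is immutable

ROOT, REV and DEST are placeholders for whatever revisions you are working with. In git the rough equivalents are git commit --amend and git rebase -i.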
Now keep in mind the definition of the word "Refactor". It means "Improving the code WITHOUT changing its externally visible behaviour".
If you refactor the code and a test breaks.... you are either not doing a refactoring or your tests are not just testing behaviour, but are coupled to implementation details. More on that later.
Now in the game of TDD, you have several moves you can make...
1. Refactor a test.
2. Extend a test. ie. Provide more test coverage of existing code.
3. Refactor the code.
4. Change the test to check for new behaviour.
5. Change the code behaviour.
6. Add observability. (Logging, tracepoints etc.)
7. Add inner checks (precondition asserts documenting my beliefs and assumptions about the code).
Refactoring a test or extending a test should NOT require a change of code. If it does, something is wrong. Either it was not a refactoring or extension, or the extension uncovered a preexisting bug in the code.
Number 6 is interesting and not usually mentioned in the context of TDD. It's sort of orthogonal to unit testing. In fact, unless it's a requirement like an audit trail, I'd explicitly and strongly recommend you _don't_ unit test logging, as it should NOT change the behaviour of the code whether it's turned on or off! However, as you will see later, it becomes a powerful additional tool in your armoury!
The traditional mantra of TDD is never write a line of code unless you have a breaking test.
Note that this imposes a timewise ordering on activities.
- Write a test. Watch it break.
- Implement the matching code. Watch it go green.
- Refactor. Keep it green.
Now the point with changeset evolution is we don't care _when_ we do those things. We care about the order in the evolution of the codebase in which they occur.
For example, implementing a change in behaviour of the code should result in a test breaking (the tests are verifying the behaviour). If it doesn't, our tests are insufficient. We should extend our tests.
Or conversely, if we write a test in anticipation of the next step, the implementation step, and it doesn't break, we again have something wrong.
Furthermore, most of us are not sufficiently lucky to always and only work with a fully TDD'd codebase with excellent coverage.
So when working with "legacy" code, there is a zeroth step... extend the test coverage.
So how much coverage is "enough"? Must we first get 100% coverage of everything we touch? What are we testing? Mostly that it just does whatever it does, which we barely have a clue about.
The entire point of refactoring is to improve the internal qualities like readability, understandability and simplicity of the code.
Conversely, a bundle of code ridden with technical debt is obtuse, and odds-on you don't really have a clue what it does.
So how do you even start with debt-ridden legacy code?
I start with observability. I turn on logging, I add logging, I run the code to get a clue.
Commit!
I then add a "Hello World" unit test. Simplest dumbest stupidest test in the world. Starts the unit, shuts it down, cleans up any resources, nothing else.
Commit!
I then look at my coverage. It's lousy, almost zero.
But now between my logging and my coverage and my debugger, I can see an in, I can see where the happy path goes.
Add a test that goes: startup, one step on the happy path, check it succeeded, teardown.
Commit!
But I can't make head or tail of what it's really doing, it's too complex.
So I sprinkle a few precondition assert checks to executably document what I believe about the system, run tests, whoops, one assumption was wrong, I've learnt something, remove or alter that check. Run, it's green.
Commit!
I can make the code simpler by some low risk, very "Tiny Step" refactorings that are "Obviously Correct & Better". Maybe early return pattern, maybe reduce scope of variables. Tiny tiny tiny step.
Commit!
Repeat several times. Commit! Commit! Commit!
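By way of illustration only (the messages and names below are made up), the branch at this point might read something like:

    hg commit -m "logging: trace state transitions in the handler"
    hg commit -m "test: hello world - start the unit, shut it down"
    hg commit -m "test: one step along the happy path"
    hg commit -m "asserts: document preconditions I believe hold"
    hg commit -m "refactor: early return, reduce variable scope"

Each one tiny, each one green.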
Run up on target / system test. Oh Shit! It's broken! I did something stupid! Options?
Diff the logs before and after my changes. Aha! The behaviour diverged there! Oh dear, the logging is too sparse and coarse grained to tell me where.
MOVE BACK TO START OF THE BRANCH! BEFORE ALL TESTS! BEFORE ALL CHANGES!
Add more logging!
Run up and record log.
Rebase everything on top of the additional logging.
Run up and record log.
Diff the logs! Aha! Exactly there is the change!
Use "hg bisect" or "git bisect" to identify the changeset that broke it.
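For anyone who hasn't driven it before, the bisect step is roughly this (BASE is the last changeset known to be good, and 'make check' stands in for whatever builds and runs your tests):

    hg bisect --reset
    hg bisect --bad .                 # the changeset I'm on misbehaves
    hg bisect --good BASE             # the bottom of the branch was fine
    # hg checks out the midpoint; build, run, mark it, repeat...
    hg bisect --good                  # ...or --bad, until it names the culprit
    # or let it drive itself:
    hg bisect --command 'make && make check'

git bisect works the same way with start / good / bad / run.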
Hmm. Why didn't my unit test catch it? Look at coverage, oh dear, I don't cover that branch, or check that, oh dear, oh dear!
Extend the test! Does it catch it now? Yes!
MOVE BACK TO WHERE THE UNIT TEST WAS ADDED! AND COMMIT THE TEST EXTENSION THERE!
Rebase everything. Go to the changeset that broke it.
The test now fails there.
Fix that changeset.
AMEND that changeset, so the tests now pass. Evolve / rebase everything on top of that.
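Spelled out as commands, the dance above looks roughly like this (a sketch; the revision names are placeholders, and it assumes the evolve extension so that amending a non-head changeset is allowed):

    hg update -r BASE                  # move back to before all tests and changes
    # ...add the finer grained logging...
    hg commit -m "logging: finer grained trace"
    hg rebase -s OLD_FIRST -d .        # rebase the whole branch on top of it
    # run before and after, capture the output, then for example:
    diff -u run-before.log run-after.log

    hg update -r TEST_CHANGESET        # where the unit test was added
    # ...extend the test...
    hg commit --amend                  # fold the extension into that changeset
    hg evolve --all                    # rebuild the now-orphaned descendants

    hg update -r BROKEN_CHANGESET      # the one bisect fingered
    # ...fix it...
    hg commit --amend
    hg evolve --all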
So you see what we are growing here...
At the base of the branch....
* Only extensions to observability, only non-behaviour changing logging; you could drop all this stuff into the mainline right now. Zero risk.
* Followed by only additions of, or extensions to, unit tests, so you could drop all this stuff into the mainline right now. Zero risk.
* Followed by an ever growing pile of tiny tiny tiny PURE refactorings.
* Followed by a cluster of small neat changes in behaviour.
The real time WHEN we add logging or test cases or refactorings, is decoupled from WHERE in the branch we insert them.
IMPORTANT RULE! At _every_ commit, everything always compiles, links, and all tests run successfully. (Except maybe a Work In Progress changeset at the very end)
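One way to actually check that rule rather than hope: replay the branch and run the build and tests at every changeset. In git this is a one-liner; in Mercurial a small loop does the job (BASE is a placeholder, and 'make test' stands in for your real build-and-test command):

    git rebase --exec 'make && make test' BASE      # stops at the first commit that fails

    for rev in $(hg log -r 'BASE::.' -T '{node}\n'); do
        hg update -r "$rev" && make && make test || break
    done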
If at any stage the tip is broken, you can bisect to the breaking changeset and fix and amend and evolve.
And so you go, improving the code, extending the test coverage, improving the observability.
At some point, you reach the objective.... You now understand the code. You can see where to add the new feature / change of behaviour. It's clean and easy to add.
So you add a test at the end of your branch that tests for the existence of the new behaviour. It breaks. Add the code to implement. It passes.
Commit.
Now you need to add more, you start to do that...
Add test.... it breaks. Start to add code. Damn. I need to clean up more. Commit as a "Work In Progress".
MOVE BACK BEFORE THE BEHAVIOUR CHANGE.
Clean up. Commit! Evolve! MOVE FORWARD TO TIP.
Add code, it's a small change now. The test passes. Amend the changeset.
On looking back at your implementation of the first feature, you spot an improvement.
MOVE BACK.
Refactor. Commit! Evolve!
Hmm. That first try was so hideous, I'm embarrassed it exists. No problem. I can fold the original implementation and my refactoring into one. Red face gone.
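The folding itself is a one-step history edit. Roughly (BASE is a placeholder below the commits in question):

    hg histedit BASE
    # in the editor: move the clean-up line directly under the original
    # implementation and change its action from 'pick' to 'roll'
    # ('roll' folds it in and throws away its commit message; 'fold' keeps it)

    # git equivalent: git rebase -i BASE, and mark the clean-up commit as 'fixup'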
Keep doing everything above until....
- The code is well factored, understandable, testable and tested clean code running in production.
- You completely understand the code as proven by...
- Executable precondition checks documenting your assumptions.
- Well designed unit tests that read as "executable documentation" of the subsystem's behaviour.
- Observable behaviour both under test and in production.
- All required behaviour is implemented and tested in a clean manner.
But oh dear! All this is taking longer than a day. The rest of the Herd of Cats is pouring code into the mainline at one man-day's worth of code per cat per day.... you are heading for a Big Bang integration nightmare!
No problem. An hour or so before home time, get to a clean point, pull the mainline, move to a point already proven by the CI system....
AND REBASE YOUR ENTIRE BRANCH ON THAT POINT!
Go to the tip of your branch, does it compile and run? No. Bisect, fix, amend, evolve until it does.
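The end-of-day replant is roughly this (PROVEN is whatever mainline changeset the CI system has already blessed):

    hg pull                        # fetch the day's mainline changesets
    hg rebase -b . -d PROVEN       # replant the entire branch on that point
    hg update tip                  # then check it still builds and the tests pass

    # git equivalent: git fetch, then git rebase PROVEN from your branch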
Push to the Mercurial or git server and go home. Your stuff is backed up, and the CI system will wake up and prove (or otherwise) your branch.
Come to work in the morning, fix and amend anything the CI system complained about.
Carry on until you are done, reviewed, CI's happy. Rebase one last time and drop it into the mainline and push.
So what have we gained?
- A powerful new debug tool. Growing and Diffing the logs. Especially effective in multithreaded apps. (ps: Also works on logs from unit tests and/or logs from full system tests!)
- Mitigation of the risks of Big Bang integrations.
- Mitigation of the risks of large refactorings.
- An effective strategy for learning, covering, refactoring and changing legacy code.
- Executable evidence of our understanding of the code, and executable documentation of what we have learnt.
- A strategy to simplify review. Each changeset is tiny tiny tiny and obviously correct. You can review by changeset, or end to end. Your choice.
- A clear separation between observability changes, test extensions, true refactorings and behaviour changes.
- A gradation of risk shading from zero to some risk, allowing you to focus your test, review, debug efforts where they count.
- A rapid strategy (bisection) for finding and fixing breaking changes.
- A system to avoid interleaving changes from multiple programmers, to avoid broken mainlines, and to make it easy to pinpoint breaking changes once the branch is published.
-- John Carter Tait Electronics New Zealand
|
Re: unit test condition on sql statement
Thanks, everyone for your feedback/comments.
So far a couple of interesting ideas and thoughts that I could apply for our team:
1) Extract SQL formatting to its own class (which is what we did and merged to production).
2) Likely our next step is to refactor and remove more duplication by introducing an sqlStatement library. For me, I have used the Specification Pattern in the past and would love to try it again to see how well it works for this specific case. I think the main benefit is that it would give me some kind of way/API to gradually remove a huge amount of duplicate SQL formatting code, and we could make it flexible enough to evolve it to call a third-party library like jOOQ if we choose to.
Feel free to let me know if there's anything I have missed or should consider further. Much appreciated for everyone's help. Cheers, Tony
|