© 2025 Groups.io

NMock domain

 

I have nmock.org, which is about to expire. I've been paying for it for some time.

As far as I can tell, the project is dead, as is the SourceForge page where it points.


Is there anyone with just cause who should have the domain?

thx

S


Re: Classifying tests: problem? solution? something else?

 

Wisdom, for me, includes no longer viewing such usage as "improper", but merely something more like "unexpected" or "different". I don't even mean that we stop calling it improper, but that we stop experiencing it as improper. :) Many do the first; fewer do the second.

On Sat, Jul 29, 2023 at 10:45 AM David Koontz <david@...> wrote:
That is some WISE advice…. Wish I could have heard it 20 yrs ago…. I'd not have banged my head so much and raged against the improper use of a term!

On Jul 29, 2023, at 9:58 AM, George Dinwiddie <lists@...> wrote:

I've found that it's useless to try to change existing usage, and confusing to ignore existing usage.



--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com
Teaching evolutionary design and TDD since 2002


Re: "The design signals are weak"

 

On Wed, Jul 26, 2023 at 11:39 AM Vaughn Cato via groups.io <vaughn_cato=yahoo.com@groups.io> wrote:
When I talk about a signal being weak, I mean that it isn't a strong indication that something must be done to correct it.

I believe I understand you better now. Indeed, I consider the signals "weak" in the same way, but then I change my expectation from "This code needs to tell me when a design change must be corrected" to "This code tells me when to pay closer attention to emerging design risks", and then it becomes more meaningful to me to evaluate signals for strength. What do you think about that?
The term "smell", I believe, was created to try to indicate that it's something that might be a concern, and is worth consideration, but if no clear general improvement is possible after some consideration, then no change is necessary. For example, if a function is long, then I'll look to see if I can find a natural way to break it up, but if after some consideration, it doesn't seem like breaking it up is going to make the code any clearer, or causes other issues, then I'll ignore that signal and continue.

That matches how I work.

In this sense, I guess I'm saying that the strength of the signal is an indication of how much work I'm willing to do to remove the signal.

For me, the signals such as "I'm having to write a lot of test code to show what I'm trying to achieve" are much stronger, and I'll go to great lengths to avoid it. For example, I may break up a function to make testing easier, even if I wouldn't have broken it up just because it smelled like it was too long.

I believe I understand you. I guess you find yourself more sensitive to the design risk signals coming from the tests than coming from the production code itself. That surprises me, because I don't hear it very often, but it doesn't surprise me, because I tend to work similarly. I tend to hear the opposite: the typical programmer seems to pay little attention to the design risk signals coming from tests. That might be because I tend to work with programmers still drowning in defects, so they're usually too distracted to see the tests as a source of feedback about design choices.

Thanks.
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.
On Fri, Jul 21, 2023 at 9:56 PM Vaughn Cato via groups.io <vaughn_cato=yahoo.com@groups.io> wrote:
I find that the design signals that I get directly during refactoring are actually pretty weak. There are code smells, but these aren't the signals. The code smells are things that you or other people have determined generally tend to make the design better, but this is just something that you recognize from experience.

What makes this "weak" for you? The signals can only say "Pay attention here!" I'm not sure they could ever say anything stronger than that.

But then, the signals from test runs can only say "Pay attention here!" as well. What makes them stronger for you than the "code smell"/"design risk" signals?
I find the actual design signals come about when trying to create a test or to make a test pass. I get strong signals such as "It's not clear to me how to make this pass", or "I'm having to make changes in a lot of places to make this pass," or "I'm having to write a lot of test code to show what I'm trying to achieve." This leads to me doing more preparatory refactoring than other kinds of refactoring, because the signals are strongest then.

I don't find those signals stronger nor weaker, but merely different. What makes them feel stronger for you?
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com
Teaching evolutionary design and TDD since 2002








--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com
Teaching evolutionary design and TDD since 2002


Re: Classifying tests: problem? solution? something else?

 

David,

I got a few lumps on the noggin learning that.

- George

On 7/29/23 1:44 PM, David Koontz wrote:
That is some WISE advice…. Wish I could have heard it 20 yrs ago…. I'd not have banged my head so much and raged against the improper use of a term!

On Jul 29, 2023, at 9:58 AM, George Dinwiddie <lists@...> wrote:

I've found that it's useless to try to change existing usage, and confusing to ignore existing usage.
--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


Re: Classifying tests: problem? solution? something else?

 


That is some WISE advice…. Wish I could have heard it 20 yrs ago…. I'd not have banged my head so much and raged against the improper use of a term!

On Jul 29, 2023, at 9:58 AM, George Dinwiddie <lists@...> wrote:

I've found that it's useless to try to change existing usage, and confusing to ignore existing usage.


Re: Classifying tests: problem? solution? something else?

 

I've often worked with (and in) organizations where the perfect word (from my point of view) for a concept was already in use for something else. I've found that it's useless to try to change existing usage, and confusing to ignore existing usage. Choosing another word to use in that context is well worth the trouble, even if that word seems a little clumsy.

- George

On 7/28/23 4:44 PM, Steve Gordon wrote:
In the context of software development, a customer test refers to a specification of a single thing that the people paying for the software want the software to do in the form of an executable test.
While this sense of customer is not directly related to what customer might mean in the domain model for the software, it can have similar ambiguities and nuances because the people paying for the software usually delegate that level of specification to some combination of line managers, users, business analysts, product marketing, product management, sales, product owners or even the same team that is writing the programmer tests.
On Fri, Jul 28, 2023 at 1:06 PM Sleepyfox <sleepyfox@... <mailto:sleepyfox@...>> wrote:
I like examples.
WRT naming, I worked at an insurance company. There were problems
with churn in API code, particularly around certain domain concepts.
One of those was 'customer'.
To the policy lifecycle team, customer meant 'a person that has
bought one of our policies'.
To the sales team, customer meant 'someone who is considering buying
one of our policies'.
To the support team, customer meant 'a person who needs help, who
may or may not hold a current or expired policy'.
Needless to say, the requirements over what data these teams needed
to store about a 'customer' were quite different, although there were
(of course) common elements.
Once everyone understood that these were different things, we were
able to agree on different language in order to differentiate
different groups' concerns.
Fox
---
On Wed, 26 Jul 2023, 15:19 J. B. Rainsberger, <me@...
<mailto:me@...>> wrote:
On Thu, Jul 6, 2023 at 8:46 PM Ron Jeffries <ronjeffries@...
<mailto:ronjeffries@...>> wrote:
Possibly that is why in XP we called them programmer tests
and customer tests. :)
That's one usually-quite-important axis. :) It's one of the ones
I continue to consider regularly.
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com
Replies from this account routinely take a few days, which
allows me to reply thoughtfully. I reply more quickly to
messages that clearly require answers urgently. If you need
something from me and are on a deadline, then let me know how
soon you need a reply so that I can better help you to get what
you need. Thank you for your consideration.
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com
Teaching evolutionary design and TDD since 2002
--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


Re: Classifying tests: problem? solution? something else?

 

In the context of software development, a customer test refers to a specification of a single thing that the people paying for the software want the software to do in the form of an executable test.

While this sense of customer is not directly related to what customer might mean in the domain model for the software, it can have similar ambiguities and nuances because the people paying for the software usually delegate that level of specification to some combination of line managers, users, business analysts, product marketing, product management, sales, product owners or even the same team that is writing the programmer tests.
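Steve's definition lends itself to a concrete sketch. The domain, the function `quote_premium`, and the numbers below are all hypothetical, invented for illustration; the point is only that the specification is written in the customer's vocabulary and runs as an executable test.

```python
# A minimal sketch of a "customer test": an executable specification of
# one thing the people paying for the software want it to do. The
# insurance-quoting domain and all names here are hypothetical.

def quote_premium(age: int, base_rate: float) -> float:
    """Production code under test: price a policy from age and base rate."""
    surcharge = 1.5 if age < 25 else 1.0   # young drivers pay 50% more
    return round(base_rate * surcharge, 2)

def test_young_drivers_pay_fifty_percent_surcharge():
    # Written in the customer's vocabulary: a 22-year-old quoted at a
    # base rate of 100.00 should pay 150.00.
    assert quote_premium(age=22, base_rate=100.00) == 150.00

def test_drivers_25_and_over_pay_the_base_rate():
    assert quote_premium(age=30, base_rate=100.00) == 100.00

test_young_drivers_pay_fifty_percent_surcharge()
test_drivers_25_and_over_pay_the_base_rate()
```

Who writes such a test varies, as Steve notes; what makes it a customer test is that it states an observable business outcome, not an implementation detail.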

On Fri, Jul 28, 2023 at 1:06 PM Sleepyfox <sleepyfox@...> wrote:
I like examples.

WRT naming, I worked at an insurance company. There were problems with churn in API code, particularly around certain domain concepts.

One of those was 'customer'.

To the policy lifecycle team, customer meant 'a person that has bought one of our policies'.

To the sales team, customer meant 'someone who is considering buying one of our policies'.

To the support team, customer meant 'a person who needs help, who may or may not hold a current or expired policy'.

Needless to say, the requirements over what data these teams needed to store about a 'customer' were quite different, although there were (of course) common elements.

Once everyone understood that these were different things, we were able to agree on different language in order to differentiate different groups' concerns.

Fox
---

On Wed, 26 Jul 2023, 15:19 J. B. Rainsberger, <me@...> wrote:
On Thu, Jul 6, 2023 at 8:46 PM Ron Jeffries <ronjeffries@...> wrote:
Possibly that is why in XP we called them programmer tests and customer tests. :)

That's one usually-quite-important axis. :) It's one of the ones I continue to consider regularly.
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com
Teaching evolutionary design and TDD since 2002


Re: Classifying tests: problem? solution? something else?

 

I like examples.

WRT naming, I worked at an insurance company. There were problems with churn in API code, particularly around certain domain concepts.

One of those was 'customer'.

To the policy lifecycle team, customer meant 'a person that has bought one of our policies'.

To the sales team, customer meant 'someone who is considering buying one of our policies'.

To the support team, customer meant 'a person who needs help, who may or may not hold a current or expired policy'.

Needless to say, the requirements over what data these teams needed to store about a 'customer' were quite different, although there were (of course) common elements.

Once everyone understood that these were different things, we were able to agree on different language in order to differentiate different groups' concerns.
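That resolution can be sketched in code. All class and field names below are hypothetical; the point is that each team's concept gets its own type, so the differing data requirements stop colliding.

```python
# A sketch of "different language for different groups' concerns":
# instead of one overloaded 'Customer', each team's meaning gets its
# own named type. All names here are invented for illustration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PolicyHolder:          # policy lifecycle team's "customer"
    name: str
    policy_number: str       # by definition, they hold a policy

@dataclass
class Prospect:              # sales team's "customer"
    name: str
    quoted_products: list = field(default_factory=list)

@dataclass
class SupportContact:        # support team's "customer"
    name: str
    policy_number: Optional[str] = None   # may hold no policy at all

# The common element (name) stays common; everything else differs.
holder = PolicyHolder("Ada", policy_number="P-123")
caller = SupportContact("Ada")   # same person, different concern
```

Making the distinction explicit in types, rather than in a shared 'Customer' record with mostly-unused fields, is one way to stop the API churn Fox describes.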

Fox
---

On Wed, 26 Jul 2023, 15:19 J. B. Rainsberger, <me@...> wrote:
On Thu, Jul 6, 2023 at 8:46 PM Ron Jeffries <ronjeffries@...> wrote:
Possibly that is why in XP we called them programmer tests and customer tests. :)

That's one usually-quite-important axis. :) It's one of the ones I continue to consider regularly.
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com
Teaching evolutionary design and TDD since 2002


Re: Which comes first: design skill or TDD?

 

"I don't know where people are learning this, apart from generic "life lessons" of perfectionism drilled into them when they were very young."

I left university teaching about 15 years ago, but I always had to fight the old guard of every faculty I worked for on that issue for every programming class I taught. Even the ones who did understand my points said it was too hard for students to learn good software design without having them do a good design for fixed assignment requirements before writing any code (like making students turn in an outline before writing their papers). Even if we grant that premise (which I do not), waterfall, though just an artifact of teaching skills in isolation, gets ingrained as the way to develop software.

On Wed, Jul 26, 2023 at 9:08 AM George Dinwiddie <lists@...> wrote:
On 7/26/23 9:25 AM, J. B. Rainsberger wrote:
> On Wed, Jul 19, 2023 at 3:55 PM Vaughn Cato via groups.io
> <vaughn_cato=yahoo.com@groups.io> wrote:
>
>     One thing that helped me with design by using TDD was how it
>     encourages making constant changes to the code. It makes changes
>     necessary because you haven't thought too far ahead, and it makes
>     changes safe because of the quality of the tests and how fast you
>     can run them. Because you are constantly changing the code, this
>     leads to wanting to find ways to make those changes easier, and code
>     that is easy and safe to change strongly overlaps with well-designed
>     code. Reducing coupling and increasing cohesion is one of those
>     things that you find makes code easier to change.
>
>
> This is perhaps one aspect of Evolutionary Design in general that I
> emphasize: it normalizes changing code; it helps reverse the belief that
> changing code is a failure of prediction and that there is a "first
> time" to "get it right".
>
> I recently sidestepped an argument on Reddit about this. It had been a
> long time since I'd encountered someone who genuinely believed that one
> must always get the design right the first time and every other way to
> work is wrong. They framed it as something like "Why would you waste
> time building it the wrong way just so that you can build it the right
> way later?!" They seemed unable to see that it might not be possible to
> see "the right way" and "the wrong way" in advance or that those labels
> might change or that those labels are not helpful in the first place.
>
> I don't know where people are learning this, apart from generic "life
> lessons" of perfectionism drilled into them when they were very young.

A lot of traditional approaches treat any need to change the code as
evidence of a defect. Back around the start of the century, someone told
me they were following Watts Humphrey's Personal Software Process (PSP)
and were counting every time they hit the backspace key as a defect they
had corrected.

- George

--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------







Re: "The design signals are weak"

 

On 7/26/23 10:12 AM, J. B. Rainsberger wrote:
On Mon, Jul 24, 2023 at 7:25 PM George Dinwiddie <lists@... <mailto:lists@...>> wrote:
I found that much of my design pressure came from the GREEN state.
After I made the test pass, I often noticed that the code that did so
didn't communicate well. Often it was coupled with something else, or
had two concepts intermingled in a way that wasn't cohesive. This is
what drove my refactoring.
I presume you notice the signal to consider refactoring coming from both the production code itself (most likely focused on the code you just added) and from the tests (most likely the tests you most-recently wrote). Is that how you experience it?
Yes, the code I just wrote may be using some value that "seems wrong" for the class to own, and I may need to extract it to some place where it belongs better. I might have just introduced that value as a parameter passed by the test, but it should have a longer lifespan than one method call. Or, I might have written code that just assumes the value, though clearly it could be different for different needs.

Either way, I think about where I would like this value to live. Perhaps it belongs to some application configuration object or data store. Perhaps it's part of a larger sequence of operations. I might extract it out now just to keep it separate, but eventually I'll also push it to the entity where it seems to belong.
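The move George describes can be sketched as follows. The names (`AppConfig`, `Invoice`, `tax_rate`) are hypothetical; the sketch only shows a value migrating from a test-supplied parameter into a configuration object with a longer lifespan than one method call.

```python
# A sketch of pushing a value to where it "seems to belong": the tax
# rate was first introduced as a parameter in a test, then extracted to
# an application configuration object. All names are invented.
from dataclasses import dataclass

@dataclass
class AppConfig:
    tax_rate: float = 0.15   # the value now lives here, not in Invoice

class Invoice:
    def __init__(self, subtotal: float, config: AppConfig):
        self.subtotal = subtotal
        self.config = config   # outlives any single method call

    def total(self) -> float:
        return self.subtotal * (1 + self.config.tax_rate)

# A test can still vary the value, but now through the config object:
config = AppConfig(tax_rate=0.10)
assert round(Invoice(100.0, config).total(), 2) == 110.0
```

Keeping the value separate first, then moving it to its eventual home, matches the two-step George outlines.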

- George

--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


Re: Which comes first: design skill or TDD?

 

On 7/26/23 9:25 AM, J. B. Rainsberger wrote:
On Wed, Jul 19, 2023 at 3:55 PM Vaughn Cato via groups.io <vaughn_cato@...> wrote:
One thing that helped me with design by using TDD was how it
encourages making constant changes to the code. It makes changes
necessary because you haven't thought too far ahead, and it makes
changes safe because of the quality of the tests and how fast you
can run them. Because you are constantly changing the code, this
leads to wanting to find ways to make those changes easier, and code
that is easy and safe to change strongly overlaps with well-designed
code. Reducing coupling and increasing cohesion is one of those
things that you find makes code easier to change.
This is perhaps one aspect of Evolutionary Design in general that I emphasize: it normalizes changing code; it helps reverse the belief that changing code is a failure of prediction and that there is a "first time" to "get it right".
I recently sidestepped an argument on Reddit about this. It had been a long time since I'd encountered someone who genuinely believed that one must always get the design right the first time and every other way to work is wrong. They framed it as something like "Why would you waste time building it the wrong way just so that you can build it the right way later?!" They seemed unable to see that it might not be possible to see "the right way" and "the wrong way" in advance or that those labels might change or that those labels are not helpful in the first place.
I don't know where people are learning this, apart from generic "life lessons" of perfectionism drilled into them when they were very young.
A lot of traditional approaches treat any need to change the code as evidence of a defect. Back around the start of the century, someone told me they were following Watts Humphrey's Personal Software Process (PSP) and were counting every time they hit the backspace key as a defect they had corrected.

- George

--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


Re: "The design signals are weak"

 

I find the terms weak and strong to be useful in a philosophical context, rather than a judgemental one.
I.e., something is strong if it always results in a specific outcome, and something is weak if it influences but does not guarantee an outcome.
At least, that is how I read the initial comment.
If it doesn't have any influence, then the correlation is a non sequitur.

On Wed, Jul 26, 2023, 17:39 Vaughn Cato via groups.io <vaughn_cato=yahoo.com@groups.io> wrote:
When I talk about a signal being weak, I mean that it isn't a strong indication that something must be done to correct it. The term "smell", I believe, was created to try to indicate that it's something that might be a concern, and is worth consideration, but if no clear general improvement is possible after some consideration, then no change is necessary. For example, if a function is long, then I'll look to see if I can find a natural way to break it up, but if after some consideration, it doesn't seem like breaking it up is going to make the code any clearer, or causes other issues, then I'll ignore that signal and continue.

In this sense, I guess I'm saying that the strength of the signal is an indication of how much work I'm willing to do to remove the signal.

For me, the signals such as "I'm having to write a lot of test code to show what I'm trying to achieve" are much stronger, and I'll go to great lengths to avoid it. For example, I may break up a function to make testing easier, even if I wouldn't have broken it up just because it smelled like it was too long.

- Vaughn


On Wednesday, July 26, 2023 at 09:44:21 AM EDT, J. B. Rainsberger <me@...> wrote:


On Fri, Jul 21, 2023 at 9:56 PM Vaughn Cato via groups.io <vaughn_cato=yahoo.com@groups.io> wrote:
I find that the design signals that I get directly during refactoring are actually pretty weak. There are code smells, but these aren't the signals. The code smells are things that you or other people have determined generally tend to make the design better, but this is just something that you recognize from experience.

What makes this "weak" for you? The signals can only say "Pay attention here!" I'm not sure they could ever say anything stronger than that.

But then, the signals from test runs can only say "Pay attention here!" as well. What makes them stronger for you than the "code smell"/"design risk" signals?
I find the actual design signals come about when trying to create a test or to make a test pass. I get strong signals such as "It's not clear to me how to make this pass", or "I'm having to make changes in a lot of places to make this pass," or "I'm having to write a lot of test code to show what I'm trying to achieve." This leads to me doing more preparatory refactoring than other kinds of refactoring, because the signals are strongest then.

I don't find those signals stronger nor weaker, but merely different. What makes them feel stronger for you?
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com
Teaching evolutionary design and TDD since 2002







Re: "The design signals are weak"

 

When I talk about a signal being weak, I mean that it isn't a strong indication that something must be done to correct it. The term "smell", I believe, was created to try to indicate that it's something that might be a concern, and is worth consideration, but if no clear general improvement is possible after some consideration, then no change is necessary. For example, if a function is long, then I'll look to see if I can find a natural way to break it up, but if after some consideration, it doesn't seem like breaking it up is going to make the code any clearer, or causes other issues, then I'll ignore that signal and continue.

In this sense, I guess I'm saying that the strength of the signal is an indication of how much work I'm willing to do to remove the signal.

For me, the signals such as "I'm having to write a lot of test code to show what I'm trying to achieve" are much stronger, and I'll go to great lengths to avoid it. For example, I may break up a function to make testing easier, even if I wouldn't have broken it up just because it smelled like it was too long.
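Vaughn's response to that signal can be sketched like this. The functions are hypothetical; the sketch shows extracting the decision logic out of an orchestrating function so that the interesting part needs only one line of test code.

```python
# A sketch of acting on the "I'm writing a lot of test code" signal:
# the decision logic is extracted from the larger function so it can be
# tested directly. Function names are invented for illustration.

def parse_discount_code(code: str) -> float:
    """Pure logic, extracted so each case needs only a one-line test."""
    if code.startswith("VIP"):
        return 0.20
    if code.startswith("SALE"):
        return 0.10
    return 0.0

def apply_discount(order_total: float, code: str) -> float:
    # The surrounding function keeps the orchestration; the extracted
    # helper carries the logic worth testing in isolation.
    return order_total * (1 - parse_discount_code(code))

assert parse_discount_code("VIP-42") == 0.20
assert apply_discount(100.0, "SALE-7") == 90.0
```

The split happens for testability, even if `apply_discount` was never long enough to smell like a Long Method on its own, which is exactly the distinction Vaughn draws.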

- Vaughn


On Wednesday, July 26, 2023 at 09:44:21 AM EDT, J. B. Rainsberger <me@...> wrote:


On Fri, Jul 21, 2023 at 9:56 PM Vaughn Cato via groups.io <vaughn_cato=yahoo.com@groups.io> wrote:
I find that the design signals that I get directly during refactoring are actually pretty weak. There are code smells, but these aren't the signals. The code smells are things that you or other people have determined generally tend to make the design better, but this is just something that you recognize from experience.

What makes this "weak" for you? The signals can only say "Pay attention here!" I'm not sure they could ever say anything stronger than that.

But then, the signals from test runs can only say "Pay attention here!" as well. What makes them stronger for you than the "code smell"/"design risk" signals?
I find the actual design signals come about when trying to create a test or to make a test pass. I get strong signals such as "It's not clear to me how to make this pass", or "I'm having to make changes in a lot of places to make this pass," or "I'm having to write a lot of test code to show what I'm trying to achieve." This leads to me doing more preparatory refactoring than other kinds of refactoring, because the signals are strongest then.

I don't find those signals stronger nor weaker, but merely different. What makes them feel stronger for you?
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com
Teaching evolutionary design and TDD since 2002







Re: Classifying tests: problem? solution? something else?

 

On Thu, Jul 6, 2023 at 8:46 PM Ron Jeffries <ronjeffries@...> wrote:
Possibly that is why in XP we called them programmer tests and customer tests. :)

That's one usually-quite-important axis. :) It's one of the ones I continue to consider regularly.
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com
Teaching evolutionary design and TDD since 2002


Re: Classifying tests: problem? solution? something else?

 

On Thu, Jul 6, 2023 at 2:06 AM Mauricio Aniche <mauricioaniche@...> wrote:
In the past two years, in which I have been working on a codebase with hundreds of thousands of tests and almost a hundred different teams touching it, I started to “care less” about semantically classifying tests. Instead, team members can come to an agreement about what makes more sense to them in their context. Do we really need to have a single classification company-wide?

It might be wise to have a common vocabulary among such a large project community, but I definitely don't recommend One Corporation-Wide Strategy. :)
Nowadays, I really care about classifying tests in terms of their infrastructure costs. This matters globally and must be defined at company level, because although code isn't (well, sometimes it is) shared among teams, resources are. Reliability is another category that I care about. You want the tests you run in the pre-merge to give you a 100% sound signal.

I love it! "Let's classify based on what actually matters to us" instead of "Let's classify based on some abstract notion of correctness that some person wrote in a book one time".
Do we allow multithreading in our unit test suite, or should these tests be somewhere else? Do we allow mock server in it? When do we need to run all the tests and when can we just run a subset of them? How can we bring (costly) integration tests to the pre-merge? What to do with flaky tests; should we delete them, should we keep them there waiting for someone to fix them, should we move them to another place? These are questions that have been on my mind when I talk about segregating tests.

This reminds me of the first project in which I participated with other people practising TDD who actually wanted to do it. We simply posted the top 10 list of slowest tests every day and used that to guide our choices about breaking integrated things apart to make them more individually inspectable. That was enough.
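That daily top-10 practice can be sketched in a few lines. The timing data below is fabricated for illustration; in a real project the numbers would come from the test runner (pytest's `--durations=10` flag prints a similar list).

```python
# A sketch of "post the top 10 slowest tests every day": rank tests by
# duration to guide which integrated things to break apart. The test
# names and timings here are invented for illustration.

def slowest_tests(durations, n=10):
    """Return the n slowest tests as (name, seconds), slowest first."""
    return sorted(durations.items(), key=lambda kv: kv[1], reverse=True)[:n]

timings = {
    "test_checkout_end_to_end": 42.0,
    "test_tax_calculation": 0.002,
    "test_report_via_real_database": 17.5,
    "test_discount_rules": 0.004,
}

for name, seconds in slowest_tests(timings, n=3):
    print(f"{seconds:8.3f}s  {name}")
```

The value of the practice lies less in the ranking itself than in making the cost visible every day, so the team keeps choosing what to make more individually inspectable.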
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com
Teaching evolutionary design and TDD since 2002


Re: Classifying tests: problem? solution? something else?

 

On Tue, Jul 4, 2023 at 1:19 PM George Dinwiddie <lists@...> wrote:
I agree that the naming can be confusing because often the same name
means different things to different people. I don't get too hung up on
the naming of types of tests (though I love GeePaw's "microtests" because
it gets out of the "unit test" mire). Instead, I try to talk about the
meaning the other person has behind the name.

I'd like to second this loudly: I noticed a huge improvement in my relationships with people when I stopped insisting on my understanding of the terms and instead focused on exploring the differences in our mutual understanding of the terms.
--
J. B. (Joe) Rainsberger :: tdd.training :: jbrains.ca :: blog.thecodewhisperer.com

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: :: ::
Teaching evolutionary design and TDD since 2002


Re: "The design signals are weak"

 

On Mon, Jul 24, 2023 at 7:25 PM George Dinwiddie <lists@...> wrote:
I found that much of my design pressure came from the GREEN "cat" state. After I made the test pass, I often noticed that the code that did so didn't communicate well. Often it was coupled with something else, or had two concepts intermingled in a way that wasn't cohesive. This is what drove my refactoring.

I presume you notice the signal to consider refactoring coming from both the production code itself (most likely focused on the code you just added) and from the tests (most likely the tests you most-recently wrote). Is that how you experience it?
--
J. B. (Joe) Rainsberger :: :: ::

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: :: ::
Teaching evolutionary design and TDD since 2002


Re: "The design signals are weak"

 

On Mon, Jul 24, 2023 at 1:55 PM Steve Gordon <sgordonphd@...> wrote:
I advocate postponing refactoring to any design pattern if related stories are still in the backlog, especially if they are slices of the same bigger story. The implementation of the later stories may reveal a better pattern.

Interesting! I advocate becoming comfortable refactoring away from patterns, because many programmers still feel subtle cues of guilt or shame related to undoing design decisions, especially "bigger" ones, such as introducing a pattern.

I have noticed a trend in phases. Each phase increases options and decreases anxiety, guilt, shame, blame, that kind of thing:

1. Person X gets used to refactoring in general, so that it's no big deal to change "little" design decisions.
2. X gets used to refactoring away from patterns, so that it's no big deal. They make peace with changing their mind, even after making "bigger" decisions, such as refactoring towards a pattern.
3. X notices that, since they can refactor away from a pattern, it's OK to refactor towards a pattern somewhat prematurely.
4. X notices that, since they can refactor towards a pattern, it's OK to wait for stronger signals that the pattern will help, trusting that they'll refactor towards the pattern when that becomes clear.

The end results seem the same, except that when someone in phase 1 or phase 2 tries to postpone refactoring to a pattern (for the good reasons you describe), they might never break through phases 3 and 4.

I notice a meta-pattern that (many) people who have made it through phase 4 routinely advise people in phase 1 or phase 2 to do things that stop them from progressing through phases 3 and 4. They do it to try to protect the less-experienced, but it often results in robbing them of the experience they need to progress.

That's one of the reasons I wanted to revive this group: to help the people in phases 1 and 2 find the advice they need to help them get through phases 3 and 4.
--
J. B. (Joe) Rainsberger :: :: ::

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: :: ::
Teaching evolutionary design and TDD since 2002


Re: "The design signals are weak"

 

On Mon, Jul 24, 2023 at 10:51 AM Olof Bjarnason <olof.bjarnason@...> wrote:
I have a couple of opinions/experiences I want to share on the topic of design pressure and TDD.

TDD has three 'states' which I think of as animals or colors:

BLUE/Owl: "add a test" (understand the problem / see the big picture)
RED/Rabbit: "make it pass" (as quickly as possible / run for your life)
GREEN/Cat: "make it right" (improve readability / enjoy life)

Cute. I love it.
I re-invented my own view of what good design is when I started out with TDD in 2006, and I think that requires an effort of curiosity and time. It is not comfortable if you are stuck in your own ideas of what good design is -- especially if that design isn't very testable. Prime example: letting go of patterns that make testing harder, like the Singleton pattern from GoF.
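To make the Singleton point concrete: instead of having code reach for a global instance, the collaborator can be passed in, so a test can substitute its own. This is only a sketch under invented names (FixedClock, Greeter, and so on are not from the original message); it illustrates the general move from a hidden global to an injected dependency.

```python
# Hypothetical sketch: replacing a global/Singleton collaborator with an
# injected one, which makes the behavior testable without real wall-clock time.
import datetime

class SystemClock:
    """Production collaborator: reports the real current time."""
    def now(self):
        return datetime.datetime.now()

class FixedClock:
    """Test double: always reports the same instant."""
    def __init__(self, instant):
        self.instant = instant
    def now(self):
        return self.instant

class Greeter:
    def __init__(self, clock):  # dependency injected, not fetched from a global
        self.clock = clock
    def greeting(self):
        return "Good morning" if self.clock.now().hour < 12 else "Good afternoon"
```

A test constructs `Greeter(FixedClock(...))` and checks the result directly, where a Singleton clock would have forced the test to depend on when it happened to run.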

This captures very well how I try to teach the technique: as a change in or a difference in values. The people who practise TDD _and stick with it_ tend to either change their values in the process or discover that TDD helps them live more authentically the values they already had. I think I was in the second category.

*The GREEN/refactoring state of TDD does give you a chance to improve things, in both test code and production code -- especially the 'internal' quality of the production code (implementation details). The API/external design can also be changed, but at a higher cost than when writing the first test / doing the initial API design. That is why I think the highest pressure is in the BLUE state rather than the GREEN state.

I think it does more, and this seems to be a somewhat controversial point: it gives you the opportunity, suggests options, and reminds you to consider those options. That is a sense in which TDD leads (on average) to better designs (according to our revised notion of "good", one that includes "adaptability" as a "good" property).
--
J. B. (Joe) Rainsberger :: :: ::

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: :: ::
Teaching evolutionary design and TDD since 2002


Re: "The design signals are weak"

 

On Sat, Jul 22, 2023 at 1:04 PM Steve Gordon <sgordonphd@...> wrote:
Promiscuous pairing and code reviews are remedies you can find in the literature that try to address this problem. If you find any deterministic rules for determining whether a design is good, I would be extremely skeptical of those rules. Otherwise, there would be software that you could run on the code base after every change to see whether the design got worse.

Indeed. I'm not a fan of the Transformation Priority Premise as it seems to me to have been advertised, even though I imagine I have a similar priority tree of heuristics in my mind when I think about how I might refactor something.

I claim that the value in that priority tree of heuristics comes from having built it oneself (almost certainly through practice), rather than from having some universal tree that everyone ought to follow.
I would add that if proposed requirement changes are sometimes pushed off or rejected because of how much work they would entail, that is a good signal that the design might be poor (or that the changes were ill-conceived).

This is almost equivalent to how I define legacy code: profitable code that we feel afraid to change. (Only the profitability of the current code is not made clear, although one hopes that it's implied by the fact that anyone's working on it at all. :) )
--
J. B. (Joe) Rainsberger :: :: ::

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

--
J. B. (Joe) Rainsberger :: :: ::
Teaching evolutionary design and TDD since 2002