
Re: [TDD] Erroneous Assertion about TDD

 

Very well said, Russ!


________________________________
From: Russell Gold <russ@...>
To: testdrivendevelopment@...
Sent: Sunday, June 2, 2013 5:12 AM
Subject: Re: [TDD] Erroneous Assertion about TDD


"is there any room left for creativity in coding?" That's hilarious.

Let's reach back in time...

As poets focus more on rhyme schemes and meter, where is there room for creativity? Now they have this new-fangled thing called a "sonnet." It has to be exactly 14 lines long, all in iambic pentameter, with each of the first three groups of lines following an abab rhyme scheme, and the final two forming a rhyming couplet. Is there any room left for creativity in poetry?

To ask the question is to demonstrate that one does not understand the industry and the practices one whit. Creativity does not mean undisciplined hacking.

On Apr 20, 2013, at 1:41 AM, Charlie Poole <charliepoole@...> wrote:

Hi All,

Here's an excerpt from a newsletter I got from O'Reilly today...

"For many old-school software engineers, *developing code* has always been *as
much an art as a science*. But as the industry focuses more on practices
such as Test-Driven Development, and Patterns become the lingua franca of
programming, *is there any room left for real creativity in coding?* Or,
has it become an exercise in cookie-cutter production, putting together
components in new ways, but without any real room for individual style?"

How can such misunderstanding still exist? Anyway, they asked for
opinions and I sent this:

"I was a bit dismayed at your question "...as the industry focuses more on
practices such as Test-Driven Development, and Patterns become the lingua
franca of programming, is there any room left for real creativity in
coding?"

"I'll address the point as it applies to Test-Driven Development, since
that's closest to my heart. I'm an advocate of TDD as well as the
maintainer of NUnit, a unit-test framework that aims to facilitate TDD.

"To put it baldly, anyone who finds that Test-Driven Development stifles
their creativity simply isn't doing TDD as it's intended to be practiced.
The self-imposed discipline of first discovering tests that will require
the development of the production code we actually want to write is without
a doubt difficult to learn. But once it is learned, it's a source of great
creativity. Just as the strict form of the sonnet allowed Shakespeare to
channel his creativity, the discipline of TDD demands creativity from
developers who adopt it.

"Of course, doing TDD this way doesn't just happen. First and foremost, as
hinted in the preceding paragraph, it's a discipline that must be first
understood and chosen by the programmer - either as an individual or as
part of a team that decides to adopt TDD. Management-imposed TDD - like
most management-imposed technical practices - only succeeds in taking
responsibility away from the programmer.

"Test-Driven Development is a technique discovered and advocated by
programmers for programmers. It's a spark for creativity. When it's
reinterpreted as a set of rules to be imposed on the programmers, it can
easily have the opposite effect. But that's not unique to TDD. We've seen
it before with many other techniques and will likely see it again."

What do you think? Have you heard such complaints? What do you tell people
in response?

Charlie


-----------------
Come read my webnovel, Take a Lemon <>,
and listen to the Misfile radio play <>!








Re: Value and Principles of Unit Testing.

 

--- John Carter <john.carter@...> wrote:
... that makes me all the more determined to get it Right!
There is no "right." We strive to do a good job. But there is always room for improvement.

You try your best ideas with the team. And then you, as a team, improve them.

The large body of embedded C software they are working on
has slowly been growing unit test coverage and has now
reached around 16% by SLOC.
...
I've been cringing as I wade through the list of "test smells";
I recognize them all.
First: We live, we learn.

No matter how well you have done, tomorrow you will know more and be able to do more. You will look back and see areas where you can improve.

Recognize what you have already accomplished. For example, most projects have no tests. You're all the way up to 16%. You have started, and that's the hardest part.


[I cut a lot of really good stuff from the message here. It's great stuff, so I don't have much to say about it.]


The larger the part of production code under test, the weaker your
test coverage is and the more fragile your tests are.
Minimize Test Overlap, Verify One Condition Per Test.
But remember: If the parts don't work together properly, the system as a whole fails. There is value to testing large integrated chunks, too.

There should be only one reason for a test to fail (Defect
Localization) and only one test that fails due to a given defect
(Effective, Targeted Testing).
In future we should aim to be testing, in each test, a
single, specific aspect of the Code Under Test. Ideally,
if that single aspect is broken, only one test should
fail, and that should be the only reason that that test
could fail.
Good ideas. Try not to stress about them too much.


Keep Test Logic Out of Production Code.
But be open to making reasonable changes to the design of the production code to make it testable.

These "test the framework tests" need to be actively discarded.
If you mistrust the framework, add tests to the unit tests for the
framework.
But it is reasonable to test that you are using the functionality of the framework correctly to achieve the desired result.

"const" is a keyword that always make me relax and feel
less stressed. "const" is a remarkable powerful statement.
Use it where ever possible.
["Const correctness" is a religion. And a virus. If/when you adopt it, it will spread throughout your code base. This can be a very good thing. Just don't be surprised at its viral nature.]

I/O functions are hard because...
...
To cope with these facts, we need to alter our designs to make them
testable. (Turns out this is actually A Good Thing!)
...

Yes, testing I/O is hard -- user I/O and interfaces to complex external systems that are non-trivial to control.

Isolate and mock them.


Conclusions
- As our Test Coverage has grown, "smells" and weaknesses
in our tests have emerged and need to be addressed.
- We need to emphasize Design for Test.
- We need to highlight the differences between pure,
stateful, service and I/O functions and adjust our test
strategies accordingly.
- Unit Tests are about Defect Localization, not paranoia.
Looks like a good list. Lots of good ideas.


An agile approach is for you and your team to work together to improve this over time.


Re: [TDD] Value and Principles of Unit Testing.

 

John Carter wrote:
The following is aimed at my own team, but before I inflict it on them, I
thought I would run it past the wise folks of this forum.
I think you have a lot of good stuff in there, and it would be worth
making it into a presentation, as someone else on the list suggested.

Also for anyone learning TDD and testing I recommend the article


It's a nice article that summarizes various approaches to testing at
varying "proficiency levels". The key insight is that "testing techniques"
are not a bag of universally useful things but a bag of tools you pick up
and drop as you get more and more experienced.

Michal Svoboda


Re: [TDD] Value and Principles of Unit Testing.

 

John,

On the GOOS list, someone asked:

Do you still feel like the Unit tests have their own value to make them
worth it?

The context was that, for him, acceptance tests (ATs) alone might be
enough.

My answer on why I still value unit tests follows. Since you write about
the value of unit tests too, it may interest you. Some points may be
slightly controversial.

--
o,
Dz�



So, have you ever had a time where you were tempted to JUST write the
ATs, and NOT tests lower down?
...
When you know that you have an acceptance test covering the code, does
that affect when you make a decision to write a unit test or not?

No. When I write an AT I always want to write the unit tests too. But
there were rare cases where we did not follow this rule. Here is one case
I remember:

In some systems we used GWT and DTOs to transfer data between the browser
and the server, and we needed to copy the data from the DTOs into
server-side classes (Entities). To do this we created converters and their
respective unit tests. Example: ClientDTO, Client, ClientConverter,
ClientConverterTests. Each new converter had its unit tests.

As the DTOs and Entities in many cases had properties with the same names,
we created a generic converter with, of course, unit tests. We could create
unit tests to prove that the conversion was possible. The only way the
conversion would not work is when the property names did not match, for
example, the DTO has getName() and the Entity has setNname(). For this
case, the team judged that the risk of this kind of error was very low and
that, if it happened, the AT would catch it.

Even in this case, if we had decided to create the tests, it would have
been no problem. They would serve as unit-test documentation of the
dependency between the DTO and the Entity.
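
A minimal sketch of the idea, recast in C (the original was GWT/Java, so
all types and names here are invented for illustration): the hand-written
converter plus the unit test that pins the DTO-to-entity mapping down.

    #include <assert.h>
    #include <string.h>

    /* Hypothetical stand-ins for ClientDTO and Client. */
    typedef struct { char name[32]; } client_dto;
    typedef struct { char name[32]; } client;

    /* A hand-written converter, the kind paired with ClientConverterTests. */
    static void client_from_dto(client *out, const client_dto *in)
    {
        strncpy(out->name, in->name, sizeof out->name - 1);
        out->name[sizeof out->name - 1] = '\0';
    }

    /* The unit test documents and pins down the mapping. */
    int main(void)
    {
        client_dto dto = { "Ada" };
        client c = { "" };
        client_from_dto(&c, &dto);
        assert(strcmp(c.name, "Ada") == 0);
        return 0;
    }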

Do you still feel like the Unit tests have their own value to make them
worth it?

Yes. Mainly for design. Maybe you are a good programmer and can test-drive
using only ATs and keep the design nice. Maybe I could too; I don't know, I
have never tried the experiment. But with beginner to intermediate
developers I have no doubt that it helps a lot. I coach and help teams in
my company in this area, and someone always comes to me and asks: "I think
this test is not good. What do you think?" And generally we change the
design (break a big class into smaller ones (SRP), eliminate duplication,
etc.) to what we think is a better design. Pain in the unit test is an
indication of bad design. I think the GOOS book talks about this in much
more detail. :)

Other advantages of unit testing:

FAST FEEDBACK

If you use only ATs to develop, you will stay with a red (broken) test for
a longer time than if you test each class in isolation. I would not like
that kind of development: red, code, refactor; red, code, refactor; red,
code, refactor... green. I find something like this more pleasurable: red,
green, refactor; red, green, refactor; ... green. The TDD mantra. I have
fun with it. Each little green is fast feedback on progress. And that is
good.

FAST FEEDBACK II

Not everyone will create ATs that run as fast as the unit tests. In that
case the unit tests are essential. For example, there is a system here
where the direct-call tests use the real database. These tests execute in
ten minutes, the web tests in two hours, and the unit tests in less than
five seconds.

DESIGN (again)

Not everyone will create two kinds of ATs as we do. In many projects they
only automate ATs at the external interface (the web, for example). In this
case the rule of creating unit tests for all code helps inexperienced
developers avoid putting logic in the interface component. It happened
here: "How would I test this if it is in the interface and I can't
instantiate the interface?" Answer: "Extract as much logic and code as
possible from the view. Use, for example, the presenter pattern. Hey, what
is this SQL code doing here?! Have you ever heard of repositories or DAOs?"

FASTER PROBLEM DETECTION

If you are creating isolated unit tests à la GOOS (interaction tests),
then when your build breaks there is, theoretically, only one test (or a
few) in a red state, indicating very precisely the origin of the error. If
you use only ATs, the whole stack of objects involved is a potential
candidate for the problem. When you work alone it is not a big problem, as
the last place you changed is probably the place of the error, but when you
work on a team it starts to make a difference.

DEFECT PREVENTION

"The act of designing test� rather than the act of testing (i.e. knowing
what to test and how to test) is one of the best known defect
preventers[33]. Well I am not so smart to state this, this statement is
here:

"Analysis and Quantification of Test Driven Development Approach - Boby
George
A thesis submitted to the Graduate Faculty of North Carolina State
University
in partial fulfillment of the requirements for the Degree of Master of
Science"

and [33]: Beizer, B., Software Testing Techniques. ITP, 2nd Edition. 1990.

I have a feeling that it is true. :)

DOCUMENTATION

The unit tests document the API of each class used in the project. So if
you are going to maintain a project that is new to you, look at the unit
tests to learn how to use the classes. For example, when a search is made
with ClientSearcher.search(clienteId) and there are no items, what happens?
Does it return null? Does it return an empty list? Does it throw a
ClientNotFoundException? Look in the unit tests and discover the answer.
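
For instance, a test that answers exactly that question (again a C sketch
with invented names, assuming a miss yields an empty result rather than an
error):

    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical search API; the real one returns a list of clients. */
    typedef struct { size_t count; } client_list;

    static client_list client_searcher_search(int client_id)
    {
        (void)client_id;               /* lookup elided in this sketch */
        client_list empty = { 0 };     /* a miss yields an empty list  */
        return empty;
    }

    /* This test IS the documentation: no items means an empty list,
       not a null pointer and not an error. */
    int main(void)
    {
        client_list found = client_searcher_search(42);
        assert(found.count == 0);
        return 0;
    }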

SIZE/COMPLEXITY METRICS

The number of unit tests can be a good indicator of the size and complexity
of an application. Suppose an application with no time restriction on a
client search:

Story: Search for one client
Given a client with name Dz� Santos is registered
When I search by name for Dz� Santos
Then I should find Dz� Santos

And now we change the application to restrict the time the search may take
(adding a new story):

Story: Search for one client must take less than one second
Given there are a million clients registered
And a client with name Dz� Santos is registered
When I search by name for Dz� Santos
Then the search should take less than a second

The only new AT added verifies the time the search takes to finish. But
probably many more unit tests are created to build the solution. If you use
a cache on the client side, you will have unit tests for that client-side
code. If you also write caching on the server side, you will have unit
tests for that too. So I think using only ATs to measure the
size/complexity of an application is not a good idea. See more about these
metrics using ATs and UTs in this paper (Agile Metrics at the Israeli Air
Force):


Side note: here we use function points :( to pay for outsourced
development. This paper gives me the idea of perhaps changing to test
points. But that is for a future far, far away. If possible.

ENABLE BETTER TEAM WORK

Suppose we are on a team focused on finishing the first AT of the first
story, with the following stack (C is a class): AT->C1->C2. You are working
on C2. If we use only the AT to drive development, we are relying on the
feedback from the error presented when the AT is executed. But this error
can change on your next update because of changes made to C1 by another
team member, and you will be distracted by something that is not your focus
at the moment.

Another problem: suppose AT->C1->C2. You are working on C1, and C2 is a
class that is supposed to be finished. If some change to C2 breaks it, your
work on C1 suffers, because you need C2 working properly to finish C1 when
you are using only the AT as a guide, and ATs need the real classes. If we
were using unit tests and a mock for C2, this would not be a problem for
finishing C1.


ATs DON'T GUIDE DEVELOPMENT

When using only ATs to guide development, we will probably use a bottom-up
strategy. So if C1->C2->C3, we will build C3, then C2, and then C1.
Ironically, the ATs will then drive the code less than when you use unit
tests à la GOOS. The first code you build (C3) is not directly related to
the AT, so you may be writing code that is not required by the AT. When you
use need-driven development (GOOS), all the code is justified in a top-down
way by the ATs.

I don't know if everything I said is right. It is just what I think.





On Mon, May 27, 2013 at 12:10 AM, John Carter <john.carter@...> wrote:


The following is aimed at my own team, but before I inflict it on them, I
thought I would run it past the wise folks of this forum.

Suggestions, comments, flames all welcome.

What I write below is somewhat Opinionated, slightly controversial, and
hence potentially Combustible Material.

I have no apologies for that.

However, that makes me all the more determined to get it Right!

The audience for the following document is a team of 20-30 experienced
embedded C developers.

The large body of embedded C software they are working on has slowly been
growing unit test coverage and has now reached around 16% by SLOC.

I'm now calling for us to rethink our Unit Testing.

I have been revisiting some of our coverage, and groaning with
embarrassment at some of the stupid things I did earlier on.

I have been reading and rereading books on the subject, especially Gerard
Meszaros' "xUnit Test Patterns, Refactoring Test Code".

I've been cringing as I wade through the list of "test smells"; I recognize
them all.

I think we can do better. A lot better. I have learnt a lot since I first
introduced Unit Testing, and the industry has learnt a lot.

It is time we took on board that learning.

The Value and Principles of Unit Testing

The Value of Unit Testing

We have Unit Tests for these reasons...

- They are our "always up to date" executable documentation on how to
run our code, and executable specification of what it should do.
- They give us by far the most productive compile / run / test / debug cycle.
- They provide, by many orders of magnitude, the best and most thorough
test coverage of all test techniques.
- They provide direct feedback on where the bug is (Defect
Localization), unlike other test techniques which merely indicate the
presence of defects.
- They are our safety net to allow change, whether for new features, or
refactoring to reduce technical debt.

To maintain the value of a Unit Test suite ...

- We need to run them on every changeset; that is, they must be small and
fast enough not to impose an intolerable burden on check-ins.
- They must be low maintenance (simple, easy to understand)
- and robust (do not break for any reason other than change in what is
directly under test).
- They must be repeatable; that is, every test run on the same code gives
exactly the same result.
- They must be incapable of adding risk to the customer.

Principles of Unit Testing

Write the Test First!

- Unit Tests save us a lot of debugging effort... but only if we haven't
already debugged it!
- Writing the Tests encourages Design for Testability.... Which is a
Very Good Thing!

Design for Testability and Minimize Untestable Code.

In the past we wrote code as if tests didn't exist. This urgently needs to
change.

*In future we MUST change our Designs to be Testable!*

Why? Because testable code is lightly coupled code. Testable code is
understandable code. Testable code is re-usable code. Testable code is
simpler code.

Communicate Intent

Tests are executable documentation and "worked examples". Make sure your
tests are *Good* documents!

Test (EACH PART IN ISOLATION) Exactly what you Fly, Integrate and Fly
EXACTLY what you tested.

That is, the difference between what you test and what you put into
production is what it is linked to, not what it is.

For the Code Under Test, exactly the same bytes should be fed by the
preprocessor to the compiler for the test run, as are fed into production.
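
A sketch of what that means in C (file names and functions invented): the
Code Under Test compiles identically for test and production; only what it
is linked against changes.

    /* sensor.h -- the interface both builds compile against */
    int sensor_read(void);

    /* alarm.c -- Code Under Test: identical bytes in test and production */
    int alarm_should_fire(void) { return sensor_read() > 100; }

    /* test_alarm.c -- the test build links THIS stub in place of the real
       driver; the production build links the real sensor.c */
    #include <assert.h>
    static int fake_reading;
    int sensor_read(void) { return fake_reading; }

    int main(void)
    {
        fake_reading = 101;
        assert(alarm_should_fire());
        fake_reading = 99;
        assert(!alarm_should_fire());
        return 0;
    }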
Keep the Tests Independent

Each test should be able to be run independently of all others.

Test the smallest part you can.

The larger the part of production code under test, the weaker your test
coverage is and the more fragile your tests are.

Minimize Test Overlap, Verify One Condition Per Test.

There should be only one reason for a test to fail (Defect Localization)
and only one test that fails due to a given defect (Effective, Targeted
Testing).

In the past we have done "paranoid" testing... in every test, verify every
condition possible given the set-up, Code Under Test and tear-down. This
makes our tests fragile with respect to change, stiff to change, and
destroys Defect Localization.

In future we should aim to be testing, in each test, a single, specific
aspect of the Code Under Test. Ideally, if that single aspect is broken,
only one test should fail, and that should be the only reason that that
test could fail.
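
For example, split along conditions rather than piling assertions into one
paranoid test (a minimal sketch; the clamp function is invented):

    #include <assert.h>

    static int clamp(int v) { return v < 0 ? 0 : (v > 9 ? 9 : v); }

    /* One condition per test: each test can fail for exactly one reason. */
    static void test_clamp_caps_high(void)       { assert(clamp(42) == 9); }
    static void test_clamp_caps_low(void)        { assert(clamp(-1) == 0); }
    static void test_clamp_passes_in_range(void) { assert(clamp(5) == 5); }

    int main(void)
    {
        test_clamp_caps_high();
        test_clamp_caps_low();
        test_clamp_passes_in_range();
        return 0;
    }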
Keep Test Logic Out of Production Code.

It increases risks to your customer, and doesn't test what you fly. There
are better ways of doing this.

Test the System Under Test, not the Framework!

Far too many of our unit tests are exploring how and whether the framework
works. This results in much additional complexity and clouds the intention
of the test.

These "test the framework tests" need to be actively discarded.

If you mistrust the framework, add tests to the unit tests for the
framework.

No Unit Test, except those explicitly testing the threading infrastructure,
should ever create another thread!

If you have doubts about the correctness of the threading infrastructure...
you are more than welcome to explicitly test it. Just don't do it in any
test that isn't explicitly and solely aimed at testing the threading
infrastructure!

In many places we have tests that verify whether we understand how I/O
works in specific contexts. In future we should create, use and ACTIVELY
DISCARD such tests!

We should aim to capture the results of such spikes only as tests of the
permissible range of inputs and possible responses. We can then form tests
that check we invoke such interfaces correctly, and handle all possible
responses correctly.
They may all be functions... but Pure, Stateful, Service and I/O functions
are very different sorts of functions requiring very different sorts of
tests.

PURE functions

A function that modifies nothing and always returns the same answer given
the same parameters is called a pure function.

These have many nice mathematical properties and are the easiest to test,
to analyse, to reuse and to optimize. Attempt to move as much code as
possible into pure functions.

"const" is a keyword that always make me relax and feel less stressed.
"const" is a remarkable powerful statement. Use it where ever possible.

Tests of pure functions are all about the results, never about functions
they may invoke. That is, never mock a pure subfunction that a pure
function may use for implementation. Use the real one.
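
A minimal sketch of a pure function and its tests (invented example):

    #include <assert.h>
    #include <stddef.h>

    /* Pure: modifies nothing; the same inputs always give the same answer. */
    static long checksum(const unsigned char *buf, size_t len)
    {
        long sum = 0;
        for (size_t i = 0; i < len; ++i)
            sum += buf[i];
        return sum;
    }

    /* The tests care only about results: nothing to mock, no set-up. */
    int main(void)
    {
        const unsigned char data[] = { 1, 2, 3 };
        assert(checksum(data, 3) == 6);
        assert(checksum(data, 0) == 0);
        assert(checksum(data, 3) == checksum(data, 3)); /* repeatable */
        return 0;
    }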
Stateful functions

A stateful function's results depend on some hidden internal state, or it
modifies some hidden internal state as a side effect.

However, unlike a service, if you can set up exactly the same starting
state, the stateful function will have exactly the same behaviour every
time.

Often a stateful function can be refactored into a pure function where...

1. The state is passed in as a const parameter.
2. The result can be assigned back to the state.
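
A sketch of that refactoring (invented example):

    #include <assert.h>

    typedef struct { int count; } tally;

    /* Stateful: the result depends on, and mutates, hidden state. */
    static tally g_tally;
    static int bump(void) { return ++g_tally.count; }

    /* Refactored pure form: state in as const, new state out. */
    static tally bumped(const tally t)
    {
        tally next = { t.count + 1 };
        return next;
    }

    int main(void)
    {
        /* The pure form is trivially repeatable... */
        const tally start = { 41 };
        assert(bumped(start).count == 42);
        assert(bumped(start).count == 42);
        g_tally = bumped(g_tally);  /* the result assigned back to the state */

        /* ...while the stateful form answers differently on every call. */
        assert(bump() == 2);
        assert(bump() == 3);
        return 0;
    }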

The best use for stateful functions is to encapsulate a bundle of related
state (into a class). These functions (or methods) should guarantee that
required relationships (invariants) between those items are maintained.

Where you have a collection of functions (or methods) encapsulating state
(or a class) the best unit testing strategy is...

1. Construct the object (possibly via a common test fixture).
2. Propagate the object to the required state via a public method (which
has been tested in some other test)
3. Invoke the stateful function under test.
4. Verify the result.
5. Discard the object (possibly via a common tear down function).
6. Keep your tests independent; DO NOT succumb to the temptation to
reuse this object in a subsequent test. Fragility and complexity lie that
way.

Preferably DO NOT assert on private, hidden state of the implementation,
otherwise you couple your test to that particular implementation and
representation, rather than the desired behaviour of the class.
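
That recipe as a minimal sketch (invented example; note the verification
goes through the public API, never through private state):

    #include <assert.h>

    /* A small state-encapsulating "class"; invariant: 0 <= level <= 10. */
    typedef struct { int level; } dimmer;

    static void dimmer_init(dimmer *d) { d->level = 0; }
    static void dimmer_up(dimmer *d)   { if (d->level < 10) d->level++; }
    static int  dimmer_level(const dimmer *d) { return d->level; }

    /* One test, one condition: stepping up saturates at the maximum. */
    int main(void)
    {
        dimmer d;
        dimmer_init(&d);                /* 1. construct                      */
        for (int i = 0; i < 10; ++i)
            dimmer_up(&d);              /* 2. drive to the required state    */
        dimmer_up(&d);                  /* 3. invoke the function under test */
        assert(dimmer_level(&d) == 10); /* 4. verify via the public API      */
        return 0;                       /* 5. discard; nothing is reused     */
    }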
Services

A service function is one whose full effect, and precise result, varies
with things like timing and inputs and threads and loads, in too complex a
manner to be specified in a simple test.

Testing services is all about testing interface specifications. The
service's dependencies (unless PURE) must be explicitly cut and controlled
by the test harness.

We have had a strong natural inclination to test whether "this(...)" calls
"that(...)" correctly by letting "this(...)" call "that(...)" and seeing if
the right thing happened.

However, this mostly tests whether the compiler can correctly invoke
functions (yup, it can) rather than whether "this(...)" and "that(...)"
agree on the interface.

Code grown and tested in this manner is fragile and unreusable as it "grew
up together". All kinds of implicit, hidden, undocumented coupling and
preconditions may exist.

We need to explicitly test our conformance to interfaces, and rely on the
compiler to be correct.

1. Does the client make valid requests to the service?
2. Can the service handle those requests?
3. Can the client handle all possible responses from the service?
4. Can the service make every possible response?
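
A sketch of the client's side of that checklist, with the service
dependency cut via a function pointer (the interface and codes are
invented):

    #include <assert.h>

    /* The agreed interface: request codes 0..9; response 0 = OK, -1 = busy. */
    typedef int (*send_fn)(int request);

    /* Client under test, with its service dependency injected. */
    static int client_poll(send_fn send)
    {
        int rc = send(3);            /* makes a request         */
        return (rc == -1) ? 0 : 1;   /* handles either response */
    }

    /* Test double: records the request and scripts the response. */
    static int last_request;
    static int scripted_rc;
    static int mock_send(int request)
    {
        last_request = request;
        return scripted_rc;
    }

    int main(void)
    {
        /* 1. Does the client make valid requests to the service? */
        scripted_rc = 0;
        assert(client_poll(mock_send) == 1);
        assert(last_request >= 0 && last_request <= 9);

        /* 3. Can the client handle all possible responses? */
        scripted_rc = -1;
        assert(client_poll(mock_send) == 0);
        return 0;
    }

Questions 2 and 4 then become tests of the service against the same
scripted specification, with no real client involved.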

I/O functions

I/O functions are hard because...

- I/O primitives tend to be impossible to mock. (The unit test framework
uses them at some level.)
- Care needs to be taken to prevent tests from overwriting / corrupting
production files.
- File / devices tend to be "one per system" thing, so running tests in
parallel can be problematic.
- Some I/O is byte stream representations of highly constrained and
complex objects.
- Some inputs can contain line noise and/or malicious hand crafted
attack vectors.
- Some I/O are complex devices with sensitive time and context dependent
behaviour.

To cope with these facts, we need to alter our designs to make them
testable. (Turns out this is actually A Good Thing!)

1. Placing a thin Facade over I/O primitives gives us a point where we
can mock.
2. Always use relative path names on an explicit absolute base path
(never use the current working directory). Pass the base path in as a
parameter.
3. Move calls to "open()", "connect()", "socket()" up the call graph;
pass the resulting I/O handle as a parameter.
4. Decouple the I/O from representation: serialization is about
data, strings and buffers, not I/O. Implement and test serialization
separately from the task of placing the string on the wire.
5. Where input can be noisy or malicious, it is all the more important
to be able to throw random and malicious test vectors at your code!
6. NEVER use "sleep()" or timers in a unit test harness. If you are
doing that, you're doing unit testing wrong!
7. Design your code so your test harness can explicitly (with malice
aforethought) sequence the order of arrival of events.
8. Use patterns like the Humble Object and/or Reactor to move the I/O to
the highest level in the call graph.
9. Focus on testing that the I/O primitives are invoked correctly, and
that we can handle the response. Rely on the I/O primitives working.
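
Item 1 sketched: a thin facade over the read primitive gives the test a
seam, so the Code Under Test never touches a real file, socket or clock
(all names invented):

    #include <assert.h>
    #include <stddef.h>

    /* A thin facade over the I/O primitives: the seam where we can mock. */
    typedef struct {
        long (*read)(void *ctx, char *buf, size_t len);
    } io_facade;

    /* Code Under Test: counts the newlines arriving on the wire. */
    static int count_lines(const io_facade *io, void *ctx)
    {
        char buf[16];
        int lines = 0;
        long n;
        while ((n = io->read(ctx, buf, sizeof buf)) > 0)
            for (long i = 0; i < n; ++i)
                if (buf[i] == '\n')
                    lines++;
        return lines;
    }

    /* Test double: serves canned bytes; no file, no socket, no sleep(). */
    static long fake_read(void *ctx, char *buf, size_t len)
    {
        const char **src = ctx;
        long n = 0;
        while (**src && n < (long)len)
            buf[n++] = *(*src)++;
        return n;
    }

    int main(void)
    {
        const char *wire = "one\ntwo\n";
        io_facade io = { fake_read };
        assert(count_lines(&io, &wire) == 2);
        return 0;
    }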

Conclusions

- As our Test Coverage has grown, "smells" and weaknesses in our tests
have emerged and need to be addressed.
- We need to emphasize Design for Test.
- We need to highlight the differences between pure, stateful, service
and I/O functions and adjust our test strategies accordingly.
- Unit Tests are about Defect Localization, not paranoia.

--
John Carter Phone : (64)(3) 358 6639
Tait Electronics Fax : (64)(3) 359 4632
PO Box 1645 Christchurch Email : john.carter@...
New Zealand






Re: [TDD] Value and Principles of Unit Testing.

Donaldson, John
 

John C,

Is there some way you can add a bit of structure?
Personally I find it hard to read because it's just a bunch of sentences with no paragraphing and no headings.
Perhaps if you can think of it as a few slides that you are talking over, it will help?

As far as I can tell, the content is fine.

John D.


Re: [TDD] Value and Principles of Unit Testing.

 

Two thoughts:

1) If you aren't finding smells in your own code, that means you've stopped
learning. Have someone else look, and listen to what they say.

2) There are always too many places to look. Remember the "Boy Scout Rule."
Remember Michael Feathers' rules for how to prioritize legacy refactoring.
Remember to improve incrementally.
On May 26, 2013 8:10 PM, "John Carter" <john.carter@...> wrote:

**


The following is aimed at a my own team, but before I inflict it on them, I
thought I would run past the wise folks of this forum.

Suggestions, comments, flames all welcome.

What I write below is somewhat Opinionated, slightly controversial, and
hence potentially Combustible Material.

I have no apologies for that.

However, that makes me all the more determine to get it Right!

The audience for the following document is a team of 20-30 experienced
embedded C developers.

The large body of embedded C software they are working on, has slowly being
growing unit test coverage and has now reached around 16% by SLOC.

I'm now calling for us rethink our Unit Testing.

I have been revisiting some of our coverage, and groaning with
embarrassment at some of the stupid things I did earlier on.

I have been reading and rereading books on the subject, especially Gerard
Meszaros' "xUnit Test Patterns, Refactoring Test Code".

I been cringing as I wade through the list of "test smells", I recognize
them all.

I think we can do better. A lot better. I have learnt a lot since I first
introduced Unit Testing, the Industry has learnt a lot.

It is time we took on board that learning.

The Value and Principles of Unit Testing. The Value of Unit Testing

We have Unit Tests for these reasons...

- They are our "always up to date" executable documentation on how to
run our code, and executable specification of what it should do.
- They are by far the most productive compile / run / test / debug cycle
- They provide, by many orders of magnitude, the best and most thorough
test coverage of all test techniques.
- They provide direct feedback on where the bug is (Defect
Localization), unlike other test techniques which merely indicate the
presence of defects.
- They are our safety net to allow change, whether for new features, or
refactoring to reduce technical debt.

To maintain the value of a Unit Test suite ...

- We need to run them on every changeset. ie. They must be small and
faster enough not to impose an intolerable burden on checkins.
- They must be low maintenance (simple, easy to understand)
- and robust (do not break for any reason other than change in what is
directly under test).
- They must be repeatable. ie. Every test run on the same code gives
exactly the same result.
- They must be incapable of adding risk to the customer.

Principles of Unit Testing. Write the Test First!

- Unit Tests save us a lot of debugging effort... but only if we haven't
already debugged it!
- Writing the Tests encourages Design for Testability.... Which is a
Very Good Thing!

Design for Testability and Minimize Untestable Code.

In the past we wrote code as if tests didn't exist. This urgently needs to
change.

*In future we MUST change our Designs to be Testable!*

Why? Because testable code is lightly coupled code. Testable code is
understandable code. Testable code is re-usable code. Testable code is
simpler code.
Communicate Intent

Tests are executable documentation and "worked examples". Make sure your
tests are *Good* documents!
Test (EACH PART IN ISOLATION) Exactly what you Fly, Integrate and Fly
EXACTLY what you tested.

ie. The difference between what you test and what you put into production
is what it is linked to, not what it is.

For the Code Under Test, exactly the same bytes should be fed by the
preprocessor to the compiler for the test run, as are fed into production.
Keep the Tests Independent

Each test should be able to be run independently of all others.
Test the smallest part you can.

The larger part of production code under test, the weaker your test
coverage is and the more fragile your test is.
Minimize Test Overlap, Verify One Condition Per Test.

There should only one reason for a test to fail (Defect Localization) and
only one test to fail due to a defect (Effective, Targeted Testing).

In the past we have done "Paranoid" testing... on every test verify every
condition possible with the set up, Code Under Test and tear down. This
makes our tests fragile with respect to change, stiff to change, and
destroy's defect localization.

In future we should aim to be testing, in each test, a single, specific
aspect of the Code Under Test. Ideally, if that single aspect is broken,
only one test should fail, and that should be the only reason that that
test could fail.
Keep Test Logic Out of Production Code.

It increases risks to your customer, and doesn't test what you fly. There
are better ways of doing this.
Test the System Under Test, not the Framework!

Far too many of our unit tests are exploring how and whether the framework
works. This results in much additional complexity and clouds the intention
of the test

These "test the framework tests" need to be actively discarded.

If you mistrust the framework, add tests to the unit tests for the
framework.

No Unit Test, except those explicitly testing the threading infrastructure,
should ever create another thread!

If you have doubts about the correctness of the threading infrastructure...
you are more than welcome to explicitly test it. Just don't do it in any
test that isn't explicitly solely aimed at testing the threading
infrastructure!.

In many places we have tests that verify whether we understand how I/O
works in specific contexts. In future we should create, use and ACTIVELY
DISCARD such tests!

We should aim to capturing the results of such spikes only as tests of the
permissible range of inputs and possible responses. We can then form tests
that check we invoke such interfaces correctly, and handle all possible
responses correctly.
They may all be functions.... but Pure, Stateful, Services and I/O
functions are very different sort of functions requiring very different
sort of tests. PURE functions

A function that modifies nothing and always returns the same answer given
the same parameters is called a pure function.

These have many nice mathematical properties and are the easiest to test,
to analyse, to reuse and to optimize. Attempt to move as much code as
possible into pure functions.

"const" is a keyword that always make me relax and feel less stressed.
"const" is a remarkable powerful statement. Use it where ever possible.

Tests of pure functions are all about the results, never about functions
they may invoke. ie. Never mock a pure subfunction that a pure function may
use for implementation. Use the real one.
Stateful functions

Stateful functions results depend on some hidden internal state, or as a
side effect modify some hidden internal state.

However, unlike a service, if you can set up exactly the same starting
state, the stateful function will have exactly the same behaviour every
time.

Often a stateful function can be refactored into a pure function where ....

1. the state is passed in as a const parameter.
2. The result can be assigned to the state.

The best use for stateful functions is to encapsulate a bundle of related
state (into a class). These functions (or methods) should guarantee that
required relationships (invariants) between those items are maintained.

Where you have a collection of functions (or methods) encapsulating state
(or a class) the best unit testing strategy is...

1. Construct the object (possibly via a common test fixture).
2. Propagate the object to the required state via a public method (which
has been tested in some other test)
3. Invoke the stateful function under test.
4. Verify the result.
5. Discard the object (possibly via a common tear down function).
6. Keep your tests independent, DO NOT succumb to the temptation to
reuse this object in a subsequent test. Fragility and complexity lies that
way.

Preferably DO NOT assert on private, hidden state of the implementation,
otherwise you couple your test to that particular implementation and
representation, rather than the desired behaviour of the class.
Services

A service function is one whose full effect, and precise result, varies
with things like timing and inputs and threads and loads in a too complex a
manner to be specified in a simple test.

Testing services is all about testing interface specifications. The
services dependencies (unless PURE) must be explicitly cut and controlled
by the test harness.

We have had a strong natural inclination to test whether "this(...)" calls
"that(...)" correctly by letting "this(...)" call "that(...)" and seeing if
the right thing happened.

However, this mostly tests whether the compiler can correctly invoke
functions (yup, it can) rather than whether "this(...)" and "that(...)"
agree on the interface.

Code grown and tested in this manner is fragile and unreusable as it "grew
up together". All kinds of implicit, hidden, undocumented coupling and
preconditions may exist.

We need to explicitly test our conformance to interfaces, and rely on the
compiler to be correct.

1. Does the client make valid requests to the service?
2. Can the service handle those requests?
3. Can the client handle all possible responses from the service?
4. Can the service make every possible response?

I/O functions

I/O functions are hard because...

- I/O primitives tend to be impossible to mock. (The unit test framework
uses them at some level.)
- Care needs to be taken to prevent tests from overwriting / corrupting
production files.
- File / devices tend to be "one per system" thing, so running tests in
parallel can be problematic.
- Some I/O is byte stream representations of highly constrained and
complex objects.
- Some inputs can contain line noise and/or malicious hand crafted
attack vectors.
- Some I/O are complex devices with sensitive time and context dependent
behaviour.

To cope with these facts, we need to alter our designs to make them
testable. (Turns out this is actually A Good Thing!)

1. Placing a thin Facade over I/O primitives gives us a point where we
can mock.
2. Always use relative path names on an explicit absolute base path
(never use the current working directory). Pass the base path in as a
parameter.
3. Move calls to "open()", "connect()", "socket()" up the call graph,
pass the resulting io handle as a parameter.
4. Decouple the I/O from representation. ie. Serialization is about
data, strings and buffers. Not I/O. Implement and test serialization
separately from the task of placing the string on the wire.
5. Where input can be noisy or malicious, it is all the more important
to be able throw random and malicious test vectors at your code!
6. NEVER use "sleep()" or timers in a unit test test harness. If you
doing that, you're doing unit testing wrong!
7. Design your code so your test harness can explicitly (with malice
aforethought) sequence the order of arrival of events.
8. Use patterns like the Humble Object and/or Reactor to move the I/O to
the highest level in the call graph.
9. Focus on testing that the I/O primitives are invoked correctly, and
that we can handle the response. Rely on the I/O primitives working.

Conclusions

- As our Test Coverage has grown, "smells" and weaknesses in our tests
have emerged and need to be addressed.
- We need to emphasize Design for Test.
- We need to highlight the differences between pure, stateful, service
and I/O functions and adjust our test strategies accordingly.
- Unit Tests are about Defect Localization, not paranoia.

--
John Carter Phone : (64)(3) 358 6639
Tait Electronics Fax : (64)(3) 359 4632
PO Box 1645 Christchurch Email : john.carter@...
New Zealand



Value and Principles of Unit Testing.

 

The following is aimed at my own team, but before I inflict it on them, I
thought I would run it past the wise folks of this forum.

Suggestions, comments, flames all welcome.

What I write below is somewhat Opinionated, slightly controversial, and
hence potentially Combustible Material.

I have no apologies for that.

However, that makes me all the more determined to get it Right!

The audience for the following document is a team of 20-30 experienced
embedded C developers.

The large body of embedded C software they are working on has slowly been
growing unit test coverage and has now reached around 16% by SLOC.

I'm now calling for us to rethink our Unit Testing.

I have been revisiting some of our coverage, and groaning with
embarrassment at some of the stupid things I did earlier on.

I have been reading and rereading books on the subject, especially Gerard
Meszaros' "xUnit Test Patterns, Refactoring Test Code".

I have been cringing as I wade through the list of "test smells"; I
recognize them all.

I think we can do better. A lot better. I have learnt a lot since I first
introduced Unit Testing, and the Industry has learnt a lot.

It is time we took on board that learning.

The Value and Principles of Unit Testing.

The Value of Unit Testing

We have Unit Tests for these reasons...

- They are our "always up to date" executable documentation on how to
run our code, and executable specification of what it should do.
   - They give us by far the most productive compile / run / test / debug
   cycle.
- They provide, by many orders of magnitude, the best and most thorough
test coverage of all test techniques.
- They provide direct feedback on where the bug is (Defect
Localization), unlike other test techniques which merely indicate the
presence of defects.
- They are our safety net to allow change, whether for new features, or
refactoring to reduce technical debt.

To maintain the value of a Unit Test suite ...

- We need to run them on every changeset. ie. They must be small and
fast enough not to impose an intolerable burden on checkins.
- They must be low maintenance (simple, easy to understand)
- and robust (do not break for any reason other than change in what is
directly under test).
- They must be repeatable. ie. Every test run on the same code gives
exactly the same result.
- They must be incapable of adding risk to the customer.

Principles of Unit Testing.

Write the Test First!

- Unit Tests save us a lot of debugging effort... but only if we haven't
already debugged it!
- Writing the Tests encourages Design for Testability.... Which is a
Very Good Thing!

Design for Testability and Minimize Untestable Code.

In the past we wrote code as if tests didn't exist. This urgently needs to
change.

*In future we MUST change our Designs to be Testable!*

Why? Because testable code is loosely coupled code. Testable code is
understandable code. Testable code is re-usable code. Testable code is
simpler code.
Communicate Intent

Tests are executable documentation and "worked examples". Make sure your
tests are *Good* documents!
Test (EACH PART IN ISOLATION) Exactly what you Fly, Integrate and Fly
EXACTLY what you tested.

ie. The difference between what you test and what you put into production
is what it is linked to, not what it is.

For the Code Under Test, exactly the same bytes should be fed by the
preprocessor to the compiler for the test run, as are fed into production.
Keep the Tests Independent

Each test should be able to be run independently of all others.
Test the smallest part you can.

The larger the piece of production code under test, the weaker your test
coverage is and the more fragile your test is.
Minimize Test Overlap, Verify One Condition Per Test.

There should be only one reason for a test to fail (Defect Localization) and
only one test should fail due to a given defect (Effective, Targeted Testing).

In the past we have done "Paranoid" testing... in every test verifying every
condition possible with the set up, Code Under Test and tear down. This
makes our tests fragile and stiff in the face of change, and destroys
defect localization.

In future we should aim to be testing, in each test, a single, specific
aspect of the Code Under Test. Ideally, if that single aspect is broken,
only one test should fail, and that should be the only reason that that
test could fail.
Keep Test Logic Out of Production Code.

It increases risk to your customer, and doesn't test what you fly. There
are better ways of doing this; one common approach is sketched below.
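
For example, here is a minimal sketch in C of one common approach, a
link-time seam (the names store_config and flash_write are invented for
illustration): the production source carries no test hooks at all, and the
test build links a stub implementation of the dependency in place of the
real driver, so exactly the same production bytes are compiled either way.

/* config_store.c -- production code: no #ifdef TEST, no test hooks. */
#include <stddef.h>
#include <stdint.h>

int flash_write(uint32_t addr, const void *buf, size_t len); /* from flash.h */

int store_config(uint32_t addr, const void *cfg, size_t len)
{
    /* The call is the seam: which flash_write() runs is decided at link time. */
    return flash_write(addr, cfg, len);
}

/* flash_stub.c -- linked into the TEST build instead of the real driver. */
static uint32_t last_addr;

int flash_write(uint32_t addr, const void *buf, size_t len)
{
    (void)buf;
    (void)len;
    last_addr = addr;   /* record the call so a test can verify it */
    return 0;
}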
Test the System Under Test, not the Framework!

Far too many of our unit tests are exploring how and whether the framework
works. This results in much additional complexity and clouds the intention
of the test.

These "test the framework tests" need to be actively discarded.

If you mistrust the framework, add tests to the unit tests for the
framework.

No Unit Test, except those explicitly testing the threading infrastructure,
should ever create another thread!

If you have doubts about the correctness of the threading infrastructure...
you are more than welcome to explicitly test it. Just don't do it in any
test that isn't explicitly solely aimed at testing the threading
infrastructure!

In many places we have tests that verify whether we understand how I/O
works in specific contexts. In future we should create, use and ACTIVELY
DISCARD such tests!

We should aim to capture the results of such spikes only as tests of the
permissible range of inputs and possible responses. We can then form tests
that check we invoke such interfaces correctly, and handle all possible
responses correctly.
They may all be functions... but Pure, Stateful, Service and I/O
functions are very different sorts of functions requiring very different
sorts of tests.
PURE functions

A function that modifies nothing and always returns the same answer given
the same parameters is called a pure function.

These have many nice mathematical properties and are the easiest to test,
to analyse, to reuse and to optimize. Attempt to move as much code as
possible into pure functions.

"const" is a keyword that always make me relax and feel less stressed.
"const" is a remarkable powerful statement. Use it where ever possible.

Tests of pure functions are all about the results, never about functions
they may invoke. ie. Never mock a pure subfunction that a pure function may
use for implementation. Use the real one.
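
For example, a minimal sketch (clamp() is an invented example, and plain
assert() stands in for our test framework):

#include <assert.h>

/* A pure function: modifies nothing, same inputs always give the same result. */
static int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

int main(void)
{
    /* The tests are all about results: no mocks, no set up, no tear down. */
    assert(clamp(  5, 0, 10) ==  5);   /* in range: unchanged      */
    assert(clamp( -3, 0, 10) ==  0);   /* below range: pinned low  */
    assert(clamp( 42, 0, 10) == 10);   /* above range: pinned high */
    return 0;
}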
Stateful functions

A stateful function's results depend on some hidden internal state, or it
modifies some hidden internal state as a side effect.

However, unlike a service, if you can set up exactly the same starting
state, the stateful function will have exactly the same behaviour every
time.

Often a stateful function can be refactored into a pure function (see the
sketch after this list) where...

   1. The state is passed in as a const parameter.
   2. The result can be assigned to the state.
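
A sketch of that refactoring (the running-statistics example is invented):

#include <stdint.h>

/* Before: stateful -- the result depends on hidden internal state. */
static uint32_t running_total;
void add_sample_stateful(uint32_t sample) { running_total += sample; }

/* After: pure -- the state is passed in as a const parameter and the new
   state is returned; the caller assigns the result back to the state. */
struct totals { uint32_t sum; uint32_t count; };

struct totals with_sample(const struct totals t, uint32_t sample)
{
    struct totals next = { t.sum + sample, t.count + 1 };
    return next;
}

/* Caller: stats = with_sample(stats, sample); */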

The best use for stateful functions is to encapsulate a bundle of related
state (into a class). These functions (or methods) should guarantee that
required relationships (invariants) between those items are maintained.

Where you have a collection of functions (or methods) encapsulating state
(or a class) the best unit testing strategy is...

1. Construct the object (possibly via a common test fixture).
   2. Propagate the object to the required state via a public method (which
   has been tested in some other test).
3. Invoke the stateful function under test.
4. Verify the result.
5. Discard the object (possibly via a common tear down function).
   6. Keep your tests independent; DO NOT succumb to the temptation to
   reuse this object in a subsequent test. Fragility and complexity lie that
   way.

Preferably DO NOT assert on private, hidden state of the implementation,
otherwise you couple your test to that particular implementation and
representation, rather than the desired behaviour of the class.
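
A sketch of this strategy against an invented fifo "class"; note that the
final assert verifies through the public return value, not by reaching into
the private count field:

#include <assert.h>

/* A hypothetical bundle of encapsulated state. */
struct fifo { unsigned char buf[8]; unsigned count; };

void fifo_init(struct fifo *f) { f->count = 0; }

int fifo_push(struct fifo *f, unsigned char b)  /* 0 on success, -1 if full */
{
    if (f->count == sizeof f->buf) return -1;
    f->buf[f->count++] = b;
    return 0;
}

static void test_push_to_full_fifo_is_rejected(void)
{
    struct fifo f;
    fifo_init(&f);                         /* 1. construct                     */
    for (unsigned i = 0; i < 8; i++)
        assert(fifo_push(&f, 0xAA) == 0);  /* 2. propagate to "full" via a     */
                                           /*    public, already-tested method */
    int result = fifo_push(&f, 0xBB);      /* 3. invoke function under test    */
    assert(result == -1);                  /* 4. verify via the public result, */
                                           /*    not the private f.count       */
}                                          /* 5. f is discarded on return;     */
                                           /* 6. nothing is reused by others   */

int main(void) { test_push_to_full_fifo_is_rejected(); return 0; }
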
Services

A service function is one whose full effect, and precise result, varies
with things like timing and inputs and threads and loads in too complex a
manner to be specified in a simple test.

Testing services is all about testing interface specifications. The
service's dependencies (unless PURE) must be explicitly cut and controlled
by the test harness.

We have had a strong natural inclination to test whether "this(...)" calls
"that(...)" correctly by letting "this(...)" call "that(...)" and seeing if
the right thing happened.

However, this mostly tests whether the compiler can correctly invoke
functions (yup, it can) rather than whether "this(...)" and "that(...)"
agree on the interface.

Code grown and tested in this manner is fragile and unreusable as it "grew
up together". All kinds of implicit, hidden, undocumented coupling and
preconditions may exist.

We need to explicitly test our conformance to interfaces, and rely on the
compiler to be correct. (A sketch in code follows the list below.)

1. Does the client make valid requests to the service?
2. Can the service handle those requests?
3. Can the client handle all possible responses from the service?
4. Can the service make every possible response?
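
A minimal sketch of questions 1 and 3 in C (service_request and client_tune
are invented names): the test harness cuts the service dependency and
substitutes a double that records the request and scripts the response.

#include <assert.h>

/* The agreed interface (normally in a shared header). */
enum reply { REPLY_OK, REPLY_BUSY, REPLY_FAIL };
enum reply service_request(int channel);

/* Client code under test (normally in its own file). */
int client_tune(int channel)               /* 0 on success, -1 on failure */
{
    if (channel < 0 || channel > 15) return -1;
    return (service_request(channel) == REPLY_OK) ? 0 : -1;
}

/* Test double: stands in for the real service. */
static int last_channel;
static enum reply scripted_reply;

enum reply service_request(int channel)
{
    last_channel = channel;   /* record the request for question 1 */
    return scripted_reply;    /* scripted response for question 3  */
}

int main(void)
{
    /* 1. Does the client make valid requests to the service? */
    scripted_reply = REPLY_OK;
    client_tune(3);
    assert(last_channel >= 0 && last_channel <= 15);

    /* 3. Can the client handle all possible responses from the service? */
    for (int r = REPLY_OK; r <= REPLY_FAIL; r++) {
        scripted_reply = (enum reply)r;
        int rc = client_tune(3);
        assert(rc == 0 || rc == -1);   /* defined behaviour for every reply */
    }
    return 0;
}

The matching questions 2 and 4 live with the service's own unit tests,
driven from the same interface specification.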

I/O functions

I/O functions are hard because...

- I/O primitives tend to be impossible to mock. (The unit test framework
uses them at some level.)
- Care needs to be taken to prevent tests from overwriting / corrupting
production files.
   - Files / devices tend to be a "one per system" thing, so running tests
   in parallel can be problematic.
   - Some I/O consists of byte-stream representations of highly constrained
   and complex objects.
   - Some inputs can contain line noise and/or malicious, hand-crafted
   attack vectors.
   - Some I/O involves complex devices with sensitive time- and
   context-dependent behaviour.

To cope with these facts, we need to alter our designs to make them
testable. (Turns out this is actually A Good Thing!)

   1. Placing a thin Facade over I/O primitives gives us a point where we
   can mock (see the sketch after this list).
2. Always use relative path names on an explicit absolute base path
(never use the current working directory). Pass the base path in as a
parameter.
   3. Move calls to "open()", "connect()", "socket()" up the call graph, and
   pass the resulting I/O handle as a parameter.
4. Decouple the I/O from representation. ie. Serialization is about
data, strings and buffers. Not I/O. Implement and test serialization
separately from the task of placing the string on the wire.
   5. Where input can be noisy or malicious, it is all the more important
   to be able to throw random and malicious test vectors at your code!
   6. NEVER use "sleep()" or timers in a unit test harness. If you are
   doing that, you're doing unit testing wrong!
7. Design your code so your test harness can explicitly (with malice
aforethought) sequence the order of arrival of events.
8. Use patterns like the Humble Object and/or Reactor to move the I/O to
the highest level in the call graph.
9. Focus on testing that the I/O primitives are invoked correctly, and
that we can handle the response. Rely on the I/O primitives working.
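
To make point 1 concrete, here is a minimal sketch (byte_sink, send_frame
and the capturing double are all invented): production wires the facade to
the real primitive, while the test wires it to a fake that captures the
bytes.

#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A thin facade over the output primitive. Production wires .write to a
   wrapper around the real driver; the test wires it to a fake. */
struct byte_sink {
    long (*write)(void *ctx, const void *buf, size_t len);
    void *ctx;
};

/* Code under test: serialization is tested separately; this function's
   only job is to place the prepared bytes on the "wire" via the facade. */
long send_frame(const struct byte_sink *sink,
                const unsigned char *frame, size_t len)
{
    return sink->write(sink->ctx, frame, len);
}

/* Test double: captures what would have gone out the port. */
struct capture { unsigned char buf[64]; size_t len; };

static long capture_write(void *ctx, const void *buf, size_t len)
{
    struct capture *c = ctx;
    memcpy(c->buf + c->len, buf, len);
    c->len += len;
    return (long)len;
}

int main(void)
{
    struct capture cap = { {0}, 0 };
    struct byte_sink sink = { capture_write, &cap };
    const unsigned char frame[] = { 0x7E, 0x01, 0x7E };

    assert(send_frame(&sink, frame, sizeof frame) == (long)sizeof frame);
    assert(cap.len == sizeof frame);
    assert(memcmp(cap.buf, frame, sizeof frame) == 0);
    return 0;
}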

Conclusions

- As our Test Coverage has grown, "smells" and weaknesses in our tests
have emerged and need to be addressed.
- We need to emphasize Design for Test.
- We need to highlight the differences between pure, stateful, service
and I/O functions and adjust our test strategies accordingly.
- Unit Tests are about Defect Localization, not paranoia.




--
John Carter Phone : (64)(3) 358 6639
Tait Electronics Fax : (64)(3) 359 4632
PO Box 1645 Christchurch Email : john.carter@...
New Zealand



ANN: NUnitLite 0.9 Release

 

Hi All,

I'm announcing the release of NUnitLite 0.9 today.

In case you were not aware, NUnitLite has surprised me by attracting a
lot of users who previously used full-on NUnit. I originally intended it
for use on platforms with limited resources, but it turns out that the
simplicity of a test framework is appealing in other contexts as well.

Here are some of the major changes in the 0.9 release:

Framework

* A .NET 4.5 build is included. When using the 4.5 package,
C# 5.0 async methods may be used as tests, as the target of
a Throws constraint and as an ActualValueDelegate returning
the value to be tested.

* Experimental builds for Silverlight 3.0, 4.0 and 5.0 are included.

* TestContext.Random may be used to provide random values of various
types for use in your tests.

* The experimental Asynchronous attribute has been removed.

Runner

* The runner now supports the -include and -exclude options, which
are used to specify categories of tests to be included in a run.

* Test execution time is now reported at a higher resolution on
systems that support it.

Bug Fixes

* 501784 Theory tests do not work correctly when using null parameters
* 671432 Upgrade NAnt to Latest Release
* 1028188 Add Support for Silverlight
* 1029785 Test loaded from remote folder failed to run with exception
System.IO.Directory Not Found
* 1057981 C#5 async tests are not supported
* 1060631 Add .NET 4.5 build
* 1064014 Simple async tests should not return Task<T>
* 1071164 Support async methods in usage scenarios of Throws
constraints
* 1071714 TestContext is broken when a previous version of the runner
is used alongside a new version of the framework
* 1071861 Error in Path Constraints
* 1072379 Report test execution time at a higher resolution
* 1073750 Remove Asynchronous Attribute
* 1074568 Assert/Assume should support an async method for the
ActualValueDelegate
* 1082330 Better Exception if SetCulture attribute is applied multiple
times
* 1111834 Expose Random Object as part of the test context
* 1172979 Add Category Support to nunitlite Runner
* 1174741 sl-4.0 csproj file is corrupt

You can download NUnitLite 0.9 from the web site or via NuGet. The web
site is at www.nunitlite.org.

Charlie


Re: [TDD] Re: If you could get your colleagues to read just one book on TDD, which would it be?

 

I came up with an easy way to generate Mikado diagrams programmatically using Ruby-Graphviz. One of the co-authors of the book (Ola) liked it. Here's an example. The Mikado has two direct prerequisites ("1" and "3"), and prerequisite 1 has a further prerequisite ("2"). Outputs to a "mikado.png" file.


require 'graphviz/dsl'

digraph :G do
  # Nodes spring into existence on first mention; [...] sets their attributes.
  mikado[:label => 'The Mikado', :shape => 'doublecircle']

  # << draws an edge from a goal to its prerequisite.
  mikado << prereq_1[:label => 'Prerequisite 1']
  mikado << prereq_3[:label => 'Prerequisite 3']

  prereq_1 << prereq_2[:label => 'Prerequisite 2']

  output :png => 'mikado.png'
end



Al





________________________________
From: John Carter <john.carter@...>
To: testdrivendevelopment@...
Sent: Sunday, May 5, 2013 6:33 PM
Subject: Re: [TDD] Re: If you could get your colleagues to read just one book on TDD, which would it be?


On Thu, May 2, 2013 at 1:03 PM, George Dinwiddie <lists@...>wrote:


You should pair that with The Mikado Method (mikadomethod.org)
Now THAT is a Very Good Find! An Excellent Book!

Thanks, just what I was hoping for!


Re: [TDD] [Summary] If you could get your colleagues to read just one book on TDD, which would it be?

 


TDD by Example has the most votes... partly, I suspect, because it is one
of the oldest.

I for one think it's a very good book on its own merit. There are other
good books on TDD, and GOOS is certainly one of the best, but I think it's
better to read and ponder this one first.

Matteo


[Summary] If you could get your colleagues to read just one book on TDD, which would it be?

 

Test Driven Development by Example (Kent Beck) + 4
GOOS + 2
Refactoring + 2
Working Effectively With Legacy Code + 2
Clean Code + 1
The Mikado Method (mikadomethod.org)
RSpec
Agile Software Development, Principles, Patterns, and Practices
Effective Unit Testing
Refactoring to patterns by Josh Kerievsky.
Osherov, The Art of Unit testing
jbrains, Responsible Design for Android - on going work, somewhat builds on
GOOS
The Agile Samurai
The Art of Agile Development.
Test Driven Development for Embedded C

Test Driven Development for Embedded C is probably the one to go for if
your problem is "How Can we Do this Stuff in C"?



TDD by Example has the most votes... partly, I suspect, because it is one
of the oldest.



I suspect GOOS and Responsible Design for Android have some updated ideas
in them.



I really really really like The Mikado Method.



It isn't about Unit Testing... but I suspect (no, I know) it works very
well with Working Effectively With Legacy Code



I have been reading xUnit Test Patterns: Refactoring Test Code



It is a bit Encyclopedic... but it has some excellent lists of principles,
test code smells and design for testability patterns.

On Tue, Apr 30, 2013 at 3:12 PM, John Carter <john.carter@...>wrote:

Conversely, which books would you expect a TDD master to have read?

--
John Carter Phone : (64)(3) 358 6639
Tait Electronics Fax : (64)(3) 359 4632
PO Box 1645 Christchurch Email : john.carter@...
New Zealand


--
John Carter Phone : (64)(3) 358 6639
Tait Electronics Fax : (64)(3) 359 4632
PO Box 1645 Christchurch Email : john.carter@...
New Zealand


Re: [TDD] Re: If you could get your colleagues to read just one book on TDD, which would it be?

 

On Thu, May 2, 2013 at 1:03 PM, George Dinwiddie <lists@...>wrote:


You should pair that with The Mikado Method (mikadomethod.org)
Now THAT is a Very Good Find! An Excellent Book!

Thanks, just what I was hoping for!



Re: [TDD] What I hate about *Unit frameworks.

 

It could, but it doesn't necessarily. The only time I have ever seen it
become a problem is when someone was doing something they shouldn't
irrespective of the fact that it was in a test. Also, it's less likely to
come up if you are actually test-driving and not trying to hack a test into
something you didn't build in a testable way.

Maybe I'm a bit oversensitive on this issue. It's just that I hear people
talk about monkey patching like it is an inherently bad idea and I want to
say, "Why is it that you adopted a dynamic language, again?"

On May 3, 2013 1:14 PM, "George Dinwiddie" <lists@...> wrote:



Adam,

On 5/3/13 1:50 PM, Adam Sroka wrote:
Perl was my first professional language. I am not afraid of monkey
patching.

A hammer is a useful tool. Please refrain from hitting yourself with it.
Thanks for the admonition. I was just trying to explain how
monkey-patching causes more interference between tests than mocks do.

- George



On Thu, May 2, 2013 at 5:03 PM, George Dinwiddie <
lists@...>wrote:



Adam,


On 5/2/13 7:15 PM, Adam Sroka wrote:
Hi George:

That makes sense, but you can do the same thing to yourself with mocks.
That's why you have to make sure you write microtests for both sides of
the
relationship and cover the same conditions (I think J.B. calls them
"contract tests.")
No, with monkey patching you're often messing up *library code* to the
detriment of other tests.



I only think monkey patching is bad when you violate the implied
interface,
or when you go way down in the inheritance hierarchy and muck with
things
that could have wide ranging effects (Both of which are smells in
dynamic
languages anyway.) But, if you were actually doing TDD something would
go
red when you did either of those things, right?
Maybe. Or maybe your monkey patching makes other tests work, but the app
doesn't when it's in production and the library hasn't been
monkey-patched.

- George




On Thu, May 2, 2013 at 7:46 AM, George Dinwiddie <
lists@...>wrote:



Adam,


On 5/1/13 11:25 PM, Adam Sroka wrote:
Maybe I'm just dense, but: what is it about this that is particular
to
TDD?
Seems to me that monkey patching without tests is *fuck all* more
dangerous
than writing a test, making it pass in the simplest way possible, and
then
improving the design. What am I missing???
Monkey patching is a common method to create testing seams, even by
people who would not use monkey patching in the deliverable system
code.
It's a quick-and-dirty way of mocking using the real objects.

- George


On May 1, 2013 8:12 PM, "John Roth" <JohnRoth1@...> wrote:



On 5/1/13 7:12 AM, David Stanek wrote:

On Wed, May 1, 2013 at 6:27 AM, Angel Java Lopez
<ajlopez2000@...> wrote:


John, usually I don't find the case "this test corrupts that
test",
and I
wrote thousands of tests.

Any example/case?
I've seen this in Python tests where developers monkey-patch things
and
forget to set them back or otherwise muck with global state. This
has
been
the result of design issues.
Snort. This is a continuing issue for the Python developers as well.

John Roth
--
----------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------



Re: [TDD] What I hate about *Unit frameworks.

 

Adam,

On 5/3/13 1:50 PM, Adam Sroka wrote:
Perl was my first professional language. I am not afraid of monkey
patching.

A hammer is a useful tool. Please refrain from hitting yourself with it.
Thanks for the admonition. I was just trying to explain how monkey-patching causes more interference between tests than mocks do.

- George



On Thu, May 2, 2013 at 5:03 PM, George Dinwiddie <lists@...>wrote:



Adam,


On 5/2/13 7:15 PM, Adam Sroka wrote:
Hi George:

That makes sense, but you can do the same thing to yourself with mocks.
That's why you have to make sure you write microtests for both sides of
the
relationship and cover the same conditions (I think J.B. calls them
"contract tests.")
No, with monkey patching you're often messing up *library code* to the
detriment of other tests.



I only think monkey patching is bad when you violate the implied
interface,
or when you go way down in the inheritance hierarchy and muck with things
that could have wide ranging effects (Both of which are smells in dynamic
languages anyway.) But, if you were actually doing TDD something would go
red when you did either of those things, right?
Maybe. Or maybe your monkey patching makes other tests work, but the app
doesn't when it's in production and the library hasn't been monkey-patched.

- George




On Thu, May 2, 2013 at 7:46 AM, George Dinwiddie <
lists@...>wrote:



Adam,


On 5/1/13 11:25 PM, Adam Sroka wrote:
Maybe I'm just dense, but: what is it about this that is particular to
TDD?
Seems to me that monkey patching without tests is *fuck all* more
dangerous
than writing a test, making it pass in the simplest way possible, and
then
improving the design. What am I missing???
Monkey patching is a common method to create testing seams, even by
people who would not use monkey patching in the deliverable system code.
It's a quick-and-dirty way of mocking using the real objects.

- George


On May 1, 2013 8:12 PM, "John Roth" <JohnRoth1@...> wrote:



On 5/1/13 7:12 AM, David Stanek wrote:

On Wed, May 1, 2013 at 6:27 AM, Angel Java Lopez
<ajlopez2000@...> wrote:


John, usually I don't find the case "this test corrupts that test",
and I
wrote thousands of tests.

Any example/case?
I've seen this in Python tests where developers monkey-patch things
and
forget to set them back or otherwise muck with global state. This has
been
the result of design issues.
Snort. This is a continuing issue for the Python developers as well.

John Roth
--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


Re: [TDD] What I hate about *Unit frameworks.

 

Perl was my first professional language. I am not afraid of monkey
patching.

A hammer is a useful tool. Please refrain from hitting yourself with it.


On Thu, May 2, 2013 at 5:03 PM, George Dinwiddie <lists@...>wrote:



Adam,


On 5/2/13 7:15 PM, Adam Sroka wrote:
Hi George:

That makes sense, but you can do the same thing to yourself with mocks.
That's why you have to make sure you write microtests for both sides of
the
relationship and cover the same conditions (I think J.B. calls them
"contract tests.")
No, with monkey patching you're often messing up *library code* to the
detriment of other tests.



I only think monkey patching is bad when you violate the implied
interface,
or when you go way down in the inheritance hierarchy and muck with things
that could have wide ranging effects (Both of which are smells in dynamic
languages anyway.) But, if you were actually doing TDD something would go
red when you did either of those things, right?
Maybe. Or maybe your monkey patching makes other tests work, but the app
doesn't when it's in production and the library hasn't been monkey-patched.

- George




On Thu, May 2, 2013 at 7:46 AM, George Dinwiddie <
lists@...>wrote:



Adam,


On 5/1/13 11:25 PM, Adam Sroka wrote:
Maybe I'm just dense, but: what is it about this that is particular to
TDD?
Seems to me that monkey patching without tests is *fuck all* more
dangerous
than writing a test, making it pass in the simplest way possible, and
then
improving the design. What am I missing???
Monkey patching is a common method to create testing seams, even by
people who would not use monkey patching in the deliverable system code.
It's a quick-and-dirty way of mocking using the real objects.

- George


On May 1, 2013 8:12 PM, "John Roth" <JohnRoth1@...> wrote:



On 5/1/13 7:12 AM, David Stanek wrote:

On Wed, May 1, 2013 at 6:27 AM, Angel Java Lopez
<ajlopez2000@...> wrote:


John, usually I don't find the case "this test corrupts that test",
and I
wrote thousands of tests.

Any example/case?
I've seen this in Python tests where developers monkey-patch things
and
forget to set them back or otherwise muck with global state. This has
been
the result of design issues.
Snort. This is a continuing issue for the Python developers as well.

John Roth
--
----------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------





Re: [TDD] What I hate about *Unit frameworks.

 

Adam,

On 5/2/13 7:15 PM, Adam Sroka wrote:
Hi George:

That makes sense, but you can do the same thing to yourself with mocks.
That's why you have to make sure you write microtests for both sides of the
relationship and cover the same conditions (I think J.B. calls them
"contract tests.")
No, with monkey patching you're often messing up *library code* to the detriment of other tests.


I only think monkey patching is bad when you violate the implied interface,
or when you go way down in the inheritance hierarchy and muck with things
that could have wide ranging effects (Both of which are smells in dynamic
languages anyway.) But, if you were actually doing TDD something would go
red when you did either of those things, right?
Maybe. Or maybe your monkey patching makes other tests work, but the app doesn't when it's in production and the library hasn't been monkey-patched.

- George



On Thu, May 2, 2013 at 7:46 AM, George Dinwiddie <lists@...>wrote:



Adam,


On 5/1/13 11:25 PM, Adam Sroka wrote:
Maybe I'm just dense, but: what is it about this that is particular to
TDD?
Seems to me that monkey patching without tests is *fuck all* more
dangerous
than writing a test, making it pass in the simplest way possible, and
then
improving the design. What am I missing???
Monkey patching is a common method to create testing seams, even by
people who would not use monkey patching in the deliverable system code.
It's a quick-and-dirty way of mocking using the real objects.

- George


On May 1, 2013 8:12 PM, "John Roth" <JohnRoth1@...> wrote:



On 5/1/13 7:12 AM, David Stanek wrote:

On Wed, May 1, 2013 at 6:27 AM, Angel Java Lopez
<ajlopez2000@...> wrote:


John, usually I don't find the case "this test corrupts that test",
and I
wrote thousands of tests.

Any example/case?
I've seen this in Python tests where developers monkey-patch things and
forget to set them back or otherwise muck with global state. This has
been
the result of design issues.
Snort. This is a continuing issue for the Python developers as well.

John Roth
--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


Re: [TDD] What I hate about *Unit frameworks.

 

Hi George:

That makes sense, but you can do the same thing to yourself with mocks.
That's why you have to make sure you write microtests for both sides of the
relationship and cover the same conditions (I think J.B. calls them
"contract tests.")

I only think monkey patching is bad when you violate the implied interface,
or when you go way down in the inheritance hierarchy and muck with things
that could have wide ranging effects (Both of which are smells in dynamic
languages anyway.) But, if you were actually doing TDD something would go
red when you did either of those things, right?


On Thu, May 2, 2013 at 7:46 AM, George Dinwiddie <lists@...>wrote:



Adam,


On 5/1/13 11:25 PM, Adam Sroka wrote:
Maybe I'm just dense, but: what is it about this that is particular to
TDD?
Seems to me that monkey patching without tests is *fuck all* more
dangerous
than writing a test, making it pass in the simplest way possible, and
then
improving the design. What am I missing???
Monkey patching is a common method to create testing seams, even by
people who would not use monkey patching in the deliverable system code.
It's a quick-and-dirty way of mocking using the real objects.

- George


On May 1, 2013 8:12 PM, "John Roth" <JohnRoth1@...> wrote:



On 5/1/13 7:12 AM, David Stanek wrote:

On Wed, May 1, 2013 at 6:27 AM, Angel Java Lopez
<ajlopez2000@...> wrote:


John, usually I don't find the case "this test corrupts that test",
and I
wrote thousands of tests.

Any example/case?
I've seen this in Python tests where developers monkey-patch things and
forget to set them back or otherwise muck with global state. This has
been
the result of design issues.
Snort. This is a continuing issue for the Python developers as well.

John Roth










--
----------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------





Re: [TDD] What I hate about *Unit frameworks.

 

Adam,

On 5/1/13 11:25 PM, Adam Sroka wrote:
Maybe I'm just dense, but: what is it about this that is particular to TDD?
Seems to me that monkey patching without tests is *fuck all* more dangerous
than writing a test, making it pass in the simplest way possible, and then
improving the design. What am I missing???
Monkey patching is a common method to create testing seams, even by people who would not use monkey patching in the deliverable system code. It's a quick-and-dirty way of mocking using the real objects.

- George

On May 1, 2013 8:12 PM, "John Roth" <JohnRoth1@...> wrote:



On 5/1/13 7:12 AM, David Stanek wrote:

On Wed, May 1, 2013 at 6:27 AM, Angel Java Lopez
<ajlopez2000@...> wrote:


John, usually I don't find the case "this test corrupts that test",
and I
wrote thousands of tests.

Any example/case?
I've seen this in Python tests where developers monkey-patch things and
forget to set them back or otherwise muck with global state. This has
been
the result of design issues.
Snort. This is a continuing issue for the Python developers as well.

John Roth










--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


Re: [TDD] What I hate about *Unit frameworks.

Adrian Howard
 

On 1 May 2013 20:59, John Carter <john.carter@...> wrote:


I guess this could be a "per language" thing...

In perl, unless you run with -w (which you should), you never even get told
about using uninitialized variables, and _every_ variable is initialized to
null.

In C/C++ the uninitialized stuff sometimes "accidentally" works if there
are left-over correct values from the previous test lying in memory / on
the stack etc.
Well - the same thing kind-of applies in perl. Default initialisation and
left over correct values can lead to the wrong behaviour in Perl too.

I'm sure there are per-language issues - but those weren't the class of
bugs that were being surfaced.

The problems that were showing up were related to global state / singletons
that were being left in a "bad" state, or code that was expecting the
"default" state - but was getting a valid non-default state after another
test had run.

For example - I remember there was a serious problem with one test suite
with the logging code that switching to xUnit surfaced. The tests worked
fine in a separate process - but in the shared environment it failed.

The reason was that the logging code failed to use the in-app pool of
database connections properly and always spun out a new connection. This
worked fine when it was isolated in a separate process - since nothing else
had touched the pool. In the shared-process model it failed.

This bug exhibited itself in the live system by the silent loss of some
error/info logs under situations of high load. Ouch!

I've not shifted from per-process tests to shared-process tests in C/C++ -
so I can't be sure. But after my experiences with those Perl test suites
I'd be surprised if you didn't discover new bugs that were being hidden in
addition to having problems with tests succeeding when they should fail.

Maybe the ratios would be different - I don't know.

Cheers,

Adrian

PS Considering the group I should mention that the test suites I'm
discussing were produced test-last not test-first. Whether
that affects things in relation to this discussion I'm unsure ;-)
--
adrianh@... twitter.com/adrianh
t. +44 (0)7752 419080 skype adrianjohnhoward pinboard.in/u:adrianh