
[TDD] What I hate about *Unit frameworks.


 

Hi everyone!

John, I usually don't run into the case where "this test corrupts that
test", and I've written thousands of tests.

Any example/case?

Angel "Java" Lopez
@ajlopez
gh:ajlopez



On Tue, Apr 30, 2013 at 8:30 PM, John Carter <john.carter@...> wrote:



Most of them invest way too much effort to make up for the deficiencies of
the Windows Operating system.

The Component in a Computer tasked with managing concurrency, task
separation, and ensuring full clean setup and tear down is....

The Operating System.

So if you want to be dead, 100% sure that this test doesn't corrupt that
test... you use a process.

Nope, not try / catch, not exception handling. That's a flaky
almost-solution.

One test === One Process.

Each process is an address space that appears.... and vanishes.

But, but, but Processes Are HeavyWeight - you can't create and run
thousands of processes!

Oh yes you can.

Processes are lightweight under Unix and have been for decades.

In fact much of the hype about threads is a result of Microsoft marketing
spin to cope with the fact that Windows 3.1 processes were incredibly
heavyweight kludges.

In fact there are relatively few places where one should use threads when
one can use processes.

Want to run a thousand tests? Well, how many cores do you have? Let's keep
every core 100% busy. One test, one core.

But who is going to spin up all those processes and mind the results?

GNU parallel is a pretty nifty choice.

Or maybe it is something the xUnits should be doing.
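
For illustration, a minimal sketch of one-test-one-process in Python on a
POSIX system. TESTS is a hypothetical list of zero-argument callables; the
fork/wait plumbing is the point, not the names:

    import os

    def run_in_child(test):
        pid = os.fork()
        if pid == 0:                    # child: its own address space, COW-cheap
            try:
                test()
                os._exit(0)             # pass: the whole address space vanishes
            except BaseException:
                os._exit(1)             # fail: nothing leaks into the next test
        _, status = os.waitpid(pid, 0)  # parent: reap the child, read the verdict
        return os.WEXITSTATUS(status) == 0

    results = {t.__name__: run_in_child(t) for t in TESTS}

The same fan-out across cores is what GNU parallel gives you for free, one
job per input line.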

--
John Carter Phone : (64)(3) 358 6639
Tait Electronics Fax : (64)(3) 359 4632
PO Box 1645 Christchurch Email : john.carter@...
New Zealand



 

On Wed, May 1, 2013 at 6:27 AM, Angel Java Lopez <ajlopez2000@...> wrote:


John, I usually don't run into the case where "this test corrupts that
test", and I've written thousands of tests.

Any example/case?

I've seen this in Python tests where developers monkey-patch things and
forget to set them back or otherwise muck with global state. This has been
the result of design issues.
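
A hypothetical Python illustration of that failure mode (the patched
target, time.sleep, is real; the tests are invented):

    import time
    import unittest

    class TestFastPath(unittest.TestCase):
        def test_skips_backoff(self):
            time.sleep = lambda seconds: None   # monkey-patch... never restored
            self.assertIsNone(time.sleep(60))   # stands in for code that sleeps

    class TestRetryDelay(unittest.TestCase):
        def test_sleep_really_sleeps(self):
            start = time.monotonic()
            time.sleep(0.1)                     # a no-op if TestFastPath ran first
            self.assertGreater(time.monotonic() - start, 0.05)

Run TestRetryDelay alone and it passes; run the suite in one process and it
fails, purely because of test order.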

--
David


 

I think it's called "monkey patch" for a reason ...

On May 1, 2013, at 9:12 AM, David Stanek <dstanek@...> wrote:

I've seen this in Python tests where developers monkey-patch things and
forget to set them back or otherwise muck with global state. This has been
the result of design issues.

Ron Jeffries
www.XProgramming.com
There's no word for accountability in Finnish.
Accountability is something that is left when responsibility has been subtracted.
--Pasi Sahlberg


Adrian Howard
 

On 1 May 2013 11:27, Angel Java Lopez <ajlopez2000@...> wrote:
John, I usually don't run into the case where "this test corrupts that
test", and I've written thousands of tests.

Any example/case?
I've seen more of the opposite problem: previously isolated tests
show up a bug when run together, because there was some state that
should have been reset and wasn't.

A few years back I spent a chunk of time with several organisations
moving Perl test suites from a one-test-per-process style to a
shared xUnit-style suite (mostly coz I wrote the most popular Perl
xUnit framework of the period ;-)

The fear of test cases interfering with each other was something that
was raised before we did the moves - but it turned out to be largely
baseless. I don't have the numbers to hand, but I do remember that we
found many more bugs that had been hidden by separate processes than
problems due to tests interfering with each other.

Cheers,

Adrian
--
adrianh@... twitter.com/adrianh
t. +44 (0)7752 419080 skype adrianjohnhoward pinboard.in/u:adrianh


 

On Wed, May 1, 2013 at 10:27 PM, Angel Java Lopez <ajlopez2000@...> wrote:


John, I usually don't run into the case where "this test corrupts that
test", and I've written thousands of tests.
Actually... it happened on every one.

As soon as you touch the heap, you have changed the system state.

If it's a GC'd language this isn't quite so problematic, but you still get
other resource leaks.

Have you freed them all?

Well, you can run your tests under something like valgrind and it will tell
you.

Now which test leaked?



 

On Thu, May 2, 2013 at 1:58 AM, Adrian Howard <adrianh@...> wrote:

I've seen more of the opposite problem: previously isolated tests
show up a bug when run together, because there was some state that
should have been reset and wasn't.
I guess this could be a "per language" thing...

In Perl, unless you run with -w (which you should), you never even get told
about using uninitialized variables, and _every_ variable starts out as
undef.

In C/C++ the uninitialized stuff sometimes "accidentally" works if there
are left-over correct values from the previous test lying in memory / on
the stack, etc.
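
The shared-process analogue in a GC'd language, as a hedged sketch (the
module-level dict and the test names are invented): a test "accidentally
works" only because a previous test left valid state behind.

    # Hypothetical module-level state standing in for "left-over correct values".
    CONFIG = {}

    def test_load_config():
        CONFIG["db_url"] = "sqlite://"   # leaves valid state behind

    def test_connect():
        # Passes after test_load_config has run; raises KeyError in isolation.
        assert CONFIG["db_url"].startswith("sqlite")

Process-per-test would make test_connect fail every time, which is arguably
the honest answer.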



Keith Ray
 

Well, you can run your tests under something like valgrind and it will tell
you.

Now which test leaked?
Pretty easy to find out. Valgrind gives us the call stack of the allocation that leaked, and most code is only executed by a few tests. (Google "microtest" and "Mike Wrote Tests" to see why.)

Some people make the test runner keep track of memory usage, and have it complain about specific tests.

Use code-review and/or pair programming, and sensible coding standards, to make sure other kinds of resources are not leaked.
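
A sketch of that kind of tracking in Python, using the standard tracemalloc
and gc modules (the runner shape and the threshold are assumptions):

    import gc
    import tracemalloc

    def run_tracking_leaks(test, threshold_bytes=1024):
        # Flag tests whose net allocation stays high after they finish.
        tracemalloc.start()
        before, _ = tracemalloc.get_traced_memory()
        test()
        gc.collect()                     # drop garbage so only real retention counts
        after, _ = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        growth = after - before
        if growth > threshold_bytes:
            print(f"{test.__name__} retained ~{growth} bytes")

tracemalloc.take_snapshot() can then point at the allocating call stack,
much as valgrind does.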

C. Keith Ray




Keith Ray
 

The fear of test cases interfering with each other was something that
was raised before we did the moves - but it turned out to be largely
baseless. I don't have the numbers to hand, but I do remember that we
found many more bugs that had been hidden by separate processes than
problems due to tests interfering with each other.

Cheers,

Adrian
One of the "Aha!" Epiphanies of a programmer I was working with, was the realization that a unit test only needs to test a "unit". He had been setting up real-life data, when a little "fake" data would verify the desired behavior. It also makes the code more robust, because side-effects that might not be seen in a system test, can be found by unit tests exercising all the edge-cases for each function or behavior being tested.




John Roth
 

On 5/1/13 7:12 AM, David Stanek wrote:

On Wed, May 1, 2013 at 6:27 AM, Angel Java Lopez <ajlopez2000@...> wrote:


John, I usually don't run into the case where "this test corrupts that
test", and I've written thousands of tests.

Any example/case?
I've seen this in Python tests where developers monkey-patch things and
forget to set them back or otherwise muck with global state. This has been
the result of design issues.
Snort. This is a continuing issue for the Python developers as well.

John Roth


 

Maybe I'm just dense, but: what is it about this that is particular to TDD?
Seems to me that monkey patching without tests is *fuck all* more dangerous
than writing a test, making it pass in the simplest way possible, and then
improving the design. What am I missing???


Adrian Howard
 

On 1 May 2013 20:59, John Carter <john.carter@...> wrote:


I guess this could be a "per language" thing...

In Perl, unless you run with -w (which you should), you never even get told
about using uninitialized variables, and _every_ variable starts out as
undef.

In C/C++ the uninitialized stuff sometimes "accidentally" works if there
are left-over correct values from the previous test lying in memory / on
the stack, etc.
Well - the same thing kind of applies in Perl. Default initialisation and
left-over correct values can lead to the wrong behaviour in Perl too.

I'm sure there are per-language issues - but those weren't the class of
bugs that were being surfaced.

The problems that were showing up were related to global state / singletons
that were being left in a "bad" state, or code that was expecting the
"default" state - but was getting a valid non-default state after another
test had run.

For example - I remember there was a serious problem in one test suite's
logging code that switching to xUnit surfaced. The tests worked
fine in a separate process - but in the shared environment they failed.

The reason was that the logging code failed to use the in-app pool of
database connections properly and always spun out a new connection. This
worked fine when it was isolated in a separate process - since nothing else
had touched the pool. In the shared-process model it failed.

This bug exhibited itself in the live system by the silent loss of some
error/info logs under situations of high load. Ouch!

I've not shifted from per-process tests to shared-process tests in C/C++ -
so I can't be sure. But after my experiences with those Perl test suites
I'd be surprised if you didn't discover new bugs that were being hidden in
addition to having problems with tests succeeding when they should fail.

Maybe the ratios would be different - I don't know.

Cheers,

Adrian

PS Considering the group, I should mention that the test suites I'm
discussing were produced test-last, not test-first. Whether
that affects things in relation to this discussion I'm unsure ;-)
--
adrianh@... twitter.com/adrianh
t. +44 (0)7752 419080 skype adrianjohnhoward pinboard.in/u:adrianh


 

Adam,

On 5/1/13 11:25 PM, Adam Sroka wrote:
Maybe I'm just dense, but: what is it about this that is particular to TDD?
Seems to me that monkey patching without tests is *fuck all* more dangerous
than writing a test, making it pass in the simplest way possible, and then
improving the design. What am I missing???
Monkey patching is a common method to create testing seams, even by people who would not use monkey patching in the deliverable system code. It's a quick-and-dirty way of mocking using the real objects.
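
In Python, for instance, the patch can be made self-restoring so it cannot
leak into other tests. A sketch using the standard unittest.mock
(place_order is a hypothetical system under test):

    from unittest import mock
    from shop import place_order   # hypothetical module under test

    def test_order_sends_receipt():
        # patch() swaps smtplib.SMTP out and restores it when the block
        # exits - even if the test fails - so no state leaks forward.
        with mock.patch("smtplib.SMTP") as smtp:
            place_order("widget")
            smtp.return_value.sendmail.assert_called_once()

Hand-rolled attribute assignment gives you the same seam, but the restore
is then on you.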

- George


--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


 

Hi George:

That makes sense, but you can do the same thing to yourself with mocks.
That's why you have to make sure you write microtests for both sides of the
relationship and cover the same conditions (I think J.B. calls these
"contract tests").

I only think monkey patching is bad when you violate the implied interface,
or when you go way down in the inheritance hierarchy and muck with things
that could have wide ranging effects (Both of which are smells in dynamic
languages anyway.) But, if you were actually doing TDD something would go
red when you did either of those things, right?
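
A sketch of that both-sides idea (the class names are invented): the same
contract test case runs against the real object and the test double, so the
double can't silently drift from the real behavior.

    import unittest

    class GatewayContract:
        # Shared assertions; each concrete TestCase supplies make_gateway().
        def test_charge_returns_receipt_id(self):
            receipt = self.make_gateway().charge(amount_cents=100)
            self.assertIsInstance(receipt, str)

    class RealGatewayTest(GatewayContract, unittest.TestCase):
        def make_gateway(self):
            return RealGateway()    # hypothetical production class

    class FakeGatewayTest(GatewayContract, unittest.TestCase):
        def make_gateway(self):
            return FakeGateway()    # hypothetical double used by other tests

If the fake drifts, FakeGatewayTest goes red instead of lying to you.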




 

Adam,

On 5/2/13 7:15 PM, Adam Sroka wrote:
Hi George:

That makes sense, but you can do the same thing to yourself with mocks.
That's why you have to make sure you write microtests for both sides of the
relationship and cover the same conditions (I think J.B. calls these
"contract tests").
No, with monkey patching you're often messing up *library code* to the detriment of other tests.


I only think monkey patching is bad when you violate the implied interface,
or when you go way down in the inheritance hierarchy and muck with things
that could have wide ranging effects (Both of which are smells in dynamic
languages anyway.) But, if you were actually doing TDD something would go
red when you did either of those things, right?
Maybe. Or maybe your monkey patching makes other tests work, but the app doesn't when it's in production and the library hasn't been monkey-patched.

- George



On Thu, May 2, 2013 at 7:46 AM, George Dinwiddie <lists@...>wrote:

**


Adam,


On 5/1/13 11:25 PM, Adam Sroka wrote:
Maybe I'm just dense, but: what is it about this that is particular to
TDD?
Seems to me that monkey patching without tests is *fuck all* more
dangerous
than writing a test, making it pass in the simplest way possible, and
then
improving the design. What am I missing???
Monkey patching is a common method to create testing seams, even by
people who would not use monkey patching in the deliverable system code.
It's a quick-and-dirty way of mocking using the real objects.

- George


On May 1, 2013 8:12 PM, "John Roth" <JohnRoth1@...> wrote:

**


On 5/1/13 7:12 AM, David Stanek wrote:

On Wed, May 1, 2013 at 6:27 AM, Angel Java Lopez
<ajlopez2000@... <mailto:ajlopez2000%40gmail.com>>wrote:


John, usually I don't find the case "this test corrupts that test",
and I
wrote thousands of tests.

Any example/case?
I've seen this in Python tests where developers monkey-patch things and
forget to set them back or otherwise muck with global state. This has
been
the result of design issues.
Snort. This is a continuing issue for the Python developers as well.

John Roth
--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


 

Perl was my first professional language. I am not afraid of monkey
patching.

A hammer is a useful tool. Please refrain from hitting yourself with it.




 

Adam,

On 5/3/13 1:50 PM, Adam Sroka wrote:
Perl was my first professional language. I am not afraid of monkey
patching.

A hammer is a useful tool. Please refrain from hitting yourself with it.
Thanks for the admonition. I was just trying to explain how monkey-patching causes more interference between tests than mocks do.

- George



--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


 

It could, but it doesn't necessarily. The only time I have ever seen it
become a problem is when someone was doing something they shouldn't have,
irrespective of the fact that it was in a test. Also, it's less likely to
come up if you are actually test-driving and not trying to hack a test into
something you didn't build in a testable way.

Maybe I'm a bit oversensitive on this issue. It's just that I hear people
talk about monkey patching like it is an inherently bad idea and I want to
say, "Why is it that you adopted a dynamic language, again?"
