
Re: [TDD] Test Coverage Tool

 

I would recommend PIT (pitest) for Java mutation testing.
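To make the recommendation concrete: a mutation tester generates small "mutants" of the production code (a flipped comparison, a changed constant) and re-runs the tests; a mutant that no test fails on has "survived" and exposes a gap. A toy sketch of the idea in Python (PIT itself works on JVM bytecode and at much larger scale; every name below is invented for illustration):

```python
# Toy illustration of mutation testing. The "mutant" is a hand-made copy
# of the code under test with one operator changed.

def is_positive(x):
    return x > 0

def is_positive_mutant(x):
    # the mutation: '>' flipped to '>=' (a typical conditional-boundary mutant)
    return x >= 0

def weak_suite(impl):
    # passes for any impl that gets these two cases right
    return impl(5) is True and impl(-3) is False

def strong_suite(impl):
    # also pins down the boundary case x == 0
    return weak_suite(impl) and impl(0) is False

assert weak_suite(is_positive)               # original passes the weak suite
assert weak_suite(is_positive_mutant)        # mutant SURVIVES: the tests have a gap
assert strong_suite(is_positive)             # original passes the strong suite
assert not strong_suite(is_positive_mutant)  # mutant KILLED by the boundary test
```

A real tool automates exactly this loop: generate mutants, run the suite against each, and report the survivors.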



From: testdrivendevelopment@... [mailto:testdrivendevelopment@...] On Behalf Of Sangcheol Hwang
Sent: Friday, October 19, 2012 9:30 AM
To: testdrivendevelopment@...
Subject: Re: [TDD] Test Coverage Tool





Cobertura is not bad.

On Friday, October 19, 2012, Adam Sroka wrote:

Lots of good leads here (in several languages):


On Thu, Oct 18, 2012 at 3:18 PM, Tim Ottinger <tottinge@...> wrote:



Is that Agitar?

On Oct 18, 2012 3:55 PM, "Amir Kolsky" <kolsky@...> wrote:

I remember there exists a tool that will modify existing code to check
whether that code has coverage or not.

I can't remember its name. Can someone help my failing memory there?



Thanks, A.






--
Blog:
Twitter: @k16wire



Re: [TDD] Test Coverage Tool

 

Cobertura is not bad.

On Friday, October 19, 2012, Adam Sroka wrote:

Lots of good leads here (in several languages):


On Thu, Oct 18, 2012 at 3:18 PM, Tim Ottinger <tottinge@...> wrote:



Is that Agitar?

On Oct 18, 2012 3:55 PM, "Amir Kolsky" <kolsky@...> wrote:

I remember there exists a tool that will modify existing code to check
whether that code has coverage or not.

I can't remember its name. Can someone help my failing memory there?



Thanks, A.






--
Blog:
Twitter: @k16wire




Re: [TDD] Test Coverage Tool

 

Lots of good leads here (in several languages):


On Thu, Oct 18, 2012 at 3:18 PM, Tim Ottinger <tottinge@...> wrote:



Is that Agitar?

On Oct 18, 2012 3:55 PM, "Amir Kolsky" <kolsky@...> wrote:

I remember there exists a tool that will modify existing code to check
whether that code has coverage or not.

I can't remember its name. Can someone help my failing memory there?



Thanks, A.





Re: [TDD] Test Coverage Tool

 

Is that Agitar?

On Oct 18, 2012 3:55 PM, "Amir Kolsky" <kolsky@...> wrote:

I remember there exists a tool that will modify existing code to check
whether that code has coverage or not.

I can't remember its name. Can someone help my failing memory there?



Thanks, A.











Re: [TDD] Test Coverage Tool

Amir Kolsky
 

Yeah, that's it: mutation testing. Thanks.



From: testdrivendevelopment@...
[mailto:testdrivendevelopment@...] On Behalf Of George Dinwiddie
Sent: Thursday, October 18, 2012 2:15 PM
To: testdrivendevelopment@...
Subject: Re: [TDD] Test Coverage Tool





Amir,

On 10/18/12 5:02 PM, Amir Kolsky wrote:

There exists a tool that looks at the code, changes the direction of
conditionals, the value of constants, etc. This should result in failing
tests. It ensures that someone did not write code without test coverage.

That sounds like mutation testing. I think the phrase "without test
coverage" is misleading here.

- George




From: testdrivendevelopment@... [mailto:testdrivendevelopment@...] On Behalf Of George Dinwiddie
Sent: Thursday, October 18, 2012 1:37 PM
To: testdrivendevelopment@...
Subject: Re: [TDD] Test Coverage Tool





Amir,

It's not clear to me what you seek.

On 10/18/12 4:27 PM, Amir Kolsky wrote:

I remember there exists a tool that will modify existing code to check
whether that code has coverage or not.

A code coverage tool (such as Emma, for Java) would tell you if code was
executed during the test run.

A mutation test tool (such as Jester, for Java) would tell you if your
tests detected the mutations created by the tool.

Does that help?

- George
--
----------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------


Re: [TDD] Test Coverage Tool

 

Amir,

On 10/18/12 5:02 PM, Amir Kolsky wrote:

There exists a tool that looks at the code, changes the direction of
conditionals, the value of constants, etc. This should result in failing
tests. It ensures that someone did not write code without test coverage.

That sounds like mutation testing. I think the phrase "without test coverage" is misleading here.

- George




From: testdrivendevelopment@...
[mailto:testdrivendevelopment@...] On Behalf Of George Dinwiddie
Sent: Thursday, October 18, 2012 1:37 PM
To: testdrivendevelopment@...
Subject: Re: [TDD] Test Coverage Tool





Amir,

It's not clear to me what you seek.

On 10/18/12 4:27 PM, Amir Kolsky wrote:

I remember there exists a tool that will modify existing code to check
whether that code has coverage or not.

A code coverage tool (such as Emma, for Java) would tell you if code was
executed during the test run.

A mutation test tool (such as Jester, for Java) would tell you if your
tests detected the mutations created by the tool.

Does that help?

- George
--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------


Re: [TDD] Test Coverage Tool

Amir Kolsky
 

There exists a tool that looks at the code, changes the direction of
conditionals, the value of constants, etc. This should result in failing
tests. It ensures that someone did not write code without test coverage.



From: testdrivendevelopment@...
[mailto:testdrivendevelopment@...] On Behalf Of George Dinwiddie
Sent: Thursday, October 18, 2012 1:37 PM
To: testdrivendevelopment@...
Subject: Re: [TDD] Test Coverage Tool





Amir,

It's not clear to me what you seek.

On 10/18/12 4:27 PM, Amir Kolsky wrote:

I remember there exists a tool that will modify existing code to check
whether that code has coverage or not.

A code coverage tool (such as Emma, for Java) would tell you if code was
executed during the test run.

A mutation test tool (such as Jester, for Java) would tell you if your
tests detected the mutations created by the tool.

Does that help?

- George

--
----------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------


Re: [TDD] Test Coverage Tool

 

Amir,

It's not clear to me what you seek.

On 10/18/12 4:27 PM, Amir Kolsky wrote:

I remember there exists a tool that will modify existing code to check
whether that code has coverage or not.

A code coverage tool (such as Emma, for Java) would tell you if code was executed during the test run.

A mutation test tool (such as Jester, for Java) would tell you if your tests detected the mutations created by the tool.

Does that help?

- George

--
----------------------------------------------------------------------
* George Dinwiddie *
Software Development
Consultant and Coach
----------------------------------------------------------------------
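George's distinction between a coverage tool and a mutation tool can be made concrete. The sketch below (Python for brevity; the function and suite names are invented) shows a test suite that executes every line of the code under test, which is all a coverage tool like Emma measures, yet lets a Jester-style mutant pass undetected:

```python
def discount(price, is_member):
    if is_member:
        return price * 9 // 10   # members get 10% off (integer math)
    return price

def coverage_only_suite(impl):
    # executes BOTH branches -> a coverage tool reports 100% line coverage,
    # but nothing is asserted about the results
    impl(100, True)
    impl(100, False)
    return True

def asserting_suite(impl):
    return impl(100, True) == 90 and impl(100, False) == 100

# a mutant a tool like Jester or PIT might generate: the constant 9 becomes 10
def discount_mutant(price, is_member):
    if is_member:
        return price * 10 // 10
    return price

assert coverage_only_suite(discount)
assert coverage_only_suite(discount_mutant)   # coverage alone misses the mutant
assert asserting_suite(discount)
assert not asserting_suite(discount_mutant)   # the mutation is detected
```

So "covered" code can still be untested in any meaningful sense; mutation testing measures whether the tests actually pin behaviour down.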


Test Coverage Tool

Amir Kolsky
 

I remember there exists a tool that will modify existing code to check
whether that code has coverage or not.

I can't remember its name. Can someone help my failing memory there?



Thanks, A.


XP2013, June 3-7, Vienna, Call for Research and Experience Papers

 

The 14th International Conference on Agile Software Development is a leading conference on agile methods in software and information systems development. XP2013 will be located in Vienna, Austria. This conference brings together industrial practitioners and researchers in the fields of information systems and software development, and examines the latest theory, practical applications, and implications of agile methods. For more information visit .

We accept submissions of both research papers and experience reports.

Research papers
===============
We are looking for original research, with implications for theory and practice. Motivation for research and discussion of the related work must be included, as well as a description of the research method applied. Full papers are limited to 15 pages and short papers to 8 pages.

Experience reports
==================
We are looking for experience reports on the business impact of introducing agile practices, the challenges to be overcome, and horror stories about failures, including lessons learned. Experience papers must be clearly marked as such so that they can be appropriately reviewed by the program committee. Concerning length and formatting, experience reports must follow the same rules and guidelines as research papers.

If you would like to receive help from an experienced agile researcher to shepherd you in developing a submission, please contact the PC chairs (research@...) by the time indicated in the important dates list.

Areas of interest
=================
We are inviting innovative, high-quality research papers as well as experience reports related to agile software development. Topics of particular interest include, but are not restricted to, the following:

- Foundations and conceptual studies of and for agile methods
- Living examples of agility in all areas
- Current practices and future trends, agile evolution and revolution
- Empirical studies and experiences
- Experiments in software development and their relation to agile methods
- Implications for industrial practice
- Tools and techniques for agile development
- Social and human aspects including teams
- Forming agile organizations and their implications
- Management and governance aspects
- Measurement and metrics
- Global software development and offshoring
- Systems engineering and safety critical systems
- Software and systems architecture
- Legacy systems
- Usability
- Using agile methods in education and research
- Teaching agile methods, in particular in a university context

Presentation of accepted papers
===============================
Accepted papers will be presented either within the research track or as part of the main conference track, depending on whether the implications of the contribution are mainly related to theory and/or practice. Authors of papers that are selected for presentation in the main conference track will be assigned a mentor who will advise and assist in adapting the presentation to suit a mixed audience of practitioners and researchers.

Research Program Committee
==========================
Hubert Baumeister, Technical University of Denmark, Denmark (co-chair)
Barbara Weber, University of Innsbruck Austria (co-chair)

Muhammad Ali Babar, IT University of Copenhagen, Denmark
Robert Biddle, Carleton University, Canada
Luigi Buglione, Engineering.IT / ETS, Italy
Ivica Crnkovic, Mälardalen University, Sweden
Simon Cromarty, Red Gate Software, UK
Steven D. Fraser, Cisco Research, USA
Torgeir Dingsøyr, SINTEF ICT, Norway
Tore Dybå, SINTEF and Department of Informatics, University of Oslo, Norway
Amr Elssamadisy, Gemba Systems, USA
Juan Garbajosa, Universidad Politecnica de Madrid / Technical University of Madrid, Spain
Alfredo Goldman, University of São Paulo, Brazil
Des Greer, Queen's University Belfast, Ireland
Rashina Hoda, The University of Auckland, New Zealand
Kirsi Korhonen, NSN, Finland
Pasi Kuvaja, University of Oulu, Finland
Stig Larsson, Effective Change AB, Sweden
Casper Lassenius, Aalto University, Finland
Lech Madeyski, Wroclaw University of Technology, Poland
Michele Marchesi, University of Cagliari, Italy
Grigori Melnik, Microsoft, Canada
Alok Mishra, Atilim University, Turkey
Nils Brede Moe, SINTEF ICT, Norway
Ana Moreno, University Madrid, Spain
Oscar Nierstrasz, University of Bern, Switzerland
Maria Paasivaara, Helsinki University of Technology, Finland
Jennifer Perez, Technical University of Madrid, Spain
Kai Petersen, Blekinge Institute of Technology/Ericsson AB, Sweden
Adam Porter, University of Maryland, USA
Outi Salo, Nokia, Finland
Helen Sharp, The Open University, UK
Alberto Sillitti, Free University of Bolzano, Italy
Darja Smite, Blekinge Institute of Technology, Sweden
Giancarlo Succi, Free University of Bolzano/Bozen, Italy
Marco Torchiano, Politecnico di Torino, Italy
Stefan Van Baelen, iMinds, Belgium
Xiaofeng Wang, Free University of Bozen-Bolzano, Italy
Hironori Washizaki, Waseda University, Japan
Werner Wild, EVOLUTION, Austria
Laurie Williams, North Carolina State University, USA
Agustin Yague, Universidad Politecnica de Madrid, Spain

Submission instructions
=======================
Papers should be submitted electronically as a self-contained PDF file using the EasyChair submission site: . Papers have to be written in English and formatted according to Springer's LNBIP format: . Papers not following the guidelines will be rejected.



--
Hubert Baumeister
Associate Professor
mailto:hub@...

phone/skype: (+45)4525-3729 / hubert.baumeister


Regression Testing Survey

 

We are conducting some research into automated regression testing practices in industry, and as part of this research we have created a survey questionnaire for which we are seeking respondents. I would be grateful if anyone who is involved in an organization that uses regression testing could answer the survey at the URL below:



Please forward this link to anyone you know who might be in a position to respond.

A/Prof David Parsons. Massey University, New Zealand


Re: [TDD] How do you go about splitting a class using TDD?

 

We split a class last week. We used the recipe in Refactoring. We ran
the tests after just about every step (search and replace is only one
step if you squint right). It worked.


--
Tim Ottinger, Sr. Consultant, Industrial Logic
-------------------------------------



Re: [TDD] How do you go about splitting a class using TDD?

 

The only thing you are getting wrong is that Nayan was being sarcastic in
the response - talking about the "old way" of doing things (usually in
waterfall/waterfail).

Have a nice weekend. :)

On 16 September 2012 06:58, arnonaxelrod <arnonaxelrod@...> wrote:



But the whole idea around TDD is the concept of "emergent design", where
you don't have the full requirements and you don't do a "big design
up-front", but rather you start with one simple example and add more and
more examples as you go.
Also, I refer to testing that validate the requirements as "Acceptance
tests" (and the practice of writing them before the code as
ATDD/BDD/Specification by example/etc.), while TDD (which uses Unit Tests)
is for driving the *design*.

Am I getting anything wrong here?

--- In testdrivendevelopment@..., Nayan Hajratwala <nayan@...>
wrote:


On Sep 10, 2012, at 5:34 AM, arnonaxelrod <arnonaxelrod@...> wrote:

My problem is as follows:
...
after some time I realize that one of the classes
has multiple responsibilities and it needs to be refactored and split


I never actually have this problem, and it sounds to me like you have a
problem with your process. You see, if you just spent some more time on
making sure you got the requirements exactly right and then came up with
some *correct* class diagrams, sequence diagrams, activity diagrams, etc,
then you would have gotten the code right the first time and never have had
to change ... whoops, wrong list.


---
Nayan Hajratwala - 734.658.6032 - - @nhajratw




Re: [TDD] How do you go about splitting a class using TDD?

 

But the whole idea around TDD is the concept of "emergent design", where you don't have the full requirements and you don't do a "big design up-front", but rather you start with one simple example and add more and more examples as you go.
Also, I refer to testing that validate the requirements as "Acceptance tests" (and the practice of writing them before the code as ATDD/BDD/Specification by example/etc.), while TDD (which uses Unit Tests) is for driving the *design*.

Am I getting anything wrong here?

--- In testdrivendevelopment@..., Nayan Hajratwala <nayan@...> wrote:

On Sep 10, 2012, at 5:34 AM, arnonaxelrod <arnonaxelrod@...> wrote:
My problem is as follows:
...
after some time I realize that one of the classes
has multiple responsibilities and it needs to be refactored and split

I never actually have this problem, and it sounds to me like you have a problem with your process. You see, if you just spent some more time on making sure you got the requirements exactly right and then came up with some *correct* class diagrams, sequence diagrams, activity diagrams, etc, then you would have gotten the code right the first time and never have had to change ... whoops, wrong list.


---
Nayan Hajratwala - 734.658.6032 - - @nhajratw


Re: How do you go about splitting a class using TDD?

 

Thanks. I'm not sure I follow you completely, but maybe that's because I wasn't very clear, as your last question implies.

So to be clearer, I'll explain a bit further and try to give an example:
The 2 responsibilities usually have a dependency between them. In other words, after the refactoring, A2 will implement an interface that will be injected into A1's constructor. So to your questions "do you have tests that exercise BOTH responsibilities?" and "Is there any A method that uses BOTH responsibilities?" - the answer is YES (before the refactoring).
Unfortunately I can't think right now of a very concrete example, but here's a "generic" one:
Suppose that the first class (A) parses some text and acts upon it in some manner which varies slightly according to the format of the text. At first I would write tests for each of these variations by providing different text and examining the behavior of the class. After adding some more variations, I would see the need to separate the behavior of each variation into a different "strategy" class, let the original class handle only the common behavior, and delegate the rest to the appropriate strategy class (after identifying which strategy it should use).
So after the refactoring I would like the original class to get a list of strategies, with each strategy handling only the necessary variation.
The tests, of course, should reflect that in the end...
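The "generic" parsing example above might land on something like this after the split (a Python sketch; the strategy and method names are invented for illustration):

```python
class CsvStrategy:
    def matches(self, text):
        return "," in text
    def parse(self, text):
        return text.split(",")

class PipeStrategy:
    def matches(self, text):
        return "|" in text
    def parse(self, text):
        return text.split("|")

class Parser:
    """Keeps the common behaviour; delegates each variation to a strategy."""
    def __init__(self, strategies):
        self.strategies = strategies

    def parse(self, text):
        text = text.strip()                 # common behaviour stays here
        for strategy in self.strategies:
            if strategy.matches(text):      # identify the right strategy
                return strategy.parse(text)
        return [text]                       # default: a single field

# Each strategy can now be tested in isolation...
assert CsvStrategy().parse("a,b") == ["a", "b"]
# ...and the parser's own tests only cover identification plus common behaviour:
parser = Parser([CsvStrategy(), PipeStrategy()])
assert parser.parse(" a,b,c ") == ["a", "b", "c"]
assert parser.parse("a|b") == ["a", "b"]
```

After the split, the old "one test per text variation" suite can shrink to one test per strategy plus a few tests that the parser picks the right one.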

--- In testdrivendevelopment@..., Angel Java Lopez <ajlopez2000@...> wrote:

Hmmm....

Let A be the original class, and let A1, A2 be your imagined new classes,
attending to responsibilities R1, R2.

I would write A2, using TDD.
Then, re-factor the original tests that exercise responsibility R2 to
reach A2.
Then, remove R2-related code from class A.
Rename A to A1.

But maybe I'm thinking of a clean separation between R1 and R2.

The part I was unable to understand: do you have tests that exercise
BOTH responsibilities? Any concrete example?
Is there any A method that uses BOTH responsibilities?

On Mon, Sep 10, 2012 at 6:34 AM, arnonaxelrod <arnonaxelrod@...>wrote:




Hi all,

I feel pretty skilled in TDD, and I'm even considered the "TDD expert" in
my company, but nevertheless, there are some cases that I feel I don't
know how to handle properly, so I would like to hear others' opinions.

My problem is as follows:
Even though in general TDD helps me think of the core responsibility of
a class, and extract every other responsibility to dependent classes,
there are cases where after some time I realize that one of the classes
has multiple responsibilities and needs to be refactored and split
into 2 classes. This conclusion often comes because the tests of that
class start to become complicated or repetitive. I can pretty easily do
the refactoring to split this class to the design I want (and I do it in
small steps, keeping on the green bar). My problem is that I end up with
the same complicated and repetitive tests that now test the 2 classes
together, while I would like to have separate tests for each class.
The only (more-or-less safe) manner I could think of for doing that, is
to do the following for each test (after I completed the refactoring of
the production code):

1. Duplicate the test case
2. Change one copy of the test to use a mock instead of the 1st
class, and the other copy of the test to use a mock instead of the 2nd
class.
3. Then if I see that an identical test already exists for one of the
copies, I delete it.

I think that sometimes it's possible to do the following:

1. start by creating the 2 classes from scratch (using TDD of course)
2. Change the old tests to use the new classes instead of the old one
3. Delete the old class
4. Delete the old tests

Both of these techniques seem pretty cumbersome and time-consuming, so
I wonder: how do the "real experts" go about this issue?



Re: [TDD] How do you go about splitting a class using TDD?

Adrian Howard
 

Hi Arnon,

On 10 Sep 2012, at 10:34, arnonaxelrod <arnonaxelrod@...> wrote:

My problem is as follows:
Even though in general TDD helps me think of the core responsibility of
a class, and extract every other responsibility to dependent classes,
there are cases where after some time I realize that one of the classes
has multiple responsibilities and needs to be refactored and split
into 2 classes. This conclusion often comes because the tests of that
class start to become complicated or repetitive. I can pretty easily do
the refactoring to split this class to the design I want (and I do it in
small steps, keeping on the green bar). My problem is that I end up with
the same complicated and repetitive tests that now test the 2 classes
together, while I would like to have separate tests for each class.
[snip]
Both of these techniques seem pretty cumbersome and time-consuming, so
I wonder: how do the "real experts" go about this issue?

I make no claim to being a real expert - but this is what I do.

I just leave the "bad" tests and carry on :-)

I don't care about the complicated and repetitive tests until they get in my way. I do this for three reasons:

* Refactoring the test suite doesn't give me value now - it gives me value later (when the poorly structured test gets in the way of me making a future change). So I might as well wait until later and do it then, giving me more time to do productive stuff now.

* Sometimes I get it wrong. Turns out I shouldn't have pulled that class out. Turns out I should have split the responsibilities some other way. If I've spent time refactoring the test suite up-front then that time has just been wasted.

* We're pretty non-mockish non-strict-unit-testing in the way we do TDD since we've not found being strict about those things brings much, if any, value to us. So having the tests for Foo also test Bar (or vice versa) wouldn't bother me until the point that dependency started getting in the way of change.

The very millisecond that I found those complicated or repetitive tests got in the way of making further updates - then I'd refactor/mock/whatever depending on what's causing the problem.

I probably wouldn't spend a lot of time trying to make the test suite ideal in every possible way (unless it looked trivial) - but I would do enough to make the problem we were facing go away (e.g. deal with the repetition, but not mock stuff out).

I'd trust that this process of incrementally improving the test suite would, over time, get rid of the problem without spending a large single chunk of time refactoring/tweaking the test suite.

So far it seems to have worked pretty well.

Cheers,

Adrian
--
adrianh@... twitter.com/adrianh
t. +44 (0)7752 419080 skype adrianjohnhoward pinboard.in/u:adrianh


Re: How do you go about splitting a class using TDD?

 

--- "arnonaxelrod" <arnonaxelrod@...> wrote:
My problem is as follows:
[a class] has multiple responsibilities and it needs to be
refactored and split into 2 classes. ... My problem is
that I end up with the same complicated and repetitive tests
that now test the 2 classes together, while I would like to
have separate tests for each class.
I don't adhere to the mistaken belief that "xUnit" tests have anything to do with *UNIT* testing, and should be limited to testing one class at a time.

If one of the classes implements a clear general-purpose abstraction and reusability is desired, then I'd separate out some tests that are specific to that class and add a number of other tests to "fill out" the expected abstraction. But that's the exception, rather than the rule.

If your tests are a maintenance problem, then you need to work on the tests. The class(es) under test are not the problem.


Re: [TDD] How do you go about splitting a class using TDD?

 

When you think that the one class has two responsibilities it makes
sense to split it. If you did that first you'd have two classes with
one test, and that test would be testing both responsibilities.

Right from there I would neither duplicate the test nor start mocking
anything (Actually, I would never deliberately duplicate a test, but
many people I consider peers would.) First of all, it is unusual that
a class with two responsibilities has two equal responsibilities that
call one another. It is more usual, in my experience, that either one
should be delegating to the other or that the two exist side-by-side
with some (or no) overlap.

If Foo delegates to Bar then you can write a new test that tests Bar
directly and then modify Foobar's test to test Foo with a mock Bar. If
they have a small amount of overlapping data you can consider whether
that belongs in its own value object or whether it properly belongs
in one place or the other (If you figure out which then it degenerates
to the first case.) More often than you might think you can separate
the two entirely, perhaps by having a higher level object call them
both.

If none of that works (i.e. Foo must know about Bar and Bar must know
about Foo) then the premise that they are two separate
responsibilities is questionable. Perhaps the class does need to be
split but you are splitting it in the wrong place. Back off, let the
smell get a little worse, then it should be more obvious what to do.
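The first case above (Foo delegating to Bar) can be sketched like this: test Bar directly, then test Foo against a mock Bar (Python with `unittest.mock`; Foo/Bar are from the post, but the method names are invented):

```python
from unittest.mock import Mock

class Bar:
    """The extracted delegate."""
    def render(self, value):
        return "<" + value + ">"

class Foo:
    """Keeps its own responsibility and delegates the rest to Bar."""
    def __init__(self, bar):
        self.bar = bar

    def process(self, value):
        return self.bar.render(value.upper())

# Test Bar directly:
assert Bar().render("HI") == "<HI>"

# Test Foo against a mock Bar, so Foo's test no longer exercises Bar's logic:
mock_bar = Mock()
mock_bar.render.return_value = "stubbed"
assert Foo(mock_bar).process("hi") == "stubbed"
mock_bar.render.assert_called_once_with("HI")
```

The old combined test for "Foobar" then splits naturally along the same line as the classes.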


Re: [TDD] How do you go about splitting a class using TDD?

 

It's a refactoring question, but what the heck?

Alternative path:

Start by creating A2, which inherits everything from A (classes
essentially identical other than access permissions).

Make A's private methods/properties protected.

One at a time:
Create A2 tests, by slowly (one at a time) adding tests or moving
tests from A's test suite.
Where an A test might need A2 support, mock it.
Move properties not needed by A to A2.
Move methods not tested in A to A2.

Drop the inheritance relationship between A and A2.

Don't test A2 through A. Test A2 directly, test A with mocked-out A2.


This is off-the-cuff at 9pm after a long day, so feel free to
embellish as needed.
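A rough sketch of the intermediate state in that recipe (Python; the responsibility and method names are placeholders, and since Python has no enforced private/protected, the access change is only noted in a comment):

```python
# Intermediate state: A2 starts life as a subclass of A, inheriting
# everything, while tests migrate across one at a time. In Java, _helper
# would be the private member loosened to protected so A2 can reach it.

class A:
    def _helper(self):
        return "shared"

    def responsibility_one(self):
        return "R1/" + self._helper()

    def responsibility_two(self):
        return "R2/" + self._helper()

class A2(A):
    pass  # essentially identical to A, as the recipe starts out

# A test moved out of A's suite now targets A2 directly:
assert A2().responsibility_two() == "R2/shared"

# Last step: once R2's members have moved into A2, the inheritance
# relationship is dropped and A2 stands alone.
class A2Standalone:
    def _helper(self):
        return "shared"

    def responsibility_two(self):
        return "R2/" + self._helper()

assert A2Standalone().responsibility_two() == "R2/shared"
```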

--
Tim Ottinger, Sr. Consultant, Industrial Logic
-------------------------------------



Re: [TDD] How do you go about splitting a class using TDD?

 

I think you might be over-thinking this. I would probably just refactor the class using the existing tests. I don't see the need to modify the tests themselves until you discover a value in changing the interface.

That is, the most common case I can think of would be to extract one or more delegators, leaving the rest of the system to invoke the original class, which is now at least partially a facade for the delegators. If for some reason you want to completely separate the classes so that the rest of the system will start calling the delegators directly, you now have a new interface and a new scenario which needs to be tested - so write that new test and THEN make the delegate directly visible.
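One possible shape for that facade arrangement (a Python sketch; the class and method names are invented, not from the post):

```python
class Validator:
    """Extracted delegate: one of the responsibilities pulled out."""
    def is_valid(self, order):
        return order.get("qty", 0) > 0

class OrderService:
    """Original class: now partially a facade over the delegate, so
    existing callers (and the existing tests) are unchanged."""
    def __init__(self):
        self._validator = Validator()

    def place(self, order):
        if not self._validator.is_valid(order):
            return "rejected"
        return "placed"

svc = OrderService()
assert svc.place({"qty": 2}) == "placed"
assert svc.place({"qty": 0}) == "rejected"
```

Because `OrderService`'s interface is unchanged, the original tests keep passing through the refactoring; a direct test for `Validator` is only written once callers start using it directly.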

On Sep 10, 2012, at 5:34 AM, arnonaxelrod wrote:


Hi all,

I feel pretty skilled in TDD, and I'm even considered the "TDD expert" in
my company, but nevertheless, there are some cases that I feel I don't
know how to handle properly, so I would like to hear others' opinions.

My problem is as follows:
Even though in general TDD helps me think of the core responsibility of
a class, and extract every other responsibility to dependent classes,
there are cases where after some time I realize that one of the classes
has multiple responsibilities and needs to be refactored and split
into 2 classes. This conclusion often comes because the tests of that
class start to become complicated or repetitive. I can pretty easily do
the refactoring to split this class to the design I want (and I do it in
small steps, keeping on the green bar). My problem is that I end up with
the same complicated and repetitive tests that now test the 2 classes
together, while I would like to have separate tests for each class.
The only (more-or-less safe) manner I could think of for doing that, is
to do the following for each test (after I completed the refactoring of
the production code):

1. Duplicate the test case
2. Change one copy of the test to use a mock instead of the 1st
class, and the other copy of the test to use a mock instead of the 2nd
class.
3. Then if I see that an identical test already exists for one of the
copies, I delete it.

I think that sometimes it's possible to do the following:

1. start by creating the 2 classes from scratch (using TDD of course)
2. Change the old tests to use the new classes instead of the old one
3. Delete the old class
4. Delete the old tests

Both of these techniques seem pretty cumbersome and time-consuming, so
I wonder: how do the "real experts" go about this issue?









-----------------
Come read my webnovel, Take a Lemon <>,
and listen to the Misfile radio play <>!