The interviewer can certainly provide a new requirement to implement and then motivate refactoring to design patterns that unify commonalities in the implementations (or even provide additional code to motivate such refactoring), but that is quite different from asking me to do speculative OO design without specific future requirements, which I simply do not do anymore as a matter of principle.
Why not let the interviewer be the "additional requirements that motivate reuse"? You can explain your reservations but then go ahead with the new requirements.

JD

From: [email protected] On Behalf Of Steve Gordon
Sent: 16 January 2020 17:31
To: [email protected]
Subject: Re: [testdrivendevelopment] Tests "scoped" to implementation

TDD is a great technique, but does not constitute all of software development. Just because writing a test that succeeds without writing any additional code may not be a step in TDD does not mean that such tests cannot serve useful software development purposes.

Where I tend to quarrel with interviewers is that I firmly believe in not imposing design patterns until they emerge in the code, which does not tend to happen until there are additional requirements that motivate reuse. Many interviewers request speculative design patterns that I resist on principle. It has cost me more than a few jobs.

On Thu, 16 Jan 2020 at 16:57, Russell Gold <russ@...> wrote:

I'd add the test because it would require reasoning about the specific code to decide whether it would (probably) run. Suppose we implemented it with a hash table, for example :)
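(For illustration only - this is not code from the thread, and the function name is invented - a hash-based version of the "do any two elements sum to the target" function under discussion might look like this:)

// O(n) sketch using a set of values seen so far; the same black-box tests
// written against the nested-loop version would still apply here.
function hasPairSum(values: number[], target: number): boolean {
  const seen = new Set<number>();
  for (const value of values) {
    if (seen.has(target - value)) {
      return true; // some earlier element plus this one reaches the target
    }
    seen.add(value);
  }
  return false;
}

Whether an additional test case is redundant then depends on reasoning about this particular implementation, which is the point being made here.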
And that's my point. The order of, or the particular, test cases you want depends (or can depend) on your implementation. If you go for a particular solution, like a hash table, or do something else, it can simply drive you to write a new test case which should fail first. I thought the idea of TDD (at least as defined by Uncle Bob) is to add a new failing test that forces you to change the implementation of your SUT, not to simply repeat what is already done?
That depends; if you have known requirements, it makes sense to add unit tests to confirm the corresponding behavior, even if they pass when you write them: you might change your implementation later, and any requirements you don't test for now could easily be forgotten. The fact that your current implementation happens to pass those tests doesn't make them superfluous.
Of course you can, as nothing prevents you from doing it. In the simplified case I already provided above, what other cases can you add to express the requirement? I believe that working in the micro steps of TDD allows you (in the majority of cases) to express all requirements as test cases... and other tests would simply be redundant, as they would duplicate existing tests. In my case, by adding all possible tests - for the empty array, an array with 1 element, an array with 2 elements, and summing #1 with #2, #1 with #3 and #2 with #3 for an array with 3 elements - and getting to the solution with a double/nested loop, I cannot see any point in adding more tests for my existing implementation. If my implementation were more sophisticated I could probably have more/different test cases, but for the existing solution I believe this covers all possible options (I am not talking about property-based testing).
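(For reference, an illustrative sketch - not the interview code - of the naive nested-loop solution and the test list described above, for the "do any two elements sum to the target" problem quoted later in the thread:)

// Naive O(n^2) sketch: compare the sum of every possible pair.
function hasPairSumNaive(values: number[], target: number): boolean {
  for (let i = 0; i < values.length; i++) {
    for (let j = i + 1; j < values.length; j++) {
      if (values[i] + values[j] === target) {
        return true;
      }
    }
  }
  return false;
}

// The test list described above, written as plain assertions.
console.assert(hasPairSumNaive([], 3) === false);        // empty array
console.assert(hasPairSumNaive([1], 3) === false);       // one element: no pair exists
console.assert(hasPairSumNaive([1, 2], 3) === true);     // two elements
console.assert(hasPairSumNaive([1, 2, 3], 3) === true);  // #1 + #2
console.assert(hasPairSumNaive([1, 2, 3], 4) === true);  // #1 + #3
console.assert(hasPairSumNaive([1, 2, 3], 5) === true);  // #2 + #3
console.assert(hasPairSumNaive([1, 2, 3], 6) === false); // no pair reaches 6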
R
On Jan 16, 2020, at 7:25 AM, Rene Wiersma via Groups.Io <R.wiersma@...> wrote:

Purely from a TDD point of view it is not necessary to write a test for the last case, as we would not have to add or change any existing code to make it pass. However, in this case I would add the test for completeness' sake, and for future reference.
Ron Jeffries

The seemingly easy way of learning -- by asking -- is not necessarily the best. When you eventually understand, you will understand fully. -- Dragon, The Line War (Neal Asher)
Steve Gordon wrote on 1/16/20 9:31 AM: "Where I tend to quarrel with interviewers is that I firmly believe in not imposing design patterns until they emerge in the code, which does not tend to happen until there are additional requirements that motivate reuse. Many interviewers request speculative design patterns that I resist on principle. It has cost me more than a few jobs."

You probably didn't want those jobs. :-)

Jeff
Hi Steve,
I understand the sentiment and I agree with you from a production-code perspective; however, the interviewer may be trying to find out whether you understand patterns at all, and you're not helping them by refusing to even try. I mean, in a typical programming interview we write a bit of code that nobody needs to solve a problem that nobody has, e.g. fizz buzz. Just for the sake of showing that I can, I would do it. I would also add that in real work, I would wait for the moment when using a pattern simplifies the code rather than complicating it.
Anytime I write a test that passes first, I consider:

* Was the code so unclear I wasn't sure whether it'd pass? (Often the design--both prod & test--warrants improvement in this case.) Does this test give me confidence I wouldn't otherwise have?
* Do I sense it's a constraint that someone in the future might likely break, particularly given the way things are coded?
* Does it document something important or otherwise useful about the system, something not easily gleaned from the other tests? If I pull up a list of all the test names, does this one add anything to the story?

If any of these hold true, I'll keep the test around for now... as long as it's also a legitimate test! (I always watch the damn thing fail at least once, regardless of whether it passed at the outset.)
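(One way to "watch it fail" when a test passed on its first run - an illustrative sketch, not anything from the thread - is to temporarily break the behaviour the test claims to verify and confirm the suite goes red:)

// Hypothetical production function plus a test that passed on its first run.
function hasPairSum(values: number[], target: number): boolean {
  // return false; // <-- uncomment briefly: the assertion below must then fail
  return values.some((v, i) => values.slice(i + 1).some(w => v + w === target));
}

console.assert(hasPairSum([1, 2, 3], 5) === true, "expected some pair to sum to 5");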
Jeff

Jeff Langr / Langr Software Solutions, Inc. / +1-719-287-4335
Russell Gold wrote on 1/16/20 5:54 AM:
I don't understand the idea that they are noise. If they confirm desired behavior, having them seems a plus. So why would you feel a need to remove them?
Thx Ron. Actually I added more tests as requested by the interviewer, but the moment I saw them all passing I deleted them, as I considered them noise and redundant code (given I stick to my solution).
P.S. I didn't get the job, as my refusal to add more ("wasteful") tests made me "hard to manage" in his eyes.

I don't see anything wrong with adding tests to confirm things. A common case is when someone says "I think that breaks when ...". The ideal response, in my view, is just to add the test and run them. Let the computer decide.

As for adding tests in case of refactoring, that seems speculative to me and therefore probably wasteful. In an interview situation, of course, it's a question of how badly one wants the job offer.

But there's nothing wrong with writing an additional test in response to a concern or question or doubt.

R
I had a live coding interview yesterday and I faced an interesting (at least for me) issue.

A small background: I was asked to implement a function to figure out whether the sum of any 2 elements in an input array is equal to a given number. Example:

Input            Output
[1,2] and 3      true
[1,2,3] and 5    true
[1,2,3] and 4    true
[1,2,3] and 6    false

As I was constrained by time (~25 mins) I started with TDD but decided to skip most of the micro steps. In the end I implemented something pretty naive (with O(n^2) complexity - comparing the sums of all possible pairs), but it wasn't highly welcomed by the interviewer. Moreover, my interviewer wanted me to add extra test cases (besides the ones which brought me to my solution, as shown above) just in case "in future you want to refactor the existing solution to something more sophisticated". I strongly refused, as these tests would not make any sense from a TDD point of view: they would all immediately pass.

Do you believe that adding extra test cases "for future refactoring" makes sense? I can imagine that for a particular solution of this task (the algorithm being: sort the input list and use 2 pointers), if I go strictly with TDD (a new test case must first fail), a new solution would (but doesn't need to) require different test cases... What do you think? Is it possible that TDD is not a good fit for strongly "algorithmic" tasks?
Ron Jeffries
It was important to me that we accumulate the learnings about the application over time by modifying the program to look as if we had known what we were doing all along. -- Ward Cunningham
Writing the tests first can help identify missing or unclear parts of the specification, which seems useful for these kinds of interview questions. You can demonstrate how you ask for clarification and what you do with the information.
Thinking clearly about inputs and expected outputs (a part of TDD) certainly fits algorithmic tasks well, mostly because it helps one detect the situation where their desired algorithm _almost_ meets the specification, but doesn't quite.
Incremental design doesn't always lead the programmer to discover a new algorithm that fits the problem well. I don't interpret this as "TDD doesn't fit" but rather that TDD mostly guides one's existing thinking and helps one notice when one needs to learn something to help with the problem. For example, if I don't know binary search, then I don't think incremental design would guide me from linear search to binary search, but the act of trying to build the search feature incrementally _might_ lead me to consider other ways to search. Seeing dozens of examples of searching might give me enough information (I'd see patterns) to have the insight that "with a sorted search space, I can jump around and certain useful invariants hold". Of course, seeing examples is just one way to gain that insight; different people have different ways of getting there. I encourage you to learn how to put your mind in the state that tends to lead it to insight more easily.
When I started practising TDD, I spent some months establishing the habit of thinking about tests first. This included choosing to write some code test-first and refactoring it incrementally, _even when I didn't need it_. Once I established more-helpful habits, I stopped approaching TDD so strictly, and instead trusted myself to use any tricks I knew to write code, confident that I would add tests and refactor safely when I found that useful. I don't think I would force myself to answer every interview question by using TDD, although I would probably apply the general principle of "make it run, make it work, make it fast" somehow. This might mean starting with the O(n^2) implementation, then spending the remaining time figuring out how to improve it. It depends significantly on what I believe the interviewer wants: do they prefer a slow-but-working solution or do they prefer to see more of my thinking on an incomplete solution that goes in the right general direction? If I don't know what to do, I just guess and hope that I'm right.
I will say this about your problem statement: given the examples you showed, I have one important question to ask: is the input array sorted or not? I would approach the problem very differently if it were sorted than if it isn't.
Finally, regarding adding extra tests, I do that, but I spent several months practising _not_ doing that precisely in order to understand when I need it and when I don't. I see a pattern among the enthusiastic programmers learning TDD/test-first programming/evolutionary design: they practise, but they don't clearly enough identify when they're following a set of rules _for practice_ or _to perform_. The implied rule here, "I will only add tests when they force me to change the code", makes perfect sense in a context of deliberate practice, but I don't always follow it when writing code for pay. I followed it long enough to understand why it helped and I follow it when I notice myself falling back into bad old habits.
J. B. Rainsberger
Lately, while helping with some legacy code, I've been finding that my "safe refactorings" aren't quite as safe as I wanted them to be, and that if I had started with the complete test first, I would have saved myself 10 minutes of thinking in the wrong direction. Especially when functions or idioms lie about what they are doing in non-obvious ways.
Always good to test assumptions. :)
Massively agree. With legacy code, I usually try to start with discovery tests - that is, I take my best guess at what the code is doing and write tests to see if I am right.
Of course, that doesn't mean it is what the authors *wanted* the code to do, but it is the best approximation I have. That way, at least I can minimize the chances of breaking current behavior unintentionally.
The most frustrating part is when this legacy code was written just six months prior...
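(As an illustration of that style - a sketch, not code from any of these projects - a "discovery" test just pins down your best guess at what the legacy code currently does:)

// `legacyFormatInvoice` is a made-up stand-in for some opaque legacy function.
function legacyFormatInvoice(id: number, year: number): string {
  return "INV-" + year + "-" + String(id).padStart(5, "0");
}

// Best guess at the current behaviour; if the assertion fails, update the
// expectation (not the code) so the test documents what the code really does today.
console.assert(
  legacyFormatInvoice(42, 2020) === "INV-2020-00042",
  "discovery test: record whatever the legacy code actually returns"
);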
-----------------
Author, Getting Started with Apache Maven; Author, HttpUnit and SimpleStub
Have you listened to Edict Zero? If not, you don't know what you're missing!
There was a function named "updateSubscription" which was used to set a "newSubscription" which was then returned. Inside the very large function was a line which read "validation = isValid(<long list of conditions>);" - this was untyped TypeScript. Then later it read "if (validation) { isInvalid = false; return validation }".

I had thought that validation was a boolean and refactored out the logic before calling the function. But validation was actually an HTTP response object.

On Mon, Feb 17, 2020, 01:40 A. Lester Buck III <buck@...> wrote:

Well, that is just torturing us TDD newbies....

Can you share examples on the list? What does it mean for a function or idiom to lie about what they are doing in non-obvious ways?

Thanks!
Lester
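(To illustrate the kind of misleading code described above - this is a reconstructed sketch, not the actual code from that project:)

// `validation` reads like a boolean, but isValid() actually returns an HTTP-style
// response object, so `if (validation)` is a truthiness check on an object.
interface HttpResponse { status: number; body?: unknown; }

function isValid(...conditions: boolean[]): HttpResponse | undefined {
  // Returns an error response when any condition fails, undefined otherwise.
  return conditions.some(c => !c) ? { status: 400, body: "validation failed" } : undefined;
}

function updateSubscription(current: { plan: string }, requestedPlan: string) {
  let isInvalid = true;
  const validation = isValid(requestedPlan.length > 0, requestedPlan !== current.plan);
  if (validation) {     // truthy error object -- treating this as a boolean is the trap
    isInvalid = false;  // mirrors the confusing flag handling described above
    return validation;  // returns the error response, not the updated subscription
  }
  return { ...current, plan: requestedPlan };
}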
Hey
During the interview I explicitly asked:
- what should the output be when the input array is empty
- can the input array hold only a particular subset of ints (like non-negative only)
- is the input array sorted

And the responses were:
- for an empty input array, the output shall be true if and only if the given number is 0 (and false otherwise)
- the input array can hold arbitrary ints (positive, negative and zero)
- the input array is NOT sorted

I asked for the "expected" algorithm at the very end of the interview and I was told: "sort the input array and use 2 pointers". Had I known the "expected" algorithm from the very beginning, using TDD to implement it _could_ have driven me to different tests from the ones I wrote for my naive solution - moving 2 pointers over a sorted array requires a different set of tests (like boundary conditions) which are "specific" to this algorithm. What do you think?
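(For illustration only - not code from the interview, and the names are invented; this sketch also ignores the special empty-array/target-0 rule mentioned above. The sort-plus-two-pointers approach, together with the kind of boundary cases it pushes you to test:)

// Sort a copy, then walk inwards from both ends: O(n log n) overall.
function hasPairSumTwoPointers(values: number[], target: number): boolean {
  const sorted = [...values].sort((a, b) => a - b); // numeric sort on a copy
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo < hi) {
    const sum = sorted[lo] + sorted[hi];
    if (sum === target) return true;
    if (sum < target) lo++; // need a bigger sum: move the low pointer up
    else hi--;              // need a smaller sum: move the high pointer down
  }
  return false;
}

// Boundary-style cases this algorithm makes you think about:
console.assert(hasPairSumTwoPointers([3, 1, 2], 3) === true);   // unsorted input
console.assert(hasPairSumTwoPointers([-5, 1, 4], -4) === true); // negative numbers
console.assert(hasPairSumTwoPointers([2, 2], 4) === true);      // duplicates forming the pair
console.assert(hasPairSumTwoPointers([4], 8) === false);        // a single element cannot pair with itself
console.assert(hasPairSumTwoPointers([1, 2, 3], 6) === false);  // no pair reaches the target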
br JM