Does an AI assistant help with learning/using TDD?
Here in the middle of 2023 we see an AI bubble in the stock market, launched by OpenAI's release of ChatGPT. One cannot turn on an info feed without hearing about AI these days. It is hotter than... Crocs at the beach in Boca Chica, Texas. Have a look at this article and let me know if it helps with learning and practicing testing as a first-class responsibility of development. |
|||||||||||
Of course, the snippet used here is (intentionally) quite simple, and probably nothing like a real-world scenario. Generating insights from an LLM above and beyond small “toy” scenarios (more like PoCs than anything else) still will require a lot of work to guide the LLM down the path you want to go, and won’t substitute for the skill and knowledge of a developer any time soon.
-- Mike Emeigh
Sent from Gmail Mobile |
|||||||||||
a) Yes, I had ChatGPT resolve some nasty CSS interaction issues. It was conversationally incremental and reasonably effective.
b) No, I have not tried to get it to help with TDD. I wonder about the converse: can TDD be used (and sold in this context) to help an AI assistant generate effective code?

Cheers,
Jeff
|
|||||||||||
Re: the converse, "Can TDD be used (and sold in this context) to help an AI assistant generate effective code?" I used ChatGPT to generate some good user stories for a test case, a travel app, and found it quite an acceptable PO assistant. See details: It took very little knowledge/work to generate some user stories. I would assume these could easily be fed to a team building the actual app and, with a bit of team backlog refinement work, be ready for Sprint planning. Now could we ask the AI to produce the set of unit tests for a user story, then ask it to produce the production code? That is an experiment worth running... several times, to learn. Thanks for a great idea! David
|
|||||||||||
On Thu, 22 Jun 2023 at 21:36 Jeff Langr <jeff@...> wrote:
I have used GitHub Copilot in my hobby project. It's JavaScript and I don't know the language well, so it often suggests code that's better than what I would write. It does work with TDD, sort of. The main issue is that it will suggest production code that does more than is strictly needed for the current test, often guessing the complete function in a single shot. Then I have the choice to delete it and write the bare minimum, or to keep it and ask Copilot to write the missing tests.
|
|||||||||||
All of my attempts to use ChatGPT to help with real coding problems have resulted in failures for me. Test scenarios with well-known problems like FizzBuzz have been fun, and I was amazed at the good results originally. However, I either lack the prompting skills or the patience to use the AI for real problems. My biggest problem has been imagined APIs that don't really exist, or APIs that exist but don't actually do what ChatGPT imagines they should do. That's the problem with probabilistic answers: they're only probably right. On Fri, Jun 23, 2023, 14:21 Matteo Vaccari <matteo.vaccari@...> wrote:
|
|||||||||||
My experience is different. I had GPT or Copilot help me with some relatively hairy algorithms; it saved me a ton of time. E.g., loading a bunch of bitmaps in JavaScript and then executing a callback when the last has finished loading. A moderately skilled dev can do this blindfolded, I suppose, but the complicated contortion I was going to try on my own would have taken me off track for long. And it was much faster than searching Stack Overflow. Another instance was when it suggested that I reset a variable I was going to forget to reset. Now when I need something even a little complicated I start by writing the comment for what it's supposed to do; when it works it's a lot of fun. Working alone is boring; I keep repeating the same coding patterns I am used to. With Copilot I'm curious to see what it proposes. On Fri, 23 Jun 2023 at 15:18 Avi Kessner <akessner@...> wrote:
|
|||||||||||
Have you used an AI assistant to do any dev work? Yes, I made an attempt, with very little knowledge of the business, to test whether it can help bootstrap projects without business experts. The idea was to have something to show to attract business experts, and to see whether it can enlighten me when I don't understand the business very well. I used LangChain with the GPT-4 API, and it seems to work well. Have you used an AI assistant to help in TDD? No, I've not found a good tool for that. My attempts with Copilot were not successful: first, I don't use VS Code and GitHub, and in the end it takes more time than doing the job yourself. I'm watching, in the hope I'll have enough time to try their "fill in the middle" feature for TDD someday. Has anyone tried StarCoder? Any feedback is welcome. Thanks, Gregory |
|||||||||||
On Fri, Jun 23, 2023 at 8:21 AM Matteo Vaccari <matteo.vaccari@...> wrote:
I can imagine using ChatGPT in the same way that I have in the past
used CodeWars: to read small bits of other people's code as a mechanism
to help me learn a language incrementally, especially its idioms. I have not yet tried it with any serious effort at building something I care about keeping.

Replies from this account routinely take a few days, which allows me to reply thoughtfully. I reply more quickly to messages that clearly require answers urgently. If you need something from me and are on a deadline, then let me know how soon you need a reply so that I can better help you to get what you need. Thank you for your consideration.

-- J. B. (Joe) Rainsberger :: Teaching evolutionary design and TDD since 2002 |
|||||||||||
I did an interesting experiment with ChatGPT recently, going through a BDD cycle with it. A worry I have about the use of these tools is that there will be a tendency to trust that code does what was intended without ever verifying that this is the case. Something we are quite familiar with in TDD, but aggravated by the ease of generating complex chunks of code. I was thinking the answer was in a feedback loop, and tried building in a familiar one. It was only a small experiment, though I'm eager to spend some time on a more extensive one. The flow was roughly:

- Describe a feature, including some of the business/application it fits in
- Ask GPT to generate Gherkin illustrative scenarios for the feature (getting to illustrative, explicit examples took a few tries, but it got there)
- Verify that the scenarios are complete and minimal (complete: we think so; minimal: it came up with unrelated requirements, so we needed to tweak that; it felt very much like a normal session with a PO and team)
- Ask GPT to extract a domain language from the description (definitions, to make it easy to feed into later sessions)
- Ask it to generate the glue code for the Gherkin scenarios
- Ask it to implement the code that would satisfy the scenarios

This got results that worked, and it did not produce any code that did things we did not ask for, which I thought was pretty impressive. That statement rests on a check by me, though. It might be a good cautious step to ask GPT to ensure there's no code not exercised by the tests. We did a second run through this, extending the feature (we used a simple shopping basket example, extending it with taxes and discount codes). This worked best by feeding in the generated domain language and scenarios, and then taking it from there. In its current form, I could see limitations simply from the amount of memory available to the model, though that might prompt (heh) the user to think well about modularisation.
All in all, I liked the process, and it went pretty fast once we figured out how to ask for the right type of response. Wouter On Mon, Jun 26, 2023 at 3:20 PM J. B. Rainsberger <me@...> wrote:
|
|||||||||||
Hi there, A recent article I found quite interesting is the one by Roberto Ostinelli: he ran a very insightful experiment using AI as a pair programming companion. He set up an experiment where two AI chatbots pair on a kata autonomously, one responsible for writing the next test and the other responsible for making it pass. Very impressive and fun to watch. It's not a real TDD cycle (the refactor step is left to the end of the session), but I find the possibilities of using these tools as companions while coding interesting. Coming back to the original question: I'm experimenting with ChatGPT as a conversational tool to help me clarify my intents, lay down more clearly the goals that I want to achieve, and identify all the constraints of the problem that I want to solve, which is something I benefit from when doing TDD, especially when I ask myself "what is the next small problem that I want to solve?". Cheers Pietro |
|||||||||||
This paper () shows that these models are still quite bad at writing good tests. Nevertheless, as an assistant, maybe they can help. We're about to start more formal experiments around this topic.

Cheers,
-- Maurício Aniche
Author of Effective Software Testing: A Developer's Guide

On Wed, Jul 19, 2023 at 10:56 PM David Koontz <david@...> wrote:
|
|||||||||||
What interests me more is whether the LLM can write code based on the tests I write, rather than generating tests for the code I write.

brought to you by the letters A, V, and I and the number 47

On Thu, Jul 20, 2023 at 9:42 AM Mauricio Aniche <mauricioaniche@...> wrote: |
|||||||||||
Thanks for the scientific report on AI generation of unit tests, Mauricio; very interesting.

David Koontz
Email: david@...
(360) 259-8380
http://about.me/davidakoontz

On Jul 20, 2023, at 5:40 AM, Avi Kessner <akessner@...> wrote:
|