AI is a powerful tool for coding and solving problems. How do we interact with it and trust it, now and into the future?
| Resource information | Details |
|---|---|
| Article Title | Why AI Isn't Ready to Be a Real Coder |
| Author | Rina Diane Caballar |
| Focus | Agentic AI, LLMs, software productivity |
| Presentation Language | English |
Years ago a coworker walked into my office, sat down, and started describing a problem he was having with a system he was building. He described what he was trying to do, what he expected to happen, and what was actually going wrong. After a few minutes of talking through his situation, he paused mid-sentence, said "Oh right, that's it, I know what I've got wrong," then left. All this happened without me saying a single word. He came in, talked through his problem, solved it himself, then left. Though we didn't know the term at the time, this is the very idea behind "Rubber Duck Debugging," and in this instance, I was the rubber duck.
"Rubber Ducking" as it is sometimes referred to, takes advantage of the fact that verbalizing a problem causes you to structure it in a way such that you gain a deeper understanding of it and any issues or problems you may be having with it. The idea is that you place an actual rubber duck on your desk and talk to it as you work through a problem. The rubber duck itself is not required, as this is a general problem-solving technique, but it can help and be fun.
Now that most, if not all, of us are writing prompts for AI LLMs, sometimes several for a single problem, the question arises: is AI the new rubber duck? And how many of us have essentially solved our own problem before we've even hit "Send"?
A recent IEEE article by Rina Diane Caballar, titled "Why AI Isn't Ready to Be a Real Coder," touches on this and many other interesting ideas about the current state and possible future of AI as a coding assistant, drawing on a paper presented at the 2025 International Conference on Machine Learning.
It starts by observing that AI-powered software development has yet to reach “the point where you can really collaborate with these tools the way you can with a human programmer.” Further, and somewhat related to the rubber duck approach, the article points out that “If it takes longer to explain to the system all the things you want to do and all the details of what you want to do, then all you have is just programming by another name.” It also cautions that “We’re adapting to the tool, so instead of the tool serving us, we’re serving the tool. And it is sometimes more work than just writing the code.”
In fairness, many of the challenges the paper raises will likely be solved relatively quickly, most plausibly by agentic AI running a continuous improvement loop, perhaps with genetic algorithms driving that loop and selecting the best candidate solutions, as sketched below.
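To make that idea concrete, here is a minimal sketch of such a loop under stated assumptions: candidate code fixes are scored by a fitness function (say, the fraction of tests they pass), the fittest survive each generation, and mutated variants refill the population. The names here (`generate_patch`, `fitness`, `improvement_loop`) are hypothetical stand-ins, not anything from the paper; in a real system the mutation step would be an LLM re-prompted with failure output, and the fitness score would come from running the project's test suite.

```python
import random

def generate_patch(parent: str) -> str:
    """Mutate a candidate solution (stub for an LLM rewriting `parent`)."""
    return parent + random.choice(["a", "b", "c"])

def fitness(candidate: str) -> float:
    """Score a candidate (stub for, e.g., the fraction of tests passing)."""
    return candidate.count("a") / max(len(candidate), 1)

def improvement_loop(seed: str, generations: int = 10, pop_size: int = 8,
                     survivors: int = 2) -> str:
    """Genetic-style loop: keep the best candidates, mutate to refill."""
    population = [seed]
    for _ in range(generations):
        # Select the fittest candidates from the current generation.
        population.sort(key=fitness, reverse=True)
        best = population[:survivors]
        # Refill the population with mutated offspring of the survivors.
        population = best + [generate_patch(random.choice(best))
                             for _ in range(pop_size - survivors)]
    return max(population, key=fitness)

if __name__ == "__main__":
    print(improvement_loop("draft"))
```

Whatever replaces the stubs, the skeleton stays the same: propose, score, select, mutate, repeat.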
Finally, trust remains a big issue, and humans still need to be in the loop. "That team dynamics—when an AI agent can become a member of the team, what kind of tasks will it be doing, and how the rest of the team will be interacting with the agent—is essentially where the human-AI boundary lies."