Are We Letting AI Code for Us — and Killing Our Skills?
The trade-off between mastery and speed

Published on Jun 30, 2025 · 3 min read
Are we sacrificing too much thinking time? Photo by Juan Rumimpunu on Unsplash.
With so many overblown headlines, it’s difficult to get a clear grasp of the productivity improvements that AI coding tools like ChatGPT, GitHub Copilot and Cursor bring.
At work, I lead a team of software engineers (and still actively contribute code), and my finger-in-the-air estimate is that these AI tools bring an improvement closer to 10% than the 10x claimed by some (for example: here, here, here, here and here). A Google study lands nearer my figure, finding an average 21% improvement in developer productivity for users of AI.
I am not in the strongly anti-AI camp. In fact, I believe these tools have made me and others on my team — most of them senior software engineers — more productive. But at the same time, I have become concerned about the long-term effects of being overly dependent on AI tooling. In the long run, will it make us worse at writing software? Does it deprive us of practice, making us learn more slowly?
We don’t know the answer for certain, but a study on essay writing suggests that over-reliance on AI may cause our skills to atrophy. A draft paper published earlier this month by MIT suggests that — when writing an essay — those using ChatGPT ‘consistently underperformed’ compared with those using a search engine or nothing at all, and ‘significantly underperformed’ when asked to quote from the essay they had just produced.
Overall, those who didn’t use ChatGPT ‘engaged more extensive brain network interactions’; not surprisingly, perhaps, the brains of the non-AI group worked harder. Writing code is different enough from writing essays that we should be hesitant to draw too many conclusions from this study, but it is cause for concern.
Of course, how we use AI tools matters — a lot. Presumably, if we make sure we understand every line of code generated for us, we’re doing better than someone who blindly trusts the output. The AI-assisted essay writers in the study seemed to delegate most of the thinking to ChatGPT, and we don’t have to use AI tools that way.
Nevertheless — when taking suggestions from AI — we become more like code reviewers and less like active creators. Does this matter? I couldn’t find any studies addressing this particular question, but I suspect that, although code review certainly teaches the reviewer something, the learning is unlikely to be as deep as it would be if we had written the code ourselves.
On the flip side, AI tooling can be a significant time-saver. In the past, I’d probably have gone to StackOverflow to grab a good implementation of a debounce function, as I’d likely find something better than what I could come up with myself. Now, Cursor autocompletes it for me, as in the sketch below. (And, for the record, I think that’s fine: you don’t need a perfect debounce function committed to memory to be a good software engineer.)
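For anyone who hasn’t written one recently, here’s roughly what I mean: a minimal debounce sketch in TypeScript, not the exact code Cursor produces.

```typescript
// A minimal debounce sketch: delays calls to `fn` until `waitMs`
// milliseconds have passed without another call. Real-world versions
// often add extras like leading-edge firing and cancellation.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage (`search` is a hypothetical function): the handler only fires
// once the user has stopped typing for 300ms.
// const onInput = debounce((query: string) => search(query), 300);
```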
Occasionally, I trust Cursor with larger tasks, such as writing unit tests for a particular chunk of code. The output almost always needs tweaking but — if I can provide an example of something similar — the result is usually a decent starting point, and it saves time I’d otherwise have spent writing boilerplate or bending an existing test file to fit a new purpose. The sketch below shows the flavor of test I mean.
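To make that concrete, here’s a hypothetical Vitest-style test for the debounce sketch above (Vitest, the file layout, and the test itself are my illustration, not a record of what Cursor generated):

```typescript
import { describe, expect, it, vi } from 'vitest';
import { debounce } from './debounce'; // the sketch above, assumed exported

// A hypothetical test of the kind I let Cursor draft and then tweak.
describe('debounce', () => {
  it('invokes the wrapped function once the wait has elapsed', () => {
    vi.useFakeTimers();
    const fn = vi.fn();
    const debounced = debounce(fn, 300);

    debounced();
    debounced(); // a rapid second call resets the timer
    expect(fn).not.toHaveBeenCalled();

    vi.advanceTimersByTime(300);
    expect(fn).toHaveBeenCalledTimes(1);

    vi.useRealTimers();
  });
});
```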
For professional programmers, productivity matters and, particularly if we’re senior, we’re being measured on both the quality and quantity of our output. But learning matters too. Whether LLMs’ coding ability plateaus or continues to grow, I believe that there will always be a market for human beings who deeply understand code. If we want to be (or continue to be) those people, we should think carefully about how much of our thinking we delegate to AI.
As so often, balance is probably the best approach. Throwing out AI tools entirely may do more harm than good, especially in a job market where “AI skills” are often high on the agenda. But it’s important to know when and how to use them: don’t delegate away too much of the important thinking.