The State of AI Programming Going into 2026

Have LLMs become a mandatory programming tool?

Published on Dec 30, 2025 · 4 min read

Introduction

The internet is full of overblown takes about AI. Most people take an extreme stance, whether positive or negative, and it can be difficult to find nuance. But going into 2026, my stance on AI has shifted. I used to feel that choosing it was more of a trade-off; now, I believe that — in most cases — not choosing it is a mistake.

Before you dismiss this view as one of the extreme ends of the debate, allow me a moment to explain myself. I am not a card-carrying member of the AI hype machine. I have mixed feelings about many aspects of AI research and the business models of the most dominant AI companies: in particular, I am critical of these companies’ disregard for copyright; of their massive investment into data centres and the environmental implications of this; of the potential impact of AI on the job market; and of the effects of AI adoption in areas like education, social media, political campaigning and more.

Despite all these problems, it feels undeniable that some AI models — or, more specifically, LLMs — have become really good at programming, at least with popular languages and frameworks, to the point that I now see AI as a mandatory tool for programmers.

I have been working as a software engineer for the best part of a decade, and I was a relatively early adopter of AI coding tools. I’d describe myself as a curious sceptic; I could immediately see some use for the technology, but also frequently bumped into its limitations.

But those limitations are changing, and — because of this — so are my views on using AI for programming. In a previous article, I quoted a study that suggested AI use could lead to a 21% increase in developers’ productivity. This tallied with my experience better than claims of a 10x improvement.

My view on this has changed. The latest LLMs, such as Anthropic’s flagship Opus 4.5 model, feel a lot more capable. Using models like these, I believe there are times when AI can lead to multiplicative gains in productivity, provided certain conditions hold: mainly, that the task in question is one AI is genuinely good at, and that the person using the AI is skilled enough to get the most out of it.

Where AI Shines

Here’s a concrete example of where AI shines. In a mature codebase, one of the most painful tasks for a programmer can be upgrading old libraries. If the library is fundamental to the codebase, with a lot of breaking changes and complex relationships with other libraries, then the amount of work required can quickly spiral, as upgrading a critical library may require upgrading other libraries that interact with it. On top of this, documentation is often limited or hard to find — especially if you need to reference older versions. It’s the sort of task that can sit in a team’s backlog for months.

How did we deal with this before AI? Slowly, or sometimes not at all! When you worked out how to fix a particular roadblock — the hard but more interesting part — you often needed to apply this fix across a large number of files, which could become very tedious. Now, in the right hands, AI can make this process much faster. The sorts of upgrades that might’ve taken me a few days in the past are now being completed in an afternoon.

There is a knack to getting this level of productivity gain. For a complex upgrade, I find it helps to keep the scope of the AI fairly narrow, focusing it on one issue (or set of related issues) at a time. Having said that, I have occasionally had success with "just do it"-style prompts, though I think this works better on smaller codebases and more popular libraries.

Another AI superpower is creating single-use scripts. Previously, taking the time to develop a single-use script was sometimes difficult to justify, especially if there was a quicker (albeit messier) way of getting the same result manually. AI, however, is good at both options: it can churn through the repetitive manual work directly, or it can write a helpful script to automate it. For these tasks, the amount of context required is usually small, which leads to reliably high-quality results. I also don’t need to spend time worrying about code quality, style or architecture. I can run the script, validate the results, tweak if needed, and then bin it forever.
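To make that concrete, here’s a minimal sketch of the kind of throwaway script I mean. Everything in it is hypothetical (the directory, the field names and the task itself are invented for illustration), but it’s typical of the small, disposable Node scripts an LLM can produce in seconds:

```ts
import { readdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Hypothetical one-off task: rename a deprecated "userName" field to
// "username" across a folder of JSON fixture files, then report how
// many files were changed.
const dir = process.argv[2] ?? "./fixtures";
let changed = 0;

for (const file of readdirSync(dir)) {
  if (!file.endsWith(".json")) continue;

  const path = join(dir, file);
  const data: Record<string, unknown> = JSON.parse(readFileSync(path, "utf8"));

  if ("userName" in data) {
    data.username = data.userName; // copy the value to the new key
    delete data.userName; // drop the old key
    writeFileSync(path, JSON.stringify(data, null, 2) + "\n");
    changed++;
  }
}

console.log(`Updated ${changed} file(s) in ${dir}`);
```

I wouldn’t hold code like this to production standards. Its whole job is to run once (for example via npx tsx), have its output checked, and then be deleted.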

Experience Still Matters

To see these productivity gains, I feel it’s important to understand the AI’s output and direct it with precision, and so I don’t think the gains will be as high for less experienced programmers. If you’re learning something (whether as a junior developer or as an experienced developer picking up a new tool), it’s probably best to forgo the most productive approach in favour of one that uses AI more minimally: for example, by asking it questions, but preventing it from writing the code for you.

That said, a more hands-off approach may become increasingly viable. I was fascinated by David Bau’s blog post on vibe coding, where he describes having some success trusting the AI to do more and not "looking at all the code in detail", an approach that works because of a heavy focus on testing, and on testing the tests! This is something I’d like to experiment with more, but it still feels like a decent amount of experience is required to get a good outcome.

Conclusion

Overall, while I respect the craft-focused programmers who remain stubbornly opposed to adding AI to their toolkit, I believe the productivity gap is widening between those who use these tools and those who do not. There are, of course, some niche areas where AI output is likely to be weaker, as there simply isn’t enough training data to make it useful, and so this argument may not apply to people working in those areas. But for most of us, as we enter 2026, using AI tooling is no longer optional.

© 2026 Bret Cameron