Studies show that the productivity impact of AI-assisted coding can even be negative. A “perception gap” can make developers feel more productive even when their output drops.
Here, I explore ways to pursue real positive productivity gains.
Some studies show an AI-assisted coding productivity gain of around 20%. One is the Google study (*).
Others show a loss of 20%. One is the METR study (**).
The METR study interestingly reveals a ‘perception gap’: many developers, while actually slowed down, still believe they are more productive.
The 40% gains estimated by some economists and AI experts, and the 10x gains mentioned by some AI enthusiasts, have yet to be captured by any study.
This leaves software developers searching for ways to minimise the productivity losses and maximise the gains.
And it leaves organisations searching for ways to support their teams without falling for false promises.
We’ve all had that friend who only brags about their big wins at the casino, never their losses.
The hype in online conversations about the miracles of AI-assisted coding can often feel much the same.
AI hype, fluctuating AI-assisted coding productivity, and the ‘perception gap’ make it hard for organisations and developers to know what actually works.
So while AI models, tools and IDEs are rapidly improving, and organisations and developers are still learning how best to use them, what can they actually do to maximise the productivity gains and avoid being slowed down?
A few times, I noticed it actually took longer to code using the AI than to just manually code the solution myself.
This happened most often when I picked up a task that was too large without breaking it down. In those cases, the AI coding assistant did a great job on 90% of the task, then got stuck in a loop of bad guesses, flawed “logic”, and non-working solutions. To salvage that 90% I would pour in extra time, and in most cases still fail to make the AI produce a working result.
More generally, reflecting on these productivity losses, I realised a single wrong decision was often enough to derail my effort.
This reflection pushed me to track my errors and find ways to avoid repeating them. By stopping to reflect after every AI-assisted coding session, I gradually distilled a set of heuristics that improved my success rate and reduced the instances where AI slowed me down.
I have recently posted this list of heuristics, inviting others to compare, comment, and engage in a discussion that fuels continuous learning. A quick caveat: these heuristics are context-dependent and the environment is evolving rapidly, so treat them as inspiration for your own experiments, not as recipes to copy-paste.
In terms of volume of work, these heuristics cover the largest part of the SDLC.
One of those heuristics is worth highlighting: know when to revert to manual coding.
A Google article (***) makes a similar point: “it is important to find a balance between the cost of review and added value”. In other words, one must weigh the time spent babysitting the AI against the speed gains actually achieved with the AI.
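As a rough way to reason about that balance, here is a minimal sketch of my own, with purely hypothetical numbers (not figures from the Google article): the AI pays off only when the time spent prompting, reviewing and correcting its output stays below the time the manual alternative would have taken.

```python
# Rough break-even sketch for the "cost of review vs. added value" trade-off.
# All numbers are hypothetical illustrations, not measurements.

def net_gain_minutes(manual_estimate: float,
                     prompting: float,
                     reviewing: float,
                     correcting: float) -> float:
    """Return the time saved (positive) or lost (negative) by using the AI."""
    ai_total = prompting + reviewing + correcting
    return manual_estimate - ai_total

# A small, well-scoped task: the AI clearly pays off.
print(net_gain_minutes(manual_estimate=120, prompting=10, reviewing=20, correcting=15))   # +75

# An oversized task where the last 10% loops endlessly: reverting to
# manual coding earlier would have been cheaper.
print(net_gain_minutes(manual_estimate=240, prompting=30, reviewing=60, correcting=210))  # -60
```

Framed this way, the “revert to manual coding” heuristic is about noticing early when the correction effort is about to push you past the break-even point.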
This trade-off captures the crux of the matter. The rest of the heuristics cover a wide range of common situations.
See this post for the full list of heuristics (link).
I recently shared thoughts on strategies for adopting AI-assisted coding at the organisational level. A well-crafted strategy is an antidote to chasing unrealistic promises and to following costly recipes that distract from the work that has a real chance of delivering benefits.
While this strategy focuses on the environment in which AI-assisted coding adoption happens, more than on the actual ways of working (such as the heuristics mentioned above), it still plays a decisive role in determining success.
For a deeper dive, see this post on the organisational strategy for AI-assisted coding adoption (link).
The cases below are the ones that surprised me the most: AI-assisted coding saved me days of work, reducing tasks to minutes or hours. Without the help of AI, or of an expert at hand, I would have been stuck for so long that I might not have attempted the endeavour at all.
In other words, AI was the key enabler. It turned problems that felt impossible or too costly into challenges I could tackle quickly and effectively.
Below is the list of those cases:
Learning
Working outside the comfort zone
Despite the big gains, these types of tasks represented a small portion of the volume of work across the whole SDLC. This may depend heavily on the context.
For example, AI assistance may play an even bigger role as an enabler for:
At the other end of the spectrum, I’ve found AI far less helpful when it comes to making definitive strategic decisions such as:
The reason may be that these decisions are inherently more nuanced. They may depend on unarticulated needs and subtle preferences, and on a broader perspective that draws on input from multiple stakeholders, which AI simply doesn’t have access to.
___________________
(*) Google study paper: https://arxiv.org/abs/2410.12944
(**) METR study paper: https://arxiv.org/abs/2507.09089
METR blog post: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
(***) Google article: https://research.google/blog/ai-in-software-engineering-at-google-progress-and-the-path-ahead/