
AI-Assisted coding helpful heuristics


A list of heuristics for AI-Assisted coding

Because LLMs are non-deterministic, and because we naturally tend to anthropomorphise them, it’s easy to see a cause-and-effect relationship between the approach used and the outcome, even when there is none.
Pareidolia in action?

That is why it is important to keep experimenting with different heuristics, apply critical thinking, and find out what really works for you, in context.

Take the heuristics listed below as potential experiments you can try out, or inspirations to come up with your own heuristics.

A) 1 goal => 1 chat => 1 commit

Keeping the context small and chats short and focused worked wonders. Each small goal = a new chat + a new commit at the end. This way, the task success rate went up, and conversations stayed sharp.
Pietro Maffi suggested the variation 1 feature => 1 chat => 1 commit, which is working well for him.

With models improving, I’m applying this exception: if the first task succeeds swiftly and efficiently, and the second task builds upon the work of the first, I can continue using the same chat for the second task (but not the third).

B) Feedback beats “perfect” prompting

I got 10x better results by feeding the LLM with real outputs, logs, and debug info. Even better: showing how to run the command that generates the feedback and where to find it. This consistently helped the LLM break out of a loop of bad guesses, faulty “logic” deductions, and non-working solutions.
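
For example, here is a minimal sketch in Python of gathering real feedback to paste into the chat (the test command and file name are just placeholders, not from the original post; adapt them to your own stack):

import subprocess

# Run the project's own test command and capture everything it prints.
result = subprocess.run(
    ["pytest", "-x"],  # placeholder: substitute your actual build/test/lint command
    capture_output=True,
    text=True,
)

# Save stdout and stderr together, so the raw output (not a paraphrase of it)
# can be pasted into the chat or attached as a file.
with open("llm_feedback.txt", "w") as f:
    f.write(result.stdout)
    f.write(result.stderr)

Telling the LLM which command produced this output, and where the file lives, lets it reproduce the feedback loop on its own.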

B.1) Corollary: Conversation beats “perfect” prompting

LLM responses can change over time due to their non-deterministic nature, the model’s evolution, or temporary quirks. Even the “perfect” prompt that worked miracles once may not work well next time. A conversation with its back-and-forth is much more likely to achieve the desired outcome.

C) 3 strikes => switch model

When even feedback loops failed, swapping models usually solved the problem faster than wrestling with the one stuck chasing its tail, often because of its limitations with the problem at hand or temporary quirks. It’s a good idea to keep a log of which tools or models fail specific tasks, noting the language, technology stack, etc. This helps to spot trends and derive insights.
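
For example (a purely illustrative format with made-up entries, not data from the original post), one line per incident is enough:

date, model/tool, task, language/stack, outcome
2025-04-02, model A, fix flaky integration test, Python/Django, failed after 3 attempts
2025-04-02, model B, same task, Python/Django, solved at the first attempt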

D) Follow my lead

When nothing else worked, I solved one instance manually and showed the LLM the pattern to follow. Asking it to replicate from my example worked well, especially for repetitive coding tasks.
It also worked well when I could point to specific parts of the existing codebase where the same or similar problems were already solved.

D.1) When going in circles, break it down

When the LLM keeps going in circles without finding and fixing the root cause of the problem, it helps to outline a specific strategy. After breaking down the problem into small incremental steps, feed the LLM an incremental strategy describing the steps, starting from zero and gradually building up toward the goal.

E) Know when to revert to manual coding

When, after attempting heuristics A to D, the LLM is still erratic and cannot produce a good-quality working solution, it is time to cut your losses. Do the work yourself. This will save you time and frustration.
This was inspired by comments from Dave Nicolette.

F) My scratchpad > the LLM’s plan

A simple text file scratchpad for my own work plan gives me speed, focus, and flexibility. It lets me edit and mix plans from multiple LLMs, bounce back after a crash, start a new chat anytime without losing the plan or the progress, and seamlessly switch LLMs in the middle of any task.
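
For illustration, such a scratchpad can be as simple as this (a made-up example, not from the original post):

GOAL: add CSV export to the monthly report page
[x] 1. export button in the UI            (chat 1, committed)
[ ] 2. backend endpoint returning CSV     <- current step, switched to model B
[ ] 3. end-to-end test of the download
notes: model A kept mangling the date format; raw test output saved in llm_feedback.txt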

G) Keystone question > persona

Instead of “pretend you’re a <role>…,” I start with a specific central question about the core of the task at hand. Framed with the relevant details and in pertinent language, this sets the stage for the LLM better than a persona does.

H) Chat context handoff

This is an auxiliary heuristic for heuristics A and C, useful whenever the LLM starts to drift and/or the chat’s context window becomes too large and polluted. Use an “Extraction prompt” in the current chat to create a summary of key info to be handed off to a new chat, with something like:
<< You’re handing this task off to another engineer. Write a summary of … This engineer has no context, include whatever will help them>>
This heuristic was suggested by Piergiorgio Grossi and Pete Hodgson.

I) Brute force sandbox (AKA don’t let LLMs shit all over your code)

When using brute force methods like Vibe coding and multi-agent swarms, the genie may splatter the code with low-quality, unnecessary, and even buggy changes. Use a sandbox copy of your code to do the work and, in the end, extract the working solution found into your clean code, in a controlled way.
This heuristic (not the colourful name 😂) was inspired by a post from Kent Beck.
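One possible way to set up such a sandbox with plain git (just a sketch; the branch and path names are arbitrary): create a disposable working copy on its own branch with git worktree add -b llm-sandbox ../llm-sandbox, let the agent work only inside ../llm-sandbox, review the result with git diff main...llm-sandbox, bring just the changes you want back into your clean checkout with git checkout llm-sandbox -- <paths>, and finally discard the sandbox with git worktree remove ../llm-sandbox. A separate clone or a container works just as well; the point is that the messy exploration never touches your mainline code.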

J) Document the hard stuff. Let the LLM do the rest.

Focus on documenting the stable, foundational knowledge that’s tough for an LLM to infer correctly. For everything else, let the LLM do the heavy lifting from the live codebase.

K) Share learnings and retrospect with a fellow human.

After a difficult task that required multiple attempts, share your learnings with someone else. The other person’s insightful questions and your own reflections will likely lead to even more learning. Thanks to Matteo Vaccari for asking insightful questions leading to this heuristic.

L) Do not multi-task.

When the LLM takes time to finish the task, it is easy to get distracted and focus on other tasks. Multi-tasking like this often leads to costly mistakes, and the repeated context switching can be exhausting. An alternative is to stay focused on the current task: making sure all the file changes of the last successful step have been committed or at least staged, preparing the prompt for the next step, updating your own to-do plan notes, and keeping an eye on the agent and its file changes to spot whether it got stuck in a loop or is going off on a tangent.

