What I’ve learned so far coding with an LLM genie

A list of heuristics for AI-assisted coding

Because LLMs are non-deterministic, and because we naturally tend to anthropomorphise them, it is easy to see cause and effect between the approach used and the outcome, even when there is none.
Pareidolia in action?

That is why it is important to keep experimenting with different heuristics, apply critical thinking, and find out what really works for you, in context.

Take the heuristics listed below as potential experiments you can try out, or inspirations to come up with your own heuristics.

A) 1 goal => 1 chat => 1 commit

Keeping the context small, and chats short and focused, worked wonders. Each small goal = a new chat + a new commit at the end. This way the task success rate went up, and conversations stayed sharp.
Pietro Maffi suggested the variation 1 feature => 1 chat => 1 commit, which is working well for him.

B) Feedback beats “perfect” prompting

I got 10x better results by feeding the LLM with real outputs, logs, and debug info. Even better: showing how to run the command that generates the feedback and where to find it. This consistently helped the LLM break out of a loop of bad guesses, faulty “logic” deductions, and non-working solutions.
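As a minimal sketch of wiring up such a feedback loop (the function name, stand-in command, and log path are all hypothetical, not from the article), the idea is simply to run the real command and capture its output verbatim for pasting back into the chat:

```python
import subprocess

def capture_feedback(cmd, log_path="llm_feedback.log"):
    """Run a command and save its combined stdout/stderr, so the real
    output can be pasted back into the chat as concrete feedback."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    feedback = result.stdout + result.stderr
    with open(log_path, "w") as f:
        f.write(feedback)
    return feedback

# Stand-in for a real test or build command:
out = capture_feedback(["python", "-c", "print('2 passed, 1 failed')"])
print(out)
```

Telling the LLM the exact command (here the placeholder list passed to `capture_feedback`) and the log file location lets it ask for fresh feedback itself instead of guessing.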

B.1) Corollary: Conversation beats “perfect” prompting

LLM responses can change over time due to their non-deterministic nature, the model’s evolution, or temporary quirks. Even the “perfect” prompt that worked miracles once may not work well next time. A conversation with its back-and-forth is much more likely to achieve the desired outcome.

C) 3 strikes => switch model

When even feedback loops failed, swapping models usually solved the problem faster than wrestling with a model stuck chasing its tail, whether because of its limitations with the problem at hand or because of temporary quirks.

D) Follow my lead

When nothing else worked, I solved one instance manually and showed the LLM the pattern to follow. Asking it to replicate from my example worked well, especially for repetitive coding tasks.
It also worked well when I could point to specific parts of the existing codebase where the same or similar problems were already solved.

E) Know when to revert to manual coding

When, after attempting heuristics A–D, the LLM is still erratic and cannot produce a good-quality working solution, it is time to cut your losses. Do the work yourself. This will save you time and frustration.
This was inspired by comments from Dave Nicolette.

F) My scratchpad > the LLM’s plan

A simple text-file scratchpad for my own work plan gave me speed, focus, and flexibility. It lets me edit and mix plans from multiple LLMs, bounce back after a crash, start a new chat anytime without losing the plan or the progress, and seamlessly switch LLMs in the middle of any task.
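Such a scratchpad can be as simple as this (an invented example, not the author’s actual file):

```
GOAL: add CSV export to the reports page
[x] 1. extract report data into a serialisable structure
[ ] 2. add the export endpoint        <- current chat, model B
[ ] 3. wire up the download button
NOTES: model A suggested streaming the file; parked for now
```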

G) Keystone question > persona

Instead of “pretend you’re a <role>…,” I start with a specific central question about the core of the task at hand. Framed with the relevant details and in pertinent language, this sets the stage for the LLM better than a persona does.
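For instance (a made-up illustration of the contrast):

```
Persona opener:   "Pretend you're a senior database engineer..."
Keystone opener:  "Why would this query switch from an index scan to a
                   sequential scan as the table grows? Schema and
                   EXPLAIN output below."
```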

H) Chat context handoff

This is an auxiliary heuristic for heuristics A and C, and for whenever the LLM starts to drift and/or the chat’s context window becomes too big and polluted. Use an “extraction prompt” in the current chat to create a summary of the key info to be handed off to a new chat, with something like:
<< You’re handing this task off to another engineer. Write a summary of … This engineer has no context; include whatever will help them. >>
This heuristic was suggested by Piergiorgio Grossi and Pete Hodgson.

I) Brute force sandbox (AKA don’t let LLMs shit all over your code)

When using brute-force methods like vibe coding and multi-agent swarms, the genie may splatter the code with low-quality, unnecessary, and even buggy changes. Do the work in a sandbox copy of your code and, at the end, extract the working solution into your clean code in a controlled way.
This heuristic (not the colourful name 😂) was inspired by a post from Kent Beck.
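One way to set up such a sandbox is simply to copy the working tree aside. The sketch below is a hypothetical illustration (names invented); in a git repository, a throwaway branch or `git worktree add` achieves the same isolation:

```python
import pathlib
import shutil
import tempfile

def make_sandbox(repo_path):
    """Copy the project into a throwaway directory where the LLM can
    experiment freely without touching the clean code."""
    sandbox_root = pathlib.Path(tempfile.mkdtemp(prefix="llm-sandbox-"))
    dest = sandbox_root / pathlib.Path(repo_path).name
    shutil.copytree(repo_path, dest)
    return dest

# Example: sandbox a tiny fake project, let the genie loose in it,
# then cherry-pick only the files that actually work back by hand.
project = pathlib.Path(tempfile.mkdtemp()) / "myproject"
project.mkdir()
(project / "app.py").write_text("print('hello')\n")
sandbox = make_sandbox(project)
print(sandbox / "app.py")
```

The controlled extraction at the end is the point: the sandbox absorbs the mess, and only reviewed changes make it back into the clean copy.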

J) Document the hard stuff. Let the LLM do the rest.

Focus on documenting the stable, foundational knowledge that’s tough for an LLM to infer correctly. For everything else, let the LLM do the heavy lifting from the live codebase.

________________________________

👉 How about you?
Have you tried a similar heuristic?
Did it work or not?
In which context?
What else have you tried?
