How has it worked out so far?
Prehistory
"Before Claude 3.7"
The age of code completion
- Plugins like Copilot, Tabnine, Supermaven, Augment Code (the ones I tried)
- Cursor, Windsurf up and coming
But I was a heavy JetBrains IDE user
- JetBrains is brilliant at structural and semantic code understanding
- "Non-AI" code completions are already really, really good
→ Not much benefit from using AI
The Start of History: Claude 3.7
- The LLM appears to be smart enough
- Tools are good enough
- The trajectory is clear
→ Must explore it myself for hands-on experience
Vibe Coding
Letting AI loose to build a whole system based on a single prompt
vs.
Owning every line of code, including AI generated
Adapting the Requirements
Finding the right level of requirement detail: enough for Claude to get it, while still less effort than writing the code itself
[NEW MODULE]
[ELEMENT 1]
[DETAIL 1.1]
[DETAIL 1.2]
[ELEMENT 2]
[DETAIL 2.1]
[DETAIL 2.2]
[DETAIL 2.2.1]
[DETAIL 2.2.2]
[DETAIL 2.2.3]
[DETAIL 2.3]
[ELEMENT 3]
...
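As a hypothetical illustration of that level of detail (the module and every detail below are invented, not from a real project), a filled-in version of the template might read:

```markdown
New module: CSV export
- Export button on the report page
  - Disabled while the report is still loading
  - Starts a background job; the user is notified on completion
- File format
  - UTF-8, comma-separated, with a header row
  - Dates in ISO 8601
- Error handling
  - Failed jobs are retried once, then the error is surfaced to the user
```

Roughly this granularity: concrete enough that the model rarely has to guess, but still far shorter than the code it produces.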
The Return of the Command Line
😰 SCARY! Letting AI rummage around on my computer!
But...
- Fast
- Advantageous access to LLMs
Use All of Them!
(Illustrative)
- Select a primary and use its "$200" subscription to do the work
- Also subscribe to others for reviews and for "unblocking" the primary
Your mileage may vary; as a freelancer, this works well for me:
- Primary: ChatGPT with Codex
- Secondary: Claude with Claude Code
- PAYG: Gemini with Gemini CLI
The New Era: GPT-5
Just feels "sensible":
- Adheres to repo standards
- Stays focused
- Communicates concisely and professionally
- Nearly repeatable results
Feature Documents
The interface between you and the LLM.
Must include:
- Current technical content (that we are changing)
- Updated technical content (where we are going)
- Action plan
Docs First!
Each time the context window is compacted, the docs might be re-read by the LLM.
Having lost most of its memory, it can easily get confused.
Prioritise doc updates in the action plans to lock in the direction of travel.
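A minimal sketch of such a feature document (the section names are my own, not a prescribed format), with the doc update placed first in the action plan so the direction of travel survives a context compaction:

```markdown
# Feature: <name>

## Current technical content
How the relevant code works today (what we are changing).

## Updated technical content
How it should work after the change (where we are going).

## Action plan
1. Update this document and any related docs to describe the target state
2. Implement the change
3. Update the tests
4. Review against repo standards
```

If the session is compacted mid-task, re-reading this one file restores both the destination and the remaining steps.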