Notes on building a personal AI operating environment.
I write about what I have built around Anthropic's Claude Code: memory, hooks, safety layers, infrastructure, and the specific patterns that have earned their place. Every post is grounded in something concrete. No speculation about the future of AI.
Techniques
All techniques →

The five skills that have shaped how I work. Each gets a short page explaining what it does and why it earned a permanent slot.

Persistent task runner. Retry loop with email fallback so "email me when done" actually means that. Read →

Claude, Gemini and GPT-5.4 on the same question in parallel. Blind round, then informed rounds. Receipts from months of use. Read →

Nightly consolidation that promotes useful session insights into the canonical topic files. The memory layer that stops rotting. Read →

Manufacture a task-specific context file before the task starts. Sub-session reads only that file. Stops context-window guessing. Read →

Twenty-three numbered patterns of how I broke my own system. Checked against every non-trivial change before it ships. Read →

Featured essays
A personal AI operating environment: worked example and receipts
What happens when one person uses an AI coding assistant as the primary interface to a real physical and operational life, and systematically fixes every failure that occurs along the way.
Six layers of defence for an AI agent over a 3D printer
The printer-safety architecture I now run, the specific incidents that produced each layer, and why the pattern generalises beyond 3D printing.
Lessons as code: turning postmortems into pre-flight checks
A file I read at the start of every session, twenty-three numbered patterns of how I have broken my own system, and the pre-flight skill that checks proposed work against them. The pattern is the most portable thing on this site.
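The pre-flight idea can be sketched in a few lines. To be clear about what is assumed: the `LESSONS.md` file name, the numbered-line format, and the `[keywords: …]` tags are my illustration, not the author's actual implementation.

```python
# Hypothetical sketch of a "lessons as code" pre-flight check.
# Assumes a LESSONS.md where each lesson is a numbered line like:
#   12. Never edit cron entries without a dated backup [keywords: cron, schedule]
import re
from pathlib import Path


def load_lessons(path: str = "LESSONS.md") -> list[tuple[int, str, list[str]]]:
    """Parse numbered lessons and their trigger keywords from the lessons file."""
    lessons = []
    for line in Path(path).read_text().splitlines():
        m = re.match(r"\s*(\d+)\.\s+(.*?)\s*\[keywords:\s*(.*?)\]\s*$", line)
        if m:
            keywords = [k.strip().lower() for k in m.group(3).split(",")]
            lessons.append((int(m.group(1)), m.group(2), keywords))
    return lessons


def preflight(change_description: str, lessons) -> list[tuple[int, str]]:
    """Return every lesson whose trigger keywords appear in the proposed change."""
    desc = change_description.lower()
    return [(n, text) for n, text, kws in lessons if any(k in desc for k in kws)]
```

The point of the pattern is that the check runs before the work, not after the breakage: any matched lesson is surfaced to the session before the change proceeds.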
Recent writing
All writing →

- Story: 918MB, an Ofsted inspection, and a governor who is not a developer. My kids' school was rated Requires Improvement and facing re-inspection. The evidence base was 1,650 files and 918 megabytes. No governor was going to read all of it. So we built a tool that could.
- Story: Building from my phone while watching the kids. The five-step evolution of how I reach the development environment on the Mac Mini from an iPhone in a playground. The useful insight is that where you build shapes what you build.
- Essay: Context as a first-class artifact: the /deep-context pipeline. Stop hoping that relevant information will fit in the context window. Start manufacturing a task-specific context file before the task begins. The mechanism, the receipts, and the benchmark that gates whether it ships.
- Essay: "Email me when done": a persistent task runner with a delivery guarantee. Long-running tasks fail silently if the session dies before the result is ready. This is the runner I built to make "email me when done" actually mean that: retry loop, fallback email paths, and a last-ditch file.
- Story: From model to agent: what changed when I stopped predicting and started investigating. Why the regression models that came out of the hackathon were replaced within weeks by three agentic tools. The short version: probability scores without narrative are not what analysts need.
- Essay: Memory that sleeps: a tiered memory architecture with daily consolidation. A two-tier retrieval system (semantic plus keyword), canonical topic files as curated truth, and a nightly consolidation pass that promotes session insights into the canonical tier. Why each piece exists and what fails without it.
- Essay: One hour, one command: disaster recovery for a solo AI shop. Which backups, which intentional exclusions, and the sequence that reconstitutes the whole personal AI operating environment in under an hour. The honest version, including the accepted gaps.
- Story: One hour, one marketing list. A vague ask ("give me a list of prospects that look like X") turned into a working pipeline across three data sources in under sixty minutes. A small build, but the speed is the point.
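The delivery guarantee behind "email me when done" comes up twice on this page, so here is a minimal sketch of the shape it takes: retry the primary notification with backoff, fall back to a second path, and as a last resort write the result to a file. The function name, the callable parameters, and the file name are my assumptions; the author's runner is more involved than this.

```python
# Hypothetical sketch of a delivery guarantee: retry, fall back, then
# write a last-ditch file so the result is never silently lost.
import time
from pathlib import Path


def deliver(result: str,
            send_primary,            # assumed callable, e.g. an SMTP send
            send_fallback,           # assumed callable, e.g. a second provider
            retries: int = 3,
            last_ditch: str = "undelivered_result.txt") -> str:
    """Try the primary path with retries, then the fallback, then a file.

    Returns which path succeeded: "primary", "fallback", or "file".
    """
    for attempt in range(retries):
        try:
            send_primary(result)
            return "primary"
        except Exception:
            if attempt < retries - 1:
                time.sleep(2 ** attempt)  # exponential backoff between retries
    try:
        send_fallback(result)
        return "fallback"
    except Exception:
        # Last-ditch file: the result survives even if every email path fails.
        Path(last_ditch).write_text(result)
        return "file"
```

The ordering is the point: a long-running task that dies at the notification step still leaves its result somewhere a later session can find it.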