I write about building companies with AI
Strategy, tools, and what I'm learning along the way.
Essays
- Recursive Refinement
Reviewers default to grading tolerance. Recursive refinement keeps them asking what the original ask still requires, until the team has fully closed the gap.
- The Lag Effect
People aren't outsourcing their thinking to LLMs. The same Stack Overflow lag effect is still running, just on a shorter clock and with new friction.
- LLM Anxiety
Agents in long sessions degrade in a recognizable pattern. The same approval-seeking lever that breaks them is also one of the better alignment tools you have.
- Same prompt, worse results
Back in March I shared a command for spawning agent teams. A lot of people tried it on their codebases and got different results than mine. Same prompt.
- Distribution is the only moat AI can't kill
Building got cheap this year. Being heard didn't. The channels founders trusted stopped winning, and the only edge left is the one most of them never built.
- Four months is the new eighteen
Eighteen months was the fast timeline to ship a product. Not because building took that long, but because learning did. Four months holds that lesson now.
- Delete the work and start again
When an AI session gets me close but not right, I delete everything and start over. Not sometimes. As a rule. The code is disposable. The clarity isn't.
- Thanks to AI, your job description is now wrong
Every layer between customers and the codebase used to protect scarce engineering attention. That scarcity is gone. Your job description hasn't caught up.
- Your ticket is a prompt
The instinct to break work into atomic tickets was right for human teams. For agents, it reproduces the same fragmentation disease at machine speed.
- Your AI adoption problem isn't tech debt, it's the operating model
Conway's Law used to show up in JIRA workflows. Now it shows up in the skills and scaffolding companies bolt onto AI. The problem was never the codebase.
- Agents have a human personality problem
I kept seeing the same team dysfunctions in my agent teams that I'd spent years teaching human organizations to fix. The correction fit in a sentence.
- Months to minutes: an AI feature-gap harness
The best product outcomes always came from someone who talked to customers and could also build. That was rare and didn't scale. Now it's a system property.
- Anatomy of a good ad-hoc Claude agent team
I show the prompt first, then unpack every decision behind it. Problem framing, role design, workflow structure, and why organizational dynamics still apply.
- Surprising benefit of AI on my sleep
I sleep better now because of AI. Not through any app, but because offloading ideas to agents eliminated the subconscious churn I didn't know was costing me.
- The Study of One
Find one subject you know well and ask whether they've encountered the problem in their everyday life. My favorite subject is myself.
- Timing
You may have heard that an idea is only as good as its execution. For all its brilliance, the statement is somewhat flawed.
- How to Lose the Right Way
Unless you've exhausted all the options that present lower risk, there is no reason to skip ahead to the highest risk play.
- Innate Default
Is my innate default sufficiently futuristic? Am I building the reasoning framework that serves the future I want to live in?
- Investing Lessons Learned from a Floor Trader
In 2007 I did a stint on a trading floor. The strategies were considered infallible. Then I tried applying them to life.