AI
How AI agents work in practice: governance, prompt architecture, agent management, and the methodology behind getting reliable output from systems that hallucinate by default.
17 posts
- 7 min read
What Is Compaction in AI?
Compaction is what happens when your AI tool runs out of room to remember. It shrinks the conversation history to keep going — but the important parts don't always survive.
- 8 min read
The Craftsperson's Tools: Who Owns the AI Governance Docs You Built?
Your AI governance frameworks encode how you think into portable files. The employer owns the output — but the tools? Where you build them determines who keeps them.
- 10 min read
Local RAG for Claude Code: Semantic Search Over Your Own Project
Claude Code has session memory and governance documents, but on a project with five hundred markdown files, the gap between what the agent can read at startup and what the project actually knows widens every week. pmem is a local RAG memory system that gives Claude Code persistent, semantic search across your project's full history. No external APIs. Setup in two minutes.
- 6 min read
SideMark: Building a Markdown Editor for Two Authors
Every markdown editor assumes one author. When AI agents became co-editors, the 'file changed on disk' dialog became a workflow killer. SideMark is what happens when you solve that problem from the inside.
- 5 min read
Why AI Persona Switching Doesn't Work as Code Review
Asking Claude to 'switch to CEO mode' for a second opinion feels like independent review — but shared context means the reviewer already knows the answers. Here's what actually breaks and what to do instead.
- 6 min read
I Have a Team Now — It Just Happens to Be AI
The progression from 'AI helps me write emails' to 'I manage a department of specialized agents across six domains' happened in eight months. The output scaled. The cognitive load dropped. That combination is new.
- 4 min read
Cognitive Property: Who Owns the Way You Think?
Your AI governance frameworks and decision-making logic are repeatable, transferable, and extractable. That's not a productivity feature. It's cognitive property.
- 6 min read
I Manage AI Agents the Way I Manage Teams
Separation of concerns, governance docs, knowing when to restructure. The same management principles that make human teams effective apply directly to AI agents — with examples from a multi-agent production workflow.
- 7 min read
The Governance Documents
ROADMAP.md, ARCHITECTURE.md, CLAUDE.md, and CHANGELOG.md aren't project management overhead. They're the system that gives AI agents persistent memory across sessions — and the mechanism that makes Pass@1 work.
- 7 min read
Governance Is Architecture
AI governance isn't a compliance checklist — it's an architectural decision. How you structure agent permissions, context windows, and audit trails determines whether your AI system is reliable or just fast.
- 5 min read
AI Adoption: The 0.04% Don't Know They're the 0.04%
The people actually building with AI are too busy building to post about it. If the LinkedIn AI feed makes you feel behind, you're measuring against the wrong cohort.
- 8 min read
The AI Perimeter: Where Automation Should End and Judgment Should Begin
AI is the most powerful tool most of us have ever had. That makes knowing when not to use it the actual skill.
- 9 min read
LLMs Are Practically ADHD
ADHD and large language models share the same failure modes: context loss, confabulation, and drift without external structure. The coping strategies developed for ADHD brains transfer directly to AI agent architecture.
- 7 min read
Your Reminders Don't Work Because They're Too Predictable
Traditional reminder apps fail ADHD brains because predictability breeds dismissal. I built a Telegram bot with fuzzy scheduling, natural-language input, and a relentless nag mode.
- 6 min read
Auto-Compaction Is Costing You Sessions
Claude Code auto-compacts at ~83% context usage — and you lose control of what gets preserved. Here's a script that warns you before it happens so you can save your session state first.
- 5 min read
Your AI Builds the Code. Who Reviews It?
If the same AI agent that writes your code also reviews it, you have an agent grading its own homework. I built an adversarial code review agent to fix that.
- 5 min read
What Is Pass@1?
Pass@1 means your AI agent gets it right on the first attempt. The methodology: governance documents thorough enough to eliminate ambiguity, so the first generation is the one you ship.