AI-Augmented Engineering Practice

What I'm working on during a career break, and what I've found.

Thesis

Predictable AI output isn't about making the model smarter. It's about removing ambiguity about where responsibility lives. The same principle holds for team structure, system design, and architecture decisions.

What I'm doing

  • Designing skills for Claude Code with production constraints in mind
  • Observing how agents fail under real engineering pressure — not toy examples
  • Building small artifacts that test specific failure modes
  • Documenting what holds up

What I've found so far

  • Skills get worse when you add more rules. Constraint design beats rule accumulation. The instinct to add more guardrails when something fails is usually wrong: each new guardrail creates failure modes of its own. The first sketch after this list shows the difference.
  • Agents fail in predictable categories: context drift, ownership ambiguity, recovery loops. These aren't bugs to fix; they're design problems to solve with architecture, not prompting.
  • The architecture of an agent's responsibilities matters more than its prompt. Where does the agent's authority end? What does it hand off, and to what? These questions have the same structure as team ownership boundaries. The second sketch below makes one such boundary explicit.
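
The rules-versus-constraints contrast is easier to see in code. A minimal Python sketch, with hypothetical names throughout (agent_step, result.touched_files); this is not Claude Code's real interface, just the shape of the two approaches:

    # Illustrative sketch only; agent_step and result.touched_files are
    # hypothetical stand-ins, not Claude Code's actual API.
    from pathlib import Path

    # Rule accumulation: every incident adds another instruction the
    # model may or may not follow. Compliance stays probabilistic.
    RULES = [
        "Never modify files outside src/",
        "Always run the tests before committing",
        "Never edit generated files",
        # ...each failure adds a line, and the lines start to conflict
    ]

    def run_with_rules(agent_step, task):
        prompt = task + "\n" + "\n".join(RULES)
        return agent_step(prompt)  # hope the model obeys all of them

    # Constraint design: one structural boundary the output cannot
    # bypass. The agent is free within it; violations are rejected
    # mechanically instead of being discouraged verbally.
    ALLOWED_ROOT = Path("src").resolve()

    def run_with_constraint(agent_step, task):
        result = agent_step(task)
        for touched in result.touched_files:  # hypothetical result shape
            if not Path(touched).resolve().is_relative_to(ALLOWED_ROOT):
                raise PermissionError(f"outside boundary: {touched}")
        return result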
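
And a sketch of the responsibility-architecture point: the ownership boundary and the retry budget live in the harness as data, not in the prompt as prose. Again, every name here is invented for this note:

    # Illustrative sketch; Boundary, Handoff, and dispatch are not any
    # real agent framework's API.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Boundary:
        """What this agent owns, and who gets everything else."""
        owned_actions: frozenset  # e.g. frozenset({"review", "comment"})
        escalate_to: str          # named owner of out-of-scope work
        retry_budget: int = 2     # breaks recovery loops mechanically

    @dataclass(frozen=True)
    class Handoff:
        to: str
        reason: str

    def dispatch(boundary, action, attempt):
        # Ownership ambiguity: out-of-scope work is not "try harder",
        # it is someone else's responsibility by construction.
        if action not in boundary.owned_actions:
            return Handoff(boundary.escalate_to, f"{action} is out of scope")
        # Recovery loops: the budget is enforced by the harness, so the
        # agent cannot talk itself into retrying forever.
        if attempt > boundary.retry_budget:
            return Handoff(boundary.escalate_to, "retry budget exhausted")
        return action  # in scope and within budget: proceed

    # A review agent that owns review but not deployment.
    reviewer = Boundary(frozenset({"review", "comment"}), escalate_to="human")
    print(dispatch(reviewer, "deploy", attempt=1))   # Handoff -> human
    print(dispatch(reviewer, "review", attempt=3))   # Handoff: budget spent

The point in both sketches is the same: the boundary is data the harness enforces, not prose the model is asked to respect.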

How this connects to engineering leadership

The same instinct shapes team structure and system design: define ownership boundaries, then let the unit operate within them. AI agents and engineering teams have more in common than the discourse suggests.

Notes and observations are in progress.