Nate B Jones

AI strategy educator and daily content creator. Focuses on production agent architecture, AI development workflows, and translating AI research into actionable engineering guidance for builders.

Channels

Content in This Wiki

Key Ideas

  • “Building agents is 80% plumbing, 20% AI” — the core thesis from his Claude Code leak analysis
  • Premature complexity is the most common agent failure mode: building multi-agent coordination before sessions survive crashes
  • Released an agentic harness skill with two modes: design mode (architect a new harness) and evaluation mode (audit an existing codebase against the 12 primitives)
  • Framed the Anthropic leaks (Claude Mythos + Claude Code) as a velocity vs. operational discipline question
  • “MCP is the growth hack of 2026” — if your product isn’t an MCP server, it should be; MCP turns any tool into an agent-accessible command-line primitive
  • Design is following development to the command line: the product-design-engineering triangle collapses when “is this buildable?” is answered instantly
  • Three Lego bricks every agent needs: Memory + Proactivity + Tools. Remove any one and the agent stops being useful. OpenClaw’s explosive appeal is reducible to these three.
  • “The value of a loop isn’t in any single cycle — it’s in the accumulation across cycles” — the compound interest thesis for agent work
  • The terminal is “free time travel” — developers get agent capabilities months before everyone else just by being willing to use a different window
  • Four Prompting Disciplines: prompting has diverged into four distinct skills (prompt craft, context engineering, intent engineering, specification engineering). Most people practice only the first; the gap between them is 10x.
  • “Memory architecture determines agent capabilities much more than model selection does” — the compounding advantage of owning your memory infrastructure
  • The human web vs agent web fork: note-taking apps are built for human eyes; agents need infrastructure designed for machine-to-machine readability
  • Klarna as anti-example: perfect context, wrong intent. AI resolved 2.3M conversations but optimized for speed, not satisfaction
  • Five Levels of AI Coding: L0 spicy autocomplete → L5 dark factory. 90% of “AI-native” devs are stuck at L2. StrongDM’s 3-person team with Attractor is the clearest L5 example. The bottleneck has moved from implementation speed to spec quality.
  • Frontier Operations: The expanding bubble — as AI gets smarter, the surface area for human judgment GROWS. Five persistent skills. “The first workforce skill that expires on a roughly quarterly cycle.”
  • The J-curve: Most orgs are stuck at the bottom, getting measurably slower with AI while believing they’re faster
  • AI Professional Interface: Replace the 0.4%-success-rate hiring pipeline with an AI-powered interface. Five components. The fit assessment tool inverts the power dynamic: both sides evaluate fit. “Showing is almost always more persuasive than telling.”

See Also