Anthropic

US frontier-AI lab founded by former OpenAI research staff. Builds the Claude model family (Haiku / Sonnet / Opus) and Claude Code (CLI agent harness). The wiki’s most heavily referenced organization — Anthropic’s products underpin the production agent architecture thread, the planning-first AI coding thread, and the MCP standard that touches nearly every other tool the wiki tracks.

  • Founded: 2021 (by Dario Amodei, Daniela Amodei, and former OpenAI staff)
  • Corporate structure: Public Benefit Corporation (PBC) — same legal structure that OpenAI later proposed as a model for frontier labs in its April 2026 industrial-policy paper
  • Run rate: ~$2.5B as of early 2026 (per the Claude Code leak coverage)
  • Sites: anthropic.com, claude.ai, console.anthropic.com

Products tracked in this wiki

  • Claude — frontier model family. Per Matthew Berman’s framing: best for work and coding. Distinct from ChatGPT (best for ease of use) and Gemini (best for search/research). Tiers from Free → Pro → Max → Heavy.
  • Claude Code — Anthropic’s CLI agent harness. Multiple wiki threads converge on this product:
    • The architecture leak (early 2026) — Anthropic accidentally exposed Claude Code’s source map via a build configuration error. Nate B Jones analyzed it and identified the Agentic Harness Primitives: 12 production-grade infrastructure patterns, 207-entry command registry, 184-entry tool registry, six built-in agent types, 18-module Bash security architecture, sessions persisted as JSON.
    • loop (March 2026) — native /loop command for proactive agent scheduling. The “heartbeat” primitive that enables OpenBrain-style accumulated-value loops.
    • Ultra Plan (April 2026) — /ultra-plan offloads planning to a cloud-hosted Opus 4.6 instance with 3 parallel exploration agents + 1 critique agent. ~10–15 min total vs ~45 min for local plan mode in Nate Herk’s side-by-side benchmark.
    • Skills ecosystem — GStack, Superpowers, Agency, Impeccable, Open Viking, Hermes Agent, skills.sh directory. The wiki tracks more Claude Code skills than for any other agent harness.
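The Ultra Plan topology described above (parallel exploration agents feeding a single critique agent) can be sketched as a plain fan-out/fan-in pattern. Everything here is illustrative: the function names, the draft/critique strings, and the agent internals are assumptions for the sketch, not Anthropic's actual implementation.

```python
# Hypothetical sketch of the Ultra Plan fan-out pattern: N exploration
# agents run in parallel, then one critique agent reviews their drafts.
# All names and payloads are illustrative stand-ins.
from concurrent.futures import ThreadPoolExecutor

def explore(agent_id: int, task: str) -> str:
    # Stand-in for a cloud-hosted exploration agent producing a draft plan.
    return f"agent-{agent_id} draft plan for: {task}"

def critique(drafts: list[str]) -> str:
    # Stand-in for the critique agent that reviews and merges the drafts.
    return f"final plan synthesized from {len(drafts)} drafts"

def ultra_plan(task: str, n_explorers: int = 3) -> str:
    # Fan out to n_explorers in parallel, then fan in to the critique step.
    with ThreadPoolExecutor(max_workers=n_explorers) as pool:
        drafts = list(pool.map(lambda i: explore(i, task), range(n_explorers)))
    return critique(drafts)

print(ultra_plan("add OAuth to the API"))
# prints "final plan synthesized from 3 drafts"
```

The speedup claim in the benchmark (~10–15 min vs ~45 min) comes from exactly this shape: exploration is the slow step, and running it three-wide amortizes it.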

Standards Anthropic created

  • Model Context Protocol — per Ras Mic’s explainer on the Greg Isenberg podcast, MCP is Anthropic’s “3D chess” play: by putting MCP servers in the hands of service providers (not LLM vendors), Anthropic externalized the integration cost across the entire ecosystem. Every new MCP server makes every compliant client more capable, for free. As of 2026 it’s the de facto standard across Claude Code, Cursor, Augment Agent, Archon OS, and countless others.
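The mechanism behind the “every server helps every client” claim is that MCP is a JSON-RPC 2.0 protocol with standardized discovery: a client calls `tools/list` (a real MCP method) and gets back self-describing tool schemas, so no vendor-specific glue is needed. A minimal sketch of that exchange, with an invented example tool:

```python
import json

# Sketch of an MCP-style JSON-RPC 2.0 discovery exchange. "tools/list"
# is a real MCP method name; the "search_docs" tool in the response is
# invented for illustration.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_docs",  # hypothetical tool name
                "description": "Search the service's documentation",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# Any compliant client can discover the tool generically:
tool_names = [t["name"] for t in response["result"]["tools"]]
print(json.dumps(tool_names))  # → ["search_docs"]
```

Because the schema travels with the tool, the integration cost sits entirely on the server side, which is the externalization Ras Mic describes.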

In the wiki’s larger threads

  • Production agent architecture — Anthropic’s Claude Code leak gave the wiki the canonical 12-primitive framework via Agentic Harness Primitives. Nearly every agent-architecture page traces back to this analysis.
  • Planning-first AI coding — Ultra Plan is the strongest official endorsement of the planning-discipline thesis the wiki has accumulated across Cole Medin, BMad Code, and Nate B Jones.
  • MCP-everywhere — see mcp for the canonical concept page; every tool that mentions MCP is downstream of Anthropic’s standard.
  • Counter to OpenClaw — Nate B Jones’s loop + OpenBrain + MCP “three Lego brick” thesis is explicitly framed as “you can replicate what OpenClaw does with three Anthropic-native primitives, without the security risks.” Anthropic is positioned in the wiki as the primitives-based safer alternative to monolithic agent frameworks.

How Anthropic differs from OpenAI in the wiki’s coverage

|                              | Anthropic | OpenAI |
|------------------------------|-----------|--------|
| Wiki references              | Heavy: products, standards, leaked architecture, multiple skills | Heavy: products, policy paper, GPT-5 prompting difficulty |
| PBC structure                | Yes (since founding) | Yes (since restructuring) |
| Wiki-tracked policy positions | None (yet) | The April 2026 Industrial Policy paper |
| Standards published          | MCP (de facto) | None tracked yet |
| Editorial position in wiki   | Primary infrastructure provider for the agent architecture thread | Direct interested party in the policy thread; treat sources accordingly |

Anthropic has not (as of April 2026) published an equivalent industrial-policy paper. If they do, it becomes the second source for the AI ethics, politics, and policy thread tracked in tasks.

Editorial framing the wiki applies to Anthropic sources

  • Anthropic’s Claude Code leak was an unintentional source — the wiki treats it as higher-credibility primary source material than self-published positioning. The Agentic Harness Primitives page is built on this leak.
  • Anthropic’s intentional publications (model cards, system cards, capability evaluations, the Responsible Scaling Policy) are positioning from an interested party — flag them the same way as OpenAI sources.
  • When a source compares Claude vs GPT-5 vs Gemini and the creator is Anthropic-favorable (e.g., uses Claude Code daily), note the editorial relationship in the source-summary’s bias notes.

See Also