Andrej Karpathy
AI researcher. Former director of AI at Tesla, former OpenAI founding member. Known for influential educational content on neural networks, deep learning, and AI systems.
Channels
- YouTube: Andrej Karpathy — deep technical AI education
- X/Twitter: @karpathy
- GitHub: github.com/karpathy — open-source projects and gists
- Blog: karpathy.ai
Content in This Wiki
- Andrej Karpathy Just 10x’d Everyone’s Claude Code (covered via Nate Herk) — discusses Karpathy’s viral LLM Wiki Pattern gist; the ingested primary source is Nate Herk’s video, not the gist itself
Key Ideas
- Published the LLM Wiki Pattern as a gist (April 2026) — using LLMs to maintain structured markdown knowledge bases instead of relying on chat history or RAG
- His own wiki: ~100 articles / ~500K words; uses index files rather than embedding-based retrieval
- Left the pattern intentionally vague for others to customize
- Connects the idea to Vannevar Bush’s Memex (1945)
- AutoResearch (autoresearch): open-sourced the auto_research repo — give an agent a training setup, a single GPU, and a success metric, and it runs ~100 experiments overnight autonomously; a persistent log turns random search into intelligent convergence. Hit 50K GitHub stars in a single week per the Dubibubii ingest. Shopify CEO Tobi Lütke used it to produce an agent-optimized model that outperformed a larger human-tuned model. Also inspired Nick Saraev’s application to Claude Code skill optimization and Hermes Agent’s self-improving loop concept.
Gaps
- Karpathy’s original gist has not been ingested directly — would add depth and primary-source details to LLM Wiki Pattern
See Also
- LLM Wiki Pattern
- AutoResearch and Evals — methodology originating from Karpathy’s repo
- Nate Herk
- Claude Code
- Source: Karpathy 10x’d Claude Code