RAG vs Wiki

A comparison of traditional Retrieval-Augmented Generation (semantic search RAG) and the LLM Wiki Pattern for knowledge management.

Comparison

| Aspect | LLM Wiki | Semantic Search RAG |
| --- | --- | --- |
| Search | Reads index files, follows wikilinks | Embedding similarity search |
| Infrastructure | Markdown files only | Embedding model + vector DB + chunking pipeline |
| Cost | Token usage only | Ongoing compute + storage |
| Maintenance | Lint passes + source additions | Re-embedding when data changes |
| Scale ceiling | Hundreds of pages | Millions of documents |
| Relationship depth | Deep: explicit links and cross-references | Shallow: chunk-level similarity |
| Knowledge persistence | Compiled once, updated incrementally | Re-derived on every query |
| Setup time | ~5 minutes | Hours to days |
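
The "reads index files, follows wikilinks" row can be sketched in a few lines. This is a minimal, illustrative model only: the page names, page bodies, and the `retrieve` helper below are invented for the example, and a real wiki would read Markdown files from disk rather than a dict.

```python
import re

# Toy in-memory wiki: page name -> markdown body containing [[wikilinks]].
# All names and contents are illustrative assumptions, not a real layout.
PAGES = {
    "index": "Topics: [[rag]] and [[wiki-pattern]].",
    "rag": "RAG pipelines embed chunks. Related: [[wiki-pattern]].",
    "wiki-pattern": "The wiki pattern follows links from an index.",
}

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")  # matches [[page-name]]

def retrieve(start: str, depth: int = 2) -> list[str]:
    """Collect page bodies breadth-first by following wikilinks from `start`."""
    seen, frontier, bodies = {start}, [start], []
    for _ in range(depth + 1):
        next_frontier = []
        for name in frontier:
            body = PAGES.get(name, "")
            bodies.append(body)
            for link in WIKILINK.findall(body):
                if link not in seen:
                    seen.add(link)
                    next_frontier.append(link)
        frontier = next_frontier
    return bodies
```

Because the links are explicit, retrieval is just file reads and string matching; there is no embedding model or vector store anywhere in the loop, which is what keeps the infrastructure row at "Markdown files only".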

When to Use Which

LLM Wiki is better for:

  • Personal knowledge bases
  • Research projects with dozens to hundreds of sources
  • Cases where deep synthesis and cross-referencing matter
  • Low-infrastructure environments

Traditional RAG is better for:

  • Enterprise-scale document collections (millions of docs)
  • Cases where exact retrieval precision matters more than synthesis
  • Frequently changing document collections at scale
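
For contrast, the core retrieval step of semantic-search RAG is a nearest-neighbor lookup over embeddings. The sketch below is a toy: the chunk texts and three-dimensional vectors are made up, and a real pipeline would get vectors from an embedding model and store them in a vector database rather than a list.

```python
import math

# Toy "embeddings": in a real pipeline these come from an embedding model;
# the vectors and chunk texts below are illustrative assumptions.
CHUNKS = [
    ("pricing policy", [0.9, 0.1, 0.0]),
    ("refund process", [0.2, 0.9, 0.1]),
    ("api reference", [0.1, 0.2, 0.9]),
]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k chunk texts most similar to the query embedding."""
    ranked = sorted(CHUNKS, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

This is where the scale and maintenance trade-offs in the table come from: similarity search scales to millions of chunks, but every document change means re-embedding, and each match is a chunk-level hit with no explicit relationship to neighboring chunks.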

Key Stat

One user reported a 95% reduction in token usage after converting 383 scattered files and 100+ meeting transcripts from a traditional approach to a structured wiki.

See Also