# AI Coding Workflow (PLANNING.md + TASK.md Pattern)
Cole Medin’s end-to-end process for working with AI coding assistants (Cursor, Windsurf, Cline, Roo Code, Claude Code). The throughline: higher-level markdown files + global rules + MCP servers + a disciplined initial prompt produce one-shot results that ad-hoc prompting can’t match. Cole demonstrated the pattern by one-shotting a working Supabase MCP server.
## The Golden Rules
1. Use higher-level markdown docs (`PLANNING.md`, `TASK.md`, README, install/docs files) to give the LLM persistent context.
2. Keep code files under 500 lines — long files hallucinate.
3. Start fresh conversations often — long threads degrade.
4. One feature per prompt — don’t ask for many things at once.
5. Always write tests — ideally after every new feature.
6. Be specific in requests — describe technologies, libraries, expected outputs.
7. Write docs and comments as you go.
8. Implement environment variables yourself — never trust the LLM with secrets, DB security, or API keys. Cole cites a viral case of a vibe-coded SaaS getting hacked two days after launch.
## The Five-Step Process
### 1. Planning files
- `PLANNING.md` — vision, architecture, constraints, tech stack. The LLM reads this at conversation start (enforced via global rules).
- `TASK.md` — granular task list. The LLM marks items off as tasks complete. Acts as a project-manager handoff.
Cole creates these outside the IDE in Claude Desktop or any chatbot — saves IDE credits. He recommends using multiple LLMs (one prompt to several models, combine results) via a hub like Global GPT.
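As a sketch, the two files can start very small; the contents below are illustrative placeholders, not Cole’s actual templates:

```markdown
<!-- PLANNING.md -->
## Vision
CLI tool that summarizes RSS feeds with an LLM.
## Tech stack & architecture
Python 3.12, Typer CLI, SQLite cache; all LLM calls behind one module.
## Constraints
No file over 500 lines; secrets only via environment variables.

<!-- TASK.md -->
## Active
- [ ] Add feed-parsing module with tests
## Completed
- [x] Project scaffold and README
```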
### 2. Global rules (system prompt for the IDE)
Workspace-level rules tell the IDE to:
- Always read `PLANNING.md` at conversation start
- Mark off `TASK.md` items as complete
- Keep files under 500 lines
- Write tests in a dedicated `tests/` dir; mock DB and LLM calls; cover success, failure, and edge cases
- Maintain README and inline comments
- Follow style guidelines
Set in Windsurf via Manage Memories → Workspace Rules; equivalent in Cursor, Cline, Roo.
### 3. MCP servers
Three Cole-recommended core servers for any project:
- Filesystem — agent reaches outside the project (other folders, asset libraries, prior projects)
- Brave Search — web search for documentation, libraries, frameworks; AI-summarized results
- Git — version control for backups; agent commits at known-good states so you can revert when it breaks five prompts later
Optional: Qdrant for long-term agent memory (or use IDE-native memories).
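A typical client configuration for these three servers looks like the sketch below — the `mcpServers` JSON shape used by Claude Desktop and similar clients. Paths and the API key are placeholders; verify exact package names against the official MCP servers repository:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/projects"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "<your-key>" }
    },
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "/path/to/repo"]
    }
  }
}
```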
### 4. The initial prompt
- Be very specific — golden rule #6 applies most here
- Provide documentation and examples three ways:
  - IDE-native docs ingestion (Windsurf’s `@docs`, Cursor’s `@docs`)
  - Brave/web MCP for live search
  - Manual links to GitHub repos with reference implementations
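Put together, a suitably specific initial prompt might read as follows (illustrative wording, not Cole’s exact prompt):

```
Build a FastAPI endpoint POST /summarize that accepts {"url": string},
fetches the page, and returns an LLM-generated summary. Use httpx for
fetching and Pydantic models for request/response. Follow PLANNING.md
for architecture and add the task to TASK.md. Pull the FastAPI docs
via @docs, and use this repo as a structural reference: <link>.
Write pytest tests in tests/ that mock the fetch and the LLM call.
```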
### 5. Iterate → test → commit → deploy
- One change at a time while iterating
- Generate tests with the same global-rules-enforced patterns
- Use the git MCP to commit at known-good states
- Deploy via Docker (LLMs are good at Dockerfiles — abundant training data)
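For the deploy step, a minimal Dockerfile sketch for a Python service — the app layout and server command are assumptions about a particular project, not part of the workflow itself:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so layer caching skips them on code-only changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Never bake secrets into the image; pass env vars at runtime (per the env-vars golden rule)
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```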
## Why It Works
The pattern enforces context discipline without restating instructions in every prompt. Global rules carry the persistent instructions. PLANNING/TASK markdown files carry persistent project state. MCP servers extend tool reach. The actual user prompt can stay short because everything else is already in scope.
## Evolution: Archon as the YAML-packaged version
Cole’s Archon has pivoted from an “AI OS” knowledge backbone into a workflow engine that packages this same pattern as YAML DAG workflows. The PLANNING.md/TASK.md pattern is the lightweight version (two markdown files, manual discipline); Archon is the heavyweight version (YAML-defined phases, validation gates, git worktree isolation per run, approval loops). The archon-piv-loop workflow is a direct implementation of the PIV loop with human review between iterations. See source.
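For contrast, an Archon-style run might express the same pattern as YAML phases with validation gates. The schema below is a hypothetical illustration of the shape, not Archon’s actual format:

```yaml
name: piv-loop
isolation: git-worktree        # each run gets its own worktree
phases:
  - id: plan
    prompt: "Read PLANNING.md and propose the next change"
  - id: implement
    prompt: "Apply the planned change, one feature only"
  - id: validate
    gate: "pytest -q"          # run must pass before continuing
  - id: review
    approval: human            # pause for sign-off between iterations
loop_until: "TASK.md has no open items"
```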
## Compared to Other Workflows
- Archon — the YAML-packaged evolution of this pattern; deterministic workflow engine with git worktree isolation. Use Archon when you want repeatable, fire-and-forget workflow runs; use the markdown pattern when Archon is overkill.
- bmad-method — heavier (six personas, six artifacts) vs. Cole’s lighter two-file pattern. Use BMAD for SaaS-scale apps; use Cole’s for projects under ~10 stories.
- four-prompting-disciplines — Cole’s pattern is mostly context engineering (discipline #2) with light specification engineering (#4) in the global rules.
- autoresearch-evals — adjacent self-improvement pattern from Nick Saraev for skill development.
## See Also
- cole-medin — author
- bmad-method — heavier alternative
- claude-code, cursor — IDEs this pattern targets
- mcp — protocol enabling step 3
- four-prompting-disciplines — broader theory
- archon-os — YAML-packaged evolution of this pattern
- Source: Code 100x Faster with AI