GUIDE
Updated October 2025
11 topics

Best Practices Guide

Patterns and workflows that work well with Workunit. Tips for workunit creation, AI collaboration, context preservation, and team productivity.

Overview

This guide distills insights from teams successfully using Workunit to build products faster with AI collaboration. These patterns help you avoid common pitfalls, maximize AI effectiveness, and maintain context across complex multi-model workflows.

Core Philosophy
Workunit is built on three principles: simplicity first, context preservation, and human-centered AI. Every best practice here serves one of these goals. Start simple, preserve the "why" behind decisions, and keep humans in control.

Workunit Creation

Great workunits start with clarity. The effort you invest in creating a well-defined workunit pays dividends throughout the entire lifecycle.

Effective Naming

Names should be clear, concise, and action-oriented. Think of them as headlines that instantly communicate what you're building.

Do This
  • "Build JWT Authentication System"
  • "Add Stripe Payment Integration"
  • "Optimize Database Query Performance"
  • "Implement User Profile Editing"
Clear verb + specific feature + context
Not This
  • "Auth stuff"
  • "Payment feature"
  • "Make it faster"
  • "User things"
Vague, no action verb, unclear scope

Problem Statements That Work

A strong problem statement answers three questions: What's broken? Why does it matter? Who does it affect?

Formula: Context + Problem + Impact
Context
Our application currently has no authentication system.
Problem
Any user can access protected resources and administrative functions without verification.
Impact
This exposes sensitive user data, creates security vulnerabilities, and prevents us from launching to production.
This gives AI models the full context they need to make informed implementation decisions.

Measurable Success Criteria

Success criteria should be specific, measurable, and testable. Avoid subjective terms like "better" or "improved".

Measurable Criteria
  • "API response time under 200ms for 95th percentile"
  • "All passwords hashed with bcrypt cost factor 12"
  • "Test coverage above 90% for authentication flows"
  • "JWT tokens expire after 24 hours"
Vague Criteria
  • "System should be fast"
  • "Good security"
  • "Well tested"
  • "Tokens configured properly"
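Measurable criteria like these can often be encoded directly as automated checks. Below is a minimal Python sketch of the 95th-percentile latency criterion; the timing values are hypothetical stand-ins for output from your load-test tooling:

```python
# Hypothetical check for the "p95 under 200ms" criterion.
# response_times_ms would come from your load-test tooling.

def p95(values):
    """Return the 95th-percentile value of a list of numbers."""
    ordered = sorted(values)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

response_times_ms = [120, 145, 90, 180, 195, 130, 160, 110, 175, 150]

assert p95(response_times_ms) < 200, "p95 latency criterion not met"
```

A criterion that can be asserted in code can also be wired into CI, so it gets checked on every change rather than judged subjectively.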

Task Management

Effective task breakdown enables parallel work by both humans and AI models. Good tasks are atomic, independent where possible, and clearly scoped.

Task Breakdown Strategy

The 2-8 Hour Rule
Each task should take 2-8 hours to complete. Smaller tasks finish too quickly to be worth tracking. Larger tasks hide complexity and create bottlenecks.
Too Small
"Write function to hash password" (30 minutes)
Too Large
"Implement entire authentication system" (40 hours)
Just Right
"Implement user registration endpoint with email validation, password hashing, and database persistence" (4-6 hours)
Pro Tip: Front-Load Planning Tasks
Use Claude Sonnet for initial task breakdown. It excels at decomposing complex features into logical, sequenced tasks that other models can execute efficiently.

Managing Dependencies

Explicitly mark which tasks can run in parallel and which have dependencies. This enables efficient multi-model collaboration.

Example: Authentication System Tasks
1. Design database schema
No dependencies - can start immediately
2. Implement user registration endpoint
Depends on: Task 1
3. Implement login endpoint
Depends on: Task 1 (can run parallel with Task 2)
4. Add authentication middleware
Depends on: Tasks 2 & 3
5. Write integration tests
No dependencies - can start anytime (parallel with all)
Tasks 2, 3, and 5 can run in parallel with different AI models, dramatically reducing total time.
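The dependency list above can be turned into execution "waves" mechanically: every task in a wave has all its dependencies satisfied, so tasks within a wave can run in parallel. A small Python sketch, with task numbers and dependencies taken from the example:

```python
# Dependencies from the authentication-system example above.
# Keys are task numbers; values are the tasks they depend on.
deps = {
    1: [],        # Design database schema
    2: [1],       # User registration endpoint
    3: [1],       # Login endpoint
    4: [2, 3],    # Authentication middleware
    5: [],        # Integration tests
}

def execution_waves(deps):
    """Group tasks into waves; tasks in one wave can run in parallel."""
    done, waves = set(), []
    while len(done) < len(deps):
        wave = [t for t, d in deps.items()
                if t not in done and all(x in done for x in d)]
        if not wave:
            raise ValueError("circular dependency detected")
        waves.append(sorted(wave))
        done.update(wave)
    return waves

print(execution_waves(deps))  # [[1, 5], [2, 3], [4]]
```

The output matches the schedule in the example: tasks 1 and 5 start immediately, tasks 2 and 3 run in parallel once the schema exists, and task 4 waits for both endpoints.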

Consistent Status Updates

Real-time status updates prevent duplicate work and enable effective multi-model collaboration.

Status Lifecycle
To Do → In Progress → Done
When to Update
  • Mark "In Progress" immediately when starting work
  • Mark "Done" as soon as task is complete and verified
  • Mark "Blocked" if dependencies are missing or issues arise
  • Update estimated completion time if scope changes
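One way to keep statuses honest is to treat the lifecycle as a small state machine. The sketch below is illustrative only; the transition table is an assumption derived from the statuses named above, not Workunit's actual API:

```python
# Assumed status lifecycle, with "Blocked" as a side state reachable
# from "In Progress". Not Workunit's API; an illustration only.
ALLOWED = {
    "To Do":       {"In Progress"},
    "In Progress": {"Done", "Blocked"},
    "Blocked":     {"In Progress"},
    "Done":        set(),
}

def can_transition(current, new):
    """Return True if moving from `current` to `new` is a valid step."""
    return new in ALLOWED.get(current, set())

assert can_transition("To Do", "In Progress")
assert not can_transition("Done", "In Progress")
```

Guarding updates this way (in tooling or review habits) catches stale statuses, such as a task jumping straight from "To Do" to "Done" without ever being marked in progress.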

Saving Context

Context atoms are your project's living memory. Each atom is a typed, importance-ranked unit of knowledge — decisions, insights, questions, attempts, and progress — that enables future AI models to pick up where previous models left off without losing critical insights.

When to Save Context

Critical Moments for Saving Context Atoms
After Completing Significant Work
Save a progress atom: "We just implemented the JWT authentication flow. Document what we learned about token expiry and refresh patterns."
Preserves implementation insights while they're fresh
When Making Architecture Decisions
Save a decision atom: "Document why we chose PostgreSQL over MongoDB for user data, including scalability and query pattern considerations."
Captures the "why" behind technical choices
After Discovering Patterns or Gotchas
Save an insight or attempt atom: "Document the race condition we found in concurrent webhook processing and how we solved it with idempotency keys."
Prevents future models from repeating mistakes
Before Switching Context
Save a progress atom: "Summarize current state and next steps before moving to the payment integration workunit."
Creates clean handoff points for resuming later

What to Include in Context

Atom Types and What They Capture
decision — Technical Decisions and Rationale
Not just what you chose, but why you chose it over alternatives. Set importance to high or critical for architectural choices.
insight — Implementation Patterns
Code patterns, architectural approaches, library usage patterns. Use tags to make them discoverable across workunits.
attempt — Gotchas and Failed Approaches
Things that didn't work, edge cases discovered, pitfalls to avoid. Prevents future models from repeating the same mistakes.
progress — Status and Integration Points
Work completed, how this connects to other systems, required environment setup, and next steps.
question — Open Questions
Unresolved questions, areas needing further research, decisions deferred for later. Close them with a decision atom that uses supersedes_id to link back.

Format Guidelines

Use the save_context MCP tool to save individual atoms. Each atom should focus on one decision, insight, or update. Prefer multiple focused atoms over a single large blob.

Example save_context Calls
Recording an architectural decision:
save_context(
  workunit_id: "...",
  atom_type:   "decision",
  importance:  "high",
  title:       "Chose JWT over sessions for auth",
  content:     "JWT chosen for horizontal scaling. 24-hour token
               expiry balances security and UX. Refresh tokens
               stored in PostgreSQL to enable revocation.",
  tags:        ["auth", "architecture"]
)
Capturing a gotcha or failed approach:
save_context(
  workunit_id: "...",
  atom_type:   "attempt",
  importance:  "normal",
  title:       "bcrypt cost factor >12 causes login latency",
  content:     "Tried cost factor 14 for stronger hashing, but
               login response exceeded 2s. Settled on cost factor
               12 which keeps latency under 200ms while remaining
               secure.",
  tags:        ["auth", "performance"]
)
Saving a progress update before switching tasks:
save_context(
  workunit_id: "...",
  atom_type:   "progress",
  importance:  "normal",
  title:       "Auth middleware complete, tests passing",
  content:     "Implemented JWT validation middleware and
               integration tests. Next: password reset flow
               with time-limited tokens. Rate limiting on auth
               endpoints still TODO.",
  tags:        ["auth", "middleware"]
)
Keep content concise but comprehensive. Use supersedes_id when updating a previous atom with new information.

Multi-LLM Workflows

The real power of Workunit emerges when you orchestrate multiple AI models, each playing to their strengths on the same workunit.

Choosing the Right Model

Claude Sonnet 4.5 - Strategic Phase
  • Initial project planning and task decomposition
  • Architecture design and system design reviews
  • Security audits and compliance analysis
  • Writing comprehensive problem statements
  • Code review requiring deep reasoning
GPT-5 - Execution Phase
  • Implementing well-defined features rapidly
  • Writing tests and boilerplate code
  • Bug fixing and debugging sessions
  • API endpoint implementation
  • Refactoring and code cleanup
Gemini - Analysis Phase
  • Performance profiling and optimization
  • Log analysis and pattern detection
  • Code quality audits at scale
  • Data migration and transformation
  • Visual diagram analysis

Parallel Execution Strategies

Example: 3-Model Parallel Workflow
9:00 AM - Claude: Planning
Create workunit, break into 8 tasks, document architecture decisions. Tasks 3, 4, 5 marked as parallelizable.
10:00 AM - Launch Parallel Execution
  • GPT Instance 1: Implements Task 3 (user registration)
  • GPT Instance 2: Implements Task 4 (login endpoint)
  • Gemini: Works on Task 5 (integration tests)
12:00 PM - Status Check
All three tasks completed. Claude reviews the combined work, suggests improvements. Total time: 2 hours instead of 6 hours sequential.
Result: 67% time reduction through parallel execution with multiple models.

Smooth Context Handoffs

The key to multi-model success is ensuring each model has full context when they pick up work.

Before Handoff
  • Update all task statuses to current state
  • Save context atoms documenting work completed
  • Note any blockers or dependencies discovered
  • Document patterns or gotchas for next model
During Handoff
  • Explicitly tell next model to read context atoms first
  • Point to specific tasks ready for pickup
  • Mention any context from linked assets
Example Handoff Prompt
"Get workunit 'Payment Integration' including context atoms. Read the architecture decision atoms first, then implement Task 4 (webhook validation). Dependencies from Task 2 are complete."

Team Collaboration

Effective teams treat AI models as team members while maintaining clear human accountability and decision-making authority.

Communication Patterns

Human-AI Communication Protocol
Daily Standups with Context
Start each day by having an AI model summarize overnight progress: "Summarize all workunits updated in the last 24 hours with status changes and blockers."
Async Updates via Context Atoms
Team members working in different timezones use context atoms as async communication. Each person's AI saves progress and decision atoms that the next timezone reads.
AI-Generated Progress Reports
"Generate a progress report for all active workunits this week for the team meeting." AI synthesizes status from all workunits automatically.

Handoff Protocols

Clean Handoff Checklist
  • All task statuses current (no stale "In Progress")
  • Context atoms saved for recent work
  • Blockers documented with context
  • Next steps clearly identified
  • Dependencies resolved or documented
  • Code committed and pushed (if applicable)

Asset Organization

Well-organized assets provide critical context to AI models without cluttering individual workunits.

Strategic Asset Linking

When to Link Assets
Link Products
When workunit affects user-facing product features or requires product context for decisions.
Link Systems
When workunit modifies or integrates with specific technical systems (databases, APIs, services).
Link People
When subject matter expertise needed, or stakeholder awareness required.
Link Knowledge
When workunit relies on specific documentation, standards, or domain knowledge.

Asset Metadata Best Practices

Rich asset metadata helps AI models understand context without reading entire codebases or documentation.

System Assets
  • Technology stack and version
  • Repository URLs and paths
  • API documentation links
  • Environment dependencies
  • Key maintainers
Knowledge Assets
  • Document type and purpose
  • Last updated date
  • Authoritative source links
  • Related standards/specs
  • Key decision makers
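As a concrete illustration, a system asset's metadata might look like the following. Keys and values here are examples for discussion, not a fixed Workunit schema:

```python
# Example system-asset metadata covering the fields listed above.
# All names and URLs are hypothetical placeholders.
system_asset = {
    "name": "auth-service",
    "stack": "Node.js 20 / PostgreSQL 15",
    "repository": "https://github.com/example/auth-service",
    "api_docs": "https://example.com/docs/auth-api",
    "environment": ["REDIS_URL", "JWT_SECRET"],
    "maintainers": ["alice@example.com"],
}

assert "stack" in system_asset
```

A few lines of structured metadata like this let an AI model answer "what does this service run on and who owns it?" without ever opening the repository.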

Context Preservation

Context atoms are how Workunit preserves institutional knowledge. These practices ensure the "why" behind decisions never gets lost.

Daily Documentation Habits

Context Preservation Rituals
End-of-Session Summary
Before switching tasks, have AI write 200-word summary: what was done, why, and what's next.
Decision Documentation
When choosing between alternatives, document the choice AND why alternatives were rejected.
Gotcha Capture
Immediately document any surprising behavior, edge cases, or "that was weird" moments.
Weekly Review
Ask AI to generate weekly summary across all active workunits, identifying patterns and blockers.

Trail-of-Thought Practices

Trail-of-thought documentation captures the evolution of understanding, not just the final state.

Trail-of-Thought
"Initially chose MongoDB for flexibility, but switched to PostgreSQL after discovering our queries are primarily relational joins."
Shows the journey and reasoning evolution
Final State Only
"Using PostgreSQL for data storage."
Loses valuable context about why and the learning process

Productivity Patterns

Learn from common mistakes and proven efficiency patterns to maximize your team's velocity.

Common Anti-Patterns to Avoid

The "Re-Explanation Loop"
Spending hours re-explaining the same context to different AI models because it was never saved to the workunit.
Fix: Save context atoms after every significant session
The "Mega Workunit"
Creating one massive workunit for an entire feature with 50+ tasks instead of breaking into focused workunits.
Fix: One workunit per 5-15 related tasks, typically 1-2 weeks of work
The "Stale Status"
Forgetting to update task statuses, causing AI models to duplicate work or skip completed dependencies.
Fix: Update status immediately when starting/completing tasks
The "Single Model Trap"
Using only one AI model for everything instead of leveraging specialized strengths.
Fix: Use Claude for planning, GPT for execution, Gemini for analysis
The "Asset Void"
Not linking relevant assets, forcing AI models to work without critical system context.
Fix: Link all relevant systems, products, and knowledge assets upfront

Proven Efficiency Tips

Template Workunits
Create template workunits for common patterns (new API endpoint, database migration, UI component). Clone and customize instead of starting from scratch.
Batch Similar Tasks
Group similar tasks together in time blocks. Use the same AI model for all similar tasks to maintain context and reduce switching overhead.
URL-Based Navigation
Copy-paste workunit URLs to AI models instead of searching by name. It's the fastest way to share context: "Get this workunit: https://workunit.app/projects/f47ac10b-58cc-4372-a567-0e02b2c3d479/workunits/abc123"
Automated Status Reports
Schedule weekly prompts: "Generate progress report for all active workunits, highlight blockers." AI reads all statuses automatically.
Parallel Planning Sessions
Use Claude to plan 3-4 workunits in parallel during a focused planning session, then execute all week with GPT instances.
Context Snapshots
Before major changes, ask AI to write comprehensive context snapshot. Creates restore point if you need to backtrack.

Cloud Execution Tips

Get the most out of Cloud Execution's Explore and Implement modes with these proven practices:

Start with Explore Before Implement
Use Explore mode to generate task suggestions first, then review and refine them before running Implement. This two-step approach gives you better results and lower costs than jumping straight to implementation.
Write Specific Problem Statements
The AI agent uses your problem statement as its primary guide. "Add input validation to the user registration form" produces better results than "improve the app." Include file paths or component names when possible.
Review All Suggested Tasks
Don't accept every task the Explore agent suggests. Review each one critically, discard irrelevant suggestions, and edit task descriptions to match your exact requirements before implementing.
Always Review AI-Created Pull Requests
Treat AI-generated code like any other contribution. Review the diff, check test results, run your CI pipeline, and ensure the changes meet your coding standards before merging.
Monitor API Key Usage
Check your Sprites and OpenRouter dashboards periodically. Set spending limits on OpenRouter to prevent unexpected costs, especially when experimenting with larger models like Claude Opus.