Best Practices Guide

Learn proven patterns and workflows from successful teams using Workunit. Master workunit creation, AI collaboration, context preservation, and team productivity.

Last updated: October 2025

Overview

This guide distills insights from teams successfully using Workunit to build products faster with AI collaboration. These patterns help you avoid common pitfalls, maximize AI effectiveness, and maintain context across complex multi-model workflows.

Core Philosophy
Workunit is built on three principles: simplicity first, context preservation, and human-centered AI. Every best practice here serves one of these goals. Start simple, preserve the "why" behind decisions, and keep humans in control.

Workunit Creation

Great workunits start with clarity. The effort you invest in creating a well-defined workunit pays dividends throughout the entire lifecycle.

Effective Naming

Names should be clear, concise, and action-oriented. Think of them as headlines that instantly communicate what you're building.

Do This
  • • "Build JWT Authentication System"
  • • "Add Stripe Payment Integration"
  • • "Optimize Database Query Performance"
  • • "Implement User Profile Editing"
Clear verb + specific feature + context
Not This
  • • "Auth stuff"
  • • "Payment feature"
  • • "Make it faster"
  • • "User things"
Vague, no action verb, unclear scope

Problem Statements That Work

A strong problem statement answers three questions: What's broken? Why does it matter? Who does it affect?

Formula: Context + Problem + Impact
Context: Our application currently has no authentication system.
Problem: Any user can access protected resources and administrative functions without verification.
Impact: This exposes sensitive user data, creates security vulnerabilities, and prevents us from launching to production.
This gives AI models the full context they need to make informed implementation decisions.

Measurable Success Criteria

Success criteria should be specific, measurable, and testable. Avoid subjective terms like "better" or "improved"; if a criterion can't be verified by a test or a measurement, tighten it until it can (see the sketch after the lists below).

Measurable Criteria
  • • "API response time under 200ms for 95th percentile"
  • • "All passwords hashed with bcrypt cost factor 12"
  • • "Test coverage above 90% for authentication flows"
  • • "JWT tokens expire after 24 hours"
Vague Criteria
  • • "System should be fast"
  • • "Good security"
  • • "Well tested"
  • • "Tokens configured properly"

Task Management

Effective task breakdown enables parallel work by both humans and AI models. Good tasks are atomic, independent where possible, and clearly scoped.

Task Breakdown Strategy

The 2-8 Hour Rule
Each task should take 2-8 hours to complete. Smaller tasks finish too quickly to be worth tracking. Larger tasks hide complexity and create bottlenecks.
Too Small: "Write function to hash password" (30 minutes)
Too Large: "Implement entire authentication system" (40 hours)
Just Right: "Implement user registration endpoint with email validation, password hashing, and database persistence" (4-6 hours)
Pro Tip: Front-Load Planning Tasks
Use Claude Sonnet for initial task breakdown. It excels at decomposing complex features into logical, sequenced tasks that other models can execute efficiently.

Managing Dependencies

Explicitly mark which tasks can run in parallel and which have dependencies. This enables efficient multi-model collaboration.

Example: Authentication System Tasks
1. Design database schema (no dependencies; can start immediately)
2. Implement user registration endpoint (depends on Task 1)
3. Implement login endpoint (depends on Task 1; can run in parallel with Task 2)
4. Add authentication middleware (depends on Tasks 2 and 3)
5. Write integration tests (no dependencies; can start anytime, in parallel with all)
Tasks 2, 3, and 5 can run in parallel with different AI models, dramatically reducing total time; the sketch below shows one way to make that grouping explicit in data.
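
One way to make dependencies explicit is to record them as data alongside the tasks. The sketch below (TypeScript) uses an illustrative Task shape, not Workunit's actual data model, and groups tasks into waves that can safely run in parallel.

  interface Task {
    id: number;
    name: string;
    dependsOn: number[];
  }

  const tasks: Task[] = [
    { id: 1, name: "Design database schema", dependsOn: [] },
    { id: 2, name: "Implement user registration endpoint", dependsOn: [1] },
    { id: 3, name: "Implement login endpoint", dependsOn: [1] },
    { id: 4, name: "Add authentication middleware", dependsOn: [2, 3] },
    { id: 5, name: "Write integration tests", dependsOn: [] },
  ];

  // Group tasks into "waves": a task joins a wave once all of its dependencies are done.
  function planWaves(all: Task[]): Task[][] {
    const done = new Set<number>();
    let remaining = [...all];
    const waves: Task[][] = [];
    while (remaining.length > 0) {
      const ready = remaining.filter(t => t.dependsOn.every(d => done.has(d)));
      if (ready.length === 0) throw new Error("Circular dependency between tasks");
      waves.push(ready);
      ready.forEach(t => done.add(t.id));
      remaining = remaining.filter(t => !done.has(t.id));
    }
    return waves;
  }

  // planWaves(tasks) yields waves [1, 5], then [2, 3], then [4]:
  // tasks 2, 3, and 5 can be handed to different models at the same time.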

Consistent Status Updates

Real-time status updates prevent duplicate work and enable effective multi-model collaboration.

Status Lifecycle
To Do → In Progress → Done
When to Update
  • Mark "In Progress" immediately when starting work
  • Mark "Done" as soon as task is complete and verified
  • Mark "Blocked" if dependencies are missing or issues arise
  • Update estimated completion time if scope changes
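
If you script around task statuses, it can help to treat the lifecycle as explicit data. The sketch below (TypeScript) encodes the states and transitions the guidance above implies; it is illustrative only, not Workunit's API.

  type TaskStatus = "To Do" | "In Progress" | "Blocked" | "Done";

  // Allowed transitions implied by the lifecycle and the update rules above.
  const allowedTransitions: Record<TaskStatus, TaskStatus[]> = {
    "To Do": ["In Progress"],
    "In Progress": ["Blocked", "Done"],
    "Blocked": ["In Progress"], // resume once the blocker is resolved
    "Done": [],                 // completed, verified work is not reopened here
  };

  function canTransition(from: TaskStatus, to: TaskStatus): boolean {
    return allowedTransitions[from].includes(to);
  }

  // canTransition("To Do", "Done") === false: work should pass through "In Progress"
  // so teammates and other models can see that it is being worked on.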

AI Context Writing

AI context is your project's living memory. Well-written context enables future AI models to pick up where previous models left off without losing critical insights.

When to Write Context

Critical Moments for Context Updates
After Completing Significant Work
"We just implemented the JWT authentication flow. Document what we learned about token expiry and refresh patterns."
Preserves implementation insights while they're fresh
When Making Architecture Decisions
"Document why we chose PostgreSQL over MongoDB for user data, including scalability and query pattern considerations."
Captures the "why" behind technical choices
After Discovering Patterns or Gotchas
"Add notes about the race condition we found in concurrent webhook processing and how we solved it with idempotency keys."
Prevents future models from repeating mistakes
Before Switching Context
"Summarize current state and next steps before we move to the payment integration workunit."
Creates clean handoff points for resuming later

What to Include in Context

Essential Context Elements
Technical Decisions & Rationale
Not just what you chose, but why you chose it over alternatives
Implementation Patterns
Code patterns, architectural approaches, library usage patterns
Gotchas & Edge Cases
Things that didn't work, edge cases discovered, pitfalls to avoid
Dependencies & Integration Points
How this connects to other systems, required environment setup
Testing Strategy
What's tested, what's not, known test gaps, test data setup
Next Steps & Open Questions
What should happen next, unresolved questions, areas needing improvement

Format Guidelines

Use markdown formatting for readability and structure. AI models parse markdown well, and humans appreciate the clarity.

Example AI Context Structure
# Session Summary - October 10, 2025

## Work Completed
- Implemented JWT authentication with access/refresh token pattern
- Added bcrypt password hashing with cost factor 12
- Created middleware for route protection
- Wrote integration tests covering happy path and error cases

## Technical Decisions

**JWT vs Sessions**: Chose JWT for horizontal scaling capability

**Token Expiry**: 15-minute access tokens, 7-day refresh tokens
- Balances security (short-lived access) with UX (less frequent re-auth)

**Storage**: Refresh tokens in PostgreSQL with user_id index
- Enables token revocation for security

## Patterns Discovered
- Middleware chain pattern works well: auth → validation → handler
- Token refresh should be separate endpoint, not middleware
- Store token issued_at timestamp for future revocation needs

## Gotchas Encountered
- bcrypt cost factor >12 causes noticeable latency on login
- JWT secret must be >32 bytes for HS256 algorithm
- Refresh token rotation prevents token replay attacks

## Next Steps
- Implement password reset flow with time-limited tokens
- Add rate limiting to auth endpoints (prevent brute force)
- Consider adding 2FA support for Unlimited tier users

## Open Questions
- Should we implement "remember me" functionality?
- Need to decide on token revocation strategy for logout
Aim for 500-2000 words. Focus on insights that help future work, not just a changelog.

Multi-LLM Workflows

The real power of Workunit emerges when you orchestrate multiple AI models, each playing to their strengths on the same workunit.

Choosing the Right Model

Claude Sonnet 4.5 - Strategic Phase
  • Initial project planning and task decomposition
  • Architecture design and system design reviews
  • Security audits and compliance analysis
  • Writing comprehensive problem statements
  • Code review requiring deep reasoning
GPT-5 - Execution Phase
  • Implementing well-defined features rapidly
  • Writing tests and boilerplate code
  • Bug fixing and debugging sessions
  • API endpoint implementation
  • Refactoring and code cleanup
Gemini - Analysis Phase
  • Performance profiling and optimization
  • Log analysis and pattern detection
  • Code quality audits at scale
  • Data migration and transformation
  • Visual diagram analysis

Parallel Execution Strategies

Example: 3-Model Parallel Workflow
9:00 AM - Claude: Planning
Create workunit, break into 8 tasks, document architecture decisions. Tasks 3, 4, 5 marked as parallelizable.
10:00 AM - Launch Parallel Execution
  • GPT Instance 1: Implements Task 3 (user registration)
  • GPT Instance 2: Implements Task 4 (login endpoint)
  • Gemini: Works on Task 5 (integration tests)
12:00 PM - Status Check
All three tasks completed. Claude reviews the combined work, suggests improvements. Total time: 2 hours instead of 6 hours sequential.
Result: a 67% time reduction through parallel execution with multiple models. The sketch below shows what the parallel dispatch can look like in code.
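
In code, the parallel phase amounts to dispatching independent tasks concurrently and waiting for all of them. The sketch below (TypeScript) assumes a hypothetical runTask helper that prompts one model with one task; none of these names are real Workunit or model-provider APIs.

  type Model = "gpt-1" | "gpt-2" | "gemini";

  // Placeholder: in practice this would send the chosen model the task plus the
  // workunit's AI context, then resolve when the task is marked Done.
  async function runTask(model: Model, taskId: number): Promise<string> {
    return `${model} finished task ${taskId}`;
  }

  async function parallelPhase(): Promise<void> {
    // Tasks 3, 4, and 5 were marked parallelizable during planning,
    // so they can be dispatched to different models at the same time.
    const results = await Promise.all([
      runTask("gpt-1", 3),  // user registration
      runTask("gpt-2", 4),  // login endpoint
      runTask("gemini", 5), // integration tests
    ]);
    console.log(results.join("\n"));
    // A reviewing model (e.g. Claude) would then read the combined work.
  }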

Smooth Context Handoffs

The key to multi-model success is ensuring each model has full context when they pick up work.

Before Handoff
  • Update all task statuses to current state
  • Write AI context documenting work completed
  • Note any blockers or dependencies discovered
  • Document patterns or gotchas for next model
During Handoff
  • Explicitly tell next model to read AI context first
  • Point to specific tasks ready for pickup
  • Mention any context from linked assets
Example Handoff Prompt
"Get workunit 'Payment Integration' with full AI context. Read Claude's architecture notes, then implement Task 4 (webhook validation). Dependencies from Task 2 are complete."

Team Collaboration

Effective teams treat AI models as team members while maintaining clear human accountability and decision-making authority.

Communication Patterns

Human-AI Communication Protocol
Daily Standups with AI Context
Start each day by having an AI model summarize overnight progress: "Summarize all workunits updated in the last 24 hours with status changes and blockers."
Async Updates via AI Context
Team members working in different timezones use AI context as async communication. Each person's AI writes context updates that the next timezone reads.
AI-Generated Progress Reports
"Generate a progress report for all active workunits this week for the team meeting." AI synthesizes status from all workunits automatically.

Handoff Protocols

Clean Handoff Checklist
  • All task statuses current (no stale "In Progress")
  • AI context updated with recent work
  • Blockers documented with context
  • Next steps clearly identified
  • Dependencies resolved or documented
  • Code committed and pushed (if applicable)

Asset Organization

Well-organized assets provide critical context to AI models without cluttering individual workunits.

Strategic Asset Linking

When to Link Assets
Link Products
When workunit affects user-facing product features or requires product context for decisions.
Link Systems
When workunit modifies or integrates with specific technical systems (databases, APIs, services).
Link People
When subject-matter expertise is needed or stakeholder awareness is required.
Link Knowledge
When workunit relies on specific documentation, standards, or domain knowledge.

Asset Metadata Best Practices

Rich asset metadata helps AI models understand context without reading entire codebases or documentation. The sketch after the lists below shows one way to capture it as structured data.

System Assets
  • Technology stack and version
  • Repository URLs and paths
  • API documentation links
  • Environment dependencies
  • Key maintainers
Knowledge Assets
  • Document type and purpose
  • Last updated date
  • Authoritative source links
  • Related standards/specs
  • Key decision makers
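
As a concrete illustration, system-asset metadata can be captured as a small structured record. The field names below are illustrative (TypeScript), not Workunit's actual schema, and the URLs are placeholders.

  interface SystemAssetMetadata {
    name: string;
    stack: string;         // technology stack and version
    repositoryUrl: string; // repository URL or path
    apiDocsUrl?: string;   // API documentation link
    environment: string[]; // environment dependencies
    maintainers: string[]; // key maintainers
  }

  const authService: SystemAssetMetadata = {
    name: "Auth Service",
    stack: "Node.js 20 / PostgreSQL 16",
    repositoryUrl: "https://example.com/org/auth-service",
    apiDocsUrl: "https://example.com/org/auth-service/docs",
    environment: ["DATABASE_URL", "JWT_SECRET"],
    maintainers: ["Backend team"],
  };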

Context Preservation

Context preservation is Workunit's superpower. These practices ensure the "why" behind decisions never gets lost.

Daily Documentation Habits

Context Preservation Rituals
End-of-Session Summary
Before switching tasks, have the AI write a 200-word summary: what was done, why, and what's next.
Decision Documentation
When choosing between alternatives, document the choice AND why alternatives were rejected.
Gotcha Capture
Immediately document any surprising behavior, edge cases, or "that was weird" moments.
Weekly Review
Ask AI to generate weekly summary across all active workunits, identifying patterns and blockers.

Trail-of-Thought Practices

Trail-of-thought documentation captures the evolution of understanding, not just the final state.

Trail-of-Thought
"Initially chose MongoDB for flexibility, but switched to PostgreSQL after discovering our queries are primarily relational joins."
Shows the journey and reasoning evolution
Final State Only
"Using PostgreSQL for data storage."
Loses valuable context about why and the learning process

Productivity Patterns

Learn from common mistakes and proven efficiency patterns to maximize your team's velocity.

Common Anti-Patterns to Avoid

The "Re-Explanation Loop"
Spending hours explaining the same context to different AI models because you didn't write it to the workunit.
Fix: Update AI context after every significant session
The "Mega Workunit"
Creating one massive workunit for an entire feature with 50+ tasks instead of breaking into focused workunits.
Fix: One workunit per 5-15 related tasks, typically 1-2 weeks of work
The "Stale Status"
Forgetting to update task statuses, causing AI models to duplicate work or skip completed dependencies.
Fix: Update status immediately when starting/completing tasks
The "Single Model Trap"
Using only one AI model for everything instead of leveraging specialized strengths.
Fix: Use Claude for planning, GPT for execution, Gemini for analysis
The "Asset Void"
Not linking relevant assets, forcing AI models to work without critical system context.
Fix: Link all relevant systems, products, and knowledge assets upfront

Proven Efficiency Tips

Template Workunits
Create template workunits for common patterns (new API endpoint, database migration, UI component). Clone and customize instead of starting from scratch; see the sketch after these tips.
Batch Similar Tasks
Group similar tasks together in time blocks. Use the same AI model for all similar tasks to maintain context and reduce switching overhead.
URL-Based Navigation
Paste workunit URLs directly to AI models instead of searching by name. It's the fastest way to share context: "Get this workunit: https://workunit.app/workunits/abc123"
Automated Status Reports
Schedule weekly prompts: "Generate progress report for all active workunits, highlight blockers." AI reads all statuses automatically.
Parallel Planning Sessions
Use Claude to plan 3-4 workunits in parallel during a focused planning session, then execute all week with GPT instances.
Context Snapshots
Before major changes, ask AI to write comprehensive context snapshot. Creates restore point if you need to backtrack.
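
As a sketch of the template idea above, a template workunit can be kept as a small reusable structure that you copy and fill in. The shape below (TypeScript) is hypothetical, not Workunit's API.

  interface WorkunitTemplate {
    name: string;
    problemStatement: string;
    tasks: string[];
  }

  const apiEndpointTemplate: WorkunitTemplate = {
    name: "New API Endpoint: <name>",
    problemStatement: "Describe the context, problem, and impact here.",
    tasks: [
      "Design request/response schema",
      "Implement endpoint handler",
      "Add validation and error handling",
      "Write integration tests",
      "Update API documentation",
    ],
  };

  // Clone and customize instead of starting from scratch.
  function cloneTemplate(template: WorkunitTemplate, endpointName: string): WorkunitTemplate {
    return {
      ...template,
      name: template.name.replace("<name>", endpointName),
      tasks: [...template.tasks],
    };
  }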

Next Steps

Ready to implement these best practices? Start with these resources:

Questions About Best Practices?

These practices evolved from real teams building real products. We're here to help you adapt them to your workflow.