Workunit Workflows Guide

Master effective workunit management patterns for individuals and small teams. Learn lifecycle management, organization strategies, and proven workflows for feature development, bug fixes, and research.

Last updated: October 2025

Overview

Workunits are the core organizing principle in Workunit. Each workunit represents a discrete unit of work with a clear problem to solve and measurable success criteria. This guide teaches you how to manage workunits effectively, from creation through completion.

Whether you're building features, fixing bugs, conducting research, or refactoring code, mastering workunit workflows will help you maintain context, collaborate with AI, and deliver consistently without complexity.

Workunit Lifecycle

Every workunit moves through a lifecycle from creation to completion. Understanding these states helps you manage work effectively and communicate status to your team and AI assistants.

Active: Work in Progress

Active workunits are your current focus
These are workunits you're actively working on or plan to work on soon. They appear at the top of your workunit list and are the primary context for AI assistants.
When to use Active:
  • You're actively coding, designing, or implementing
  • Tasks are in progress with clear next steps
  • No blockers preventing immediate progress
  • Team members or AI are working on related tasks
Best practices:
  • Keep 2-5 active workunits maximum (avoid context switching)
  • Update task status regularly to maintain momentum
  • Add AI context frequently during active development
  • Mark tasks "in progress" to signal focus areas

Paused: Temporarily Blocked

Paused workunits are temporarily on hold
Use paused status when work is blocked by external dependencies, waiting for decisions, or deprioritized temporarily. Paused workunits preserve context but don't clutter your active list.
When to use Paused:
  • Waiting for third-party API access or credentials
  • Blocked by upstream dependencies (other workunits)
  • Awaiting design feedback or stakeholder decisions
  • Temporarily deprioritized but will resume later
Best practices:
  • Document why it's paused in the AI context or notes
  • Set a reminder or due date for when to revisit
  • Create tasks for unblocking actions (e.g., "Request API keys")
  • Review paused workunits weekly to check if blockers are resolved

Completed: Success Criteria Met

Completed workunits achieved their goals
Mark a workunit complete when all success criteria are met, code is deployed (if applicable), and the original problem is solved. Completed workunits serve as valuable context for future work.
When to mark Complete:
  • All success criteria are verifiably met
  • Code is merged, reviewed, and deployed (for development)
  • Documentation is updated and stakeholders notified
  • No follow-up tasks remain for this specific problem
Best practices:
  • Write a completion summary in AI context documenting outcomes
  • Mark all tasks as done before completing the workunit
  • Link to deployment, PR, or final deliverables
  • Reflect on lessons learned for future reference

Archived: No Longer Relevant

Archived workunits are no longer relevant
Archive workunits that became unnecessary, were superseded by other work, or represent abandoned approaches. Archiving preserves historical context without cluttering your workspace.
When to Archive:
  • Requirements changed and work is no longer needed
  • Superseded by a different approach or workunit
  • Experimental work that didn't pan out
  • Duplicate workunits created by mistake
Best practices:
  • Document why it's archived (prevents future confusion)
  • Link to superseding workunit if applicable
  • Preserve insights in AI context for historical reference
  • Clean up archived workunits periodically to reduce clutter

Creating Effective Workunits

A well-defined workunit sets the foundation for successful execution. The quality of your problem statement, success criteria, and task breakdown directly impacts how effectively you and your AI assistants can collaborate.

Writing Problem Statements

A problem statement explains what you're solving and why it matters. Good problem statements provide context for AI assistants and future collaborators to understand the purpose and constraints.

Good Problem Statement
Workunit: Implement Rate Limiting for API
Our public API currently has no rate limiting, allowing unlimited requests per user. This creates several problems: (1) potential abuse by malicious actors exhausting server resources, (2) unfair resource distribution where heavy users impact performance for all users, (3) no way to enforce tier-based usage limits for our pricing model. Production metrics show 3 users making 10,000+ requests/hour while average users make ~50/hour.
Why it's good: Explains the problem, impact, and provides quantitative context. AI assistants understand both what to build and why.
Weak Problem Statement
Workunit: Add Rate Limiting
We need to add rate limiting to the API.
Why it's weak: No context on why, no explanation of current state, no data. AI assistants lack information to make informed implementation decisions.
Pro Tip: The "5 Whys" Technique
If your problem statement feels shallow, ask "why" five times to uncover the root problem. This helps you articulate deeper context that guides better solutions.

Defining Success Criteria

Success criteria are specific, measurable conditions that define when the workunit is complete. They prevent scope creep and provide clear completion targets for you and AI assistants.

Good Success Criteria Are:
  • Specific: "API responds with 429 status when limit exceeded" not "Rate limiting works"
  • Measurable: "100 requests/minute per user" not "Reasonable rate limits"
  • Testable: "Integration tests verify limit enforcement" not "Code looks good"
  • Complete: Include deployment, documentation, and verification steps
Example Success Criteria:
  • API enforces 100 requests/minute limit per authenticated user
  • Returns 429 status with Retry-After header when limit exceeded
  • Redis-backed rate limiting with 1-minute sliding window
  • Different limits for Free (100/min), Pro (1000/min), Enterprise (10000/min) tiers
  • Integration tests covering limit enforcement and tier-based limits
  • Deployed to production with monitoring alerts for high 429 rates
  • API documentation updated with rate limit details
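Criteria like these are concrete enough to prototype directly. A minimal in-memory sketch of a sliding-window limiter with the tier limits above (the class and method names are illustrative; the criteria call for a Redis-backed version in production):

```python
import time
from collections import defaultdict, deque

# Tier limits from the success criteria above (requests per minute).
TIER_LIMITS = {"free": 100, "pro": 1000, "enterprise": 10000}

class SlidingWindowLimiter:
    """In-memory stand-in for the Redis-backed limiter the criteria describe."""

    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.requests = defaultdict(deque)  # user_id -> recent request timestamps

    def allow(self, user_id, tier="free", now=None):
        now = time.monotonic() if now is None else now
        q = self.requests[user_id]
        # Evict timestamps that have fallen out of the sliding window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= TIER_LIMITS[tier]:
            return False  # caller should respond 429 with a Retry-After header
        q.append(now)
        return True
```

A production version would replace the in-memory deque with a Redis sorted set so limits hold across multiple API servers.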

Breaking Down Tasks

Tasks are the actionable steps that lead to completing the workunit. Good task breakdown enables parallel work, clear progress tracking, and effective AI collaboration.

Task Size Guidelines:
  • Each task should take 1-4 hours for an experienced developer
  • Tasks over 4 hours should be broken into smaller tasks
  • Target 5-10 tasks per workunit for most work (sweet spot for clarity)
  • Small workunits (bug fixes, research) may have 2-5 tasks
  • Large features can extend to 15-20 tasks when necessary
  • One task = one deliverable (e.g., "schema designed" or "tests passing")
Ordering Tasks:
  • Start with foundational tasks (design, schema, architecture)
  • Follow with implementation tasks (core logic, features)
  • End with verification tasks (tests, documentation, deployment)
  • Identify which tasks can run in parallel for AI collaboration
Writing Task Descriptions:
  • Use action verbs: "Implement", "Design", "Write", "Deploy"
  • Be specific: "Implement Redis-backed rate limiter with sliding window" not "Add rate limiting"
  • Include acceptance criteria: what "done" looks like for this task
  • Note dependencies: "Requires task 2 (schema) to be complete first"

Organizing Your Work

Effective organization helps you focus on the right work at the right time. Workunit provides simple but powerful tools: priorities, due dates, and tags.

Using Priorities

Priorities help you focus on what matters most. Workunit uses four priority levels: urgent, high, normal, and low.

Urgent: Drop Everything
Production outages, critical security vulnerabilities, blocking bugs affecting revenue. Urgent workunits should be rare (1-2% of total work).
Example: "API returning 500 errors for all payment requests"
High: Next in Queue
Important features with deadlines, significant bugs affecting user experience, time-sensitive improvements. Work on these after urgent items are resolved.
Example: "Implement OAuth login before beta launch next week"
Normal: Standard Work
Regular feature development, routine maintenance, non-blocking bugs. This should be 70-80% of your workunits. Default priority for new workunits.
Example: "Add email notification preferences to user settings"
Low: Nice to Have
Minor improvements, cosmetic fixes, speculative features. Work on these during slow periods or when waiting for feedback on higher-priority items.
Example: "Refactor utility functions for better code organization"
Avoid Priority Inflation
If everything is high priority, nothing is. Reserve "urgent" for true emergencies and "high" for time-sensitive work. Most work should be "normal" priority.

Setting Due Dates

Due dates create urgency and help with time management. Use them strategically to maintain focus without creating artificial stress.

When to Set Due Dates:
  • External deadlines (product launches, compliance requirements)
  • Coordinated releases (features needed by other teams)
  • Time-boxed experiments ("evaluate viability within 1 week")
  • Personal accountability (prevent infinite procrastination)
When NOT to Set Due Dates:
  • Exploratory work with undefined scope
  • Research with uncertain time requirements
  • Low-priority improvements with no urgency
  • Work blocked by external dependencies beyond your control
Setting Realistic Dates:
  • Estimate task durations, then add 50% buffer for unknowns
  • Consider current workload and competing priorities
  • Account for review cycles, testing, and deployment time
  • Adjust dates as new information emerges (don't anchor to outdated estimates)
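The buffering rule above is simple arithmetic, and it helps to make it explicit. A quick estimate helper, assuming 6 focus hours per working day (that figure, like the function name, is an illustrative assumption, not a Workunit feature):

```python
import math
from datetime import date, timedelta

def estimated_due_date(start, task_hours, buffer=0.5, focus_hours_per_day=6):
    """Sum per-task estimates, add a buffer for unknowns, convert to days."""
    total_hours = sum(task_hours) * (1 + buffer)
    days = math.ceil(total_hours / focus_hours_per_day)
    return start + timedelta(days=days)
```

For example, four tasks estimated at 2, 3, 4, and 3 hours sum to 12 hours; with the 50% buffer that is 18 hours, or 3 focus days.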

Tagging for Organization

Tags are flexible labels that help you categorize and filter workunits. Unlike priorities and due dates, you can apply multiple tags to each workunit.

Common Tagging Strategies:
By Type: feature, bug, refactor, research, documentation
By Component: api, frontend, database, auth, payments
By Domain: security, performance, ux, devops, testing
Tagging Best Practices:
  • Keep tag names short and lowercase (easier to type and remember)
  • Use 2-5 tags per workunit (more tags = harder to find relevant work)
  • Establish team conventions for common tags (prevents tag proliferation)
  • Combine tags for powerful filtering (e.g., "bug + api + security")
  • Periodically review and consolidate similar tags (merge "docs" and "documentation")
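Combined-tag filtering is just set intersection: a workunit matches when it carries every tag you asked for. A minimal sketch (the sample workunits and function name are illustrative, not the Workunit API):

```python
# Each workunit carries a set of tags; AND-filtering is a subset test.
workunits = [
    {"title": "Fix auth token leak", "tags": {"bug", "api", "security"}},
    {"title": "Add email prefs", "tags": {"feature", "frontend"}},
    {"title": "Patch rate limiter", "tags": {"bug", "api"}},
]

def filter_by_tags(items, *required):
    """Return workunits carrying every required tag (AND semantics)."""
    want = set(required)
    return [w for w in items if want <= w["tags"]]
```

Filtering on "bug + api" returns both API bugs; adding "security" narrows the result to the token leak alone.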

Common Workflows

Learn proven patterns for common types of work. These workflows show how to structure workunits for different scenarios, from feature development to bug fixes.

Feature Development

Workflow Pattern:
Plan → Design → Implement → Test → Deploy → Document
Example: User Profile Customization
Problem: Users can't customize their profiles. They've requested the ability to add profile photos, bio text, and social links. Analytics show 60% of users visit the profile settings page, indicating interest in personalization.
Success Criteria:
  • Users can upload profile photos (max 5MB, jpg/png)
  • Users can add bio text (max 500 characters)
  • Users can add 3 social links (Mastodon, GitHub, LinkedIn)
  • Changes visible immediately on public profiles
  • Tests cover file upload, validation, and display
  • Deployed with feature flag for gradual rollout
Tasks:
  1. Design database schema for profile fields
  2. Implement file upload endpoint with S3 storage
  3. Create profile editing UI with validation
  4. Build public profile display page
  5. Write integration tests for profile updates
  6. Add feature flag and monitoring
  7. Deploy to staging for QA testing
  8. Update user documentation and help articles
Priority: Normal | Tags: feature, frontend, backend, user-experience
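Success criteria this specific translate almost directly into validation code (task 3 above). A sketch of the checks, where the function name, constants, and error strings are illustrative:

```python
# Limits taken from the success criteria above.
MAX_PHOTO_BYTES = 5 * 1024 * 1024      # max 5MB
ALLOWED_TYPES = {"image/jpeg", "image/png"}
MAX_BIO_CHARS = 500

def validate_profile_update(content_type, photo_bytes, bio=""):
    """Return a list of validation errors; an empty list means the update is valid."""
    errors = []
    if content_type not in ALLOWED_TYPES:
        errors.append("photo must be jpg or png")
    if photo_bytes > MAX_PHOTO_BYTES:
        errors.append("photo exceeds 5MB")
    if len(bio) > MAX_BIO_CHARS:
        errors.append("bio exceeds 500 characters")
    return errors
```

Collecting all errors (rather than failing on the first) lets the editing UI show every problem at once.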

Bug Fixes

Workflow Pattern:
Reproduce → Root Cause → Fix → Test → Deploy → Verify
Example: Email Verification Not Sending
Problem: Users report not receiving email verification emails after registration. Logs show 15% of registrations fail to send emails, with SMTP timeouts in error logs. This blocks users from accessing their accounts and is causing support ticket volume to increase.
Success Criteria:
  • 99%+ of verification emails successfully sent
  • SMTP timeouts eliminated or handled with retry logic
  • Monitoring alerts set up for email delivery failures
  • Tests verify retry behavior and failure handling
  • Deployed fix verified in production with metrics
Tasks:
  1. Reproduce issue in development environment
  2. Analyze SMTP logs and identify root cause (connection pooling?)
  3. Implement fix: Add connection pool management + retry logic
  4. Write tests for email sending with simulated failures
  5. Deploy to staging and verify with load testing
  6. Add monitoring dashboard for email delivery rates
  7. Deploy to production with gradual rollout
  8. Monitor for 48 hours to confirm resolution
Priority: High | Tags: bug, backend, email, reliability
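The retry logic from task 3 can be sketched as exponential backoff around the send call. Everything here is illustrative (`send_fn` stands in for the real SMTP client, and catching TimeoutError is a simplification of SMTP failure modes):

```python
import time

def send_with_retry(send_fn, message, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry a flaky send function with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(attempts):
        try:
            return send_fn(message)
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure to monitoring
            sleep(base_delay * 2 ** attempt)
```

Injecting `sleep` as a parameter makes the retry behavior testable without real delays, which is exactly what task 4 ("simulated failures") needs.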

Research & Exploration

Workflow Pattern:
Define Question → Research → Prototype → Evaluate → Document Findings
Example: Evaluate Real-time Collaboration Libraries
Problem: We're considering adding real-time collaboration features (cursor presence, live edits) to our document editor. Need to evaluate technical approaches (WebSockets vs WebRTC vs CRDT libraries) and assess complexity, performance, and maintenance burden before committing.
Success Criteria:
  • • Evaluate 3 technical approaches with working prototypes
  • • Document pros/cons of each approach (complexity, performance, cost)
  • • Recommendation with rationale for chosen approach
  • • Estimated implementation timeline and resource requirements
  • • AI context documenting research findings for future reference
Tasks:
  1. Research available libraries (Yjs, Automerge, ShareDB)
  2. Build minimal prototype with WebSocket approach
  3. Build minimal prototype with CRDT library (Yjs)
  4. Performance testing: measure latency and memory usage
  5. Evaluate developer experience and documentation quality
  6. Cost analysis: server resources, third-party services
  7. Write recommendation document with trade-offs
  8. Present findings to team for decision
Priority: Normal | Tags: research, real-time, prototyping, architecture | Due: 2 weeks
Research Tip: Time-box Exploration
Set a deadline for research workunits to prevent endless exploration. Make the best decision you can with available information by the deadline, then move forward.

Refactoring

Workflow Pattern:
Identify Problem → Add Tests → Refactor → Verify → Document Changes
Example: Consolidate Payment Processing Logic
Problem: Payment processing logic is duplicated across 5 different endpoints (subscriptions, one-time purchases, upgrades, downgrades, refunds). Each implementation has subtle differences, making bug fixes error-prone. Code review identified 3 inconsistencies in error handling between endpoints.
Success Criteria:
  • • Single PaymentProcessor service handles all payment types
  • • All 5 endpoints migrated to use centralized service
  • • Existing tests pass without modification (behavior unchanged)
  • • Code coverage maintained or improved (≥90%)
  • • No production errors during migration
  • • Architecture documentation updated
Tasks:
  1. Audit existing payment logic and document differences
  2. Design unified PaymentProcessor interface
  3. Add comprehensive tests for existing behavior
  4. Implement PaymentProcessor service
  5. Migrate subscription endpoint (deploy + monitor)
  6. Migrate remaining endpoints one at a time
  7. Remove deprecated code after migration complete
  8. Update architecture docs and code comments
Priority: Normal | Tags: refactor, backend, payments, technical-debt
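The unified interface from task 2 might look like a single service that endpoints register handlers with, so validation and error handling live in one place. A sketch under that assumption (class, method, and type names are illustrative, not the real codebase):

```python
class PaymentProcessor:
    """Single entry point replacing the five duplicated implementations."""

    def __init__(self):
        self._handlers = {}  # payment_type -> handler callable

    def register(self, payment_type, handler):
        self._handlers[payment_type] = handler

    def process(self, payment_type, amount_cents):
        # Shared validation and error handling live here, not per endpoint,
        # eliminating the inconsistencies code review found.
        if payment_type not in self._handlers:
            raise ValueError(f"unknown payment type: {payment_type}")
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        return self._handlers[payment_type](amount_cents)
```

Each of the five endpoints then shrinks to a `register` call plus a thin HTTP layer, which is what makes the one-at-a-time migration in tasks 5-6 low-risk.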

Best Practices

Apply these principles to maintain effective workunit management over time:

Keep Workunits Focused
One workunit = one problem. If you find yourself saying "and also we should...", create a separate workunit. Focused workunits are easier to complete, track, and collaborate on with AI.
Update Regularly
Mark tasks complete as you finish them, update AI context after significant work, and adjust priorities as circumstances change. Stale workunits lose value quickly.
Write for Future You
In 3 months, will you understand why this work mattered? Write problem statements and AI context for someone with no memory of the current situation. Your future self (and AI assistants) will thank you.
Link Related Work
Use asset links to connect workunits to products, systems, and related workunits. This creates a web of context that helps AI assistants understand relationships and dependencies.
Review and Archive
Weekly: Review paused workunits for unblocking. Monthly: Archive completed workunits older than 30 days. Quarterly: Review all workunits for relevance. A clean workspace maintains focus.
Celebrate Completion
When you complete a workunit, take a moment to reflect on what you learned. Update AI context with insights, link to deployed work, and acknowledge progress. Completion is worth celebrating.

Questions About Workflows?

We're here to help you find the workflow patterns that work best for your team.