Core Concepts
Understanding Workunit's fundamental building blocks: Workunits, Tasks, Assets, and AI Context Sharing
Last updated: October 2025
Overview
Workunit solves a fundamental problem: how do you share context between multiple AI models working on the same project? Traditional project management tools weren't built for AI collaboration. Workunit was designed from the ground up to be the universal context layer for multi-LLM workflows.
This guide explains Workunit's core concepts and how they work together to enable seamless collaboration between Claude, GPT, Gemini, and any future AI model.
Workunits
A workunit is the central organizing concept in Workunit. Think of it as a context-persistent container for a project, feature, or initiative. Unlike a simple task list, a workunit captures the why behind your work—the problem you're solving, the success criteria, and the full context that AI models need to collaborate effectively.
Example: User Authentication Workunit
- Name: "Implement JWT Authentication"
- Problem Statement: "Users need secure authentication to access protected resources. Current session-based auth doesn't scale horizontally."
- Success Criteria: "Users can log in with email/password, receive JWT tokens, and access protected endpoints. Tokens expire after 24 hours with refresh capability."
- Status: Active
Key Components
Name
A clear, descriptive title that identifies the work at a glance.
Problem Statement
The core problem you're solving. This is the most important field—it gives AI models the 'why' behind your work. Without it, AI can't make informed decisions about implementation approaches or trade-offs.
Success Criteria
How you know the work is complete. Defines the boundaries and expected outcomes. This helps AI models understand when to stop iterating and what 'done' looks like.
Status
Current state of the workunit: Active (currently being worked on), Paused (temporarily on hold), Completed (finished), or Archived (no longer relevant).
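The fields above can be sketched as a small data model. This is an illustrative sketch only, not Workunit's actual schema; the class and field names are assumptions based on the descriptions in this guide.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    """The four workunit statuses described above."""
    ACTIVE = "active"
    PAUSED = "paused"
    COMPLETED = "completed"
    ARCHIVED = "archived"

@dataclass
class Workunit:
    """Hypothetical shape of a workunit's key components."""
    name: str
    problem_statement: str
    success_criteria: str
    status: Status = Status.ACTIVE

# The JWT authentication example from above, in this sketch's terms:
auth = Workunit(
    name="Implement JWT Authentication",
    problem_statement="Current session-based auth doesn't scale horizontally.",
    success_criteria="Users can log in, receive JWT tokens, and access "
                     "protected endpoints; tokens expire after 24 hours.",
)
print(auth.status.value)  # active
```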
When to Create a Workunit
Create a workunit when you have work that:
- Requires multiple steps or tasks: A feature implementation, bug investigation, or infrastructure upgrade
- Benefits from AI collaboration: Planning, execution, code review, or analysis where different LLMs excel
- Needs context persistence: Work spanning multiple sessions where context loss would be costly
- Involves team coordination: Work where humans and AI models need to stay synchronized
Workunit Lifecycle
A workunit moves through the statuses described above: it starts Active, can be Paused while on hold, becomes Completed when its success criteria are met, and is eventually Archived once it is no longer relevant.
Tasks
Tasks are the actionable work items within a workunit. They break down the problem into concrete steps that can be assigned, tracked, and completed. Unlike traditional task lists, Workunit tasks are designed for AI collaboration—any AI model can see task status in real-time and update progress.
Task States
Each task carries a simple state, such as In Progress or Done, so every model and team member can see at a glance what is underway and what is finished.
Task Organization
Tasks within a workunit can be organized in several ways:
- Priority levels: High, Normal, Low. Helps AI models and team members focus on what matters most.
- Dependencies: Tasks can depend on other tasks. Prevents starting work that will be blocked.
- Assignment: Tasks can be assigned to team members. Unassigned tasks are available for any AI or human to pick up.
- Estimated effort: Optional time estimates help with planning and tracking progress.
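Picking the next task from such a list means respecting both priority and dependencies. A minimal sketch of that selection logic follows; the `Task` fields and the `next_available` helper are assumptions for illustration, not Workunit's API.

```python
from dataclasses import dataclass, field
from typing import Optional

PRIORITY_RANK = {"high": 0, "normal": 1, "low": 2}

@dataclass
class Task:
    id: str
    title: str
    priority: str = "normal"
    depends_on: list = field(default_factory=list)  # ids of prerequisite tasks
    done: bool = False
    assignee: Optional[str] = None

def next_available(tasks):
    """Highest-priority unassigned task whose dependencies are all done."""
    completed = {t.id for t in tasks if t.done}
    ready = [
        t for t in tasks
        if not t.done and t.assignee is None
        and all(dep in completed for dep in t.depends_on)
    ]
    return min(ready, key=lambda t: PRIORITY_RANK[t.priority], default=None)

tasks = [
    Task("t1", "Design database schema", priority="high", done=True),
    Task("t2", "Implement login endpoint", priority="high", depends_on=["t1"]),
    Task("t3", "Write docs", priority="low"),
]
print(next_available(tasks).id)  # t2: its dependency is done and it outranks t3
```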
AI Integration with Tasks
Real-World Workflow Example
1. Claude Sonnet 4.5 creates a workunit with 5 tasks for implementing authentication
2. Claude marks Task 1 "Design database schema" as In Progress
3. GPT-5 sees Task 1 is in progress and waits
4. Claude completes Task 1, marks it Done
5. GPT-5 immediately sees Task 1 is done, picks up Task 2 "Implement login endpoint"
6. Gemini joins, sees Tasks 1-2 are handled, starts Task 3 "Add password hashing"
7. All models work in parallel on independent tasks, synchronized in real-time
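The coordination pattern in the steps above boils down to a shared board where each model claims a task atomically, so no two models pick the same one. Here is a toy simulation of that idea; the `TaskBoard` class is a hypothetical stand-in for Workunit's real-time state, not its implementation.

```python
import threading

class TaskBoard:
    """Toy shared task board: claims are atomic, so work is never duplicated."""

    def __init__(self, tasks):
        self._lock = threading.Lock()
        self._states = {t: "todo" for t in tasks}

    def claim_next(self, model):
        """Atomically claim the first unclaimed task, or None if all are taken."""
        with self._lock:
            for task, state in self._states.items():
                if state == "todo":
                    self._states[task] = f"in_progress:{model}"
                    return task
            return None

    def complete(self, task):
        with self._lock:
            self._states[task] = "done"

board = TaskBoard(["design schema", "login endpoint", "password hashing"])
t1 = board.claim_next("claude")   # "design schema"
t2 = board.claim_next("gpt")      # "login endpoint" -- schema is already claimed
board.complete(t1)
t3 = board.claim_next("gemini")   # "password hashing"
```

In Workunit the board lives server-side and models reach it via MCP, but the invariant is the same: a task in progress under one model is never picked up by another.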
Assets
Assets are the organizational context that helps AI models understand your environment. They represent the people, products, systems, and knowledge that relate to your work. By linking assets to workunits, you give AI models the context they need to make informed decisions without lengthy explanations.
Workunit supports four types of assets, each serving a specific purpose:
People Assets
Who does the work: individuals, teams, departments, or contractors.
- Individual: "Sarah Chen (Backend Engineer)" with capabilities like Go, PostgreSQL, gRPC
- Team: "Platform Team" with 5 members and expertise in infrastructure
- Contractor: "Design Agency XYZ" with UI/UX capabilities
When linked to a workunit, people assets help AI models understand who's available, what skills are on the team, and who to assign tasks to based on capabilities.
Product Assets
What you deliver to customers: software, hardware, services, or any deliverable.
- Software: "Workunit Platform" with metadata like repository URL, version 2.1.0, production status
- API: "Workunit API v1" with details about endpoints, authentication, and SLA
- Service: "Premium Support Plan" with service levels and delivery times
Linking product assets to workunits tells AI models what you're building or improving. This context helps models understand constraints, integration points, and quality requirements.
System Assets
How you operate: infrastructure, processes, tools, and dependencies.
- Infrastructure: "PostgreSQL Database" with connection details, uptime SLA, backup schedule
- Service: "Authentication Service" with API endpoints, monitoring URLs, dependencies
- Tool: "CI/CD Pipeline" with build steps, deployment targets, and access controls
System assets give AI models critical context about your technical environment. Models can understand dependencies, identify risks, and suggest appropriate implementation approaches.
Knowledge Assets
Information and expertise: documentation, standards, templates, research, or any reference material.
- Documentation: "API Integration Guide" with content stored directly in Workunit or linked externally
- Standard: "Coding Standards" with version tracking and compliance requirements
- Research: "Performance Benchmarks" with test results and optimization findings
Knowledge assets ensure AI models follow your team's standards, patterns, and best practices. Instead of explaining your coding style to each model, link your standards document once.
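One way to picture what linked assets buy you: they can be flattened into a compact context block that a model reads instead of a lengthy explanation. The structure and helper below are illustrative assumptions, not Workunit's data format.

```python
# Hypothetical assets linked to the JWT authentication workunit,
# one entry per asset type described above.
linked_assets = {
    "people": [{"name": "Sarah Chen", "capabilities": ["Go", "PostgreSQL", "gRPC"]}],
    "product": [{"name": "Workunit API v1", "version": "2.1.0"}],
    "system": [{"name": "PostgreSQL Database", "uptime_sla": "99.9%"}],
    "knowledge": [{"name": "Coding Standards"}],
}

def context_summary(assets):
    """Flatten linked assets into short lines an AI model can read as context."""
    lines = []
    for kind, items in assets.items():
        for item in items:
            extras = {k: v for k, v in item.items() if k != "name"}
            suffix = f" ({extras})" if extras else ""
            lines.append(f"[{kind}] {item['name']}{suffix}")
    return "\n".join(lines)

print(context_summary(linked_assets))
```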
AI Context Sharing
This is Workunit's core value proposition: enabling multiple AI models to collaborate on the same work by sharing context in real-time. Traditional AI workflows require re-explaining your entire project every time you switch models. Workunit eliminates this friction.
Real-Time Status Tracking
When any AI model updates a workunit or task, the change is immediately visible to all other models. No more duplicate work. No more asking 'What did we already complete?'
How It Works
1. Claude marks a task "In Progress" via MCP
2. Workunit updates the task status in its database
3. GPT queries Workunit via MCP and sees the status change immediately
4. GPT skips that task and picks up the next available one
5. When Claude finishes, it marks the task "Done"
6. All models instantly see the task is complete when they next query Workunit
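Under the hood, step 1 is an MCP tool call. The JSON-RPC 2.0 envelope and the `tools/call` method come from the Model Context Protocol specification; the tool name `update_task` and its arguments are hypothetical examples, since Workunit's actual tool names may differ.

```python
import json

# A status-update request as an MCP tool call. The envelope follows the
# MCP spec; "update_task" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "update_task",  # hypothetical Workunit tool name
        "arguments": {"task_id": "task-2", "status": "in_progress"},
    },
}
print(json.dumps(request, indent=2))
```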
Use the Best LLM for Each Task
Different AI models have different strengths. Workunit lets you leverage each model's capabilities on the same project without losing context when you switch.
Context Preservation Across Sessions
Unlike chat conversations that lose context when you close the window, workunits persist indefinitely. AI models can pick up where they left off—even weeks later—by reading the workunit's problem statement, task history, and AI-generated context summaries.
- • Problem context: Why this work matters and what success looks like
- • Task history: What's been completed, what's in progress, what's blocked
- • Asset relationships: People, products, systems, and knowledge linked to the work
- • AI-generated insights: Summaries, decisions, and patterns discovered during development
- • Human annotations: Comments, decisions, and guidance from team members
Model Context Protocol (MCP)
The Model Context Protocol is the technical bridge that connects AI assistants to Workunit. Think of it as a standardized API that lets AI models read workunits, create tasks, update status, and access organizational context—all without leaving their native interface.
How MCP Works
1. Installation: You add Workunit's MCP server to your AI tool (like Claude Code or Gemini CLI) with a single command
2. Authentication: The first time the AI model tries to access Workunit, you authorize it via OAuth (like logging into an app with Google)
3. Tool Registration: The AI model discovers what Workunit capabilities are available (create workunit, list tasks, update status, etc.)
4. Natural Interaction: You talk to the AI normally: "Create a workunit for implementing search" or "Show me open tasks"
5. Background Execution: The AI model calls the appropriate Workunit MCP tools, gets the results, and presents them to you naturally
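The tool-registration step is itself a protocol exchange: the client sends a `tools/list` request and the server answers with the tools it exposes. The `tools/list` method is part of the MCP specification; the tool names in the sample response below are hypothetical guesses at what a Workunit server might expose.

```python
# Discovery request the AI tool sends during tool registration (per the MCP spec):
discover = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A plausible (hypothetical) response listing Workunit capabilities:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "create_workunit", "description": "Create a new workunit"},
            {"name": "list_tasks", "description": "List tasks in a workunit"},
            {"name": "update_task", "description": "Update a task's status"},
        ]
    },
}

tool_names = [tool["name"] for tool in response["result"]["tools"]]
print(tool_names)  # ['create_workunit', 'list_tasks', 'update_task']
```

After this exchange, the model knows exactly which Workunit operations it can invoke, which is what makes the "natural interaction" step possible.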
Getting Started with MCP
Ready to connect your AI tools to Workunit? Our MCP Integration Guide walks you through installation and setup for Claude Code, Gemini CLI, and other MCP-compatible tools.
MCP Integration Guide
See setup instructions for other tools and troubleshooting help
Next Steps
Now that you understand the core concepts, here's how to put them into practice:
Quick Start Guide
Create your first workunit and connect AI assistants in 10 minutes
MCP Integration
Complete guide to connecting Claude, Gemini, and other AI tools via Model Context Protocol
Create a Workunit
Start organizing your work with your first workunit
Add Assets
Build organizational context with products, systems, people, and knowledge assets
Questions About Core Concepts?
We're here to help you understand how Workunit can fit into your workflow.