AI Features Guide
How Workunit's multi-model features, context sharing, and workflow tools work together
Overview
Workunit is designed from the ground up for AI collaboration. Unlike traditional project-management tools retrofitted with AI features, Workunit treats AI models as first-class team members, enabling seamless context sharing, real-time synchronization, and workflow automation across multiple LLMs.
This guide explores Workunit's AI features in depth, showing you how to leverage multi-LLM collaboration, preserve context across sessions, and accelerate your development workflow with AI-powered assistance.
Multi-LLM Collaboration
This is Workunit's core value proposition: enabling multiple AI models to collaborate on the same project by sharing context in real-time. Stop re-explaining your project every time you switch models. Start leveraging the unique strengths of Claude, GPT, Gemini, and future models—all working from the same shared context.
Why Use Multiple AI Models?
Different AI models excel at different tasks. By using the right model for each phase of your work, you can dramatically improve both speed and quality:
Claude excels at:
- Breaking down complex problems into clear task hierarchies
- Designing system architecture with security and scalability in mind
- Writing comprehensive problem statements and success criteria
- Code review and architectural analysis

GPT excels at:
- Fast code generation and feature implementation
- Writing tests and boilerplate code efficiently
- Debugging and fixing implementation issues
- Executing well-defined tasks from the plan

Gemini excels at:
- Analyzing performance metrics and identifying bottlenecks
- Processing large datasets and generating insights
- Code quality analysis and optimization suggestions
- Visual reasoning and diagram interpretation
Real-World Multi-Model Workflows
Here's how teams use multiple models together on the same workunit:
Example: Building a Payment Integration

Step 1: Claude plans the work, producing
- Detailed problem statement with security considerations
- 8 well-defined tasks covering API integration, webhooks, testing
- Success criteria including PCI compliance and error recovery
- Context atoms documenting security requirements and architecture decisions

Step 2: GPT implements
- Reads Claude's plan and understands security requirements
- Implements API client, payment processing, and webhook handling
- Marks tasks complete as work progresses
- Saves context atoms with implementation notes (libraries used, gotchas)

Step 3: Claude reviews
- Reads GPT's implementation notes from context atoms
- Reviews code for webhook signature verification
- Suggests improvements for idempotency handling
- Saves context atoms with security findings

Step 4: GPT hardens and tests
- Reads Claude's security recommendations from context atoms
- Applies improvements to webhook handling and idempotency
- Writes unit and integration tests covering edge cases
- Marks remaining tasks complete

Step 5: Gemini analyzes
- Reads full context from Claude's plan and GPT's implementation
- Analyzes test performance metrics
- Identifies database query bottleneck in webhook processing
- Saves context atoms with optimization recommendations
Result: Five different AI interactions on the same workunit, each model building on previous work. Total context re-explanation time: zero. Each model used its unique strengths while maintaining perfect continuity.
Real-Time Status Synchronization
When any AI model updates a workunit or task, the change is immediately visible to all other models. This prevents duplicate work and enables true parallel collaboration:
How Synchronization Works
1. Claude marks Task 1 as "In Progress" via MCP call
2. Workunit immediately updates the task status in its database
3. GPT queries the workunit 2 seconds later and sees Task 1 is in progress
4. GPT skips Task 1 and picks up Task 2 instead
5. Both models work in parallel on independent tasks
6. When Claude finishes, it marks Task 1 "Done"
7. GPT and Gemini instantly see the update when they next query
8. No duplicate work, no conflicts, no manual coordination needed
Choosing the Right Model for Each Task
Use this decision guide to maximize efficiency:
Choose Claude for:
- Initial project planning and task breakdown
- Architecture design and system design decisions
- Security reviews and compliance analysis
- Complex code reviews requiring deep reasoning
- Writing comprehensive documentation

Choose GPT for:
- Executing well-defined implementation tasks
- Writing tests and boilerplate code quickly
- Bug fixes and debugging sessions
- Refactoring and code cleanup
- API integration and endpoint implementation

Choose Gemini for:
- Performance analysis and optimization
- Processing and analyzing log files or metrics
- Code quality analysis and pattern detection
- Visual diagram interpretation and creation
- Data transformation and migration tasks
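The decision guide above boils down to a lookup from task kind to model. A minimal sketch (the category names and mapping here are illustrative; in practice you apply the guide when deciding which model to hand a task to):

```python
# Illustrative routing table derived from the decision guide above.
MODEL_FOR = {
    "planning": "claude",
    "architecture": "claude",
    "security_review": "claude",
    "implementation": "gpt",
    "tests": "gpt",
    "debugging": "gpt",
    "performance_analysis": "gemini",
    "data_analysis": "gemini",
}

def pick_model(task_kind: str) -> str:
    # Default to the planning model for anything uncategorized.
    return MODEL_FOR.get(task_kind, "claude")

model = pick_model("tests")  # "gpt"
```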
Context Atoms
One of Workunit's most powerful features is that AI models can save structured context atoms directly to workunits. This creates a trail-of-thought documentation system where insights, decisions, and learnings are preserved for future sessions and other AI models—each atom typed, tagged, and ranked by importance.
Atom Types
Each context atom is a typed, importance-ranked unit of knowledge saved via the save_context MCP tool. Think of atoms as the AI's lab notebook—capturing not just what was done, but why and how—organized so future sessions can filter, search, and build on them.
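The shape of an atom, and how a later session filters by tag and importance, can be sketched like this. The field names and type values are illustrative, not Workunit's actual schema:

```python
# Hypothetical sketch of a context atom and importance-ranked recall.
from dataclasses import dataclass, field

@dataclass
class ContextAtom:
    type: str        # e.g. "decision", "pattern", "progress" (illustrative)
    title: str
    content: str
    importance: int  # higher-importance atoms surface first
    tags: list[str] = field(default_factory=list)

atoms = [
    ContextAtom("decision", "Chose JWT over sessions", "JWT for horizontal scaling.", 9, ["auth"]),
    ContextAtom("progress", "Login endpoint complete", "Next: refresh rotation.", 5, ["auth", "progress"]),
    ContextAtom("pattern", "Webhook retry with backoff", "3 retries, exponential.", 7, ["payments"]),
]

def recall(tag: str) -> list[ContextAtom]:
    """Return atoms matching a tag, most important first."""
    return sorted((a for a in atoms if tag in a.tags),
                  key=lambda a: a.importance, reverse=True)

auth_atoms = recall("auth")  # the decision atom ranks above the progress atom
```

Typing and ranking are what make atoms more than free-form notes: a future session can pull only the decisions, or only the highest-importance items, for a given area.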
When AI Models Should Save Context
Encourage your AI assistants to save context atoms at key moments: after making an architectural decision, after establishing a code pattern other sessions should follow, and when wrapping up a session with work still in flight.
Best Practices for Context Atoms
Each atom is saved via save_context with a title, concise content, and tags; the type values shown in these calls are illustrative:

save_context(type="decision",
    title="Chose JWT over sessions for auth",
    content="JWT chosen for horizontal scaling. 24h expiry balances security/UX. Refresh token via HttpOnly cookie.",
    tags=["auth", "architecture"])

save_context(type="pattern",
    title="Middleware chain validates JWT before handler runs",
    content="AuthMiddleware extracts + validates token, sets user in ctx. All protected routes use this pattern.",
    tags=["auth", "middleware"])

save_context(type="progress",
    title="Session: auth middleware and login endpoint complete",
    content="Implemented JWT middleware, login endpoint, and bcrypt hashing. Next: refresh token rotation.",
    tags=["auth", "progress"])
Context-Aware Help
AI models connected to Workunit via MCP understand your project context automatically. Instead of lengthy explanations, you can ask direct questions and get informed assistance based on your workunit's problem statement, tasks, assets, and context atoms.
How AI Models Understand Your Context
When you ask an AI assistant about a workunit, it automatically receives:
- Problem Statement: The core problem you're solving and why it matters
- Success Criteria: How you'll know when the work is complete
- Task History: What's been completed, what's in progress, what's blocked
- Linked Assets: Related products, systems, people, and knowledge
- Context Atoms: Summaries and insights from previous AI sessions
- Current Status: Active, paused, completed, or archived
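Conceptually, this bundle condenses into a short briefing handed to the model. A sketch, with illustrative field names rather than Workunit's actual wire format:

```python
# Illustrative sketch of condensing a workunit into a model briefing.
workunit = {
    "problem_statement": "Integrate payments with PCI-compliant webhook handling.",
    "success_criteria": ["PCI compliance", "error recovery"],
    "tasks": [
        {"title": "API client", "status": "done"},
        {"title": "Webhook handler", "status": "in_progress"},
    ],
    "status": "active",
}

def briefing(wu: dict) -> str:
    """Summarize problem, progress, and status for an incoming model."""
    done = sum(t["status"] == "done" for t in wu["tasks"])
    return (f"Problem: {wu['problem_statement']}\n"
            f"Progress: {done}/{len(wu['tasks'])} tasks done\n"
            f"Status: {wu['status']}")

print(briefing(workunit))
```

Because every model receives the same briefing, a question asked of Claude and the same question asked of GPT start from identical ground truth.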
Getting AI Assistance on Specific Workunits
Because the model already holds this context, you can ask direct questions about a specific workunit, such as what remains to be done or why a past decision was made, and get grounded answers without restating the background.
AI-Generated Summaries and Insights
Ask AI models to generate summaries of your workunits to quickly understand status or prepare updates.
Cloud Execution
Cloud Execution is Workunit's most advanced AI feature. Instead of chatting with AI about your code, Cloud Execution sends an AI agent to a cloud VM where it can directly interact with your GitHub repository—reading files, writing code, running tests, and creating pull requests.
There are two modes: Explore mode analyzes your repository and suggests workunit tasks based on your problem statement, while Implement mode executes your tasks by writing code and creating a pull request.
Next Steps
Ready to unlock AI-powered collaboration? Here's where to go next:
- Run AI agents on cloud VMs to explore repos and implement code changes
- Connect your AI tools to Workunit and start using multi-LLM workflows
- Create your first AI-enabled workunit in 10 minutes
- Deep dive into workunits, tasks, and context preservation
- Enable AI collaboration for your entire team
We're here to help you get the most out of AI-powered workflows.