GUIDE
Updated October 2025

AI Features Guide

How Workunit's multi-model features, context sharing, and workflow tools work together

Overview

Workunit is designed from the ground up for AI collaboration. Unlike traditional project management tools adapted for AI, Workunit treats AI models as first-class team members, enabling seamless context sharing, real-time synchronization, and workflow automation across multiple LLMs.

This guide explores Workunit's AI features in depth, showing you how to leverage multi-LLM collaboration, preserve context across sessions, and accelerate your development workflow with AI-powered assistance.

Multi-LLM Collaboration

This is Workunit's core value proposition: enabling multiple AI models to collaborate on the same project by sharing context in real-time. Stop re-explaining your project every time you switch models. Start leveraging the unique strengths of Claude, GPT, Gemini, and future models—all working from the same shared context.

Why Use Multiple AI Models?

Different AI models excel at different tasks. By using the right model for each phase of your work, you can dramatically improve both speed and quality:

Claude Sonnet 4.5: Strategic Planning
  • Breaking down complex problems into clear task hierarchies
  • Designing system architecture with security and scalability in mind
  • Writing comprehensive problem statements and success criteria
  • Code review and architectural analysis
GPT-5: Rapid Execution
  • Fast code generation and feature implementation
  • Writing tests and boilerplate code efficiently
  • Debugging and fixing implementation issues
  • Executing well-defined tasks from the plan
Gemini: Data Analysis & Optimization
  • Analyzing performance metrics and identifying bottlenecks
  • Processing large datasets and generating insights
  • Code quality analysis and optimization suggestions
  • Visual reasoning and diagram interpretation

Real-World Multi-Model Workflows

Here's how teams use multiple models together on the same workunit:

Example: Building a Payment Integration

1. Planning Phase - Claude Sonnet 4.5
You to Claude:
"Create a workunit for integrating Stripe payments. Consider security, error handling, and webhook processing."
Claude creates:
  • Detailed problem statement with security considerations
  • 8 well-defined tasks covering API integration, webhooks, testing
  • Success criteria including PCI compliance and error recovery
  • Context atoms documenting security requirements and architecture decisions
2. Implementation Phase - GPT-5
You to GPT:
"Get workunit 'Stripe Payment Integration' and implement tasks 1-3"
GPT:
  • Reads Claude's plan and understands security requirements
  • Implements API client, payment processing, and webhook handling
  • Marks tasks complete as work progresses
  • Saves context atoms with implementation notes (libraries used, gotchas)
3. Code Review Phase - Claude Sonnet 4.5
You to Claude:
"Review workunit 'Stripe Payment Integration' for security issues"
Claude:
  • Reads GPT's implementation notes from context atoms
  • Reviews code for webhook signature verification
  • Suggests improvements for idempotency handling
  • Saves context atoms with security findings
4. Testing Phase - GPT-5
You to GPT:
"Implement Claude's security improvements and write comprehensive tests"
GPT:
  • Reads Claude's security recommendations from context atoms
  • Applies improvements to webhook handling and idempotency
  • Writes unit and integration tests covering edge cases
  • Marks remaining tasks complete
5. Performance Analysis - Gemini
You to Gemini:
"Analyze test results for 'Stripe Payment Integration' and identify performance bottlenecks"
Gemini:
  • Reads full context from Claude's plan and GPT's implementation
  • Analyzes test performance metrics
  • Identifies database query bottleneck in webhook processing
  • Saves context atoms with optimization recommendations

Result: Five different AI interactions on the same workunit, each model building on previous work. Total context re-explanation time: zero. Each model used its unique strengths while maintaining perfect continuity.

Real-Time Status Synchronization

When any AI model updates a workunit or task, the change is immediately visible to all other models. This prevents duplicate work and enables true parallel collaboration:

How Synchronization Works

  1. Claude marks Task 1 as "In Progress" via MCP call
  2. Workunit immediately updates the task status in its database
  3. GPT queries the workunit 2 seconds later, sees Task 1 is in progress
  4. GPT skips Task 1 and picks up Task 2 instead
  5. Both models work in parallel on independent tasks
  6. When Claude finishes, it marks Task 1 "Done"
  7. GPT and Gemini instantly see the update when they next query
  8. No duplicate work, no conflicts, no manual coordination needed
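The claim-and-skip behavior in the steps above can be sketched as a small in-memory simulation. All names here (TaskBoard, claim_next, finish) are illustrative, not the real MCP API:

```python
class TaskBoard:
    """Shared task store that every model reads and writes."""

    def __init__(self, tasks):
        # status per task: "todo", "in_progress", or "done"
        self.status = {name: "todo" for name in tasks}
        self.owner = {}

    def claim_next(self, agent):
        """Claim the first task nobody has started; return its name."""
        for name, state in self.status.items():
            if state == "todo":
                self.status[name] = "in_progress"
                self.owner[name] = agent
                return name
        return None  # nothing left to claim

    def finish(self, name):
        self.status[name] = "done"


board = TaskBoard(["Task 1", "Task 2", "Task 3"])
claude_task = board.claim_next("claude")  # Claude takes Task 1
gpt_task = board.claim_next("gpt")        # GPT sees Task 1 taken, gets Task 2
board.finish(claude_task)                 # Claude marks Task 1 done
```

Because both agents read from and write to the same store, GPT never duplicates Claude's in-progress work.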
Pro Tip: Parallel Task Execution
When planning with Claude, explicitly mark which tasks can run in parallel. Then use GPT and Gemini to execute those tasks simultaneously, dramatically reducing total completion time.

Choosing the Right Model for Each Task

Use this decision guide to maximize efficiency:

Use Claude Sonnet 4.5 for:
  • Initial project planning and task breakdown
  • Architecture design and system design decisions
  • Security reviews and compliance analysis
  • Complex code reviews requiring deep reasoning
  • Writing comprehensive documentation
Use GPT-5 for:
  • Executing well-defined implementation tasks
  • Writing tests and boilerplate code quickly
  • Bug fixes and debugging sessions
  • Refactoring and code cleanup
  • API integration and endpoint implementation
Use Gemini for:
  • Performance analysis and optimization
  • Processing and analyzing log files or metrics
  • Code quality analysis and pattern detection
  • Visual diagram interpretation and creation
  • Data transformation and migration tasks

Context Atoms

One of Workunit's most powerful features is that AI models can save structured context atoms directly to workunits. This creates a trail-of-thought documentation system where insights, decisions, and learnings are preserved for future sessions and other AI models—each atom typed, tagged, and ranked by importance.

Atom Types

Each context atom is a typed, importance-ranked unit of knowledge saved via the save_context MCP tool. Think of atoms as the AI's lab notebook—capturing not just what was done, but why and how—organized so future sessions can filter, search, and build on them.

The Five Atom Types:
decision: Architectural and design choices
Record why a specific approach was chosen over alternatives. Future models can understand and respect past decisions rather than re-litigating them.
insight: Patterns and discoveries
Capture observations about code structure, optimization opportunities, gotchas, and lessons learned that would otherwise be lost between sessions.
question: Open questions and uncertainties
Surface things that need answers before work can proceed or that should be revisited. Keeps ambiguity visible rather than buried.
attempt: Approaches tried (including failures)
Document approaches that were tried and whether they succeeded or failed. Prevents future models from repeating the same failed paths.
progress: Session summaries and status
Summarize what was accomplished in a session, what state the work is in, and what comes next. Enables seamless handoffs between sessions and models.
Importance Levels:
low, normal, high, critical
Importance affects how prominently atoms surface in the context timeline. Use critical for irreversible decisions or blockers, high for significant findings, and normal or low for routine notes.
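One way to picture an atom as a typed, importance-ranked record is the sketch below. Field names and the timeline helper are illustrative; the actual schema lives behind the save_context tool:

```python
from dataclasses import dataclass, field

# The five atom types and four importance levels described above.
ATOM_TYPES = {"decision", "insight", "question", "attempt", "progress"}
IMPORTANCE = {"low": 0, "normal": 1, "high": 2, "critical": 3}

@dataclass
class ContextAtom:
    atom_type: str
    importance: str
    title: str
    content: str
    tags: list = field(default_factory=list)

    def __post_init__(self):
        if self.atom_type not in ATOM_TYPES:
            raise ValueError(f"unknown atom type: {self.atom_type}")
        if self.importance not in IMPORTANCE:
            raise ValueError(f"unknown importance: {self.importance}")

def timeline(atoms):
    """Most important atoms first; sorted() is stable, so save order
    is preserved within each importance level."""
    return sorted(atoms, key=lambda a: -IMPORTANCE[a.importance])

atoms = [
    ContextAtom("insight", "normal", "Middleware pattern", "..."),
    ContextAtom("decision", "critical", "Chose JWT", "..."),
]
top = timeline(atoms)[0]  # the critical decision surfaces first
```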

When AI Models Should Save Context

Encourage your AI assistants to save context atoms at these key moments:

progress: After completing significant work
"Save a progress atom summarizing what we accomplished today"
decision: When making a technical decision
"Save a decision atom explaining why we chose JWT over sessions"
insight: When discovering a pattern or gotcha
"Save an insight atom about the middleware chain pattern we're using"
progress: Before pausing work
"Save a progress atom with current state and next steps before we switch tasks"
question: When encountering an open question
"Save a question atom: should we use optimistic locking here?"
attempt: When trying an approach (success or failure)
"Save an attempt atom: tried batching the inserts, hit deadlocks, rolling back"

Best Practices for Context Atoms

Focus on the "Why"
Atoms should explain reasoning and decisions, not just list what was done. Future AI models need to understand why choices were made to make informed follow-up decisions.
Use Specific Titles
Write titles that describe the decision or finding concisely, for example "Chose JWT for stateless horizontal scaling" rather than "Auth decision". Specific titles make atoms discoverable when filtering or searching the context timeline.
Use Supersedes for Updated Decisions
When a decision changes, save a new atom with the supersedes_id field pointing to the original. This creates a visible chain so future models can follow the evolution of thinking without losing the original reasoning.
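Resolving such a chain to the current decision can be sketched like this (atoms as plain dicts with the supersedes_id field; the storage shape is illustrative):

```python
# Two decision atoms: a2 supersedes a1, so a2 is the current decision.
atoms = {
    "a1": {"title": "Chose sessions for auth", "supersedes_id": None},
    "a2": {"title": "Chose JWT over sessions", "supersedes_id": "a1"},
}

def latest(atom_id, atoms):
    """Walk forward from atom_id to whichever atom supersedes it, if any."""
    # Invert the chain: old id -> id of the atom that replaced it.
    superseded_by = {a["supersedes_id"]: aid
                     for aid, a in atoms.items() if a["supersedes_id"]}
    while atom_id in superseded_by:
        atom_id = superseded_by[atom_id]
    return atom_id

current = latest("a1", atoms)  # resolves to "a2"
```

The original atom stays in place, so a model can also walk backward to recover the reasoning that was replaced.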
Tag for Discoverability
Add tags to atoms so they surface in cross-workunit searches. Use consistent tag vocabulary—for example "auth", "performance", "database"—so insights from one workunit can inform related ones.
Include Gotchas as Attempts
Document failed approaches and edge cases as attempt atoms rather than burying them in prose. This prevents future models from repeating the same mistakes.
Example: Saving Multiple Atoms After an Auth Session
Decision atom
save_context(workunit_id, atom_type="decision", importance="high",
  title="Chose JWT over sessions for auth",
  content="JWT chosen for horizontal scaling. 24h expiry balances security/UX. Refresh token via HttpOnly cookie.",
  tags=["auth", "architecture"])
Insight atom
save_context(workunit_id, atom_type="insight", importance="normal",
  title="Middleware chain validates JWT before handler runs",
  content="AuthMiddleware extracts + validates token, sets user in ctx. All protected routes use this pattern.",
  tags=["auth", "middleware"])
Progress atom
save_context(workunit_id, atom_type="progress", importance="normal",
  title="Session: auth middleware and login endpoint complete",
  content="Implemented JWT middleware, login endpoint, and bcrypt hashing. Next: refresh token rotation.",
  tags=["auth", "progress"])

Context-Aware Help

AI models connected to Workunit via MCP understand your project context automatically. Instead of lengthy explanations, you can ask direct questions and get informed assistance based on your workunit's problem statement, tasks, assets, and context atoms.

How AI Models Understand Your Context

When you ask an AI assistant about a workunit, it automatically receives:

  • Problem Statement: The core problem you're solving and why it matters
  • Success Criteria: How you'll know when the work is complete
  • Task History: What's been completed, what's in progress, what's blocked
  • Linked Assets: Related products, systems, people, and knowledge
  • Context Atoms: Summaries and insights from previous AI sessions
  • Current Status: Active, paused, completed, or archived
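Conceptually, the payload a model receives when it fetches a workunit bundles those pieces together. The shape below is hypothetical; the real MCP response may differ:

```python
# Illustrative shape of the context an AI model receives for a workunit.
workunit_context = {
    "problem_statement": "Integrate Stripe payments securely.",
    "success_criteria": ["PCI compliance", "Error recovery"],
    "tasks": [
        {"name": "API client", "status": "done"},
        {"name": "Webhook handling", "status": "in_progress"},
    ],
    "linked_assets": ["payments-service", "billing-db"],
    "context_atoms": [
        {"atom_type": "decision", "title": "Use Stripe webhooks for payment events"},
    ],
    "status": "active",
}

def open_tasks(ctx):
    """Tasks still needing work, so the model can suggest next steps."""
    return [t["name"] for t in ctx["tasks"] if t["status"] != "done"]

remaining = open_tasks(workunit_context)
```

With this in hand, a prompt like "suggest the next tasks" needs no re-explanation: the model filters, prioritizes, and answers from the payload alone.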

Getting AI Assistance on Specific Workunits

Here are practical examples of how to get context-aware help:

Task Planning Assistance
"Get workunit 'User Authentication' and suggest the next tasks to work on"
AI reads the problem, completed tasks, and context atoms, then suggests prioritized next steps based on full project understanding.
Architecture Decisions
"Review workunit 'Payment Integration' and recommend database schema design"
AI understands the payment context, success criteria, and linked systems to provide tailored schema recommendations.
Debugging Help
"I'm stuck on task 'Implement webhook validation' in workunit 'Stripe Integration' - help me debug"
AI reads the task details, related context atoms from implementation, and can provide specific debugging guidance.
Code Review Requests
"Review completed tasks in workunit 'API Security Hardening' for potential issues"
AI analyzes the security context, completed tasks, and provides targeted code review focusing on security patterns.

AI-Generated Summaries and Insights

Ask AI models to generate summaries of your workunits to quickly understand status or prepare updates:

Status Summary:
"Summarize the current status of workunit 'Mobile App Redesign'"
Progress Report:
"Generate a progress report for all active workunits this week"
Technical Deep Dive:
"Explain the technical approach in workunit 'Database Migration' for a non-technical stakeholder"

Cloud Execution

Cloud Execution is Workunit's most advanced AI feature. Instead of chatting with AI about your code, Cloud Execution sends an AI agent to a cloud VM where it can directly interact with your GitHub repository—reading files, writing code, running tests, and creating pull requests.

There are two modes: Explore mode analyzes your repository and suggests workunit tasks based on your problem statement, while Implement mode executes your tasks by writing code and creating a pull request.

See the Cloud Execution Guide for complete setup instructions covering Explore and Implement modes, API keys, GitHub integration, and security details.