AI Features Guide

Unlock the full power of AI collaboration with Workunit's multi-LLM features, context-aware assistance, and intelligent workflow automation

Last updated: October 2025

Overview

Workunit is designed from the ground up for AI collaboration. Unlike traditional project management tools adapted for AI, Workunit treats AI models as first-class team members, enabling seamless context sharing, real-time synchronization, and intelligent workflow automation across multiple LLMs.

This guide explores Workunit's AI features in depth, showing you how to leverage multi-LLM collaboration, preserve context across sessions, and accelerate your development workflow with AI-powered assistance.

Multi-LLM Collaboration

This is Workunit's core value proposition: enabling multiple AI models to collaborate on the same project by sharing context in real-time. Stop re-explaining your project every time you switch models. Start leveraging the unique strengths of Claude, GPT, Gemini, and future models—all working from the same shared context.

Why Use Multiple AI Models?

Different AI models excel at different tasks. By using the right model for each phase of your work, you can dramatically improve both speed and quality:

Claude Sonnet 4.5: Strategic Planning
  • Breaking down complex problems into clear task hierarchies
  • Designing system architecture with security and scalability in mind
  • Writing comprehensive problem statements and success criteria
  • Code review and architectural analysis
GPT-5: Rapid Execution
  • Fast code generation and feature implementation
  • Writing tests and boilerplate code efficiently
  • Debugging and fixing implementation issues
  • Executing well-defined tasks from the plan
Gemini: Data Analysis & Optimization
  • Analyzing performance metrics and identifying bottlenecks
  • Processing large datasets and generating insights
  • Code quality analysis and optimization suggestions
  • Visual reasoning and diagram interpretation

Real-World Multi-Model Workflows

Here's how teams use multiple models together on the same workunit:

Example: Building a Payment Integration

1. Planning Phase - Claude Sonnet 4.5
You to Claude:
"Create a workunit for integrating Stripe payments. Consider security, error handling, and webhook processing."
Claude creates:
  • Detailed problem statement with security considerations
  • 8 well-defined tasks covering API integration, webhooks, testing
  • Success criteria including PCI compliance and error recovery
  • AI context documenting security requirements and architecture decisions
2. Implementation Phase - GPT-5
You to GPT:
"Get workunit 'Stripe Payment Integration' and implement tasks 1-3"
GPT:
  • Reads Claude's plan and understands security requirements
  • Implements API client, payment processing, and webhook handling
  • Marks tasks complete as work progresses
  • Adds implementation notes to AI context (libraries used, gotchas)
3. Code Review Phase - Claude Sonnet 4.5
You to Claude:
"Review workunit 'Stripe Payment Integration' for security issues"
Claude:
  • Reads GPT's implementation notes from AI context
  • Reviews code for webhook signature verification
  • Suggests improvements for idempotency handling
  • Updates AI context with security findings
4. Testing Phase - GPT-5
You to GPT:
"Implement Claude's security improvements and write comprehensive tests"
GPT:
  • Reads Claude's security recommendations from AI context
  • Applies improvements to webhook handling and idempotency
  • Writes unit and integration tests covering edge cases
  • Marks remaining tasks complete
5. Performance Analysis - Gemini
You to Gemini:
"Analyze test results for 'Stripe Payment Integration' and identify performance bottlenecks"
Gemini:
  • Reads full context from Claude's plan and GPT's implementation
  • Analyzes test performance metrics
  • Identifies database query bottleneck in webhook processing
  • Updates AI context with optimization recommendations

Result: Five different AI interactions on the same workunit, each model building on previous work. Total context re-explanation time: zero. Each model used its unique strengths while maintaining perfect continuity.
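Under the hood, each of these prompts resolves to a handful of MCP tool calls against the shared workunit. The sketch below shows roughly what the implementation phase (step 2) might look like; the tool names, field names, and types are illustrative assumptions, not Workunit's actual MCP schema.

```typescript
// Hypothetical shape of the tools an assistant might call during the handoff
// above. Tool and field names are assumptions for illustration only.
interface Task {
  id: string;
  title: string;
  status: "Todo" | "In Progress" | "Done";
}

interface Workunit {
  id: string;
  title: string;
  problemStatement: string;
  tasks: Task[];
  aiContext: string; // markdown written by previous AI sessions
}

interface WorkunitTools {
  getWorkunit(title: string): Promise<Workunit>;
  updateTaskStatus(taskId: string, status: Task["status"]): Promise<void>;
  appendAiContext(workunitId: string, markdown: string): Promise<void>;
}

// Step 2 of the example: the executing model reads Claude's plan, implements
// tasks 1-3, and leaves notes for the review phase.
async function implementationPhase(tools: WorkunitTools): Promise<void> {
  const wu = await tools.getWorkunit("Stripe Payment Integration");

  // The plan and security requirements arrive with the workunit itself,
  // so nothing has to be re-explained.
  console.log(wu.problemStatement, wu.aiContext);

  for (const task of wu.tasks.slice(0, 3)) {
    await tools.updateTaskStatus(task.id, "In Progress");
    // ... implementation work happens here ...
    await tools.updateTaskStatus(task.id, "Done");
  }

  await tools.appendAiContext(
    wu.id,
    "## Implementation notes\n- Used the official stripe SDK\n- Webhook handler verifies signatures before parsing the payload"
  );
}
```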

Real-Time Status Synchronization

When any AI model updates a workunit or task, the change is immediately visible to all other models. This prevents duplicate work and enables true parallel collaboration:

How Synchronization Works

  1. Claude marks Task 1 as "In Progress" via MCP call
  2. Workunit immediately updates the task status in its database
  3. GPT queries the workunit 2 seconds later, sees Task 1 is in progress
  4. GPT skips Task 1 and picks up Task 2 instead
  5. Both models work in parallel on independent tasks
  6. When Claude finishes, it marks Task 1 "Done"
  7. GPT and Gemini instantly see the update when they next query
  8. No duplicate work, no conflicts, no manual coordination needed
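Because every model reads task status from the same source of truth, avoiding duplicate work can be as simple as filtering out anything already claimed. Here is a minimal sketch of steps 3 and 4 above, using illustrative field names rather than Workunit's actual schema:

```typescript
// Minimal sketch of how a second model avoids duplicate work: read current
// statuses, skip anything already claimed, take the first open task.
// Field names are illustrative assumptions, not Workunit's actual schema.
type TaskStatus = "Todo" | "In Progress" | "Done";

interface Task {
  id: string;
  title: string;
  status: TaskStatus;
}

function pickNextTask(tasks: Task[]): Task | undefined {
  // "In Progress" tasks are being handled by another model right now;
  // "Done" tasks need no further work.
  return tasks.find((t) => t.status === "Todo");
}

// Example: Claude has already claimed Task 1, so the next model picks Task 2.
const tasks: Task[] = [
  { id: "t1", title: "Build API client", status: "In Progress" },
  { id: "t2", title: "Implement webhook handler", status: "Todo" },
  { id: "t3", title: "Write integration tests", status: "Todo" },
];

console.log(pickNextTask(tasks)?.title); // "Implement webhook handler"
```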
Pro Tip: Parallel Task Execution
When planning with Claude, explicitly mark which tasks can run in parallel. Then use GPT and Gemini to execute those tasks simultaneously, dramatically reducing total completion time.

Choosing the Right Model for Each Task

Use this decision guide to maximize efficiency:

Use Claude Sonnet 4.5 for:
  • Initial project planning and task breakdown
  • Architecture design and system design decisions
  • Security reviews and compliance analysis
  • Complex code reviews requiring deep reasoning
  • Writing comprehensive documentation
Use GPT-5 for:
  • Executing well-defined implementation tasks
  • Writing tests and boilerplate code quickly
  • Bug fixes and debugging sessions
  • Refactoring and code cleanup
  • API integration and endpoint implementation
Use Gemini for:
  • Performance analysis and optimization
  • Processing and analyzing log files or metrics
  • Code quality analysis and pattern detection
  • Visual diagram interpretation and creation
  • Data transformation and migration tasks

AI Context Writing

One of Workunit's most powerful features is that AI models can write context directly to workunits. This creates a trail-of-thought documentation system where insights, decisions, and learnings are preserved for future sessions and other AI models.

What is AI Context?

AI context is a markdown field on each workunit where AI models can write summaries, document decisions, record patterns discovered, and preserve reasoning for future reference. Think of it as the AI's lab notebook—capturing not just what was done, but why and how.

AI Context Typically Includes:
  • Session Summaries: What was accomplished in each AI session
  • Technical Decisions: Why specific approaches were chosen over alternatives
  • Patterns Discovered: Insights about code structure, optimization opportunities, gotchas
  • Implementation Notes: Libraries used, configuration details, setup steps
  • Blockers & Solutions: Problems encountered and how they were resolved
  • Next Steps: What should be done next and what to consider
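There is no required format for the field, but a dated entry with sections covering the items above works well. A hypothetical example (the project details are invented for illustration):

```markdown
## Session 2025-10-14 (Claude Sonnet 4.5)

### Summary
Implemented webhook signature verification and idempotency keys for payment processing.

### Technical decisions
- Chose idempotency keys over database locks to keep webhook handling stateless.

### Gotchas
- Stripe retries failed webhooks for days, so handlers must tolerate duplicate deliveries.

### Next steps
- Add integration tests covering the webhook retry path.
```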

When AI Models Should Write Context

Encourage your AI assistants to write context at these key moments:

  • After completing significant work: "Update the workunit with what we accomplished today"
  • When making important decisions: "Document why we chose JWT over sessions"
  • After discovering patterns: "Add notes about the middleware chain pattern we're using"
  • When resolving complex issues: "Document how we fixed the race condition"
  • Before pausing work: "Summarize current state and next steps before we switch to another task"

Best Practices for AI Context

Focus on the "Why"
Context should explain reasoning and decisions, not just list what was done. Future AI models need to understand why choices were made to make informed follow-up decisions.
Use Markdown Formatting
Structure context with headers, lists, and code blocks for readability. Well-formatted context is easier for both AI models and humans to parse.
Keep It Concise
Aim for 500-2000 words. Too brief loses valuable insights, too verbose becomes noise. Focus on insights that would help future work.
Update, Don't Replace
When adding new context, append to existing content rather than overwriting. This preserves the full history of decisions and learnings.
Include Gotchas
Document edge cases, gotchas, and things that didn't work. This prevents future AI models from making the same mistakes.
Example AI Context Request:
"After implementing the authentication system, update the workunit AI context with:
- Technical decisions made (JWT vs sessions, bcrypt cost factor)
- Libraries used and why
- Patterns discovered (middleware chain structure)
- Gotchas encountered (token expiry handling)
- Next steps (refresh token implementation)"
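To honor the "update, don't replace" practice, an assistant can append a new dated section to whatever is already in the field instead of overwriting it. A minimal sketch of that append step, with an invented helper and invented project details (not Workunit's actual API):

```typescript
// "Update, don't replace": append a new dated section to the existing AI
// context rather than overwriting it. The helper and all project details
// below are invented for illustration.
function appendContextEntry(existing: string, entry: string): string {
  const stamp = new Date().toISOString().slice(0, 10); // e.g. "2025-10-14"
  return `${existing.trimEnd()}\n\n## Session ${stamp}\n\n${entry.trim()}\n`;
}

const updated = appendContextEntry(
  "## Session 2025-10-07\n\nInitial auth plan: JWT with short-lived access tokens.",
  [
    "### Technical decisions",
    "- JWT over server-side sessions (stateless API, mobile clients)",
    "- bcrypt cost factor 12 for password hashing",
    "",
    "### Gotchas",
    "- Clock skew between services caused intermittent token-expiry failures in tests",
    "",
    "### Next steps",
    "- Implement refresh token rotation",
  ].join("\n")
);

console.log(updated);
```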

Context-Aware Help

AI models connected to Workunit via MCP understand your project context automatically. Instead of lengthy explanations, you can ask direct questions and get informed assistance based on your workunit's problem statement, tasks, assets, and AI context.

How AI Models Understand Your Context

When you ask an AI assistant about a workunit, it automatically receives:

  • Problem Statement: The core problem you're solving and why it matters
  • Success Criteria: How you'll know when the work is complete
  • Task History: What's been completed, what's in progress, what's blocked
  • Linked Assets: Related products, systems, people, and knowledge
  • AI Context: Summaries and insights from previous AI sessions
  • Current Status: Active, paused, completed, or archived
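Put together, the context a connected model works from looks conceptually like the sketch below; the field names and structure are assumptions for illustration, not Workunit's actual response format.

```typescript
// Rough shape of the context a connected model receives when you reference a
// workunit. Field names are illustrative, not Workunit's actual schema.
type WorkunitStatus = "active" | "paused" | "completed" | "archived";
type TaskStatus = "Todo" | "In Progress" | "Blocked" | "Done";

interface LinkedAsset {
  type: "product" | "system" | "person" | "knowledge";
  name: string;
}

interface WorkunitContext {
  title: string;
  status: WorkunitStatus;
  problemStatement: string;  // what you're solving and why it matters
  successCriteria: string[]; // how you'll know the work is complete
  tasks: { title: string; status: TaskStatus }[];
  linkedAssets: LinkedAsset[];
  aiContext: string;         // markdown from previous AI sessions
}
```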

Getting AI Assistance on Specific Workunits

Here are practical examples of how to get context-aware help:

Task Planning Assistance
"Get workunit 'User Authentication' and suggest the next tasks to work on"
AI reads the problem, completed tasks, and AI context, then suggests prioritized next steps based on full project understanding.
Architecture Decisions
"Review workunit 'Payment Integration' and recommend database schema design"
AI understands the payment context, success criteria, and linked systems to provide tailored schema recommendations.
Debugging Help
"I'm stuck on task 'Implement webhook validation' in workunit 'Stripe Integration' - help me debug"
AI reads the task details, related AI context from implementation, and can provide specific debugging guidance.
Code Review Requests
"Review completed tasks in workunit 'API Security Hardening' for potential issues"
AI analyzes the security context, completed tasks, and provides targeted code review focusing on security patterns.

AI-Generated Summaries and Insights

Ask AI models to generate summaries of your workunits to quickly understand status or prepare updates:

Status Summary:
"Summarize the current status of workunit 'Mobile App Redesign'"
Progress Report:
"Generate a progress report for all active workunits this week"
Technical Deep Dive:
"Explain the technical approach in workunit 'Database Migration' for a non-technical stakeholder"

Next Steps

Ready to unlock AI-powered collaboration? Connect your AI models to Workunit via MCP and start your first multi-model workflow.

Questions About AI Features?

We're here to help you get the most out of AI-powered workflows.