A Kanban board designed for human teams doesn't work perfectly for AI agents. Agents need stricter rules, automatic transitions, and explicit blocking semantics. Here's how to design a task management system that works for both humans and AI.
```
Not started → In progress → QA → Done
                  ↓  ↑
                Blocked
```
Rules:
- Not started → In progress: Agent picks up task
- In progress → QA: Agent finished, awaiting human review
- In progress → Blocked: Agent can't proceed (missing data/access/approval)
- Blocked → In progress: Blocker resolved
- QA → Done: Human approves (ONLY human can set Done)
- QA → In progress: Human requests changes
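These rules can be enforced mechanically, which matters when the actor is an agent rather than a person. The sketch below is illustrative (the `Status` enum, `ALLOWED` table, and `move` function are hypothetical names, not from any real Kanban library), but it shows the key idea: transitions are a whitelist, and each one records who is allowed to perform it.

```python
from enum import Enum

class Status(Enum):
    NOT_STARTED = "Not started"
    IN_PROGRESS = "In progress"
    QA = "QA"
    BLOCKED = "Blocked"
    DONE = "Done"

# Whitelist of legal transitions, each tagged with the required actor.
ALLOWED = {
    (Status.NOT_STARTED, Status.IN_PROGRESS): "agent",
    (Status.IN_PROGRESS, Status.QA): "agent",
    (Status.IN_PROGRESS, Status.BLOCKED): "agent",
    (Status.BLOCKED, Status.IN_PROGRESS): "agent",
    (Status.QA, Status.DONE): "human",         # ONLY a human can set Done
    (Status.QA, Status.IN_PROGRESS): "human",  # human requests changes
}

def move(current: Status, target: Status, actor: str) -> Status:
    """Validate a transition and return the new status, or raise."""
    required = ALLOWED.get((current, target))
    if required is None:
        raise ValueError(f"Illegal transition: {current.value} → {target.value}")
    if actor != required:
        raise PermissionError(
            f"Only a {required} may move {current.value} → {target.value}"
        )
    return target
```

Encoding "who may do this" in the transition table, rather than in documentation, is what keeps an agent from quietly skipping QA.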
An AI agent should never mark its own work as "Done." The QA step ensures that a human reviews every piece of agent work before it counts as complete. The agent's side of the board is a simple polling loop:
```
Every 30 minutes:
1. Query board for In progress tasks
2. For each: can I make progress?
   → Yes: do work, update card
   → No: move to Blocked with explanation
3. Query board for Not started tasks
4. For each: prerequisites met?
   → Yes: move to In progress, start work
5. Check Blocked tasks: blocker resolved?
   → Yes: move back to In progress
6. Nothing to do? → HEARTBEAT_OK
```
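The loop above can be sketched as a single function. The `board` and `task` interfaces here (`list_by_status`, `can_progress`, `prerequisites_met`, and so on) are hypothetical; any real implementation would map them onto your board's API.

```python
def heartbeat(board) -> str:
    """One polling cycle. Returns HEARTBEAT_OK when there was nothing to do."""
    did_something = False

    # 1–2: try to advance in-flight work, or surface blockers explicitly.
    for task in board.list_by_status("In progress"):
        if task.can_progress():
            task.do_work()  # do work, update card
            did_something = True
        else:
            board.move(task, "Blocked", reason=task.blocker_description())
            did_something = True

    # 3–4: pick up new work whose prerequisites are met.
    for task in board.list_by_status("Not started"):
        if task.prerequisites_met():
            board.move(task, "In progress")
            task.do_work()
            did_something = True

    # 5: re-check blocked work.
    for task in board.list_by_status("Blocked"):
        if task.blocker_resolved():
            board.move(task, "In progress")
            did_something = True

    # 6: signal liveness even when idle.
    return "WORKED" if did_something else "HEARTBEAT_OK"
```

Note that the idle case still returns a value: an agent that reports `HEARTBEAT_OK` is distinguishable from one that has silently crashed.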
A card should move to Blocked the moment the agent cannot proceed on its own: it's missing data, missing access, or waiting on a human approval. Every Blocked card must include three things: WHO you're waiting for, WHAT you're waiting for, and WHEN you asked.
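Those three fields are easy to make mandatory rather than aspirational. A minimal sketch, assuming a `BlockedInfo` record of our own invention (the dataclass and field names are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BlockedInfo:
    waiting_on: str     # WHO you're waiting for
    waiting_for: str    # WHAT you're waiting for
    asked_at: datetime  # WHEN you asked

    def __post_init__(self):
        # Refuse to create a Blocked card with the context missing.
        if not self.waiting_on or not self.waiting_for:
            raise ValueError("Blocked cards must name WHO and WHAT")
```

If the board's "move to Blocked" operation requires a `BlockedInfo`, an agent physically cannot park a card without explaining itself.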
Every task card should have explicit completion criteria:
```
Task: Fix login timeout
DoD:
- [ ] SESSION_TIMEOUT set to 30s in docker-compose
- [ ] Tested with slow connection simulation
- [ ] No CSRF token issues
- [ ] Deployed to staging
- [ ] Added to monitoring
```
Without a DoD, the agent doesn't know when to stop working and move the card to QA.
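Because the DoD is just markdown checkboxes, the "am I done?" check can be automated. A sketch (the `dod_complete` helper is hypothetical, and it deliberately treats a missing DoD as "not done"):

```python
def dod_complete(card_text: str) -> bool:
    """True only if the card has checkboxes and every one is checked."""
    boxes = [line.strip() for line in card_text.splitlines()
             if line.strip().startswith("- [")]
    if not boxes:
        return False  # no DoD at all → never auto-advance to QA
    return all(b.startswith("- [x]") for b in boxes)
```

An agent would call this before moving a card to QA; a `False` result means keep working (or move to Blocked with an explanation of which criterion it can't satisfy).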
Beyond status tracking, every card should document its journey: status changes, decisions made along the way, and the reasoning behind them.
This turns task cards into knowledge artifacts. Future sessions (or team members) can understand not just what happened, but why.
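One lightweight way to capture that journey is an append-only log on each card. The `log_entry` helper and the card's `journey` field are illustrative names, not part of any real board API:

```python
from datetime import datetime, timezone

def log_entry(card: dict, event: str, why: str) -> None:
    """Append a timestamped what-and-why entry to the card's journey log."""
    card.setdefault("journey", []).append({
        "at": datetime.now(timezone.utc).isoformat(),
        "event": event,  # what happened
        "why": why,      # the reasoning behind it
    })
```

Requiring a `why` on every entry is the point: a future session reading `"Moved to Blocked"` with `"waiting on API key from ops"` gets the context that a bare status change would lose.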