Building Autonomous AI Dev Teams with Clawdbot and ADA

AI Agents · Clawdbot · Automation · Developer Tools · Open Source

A deep dive into building autonomous AI dev teams with Clawdbot, featuring real results from multiple projects and a setup guide.

I have AI agents writing real code, creating GitHub issues, opening pull requests, optimizing database queries, and managing sprints — across four different projects — while I sleep. This isn't a demo. It's my actual workflow.

Here's how I got here, how it works, and how you can set it up yourself.

The Journey: From Personal Assistant to Autonomous Dev Teams

It started simply enough. I set up Clawdbot — an open-source AI agent framework — as a personal assistant. I called it R.I.A. (Rathi's Intelligent Agent). It managed my calendar, triaged emails, handled smart home stuff. Standard AI assistant things.

But Clawdbot has this concept of heartbeats — periodic execution cycles where the agent wakes up, reads its context, does something useful, and goes back to sleep. That's what changed everything. Because once you have an agent that can wake up on a schedule and take autonomous action... you start wondering: what if it could write code?

So I built a system on top of Clawdbot's heartbeat model that simulates an entire development team. Different roles — CEO, Engineering, Product, Ops, Research, Design, Growth, Scrum — each with their own playbooks, focus areas, and memory banks. They rotate through on each heartbeat, and each one does something productive: creating issues, writing specs, shipping PRs, reviewing code quality.

I called it ADA — Autonomous Dev Agents.

The best part? ADA agents now build ADA itself. The framework is dogfooding its own development with 15 completed autonomous cycles and counting. That's the kind of recursive loop that makes you feel like you're living in the future.

How It Actually Works

The Heartbeat Model

Clawdbot's core insight is that AI agents don't need to be constantly running. They need to wake up periodically, understand the current state, do one meaningful thing, and record what happened. That's a heartbeat.

Every heartbeat cycle runs a 7-phase dispatch protocol:

Phase 1: Context Load     → Read rotation state, roster, rules, memory bank, playbook
Phase 2: Situational Awareness → Check GitHub issues/PRs, diff against memory
Phase 3: Execute          → Pick ONE high-impact action for the current role
Phase 4: Memory Update    → Write what changed to bank.md
Phase 5: Compression      → Archive and compress memory if it's getting long
Phase 6: Evolution        → Assess if the team structure needs to change
Phase 7: State Update     → Advance rotation, commit & push

The key word there is ONE. Each cycle, the agent picks the single highest-impact action for the current role. Not a laundry list. One focused task. Then it updates memory and rotates to the next role.
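
In code terms, a heartbeat is just a short, linear pipeline. Here's a minimal TypeScript sketch of that shape; the type and function names are mine, not ADA's actual internals, but the seven phases map one-to-one onto the protocol above:

type HeartbeatContext = { role: string; bankPath: string };

interface PhaseHandlers {
  loadContext(): Promise<HeartbeatContext>;                            // Phase 1
  assessSituation(ctx: HeartbeatContext): Promise<string[]>;           // Phase 2: what changed since last cycle
  execute(ctx: HeartbeatContext, changes: string[]): Promise<string>;  // Phase 3: ONE action, returns a summary
  updateMemory(ctx: HeartbeatContext, summary: string): Promise<void>; // Phase 4
  compressIfNeeded(ctx: HeartbeatContext): Promise<void>;              // Phase 5
  checkEvolution(ctx: HeartbeatContext): Promise<void>;                // Phase 6
  advanceRotation(ctx: HeartbeatContext): Promise<void>;               // Phase 7
}

async function runHeartbeat(phases: PhaseHandlers): Promise<void> {
  const ctx = await phases.loadContext();
  const changes = await phases.assessSituation(ctx);
  const summary = await phases.execute(ctx, changes); // exactly one focused task
  await phases.updateMemory(ctx, summary);
  await phases.compressIfNeeded(ctx);
  await phases.checkEvolution(ctx);
  await phases.advanceRotation(ctx);
}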

The Roster: Your AI Team

Every project gets a roster.json that defines its team. Here's a simplified version from my SocialTrade project:

{
  "company": "SocialTrade",
  "product": "SocialTrade — Social Trading Platform",
  "tagline": "Share picks. Follow traders. Win together.",
  "roles": [
    {
      "id": "ceo",
      "name": "Alpha",
      "title": "CEO",
      "emoji": "👔",
      "focus": ["business_strategy", "competitive_intelligence", "go_to_market"],
      "actions": ["write_business_plans", "swot_analysis", "market_research"]
    },
    {
      "id": "frontend",
      "name": "The Chart Wizard",
      "title": "Frontend Engineer",
      "emoji": "⚛️",
      "focus": ["react_components", "feed_rendering", "leaderboard_ui"],
      "actions": ["write_components", "create_prs", "optimize_renders"]
    },
    {
      "id": "backend",
      "name": "The Quant",
      "title": "Backend Engineer",
      "emoji": "🗄️",
      "focus": ["supabase_schema", "rls_policies", "database_migrations"],
      "actions": ["write_migrations", "optimize_queries", "create_edge_functions"]
    }
    // ... plus product, scrum, research, ops, growth, design
  ],
  "rotation_order": [
    "ceo", "growth", "research", "product", "scrum",
    "frontend", "backend", "ops", "design"
  ]
}

Each role has a name (gives it personality), focus areas (what it pays attention to), and actions (what it's allowed to do). The rotation order determines who goes next. Nine roles means nine different perspectives hitting the codebase in sequence.
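
If it helps to see the shape as types, here's roughly what the structure above looks like in TypeScript. Field names match the example; the real schema may carry more (the multi-repo setups later in this post add a scope field, for instance):

interface RosterRole {
  id: string;         // stable identifier used in rotation_order
  name: string;       // the personality ("Alpha", "The Quant", ...)
  title: string;
  emoji: string;
  focus: string[];    // what this role pays attention to
  actions: string[];  // what this role is allowed to do
}

interface Roster {
  company: string;
  product: string;
  tagline?: string;
  roles: RosterRole[];
  rotation_order: string[]; // role ids, in wake-up order
}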

Memory That Persists

This is what makes the whole thing work. Every project has a bank.md — a shared memory file that every role reads before acting and updates after acting. It looks like this:

# 🧠 Memory Bank — SocialTrade
 
> **Last updated:** 2026-02-01 | **Cycle:** 16 | **Version:** 4
 
## Current Status
 
### In Progress
- Leaderboard materialized view optimization IMPLEMENTED ⚡
  — eliminated 100+ N+1 queries, 95% performance improvement
- Copy Trading feature specification CREATED
  — comprehensive product spec based on Research findings
- Risk Management Framework established ✅
  — 22 security vulnerabilities addressed
 
### Blockers
- CI pipeline critical blocker (PR #39): Blocking 5 ready-to-merge PRs
 
### Architecture Decisions
| ID | Decision | Date | Author |
|----|----------|------|--------|
| ADR-001 | Split engineering into frontend and backend roles | 2026-01 | Builder |
| ADR-002 | 9-role rotation order | 2026-01 | System |

Every role section tracks what it last did, what it's working on, and what's in its pipeline. When the Engineering agent wakes up, it reads the bank, sees that the Product agent just created a spec, and knows exactly what to implement. When Ops wakes up, it sees there are PRs to merge and CI to fix.

The agents talk to each other through this shared memory — no real-time coordination needed.
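
Mechanically, that communication channel is nothing fancier than reading a Markdown file before acting and appending to it afterwards. Here's a minimal Node/TypeScript sketch; the entry format is illustrative, not ADA's exact layout:

import { appendFile, readFile } from "node:fs/promises";

const BANK = "agents/memory/bank.md";

// Read the shared memory before acting.
async function readBank(): Promise<string> {
  return readFile(BANK, "utf8");
}

// Append a dated, role-attributed note after acting.
async function recordCycle(role: string, summary: string): Promise<void> {
  const stamp = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
  await appendFile(BANK, `\n- **${stamp} · ${role}:** ${summary}\n`);
}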

The Dispatch Protocol

The DISPATCH.md file is the playbook the agent follows on every heartbeat. It's the same 7-phase protocol across all projects, but customized per-repo:

# Agent Dispatch Protocol
 
## Phase 1: Context Load
1. Read `agents/state/rotation.json` → determine current role
2. Read `agents/roster.json` → get rotation order
3. Read `agents/rules/RULES.md` → know the rules
4. Read `agents/memory/bank.md` → understand current project state
5. Read the current role's playbook: `agents/playbooks/<role>.md`
 
## Phase 2: Situational Awareness
6. Check GitHub: `gh issue list` and `gh pr list`
7. Cross-reference with memory bank — what changed since last cycle?
8. Identify the highest-impact action for this role
 
## Phase 3: Execute
9. Pick ONE action from the role's playbook
10. Execute via GitHub (create issue, write code + PR, add docs)
 
## Phase 4: Memory Update
12. Update bank.md with what changed
 
## Phase 5: Compression Check
13. Bank > 200 lines? Compress. 10+ cycles? Compress.
 
## Phase 6: Evolution Check
15. Is there a capability gap no role covers?
    Are 5+ issues piling up in a new domain?
 
## Phase 7: State Update
17. Advance rotation, update timestamp, commit & push

Phase 6 is fascinating — the agents can propose evolving their own team structure. If they notice issues piling up in an area no role covers, they can suggest adding a new role. The SocialTrade team actually split the Engineering role into separate Frontend and Backend roles (ADR-001) because one role couldn't effectively cover React components and Supabase schema optimization.
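
The gap-detection heuristic is simple enough to sketch. Assuming issues are labeled by domain and a label counts as covered when some role lists it among its focus areas (that mapping is my assumption, not ADA's exact rule), it's just a counting pass:

interface GapCheckRole { focus: string[] }
interface GapCheckIssue { labels: string[] }

// Return labels with 5+ open issues that no role's focus areas cover.
function findCapabilityGaps(roles: GapCheckRole[], issues: GapCheckIssue[]): string[] {
  const covered = new Set(roles.flatMap((r) => r.focus));
  const counts = new Map<string, number>();
  for (const issue of issues) {
    for (const label of issue.labels) {
      if (!covered.has(label)) {
        counts.set(label, (counts.get(label) ?? 0) + 1);
      }
    }
  }
  return [...counts.entries()].filter(([, n]) => n >= 5).map(([label]) => label);
}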

Real Results

This isn't theoretical. Here's what's actually running:

SocialTrade: 16 Cycles and Counting

The SocialTrade autonomous team has completed 16 full rotation cycles. Real output:

  • The Quant (Backend) built a materialized view for the leaderboard that eliminated 100+ N+1 queries — a 95% performance improvement on the most-hit endpoint
  • The Chart Wizard (Frontend) implemented pick performance cards with real-time P&L display
  • Alpha (CEO) produced a Q1 strategic roadmap and comprehensive risk management framework covering 22 security vulnerabilities
  • The Analyst (Research) delivered a complete copy trading API feasibility assessment — evaluated providers, recommended Alpaca, estimated $30K MVP cost
  • The Strategist (Product) turned that research into a full feature specification with user stories and implementation phases

All of this happened autonomously. I reviewed the PRs, but I didn't write any of it.

ADA Building Itself: 15 Cycles

The ADA project — the framework for setting up these agent teams — is built by its own agents. 15 cycles in:

  • The Builder (Engineering) implemented the ada init CLI command and core rotation logic
  • The Guardian (Ops) set up the CI pipeline with lint, typecheck, and test stages
  • The Architect (Design) specified the @ada/core API with an immutable-first design
  • The Dealmaker (Growth) produced a pitch deck and identified target VCs
  • The Founder (CEO) developed the freemium business model (open-source CLI → SaaS dashboard)

The LLM orchestration architecture was debated across multiple cycles between Research, Engineering, and Design before settling on a hybrid Clawdbot approach. That's agents having a real architectural discussion through GitHub issues and memory bank updates.

Chaat Club: Multi-Repo Agent Teams

This one's interesting. Chaat Club has two repos — a consumer food discovery app and a restaurant management portal — sharing a single Supabase backend. I set up a unified agent team that manages both:

{
  "company": "Chaat Club",
  "product": "Chaat — Food Discovery + Restaurant Management Platform",
  "repos": [
    {
      "name": "chaat-app",
      "github": "Chaat-Club/chaat-app",
      "purpose": "Consumer food discovery and social platform"
    },
    {
      "name": "chaat-restaurant-portal",
      "github": "Chaat-Club/chaat-restaurant-portal",
      "purpose": "Restaurant management dashboard (B2B)"
    }
  ],
  "shared_backend": "Supabase (shared instance for both apps)"
}

Each role has a scope field that defines which repos it can touch. The Backend Lead works on the shared Supabase layer. The Frontend Lead works across both apps. Product thinks about both consumer and restaurant experiences. One team, two codebases, shared context.
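
In the roster, that looks roughly like this (a simplified, illustrative excerpt rather than the exact production config):

{
  "id": "frontend",
  "name": "The Frontend Lead",
  "title": "Frontend Engineer",
  "scope": ["chaat-app", "chaat-restaurant-portal"],
  "focus": ["react_components", "shared_design_system"],
  "actions": ["write_components", "create_prs"]
}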

R.I.A.: The Personal AI That Ties It All Together

R.I.A. is the orchestrator — my personal Clawdbot instance that kicks off heartbeats for each project. It also handles my personal stuff: updating my GitHub profile, managing this website, triaging emails. It's the "main agent" that spawns the project-specific agent teams.

How to Set It Up

Want to run autonomous dev agents on your own repo? Here's how.

1. Install Clawdbot

npm install -g clawdbot

2. Configure It

clawdbot configure

This walks you through connecting your LLM provider (Anthropic, OpenAI, etc.), GitHub token, and other integrations.

3. Create the Agent Directory Structure

In your repo root:

agents/
├── DISPATCH.md              # The 7-phase heartbeat protocol
├── roster.json              # Your team definition
├── memory/
│   ├── bank.md              # Shared memory (the brain)
│   └── banks/               # Per-role memory banks
│       ├── ceo.md
│       ├── engineering.md
│       ├── product.md
│       └── ...
├── playbooks/               # Role-specific instructions
│   ├── ceo.md
│   ├── engineering.md
│   ├── product.md
│   └── ...
├── rules/
│   └── RULES.md             # Team rules and conventions
└── state/
    └── rotation.json        # Current rotation state
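
If you're laying this out by hand, a couple of plain shell commands will do the scaffolding:

mkdir -p agents/memory/banks agents/playbooks agents/rules agents/state
touch agents/DISPATCH.md agents/roster.json agents/memory/bank.md \
      agents/rules/RULES.md agents/state/rotation.json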

4. Define Your Roster

Create agents/roster.json. Start with the template and customize:

{
  "company": "Your Company",
  "product": "Your Product",
  "roles": [
    {
      "id": "product",
      "name": "The PM",
      "title": "Product Lead",
      "emoji": "📦",
      "focus": ["features", "roadmap", "user_stories"],
      "actions": ["create_feature_issues", "write_specs", "prioritize_backlog"]
    },
    {
      "id": "engineering",
      "name": "The Builder",
      "title": "Lead Engineer",
      "emoji": "⚙️",
      "focus": ["implementation", "architecture", "testing"],
      "actions": ["write_code", "create_prs", "code_review"]
    },
    {
      "id": "ops",
      "name": "The Guardian",
      "title": "DevOps Lead",
      "emoji": "🛡️",
      "focus": ["ci_cd", "code_quality", "security"],
      "actions": ["merge_prs", "fix_ci", "enforce_standards"]
    }
  ],
  "rotation_order": ["product", "engineering", "ops"]
}

You don't need all nine roles to start. Three is plenty. Add more as the project grows.

5. Initialize Rotation State

Create agents/state/rotation.json:

{
  "current_index": 0,
  "last_role": null,
  "last_run": null,
  "cycle_count": 0,
  "history": []
}
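
For reference, "advancing the rotation" in Phase 7 is just index arithmetic over this file. Here's a TypeScript sketch; field names mirror rotation.json above, and since I'm not certain whether ADA counts a "cycle" per heartbeat or per full rotation, treat the counting below as one reasonable interpretation:

interface RotationState {
  current_index: number;
  last_role: string | null;
  last_run: string | null;
  cycle_count: number;
  history: string[];
}

function advanceRotation(state: RotationState, rotationOrder: string[]): RotationState {
  const role = rotationOrder[state.current_index];
  const nextIndex = (state.current_index + 1) % rotationOrder.length;
  return {
    current_index: nextIndex,
    last_role: role,
    last_run: new Date().toISOString(),
    // Count a full cycle each time the rotation wraps back to the first role.
    cycle_count: nextIndex === 0 ? state.cycle_count + 1 : state.cycle_count,
    history: [...state.history, role],
  };
}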

6. Write the Dispatch Protocol

Copy the DISPATCH.md from any of the examples above, or grab the template from the ADA repo. Customize the branch strategy (main vs develop) and any project-specific rules.

7. Set Up Memory

Create agents/memory/bank.md with your initial project state:

# 🧠 Memory Bank — [Your Project]
 
> **Last updated:** [date] | **Cycle:** 0 | **Version:** 1
 
## Current Status
 
### Active Sprint
- Sprint 0: Foundation
- Goal: [Your initial goals]
 
### In Progress
- (nothing yet — first cycle will populate this)
 
### Blockers
- (none)
 
## Architecture Decisions
(none yet)
 
## Role State
(will be populated as agents run)

8. Run It

You can trigger heartbeats via Clawdbot's built-in scheduler, a cron job, or manually:

# Run a single heartbeat cycle
ada run
 
# Or trigger via Clawdbot heartbeat
clawdbot heartbeat --dispatch agents/DISPATCH.md

Each heartbeat runs one role, does one thing, updates memory, and advances to the next role. Let it run for a few cycles and watch your GitHub fill up with issues, PRs, and meaningful commits.
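
If you go the cron route, the crontab entry reuses the same clawdbot heartbeat command shown above. The every-two-hours schedule and the repo path are placeholders:

# Run a heartbeat every 2 hours (crontab -e)
0 */2 * * * cd /path/to/your-repo && clawdbot heartbeat --dispatch agents/DISPATCH.md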

The Honest Take

Let me be real about what works and what doesn't.

What Works Well

Memory persistence is the killer feature. The bank.md pattern means agents genuinely build on each other's work. When the Backend agent optimized the SocialTrade leaderboard, the Frontend agent picked that up in the next cycle and adjusted the UI to use the new endpoint. No coordination meeting required.

Role specialization creates better output. A "Product Lead" agent writing specs produces meaningfully different (and better) output than a generic "do everything" agent. The constraints of the role focus the LLM's attention.

GitHub integration makes it real. Everything flows through issues and PRs. You get a full audit trail. You can review, comment, request changes — the normal developer workflow. The agents feel like team members, not magic boxes.

The evolution mechanism catches gaps. The SocialTrade team splitting Engineering into Frontend and Backend was proposed by the agents themselves when they noticed the scope was too broad for one role.

What's Still Rough

Agents sometimes produce placeholder content. Especially early on, you'll see specs that are structured perfectly but lack depth. The leaderboard optimization numbers are real — but some early-cycle outputs were more like well-formatted TODOs than actual implementations.

You still need human oversight. I review every PR. The agents can't yet catch subtle architectural mistakes or business logic errors that require domain context beyond what's in the memory bank. Think of it as having very productive junior developers who need code review.

Memory compression is an art. Bank files grow fast. The compression protocol (archive + summarize at 200 lines or 10 cycles) works, but sometimes important context gets lost in summarization. I've had agents re-propose solutions that were already rejected, because the rejection got compressed away.

Cold starts are slow. The first 3-5 cycles of any new project are mostly setup — creating initial issues, writing foundational specs, establishing conventions. It gets productive around cycle 6-8.

The Vision

I'm working toward a world where you can point an AI team at a repo, give it a product brief, and come back in a week to a working MVP with proper tests, CI, documentation, and a sprint backlog. We're not there yet — but we're closer than most people think.

The fact that ADA is building itself, with 15 autonomous cycles producing real CLI code, CI pipelines, API specifications, and business strategy — that's not a demo. That's the future of software development being prototyped in real time.

If you want to try it, start small. One repo, three roles, let it run for 10 cycles. You'll be surprised what comes out the other side.


All code examples in this post are from real, running configurations. The roster files, dispatch protocols, and memory bank excerpts are taken directly from active projects as of February 2026.

Have questions or want to share your own agent team setup? Find me on GitHub or Twitter.

About the Author

Ishan Rathi is an AI Engineer at Amazon with a Master's degree in Artificial Intelligence from Johns Hopkins University. Passionate about building intelligent systems and sharing insights on AI, machine learning, and software engineering.
