From Agents to Coordination: Reframing Distributed AI as a Systems Problem
What feels new about agentic AI is the tooling. What isn’t new is the problem.
Intelligence Is No Longer the Bottleneck
Agents feel novel because the barriers are lower: better models, cheaper inference, and easy parallelism. But once you move past "one agent, one chat window," intelligence stops being the limiting factor.
Getting many autonomous actors to respect dependencies, avoid duplication, and remain inspectable is a coordination problem.
The Architect vs. The Swarm
In building Farcaster, I settled on a pattern that separates "designing the work" from "doing the work." I call it the Architect and the Swarm.
- The Architect: A high-level model that takes a goal and breaks it into a Directed Acyclic Graph (DAG) of discrete tasks.
- The Swarm: A collection of worker "drones" that claim tasks from the graph, execute them, and report back.
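To make the split concrete, here is a minimal sketch of a DAG workplan, assuming nothing about Farcaster's real schema: the task names and the `ready_tasks` helper are hypothetical, but they show the core idea that a task is only claimable once its dependencies are complete.

```python
# Hypothetical workplan: each task lists the tasks it depends on.
# (Illustrative only -- not Farcaster's actual data model.)
plan = {
    "parse_config": {"deps": [],               "done": False},
    "run_tests":    {"deps": ["parse_config"], "done": False},
    "open_pr":      {"deps": ["run_tests"],    "done": False},
}

def ready_tasks(plan):
    """Return tasks that are not done and whose dependencies all are."""
    return [
        name for name, task in plan.items()
        if not task["done"] and all(plan[d]["done"] for d in task["deps"])
    ]

plan["parse_config"]["done"] = True
print(ready_tasks(plan))  # only "run_tests" is unblocked; "open_pr" still waits
```

Drones in the Swarm would repeatedly pull from `ready_tasks`, so parallelism falls out of the graph structure rather than being scheduled by hand.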
This reframes the agent from a "chatbot companion" to a "distributed worker." The system doesn't ask the agent to "fix the codebase." It asks the Architect to design a plan, then asks the Swarm to execute the steps.
Coordination Is Constraint
Coordination is not micromanagement; it’s the definition of boundaries. In Farcaster, these constraints are enforced via a local SQLite substrate:
- Structured Workplans: Jobs aren't just text; they are JSON-defined tasks with explicit dependencies.
- Lease Management: Workers don't "own" tasks; they hold short leases kept alive by heartbeats, so a crashed worker's task becomes claimable again.
- Atomic Transitions: Success is only recorded if the entire state transition is valid.
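The lease and atomic-transition ideas can be sketched against SQLite directly. This is an assumption-laden toy, not Farcaster's schema: the table layout, the `claim`/`complete` functions, and the compare-and-swap `WHERE` clauses are all illustrative. The key property is that every transition is a guarded `UPDATE`, so a stale worker can neither double-claim nor record success after losing its lease.

```python
import sqlite3
import time

# Toy substrate (illustrative schema, not Farcaster's):
# a task is pending, running under a lease, or done.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tasks (
    id            INTEGER PRIMARY KEY,
    state         TEXT NOT NULL DEFAULT 'pending',  -- pending | running | done
    lease_owner   TEXT,
    lease_expires REAL
)""")
db.execute("INSERT INTO tasks (state) VALUES ('pending')")

def claim(worker, ttl=30.0):
    """Rent one claimable task: pending, or running but lease-expired."""
    now = time.time()
    row = db.execute(
        "SELECT id FROM tasks "
        "WHERE state='pending' OR (state='running' AND lease_expires < ?) "
        "LIMIT 1", (now,)).fetchone()
    if row is None:
        return None
    # Guarded UPDATE: succeeds only if the task is still claimable,
    # so two workers racing for the same row can't both win.
    cur = db.execute(
        "UPDATE tasks SET state='running', lease_owner=?, lease_expires=? "
        "WHERE id=? AND (state='pending' OR lease_expires < ?)",
        (worker, now + ttl, row[0], now))
    return row[0] if cur.rowcount == 1 else None

def complete(task_id, worker):
    """Record success only if this worker still holds a live lease."""
    cur = db.execute(
        "UPDATE tasks SET state='done', lease_owner=NULL "
        "WHERE id=? AND lease_owner=? AND lease_expires >= ?",
        (task_id, worker, time.time()))
    return cur.rowcount == 1
```

Heartbeats would extend `lease_expires` with the same guarded-`UPDATE` pattern; if a drone dies, its lease lapses and the task silently returns to the pool.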
Coordination limits blast radius, preserves intent, and enables recovery.
Humans as Stabilizers
Fully autonomous systems are brittle. In my workflows, I've found that human intervention points aren't a sign of failure—they are stabilizers. Whether it's a review gate before a PR is opened or a TUI dashboard to monitor the swarm's progress, visibility is what makes the system trustworthy.
Human-in-the-loop is not a performance bottleneck; it's a security property.
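A review gate can be as small as a function that refuses to run a side-effecting step without explicit approval. The sketch below is hypothetical (the `review_gate` name and its parameters are mine, not Farcaster's API); the `ask` parameter just makes the prompt injectable for testing.

```python
def review_gate(summary, apply_change, ask=input):
    """Show a proposed change; run it only on explicit human approval.

    Illustrative only: in practice `apply_change` might open a PR,
    and `ask` would be the TUI prompt.
    """
    print(f"Proposed: {summary}")
    if ask("Approve? [y/N] ").strip().lower() == "y":
        apply_change()
        return True
    print("Rejected; change discarded.")
    return False
```

The point is structural: the dangerous step lives behind the gate, so a misbehaving drone can propose anything but commit nothing.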
Reframing the Problem
Agent intelligence will continue to improve. But as agents become cheaper and more numerous, the systems that matter will be the ones that take coordination seriously.
We're moving away from "conversational AI" and toward "distributed task orchestration." It's quieter, more boring, and significantly more reliable.