The Lost Track: Distributed AI & Multi-Agent Systems

In my previous post, I wrote about the work I did to graduate: Ship Detection with U-Nets. It was solid, practical, funded research.

But it wasn't the work I wanted to do.

Digging through my archives from 2017, I found my original research proposal: "Distributed Artificial Intelligence Research: Outline of Proposed Research." Reading it now, nine years later, is haunting. I was trying to describe the architecture we are finally building today, but I lacked the tools (and the LLMs) to make it real.

The Vision: Distributed Problem Solving

The core of the proposal was the distinction between Distributed Problem Solving (DPS), where a central decomposition decides who works on what, and Multi-Agent Systems (MAS), where autonomous agents coordinate without any central plan.

My hypothesis was that a pure MAS approach would be too chaotic, and a pure DPS approach too rigid. I wanted to build a hybrid: a network of autonomous agents with a "thin" DPS management layer for resource allocation, not telling agents what to think, but ensuring they had the data to think with.
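Translated into present-day Python, here is a minimal sketch of that hybrid, with the caveat that every name in it (`Agent`, `ThinDPSLayer`, the feed keys) is hypothetical and not from the proposal:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Autonomous: decides for itself, but has to be fed data."""
    name: str
    needs: set[str]                       # data feeds this agent wants
    inbox: dict[str, float] = field(default_factory=dict)

    def act(self) -> str:
        # The DPS layer never dictates this decision; it only feeds it.
        if self.needs - self.inbox.keys():
            return f"{self.name}: waiting on data"
        return f"{self.name}: deciding with {sorted(self.inbox)}"

class ThinDPSLayer:
    """The 'thin' management layer: pure resource allocation, no commands."""

    def __init__(self, feeds: dict[str, float]):
        self.feeds = feeds                # the data the network can hand out

    def allocate(self, agents: list[Agent]) -> None:
        for agent in agents:
            for need in agent.needs & self.feeds.keys():
                agent.inbox[need] = self.feeds[need]

agents = [Agent("plot-7", {"soil_ph", "moisture"}), Agent("plot-9", {"moisture"})]
ThinDPSLayer({"soil_ph": 5.8, "moisture": 0.31}).allocate(agents)
for a in agents:
    print(a.act())
```

The point of the split: kill the allocation layer and the agents keep reasoning, just with stale data.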

The Use Cases (Then vs. Now)

I wasn't interested in abstract game theory. I wanted to apply this to the physical world. The proposal specifically listed two domains:

1. Forestry Management

I proposed using agents to ingest remote sensor data (soil acidity, moisture, temperature) to autonomously schedule land cycling. Instead of a top-down master plan, each plot of land (represented by an agent) could bid for resources or signal its readiness for production or recovery.

"Preventative measures by government agencies to allocate resources into improving national park health... and point of interest wildfire prediction."

2. Marine Traffic & Conservation

I wanted to use tidal and current data from Fisheries and Oceans Canada to dynamically route shipping. The idea was to have "Conservation Agents" that would block or re-route "Shipping Agents" to allow marine life to recover in polluted or high-traffic zones.
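Sketched with hypothetical names (the proposal named the agent roles but never specified an interface), the interaction is a veto: the conservation side strikes zones from a proposed route and hands it back for re-planning.

```python
PROTECTED_ZONES = {"zone-C"}             # e.g. a recovering high-traffic habitat

class ShippingAgent:
    def propose_route(self) -> list[str]:
        # In the full vision this would be planned from Fisheries and
        # Oceans Canada tidal/current data, not hardcoded waypoints.
        return ["zone-A", "zone-C", "zone-D"]

class ConservationAgent:
    def review(self, route: list[str]) -> list[str]:
        """Veto protected segments; the shipper must plan around them."""
        return [zone for zone in route if zone not in PROTECTED_ZONES]

proposed = ShippingAgent().propose_route()
approved = ConservationAgent().review(proposed)
print(approved)                          # ['zone-A', 'zone-D']
```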

Why It Didn't Happen (Then)

In 2017, building this meant writing thousands of lines of Java (the JADE framework) or C++. The "agents" were incredibly brittle state machines. They didn't have "reasoning"; they had `if/else` blocks. Making them do anything useful meant defining the entire ontology of the world upfront.
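For contrast, here is roughly what one of those 2017 "agents" amounted to. Mine were Java on JADE, but the shape is the same in any language:

```python
# A 2017-era "agent": a hand-written state machine. Every world state
# has to be enumerated upfront; anything unanticipated just breaks.
def forestry_agent(state: str, moisture: float) -> str:
    if state == "PRODUCING":
        return "RECOVERING" if moisture < 0.2 else "PRODUCING"
    elif state == "RECOVERING":
        return "PRODUCING" if moisture >= 0.35 else "RECOVERING"
    raise ValueError(f"unknown state: {state}")   # the ontology problem
```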

I pivoted to Deep Learning because it worked right then: U-Nets could actually find ships. My autonomous forestry agents were just XML files arguing with each other.

Closing the Loop

It is 2026. The irony is not lost on me.

We now have the "reasoning engine" that was missing. LLMs provide the cognitive runtime for these agents. We have the infrastructure (Kubernetes, Wasm, Rust) to run them efficiently.
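In sketch form, the agent loop now delegates the decision to a model instead of a rule table. `llm_decide` below is a stub, not a real API; a real implementation would send the prompt to whatever inference endpoint you run:

```python
import json

def llm_decide(observation: dict) -> dict:
    """Stub for an LLM call. The prompt does the job that 2017's
    hand-written ontology did: it describes the world to the agent."""
    prompt = (
        "You manage one forest plot. Given these sensor readings, reply "
        'with JSON {"action": "produce" | "recover", "reason": "..."}:\n'
        + json.dumps(observation)
    )
    # A real implementation would send `prompt` to a model endpoint.
    # Stubbed here so the sketch runs offline:
    return {"action": "recover", "reason": "moisture below threshold"}

def agent_step(observation: dict) -> dict:
    # A thin DPS layer still allocates data and compute; the reasoning
    # itself now lives in the model, not in if/else blocks.
    return llm_decide(observation)

print(agent_step({"soil_ph": 5.8, "moisture": 0.12, "temp_c": 24.0}))
```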

The "Ship Detection" work gave me the systems engineering chops (Redis, Cloud, GPUs). The "MAS" vision gave me the architectural blueprint.

We are finally building the system I sketched out in that 2017 PDF. The agents are no longer brittle. And they are ready to work.