
Parallel Agents in the FLIN Runtime

FLIN's parallel agent system: concurrent execution, message passing, and the agent-based runtime model.

Thales & Claude | March 25, 2026 | 11 min read

Tags: flin, agents, concurrency, parallel, runtime, actor-model

Session 011 was the first time we used parallel agents to build FLIN. Not parallel agents in the language runtime -- parallel agents in the development process itself. Two Claude subagents, running simultaneously on independent files, producing code that compiled together into a working system.

But this article is about both meanings of "parallel agents": the development methodology that let us build FLIN faster than any sequential process could, and the agent-based concurrency model within the FLIN runtime that enables parallel execution without shared mutable state.

---

The Development Agents

Before Session 011, every feature was built sequentially. Claude wrote a function, tested it, moved to the next function, tested it, and so on. This was safe but slow. A typical session produced 500-800 lines of code.

Session 011 changed the approach. Three tasks needed to happen:

1. Implement the Heap struct and garbage collector in src/vm/memory.rs.
2. Implement entity operations in src/vm/vm.rs.
3. Write integration tests in tests/integration_vm.rs.

Tasks 1 and 3 were completely independent -- they touched different files with no shared state. Task 2 modified a shared file (vm.rs), but its changes were limited to adding new fields and match arms.

We launched two parallel subagents:

```
AGENT 1 (af469e7): Create src/vm/memory.rs
- Task: Implement Heap struct and GC (mark, sweep, collect)
- File: src/vm/memory.rs (NEW)
- Result: 731 lines, 22 GC tests

AGENT 2 (a307f11): Create tests/integration_vm.rs
- Task: Write end-to-end tests for counter.flin
- File: tests/integration_vm.rs (NEW)
- Result: 638 lines, 18 tests
```

Both agents ran simultaneously on different files. The main agent implemented entity operations while waiting. When all three finished, the code compiled on the first attempt. No merge conflicts. No integration failures.

Session 011 produced 1,500 lines of code and 40 new tests -- roughly double the output of a sequential session.

---

Rules for Parallel Development

The success of parallel agents depends on strict rules. Session 011 established them and they were added to the project's CLAUDE.md:

When to use parallel agents:
- Tasks operate on different files with no shared imports.
- Tasks have no data dependencies (Agent B does not need the output of Agent A).
- The total is 2-3 agents maximum (more causes coordination overhead).

When NOT to use parallel agents:
- Tasks modify the same file.
- One task depends on types or functions defined by another task.
- The logic requires iterative debugging (parallel agents cannot debug each other's code).

The template for launching agents:

AGENT N: [One-sentence description]
- File: [exact file path]
- Task: [detailed specification including function signatures]
- Dependencies: [what this agent can assume exists]
- Deliverables: [what the agent must produce]

The specification must be precise enough that the agent can work without asking questions. If the specification is ambiguous, the agent will make assumptions, and those assumptions may conflict with the main agent's assumptions. Explicit is better than implicit.
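For instance, a filled-in specification might read as follows (the file paths, signatures, and deliverables here are hypothetical, for illustration only):

```
AGENT 1: Implement the mark-and-sweep garbage collector.
- File: src/vm/memory.rs
- Task: Implement Heap with alloc(value: Value) -> HeapRef,
  mark(roots: &[HeapRef]), and sweep() -> usize (objects freed).
- Dependencies: Value and HeapRef are defined in src/vm/value.rs.
- Deliverables: memory.rs compiling standalone, plus unit tests
  covering alloc, mark, and sweep.
```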

---

Session 013: Three Agents at Once

Session 013 pushed the parallel approach further with three simultaneous agents:

| Agent | Task | File | Result |
| --- | --- | --- | --- |
| Agent 1 | Math built-ins + opcodes | bytecode.rs, vm.rs | ~200 lines |
| Agent 2 | HOF infrastructure | vm.rs | ~300 lines |
| Agent 3 | Tests for all built-ins | integration_vm.rs | ~700 lines |

This was more aggressive: Agents 1 and 2 both modified vm.rs. The risk of merge conflicts was real. The mitigation was clear specification: Agent 1 added match arms for math opcodes (0xE1-0xE8), while Agent 2 added the HofContext struct and hof_call_next/hof_handle_return methods. Their changes did not overlap because they operated on different sections of the file.

All three agents completed. The code needed manual fixes for borrow checker issues in the HOF helpers (Rust's borrow checker does not forgive ambiguity, and parallel agents sometimes generate code that conflicts with each other's borrowing patterns). After those fixes, 338 tests passed.

---

The Agent-Based Concurrency Model

Now let us shift from how we built FLIN to how FLIN itself handles concurrency.

FLIN's runtime uses an agent-based model inspired by Erlang's actor system and Go's goroutines. An agent is an independent unit of computation with its own state, communicating with other agents through message passing. No shared mutable state. No locks. No data races.

The agent model maps naturally to web application patterns:

  • HTTP handler agents: Each incoming HTTP request spawns an agent that processes the request, queries the database, renders the view, and sends the response. Agents for different requests run concurrently.
  • Background task agents: Long-running operations (sending emails, processing uploads, computing reports) run as background agents that do not block the HTTP request path.
  • WebSocket agents: Each WebSocket connection has its own agent that manages subscriptions, receives messages, and sends updates.

---

Agent Architecture in Rust

Under the hood, FLIN agents are Tokio tasks communicating through channels:

```rust
pub struct Agent {
    id: AgentId,
    state: AgentState,
    mailbox: mpsc::Receiver<Message>,
    sender: mpsc::Sender<Message>,
}

pub enum Message {
    Request(HttpRequest),
    EntityChanged(String, ChangeType, EntityInstance),
    Timer(Duration),
    Shutdown,
}

impl Agent {
    pub async fn run(mut self) {
        loop {
            match self.mailbox.recv().await {
                Some(Message::Request(req)) => {
                    self.handle_request(req).await;
                }
                Some(Message::EntityChanged(entity_type, change, entity)) => {
                    self.handle_entity_change(&entity_type, change, &entity).await;
                }
                Some(Message::Timer(duration)) => {
                    self.handle_timer(duration).await;
                }
                Some(Message::Shutdown) | None => {
                    break;
                }
            }
        }
    }
}
```

Each agent has a mailbox (an mpsc::Receiver) and a sender (an mpsc::Sender that other agents use to send messages to it). The agent's run method loops, processing messages one at a time. Within a single agent, execution is sequential -- no concurrency concerns. Between agents, execution is concurrent -- they run on separate Tokio tasks and communicate only through messages.

This design eliminates an entire class of bugs:

  • No data races: Each agent owns its state exclusively. No other agent can read or write it.
  • No deadlocks: Agents do not hold locks. They send messages and continue. If Agent A sends a message to Agent B and Agent B sends a message to Agent A, both messages are queued in mailboxes and processed sequentially. No circular wait.
  • No shared mutable state: The only shared data structures are the message channels themselves, which are thread-safe by construction (Tokio's mpsc channels).

---

Agent Lifecycle

Agents in FLIN have a simple lifecycle:

```rust
pub enum AgentState {
    Starting,
    Running,
    Paused,
    ShuttingDown,
    Terminated,
}
```

- Starting: The agent is initialising -- loading configuration, connecting to resources.
- Running: The agent is processing messages from its mailbox.
- Paused: The agent is temporarily suspended (e.g., waiting for a resource).
- ShuttingDown: The agent has received a Shutdown message and is cleaning up.
- Terminated: The agent has exited its run loop.

The runtime maintains a registry of all active agents:

```rust
pub struct AgentRegistry {
    agents: HashMap<AgentId, mpsc::Sender<Message>>,
    counters: AgentCounters,
}

impl AgentRegistry {
    pub fn spawn(&mut self, agent: Agent) -> AgentId {
        let id = agent.id;
        let sender = agent.sender.clone();
        self.agents.insert(id, sender);
        self.counters.spawned += 1;

        tokio::spawn(async move {
            agent.run().await;
        });

        id
    }

    pub fn send(&self, id: AgentId, msg: Message) -> Result<(), AgentError> {
        let sender = self.agents.get(&id)
            .ok_or(AgentError::NotFound(id))?;
        sender.try_send(msg)
            .map_err(|_| AgentError::MailboxFull(id))
    }
}
```

The registry maps agent IDs to their mailbox senders. Any agent (or the runtime itself) can send a message to any other agent by ID. The registry does not hold references to the agents' state -- only to their mailbox senders. This ensures that the registry does not create a central bottleneck or a shared-state problem.

---

Message Passing Patterns

Three message passing patterns emerge in FLIN applications:

Request-Response

The HTTP server spawns a handler agent for each request. The handler processes the request and sends the response back through a oneshot channel:

```rust
let (response_tx, response_rx) = oneshot::channel();
registry.send(handler_id, Message::Request(req, response_tx))?;
let response = response_rx.await?;
```

Publish-Subscribe

Entity change notifications use a pub-sub pattern. When an entity is saved, the runtime publishes a change event to all agents that have subscribed to that entity type:

```rust
// Subscribe
registry.send(subscription_agent_id,
    Message::Subscribe("Todo".into()));

// Publish (triggered by save/delete)
for subscriber_id in subscribers.get("Todo").unwrap_or(&empty) {
    registry.send(*subscriber_id,
        Message::EntityChanged("Todo".into(), ChangeType::Create, entity.clone()));
}
```

Fire-and-Forget

Background tasks use fire-and-forget messaging. The HTTP handler sends a message to the background agent and immediately returns the response to the client:

```rust
// Send email in background -- don't wait for result
registry.send(email_agent_id,
    Message::SendEmail(to, subject, body))?;

// Return response immediately
Ok(Response::ok("Entity saved"))
```

---

Why Agents, Not Threads

The agent model was chosen over raw threads for several reasons:

Developer simplicity. FLIN developers should not need to understand mutexes, condition variables, or memory ordering. They should write save user and the runtime handles the concurrency. Agents make this possible because the runtime manages agent lifecycles and message routing. The developer never sees a thread.

Fault isolation. If an agent panics (due to a bug in the FLIN program or an unexpected input), only that agent terminates. The runtime can restart it, log the error, and continue serving other requests. With shared-state concurrency, a panic in one thread can leave shared data inconsistent, corrupting other threads.

Natural fit for web applications. HTTP requests are inherently independent -- each request has its own state (headers, body, session) and produces its own response. Mapping each request to an agent is a natural fit. WebSocket connections are likewise independent agents that process messages from their respective clients.

---

Performance Characteristics

The agent model has specific performance trade-offs:

Throughput: Agents add overhead compared to raw threads -- each message pass involves a channel send and receive, which includes atomic operations. For FLIN's target workload (hundreds of concurrent requests, not millions), this overhead is negligible.

Latency: Message passing adds a small latency compared to direct function calls. A channel send on Tokio's mpsc channel takes about 50 nanoseconds. For an HTTP handler that spends milliseconds on database queries and HTML rendering, 50 nanoseconds of message overhead is invisible.

Memory: Each agent has its own state, which means data is duplicated across agents rather than shared. For small application state (a few kilobytes per request), this is acceptable. For large datasets, agents can reference shared read-only data (like the compiled bytecode) through Arc without copying.

---

The Parallel Story

The parallel agent story runs on two tracks: how we built FLIN, and how FLIN runs applications.

In both cases, the principle is the same: independent tasks should run independently. Whether it is two Claude subagents building two different Rust files, or two HTTP handler agents processing two different requests, the pattern is isolation, independence, and message-based coordination.

Session 011 proved that parallel development agents could double throughput without introducing integration failures. The FLIN runtime's agent model promises the same for running applications: concurrent request handling without shared-state bugs.

Both are bets on the same insight: the cost of coordination is almost always higher than the cost of duplication. Cloning a value is cheaper than protecting it with a mutex. Specifying a task completely is cheaper than debugging a merge conflict. Isolation is cheaper than communication.

---

What Comes Next

This article concludes the "Virtual Machine" arc of the "How We Built FLIN" series. We have covered the stack machine architecture, memory management and garbage collection, closures and higher-order functions, view rendering, the complete opcode reference, hot module reload, async and concurrency, the reactivity engine, the first browser render, and the parallel agent system.

The next arc dives into the type system: how FLIN's type checker prevents bugs before the VM ever sees the bytecode. Type inference, generics, entity schema validation, and the type-safe bridge between FLIN code and the database.

From runtime to compile time. From execution to verification. The story continues.

---

This is Part 30 of the "How We Built FLIN" series, documenting how a CEO in Abidjan and an AI CTO built a programming language from scratch.

Series Navigation:
- [21] Building a Stack-Based Virtual Machine in Rust
- [22] Memory Management and Garbage Collection
- [23] Closures and Higher-Order Functions in the VM
- [24] How the VM Executes Views
- [25] The Complete FLIN Opcode Reference
- [26] Hot Module Reload in 42ms
- [27] Async and Concurrency in the VM
- [28] The Reactivity Engine
- [29] The First Browser Render
- [30] Parallel Agents in the FLIN Runtime (you are here)
