
4 Agents, 1 Product: Building 0cron in a Single Session

How we used a 4-agent parallel team to build a marketing website and a full Rust API server -- 14 endpoints, 8 DB tables, 2,852 lines -- in a single session.

Thales & Claude | March 25, 2026 | 13 min read
Tags: 0cron, multi-agent, rust, axum, parallel-build, methodology

By Thales & Claude -- CEO & AI CTO, ZeroSuite, Inc.

On February 14, 2026, we sat down to build the second product in the ZeroSuite ecosystem. By the end of the session, we had a fully structured Rust API server with 14 endpoints, 8 database tables, a scheduler engine, an HTTP executor with retry logic, a natural language parser, a notification system, an encryption layer -- and a marketing website with 15 sections, bilingual support, and a dark/light theme toggle.

41 files. 3,671 lines of code. Zero compilation errors.

The method: four AI agents working in parallel, coordinated by a team lead, with explicit dependency ordering so nothing blocked unnecessarily.

This is how it worked.

---

The Team Structure

Claude Code supports a TeamCreate primitive that spawns multiple agents in isolated worktrees. Each agent gets its own copy of the repository, works independently, and merges its changes back. The key is in the dependency graph -- which agents can start immediately, and which must wait for another to finish.

Here is the team we assembled:

team-lead (coordinator)
    |
    +-- website-builder     -> Task #1: Marketing website (independent)
    +-- api-architect       -> Task #2: Rust project scaffold (independent)
    +-- core-engine         -> Task #3: Business logic services (blocked by #2)
    +-- api-routes          -> Task #4: HTTP handlers + routes (blocked by #2)

Four agents, two execution waves:

Wave 1 (parallel): website-builder and api-architect start simultaneously. The marketing site has no dependency on the API codebase, and the API scaffold has no dependency on the website. They can work in complete isolation.

Wave 2 (parallel, after Wave 1): Once api-architect finishes the project structure -- Cargo.toml, directory layout, model definitions, database schema, error types -- the core-engine and api-routes agents unblock. core-engine writes the service layer (scheduler, executor, NLP parser, notifications, secrets). api-routes writes the HTTP handlers, request/response types, middleware, and router assembly. Both build on the scaffold but work in different directories, so they proceed in parallel.

The website-builder runs independently throughout. It was actually the last to finish because it produced the largest single file -- 597 lines of HTML with inline CSS and JavaScript.

---

Wave 1: The API Scaffold

The api-architect agent's job was to create the foundation that the other two agents would build on. This is the most critical task in the dependency chain -- if the types are wrong, or the directory structure is awkward, or the error handling pattern is inconsistent, everything downstream inherits those problems.

The agent produced the following structure:

0cron-core/
+-- Cargo.toml                    28 dependencies
+-- Dockerfile                    multi-stage Rust builder -> debian-slim
+-- .env.example                  6 environment variables
+-- src/
    +-- main.rs                   Server entry point
    +-- config.rs                 AppConfig from environment
    +-- error.rs                  AppError enum + IntoResponse
    +-- models/
    |   +-- mod.rs                Re-exports
    |   +-- user.rs               User struct
    |   +-- team.rs               Team + TeamMember
    |   +-- job.rs                Job + JobConfig + RetryConfig
    |   +-- execution.rs          Execution struct
    |   +-- secret.rs             Secret struct
    |   +-- monitor.rs            Monitor struct
    |   +-- api_key.rs            ApiKey struct
    +-- db/
    |   +-- mod.rs                Database struct + connect + migrate
    |   +-- migrations/
    |       +-- 001_initial.sql   Full schema (8 tables + indexes)
    +-- services/
    |   +-- mod.rs                Module declarations (stubs)
    +-- api/
        +-- mod.rs                Router skeleton + AppState
        +-- types.rs              Placeholder
        +-- middleware/
        +-- handlers/

The critical deliverable was the AppState struct and the database schema. Everything else in the system flows through these two artifacts.

The AppState is deliberately minimal:

#[derive(Debug, Clone)]
pub struct AppState {
    pub db: Database,
    pub redis: redis::Client,
    pub config: Arc<AppConfig>,
}

Three fields. Database wraps a PgPool from SQLx. redis::Client is the connection factory for Redis. AppConfig holds environment-derived settings wrapped in Arc for cheap cloning across async tasks.

This struct gets passed as Axum state to every handler. No dependency injection framework, no trait objects, no service locator pattern. The database pool handles connection multiplexing internally. The Redis client creates connections on demand. The config is immutable after startup.
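The "cheap cloning" claim is easy to demonstrate with a dependency-free sketch. The struct below mirrors the AppState shape, with the db and redis fields omitted so it compiles with std alone (names are illustrative stand-ins, not the real types):

```rust
use std::sync::Arc;

// Hypothetical stand-in for the real AppConfig (the actual struct holds
// DATABASE_URL, REDIS_URL, JWT secret, and so on).
#[derive(Debug)]
pub struct AppConfig {
    pub database_url: String,
}

// Mirrors the AppState shape; db and redis omitted to stay dependency-free.
#[derive(Debug, Clone)]
pub struct AppState {
    pub config: Arc<AppConfig>,
}

fn main() {
    let state = AppState {
        config: Arc::new(AppConfig {
            database_url: "postgres://localhost/0cron".into(),
        }),
    };

    // Each handler invocation gets a clone of the state; only the Arc's
    // reference count changes -- the config itself is never copied.
    let for_handler = state.clone();
    assert!(Arc::ptr_eq(&state.config, &for_handler.config));
    assert_eq!(Arc::strong_count(&state.config), 2);
}
```

This is why an immutable-after-startup config needs no locking: clones share one allocation, and nothing ever writes to it.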

The 8-table schema maps directly to the domain:

| Table | Purpose | Key Design Choice |
| --- | --- | --- |
| users | Accounts with email, password hash, OAuth, timezone, plan | Stripe customer ID stored directly for billing integration |
| teams | Organizational containers | Every user gets a default team on registration |
| team_members | Role-based membership (admin, editor, viewer) | Composite primary key on (team_id, user_id) |
| jobs | Cron job definitions | config as JSONB for flexibility; runtime stats denormalized |
| executions | Full execution history | Request and response captured; cascading delete with job |
| secrets | AES-256-GCM encrypted values | Scoped to team, unique on (team_id, key) |
| monitors | Heartbeat/ping monitors | Unique ping token for incoming health checks |
| api_keys | Authentication tokens | Hash stored, prefix indexed for lookup |

---

Wave 1: The Marketing Website (Parallel)

While api-architect was laying out Rust types and SQL schemas, website-builder was producing a completely independent deliverable: the marketing landing page.

The brief was specific: a single HTML file with inline CSS and JavaScript, matching the design system from 0diff.dev (the first ZeroSuite product's website), with full English/French internationalization and a light/dark theme toggle.

The agent delivered 597 lines covering 15 sections:

1. Fixed navigation with blur backdrop and hamburger menu for mobile
2. Hero section with animated terminal demo and stat row
3. Problem statements -- six pain point cards with developer quotes
4. Cost statistics bar (38% of dev time on scheduling issues, $4,100 average annual cost, 12h average debug time, 73% using unreliable free tools)
5. Feature showcase -- nine cards with embedded code snippets
6. Interactive terminal demo showing the dashboard UI
7. Use case tabs (Indie, Team, WordPress, DevOps) with specific scenarios
8. Pricing card with green glow effect and competitor comparison
9. "How It Works" three-step flow (Define, Connect, Relax)
10. Comparison table against cron-job.org, EasyCron, and Cronhub
11. API code examples with tabbed views (JavaScript, Python, cURL, Go)
12. Schedule examples showing NLP-to-cron mapping
13. ZeroSuite ecosystem cross-promotion (six product cards)
14. Final call-to-action
15. Footer

Every text string uses data-i18n attributes. A switchLanguage() function swaps all text content between English and French. The language preference persists in localStorage and auto-detects from the browser's navigator.language on first visit.

---

Wave 2: The Core Engine

Once the scaffold was in place, core-engine received the models, the database schema, and the error types as context. Its job: implement the seven service modules that contain all business logic.

This is where the real engineering decisions live. The services are where scheduling theory meets production reality.

The most important file is scheduler.rs -- 198 lines that form the beating heart of the entire product. We cover this in depth in Part 3 of this series, but the core loop is worth showing here:

pub struct Scheduler {
    redis: redis::Client,
    pool: PgPool,
    config: Arc<AppConfig>,
}

impl Scheduler {
    pub fn start(self: Arc<Self>) -> tokio::task::JoinHandle<()> {
        tokio::spawn(async move {
            tracing::info!("Scheduler started");
            loop {
                if let Err(e) = self.poll().await {
                    tracing::error!("Scheduler poll error: {e}");
                }
                tokio::time::sleep(std::time::Duration::from_secs(1)).await;
            }
        })
    }
}

A background tokio task that polls every second. Inside poll(), it queries a Redis sorted set for jobs whose score (next execution timestamp) is less than or equal to now, acquires a distributed lock to prevent double-execution, spawns an executor task, and re-schedules the job for its next run.
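Stripped of Redis and the distributed lock, that poll-and-reschedule cycle can be simulated with std alone. Here a BTreeSet of (timestamp, job) pairs stands in for the sorted set and a fixed interval stands in for the per-job cron computation (all names are illustrative, not the real scheduler.rs API):

```rust
use std::collections::BTreeSet;

// (score, job_id) pairs; BTreeSet keeps them ordered by score, analogous to
// a Redis sorted set ordered by next-run timestamp.
type Schedule = BTreeSet<(u64, &'static str)>;

// Pop every job due at or before `now` and re-insert it at its next run time.
// `interval` stands in for the real next-run cron computation.
fn poll(schedule: &mut Schedule, now: u64, interval: u64) -> Vec<&'static str> {
    let due: Vec<(u64, &'static str)> = schedule
        .iter()
        .take_while(|(ts, _)| *ts <= now)
        .cloned()
        .collect();
    let mut fired = Vec::new();
    for (ts, job) in due {
        schedule.remove(&(ts, job));
        // In 0cron this is where the lock is taken and the executor spawned.
        fired.push(job);
        schedule.insert((now + interval, job));
    }
    fired
}

fn main() {
    let mut schedule: Schedule = BTreeSet::new();
    schedule.insert((10, "backup"));
    schedule.insert((25, "report"));
    assert_eq!(poll(&mut schedule, 10, 60), vec!["backup"]);
    assert!(schedule.contains(&(70, "backup"))); // rescheduled for later
    assert!(schedule.contains(&(25, "report"))); // not yet due, untouched
}
```

The real loop replaces the BTreeSet scan with a ZRANGEBYSCORE query, which is what makes the same logic work across multiple server instances.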

The executor (executor.rs, 309 lines post-evolution) handles the actual HTTP request lifecycle: secret interpolation in URLs, headers, and bodies; configurable timeout; retry with exponential, linear, or fixed backoff; complete request/response logging to the executions table; and notification dispatch on success or failure.
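The backoff arithmetic is worth seeing in isolation. A sketch of the three strategies named above (the function and type names here are illustrative; the real executor.rs signatures may differ):

```rust
#[derive(Debug, Clone, Copy)]
pub enum Backoff {
    Fixed,
    Linear,
    Exponential,
}

// Delay in seconds before retry `attempt` (1-based), capped so exponential
// growth cannot produce runaway waits.
pub fn retry_delay(strategy: Backoff, base_secs: u64, attempt: u32) -> u64 {
    let delay = match strategy {
        Backoff::Fixed => base_secs,
        Backoff::Linear => base_secs * attempt as u64,
        Backoff::Exponential => base_secs * 2u64.saturating_pow(attempt - 1),
    };
    delay.min(3600) // cap at one hour (illustrative limit)
}

fn main() {
    assert_eq!(retry_delay(Backoff::Fixed, 10, 3), 10);
    assert_eq!(retry_delay(Backoff::Linear, 10, 3), 30);
    assert_eq!(retry_delay(Backoff::Exponential, 10, 3), 40);
}
```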

The NLP parser (nlp_parser.rs, 151 lines) translates natural language into cron expressions using approximately 20 regex patterns. No LLM inference, no external API calls. Deterministic, auditable, fast.
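The shape of that translation, reduced to two illustrative patterns and no regex crate (the real parser's ~20 regex patterns cover far more phrasings):

```rust
// Dependency-free sketch of phrase -> cron translation. These two patterns
// are illustrative only, not the real nlp_parser.rs rules.
pub fn parse_schedule(input: &str) -> Option<String> {
    let s = input.trim().to_lowercase();
    if let Some(rest) = s.strip_prefix("every ") {
        // "every N minutes" -> "*/N * * * *"
        if let Some(n) = rest.strip_suffix(" minutes") {
            if let Ok(n) = n.parse::<u32>() {
                if (1..60).contains(&n) {
                    return Some(format!("*/{n} * * * *"));
                }
            }
        }
        // "every day at HH:MM" -> "MM HH * * *"
        if let Some(time) = rest.strip_prefix("day at ") {
            let (h, m) = time.split_once(':')?;
            let (h, m) = (h.parse::<u32>().ok()?, m.parse::<u32>().ok()?);
            if h < 24 && m < 60 {
                return Some(format!("{m} {h} * * *"));
            }
        }
    }
    None
}

fn main() {
    assert_eq!(parse_schedule("every 5 minutes").as_deref(), Some("*/5 * * * *"));
    assert_eq!(parse_schedule("every day at 9:30").as_deref(), Some("30 9 * * *"));
    assert_eq!(parse_schedule("whenever"), None);
}
```

Because each pattern is a pure function of the input string, the same phrase always yields the same cron expression -- which is what makes the parser auditable.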

The full service inventory from this agent:

| Service | Lines | Responsibility |
| --- | --- | --- |
| scheduler.rs | 198 | Redis sorted set polling, distributed locks, next-run computation |
| executor.rs | 228 | HTTP execution, secret interpolation, retry logic, result logging |
| nlp_parser.rs | 151 | Natural language to cron expression translation (~20 patterns) |
| notifications.rs | 229 | Slack Block Kit, Discord embeds, Telegram Bot API, generic webhook, email |
| secrets.rs | 92 | AES-256-GCM encrypt/decrypt, ${secrets.KEY} interpolation |
| monitors.rs | 104 | Heartbeat CRUD, ping token generation, missed-ping detection |
| jobs.rs | 272 | Full CRUD, schedule resolution, pause/resume, manual trigger, stats |

Total from core-engine: 1,274 lines of business logic.
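The ${secrets.KEY} interpolation from secrets.rs reduces to a scan-and-replace once values are decrypted. A dependency-free sketch (decryption omitted; the function name and the leave-unknown-keys-intact policy are illustrative assumptions):

```rust
use std::collections::HashMap;

// Replace every ${secrets.KEY} placeholder with its value. The real
// secrets.rs decrypts AES-256-GCM ciphertexts first; this sketch assumes
// plaintext values are already in hand.
pub fn interpolate(template: &str, secrets: &HashMap<&str, &str>) -> String {
    let mut out = String::new();
    let mut rest = template;
    while let Some(start) = rest.find("${secrets.") {
        out.push_str(&rest[..start]);
        let after = &rest[start + "${secrets.".len()..];
        match after.find('}') {
            Some(end) => {
                let key = &after[..end];
                match secrets.get(key) {
                    Some(v) => out.push_str(v),
                    // Unknown keys are left intact so failures show up in logs.
                    None => out.push_str(&rest[start..start + "${secrets.".len() + end + 1]),
                }
                rest = &after[end + 1..];
            }
            None => {
                // Unterminated placeholder: emit the remainder verbatim.
                out.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    let secrets = HashMap::from([("API_TOKEN", "tok_123")]);
    let url = interpolate("https://api.example.com/run?key=${secrets.API_TOKEN}", &secrets);
    assert_eq!(url, "https://api.example.com/run?key=tok_123");
}
```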

---

Wave 2: The API Routes (Parallel)

While core-engine was implementing services, api-routes was building the HTTP layer -- handlers, request/response types, authentication middleware, and the router.

The router assembly shows the full API surface:

fn api_routes() -> Router<AppState> {
    Router::new()
        // Auth (no auth required)
        .route("/auth/register", post(handlers::auth::register))
        .route("/auth/login", post(handlers::auth::login))
        // Jobs
        .route("/jobs",
            post(handlers::jobs::create_job)
                .get(handlers::jobs::list_jobs))
        .route("/jobs/{id}",
            get(handlers::jobs::get_job)
                .put(handlers::jobs::update_job)
                .delete(handlers::jobs::delete_job))
        .route("/jobs/{id}/trigger", post(handlers::jobs::trigger_job))
        .route("/jobs/{id}/pause", post(handlers::jobs::pause_job))
        .route("/jobs/{id}/resume", post(handlers::jobs::resume_job))
        .route("/jobs/{id}/history", get(handlers::jobs::get_job_history))
        .route("/jobs/{id}/stats", get(handlers::jobs::get_job_stats))
        // Monitors
        .route("/monitors",
            post(handlers::monitors::create_monitor)
                .get(handlers::monitors::list_monitors))
        .route("/ping/{token}", get(handlers::monitors::ping))
        // Account
        .route("/account", get(handlers::account::get_account))
        .route("/account/usage", get(handlers::account::get_usage))
}

14 endpoints in the initial build. Standard REST conventions: POST for creation, GET for retrieval, PUT for updates, DELETE for removal. Nested routes for job-specific actions (trigger, pause, resume, history, stats).

The authentication middleware supports two mechanisms -- JWT tokens for dashboard users and API keys for programmatic access:

// From middleware/auth.rs
// JWT: Authorization: Bearer <jwt-token>
// API Key: Authorization: Bearer 0c_<api-key>

API keys are prefixed with 0c_ so the middleware can distinguish them from JWT tokens at parse time. The key itself is hashed with SHA-256 before storage; only the first 8 characters (the prefix) are stored in plaintext for identification in the dashboard.
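The branch the middleware takes can be sketched in a few lines. The 0c_ prefix rule is from the build; the enum and function names are illustrative, and real verification (JWT signature checks, SHA-256 key hashing and lookup) would follow the classification:

```rust
// Classify a bearer token by shape before verifying it.
#[derive(Debug, PartialEq)]
pub enum Credential<'a> {
    ApiKey(&'a str),
    Jwt(&'a str),
}

pub fn classify(authorization: &str) -> Option<Credential<'_>> {
    // Both mechanisms share the same header format: "Authorization: Bearer <token>".
    let token = authorization.strip_prefix("Bearer ")?;
    if token.starts_with("0c_") {
        Some(Credential::ApiKey(token))
    } else {
        Some(Credential::Jwt(token))
    }
}

fn main() {
    assert_eq!(classify("Bearer 0c_abc123"), Some(Credential::ApiKey("0c_abc123")));
    assert_eq!(classify("Bearer header.payload.sig"), Some(Credential::Jwt("header.payload.sig")));
    assert_eq!(classify("Basic dXNlcg=="), None);
}
```

Classifying by prefix keeps the hot path cheap: no database lookup or signature check happens until the token's type is known.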

The api-routes agent also produced types.rs -- 257 lines of request and response DTOs. Every handler has a typed input (validated with the validator crate) and a typed output (serialized with serde). No raw JSON manipulation in handlers; the type system enforces the API contract.
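The contract-enforcement idea is independent of the validator crate. A dependency-free sketch of one such DTO with a hand-written check (field names and limits here are illustrative, not the real types.rs definitions):

```rust
// Illustrative request DTO; the real version derives serde::Deserialize and
// declares its rules with validator attributes instead of a manual method.
#[derive(Debug)]
pub struct CreateJobRequest {
    pub name: String,
    pub url: String,
    pub schedule: String,
}

impl CreateJobRequest {
    pub fn validate(&self) -> Result<(), String> {
        if self.name.is_empty() || self.name.len() > 100 {
            return Err("name must be 1-100 characters".into());
        }
        if !self.url.starts_with("https://") && !self.url.starts_with("http://") {
            return Err("url must be http(s)".into());
        }
        if self.schedule.split_whitespace().count() != 5 {
            return Err("schedule must be a 5-field cron expression".into());
        }
        Ok(())
    }
}

fn main() {
    let req = CreateJobRequest {
        name: "nightly-backup".into(),
        url: "https://api.example.com/backup".into(),
        schedule: "0 3 * * *".into(),
    };
    assert!(req.validate().is_ok());

    let bad = CreateJobRequest { name: String::new(), ..req };
    assert!(bad.validate().is_err());
}
```

A handler that only ever receives a validated, typed struct cannot mis-read a field name or forget a bounds check: the contract lives in one place.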

---

The Merge: 0 Errors, 53 Warnings

When all four agents completed, the team lead merged their changes. This is the moment of truth in any parallel build -- do the pieces actually fit together?

cargo check

Result: 0 errors. 53 warnings.

Every warning was the same kind: "unused import" or "unused variable." This is expected and correct. The core-engine agent wrote service functions for the api-routes agent's handlers to call -- but since the agents worked in parallel, neither could verify the call sites during development. At merge time the compiler analyzed the assembled crate and flagged the imports and helpers that had been declared in anticipation of wiring that had not yet happened.

Zero errors means the type contracts were correct. The AppState struct matched between api/mod.rs and main.rs. The model types in services/ matched what handlers/ expected. The database query return types matched the model structs. The error type implemented IntoResponse correctly.

This is not accidental. It is a consequence of three things:

1. The scaffold was well-designed. The api-architect agent defined the shared types, the error pattern, and the state struct before anyone else started writing code. Every subsequent agent built against a stable interface.

2. Rust's type system catches integration errors at compile time. If core-engine defined a function signature that api-routes called incorrectly, cargo check would have caught it. No integration tests needed for basic structural correctness.

3. The agents worked in clearly delineated directories. core-engine owned services/. api-routes owned api/handlers/, api/types.rs, and api/middleware/. There were no file-level conflicts because the work was partitioned by directory, not by feature.

---

The Final Tally

Here is what the session produced:

| Deliverable | Files | Lines | Agent |
| --- | --- | --- | --- |
| Marketing website | 4 | 597 (HTML) | website-builder |
| Rust project scaffold | 14 | ~400 | api-architect |
| Business logic services | 8 | 1,274 | core-engine |
| HTTP handlers + routes | 7 | ~900 | api-routes |
| Shared (models, config, error, DB) | 12 | ~500 | api-architect |
| Total | 41 | ~3,671 | 4 agents |

Dependencies resolved:

  • Cargo.toml with 28 crate dependencies (Axum, Tokio, SQLx, Redis, reqwest, serde, jsonwebtoken, argon2, aes-gcm, cron, chrono, uuid, and more)
  • Multi-stage Dockerfile for production builds
  • Database migration with 8 tables, proper indexes, and foreign key constraints

What the session did not produce:

  • Tests (deliberately deferred -- we prefer to stabilize the API surface before writing integration tests)
  • Frontend dashboard (separate session, separate agents)
  • Deployment configuration (handled in a later session with Docker Compose)
  • Billing integration (added in a follow-up session on March 11)

---

Why This Approach Works

The four-agent parallel build is not the right approach for everything. It works specifically when:

1. The deliverables are clearly separable. A marketing website and a Rust API have zero code overlap. Service logic and HTTP handlers live in different directories and communicate through function signatures.

2. The dependency graph is shallow. Two waves, not five. If agent C depends on agent B which depends on agent A, you have a serial pipeline disguised as a parallel system. Our graph had one dependency edge: Wave 2 depended on the scaffold from Wave 1.

3. The interface contracts are established upfront. The scaffold agent defined the types before the implementation agents started. This is the parallel-build equivalent of writing the API spec before writing the code.

4. The language has a strong type system. Rust's compiler is, in effect, a fifth agent -- an integration tester that runs after the merge and catches every structural mismatch. In a dynamically typed language, parallel builds carry significantly more integration risk.

The total wall-clock time for this session was a single afternoon. A single-agent sequential build would have taken longer, not because of raw throughput, but because of context switching. When one agent writes the scheduler, it holds the entire scheduling domain in context. When a single agent writes the scheduler and then pivots to writing the marketing website's French translations, it pays a context-switch penalty on every transition.

Parallelism is not just about speed. It is about keeping each agent deep in a single domain.

---

What Happened Next

This session produced the v0.1 skeleton. Over the following weeks, we added:

  • Admin system (March 11) -- user management, global job listing, platform statistics
  • Billing integration (March 11) -- Stripe Checkout, customer portal, trial tracking, plan enforcement
  • Google OAuth -- social login alongside email/password
  • Dashboard API -- summary endpoint aggregating job counts, execution rates, and failure metrics
  • Analytics endpoint -- time-series execution data for dashboard charts
  • Notification management API -- per-user, per-channel notification configuration
  • API key management -- create, list, and revoke API keys with granular permissions
  • Secrets management -- encrypted secret storage and interpolation in job configs
  • Contact form API -- public endpoint for the marketing site
  • Transaction history -- billing event log

The codebase grew from 2,852 lines of Rust to significantly more, but the architecture established in this first session -- the AppState pattern, the service/handler separation, the Redis-backed scheduler, the JSONB job config -- remained intact throughout. The scaffold held.

That is the point of getting the first session right.

---

This is Part 2 of a three-part series on building 0cron.dev. Next up: Building a Cron Scheduler Engine in Rust -- a deep dive into Redis sorted sets, distributed locking, and the tick-based polling architecture that makes 0cron's scheduler tick (literally).
