
From Abidjan to Production: Launching 0cron.dev

The full story of building 0cron.dev: 4 sessions, 4 agents, 3,500+ lines of Rust, a SvelteKit dashboard, Stripe billing, and an admin system -- all from Abidjan with zero human engineers.

Thales & Claude | March 25, 2026 | 15 min read
Tags: 0cron, launch, retrospective, abidjan, ai-cto, build-in-public, africa-tech

This is the final article in a 10-part series about building 0cron.dev, a $1.99/month cron job service built entirely by a CEO in Abidjan and an AI CTO. No human engineers. No office. No venture capital. Just Juste and Claude, working through Claude Code sessions to build a production SaaS from scratch.

Over the previous nine articles, we covered individual features in isolation: the scheduler engine, natural language parsing, multi-channel notifications, Stripe billing, the SvelteKit dashboard, heartbeat monitoring, and the security architecture. This article steps back and tells the full story -- the four sessions that took 0cron from nothing to production-ready, what the final system looks like, and what we learned about the CEO + AI CTO model along the way.

Session 1: Foundation (February 14, 2026)

The first session was the most ambitious. We used a four-agent parallel build strategy: four instances of Claude working simultaneously on different parts of the system, coordinated by a single architect prompt. This approach was covered in detail in article 2, but the summary is worth repeating here.

Agent 1 built the Rust API server: project scaffolding, Axum web framework setup, database connection pooling, the job CRUD endpoints, and the initial scheduler loop.

Agent 2 built the database layer: 8 PostgreSQL tables (users, teams, jobs, executions, secrets, monitors, notifications, api_keys), all migrations, and the SQLx query layer.

Agent 3 built the notification system: email via SMTP, Slack webhooks, Discord webhooks, Telegram Bot API, and generic webhook delivery.

Agent 4 built the marketing website: static HTML with responsive design, pricing tables, feature sections, and the landing page that would later be preserved when we converted to SvelteKit.

By the end of Session 1, we had:

  • 14 REST API endpoints
  • 8 database tables with proper foreign keys and indexes
  • 2,852 lines of Rust
  • 41 files
  • A working scheduler that could parse cron expressions, execute HTTP jobs, and record results
  • A static marketing website
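The heart of that working scheduler is matching a cron expression against the current time. As an illustrative sketch (not 0cron's actual code), a minimal five-field matcher supporting `*`, single values, ranges, and steps looks like this:

```rust
// Illustrative cron-field matcher: supports "*", single values,
// ranges ("1-5"), and step expressions ("*/5").
fn field_matches(field: &str, value: u32) -> bool {
    if field == "*" {
        return true;
    }
    if let Some(step) = field.strip_prefix("*/") {
        // "*/5" matches any value divisible by the step.
        return step.parse::<u32>().map(|s| s > 0 && value % s == 0).unwrap_or(false);
    }
    if let Some((lo, hi)) = field.split_once('-') {
        if let (Ok(lo), Ok(hi)) = (lo.parse::<u32>(), hi.parse::<u32>()) {
            return lo <= value && value <= hi;
        }
    }
    field.parse::<u32>().map(|v| v == value).unwrap_or(false)
}

/// Check a five-field expression "min hour dom month dow"
/// against a (minute, hour, day-of-month, month, day-of-week) tuple.
fn cron_matches(expr: &str, (min, hour, dom, month, dow): (u32, u32, u32, u32, u32)) -> bool {
    let fields: Vec<&str> = expr.split_whitespace().collect();
    fields.len() == 5
        && field_matches(fields[0], min)
        && field_matches(fields[1], hour)
        && field_matches(fields[2], dom)
        && field_matches(fields[3], month)
        && field_matches(fields[4], dow)
}
```

The real engine (covered in article 3) also handles comma-separated lists and computes the next run time, but the field-by-field matching principle is the same.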

The foundation was solid, but it was not shippable. There was no authentication, no payment processing, no dashboard UI, and the notification channels were wired but not tested. Session 1 proved the architecture. Sessions 2 and 3 would make it real.

Session 2: The Polish Sprint (March 11, 2026)

Session 2 was a single, long session divided into five phases. Each phase addressed a specific gap between "working prototype" and "shippable product."

Phase 1: Icons

The initial codebase used emoji for all visual indicators -- checkmarks, warning signs, status badges. This is common in prototypes and completely unacceptable in production. Emoji render differently across operating systems, browsers, and devices. A green checkmark on macOS looks nothing like a green checkmark on Windows, and on some Android devices it might not render at all.

We replaced every emoji with Lucide SVG icons. Twenty-three inline SVGs, each carefully sized and colored to match the context. Status indicators became colored dots with consistent rendering. Navigation icons became Lucide components imported from lucide-svelte. The result was pixel-perfect consistency across every platform.

This phase took the least time but had an outsized impact on perceived quality. Users form opinions about software within seconds of seeing it. Inconsistent emoji make a product look amateur. Consistent SVG icons make it look professional.

Phase 2: Google Sign-In

Authentication via Google required both frontend and backend work. On the backend, we implemented the full JWKS verification flow described in article 9: decode the JWT header, fetch Google's public keys, validate the RS256 signature, check the claims, and upsert the user.

On the frontend, we integrated Google's Sign-In button component and handled the OAuth redirect flow. The token from Google is sent to our backend, which verifies it and returns our own JWT. From that point forward, the user is authenticated with 0cron's token, not Google's.

The account linking logic was particularly important. If a user signed up with email first and later clicked "Sign in with Google" using the same email, we needed to link the accounts rather than create a duplicate. The upsert query handles this: find by email, update the Google ID if it is a new link, or create a new user if the email is not in the system.
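The linking behavior can be sketched with an in-memory map standing in for the users table (the real version is a single SQL upsert; the struct and function names here are illustrative):

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
struct User {
    email: String,
    google_id: Option<String>,
}

/// Illustrative account-linking logic: find by email and attach the
/// Google ID if not already linked, or create a fresh user.
fn upsert_google_user(users: &mut HashMap<String, User>, email: &str, google_id: &str) {
    users
        .entry(email.to_string())
        .and_modify(|u| {
            // Existing email-based account: link it rather than duplicate it.
            u.google_id.get_or_insert_with(|| google_id.to_string());
        })
        .or_insert_with(|| User {
            email: email.to_string(),
            google_id: Some(google_id.to_string()),
        });
}
```

Note that an already-linked Google ID is never overwritten, which guards against a second Google account silently hijacking an existing user.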

Phase 3: Stripe Payment Gateway

Billing integration (covered in depth in article 6) required three components: Stripe Checkout for subscription creation, Stripe Customer Portal for self-service management, and webhooks for event processing.

The webhook handler processes subscription lifecycle events: checkout.session.completed activates the subscription, customer.subscription.updated handles plan changes, customer.subscription.deleted handles cancellations, and invoice.payment_failed triggers dunning notifications.
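The dispatch itself is a straightforward match on the event type. A sketch (the enum and its variant names are ours for illustration; the real handler also parses the event payload):

```rust
/// Illustrative mapping of Stripe event types to subscription actions.
#[derive(Debug, PartialEq)]
enum BillingAction {
    Activate,
    SyncPlan,
    Cancel,
    SendDunningEmail,
    Ignore,
}

fn route_stripe_event(event_type: &str) -> BillingAction {
    match event_type {
        "checkout.session.completed" => BillingAction::Activate,
        "customer.subscription.updated" => BillingAction::SyncPlan,
        "customer.subscription.deleted" => BillingAction::Cancel,
        "invoice.payment_failed" => BillingAction::SendDunningEmail,
        // Stripe sends many other event types; acknowledge and skip them.
        _ => BillingAction::Ignore,
    }
}
```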

Every webhook is verified with HMAC-SHA256 signatures before processing, as described in article 9. This is not optional -- without verification, anyone who discovers the webhook URL could send fake payment confirmations.
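One detail worth highlighting: the computed signature must be compared to the received one in constant time, or a timing attack can recover it byte by byte. The real code delegates HMAC-SHA256 to a vetted crypto crate; this sketch shows only the comparison principle:

```rust
/// Constant-time byte comparison, the kind used when checking an HMAC
/// signature (production code should use a vetted crate such as `subtle`).
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // accumulate differences without an early exit
    }
    diff == 0
}
```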

Phase 4: SvelteKit Dashboard

This was the largest phase and the subject of article 7: converting the static HTML site into a full SvelteKit 2 application with 13 route pages, a reactive auth store, an API client, and a job creation wizard. The route group architecture ((auth) and (app)) provided clean separation between public, login, and authenticated pages.

Phase 5: Wiring

The final phase connected the remaining loose ends. The scheduler was activated with proper job pickup, execution, and result recording. SMTP email delivery was configured and tested. Footer links on the marketing page were fixed. The dashboard's stats cards were connected to real API endpoints instead of mock data.

Phase 5 was unglamorous but essential. A product is not the sum of its features; it is the integration between them. A scheduler that executes jobs but does not record results is broken. A notification system that sends emails but cannot reach the SMTP server is decoration. Wiring is where prototypes become products.

Session 3: Admin System (March 11, 2026)

Session 3 added the administrative layer. A SaaS product needs operational visibility -- the ability to see all users, all jobs, system health, and to intervene when something goes wrong.

We built an AdminUser extractor for the Axum middleware stack. This is a Rust type that, when used as a handler parameter, automatically verifies that the authenticated user has admin privileges. If they do not, the request is rejected with a 403 before the handler runs.
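Stripped of the axum `FromRequestParts` machinery, the core check is tiny. A simplified stand-in (the struct and function here are illustrative, not the real extractor):

```rust
/// Simplified stand-in for the AdminUser extractor's core check.
struct AuthedUser {
    id: u64,
    is_admin: bool,
}

/// Returns the admin's id for the handler, or a 403 status code.
fn require_admin(user: &AuthedUser) -> Result<u64, u16> {
    if user.is_admin {
        Ok(user.id) // handler runs with the admin's identity
    } else {
        Err(403) // rejected before the handler body executes
    }
}
```

Encoding the privilege check in the extractor's type means a handler cannot forget it: if the signature asks for an admin, only an admin ever reaches the body.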

Five admin endpoints were added: list all users (with pagination), list all jobs (across all teams), view system statistics (total users, total jobs, total executions, success rates), impersonate a user (for debugging), and force-execute a job (for testing).

On the frontend, admin dashboard pages were added inside a new (app)/admin/ route group. These pages are protected by both the standard auth guard and an additional admin check in the sidebar (only users with is_admin see the admin navigation section, as shown in article 7).

Session 4: Trial and Billing Lifecycle (March 11, 2026)

The final session added the 60-day free trial and billing lifecycle automation. This was a business-critical feature: users need to try the product before paying, but the trial needs a clear path to conversion.

The implementation includes:

  • A 60-day trial period that starts at account creation
  • Trial reminder emails at 10 days, 3 days, and 1 day before expiry
  • Automatic feature gating when the trial expires without a subscription
  • Marketing API documentation for the public website

The reminder cadence (10/3/1 days) was chosen based on SaaS conversion research. The 10-day reminder is informational ("your trial is ending soon"). The 3-day reminder is motivational ("here is what you will lose access to"). The 1-day reminder is urgent ("last chance to subscribe"). Each email includes a direct link to the billing page with the Stripe Checkout flow.
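The cadence reduces to a simple lookup run by the trial reminder task. An illustrative sketch (names are ours; the real loop computes days remaining from the trial start date in PostgreSQL):

```rust
/// Which trial reminder (if any) to send for a given number of whole
/// days remaining. Thresholds mirror the 10/3/1 cadence.
fn reminder_for(days_left: i64) -> Option<&'static str> {
    match days_left {
        10 => Some("informational"),
        3 => Some("motivational"),
        1 => Some("urgent"),
        _ => None,
    }
}
```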

The Final System

After four sessions, here is what 0cron consists of.

Backend

The Rust backend is the core of the system. Here is how the server bootstraps.

```rust
#[tokio::main]
async fn main() -> Result<()> {
    dotenvy::dotenv().ok();
    tracing_subscriber::init();

    let db = PgPoolOptions::new()
        .max_connections(20)
        .connect(&env::var("DATABASE_URL")?)
        .await?;

    sqlx::migrate!().run(&db).await?;

    let state = AppState {
        db: db.clone(),
        encryption_key: load_encryption_key()?,
        jwt_secret: env::var("JWT_SECRET")?,
        google_client_id: env::var("GOOGLE_CLIENT_ID")?,
        stripe_secret: env::var("STRIPE_SECRET_KEY")?,
        stripe_webhook_secret: env::var("STRIPE_WEBHOOK_SECRET")?,
    };

    let app = Router::new()
        // Public
        .route("/health", get(health_check))
        .route("/v1/auth/register", post(register))
        .route("/v1/auth/login", post(login))
        .route("/v1/auth/google", post(google_login))
        .route("/v1/ping/{token}", get(ping_monitor))
        // Authenticated
        .route("/v1/jobs", get(list_jobs).post(create_job))
        .route("/v1/jobs/{id}", get(get_job).put(update_job).delete(delete_job))
        .route("/v1/jobs/{id}/trigger", post(trigger_job))
        .route("/v1/jobs/{id}/pause", post(pause_job))
        .route("/v1/monitors", get(list_monitors).post(create_monitor_handler))
        .route("/v1/secrets", get(list_secrets).post(create_secret))
        .route("/v1/secrets/{key}", delete(delete_secret))
        .route("/v1/api-keys", get(list_api_keys).post(create_api_key))
        .route("/v1/billing/checkout", post(create_checkout))
        .route("/v1/billing/portal", post(create_portal))
        .route("/v1/dashboard/stats", get(dashboard_stats))
        // Admin
        .route("/v1/admin/users", get(admin_list_users))
        .route("/v1/admin/jobs", get(admin_list_jobs))
        .route("/v1/admin/stats", get(admin_stats))
        // Webhooks
        .route("/webhooks/stripe", post(stripe_webhook))
        .with_state(state);

    // Start background scheduler
    tokio::spawn(scheduler_loop(db.clone()));
    // Start monitor checker
    tokio::spawn(monitor_check_loop(db.clone()));
    // Start trial reminder checker
    tokio::spawn(trial_reminder_loop(db.clone()));

    let addr = SocketAddr::from(([0, 0, 0, 0], 8000));
    tracing::info!("0cron server listening on {addr}");
    axum::serve(TcpListener::bind(addr).await?, app).await?;
    Ok(())
}
```

This single function reveals the architecture: a PostgreSQL connection pool, automatic migrations, a shared application state holding all secrets and configuration, a router with layered endpoints (public, authenticated, admin, webhooks), and three background tasks (scheduler, monitor checker, trial reminders) spawned as Tokio tasks.
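On each tick, the scheduler's first job is deciding which jobs are due. A self-contained sketch of that selection (field names are assumptions; the real loop queries PostgreSQL through SQLx):

```rust
/// Illustrative due-job selection for one scheduler tick.
struct Job {
    id: u64,
    next_run_at: u64, // unix seconds
    paused: bool,
}

/// Return the ids of all unpaused jobs whose scheduled time has arrived.
fn due_jobs(jobs: &[Job], now: u64) -> Vec<u64> {
    jobs.iter()
        .filter(|j| !j.paused && j.next_run_at <= now)
        .map(|j| j.id)
        .collect()
}
```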

The numbers:

  • 3,500+ lines of Rust across 35+ files
  • 18+ API endpoints (14 core + admin + billing + dashboard)
  • 8 database tables with 5 migrations
  • 5 notification channels (email, Slack, Discord, Telegram, webhook)
  • NLP schedule parser with ~20 patterns
  • AES-256-GCM encrypted secrets with interpolation
  • Full Stripe billing lifecycle
  • 60-day free trial with automated reminders
  • Admin system with user/job visibility

Frontend

The SvelteKit frontend provides the user interface:

  • 13+ route pages across two layout groups
  • Svelte 5 runes-based auth store with localStorage persistence
  • API client with Bearer token injection and 401 auto-redirect
  • Dark sidebar with Lucide icons and conditional admin section
  • Job creation wizard with 18 schedule presets and cPanel-style cron builder
  • Stripe Checkout and Customer Portal integration
  • Encrypted secrets management UI
  • Execution history with status indicators

Database

teams
  |-- users (team_id FK)
  |-- jobs (team_id FK)
  |     |-- executions (job_id FK)
  |-- monitors (team_id FK)
  |-- secrets (team_id FK)
  |-- api_keys (user_id FK)
  |-- subscriptions (team_id FK)
  |-- notification_configs (team_id FK)

The team is the organizational unit. Users belong to teams. Jobs, monitors, secrets, and billing are all team-scoped. This means future multi-user teams require no schema changes -- just adding members to an existing team.

What Makes This Different

There are other cron job services. EasyCron, cron-job.org, Cronhub, Healthchecks.io. 0cron is not the first to solve this problem. What makes it different is not a single feature but the combination of decisions.

Price. $1.99/month. Most competitors charge $10-20/month for comparable features. We can do this because our engineering cost is zero -- no salaries, no offices, no equity dilution. The marginal cost per user is compute and database storage, which at our scale is measured in cents.

Natural language scheduling. No other cron service lets you type "every weekday at 9am" and get a valid cron expression. This removes the barrier that keeps non-ops developers from using cron services.
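A few of the pattern rules behind this can be sketched as a phrase-to-cron lookup. This is an illustrative subset only; the real parser (article 4) has around 20 patterns and regex-based fallbacks:

```rust
/// Illustrative subset of natural-language schedule patterns.
fn parse_schedule(input: &str) -> Option<&'static str> {
    match input.trim().to_lowercase().as_str() {
        "every minute" => Some("* * * * *"),
        "every hour" => Some("0 * * * *"),
        "every day at midnight" => Some("0 0 * * *"),
        "every weekday at 9am" => Some("0 9 * * 1-5"),
        "every monday at noon" => Some("0 12 * * 1"),
        // The real parser falls back to regex patterns for times,
        // intervals, and day lists before giving up.
        _ => None,
    }
}
```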

Encrypted secrets with interpolation. Most competitors require you to paste API keys directly into job configurations. 0cron stores them encrypted and injects them at execution time. Your credentials are never visible in the UI after initial entry.
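The execution-time injection amounts to placeholder substitution over the job's URL, headers, and body. A sketch, assuming a `{{NAME}}` placeholder syntax for illustration (secrets are decrypted just before this step):

```rust
use std::collections::HashMap;

/// Illustrative {{NAME}} interpolation at execution time.
/// The placeholder syntax here is an assumption for the sketch.
fn interpolate(template: &str, secrets: &HashMap<String, String>) -> String {
    let mut out = template.to_string();
    for (key, value) in secrets {
        // "{{{{{}}}}}" renders as "{{KEY}}" after brace escaping.
        out = out.replace(&format!("{{{{{}}}}}", key), value);
    }
    out
}
```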

60-day free trial. Not 14 days. Not 30 days. Sixty days. Long enough to build real workflows, integrate with real systems, and make 0cron load-bearing in your infrastructure before you pay a cent. We can afford this because, again, our operating costs are a fraction of a traditional company.

Built in public. This 10-article series is the transparent story of how 0cron was built. Every architectural decision, every line of code, every trade-off is documented. No other cron service has published its source code explanations at this level of detail.

The CEO + AI CTO Model

0cron was built by two entities: Juste Thales Gnimavo, the CEO, based in Abidjan, and Claude, the AI CTO, operating through Claude Code. No human engineers were hired, contracted, or consulted.

This model works because of a specific division of labor.

Juste provides direction, decisions, and domain knowledge. Which features to build, in which order, at what price point. What the user experience should feel like. Where to cut scope and where to invest depth. These are business and product decisions that require understanding of the market, the audience, and the constraints.

Claude provides implementation, architecture, and technical execution. Translating product requirements into code. Choosing the right data structures, algorithms, and libraries. Writing the Rust, the TypeScript, the SQL, the HTML. Handling the thousands of small decisions (error handling, edge cases, naming conventions) that add up to a working system.

The model does not work because AI can replace engineers. It works because the scope of decisions that require human judgment is smaller than most companies assume. A CEO who understands their product deeply can direct an AI CTO effectively, and the AI can produce production-grade code at a speed that would require a team of 3-5 human engineers.

Four sessions. Three weeks of calendar time. Zero human engineering hours (beyond Juste's product direction and Claude Code sessions). The result is a production-ready SaaS with features that compete with services built by funded teams over months.

Lessons Learned

Parallel agents work, but coordination is the bottleneck.

Session 1's four-agent build was effective because the architect prompt clearly defined interfaces between agents. Agent 1 (API) and Agent 2 (database) needed to agree on table schemas. Agent 3 (notifications) needed to match the event types from Agent 1. Agent 4 (website) was independent. When interfaces are clear, parallelism scales. When they are ambiguous, agents produce incompatible code.

Polish is not optional.

Session 2's five phases were all polish. Replacing emoji with icons, adding authentication, integrating payments, building the dashboard, wiring everything together. None of these phases added new core functionality -- the scheduler, the NLP parser, the notification system were all built in Session 1. But without Session 2, the product was unusable. The gap between "technically works" and "someone would pay for this" is almost entirely polish.

Scope control is the superpower.

Heartbeat monitoring is 105 lines. The secrets module is 93 lines. The entire backend is 3,500 lines. For context, a typical enterprise cron service backend would be 30,000-50,000 lines. We achieved a 10x reduction in code volume by ruthlessly scoping every feature to its minimal viable form. No ping history, no secret versioning, no adaptive grace periods, no mutual TLS. Each of these omissions saved hundreds of lines of code and days of development time, and none of them prevents the product from delivering value.

Rust was the right choice.

Rust's type system caught dozens of bugs at compile time that would have been runtime errors in Python or JavaScript. The borrow checker enforced memory safety without a garbage collector. The async runtime (Tokio) handled concurrent job execution efficiently. And the compiled binary is a single executable with no runtime dependencies, which simplifies deployment enormously. For a background processing system that runs 24/7 and handles user credentials, the reliability guarantees of Rust justify its steeper learning curve.

SvelteKit was the right choice for the frontend.

File-based routing eliminated configuration boilerplate. Svelte 5 runes made state management natural. TailwindCSS provided a consistent design system without custom CSS. The entire dashboard was built in a single session phase, which would have been impossible with a framework that required more setup, more configuration, and more boilerplate.

What Is Next

0cron is built. Deploying to production requires a checklist of operational tasks:

  • Provision a VPS and configure the Rust binary as a systemd service
  • Set up PostgreSQL 17 with automated backups
  • Configure DNS for 0cron.dev with SSL via Let's Encrypt
  • Create the Stripe account and configure webhook endpoints
  • Set up Google OAuth credentials for the production domain
  • Deploy the SvelteKit frontend behind a reverse proxy
  • Set up monitoring for the monitoring service (yes, we will use 0cron to monitor itself)
  • Write API documentation for the public-facing endpoints
  • Create a status page at status.0cron.dev

Each of these is a known, well-documented task. There are no unsolved technical problems. The product is complete; the deployment is operations.

The Series in Retrospect

Ten articles. Thousands of words. Dozens of code snippets. The full story of building a SaaS product from Abidjan with zero human engineers.

If you have read this far, you know more about 0cron's internals than most users of competing products know about theirs. That transparency is intentional. We believe that showing your work builds trust, demonstrates competence, and creates a record that future builders can learn from.

0cron is one of six products being built by ZeroSuite -- all using the same CEO + AI CTO model. The others are Sh0.app (URL shortener), D-blo.ai (AI education platform for African students), and three more in development. Each product follows the same pattern: identify a clear need, scope ruthlessly, build with Claude Code, and ship.

The world's first documented CEO + AI CTO partnership is just getting started. And the cron jobs will run on time.

Visit 0cron.dev to start your 60-day free trial.

---

This is article 10 of 10 in the "How We Built 0cron" series.

1. Why the World Needs a $2 Cron Job Service
2. 4 Agents, 1 Product: Building 0cron in a Single Session
3. Building a Cron Scheduler Engine in Rust
4. "Every Day at 9am": Natural Language Schedule Parsing
5. Multi-Channel Notifications: Email, Slack, Discord, Telegram, Webhooks
6. Stripe Integration for a $1.99/month SaaS
7. From Static HTML to SvelteKit Dashboard Overnight
8. Heartbeat Monitoring: When Your Job Should Ping You
9. Encrypted Secrets, API Keys, and Security
10. From Abidjan to Production: Launching 0cron.dev (you are here)
