A cron job that fails silently is worse than a cron job that does not exist.
If your nightly database backup stopped running three days ago and nobody noticed, you do not have a backup system. You have a liability. The entire value proposition of a managed cron service collapses if failures go undetected, which means notifications are not a feature -- they are the product.
When we designed 0cron.dev, we asked a simple question: where do developers actually receive alerts? The answer, in 2026, is everywhere. Some teams live in Slack. Others use Discord. Solo developers check Telegram on their phones. Enterprise teams have PagerDuty or Opsgenie hooked up to webhook endpoints. And email, despite years of predictions about its death, remains the universal fallback.
So we built five notification channels. Not because five is a nice round number, but because those are the five places where developers already pay attention. And we made every channel independently configurable with per-event filters, because receiving a Slack message for every successful health check is the fastest way to make someone disable notifications entirely.
This article covers the 296-line notification service, the channel dispatch system, and the design decisions behind each integration.
## The NotificationService: One Struct, Five Channels
The notification system is centred on a single struct that holds the SMTP configuration and exposes methods for each channel:
```rust
pub struct NotificationService {
    smtp_host: String,
    smtp_port: u16,
    smtp_username: String,
    smtp_password: String,
    smtp_from: String,
}

impl NotificationService {
    pub fn from_env() -> Self {
        Self {
            smtp_host: std::env::var("SMTP_HOST")
                .unwrap_or_else(|_| "smtp.gmail.com".to_string()),
            smtp_port: std::env::var("SMTP_PORT")
                .unwrap_or_else(|_| "587".to_string())
                .parse()
                .unwrap_or(587),
            smtp_username: std::env::var("SMTP_USERNAME").unwrap_or_default(),
            smtp_password: std::env::var("SMTP_PASSWORD").unwrap_or_default(),
            smtp_from: std::env::var("SMTP_FROM")
                .unwrap_or_else(|_| "[email protected]".to_string()),
        }
    }
}
```
The design is deliberately simple. SMTP configuration comes from environment variables with sensible defaults. We do not use a configuration file or a database table for SMTP settings because they are deployment-level concerns, not user-level concerns. Every 0cron instance uses one SMTP provider, and that provider is configured once at deployment.
The from_env() constructor uses unwrap_or_default() for credentials, which means the service initialises even if SMTP is not configured. This is intentional: in development and testing, we want the notification service to exist (so the rest of the system compiles and runs) but gracefully degrade to logging when SMTP credentials are absent.
## The Dispatch Loop: How Notifications Get Sent
When a job execution completes -- whether it succeeds or fails -- the executor calls send_job_notifications(). This function is the bridge between the execution engine and the notification service. It loads the user's notification configuration from the database and iterates over all five channels:
```rust
pub async fn send_job_notifications(
    pool: &PgPool,
    notification_service: &NotificationService,
    user_id: i64,
    job_name: &str,
    execution: &Execution,
) -> Result<()> {
    let config = sqlx::query_as::<_, NotificationConfig>(
        "SELECT notification_config FROM users WHERE id = $1"
    )
    .bind(user_id)
    .fetch_optional(pool)
    .await?;

    let config = match config {
        Some(c) => c,
        None => return Ok(()), // no config = no notifications
    };

    let is_success = execution.status == "success";
    let channels = [
        ("email", &config.email),
        ("slack", &config.slack),
        ("discord", &config.discord),
        ("telegram", &config.telegram),
        ("webhook", &config.webhook),
    ];

    for (channel_name, channel_config) in &channels {
        if !channel_config.enabled {
            continue;
        }
        if is_success && !channel_config.on_success {
            continue;
        }
        if !is_success && !channel_config.on_failure {
            continue;
        }

        let body = format_notification_body(job_name, execution);

        let result = match *channel_name {
            "email" => notification_service.send_email(&channel_config.target, &body).await,
            "slack" => notification_service.send_slack(&channel_config.target, job_name, execution).await,
            "discord" => notification_service.send_discord(&channel_config.target, job_name, execution).await,
            "telegram" => notification_service.send_telegram(&channel_config.target, &body).await,
            "webhook" => notification_service.send_webhook(&channel_config.target, job_name, execution).await,
            _ => Ok(()),
        };

        // Log and continue: a broken channel must not block the others.
        if let Err(e) = result {
            tracing::warn!("Failed to send {channel_name} notification: {e}");
        }
    }

    Ok(())
}
```
Three things stand out in this dispatch loop.
Per-channel filtering. Each channel has its own enabled, on_success, and on_failure flags. A user might want email notifications only on failure (the "wake me up if something breaks" pattern), Slack notifications on both success and failure (the "team visibility" pattern), and webhook on everything (the "feed my monitoring dashboard" pattern). The two-line filter check -- if is_success && !channel_config.on_success -- makes this trivially configurable without complex conditional logic.
Graceful failure. The dispatch loop does not short-circuit on errors. If the Slack webhook is misconfigured but the email SMTP works fine, the user still gets their email notification. Each send_* call returns a Result; the loop logs any failure and moves on to the next channel. A broken Slack URL never prevents an email from being sent.
No async fan-out. We send notifications sequentially, not in parallel. This was a deliberate choice. Sending five HTTP requests in parallel would save a few hundred milliseconds, but it would also mean five concurrent connections per job execution, which under load could overwhelm external APIs. Sequential dispatch with short timeouts (5 seconds per channel) is simpler, more predictable, and still completes within one second for all five channels in the normal case.
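The per-channel filter is small enough to read as a truth function. Here is a minimal standalone sketch of that logic; `should_notify` is an illustrative helper, not a function from the actual codebase:

```rust
/// Illustrative helper mirroring the dispatch loop's filter: a channel
/// fires only when it is enabled and its per-event flag matches the
/// execution outcome.
fn should_notify(enabled: bool, on_success: bool, on_failure: bool, is_success: bool) -> bool {
    if !enabled {
        return false;
    }
    if is_success { on_success } else { on_failure }
}

fn main() {
    // "Wake me up if something breaks": failures only.
    assert!(!should_notify(true, false, true, true)); // success -> silent
    assert!(should_notify(true, false, true, false)); // failure -> alert
    // A disabled channel never fires, regardless of flags.
    assert!(!should_notify(false, true, true, false));
    println!("ok");
}
```

Because the filter is just two early-continue checks per channel, adding a sixth channel later means adding one entry to the array, not new conditional logic.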
## Slack: Block Kit for Rich Messages
Slack is the most popular notification channel among our users, and it deserved a richer format than plain text. We use Slack's Block Kit format, which structures messages into visual blocks with headers, sections, and context elements:
```rust
async fn send_slack(&self, webhook_url: &str, job_name: &str, execution: &Execution) -> Result<()> {
    let status_icon = if execution.status == "success" { "[OK]" } else { "[FAILED]" };
    let color = if execution.status == "success" { "#36a64f" } else { "#dc3545" };

    let payload = serde_json::json!({
        "attachments": [{
            "color": color,
            "blocks": [
                {
                    "type": "header",
                    "text": {
                        "type": "plain_text",
                        "text": format!("{status_icon} Job: {job_name}")
                    }
                },
                {
                    "type": "section",
                    "fields": [
                        { "type": "mrkdwn", "text": format!("Status:\n{}", execution.status) },
                        { "type": "mrkdwn", "text": format!("Duration:\n{}ms", execution.duration_ms.unwrap_or(0)) },
                        { "type": "mrkdwn", "text": format!("HTTP Status:\n{}", execution.http_status.unwrap_or(0)) },
                        { "type": "mrkdwn", "text": format!("Executed At:\n{}", execution.started_at) }
                    ]
                }
            ]
        }]
    });

    reqwest::Client::new()
        .post(webhook_url)
        .json(&payload)
        .timeout(std::time::Duration::from_secs(5))
        .send()
        .await?;

    Ok(())
}
```
We chose Block Kit over plain text for a specific reason: scannability. A Slack channel that receives 50 cron notifications per day is unusable if each notification is a wall of unformatted text. Block Kit gives us colour-coded status (green bar for success, red for failure), a bold header with the job name, and structured fields that a developer can scan in under two seconds.
The attachments wrapper with a color field is what produces the vertical colour bar on the left side of the message. This is technically a legacy Slack feature -- the modern approach is blocks at the top level -- but the colour bar is so useful for visual triage that we use the attachment wrapper specifically to get it.
The 5-second timeout is important. Slack's webhook API occasionally takes 2-3 seconds to respond under load. Without a timeout, a slow Slack response would block the notification loop and delay subsequent channel deliveries. With the timeout, we accept the occasional dropped Slack notification in exchange for guaranteed timely delivery on other channels.
## Telegram: The Mobile-First Channel
Telegram is particularly popular among solo developers and small teams, especially in our target market of developers in Africa and Southeast Asia, where Telegram often serves as the primary messaging platform. The integration uses the Bot API:
```rust
async fn send_telegram(&self, config: &str, body: &str) -> Result<()> {
    // config format: "bot_token:chat_id". Bot tokens themselves contain a
    // colon, so split on the LAST colon to separate token from chat ID.
    let Some((bot_token, chat_id)) = config.rsplit_once(':') else {
        tracing::warn!("Invalid Telegram config format, expected 'bot_token:chat_id'");
        return Ok(());
    };

    let url = format!("https://api.telegram.org/bot{bot_token}/sendMessage");

    reqwest::Client::new()
        .post(&url)
        .json(&serde_json::json!({
            "chat_id": chat_id,
            "text": body,
            "parse_mode": "HTML"
        }))
        .timeout(std::time::Duration::from_secs(5))
        .send()
        .await?;

    Ok(())
}
```
The configuration format -- bot_token:chat_id packed into a single string -- is a pragmatic compromise. The user's notification config in the database stores one target string per channel. For email, that target is an email address. For Slack, it is a webhook URL. For Telegram, we need two pieces of information (the bot token and the chat ID), so we pack them into one string with a colon delimiter.
We considered using a JSON object for the target field, but that would mean different parsing logic for different channels, which adds complexity to both the backend and the frontend configuration UI. A single string with a documented format is simpler for everyone.
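One wrinkle worth calling out: Telegram bot tokens themselves contain a colon (a numeric ID, a colon, then a secret), which is why the packed target has to be split on its last colon rather than its first. A standalone sketch of the unpacking, using a made-up token value:

```rust
/// Split a packed "bot_token:chat_id" target on its LAST colon, since the
/// bot token itself contains a colon. Returns None for malformed targets.
fn unpack_telegram_target(target: &str) -> Option<(&str, &str)> {
    target.rsplit_once(':')
}

fn main() {
    // Fabricated token for illustration only.
    let (token, chat_id) = unpack_telegram_target("12345:AAFakeToken:67890").unwrap();
    assert_eq!(token, "12345:AAFakeToken");
    assert_eq!(chat_id, "67890");
    assert!(unpack_telegram_target("no-delimiter").is_none());
    println!("ok");
}
```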
The parse_mode: "HTML" setting lets us send formatted messages with bold and monospace tags, which makes the notification body more readable on mobile screens. We use HTML rather than Markdown because Telegram's Markdown parsing is notoriously inconsistent across versions, while HTML rendering is reliable.
## Webhooks: The Escape Hatch
Webhooks are the most powerful and least opinionated notification channel. While the other four channels send human-readable messages, the webhook channel sends structured JSON with the full execution payload:
```rust
async fn send_webhook(&self, webhook_url: &str, job_name: &str, execution: &Execution) -> Result<()> {
    let payload = serde_json::json!({
        "event": "job.execution.completed",
        "job": {
            "name": job_name,
        },
        "execution": {
            "id": execution.id,
            "status": execution.status,
            "http_status": execution.http_status,
            "duration_ms": execution.duration_ms,
            "response_body": execution.response_body,
            "error_message": execution.error_message,
            "started_at": execution.started_at,
            "completed_at": execution.completed_at,
            "retry_count": execution.retry_count,
        },
        "timestamp": chrono::Utc::now().to_rfc3339(),
    });

    reqwest::Client::new()
        .post(webhook_url)
        .header("Content-Type", "application/json")
        .header("User-Agent", "0cron-webhook/1.0")
        .json(&payload)
        .timeout(std::time::Duration::from_secs(5))
        .send()
        .await?;

    Ok(())
}
```
The webhook payload includes everything: execution ID, status, HTTP status code, duration, response body, error message, timestamps, and retry count. This is intentionally verbose. The webhook channel exists for users who want to build custom integrations -- piping execution data into Datadog, triggering a PagerDuty incident, updating a status page, feeding a custom dashboard. They need the raw data, not a pre-formatted summary.
The User-Agent: 0cron-webhook/1.0 header is a small but important detail. It lets webhook receivers identify requests from 0cron and distinguish them from other webhook sources. Some firewall and WAF configurations whitelist traffic based on User-Agent, so providing a distinctive one is a courtesy to users with strict security configurations.
The event: "job.execution.completed" field is forward-looking. Today, we only send one event type. But the schema is designed so that future events (job created, job paused, trial expiring, payment failed) can use the same webhook infrastructure with different event types. The receiver can filter on the event field without changing their endpoint URL.
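On the receiving side, this means one endpoint can route on the event field. A sketch of what a receiver's dispatch might look like; the event names other than "job.execution.completed" are the hypothetical future types speculated above, not events 0cron sends today:

```rust
/// Illustrative receiver-side routing on the payload's "event" field.
/// Only "job.execution.completed" is emitted today; the other arms show
/// how hypothetical future event types could share the same endpoint.
fn route_event(event: &str) -> &'static str {
    match event {
        "job.execution.completed" => "record execution",
        "job.paused" => "update status page",
        _ => "ignore",
    }
}

fn main() {
    assert_eq!(route_event("job.execution.completed"), "record execution");
    assert_eq!(route_event("trial.expiring"), "ignore");
    println!("ok");
}
```

The catch-all arm is what makes the schema forward-compatible: unknown event types are ignored rather than rejected, so adding a new type never breaks existing receivers.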
## The Email Fallback: SMTP with Graceful Degradation
Email is simultaneously the most reliable and most frustrating notification channel to implement. SMTP is a protocol from 1982, and it shows. But it is universal -- every developer has an email address, and emails arrive even when Slack is down and Discord is having an outage.
Our email implementation uses the lettre crate with STARTTLS, which is the standard for modern SMTP:
```rust
async fn send_email(&self, to_address: &str, body: &str) -> Result<()> {
    if self.smtp_username.is_empty() {
        tracing::info!("SMTP not configured, logging notification: {body}");
        return Ok(());
    }

    let email = Message::builder()
        .from(self.smtp_from.parse()?)
        .to(to_address.parse()?)
        .subject("0cron Job Notification")
        .body(body.to_string())?;

    let creds = Credentials::new(
        self.smtp_username.clone(),
        self.smtp_password.clone(),
    );

    let mailer = SmtpTransport::starttls_relay(&self.smtp_host)?
        .port(self.smtp_port)
        .credentials(creds)
        .build();

    match mailer.send(&email) {
        Ok(_) => tracing::info!("Email notification sent to {to_address}"),
        Err(e) => tracing::error!("Failed to send email to {to_address}: {e}"),
    }

    Ok(())
}
```
The most important line in this function is the first one: if self.smtp_username.is_empty(). This is the graceful degradation. In development, in testing, and on any deployment where SMTP is not configured, the function logs the notification body and returns success. It does not panic. It does not return an error that would propagate up and fail the job execution. It simply notes that it would have sent an email and moves on.
This pattern -- "log instead of send when not configured" -- is critical for development workflows. When you are testing job execution locally, you do not want to set up an SMTP server just to verify that the executor runs correctly. The notification service adapts to its environment.
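The pattern generalises beyond email: check for missing configuration up front, report what would have been sent, and return success. A minimal sketch of that shape, with an illustrative `Notifier` type standing in for the real service:

```rust
/// Illustrative sketch of the "log instead of send" degradation pattern.
/// `Notifier` and `send_or_log` are stand-ins, not the real types.
struct Notifier {
    smtp_username: String,
}

impl Notifier {
    fn send_or_log(&self, body: &str) -> Result<String, String> {
        if self.smtp_username.is_empty() {
            // Unconfigured: succeed anyway, so a missing dev-environment
            // SMTP setup can never fail a job execution.
            return Ok(format!("logged only: {body}"));
        }
        Ok(format!("sent: {body}"))
    }
}

fn main() {
    let dev = Notifier { smtp_username: String::new() };
    assert_eq!(dev.send_or_log("job failed").unwrap(), "logged only: job failed");
    let prod = Notifier { smtp_username: "smtp-user".to_string() };
    assert_eq!(prod.send_or_log("job failed").unwrap(), "sent: job failed");
    println!("ok");
}
```

The key property is that both branches return Ok: the caller never has to know whether a real email went out.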
The STARTTLS connection is the modern standard for SMTP security. Unlike implicit TLS (port 465), STARTTLS begins as a plaintext connection on port 587 and upgrades to TLS once the client issues the STARTTLS command. This is what Gmail, SendGrid, Mailgun, and most business SMTP providers expect.
## The Notification Config: Per-Channel, Per-Event
The notification configuration is stored as a JSON column in the users table. Here is an example of what a fully configured user looks like:
```json
{
"email": {
"enabled": true,
"target": "[email protected]",
"on_success": false,
"on_failure": true
},
"slack": {
"enabled": true,
"target": "https://hooks.slack.com/services/T00/B00/xxxxx",
"on_success": true,
"on_failure": true
},
"discord": {
"enabled": false,
"target": "",
"on_success": false,
"on_failure": false
},
"telegram": {
"enabled": true,
"target": "123456789:AAF...:987654321",
"on_success": false,
"on_failure": true
},
"webhook": {
"enabled": true,
"target": "https://api.example.com/0cron-events",
"on_success": true,
"on_failure": true
}
}
```

This structure encodes a common configuration pattern: email and Telegram for failures only (the "alert me when something breaks" channels), Slack for everything (the "team awareness" channel), Discord disabled (not everyone uses it), and webhook for everything (feeding a monitoring dashboard).
The on_success and on_failure flags are the key design insight. Most notification systems are binary: on or off. But cron jobs are different from application errors. A job that runs 1,440 times a day (every minute) and succeeds every time would generate 1,440 success notifications. Nobody wants that. But the same user absolutely wants to know about the one failure in those 1,440 runs.
Per-event filtering solves this elegantly. The common configuration is on_success: false, on_failure: true -- silence on success, alert on failure. But some users want success notifications too, perhaps for a critical monthly batch job where silence is itself suspicious. The configuration supports both patterns without adding complexity to the dispatch logic.
## Why Five Channels (and Not Three, or Ten)
We did not pick five channels arbitrarily. We surveyed the cron job and monitoring tool landscape and identified the platforms where developers actually respond to alerts:
Email is universal. Everyone has it. It is the lowest common denominator and the only channel that works even when the receiving application is down (because email is store-and-forward).
Slack dominates team communication in startups and mid-sized companies. A #cron-alerts channel is a standard pattern. Slack's API is well-documented, and incoming webhooks are trivially set up.
Discord fills the same role for open-source teams, gaming companies, and developer communities. Its webhook API is nearly identical to Slack's in concept (though the JSON format differs), so supporting both was marginal additional effort.
Telegram is popular globally, particularly in Africa, Eastern Europe, and Southeast Asia -- markets where 0cron has strong positioning due to its low price point. Telegram bots are free, fast, and work on slow connections.
Webhooks are the escape hatch. Any integration we did not build -- PagerDuty, Opsgenie, Microsoft Teams, custom dashboards, Zapier workflows -- can be connected through a webhook. Instead of building ten integrations, we build four plus a generic webhook, and the user connects the rest.
We explicitly chose not to support SMS. SMS delivery is unreliable, costs money per message (which conflicts with our unlimited-notifications pricing), and is increasingly blocked by spam filters. Push notifications via a mobile app would be ideal, but we do not have a mobile app yet. When we do, it will become the sixth channel.
## Reliability Considerations
Notifications are fire-and-forget by design. If a Slack webhook returns a 500 error, we log it and move on. We do not retry, we do not queue, we do not dead-letter. This is a conscious trade-off.
The reasoning: notification delivery is a best-effort service. The authoritative record of a job execution is the execution log in the database, not the Slack message. If a notification fails, the user can always check the 0cron dashboard to see what happened. Retrying failed notifications would add complexity (retry queues, backoff logic, deduplication) for marginal benefit, and could mask underlying configuration problems. A user with a broken webhook URL should see in their logs that notifications are failing, fix the URL, and resume -- not have the system silently retry for hours.
That said, we do have one reliability mechanism: the sequential dispatch order. Email is always attempted first, because it is the most reliable channel. If the notification service has a transient error (memory pressure, network timeout), the most important channel has the best chance of succeeding.
## What Comes Next
The notification system has a clear upgrade path. The next features on the roadmap are:
Notification templates. Let users customise the message format per channel. A Slack notification might want different fields than an email notification.
Escalation policies. If a job fails three times in a row, escalate from Slack to email to phone call. This requires tracking notification history, which we do not currently do.
Digest mode. Instead of one notification per execution, send a daily summary: "47 jobs ran, 2 failed, here are the details." This is particularly valuable for users with hundreds of jobs.
But the current system -- five channels, per-event filtering, graceful degradation, structured webhook payloads -- covers the needs of 95% of users at launch. And it does it in 296 lines of Rust.
---
This is Part 5 of a 10-part series on building 0cron.dev.
| # | Article | Focus |
|---|---|---|
| 1 | Why the World Needs a $2 Cron Job Service | Market analysis and pricing philosophy |
| 2 | 4 Agents, 1 Product: Building 0cron in a Single Session | Parallel build with 4 Claude agents |
| 3 | Building a Cron Scheduler Engine in Rust | Axum, Redis sorted sets, job executor |
| 4 | "Every Day at 9am": Natural Language Schedule Parsing | Regex-based NLP parser in 152 lines |
| 5 | Multi-Channel Notifications: Email, Slack, Discord, Telegram, Webhooks | This article |
| 6 | Stripe Integration for a $1.99/month SaaS | Billing, trials, and webhook handling |
| 7 | From Static HTML to SvelteKit Dashboard Overnight | Frontend architecture and Svelte 5 runes |
| 8 | Heartbeat Monitoring: When Your Job Should Ping You | Monitor model, pings, and grace periods |
| 9 | Encrypted Secrets, API Keys, and Security | AES-256-GCM, API key auth, HMAC signing |
| 10 | From Abidjan to Production: Launching 0cron.dev | The full story and what comes next |