
From cargo build to a Live Server: The Release Pipeline

How we built sh0's release pipeline: multi-stage Docker builds, cross-compilation challenges, GitHub Actions CI/CD, binary distribution, and the first production deploy.

Thales & Claude | March 25, 2026 | 12 min read

Tags: sh0, release, ci-cd, github-actions, docker, cross-compilation, rust, deploy

For ten days, sh0 existed only on a MacBook in Abidjan. It compiled, it passed tests, it ran locally. But a PaaS that cannot deploy itself is an ironic failure. On March 21 and 22, 2026, we built the infrastructure to take sh0 from cargo build to a running server at demo.sh0.app -- and we learned that the last mile is where most of the pain lives.

This is the story of release profiles, cross-compilation nightmares, .gitignore bugs that broke production builds, and a Content Security Policy that turned the dashboard into a white screen. The glamorous side of shipping software.

The Release Profile: Squeezing the Binary

Rust's default release build is already fast, but we wanted small. sh0 ships as a single binary that users download and run. Every megabyte matters, especially for users on African internet connections.

# Cargo.toml
[profile.release]
opt-level = 3        # Maximum optimization
lto = "fat"          # Full link-time optimization
codegen-units = 1    # Single codegen unit (better optimization)
strip = "symbols"    # Remove debug symbols
panic = "abort"      # No unwinding machinery

Each setting is a trade-off:

  • Fat LTO merges all crates into a single compilation unit, enabling cross-crate inlining. Build time doubles. Binary size drops 15%.
  • codegen-units = 1 prevents the compiler from splitting code across parallel codegen units, which produces better-optimised output at the cost of slower compilation.
  • strip = "symbols" removes debug symbols. The binary shrinks from 40 MB to 25 MB, but stack traces become useless in production. We accepted this because sh0 logs structured error messages that do not depend on symbol tables.
  • panic = "abort" removes the unwinding infrastructure. The binary is smaller and panics terminate immediately instead of unwinding the stack.

The result: a 25 MB binary on macOS ARM64, 27 MB on Linux x64. Compressed to 10-11 MB in .tar.gz. Acceptable for a curl | bash install.

Build Metadata: Knowing What You Shipped

Every production system needs to answer the question "what version is running?" We embedded two pieces of metadata at compile time via build.rs:

// crates/sh0/build.rs
use std::process::Command;

fn main() {
    let hash = Command::new("git")
        .args(["rev-parse", "--short", "HEAD"])
        .output()
        .map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string())
        .unwrap_or_else(|_| "unknown".to_string());

    let time = Command::new("date")
        .args(["-u", "+%Y-%m-%dT%H:%M:%SZ"])
        .output()
        .map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string())
        .unwrap_or_else(|_| "unknown".to_string());

    println!("cargo:rustc-env=SH0_BUILD_HASH={}", hash);
    println!("cargo:rustc-env=SH0_BUILD_TIME={}", time);
}

Now sh0 version outputs sh0 v1.0.0 (38a9d27, 2026-03-21T17:01:20Z) -- the version, the exact commit, and the build timestamp. When a user reports a bug, we know exactly what they are running.
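As a small illustration, assembling that line is a pure formatting step. The sketch below uses a hypothetical `version_line` helper; sh0's real code reads the values embedded by `build.rs` via `env!()`:

```rust
// Hypothetical helper illustrating the version line format quoted above;
// sh0's actual binary reads SH0_BUILD_HASH and SH0_BUILD_TIME via env!().
fn version_line(version: &str, hash: &str, time: &str) -> String {
    format!("sh0 v{version} ({hash}, {time})")
}

fn main() {
    // Reproduces the example output from the article.
    println!("{}", version_line("1.0.0", "38a9d27", "2026-03-21T17:01:20Z"));
}
```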

The Multi-Stage Dockerfile

sh0 has two Dockerfiles serving different purposes:

Root Dockerfile (the sh0 binary)

Three stages: build the dashboard, compile the Rust binary, package the runtime.

# Stage 1: Build the Svelte dashboard
FROM node:22-slim AS dashboard
WORKDIR /app/dashboard
COPY dashboard/package*.json ./
RUN npm ci
COPY dashboard/ ./
RUN npm run build

# Stage 2: Compile the Rust binary
FROM rust:1.87-bookworm AS builder
WORKDIR /app
COPY --from=dashboard /app/dashboard/build ./dashboard/build
COPY Cargo.toml Cargo.lock ./
COPY crates/ ./crates/
RUN cargo build --release

# Stage 3: Minimal runtime
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates curl && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/sh0 /usr/local/bin/sh0
EXPOSE 9000
VOLUME ["/var/lib/sh0"]
ENTRYPOINT ["sh0", "serve", "--port", "9000"]

The dashboard must be built first because the Rust binary embeds it via the include_dir! macro. At compile time, the macro reads every file in dashboard/build/ and bakes it into the binary as a static asset. This is what makes sh0 a single binary -- the entire web UI is inside the executable.

Website Dockerfile (sh0.dev)

The website Dockerfile was simpler in theory but harder in practice. The critical lesson: Easypanel passes configuration as --build-arg, but SvelteKit reads process.env at runtime. We needed explicit ARG to ENV forwarding for all 20 configuration variables:

ARG DATABASE_PATH
ENV DATABASE_PATH=$DATABASE_PATH

ARG STRIPE_SECRET_KEY
ENV STRIPE_SECRET_KEY=$STRIPE_SECRET_KEY

# ... 18 more variables

This is the kind of problem that only surfaces in production. Locally, environment variables are loaded from .env. On Easypanel, they are injected as build arguments. If you do not forward them, every $env/dynamic/private import returns undefined at runtime, and the entire site silently breaks.

The Install Script

Users install sh0 with a single command:

curl -fsSL https://get.sh0.dev | bash

The install script is served by the SvelteKit website. When the Host header is get.sh0.dev, the request handler short-circuits before SvelteKit routing and returns the script as text/plain:

// hooks.server.ts
export const handle: Handle = async ({ event, resolve }) => {
    const host = event.request.headers.get('host') ?? '';
    if (host === 'get.sh0.dev' || host === 'get.sh0.dev:443') {
        return new Response(INSTALL_SCRIPT, {
            headers: {
                'Content-Type': 'text/plain; charset=utf-8',
                'Cache-Control': 'public, max-age=300'
            }
        });
    }
    // ... normal SvelteKit handling
};

The script itself handles platform detection, download, and verification:

1. Detect OS (uname -s -> linux/darwin) and architecture (uname -m -> x86_64/aarch64/arm64)
2. Download the correct .tar.gz from https://sh0.dev/releases/latest/
3. Download checksums.txt and verify SHA256 (supports both sha256sum on Linux and shasum -a 256 on macOS)
4. Extract to /usr/local/bin/sh0 (using sudo if not root)
5. Create data directory at /var/lib/sh0
6. Print a quick-start guide with the next three commands to run
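The detection and naming steps can be mirrored as a toy Rust sketch. Here `artifact_name` is hypothetical (the real script does this with `uname` and shell `case` statements), and the exact `sh0-<os>-<arch>.tar.gz` naming scheme is an assumption for illustration:

```rust
// Toy Rust mirror of the install script's platform detection.
// The "sh0-<os>-<arch>.tar.gz" naming is an assumption, not the script's verbatim output.
fn artifact_name(os: &str, arch: &str) -> Option<String> {
    let arch = match arch {
        "x86_64" => "x86_64",
        "aarch64" | "arm64" => "aarch64", // uname -m reports arm64 on macOS
        _ => return None,                 // unsupported architecture
    };
    match os {
        "linux" | "darwin" => Some(format!("sh0-{os}-{arch}.tar.gz")),
        _ => None, // unsupported OS
    }
}

fn main() {
    println!("{:?}", artifact_name("linux", "x86_64"));
}
```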

The binary download route (/releases/[...path]) streams files from disk using node:fs.createReadStream to avoid loading 11 MB binaries into memory. It includes path traversal protection (rejects .., validates resolved path stays within the releases directory) and MIME type filtering (only serves .tar.gz, .txt, and .sha256).
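The real route is TypeScript inside SvelteKit; its two guards can be sketched in Rust for illustration (`is_allowed` is a hypothetical name, and this simplification checks path segments rather than canonicalising the resolved path as the real route does):

```rust
// Sketch of the download route's two guards: reject ".." segments,
// and only serve the three whitelisted extensions.
fn is_allowed(path: &str) -> bool {
    let no_traversal = !path.split('/').any(|seg| seg == "..");
    let allowed_ext = [".tar.gz", ".txt", ".sha256"]
        .iter()
        .any(|ext| path.ends_with(ext));
    no_traversal && allowed_ext
}

fn main() {
    println!("{}", is_allowed("latest/sh0-linux-x86_64.tar.gz"));
}
```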

Cross-Compilation: The Pain

sh0 needs to run on four platform combinations: Linux x64, Linux ARM64, macOS x64, macOS ARM64. Building all four from a single machine is the promise of cross-compilation. The reality was less cooperative.

Attempt 1: cross on macOS

The cross tool wraps cargo build in a Docker container with the target platform's toolchain. In theory, cross build --target x86_64-unknown-linux-gnu --release produces a Linux x64 binary from macOS. In practice:

Problem 1: Docker credential helper. macOS Docker Desktop puts its credential helper at /Applications/Docker.app/Contents/Resources/bin/docker-credential-osxkeychain, which is not in the default PATH inside the cross container. Fix: add it to PATH before invoking cross.

Problem 2: GCC memcmp bug. The aws-lc-sys crate (a dependency of rustls) uses GCC inline assembly for its crypto primitives. The GCC version in the cross container has a known memcmp bug that causes cryptographic operations to produce incorrect results. This is not a build failure -- the binary compiles fine but produces wrong answers at runtime. The most dangerous kind of bug.

Fix: force the cmake builder instead of GCC assembly:

export AWS_LC_SYS_CMAKE_BUILDER=1

And add cmake + perl as pre-build dependencies in Cross.toml:

# Cross.toml
[target.x86_64-unknown-linux-gnu]
pre-build = ["apt-get update && apt-get install -y cmake perl"]

[target.aarch64-unknown-linux-gnu]
pre-build = ["apt-get update && apt-get install -y cmake perl"]

[build.env]
passthrough = ["AWS_LC_SYS_CMAKE_BUILDER"]

Attempt 2: Build on the Target

When cross-compilation produced unreliable results, we fell back to the most reliable method: build on the actual target platform. For the v1.0.0 release:

  • macOS ARM64: built natively on the development MacBook
  • Linux x64: built on the demo server (Ubuntu 24.04)

Building on the demo server required uploading the source as a tarball (the repository is private, so git clone was not an option without deploying SSH keys to a server we did not fully trust yet), installing the full Rust toolchain + Node 22 + build-essential + libssl-dev + cmake + perl, and running the build. Inelegant but correct.

GitHub Actions CI/CD

For ongoing development, we set up two workflows:

CI (every push to main, every PR)

Four parallel jobs:

# .github/workflows/ci.yml
jobs:
  fmt:
    runs-on: ubuntu-latest
    steps:
      - run: cargo fmt --all -- --check

  clippy:
    runs-on: ubuntu-latest
    steps:
      - run: npm ci && npm run build  # Dashboard must exist for include_dir!
        working-directory: dashboard
      - run: cargo clippy -- -D warnings

  test:
    runs-on: ubuntu-latest
    steps:
      - run: npm ci && npm run build
        working-directory: dashboard
      - run: cargo test

  dashboard:
    runs-on: ubuntu-latest
    steps:
      - run: npm ci && npm run build && npx svelte-check
        working-directory: dashboard

The critical detail: clippy and test jobs must build the dashboard first. The include_dir!("dashboard/build") macro runs at compile time and panics if the directory does not exist. This caused the first CI failure and took an embarrassingly long time to diagnose.

Release (on version tags)

Triggered by pushing a v* tag:

# .github/workflows/release.yml (simplified)
strategy:
  matrix:
    include:
      - target: x86_64-unknown-linux-gnu
        runner: ubuntu-latest
        cross: true
      - target: aarch64-unknown-linux-gnu
        runner: ubuntu-latest
        cross: true
      - target: aarch64-apple-darwin
        runner: macos-14
        cross: false
      - target: x86_64-apple-darwin
        runner: macos-13
        cross: false

Linux targets use cross (with the cmake fix). macOS targets build natively on Apple Silicon (macos-14) and Intel (macos-13) runners. A final job collects all four artifacts, generates checksums, and creates a GitHub Release using softprops/action-gh-release.

The .gitignore Bugs That Broke Production

Two .gitignore patterns caused production build failures on Easypanel:

Bug 1: data/ matched too broadly

The website .gitignore contained data/ to exclude the SQLite database directory. But git's pattern matching is not anchored to the root by default. data/ matches data/ at any depth -- including src/lib/data/docs-navigation.ts, which defines the documentation sidebar structure.

The file existed locally (git tracks files that were committed before the ignore rule), but when Easypanel ran a fresh git clone, the file was missing. The build failed with ENOENT: docs-navigation.

Fix: change data/ to /data/ (root-anchored pattern).
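The anchoring difference can be shown with a toy matcher. This is a deliberate simplification of git's real pattern rules, written in Rust purely to illustrate the root-anchoring behaviour:

```rust
// Toy illustration of gitignore anchoring (NOT git's full matcher):
// "data/" matches a directory named data at ANY depth; "/data/" only at the root.
fn dir_pattern_matches(pattern: &str, path: &str) -> bool {
    let dir = pattern.trim_end_matches('/');
    if let Some(rooted) = dir.strip_prefix('/') {
        path.split('/').next() == Some(rooted)
    } else {
        path.split('/').any(|seg| seg == dir)
    }
}

fn main() {
    // The unanchored pattern catches the nested source directory; the anchored one does not.
    println!("{}", dir_pattern_matches("data/", "src/lib/data/docs-navigation.ts"));
    println!("{}", dir_pattern_matches("/data/", "src/lib/data/docs-navigation.ts"));
}
```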

Bug 2: */secret* was too greedy

The root .gitignore contained */secret* to prevent accidentally committing credential files. Reasonable intent. But the documentation site had a page at security/secrets-management/+page.svelte. The word "secrets" matched the glob, and the entire page was excluded from the repository.

Easypanel built the site, the prerenderer tried to visit /docs/security/secrets-management, got a 404, and the build failed.

Fix: narrow the pattern to specific file extensions:

**/*secret*.md
**/*secret*.txt
**/*secret*.json
**/*secret*.env

Both bugs were invisible in local development because git does not un-track files that were committed before an ignore rule was added. They only surfaced in fresh clones -- exactly the environment a CI/CD system uses.

The First Deploy: demo.sh0.app

With binaries built and the website live, we deployed sh0 to a Hetzner server:

  • Server: Ubuntu 24.04.4 LTS, 8 GB RAM, 150 GB disk (IP: 5.78.182.107)
  • Domain: demo.sh0.app (with wildcard *.sh0.app for app subdomains)

The deployment was manual: upload the binary, install Docker and Caddy, run sh0 serve. The startup log showed all subsystems initialising: Caddy proxy, Docker client, metrics collector, cron scheduler, alert engine, autoscaler, uptime checker. The dashboard loaded at https://demo.sh0.app.

Except it did not. The dashboard showed a white page.

The CSP Fix

The Content-Security-Policy header in router.rs included:

font-src 'self'
style-src 'self' 'unsafe-inline'

The dashboard uses Google Fonts (DM Mono, Inter). The browser blocked requests to fonts.gstatic.com (font files) and fonts.googleapis.com (CSS). No fonts loaded, the CSS could not apply, and the page rendered as a blank white rectangle.

Fix:

"font-src 'self' https://fonts.gstatic.com"
"style-src 'self' 'unsafe-inline' https://fonts.googleapis.com"

Rebuilt on the server, restarted sh0, and the dashboard appeared in full. The entire debugging cycle -- white page, check browser console, identify CSP violation, fix the header, rebuild, redeploy -- took about twenty minutes. A reminder that "it works on my machine" means nothing when the machine does not enforce CSP headers.
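For reference, a CSP header value is just directives joined by "; ". A minimal sketch of assembling one (the `csp` helper is hypothetical and does not reflect router.rs's actual structure):

```rust
// Hypothetical helper: join CSP directives into a single header value.
fn csp(directives: &[(&str, &str)]) -> String {
    directives
        .iter()
        .map(|(name, value)| format!("{name} {value}"))
        .collect::<Vec<_>>()
        .join("; ")
}

fn main() {
    let header = csp(&[
        ("font-src", "'self' https://fonts.gstatic.com"),
        ("style-src", "'self' 'unsafe-inline' https://fonts.googleapis.com"),
    ]);
    println!("{header}");
}
```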

The Auto-Provisioning Addition

After the manual deploy, we added automatic dependency installation. When a user runs sh0 serve on a fresh server without Docker or Caddy, sh0 now detects their absence and installs them:

// commands/setup.rs
pub async fn ensure_dependencies(auto_install: bool) -> Result<()> {
    // Check Docker
    if !is_docker_installed().await {
        if auto_install && cfg!(target_os = "linux") && is_root() {
            info!("Docker not found. Installing...");
            Command::new("sh")
                .args(["-c", "curl -fsSL https://get.docker.com | sh"])
                .status()?;
            Command::new("systemctl")
                .args(["start", "docker"])
                .status()?;
        } else {
            bail!("Docker is required. Install it with: curl -fsSL https://get.docker.com | sh");
        }
    }

    // Check Caddy (tries multiple paths)
    if !is_caddy_installed().await {
        if auto_install && cfg!(target_os = "linux") && is_root() {
            install_caddy().await?;
        } else {
            bail!("Caddy is required. See: https://caddyserver.com/docs/install");
        }
    }

    Ok(())
}

Auto-installation only happens on Linux when running as root. On macOS, sh0 prints instructions and exits. The Caddy installer uses the official Cloudsmith APT repository on Debian/Ubuntu and falls back to a direct binary download on other distributions. After installation, sh0 stops Caddy's default systemd service (sh0 manages Caddy programmatically via its API, not through systemd).

The Infrastructure State After Day 11

By the end of March 22, sh0 existed as a running product:

| Service | URL | Status |
|---|---|---|
| Website | https://sh0.dev | Live on Easypanel |
| Install script | https://get.sh0.dev | Serving install.sh |
| Binary downloads | https://sh0.dev/releases/latest/ | v1.0.0 (Linux x64 + macOS ARM64) |
| Download page | https://sh0.dev/download | Live with checksums |
| Demo server | https://demo.sh0.app | Running sh0 v1.0.0 |
| CI pipeline | GitHub Actions | fmt + clippy + test + dashboard |
| Release pipeline | GitHub Actions | Cross-compile + GitHub Release |

Ten days earlier, sh0 was a cargo init. Now it was a deployed product with a website, an install script, binary distribution, CI/CD, and a live demo server. The release pipeline was not elegant -- manual binary uploads, cross-compilation workarounds, .gitignore bugs discovered in production. But it worked. And working, in production, beats elegant on paper.

---

Next in the series: Building for Africa: Mobile Money, Local Pricing, and Why It Matters -- why we built sh0 from Abidjan with Mobile Money payments, 5-language support, and pricing designed for African developers.

Related Articles