Every app deployed through sh0 passes through 480 lines of Rust that answer one question: what is this project? A Next.js app? A Django API? A bare PHP site? The answer determines which Dockerfile gets generated, which port gets exposed, which health check runs. Get it wrong, and the deploy either fails or, worse, succeeds with a broken container.
On March 30, 2026, I sat down to audit those 480 lines. I found 31 bugs. Four of them were breaking production deploys today. Here is what was wrong, why it was wrong, and what it taught me about the hardest problem in deployment automation: guessing correctly.
## The Architecture: Three Files That Control Everything
sh0's build engine lives in three Rust files inside the sh0-builder crate:
```
crates/sh0-builder/src/
├── detector.rs    (480 lines)  — "What stack is this?"
├── dockerfile.rs  (1050 lines) — "What Dockerfile does it need?"
└── types.rs       (320 lines)  — Stack enum, DetectedStack struct
```

When a user pushes code, uploads a ZIP, or clicks "Deploy" in the dashboard, the pipeline calls detect_stack(). This function reads the project directory and returns a DetectedStack:
```rust
pub struct DetectedStack {
    pub stack: Stack,                    // NextJs, Django, Php, Go, ...
    pub framework: Option<String>,       // "laravel", "flask", "express"
    pub package_manager: Option<String>, // "npm", "yarn", "pnpm", "bun"
    pub entry_point: Option<String>,     // "main:app", "main.py"
    pub build_command: Option<String>,
    pub start_command: Option<String>,
    pub port: u16,
    pub has_dockerfile: bool,
}
```

Then generate_dockerfile() takes that struct and produces a production-grade, multi-stage Dockerfile. The detector decides what to build. The generator decides how to build it.
Both were wrong in ways I did not expect.
## The Four Bugs That Were Breaking Deploys Today
### Bug 1: Bun Beats Next.js
A Next.js 14 project using Bun as its package manager has two marker files: next.config.js and bun.lockb. The detector checked for bun.lockb before checking for next.config.js:
```rust
// Check for Bun runtime
if file_exists(dir, "bun.lockb") {
    return DetectedStack::new(Stack::Bun); // Returns immediately
}

// Check for framework-specific config files
if file_exists(dir, "next.config.js") { // Never reached
    return DetectedStack::new(Stack::NextJs);
}
```

The result: the project got a generic Bun Dockerfile (CMD ["bun", "start"]), not a Next.js standalone Dockerfile. Every route returned 404.
The fix: Framework config checks now run before runtime checks. Bun is a package manager. Next.js is a framework. The framework is always more specific than the runtime. After the reorder, Bun falls back to a catch-all that only triggers when no framework config is found:
```rust
// Framework detection first (most specific)
if file_exists(dir, "next.config.js") || ... {
    let mut stack = DetectedStack::new(Stack::NextJs);
    stack.package_manager = package_manager; // Still "bun"
    return stack;
}

// ... SvelteKit, Nuxt, Astro, Remix ...

// Bun runtime fallback (least specific)
if package_manager.as_deref() == Some("bun") {
    return DetectedStack::new(Stack::Bun);
}
```

The deeper lesson: detection priority must go from most-specific to least-specific. A framework config file (next.config.js) is more specific than a lockfile (bun.lockb). A lockfile is more specific than a manifest (package.json). When you add a new detection path, ask: "Is this more or less specific than what's above it?"
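That ordering rule is easy to pin down with a unit-style check. Here is a minimal, self-contained sketch of specificity-ordered detection — the Guess enum and detect function are illustrative stand-ins, not sh0's real API:

```rust
/// Illustrative stand-in for sh0's detector: `markers` is the set of
/// filenames found in the project root. All names here are hypothetical.
#[derive(Debug, PartialEq)]
enum Guess {
    NextJs,
    Bun,
    Node,
    Unknown,
}

fn detect(markers: &[&str]) -> Guess {
    let has = |name: &str| markers.contains(&name);
    // 1. Framework config files first (most specific)
    if has("next.config.js") || has("next.config.mjs") {
        return Guess::NextJs;
    }
    // 2. Runtime lockfiles next
    if has("bun.lockb") {
        return Guess::Bun;
    }
    // 3. Generic manifest last (least specific)
    if has("package.json") {
        return Guess::Node;
    }
    Guess::Unknown
}

fn main() {
    // A Bun-managed Next.js project has both markers; the framework wins.
    assert_eq!(
        detect(&["bun.lockb", "next.config.js", "package.json"]),
        Guess::NextJs
    );
    // No framework config: fall through to the runtime.
    assert_eq!(detect(&["bun.lockb", "package.json"]), Guess::Bun);
}
```

Reversing the first two if blocks reproduces the bug exactly: the Bun arm fires first and the first assertion fails.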
### Bug 2: WordPress False Positive
Any PHP project with a directory named wp-content/ was detected as WordPress:
```rust
if file_exists(dir, "wp-config.php") || dir.join("wp-content").is_dir() {
    stack.framework = Some("wordpress".to_string());
}
```

A Laravel project that scaffolds a wp-content/ directory for WordPress integration testing? WordPress. A custom CMS that happens to use the same directory name? WordPress. The detector stamped it with the WordPress Dockerfile, which installs mysqli extensions and creates upload directories -- none of which a Laravel app needs.
The fix: wp-content/ alone is not sufficient. Require a co-marker:
```rust
if file_exists(dir, "wp-config.php")
    || file_exists(dir, "wp-config-sample.php")
    || (dir.join("wp-content").is_dir() && file_exists(dir, "wp-login.php"))
```

wp-config.php alone is definitive -- it only exists in WordPress installations. wp-content/ needs wp-login.php as a co-signal. The combination eliminates false positives while catching legitimate WordPress projects that might have renamed their config file.
### Bug 3: Docker COPY With Shell Redirects
The Java Maven Dockerfile template contained this line:
```dockerfile
COPY .mvn .mvn 2>/dev/null || true
```

This looks reasonable if you read it as a shell command. But COPY is a Dockerfile instruction, not a shell command. Docker treats every whitespace-separated token as a path: it tries to copy sources named .mvn, 2>/dev/null, and || into a destination named true. Those sources do not exist in the build context, so Docker fails the build with a cryptic error. The 2>/dev/null does not suppress anything. The || true does not provide a fallback.
The same bug existed in the Gradle template:
```dockerfile
COPY gradle gradle 2>/dev/null || true
```

The fix: Remove the invalid shell syntax. Restructure the template to COPY . . followed by the build command:
```dockerfile
FROM eclipse-temurin:21-jdk AS builder
WORKDIR /app
COPY . .
RUN chmod +x mvnw 2>/dev/null || true
RUN ./mvnw package -DskipTests -B 2>/dev/null || mvn package -DskipTests -B
```

This loses Docker's dependency layer caching (the old approach tried to copy pom.xml first for cache efficiency), but it is correct. An incorrect build that fails 30% of the time is worse than a correct build that is 10 seconds slower.
### Bug 4: Laravel's Cached Empty APP_KEY
The Laravel Dockerfile ran php artisan config:cache during the Docker build:
dockerfileRUN php artisan config:cache --no-interaction || trueLaravel's config:cache serializes all configuration values into a single cached PHP file. During the Docker build, the environment variable APP_KEY is empty (set to "" in the Dockerfile's ENV). So the cached config contains APP_KEY="".
After deployment, the user sets APP_KEY through sh0's environment variable manager. But the cached config is already baked into the image. Laravel reads the cache, finds an empty key, and throws RuntimeException: No application encryption key has been specified.
The user sees: "I set APP_KEY but the app still crashes." The reason: the config was cached at build time with the wrong value, and the runtime value never gets read because the cache takes priority.
The fix: Move caching to container startup. Generate an entrypoint script that runs caching commands after environment variables are injected:
```dockerfile
RUN printf '#!/bin/bash\nset -e\nphp artisan config:cache --no-interaction 2>/dev/null || true\n... exec apache2-foreground\n' \
    > /usr/local/bin/docker-entrypoint.sh \
    && chmod +x /usr/local/bin/docker-entrypoint.sh
CMD ["/usr/local/bin/docker-entrypoint.sh"]
```

Now config:cache runs at container start, when APP_KEY has its real value. The cached config is correct. The app works.
## The Semantic Overload That Caused Subtle Bugs
The DetectedStack struct had a field called entry_point. For Python, it meant a module reference: "main:app". For Django, it meant a WSGI module: "myproject.wsgi:application". For PHP, it meant a directory: "public", "webroot", "web".
Three completely different semantic meanings in one field. The Dockerfile templates interpreted entry_point differently based on the stack type, with no type safety:
```rust
// PHP template reads entry_point as a directory
let doc_root = stack.entry_point.as_deref().unwrap_or(".");
// "public" → /var/www/html/public ← correct

// FastAPI template reads entry_point as a module
let app_module = stack.entry_point.as_deref().unwrap_or("main:app");
// "main:app" → uvicorn main:app ← correct
```

What happens if a PHP framework accidentally sets a Python-style entry point? Or if a future contributor adds a new PHP framework and uses entry_point for the wrong meaning? The code compiles, the tests pass, and the generated Dockerfile serves from the wrong directory.
The fix: Split the field into two:
```rust
pub struct DetectedStack {
    pub entry_point: Option<String>,   // File/module: "main.py", "main:app"
    pub document_root: Option<String>, // Directory: "public", "webroot", "web"
    // ...
}
```

PHP frameworks now set document_root. Python and Node frameworks continue using entry_point. The separation is now explicit in the data model -- a template that needs a directory reads document_root, and can no longer misinterpret a module reference as a path.
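If you want the compiler itself to police the distinction, newtype wrappers go one step further than separate fields. A sketch — illustrative only; sh0's actual fields remain plain Option<String>:

```rust
// Hypothetical newtypes: each kind of string gets its own type, so
// handing a directory to a function that wants a module reference
// becomes a compile error rather than a runtime surprise.
#[derive(Debug, PartialEq)]
struct EntryPoint(String); // e.g. "main:app"

#[derive(Debug, PartialEq)]
struct DocumentRoot(String); // e.g. "public"

fn uvicorn_cmd(entry: &EntryPoint) -> String {
    format!("uvicorn {}", entry.0)
}

fn apache_doc_root(root: &DocumentRoot) -> String {
    format!("/var/www/html/{}", root.0)
}

fn main() {
    let entry = EntryPoint("main:app".to_string());
    let root = DocumentRoot("public".to_string());
    assert_eq!(uvicorn_cmd(&entry), "uvicorn main:app");
    assert_eq!(apache_doc_root(&root), "/var/www/html/public");
    // uvicorn_cmd(&root); // <- rejected by the compiler
}
```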
## The Missing Stacks
The detector supported 19 stacks. The code review found 3 missing ones that users would encounter in practice:
Flask -- the second most popular Python web framework -- was completely absent. A Flask app with requirements.txt containing flask was detected as generic Python and got CMD ["python", "main.py"]. No gunicorn, no production WSGI server. The app worked in development and crashed under load.
Remix -- one of the most popular React meta-frameworks -- was not detected at all. A Remix project fell through to generic Node.js, which does not know about Remix's build output structure.
Astro static output -- Astro can run in SSR mode (produces a Node.js server) or static mode (produces pure HTML). The detector always assumed SSR. A static Astro project got CMD ["node", "dist/server/entry.mjs"], which does not exist in static builds.
For each, I added both the detection logic and the Dockerfile template. Flask uses gunicorn. Remix uses remix-serve. Astro static mode returns Stack::Static and gets an nginx Dockerfile.
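The post does not show how the Astro static/SSR split is detected. One plausible approach is scanning the config file for an explicit output mode, since Astro defaults to static output. A hedged sketch:

```rust
// Hypothetical check: Astro builds static HTML unless the config opts
// into a server output mode. Matching on the raw config text is crude
// but cheap; a real detector might parse the file more carefully.
fn astro_is_ssr(config_contents: &str) -> bool {
    ["output: 'server'", "output: \"server\"", "output: 'hybrid'"]
        .iter()
        .any(|needle| config_contents.contains(needle))
}

fn main() {
    assert!(astro_is_ssr("export default defineConfig({ output: 'server' })"));
    assert!(!astro_is_ssr("export default defineConfig({ integrations: [] })"));
}
```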
## Python's Silent Failure
Every Python Dockerfile template contained this line:
```dockerfile
RUN pip install --no-cache-dir -r requirements.txt 2>/dev/null || \
    pip install --no-cache-dir . 2>/dev/null || true
```

The intent: try requirements.txt first, fall back to pyproject.toml. The effect: if requirements.txt has a typo, a missing package, or a version conflict, pip fails, the error is suppressed by 2>/dev/null, the fallback || true swallows the failure, and the build continues with no packages installed. The container starts and crashes immediately on import.
The build log shows nothing useful. The user sees ModuleNotFoundError at runtime and has no idea why.
The fix: Conditional installation without error suppression:
```dockerfile
RUN if [ -f requirements.txt ]; then \
        pip install --no-cache-dir --prefix=/install -r requirements.txt; \
    elif [ -f pyproject.toml ]; then \
        pip install --no-cache-dir --prefix=/install .; \
    fi
```

If pip install fails, the build fails. The error is visible in the build log. The user knows exactly which package failed and why.
## DevDependencies in Production
The Node.js Dockerfile template had this structure:
```dockerfile
# Build stage
RUN npm ci          # Installs ALL deps including devDependencies
RUN npm run build

# Production stage
COPY --from=builder /app .   # Copies everything, including devDependencies
```

The production image contained jest, typescript, eslint, prettier, and every other devDependency. For a typical Next.js project, this doubles the image size from ~200MB to ~400MB and exposes development tooling in production.
The fix: Add a prune step after the build:
```dockerfile
RUN npm run build
RUN npm prune --production   # Remove devDependencies

# Production stage
COPY --from=builder /app .   # Now only production deps
```

I added an npm_prune_cmd() helper that returns the right prune command for each package manager: npm prune --production, yarn install --production --ignore-scripts, pnpm prune --prod, or rm -rf node_modules && bun install --production.
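The helper itself is a straightforward match on the detected package manager. A sketch consistent with the commands listed above — the exact signature and the npm fallback for unknown managers are my assumptions:

```rust
// Maps the detected package manager to its devDependency-pruning
// command, mirroring the list in the post. Treating "npm" as the
// catch-all arm is an assumption, not sh0's confirmed behavior.
fn npm_prune_cmd(package_manager: &str) -> &'static str {
    match package_manager {
        "yarn" => "yarn install --production --ignore-scripts",
        "pnpm" => "pnpm prune --prod",
        "bun" => "rm -rf node_modules && bun install --production",
        _ => "npm prune --production",
    }
}

fn main() {
    assert_eq!(npm_prune_cmd("pnpm"), "pnpm prune --prod");
    assert_eq!(npm_prune_cmd("npm"), "npm prune --production");
}
```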
## The Final Count
28 issues fixed in one session. 155 tests passing (up from 143). Three issues deferred for a separate session because they had cross-cutting concerns (Java key dedup across database and dashboard, HealthReport timestamp requiring a new dependency, Go TCP health probe requiring pipeline changes).
Here is the breakdown:
| Severity | Count | Example |
|---|---|---|
| Critical | 7 | Bun beats Next.js, Docker COPY syntax, Laravel config:cache |
| Important | 9 | Flask missing, devDeps in production, pip error swallowing |
| Medium | 9 | Astro config.js variant, Laravel dockerignore, JVM heap opts |
| Info | 3 | Docstring fixes, doc comments |
New detection capabilities added: Flask, Remix, Lumen (distinguished from Laravel), Astro static output, Symfony via symfony.lock, Yii with dedicated Dockerfile.
New Dockerfile features: devDependency pruning, JVM container-aware heap sizing, container startup entrypoint for Laravel, conditional pip install, static site build output detection.
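One of those detection capabilities — telling Lumen apart from Laravel — comes down to a dependency check: both frameworks ship an artisan file, but a Lumen project depends on laravel/lumen-framework in its composer.json. The post does not show sh0's actual logic; here is a plausible sketch:

```rust
// Hypothetical framework discriminator based on composer.json contents.
// Real code would parse the JSON properly; substring matching keeps the
// sketch short.
fn php_framework(composer_json: &str) -> &'static str {
    if composer_json.contains("laravel/lumen-framework") {
        "lumen"
    } else if composer_json.contains("laravel/framework") {
        "laravel"
    } else {
        "php"
    }
}

fn main() {
    assert_eq!(
        php_framework(r#"{"require":{"laravel/lumen-framework":"^10.0"}}"#),
        "lumen"
    );
    assert_eq!(
        php_framework(r#"{"require":{"laravel/framework":"^11.0"}}"#),
        "laravel"
    );
}
```

The order of the checks matters for the same reason as Bug 1: the more specific marker (lumen-framework) must be tested before the more general one.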
## Why Generated Dockerfiles Are Harder Than Handwritten Ones
When you write a Dockerfile by hand, you know your project. You know whether you use Maven or Gradle. You know your document root is public/. You know your Python entry point is app.main:app.
When you generate a Dockerfile, you know nothing. You must infer everything from filesystem markers, dependency manifests, and config file contents. Every inference is a guess. Every guess can be wrong.
The 31 bugs in sh0's stack detector fall into three categories:
- Priority errors -- Bun before Next.js, WordPress matching on wp-content/ alone. The detection order was wrong, and a less-specific match preempted a more-specific one.
- Template errors -- Docker COPY with shell redirects, pip || true, missing prune steps. The generated Dockerfile contained invalid syntax or silently wrong behavior.
- Semantic errors -- entry_point meaning three different things, config cached at build time instead of runtime. The data model conflated different concepts.
Categories 1 and 2 are fixable with better code. Category 3 is fixable only with better types. The document_root field separation is not a feature -- it is a structural guarantee that a PHP directory path is no longer read where a Python module reference is expected.
The more stacks you support, the more these categories compound. sh0 now detects 20 stacks with dozens of sub-framework variants. Each new detection path is another place where priority, template, and semantic errors can creep in.
This is why the code review methodology matters. One session built 28 fixes. An independent audit session will verify them, find regressions, and catch the bugs that the builder's blind spots hid. Then a second audit will catch what the first auditor missed.
Three perspectives. One correct system.
## What This Means for sh0 Users
If you deploy through sh0, every detection and template issue described in this post is now fixed. Your Next.js project using Bun will be detected correctly. Your Laravel app will cache its config at startup, not at build time. Your Flask app will get gunicorn, not python main.py. Your Java app will get container-aware JVM heap sizing.
You did not need to know any of this. That is the point. sh0's job is to look at your code and build the right container, without you writing a Dockerfile, without you configuring ports, without you thinking about production WSGI servers. When the detector is wrong, every deploy is wrong. Getting it right is not a feature -- it is the product.
This is Part 39 of the sh0 engineering series. Previous: 31,000 Translations in One Session. The full series documents how sh0 was built from zero to production by a CEO in Abidjan and an AI CTO, with no human engineering team.