sh0 already had a CLI. Ten commands, built on day one, mirroring every dashboard action. Deploy, logs, env vars, health checks, SSH into containers. But there was a gap that none of those commands filled.
A developer clones a repo. They write code. They want it live. In that moment, they should not have to open a browser, navigate to a dashboard, create an app, configure a Git repository, wait for a webhook, and then trigger a build. They should type one command and get a URL.
That command is sh0 push.
```
$ sh0 push
Pushing my-app
Detected nodejs (Next.js) -- 85/100 health
42 files (2.3 MB) packaged
Uploading OK 0.8s
Building OK 32.4s
[ok] Live in 35.3s
-> https://my-app.sh0.app
```

Six lines of output. Zero configuration. From local directory to live URL in 35 seconds.
This article explains every layer of the implementation -- from file packaging to deployment polling -- and the security decisions that shaped the final design.
## The Problem: Too Many Steps Between Code and URL
Before sh0 push, deploying to sh0 required five steps:
- Create an app in the dashboard
- Connect a Git repository
- Configure the build settings
- Push to Git
- Wait for the webhook to trigger a build
This is fine for production workflows. It is terrible for the moment a developer thinks "I want to see this live." That moment demands immediacy. Every extra step is friction, and friction kills adoption.
We studied what Vercel did with vercel --prod, what Fly.io did with fly deploy, and what Instapods demonstrated with instapods deploy my-app. The pattern is always the same: detect the project, package the files, upload them, build on the server, return a URL.
The insight was that sh0 already had 90% of the server-side infrastructure. The upload endpoint existed. Stack detection existed. The build pipeline existed. Domain auto-creation existed. What was missing was CLI glue -- a single command that orchestrated the full flow.
## Step 1: Stack Detection (Reusing What We Had)
sh0's build system already includes a stack detector that recognizes 19 technology stacks by examining project files:
```rust
let stack_result = detect_stack(&project_path, ".").await;
if let Some(ref stack) = stack_result {
    let health = check_health(&project_path, ".").await;
    print_step(&format!(
        "Detected {} ({}) -- {}/100 health",
        stack.stack_type, stack.framework, health.score
    ));
}
```

The detector reads package.json, Cargo.toml, requirements.txt, go.mod, composer.json, and dozens of other project markers. It returns the stack type, the framework, the package manager, and the default port. The health checker then runs 34 rules against the project -- checking for Dockerfiles, .dockerignore, environment variable configuration, and production readiness signals.
Both calls are wrapped in .ok() so that push works even when detection fails. A project without a recognizable stack can still be pushed -- the server falls back to Dockerfile-based detection.
## Step 2: Packaging Files Into a ZIP
This is where security decisions start mattering. The CLI creates an in-memory ZIP archive of the project directory, but it must exclude files that should never leave the developer's machine.
The ignore hierarchy has three file layers, plus a hardcoded exclusion list:

- .sh0ignore -- project-specific exclusions (highest priority)
- .dockerignore -- Docker convention (fallback)
- .gitignore -- Git convention (last resort)
- Always-excluded patterns -- 21 hardcoded patterns that are excluded regardless
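The layer selection above can be sketched as a small helper. This is illustrative only: `pick_ignore_file` is not the CLI's actual function name, and the real code likely parses whichever file it finds rather than just naming it.

```rust
use std::path::Path;

/// Return the first ignore file present, mirroring the documented priority:
/// .sh0ignore beats .dockerignore beats .gitignore. The hardcoded
/// always-exclude patterns apply no matter which file wins.
fn pick_ignore_file(root: &Path) -> Option<&'static str> {
    [".sh0ignore", ".dockerignore", ".gitignore"]
        .into_iter()
        .find(|candidate| root.join(candidate).is_file())
}
```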
The always-excluded list was the subject of the first critical audit finding. Here is what we ship:
```rust
pub(crate) const ALWAYS_EXCLUDE: &[&str] = &[
    ".git", "node_modules", ".next", ".nuxt", ".output",
    "target", "__pycache__", ".venv", "venv", ".tox",
    "dist", "build", ".svelte-kit", ".turbo", ".cache",
    ".DS_Store", "*.pyc", "*.pyo", ".sh0",
    ".env*", // Critical: wildcard, not individual entries
    ".idea", ".vscode",
];
```

The original implementation listed .env, .env.local, .env.production, and .env.development as separate entries. The auditor immediately flagged this: .env.staging, .env.test, .env.custom-anything would leak through. The fix was a single .env* wildcard pattern that catches every variant.
### Size Guards
After the .env* fix, the second audit added client-side resource limits:
```rust
const MAX_ARCHIVE_SIZE: u64 = 500 * 1024 * 1024; // 500 MB
const MAX_FILE_COUNT: u64 = 50_000;

// During ZIP creation:
cumulative_size += content.len() as u64;
file_count += 1;
if cumulative_size > MAX_ARCHIVE_SIZE {
    anyhow::bail!("Archive exceeds 500 MB limit");
}
if file_count > MAX_FILE_COUNT {
    anyhow::bail!("Archive exceeds 50,000 file limit");
}
```

Without these guards, a developer could accidentally try to push a directory containing build artifacts or data files, consuming all available memory during ZIP creation. The server already validates upload size, but catching it on the client prevents a bad experience.
## Step 3: The Upload Client
Uploading a ZIP archive is not the same as making a JSON API call. The default HTTP client has a 30-second timeout -- fine for API requests, insufficient for uploading a 200 MB archive over a slow connection.
```rust
pub fn upload_client() -> Result<reqwest::Client> {
    reqwest::Client::builder()
        .timeout(std::time::Duration::from_secs(300))
        .build()
        .context("Failed to build upload HTTP client")
}
```

The original implementation swallowed builder errors and fell back to an unconfigured 30-second client. This was flagged as Important in the first audit: a developer uploading a large project would hit a silent timeout with no indication of why. The fix was making upload_client() return Result<reqwest::Client>, forcing callers to handle the error explicitly.
The upload itself uses multipart POST:
```rust
let form = reqwest::multipart::Form::new()
    .part("file", reqwest::multipart::Part::bytes(zip_data)
        .file_name("source.zip")
        .mime_str("application/zip")?
    )
    .text("name", app_name.clone())
    .text("port", port.to_string());

// New app vs re-push to existing app
let url = if let Some(app_id) = existing_app_id {
    format!("{}/api/v1/apps/{}/upload", base_url, app_id)
} else {
    format!("{}/api/v1/apps/upload", base_url)
};
```

Two endpoints, one for creating a new app and one for re-uploading to an existing app. The re-upload endpoint was new server-side code: it reuses the existing app record, creates a new deployment with triggered_by: "cli-push", and includes a concurrent deployment guard that returns HTTP 409 if a build is already in progress.
## Step 4: Polling for Build Completion
After upload, the server returns a deployment ID. The CLI polls for build status every 1.5 seconds, streaming new build log lines incrementally:
```rust
let spinner = create_spinner("Building");
let mut last_log_len = 0;

loop {
    let deployment = client.get_deployment(&deploy_id).await?;

    // Stream new log lines
    if let Some(ref log) = deployment.build_log {
        if log.len() > last_log_len {
            let new_content = &log[last_log_len..];
            for line in new_content.lines() {
                update_phase_from_log(line, &spinner);
            }
            last_log_len = log.len();
        }
    }

    match deployment.status.as_str() {
        "running" => {
            spinner.finish_with_message("OK");
            break; // Success
        }
        "failed" => {
            spinner.finish_with_message("FAILED");
            return Err(anyhow!("Deployment failed"));
        }
        _ => {} // Still building, continue polling
    }

    tokio::time::sleep(Duration::from_millis(1500)).await;
}
```

The spinner cleanup was another audit finding. The original code did not clean up the spinner on network errors during polling, leaving the terminal in a corrupted state. The fix was an explicit match block that finishes the spinner on every exit path.
## Step 5: The Link File
On successful deployment, the CLI saves a .sh0/link.json file in the project directory:
```json
{
  "app_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "app_name": "my-app",
  "server_url": "https://sh0.example.com"
}
```

This file serves the same purpose as Vercel's .vercel/ directory: it links a local directory to a remote app. The next time the developer runs sh0 push, the CLI reads the link file and re-deploys to the same app instead of creating a new one.
The write operation is atomic: the CLI writes to a temporary file and then calls std::fs::rename, which is atomic on POSIX systems. This prevents corruption if the process is interrupted during the write.
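The temp-file-then-rename pattern can be sketched in a few lines. This is an illustrative reconstruction, not the CLI's actual helper:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Write link.json atomically: write to a temp file in the same directory,
/// then rename over the final path. rename() is atomic on POSIX, so readers
/// never observe a half-written link file, even if the process is killed.
fn write_link_file(dir: &Path, contents: &str) -> io::Result<()> {
    fs::create_dir_all(dir)?;
    let tmp = dir.join("link.json.tmp");
    fs::write(&tmp, contents)?;
    fs::rename(&tmp, dir.join("link.json"))
}
```

Keeping the temp file in the same directory matters: rename is only atomic within a single filesystem.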
## The App Name Problem
Deriving the app name from the directory name sounds simple until you consider edge cases. The original sanitize_app_name function used char::is_alphanumeric() to filter characters -- but is_alphanumeric() accepts Unicode. A developer with a directory named in Chinese or Arabic characters would get past the client-side sanitization, only to fail with a confusing server validation error (the server requires ASCII-only names).
The Round 2 audit caught this:
```rust
// Before (broken): accepts Unicode
name.chars()
    .filter(|c| c.is_alphanumeric() || *c == '-')
    .collect()

// After (correct): ASCII only
name.chars()
    .filter(|c| c.is_ascii_alphanumeric() || *c == '-')
    .collect()
```

A one-word fix -- is_alphanumeric to is_ascii_alphanumeric -- that prevents a class of confusing errors for developers worldwide.
## Server-Side: The Re-Upload Endpoint
The new POST /api/v1/apps/:id/upload endpoint handles re-pushing to existing apps. The most interesting piece is the concurrent deployment guard:
```rust
// Check for active deployments
if Deployment::has_active_by_app_id(&conn, app_id)? {
    return Err(ApiError::Conflict(
        "A deployment is already in progress for this app".into()
    ));
}
```

The has_active_by_app_id query checks six active statuses: queued, building, pushing, starting, pulling, uploading. If any deployment is in one of these states, the endpoint returns HTTP 409 Conflict instead of starting a second build. Without this guard, two rapid sh0 push commands could create competing deployments that interfere with each other.
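In spirit, the status check reduces to a membership test over those six states. The real check runs as a SQL query; this sketch just makes the state list explicit:

```rust
/// The six statuses the guard treats as "a build is in flight".
/// A deployment in any of these states makes the endpoint answer 409.
const ACTIVE_STATUSES: &[&str] =
    &["queued", "building", "pushing", "starting", "pulling", "uploading"];

fn is_active(status: &str) -> bool {
    ACTIVE_STATUSES.contains(&status)
}
```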
## The CSRF Exemption Trap
The sh0 API uses CSRF protection on state-changing requests. The upload endpoints needed to be exempt (they use Bearer token auth from the CLI, not browser cookies). The original exemption used:
```rust
if path.contains("/upload") {
    // Skip CSRF check
}
```

This was Critical finding C-2: any route with "upload" anywhere in the path would skip CSRF. If a future developer added /api/v1/settings/upload-config, it would silently bypass CSRF protection. The fix was exact path matching:
```rust
if path == "/api/v1/apps/upload"
    || (path.starts_with("/api/v1/apps/") && path.ends_with("/upload")) {
    // Skip CSRF check -- only upload endpoints
}
```

## The Audit Results
Phase 1 went through two independent audit rounds:
Round 1: 3 Critical, 6 Important, 5 Minor findings.
- .env* secret leak (Critical)
- CSRF exemption too broad (Critical)
- process::exit(1) in async context (Critical)
- Upload client swallows errors (Important)
- No ZIP size/count guards (Important)
- Non-atomic link file write (Important)
- No concurrent deployment guard (Important)
- Spinner terminal corruption (Important)
- Missing OpenAPI paths (Important)
Round 2: Verified all 9 Round 1 fixes, found 2 additional Important issues.
- Unicode in sanitize_app_name (Important)
- Empty ZIP detection using byte length instead of file count (Important)
Every Critical and Important finding was fixed. The code went from 36 tests to 37, with a new test specifically for .env* wildcard matching.
## Why This Matters
sh0 push is not technically complex. It is ZIP creation, HTTP upload, and polling. Any developer could write it in a weekend.
What makes it difficult is getting the details right. The .env* leak would have shipped secrets to the server. The CSRF exemption would have weakened security for every future route. The Unicode app name would have produced confusing errors for developers in non-Latin-alphabet countries. The non-atomic link file write would have corrupted state on Ctrl+C.
These are the details that separate a deployment tool from a production deployment tool. And they were all caught not by the developer who wrote the code, but by independent auditors reviewing it with fresh eyes.
That is why sh0 uses a multi-session audit methodology: build, audit, audit, approve. The builder optimizes for features. The auditors optimize for correctness. The methodology converges on both.
Next in the series: From 10 Commands to 30: The Developer Ergonomics Sprint -- Four new commands in one session: sh0 init, sh0 link, sh0 open, and sh0 config.