Most people would spend a month writing a design document for a PaaS platform. We spent that time writing the platform itself.
On March 12, 2026, at a desk in Abidjan, Thales opened a terminal and typed cargo init. Twenty-four hours later, sh0.dev existed: 10 Rust crates, 24 database tables, a Docker Engine client built from scratch, a full REST API, a build engine that detects 19 tech stacks, and a static analysis engine with 34 rules. Not a prototype. Not a toy. A foundation that would carry the entire product to launch.
This is the story of that day -- the architecture decisions, the code, and the marathon session that proved a CEO and an AI CTO could build a production PaaS without a single human engineer.
The Bet: Why Rust for a PaaS
The first decision was the most consequential. Every mainstream PaaS -- Heroku, Railway, Render, Coolify -- is built on Go or TypeScript. We chose Rust.
Not because it was trendy. Because we were building a single-binary deployment platform aimed at developers who run their own servers. That binary needed to be fast, small, and self-contained. No runtime. No garbage collector pauses while routing production traffic. No "install Node 18 and then npm install 400 packages" before you can deploy your first app.
Rust gave us one more thing: if the code compiles, an entire class of bugs simply does not exist. When you are two people -- one human, one AI -- building infrastructure software, the compiler is your third team member.
The 10-Crate Workspace
The workspace structure was the skeleton of everything that followed. We designed it on the principle that each crate owns exactly one domain, depends only on what it needs, and can be tested in isolation.
```
sh0/
  Cargo.toml       # workspace root
  crates/
    sh0/           # main binary (CLI + server startup)
    sh0-api/       # Axum HTTP API server
    sh0-auth/      # authentication and API keys
    sh0-backup/    # backup and restore
    sh0-builder/   # stack detection, Dockerfile generation, health checks
    sh0-db/        # SQLite connection pool, migrations, 21 models
    sh0-docker/    # Docker Engine API client (Unix socket)
    sh0-git/       # Git operations and webhook parsing
    sh0-monitor/   # metrics collection and alerting
    sh0-proxy/     # reverse proxy management (Caddy)
```

The workspace Cargo.toml centralised every shared dependency version. This is not just tidiness -- it prevents the insidious bug where sh0-api compiles against serde 1.0.197 while sh0-docker links serde 1.0.195, and some subtle serialisation difference causes a runtime failure at 3 AM.
```toml
[workspace.dependencies]
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1", features = ["full"] }
axum = { version = "0.7", features = ["ws"] }
rusqlite = { version = "0.31", features = ["bundled"] }
r2d2 = "0.8"
r2d2_sqlite = "0.24"
uuid = { version = "1", features = ["v4"] }
chrono = { version = "0.4", features = ["serde"] }
```

Each leaf crate declared its dependencies by referencing the workspace:
```toml
[dependencies]
serde = { workspace = true }
tokio = { workspace = true }
```

One source of truth. Zero version drift.
Phase 1: The Database Foundation
We started with sh0-db because everything else would depend on it. The first question was the database. PostgreSQL? MySQL? We chose SQLite with WAL mode.
The reasoning was pragmatic. sh0 is a single-binary tool. Telling users "before you can deploy your apps, first set up a PostgreSQL cluster" would be absurd. SQLite gives us an embedded, zero-configuration, battle-tested database. WAL (Write-Ahead Logging) mode gives us concurrent reads without blocking writers -- essential when the API server is reading app status while the deploy pipeline is updating it.
The connection pool setup was 30 lines of Rust that would underpin every database operation in the system:
```rust
use r2d2::Pool;
use r2d2_sqlite::SqliteConnectionManager;
use rusqlite::OpenFlags;

pub type DbPool = Pool<SqliteConnectionManager>;

pub fn create_pool(path: &str) -> Result<DbPool, PoolError> {
    let manager = SqliteConnectionManager::file_with_flags(
        path,
        OpenFlags::SQLITE_OPEN_READ_WRITE | OpenFlags::SQLITE_OPEN_CREATE,
    )
    .with_init(|conn| {
        conn.execute_batch(
            "PRAGMA journal_mode=WAL;
             PRAGMA foreign_keys=ON;
             PRAGMA busy_timeout=5000;",
        )
    });

    Pool::builder()
        .max_size(10)
        .build(manager)
        .map_err(PoolError::R2d2)
}
```
Three PRAGMAs. Three critical decisions. journal_mode=WAL for concurrency. foreign_keys=ON because SQLite disables them by default (a footgun that has ruined many projects). busy_timeout=5000 so concurrent writers wait five seconds before giving up instead of immediately returning SQLITE_BUSY.
24 Tables in a Single Migration
The initial migration defined 24 tables: 13 core entities, 7 extended features, and 4 enhanced capabilities. We wrote the entire schema as a single 001_initial.sql file, embedded directly into the binary using Rust's include_str! macro.
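The embedded-migration idea can be sketched like this -- a hypothetical shape, since the struct and field names are assumptions, and inline SQL stands in here for the include_str!-embedded file:

```rust
// Hypothetical sketch of embedded migrations. In the real crate the `sql`
// field would be populated with include_str!("../migrations/001_initial.sql"),
// which embeds the file's contents into the binary at compile time.
pub struct Migration {
    pub version: i64,
    pub name: &'static str,
    pub sql: &'static str,
}

pub static MIGRATIONS: &[Migration] = &[Migration {
    version: 1,
    name: "initial",
    // Inline SQL keeps this sketch self-contained.
    sql: "CREATE TABLE IF NOT EXISTS apps (id TEXT PRIMARY KEY);",
}];
```

Because the SQL is embedded at compile time, the binary carries its schema with it: there is no migrations directory to ship or forget.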
The migration runner itself was deliberately simple:
```rust
pub fn run_migrations(conn: &Connection) -> Result<(), MigrationError> {
    conn.execute_batch(
        "CREATE TABLE IF NOT EXISTS _migrations (
            version INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            applied_at TEXT NOT NULL DEFAULT (datetime('now'))
        );",
    )?;

    let applied: Vec<i64> = conn
        .prepare("SELECT version FROM _migrations")?
        .query_map([], |row| row.get(0))?
        .collect::<rusqlite::Result<Vec<i64>>>()?;

    for migration in MIGRATIONS {
        if !applied.contains(&migration.version) {
            conn.execute_batch(migration.sql)?;
            conn.execute(
                "INSERT INTO _migrations (version, name) VALUES (?1, ?2)",
                params![migration.version, migration.name],
            )?;
        }
    }
    Ok(())
}
```
No ORM. No migration framework with its own DSL and 50 transitive dependencies. Just SQL, a version table, and a loop. It runs in microseconds, it is obvious what it does, and it will never break because some upstream library changed its migration format.
21 Models, One Pattern
Each of the 21 model files followed the same pattern: a struct, a from_row() constructor, and the standard CRUD operations. Here is a representative example:
```rust
impl App {
    pub fn from_row(row: &Row) -> rusqlite::Result<Self> {
        Ok(Self {
            id: row.get("id")?,
            project_id: row.get("project_id")?,
            name: row.get("name")?,
            status: row.get("status")?,
            created_at: row.get("created_at")?,
            updated_at: row.get("updated_at")?,
        })
    }

    pub fn insert(conn: &Connection, app: &NewApp) -> rusqlite::Result<App> { /* ... */ }

    pub fn find_by_id(conn: &Connection, id: &str) -> rusqlite::Result<Option<App>> { /* ... */ }
}
```
Twenty-one times this pattern. Repetitive? Yes. But each model compiles independently, has zero hidden magic, and maps exactly to the SQL underneath. When you are building infrastructure that other people's production apps depend on, boring is a feature.
Phase 2: The Docker Engine Client
With the database in place, we built sh0-docker -- a complete Docker Engine API client communicating over Unix sockets. This was the single most technically challenging piece of the entire day. (We cover it in full detail in the next article, "Writing a Docker Engine Client from Scratch in Rust.")
The key decision: we wrote our own client using hyper 1.x instead of shelling out to the Docker CLI or using an existing library. The result was a custom UnixConnector in about 40 lines, full container lifecycle management, multiplexed stream parsing, and CPU/memory stats computation.
Six unit tests. Five integration test files. Zero external Docker library dependencies.
Phase 3: The API Server
Phase 3 wired the database and Docker client together through an Axum HTTP API. The AppState struct carried everything:
```rust
pub struct AppState {
    pub pool: Arc<DbPool>,
    pub docker: Arc<DockerClient>,
    pub started_at: Instant,
}
```

Three fields. The database pool, the Docker client, and a timestamp for uptime calculation. Every handler received this state via Axum's extractor system.
The route tree was clean and RESTful:
```
GET    /api/v1/health
GET    /api/v1/status
POST   /api/v1/apps
GET    /api/v1/apps
GET    /api/v1/apps/:id
PUT    /api/v1/apps/:id
DELETE /api/v1/apps/:id
POST   /api/v1/apps/:id/deployments
GET    /api/v1/apps/:id/deployments
GET    /api/v1/deployments/:id
WS     /api/v1/apps/:id/logs
```

One design decision is worth highlighting: the AuthUser extractor was implemented as a no-op that always returns an admin user. This was not laziness -- it was architecture. The extractor's type signature matched what real JWT-based authentication would use in Phase 9. Every handler was written to accept AuthUser as a parameter. When we implemented real auth later, we changed one file -- the extractor -- and every handler inherited real authentication without a single line of handler code changing.
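The stub-extractor idea can be sketched without any framework code. The names below are illustrative, not sh0's actual types; in the real server this logic would sit inside an Axum extractor implementation:

```rust
// Illustrative sketch: the stub's signature already matches real auth --
// it takes the Authorization header and returns a Result -- but in the
// stub phase it ignores the input and always succeeds with an admin user.
pub struct AuthUser {
    pub id: String,
    pub is_admin: bool,
}

pub fn extract_auth_user(_authorization: Option<&str>) -> Result<AuthUser, u16> {
    // Real JWT validation replaces this body later; the signature
    // (and therefore every handler that takes AuthUser) stays the same.
    Ok(AuthUser { id: "admin".to_string(), is_admin: true })
}
```

Swapping the body for real validation changes nothing downstream, which is exactly why the stub earned its place on Day Zero.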
The API also included pagination with sensible defaults and guard rails:
```rust
pub struct PaginationParams {
    pub page: u32,     // default 1
    pub per_page: u32, // default 20, clamped to 1..=100
}
```

No user can request page -1 or per_page 10000. The API defends itself.
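The defaults-then-clamp logic can be sketched as a small free function -- a minimal illustration, with the function name and exact handling of missing values being assumptions:

```rust
// Apply defaults, then clamp: page is at least 1, per_page within 1..=100.
fn sanitize_pagination(page: Option<u32>, per_page: Option<u32>) -> (u32, u32) {
    let page = page.unwrap_or(1).max(1);
    let per_page = per_page.unwrap_or(20).clamp(1, 100);
    (page, per_page)
}
```

A request with per_page=10000 comes out as 100, and an absurd page=0 falls back to 1 -- the handler never sees values it cannot serve.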
Phases 5 and 6: Build Engine and Health Checks
The afternoon brought the build engine (sh0-builder), which we split into two distinct capabilities: stack detection with Dockerfile generation, and a code health check engine.
The stack detector examines a project directory and identifies one of 19 technology stacks. The detection is priority-based -- if a user provides their own Dockerfile, that always wins. Otherwise, the engine looks for signature files: package.json for Node.js, go.mod for Go, Cargo.toml for Rust, and so on.
Node.js detection alone has multiple layers: which package manager (npm, yarn, pnpm, bun), which framework (Express, Fastify, Hono, Koa, NestJS), and which meta-framework (Next.js, Nuxt, SvelteKit, Astro). Getting this right means generating the correct Dockerfile -- and getting it wrong means a failed build and a frustrated user.
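The priority ordering can be sketched as a simple cascade. The signature files are the ones named above; the function itself is illustrative, not sh0's actual detector:

```rust
// Illustrative priority-based detection: a user-provided Dockerfile always
// wins; otherwise the first matching signature file decides the stack.
fn detect_stack(files: &[&str]) -> &'static str {
    let has = |name: &str| files.iter().any(|f| *f == name);
    if has("Dockerfile") {
        "dockerfile" // user-provided Dockerfile takes absolute priority
    } else if has("package.json") {
        "nodejs"
    } else if has("go.mod") {
        "go"
    } else if has("Cargo.toml") {
        "rust"
    } else {
        "unknown"
    }
}
```

The ordering is the contract: a repository containing both a Dockerfile and a package.json is treated as a Dockerfile project, never second-guessed.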
The health check engine added 34 static analysis rules across 8 categories, all in pure Rust. No LLM. No network call. Just function pointers and pattern matching, scanning for security issues, misconfigurations, and common deployment mistakes before they reach production. (Both the build engine and health check engine get their own dedicated articles later in this series.)
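The function-pointer approach can be sketched like this -- the rule IDs and checks here are invented for illustration, not taken from sh0's actual rule set:

```rust
// Illustrative rule table: each rule is an ID plus a plain function pointer.
// Running the engine is just iterating the table -- no network, no LLM.
struct Rule {
    id: &'static str,
    check: fn(&str) -> bool, // true if the source text violates the rule
}

fn has_hardcoded_secret(src: &str) -> bool {
    src.contains("AWS_SECRET_ACCESS_KEY=")
}

fn uses_latest_tag(src: &str) -> bool {
    src.contains(":latest")
}

static RULES: &[Rule] = &[
    Rule { id: "no-hardcoded-secret", check: has_hardcoded_secret },
    Rule { id: "no-latest-tag", check: uses_latest_tag },
];

fn run_rules(src: &str) -> Vec<&'static str> {
    RULES.iter().filter(|r| (r.check)(src)).map(|r| r.id).collect()
}
```

Adding a rule means adding one function and one table entry; the engine itself never changes.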
The Moment It All Compiled
At the end of the session, we ran three commands:
```shell
cargo build                    # passes clean
cargo test                     # all tests pass
cargo clippy -- -D warnings    # zero warnings
```

Ten crates. Twenty-four database tables. Twenty-one model files. A Docker Engine client. A full REST API with 12 integration tests. A build engine with 23 unit tests. A health check engine with 34 rules and 82 tests. A CLI binary with four subcommands.
All of it compiled, linked, and tested in one pass. The Rust compiler had verified, at the type level, that these ten crates could interoperate correctly. No runtime surprises. No "it works on my machine."
When Thales ran cargo run -- version and saw sh0 v0.1.0 printed in the terminal, that was the moment sh0.dev stopped being an idea and started being a product.
What Made This Possible
Building a PaaS foundation in 24 hours is not normal. Three factors made it possible.
First, the CEO-AI CTO workflow. Thales made architectural decisions -- Rust, SQLite, the 10-crate split -- based on product intuition and market understanding. Claude implemented them at the speed of thought, writing correct Rust code that compiled on the first or second attempt. There was no back-and-forth about coding style, no pull request review cycle, no "let me set up my dev environment first."
Second, Rust's compiler as a quality gate. In a dynamically typed language, the code might have compiled quickly but hidden dozens of integration bugs. Rust forced us to handle every error, match every type, and make every ownership relationship explicit. The time "lost" to satisfying the borrow checker was time saved from debugging production crashes later.
Third, deliberate simplicity. No ORM. No migration framework. No Docker library. No test framework beyond the standard #[test] macro. Every dependency was chosen because it solved a specific problem (r2d2 for connection pooling, hyper for HTTP, axum for routing) and nothing more. The fewer abstractions between us and the metal, the fewer things that could break.
What Came Next
Day Zero gave us the skeleton. The next days would add muscle: git operations and webhook parsing (Phase 4), the reverse proxy with automatic SSL (Phase 7), the full deploy pipeline tying everything together (Phase 8), real authentication (Phase 9), and monitoring (Phase 10).
But first, we need to talk about the single hardest piece of code we wrote on Day Zero: the Docker Engine client. That is the next article.
---
This is Part 1 of the "How We Built sh0.dev" series, documenting how a CEO in Abidjan and an AI CTO built a complete PaaS platform in 14 days.
Series Navigation: - [1] Day Zero: 10 Rust Crates in 24 Hours (you are here) - [2] Writing a Docker Engine Client from Scratch in Rust - [3] Auto-Detecting 19 Tech Stacks from Source Code - [4] 34 Rules to Catch Deployment Mistakes Before They Happen