
119 One-Click Templates: From WordPress to Ollama

How we built a YAML-based template system with variable substitution, dependency ordering, and 119 production-ready templates covering databases, CMS, AI/ML, and more.

Thales & Claude | March 25, 2026 | 10 min read

Tags: templates, yaml, deployment, docker, paas, one-click, self-hosted

Every PaaS lives or dies by a single question: how fast can a new user go from "I signed up" to "my app is running"? If the answer involves writing a Dockerfile, configuring environment variables, setting up a database, and wiring a reverse proxy -- you have already lost them. They will close the tab and spin up a $5 VPS with a shell script they found on Reddit.

We needed one-click deploys. Not ten templates. Not thirty. One hundred and nineteen, covering everything from WordPress to Ollama, from PostgreSQL to ERPNext. And each template had to handle the complexity that users should never see: variable generation, dependency ordering, volume provisioning, and service networking.

This is how we built a YAML-based template engine in Rust, embedded 119 templates directly into the sh0 binary, and shipped a one-click app store in three days.

The Template Schema

The foundation was a YAML schema that could express any multi-service application. We needed something simpler than Docker Compose but more powerful than a flat configuration file. The result was a custom format designed for one-click deployment:

```yaml
name: wordpress
description: WordPress with MySQL database
category: cms
tags: [wordpress, cms, blog, php]
icon: wordpress

variables:
  - name: MYSQL_ROOT_PASSWORD
    description: MySQL root password
    type: secret_32
  - name: MYSQL_DATABASE
    default: wordpress
  - name: WORDPRESS_TABLE_PREFIX
    default: wp_
    required: false

services:
  - name: wordpress
    image: wordpress:6-apache
    port: 80
    expose: true
    env:
      WORDPRESS_DB_HOST: mysql:3306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
      WORDPRESS_DB_NAME: "${MYSQL_DATABASE}"
      WORDPRESS_TABLE_PREFIX: "${WORDPRESS_TABLE_PREFIX}"
    depends_on: [mysql]
    volumes:
      - name: wp-data
        mount: /var/www/html

  - name: mysql
    image: mysql:8
    env:
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
      MYSQL_DATABASE: "${MYSQL_DATABASE}"
    volumes:
      - name: db-data
        mount: /var/lib/mysql
```

The key design decisions: variables are declared at the top level with types that drive auto-generation. Services reference variables via ${VAR} placeholders. One service is marked expose: true -- that is the one that gets the public domain and Caddy routing. Everything else is internal.

Type-Safe YAML Parsing

The Rust types for the template schema used Serde's derive macros with careful attention to optional fields and default values:

```rust
#[derive(Debug, Deserialize, Serialize)]
pub struct Template {
    pub name: String,
    pub description: String,
    pub category: String,
    pub tags: Vec<String>,
    pub icon: Option<String>,
    pub variables: Vec<VariableDef>,
    pub services: Vec<ServiceDef>,
}

#[derive(Debug, Deserialize, Serialize)]
pub struct VariableDef {
    pub name: String,
    pub description: Option<String>,
    #[serde(rename = "type", default)]
    pub var_type: Option<String>,
    pub default: Option<String>,
    #[serde(default = "default_true")]
    pub required: bool,
}
```

The var_type field (renamed from type to avoid Rust keyword collision) controlled automatic value generation. When a variable had type secret_32, the engine generated a 32-byte hex-encoded random string. When it had type password, it generated a 16-character alphanumeric password. When it had type secret_64, a 64-byte secret. Users never needed to invent passwords for their databases -- the template engine did it for them.
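The post references generate_hex and generate_password without showing them. Here is a minimal, dependency-free sketch of what they might look like; the xorshift PRNG is a stand-in for illustration only, and a real implementation would draw from a CSPRNG such as rand's OsRng:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Stand-in PRNG so the example stays dependency-free. This is NOT
// cryptographically secure; the real engine should use a CSPRNG
// such as `rand::rngs::OsRng`.
struct Rng(u64);

impl Rng {
    fn new() -> Self {
        let nanos = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .subsec_nanos() as u64;
        Rng(nanos | 1) // xorshift must be seeded with a nonzero state
    }
    fn next(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0
    }
}

/// `secret_32` / `secret_64`: n random bytes, hex-encoded (2n output chars).
fn generate_hex(n_bytes: usize) -> String {
    let mut rng = Rng::new();
    (0..n_bytes)
        .map(|_| format!("{:02x}", (rng.next() & 0xff) as u8))
        .collect()
}

/// `password`: n random alphanumeric characters.
fn generate_password(n_chars: usize) -> String {
    const CHARS: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    let mut rng = Rng::new();
    (0..n_chars)
        .map(|_| CHARS[(rng.next() % CHARS.len() as u64) as usize] as char)
        .collect()
}
```

Note the output lengths: secret_32 produces 32 random bytes, so the hex string a user sees is 64 characters long.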

The Variable Substitution Engine

Variable substitution was more than a simple string replacement. The engine had to handle three layers of variables: built-in values (like APP_NAME and DOMAIN that are only known at deploy time), auto-generated secrets, and user-provided overrides.

```rust
pub fn resolve_variables(
    template: &Template,
    app_name: &str,
    domain: &str,
    user_vars: &HashMap<String, String>,
) -> Result<HashMap<String, String>, Vec<String>> {
    let mut resolved = HashMap::new();
    let mut errors = Vec::new();

    // Built-in variables
    resolved.insert("APP_NAME".to_string(), app_name.to_string());
    resolved.insert("DOMAIN".to_string(), domain.to_string());

    for var in &template.variables {
        if let Some(value) = user_vars.get(&var.name) {
            resolved.insert(var.name.clone(), value.clone());
        } else if let Some(ref var_type) = var.var_type {
            match var_type.as_str() {
                "secret_32" => resolved.insert(var.name.clone(), generate_hex(32)),
                "secret_64" => resolved.insert(var.name.clone(), generate_hex(64)),
                "password" => resolved.insert(var.name.clone(), generate_password(16)),
                _ => {
                    errors.push(format!("Unknown type: {}", var_type));
                    None
                }
            };
        } else if let Some(ref default) = var.default {
            resolved.insert(var.name.clone(), default.clone());
        } else if var.required {
            errors.push(format!("Required variable '{}' not provided", var.name));
        }
    }

    if errors.is_empty() {
        Ok(resolved)
    } else {
        Err(errors)
    }
}
```

The priority chain was deliberate: user overrides beat auto-generation, auto-generation beats defaults, defaults beat nothing. If a required variable had no value after all three passes, the deployment was rejected with a clear error message listing every missing variable.
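Once the map is resolved, each env value still has to have its ${VAR} placeholders replaced. A sketch of that final pass might look like this (the helper name substitute is hypothetical, not from the sh0 source):

```rust
use std::collections::HashMap;

/// Apply the resolved variable map to a single string value, replacing each
/// `${VAR}` placeholder. Unknown placeholders are left intact so validation
/// can report them. (Hypothetical helper; not the sh0 implementation.)
fn substitute(input: &str, vars: &HashMap<String, String>) -> String {
    let mut out = String::with_capacity(input.len());
    let mut rest = input;
    while let Some(start) = rest.find("${") {
        out.push_str(&rest[..start]);
        match rest[start + 2..].find('}') {
            Some(end) => {
                let name = &rest[start + 2..start + 2 + end];
                match vars.get(name) {
                    Some(value) => out.push_str(value),
                    // unknown variable: keep the placeholder verbatim
                    None => out.push_str(&rest[start..start + 2 + end + 1]),
                }
                rest = &rest[start + 2 + end + 1..];
            }
            None => {
                // unterminated placeholder: keep the remainder verbatim
                out.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out
}
```

Leaving unknown placeholders intact, rather than silently replacing them with an empty string, is what makes "missing variable" errors visible instead of turning into mysterious container misconfigurations.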

Validation: Catching Errors Before They Reach Docker

Template validation was the safety net between a YAML file and a running container. We implemented 19 unit tests covering every validation rule:

  • At least one service must exist
  • Exactly one service must be marked expose: true
  • Image references must be valid Docker image formats
  • depends_on references must point to existing services
  • Volume mount references must correspond to declared volumes
  • Variable names must match [A-Z_][A-Z0-9_]*
  • No circular dependencies between services

The circular dependency check used depth-first search with a visited/in-stack tracking pattern. Without it, a template with service-a depends_on service-b and service-b depends_on service-a would cause the deployment engine to deadlock -- or worse, partially deploy and leave broken containers behind.
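The visited/in-stack pattern described above can be sketched as follows (illustrative code with assumed names; the sh0 validator's actual signature may differ):

```rust
use std::collections::HashMap;

/// Depth-first search with a visited set and an "on the current recursion
/// stack" set; a back edge to a node still on the stack means a cycle.
fn has_cycle<'a>(deps: &HashMap<&'a str, Vec<&'a str>>) -> bool {
    fn visit<'a>(
        node: &'a str,
        deps: &HashMap<&'a str, Vec<&'a str>>,
        visited: &mut Vec<&'a str>,
        in_stack: &mut Vec<&'a str>,
    ) -> bool {
        if in_stack.contains(&node) {
            return true; // back edge: node is an ancestor of itself
        }
        if visited.contains(&node) {
            return false; // already fully explored, no cycle through here
        }
        visited.push(node);
        in_stack.push(node);
        for &dep in deps.get(node).map(|v| v.as_slice()).unwrap_or(&[]) {
            if visit(dep, deps, visited, in_stack) {
                return true;
            }
        }
        in_stack.pop();
        false
    }
    let (mut visited, mut in_stack) = (Vec::new(), Vec::new());
    deps.keys()
        .any(|&node| visit(node, deps, &mut visited, &mut in_stack))
}
```

The in-stack set is what distinguishes a genuine cycle from a diamond dependency (two services sharing the same database), which is legal and common.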

Topological Sort for Service Ordering

When deploying a WordPress template, the MySQL container must be running before WordPress starts. When deploying Plausible Analytics, PostgreSQL and ClickHouse must both be up before the Plausible container launches. The deployment engine needed a topological sort that respected the dependency graph.

We implemented Kahn's algorithm: start with services that have no dependencies, deploy them, then deploy services whose dependencies are all satisfied, and repeat. The topological sort produced a deployment sequence that the background task executed one layer at a time: pull image, create container, start container, wait for health, then proceed to the next layer.

For Plausible, the sorted order was: [postgres, clickhouse] (parallel, no deps), then [plausible] (depends on both). For Gitea: [postgres], then [gitea]. The sort handled arbitrary depth -- a template could have five layers of dependencies and the engine would get the order right.
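A layered version of Kahn's algorithm can be sketched like this (illustrative code with assumed names, not the sh0 source):

```rust
use std::collections::HashMap;

/// Kahn's algorithm, grouped into layers: each layer holds services whose
/// dependencies are all satisfied by earlier layers, so one layer can be
/// deployed in parallel.
fn deploy_layers<'a>(deps: &HashMap<&'a str, Vec<&'a str>>) -> Vec<Vec<&'a str>> {
    // in-degree = number of not-yet-deployed dependencies per service
    let mut indegree: HashMap<&str, usize> =
        deps.iter().map(|(&s, d)| (s, d.len())).collect();
    let mut layers = Vec::new();
    while !indegree.is_empty() {
        // everything with zero unmet dependencies forms the next layer
        let mut layer: Vec<&str> = indegree
            .iter()
            .filter(|&(_, &d)| d == 0)
            .map(|(&s, _)| s)
            .collect();
        if layer.is_empty() {
            break; // remaining nodes form a cycle (validation rejects this earlier)
        }
        layer.sort(); // deterministic order for the example
        for &s in &layer {
            indegree.remove(s);
        }
        // deploying a service satisfies one dependency of each dependent
        for (&s, d) in deps {
            if indegree.contains_key(s) {
                let met = d.iter().filter(|x| layer.contains(*x)).count();
                *indegree.get_mut(s).unwrap() -= met;
            }
        }
        layers.push(layer);
    }
    layers
}
```

For the Plausible graph, this yields [postgres, clickhouse] as the first layer and [plausible] as the second, matching the deployment order described above.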

Embedding 119 Templates in the Binary

This was one of the more elegant Rust patterns we used. The include_dir crate allowed us to embed the entire templates/ directory into the compiled binary at build time:

```rust
use include_dir::{include_dir, Dir};

static TEMPLATES_DIR: Dir = include_dir!("$CARGO_MANIFEST_DIR/../../templates");

pub fn list_templates() -> Vec<TemplateSummary> {
    TEMPLATES_DIR
        .files()
        .filter(|f| f.path().extension() == Some("yaml".as_ref()))
        .filter_map(|f| {
            let content = f.contents_utf8()?;
            let template: Template = serde_yaml::from_str(content).ok()?;
            Some(TemplateSummary {
                name: template.name,
                description: template.description,
                category: template.category,
                tags: template.tags,
                icon: template.icon,
            })
        })
        .collect()
}
```

No filesystem access. No configuration directory. No downloading template files from a registry. Every template was compiled directly into the sh0 binary. When a user ran sh0 templates list, it parsed the embedded YAML files and returned them instantly. When they ran sh0 templates deploy wordpress, the binary already had the template in memory.

This design choice had a secondary benefit: template integrity. Users could not accidentally corrupt a template file. The binary was the source of truth.

From 10 to 119: The Template Expansion

The initial Phase 16 session produced 10 templates: the essentials (WordPress, Ghost, PostgreSQL, MySQL, Redis, MinIO, Gitea, Plausible, Umami, Uptime Kuma). Then we expanded in batches.

The final inventory of 119 templates spanned nine categories:

| Category | Count | Examples |
| --- | --- | --- |
| CMS & E-Commerce | 16 | WordPress, Ghost, Strapi, Directus, Payload, WooCommerce, PrestaShop, Medusa |
| Databases | 6 | PostgreSQL, MySQL, Redis, MongoDB, MariaDB, ClickHouse |
| Analytics | 6 | Plausible, Umami, PostHog, Matomo, Grafana, Prometheus |
| Auth & Identity | 6 | Keycloak, Authentik, Logto, SuperTokens, Authelia, Zitadel |
| AI & ML | 7 | Ollama, Open WebUI, Dify, Flowise, Langfuse, LocalAI, AnythingLLM |
| DevTools | 12 | Gitea, Forgejo, Jenkins, SonarQube, Vault, Verdaccio, Nexus |
| Communication | 5 | Rocket.Chat, Mattermost, Chatwoot, Listmonk, Cal.com |
| Productivity | 8 | Nextcloud, Plane, NocoDB, Baserow, Outline, BookStack, Vikunja |
| Infrastructure | 53 | Queues, search, networking, media, finance, education, forums |

Each template was validated by an integration test that parsed every YAML file, verified the structure, checked variable naming conventions, confirmed that database services had persistent volumes, and validated auto-generation types. The test ran against all 119 templates in a single pass:

```rust
#[test]
fn all_templates_parse_and_validate() {
    let templates = list_templates();
    assert_eq!(templates.len(), 119);
    for summary in &templates {
        let template = get_template(&summary.name)
            .expect(&format!("Template '{}' should load", summary.name));
        let errors = validate_template(&template);
        assert!(errors.is_empty(),
            "Template '{}' has validation errors: {:?}",
            summary.name, errors);
    }
}
```

If a single template had a malformed variable name, a missing volume declaration, or a circular dependency, the test suite caught it before a release binary could ship.

The Dashboard: A Template Store

The dashboard presented templates as a browsable store with category filter tabs (All, CMS, Database, Analytics, Dev Tools, Monitoring, Storage, AI/ML) and a search input that filtered by name, description, and tags.

Each template card showed the name, description, category badge, and a "Deploy" button. Clicking "Deploy" opened a modal with an app name input and a dynamically generated form for the template's variables. Required variables were marked with a badge. Auto-generated variables showed an "(auto)" badge and had pre-filled values that users could override. Optional variables had sensible defaults.

The deploy modal sent a single API call to POST /api/v1/templates/:name/deploy. The backend validated the template, resolved all variables, performed the substitution, created the app record, and spawned a background deployment task. The user saw real-time status updates as each service was pulled, created, started, and connected to the network.

The CLI Experience

For terminal users, the templates were equally accessible:

```bash
# List all templates
sh0 templates list

# Get details about a template
sh0 templates info plausible

# Deploy with auto-generated secrets
sh0 templates deploy wordpress --app-name my-blog

# Deploy with custom variables
sh0 templates deploy ghost --app-name newsletter \
  --var MAIL_HOST=smtp.example.com \
  --var [email protected]
```

The sh0 templates info command displayed the template description, all services with their images, and every variable with its type, default, and required status. It was the equivalent of reading the documentation -- except the documentation was always in sync with the actual template because it was generated from the same YAML source.

What Made This Work

Three architectural choices made the template system reliable at scale:

Embedded templates with compile-time inclusion meant zero runtime dependencies on the filesystem. The binary was the distribution format for templates. No download step. No version mismatch between the engine and the templates.

Strict validation before deployment meant that errors were caught before a single Docker API call was made. A template with a bad dependency reference was rejected with a clear message, not discovered after three containers had already been created and needed manual cleanup.

Topological sort for deployment ordering meant that multi-service templates just worked. Users did not need to understand that ClickHouse had to start before Plausible, or that PostgreSQL needed to be healthy before Gitea could connect. The engine handled the orchestration.

The result: 119 applications, deployable in under a minute each, with a single click or a single CLI command. No Docker expertise required. No YAML to write. No environment variables to research.

---

Next in the series: Docker Compose on a PaaS: Parsing, Validating, Deploying -- how we added support for standard Docker Compose files, letting users bring their existing docker-compose.yml and deploy it on sh0 without modification.
