
Docker Compose on a PaaS: Parsing, Validating, Deploying

How we added Docker Compose support to sh0 -- parsing Compose v3 YAML, validating dependencies, detecting circular references, and deploying multi-service stacks.

Thales & Claude | March 25, 2026 | 7 min read | sh0

docker-compose · yaml · deployment · rust · paas · containers

Docker Compose is the lingua franca of multi-container applications. Every open-source project ships a docker-compose.yml. Every developer has used one. And every PaaS that ignores Compose forces its users to translate their existing configurations into a proprietary format -- a tax that breeds resentment and churn.

We already had a template system that could deploy multi-service applications from custom YAML. The question was: could we accept standard Docker Compose files, parse them with all their idiosyncrasies, and funnel them into the same deployment pipeline? The answer was yes -- but Compose v3 had more edge cases than we expected.

Why Compose Support Matters

Our template system (Article 19) used a clean, purpose-built YAML format. It was elegant. It was also ours. Users arriving at sh0 with existing projects did not have our template files -- they had docker-compose.yml files. Hundreds of them, across hundreds of GitHub repositories, each with slightly different Compose conventions.

Telling these users "rewrite your Compose file in our format" would have been a dealbreaker. Instead, we built a parser that understood Compose v3, a validator that caught errors before deployment, and a converter that transformed Compose services into the same internal representation our template engine used.

Parsing Compose v3: The Type Minefield

The Docker Compose specification is deceptively complex. Fields that look simple have multiple valid representations. Environment variables can be a map or a list. depends_on can be a list of strings or a map with condition keys. Commands can be a string or an array. We had to model every variant:

```rust
#[derive(Debug, Deserialize)]
pub struct ComposeService {
    pub image: Option<String>,
    pub build: Option<serde_yaml::Value>,
    pub ports: Option<Vec<String>>,
    pub environment: Option<ComposeEnv>,
    pub volumes: Option<Vec<String>>,
    pub depends_on: Option<ComposeDependsOn>,
    pub command: Option<ComposeCommand>,
    pub mem_limit: Option<String>,
    pub restart: Option<String>,
}

#[derive(Debug, Deserialize)]
#[serde(untagged)]
pub enum ComposeEnv {
    Map(HashMap<String, String>),
    List(Vec<String>),
}

#[derive(Debug, Deserialize)]
#[serde(untagged)]
pub enum ComposeDependsOn {
    List(Vec<String>),
    Map(HashMap<String, serde_yaml::Value>),
}

#[derive(Debug, Deserialize)]
#[serde(untagged)]
pub enum ComposeCommand {
    String(String),
    List(Vec<String>),
}
```

The #[serde(untagged)] attribute was critical. It told Serde to try each variant in order until one matched. When a Compose file had environment: ["DB_HOST=postgres"], it parsed as ComposeEnv::List. When it had environment: { DB_HOST: postgres }, it parsed as ComposeEnv::Map. Both were valid Compose syntax, and both needed to work.

The mem_limit field was another trap. Compose accepts values like 512m, 1g, 256000000. We wrote a parser that handled all three formats and normalized them to bytes for Docker's memory limit parameter.
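A minimal sketch of such a normalizer, using only the standard library (the function name `parse_mem_limit` matches the helper mentioned above, but the body here is illustrative, not the production code):

```rust
/// Normalize a Compose memory limit ("512m", "1g", "256000000") to bytes.
/// Sketch only: a production parser would also handle "k" and "b" suffixes
/// and uppercase variants.
fn parse_mem_limit(raw: &str) -> Result<u64, String> {
    let s = raw.trim().to_ascii_lowercase();
    // Split off an optional unit suffix; bare numbers are already bytes.
    let (digits, multiplier) = match s.as_bytes().last().copied() {
        Some(b'k') => (&s[..s.len() - 1], 1024u64),
        Some(b'm') => (&s[..s.len() - 1], 1024 * 1024),
        Some(b'g') => (&s[..s.len() - 1], 1024 * 1024 * 1024),
        _ => (s.as_str(), 1),
    };
    digits
        .parse::<u64>()
        .map(|n| n * multiplier)
        .map_err(|_| format!("invalid memory limit: '{}'", raw))
}
```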

Validation: Catching What Serde Cannot

Successful YAML parsing was necessary but not sufficient. A Compose file could parse cleanly and still be undeployable. The validator checked four categories of errors:

Image presence. Every service needed either an image field or a build directive. A service with neither gave Docker nothing to run, so we rejected it with a clear message rather than letting Docker return a cryptic error.

Dependency references. If service web declared depends_on: [db], the validator confirmed that a service named db existed in the Compose file. Typos in dependency names -- depends_on: [postgre] instead of depends_on: [postgres] -- were caught before any container was created.

Circular dependencies. The same DFS-based cycle detection from our template validator applied here. If service A depended on B, and B depended on A, the validator rejected the file with an error listing the cycle.
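The cycle check can be sketched as a three-color DFS over the dependency map (the name `find_cycle` and the exact return shape are illustrative assumptions, not the validator's real signature):

```rust
use std::collections::HashMap;

/// Detect a dependency cycle with a three-color DFS. Returns the first
/// cycle found as a list of service names, or None if the graph is acyclic.
/// Sketch: assumes `deps` maps each service to its declared dependencies.
fn find_cycle(deps: &HashMap<String, Vec<String>>) -> Option<Vec<String>> {
    #[derive(Clone, Copy, PartialEq)]
    enum Color { White, Gray, Black }

    fn visit(
        node: &str,
        deps: &HashMap<String, Vec<String>>,
        color: &mut HashMap<String, Color>,
        stack: &mut Vec<String>,
    ) -> Option<Vec<String>> {
        color.insert(node.to_string(), Color::Gray);
        stack.push(node.to_string());
        for dep in deps.get(node).into_iter().flatten() {
            match color.get(dep).copied().unwrap_or(Color::White) {
                // A gray neighbor means a back edge: report the cycle.
                Color::Gray => {
                    let start = stack.iter().position(|n| n == dep).unwrap();
                    let mut cycle = stack[start..].to_vec();
                    cycle.push(dep.clone());
                    return Some(cycle);
                }
                Color::White => {
                    if let Some(c) = visit(dep, deps, color, stack) {
                        return Some(c);
                    }
                }
                Color::Black => {}
            }
        }
        stack.pop();
        color.insert(node.to_string(), Color::Black);
        None
    }

    let mut color = HashMap::new();
    for node in deps.keys() {
        if color.get(node).copied().unwrap_or(Color::White) == Color::White {
            if let Some(c) = visit(node, deps, &mut color, &mut Vec::new()) {
                return Some(c);
            }
        }
    }
    None
}
```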

Port format. Compose port mappings have multiple valid formats: "8080:80", "8080:80/tcp", "127.0.0.1:8080:80". The validator parsed each format and rejected malformed entries like "not-a-port" or "8080:".
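A sketch of the port parsing, covering the three formats listed above (the helper name `parse_port_mapping` and its return tuple are our illustration, not the validator's actual API):

```rust
/// Parse a Compose port mapping into (host_ip, host_port, container_port).
/// Handles "8080:80", "8080:80/tcp", and "127.0.0.1:8080:80"; rejects
/// malformed entries like "not-a-port" or "8080:".
fn parse_port_mapping(spec: &str) -> Result<(Option<String>, u16, u16), String> {
    // Strip an optional protocol suffix such as "/tcp" or "/udp".
    let without_proto = spec.split('/').next().unwrap_or(spec);
    let parts: Vec<&str> = without_proto.split(':').collect();
    let err = || format!("invalid port mapping: '{}'", spec);
    match parts.as_slice() {
        [host, container] => Ok((
            None,
            host.parse().map_err(|_| err())?,
            container.parse().map_err(|_| err())?,
        )),
        [ip, host, container] => Ok((
            Some(ip.to_string()),
            host.parse().map_err(|_| err())?,
            container.parse().map_err(|_| err())?,
        )),
        _ => Err(err()),
    }
}
```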

Nineteen unit tests covered every validation path. Each test used a deliberately malformed Compose file and verified that the correct error message was returned.

Conversion: From Compose to Internal Representation

The converter transformed ComposeService instances into ResolvedService structs -- the same type our template deployment pipeline consumed. This was the bridge that let Compose files reuse 100% of the existing deployment infrastructure:

```rust
pub fn convert_to_services(
    compose: &ComposeFile,
) -> Result<Vec<ResolvedService>, Vec<String>> {
    let mut services = Vec::new();
    let mut errors = Vec::new();

    for (name, svc) in &compose.services {
        let image = match &svc.image {
            Some(img) => img.clone(),
            None => {
                errors.push(format!("Service '{}': no image specified", name));
                continue;
            }
        };

        let env = normalize_env(&svc.environment);
        let ports = parse_ports(&svc.ports);
        let memory_limit = svc.mem_limit.as_deref()
            .map(parse_mem_limit)
            .transpose()
            .map_err(|e| errors.push(e))
            .ok()
            .flatten();

        services.push(ResolvedService {
            name: name.clone(),
            image,
            env,
            ports,
            volumes: parse_volumes(&svc.volumes),
            depends_on: extract_depends_on(&svc.depends_on),
            memory_limit,
            command: normalize_command(&svc.command),
        });
    }

    if errors.is_empty() {
        Ok(services)
    } else {
        Err(errors)
    }
}
```

Environment variable normalization was where the Compose variants collapsed into a single format. ComposeEnv::List entries like "DB_HOST=postgres" were split on the first =. ComposeEnv::Map entries were iterated directly. Both produced the same HashMap.
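The list-form split can be sketched in a few lines (the name `normalize_env_list` is a hypothetical stand-in for the list branch of the `normalize_env` helper mentioned above):

```rust
use std::collections::HashMap;

/// Normalize list-form Compose environment entries ("KEY=value") into
/// key/value pairs, splitting only on the first '='. Sketch: map-form
/// entries would be copied into the same HashMap directly.
fn normalize_env_list(entries: &[&str]) -> HashMap<String, String> {
    let mut env = HashMap::new();
    for entry in entries {
        match entry.split_once('=') {
            Some((key, value)) => env.insert(key.to_string(), value.to_string()),
            // A bare "KEY" with no '=' maps to an empty value.
            None => env.insert(entry.to_string(), String::new()),
        };
    }
    env
}
```

Splitting on the first `=` rather than every `=` matters: a value like `OPTS=a=b` must keep its embedded equals sign intact.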

Stack Detection: Git Push Deploys

Compose support was not limited to the API and dashboard. When a user pushed code to sh0 via git, the build pipeline's stack detector would check for Compose files:

```rust
// Detection priority: Dockerfile > docker-compose.yml > framework detection
if repo_contains("docker-compose.yml")
    || repo_contains("docker-compose.yaml")
    || repo_contains("compose.yml")
    || repo_contains("compose.yaml")
{
    return Stack::DockerCompose {
        label: "Docker Compose".to_string(),
        default_port: 0,
        needs_dockerfile: false,
    };
}
```

When the stack detector identified a Compose file, the deploy pipeline read its contents, passed it through the parser, validator, and converter, and then executed the same multi-service deployment flow. Users could git push a repository containing only a docker-compose.yml and sh0 would deploy it automatically.

The detection priority was important: a Dockerfile took precedence over a Compose file. If a repository had both, the user intended a custom build, not a multi-service Compose deployment. Three tests verified the detection logic, including edge cases with alternate filenames.

The API and CLI

The Compose endpoints mirrored the template API in structure but accepted raw YAML instead of template names:

```bash
# Validate without deploying
sh0 compose validate docker-compose.yml

# Deploy a Compose file
sh0 compose deploy docker-compose.yml --app-name mystack

# Deploy with variable overrides
sh0 compose deploy ./compose.yaml \
  --app-name production \
  --var DB_PASSWORD=secure123
```

The validate command was deliberately separate from deploy. It let users check their Compose files for errors before committing to a deployment. The response included the list of detected services, their images, port mappings, and any warnings about unsupported Compose features.

The dashboard added a "Deploy Compose" button on the Apps page. Clicking it opened a modal with a YAML textarea where users could paste their Compose file, a variable overrides section, and a validation preview that showed the parsed services before deployment began.

Reusing the Template Pipeline

The most valuable architectural decision was making the template deployment pipeline reusable. When we built the template system, the deployment helpers -- network creation, volume provisioning, topological sorting, container creation, Caddy routing -- were all implemented as standalone functions. For Compose support, we changed their visibility from private to pub(crate):

```rust
// In handlers/templates.rs -- changed from private to crate-visible
pub(crate) fn ensure_network(docker: &DockerClient, name: &str) -> Result<()>;
pub(crate) fn create_volumes(docker: &DockerClient, volumes: &[VolumeSpec]) -> Result<()>;
pub(crate) fn topological_sort(services: &[ResolvedService]) -> Vec<Vec<&ResolvedService>>;
pub(crate) fn create_container(docker: &DockerClient, svc: &ResolvedService, ...) -> Result<()>;
pub(crate) fn configure_routing(proxy: &ProxyManager, app: &App, port: u16) -> Result<()>;
```

Seven functions were made pub(crate). The Compose handler called them in the same order as the template handler. Zero deployment logic was duplicated. When we later fixed a bug in container creation or improved the Caddy routing configuration, both code paths benefited automatically.
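The wave-based ordering behind `topological_sort` (which returns `Vec<Vec<&ResolvedService>>`, i.e. groups of services that can start in parallel) can be sketched over plain name maps; `topo_waves` here is a simplified illustration, not the shared helper itself:

```rust
use std::collections::{HashMap, HashSet};

/// Group services into deployment waves: wave N contains every service
/// whose dependencies are all satisfied by waves 0..N. Sketch: assumes
/// dependency names have already been validated, so an empty wave with
/// services remaining can only mean a cycle.
fn topo_waves(deps: &HashMap<String, Vec<String>>) -> Vec<Vec<String>> {
    let mut remaining: HashSet<String> = deps.keys().cloned().collect();
    let mut done: HashSet<String> = HashSet::new();
    let mut waves = Vec::new();
    while !remaining.is_empty() {
        let mut wave: Vec<String> = remaining
            .iter()
            .filter(|name| deps[*name].iter().all(|d| done.contains(d)))
            .cloned()
            .collect();
        if wave.is_empty() {
            break; // cycle: validation should have rejected this earlier
        }
        wave.sort(); // deterministic ordering within a wave
        for name in &wave {
            remaining.remove(name);
            done.insert(name.clone());
        }
        waves.push(wave);
    }
    waves
}
```

A `web -> [db, cache]` stack yields two waves: `db` and `cache` start together, then `web`.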

The Numbers

At the end of the session, the Compose system added 19 unit tests to the suite, bringing the total to 327. The parser handled every Compose v3 construct we tested against: services, volumes, networks, environment variables in both formats, depends_on in both formats, commands in both formats, memory limits in three formats, and port mappings in four formats.

The entire implementation -- parser, validator, converter, API endpoints, CLI commands, dashboard UI, i18n in five languages, stack detection, and pipeline integration -- was completed in a single session. It compiled clean, tested clean, and deployed its first Compose file on the first try.

Not because we were lucky. Because the template pipeline had already solved multi-service deployment, and the Compose system was a translator, not a second engine.

---

Next in the series: Backup Engine: AES-256-GCM, 13 Storage Providers, and FTP Nightmares -- how we built encrypted backups with pluggable storage, and the IPv6 FTP bug that forced us to write our own client.
