When you deploy a Laravel app on sh0, it automatically gets a subdomain like my-laravel.sh0.app. Every service with an HTTP port gets one: my-laravel-phpmyadmin.sh0.app, my-laravel-redis.sh0.app. No DNS configuration, no proxy setup, no SSL certificate management. It just works.
But file storage was different. When we shipped managed MinIO in the previous session, the S3 API and web console were only accessible via localhost port mappings. You could see http://localhost:55519 in the dashboard, but that's useless if you're on your phone, sharing a link with a teammate, or deploying a frontend that needs a public S3 endpoint.
The question was simple: why don't infrastructure services get the same treatment as apps?
The existing pattern
sh0 already had a mature auto-subdomain system for application services. When a template deploys, the pipeline in templates.rs runs a step called "6b: Auto-subdomain for secondary HTTP services" that does three things:
- Generates the subdomain -- {app_name}-{service_name}.{base_domain}
- Configures Caddy -- creates a reverse proxy route from that domain to the container's IP and port on the Docker sh0-net network
- Creates a DNS record -- calls the Cloudflare API to point the subdomain to the server's public IP
The SSL certificate is handled automatically by Caddy via Let's Encrypt. The whole thing takes about two seconds.
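The subdomain-generation step is plain string formatting. A minimal sketch with a hypothetical helper name (the actual function in templates.rs may be structured differently):

```rust
/// Hypothetical helper illustrating the naming convention; the real
/// code in templates.rs may name and organize this differently.
fn service_subdomain(app_name: &str, service_name: &str, base_domain: &str) -> String {
    format!("{}-{}.{}", app_name, service_name, base_domain)
}

fn main() {
    // Matches the examples above, e.g. my-laravel-phpmyadmin.sh0.app
    println!("{}", service_subdomain("my-laravel", "phpmyadmin", "sh0.app"));
}
```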
File storage already had its own domain system -- users could manually add custom domains through a "Domains" tab, and the backend would configure Caddy and Cloudflare. But the auto part was missing.
The design decision
We had two choices for when auto-domains get assigned:
Option A: Only when the user clicks a button. Safe, explicit, no surprises.
Option B: Automatically at creation time, plus a button for existing instances. Zero-friction for new deployments, with a fallback for instances created before this feature existed.
We went with Option B. The whole point of sh0 is that infrastructure should be invisible. If you create a storage instance, you probably want to access it from outside localhost.
The naming pattern follows the app convention:
- S3 API: {instance-name}-s3.{base_domain} (e.g., system-storage-s3.sh0.app)
- Web Console: {instance-name}-console.{base_domain} (e.g., system-storage-console.sh0.app)
The implementation
A shared helper, not duplicated logic
The existing add_storage_domain handler already did everything needed: validate the domain, check cross-table uniqueness, insert a FileStorageDomain record, configure Caddy, and create a DNS record. We extracted the reusable parts into a new function:
```rust
async fn auto_assign_storage_domains(
    state: &AppState,
    storage_id: &str,
    instance_name: &str,
) -> Result<Vec<FileStorageDomainResponse>> {
    let base_domain = match &state.base_domain {
        Some(bd) => bd.clone(),
        None => return Ok(vec![]), // No base domain configured -- skip silently
    };

    let candidates = [
        (format!("{}-s3.{}", instance_name, base_domain), "api"),
        (format!("{}-console.{}", instance_name, base_domain), "console"),
    ];

    // For each candidate: check uniqueness, insert, DNS, then update Caddy
    // ...
}
```

Key design decisions in this function:
- Returns early if no base_domain -- self-hosted instances without a registered domain skip auto-assignment silently. No error, no log noise.
- Idempotent -- each subdomain is checked against both the app domains table and the file_storage_domains table before insertion. Calling the function twice is safe.
- Caddy routing is batched -- update_proxy_for_storage is called once at the end, not per-domain. It groups all API domains into one Caddy route (upstream port 9000) and all console domains into another (upstream port 9001).
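The cross-table uniqueness check can be sketched with in-memory sets standing in for the domains and file_storage_domains tables. The real code queries the database; everything here is illustrative:

```rust
use std::collections::HashSet;

// In-memory stand-ins for the two database tables (illustrative only).
fn is_domain_free(
    candidate: &str,
    app_domains: &HashSet<String>,
    storage_domains: &HashSet<String>,
) -> bool {
    !app_domains.contains(candidate) && !storage_domains.contains(candidate)
}

fn main() {
    let app_domains: HashSet<String> =
        HashSet::from(["my-laravel.sh0.app".to_string()]);
    let storage_domains: HashSet<String> = HashSet::new();

    // A fresh candidate passes; an already-taken domain is skipped,
    // which is what makes calling the helper twice safe.
    assert!(is_domain_free("system-storage-s3.sh0.app", &app_domains, &storage_domains));
    assert!(!is_domain_free("my-laravel.sh0.app", &app_domains, &storage_domains));
    println!("uniqueness checks passed");
}
```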
Non-fatal on creation
The critical detail: auto-domain assignment during instance creation must not block the creation itself.
```rust
// In create_instance():
if let Err(e) = auto_assign_storage_domains(&state, &created.id, &created.name).await {
    tracing::warn!(instance = %created.name, error = %e,
        "Failed to auto-assign storage domains");
}
```

If Cloudflare is down, if the domain already exists, if anything goes wrong -- the storage instance is still created. The user gets their MinIO, and they can assign domains later via the button.
The button endpoint
For existing instances that were created before this feature, we added a simple endpoint:
```
POST /api/v1/file-storage/{id}/auto-domain
```

It calls the same auto_assign_storage_domains helper. The dashboard shows an "Enable external access" card in the overview tab when no domains exist, matching the pattern used for app services.
What Caddy sees
When the function completes, Caddy's configuration contains two new route blocks. Simplified:
```json
{
  "match": [{"host": ["system-storage-s3.sh0.app"]}],
  "handle": [{
    "handler": "reverse_proxy",
    "upstreams": [{"dial": "172.18.0.5:9000"}]
  }]
}
```

Port 9000 is MinIO's S3 API. Port 9001 is the web console. Caddy handles TLS termination and certificate provisioning automatically.
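The batched grouping that update_proxy_for_storage performs can be sketched like this. The struct and function names are hypothetical, not the real sh0 types:

```rust
// Hypothetical simplification of grouping storage domains into two
// Caddy routes: one for the S3 API, one for the web console.
struct Route {
    hosts: Vec<String>,
    upstream_port: u16,
}

fn group_storage_routes(domains: &[(String, String)]) -> Vec<Route> {
    let pick = |kind: &str| -> Vec<String> {
        domains
            .iter()
            .filter(|(_, t)| t.as_str() == kind)
            .map(|(d, _)| d.clone())
            .collect()
    };
    vec![
        Route { hosts: pick("api"), upstream_port: 9000 },     // MinIO S3 API
        Route { hosts: pick("console"), upstream_port: 9001 }, // web console
    ]
}

fn main() {
    let domains = vec![
        ("system-storage-s3.sh0.app".to_string(), "api".to_string()),
        ("system-storage-console.sh0.app".to_string(), "console".to_string()),
    ];
    let routes = group_storage_routes(&domains);
    println!("{} hosts -> port {}", routes[0].hosts.len(), routes[0].upstream_port);
    println!("{} hosts -> port {}", routes[1].hosts.len(), routes[1].upstream_port);
}
```

The payoff of this shape is that adding a third storage domain never adds a third Caddy route; it just extends the host list of one of the two existing routes.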
The i18n dimension
This session also revealed a gap: dozens of hardcoded English strings across the file storage pages. "Console Username", "Enable subdomain", "Browse", "DNS Active", feature card descriptions -- all raw English.
We added 25 new translation keys across all five languages (English, French, Spanish, Portuguese, Swahili) and replaced every hardcoded string with t() calls. This matters because sh0 targets African developers, many of whom prefer French or Portuguese interfaces.
The domains page (/domains) had the same problem -- table headers like "Service", "Status", "Instance", "Target" were all hardcoded. Fixed those too.
The pattern worth noting
The broader lesson here: when you build a feature for one category of resources (apps), design the underlying system (Caddy + Cloudflare integration) to be reusable. When we built the domain system for file storage a day earlier, we modeled it after the app domain system -- same database schema pattern, same proxy configuration logic, same DNS integration. That made auto-subdomains a matter of calling existing functions in the right order, not building new infrastructure.
The total implementation was one Rust helper function, one API endpoint, one route registration, and a handful of frontend changes. The hard work was already done.