The Problem
sh0 deploys complex stacks. A single WordPress deployment creates three services: the WordPress container, MySQL, and phpMyAdmin. A Laravel deploy? Same story -- laravel, mysql, phpmyadmin. Each service has up to four access URLs: internal (container-to-container), local (host port mapping), external (when enabled), and domain (the public-facing URL).
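To make those four shapes concrete, here is a minimal sketch. The helper name and field layout are hypothetical illustrations, not sh0's actual types:

```typescript
// Hypothetical shape of one service's access points; field names are
// illustrative, not sh0's real API.
interface ServiceUrls {
  internal: string;        // container-to-container, e.g. mysql:3306
  local: string;           // host port mapping, e.g. localhost:62876
  external: string | null; // only populated when external access is enabled
  domain: string | null;   // public-facing URL, when one is attached
}

// Build the four URL variants for a service (sketch).
function buildServiceUrls(
  name: string,
  containerPort: number,
  hostPort: number,
  domain?: string,
): ServiceUrls {
  return {
    internal: `${name}:${containerPort}`,
    local: `localhost:${hostPort}`,
    external: null, // would be the host's public address + port when enabled
    domain: domain ?? null,
  };
}
```

For a MySQL service this would yield `internal: "mysql:3306"` and a `local` entry pointing at whatever host port Docker mapped.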
The original /domains page only showed entries from the `domains` database table -- the public domain records with DNS and SSL status. Useful, but it missed the bigger picture. A user deploying Redis couldn't see `redis:6379` or `localhost:54878` anywhere on this page. They had to navigate into each app individually to find service URLs.
The CEO's feedback was blunt: "all these URLs should be on one page."
The Architecture Decision
We had three options:
- Frontend N+1 -- Fetch all apps, then `GET /api/v1/apps/:id/services` for each. Simple but slow (a Docker inspect per container per app).
- New global endpoint -- `GET /api/v1/services/urls`, which aggregates all service URLs in one call. One HTTP request, but the backend does all the Docker inspecting.
- Hybrid -- Use the existing `GET /api/v1/services` (returns basic DB records), then batch-fetch live URLs.
We went with option 2. The reasoning: this is a self-hosted tool running on the same machine as Docker. Unix socket calls to the Docker daemon are fast. The user experience of a single loading spinner that resolves in under a second beats a waterfall of requests with progressive loading.
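The contrast between options 1 and 2 can be sketched from the consumer side. The aggregated path comes from the option list above; the `/api/v1/apps` path and the response shapes are assumptions, and the fetch function is injected so the sketch stays self-contained:

```typescript
// A fetch-like function, injected so the strategies can be exercised
// without a real server.
type FetchJson = (path: string) => Promise<unknown>;

// Option 1 (rejected): N+1 -- one request for the app list (path assumed),
// then one request per app.
async function loadViaPerApp(fetchJson: FetchJson): Promise<unknown[]> {
  const apps = (await fetchJson("/api/v1/apps")) as { id: string }[];
  const results = [];
  for (const app of apps) {
    results.push(await fetchJson(`/api/v1/apps/${app.id}/services`));
  }
  return results;
}

// Option 2 (chosen): a single aggregated request.
async function loadViaGlobal(fetchJson: FetchJson): Promise<unknown> {
  return fetchJson("/api/v1/services/urls");
}
```

With five deployed apps, the first strategy issues six requests in a waterfall; the second always issues exactly one.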
The Backend: Extract, Extend, Expose
The existing list_services handler for GET /api/v1/apps/:id/services already had 180 lines of URL-building logic: Docker inspect for live port mappings, internal URL construction, local URL formatting (with HTTP prefix detection), domain lookup from the domains table, and connection URL building for database services.
Rather than duplicate all of this, we extracted it into a shared helper:
```rust
async fn build_service_infos(
    state: &AppState,
    app: &App,
    services: &[AppService],
    domains: &[Domain],
    env_map: &HashMap<String, String>,
) -> Result<Vec<ServiceInfoResponse>> {
    // Docker inspect, URL building, domain matching...
}
```

The per-app handler now calls this helper. The new global handler iterates all apps with services, calls the same helper for each, and wraps each result with `app_id` and `app_name`:
```rust
#[derive(Serialize)]
pub struct GlobalServiceInfoResponse {
    pub app_id: String,
    pub app_name: String,
    #[serde(flatten)]
    pub service: ServiceInfoResponse,
}
```

The `#[serde(flatten)]` is key -- it means the JSON output has all fields at the top level, not nested. The frontend type extends naturally:
```typescript
export interface GlobalAppServiceInfo extends AppServiceInfo {
  app_id: string;
  app_name: string;
}
```

The Frontend: One Table to Rule Them All
The rewritten /domains page is a single table with six columns: App, Service, Internal, Local, Domain, Status. The order matters -- App first because "which app does this belong to?" is always the first question. Status last because it's secondary context.
Each URL cell has copy-to-clipboard and (where applicable) an external link icon. The search filter matches across service names, app names, images, and all URL types. A status dropdown filters running vs stopped.
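The matching logic described above can be sketched as two small predicates. The row field names are assumptions based on the table columns, not sh0's actual frontend types:

```typescript
// Assumed shape of one table row; names are illustrative.
interface ServiceRow {
  app_name: string;
  service_name: string;
  image: string;
  urls: string[]; // internal, local, external, domain -- whichever exist
  status: "running" | "stopped";
}

// Case-insensitive search across service names, app names, images,
// and every URL variant.
function matchesSearch(row: ServiceRow, query: string): boolean {
  const q = query.trim().toLowerCase();
  if (q === "") return true;
  const haystack = [row.app_name, row.service_name, row.image, ...row.urls];
  return haystack.some((field) => field.toLowerCase().includes(q));
}

// Status dropdown: "all" passes every row through.
function matchesStatus(
  row: ServiceRow,
  status: "all" | "running" | "stopped",
): boolean {
  return status === "all" || row.status === status;
}
```

A row is shown when both predicates pass, so searching for a port number like `3306` finds the row even though the query matches only a URL, not a name.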
The result: from a single page, you can see that your WordPress deploy has `wordpress:8000` internally, `localhost:61637` locally, and `my-wordpress.sh0.app` as the public domain. Its phpMyAdmin service is at `phpmyadmin:80` / `http://localhost:65240` / `my-wordpress-phpmyadmin.sh0.app`. And its MySQL is at `mysql:3306` / `localhost:62876` with no public domain (as expected).
The Column Order Debate
The first version put Service first, then App. After looking at the actual rendered table with real data, the CEO asked to flip it. When you're scanning a page with 15+ rows across 5 deployed apps, the app name is the anchor -- it groups the rows visually even without explicit grouping. Service name is the detail within that group.
This is a recurring pattern in dashboard design: the column that helps you find what you're looking for should come first, not the column with the most detail.
What We Also Did in This Session
This session was dense. Beyond the domains page:
- Fixed a template name mismatch -- `codeigniter4.yaml` was not being found because the frontend looked up `codeigniter`. A previous session had renamed the file to match the internal `name` field but broke the lookup chain. One rename plus one field change fixed it.
- Moved API Keys to their own settings tab -- They were buried inside the MCP Server section. Now they have a dedicated sidebar entry with their own icon, making them discoverable for users who want API access without knowing about MCP.
- Added missing OpenAPI annotations -- Three endpoints (`GET /domains`, `GET /services`, `GET /services/urls`) had no utoipa annotations, so they were invisible in the API docs. Added the annotations and registered them.
- Updated the marketing site API docs -- The "Other endpoints" table was missing 7 categories (Services, Backups, Certificates, Projects, Redirects, Preview Environments, Settings). Added them all with endpoint counts and descriptions.
The Build-Audit-Ship Cycle
All of this was done in a single session: plan, implement, verify types (`svelte-check --threshold error` returns 0 errors), verify Rust compilation (`cargo check` passes), commit, push. The testing checklist has 19 verification items across 6 categories for the CEO to run through.
The methodology holds: build incrementally, verify at each step, commit atomically. No 500-line PRs that take a day to review -- just focused commits that each do one thing well.