Phase 4 added two features that change how developers interact with sh0 during active development. The first, sh0 watch, removes the need to type sh0 push after every change. The second, WebSocket build log streaming, replaces the 1.5-second HTTP polling loop with real-time log delivery.
Neither feature is required for deployment; both make deployment invisible -- which is exactly the point.
## sh0 watch -- The File Watcher
The concept is simple: watch the project directory for changes, debounce for two seconds, re-run sh0 push. The developer saves a file, and within seconds, the updated version is live.
```
$ sh0 watch
Watching /Users/dev/my-app (debounce: 2000ms)
Pushing my-app
Detected nodejs (Next.js) -- 85/100 health
42 files (2.3 MB) packaged
Uploading OK 0.8s
Building OK 32.4s
[ok] Live in 35.3s
-> https://my-app.sh0.app
Watching for changes...
[14:23:05] Change detected: src/App.tsx
Pushing my-app
43 files (2.3 MB) packaged
Uploading OK 0.6s
Building OK 28.1s
[ok] Live in 30.2s
-> https://my-app.sh0.app
Watching for changes...
```

### Architecture
The watcher uses the notify crate (version 7), which wraps the native filesystem event APIs: FSEvents on macOS, inotify on Linux. The architecture is a channel-based event loop:
```rust
pub async fn run(client: &Sh0Client, args: &WatchArgs) -> Result<()> {
    let project_path = resolve_path(args.path.as_deref())?;

    // Initial push (push_args construction elided)
    push::run(client, &push_args).await?;

    // Set up filesystem watcher
    let (tx, rx) = std::sync::mpsc::channel();
    let mut watcher = notify::RecommendedWatcher::new(tx, notify::Config::default())?;
    watcher.watch(&project_path, notify::RecursiveMode::Recursive)?;
    println!("  Watching for changes...");

    // Event loop
    loop {
        tokio::select! {
            _ = tokio::signal::ctrl_c() => {
                println!("\n  Stopped watching");
                break;
            }
            _ = process_events(&rx, &project_path, client, &push_args, args.debounce) => {}
        }
    }
    Ok(())
}
```

The tokio::select! macro enables graceful shutdown: the watcher responds to Ctrl+C immediately, even during a debounce wait or an active push. Without it, Ctrl+C during a push would require the push to complete (or fail) before the watcher exits.
### Debouncing
File editors do not save atomically. A single "Save" action in VS Code can produce three to five filesystem events: the editor writes to a temporary file, renames it over the original, updates the .git/index, and possibly triggers a formatter that writes again.
Without debouncing, each of those events would trigger a separate push. The debounce logic collects events for a configurable window (default 2000ms) before triggering:
```rust
async fn process_events(
    rx: &Receiver<notify::Result<Event>>,
    project_path: &Path,
    client: &Sh0Client,
    push_args: &PushArgs,
    debounce_ms: u64,
) -> Result<()> {
    // Wait for first event
    let event = rx.recv()?;

    // Drain all events within the debounce window
    tokio::time::sleep(Duration::from_millis(debounce_ms)).await;
    while rx.try_recv().is_ok() {}

    // Filter: ignore changes to excluded paths
    if should_ignore_event(&event, project_path) {
        return Ok(());
    }

    // Re-push
    if let Err(e) = push::run(client, push_args).await {
        eprintln!("  Push failed: {}", e);
        // Keep watching -- do not exit on push failure
    }
    println!("  Watching for changes...");
    Ok(())
}
```

The key design decision is on the error path: push failures print an error and continue watching. A syntax error in the developer's code should not kill the watcher. The developer fixes the error, saves again, and the watcher picks up the next change automatically.
### Shared Ignore Logic
The global audit found that watch.rs had its own ignore logic that diverged from push.rs. Both modules needed to skip the same patterns (.git/, node_modules/, .sh0/, etc.), but the watcher had a simplified version that missed some patterns.
The fix was to extract should_ignore_public() and load_ignore_patterns() from push.rs and share them:
```rust
// In push.rs (made pub(crate))
pub(crate) fn should_ignore_public(
    path: &Path,
    ignore_patterns: &[String],
) -> bool {
    // Check ALWAYS_EXCLUDE patterns
    // Check user-configured patterns from .sh0ignore/.dockerignore/.gitignore
    // ...
}
```

Now both push and watch use identical ignore logic. A change to a file in node_modules/ does not trigger a re-push, regardless of which code path evaluates it.
## WebSocket Build Log Streaming
Phase 1's sh0 push polled the deployment status every 1.5 seconds via HTTP. This works, but it has two problems:
- Latency: Build log lines appear up to 1.5 seconds after the server writes them
- Load: Each poll is a full HTTP request-response cycle, with JSON serialization, database queries, and network overhead
WebSocket streaming solves both. The server pushes new log content as it appears, with sub-100ms latency and no polling overhead.
### Server Side: The Stream Endpoint
The new endpoint lives at GET /api/v1/deployments/:id/stream. It upgrades the HTTP connection to a WebSocket and streams build log content:
```rust
pub async fn deploy_stream(
    ws: WebSocketUpgrade,
    Path(deploy_id): Path<String>,
    State(state): State<AppState>,
    auth: Auth,
) -> impl IntoResponse {
    ws.on_upgrade(move |socket| handle_stream(socket, deploy_id, state, auth))
}

async fn handle_stream(
    mut socket: WebSocket,
    deploy_id: String,
    state: AppState,
    _auth: Auth,
) {
    let mut last_log_len = 0;
    loop {
        // Fetch current deployment state
        let deployment = match Deployment::find_by_id(&state.db, &deploy_id) {
            Ok(Some(d)) => d,
            _ => break,
        };

        // Send new log content
        if let Some(ref log) = deployment.build_log {
            if log.len() > last_log_len {
                let new_content = &log[last_log_len..];
                if socket.send(Message::Text(new_content.to_string())).await.is_err() {
                    break; // Client disconnected
                }
                last_log_len = log.len();
            }
        }

        // Check for terminal state
        match deployment.status.as_str() {
            "running" | "failed" => {
                let status_msg = serde_json::json!({
                    "type": "status",
                    "status": deployment.status,
                    "duration_ms": deployment.duration_ms,
                });
                let _ = socket.send(Message::Text(status_msg.to_string())).await;
                break;
            }
            _ => {}
        }

        tokio::time::sleep(Duration::from_millis(500)).await;
    }
}
```

The server-side polling interval is 500ms (vs the client's 1500ms), which means log lines appear faster. But the real win is that the server only sends data when there is new content. An idle period during Docker image pulling produces zero messages, while a burst of build output streams immediately.
Authentication follows the same pattern as sh0's existing WebSocket endpoints (the terminal and log streaming): the token is passed as a query parameter (?token=...), since WebSocket connections cannot set custom headers during the upgrade handshake.
The global audit found that the token was not URL-encoded in the query parameter. A token containing +, =, or & characters would corrupt the URL. The fix was a single line:
```rust
let encoded_token = percent_encoding::utf8_percent_encode(
    &self.token,
    percent_encoding::NON_ALPHANUMERIC,
);
let ws_url = format!("{}?token={}", base_ws_url, encoded_token);
```

### Client Side: WebSocket First, HTTP Fallback
The push command now tries WebSocket streaming first and falls back to HTTP polling if the connection fails:
```rust
// Try WebSocket streaming first
match stream_build_log_ws(client, &deploy_id, &spinner).await {
    Ok(result) => handle_stream_result(result, &spinner),
    Err(_) => {
        // WebSocket failed -- fall back to HTTP polling
        poll_build_log_http(client, &deploy_id, &spinner).await?
    }
}
```

The fallback is important for two reasons:
- Reverse proxies: Some network configurations strip WebSocket upgrade headers
- Older sh0 servers: A CLI built with WebSocket support must still work against servers that do not have the stream endpoint
The HTTP polling path is the original Phase 1 code, refactored into its own function but otherwise unchanged. The WebSocket path uses tokio-tungstenite, which was already a dependency for the sh0 logs command.
### Shared Phase Detection
Both the WebSocket and HTTP code paths need to detect build phases from log output (to update the spinner message). The original polling code had inline phase detection. Refactoring extracted it into a shared helper:
```rust
fn update_phase_from_log(line: &str, spinner: &ProgressBar) {
    if line.contains("[STEP") {
        if line.contains("Pulling") {
            spinner.set_message("Pulling image");
        } else if line.contains("Building") {
            spinner.set_message("Building");
        } else if line.contains("Starting") {
            spinner.set_message("Starting");
        }
    }
}
```

Both stream_build_log_ws() and poll_build_log_http() call this function for each new log line. The spinner shows the current build phase regardless of whether the data arrived via WebSocket or HTTP.
## The Developer Experience Difference
With Phase 1-3, deploying during development looked like this:
```
# Edit code
# Save
$ sh0 push    # Type the command
# Wait 1-2 seconds for first log line
# Wait 30 seconds for build
# Check the URL
```

With Phase 4:
```
$ sh0 watch   # Type once
# Edit code
# Save
# Log lines appear immediately
# Build completes
# URL is live
# Keep editing...
```

The developer types one command at the start of their session and never thinks about deployment again. Every save triggers a push. Every log line streams in real time. The feedback loop tightens from "save, type command, wait for polling" to "save, see logs."
This is not about saving keystrokes. It is about keeping the developer in flow state. The moment they have to switch contexts -- from writing code to running a deploy command -- they lose focus. Watch mode eliminates that context switch entirely.
## Verification
Both features compile cleanly and pass all existing tests:
- cargo check: zero errors, zero warnings
- cargo test -p sh0: 37/37 pass
- cargo test: full workspace passes
The WebSocket streaming is designed to be tested in integration (server + client), which requires a running sh0 instance. Unit tests cover the phase detection logic and the fallback behavior.
Next in the series: The Auditor Caught What the Builder Missed -- A deep dive into the multi-session audit methodology: 5 Critical, 12 Important, and 19 Minor findings across 3,200 lines of code.