```
Backup failed: Volume path does not exist: 7fe348ca-4f53-4257-a318-ef9fedc68374
```

This error tells the full story in one line. The backup engine received a UUID as the "volume path" and tried to open it as a filesystem directory. The UUID was the app ID -- the frontend sent it as `source_id` for a volume backup. The engine passed it to `backup_volume(Path::new(&path))`, which checked `path.exists()` on the host filesystem. A UUID is not a directory.
Even if the engine had received the correct path (`/var/lib/postgresql/data`), it would still fail. That path exists inside the Docker container, not on the host. On macOS, Docker runs in a Linux VM -- there is no `/var/lib/postgresql/data` on the host at all. On Linux, the Docker volume data lives in `/var/lib/docker/volumes/<volume_name>/_data/`, which requires root access, bypasses Docker's management of the volume, and only works for the local volume driver.
The correct approach is to never touch the host filesystem. Use Docker's own API to copy files out of containers.
## The Original Implementation
```rust
pub fn backup_volume(path: &Path) -> Result<Vec<u8>> {
    if !path.exists() {
        return Err(BackupError::BackupFailed(format!(
            "Volume path does not exist: {}",
            path.display()
        )));
    }
    let encoder = GzEncoder::new(Vec::new(), Compression::default());
    let mut builder = Builder::new(encoder);
    builder.append_dir_all(".", path)?;
    // ...
}
```

This function treats the volume as a local directory. It opens it with `std::fs`, reads every file, and writes them into a tar archive. For bind mounts, where the host path is known, this works. For Docker-managed volumes (the `volumes:` section in templates), it does not.
## Why exec + tar Fails Too
The first instinct was to use `docker exec` to run `tar` inside the container:
```rust
let cmd = vec!["tar", "cf", "-", "-C", path, "."];
let output = docker.exec_in_container(container_id, cmd).await?;
```

This fails for a different reason. The Docker exec API returns a multiplexed stream in which stdout and stderr frames are interleaved, each prefixed with an 8-byte header. Our stream parser converts stdout frames to a Rust `String` via `str::from_utf8`:
```rust
fn parse_multiplexed_stream(data: &[u8]) -> (String, String) {
    // ...
    if let Ok(text) = std::str::from_utf8(&data[pos..pos + size]) {
        match stream_type {
            1 => stdout.push_str(text),
            // ...
        }
    }
    // ...
}
```

A tar archive is binary data, and binary data is not valid UTF-8. The `from_utf8` call silently skips frames that contain binary content, producing a corrupted (mostly empty) archive. This parser works for `pg_dump` output (which is SQL text) but not for binary formats.
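A binary-safe demultiplexer would accumulate raw bytes instead of routing frames through `str::from_utf8`. The following is a standalone sketch, not the project's actual parser; the frame layout (one stream-type byte, three padding bytes, a 4-byte big-endian payload length, then the payload) is the documented Docker attach/exec stream format:

```rust
/// Demultiplex a Docker exec/attach stream into binary-safe buffers.
/// Each frame: [stream_type, 0, 0, 0, len_be32] followed by `len` payload bytes.
fn demux_stream(data: &[u8]) -> (Vec<u8>, Vec<u8>) {
    let (mut stdout, mut stderr) = (Vec::new(), Vec::new());
    let mut pos = 0;
    while pos + 8 <= data.len() {
        let stream_type = data[pos];
        let len = u32::from_be_bytes([
            data[pos + 4], data[pos + 5], data[pos + 6], data[pos + 7],
        ]) as usize;
        pos += 8;
        let end = (pos + len).min(data.len()); // tolerate a truncated final frame
        match stream_type {
            1 => stdout.extend_from_slice(&data[pos..end]), // stdout frame
            2 => stderr.extend_from_slice(&data[pos..end]), // stderr frame
            _ => {}                                         // 0 = stdin; ignore
        }
        pos = end;
    }
    (stdout, stderr)
}
```

Returning `Vec<u8>` keeps tar output intact; callers that expect text can apply `String::from_utf8_lossy` at the edge instead of losing frames inside the parser.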
## The Docker Archive API
Docker has a purpose-built API for copying files in and out of containers:
- `GET /containers/{id}/archive?path=/path` -- returns a tar archive of the specified path
- `PUT /containers/{id}/archive?path=/path` -- uploads a tar archive and extracts it at the specified path
These endpoints handle binary data correctly (the response body is raw bytes, not a multiplexed stream), support any file type, and work regardless of what tools are installed inside the container (no need for `tar` to be present).
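To make that concrete, here is roughly what a call to the GET endpoint looks like at the wire level. This is an illustrative sketch, not the project's client: it speaks HTTP/1.1 directly over the Docker Unix socket, skips URL-encoding of `path`, and ignores chunked transfer encoding, all of which a real client must handle:

```rust
use std::io::{Read, Write};
use std::os::unix::net::UnixStream;

/// Split a raw HTTP/1.1 response into (headers, body) at the first blank
/// line. The archive body is raw tar bytes, so it must never pass
/// through a String.
fn split_response(raw: &[u8]) -> Option<(&[u8], &[u8])> {
    let sep = raw.windows(4).position(|w| w == b"\r\n\r\n")?;
    Some((&raw[..sep], &raw[sep + 4..]))
}

/// Fetch /containers/{id}/archive over the Docker Unix socket.
/// Hypothetical sketch only: no URL-encoding, no chunked decoding,
/// no status-line checking.
fn fetch_archive(container_id: &str, path: &str) -> std::io::Result<Vec<u8>> {
    let mut sock = UnixStream::connect("/var/run/docker.sock")?;
    let req = format!(
        "GET /containers/{}/archive?path={} HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n",
        container_id, path
    );
    sock.write_all(req.as_bytes())?;
    let mut raw = Vec::new();
    sock.read_to_end(&mut raw)?;
    let (_headers, body) = split_response(&raw).ok_or_else(|| {
        std::io::Error::new(std::io::ErrorKind::InvalidData, "malformed response")
    })?;
    Ok(body.to_vec())
}
```

The essential point is in `split_response`: everything after the header block is treated as opaque bytes, never as text.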
We added two methods to the Docker client:
```rust
pub async fn copy_from_container(
    &self,
    id: &str,
    src_path: &str,
) -> Result<Vec<u8>> {
    let path = format!(
        "/containers/{}/archive?path={}",
        id,
        urlencoding::encode(src_path)
    );
    let bytes = self.get_raw(&path).await?;
    Ok(bytes.to_vec())
}

pub async fn copy_to_container(
    &self,
    id: &str,
    dest_path: &str,
    tar_data: Vec<u8>,
) -> Result<()> {
    let path = format!(
        "/containers/{}/archive?path={}",
        id,
        urlencoding::encode(dest_path)
    );
    self.put_raw(&path, "application/x-tar", Bytes::from(tar_data)).await
}
```

`copy_from_container` returns raw bytes -- no UTF-8 conversion, no stream parsing, no data loss. `copy_to_container` already existed in partial form for the file explorer feature but was not used for backups.
## The Backup Pipeline After the Fix
The backup engine now uses `copy_from_container` for volume backups:
```rust
BackupSource::Volume { container_id, path } => {
    backup_volume_docker(&self.docker, container_id, path).await?
}
```

And `copy_to_container` for volume restores:
```rust
docker.copy_to_container(container_id, path, data.to_vec()).await?;
```

The data returned by `copy_from_container` is already a tar archive. The engine's pipeline then compresses it (gzip), optionally encrypts it (AES-256-GCM), and stores it via the configured storage provider. On restore, the process reverses: retrieve, decrypt, decompress, and upload the tar back into the container.
## Source ID Resolution
The original code also had an ID problem. When the frontend selected an app for volume backup, it sent the app UUID as `source_id`. The engine treated `source_id` as the volume path. The fix was to look up the app by ID to get the `container_id`, then look up the app's mounts to find the volume's target path:
```rust
let app = App::find_by_id(&pool, &source_id)?;
let container_id = app.container_id.unwrap();
let mounts = AppMount::list_by_app_id(&pool, &app.id)?;
let volume_path = mounts.first()
    .map(|m| m.target.clone())
    .unwrap_or_else(|| default_volume_path(app.stack.as_deref().unwrap_or("")));
```

The `source_id` stored in the backup record changed from a bare path to `container_id:path` format (e.g., `abc123:/var/lib/postgresql/data`), so restores can find the right container and path without another app lookup.
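Because container IDs never contain a colon, the composite ID can be split on the first `:` and everything after it treated as the path. A small sketch with hypothetical helper names:

```rust
/// Build the composite source_id stored in the backup record.
fn encode_source_id(container_id: &str, path: &str) -> String {
    format!("{}:{}", container_id, path)
}

/// Split it back apart on the FIRST colon only, so any ':' that
/// appears inside the path stays with the path.
fn parse_source_id(source_id: &str) -> Option<(&str, &str)> {
    source_id.split_once(':')
}
```

`split_once` returns `None` for legacy records with no colon, which gives the restore path a natural place to fall back to the old bare-path interpretation.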
## The Lesson: Use the Platform API
Every container runtime provides APIs for data movement. Docker has the archive endpoints; Podman exposes the same ones; Kubernetes has `kubectl cp`, which uses the same tar-over-API approach. Using `exec` plus shell commands is tempting because it feels familiar, but it introduces dependencies (is `tar` installed?), encoding issues (is the output binary-safe?), and permission problems (does the exec user have read access?).
The platform API handles all of these. It works with every image, every filesystem, every encoding. When moving data in or out of containers, always prefer the platform's native data movement API over exec-based workarounds.
The original `backup_volume` function still exists for direct filesystem paths (bind mounts where the host path is known). But for Docker-managed volumes -- which are the default in every sh0 template -- `copy_from_container` is the only correct approach.