
Encrypted Secrets, API Keys, and Security

AES-256-GCM encryption, ${secrets.KEY} interpolation, JWT + API key authentication, Google Sign-In verification, and HMAC webhook signing -- the security layers of 0cron.

Thales & Claude | March 25, 2026 | 15 min read

A cron job service that makes HTTP requests on behalf of users is, by definition, a service that handles credentials. Your jobs call APIs that require authentication. Those API keys, bearer tokens, and webhook secrets need to live somewhere -- and that somewhere had better be encrypted, access-controlled, and auditable.

0cron has four distinct security layers: encrypted secrets storage for user credentials, JWT-based authentication for dashboard sessions, API key authentication for programmatic access, and external verification for Google Sign-In and Stripe webhooks. Each layer serves a different purpose, and together they form a defense-in-depth architecture that protects both the platform and its users.

This article walks through all four layers with the actual Rust code.

Layer 1: Encrypted Secrets (AES-256-GCM)

When a user stores an API key in 0cron -- say, their Slack webhook URL or a third-party bearer token -- that value is encrypted before it touches the database. We use AES-256-GCM, which is the gold standard for authenticated encryption. "Authenticated" means that decryption not only recovers the plaintext but also verifies that the ciphertext has not been tampered with. If someone modifies even a single bit of the stored value, decryption fails rather than producing corrupted output.

Here is the encryption function.

use aes_gcm::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Aes256Gcm, Nonce,
};

pub fn encrypt_secret(plaintext: &str, key: &[u8]) -> AppResult<Vec<u8>> {
    let key = aes_gcm::Key::<Aes256Gcm>::from_slice(key);
    let cipher = Aes256Gcm::new(key);
    // Fresh random 96-bit nonce for every encryption
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng);
    let ciphertext = cipher
        .encrypt(&nonce, plaintext.as_bytes())
        .map_err(|e| AppError::Encryption(format!("Encryption failed: {e}")))?;
    // Storage format: nonce || ciphertext (GCM appends the auth tag to the ciphertext)
    let mut result = nonce.to_vec();
    result.extend_from_slice(&ciphertext);
    Ok(result)
}

And the corresponding decryption.

pub fn decrypt_secret(ciphertext: &[u8], key: &[u8]) -> AppResult<String> {
    if ciphertext.len() < 12 {
        return Err(AppError::Encryption("Ciphertext too short".to_string()));
    }
    let key = aes_gcm::Key::<Aes256Gcm>::from_slice(key);
    let cipher = Aes256Gcm::new(key);
    let nonce = Nonce::from_slice(&ciphertext[..12]);
    let plaintext = cipher
        .decrypt(nonce, &ciphertext[12..])
        .map_err(|e| AppError::Encryption(format!("Decryption failed: {e}")))?;
    String::from_utf8(plaintext)
        .map_err(|e| AppError::Encryption(format!("Invalid UTF-8: {e}")))
}

Several design decisions are embedded in these two short functions.

Random nonces from OsRng. Every encryption operation generates a fresh 12-byte nonce using the operating system's cryptographically secure random number generator. The nonce does not need to be secret, but it must be unique. Reusing a nonce with the same key completely breaks AES-GCM's security guarantees. By generating from OsRng (which maps to /dev/urandom on Linux), we get cryptographic-grade randomness with negligible risk of collision at our volumes.

Nonce-prefixed ciphertext. The storage format is nonce || ciphertext -- the first 12 bytes are the nonce, and everything after is the encrypted data plus the GCM authentication tag. This is a standard pattern. The decryptor knows to split at byte 12. No delimiter, no metadata header, no version field. Just 12 bytes of nonce followed by the encrypted payload.

The length check (ciphertext.len() < 12) in the decrypt function guards against corrupted data. If the stored value is shorter than a nonce, something is fundamentally wrong, and we return an error rather than panicking on a slice operation.
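The nonce || ciphertext layout and its length guard can be illustrated with a std-only helper (a sketch for this article, not part of the actual module, which returns AppError instead of Option):

```rust
/// Split a stored blob into (nonce, ciphertext) per the `nonce || ciphertext`
/// layout: the first 12 bytes are the nonce, the rest is ciphertext + GCM tag.
/// Returns None when the blob is shorter than a nonce, mirroring the length
/// guard in decrypt_secret.
fn split_stored_secret(blob: &[u8]) -> Option<(&[u8], &[u8])> {
    if blob.len() < 12 {
        return None;
    }
    Some(blob.split_at(12))
}

fn main() {
    // 12 nonce bytes followed by a 4-byte "ciphertext"
    let blob: Vec<u8> = (0u8..16).collect();
    let (nonce, ct) = split_stored_secret(&blob).unwrap();
    assert_eq!(nonce.len(), 12);
    assert_eq!(ct, &[12, 13, 14, 15]);
    // Anything shorter than a nonce is corrupted data and is rejected
    assert!(split_stored_secret(&[0u8; 11]).is_none());
    println!("layout ok");
}
```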

The encryption key is a server-side secret. It is loaded from an environment variable at startup and never stored in the database. If the database is compromised, the attacker gets encrypted blobs that are useless without the key. The key itself is a 32-byte (256-bit) value, which means brute-forcing it would take longer than the age of the universe with current hardware.

String output validation. After decryption, we verify that the plaintext is valid UTF-8. Secrets are always text (API keys, tokens, URLs), so invalid UTF-8 after decryption indicates either a bug or tampering. This is a belt-and-suspenders check on top of GCM's authentication.

Secret Interpolation in Job Configurations

Encrypted secrets are useful only if jobs can reference them. When a user creates a cron job that calls an authenticated API, they should not paste their API key directly into the job's headers. Instead, they reference a stored secret using the ${secrets.KEY_NAME} syntax.

pub async fn interpolate_secrets(text: &str, team_id: Uuid, db: &PgPool, key: &[u8]) -> AppResult<String> {
    let re = Regex::new(r"\$\{secrets\.([A-Za-z0-9_]+)\}").unwrap();
    let mut result = text.to_string();
    for (full_match, key_name) in re
        .captures_iter(text)
        .map(|cap| (cap[0].to_string(), cap[1].to_string()))
    {
        let row: Option<(Vec<u8>,)> =
            sqlx::query_as("SELECT value_encrypted FROM secrets WHERE team_id = $1 AND key = $2")
                .bind(team_id)
                .bind(&key_name)
                .fetch_optional(db)
                .await?;
        match row {
            Some((encrypted,)) => {
                result = result.replace(&full_match, &decrypt_secret(&encrypted, key)?);
            }
            None => return Err(AppError::NotFound(format!("Secret '{key_name}' not found"))),
        }
    }
    Ok(result)
}

This function runs at execution time, not at job creation time. When the scheduler picks up a job to execute, it calls interpolate_secrets on the URL, headers, and body. The regex \$\{secrets\.([A-Za-z0-9_]+)\} matches patterns like ${secrets.SLACK_TOKEN} or ${secrets.AWS_ACCESS_KEY}, extracts the key name, looks up the encrypted value in the database (scoped to the team), decrypts it, and substitutes it into the text.

This architecture has several security benefits.

Secrets are never stored in plaintext in job configurations. The database stores Authorization: Bearer ${secrets.API_TOKEN}, not the actual token. If someone reads the job configuration (through a UI bug, a log leak, or a database breach), they see the reference, not the value.

Secrets are team-scoped. The query filters by team_id, so one team cannot reference another team's secrets. Even if an attacker discovers the key name used by another team, the interpolation will fail with "not found."

Decryption happens in memory and is never logged. The decrypted value exists only in the string that is passed to the HTTP client. It is not written to a log, not stored in an execution record, and not included in error messages. If the HTTP request fails, the error captures the status code and response, not the request headers containing the decrypted secret.

Missing secrets are hard errors. If a job references ${secrets.OLD_TOKEN} and that secret has been deleted, the interpolation returns an error rather than sending a request with the literal string ${secrets.OLD_TOKEN} as a header value. This prevents accidental exposure of the interpolation syntax to third-party APIs and ensures that jobs fail loudly when their dependencies change.
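The lookup-and-substitute flow can be sketched in plain std, using a HashMap in place of the database and a hand-rolled scanner in place of the regex (this simplified scanner does not restrict key names to [A-Za-z0-9_] the way the real regex does):

```rust
use std::collections::HashMap;

/// Sketch of the interpolation pass. Unknown keys are a hard error,
/// matching the real implementation's fail-loudly behavior.
fn interpolate(text: &str, secrets: &HashMap<&str, &str>) -> Result<String, String> {
    let mut out = String::new();
    let mut rest = text;
    while let Some(start) = rest.find("${secrets.") {
        let after = &rest[start + "${secrets.".len()..];
        let end = after.find('}').ok_or("Unterminated secret reference")?;
        let key = &after[..end];
        let value = secrets
            .get(key)
            .ok_or_else(|| format!("Secret '{key}' not found"))?;
        out.push_str(&rest[..start]); // text before the reference
        out.push_str(value);          // decrypted value substituted in place
        rest = &after[end + 1..];
    }
    out.push_str(rest);
    Ok(out)
}

fn main() {
    let secrets = HashMap::from([("API_TOKEN", "tok_123")]);
    let header = interpolate("Bearer ${secrets.API_TOKEN}", &secrets).unwrap();
    assert_eq!(header, "Bearer tok_123");
    // Missing secrets fail rather than leaking the literal reference syntax
    assert!(interpolate("${secrets.MISSING}", &secrets).is_err());
    println!("{header}");
}
```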

Layer 2: JWT Authentication

Dashboard users authenticate via JSON Web Tokens. After signing in (through Google or email/password), the server issues a JWT that the frontend stores in the auth store (described in article 7). Every subsequent API request includes this token in the Authorization header.

The auth middleware extracts and validates the JWT on every request to protected endpoints. The validation checks three things: the signature is valid (proving the token was issued by our server), the token has not expired, and the claims contain a valid user ID and team ID.

JWT validation uses the jsonwebtoken crate, which handles RS256/HS256 signature verification, expiration checks, and claim parsing in a single function call. We use HS256 (HMAC-SHA256) with a server-side secret, which is appropriate for a single-server architecture. If we scale to multiple servers, we would switch to RS256 with a public/private key pair so that any server can verify tokens without sharing the signing secret.

The middleware extracts an AuthUser struct containing user_id and team_id, which is then available to every handler. This is the identity that gates all data access -- jobs, secrets, monitors, and billing information are all scoped by team ID.

Layer 3: API Key Authentication

Not every API consumer is a browser. Developers integrate 0cron into their CI/CD pipelines, infrastructure-as-code tools, and custom scripts. These environments need a long-lived credential that does not expire every few hours like a JWT.

API keys serve this purpose. A user generates a key in the settings page, and 0cron returns it once. The raw key is never stored. Instead, the server stores an Argon2 hash of the key alongside a prefix (the first 8 characters) for lookup.

The authentication flow for API keys works in two steps. First, the middleware extracts the prefix from the provided key and queries the database for matching API key records. Second, it verifies the full key against the stored Argon2 hash. This two-step process avoids hashing every key in the database against the provided value -- the prefix narrows the search to (almost certainly) a single record.

Argon2 is deliberately slow by design. It is a memory-hard hashing algorithm that resists GPU-based brute-force attacks. If the database is breached, the attacker has hashes, not keys. And because Argon2 is expensive to compute, testing candidate keys against the stolen hashes would take impractical amounts of time.
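The two-step prefix-then-verify flow can be sketched with std only. The `zc_live_` key format and `toy_hash` are illustrative stand-ins: the real implementation verifies against an Argon2 hash, never a fast non-cryptographic hash like DefaultHasher.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Stand-in for Argon2 verification, purely for illustration.
fn toy_hash(key: &str) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish()
}

/// Two-step lookup: narrow by the 8-character prefix, then verify the
/// full key against each stored hash for that prefix.
fn authenticate(store: &HashMap<String, Vec<u64>>, presented: &str) -> bool {
    let Some(prefix) = presented.get(..8) else {
        return false;
    };
    store
        .get(prefix)
        .map(|hashes| hashes.iter().any(|&h| h == toy_hash(presented)))
        .unwrap_or(false)
}

fn main() {
    let key = "zc_live_a1b2c3d4e5"; // hypothetical key format
    let mut store = HashMap::new();
    store.insert(key[..8].to_string(), vec![toy_hash(key)]);
    assert!(authenticate(&store, key));
    // Same prefix, wrong remainder: hash verification fails
    assert!(!authenticate(&store, "zc_live_wrongwrong"));
    println!("auth ok");
}
```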

Both JWT and API key authentication produce the same AuthUser struct. From the handler's perspective, the authentication method is invisible. A request from the dashboard (JWT) and a request from a CI script (API key) receive identical treatment. This is important for consistency -- there are no features that work only with one authentication method.

Layer 4: External Verification

0cron integrates with two external services that send data to our server: Google (for Sign-In) and Stripe (for payment webhooks). Both require verification to prevent spoofing.

Google Sign-In Verification

When a user signs in with Google, the frontend receives an ID token from Google's OAuth flow. This token is a JWT, but not one we issued -- Google issued it. Verifying it requires a different process than verifying our own JWTs.

The verification flow has four steps.

First, decode the JWT header (without verifying the signature) to extract the kid (key ID) field. Google rotates its signing keys regularly, and the kid tells us which key was used.

Second, fetch Google's public keys from https://www.googleapis.com/oauth2/v3/certs. This is a JWKS (JSON Web Key Set) endpoint that returns the current set of valid signing keys. We cache these keys to avoid hitting Google's endpoint on every sign-in.

Third, find the key matching the kid from the token header and verify the RS256 signature. If the signature is invalid, the token was not issued by Google (or was tampered with), and we reject it.

Fourth, validate the claims: aud (audience) must match our Google Client ID, iss (issuer) must be accounts.google.com, the token must not be expired, and email_verified must be true. The email_verified check is important -- Google allows accounts with unverified emails, and we do not want to trust those.
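The claim checks in the fourth step can be sketched as a std-only function over the standard OIDC claim names. (Note that Google's tokens may carry the issuer as either `accounts.google.com` or `https://accounts.google.com`, so the sketch accepts both; the struct and function names here are illustrative.)

```rust
/// The claim fields checked after signature verification (standard OIDC names).
struct GoogleClaims {
    aud: String,         // must match our Google Client ID
    iss: String,         // must be Google's issuer
    exp: u64,            // Unix expiry timestamp
    email_verified: bool,
}

/// Sketch of the four claim checks; `now` is the current Unix time.
fn validate_claims(c: &GoogleClaims, client_id: &str, now: u64) -> Result<(), &'static str> {
    if c.aud != client_id {
        return Err("audience mismatch");
    }
    if c.iss != "accounts.google.com" && c.iss != "https://accounts.google.com" {
        return Err("wrong issuer");
    }
    if c.exp <= now {
        return Err("token expired");
    }
    if !c.email_verified {
        return Err("email not verified");
    }
    Ok(())
}

fn main() {
    let good = GoogleClaims {
        aud: "my-client-id".into(),
        iss: "accounts.google.com".into(),
        exp: 2_000_000_000,
        email_verified: true,
    };
    assert!(validate_claims(&good, "my-client-id", 1_700_000_000).is_ok());
    let unverified = GoogleClaims { email_verified: false, ..good };
    assert_eq!(
        validate_claims(&unverified, "my-client-id", 1_700_000_000),
        Err("email not verified")
    );
    println!("claims ok");
}
```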

After verification, we upsert the user: if the email already exists (from a previous email/password registration), we link the Google account. If it is a new email, we create a new user and team. This linking logic prevents duplicate accounts when users sign up with email first and later add Google Sign-In.

Stripe Webhook Signature Verification

Stripe sends webhook events (subscription created, payment succeeded, invoice failed) to our /api/webhooks/stripe endpoint. These requests must be verified to ensure they actually come from Stripe, not from an attacker who discovered our webhook URL.

fn verify_stripe_signature(payload: &[u8], sig_header: &str, secret: &str) -> AppResult<()> {
    // Parse `t=<timestamp>` from the comma-separated Stripe-Signature header
    let timestamp = sig_header.split(',').find_map(|p| p.strip_prefix("t=")).unwrap_or("");
    // The signed payload is `<timestamp>.<raw body>`
    let signed_payload = format!("{timestamp}.{}", std::str::from_utf8(payload).unwrap_or(""));
    let mut mac = HmacSha256::new_from_slice(secret.as_bytes())?;
    mac.update(signed_payload.as_bytes());
    let expected = hex::encode(mac.finalize().into_bytes());
    // Constant-time compare `expected` against v1= + 5-minute timestamp tolerance
}

Stripe's webhook signature scheme works as follows. The Stripe-Signature header contains a timestamp (t=) and one or more signatures (v1=). The signed payload is the timestamp concatenated with the raw request body, separated by a period. We compute the HMAC-SHA256 of this concatenated string using the webhook signing secret (provided by Stripe in the dashboard), and compare it to the signature in the header.

Two details are critical.

Constant-time comparison. The signature comparison must not leak timing information. A naive string comparison (==) returns early on the first mismatched byte, which means an attacker can determine how many leading bytes of their forged signature are correct by measuring response times. Constant-time comparison processes every byte regardless of matches, making timing attacks infeasible.
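A constant-time comparison can be written in plain Rust by XOR-accumulating across every byte, so the running time does not depend on where the first mismatch occurs. (Production code would typically use a vetted crate such as `subtle`; this is a sketch of the idea.)

```rust
/// Compare two byte slices in time independent of their contents.
/// Every byte pair is visited; mismatches accumulate into `diff`
/// instead of causing an early return.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(constant_time_eq(b"deadbeef", b"deadbeef"));
    assert!(!constant_time_eq(b"deadbeef", b"deadbeee"));
    assert!(!constant_time_eq(b"short", b"longer"));
    println!("ct-eq ok");
}
```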

Timestamp tolerance. We reject signatures older than 5 minutes. This prevents replay attacks where an attacker intercepts a legitimate webhook request and re-sends it later. The 5-minute window accommodates network delays and clock skew while limiting the replay window.
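The header parsing and replay-window check together can be sketched std-only. The function name is illustrative, and `now` is injected as a parameter so the check is deterministic rather than reading the system clock:

```rust
/// Parse `t=` and `v1=` out of a Stripe-Signature-style header and enforce
/// the replay window. Returns the timestamp and signature on success.
fn check_timestamp(sig_header: &str, now: u64, tolerance_secs: u64) -> Result<(u64, String), &'static str> {
    let mut t = None;
    let mut v1 = None;
    for part in sig_header.split(',') {
        match part.trim().split_once('=') {
            Some(("t", v)) => t = v.parse::<u64>().ok(),
            Some(("v1", v)) => v1 = Some(v.to_string()),
            _ => {}
        }
    }
    let t = t.ok_or("missing or invalid t=")?;
    let v1 = v1.ok_or("missing v1=")?;
    // Reject anything outside the tolerance window to limit replay attacks
    if now.abs_diff(t) > tolerance_secs {
        return Err("timestamp outside tolerance");
    }
    Ok((t, v1))
}

fn main() {
    let header = "t=1700000000,v1=abc123";
    // 100 seconds later: inside the 5-minute window
    assert!(check_timestamp(header, 1_700_000_100, 300).is_ok());
    // 10 minutes later: replay window exceeded
    assert_eq!(
        check_timestamp(header, 1_700_000_600, 300),
        Err("timestamp outside tolerance")
    );
    println!("timestamp check ok");
}
```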

The Security Architecture as a Whole

These four layers interact to create a coherent security model.

User secrets are encrypted at rest with AES-256-GCM. They are decrypted only at job execution time, only in memory, and only for the team that owns them. The encryption key is a server-side secret that never enters the database.

Dashboard sessions use JWTs issued by 0cron, stored in the browser's localStorage, and validated on every request. The auth store (article 7) handles persistence and the API client handles injection.

Programmatic access uses API keys that are hashed with Argon2 and never stored in plaintext. The prefix-based lookup avoids full-table scans, and the slow hashing resists brute-force attacks.

External integrations are verified at the cryptographic level. Google Sign-In tokens are validated against Google's rotating public keys. Stripe webhooks are validated against HMAC signatures with timestamp tolerance.

What ties these together is the AuthUser struct. Regardless of how a request is authenticated -- JWT, API key, or (for internal routes) admin middleware -- the result is the same identity object with the same permissions model. Handlers do not care how you proved your identity. They care that you proved it and that your team ID matches the resources you are requesting.

What We Chose Not to Do

Security design is as much about what you exclude as what you include. Here are deliberate omissions.

No client-side encryption. User secrets are encrypted server-side. The user sends the plaintext secret over HTTPS, and the server encrypts it before storage. Client-side encryption (where the browser encrypts and the server never sees plaintext) would be more secure in theory, but it would make secret interpolation impossible -- the server needs to decrypt secrets to inject them into HTTP requests. End-to-end encryption for a service that needs to use the secrets is a contradiction.

No secret versioning. When a user updates a secret, the old value is overwritten. There is no history, no rollback, no "which version was used in this execution?" audit trail. This simplifies the data model and avoids the complexity of managing multiple active versions. If we add versioning later, the encryption layer does not change -- each version is just another encrypted blob.

No mutual TLS. API keys and JWTs authenticate the client to the server. We do not authenticate the server to the client beyond standard TLS certificates. For a SaaS product accessed via standard HTTPS, this is the expected security model. Mutual TLS is appropriate for service-to-service communication in zero-trust architectures, but it adds significant operational complexity for end-user access.

No hardware security module. The encryption key is an environment variable, not a key stored in an HSM or a cloud KMS. An HSM would provide stronger key protection (the key never leaves the hardware), but it adds infrastructure cost and complexity. For a $1.99/month service, the threat model does not justify HSM integration at launch. If 0cron grows to handle secrets for enterprise customers, HSM integration is a natural evolution.

The 93-Line Secret Store

The entire secrets module -- encryption, decryption, and interpolation -- is 93 lines of Rust. The auth middleware is 86 lines. Combined with the Google auth verification and Stripe webhook verification, the total security-related code is under 400 lines.

This is not because security was treated lightly. It is because we used well-vetted cryptographic libraries (aes-gcm, jsonwebtoken, argon2, hmac) and composed them with thin application logic. The aes-gcm crate implements AES-256-GCM correctly -- constant-time operations, proper nonce handling, authenticated encryption. We did not implement AES ourselves. We did not roll our own JWT verification. We did not write a custom Argon2 implementation. We used libraries that have been audited, fuzzed, and battle-tested, and we wrote the 93 lines of glue that connect them to our domain model.

This is the right approach for a small team. Cryptographic code is where bugs are most dangerous and most subtle. A single-bit error in an AES implementation can silently compromise every encrypted value in the database. By relying on established crates and focusing our code on business logic (which secrets belong to which team, when to decrypt, how to interpolate), we minimize the attack surface of our own code.

Security is not a feature you add at the end. It is a property of the architecture. In 0cron, secrets were encrypted from day one, authentication was required from the first endpoint, and external integrations were verified from the first webhook. The 400 lines of security code are not a layer on top of the application -- they are woven into every request path.

---

This is article 9 of 10 in the "How We Built 0cron" series.

1. Why the World Needs a $2 Cron Job Service
2. 4 Agents, 1 Product: Building 0cron in a Single Session
3. Building a Cron Scheduler Engine in Rust
4. "Every Day at 9am": Natural Language Schedule Parsing
5. Multi-Channel Notifications: Email, Slack, Discord, Telegram, Webhooks
6. Stripe Integration for a $1.99/month SaaS
7. From Static HTML to SvelteKit Dashboard Overnight
8. Heartbeat Monitoring: When Your Job Should Ping You
9. Encrypted Secrets, API Keys, and Security (you are here)
10. From Abidjan to Production: Launching 0cron.dev
