
Constraints and Validation in FlinDB

How FlinDB enforces data integrity with declarative constraints -- unique, required, check, pattern, immutable, and more -- all without writing a single SQL trigger.

Thales & Claude | March 25, 2026 | 11 min read

A database that accepts anything is a database that contains garbage. Data integrity is not a nice-to-have feature -- it is the foundation that every application layer above the database depends on. If the database allows a user to be created without an email, every piece of code that reads users must check for missing emails. If the database allows duplicate usernames, every authentication flow must handle collisions at the application level.

The relational world solved this decades ago with constraints: NOT NULL, UNIQUE, CHECK, FOREIGN KEY. But these constraints are defined in SQL DDL, checked at the database level, and report errors as cryptic codes like 23505 unique_violation. They work, but they are not developer-friendly.

FlinDB implements a constraint system that is declarative, readable, and comprehensive. Session 161 was the marathon that made it real -- nine types of constraints, 31 tests, and a cascade system that correctly handles both soft delete and hard destroy.

The Three Tiers of Constraints

We organized FlinDB's constraints into three priority tiers based on how commonly they are needed.

P1: Essential Constraints

These are the constraints that every application needs. Without them, the database is not production-ready.

Unique constraints prevent duplicate values on a field:

entity User {
    email: text @unique
    username: text @unique
}

When a save operation encounters a duplicate value on a unique field, ZeroCore rejects it with a clear error:

fn check_unique_constraints(
    &self,
    entity_type: &str,
    id: Option<u64>,
    schema: &EntitySchema,
    fields: &HashMap<String, Value>,
) -> DatabaseResult<()>

The uniqueness check has important semantics that took careful thought:

  • Value::None does not participate in unique checks. Multiple entities can have none for a unique field. This matches SQL's behavior where NULL values are considered distinct.
  • Soft-deleted entities are excluded from uniqueness checks. If you delete a user with email "[email protected]", you can create a new user with that same email.
  • Updating an entity with the same value it already has is allowed. Setting a user's email to its current value does not trigger a unique violation against itself.
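The three rules above can be sketched in a few lines. This is a simplified model with illustrative names (`Entity`, `unique_ok`), not ZeroCore's actual types, but it captures the same semantics: `None` never conflicts, soft-deleted rows are skipped, and an entity never conflicts with itself.

```rust
use std::collections::HashMap;

// Simplified stand-ins for the engine's types.
#[derive(Clone, PartialEq, Debug)]
enum Value {
    None,
    Text(String),
}

struct Entity {
    id: u64,
    deleted: bool, // soft-delete flag
    fields: HashMap<String, Value>,
}

// Returns true if `candidate` may be saved without violating a unique
// constraint on `field`, given the three semantics described above.
fn unique_ok(existing: &[Entity], id: Option<u64>, field: &str, candidate: &Value) -> bool {
    // Rule: Value::None does not participate in unique checks.
    if *candidate == Value::None {
        return true;
    }
    for e in existing {
        // Rule: an entity never conflicts with itself on update.
        if Some(e.id) == id {
            continue;
        }
        // Rule: soft-deleted entities are excluded from the check.
        if e.deleted {
            continue;
        }
        if e.fields.get(field) == Some(candidate) {
            return false;
        }
    }
    true
}

fn main() {
    let existing = vec![Entity {
        id: 1,
        deleted: false,
        fields: HashMap::from([("email".to_string(), Value::Text("[email protected]".into()))]),
    }];
    // Duplicate from a different entity: rejected.
    assert!(!unique_ok(&existing, None, "email", &Value::Text("[email protected]".into())));
    // Same entity re-saving its own value: allowed.
    assert!(unique_ok(&existing, Some(1), "email", &Value::Text("[email protected]".into())));
    // None never conflicts.
    assert!(unique_ok(&existing, None, "email", &Value::None));
}
```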

Required constraints prevent null values:

entity User {
    name: text           // Required by default (not optional)
    email: text
    bio: text?           // Optional
}

In FlinDB, fields without the ? suffix are required by default. The validation checks both missing fields and explicit None values:

if field_def.required {
    match fields.get(&field_def.name) {
        None | Some(Value::None) => {
            return Err(DatabaseError::MissingField {
                entity_type: entity_type.to_string(),
                field: field_def.name.clone(),
            });
        }
        _ => {}
    }
}

Foreign key enforcement verifies that referenced entities exist:

entity Post {
    title: text
    author: User    // Must reference an existing User
}

// This works:
user = User.find(1)
save Post { title: "Hello", author: user }

// This fails:
save Post { title: "Hello", author: User { id: 999 } }
// Error: Referenced User with id 999 not found

ON DELETE behavior controls what happens to referencing entities when a referenced entity is deleted:

entity Comment {
    text: text
    post: Post @on_delete(cascade)    // Delete comments when post is deleted
}

entity Profile {
    bio: text
    user: User @on_delete(restrict)   // Cannot delete user with a profile
}

entity Assignment {
    task: text
    assignee: User @on_delete(set_null)   // Set to null when user is deleted
}

The cascade implementation was one of Session 161's most critical fixes. The original code only handled cascade for soft delete. We refactored it to handle both operations:

fn handle_on_delete_or_destroy(
    &mut self,
    entity_type: &str,
    entity_id: u64,
    is_destroy: bool,
) -> DatabaseResult<()> {
    // For each entity referencing the deleted one, apply its declared
    // @on_delete behavior (reference lookup elided here).
    match behavior {
        OnDeleteBehavior::Cascade => {
            // Propagate the same operation type to children.
            if is_destroy {
                self.destroy(&ref_type, id)?;
            } else {
                self.delete(&ref_type, id)?;
            }
        }
        OnDeleteBehavior::SetNull => {
            // Set the reference field to None
        }
        OnDeleteBehavior::Restrict => {
            return Err(DatabaseError::RestrictViolation { /* ... */ });
        }
    }
    Ok(())
}

When you delete a Post with cascade, its Comments are soft-deleted. When you destroy a Post with cascade, its Comments are hard-destroyed. The cascade propagates the operation type, not just the deletion.

P2: Important Constraints

These constraints handle business logic that would otherwise live in application code.

Check constraints enforce arbitrary conditions:

entity Product {
    name: text
    price: number @check(price > 0)
    quantity: int @check(quantity >= 0)
}

The @check constraint evaluates a condition against the field's value before saving. If the condition is false, the save is rejected. This moves validation from the application layer -- where it is easy to forget -- into the data model where it is always enforced.
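A minimal sketch of that evaluation, with illustrative names (`CheckConstraint`, `validate_check`): the parsed condition is represented here as a plain predicate standing in for the compiled expression `price > 0`, and the source text of the condition is kept around for the error message.

```rust
// Hypothetical sketch of @check evaluation; not ZeroCore's actual types.
struct CheckConstraint {
    field: &'static str,
    condition_src: &'static str,  // kept verbatim for the error message
    predicate: fn(f64) -> bool,   // stand-in for the compiled condition
}

fn validate_check(c: &CheckConstraint, entity: &str, value: f64) -> Result<(), String> {
    if (c.predicate)(value) {
        Ok(())
    } else {
        // Report the condition as written in the schema, not an opcode dump.
        Err(format!("{} {} must satisfy: {}", entity, c.field, c.condition_src))
    }
}

fn main() {
    let price_check = CheckConstraint {
        field: "price",
        condition_src: "price > 0",
        predicate: |v| v > 0.0,
    };
    assert!(validate_check(&price_check, "Product", 9.99).is_ok());
    assert_eq!(
        validate_check(&price_check, "Product", -1.0).unwrap_err(),
        "Product price must satisfy: price > 0"
    );
}
```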

Conditional required fields are required only when another condition is met:

entity Order {
    delivery_type: text
    shipping_address: text @required_if(delivery_type == "shipping")
}

If delivery_type is "shipping", then shipping_address must be present. If delivery_type is "pickup", shipping_address can be omitted. This eliminates an entire category of "required field missing" bugs that only appear in specific business scenarios.
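The logic reduces to a two-step test, sketched below with illustrative names (`required_if_ok`): first evaluate the condition on the other field, and only if it holds, demand the target field's presence.

```rust
use std::collections::HashMap;

// Sketch of @required_if evaluation; names and types are illustrative.
// `None` models an absent/null field value.
fn required_if_ok(
    fields: &HashMap<&str, Option<&str>>,
    target: &str,     // e.g. "shipping_address"
    cond_field: &str, // e.g. "delivery_type"
    cond_value: &str, // e.g. "shipping"
) -> bool {
    let condition_met = fields.get(cond_field) == Some(&Some(cond_value));
    if !condition_met {
        return true; // condition not met: target may be absent
    }
    // Condition met: target must be present and non-null.
    matches!(fields.get(target), Some(Some(_)))
}

fn main() {
    let shipping = HashMap::from([("delivery_type", Some("shipping")), ("shipping_address", None)]);
    // Shipping order without an address: rejected.
    assert!(!required_if_ok(&shipping, "shipping_address", "delivery_type", "shipping"));

    let pickup = HashMap::from([("delivery_type", Some("pickup")), ("shipping_address", None)]);
    // Pickup order may omit the address.
    assert!(required_if_ok(&pickup, "shipping_address", "delivery_type", "shipping"));
}
```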

Pattern validation enforces format rules:

entity User {
    email: text @pattern(email, "^[a-zA-Z0-9+_.-]+@[a-zA-Z0-9.-]+$", "Invalid email format")
    phone: text @pattern(phone, "^\\+[0-9]{10,15}$", "Phone must start with + and contain 10-15 digits")
}

The @pattern constraint takes a field name, a regular expression, and an error message. When validation fails, the error message is human-readable -- not a regex pattern dump.

P3: Nice-to-Have Constraints

These constraints address edge cases that arise in mature applications.

Partial unique constraints enforce uniqueness only when a condition is met:

entity User {
    email: text @unique_where(email, status == "active")
}

This allows multiple users to share an email address, as long as only one of them is active. Common in systems with account deactivation/reactivation flows.
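A partial unique check is an ordinary unique check with the condition applied to both sides. A sketch under illustrative types (`User`, `partial_unique_ok`), hard-coding the `status == "active"` condition from the example:

```rust
// Illustrative sketch of @unique_where; not the engine's actual code.
struct User {
    id: u64,
    email: String,
    status: String,
}

fn partial_unique_ok(existing: &[User], candidate: &User) -> bool {
    // Condition from @unique_where: only active users participate.
    if candidate.status != "active" {
        return true; // inactive candidates never conflict
    }
    !existing.iter().any(|u| {
        u.id != candidate.id && u.status == "active" && u.email == candidate.email
    })
}

fn main() {
    let new_active = User { id: 2, email: "[email protected]".into(), status: "active".into() };

    // Reusing a deactivated account's email for a new active account: allowed.
    let deactivated = vec![User { id: 1, email: "[email protected]".into(), status: "inactive".into() }];
    assert!(partial_unique_ok(&deactivated, &new_active));

    // Two active users with the same email: rejected.
    let active = vec![User { id: 1, email: "[email protected]".into(), status: "active".into() }];
    assert!(!partial_unique_ok(&active, &new_active));
}
```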

Case-insensitive unique prevents duplicates regardless of case:

entity User {
    username: text @unique_ignore_case
}

"Thales", "THALES", and "thales" are all considered the same username. Without this constraint, developers must normalize case in application code -- a step that is easy to forget in one of the fifty places where usernames are created or updated.
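The check itself is just normalization moved into the engine so no caller can forget it. A minimal sketch (the function name is illustrative):

```rust
// Sketch of @unique_ignore_case: normalize both sides before comparing.
fn unique_ignore_case_ok(existing: &[&str], candidate: &str) -> bool {
    let wanted = candidate.to_lowercase();
    !existing.iter().any(|u| u.to_lowercase() == wanted)
}

fn main() {
    let usernames = ["Thales"];
    assert!(!unique_ignore_case_ok(&usernames, "THALES")); // same name, different case
    assert!(!unique_ignore_case_ok(&usernames, "thales"));
    assert!(unique_ignore_case_ok(&usernames, "claude")); // genuinely new name
}
```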

Immutable fields cannot be changed after initial creation:

entity Transaction {
    transaction_id: text @immutable
    amount: money @immutable
    created_at: time @immutable
}

Once a transaction is saved, its transaction_id, amount, and created_at fields cannot be modified. Any attempt to save with a changed value is rejected:

fn check_immutable_constraints(
    &self,
    entity_type: &str,
    entity_id: u64,
    schema: &EntitySchema,
    fields: &HashMap<String, Value>,
) -> DatabaseResult<()> {
    // Only applies to updates (existing entities)
    if let Some(current) = self.find_by_id_internal(entity_type, entity_id)? {
        for constraint in &schema.constraints {
            if let Constraint::Immutable(field) = constraint {
                let current_val = current.fields.get(field);
                let new_val = fields.get(field);
                if current_val != new_val {
                    return Err(DatabaseError::ImmutableViolation {
                        entity_type: entity_type.to_string(),
                        field: field.clone(),
                    });
                }
            }
        }
    }
    Ok(())
}

Composite Unique Constraints

Single-field uniqueness is not always sufficient. Consider an e-commerce application where a user can have multiple addresses, but each address has a unique label per user:

entity Address {
    user: User
    label: text         // "Home", "Work", "Mom's house"
    street: text
    city: text
    @unique(user, label) // Unique combination of user + label
}

The composite unique constraint ensures that no single user can have two addresses with the label "Home", while different users can each have their own "Home" address.

ZeroCore implements composite uniqueness by checking all field combinations:

Constraint::CompositeUnique(fields) => {
    let values: Vec<_> = fields.iter()
        .map(|f| new_fields.get(f).cloned())
        .collect();

    for (existing_id, versions) in collection {
        if Some(*existing_id) == id {
            continue;
        }
        if let Some(entity) = versions.last() {
            if entity.deleted_at.is_some() {
                continue;
            }
            let existing_values: Vec<_> = fields.iter()
                .map(|f| entity.fields.get(f).cloned())
                .collect();
            if values == existing_values {
                return Err(DatabaseError::CompositeUniqueViolation { /* ... */ });
            }
        }
    }
}

The Constraint Pipeline

All constraints are checked in a single validation pipeline during save(). The order matters:

1. Required fields -- checked first because other constraints cannot validate missing data
2. Type validation -- ensures field values match their declared types
3. Check constraints -- evaluates @check conditions
4. Conditional required -- evaluates @required_if conditions
5. Pattern validation -- evaluates @pattern regex matches
6. Unique constraints -- checks for duplicates (single, composite, partial, case-insensitive)
7. Immutable constraints -- prevents modification of locked fields (updates only)
8. Foreign key validation -- verifies referenced entities exist

If any constraint fails, the save is aborted and a descriptive error is returned. No partial saves. No inconsistent state. The entity either passes all constraints and is persisted, or it fails validation and nothing changes.
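The abort-on-first-failure behavior can be sketched as an ordered chain of validators (names are illustrative, not the engine's actual API): each check runs in order, the first `Err` short-circuits the pipeline, and persistence only happens after the final `Ok`.

```rust
// Illustrative sketch of the validation pipeline, not ZeroCore's actual code.
type Check = fn() -> Result<(), String>;

fn run_pipeline(checks: &[(&str, Check)]) -> Result<(), String> {
    for (name, check) in checks {
        // First failure aborts the save; later checks never run.
        check().map_err(|e| format!("{name}: {e}"))?;
    }
    Ok(()) // only now would the entity be persisted
}

fn main() {
    let checks: &[(&str, Check)] = &[
        ("required", || Ok(())),
        ("types", || Ok(())),
        ("check", || Err("price > 0 violated".into())),
        // Never reached: the pipeline aborted at "check".
        ("unique", || panic!("unreachable")),
    ];
    assert_eq!(run_pipeline(checks).unwrap_err(), "check: price > 0 violated");
}
```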

Error Messages That Help

One of the most frustrating aspects of SQL constraint violations is the error messages. PostgreSQL gives you ERROR: duplicate key value violates unique constraint "users_email_key". MySQL gives you ERROR 1062 (23000): Duplicate entry '[email protected]' for key 'users.email'. These messages are machine-readable but human-hostile.

FlinDB error messages are designed to be understood immediately:

  • "User with email '[email protected]' already exists" (unique violation)
  • "User requires field 'name' but it was not provided" (required violation)
  • "Product price must satisfy: price > 0" (check violation)
  • "User email does not match pattern: Invalid email format" (pattern violation)
  • "Transaction field 'amount' is immutable and cannot be changed" (immutable violation)
  • "Cannot delete User: referenced by Post (restrict)" (restrict violation)

These messages name the entity, the field, and the constraint in plain English. A developer seeing one of these errors knows exactly what went wrong and what to fix.

Testing the Constraint System

Session 161 added 31 tests for the constraint system. The test strategy was exhaustive: every constraint type was tested for both the success path (valid data is accepted) and the failure path (invalid data is rejected).

The cascade tests were particularly important because cascade behavior has a combinatorial explosion of scenarios:

  • Soft delete with CASCADE: child entities are soft-deleted
  • Hard destroy with CASCADE: child entities are hard-destroyed
  • Soft delete with RESTRICT: deletion is blocked if children exist
  • Hard destroy with RESTRICT: destruction is blocked if children exist
  • Soft delete with SET_NULL: reference field is set to None
  • Hard destroy with SET_NULL: reference field is set to None (but entity destroyed)

Each of these six scenarios was tested individually. Getting cascade semantics wrong would mean data loss (destroying children that should be soft-deleted) or data leaks (keeping references to destroyed entities).

The final test count after Session 161: 2,099 tests passing. The constraint system alone accounted for 31 of those -- nearly as many tests as lines of constraint code. When you are building a database engine, the test-to-code ratio should be high. Constraints are the guardrails of data integrity, and guardrails that fail silently are worse than no guardrails at all.

Why Constraints Belong in the Data Model

There is a persistent debate in software engineering about where validation belongs. Some argue it belongs in the application layer (controllers, services). Some argue it belongs in the database (constraints, triggers). Some argue it belongs in the domain model (value objects, invariants).

FlinDB's answer: it belongs in the entity definition, and the database enforces it. This is not just an opinion -- it is an architectural decision with concrete consequences.

When constraints live in the application layer, they can be bypassed. A developer writing a data migration script might skip validation. An admin console might write directly to the database. A background job might use a different code path that misses a check. The database sees all writes, from all sources, and enforces all constraints, every time.

When constraints are declarative -- written as annotations on entity fields -- they serve double duty as documentation. Reading email: text @unique @pattern(...) tells you everything about the email field's requirements. You do not need to search through application code, middleware functions, or SQL trigger definitions to understand the rules.

FlinDB's constraint system is not a feature. It is a philosophy: the data model should be complete and self-enforcing. If you can express a rule about your data, you should express it in the entity definition, and the database should guarantee it holds.

---

This is Part 4 of the "How We Built FlinDB" series, documenting how we built a complete embedded database engine for the FLIN programming language.

Series Navigation:
- [056] FlinDB: Zero-Configuration Embedded Database
- [057] Entities, Not Tables: How FlinDB Thinks About Data
- [058] CRUD Without SQL
- [059] Constraints and Validation in FlinDB (you are here)
- [060] Aggregations and Analytics
