"Why not just use SQLite?"
We heard this question from the first day we announced FlinDB. And it is a fair question. SQLite is the most deployed database engine in the world. It is embedded, file-based, zero-configuration (mostly), battle-tested across billions of devices, and open source. If you need an embedded database, SQLite is the obvious choice.
We chose to build our own anyway. This article explains why -- not to disparage SQLite (which is a masterpiece of engineering) but to articulate the specific problems that SQLite cannot solve for FLIN, and the design trade-offs we made in building FlinDB.
## Where They Overlap
FlinDB and SQLite share fundamental characteristics:
| Feature | SQLite | FlinDB |
|---|---|---|
| Embedded | Yes | Yes |
| File-based | Yes | Yes |
| ACID compliant | Yes | Yes |
| WAL support | Yes | Yes |
| Transactions | Yes | Yes |
| Foreign keys | Yes | Yes |
| Zero-server | Yes | Yes |
Both run in-process. Both store data in local files. Both provide ACID guarantees. Both support write-ahead logging. For the use case of "I need a simple embedded database for a single-user application," both would work.
The divergence begins when you look at what each database was designed for.
## The Remote Access Problem
The number one complaint from SQLite users who outgrow it: no remote access.
SQLite is a library. It runs inside your application process. There is no network protocol. There is no TCP port. If you need a mobile app and a web dashboard to access the same data, you cannot point them both at a SQLite file.
The standard workaround is to build a REST API in front of SQLite:
```
// Express + SQLite: 15+ dependencies, weeks of setup
const express = require('express');
const sqlite3 = require('better-sqlite3');

const db = sqlite3('./data.db');
const app = express();
app.use(express.json()); // without this, req.body is undefined

app.get('/api/tasks', (req, res) => {
  const tasks = db.prepare('SELECT * FROM tasks').all();
  res.json(tasks);
});

app.post('/api/tasks', (req, res) => {
  const stmt = db.prepare('INSERT INTO tasks (title) VALUES (?)');
  const result = stmt.run(req.body.title);
  res.status(201).json({ id: result.lastInsertRowid });
});

// ... authentication, validation, error handling, documentation ...
app.listen(3000);
```
This is not SQLite's fault. SQLite was designed to be an application-level database, not a server. But the consequence is that every SQLite application that needs network access must build its own API layer -- duplicating effort across millions of projects.
In FLIN, routes ARE the API:
```
entity Task {
  title: string @required @min(1)
  completed: bool = false
  created_at: timestamp = now()
}

api GET /tasks {
  @doc "List all tasks"
  @returns [Task]
  return json(Task.all())
}

api POST /tasks {
  @doc "Create a new task"
  @body { title: string }
  @returns Task
  task = Task.create(body)
  return json(task, 201)
}
```
Run `flin serve` and you have an HTTP server with a REST API, auto-generated documentation, type validation, and JSON responses. The database and the API are the same thing.
## The Concurrent Writes Problem
SQLite uses database-level locking. One writer at a time. Period.
```
Writer 1: BEGIN TRANSACTION
Writer 2: BEGIN TRANSACTION --> SQLITE_BUSY (database is locked)
Writer 3: BEGIN TRANSACTION --> SQLITE_BUSY (database is locked)
```

WAL mode helps with reads (multiple readers can proceed while one writer writes) but does not help with writes. Under write-heavy load, SQLite becomes a bottleneck.
FlinDB implements row-level locking with deadlock detection:
```
pub struct LockManager {
    row_locks: DashMap<EntityRowKey, LockInfo>,
    wait_graph: RwLock<HashMap<TransactionId, HashSet<TransactionId>>>,
}

impl LockManager {
    pub fn acquire(&self, key: EntityRowKey, txn: TransactionId) -> LockResult {
        if self.would_cause_deadlock(txn, &key) {
            return Err(LockError::DeadlockDetected);
        }
        // Acquire the row-level lock
    }
}
```
Concurrent updates to different rows proceed in parallel. Only concurrent updates to the same row are serialized. For a web application handling multiple simultaneous requests, this is the difference between a responsive API and a bottleneck.
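To make the deadlock check concrete, here is a minimal sketch of cycle detection in a wait-for graph, in Python for brevity. The function name mirrors the Rust sketch above, but the logic here is illustrative, not FlinDB's actual implementation:

```python
# Illustrative wait-for-graph cycle check (hypothetical helper, not
# FlinDB's real code): before transaction `waiter` blocks on a lock held
# by `holder`, adding the edge waiter -> holder must not create a cycle.
def would_cause_deadlock(wait_graph, waiter, holder):
    """Return True if edge waiter -> holder would close a wait cycle."""
    # DFS from holder: if holder already (transitively) waits on waiter,
    # the new edge completes a cycle -- a deadlock.
    stack, seen = [holder], set()
    while stack:
        txn = stack.pop()
        if txn == waiter:
            return True
        if txn not in seen:
            seen.add(txn)
            stack.extend(wait_graph.get(txn, ()))
    return False

# T1 waits on T2, T2 waits on T3; if T3 now waits on T1, that is a cycle.
graph = {"T1": {"T2"}, "T2": {"T3"}}
deadlock = would_cause_deadlock(graph, "T3", "T1")
```

The wait graph only grows when a transaction actually blocks, so the check is cheap on the uncontended fast path.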
## The Time Travel Problem
SQLite does not track history. An UPDATE statement overwrites the previous value. A DELETE statement removes the row. Gone.
Building temporal queries on top of SQLite requires a custom implementation:
```
-- You need to build this yourself:
CREATE TABLE users_history (
    user_id INTEGER,
    field TEXT,
    old_value TEXT,
    new_value TEXT,
    changed_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

CREATE TRIGGER track_user_changes AFTER UPDATE ON users
BEGIN
    INSERT INTO users_history (user_id, field, old_value, new_value)
    VALUES (OLD.id, 'name', OLD.name, NEW.name);
    -- Repeat for every field...
END;
```
This is hundreds of lines of SQL triggers for each table, plus a query function to reconstruct state at a point in time, plus a cleanup mechanism to prevent unbounded history growth. Most teams give up and accept that historical data is lost.
In FlinDB, time travel is built into the storage model:
```
// Get the user's state at any point in time
user_then = User.find(1) @ "2025-06-15"
user_now = User.find(1)

// Compare states
if user_then.email != user_now.email {
  log("Email changed since June 15th")
}

// Get entity history
history = User.find(1).history
```
The `@` operator is not a feature bolted on top. It is a language-level operator that maps to the EAVT (entity, attribute, value, transaction) event store. Every save creates a new version. Every version is preserved. Time travel is an index lookup, not a reconstruction.
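As a mental model, an EAVT-style as-of lookup can be sketched in a few lines of Python. This is an assumed simplification for intuition; FlinDB's real index layout is more involved:

```python
# Assumed EAVT model (not FlinDB's actual storage layout): each saved
# fact is an (entity, attribute, value, tx) tuple, and an as-of query
# keeps the latest value per attribute with tx <= t.
def as_of(eavt_log, entity, t):
    """Reconstruct an entity's state as of transaction time t."""
    state = {}
    for e, attr, value, tx in eavt_log:  # log is in ascending tx order
        if e == entity and tx <= t:
            state[attr] = value          # later facts overwrite earlier ones
    return state

log = [
    (1, "email", "old@example.com", 10),
    (1, "name", "Thales", 10),
    (1, "email", "new@example.com", 20),
]
june_state = as_of(log, 1, 15)  # only facts up to tx 15 are visible
```

Because old facts are never overwritten in place, "history" is just the log filtered by entity, and deletes become retraction facts rather than lost data.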
## The Semantic Search Problem
SQLite has FTS5 for full-text search. It is good for keyword matching. It does not understand meaning.
```
-- SQLite FTS5: keyword matching
SELECT * FROM products_fts WHERE products_fts MATCH 'comfortable chair';
-- Finds: "comfortable office chair"
-- Misses: "ergonomic seating with lumbar support"
```

FlinDB has built-in semantic search with vector embeddings:
```
entity Product {
  name: text
  description: semantic text
}

// Finds products by meaning, not just keywords
results = search "comfortable seating for work" in Product by description
// Returns: "Ergonomic Office Chair with lumbar support"
// Even though no words match exactly
```
The `semantic text` type automatically generates vector embeddings, stores them in an internal vector index, and enables cosine-similarity search. The hybrid search mode combines BM25 keyword ranking with vector similarity using Reciprocal Rank Fusion.
To achieve this with SQLite, you would need: a vector extension (`sqlite-vss` or similar), an embedding generation pipeline (OpenAI API or a local model), a separate index, and custom query logic to merge keyword and semantic results. FlinDB provides all of this in two words: `semantic text`.
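Reciprocal Rank Fusion itself is a small algorithm. Here is a Python sketch of the standard formula; the constant `k = 60` comes from the original RRF paper, and FlinDB's actual constant and weighting are assumptions here:

```python
# Standard Reciprocal Rank Fusion: score(d) = sum over lists of 1/(k + rank).
# k = 60 is the value from the original RRF paper, not a confirmed
# FlinDB default.
def rrf(*rankings, k=60):
    """Fuse ranked lists of doc ids into one ordering."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["chair-2", "desk-9", "chair-1"]    # keyword (BM25) order
vector_hits = ["chair-1", "chair-2", "lamp-3"]  # cosine-similarity order
fused = rrf(bm25_hits, vector_hits)
```

Documents that rank well in both lists accumulate score from both, so they float to the top without any score normalization between BM25 and cosine similarity.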
## The Type Safety Problem
SQLite has notoriously loose typing. A column declared as INTEGER can contain a string. A column declared as TEXT can contain a blob. SQLite's type system is more of a suggestion than an enforcement mechanism. (STRICT tables, added in SQLite 3.37, tighten this, but they are opt-in and limited to SQLite's storage classes.)
```
-- SQLite happily accepts this:
INSERT INTO users (age) VALUES ('not a number');
-- No error. age column contains the string "not a number".
```

FlinDB enforces types at the schema level:
```
entity User {
  name: text
  age: int
  email: text @pattern(email, "^[a-zA-Z0-9+_.-]+@[a-zA-Z0-9.-]+$", "Invalid email")
}

// This fails with a type error:
save User { name: "Thales", age: "not a number", email: "bad" }
// Error 1: age must be int, got text
// Error 2: email does not match pattern: Invalid email
```
FlinDB validates at save time, not query time. Invalid data never enters the database. The constraint system includes check constraints, pattern validation, conditional required fields, immutable fields, and composite unique constraints -- all declarative, all enforced at the engine level.
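A toy Python version of save-time validation shows the idea: check every field against a declarative schema and collect all violations before anything is written. The schema shape and error strings here are invented for illustration; FlinDB's engine is richer than this:

```python
# Toy save-time validator (schema format and error strings are made up
# for illustration, not FlinDB's real engine output).
import re

SCHEMA = {
    "name": {"type": str},
    "age": {"type": int},
    "email": {"type": str, "pattern": r"^[a-zA-Z0-9+_.-]+@[a-zA-Z0-9.-]+$"},
}

def validate(record):
    """Collect every violation so the caller sees all errors at once."""
    errors = []
    for field, rules in SCHEMA.items():
        value = record.get(field)
        if not isinstance(value, rules["type"]):
            errors.append(f"{field} must be {rules['type'].__name__}")
        elif "pattern" in rules and not re.match(rules["pattern"], value):
            errors.append(f"{field} does not match pattern")
    return errors

bad = validate({"name": "Thales", "age": "not a number", "email": "bad"})
ok = validate({"name": "Thales", "age": 30, "email": "t@example.com"})
```

Reporting all violations in one pass, rather than failing on the first, is what makes the two-error output in the example above possible.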
## The Query Language Problem
This is the most fundamental difference. SQLite speaks SQL. FLIN speaks FLIN.
```
-- SQLite
SELECT tasks.*, users.name AS author_name
FROM tasks
JOIN users ON tasks.user_id = users.id
WHERE users.id = ?
ORDER BY tasks.created_at DESC
LIMIT 10;
```

```
// FLIN
Task.where(user: auth.user)
    .include(user)
    .order(created_at: desc)
    .limit(10)
```

SQL is a string embedded in another language. It has its own syntax, its own error messages, its own type system. SQL injection is possible because the query is a string that gets interpolated. ORMs exist solely to translate between the application language and SQL.
FlinDB queries are FLIN expressions. They are type-checked by the compiler. They cannot be injected. They do not need an ORM because there is no impedance mismatch between the query language and the application language.
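The injection point disappears because a structured query keeps user input as data. A small Python sketch of the principle (illustrative only; FLIN's compiler works differently):

```python
# Why a structured query can't be injected (illustrative sketch, not
# FLIN's compiler): the query is a data structure, so user input stays
# a bound value and never becomes part of the query text.
from dataclasses import dataclass

@dataclass(frozen=True)
class Where:
    field: str      # chosen by the programmer, not the user
    value: object   # user input lives here, as data

def to_sql(where):
    """Lower the structured query; values become bound parameters."""
    return f"SELECT * FROM tasks WHERE {where.field} = ?", [where.value]

sql, params = to_sql(Where("user_id", "1; DROP TABLE tasks"))
# The malicious string is only ever a parameter, never query syntax.
```

An engine that executes queries as data structures from the start never has a string-interpolation step, so there is no seam for an attacker to exploit.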
## Performance Comparison
For single-threaded workloads, SQLite and FlinDB perform similarly:
| Operation | SQLite | FlinDB |
|---|---|---|
| Read (single row) | ~10 microseconds | ~15 microseconds |
| Read (1,000 rows) | ~1 ms | ~1.5 ms |
| Write (single row) | ~50 microseconds | ~60 microseconds |
| Full-text search | ~1 ms | ~1 ms |
FlinDB is slightly slower for basic operations because it maintains version history and indexes. The overhead is negligible for application workloads.
For concurrent workloads, FlinDB scales dramatically better:
| Operation | SQLite | FlinDB |
|---|---|---|
| 100 concurrent writes | ~5,000 microseconds (serialized) | ~150 microseconds (parallel) |
| Semantic search | N/A | ~50 ms |
| Time travel query | N/A | ~20 microseconds |
The 33x improvement in concurrent writes comes from row-level locking. SQLite serializes all writes through a single database lock. FlinDB only serializes writes to the same row.
## When to Use Which
Use SQLite when:

- You need a simple embedded database with no network access
- You are building a single-user desktop application
- You are building a mobile app (local storage)
- You are testing or prototyping
- Your workload is read-heavy with minimal writes
- You are building a library, not an application
Use FlinDB when:

- You are building a FLIN application (obviously)
- You need time travel queries
- You need semantic/AI search
- You need concurrent writes
- You want built-in API generation
- You want type safety and validation at the data layer
- You need real-time subscriptions
Use PostgreSQL when:

- You need advanced SQL features (CTEs, window functions)
- You have many thousands of concurrent connections
- You need enterprise-grade replication
- You have a dedicated DBA team
- You are building a data warehouse
## The Honest Assessment
SQLite is better than FlinDB at being a general-purpose embedded SQL database. It has 25 years of battle-testing, billions of deployments, and a test suite with millions of test cases. FlinDB cannot match this maturity.
FlinDB is better than SQLite at being the storage engine for FLIN applications. It speaks the language natively. It provides features that SQLite does not have (temporal versioning, semantic search, real-time subscriptions, row-level locking). It eliminates entire categories of setup that SQLite requires (API layer, ORM, migration system, history tracking).
We did not build FlinDB because SQLite is bad. We built it because FLIN needs a database that was designed for FLIN -- one that shares its philosophy of zero configuration, entity-first design, and temporal awareness. SQLite was designed for a different era and a different paradigm. FlinDB was designed for FLIN.
---
This is Part 14 of the "How We Built FlinDB" series, documenting how we built a complete embedded database engine for the FLIN programming language.
Series Navigation: - [067] Tree Traversal and Integration Testing - [068] FlinDB Hardening for Production - [069] FlinDB vs SQLite: Why We Built Our Own (you are here) - [070] Persistence in the Browser