January 1, 2026. New Year's Day. While most of the world was recovering from celebrations, a Cargo.toml file was created in Abidjan, Côte d'Ivoire. By the end of January 2, FLIN had a lexer that tokenized 42 keywords and 60 token types, a recursive descent parser that built abstract syntax trees, a type checker with Hindley-Milner inference, a bytecode code generator, and a stack-based virtual machine that executed programs. Ten sessions. Forty-eight hours. The entire compiler pipeline.
This is the story of those ten sessions -- the founding sprint that established FLIN's architecture and proved that the CEO + AI CTO model could produce a real programming language, not just a prototype.
Session 001: The Skeleton (January 1)
Every language begins with tokens. Before you can parse expressions or generate bytecode, you need to break source text into meaningful units -- identifiers, keywords, operators, literals. Session 001 did exactly that, plus the entire project structure.
The session lasted approximately 45 minutes. In that time, the following was created:
```rust
// 42 keywords defined in Session 001
pub enum Keyword {
    Entity, Save, Delete, Where, Find, All, First, Count, Order,
    Text, Int, Float, Bool, Time, File, Money, Semantic,
    If, Else, For, In, Match, Route,
    Now, Today, Yesterday, Tomorrow, LastWeek, LastMonth, LastYear,
    Ask, Search, By, Limit,
    True, False, None,
    Event, Params, Body, Asc, Desc,
}
```

Fourteen files were created. One thousand lines of Rust. Twenty-five unit tests for keyword lookup. The project structure established a pattern that would survive 300 more sessions without modification: src/lexer/, src/parser/, src/typechecker/, src/codegen/, src/vm/, src/database/, src/server/, src/error/.
The architectural decision to separate the compiler into distinct modules -- each with its own directory and public interface -- was made in this first session. It was not revised. Good architecture, established early, compounds.
One notable detail: Session 001 ended with cargo check pending because Rust was not yet installed on the system. The token definitions were written blind, based on the language specification alone. They compiled on first attempt in Session 002.
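The keyword lookup those twenty-five tests exercise is, at its core, a small string-to-enum table. A minimal sketch of the idea -- `lookup_keyword` and the lowercase spellings are assumptions for illustration, not FLIN's actual internals:

```rust
#[derive(Debug, PartialEq)]
enum Keyword {
    Entity,
    Save,
    Find,
    // ... remaining keywords elided for brevity
}

// Map an identifier's text to a keyword, if it is one.
// Anything not in the table is an ordinary identifier.
// (Function name and spellings are hypothetical.)
fn lookup_keyword(ident: &str) -> Option<Keyword> {
    match ident {
        "entity" => Some(Keyword::Entity),
        "save" => Some(Keyword::Save),
        "find" => Some(Keyword::Find),
        _ => None, // plain identifier, not a keyword
    }
}

fn main() {
    assert_eq!(lookup_keyword("save"), Some(Keyword::Save));
    assert_eq!(lookup_keyword("user"), None);
    println!("keyword lookup ok");
}
```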
Sessions 002-004: The Lexer (January 2, Morning)
Session 002 installed Rust and verified that Session 001's code compiled cleanly. Then the real lexer work began.
The scanner -- the component that reads source characters and produces tokens -- was implemented across Sessions 002-004. The design followed a standard single-pass approach: a Scanner struct holds the source text, current position, and a character buffer. It advances through the source one character at a time, recognizing patterns and emitting tokens.
```rust
pub struct Scanner {
    source: Vec<char>,
    start: usize,
    current: usize,
    line: usize,
    column: usize,
    tokens: Vec<Token>,
}

impl Scanner {
    pub fn scan_tokens(&mut self) -> Result<Vec<Token>, ScanError> {
        // ... the single-pass scan loop
    }
}
```
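The helper methods that drive a single-pass scanner of this shape follow a standard pattern. A sketch under assumed names (the post does not show FLIN's actual helpers, so `advance`, `peek`, and `match_char` here are illustrative):

```rust
struct Scanner {
    source: Vec<char>,
    start: usize,   // start of the lexeme being scanned
    current: usize, // index of the next character to consume
}

impl Scanner {
    fn is_at_end(&self) -> bool {
        self.current >= self.source.len()
    }

    // Consume and return the next character.
    fn advance(&mut self) -> char {
        let c = self.source[self.current];
        self.current += 1;
        c
    }

    // Look at the next character without consuming it.
    fn peek(&self) -> Option<char> {
        self.source.get(self.current).copied()
    }

    // Consume the next character only if it matches `expected`,
    // e.g. to combine '=' '=' into a single equality token.
    fn match_char(&mut self, expected: char) -> bool {
        if self.peek() == Some(expected) {
            self.current += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut s = Scanner { source: "==".chars().collect(), start: 0, current: 0 };
    assert_eq!(s.advance(), '=');
    assert!(s.match_char('='));
    assert!(s.is_at_end());
    println!("scanner helpers ok");
}
```

Every token recognizer (numbers, strings, identifiers) is built from these few primitives, which is why a scanner like this stays easy to extend.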
Session 003 tackled a challenge unique to FLIN: the view mode. FLIN source files contain both logic (variables, functions, control flow) and HTML-like view templates. The lexer needed to switch between "code mode" and "view mode" depending on context. When it encounters the opening tag of a view block, it shifts to scanning HTML tags, attributes, and text content. When it encounters {, it shifts back to code mode for embedded expressions.
This dual-mode scanning is what makes FLIN's single-file approach possible. A .flin file is not two languages glued together -- it is one language with two scanning modes, unified by a single token stream.
By the end of Session 004, the lexer was complete. It handled string literals (including template literals), numeric literals (integers and floats), all operators, all keywords, and the full view mode syntax. The test count stood at 150.
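The dual-mode design can be sketched as a mode flag on the scanner. The enum, field names, and trigger characters below are illustrative assumptions, not FLIN's actual internals:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum ScanMode {
    Code, // normal lexing: keywords, operators, literals
    View, // HTML-like lexing: tags, attributes, text content
}

struct ModalScanner {
    mode: ScanMode,
}

impl ModalScanner {
    // On a view-opening character, switch to View mode; on '{'
    // inside a view, drop back to Code mode for the embedded
    // expression. (Trigger characters here are hypothetical.)
    fn on_char(&mut self, c: char) {
        match (self.mode, c) {
            (ScanMode::Code, '<') => self.mode = ScanMode::View,
            (ScanMode::View, '{') => self.mode = ScanMode::Code,
            _ => {}
        }
    }
}

fn main() {
    let mut s = ModalScanner { mode: ScanMode::Code };
    s.on_char('<');
    assert_eq!(s.mode, ScanMode::View);
    s.on_char('{');
    assert_eq!(s.mode, ScanMode::Code);
    println!("mode switching ok");
}
```

Because both modes feed the same token stream, downstream stages never need to know which mode produced a given token.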
Sessions 005-006: The Parser (January 2, Midday)
With tokens in hand, the parser transforms them into an abstract syntax tree -- the structural representation of the program that the rest of the compiler operates on.
FLIN's parser uses Pratt parsing for expressions, a technique that assigns binding powers to operators and handles precedence naturally. Statements are parsed with more traditional recursive descent.
Session 005 built the core parser infrastructure: the AST node definitions, the expression parser, and the statement parser. Session 006 added control flow (if/else, for loops, match expressions) and the Pratt parser for operator precedence.
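The core of Pratt parsing is a loop driven by binding powers. A minimal sketch over single-digit operands (heavily simplified; FLIN's actual parser builds AST nodes over its full grammar rather than evaluating):

```rust
// Parse (and, for this sketch, evaluate) an expression from a
// stream of single-char tokens such as "1+2*3".
fn expr(tokens: &[char], pos: &mut usize, min_bp: u8) -> i64 {
    // An operand is a single digit in this sketch.
    let mut lhs = tokens[*pos].to_digit(10).unwrap() as i64;
    *pos += 1;

    while *pos < tokens.len() {
        let op = tokens[*pos];
        // Binding powers: '*' binds tighter than '+'.
        let (l_bp, r_bp) = match op {
            '+' => (1, 2),
            '*' => (3, 4),
            _ => break,
        };
        if l_bp < min_bp {
            break; // this operator belongs to an enclosing call
        }
        *pos += 1;
        let rhs = expr(tokens, pos, r_bp);
        lhs = match op {
            '+' => lhs + rhs,
            '*' => lhs * rhs,
            _ => unreachable!(),
        };
    }
    lhs
}

fn main() {
    let toks: Vec<char> = "1+2*3".chars().collect();
    let mut pos = 0;
    // '*' binds tighter, so this parses as 1 + (2 * 3) = 7.
    assert_eq!(expr(&toks, &mut pos, 0), 7);
    println!("pratt ok");
}
```

Adding an operator is just one more row in the binding-power table, which is why this style scales well as a language grows.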
```rust
// The AST defined in Session 005
pub enum Expr {
    Literal(Literal),
    Identifier(String),
    Binary { left: Box<Expr>, op: BinOp, right: Box<Expr> },
    Unary { op: UnaryOp, expr: Box<Expr> },
    Call { callee: Box<Expr>, args: Vec<Expr> },
    Member { object: Box<Expr>, property: String },
    Index { object: Box<Expr>, index: Box<Expr> },
    Lambda { params: Vec<Param>, body: Box<Expr> },
    Temporal { expr: Box<Expr>, time: Box<Expr> },
    // ... 20+ more variants
}

pub enum Stmt {
    VarDecl { name: String, value: Option<Expr> },
    // ... variants for control flow, routes, and view nodes
}
```

The parser's handling of the view syntax deserves attention. When the parser encounters the start of a view block in the token stream, it does not switch to a separate HTML parser. Instead, ViewNode is simply another variant of Stmt. An element tag becomes a ViewNode::Element with attributes parsed as expressions. A {for todo in todos} inside a view becomes a ViewNode::For containing child ViewNode elements.

This design -- treating view nodes as first-class statements -- means that FLIN's view syntax has access to the full expression language. There is no "template language" with restricted capabilities. Any expression you can write in FLIN logic, you can write inside a {} in a view.

By the end of Session 006, the parser handled the complete FLIN syntax. The test count was at 200.

Sessions 007-008: The Type Checker (January 2, Afternoon)

Session 007 built the type checker foundation. Session 008 added Hindley-Milner type inference.

Most programming languages that claim to have "type inference" actually have limited local inference -- they can figure out that x = 5 means x is an integer, but they need explicit type annotations for function parameters. FLIN's type checker goes further, using constraint-based inference to propagate type information through expressions and across function boundaries.

```rust
pub enum FlinType {
    Int,
    Float,
    Bool,
    Text,
    Time,
    Money,
    File,
    Semantic,
    None,
    List(Box<FlinType>),
    Map(Box<FlinType>, Box<FlinType>),
    Entity(String),
    Function { params: Vec<FlinType>, returns: Box<FlinType> },
    Optional(Box<FlinType>),
    TypeVar(usize), // For Hindley-Milner inference
}
```

The TypeVar variant is the key to inference. When the type checker encounters a variable whose type is not yet known, it assigns a fresh type variable. As it processes expressions, it generates constraints (e.g., "the left side of + must be the same type as the right side"). It then solves these constraints using unification, resolving type variables to concrete types.

The practical result: FLIN developers rarely write type annotations. The type checker infers types from usage, reports clear errors when types conflict, and validates that all operations are type-safe.

Session 008's implementation of Hindley-Milner was one of the more technically challenging moments in the early sprint. Type inference algorithms are subtle -- they need to handle polymorphism, recursive types, and occurs-check validation. Getting this right in a single session required precise specification from the CEO and careful implementation from the AI CTO.

Session 009: The Code Generator (January 2, Late Afternoon)

The code generator translates the type-checked AST into bytecode -- a sequence of instructions that the virtual machine can execute. FLIN uses a custom bytecode format with 75+ opcodes covering arithmetic, comparisons, control flow, function calls, list and map operations, entity operations, and view rendering.

```rust
pub enum OpCode {
    // Stack
    LoadConst, Pop, Dup, Swap,
    // Variables
    LoadLocal, StoreLocal, LoadGlobal, StoreGlobal,
    // Arithmetic
    Add, Sub, Mul, Div, Mod, Neg, Pow,
    // Comparison
    Eq, NotEq, Lt, LtEq, Gt, GtEq,
    // Control flow
    Jump, JumpIfTrue, JumpIfFalse, Call, Return,
    // Entity operations
    Save, Delete, QueryAll, QueryFind, QueryWhere,
    // Temporal
    AtVersion, AtTime, AtDate, History,
    // View
    CreateElement, SetAttribute, BindText, BindHandler,
    // ... 40+ more opcodes
}
```

The code generator walks the AST in a single pass, emitting bytecode instructions into a contiguous byte buffer. Constants (strings, large numbers) go into a constant pool referenced by index. Local variables are assigned stack slots at compile time.

The emit_expr function handles expressions; emit_stmt handles statements; emit_view handles view nodes. Each is a straightforward recursive traversal.

Session 009 was efficient: the code generator leverages the AST structure directly, so each AST node maps to a small, predictable sequence of opcodes.

Session 010: The Virtual Machine (January 2, Evening)

Session 010 is where FLIN came alive. The virtual machine -- the engine that executes bytecode -- was built in approximately 35 minutes. Two thousand eight hundred and fifty lines of Rust. Thirty-two new tests. The counter example ran.

The VM is a stack-based interpreter. It maintains an operand stack for intermediate values, a call stack for function frames, and a heap for dynamically allocated objects (strings, lists, maps, entities).

```rust
pub struct VM {
    stack: Vec<Value>,
    frames: Vec<CallFrame>,
    ip: usize,
    globals: HashMap<String, Value>,
    heap: Vec<HeapObject>,
    free_list: Vec<usize>,
    bytes_allocated: usize,
    gc_threshold: usize,
    output: Vec<String>,
    debug: bool,
}
```

The execution loop is the heart of the VM: fetch an opcode, decode it, execute the corresponding operation, advance the instruction pointer, repeat. The loop is implemented as a large match statement -- each opcode maps to a block of Rust code that manipulates the stack and heap.

The moment the counter example executed successfully was the moment FLIN became real. Not a specification. Not a design document. A working programming language.

```
count = 0
count++
```

This trivial program compiles to:

```
LoadInt 0           ; Push 0
StoreGlobal $count  ; count = 0
LoadGlobal $count   ; Push count
Dup                 ; Duplicate for assignment
Incr                ; Increment
StoreGlobal $count  ; Store incremented value
Halt
```

Six instructions. The VM executed them correctly on the first attempt. Two hundred and fifty-one tests passed.

The Architecture That Survived

The most remarkable aspect of Sessions 001-010 is not the speed. It is the durability. The architecture established in these ten sessions -- the module structure, the AST design, the opcode set, the VM stack model -- survived 291 more sessions without fundamental changes.

The lexer gained new tokens (for pattern matching, pipeline operators, generics) but its scanning architecture remained the same. The parser gained new statement types (routes, guards, middleware) but its Pratt parsing core was unchanged. The VM gained new opcodes (for temporal queries, security functions, file operations) but its stack-based execution model was never revised.

```rust
// The execution flow established in Session 010
// Still the same flow in Session 301
//
// Source -> Lexer -> Tokens -> Parser -> AST
//        -> TypeChecker -> TypedAST -> CodeGen -> Bytecode
//        -> VM -> ExecutionResponses
```

This durability was not accidental. The early sessions made deliberate architectural choices that created room for growth. The FlinType enum, for example, was designed with variants for every type FLIN would eventually need, including optional types, type variables for inference, and entity references. Later sessions added union types, generic types, and tagged unions as new variants.

The Speed Question

People will ask how a compiler pipeline was built in ten sessions across two days. The question deserves a direct answer.

Compiler construction is a solved problem. The algorithms for lexing (finite automata), parsing (recursive descent, Pratt parsing), type checking (Hindley-Milner), code generation (tree walking), and virtual machine execution (stack machines) are decades old and thoroughly documented. Textbooks describe them in detail. Thousands of implementations exist in the open-source ecosystem.

What an AI CTO brings is the ability to implement these well-known algorithms quickly and correctly. Claude did not invent a new parsing technique. It implemented a Pratt parser -- the same technique used by dozens of languages -- in Rust, with FLIN's specific grammar. The implementation is standard. The speed comes from the AI's ability to produce large volumes of correct code without the overhead of context-switching, environment setup, or typing speed.

The human contribution was not implementation. It was specification. Juste decided that FLIN would use a stack-based VM rather than a register-based one. He decided that the type system would include Hindley-Milner inference. He decided that view syntax would be first-class statements, not a separate template language. These decisions shaped the architecture; the AI executed them.

Ten sessions. Forty-eight hours. A complete compiler pipeline. It is fast by any measure. But it is not magic. It is the result of well-understood computer science, executed by an AI capable of producing thousands of lines of correct Rust per hour, directed by a human who knew what language he wanted to build.

The magic, if there is any, is in the decision to try.

---

This is Part 197 of the "How We Built FLIN" series, documenting how a CEO in Abidjan and an AI CTO built a programming language from scratch.

Series Navigation:
- [196] 301 Sessions in 42 Days: The Complete Timeline
- [197] The Day We Built the Lexer, Parser, and VM (you are here)
- [198] The FlinUI Sprint: 70 Components Overnight