This is a retrospective. Over the past fourteen articles, we traced the evolution of FLIN's type system from primitive types and inference through union types, generics, traits, tagged unions, pattern matching, destructuring, pipeline operators, tuples, type guards, the never type, bounds, while-let loops, labeled loops, and or-patterns.
Each article told the story of one feature in isolation. This article tells the story of how they fit together -- how features designed in sequence form a coherent system, how design decisions in Session 097 enabled features in Session 155, and how the entire type system serves FLIN's core purpose: making application development simple, safe, and expressive.
The Feature Map
Before looking at connections, here is the complete feature inventory:
| Feature | Session(s) | Category |
|---|---|---|
| Primitive types (int, number, text, bool) | Core | Foundation |
| Special types (time, money, file, semantic) | Core | Foundation |
| Type inference (bidirectional) | Core | Inference |
| Optional types (T?) | Core | Safety |
| Collection types ([T], [K:V]) | Core | Data |
| Entity types | Core | Data + Persistence |
| Type coercion (int->number, etc.) | Core | Ergonomics |
| Elvis operator (?:) | 097 | Ergonomics |
| Destructuring (array, entity, nested) | 097-098 | Ergonomics |
| Union types (T \| U) | 100 | Expressiveness |
| Slicing (list[1:5:2]) | 100 | Ergonomics |
| Generic types (\<T\>) | 101 | Polymorphism |
| Traits and impl blocks | 133-136 | Constraints |
| Tagged unions (enum with data) | 145 | Expressiveness |
| Pattern matching (match) | 145-157 | Control flow |
| Exhaustiveness checking | 136, 147 | Safety |
| Never type | 136, 147 | Safety |
| Pipeline operator (\|>) | 150 | Ergonomics |
| Where clauses | 144, 150 | Constraints |
| Tuples | 142 | Data |
| Type guards (is) | 120 | Safety |
| While-let loops | 152 | Control flow |
| Break with value | 153 | Control flow |
| Labeled loops | 154 | Control flow |
| Or-patterns | 155 | Ergonomics |
Twenty-five features across approximately sixty sessions. Each one designed, implemented, tested, and documented.
The Dependency Graph
These features are not independent. They form a dependency graph where later features rely on earlier ones:
```
Primitives + Inference
|
+-- Optional types
|   |
|   +-- Type guards (is)
|   |   |
|   |   +-- Type narrowing
|   |   |
|   |   +-- Exhaustiveness checking
|   |
|   +-- Elvis operator (?:)
|   +-- While-let loops
|
+-- Entity types
|   |
|   +-- Destructuring
|
+-- Collection types
|   |
|   +-- Slicing
|   +-- Destructuring
|   +-- Pipeline operator
|
+-- Union types
|   |
|   +-- Type narrowing
|   +-- Tagged unions
|   |   |
|   |   +-- Pattern matching
|   |   |   |
|   |   |   +-- Or-patterns
|   |   |   +-- Exhaustiveness checking
|   |   |   +-- While-let patterns
|   |   |
|   |   +-- Never type
|   |
|   +-- Generic types
|       |
|       +-- Traits
|       |   |
|       |   +-- Generic bounds
|       |   +-- Where clauses
|       |
|       +-- Tuples
|
+-- Control flow
    |
    +-- Break with value
    +-- Labeled loops
    +-- While-let
```
Every line in this graph represents a design dependency: the feature below could not exist without the feature above. Tagged unions require union types (because a generic enum like Result<T, E> uses the union type infrastructure). Pattern matching requires tagged unions (because matching on enum variants requires the variant data representation). Exhaustiveness requires pattern matching and the never type.
How Design Decisions Propagate
Some early design decisions had consequences that were not apparent until much later.
The Vec Decision
In Session 100, we represented union types as Union(Vec<FlinType>) -- a flat vector of members -- rather than a nested binary Union(Box<FlinType>, Box<FlinType>). This seemed like a minor implementation detail at the time. But it had cascading effects:
- Or-patterns (Session 155) reused the same Vec approach, because the pattern A | B | C mirrors the type A | B | C.
- Exhaustiveness checking (Session 147) could iterate over union members as a flat list, simplifying the algorithm.
- Type subtraction -- removing a type from a union -- was a simple retain operation on the vector.
If we had used a binary representation, each of these features would have needed recursive handling of nested unions.
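To make the contrast concrete, here is a minimal Rust sketch of a Vec-based union with retain-based type subtraction. This is illustrative only -- the FlinType variants and the subtract function are assumptions for the sketch, not FLIN's actual compiler code:

```rust
// Illustrative sketch: a simplified FlinType with a Vec-based union.
#[derive(Clone, Debug, PartialEq)]
enum FlinType {
    Int,
    Text,
    Bool,
    Union(Vec<FlinType>),
}

// Type subtraction: removing a member from a flat union is a single
// `retain` pass; no recursion into nested unions is needed.
fn subtract(ty: FlinType, removed: &FlinType) -> FlinType {
    match ty {
        FlinType::Union(mut members) => {
            members.retain(|m| m != removed);
            if members.len() == 1 {
                members.pop().unwrap() // a one-member union collapses
            } else {
                FlinType::Union(members)
            }
        }
        other => other,
    }
}

fn main() {
    // Narrowing int | text | bool by `is int` in the negative branch:
    let u = FlinType::Union(vec![FlinType::Int, FlinType::Text, FlinType::Bool]);
    let narrowed = subtract(u, &FlinType::Int);
    assert_eq!(
        narrowed,
        FlinType::Union(vec![FlinType::Text, FlinType::Bool])
    );
}
```

With a binary Box-based representation, the same operation would need to rebuild the tree recursively and handle the degenerate one-arm cases at every level.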
The Separate DestructuringDecl Decision
In Session 097, we created Stmt::DestructuringDecl as a separate statement type rather than modifying VarDecl. This seemed conservative at the time -- a way to avoid touching 190 call sites.
But it paid dividends later. When while-let loops (Session 152) needed pattern matching in loop conditions, they could use the same Pattern enum without any interaction with VarDecl. When for-loop destructuring was added, it used the same Pattern infrastructure. The separate type meant that patterns were a standalone concept, usable in any context.
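As a sketch of why a standalone pattern type composes well, consider this minimal Rust illustration. The Pattern variants and the helper function are hypothetical, not FLIN's actual AST:

```rust
// Illustrative sketch: a standalone Pattern type that several statement
// forms can share, rather than logic baked into one declaration node.
#[derive(Debug)]
enum Pattern {
    Binding(String),     // x
    Array(Vec<Pattern>), // [a, b, c]
    Wildcard,            // _
}

// Because Pattern is its own concept, any construct that binds names --
// destructuring declarations, while-let conditions, for-loop headers --
// can call one shared routine instead of duplicating the traversal.
fn collect_bindings(pattern: &Pattern, out: &mut Vec<String>) {
    match pattern {
        Pattern::Binding(name) => out.push(name.clone()),
        Pattern::Array(items) => {
            for item in items {
                collect_bindings(item, out);
            }
        }
        Pattern::Wildcard => {}
    }
}

fn main() {
    // The same pattern value could sit in a destructuring declaration
    // or a while-let head; the binding logic does not care.
    let p = Pattern::Array(vec![
        Pattern::Binding("first".into()),
        Pattern::Wildcard,
        Pattern::Binding("rest".into()),
    ]);
    let mut names = Vec::new();
    collect_bindings(&p, &mut names);
    assert_eq!(names, vec!["first".to_string(), "rest".to_string()]);
}
```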
The Desugaring Approach
In Session 150, the pipeline operator was implemented by desugaring to function calls at parse time. No new AST node. No new type checker rules. No new bytecode.
This approach was then applied to other features. While-let desugars to a loop with a pattern check. Or-patterns desugar to a series of checks with a shared body. Break with value desugars to a store-and-jump sequence.
The desugaring philosophy kept the compiler core small. The parser handles complexity; the downstream passes handle simplicity.
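A minimal sketch of what parse-time pipeline desugaring looks like, assuming a toy expression AST (the types and names here are illustrative, not FLIN's parser):

```rust
// Illustrative sketch: `arg |> f(b)` becomes the ordinary call `f(arg, b)`
// at parse time, so later passes never see a pipeline node.
#[derive(Clone, Debug, PartialEq)]
enum Expr {
    Var(String),
    Call { callee: String, args: Vec<Expr> },
}

fn desugar_pipeline(lhs: Expr, rhs: Expr) -> Expr {
    match rhs {
        // `x |> f(a, b)` => `f(x, a, b)`: prepend lhs as the first argument.
        Expr::Call { callee, mut args } => {
            args.insert(0, lhs);
            Expr::Call { callee, args }
        }
        // `x |> f` => `f(x)`
        Expr::Var(name) => Expr::Call { callee: name, args: vec![lhs] },
    }
}

fn main() {
    // data |> filter(pred)  desugars to  filter(data, pred)
    let piped = desugar_pipeline(
        Expr::Var("data".into()),
        Expr::Call { callee: "filter".into(), args: vec![Expr::Var("pred".into())] },
    );
    assert_eq!(
        piped,
        Expr::Call {
            callee: "filter".into(),
            args: vec![Expr::Var("data".into()), Expr::Var("pred".into())],
        }
    );
}
```

The type checker then sees only a call expression, which it already knows how to check.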
The Coherence Test
A type system is coherent if its features interact predictably. Here are several interactions that work correctly because of deliberate design:
Generic + Union + Pattern Matching
```
enum Result<T, E> {
    Ok(T),
    Err(E)
}

fn handle<T: Printable, E: Printable>(result: Result<T, E>) {
    match result {
        Ok(value) -> print(value.to_text())
        Err(error) -> print(error.to_text())
    }
}
```
This uses generics, trait bounds, tagged unions, and pattern matching simultaneously. The compiler:
1. Resolves the generic parameters T and E
2. Validates the Printable bounds on both
3. Checks the match is exhaustive (Ok and Err cover all variants)
4. Narrows T inside the Ok arm and E inside the Err arm
5. Verifies that .to_text() is available (via the Printable bound)
Five features interacting correctly.
Pipeline + Destructuring + Type Guards
```
data: [int | text] = [1, "hello", 2, "world", 3]
numbers = data |> filter(x => x is int) |> map(x => x * 2)
[first, second, ...rest] = numbers
```
Pipeline feeds data through transformations. Type guard (is int) narrows the union type in the filter. Destructuring unpacks the result. The type of first is int -- the compiler traced the type through three features.
While-Let + Tagged Union + Break Value
```
enum Token {
    Number(int),
    Text(text),
    End
}

fn find_first_number(tokens: [Token]) -> int? {
    index = 0
    while let token = tokens[index] {
        match token {
            Number(n) -> break n
            Text(_) -> { index++; continue }
            End -> break
        }
        index++
    }
}
```
While-let iterates. Pattern matching dispatches on the token variant. Break with value returns the found number. The result type is int? because the loop might not find a number.
Labeled Loop + Or-Pattern + Exhaustiveness
```
enum Priority { Critical, High, Medium, Low }

'scan: for task in tasks {
    match task.priority {
        Critical | High -> {
            urgent_tasks.push(task)
            if urgent_tasks.len >= max {
                break 'scan
            }
        }
        Medium | Low -> continue
    }
}
```
Or-patterns combine priority levels. Labeled break exits when enough urgent tasks are found. The match is exhaustive because Critical | High and Medium | Low cover all four variants.
The Implementation Numbers
At the end of Session 157, FLIN's type system implementation comprised:
| Component | Approximate Lines |
|---|---|
| FlinType enum and operations | 800 |
| Type inference engine | 1,200 |
| Type compatibility checker | 600 |
| Pattern matching type checking | 500 |
| Exhaustiveness checker | 400 |
| Trait registry and validation | 500 |
| Generic type substitution | 300 |
| Error message generation | 400 |
| Total type system | ~4,700 |
The test suite grew proportionally:
| Category | Test Count |
|---|---|
| Type inference tests | ~200 |
| Type compatibility tests | ~150 |
| Pattern matching tests | ~100 |
| Generic type tests | ~80 |
| Trait bound tests | ~50 |
| Exhaustiveness tests | ~40 |
| Integration (end-to-end) tests | ~450 |
| Total | ~1,070 type-system-related tests |
Every feature was tested at the unit level (parser, type checker, code gen independently) and at the integration level (full compile-and-run tests).
What We Would Do Differently
No design survives contact with reality perfectly. A few things we would reconsider:
Type aliases could be more powerful. FLIN's type Option<T> = T? is a simple alias. It does not create a new nominal type. This means Option<int> and int? are interchangeable -- which is convenient but loses the semantic distinction. A future version might support nominal type aliases that create truly distinct types.
Trait composition could use + in more places. Currently, T: A + B works in bounds, but you cannot write type Combined = A + B to combine traits. This is a gap that occasionally requires workarounds.
Error recovery in the type checker could be better. When the type checker encounters an error, it sometimes stops checking subsequent expressions in the same block. Continuing past errors and reporting multiple diagnostics would give developers a more complete picture.
Incremental type checking is not implemented. Every change re-checks the entire file. For small files, this is instant. For large programs, it could become a bottleneck. Incremental checking -- only re-checking expressions affected by a change -- is a future optimization.
What We Got Right
Several early decisions proved exactly right:
Bidirectional inference. Forward inference (value determines type) handles 90% of cases. Backward inference (context determines type) handles the remaining 10%. Together, they eliminate nearly all explicit type annotations.
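As a rough illustration of the forward/backward split, here is a toy sketch in Rust. The types and rules are simplified assumptions for the sketch, not FLIN's actual inference engine:

```rust
// Illustrative sketch of bidirectional inference: `infer` synthesizes a
// type from a value (forward); `check` pushes an expected type into an
// expression (backward), e.g. an empty list adopting its context's type.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Int,
    Number,
    List(Box<Ty>),
    Unknown,
}

#[derive(Debug)]
enum Expr {
    IntLit(i64),
    EmptyList,
}

// Forward: the value alone determines the type.
fn infer(e: &Expr) -> Ty {
    match e {
        Expr::IntLit(_) => Ty::Int,
        Expr::EmptyList => Ty::List(Box::new(Ty::Unknown)), // no element info
    }
}

// Backward: the context supplies the expected type.
fn check(e: &Expr, expected: &Ty) -> bool {
    match (e, expected) {
        // An empty list takes on whatever list type the context demands.
        (Expr::EmptyList, Ty::List(_)) => true,
        // int < number: an int literal checks against either.
        (Expr::IntLit(_), Ty::Number) | (Expr::IntLit(_), Ty::Int) => true,
        _ => infer(e) == *expected,
    }
}

fn main() {
    assert_eq!(infer(&Expr::IntLit(3)), Ty::Int);               // forward
    assert!(check(&Expr::EmptyList, &Ty::List(Box::new(Ty::Int)))); // backward
    assert!(check(&Expr::IntLit(3), &Ty::Number));              // subtyping
    assert!(!check(&Expr::EmptyList, &Ty::Int));                // mismatch
}
```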
The type hierarchy with int < number. Having int as a subtype of number means arithmetic works naturally. Adding an int to a number produces a number. No explicit casts needed.
Nominal traits over structural interfaces. Explicit impl blocks make trait relationships visible and searchable. Error messages can name the specific trait that is missing. The small cost (writing impl blocks) pays for itself in clarity.
Exhaustiveness as an error, not a warning. Making non-exhaustive matches a hard error catches bugs at compile time that would otherwise surface in production. Every developer who adds a variant to an enum is guided by the compiler to update every match.
Desugar at the parser level. Pipeline operators, while-let, and or-patterns all desugar to simpler constructs before the type checker sees them. This keeps the type checker focused on fundamental type operations and avoids feature-specific special cases.
The Philosophy in Retrospect
Looking back over the entire type system arc, a philosophy emerges: make the type system invisible when it can be, and visible when it must be.
Invisible: type inference means developers rarely write type annotations. Automatic coercion means int-to-number conversion just works. Optional propagation means null safety does not require ceremony.
Visible: union types explicitly declare what a value can be. Trait bounds explicitly declare what a generic type must support. Exhaustiveness checking explicitly requires handling every case. Error messages explicitly state what went wrong and how to fix it.
The balance point is different for different features. Inference should be invisible -- the developer should not think about types for most code. Exhaustiveness should be visible -- the developer should know that they handled every case.
This balance is what makes FLIN's type system suitable for application developers. It does not demand the level of type annotation that Rust or Haskell demand. It does not accept the level of type uncertainty that JavaScript or Python accept. It sits in a middle ground where types are present but unobtrusive, where safety is enforced but not burdensome.
That middle ground was the goal from the first session. One hundred and fifty sessions later, we achieved it.
What Comes Next
The type system arc is complete. The next arc of the "How We Built FLIN" series moves to a different domain: FLIN's temporal model. Time travel queries, entity history, the @ operator, temporal keywords, and the database infrastructure that makes "show me this record as it was last Tuesday" a one-line operation.
The type system will reappear throughout -- temporal operations return typed results, history queries produce typed lists, and the @ operator is type-checked like any other expression. But the focus shifts from how FLIN understands types to how FLIN understands time.
---
This is Part 45 of the "How We Built FLIN" series, documenting how a CEO in Abidjan and an AI CTO designed and implemented a programming language from scratch.
Series Navigation:
- [43] While-Let Loops and Break With Value
- [44] Labeled Loops and Or-Patterns
- [45] Advanced Type Features: The Complete Picture (you are here)
- [46] FLIN's Temporal Model (coming next)