Functional programming in a web language is not a luxury -- it is a necessity. Every event handler is a callback. Every array transformation is a higher-order function. Every promise chain is a sequence of lambdas. If writing anonymous functions is verbose or awkward, developers fight the language instead of using it.
FLIN's arrow functions, implemented in Session 093 and refined with type inference in Session 141, give developers the concise syntax they expect from modern JavaScript while adding the type safety that JavaScript lacks. Ten syntactic forms cover every use case from simple callbacks to curried function composition, and a constraint-based inference system figures out the types so developers do not have to annotate them.
The Ten Forms
Arrow functions in FLIN support every common pattern:
```
// 1. Single parameter (no parentheses needed)
double = x => x * 2

// 2. Multiple parameters
add = (a, b) => a + b

// 3. Zero parameters
getRandom = () => 42

// 4. Single parameter with parentheses (optional)
square = (x) => x * x

// 5. Expression body (implicit return)
multiply = (a, b) => a * b

// 6. Block body (explicit statements)
greet = (name) => {
    message = "Hello, " + name
    message
}

// 7. Three or more parameters
sum3 = (a, b, c) => a + b + c

// 8. Nested arrows (currying)
makeAdder = x => y => x + y

// 9. As callback arguments
numbers = [1, 2, 3, 4, 5]
doubled = numbers.map(x => x * 2)

// 10. Async arrow functions
fetchUser = async (id) => {
    response = await http.get(`/api/users/${id}`)
    response.json
}
```
The syntax is deliberately identical to JavaScript arrow functions. A developer who knows JavaScript can write FLIN arrow functions without learning anything new. This is not accidental -- it is a design principle. FLIN aims to feel familiar while providing capabilities that JavaScript cannot.
Parser Implementation
The most interesting challenge was teaching the parser to distinguish between a parenthesized expression and a parameter list. When the parser sees (, it does not yet know whether it is looking at (a + b) (a grouped expression) or (a, b) => ... (the start of a lambda).
The solution is lookahead-based disambiguation:
```rust
// When we see '(' in expression position:
// 1. Save the current position
// 2. Try to parse as parameter list
// 3. If we see ')' followed by '=>', it's a lambda
// 4. Otherwise, restore position and parse as grouped expression
TokenKind::LeftParen => {
    let checkpoint = self.position;

    // Try parsing as lambda parameters
    if let Some(params) = self.try_parse_lambda_params() {
        if self.check(TokenKind::FatArrow) {
            self.advance(); // consume =>
            let body = self.parse_lambda_body()?;
            return Ok(Expr::Lambda { params, body, span });
        }
    }

    // Not a lambda - restore and parse as grouped expression
    self.position = checkpoint;
    self.parse_grouped_expression()
}
```
The try_parse_lambda_params method attempts to parse a comma-separated list of identifiers. If it succeeds and the next token is =>, the parser commits to the lambda interpretation. If it fails at any point -- encountering an operator, a complex expression, or any token that cannot be a parameter name -- the parser backtracks to the checkpoint and tries the grouped expression interpretation.
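The checkpoint-and-backtrack dance can be sketched in isolation. The following is a minimal, self-contained model, not FLIN's actual parser: `Token`, `Parser`, and `is_lambda_head` are illustrative names, and a real parameter list would also carry spans and type annotations.

```rust
// Checkpoint-and-backtrack disambiguation over a simplified token stream.
#[derive(Clone, PartialEq)]
enum Token {
    LParen,
    RParen,
    Ident(String),
    Comma,
    FatArrow, // =>
    Plus,     // stands in for any token that cannot be a parameter
}

struct Parser {
    tokens: Vec<Token>,
    position: usize,
}

impl Parser {
    fn peek(&self) -> Option<&Token> {
        self.tokens.get(self.position)
    }

    fn advance(&mut self) -> Option<Token> {
        let tok = self.tokens.get(self.position).cloned();
        self.position += 1;
        tok
    }

    // Try to parse `( ident (, ident)* )` or `()`. Returns None at the
    // first token that cannot belong to a parameter list.
    fn try_parse_lambda_params(&mut self) -> Option<Vec<String>> {
        if self.advance()? != Token::LParen {
            return None;
        }
        let mut params = Vec::new();
        loop {
            match self.advance()? {
                Token::RParen => return Some(params),
                Token::Ident(name) if params.is_empty() => params.push(name),
                Token::Comma if !params.is_empty() => match self.advance()? {
                    Token::Ident(name) => params.push(name),
                    _ => return None,
                },
                _ => return None, // operator, literal, nested paren, ...
            }
        }
    }

    // True when the upcoming tokens form a lambda head `( params ) =>`.
    // Always restores the checkpoint so the caller can re-parse either way.
    fn is_lambda_head(&mut self) -> bool {
        let checkpoint = self.position;                          // 1. save
        let is_lambda = self.try_parse_lambda_params().is_some() // 2. try params
            && self.peek() == Some(&Token::FatArrow);            // 3. ')' then '=>'?
        self.position = checkpoint;                              // 4. restore
        is_lambda
    }
}
```

Given `(a, b) =>` the probe succeeds and the position is restored for the real parse; given `(a + b)` the `Plus` token kills the parameter interpretation immediately.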
For single-parameter arrows without parentheses (x => x * 2), the disambiguation is simpler. When the parser sees an identifier followed by =>, it knows immediately that it is a single-parameter lambda:
```rust
TokenKind::Identifier(name) if self.peek_is(TokenKind::FatArrow) => {
    self.advance(); // consume =>
    let body = self.parse_lambda_body()?;
    Ok(Expr::Lambda {
        params: vec![Param::new(name)],
        body,
        span,
    })
}
```

Currying Through Nesting
Curried functions -- functions that return functions -- work naturally with arrow syntax:
```
makeAdder = x => y => x + y

add5 = makeAdder(5)
result = add5(3)   // 8
```
The parser handles this because y => x + y is itself a valid expression, and the body of an arrow function can be any expression. No special case needed -- currying emerges from the grammar's recursive structure.
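The same pattern can be expressed in Rust with nested closures, a rough analogue (not FLIN source) of `makeAdder = x => y => x + y`:

```rust
// Currying via nested closures. `move` forces the inner closure to
// capture `x` by value, so it survives after make_adder returns.
fn make_adder(x: i32) -> impl Fn(i32) -> i32 {
    move |y| x + y
}
```

Calling `make_adder(5)` yields a closure; applying it to 3 gives 8, matching the FLIN example above.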
The Type Inference Problem
Arrow functions created a type inference challenge. When a developer writes:
```
add = (a, b) => a + b
```

What are the types of a and b? The type checker assigns fresh type variables: a: ?T0, b: ?T1. But when it encounters a + b, it needs to know the types of the operands to determine whether + is numeric addition, string concatenation, or an error.
Before Session 141, the type checker simply gave up at this point:
```
Error: Cannot apply '+' to ?T0 and ?T1
```

This was technically correct -- the types were indeed unknown -- but practically useless. Developers expected the type checker to figure out the types from context, the way TypeScript and Rust do.
Constraint-Based Inference
Session 141 solved this with a constraint-based approach. Instead of requiring concrete types for binary operations, the type checker records constraints that must be satisfied later:
```rust
// In types.rs
pub enum Constraint {
    Numeric(FlinType, Span),  // Type must be Int or Float
    Integral(FlinType, Span), // Type must be Int only
}

// In checker.rs
pub struct TypeChecker {
    // ... existing fields ...
    constraints: Vec<Constraint>,
}

impl TypeChecker {
    fn add_constraint(&mut self, constraint: Constraint) {
        self.constraints.push(constraint);
    }

    fn verify_constraints(&self) -> Result<(), TypeError> {
        for constraint in &self.constraints {
            match constraint {
                Constraint::Numeric(ty, span) => {
                    let resolved = self.resolve(ty);
                    match resolved {
                        FlinType::Int | FlinType::Float => {} // OK
                        FlinType::Var(_) => {} // Still unresolved, defer
                        _ => return Err(TypeError::NotNumeric(resolved, *span)),
                    }
                }
                Constraint::Integral(ty, span) => {
                    let resolved = self.resolve(ty);
                    match resolved {
                        FlinType::Int => {} // OK
                        FlinType::Var(_) => {} // Defer
                        _ => return Err(TypeError::NotIntegral(resolved, *span)),
                    }
                }
            }
        }
        Ok(())
    }
}
```
The critical change is in check_binary. When both operands are type variables, instead of failing immediately, the checker unifies them and records a constraint:
```rust
// In check_binary - handle type variables
(FlinType::Var(_), FlinType::Var(_)) => {
    let unified = self.unify(&left_ty, &right_ty, span)?;
    self.add_constraint(Constraint::Numeric(unified.clone(), span));
    Ok(unified)
}

// When one operand is concrete and one is a variable
(FlinType::Int, FlinType::Var(_)) => {
    self.unify(&right_ty, &FlinType::Int, span)?;
    Ok(FlinType::Int)
}

(FlinType::Var(_), FlinType::Float) => {
    self.unify(&left_ty, &FlinType::Float, span)?;
    Ok(FlinType::Float)
}
```
When a concrete operand meets a type variable, the unifier resolves the variable to the concrete type immediately. When both are variables, they are unified (forced to be the same type) and a Numeric constraint is recorded.
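The resolve-and-bind machinery can be modeled with a substitution map. This is a simplified sketch assuming only `Int`, `Float`, and type variables; FLIN's actual unifier also handles function and collection types, and `Unifier` is an illustrative name, not a type from the FLIN codebase:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
enum FlinType {
    Int,
    Float,
    Var(u32), // a fresh type variable like ?T0
}

#[derive(Default)]
struct Unifier {
    subst: HashMap<u32, FlinType>, // variable id -> what it was bound to
}

impl Unifier {
    // Follow the substitution chain until a concrete type
    // or an unbound variable is reached.
    fn resolve(&self, ty: &FlinType) -> FlinType {
        match ty {
            FlinType::Var(id) => match self.subst.get(id) {
                Some(bound) => self.resolve(bound),
                None => ty.clone(),
            },
            concrete => concrete.clone(),
        }
    }

    // Force two types to be equal, binding variables as needed.
    fn unify(&mut self, a: &FlinType, b: &FlinType) -> Result<FlinType, String> {
        let ra = self.resolve(a);
        let rb = self.resolve(b);
        if ra == rb {
            return Ok(ra); // already the same type (or the same variable)
        }
        match (&ra, &rb) {
            (FlinType::Var(id), _) => {
                self.subst.insert(*id, rb.clone()); // bind variable to rb
                Ok(rb)
            }
            (_, FlinType::Var(id)) => {
                self.subst.insert(*id, ra.clone()); // bind variable to ra
                Ok(ra)
            }
            _ => Err(format!("cannot unify {:?} with {:?}", ra, rb)),
        }
    }
}
```

Unifying `?T0` with `?T1` chains the two variables together; a later unification of either one with `Int` makes both resolve to `Int`, which is exactly how the deferred Numeric constraint gets discharged at the call site.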
The constraints are verified at the call site, after the caller's argument types have been unified with the lambda's parameter types:
```
add = (a, b) => a + b
result = add(3, 5)   // At this point, a and b are unified to Int
```

When add(3, 5) is type-checked, the checker unifies ?T0 with Int (from the literal 3) and ?T1 with Int (from 5). The Numeric constraint on the unified type is then verified: Int is numeric. The expression type-checks successfully.
The Result
After Session 141, lambda type inference works for the common cases:
```
add = (a, b) => a + b
print(add(3, 5))         // 8 -- works
print(add(3.14, 2.86))   // 6.0 -- works

multiply = (x, y) => x * y
print(multiply(6, 7))    // 42 -- works

negate = x => -x
print(negate(5))         // -5 -- works
```
The type checker infers that add takes two numeric arguments and returns a numeric result. It infers that multiply does the same. It infers that negate takes a numeric argument. All without a single type annotation.
For cases where inference is insufficient -- a lambda that could accept multiple types -- explicit annotations can be added:
```
format = (value: text) => value.upper()
```

But for the vast majority of lambdas used in practice (callbacks, transformers, predicates), the inference system handles everything automatically.
Closure Support
Arrow functions are closures: they capture variables from their surrounding scope. The existing closure implementation in FLIN's code generator (emit_lambda) handles this:
```
base = 100

calculate = (x) => {
    discount = x * 0.1
    base - discount   // 'base' captured from outer scope
}

result = calculate(50)   // 95.0
```
The code generator detects that base is referenced inside the lambda but defined outside it. It emits an UpvalueCapture instruction that closes over the variable, making it available inside the function even after the enclosing scope exits.
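The same capture semantics can be seen in Rust, a rough analogue (not FLIN's code generator) of the example above. `make_calculator` is an illustrative name:

```rust
// The returned closure keeps `base` alive after make_calculator's scope
// exits, much as FLIN's UpvalueCapture instruction closes over `base`.
fn make_calculator() -> impl Fn(f64) -> f64 {
    let base = 100.0; // local to this scope, captured by the closure below
    move |x| {
        let discount = x * 0.1;
        base - discount
    }
}
```

Here the `move` keyword transfers ownership of `base` into the closure; FLIN reaches the same result automatically by detecting the free variable and emitting the capture instruction.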
This means arrow functions and regular fn declarations have identical semantics. The only difference is syntax: arrow functions are expressions that can appear anywhere a value is expected, while fn declarations are statements that bind a name at the top level.
Performance Characteristics
Arrow functions compile to the same bytecode as regular functions. There is no performance penalty for using the concise syntax. The VM does not distinguish between a function created by fn add(a, b) { return a + b } and one created by add = (a, b) => a + b. Both produce a function object with a bytecode body and an optional closure environment.
The type inference system adds a small amount of work to the type checker (recording and verifying constraints) but no work to the runtime. Types are erased after compilation -- the VM operates on untyped values. The inference system's cost is paid once at compile time, not on every execution.
Testing
The arrow function implementation has comprehensive test coverage:
```rust
#[test]
fn test_parse_lambda_simple() {
    // x => x * 2
    let expr = parse_expr("x => x * 2");
    assert!(matches!(expr, Expr::Lambda { .. }));
}

#[test]
fn test_parse_lambda_multi_param() {
    // (a, b) => a + b
    let expr = parse_expr("(a, b) => a + b");
    if let Expr::Lambda { params, .. } = expr {
        assert_eq!(params.len(), 2);
    } else {
        panic!("expected a lambda expression");
    }
}

#[test]
fn test_parse_lambda_zero_param() {
    // () => 42
    let expr = parse_expr("() => 42");
    if let Expr::Lambda { params, .. } = expr {
        assert_eq!(params.len(), 0);
    } else {
        panic!("expected a lambda expression");
    }
}
```
Seven parser tests cover all ten forms. Additional integration tests verify that arrow functions execute correctly in the VM. The lambda inference tests verify type checking with constraint propagation. Together, they ensure that arrow functions work correctly from parsing through execution.
Arrow functions and lambda inference represent the developer experience philosophy of FLIN distilled into a single feature: write less, get more. The syntax is minimal. The types are inferred. The closures are automatic. The developer focuses on the logic, and the compiler handles the rest.
This concludes Arc 16 -- Developer Experience. Ten articles covering the tools, formats, and language features that make FLIN not just a language with powerful features, but a language that is genuinely pleasant to use. From the CLI that replaces entire toolchains to the arrow functions that make callbacks a joy instead of a chore, every decision was guided by a single question: what would make the developer's day better?
---
This is Part 180 of the "How We Built FLIN" series, documenting how a CEO in Abidjan and an AI CTO designed and built a programming language from scratch.
Series Navigation: - [179] Template Literals and String Formatting - [180] Arrow Functions and Lambda Inference (you are here) - Next arc: FLIN Deployment and Production