In production, things break. APIs return 500 errors. Database queries take ten seconds. Memory usage climbs until the process crashes. The difference between a toy project and a production application is whether you know about these problems before your users do.
Most languages delegate monitoring to external services. Sentry for error tracking. Datadog for performance monitoring. New Relic for application metrics. These services are excellent -- and they cost money, require configuration, and add latency to every request. For developers in Abidjan building their first SaaS, a $29/month monitoring service is a significant expense. For a language that promises "everything you need, nothing you do not," shipping monitoring as a built-in was a natural decision.
Session 180 added error tracking, performance timing, and memory monitoring to FLIN's standard library. Not a replacement for Sentry or Datadog -- but a foundation that catches 80% of production issues without any external dependency.
Structured Logging
The foundation of observability is logging. FLIN's logging system goes beyond print with four severity levels and structured data:
```
log_info("Server started on port 8080")
log_warn("Cache miss rate above 50%")
log_error("Database connection failed: {error}")
log_debug("Query executed in {duration}ms")
```

Each log function accepts a message string with interpolation. The output includes a timestamp, severity level, and the message:
```
[2026-03-26T14:30:45Z] INFO Server started on port 8080
[2026-03-26T14:30:46Z] WARN Cache miss rate above 50%
[2026-03-26T14:30:47Z] ERROR Database connection failed: connection refused
[2026-03-26T14:30:47Z] DEBUG Query executed in 45ms
```

The log level is configurable at runtime. In development, all four levels are active. In production, debug is typically disabled to reduce log volume. The log level is set via environment variable:
```
// Set log level via environment
// FLIN_LOG_LEVEL=warn (only warn and error)
// FLIN_LOG_LEVEL=info (info, warn, error)
// FLIN_LOG_LEVEL=debug (all levels)
```

Structured Log Data
For machine-parseable logs (useful for log aggregation services), you can attach structured data:
```
log_info("User logged in", {
    user_id: user.id,
    email: user.email,
    ip: request.ip,
    method: "password"
})
// [2026-03-26T14:30:45Z] INFO User logged in user_id=42 email=[email protected] ip=192.168.1.1 method=password
```

The structured data is appended as key-value pairs in the log output. This format is compatible with log aggregation tools like Loki, Elasticsearch, and CloudWatch Logs. No special adapter needed -- the output is plain text that any log parser can handle.
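The key-value rendering can be sketched in Rust, the language the runtime is written in. This is a minimal sketch, not the runtime's actual code; the `format_log` name and its signature are assumptions. A `BTreeMap` is used so the field order is deterministic.

```rust
use std::collections::BTreeMap;

// Hypothetical sketch: render a log line with structured fields appended
// as key=value pairs after the timestamp, level, and message.
fn format_log(
    timestamp: &str,
    level: &str,
    message: &str,
    fields: &BTreeMap<&str, String>,
) -> String {
    let mut line = format!("[{}] {} {}", timestamp, level, message);
    for (key, value) in fields {
        line.push(' ');
        line.push_str(key);
        line.push('=');
        line.push_str(value);
    }
    line
}
```

Because the output is plain `key=value` text, any logfmt-aware aggregator can parse it without a custom adapter.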
Error Tracking
Automatic Error Capture
FLIN's runtime automatically captures and logs unhandled errors with full context:
```
fn process_payment(order_id: text) {
    order = Order.find(order_id)
    {if order == none}
        // This error is automatically captured with context
        log_error("Order not found", {
            order_id: order_id,
            function: "process_payment"
        })
        return { error: "Order not found" }
    {/if}

    response = http_post("https://api.payment.com/charge", {
        body: { amount: order.total },
        timeout: 30.seconds
    })

    {if not response.ok}
        track_error("payment_failed", {
            order_id: order_id,
            status: response.status,
            body: response.body,
            customer: order.customer_id
        })
        return { error: "Payment failed" }
    {/if}

    return { success: true }
}
```
track_error: Explicit Error Tracking
track_error is a dedicated function for recording errors that deserve special attention:
```
track_error(error_type, context)
```

```
// Track a specific error with context
track_error("api_timeout", {
    endpoint: "https://api.payment.com/charge",
    timeout: 30,
    order_id: order.id,
    attempt: retry_count
})

// Track a validation error
track_error("validation_failed", {
    entity: "User",
    field: "email",
    value: email,
    reason: "Invalid format"
})
```
track_error does three things:
1. Logs the error at ERROR level with structured context
2. Increments an internal error counter (accessible via error_count())
3. Stores the error in an in-memory ring buffer (last 1,000 errors)
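The "last 1,000 errors" semantics can be sketched as a fixed-capacity buffer in Rust. This is a minimal illustration, not the runtime's lock-free implementation; the `RingBuffer` shape here is an assumption.

```rust
// Hypothetical sketch: a fixed-capacity buffer where pushing past the
// capacity overwrites the oldest entry, keeping only the last N items.
struct RingBuffer<T> {
    items: Vec<T>,
    capacity: usize,
    head: usize, // next slot to overwrite once the buffer is full
}

impl<T: Clone> RingBuffer<T> {
    fn new(capacity: usize) -> Self {
        RingBuffer { items: Vec::new(), capacity, head: 0 }
    }

    fn push(&mut self, item: T) {
        if self.items.len() < self.capacity {
            self.items.push(item); // still filling up
        } else {
            self.items[self.head] = item; // overwrite the oldest entry
        }
        self.head = (self.head + 1) % self.capacity;
    }

    // Most recent first, like recent_errors(n).
    fn recent(&self, n: usize) -> Vec<T> {
        let len = self.items.len();
        (0..n.min(len))
            .map(|i| self.items[(self.head + len - 1 - i) % len].clone())
            .collect()
    }
}
```

Old errors silently fall off the end, which is the right behavior for a bounded in-memory store: memory use stays constant no matter how many errors occur.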
The ring buffer is accessible for building admin dashboards:
```
// Get recent errors
recent = recent_errors(50)

{for error in recent}
    <tr>
        <td>{error.type}</td>
        <td>{error.message}</td>
        <td>{error.timestamp.from_now}</td>
        <td><Code>{to_json(error.context, pretty: true)}</Code></td>
    </tr>
{/for}
```

Performance Timing
timer_start and timer_end
For measuring how long operations take:
```
timer_start("database_query")
users = User.where(role: "admin")
elapsed = timer_end("database_query")
// Prints: "[TIMER] database_query: 12.45ms"
// Returns: 12.45 (milliseconds as float)

{if elapsed > 100}
    log_warn("Slow query: {elapsed}ms for admin user lookup")
{/if}
```
Timers are identified by string labels. timer_start records the current time. timer_end computes the elapsed time, logs it, and returns the duration in milliseconds. If you call timer_end with a label that was never started, it returns none instead of crashing.
measure: Block Timing
For timing a block of code without managing start/end labels:
```
result = measure("render_dashboard", {
    users = User.all
    orders = Order.where(status: "active")
    stats = compute_stats(orders)
    render_template(users, orders, stats)
})
// Prints: "[MEASURE] render_dashboard: 45.2ms"
// result contains the block's return value
```

measure takes a label and a block, times the block's execution, logs the duration, and returns the block's result. It is a cleaner API than timer_start/timer_end because it is impossible to forget the end call.
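The shape of measure translates naturally to a Rust closure. This is a sketch under assumptions: it returns the elapsed time alongside the result so the timing is observable in a test, whereas the built-in returns only the block's result.

```rust
use std::time::Instant;

// Hypothetical sketch: run a block, log how long it took, and hand
// the block's result back to the caller.
fn measure<T>(label: &str, block: impl FnOnce() -> T) -> (T, f64) {
    let start = Instant::now();
    let result = block();
    let elapsed_ms = start.elapsed().as_secs_f64() * 1000.0;
    println!("[MEASURE] {}: {:.1}ms", label, elapsed_ms);
    (result, elapsed_ms)
}
```

Because the closure is consumed and timed inside one function, there is no dangling label to forget: the end of the block is the end of the measurement.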
measure_latency: Request-Level Timing
For web applications, measuring end-to-end request latency is critical:
```
fn handle_request(request) {
    result = measure_latency("GET /api/users", {
        users = User.all
        to_json(users)
    })

    return { status: 200, body: result }
}
```
measure_latency works like measure but also maintains a running statistics window: average latency, p50, p95, p99, and maximum. These statistics are accessible for building monitoring dashboards:
```
stats = latency_stats("GET /api/users")
// {
//     count: 1247,
//     average: 23.4,
//     p50: 18.0,
//     p95: 45.0,
//     p99: 120.0,
//     max: 345.0
// }
```

Memory Monitoring
memory_usage: Current Memory State
```
mem = memory_usage()
// {
//     heap_used: 12582912,    // 12 MB
//     heap_total: 33554432,   // 32 MB
//     stack_used: 8192,       // 8 KB
//     objects: 45230,         // Number of heap objects
//     gc_collections: 7       // Number of GC runs
// }

print("Heap: {format_bytes(mem.heap_used)} / {format_bytes(mem.heap_total)}")
// "Heap: 12.0 MB / 32.0 MB"
```
memory_usage returns a snapshot of the VM's memory state. This is useful for detecting memory leaks (heap usage that grows continuously) and for sizing server resources appropriately.
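A plausible implementation of format_bytes is straightforward: divide by 1024 until the value fits a unit, then print one decimal place. This Rust sketch is an assumption about the built-in's behavior, anchored to the "12.0 MB" output shown above.

```rust
// Hypothetical sketch of format_bytes: scale into binary units,
// with one decimal place above the raw-byte range.
fn format_bytes(bytes: u64) -> String {
    const UNITS: [&str; 5] = ["B", "KB", "MB", "GB", "TB"];
    let mut value = bytes as f64;
    let mut unit = 0;
    while value >= 1024.0 && unit < UNITS.len() - 1 {
        value /= 1024.0;
        unit += 1;
    }
    if unit == 0 {
        format!("{} B", bytes) // exact count below 1 KB
    } else {
        format!("{:.1} {}", value, UNITS[unit])
    }
}
```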
Memory Alerts
```
fn check_memory() {
    mem = memory_usage()
    usage_percent = (mem.heap_used * 100) / mem.heap_total

    {if usage_percent > 90}
        log_error("Memory critical: {usage_percent}% used", {
            heap_used: format_bytes(mem.heap_used),
            heap_total: format_bytes(mem.heap_total),
            objects: mem.objects
        })
    {else if usage_percent > 75}
        log_warn("Memory high: {usage_percent}% used")
    {/if}
}
```
Assertions: Development-Time Safety Nets
Assertions are functions that verify assumptions and fail loudly when they are wrong:
```
assert(user != none)
assert(user != none, "User must exist before processing")
assert_eq(response.status, 200)
assert_ne(password, "")
```

assert checks a boolean condition. If it is false, execution stops with an error message. The optional second argument provides a custom message.
assert_eq and assert_ne check equality and inequality. When they fail, the error message includes both values:
```
Assertion failed: assert_eq(response.status, 200)
    Left: 404
    Right: 200
```

Assertions are active in development and can be disabled in production via a build flag. When disabled, they compile to no-ops -- zero runtime cost.
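The two-value failure message can be sketched in Rust. The helper name check_eq is an assumption for illustration, not the built-in itself; it returns the message instead of halting so the format is inspectable.

```rust
// Hypothetical sketch: build assert_eq's failure message, which shows
// both the left and right values alongside the failed expression.
fn check_eq(left: i64, right: i64, expr: &str) -> Result<(), String> {
    if left == right {
        Ok(())
    } else {
        Err(format!(
            "Assertion failed: {}\n    Left: {}\n    Right: {}",
            expr, left, right
        ))
    }
}
```

Showing both values is what makes the message actionable: "expected 200, got 404" points directly at the failing request rather than just reporting "assertion failed".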
Building a Monitoring Dashboard
Putting all the monitoring functions together, here is a complete monitoring dashboard built entirely with FLIN built-ins:
```
fn monitoring_dashboard() {
    mem = memory_usage()
    errors = recent_errors(10)
    api_stats = latency_stats("API")

    // ...render the "System Monitor" page with FlinUI components:
    // memory usage, latency percentiles, and the recent-error list...
}
```
This dashboard shows memory usage, API latency percentiles, and recent errors -- the three metrics that matter most for production health. No Sentry. No Datadog. No Grafana. Just FLIN built-in functions and FlinUI components.
Implementation: Low-Overhead Instrumentation
The monitoring functions are designed for minimal performance impact. Timers use Rust's std::time::Instant, which reads the CPU's high-resolution clock without a system call. Error tracking uses a lock-free ring buffer that does not block the main execution path. Memory statistics are collected from the VM's allocator without stopping execution.
```rust
pub struct MonitoringState {
    timers: HashMap<String, Instant>,
    errors: RingBuffer<TrackedError, 1000>,
    error_count: AtomicU64,
    latency_windows: HashMap<String, LatencyWindow>,
}

impl MonitoringState {
    pub fn timer_start(&mut self, label: &str) {
        self.timers.insert(label.to_string(), Instant::now());
    }

    pub fn timer_end(&mut self, label: &str) -> Option<f64> {
        // None if the label was never started
        self.timers
            .remove(label)
            .map(|start| start.elapsed().as_secs_f64() * 1000.0)
    }

    pub fn track_error(&self, error_type: &str, context: Map) {
        self.error_count.fetch_add(1, Ordering::Relaxed);
        self.errors.push(TrackedError {
            error_type: error_type.to_string(),
            context,
            timestamp: Utc::now(),
        });
    }
}
```
The LatencyWindow maintains a fixed-size circular buffer of recent latency measurements and computes percentiles on demand using a sorted copy. This trades O(n log n) computation at read time for O(1) insertion at write time -- the correct tradeoff for a monitoring system where writes (recording measurements) are far more frequent than reads (displaying dashboards).
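The read-time computation can be sketched as a standalone function. This is an illustrative sketch using nearest-rank selection on a sorted copy; the actual LatencyWindow interpolation method is not specified in the text.

```rust
// Hypothetical sketch: compute the p-th percentile of a latency window
// on demand. Recording a sample is O(1); reading sorts a copy, O(n log n).
fn percentile(samples: &[f64], p: f64) -> f64 {
    let mut sorted = samples.to_vec();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
    // Nearest-rank index into the sorted window.
    let rank = ((p / 100.0) * (sorted.len() as f64 - 1.0)).round() as usize;
    sorted[rank]
}
```

Sorting only when a dashboard asks for stats keeps the hot path (recording a measurement per request) constant-time, at the cost of a little work on the rarely-taken read path.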
What This Is Not
FLIN's monitoring is not a replacement for production monitoring services. It does not provide:
- Distributed tracing across multiple services
- Alerting via email, Slack, or PagerDuty
- Long-term storage of metrics (the ring buffer is in-memory)
- Anomaly detection using machine learning
- Dashboard UI (though you can build one with FlinUI)
It is a foundation. For a solo developer or a small team, the built-in monitoring provides immediate visibility into application health. When the application grows to need distributed tracing and alerting, the structured logging format is compatible with every major monitoring service -- you pipe the logs to Loki, the errors to Sentry, and the metrics to Prometheus. The built-in functions remain useful as the data collection layer.
Eighteen Functions for Production Visibility
The complete monitoring API:
- Logging: log_info, log_warn, log_error, log_debug
- Error tracking: track_error, recent_errors, error_count
- Timing: timer_start, timer_end, measure, measure_latency, latency_stats
- Memory: memory_usage, format_bytes
- Assertions: assert, assert_eq, assert_ne
- Output: print, debug
Eighteen functions that give every FLIN application production-grade observability from day one. No external service. No configuration. No monthly bill. Just built-in functions that tell you what your application is doing and how long it takes.
---
This is Part 80 of the "How We Built FLIN" series, documenting how a CEO in Abidjan and an AI CTO built error tracking and performance monitoring into a programming language.
Series Navigation:
- [79] Validation and Sanitization Functions
- [80] Error Tracking and Performance Monitoring (you are here)
- [81] FlinUI: Zero-Import Component System
- [82] From Zero to 70 Components in One Session