A web application without observability is a black box. Requests go in. Responses come out. What happens in between -- how long it takes, whether errors are increasing, which endpoints are slow, how much memory the process consumes -- is invisible. Developers discover problems when users complain, not when the metrics warn.
The traditional response is to bolt on observability after the fact. Install Prometheus for metrics. Install Grafana for dashboards. Install the ELK stack for logs. Install Jaeger for distributed tracing. Each tool requires its own deployment, its own configuration, its own learning curve. A startup in Abidjan that cannot afford a DevOps team simply goes without.
FLIN's response is different: observability is built into the runtime. Every FLIN application automatically collects request logs, system metrics, and request analytics. The admin console at /_flin displays this data in real time. No installation. No configuration. No cost.
The Metrics Engine: AtomicU64 for Zero Contention
The metrics system sits in src/server/metrics.rs, introduced in Session 320. It uses atomic counters for the hot path -- every HTTP request increments these counters, so contention must be zero.
```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{LazyLock, Mutex};
use std::time::Instant;

pub static TOTAL_REQUESTS: AtomicU64 = AtomicU64::new(0);
pub static TOTAL_ERRORS: AtomicU64 = AtomicU64::new(0);
pub static TOTAL_RESPONSE_TIME_MS: AtomicU64 = AtomicU64::new(0);

static SERVER_START_TIME: LazyLock<Instant> = LazyLock::new(Instant::now);
static ROUTE_STATS: LazyLock<Mutex<HashMap<String, RouteMetrics>>> =
    LazyLock::new(|| Mutex::new(HashMap::new()));
static STATUS_CODES: LazyLock<Mutex<HashMap<u16, u64>>> =
    LazyLock::new(|| Mutex::new(HashMap::new()));

pub fn record_request(path: &str, status: u16, duration_ms: u64) {
    // Atomic increments -- no lock needed
    TOTAL_REQUESTS.fetch_add(1, Ordering::Relaxed);
    TOTAL_RESPONSE_TIME_MS.fetch_add(duration_ms, Ordering::Relaxed);

    if status >= 400 {
        TOTAL_ERRORS.fetch_add(1, Ordering::Relaxed);
    }

    // Per-route stats -- lock needed, but contention is low
    if let Ok(mut routes) = ROUTE_STATS.lock() {
        let entry = routes
            .entry(path.to_string())
            .or_insert_with(RouteMetrics::default);
        entry.requests += 1;
        entry.total_time_ms += duration_ms;
    }

    // Status code distribution
    if let Ok(mut codes) = STATUS_CODES.lock() {
        *codes.entry(status).or_insert(0) += 1;
    }
}
```
The design splits counters into two tiers. The hot-path counters (TOTAL_REQUESTS, TOTAL_ERRORS, TOTAL_RESPONSE_TIME_MS) use AtomicU64 with relaxed ordering -- the cheapest atomic operation, with no memory fences. The cold-path counters (per-route stats, status code distribution) use a Mutex, which is slower, but the overhead is negligible since the lock is held for only microseconds.
The record_request() function is called from handle_connection after every HTTP response. Console API routes (those starting with /_flin/) are excluded from route stats to prevent admin panel traffic from inflating the application's metrics.
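The two design points above -- relaxed atomics and the /_flin/ exclusion -- can be sketched in isolation. The `should_record` helper below is a hypothetical name for the exclusion check, and the threaded loop shows why `Ordering::Relaxed` is safe for pure counters: each `fetch_add` is still atomic, so no increments are lost even without memory fences.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;

static TOTAL_REQUESTS: AtomicU64 = AtomicU64::new(0);

/// Hypothetical helper: console API traffic is excluded from app metrics.
fn should_record(path: &str) -> bool {
    !path.starts_with("/_flin/")
}

fn main() {
    assert!(should_record("/api/products"));
    assert!(!should_record("/_flin/api/logs"));

    // Eight threads, 10,000 relaxed increments each: the final count
    // is exact, because atomicity (not ordering) is what counters need.
    let handles: Vec<_> = (0..8)
        .map(|_| {
            thread::spawn(|| {
                for _ in 0..10_000 {
                    TOTAL_REQUESTS.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(TOTAL_REQUESTS.load(Ordering::Relaxed), 80_000);
}
```

Relaxed ordering only forbids reordering guarantees between this counter and other memory; it never loses an increment, which is exactly the trade-off the hot path wants.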
The Log Buffer: Ring Buffer in Memory
Server logs live in src/server/log_buffer.rs. The implementation is a ring buffer backed by a VecDeque with a maximum capacity of 1,000 entries:
```rust
use std::collections::VecDeque;
use std::sync::{LazyLock, Mutex};

const MAX_LOG_ENTRIES: usize = 1000;

#[derive(Clone)]
pub struct LogEntry {
    pub timestamp: u64,
    pub level: String,          // "INFO", "WARN", "ERROR"
    pub source: String,         // "http", "db", "auth", "system"
    pub message: String,
    pub method: Option<String>, // HTTP method, for request log entries
}

static LOG_BUFFER: LazyLock<Mutex<VecDeque<LogEntry>>> =
    LazyLock::new(|| Mutex::new(VecDeque::with_capacity(MAX_LOG_ENTRIES)));

pub fn push_log(entry: LogEntry) {
    if let Ok(mut buffer) = LOG_BUFFER.lock() {
        if buffer.len() >= MAX_LOG_ENTRIES {
            buffer.pop_front(); // Evict the oldest entry
        }
        buffer.push_back(entry);
    }
}

// Entries are cloned out so the returned Vec does not borrow from the
// mutex guard, which is dropped when this function returns.
pub fn get_logs(
    level: Option<&str>,
    source: Option<&str>,
    search: Option<&str>,
    limit: usize,
) -> Vec<LogEntry> {
    let buffer = LOG_BUFFER.lock().unwrap();
    buffer
        .iter()
        .rev() // Newest first
        .filter(|e| level.map_or(true, |l| e.level == l))
        .filter(|e| source.map_or(true, |s| e.source == s))
        .filter(|e| search.map_or(true, |q| e.message.contains(q)))
        .take(limit)
        .cloned()
        .collect()
}
```
A log entry is pushed on every HTTP request, capturing the method, path, status code, and response time. The ring buffer ensures memory usage is bounded -- even under heavy load, the log buffer never grows beyond 1,000 entries. Older entries are silently evicted.
The choice of in-memory logging over file-based logging is deliberate. File-based logs require disk I/O on every request, log rotation configuration, and a separate tool to read them. In-memory logs are instant to write, instant to query, and visible in the console without SSH access. For persistent logging, FLIN's planned observability roadmap includes a file backend, but the in-memory buffer covers 90% of debugging scenarios.
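The bounded-memory claim is easy to verify in miniature. This standalone sketch reproduces the eviction policy with a `VecDeque<u64>` standing in for log entries (`push_bounded` is our name, not FLIN's):

```rust
use std::collections::VecDeque;

const MAX_LOG_ENTRIES: usize = 1000;

/// Push with ring-buffer semantics: once full, the oldest entry goes.
fn push_bounded(buffer: &mut VecDeque<u64>, entry: u64) {
    if buffer.len() >= MAX_LOG_ENTRIES {
        buffer.pop_front(); // silently evict the oldest entry
    }
    buffer.push_back(entry);
}

fn main() {
    let mut buffer = VecDeque::new();
    for i in 0..2_500u64 {
        push_bounded(&mut buffer, i);
    }
    // Memory stays bounded: only the newest 1,000 entries survive.
    assert_eq!(buffer.len(), 1000);
    assert_eq!(buffer.front(), Some(&1500)); // oldest surviving entry
    assert_eq!(buffer.back(), Some(&2499)); // newest entry
}
```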
The Logs Page: Filtered, Searchable, Streaming
The Logs page at /_flin/logs displays the log buffer with three filtering dimensions:
```flin
// Logs viewer conceptual model
route GET "/_flin/api/logs" {
    guard admin_session

    level = query.level || "ALL"   // ALL, INFO, WARN, ERROR
    source = query.source || none
    search = query.search || none
    limit = query.limit || 100

    logs = get_logs(
        level: if level == "ALL" { none } else { level },
        source: source,
        search: search,
        limit: limit
    )

    respond json({
        entries: logs,
        count: logs.len,
        total_in_buffer: log_buffer_size()
    })
}
```
The frontend includes a streaming toggle that polls the API every two seconds when enabled. This creates a near-real-time log viewer: make a request to the application in one browser tab, watch the log entry appear in the console in another tab within two seconds.
A "Clear Logs" button calls POST /_flin/api/logs/clear, which empties the ring buffer. This is useful when debugging a specific interaction -- clear the logs, perform the action, and see only the relevant entries.
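The handler behind that endpoint only needs to lock the buffer and empty it. A minimal sketch, assuming the `LOG_BUFFER` static from log_buffer.rs (simplified here to hold strings):

```rust
use std::collections::VecDeque;
use std::sync::{LazyLock, Mutex};

static LOG_BUFFER: LazyLock<Mutex<VecDeque<String>>> =
    LazyLock::new(|| Mutex::new(VecDeque::new()));

/// Sketch of the handler behind POST /_flin/api/logs/clear.
pub fn clear_logs() {
    if let Ok(mut buffer) = LOG_BUFFER.lock() {
        buffer.clear(); // capacity is retained, entries are dropped
    }
}

fn main() {
    LOG_BUFFER.lock().unwrap().push_back("GET /api/products 200".into());
    assert_eq!(LOG_BUFFER.lock().unwrap().len(), 1);
    clear_logs();
    assert!(LOG_BUFFER.lock().unwrap().is_empty());
}
```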
The Metrics Page: Gauges and Counters
The Metrics page at /_flin/metrics displays system-level gauges and request counters:
| Metric | Type | Source |
|---|---|---|
| Memory Usage | Gauge (%) | get_memory_stats() system call |
| Database Size | Gauge (bytes) | .flindb/ directory scan |
| Uptime | Gauge (seconds) | SERVER_START_TIME.elapsed() |
| Total Requests | Counter | AtomicU64 |
| Average Response Time | Computed | total_time / total_requests |
| Total Errors | Counter | AtomicU64 (status >= 400) |
| Error Rate | Computed (%) | errors / requests * 100 |
| Status Code Distribution | Histogram | Per-code counts |
The page auto-refreshes every five seconds, making it a live monitoring dashboard. The memory gauge uses color transitions: green below 60%, amber between 60% and 85%, red above 85%.
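The color bands can be expressed as a small pure function. The thresholds (60% and 85%) come from the text above; the function name is our own:

```rust
/// Map a memory-usage percentage to the gauge color described above.
fn gauge_color(percent: f64) -> &'static str {
    if percent < 60.0 {
        "green" // healthy headroom
    } else if percent <= 85.0 {
        "amber" // worth watching
    } else {
        "red" // investigate now
    }
}

fn main() {
    assert_eq!(gauge_color(45.0), "green");
    assert_eq!(gauge_color(72.5), "amber");
    assert_eq!(gauge_color(91.0), "red");
}
```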
```rust
pub fn get_metrics() -> Response {
    let total_requests = TOTAL_REQUESTS.load(Ordering::Relaxed);
    let total_errors = TOTAL_ERRORS.load(Ordering::Relaxed);
    let total_time = TOTAL_RESPONSE_TIME_MS.load(Ordering::Relaxed);
    let memory = get_memory_stats();
    let db_size = calculate_db_size();
    let uptime = SERVER_START_TIME.elapsed();

    let avg_response = if total_requests > 0 {
        total_time / total_requests
    } else {
        0
    };
    let error_rate = if total_requests > 0 {
        (total_errors as f64 / total_requests as f64) * 100.0
    } else {
        0.0
    };

    json_response(200, &json!({
        "memory_percent": memory.percentage(),
        "memory_used": format_bytes(memory.used),
        "memory_total": format_bytes(memory.total),
        "db_size": format_bytes(db_size),
        "uptime_seconds": uptime.as_secs(),
        "total_requests": total_requests,
        "total_errors": total_errors,
        "avg_response_ms": avg_response,
        "error_rate_percent": format!("{:.2}", error_rate),
        "status_codes": get_status_code_distribution(),
    }))
}
```
The Analytics Page: Top Routes and Status Breakdown
The Analytics page at /_flin/analytics provides a higher-level view than raw metrics. It answers the question: "Which parts of my application are busiest and where are the problems?"
The centerpiece is the Top Routes table -- the 10 most-requested routes sorted by request count, with progress bars showing relative traffic:
```flin
// Analytics API response shape
analytics = {
total_requests: 45230,
total_errors: 45,
avg_response_ms: 23,
error_rate: "0.10%",
uptime: "3d 14h 22m",
top_routes: [
{
path: "/api/products",
method: "GET",
requests: 12450,
avg_ms: 15,
errors: 2
},
{
path: "/api/users/login",
method: "POST",
requests: 8320,
avg_ms: 45,
errors: 23
}
// ... 8 more routes
],
status_codes: {
"200": 38000,
"201": 5200,
"400": 30,
"401": 12,
"404": 3,
"500": 0
}
}
```

The status code breakdown uses color-coded badges: 2xx in green, 3xx in blue, 4xx in amber, 5xx in red. A healthy application shows overwhelming green with occasional amber. Any red is a signal to investigate.
The page auto-refreshes every 10 seconds, providing a lightweight monitoring view that does not require Grafana or Datadog.
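The Top Routes ranking above is a sort-and-truncate over the per-route stats. A sketch under assumed names (`RouteMetrics` mirrors the fields used by `record_request`; `top_routes` is our own helper):

```rust
use std::collections::HashMap;

#[derive(Default, Clone)]
struct RouteMetrics {
    requests: u64,
    total_time_ms: u64,
}

/// Rank routes by request count, keep the busiest `n`,
/// and compute each route's average latency on the way out.
fn top_routes(stats: &HashMap<String, RouteMetrics>, n: usize) -> Vec<(String, u64, u64)> {
    let mut rows: Vec<_> = stats
        .iter()
        .map(|(path, m)| {
            let avg_ms = if m.requests > 0 { m.total_time_ms / m.requests } else { 0 };
            (path.clone(), m.requests, avg_ms)
        })
        .collect();
    rows.sort_by(|a, b| b.1.cmp(&a.1)); // busiest first
    rows.truncate(n);
    rows
}

fn main() {
    let mut stats = HashMap::new();
    stats.insert("/api/products".to_string(),
        RouteMetrics { requests: 12_450, total_time_ms: 186_750 });
    stats.insert("/api/users/login".to_string(),
        RouteMetrics { requests: 8_320, total_time_ms: 374_400 });
    stats.insert("/health".to_string(),
        RouteMetrics { requests: 100, total_time_ms: 100 });

    let top = top_routes(&stats, 2);
    assert_eq!(top[0].0, "/api/products");
    assert_eq!(top[0].2, 15); // 186_750 / 12_450 = 15 ms average
    assert_eq!(top[1].0, "/api/users/login");
    assert_eq!(top[1].2, 45); // 374_400 / 8_320 = 45 ms average
}
```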
The Future: Prometheus, Tracing, and Alerting
The observability system described above covers the basics: logs, metrics, and analytics. The planned Part 2 of the admin console extends this with production-grade features:
Prometheus endpoint (/_flin/metrics/prometheus): Export metrics in Prometheus text format, enabling integration with existing Prometheus/Grafana stacks for teams that already have monitoring infrastructure.
Custom metrics in FLIN syntax:
```flin
// Define custom metrics in your application
metric orders_placed: counter with labels ["region", "product_type"]
metric cart_items: gauge
metric checkout_duration: histogram with buckets [0.1, 0.5, 1, 2, 5]

// Use them in route handlers
orders_placed.inc(region: "EU", product_type: "digital")
cart_items.set(user.cart.items.len)
checkout_duration.observe(elapsed_seconds)
```
Alerting system: Define alert rules that trigger notifications when metrics cross thresholds. Notification channels include email, Slack, Discord, and PagerDuty.
Distributed tracing: OpenTelemetry-compatible trace collection showing the full lifecycle of a request across route handlers, database queries, AI calls, and external API requests.
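Of the features above, the Prometheus endpoint is the most mechanical: it only has to render the existing counters in the Prometheus text exposition format. A hypothetical sketch (metric names and the `render_prometheus` function are assumptions, not FLIN's actual API):

```rust
/// Render the two global counters in Prometheus text exposition format.
fn render_prometheus(total_requests: u64, total_errors: u64) -> String {
    let mut out = String::new();
    out.push_str("# TYPE flin_http_requests_total counter\n");
    out.push_str(&format!("flin_http_requests_total {}\n", total_requests));
    out.push_str("# TYPE flin_http_errors_total counter\n");
    out.push_str(&format!("flin_http_errors_total {}\n", total_errors));
    out
}

fn main() {
    let body = render_prometheus(45_230, 45);
    assert!(body.contains("flin_http_requests_total 45230"));
    assert!(body.contains("flin_http_errors_total 45"));
}
```

Serving that string with `Content-Type: text/plain` is enough for a Prometheus server to scrape it.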
These features are planned but not yet implemented. The current system -- in-memory logs, atomic counters, and a real-time dashboard -- already puts FLIN ahead of PocketBase (which has only basic logging) and on par with Supabase (which requires a cloud subscription for its monitoring features).
Observability as a Language Feature
The most important aspect of FLIN's observability is not any individual feature. It is the fact that observability ships with every application by default. A developer in Abidjan deploying their first web application does not need to know what Prometheus is, does not need to configure Grafana, does not need to set up log aggregation. They navigate to /_flin/metrics and see how their application is performing.
This is what "replacing 47 technologies" means in practice. Not just replacing the code you write, but replacing the infrastructure you manage.
The next article tells a smaller story -- but one that reveals an important truth about UI development: the sidebar navigation fix that changed everything.
---
This is Part 140 of the "How We Built FLIN" series, documenting how a CEO in Abidjan and an AI CTO built production-grade observability into a programming language runtime.
Series Navigation: - [139] Admin Login and Authentication - [140] Observability and Monitoring (you are here) - [141] Sidebar Navigation: A Small Fix That Changed Everything