When you expose your API as MCP tools for AI clients, you face a maintenance problem: every endpoint has two schemas, the REST schema (OpenAPI) and the MCP tool schema (the JSON Schema in inputSchema). They describe the same operation but live in different files, written by different people (or different sessions) at different times. They will drift apart.
This post describes how sh0 solved this by generating MCP tool definitions directly from OpenAPI annotations, using utoipa's extension system.
## The Problem: 12 Tools, 24 Schemas
sh0's MCP server (Phase 1) shipped with 12 hand-curated tools. Each tool had:
1. A McpTool definition in tools.rs with a manually written JSON Schema
2. A REST handler in handlers/*.rs with utoipa annotations generating an OpenAPI schema
Both described the same parameters for the same operation. The list_apps tool had page and per_page as optional integers. The GET /api/v1/apps endpoint had PaginationParams with the same fields. Two sources of truth for one reality.
Adding a new MCP tool required touching three locations: the utoipa annotation, the tool_definitions() function, and the execute_tool() match arm. Miss one and you get a tool that the AI can call but the server cannot execute, or a schema that promises parameters the handler ignores.
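A minimal sketch of that duplication, with illustrative names rather than sh0's actual types: the same two pagination fields live once as a hand-written JSON Schema string and once as a Rust struct that utoipa documents.

```rust
// 1. Hand-written MCP inputSchema in tools.rs (hypothetical reconstruction):
const LIST_APPS_INPUT_SCHEMA: &str = r#"{
  "type": "object",
  "properties": {
    "page":     { "type": "integer" },
    "per_page": { "type": "integer" }
  }
}"#;

// 2. The same parameters, described again for the OpenAPI side:
#[derive(Default)]
struct PaginationParams {
    page: Option<u32>,     // documented once here...
    per_page: Option<u32>, // ...and once more in the JSON above
}

fn main() {
    // Nothing ties the two together: renaming per_page in the struct
    // silently leaves the MCP schema promising a stale parameter name.
    let params = PaginationParams::default();
    assert!(params.page.is_none() && params.per_page.is_none());
    assert!(LIST_APPS_INPUT_SCHEMA.contains("per_page"));
}
```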
## The Solution: OpenAPI Extensions as MCP Metadata
utoipa v5 supports custom OpenAPI extensions in #[utoipa::path] annotations:
```rust
#[utoipa::path(
    get,
    path = "/api/v1/apps",
    tag = "Apps",
    params(PaginationParams),
    responses(...),
    security(("bearer" = [])),
    extensions(
        ("x-mcp-enabled" = json!(true)),
        ("x-mcp-risk" = json!("read")),
        ("x-mcp-description" = json!("List all deployed applications."))
    )
)]
pub async fn list_apps(...) -> ... { ... }
```

The x-mcp-enabled: true extension marks this endpoint as an MCP tool. At startup, sh0 parses its own OpenAPI spec and generates MCP tool definitions from annotated operations. The handler's parameters become the tool's inputSchema, the description becomes the tool's description, and the operationId becomes the tool name.
One annotation. One schema. Zero drift.

## The Extension Protocol
We defined five extensions:
| Extension | Purpose |
|---|---|
| `x-mcp-enabled` | Marks an endpoint as an MCP tool |
| `x-mcp-risk` | Risk level (`read`, `write`, `admin`) for future scoped-key enforcement |
| `x-mcp-name` | Overrides the tool name when it differs from the operationId |
| `x-mcp-description` | Overrides the description with MCP-specific wording |
| `x-mcp-param-map` | Remaps parameter names (e.g., the path param `id` becomes `app_id`) |
The x-mcp-param-map deserves explanation. OpenAPI path parameters often use generic names like {id}. But MCP tools benefit from descriptive names: app_id tells the AI what kind of identifier to provide. The mapping is declarative:
```rust
("x-mcp-param-map" = json!({"id": {"name": "app_id", "description": "App ID or app name"}}))
```

## The Generator: 150 Lines of Rust
The openapi.rs module is intentionally simple. It does not try to be a general-purpose OpenAPI-to-MCP converter. It reads sh0's specific spec and produces sh0's specific tools:
1. Iterate all paths and operations in the OpenAPI spec
2. Filter by x-mcp-enabled: true
3. For each matching operation, build an McpTool:
- Name from x-mcp-name or operationId
- Description from x-mcp-description, summary, or description
- inputSchema from path and query parameters, with name remapping
4. Append manually defined tools (one tool, get_app_logs, calls Docker directly and has no REST endpoint)
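The steps above can be sketched as a single filter-and-map pass. The types below are simplified stand-ins for the real OpenAPI and McpTool structures, not sh0's actual code:

```rust
use std::collections::HashMap;

// Simplified stand-ins for the parsed OpenAPI operation and the MCP tool.
struct Operation {
    operation_id: String,
    extensions: HashMap<String, String>, // "x-mcp-enabled" -> "true", ...
    params: Vec<String>,
}

struct McpTool {
    name: String,
    description: String,
    input_params: Vec<String>,
}

fn generate_tools(ops: &[Operation]) -> Vec<McpTool> {
    ops.iter()
        // Step 2: only x-mcp-enabled operations become tools.
        .filter(|op| op.extensions.get("x-mcp-enabled").map(String::as_str) == Some("true"))
        .map(|op| McpTool {
            // Step 3a: x-mcp-name wins over operationId.
            name: op.extensions.get("x-mcp-name").cloned()
                .unwrap_or_else(|| op.operation_id.clone()),
            // Step 3b: x-mcp-description, else fall through to summary/description.
            description: op.extensions.get("x-mcp-description").cloned()
                .unwrap_or_default(),
            // Step 3c: parameters carry over (x-mcp-param-map remapping elided here).
            input_params: op.params.clone(),
        })
        .collect()
}

fn main() {
    let ops = vec![
        Operation {
            operation_id: "list_apps".into(),
            extensions: HashMap::from([
                ("x-mcp-enabled".into(), "true".into()),
                ("x-mcp-description".into(), "List all deployed applications.".into()),
            ]),
            params: vec!["page".into(), "per_page".into()],
        },
        Operation {
            operation_id: "internal_only".into(),
            extensions: HashMap::new(), // not MCP-enabled: filtered out
            params: vec![],
        },
    ];
    let tools = generate_tools(&ops);
    assert_eq!(tools.len(), 1);
    assert_eq!(tools[0].name, "list_apps");
}
```

The specificity is the point: no request bodies, no response schemas, just the small slice of OpenAPI that sh0's tools actually use.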
The result: 12 tools, identical to the hand-written versions, derived from the same annotations that generate the OpenAPI spec.
## The Hybrid: Automatic Definitions, Manual Execution
A fully automatic system would also route tool calls to handlers automatically. We chose not to do this for Phase 2. The tool definitions (what the AI sees) are generated from OpenAPI. The tool execution (what happens when the AI calls a tool) stays in the manual execute_tool() dispatch function.
This means adding a new MCP tool still requires two steps:
1. Add the utoipa extensions to the handler
2. Add the executor match arm in tools.rs
But the schema is never written by hand. The shape of arguments, their types, which are required -- all derived from the handler's existing utoipa annotations.
Why not full auto-routing? Because the MCP executor does more than just call the REST handler. It resolves apps by name (not just ID), fetches related data (domains, env var counts), and formats output differently than the REST response. The execution logic is worth writing explicitly. The definition logic is not.
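The manual side of the hybrid stays a plain match. A sketch with hypothetical signatures (the real execute_tool is async and does the name resolution and data joins described above):

```rust
// Hand-written dispatch: definitions are generated, execution is not.
fn execute_tool(name: &str, args: &str) -> Result<String, String> {
    match name {
        // Each arm can do more than the REST handler would: resolve an
        // app by name instead of ID, join related data, format for an AI.
        "list_apps" => Ok(format!("apps listed with args {args}")),
        "get_app" => Ok(format!("app resolved by id or name from {args}")),
        other => Err(format!("unknown tool: {other}")),
    }
}

fn main() {
    assert!(execute_tool("list_apps", "{}").is_ok());
    assert!(execute_tool("nonexistent", "{}").is_err());
}
```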
## Verification: Unit Tests for Parity
The riskiest part of this migration is subtle schema changes. If the generated list_apps schema has a different property name or type than the hand-written version, the AI client might send arguments the executor does not expect.
Four unit tests verify parity:
- All 12 expected tool names are present
- get_app has app_id as a required parameter (remapped from id)
- list_apps has page and per_page properties
- get_server_status has an empty properties object
These tests run against the real OpenAPI spec generated by utoipa, catching any drift between annotations and expectations.
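The shape of one such check, in outline (illustrative only; the real tests call the generator against the spec utoipa emits):

```rust
// Stand-in for the real generator output; the actual tests parse the
// OpenAPI JSON produced at compile time by utoipa.
fn generated_tool_names() -> Vec<&'static str> {
    vec!["list_apps", "get_app", "get_server_status"]
}

fn main() {
    let names = generated_tool_names();
    // Parity assertion: every hand-curated Phase 1 tool name must survive
    // the migration to generated definitions.
    for expected in ["list_apps", "get_app", "get_server_status"] {
        assert!(names.contains(&expected), "missing tool: {expected}");
    }
}
```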

## The Audit Catches What the Builder Misses
This is where the multi-session methodology earns its keep. The primary session built the feature and moved on. A separate auditor session -- fresh context, no attachment to the implementation -- immediately spotted a performance issue: the OpenAPI spec was being parsed on every tools/list request, even though the spec is static at runtime (it is derived from compile-time utoipa annotations).
The fix was a LazyLock cache: parse the spec once on first access, serve the cached result on every subsequent call. Three lines of code, zero allocation per request after the first.
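A simplified sketch of that fix (names are illustrative; the real cache holds the parsed tool definitions, and LazyLock requires Rust 1.80+):

```rust
use std::sync::LazyLock;

// Parse once on first access; every later tools/list call gets the
// cached result. The closure runs exactly once, even under concurrency.
static TOOLS: LazyLock<Vec<String>> = LazyLock::new(|| {
    // Stand-in for the expensive spec parse.
    vec!["list_apps".to_string(), "get_app".to_string()]
});

fn list_tools() -> &'static [String] {
    TOOLS.as_slice() // no re-parse, no per-request allocation
}

fn main() {
    assert_eq!(list_tools().len(), 2);
    // Both calls return the exact same cached slice.
    assert!(std::ptr::eq(list_tools(), list_tools()));
}
```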

This is the value of the build-audit-audit workflow: the builder optimizes for correctness. The auditor optimizes for everything else. Neither session alone would have produced code that was both correct and efficient.
## What We Learned
Extensions are underused. OpenAPI extensions (x-*) are a standard mechanism that most codebases ignore. They are the right place for metadata that is specific to your system but not part of the OpenAPI standard. MCP tool metadata, rate limit hints, feature flags, deprecation timelines -- all fit naturally as extensions.
The generator should be specific, not general. A general OpenAPI-to-MCP converter would need to handle request bodies, response schemas, authentication flows, and dozens of edge cases. Our generator handles path parameters, query parameters, and five custom extensions. It is 150 lines and does exactly what we need.
Parameter naming matters for AI ergonomics. The difference between id and app_id is the difference between an AI that guesses and an AI that knows what to provide. The x-mcp-param-map extension lets the REST API keep its RESTful conventions while the MCP tool uses descriptive argument names.
## What Comes Next
Phase 3 will use the x-mcp-risk extension for scoped API keys. A key with read scope will only see tools with x-mcp-risk: "read". A key with write scope will see read and write tools. The risk metadata is already in the OpenAPI spec, embedded in every annotated endpoint. The enforcement layer just needs to filter the tool list.
The foundation is set. Every future MCP tool is five lines of annotation away.