Cached HashMap lookups disguised as framework performance numbers
2026-03-21 · by eclips4 · Analysis of justrach/turboapi at commit 43a7191
TurboAPI calls itself a "high-performance Python web framework" and a "drop-in FastAPI replacement". It currently markets itself as "20x faster" and shows figures like "150k req/s (22x FastAPI)" on Apple Silicon. There's a real Zig HTTP server, a radix-trie router, a native Postgres driver. Fine. But the benchmark numbers don't measure what they say they measure.
TurboAPI has two caching layers. Both are on by default. The DB one can't be turned off at runtime. Both are active in the benchmark suites discussed here, but the HTTP response cache only applies to the simple GET-style fast paths, not every handler type.
For simple_sync_noargs and simple_sync handlers, the first call caches the JSON response in a Zig StringHashMap. Every request after that never calls Python for that path. It just returns cached bytes.
```zig
var response_cache: ?std.StringHashMap([]const u8) = null;
var response_cache_count: usize = 0;
const MAX_CACHE_ENTRIES: usize = 10_000;
var cache_noargs_responses: bool = false;
```
server.zig:964-997 - cache hit path in request dispatch

```zig
// Ultra-fast path: simple handlers
.simple_sync_noargs => {
    if (cache_noargs_responses) {
        if (getResponseCache().get(match.handler_key)) |cached| {
            // Cache hit: Python is NEVER called
            sendResponse(stream, 200, "application/json", cached);
            return;
        }
    }
},
```
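The effect of this fast path is easy to model. A minimal Python sketch (hypothetical names; the real cache is a Zig `StringHashMap` keyed by handler): the handler runs once, and every later request is a dict lookup returning the same bytes.

```python
# Toy model of the HTTP response cache fast path. Hypothetical sketch,
# not TurboAPI code: "response_cache" stands in for the Zig StringHashMap.
import json

response_cache = {}
handler_calls = 0

def handler():
    """Stand-in for the Python handler the benchmark claims to measure."""
    global handler_calls
    handler_calls += 1
    return json.dumps({"message": "Hello, World!"}).encode()

def dispatch(handler_key):
    # Cache hit: the Python handler is never called, just bytes out.
    if handler_key in response_cache:
        return response_cache[handler_key]
    body = handler()
    response_cache[handler_key] = body
    return body

for _ in range(10_000):
    dispatch("GET:/")

print(handler_calls)  # -> 1: Python ran once in 10,000 "requests"
```

Everything after the first request measures the dict, not the framework.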
The `TURBO_DISABLE_CACHE` env var disables this cache, but it was added for TechEmpower compliance, and no benchmark here uses it.
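Using it would have been one line in any of the suites. A sketch, assuming only that the server process reads the variable at startup (the subprocess command here is a stand-in, not the real launch code):

```python
# Sketch: passing TURBO_DISABLE_CACHE to a child server process.
# The real benchmarks would set this on their start_turbo_app-style launch;
# here a trivial child process just proves the flag propagates.
import os
import subprocess
import sys

env = dict(os.environ, TURBO_DISABLE_CACHE="1")

out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['TURBO_DISABLE_CACHE'])"],
    env=env, capture_output=True, text=True,
).stdout.strip()
print(out)  # -> 1
```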
All SELECT queries go through a 10,000-entry LRU cache with a 30-second TTL. No runtime disable flag.
db.zig:38-51 - DB cache config (hardcoded, always on)

```zig
const DB_CACHE_MAX: usize = 10_000;
var db_cache_enabled: bool = true; // always on, no env var to disable
var db_cache_ttl: i64 = 30; // 30 seconds, hardcoded
var db_cache: ?std.StringHashMap(CacheEntry) = null;
var db_cache_mutex: std.Thread.Mutex = .{};
```
db.zig:320-328 - cache check on every SELECT by PK

```zig
// Cache check -- build cache key from table + pk value
var cache_key_buf: [256]u8 = undefined;
const cache_key = std.fmt.bufPrint(&cache_key_buf,
    "GET:{s}:{s}", .{ entry.table, pk_val }) catch "";
if (cacheGet(cache_key)) |cached_body| {
    sendResponseFn(stream, 200, "application/json", cached_body);
    return; // Postgres is NEVER hit
}
```
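Behaviorally this reduces to a TTL-bounded dict in front of the database. A hypothetical Python sketch mirroring the hardcoded config above, showing why a short benchmark run against one hot key touches Postgres exactly once:

```python
# Toy model of db.zig's always-on SELECT cache. Hypothetical sketch;
# constants mirror the hardcoded Zig values.
import time

DB_CACHE_MAX = 10_000   # entry cap
DB_CACHE_TTL = 30       # seconds

db_cache = {}           # key -> (expires_at, body)
postgres_hits = 0

def query_pk(table, pk):
    """SELECT-by-PK with the cache check in front, as in db.zig:320-328."""
    global postgres_hits
    key = f"GET:{table}:{pk}"
    now = time.monotonic()
    entry = db_cache.get(key)
    if entry and entry[0] > now:
        return entry[1]            # cache hit: Postgres never touched
    postgres_hits += 1             # stand-in for the real roundtrip
    body = f'{{"id": {pk}}}'.encode()
    if len(db_cache) < DB_CACHE_MAX:
        db_cache[key] = (now + DB_CACHE_TTL, body)
    return body

for _ in range(5_000):             # a short wrk run's worth of requests
    query_pk("users", 1)
print(postgres_hits)  # -> 1
```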
run_benchmarks.py SLOP

Tests handlers like `return {"message": "Hello, World!"}`. HTTP response cache is on. After the first request, "140k req/s" is just Zig doing `HashMap.get()` + `stream.writeAll()`. FastAPI runs Python on every request. This is cached vs uncached, not a framework comparison.
Line 200:

```python
# Static route -- response pre-rendered at startup, zero Python call
app.static_route("GET", "/health", '{"status":"ok","engine":"zig-static"}')
```

vs FastAPI:

```python
# FastAPI runs this Python function on every request
@app.get("/health")
def health():
    return {"status": "ok", "engine": "fastapi"}
```
A pre-rendered static string vs a Python function call. That's the "benchmark".
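The gap being measured can be sketched with a stdlib micro-benchmark (hypothetical, not TurboAPI or FastAPI code): a dict lookup stands in for the pre-rendered route, a function call plus `json.dumps` stands in for the per-request handler work. The two are not in the same cost class.

```python
# Sketch: pre-rendered bytes vs per-request handler work.
import json
import timeit

# Stand-in for the static route: body rendered once, looked up per request.
STATIC = {"GET /health": b'{"status":"ok","engine":"zig-static"}'}

# Stand-in for the FastAPI side: a Python function runs and the result
# is serialized on every request.
def health():
    return {"status": "ok", "engine": "fastapi"}

t_static = timeit.timeit(lambda: STATIC["GET /health"], number=200_000)
t_handler = timeit.timeit(lambda: json.dumps(health()).encode(), number=200_000)
print(t_static < t_handler)  # -> True: lookup beats call + serialize
```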
full_comparison.py SLOP

```python
# Warm caches
for path in ["/users/1", "/users?limit=10", "/users/1/dashboard",
             "/search?q=lorem", "/admins", "/order-stats",
             "/top-spenders", "/posts/tagged?tag=tag1", "/health"]:
    urllib.request.urlopen(f"http://127.0.0.1:{TURBO_PORT}{path}")
```
Every test endpoint is warmed into the 30s DB cache before measurement. wrk runs for 5 seconds, TTL is 30. Postgres is never hit. FastAPI+SQLAlchemy has no cache, hits Postgres every time.
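The arithmetic is not subtle. A small simulation (hypothetical request rate and timestep, just to make the point): with every path warmed at t=0 and a 30-second TTL, nothing expires inside a 5-second run, so the measured Postgres hit count is zero.

```python
# Simulation of the warmed-cache run: 9 warmed paths, 30 s TTL, 5 s of wrk.
WARMED_PATHS = ["/users/1", "/users?limit=10", "/users/1/dashboard",
                "/search?q=lorem", "/admins", "/order-stats",
                "/top-spenders", "/posts/tagged?tag=tag1", "/health"]
TTL = 30.0      # seconds, hardcoded in db.zig
RUN = 5.0       # seconds of measured traffic

cache = {p: TTL for p in WARMED_PATHS}   # warmed at t=0 -> expires at t=30
db_hits = 0

t = 0.0
while t < RUN:
    for path in WARMED_PATHS:            # wrk hammers the same paths
        if cache.get(path, -1.0) <= t:   # expired or missing -> Postgres
            db_hits += 1
            cache[path] = t + TTL
    t += 0.1                             # illustrative timestep

print(db_hits)  # -> 0: Postgres is never measured
```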
db_bench_ci.py SLOP

```python
# Warm cache
urllib.request.urlopen(f"http://127.0.0.1:{TURBO_PORT}/users/1")
urllib.request.urlopen(f"http://127.0.0.1:{TURBO_PORT}/users?limit=10")
```
Same thing: cache warmed, then a 10-second benchmark against a 30-second TTL. At least the columns are labeled `turbo_cached_pk` and `turbo_cached_list`, but those are the only TurboAPI numbers shown.
postgres/bench.py "NO CACHE" SLOP

This one claims to test without caching:

bench.py:198-208 - the "NO CACHE" mode

```python
elif mode == "turbo_nocache":
    port = start_turbo_app(routes_nocache)
    print("\n=== 3. TurboAPI+pg.zig NO CACHE (varying IDs) ===")
    rps_id = run_wrk_lua(
        f"http://127.0.0.1:{port}/users/1", "/app/varying_ids.lua",
        "SELECT by ID (varying)",
    )
```
It uses a Lua script to cycle through different user IDs:

varying_ids.lua

```lua
counter = 0
request = function()
    counter = counter + 1
    local id = (counter % 100) + 1 -- only 100 unique IDs
    return wrk.format("GET", "/users/" .. id)
end
```
It gets worse: the benchmark prewarms the exact IDs later used by the Lua script before measurement starts.
bench.py:132-136 - prewarm before wrk

```python
# warmup: hit enough unique IDs to prime all 16 pool connections
for i in range(200):
    requests.get(f"http://127.0.0.1:{port}/users/{(i % 1000) + 1}", timeout=5)
```
`db_cache_enabled` is still true. The DB cache key for `select_one` is `"GET:{table}:{pk_val}"`, and the warmup covers IDs 1..200 while the Lua script only cycles IDs 1..100. So the "NO CACHE" run starts with its entire working set already cached before wrk even begins.
Even without that bug, it would still fully warm after the first 100 requests and stay hot for the rest of the 10-second run because the TTL is 30 seconds.
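A quick simulation makes this concrete (the request count is an illustrative assumption, roughly 15k req/s over the 10-second run): with the Lua script's `(counter % 100) + 1` cycle and a TTL longer than the run, a cold cache produces at most 100 misses total.

```python
# Simulation: cold cache, Lua ID cycle (counter % 100) + 1, TTL 30 s > 10 s run.
# 150_000 requests is an assumed volume, not a measured one.
TOTAL_REQUESTS = 150_000
ids = [(c % 100) + 1 for c in range(1, TOTAL_REQUESTS + 1)]

cached = set()
misses = 0
for pk in ids:
    key = f"GET:users:{pk}"
    if key not in cached:     # nothing ever expires within the run
        misses += 1
        cached.add(key)

print(misses, TOTAL_REQUESTS)  # -> 100 150000
```

A 100 / 150,000 miss rate is a 99.93% cache hit rate in a mode labeled "NO CACHE".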
The list endpoint is worse. `routes_nocache` uses `ORDER BY random()` to get different rows each time, but it is registered via `db_query()`, so it still goes through the custom-query cache path. The cache key is based on the SQL text plus params. This query has no params, so every request after the first uses the same cache key and replays the same cached JSON body:
```zig
const prefix = "Q:";
const sql_key_len = @min(entry.custom_sql.len, 64);
@memcpy(cache_key_buf[ck_pos..][0..sql_key_len], entry.custom_sql[0..sql_key_len]);
// no params for this route, so the key is the same every time
if (cacheGet(cache_key)) |cached_body| {
    sendResponseFn(stream, 200, "application/json", cached_body);
    return; // Postgres never touched, ORDER BY random() only affects the first miss
}
```
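The failure mode is visible in a few lines of Python (a hypothetical mirror of the key scheme above, with `random()` standing in for the database's `ORDER BY random()`): a param-less query always produces the same key, so the "random" body is computed once and replayed forever.

```python
# Sketch of the custom-query cache key: "Q:" + first 64 chars of the SQL
# (plus params, of which this route has none). Hypothetical Python mirror.
import random

def cache_key(sql, params=()):
    key = "Q:" + sql[:64]
    for p in params:
        key += f":{p}"
    return key

db_cache = {}
postgres_hits = 0

def run_query(sql):
    global postgres_hits
    key = cache_key(sql)
    if key in db_cache:
        return db_cache[key]               # same key -> replay cached body
    postgres_hits += 1
    body = str(random.random()).encode()   # stand-in for ORDER BY random()
    db_cache[key] = body
    return body

SQL = "SELECT * FROM users ORDER BY random() LIMIT 10"
bodies = {run_query(SQL) for _ in range(1_000)}
print(postgres_hits, len(bodies))  # -> 1 1: one "random" result, replayed 999x
```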
| Benchmark | HTTP Cache | DB Cache | Postgres Hit Rate | Verdict |
|---|---|---|---|---|
| run_benchmarks.py | ON | N/A (no DB) | N/A | SLOP |
| turboapi_vs_fastapi.py | ON | N/A (no DB) | N/A | SLOP |
| full_comparison.py | N/A for DB routes | ON + warmed | ~0% | SLOP |
| db_bench_ci.py | N/A for DB routes | ON + warmed | ~0% | SLOP |
| bench.py "NO CACHE" | N/A for DB routes | ON + prewarmed exact working set | ~0% | SLOP |
| bench.py "CACHED" | N/A for DB routes | ON + warmed | ~0% | At least honest about it |
`db_cache_enabled = true`, 30-second TTL, and no way to turn it off without editing the Zig source (`db.zig`). `TURBO_DISABLE_CACHE` only kills the HTTP response cache, not the DB one. The DB numbers are Zig `HashMap.get()` calls, not Postgres roundtrips.