Runtime & Compiler Performance Benchmarks

Measured on Apple M3 Pro, macOS 15.5, Kōdo v1.10.0. Each test was run 3 times and the best time is reported. All times are in seconds.
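The best-of-3 protocol can be reproduced with a small harness. A minimal Python sketch (illustrative only, not part of Kōdo's tooling):

```python
import time

def best_of(fn, runs=3):
    """Call fn `runs` times and return the fastest wall-clock time in seconds."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

# Report the best of 3 runs of a small workload.
print(f"best of 3: {best_of(lambda: sum(range(1_000_000))):.4f}s")
```

Taking the minimum rather than the mean discards one-off scheduler and cache noise, which is the usual rationale for a best-of-N protocol.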

Runtime Performance

Recursive Fibonacci — fib(35)

Pure recursive computation, no I/O. Tests function call overhead and recursion.
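As a reference for what each row measures, the naive recursion looks like this in Python (the CPython row; the Kōdo source is not reproduced here):

```python
def fib(n: int) -> int:
    # Deliberately naive: overlapping subproblems are recomputed, so the
    # benchmark stresses call overhead and recursion depth, not arithmetic.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# The benchmark runs fib(35); a smaller input is shown here for a quick check.
print(fib(10))  # → 55
```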

| Language | Time (s) | Relative |
|----------|----------|----------|
| Kōdo (Inkwell LLVM) | 0.029 | 0.7x |
| Kōdo (Cranelift) | 0.035 | 0.9x |
| Rust (release) | 0.04 | 1.0x |
| Node.js (V8 JIT) | 0.09 | 2.3x |
| Go | 0.09 | 2.3x |
| Python | 0.87 | 21.8x |

Kōdo Inkwell is faster than Rust on recursive workloads. This is possible because Kōdo’s concurrency-aware yield analysis skips overhead for pure functions, and LLVM’s optimization passes handle recursion efficiently.

Kōdo is 25-30x faster than Python on recursive computation.

Sum Loop — 10 million iterations

Tight loop with integer addition. Tests loop overhead and basic arithmetic.
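In Python terms the workload is an explicit accumulation loop (an illustrative rendering; `sum(range(n))` would move the per-iteration overhead being measured into C):

```python
def sum_loop(n: int) -> int:
    # An explicit loop so every iteration pays the loop and add overhead
    # that this benchmark is designed to measure.
    total = 0
    for i in range(n):
        total += i
    return total

print(sum_loop(10_000_000))  # → 49999995000000
```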

| Language | Time (s) | Relative |
|----------|----------|----------|
| Rust (release) | 0.00 | 1.0x |
| Kōdo (Inkwell LLVM) | 0.020 | — |
| Kōdo (Cranelift) | 0.022 | — |
| Node.js (V8 JIT) | 0.06 | — |
| Go | 0.08 | — |
| Python | 0.48 | — |

Rust's 0.00s most likely means its optimizer folds the loop into a constant, so it is not a meaningful baseline for this benchmark.

Kōdo is 4x faster than Go and 24x faster than Python on tight loops.

Backend Comparison — Cranelift vs Inkwell (LLVM)

Kōdo supports two code generation backends:

  • Cranelift — default, fast compilation, good runtime performance
  • Inkwell (LLVM C API) — uses the full LLVM optimization pipeline with alloca elimination, function inlining hints, and native CPU targeting

| Benchmark | Cranelift | Inkwell LLVM | Difference |
|-----------|-----------|--------------|------------|
| fib(35) | 0.035s | 0.029s | 17% faster |
| sum 10M | 0.022s | 0.020s | 9% faster |

Inkwell consistently outperforms Cranelift thanks to LLVM’s more aggressive optimization passes, including better register allocation, loop optimization, and function inlining.

123 of 142 examples compile and run correctly with the Inkwell backend. All 142 examples work with the default Cranelift backend.

  • Build: `cargo build -p kodoc --features llvm`
  • Use: `kodoc build file.ko --backend=inkwell`
  • Release mode: `kodoc build file.ko --release` (requires the LLVM feature)

Concurrency-Aware Yield Optimization

Kōdo’s green thread scheduler inserts yield points for cooperative multitasking. An inter-procedural analysis detects which functions participate in concurrency (spawn, channels, async) and skips yield insertion for pure functions.
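The analysis can be pictured as taint propagation over the call graph: seed with functions that invoke a concurrency primitive directly, then mark their callers transitively until a fixed point. A minimal Python sketch with a made-up call-graph representation; `spawn`, `channel_send`, and `channel_recv` stand in for the runtime's real primitives:

```python
# Hypothetical call graph: function name -> names of functions it calls.
CONCURRENCY_PRIMITIVES = {"spawn", "channel_send", "channel_recv"}

def concurrency_involved(call_graph):
    """Return the set of functions that (transitively) reach a concurrency
    primitive; everything else is pure and can skip yield-point insertion."""
    involved = {f for f, callees in call_graph.items()
                if CONCURRENCY_PRIMITIVES & set(callees)}
    changed = True
    while changed:  # iterate to a fixed point
        changed = False
        for f, callees in call_graph.items():
            if f not in involved and involved & set(callees):
                involved.add(f)
                changed = True
    return involved

graph = {
    "main":    ["worker", "fib"],
    "worker":  ["spawn", "channel_send"],
    "fib":     ["fib"],          # pure recursion: no yield points needed
    "sum_10m": [],               # pure loop: no yield points needed
}
print(sorted(concurrency_involved(graph)))  # → ['main', 'worker']
```

Only `worker` (direct user of primitives) and `main` (its caller) keep yield points; the pure `fib` and `sum_10m` skip them, which is where the speedups in the table below come from.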

This eliminates massive overhead in recursive and loop-heavy code:

| Benchmark | Before optimization | After | Speedup |
|-----------|---------------------|-------|---------|
| fib(35), Cranelift | 0.25s | 0.035s | 7.1x |
| sum 10M, Cranelift | 0.078s | 0.022s | 3.5x |
| fib(35), Inkwell | 0.25s | 0.029s | 8.6x |

Functions that use spawn, channel_*, or async continue to receive yield points normally, ensuring cooperative scheduling works correctly.

Compiler Performance

Compilation Speed

How fast the Kōdo compiler processes source code.

| Operation | File | Lines | Time |
|-----------|------|-------|------|
| kodoc check | self_hosted_parser.ko | 1,897 | 7ms |
| kodoc check | fibonacci.ko | 30 | 4ms |
| kodoc build | fibonacci.ko | 30 | 105ms |
| kodoc build | hello.ko | 12 | 105ms |
| kodoc build | contracts_demo.ko | 45 | 105ms |

Comparison: Compilation Speed

| Compiler | File | Time | Notes |
|----------|------|------|-------|
| kodoc check | 1,897 lines | 7ms | Type checking only |
| kodoc build | small file | 105ms | Full pipeline + linking |
| rustc | small file | 68ms | Single file, no deps |
| rustc -O | small file | 68ms | With optimizations |
| go build | small file | 194ms | Single file |

Kōdo’s type checker is extremely fast (7ms for ~2K lines). The build time is dominated by the linker (clang), not the compiler itself. For the agent feedback loop (check → fix → recheck), Kōdo delivers sub-10ms latency.

Throughput

| Metric | Value |
|--------|-------|
| Check throughput | ~270K lines/sec |
| Build throughput | ~300 lines/sec (linker-bound) |
| WASM playground check | < 50ms (browser) |
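The check-throughput figure follows directly from the compilation-speed table (1,897 lines in 7ms):

```python
lines, seconds = 1897, 0.007  # kodoc check on self_hosted_parser.ko
throughput = lines / seconds
print(f"~{throughput:,.0f} lines/sec")  # → ~271,000 lines/sec
```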

Key Takeaways

  1. Kōdo Inkwell is faster than Rust on recursive workloads
  2. 4x faster than Go on tight loops
  3. 24-30x faster than Python across all benchmarks
  4. Compiler check is sub-10ms — instant feedback for agents
  5. Full build is ~100ms — fast enough for tight compile-fix loops
  6. Concurrency-aware optimization eliminates yield overhead in pure code

Methodology

  • Cranelift is the default backend (kodoc build)
  • Inkwell results use kodoc build --backend=inkwell with --features llvm
  • Release mode uses kodoc build --release (Inkwell with O3)
  • Rust uses rustc -O (release optimizations)
  • Go uses go build (default optimizations)
  • Node.js uses V8’s JIT compiler
  • Python uses CPython 3.12
  • No warmup runs; cold-start measurement for compilation
  • Runtime benchmarks exclude compilation time