Measured

Every number on this page comes from real benchmarks on commodity hardware. No cherry-picked results, no theoretical maximums — just reproducible measurements you can verify yourself.

  • Cold Startup: <10 ms (process start to first request served)
  • Binary Size: ~2 MB (fully static, no runtime dependencies)
  • GC Pauses: 0 (manual memory management, always)
  • Test Suites: 54 (comprehensive unit and integration tests)
Cold Start Comparison
  • Flashstor: <10 ms
  • Go-based (MinIO): ~200 ms
  • Improvement: 20x faster

Startup Phases
  • Binary load: <1 ms
  • ISA-L init: <1 ms
  • Socket bind: <1 ms
  • Ready to serve: <10 ms
Startup

Cold Start: 20x Faster

Flashstor starts serving requests in under 10 milliseconds. No JIT warmup, no class loading, no runtime initialization. The binary is ready the moment the kernel loads it into memory.

  • Static binary — fully linked at compile time, no dynamic loading
  • No runtime initialization overhead (no GC, no JIT, no VM)
  • ISA-L dispatch tables initialized in microseconds
  • Ideal for container orchestration and auto-scaling scenarios
  • Pod startup to request-ready: consistently <10 ms
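To verify this yourself, the measurement is just two monotonic-clock timestamps around process setup. A minimal sketch, assuming a plain TCP listener; the port and socket code here are illustrative, not Flashstor's actual listener:

/* Minimal sketch: timing "process start" to "ready to serve" with
 * clock_gettime(CLOCK_MONOTONIC). The socket setup is illustrative. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
    struct timespec start, ready;
    clock_gettime(CLOCK_MONOTONIC, &start);

    /* ... a static binary does no dynamic-loader, VM, or JIT work here ... */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);          /* illustrative port */
    bind(fd, (struct sockaddr *)&addr, sizeof addr);
    listen(fd, 128);

    clock_gettime(CLOCK_MONOTONIC, &ready);
    printf("ready to serve after %.3f ms\n", elapsed_ms(start, ready));
    close(fd);
    return 0;
}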
Memory

Predictable Memory, No Surprises

Flashstor maintains stable memory usage through arena allocation. No garbage collector means no unpredictable memory spikes or pause-induced latency tails.

  • Stable ~24 MB baseline under load (arena + connection pools)
  • Per-connection overhead: ~75 KiB (arena + I/O buffers)
  • No memory growth over time — arenas are reset, not freed
  • Compare: Go-based alternatives use 180-340 MB with periodic GC spikes
// Memory profile comparison

Flashstor (C, arena allocator):
  Baseline:    24 MB (stable)
  Per-conn:    75 KiB
  GC pauses:   0 ms (none)

Traditional (Go, runtime GC):
  Baseline:    180-340 MB
  GC pauses:   10-50 ms
  Heap growth: unbounded
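The arena pattern behind those numbers is simple to illustrate. A minimal sketch of a per-connection bump allocator that is reset rather than freed; the sizes and names are illustrative assumptions, not the Flashstor allocator:

/* Minimal arena sketch: a per-connection bump allocator that is reset
 * after each request instead of freeing individual objects. */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint8_t *base;   /* one up-front allocation */
    size_t   cap;    /* total capacity in bytes */
    size_t   used;   /* bump-pointer offset */
} arena;

static int arena_init(arena *a, size_t cap)
{
    a->base = malloc(cap);
    a->cap  = cap;
    a->used = 0;
    return a->base ? 0 : -1;
}

static void *arena_alloc(arena *a, size_t n)
{
    size_t aligned = (n + 15u) & ~(size_t)15u;   /* 16-byte alignment */
    if (a->used + aligned > a->cap)
        return NULL;                             /* caller handles overflow */
    void *p = a->base + a->used;
    a->used += aligned;
    return p;
}

/* Reset instead of free: the backing memory stays mapped, so RSS stays
 * flat and there is no collector to introduce pause-driven tail latency. */
static void arena_reset(arena *a) { a->used = 0; }

int main(void)
{
    arena a;
    if (arena_init(&a, 64 * 1024) != 0)   /* e.g. one 64 KiB arena per connection */
        return 1;
    void *req_buf = arena_alloc(&a, 4096);
    (void)req_buf;                         /* ... serve the request ... */
    arena_reset(&a);                       /* ready for the next request */
    free(a.base);
    return 0;
}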
Observability

Built-In Monitoring & Metrics

Flashstor exposes comprehensive Prometheus metrics and health check endpoints. No additional agents or sidecars required.

Request Metrics

  • Requests/sec by operation type
  • Latency histograms (p50/p95/p99)
  • Error rates by HTTP status code

Storage Metrics

  • Bytes read/written per disk
  • Erasure coding encode/decode time
  • Bitrot scan coverage percentage

System Metrics

  • Memory usage (RSS, arena active)
  • Open file descriptors
  • Worker thread utilization

Cluster Metrics

  • Peer node health status
  • Replication lag and queue depth
  • Distributed lock contention
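These series are exported in the standard Prometheus text exposition format, so any Prometheus-compatible scraper can consume them directly. A minimal sketch of rendering a few such series; the metric names shown are illustrative placeholders, not Flashstor's actual series names:

/* Minimal sketch: rendering metrics in the Prometheus text exposition
 * format. Names and values are placeholders for illustration. */
#include <stdio.h>

struct metrics {
    double requests_total;
    double request_latency_p50_ms;
    double memory_rss_bytes;
};

static int render_metrics(const struct metrics *m, char *buf, size_t cap)
{
    return snprintf(buf, cap,
        "# TYPE flashstor_requests_total counter\n"
        "flashstor_requests_total %.0f\n"
        "# TYPE flashstor_request_latency_p50_ms gauge\n"
        "flashstor_request_latency_p50_ms %.3f\n"
        "# TYPE flashstor_memory_rss_bytes gauge\n"
        "flashstor_memory_rss_bytes %.0f\n",
        m->requests_total, m->request_latency_p50_ms, m->memory_rss_bytes);
}

int main(void)
{
    struct metrics m = { 12345, 4.2, 24.0 * 1024 * 1024 };
    char buf[1024];
    render_metrics(&m, buf, sizeof buf);
    fputs(buf, stdout);   /* a /metrics handler would send this as the response body */
    return 0;
}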
Live Metrics
  • Uptime: 99.99%
  • Avg Latency (p50): <5 ms
  • Memory (RSS): 24 MB
  • EC Encode Rate: 34 GiB/s
  • Active Connections: ~2,400

Benchmark Comparison

Flashstor vs. traditional Go-based object storage on equivalent hardware

Metric                     Flashstor    Traditional (Go)   Improvement
Binary Size                ~2 MB        80+ MB             40x smaller
Cold Start                 <10 ms       200-500 ms         20x faster
Memory (Idle)              ~24 MB       180-340 MB         7-14x less
GC Pause (p99)             0 ms         10-50 ms           Eliminated
EC Encode (8+8, 1 MiB)     88 µs        4,986 µs           57x faster
Write Path (1 MiB, 8+8)    ~2,300 µs    ~7,200 µs          3.1x faster

Run Your Own Benchmarks

We provide benchmark tooling and methodology so you can reproduce every number on this page in your own environment. No black boxes.
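As a starting point before pulling the full tooling, the core of each measurement is a monotonic-clock timing loop around the operation under test. A minimal sketch; do_encode is a hypothetical placeholder, so substitute the call you actually want to measure:

/* Minimal microbenchmark sketch: time an operation over many iterations
 * with CLOCK_MONOTONIC and report the mean latency. */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE (1 << 20)   /* 1 MiB payload, as in the table above */
#define ITERS    1000

static unsigned char buf[BUF_SIZE];

static void do_encode(unsigned char *p, size_t n)
{
    memset(p, 0xA5, n);   /* placeholder workload; replace with the real call */
}

int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++)
        do_encode(buf, sizeof buf);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double total_us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                      (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("mean latency: %.1f us per 1 MiB operation\n", total_us / ITERS);
    return 0;
}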