Performance by the Numbers

Settler is benchmarked continuously against realistic, high-volume financial data workloads. The numbers below come from the production engine, not synthetic demos.

  • 45ms · API Latency (p95) · Global Edge
  • 10k/s · Recon Throughput · Events per Job
  • <120ms · Evidence Generation · Per Run (SHA-256)
  • <200ms · Replay Verify · Hash Confirmation
  • 100% · Determinism Pass Rate · Identical Inputs → Identical Hash
  • 99.99% · Uptime SLA · Enterprise Tier
  • 0 leaks · Tenant Isolation Tests · Across All CI Runs
  • <3s · Mismatch Triage · AI-Assisted Context Load

Latency by Stage

Stage                       p50      p95      p99
API Gateway + Auth          4ms      8ms      14ms
Rule Evaluation             12ms     22ms     38ms
Evidence Hash Generation    45ms     90ms     115ms
Mismatch Classification     6ms      11ms     18ms
Audit Log Write             3ms      7ms      12ms
End-to-End (Full Run)       280ms    620ms    980ms

Throughput by Scenario

Scenario                          Volume                Throughput      Evidence
Stripe ↔ QuickBooks               50k transactions      ~8,200 rec/s    2.1 MB
Bank Payouts ↔ Internal Ledger    200k rows             ~9,800 rec/s    8.4 MB
Multi-source (3 systems)          100k transactions     ~6,400 rec/s    5.7 MB
High-cardinality FX               25k multi-currency    ~7,100 rec/s    1.9 MB

Determinism Guarantees

Settler enforces determinism at the engine level. The following are verified in CI on every merge:

  • Identical input data and identical rules → byte-identical output hash
  • Replay of any stored run confirms original evidence hash
  • Rule version fingerprint captured in every evidence bundle
  • Concurrent runs on the same inputs independently produce identical hashes
  • Engine output is floating-point safe: all amounts are handled as integer cents
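
Settler's internal hashing scheme isn't published here, but the guarantees above imply a canonicalization step before hashing. A minimal sketch, assuming hypothetical record fields (`id`, `amount_cents`) and a hypothetical `evidence_hash` helper, of how integer-cent amounts plus sorted, whitespace-free JSON can yield a byte-identical SHA-256 regardless of input order:

```python
import hashlib
import json

def evidence_hash(records, rule_version):
    """Hypothetical sketch: canonicalize records, then hash.

    Amounts are integer cents (no floats), records are sorted by a
    stable key, and JSON is serialized with sorted keys and fixed
    separators so identical inputs always give an identical digest.
    """
    canonical = {
        "rule_version": rule_version,  # rule fingerprint travels with the bundle
        "records": sorted(records, key=lambda r: r["id"]),
    }
    payload = json.dumps(canonical, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

records = [
    {"id": "txn_2", "amount_cents": 1999, "currency": "USD"},
    {"id": "txn_1", "amount_cents": 450, "currency": "USD"},
]
h1 = evidence_hash(records, "rules-v3")
h2 = evidence_hash(list(reversed(records)), "rules-v3")  # input order is irrelevant
assert h1 == h2
```

Sorting plus fixed JSON separators is one common way to remove serialization nondeterminism; changing the rule version changes the digest, which is what makes the rule fingerprint auditable.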

Methodology

Benchmarks run continuously via a k6-based load test suite distributed across multiple AWS regions. Each test scenario uses fixture data that mirrors production-scale financial transaction patterns: bursty webhook events, concurrent reconciliation jobs, high-cardinality foreign key joins, and multi-currency FX amounts.

Latency Measurement

All latency figures are client-side wall-clock times including full network round-trip. Internal stage timings are captured via structured spans at the engine boundary and summed for end-to-end reporting. The edge CDN is included in all p95/p99 measurements.
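
The stage timings above come from structured spans at the engine boundary. As a rough illustration only (the span names and the `span` helper here are hypothetical, not Settler's API), per-stage wall-clock capture can look like:

```python
import time
from contextlib import contextmanager

spans = {}  # stage name -> duration in milliseconds

@contextmanager
def span(name):
    """Record wall-clock duration of a stage (hypothetical helper)."""
    start = time.perf_counter()
    try:
        yield
    finally:
        spans[name] = (time.perf_counter() - start) * 1000

# stand-in stage bodies; real stages do actual work
with span("rule_evaluation"):
    time.sleep(0.010)
with span("evidence_hash"):
    time.sleep(0.005)

end_to_end_ms = sum(spans.values())
```

In practice the end-to-end figure also absorbs queueing and network time that no single stage owns, which is why it exceeds the sum of stage p50s in the table.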

Throughput Measurement

Throughput is measured as records successfully reconciled per second under sustained load. Each scenario runs for 120 seconds at target concurrency before recording throughput to allow JIT warmup. Numbers reflect the engine running at 80% CPU utilization on a standard cloud instance type.
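
A simplified sketch of that measurement loop, with a hypothetical `measure_throughput` harness and toy windows in place of the real 120-second warmup:

```python
import time

def measure_throughput(process_record, records, warmup_s, measure_s):
    """Run at full tilt through a warmup window (discarded), then
    count records processed per second over the measurement window."""
    end_warmup = time.perf_counter() + warmup_s
    i = 0
    while time.perf_counter() < end_warmup:
        process_record(records[i % len(records)])
        i += 1
    start = time.perf_counter()
    done = 0
    end_measure = start + measure_s
    while time.perf_counter() < end_measure:
        process_record(records[done % len(records)])
        done += 1
    return done / (time.perf_counter() - start)

# toy processor and tiny windows; the real suite uses 120s at target concurrency
rate = measure_throughput(lambda r: sum(r), [[1, 2, 3]] * 100,
                          warmup_s=0.1, measure_s=0.2)
```

Discarding the warmup window avoids counting JIT compilation and cache-fill effects against steady-state throughput.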

Determinism Verification

Determinism is verified by running the same reconciliation job three times with identical inputs and confirming that all three evidence SHA-256 hashes are byte-identical. This test runs on every merge in CI. A determinism failure is a blocking CI failure.
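
The CI gate described above reduces to a small check. In this sketch, `run_job` is a stand-in for the real engine (its actual invocation isn't shown here) and `verify_determinism` is a hypothetical helper:

```python
import hashlib
import json

def run_job(inputs, rules):
    """Stand-in for a reconciliation run; returns an evidence hash."""
    payload = json.dumps({"inputs": inputs, "rules": rules}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def verify_determinism(inputs, rules, runs=3):
    """Run the same job N times; any hash divergence is a hard failure."""
    hashes = {run_job(inputs, rules) for _ in range(runs)}
    if len(hashes) != 1:
        raise AssertionError(f"non-deterministic: {len(hashes)} distinct hashes")
    return hashes.pop()

h = verify_determinism({"txns": ["txn_1", "txn_2"]}, {"version": "rules-v3"})
```

Using a set makes the check order-insensitive: three identical digests collapse to one element, and anything else raises and blocks the merge.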

Benchmark results reflect the production engine under controlled load test conditions. Actual performance in your environment will vary based on data volume, network topology, instance size, and concurrent load. Run the quickstart locally to measure performance on your own hardware.