Helix

Benchmarks

Helix ships a JSON benchmark harness (python -m benchmarks.api_benchmarks), and CI runs it continuously and publishes the results. The sections below cover running the harness locally, triggering heavy sweeps on GitHub Actions, and reading the published trend data.

Running locally

# 5 timed repeats after 1 warmup pass; --limit 0 lifts the sampling cap.
# Results land in bench-results/ as JSON plus a markdown summary.
python -m benchmarks.api_benchmarks \
  --repeat 5 \
  --warmup 1 \
  --limit 0 \
  --out bench-results/api.json \
  --summary-md bench-results/api.md
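
If you want to inspect the output programmatically, something like the following works. It is a minimal sketch: the harness's JSON schema is not spelled out here, so the assumed shape (a mapping of benchmark names to stat dicts with a mean_s field, mirroring the CSV column naming) may need adjusting.

import json
from pathlib import Path

# Load the local harness output. The shape assumed here (name -> stats dict
# with a "mean_s" key) is an illustration, not a documented schema; the
# "mean_s" name is borrowed from the history CSV columns.
results = json.loads(Path("bench-results/api.json").read_text())

for name, stats in sorted(results.items()):
    if isinstance(stats, dict) and "mean_s" in stats:
        print(f"{name}: {stats['mean_s']:.4f} s mean")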

Heavy datasets via GitHub Actions

Trigger a manual heavy sweep from the Actions → CI → Run workflow button:

  1. Set bench_heavy to true (this bumps repeats to 10 and disables the 10k sampling limit).
  2. Optionally provide runner-accessible overrides for dna_fasta / protein_fasta. On hosted runners you typically leave these blank; on self-hosted boxes you can point them at a mounted volume or a fetcher script. The same dispatch can be scripted; see the sketch after this list.
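
On a fork you can trigger the same dispatch through the standard GitHub Actions REST API instead of clicking through the UI. A minimal sketch, assuming a workflow file named ci.yml and a token in GITHUB_TOKEN; OWNER and REPO are placeholders for your repository.

import os
import requests

# Dispatch the CI workflow with the heavy-benchmark inputs via the standard
# GitHub Actions workflow_dispatch API. OWNER/REPO and "ci.yml" are
# placeholders; substitute your repository's values.
OWNER, REPO, WORKFLOW = "OWNER", "REPO", "ci.yml"

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW}/dispatches",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "ref": "main",
        "inputs": {
            # Bumps repeats to 10 and disables the 10k sampling limit.
            "bench_heavy": "true",
            # Self-hosted runners only; hypothetical paths for illustration:
            # "dna_fasta": "/mnt/data/dna.fa",
            # "protein_fasta": "/mnt/data/protein.fa",
        },
    },
    timeout=30,
)
resp.raise_for_status()  # the API returns 204 No Content on success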

Each run publishes the raw JSON results under benchmarks/out/ (uploaded as a CI artifact) and appends a row to docs/data/bench/history.csv, which feeds the trend gallery below.

Trend (mean seconds)

The gallery below visualizes every *.mean_s column recorded in docs/data/bench/history.csv and summarizes the latest run.
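
You can reproduce the gallery locally by plotting those columns straight from the CSV. A minimal sketch using pandas; only the *.mean_s column naming is taken from the CSV itself, and using row order as the x-axis is an assumption.

import pandas as pd
import matplotlib.pyplot as plt

# Plot every *.mean_s series from the benchmark history, one line per
# benchmark, against run order (the CSV's own x-axis column, if any, is
# not assumed here).
df = pd.read_csv("docs/data/bench/history.csv")
mean_cols = [c for c in df.columns if c.endswith(".mean_s")]
if not mean_cols:
    raise SystemExit("no *.mean_s columns found in history.csv")

ax = df.plot(y=mean_cols, figsize=(10, 4))
ax.set_xlabel("run")
ax.set_ylabel("mean seconds")
ax.set_title("Benchmark trend")
plt.tight_layout()
plt.savefig("bench-trend.png")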

CSV source: docs/data/bench/history.csv. If you need to recompute metrics offline, the commit history contains the raw JSON artifacts under benchmarks/out/ (uploaded by CI).
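
An offline recompute can walk those committed artifacts along these lines. This is a sketch under assumptions: the name -> record mapping and the "samples" field are guesses at the artifact schema, so inspect one file first and adjust the keys.

import json
from pathlib import Path
from statistics import mean

# Recompute means from the raw per-run JSON artifacts. The top-level
# name -> record mapping and the "samples" key are assumptions about the
# artifact schema, used here only for illustration.
for path in sorted(Path("benchmarks/out").glob("*.json")):
    data = json.loads(path.read_text())
    for name, record in data.items():
        samples = record.get("samples") if isinstance(record, dict) else None
        if samples:
            print(f"{path.name} {name}: mean={mean(samples):.4f} s over {len(samples)} runs")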