Measuring Software Performance

Measuring Software Performance: Why Your Benchmarks Are Probably Lying

A Loose Cable That Broke Physics

In 2006, a team of physicists began building the OPERA experiment, a detector at Gran Sasso in Italy designed to measure the speed of neutrinos fired 730 kilometers through the Earth from CERN in Switzerland. Five years of construction. Roughly 100 million euros. Some of the most rigorous experimental physics on the planet. In September 2011, the results came back: neutrinos appeared to be traveling faster than the speed of light. The team had just broken the laws of physics. ...

March 6, 2026 · 13 min · 2681 words · Kemal Akkoyun
How to Reliably Measure Software Performance

talk: How to Reliably Measure Software Performance

Measuring software performance reliably is remarkably difficult. It’s a specialized version of a more general problem: trying to find a signal in a world full of noise. A benchmark that reports a 5% improvement might just be measuring thermal throttling, noisy neighbors, or the phase of the moon. In this talk, we walk through the full stack of reliable performance measurement — from controlling your benchmarking environment (bare metal instances, CPU affinity, disabling SMT and dynamic frequency scaling) to designing benchmarks that are both representative and repeatable. We cover the statistical methods needed to interpret results correctly (hypothesis testing, change point detection) and show how to integrate continuous benchmarking into development workflows so regressions are caught before they reach production. ...
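The hypothesis-testing step mentioned above can be sketched with a permutation test: shuffle the labels between baseline and candidate timings many times, and ask how often chance alone produces a difference as large as the one observed. This is a minimal illustration, not code from the talk; the function name and the timing samples are hypothetical.

```python
import random
import statistics

def permutation_test(baseline, candidate, n_resamples=10000, seed=0):
    """Two-sided permutation test on the difference of sample means.

    Returns an estimated p-value: the fraction of random label
    shufflings whose mean difference is at least as extreme as the
    observed one. A small p-value suggests the difference is unlikely
    to be measurement noise.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(candidate) - statistics.mean(baseline))
    pooled = list(baseline) + list(candidate)
    n = len(baseline)
    extreme = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[n:]) - statistics.mean(pooled[:n]))
        if diff >= observed:
            extreme += 1
    return extreme / n_resamples

# Hypothetical benchmark timings in milliseconds (illustrative only).
baseline = [10.2, 10.4, 10.1, 10.3, 10.5, 10.2, 10.4, 10.3]
candidate = [9.6, 9.8, 9.7, 9.9, 9.5, 9.8, 9.7, 9.6]

print(f"p-value: {permutation_test(baseline, candidate):.4f}")
```

The appeal of a permutation test for benchmark data is that it makes no normality assumption, which matters because timing distributions are typically skewed and multi-modal.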

February 1, 2026 · 1 min · 209 words · Kemal Akkoyun