Switching from JMeter was the right call for us!
About 14 months ago our team (based in PAK, B2C platform, peaks around major shopping events) moved our entire load testing suite from JMeter to k6. Wanted to give an honest retrospective now that we’ve been through two major sales events with the new setup.
Why we left JMeter: Not a knock on JMeter — it’s powerful and battle-tested. But for our team:
- XML-based test scripts were painful to maintain and review in PRs
- The GUI was misleading for distributed load generation
- Correlating JMeter output with our Grafana dashboards required too much glue
What we like about k6:
- Tests are just JavaScript — version controlled, code-reviewed, readable
- Native Grafana integration (they’re the same company now, and it shows)
- k6 Cloud for distributed load generation without managing infrastructure
- Thresholds built into the test script itself
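For anyone who hasn't seen it, a minimal sketch of what in-script thresholds look like (the VU count, duration, budgets, and URL are all made-up illustrations, and this runs under `k6 run`, not plain Node):

```javascript
import http from 'k6/http';

// Hypothetical budgets; tune these to your own SLOs.
export const options = {
  vus: 50,
  duration: '2m',
  thresholds: {
    // If p95 latency exceeds 500 ms or more than 1% of requests fail,
    // k6 exits non-zero, which is what fails the CI job.
    http_req_duration: ['p(95)<500'],
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  http.get('https://staging.example.com/checkout'); // placeholder URL
}
```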
Does this mean your CI pipeline fails automatically if performance regresses past defined budgets? No human has to eyeball a graph? That's so cool!
Yes! But let me be transparent:
What’s still annoying
- Browser-based performance testing via k6 browser is still maturing
- Protocol support isn't as broad as JMeter (though it covers 95% of our needs)
Happy to answer questions if anyone is mid-evaluation between the two.
Very similar journey here, moved from JMeter about two years ago.
The threshold-as-code approach is the feature I’d never give up. Having performance acceptance criteria living in the same repo as the tests completely changed our relationship with stakeholders. Now we have actual conversations about what p95 latency should be before a feature ships, rather than after it’s already slow in production.
Interesting how a seemingly small change can have such an effect!
One thing worth mentioning for anyone evaluating k6: the k6 extensions ecosystem is worth exploring early. We added xk6-sql to test some of our DB-heavy endpoints more realistically. The extension model is a bit involved to set up but powerful.

Slightly different perspective — we evaluated k6 and ended up staying on Gatling (based in Germany but roots in France, so maybe there's a bias there 😄).
For our use case — complex, stateful user journeys with a lot of conditional logic — Gatling’s Scala DSL was actually easier to model than JavaScript. The simulation concept maps more naturally to what we’re doing.
That said, k6 is clearly the right call for teams who want fast onboarding and tight Grafana integration. They’re both good tools and the JMeter-or-nothing era is happily over.
Quick question for OP — how are you handling test data at scale?
This is always where our load tests fall apart. We can generate the load fine, but realistic test data for 10,000 simulated users doing checkout flows requires a lot of pre-seeded accounts, products, inventory states, etc. What’s your approach?
Great question. Honestly it’s still our biggest pain point.
Current approach:
- Pre-generate a CSV of test user credentials (10k accounts in our staging environment)
- k6 reads from the CSV using SharedArray so each VU gets a unique user
- Product/inventory data is refreshed from a staging snapshot weekly via a script
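In case it helps anyone, a sketch of that SharedArray pattern (file name, column names, and the login URL are illustrative; runs under `k6 run`):

```javascript
import { SharedArray } from 'k6/data';
import http from 'k6/http';
import exec from 'k6/execution';

// Parsed once in the init context; SharedArray keeps a single read-only
// copy in memory that all VUs share, instead of one copy per VU.
const users = new SharedArray('users', function () {
  return open('./users.csv')
    .split('\n')
    .slice(1) // skip the header row
    .filter(Boolean)
    .map((line) => {
      const [username, password] = line.split(',');
      return { username, password };
    });
});

export default function () {
  // exec.vu.idInTest is 1-based and unique per VU,
  // so each VU logs in with its own pre-seeded account.
  const user = users[(exec.vu.idInTest - 1) % users.length];
  http.post('https://staging.example.com/login', { // placeholder URL
    username: user.username,
    password: user.password,
  });
}
```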
It’s not elegant. But it works well enough that our load results are meaningfully realistic. The alternative — fully synthetic data — produces numbers that don’t translate to production at all in our experience.
insightful! thanks for the reply 😀
Can you explain what tool you use to pre-generate the CSV? Appreciated!
