how do you manage testing across NVDA, JAWS, and VoiceOver without losing your mind? 🫠

I’m a QA engineer in the Netherlands working on a government portal that has to comply with EN 301 549 (the EU accessibility standard for ICT). This means I can’t just test with one screen reader; I need coverage across NVDA, JAWS, and VoiceOver at minimum.
The problem is the inconsistency is absolutely wild.
Specific things driving me crazy right now (simplified repro sketches for each below):

- `aria-live` regions that announce correctly in NVDA but are completely silent in JAWS
- VoiceOver on iOS treating `role="button"` differently depending on whether it’s on a `<div>` or a `<span>`
- Focus management after modal close that works in one reader and breaks in another
I’ve got a Windows VM for NVDA and JAWS testing and a physical iPhone for VoiceOver. The setup works but the context-switching is exhausting.
Has anyone found a sustainable workflow for multi-screen-reader regression testing?
I’m also wondering how much of this is reasonable to automate vs just accepting it needs skilled manual testers. Would love to hear from people doing this at scale.
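To make the automation question concrete: by “automate” I mean the rules-based layer, something like an axe-core scan inside a Playwright run (sketch; the URL is a placeholder, and the tags cover the WCAG 2.1 AA requirement that EN 301 549 incorporates):

```ts
// Assumes @playwright/test and @axe-core/playwright are installed.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no WCAG 2.1 AA violations', async ({ page }) => {
  await page.goto('https://portal.example.nl/'); // placeholder URL
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
    .analyze();
  expect(results.violations).toEqual([]);
});
```

But a green run here tells me nothing about whether JAWS actually speaks the live region, which is exactly the part I can’t see how to automate.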

Fellow EU tester here (based in Poland, also dealing with public sector a11y compliance). The inconsistency between JAWS and NVDA is genuinely one of the most frustrating parts of this work.
What’s helped our workflow:
We stopped trying to achieve identical behavior across all readers and shifted to “no journey-blocking failures in any of them.” Subtle announcement differences are logged but not always blockers.
Also — if you’re not already using it, the Accessibility Insights for Web extension is great for structured manual audits. Free, from Microsoft, actually good.
I’ll look into this, thanks!!!