how do you manage testing across NVDA, JAWS, and VoiceOver without losing your mind? 🫠 I’m a QA engineer in the Netherlands working on a government portal that has to comply with EN 301 549 (Europe’s accessibility standard). This means I can’t just test with one screen reader — I need coverage across NVDA, JAWS, and VoiceOver at minimum.
The problem is the inconsistency is absolutely wild.
Specific things driving me crazy right now:
- `aria-live` regions that announce correctly in NVDA but are completely silent in JAWS
- VoiceOver on iOS treating `role="button"` differently depending on whether it’s on a `<div>` or a `<span>`
- Focus management after modal close that works in one reader and breaks in another (sketch of the pattern below)
I’ve got a Windows VM for NVDA and JAWS testing and a physical iPhone for VoiceOver. The setup works but the context-switching is exhausting.
Has anyone found a sustainable workflow for multi-screen-reader regression testing?
I’m also wondering how much of this is reasonable to automate vs just accepting it needs skilled manual testers. Would love to hear from people doing this at scale.

The `aria-live` inconsistency you’re describing is a known pain point. JAWS has its own internal logic for deciding what’s “worth” announcing from live regions, and it doesn’t always respect `aria-atomic` the way the spec intends.

What’s worked for us: wrapping live region content in a visually hidden but real `<p>` tag rather than relying purely on ARIA attributes. More semantic HTML tends to behave more predictably across screen readers than heavy ARIA decoration.

```html
<p class="sr-only" aria-live="polite">Form submitted successfully</p>
```

It feels old-fashioned, but it’s more reliable in practice.
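For pushing messages into that region from script, something like this has been reliable for us. A minimal sketch, assuming a `.sr-only` utility class that visually hides the element; the `announce` helper name is just illustrative:

```typescript
// Minimal sketch: update the hidden live region from script.
// Assumes the <p class="sr-only" aria-live="polite"> element above
// is already in the DOM; "announce" is an illustrative helper name.
function announce(message: string): void {
  const region = document.querySelector<HTMLParagraphElement>(
    'p.sr-only[aria-live="polite"]'
  );
  if (!region) return;

  // Clear first, then set after a short delay, so repeating the same
  // message still triggers an announcement in most screen readers.
  region.textContent = '';
  window.setTimeout(() => {
    region.textContent = message;
  }, 50);
}

// Example: after a successful form submission
announce('Form submitted successfully');
```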
thanks!
Quick question: how do you know this claim is true? Have they (the JAWS developers) stated this themselves?