how do you manage testing across NVDA, JAWS, and VoiceOver without losing your mind? 🫠

I’m a QA engineer in the Netherlands working on a government portal that has to comply with EN 301 549, the European accessibility standard for ICT. That means I can’t get away with testing against a single screen reader — I need coverage across NVDA, JAWS, and VoiceOver at minimum.

The problem is that the inconsistency between them is absolutely wild.

Specific things driving me crazy right now:

  • aria-live regions that announce correctly in NVDA but are completely silent in JAWS
  • VoiceOver on iOS treating role="button" differently depending on whether it’s a <div> or <span>
  • Focus management after closing a modal works in one screen reader and breaks in another
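
For context on the aria-live one: the least flaky pattern I’ve found is a single live region that exists in the DOM from page load (JAWS tends to miss announcements when the region itself is injected at announce time), with only its text swapped out. A minimal sketch — the `announce` helper name and the 100 ms delay are my own convention, not from any spec:

```javascript
// Sketch of a live-region announcer. The region (e.g. a
// <div aria-live="polite"> rendered at page load) stays in the DOM
// permanently; only its text changes. Clearing first, then setting on a
// short delay, makes repeated identical messages re-announce in most
// screen reader/browser pairings. The 100 ms is an empirical fudge.
function announce(liveRegion, message) {
  liveRegion.textContent = "";
  setTimeout(() => {
    liveRegion.textContent = message;
  }, 100);
}
```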
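
And on role="button": in my testing, VoiceOver gets much more predictable once tabindex and key handling are explicit (a native `<button>` sidesteps all of this, but the codebase is what it is). A sketch of the full contract I test against — `wireFakeButton` is a hypothetical helper name:

```javascript
// Hypothetical helper: give a <div>/<span> the full button contract.
// role="button" alone isn't enough — without tabindex="0" the element is
// never keyboard-focusable, and without Enter/Space handling it announces
// as a button but does nothing when activated.
function wireFakeButton(el, onActivate) {
  el.setAttribute("role", "button");
  el.setAttribute("tabindex", "0");
  el.addEventListener("keydown", (event) => {
    if (event.key === "Enter" || event.key === " ") {
      event.preventDefault(); // stop Space from scrolling the page
      onActivate(event);
    }
  });
  el.addEventListener("click", onActivate);
}
```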
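
The modal case is the only one where I’ve found something cross-screen-reader stable: explicitly capture the trigger element on open and hand focus back on close, never trusting the browser’s default. A minimal sketch (the names are mine):

```javascript
// Minimal focus-restore sketch: remember whatever opened the modal and
// return focus to it on close. Relying on default browser focus behavior
// after the modal is removed is exactly where screen readers diverge
// (focus silently falling to <body> is the classic failure).
function createFocusManager() {
  let trigger = null;
  return {
    open(openerEl) {
      trigger = openerEl; // typically document.activeElement at open time
    },
    close() {
      if (trigger && typeof trigger.focus === "function") {
        trigger.focus(); // focus lands back on the opener, not <body>
      }
      trigger = null;
    },
  };
}
```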

I’ve got a Windows VM for NVDA and JAWS testing and a physical iPhone for VoiceOver. The setup works but the context-switching is exhausting.

Has anyone found a sustainable workflow for multi-screen-reader regression testing?

I’m also wondering how much of this is reasonable to automate vs just accepting it needs skilled manual testers. Would love to hear from people doing this at scale.
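
To make the automation question concrete: my working assumption is that static ARIA contract checks (the class of thing axe-core and pa11y run) are automatable, while actual announcement behavior is not. A toy illustration of the automatable slice — the node shape here is made up for the example:

```javascript
// Toy example of a statically automatable check: flag role="button"
// elements that can never receive keyboard focus. Real tools (axe-core,
// pa11y) run this class of check; what no automated tool can tell you is
// what NVDA/JAWS/VoiceOver actually announce.
function findUnfocusableFakeButtons(nodes) {
  return nodes.filter(
    (n) => n.role === "button" && n.tag !== "button" && n.tabindex == null
  );
}
```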