how do you manage testing across NVDA, JAWS, and VoiceOver without losing your mind? 🫠 I’m a QA engineer in the Netherlands working on a government portal that has to comply with EN 301 549 (Europe’s accessibility standard). This means I can’t just test with one screen reader — I need coverage across NVDA, JAWS, and VoiceOver at minimum.

The problem is that the inconsistency between them is absolutely wild.

Specific things driving me crazy right now:

  • aria-live regions that announce correctly in NVDA but are completely silent in JAWS
  • VoiceOver on iOS treating role="button" differently depending on whether it’s a <div> or <span>
  • Focus management after modal close works in one, breaks in another (rough sketch of this and the live-region setup below)
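
To make the first and last bullets concrete, this is roughly the pattern I'm testing. It's heavily simplified and every name here is made up, but it's the shape of the thing:

```ts
// Simplified sketch of the live-region + modal-focus pattern (all names are made up).

// 1. Polite live region: NVDA announces updates to it, JAWS stays silent for us.
const status = document.createElement('div');
status.setAttribute('role', 'status');        // role="status" implies aria-live="polite"
status.setAttribute('aria-live', 'polite');
document.body.appendChild(status);

function announce(message: string): void {
  // Clear first, then set on the next frame so the mutation is more likely
  // to be picked up; JAWS still misses it intermittently on our portal.
  status.textContent = '';
  requestAnimationFrame(() => { status.textContent = message; });
}

// 2. Focus restoration after the modal closes.
let trigger: HTMLElement | null = null;

function openModal(modal: HTMLElement): void {
  trigger = document.activeElement as HTMLElement;   // remember what opened it
  modal.hidden = false;
  modal.querySelector<HTMLElement>('button, [href], input, [tabindex]')?.focus();
}

function closeModal(modal: HTMLElement): void {
  modal.hidden = true;
  trigger?.focus();               // NVDA follows this; VoiceOver sometimes lands elsewhere
  announce('Dialog closed');
}
```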

I’ve got a Windows VM for NVDA and JAWS testing and a physical iPhone for VoiceOver. The setup works but the context-switching is exhausting.

Has anyone found a sustainable workflow for multi-screen-reader regression testing?

I’m also wondering how much of this is reasonable to automate vs just accepting it needs skilled manual testers. Would love to hear from people doing this at scale.

  • cool_developer · 2 months ago

    I’m a QA consultant based in Canada specialising in a11y. Honest answer to your question:

    Most of it needs to stay manual. Automated tools (Axe, Lighthouse etc.) catch maybe 30–40% of real accessibility issues — the rest requires human judgment and actual assistive technology.

    What you can automate:

    • Axe-core integrated into your Playwright suite catches low-hanging fruit on every PR (rough sketch after this list)
    • Custom linting rules for missing alt text, empty labels, bad heading structure
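
    If it helps, this is roughly how we wire it up. I'm assuming the @axe-core/playwright package here; the URL and test name are placeholders:

    ```ts
    // Playwright check that fails the PR if axe-core finds WCAG A/AA violations.
    import { test, expect } from '@playwright/test';
    import AxeBuilder from '@axe-core/playwright';

    test('portal home page has no detectable WCAG A/AA violations', async ({ page }) => {
      await page.goto('https://your-portal.example/'); // placeholder URL
      const results = await new AxeBuilder({ page })
        .withTags(['wcag2a', 'wcag2aa'])               // restrict to WCAG 2.0 A and AA rules
        .analyze();
      expect(results.violations).toEqual([]);
    });
    ```

    Keep in mind this only covers the rule-based checks; it tells you nothing about how NVDA or JAWS actually read the page.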

    What you cannot automate:

    • Whether a screen reader actually conveys the right meaning
    • Logical focus order
    • Whether error messages are actually helpful when announced

    Your exhaustion is valid. This work is skilled and time-intensive. Push back on anyone who tells you an overlay or an automated scanner “handles” accessibility.