Curious what’s actually useful vs hype.

Wanted to get a real-world temperature check on this. My company in India (B2B SaaS) is being pushed by management to “use AI in our QA process” — classic top-down pressure without much direction.

I’ve personally tried:

  • GitHub Copilot for autocompleting test cases (actually useful for boilerplate)
  • Cursor for refactoring old Selenium tests (surprisingly decent)
  • One of the newer “autonomous testing” tools I won’t name: a complete waste of money

What’s been your experience? I’m specifically interested in whether anyone is using AI for:

  • Test case generation from requirements documents

That seems like the most promising use case to me but I haven’t found a solid workflow yet.

  • pakistani_tester1947
    2 months ago

    Hi! Context: I’m based in Pakistan, QA architect at a mid-size logistics company.

    Copilot for writing unit tests is legitimately good when the code under test is well-structured. Give it a pure function with clear inputs/outputs and it’ll cover edge cases you’d forget. Give it a tangled service class with 10 dependencies and it produces confident-looking garbage.
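    To make “well-structured” concrete, here’s a hypothetical sketch (the function name and behavior are mine, not from any real codebase) of the kind of pure function Copilot handles well, plus the edge-case asserts it tends to suggest:

    ```python
    # Hypothetical example: a pure function with clear inputs/outputs
    # and no dependencies -- the easy case for AI-generated tests.

    def normalize_tracking_code(code: str) -> str:
        """Uppercase a tracking code and strip spaces and dashes."""
        if not isinstance(code, str):
            raise TypeError("tracking code must be a string")
        cleaned = code.strip().upper().replace("-", "").replace(" ", "")
        if not cleaned:
            raise ValueError("tracking code is empty")
        return cleaned

    # The sort of edge cases a tool will usually cover for you:
    assert normalize_tracking_code("ab-12 34") == "AB1234"
    assert normalize_tracking_code("  trk-001 ") == "TRK001"
    ```

    With a tangled service class, the model can’t see the real contract, so it invents one — that’s where the confident-looking garbage comes from.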

    The “autonomous testing” tools — I’ve evaluated three of them now. They work okayish for simple happy-path flows but they fail completely when your app has complex auth, dynamic content, or anything stateful.

    • definitelydevelopingOP
      2 months ago

      Thanks for your response — would love to connect if you want to chat more! I think we could learn useful things from each other 🙌

    • german_coder8763
      2 months ago

      Can you explain what that means substantively? Is it really about complex auth flows, or about how the tools are configured?