Over the last couple of years, many of my podcast guests who work in the testing industry have spoken about the rapid adoption of AI within their software development teams. During a recent webinar/podcast I hosted with automation testing industry veterans Tal Barmeir and Guy Arieli of Blinq.io, for instance, they dropped some real-world knowledge about how AI can help you with your automation testing efforts.
Read on to learn how to employ their expert tips to master AI in test automation.
AI Agents as Test Automation Coders
Tal and Guy pointed to the use of AI agents as a possible future for AI in testing.
AI agents act as test automation coders supervised by human testers. These autonomous agents receive test requirements, generate the automation code, execute it, and maintain it over time.
You may be thinking, “But doesn't this remove the human tester from the process?” The answer is, not really.
Guy and Tal mentioned that humans still play a crucial role in auditing the AI's work.
Humans do things like helping the agents when they get stuck and overriding their decisions when needed.
I believe this is basically the model that most teams will adopt for AI in software development.
It will be a human-AI collaboration model, NOT a replacement model. Another benefit is that more folks with deep domain knowledge can contribute to the automation efforts, since AI handles the heavy lifting of writing the code.
Some automation engineers can, in turn, be promoted into managerial roles overseeing teams of AI testers.
How to Keep Up with AI-Assisted Development
AI-generated code is already being widely used.
According to Guy, AI-assisted development tools like GitHub Copilot are already boosting developer productivity by 20-30%.
Why should you care?
Well, the extra code that is being developed will also require testing.
Yay for testers!
But for testers to keep up with all this new code, Tal believes AI-powered testing tools will be key.
Think about it – without taking advantage of AI assistance, your testing team risks burning out or, worse, being seen as a bottleneck.
How AI Will Help You Scale and Prioritize Tests
After interviewing over 500 software engineers, I’ve learned that scaling tests is one of the biggest hurdles to automation.
Here's the good news. AI agents can effectively alleviate one of the biggest challenges in automation: scaling.
The size of your test team no longer limits you. Instead, you can scale up unlimited testing agents to help you.
So rather than picking and choosing what to test based on time constraints (as well as praying you've identified the right mix), you can take advantage of unlimited agents.
With AI, you’re no longer confined to a limited scope of testing. You now have the freedom to cover all the different configurations and scenarios that need to be tested.
Another significant benefit of AI is the ability to handle localization testing.
So, a test you've written once can automatically be used to validate dozens of languages.
Talk about a time saver!
Your traditional testing matrix of separate scripts for each language collapses into a single, AI-powered test. Even the historical divide between mobile and web testing fades, as a properly designed AI test focused on business logic can run seamlessly across both.
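To make that concrete, here's a rough sketch (mine, not Blinq.io's generated code) of how a single Playwright test can be parameterized across locales. The URL, locale list, and expected strings are all placeholders:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical locales and expected translations, for illustration only.
const locales = [
  { code: 'en-US', welcome: 'Welcome' },
  { code: 'de-DE', welcome: 'Willkommen' },
  { code: 'ja-JP', welcome: 'ようこそ' },
];

for (const { code, welcome } of locales) {
  test(`welcome banner renders in ${code}`, async ({ browser }) => {
    // Open a browser context with the target locale so the app
    // serves its translated UI.
    const context = await browser.newContext({ locale: code });
    const page = await context.newPage();
    await page.goto('https://example.com'); // placeholder URL

    // The business-logic assertion is written once; only the
    // expected string varies per language.
    await expect(page.getByRole('banner')).toContainText(welcome);
    await context.close();
  });
}
```

One test definition covers dozens of languages, which is exactly the collapse of the per-language test matrix described above.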
The Power of AI + Cucumber
Guy described how Cucumber, a business-readable, domain-specific language (DSL) for behavior specifications, is perfect for prompting AI test generation. By writing test scenarios at a business logic level in Cucumber, you can feed the AI all the information it needs to autonomously create full test automation code.
The key is getting the prompt engineering right—providing enough detail for the AI to understand what to do, but keeping it focused on business logic vs. low-level UI interactions. Tal recommends using Cucumber best practices like the imperative mood (e.g. “login with username and password”) and Gherkin syntax to strike this balance.
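The exact scenarios from the webinar weren't shared, but here's a hypothetical example of the pattern: a business-level Gherkin scenario (shown in the comment) plus TypeScript step definitions using @cucumber/cucumber. The steps and data are placeholders, and the in-memory logic stands in for whatever browser code the AI would generate:

```typescript
// Feature: Checkout
//   Scenario: Add an item to the cart
//     Given login with username and password
//     When add the first product to the cart
//     Then the cart shows 1 item

import { Given, When, Then } from '@cucumber/cucumber';
import assert from 'node:assert';

// In-memory stand-ins for the page interactions the AI would generate;
// a real run would drive a browser instead.
let loggedIn = false;
let cartItems = 0;

Given('login with username and password', async function () {
  loggedIn = true; // AI-generated code would perform the real login here
});

When('add the first product to the cart', async function () {
  assert.ok(loggedIn, 'must be logged in first');
  cartItems += 1;
});

Then('the cart shows {int} item(s)', async function (expected: number) {
  assert.strictEqual(cartItems, expected);
});
```

Notice the scenario never mentions selectors or clicks. It stays at the level of business intent, which is exactly the level of detail the AI should be prompted with.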
Under the Hood: How AI Testers Work
In a live demo, Guy showed an AI tester in action. Given a basic user journey like “login and add an item to cart”, the AI autonomously broke it down into subtests, generated detailed test code (in Playwright), and executed the scripts, reasoning through the UI along the way. The generated code was sophisticated, with multiple locators, proper prioritization, and sensible naming—all created in minutes.
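The generated code itself wasn't reproduced here, but a minimal sketch of that multiple-locator idea in Playwright might look like this (all selectors and the URL are hypothetical):

```typescript
import { test, expect } from '@playwright/test';

test('login and add an item to cart', async ({ page }) => {
  await page.goto('https://shop.example.com'); // placeholder URL

  // Prefer a stable, role-based locator; fall back to a test id, then CSS,
  // so a cosmetic UI change doesn't immediately break the test.
  const loginButton = page
    .getByRole('button', { name: 'Log in' })
    .or(page.getByTestId('login-button'))
    .or(page.locator('#login'));
  await loginButton.first().click();

  await page.getByLabel('Username').fill('demo-user'); // placeholder creds
  await page.getByLabel('Password').fill('demo-pass');
  await page.getByRole('button', { name: 'Submit' }).click();

  const addToCart = page
    .getByRole('button', { name: 'Add to cart' })
    .or(page.getByTestId('add-to-cart'));
  await addToCart.first().click();

  await expect(page.getByTestId('cart-count')).toHaveText('1');
});
```

Playwright's locator.or() chains the fallbacks, which is one way to express the locator prioritization Guy demoed.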
When the same tests were run against an updated version of the app with UI changes, the AI tester was able to dynamically rewrite the testing flow to adapt, while alerting the human supervisor to review the changes. All of this integrates with existing CI/CD pipelines, with the AI tests committed to source control for auditability. You can even “protect” portions of the test code from AI updates.
The AI testing also enables measuring user experience indirectly. By tracking how much “effort” the AI expends to complete a user flow, you can flag unintuitive interfaces that may need redesign before human testing. You can even “train” the AI tester with your application documentation to behave like an expert user.
Intelligent Recovery and Cross-Platform Support
A big headache in test automation is maintaining tests when the application UI changes. Tal explained how Blinq.io's AI handles this intelligently. When tests fail, the AI analyzes the failure reason and attempts to rewrite the relevant portions of code to adapt to the UI changes. It then re-runs the tests and creates a pull request with the updated code for human review. That means testers can offload the tedious maintenance to AI and focus their efforts on testing new functionality.
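Conceptually, that recovery loop looks something like the sketch below. To be clear, this is NOT Blinq.io's implementation; every function here is a hypothetical, toy stand-in:

```typescript
type TestRun = { passed: boolean; failureReason?: string };

// Hypothetical stand-ins: a real system would shell out to a test runner,
// an LLM, and a Git host respectively.
async function runTests(spec: string): Promise<TestRun> {
  const passed = !spec.includes('#old-selector'); // toy failure condition
  return { passed, failureReason: passed ? undefined : 'locator not found' };
}
async function rewriteForFailure(spec: string, reason: string): Promise<string> {
  console.log(`asking the AI to fix: ${reason}`);
  return spec.replace('#old-selector', '#new-selector'); // toy "AI" rewrite
}
async function openPullRequest(updatedSpec: string): Promise<void> {
  console.log('PR opened for human review:\n' + updatedSpec);
}

async function selfHealingRun(spec: string): Promise<void> {
  let result = await runTests(spec);
  if (result.passed) return; // nothing to heal

  // Analyze the failure and let the AI rewrite the affected code.
  const updated = await rewriteForFailure(spec, result.failureReason ?? 'unknown');
  result = await runTests(updated);

  if (result.passed) {
    // Healed: the updated code still goes to a human via pull request.
    await openPullRequest(updated);
  } else {
    throw new Error(`could not self-heal: ${result.failureReason}`);
  }
}

selfHealingRun('await page.click("#old-selector");').catch(console.error);
```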
Guy also described the AI tester's cross-platform capabilities. The same AI-powered test scenario can adapt itself to run across web, mobile and desktop versions of an app. The AI creates the needed platform-specific interactions under the hood, while the tester just focuses on describing the core business logic of the test.
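Again as a rough sketch with hypothetical names (not Blinq.io's API), the cross-platform pattern boils down to one business-logic test written against an interface, with platform-specific drivers filling in the details:

```typescript
// The test only knows about business actions, not platforms.
interface AppDriver {
  login(user: string, pass: string): Promise<void>;
  addFirstProductToCart(): Promise<void>;
  cartCount(): Promise<number>;
}

// One business-logic test, reusable across web, mobile, and desktop.
async function addToCartTest(app: AppDriver): Promise<void> {
  await app.login('demo-user', 'demo-pass'); // placeholder credentials
  await app.addFirstProductToCart();
  if ((await app.cartCount()) !== 1) throw new Error('expected 1 item in cart');
}

// Trivial in-memory driver so the sketch runs end to end. A web driver
// might wrap Playwright; a mobile driver might wrap Appium instead.
class FakeDriver implements AppDriver {
  private items = 0;
  async login(): Promise<void> {}
  async addFirstProductToCart(): Promise<void> { this.items += 1; }
  async cartCount(): Promise<number> { return this.items; }
}

addToCartTest(new FakeDriver()).then(() => console.log('passed'));
```

Only the driver changes per platform; the test above does not.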
Getting Started with AI Testing
To take advantage of AI testing, Tal and Guy recommend:
- Experiment extensively with AI testing tools to build familiarity with their capabilities and limitations. There's no substitute for hands-on learning.
- Start by having the AI generate a comprehensive smoke test suite for a new feature, then layer on additional edge case scenarios.
- Provide the AI clear, Cucumber-based scenario descriptions. Invest in learning good prompt engineering.
- Have the AI handle the tedious heavy lifting of test creation and maintenance, while focusing human testers on exploring novel scenarios.
The Future of AI Testing
Looking ahead, Guy sees potential for AI testing to expand further into API and visual testing. By ingesting API documentation, AIs could generate intelligent API test suites. And by analyzing application screenshots, AIs could spot visual regressions.
Tal emphasized that we're still in the early stages of the generative AI revolution. As language models grow more sophisticated, the sky's the limit for AI testing. We may see AIs autonomously testing everything from documentation to performance to security. But some form of human-in-the-loop testing will always be needed.
Get Started with Blinq.io
To explore AI testing for yourself, check out Blinq.io. The platform's AI agents can transform Cucumber scenarios into fully functioning test automation code in minutes. It integrates with popular tools like Playwright and offers a free trial to get started. You can also join Blinq's weekly demo webinars to see the AI in action.
Embracing the AI testing revolution will be key to thriving in an era of AI-augmented development. By collaborating effectively with AI agents, human testers can massively boost their efficiency and focus their efforts on high-value exploratory testing. Mastering this emerging paradigm will be a critical skill for testers in the years ahead.