About This Episode:
Are your accessibility tests missing critical issues? A new open-source framework with Selenium + Axe-core might be your fix.
Can AI really make Test-Driven Development 10x faster?
Autonomous testing is heating up. Forrester’s Q3 2025 report reveals the big winners, risks, and disruptors testers need to know about.
Find out in this episode of the Test Guild News Show for the week of Oct 5th. So, grab your favorite cup of coffee or tea, and let's do this.
Exclusive Sponsor
Discover ZAPTEST.AI, the AI-powered platform revolutionizing testing and automation. With Plan Studio, streamline test case management by importing test cases directly from your ALM tool into Plan Studio and leveraging AI to optimize cases into reusable, automation-ready modules. Generate actionable insights instantly with built-in snapshots and reports. Powered by Copilot, ZAPTEST.AI automates script generation, manages object repositories, and eliminates repetitive tasks, enabling teams to focus on strategic goals. Experience risk-free innovation with a 6-month No-Risk Proof of Concept, ensuring measurable ROI before commitment. Simplify, optimize, and automate your testing process with ZAPTEST.AI.
Start your test automation journey today—schedule your demo now! https://testguild.me/ZAPTESTNEWS
Links to News Mentioned in this Episode
| Time | News | Link |
| --- | --- | --- |
| 0:24 | ZAPTEST.AI | https://testguild.me/ZAPTESTNEWS |
| 1:03 | Selenium Axe-Core | https://testguild.me/72t9fx |
| 1:53 | ARIA Notify | https://guildlive.io/s/z9i67Bww |
| 3:33 | Mobile No-Code + AI | https://guildlive.io/s/0xt4p5EB |
| 4:52 | Caesr AI | https://guildlive.io/s/vlcASVIH |
| 5:35 | TDD With AI | https://guildlive.io/s/z0Qc3Cnd |
| 7:06 | Forrester Report | https://guildlive.io/s/POq47b9W |
| 8:47 | DevTools (MCP) | https://guildlive.io/s/f9ssW2In |
News
[00:00:00] Joe Colantonio Are your accessibility tests missing critical issues? A new open source framework with Selenium + Axe Core might be your fix. Can AI really make test-driven development 10x faster? And have you seen Forrester's Q3 2025 report revealing the big winners, risks and disruptors testers need to know about? Find out in this episode of the Test Guild News Show for the week of October 5th. Grab your favorite cup of coffee or tea and let's do this.
[00:00:24] Joe Colantonio Hey, before we get into the news, I want to thank this week's sponsor ZAPTEST.AI, an AI-driven platform that can help you supercharge your automation efforts. It's really cool because their intelligent Copilot generates optimized code snippets, while their Plan Studio can help you effortlessly streamline your test case management. And what's even better is you can experience the power of AI in action with their risk-free six-month proof of concept featuring a dedicated ZAP expert at no upfront cost. Unlock unparalleled efficiency and ROI in your testing process. Don't wait. Schedule your demo now and see how it can help you improve your test automation efforts using the link down below.
[00:01:04] Joe Colantonio First up is all about accessibility testing. I found this on LinkedIn via Saran Kumar, a senior SDET at Greenway Health. He just released an open-source accessibility testing framework on GitHub that integrates Selenium WebDriver with Deque's axe-core library. The project automatically scans websites for accessibility violations and generates multiple report formats for different audiences. When executed, the framework launches Chrome, navigates to the target website, waits for the page to load, runs the accessibility scan, and writes the results to a JSON file. A post-processing utility then converts the JSON file into both CSV and HTML formats. The framework can also be configured to fail builds if violation thresholds are exceeded, and it can be integrated into CI/CD pipelines through Jenkins, GitHub Actions, or GitLab CI, with build artifacts archived for review.
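The scan-then-post-process flow described above is easy to picture in code. Here's a minimal Python sketch of the post-processing and threshold steps, assuming axe-core's standard JSON result shape (a "violations" list of rules, each with the affected nodes); the function names here are mine for illustration, not the framework's:

```python
import csv
import json

def violations_to_csv(results_json_path, csv_path):
    """Flatten an axe-core JSON result into one CSV row per affected node."""
    with open(results_json_path) as f:
        results = json.load(f)
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["rule_id", "impact", "description", "target"])
        for violation in results.get("violations", []):
            for node in violation.get("nodes", []):
                writer.writerow([
                    violation["id"],
                    violation.get("impact"),
                    violation.get("description"),
                    ";".join(node.get("target", [])),
                ])

def scan_passes(results, max_violations=0):
    """Threshold gate for CI: True means the build should NOT fail."""
    return len(results.get("violations", [])) <= max_violations
```

A CI step would run the scan, call `violations_to_csv` to produce the report artifact, and exit non-zero when `scan_passes` returns False so the pipeline fails the build.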
[00:01:54] Joe Colantonio And speaking of accessibility, a new accessibility API has just dropped. What is it? Check it out. Mark Noonan from Cypress has published a guide on testing the new ariaNotify API, which provides a programmatic way to communicate live page updates to assistive technology like screen readers. The API currently has experimental status but is available in the Chrome 141 beta, and it was also accessible earlier this year through a Microsoft Edge origin trial. What's really cool, if you don't know, is that the ariaNotify API lets developers dispatch announcements from any DOM element, or from the document object itself, to communicate user-facing state changes, offering an alternative to the current approach of using ARIA live regions for screen reader announcements. According to the article, this addresses common accessibility gaps where state changes like loading indicators often create blockers for visually impaired users by replacing button text without providing equivalent information to screen readers. The tutorial walks you through building an end-to-end test for a shopping cart button that uses the API to announce two states: an immediate "adding item to cart" message on click, followed by an "item added to cart" confirmation after a simulated two-second server delay. The test demonstrates several Cypress techniques: visiting local HTML files without running a server, using cy.clock() and cy.tick() to control time and avoid waiting for delays, stubbing the ariaNotify method on a specific HTML element to verify it's called with the correct messages, creating Cypress aliases for easy reference to the stubs, and using the native keyboard events introduced in Cypress 15.1.
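The stub-and-verify pattern at the heart of that tutorial is language-agnostic. Here's a rough Python sketch of the same idea using unittest.mock; the ShoppingCart class and its announce hook are invented stand-ins for the page and its ariaNotify method, not the article's actual Cypress code:

```python
from unittest.mock import Mock

class ShoppingCart:
    """Hypothetical stand-in for the page under test: a cart whose
    add_to_cart announces state changes through an injected callable,
    much like a DOM element's ariaNotify method."""

    def __init__(self, announce):
        self.announce = announce  # plays the role of element.ariaNotify

    def add_to_cart(self, complete_server_call):
        self.announce("adding item to cart")   # immediate announcement on click
        complete_server_call()                 # stands in for the 2s delay elapsing
        self.announce("item added to cart")    # confirmation announcement

# The stub records every call, like cy.stub() on ariaNotify.
announce = Mock()
cart = ShoppingCart(announce)
cart.add_to_cart(complete_server_call=lambda: None)

# Verify the stub saw both messages, in order.
messages = [call.args[0] for call in announce.call_args_list]
assert messages == ["adding item to cart", "item added to cart"]
```

Passing `complete_server_call` as a no-op lambda mirrors what cy.clock()/cy.tick() buy you in the real test: the delay is under the test's control, so the assertion never has to wait.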
[00:03:31] Joe Colantonio Alright, next up is the webinar of the week. We're hosting a webinar on mobile QA featuring Autify's no-code platform and its AI-powered capabilities. The session will be led by Ryo, co-founder and CEO of Autify, who has over 10 years of software engineering experience. The webinar addresses mobile testing challenges including device and OS fragmentation, flaky UIs, and high test maintenance overhead. And I know, from speaking to a lot of testers, these issues cause teams to spend more time fixing tests than releasing software, and scaling coverage typically requires substantial effort. Ryo told me he's going to demonstrate Autify's mobile platform, focusing on two features: AI prompt assertions for validation and JavaScript steps for advanced testing scenarios. The session aims to show how these tools handle dynamic UIs, reduce maintenance cycles, and support faster delivery. He also told me attendees will see how QA teams can simplify mobile testing across different devices and operating systems using a no-code approach combined with AI, how AI prompt assertions and JavaScript steps can stabilize automation at scale while covering edge cases, and methods to accelerate release cycles while reducing QA costs. I highly recommend you register now using the special link down below. Hope to see you there.
[00:04:51] Joe Colantonio All right, I just saw a new tool on Product Hunt that I hadn't heard about, and it came my way via LinkedIn from Dominic, who talked about how Caesr AI by AskUI has just dropped. It's a no-code automation platform that goes way beyond web testing, using AI to execute complex test cases across browsers, mobile, and desktop while automatically generating reports with screenshots and markdown documentation. For testers, that means faster, more accurate test execution, less manual drudgery, and automation that works across any interface, not just the web. I highly recommend you check it out on Product Hunt. Think of it as an AI-powered UI agent that can click, verify, document, and report, all without any code. Does it work? Well, try it out for yourself and let me know in the comments down below.
[00:05:34] Joe Colantonio All right. Also on LinkedIn, this next article came my way. It's by Jeff Morgan, a developer with over 25 years of test-driven development experience. He's just published a workflow for using AI to achieve the same outcomes as TDD while following the traditional red-green-refactor cycle. Sounds cool, right? So, hey, let's check it out. If you don't know, Jeff is well known in the space and has taught TDD workshops at multiple conferences worldwide. He initially struggled to integrate AI into a TDD workflow and found that traditional approaches felt unnatural. After months of experimentation with different tools and configurations, Jeff developed a process focused on outcomes rather than strict adherence to TDD practices, but the goal remains the same: producing thoughtful testing and well-factored code. Jeff reports now completing in one hour what previously took a full day, and he has built applications within a day and a half that would normally require one to two weeks without AI. The workflow centers on three main components. First, developers configure their IDE to make critical tasks easily accessible, including running tests, obtaining code coverage information, running static code analysis, and performing security analysis on both code and dependencies. Second, developers educate their large language models by creating an AGENTS.md file at the project root that contains quality and security guidelines. Third, the workflow involves asking the LLM to write a plan that divides the task into small steps and saves it to a file for tracking. This is definitely something I'm going to check out, and I highly recommend you do too, using that link down below.
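To make the second and third components concrete, here's a sketch of what such an AGENTS.md might contain. The article doesn't reproduce Jeff's actual file, so every line below is an invented example of the kind of guidance described, not his real guidelines:

```markdown
# Agent guidelines (hypothetical example)

## Quality
- Write a failing test before implementing any behavior change.
- Keep functions small; run the full test suite and report coverage after each step.
- Run static code analysis before marking a task complete.

## Security
- Run security scans on both the code and its dependencies before committing.
- Never hard-code credentials or secrets.

## Planning
- Before coding, write a plan that divides the task into small steps
  and save it to a plan file so progress can be tracked.
```

The point of the file is that the LLM reads it at the start of every session, so the quality bar travels with the project rather than living in each prompt.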
[00:07:07] Joe Colantonio Next up is the Autonomous Testing Platforms Landscape report. Forrester has released its Q3 2025 Autonomous Testing Platforms Landscape report, authored by Diego and a few other analysts and covering 31 vendors. The report defines autonomous testing platforms as solutions that combine traditional automation with AI and GenAI agents to continuously perform increasingly autonomous testing tasks across functional and non-functional end-to-end tests. According to Forrester, testing risks becoming the bottleneck of the software development lifecycle as organizations adopt AI tools to generate requirements, designs, and code at faster rates. The report identifies three primary business values for testers: accelerating time to value through AI-driven test automation, reducing strategic risk through risk-based testing orchestration and intelligent test scoping, and democratizing testing through no-code interfaces and natural-language test authoring that lets non-technical users contribute to quality efforts. It also goes over how the market dynamics reveal a split picture. The main trend is AI-driven autonomous testing delivering rapid innovation and adaptive automation, while the primary challenge is that buyers struggle to understand AI's actual value versus vendor claims, face skill gaps as testers must evolve into strategic orchestrators, and deal with fragmented toolchains and unclear ROI. The top disruptor identified is agentic AI, which enables self-updating, self-healing, and continuous testing with human orchestration. The report also covers four core use cases and five extended use cases, including testing of AI and GenAI systems themselves.
[00:08:47] Joe Colantonio Looking for ways to help debug your tests? Well, this next announcement may help. It's all about the Chrome DevTools MCP that's been released for AI agents. Chrome has launched a public preview of the Chrome DevTools Model Context Protocol (MCP) server, which connects AI coding assistants directly to Chrome's debugging tools. The release addresses what the team describes as a fundamental problem: AI agents cannot see what their generated code actually does when running in a browser. The Chrome DevTools MCP server implements the protocol to give AI assistants access to Chrome's debugging capabilities and performance insights. The system works by providing tools that LLMs can call, such as performance.trace, which allows an AI agent to launch Chrome, open a website, and record a performance trace that can then be analyzed for improvements. The documentation outlines several testing and debugging scenarios.
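MCP servers are usually registered in an AI client's configuration file. Based on the common MCP convention for npm-distributed servers (check the project's own documentation for the exact invocation, as the server name and package tag here are assumptions), the registration would look roughly like this:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```

Once registered, the assistant discovers the server's tools automatically and can call them, for example to launch Chrome and record a trace, without any further wiring in your prompts.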
[00:09:44] Joe Colantonio Alright, so that's it for this episode of the Test Guild News Show. I'm Joe. My mission is to help you succeed in creating end-to-end, full-stack pipeline automation awesomeness. And as always, test everything and keep the good. Cheers.