About This Episode:
Is your architecture sabotaging your tests?
Playwright users, there's a new shortcut in town. What is it?
AI testing tools are multiplying fast, but what's actually useful versus what's just hype?
Find out in this episode of the Test Guild News Show for the week of November 2. So, grab your favorite cup of coffee or tea, and let's do this.
Exclusive Sponsor
Discover ZAPTEST.AI, the AI-powered platform revolutionizing testing and automation. With Plan Studio, streamline test case management by directly importing from your common ALM component into Plan Studio and leveraging AI to optimize cases into reusable, automation-ready modules. Generate actionable insights instantly with built-in snapshots and reports. Powered by Copilot, ZAPTEST.AI automates script generation, manages object repositories, and eliminates repetitive tasks, enabling teams to focus on strategic goals. Experience risk-free innovation with a 6-month No-Risk Proof of Concept, ensuring measurable ROI before commitment. Simplify, optimize, and automate your testing process with ZAPTEST.AI.
Start your test automation journey today—schedule your demo now! https://testguild.me/ZAPTESTNEWS
Links to News Mentioned in this Episode
| 0:20 | ZAPTEST AI | https://testguild.me/ZAPTESTNEWS |
| 0:59 | Playwright Selectors | https://testguild.me/zay6vv |
| 2:11 | QA Testable | https://testguild.me/irdt5r |
| 3:53 | WebdriverIO Obsidian | https://testguild.me/z7rk0l |
| 4:51 | TestSprite | https://testguild.me/zs96xo |
| 5:59 | Agentic QA | https://testguild.me/ov9efn |
| 7:08 | Microsoft Copilot | https://testguild.me/px8l2n |
| 8:06 | Robonito | https://testguild.me/zy05pa |
| 8:41 | Load Testing If Or Why | https://testguild.me/o0j99p |
News
[00:00:00] Joe Colantonio Is your architecture sabotaging your tests? Playwright users, there's a new shortcut in town. What is it? AI testing tools are multiplying fast, but what's actually useful versus what's just hype? Find out in this episode of the Test Guild News Show for the week of November 2nd. Grab your favorite cup of coffee or tea and let's do this.
[00:00:20] Hey, before we get into the news, I want to thank this week's sponsor, ZAPTEST.AI, an AI-driven platform that can help you supercharge your automation efforts. It's really cool because their intelligent Copilot generates optimized code snippets, while their Plan Studio can help you effortlessly streamline your test case management. And what's even better is you can experience the power of AI in action with their risk-free six-month proof of concept, featuring a dedicated ZAP expert at no upfront cost. Unlock unparalleled efficiency and ROI in your testing process. Don't wait. Schedule your demo now and see how it can help you improve your test automation efforts using the link down below.
[00:01:00] All right, let's kick things off with something that could be useful right now if you're working with Playwright. And this is by the one and only Sanjay Kumar, founder and creator of SelectorsHub, which everyone should be using. He just announced that Playwright Selectors is now live in SelectorsHub version 5.4.9. This Chrome and Edge extension now auto-generates and verifies Playwright selectors with just one click, bringing the same ease already available for things like XPath and CSS. And if you don't know, SelectorsHub is a browser extension that helps testers and developers instantly create and validate Playwright locators, XPath, and CSS selectors directly inside Chrome DevTools. It also supports things like iframes, shadow DOMs, SVGs, dropdowns, and dynamic UI elements, while auto-suggesting attributes and text values to speed up selector creation. It also highlights matched elements, shows errors, and includes features like XPath healing and a locator page for bulk verification and locator generation. To use it, all you need to do is open Chrome DevTools, select the SelectorsHub tab or icon, inspect an element, and click the Playwright Selectors button. Smart suggestions and auto-verification simplify editing and testing selectors in real time.
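To give a feel for the workflow, here's a minimal sketch of dropping a generated locator into a Playwright test. The URL and selector below are hypothetical placeholders, not actual output from the extension:

```typescript
// A minimal sketch of using a SelectorsHub-generated locator in a Playwright
// test. The URL and selector are hypothetical placeholders.
import { test, expect } from '@playwright/test';

test('sign in with a verified locator', async ({ page }) => {
  await page.goto('https://example.com/login');

  // Paste the selector that SelectorsHub generated and verified for you:
  const signIn = page.locator('button:has-text("Sign in")');

  await expect(signIn).toBeVisible();
  await signIn.click();
});
```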
[00:02:11] Now, if you've ever wondered why a test is slow or flaky despite your best efforts, this next story might change how you think about the problem. This is a new blog post on Medium by Mona, who argues that testability is a core architectural property that determines both testing effort and software quality. Rather than focusing only on writing tests, the article stresses designing systems that are easy to test, observe, and control. Testability is defined as how easily a system supports creating and running effective tests. Low testability doesn't always mean poor quality, but it slows feedback, increases effort, and hides defects. High testability, on the other hand, delivers faster feedback and lower risk. And Mona breaks testability down into three main pillars. The first one is observability: seeing what happens inside the system through logging, tracing (like OpenTelemetry), metrics, and health endpoints. Second is controllability: being able to set the system into known states via APIs, dependency injection, feature flags, or test hooks. And third is isolation: ensuring tests run independently through mocking, in-memory databases, containerized environments, and stateless designs. She also goes over eight architectural anti-patterns that reduce testability, like global state and hidden dependencies, tightly coupled services, non-deterministic behavior, weak logging, shared databases, un-mockable singletons, and external systems without fallbacks. And to measure testability, the article suggests tracking test duration, setup complexity, flakiness, coverage, and time to diagnose failures. Finally, Mona recommends treating testability as a first-class design concern, adding to architecture reviews a checklist that covers observability, controllability, mockability, isolation, and cleanup mechanisms. And you can read more about it using the links down below.
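To make the controllability pillar concrete, here's a short sketch of my own (not from Mona's article) showing how injecting a dependency, in this case a clock, lets tests put the system into a known state:

```typescript
// A minimal sketch of controllability via dependency injection. All names
// here are hypothetical illustrations, not from the article.

interface Clock {
  now(): Date;
}

class SubscriptionService {
  // The clock is injected rather than global, so tests control "time".
  constructor(private clock: Clock) {}

  isExpired(expiresAt: Date): boolean {
    return this.clock.now() > expiresAt;
  }
}

// Production wiring uses the real clock...
const realClock: Clock = { now: () => new Date() };

// ...while a test pins time to a deterministic value: no shared state,
// no flakiness from wall-clock time.
const frozenClock: Clock = { now: () => new Date('2025-01-01T00:00:00Z') };
const service = new SubscriptionService(frozenClock);
console.log(service.isExpired(new Date('2024-12-31T23:59:59Z'))); // true
```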
[00:03:53] Joe Colantonio All right, staying on the technical side, here's a detailed walkthrough for anyone testing Electron apps or plugins who thought end-to-end testing wasn't possible in their environment. This is by Kirill, who just released a detailed guide on setting up end-to-end testing for Obsidian plugins using WebdriverIO. He highlights a major testing gap: out of more than 2,600 public Obsidian plugin repositories, only a handful include end-to-end tests. Obsidian is built on Electron and Chromium, which poses a challenge because developers can't access its source code. Older Electron documentation recommended Spectron, which is now deprecated, while current guidance points to WebdriverIO with its community-maintained Electron service as the successor. The guide explains how to create a dedicated end-to-end test project, configure WebdriverIO to launch Obsidian, build a clean test vault for each run, automate plugin activation, interact with UI elements through page objects, and handle non-unique selectors.
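For flavor, here's what the page-object piece of such a setup might look like, using only standard WebdriverIO APIs. The class name and selectors are hypothetical stand-ins, not taken from Kirill's guide:

```typescript
// commandPalette.page.ts — a minimal sketch of the page-object pattern the
// guide describes. The CSS classes are hypothetical stand-ins for
// Obsidian's actual DOM.
class CommandPalettePage {
  get input() {
    return $('.prompt-input');
  }

  get results() {
    return $$('.suggestion-item');
  }

  // Non-unique selectors: several .suggestion-item elements match, so we
  // disambiguate by text instead of relying on a single unique locator.
  async run(command: string): Promise<void> {
    await this.input.setValue(command);
    for (const item of await this.results) {
      if ((await item.getText()).includes(command)) {
        await item.click();
        return;
      }
    }
    throw new Error(`Command not found in palette: ${command}`);
  }
}

export default new CommandPalettePage();
```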
[00:04:51] Joe Colantonio Next up is a follow-the-money segment. TestSprite, a Seattle-based startup, has raised $6.7 million in seed funding for its AI-automated code testing and validation platform. TestSprite addresses what the company identifies as a bottleneck in AI-powered development: while coding tools like Cursor, Windsurf, and GitHub Copilot have accelerated code writing, testing and validation have not kept pace. Well, that all changes because TestSprite uses an autonomous AI agent that integrates directly into AI-enabled integrated development environments and multi-cloud platforms. The platform automatically generates, runs, and updates tests for both front-end and back-end code without manual input. According to the company, it identifies issues and proposes fixes with explanations, reducing testing time from days to minutes. The founder states that writing code is no longer the difficult part and that the real challenge is ensuring code behaves as intended. He also describes TestSprite as an autopilot layer that converts AI-written code into production-ready software without the manual testing that typically slows teams down. Definitely a unique solution you should check out yourself using the link down below.
[00:05:57] Joe Colantonio Let me know your thoughts. And TestSprite isn't the only one betting on AI-powered testing. I hear about multiple vendors doing this, but EPAM just launched something called Agentic QA, and it's worth understanding what agentic actually means in this context. Let's check it out. So EPAM Systems has launched Agentic QA, an AI-native testing solution designed to address challenges in accelerating software development cycles. According to EPAM, traditional testing approaches, both manual and automated, are struggling to keep pace with modern development demands. Organizations frequently encounter production issues due to insufficient testing time as development cycles compress. Agentic QA is part of EPAM's AI/RUN tools for its testing-as-a-service platform. The solution introduces what the company calls adaptive regression testing, which combines AI and subject-matter expertise, along with what they term a vibe testing experience. The system is designed to dynamically adapt to user interface changes, navigate complex user paths, and test both functional and non-functional requirements in real time without requiring scripts. And Adam Auerbach, the vice president of DevTestSecOps practices at EPAM, draws on more than a decade of crowd-testing experience and aims to provide clients with faster, more adaptive testing capabilities.
[00:07:08] Joe Colantonio And as more teams build AI agents, a new challenge emerges: how do you test them? Well, Microsoft just released a solution built right into Copilot Studio. Microsoft has announced the public preview of agent evaluation in Copilot Studio, a new built-in feature that automates testing for AI agents. It tackles a key challenge in AI development: manual, time-consuming testing that doesn't scale. Agent evaluation enables structured, automated tests directly inside Copilot Studio. Makers can create evaluation sets, choose test methods, define success metrics, and assess performance across different agentic models, all in one place. Users can upload or reuse test sets, add custom questions, or let AI generate queries from the agent's metadata and knowledge sources. Tests can check for exact or partial matches, intent accuracy, and semantic relevance, and success criteria can be tuned to either strict keyword alignment or meaning-based evaluations. Really cool stuff. You can check out more about it down below.
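If the distinction between those test methods is fuzzy, here's a conceptual sketch. To be clear, this is not Copilot Studio's API, just an illustration of strict keyword alignment versus meaning-based evaluation, with a crude token-overlap function standing in for a real semantic scorer:

```typescript
// Conceptual sketch of exact, partial, and semantic evaluation styles.
// NOT Copilot Studio's API; all names here are hypothetical.
type EvalCase = { query: string; expected: string };

function exactMatch(actual: string, expected: string): boolean {
  return actual.trim().toLowerCase() === expected.trim().toLowerCase();
}

function partialMatch(actual: string, expected: string): boolean {
  return actual.toLowerCase().includes(expected.toLowerCase());
}

// Crude stand-in for semantic relevance: fraction of shared tokens.
// A real harness would call an embedding or LLM-based scorer instead.
function semanticScore(a: string, b: string): number {
  const tokens = (s: string) =>
    new Set(s.toLowerCase().split(/\W+/).filter(Boolean));
  const ta = tokens(a);
  const tb = tokens(b);
  const overlap = [...ta].filter((t) => tb.has(t)).length;
  return overlap / Math.max(ta.size, tb.size, 1);
}

// Strict mode demands keyword alignment; loose mode accepts semantic matches.
function passes(c: EvalCase, actual: string, strict: boolean): boolean {
  if (exactMatch(actual, c.expected) || partialMatch(actual, c.expected)) {
    return true;
  }
  return !strict && semanticScore(actual, c.expected) >= 0.8;
}

const testCase = { query: 'What are your hours?', expected: 'open 9am to 5pm' };
console.log(passes(testCase, 'We are open 9am to 5pm daily', false)); // true
```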
[00:08:06] Joe Colantonio As you know, not every team has dedicated performance engineers, which is why no-code tools like Robonito have gained traction. What is it? So Kailash introduces Robonito, an AI-powered no-code tool designed to simplify API performance testing. He presents it as a solution for QA teams frustrated by the scripting and setup complexity of traditional performance testing frameworks. Robonito lets testers simulate real-world API traffic and track metrics like response time, latency, throughput, and error rates. Testers simply set virtual users and duration through the interface, eliminating the need for scripts or agents.
[00:08:41] Joe Colantonio And speaking of performance, this new article by Lars and Rebecca argues that effective performance engineering requires both load testing and observability. In their latest article, they say load testing answers whether systems meet performance goals, while observability explains why they do or don't. They define performance engineering as a blend of testing, monitoring, analysis, tuning, and capacity planning. Load testing scripts must account for testing levels, realistic user behaviors, API payloads, and various load types like scalability, stress, longevity, and spike, while observability platforms collect telemetry, like real-time metrics, traces, logs, and service maps, to reveal a system's hidden behavior under load. For example, a load test may show whether an API can handle 300 transactions per second, but observability pinpoints whether slowdowns stem from code, infrastructure, or Kubernetes saturation. The authors also highlight observability's role in root-cause analysis, AI-assisted diagnostics, and feedback loops for tuning performance.
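To make that 300-transactions-per-second example concrete, here's a minimal load-test sketch. k6 is my choice of illustration here, not a tool the authors prescribe, and the endpoint URL is a placeholder:

```typescript
// load-test.ts — a minimal k6 sketch of the 300 TPS scenario above.
// k6 runs TypeScript files directly in recent versions; the endpoint
// is a hypothetical placeholder.
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  scenarios: {
    steady_tps: {
      executor: 'constant-arrival-rate', // fixed request rate, not fixed VUs
      rate: 300,            // 300 iterations (transactions)...
      timeUnit: '1s',       // ...per second
      duration: '5m',
      preAllocatedVUs: 100, // VU pool k6 draws from to sustain the rate
      maxVUs: 500,
    },
  },
  thresholds: {
    // The "if": does the system meet its goal? 95% of calls under 500 ms.
    http_req_duration: ['p(95)<500'],
  },
};

export default function () {
  const res = http.get('https://api.example.com/orders');
  check(res, { 'status is 200': (r) => r.status === 200 });
  // The "why" comes from observability: correlate any failures here with
  // traces, logs, and service maps on the platform side.
}
```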
[00:09:41] Joe Colantonio All right. For links to everything of value we covered in this news episode, head on over to those links in the comments down below. That's it for this episode of the Test Guild News Show. I'm Joe. My mission is to help you succeed in creating end-to-end full stack pipeline automation awesomeness. As always, test everything and keep the good. Cheers.