Agent2Agent, MCP Accessibility, Kubernetes Performance and More TGNS156

By Test Guild

About This Episode:

Have you seen the latest AI-powered test automation platform?

What is Agent2Agent, and how does it relate to MCP?

How does SEGA use automation testing?

Find out in this episode of the Test Guild News Show for the week of May 4. So, grab your favorite cup of coffee or tea, and let's do this.

Links to News Mentioned in this Episode

0:18 ZAPTEST AI https://testguild.me/ZAPTESTNEWS
0:57 Autify 3-Month Free https://testguild.me/a5n62h
2:14 mcp-axe https://testguild.me/5x9jvz
3:21 SEGA automation https://testguild.me/bbvqg2
4:47 Google A2A https://testguild.me/naktl2
6:37 Kubernetes Performance https://testguild.me/p5lq05
7:40 MCP Security Testing https://testguild.me/8amonh
8:08 BrightStar https://testguild.me/4jxlag

News

[00:00:00] Joe Colantonio Have you seen the latest AI powered test automation platform? What is the agent2agent protocol and how does it relate to MCP? And how does Sega use automation testing? Find out in this episode of the Test Guild News Show for the week of May 4th, so grab your favorite cup of coffee or tea and let's do this.

[00:00:18] Joe Colantonio Hey, before we get into the news, I want to thank this week's sponsor, ZAPTEST AI, an AI-driven platform that can help you supercharge your automation efforts. It's really cool because their intelligent copilot generates optimized code snippets, while their Plan Studio can help you effortlessly streamline your test case management. And what's even better is you can experience the power of AI in action with their risk-free six-month proof of concept, featuring a dedicated ZAP expert at no upfront cost. Unlock unparalleled efficiency and ROI in your testing process. Don't wait. Schedule your demo now and see how it can help you improve your test automation efforts using the link down below.

[00:00:57] Joe Colantonio As you know, I'm always on the lookout for new tools. So here's an example of a new one I just heard about: Autify has released Nexus, a generative AI-powered test automation platform designed to help QA teams and developers create and manage automated tests faster and with more flexibility. Built natively on Playwright, Nexus allows users to generate test cases directly from plain English and product requirements using natural language processing. The platform also supports both low-code and full-code workflows, aiming to bridge the gap between manual testers and developers. It also offers options for test execution locally, in the cloud, or on-premise, giving teams full control over their infrastructure and security needs. And what's really great is that Nexus sets itself apart by offering exportable Playwright scripts, real-time test generation from requirements via a feature called Genesis AI, and a modular architecture that supports integration with CI/CD pipelines. They say early-access users already report saving time in test creation, improving coverage, and reducing maintenance. As I always say, seeing is believing. Definitely check out their free three-month trial and put AI-powered testing to the test for yourself, using the special link down below.

[00:02:14] Joe Colantonio I was just thinking to myself recently that I hadn't heard a lot from Manoj Kumar lately, but then I saw this announcement on LinkedIn from him. If you don't know, Manoj Kumar is an open source contributor to a bunch of different test automation tools. He's also an accessibility advocate, and he just announced the release of a new open source MCP plugin designed to support accessibility audits within agentic AI workflows. This plugin allows AI agents, including tools like Claude and Cursor, to perform audits on web URLs, HTML pages, or raw JSON outputs, returning summarized accessibility violations in plain text. It's built on top of Selenium and leverages the axe-core engine. The plugin highlights how MCP can act as a bridge between traditional testing tools and AI-driven development environments. Also on LinkedIn, Manoj mentioned he developed the plugin over Easter break to enhance his hands-on understanding of agentic systems and contribute a practical solution for integrating accessibility into modern AI-powered UI development and testing workflows. And even better, the plugin is now publicly available on GitHub as well, and you can find it using the link below.
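To make the "summarized accessibility violations in plain text" idea concrete, here's a minimal sketch of that kind of post-processing step. The payload below is a hypothetical, heavily trimmed example following axe-core's result schema (a `violations` array with `id`, `impact`, `description`, and affected `nodes`); the `summarize_violations` helper is an illustration, not the plugin's actual code.

```python
import json

# Hypothetical axe-core result payload, trimmed to just the fields the
# summary below relies on (the real engine returns many more).
raw_results = json.dumps({
    "violations": [
        {
            "id": "image-alt",
            "impact": "critical",
            "description": "Images must have alternate text",
            "nodes": [{"target": ["img.logo"]}, {"target": ["img.banner"]}],
        },
        {
            "id": "color-contrast",
            "impact": "serious",
            "description": "Elements must have sufficient color contrast",
            "nodes": [{"target": ["p.footer"]}],
        },
    ]
})

def summarize_violations(axe_json: str) -> str:
    """Turn an axe-core JSON result into a plain-text summary,
    one line per violation, ordered by number of affected elements."""
    violations = json.loads(axe_json).get("violations", [])
    violations.sort(key=lambda v: len(v["nodes"]), reverse=True)
    lines = [f"{len(violations)} accessibility violation(s) found"]
    for v in violations:
        lines.append(
            f"- [{v['impact']}] {v['id']}: {v['description']} "
            f"({len(v['nodes'])} element(s))"
        )
    return "\n".join(lines)

print(summarize_violations(raw_results))
```

A plain-text digest like this is exactly the shape that plays well with an agent loop: the model gets a compact, human-readable report instead of a deeply nested JSON blob.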

[00:03:21] Joe Colantonio All right. I'm also always on the lookout for real use cases of how people succeed with test automation, and here's an interview I just found on how RGG Studio, the developers behind the Like a Dragon series, has revealed that its rapid game release cycle is partially due to streamlining testing and debugging processes that begin early in development. They also shared how their internal QA team is deeply embedded in the development pipeline from the outset, allowing bugs to be detected and addressed almost in real time. Rather than waiting for complete builds, the interview goes through how RGG Studio integrates daily checks and constant playthroughs, enabling immediate identification of issues. Developers often work side by side with QA engineers, allowing for direct communication and quicker iteration. This tight feedback loop also reduces the need for lengthy post-production testing phases and supports the studio's ability to release major titles annually. They also go over how tools that capture gameplay sessions and detailed logs have allowed RGG to trace bugs precisely, significantly improving turnaround times on fixes. And they point to the studio's experience with reused assets and its familiarity with internal engines as factors that support fast development without compromising quality.

[00:04:38] Joe Colantonio Hopefully by now, software testers are already using an embedded QA process. But if not, here's a great interview to check out for more on how it's done successfully in the real world.

[00:04:48] Joe Colantonio All right. So this next post was actually released three weeks ago. I'm not sure how I missed it, because to me it is mind-blowing. What is it? Let's check it out. This is all about how Google has released a new open source protocol called Agent2Agent, designed to enable direct communication between AI agents. And this closely aligns with the Model Context Protocol developments we've been covering over the past few months. This protocol establishes a standardized way for agents to collaborate by exchanging instruction messages, sharing goals, and negotiating tasks. It includes a conversation protocol built on JSON and a flexible API that allows agents built on different platforms, or even different architectures, to coordinate with minimal integration overhead. Google positions this release as a step towards more robust agentic ecosystems, where multiple specialized AIs can dynamically collaborate to solve problems across applications such as software development, customer support, or automation testing. And this is wild, because MCP marks a shift in how AI interacts with tools: not through pre-written code or APIs, but through context-driven autonomous reasoning. Instead of explicitly programming how to use tools, you can now describe the goal and the AI figures out how to orchestrate the tools needed to accomplish it. With A2A, Google is now enabling multiple AI agents to collaborate directly on complex tasks. So the emphasis is moving beyond tool usage to AI teaming: agents negotiating, planning, and solving problems together without step-by-step human instructions. In short, we're entering an era of autonomous software that doesn't just follow commands, but reasons about what to do and how to do it. I think this is another development you need to keep an eye on, so you can take advantage of how quickly you can increase automation using these new protocols being released.
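To give a feel for that "conversation protocol built on JSON," here's a minimal sketch of what an A2A task request can look like. The JSON-RPC 2.0 envelope is what the A2A draft spec builds on; the `tasks/send` method name and the role/parts message shape are assumptions based on that spec, so treat the exact field names as illustrative.

```python
import json
import uuid

# A minimal sketch of an A2A task request, wrapped in a JSON-RPC 2.0
# envelope. The "tasks/send" method and message shape are assumptions
# based on the published A2A draft spec.
def make_task_request(text: str) -> str:
    request = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),          # request correlation id
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),      # task id, reused across turns
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            },
        },
    }
    return json.dumps(request)

payload = make_task_request("Run the smoke-test suite and report failures")
print(payload)
```

The key design point is that everything is plain JSON over HTTP, so an agent written in any language on any platform can generate or consume these messages with nothing more than a JSON library, which is what keeps the integration overhead minimal.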

[00:06:38] Joe Colantonio Alright, in performance testing news, I found a new series that just started, all about Kubernetes performance testing. This LinkedIn article by Bruno delves into performance testing for Java applications deployed on Kubernetes platforms. The discussion addresses specific strategies and methodologies to optimize resource utilization, enhance response times, and ensure scalable application performance. Some of the significant points Bruno makes are optimizing Java Virtual Machine settings, leveraging Kubernetes configurations like scaling policies and resource requests, and integrating monitoring tools to identify bottlenecks. He also emphasizes the importance of understanding the underlying infrastructure to achieve effective tuning. The article provides detailed, actionable advice on adjusting heap size, garbage collection settings, and autoscaling configurations in Kubernetes workloads to improve your application's efficiency. Mastering these adjustments and maintaining monitoring systems are crucial to ensuring optimal application performance in your cloud-native environments.
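One concrete example of the heap-size-versus-resource-limits advice is the arithmetic behind the JVM's `-XX:MaxRAMPercentage` flag: the heap is sized as a percentage of the container's memory limit, with the remainder left for off-heap use. The helper below is a rough sketch of that calculation, not Bruno's code, and the percentages are illustrative assumptions (75% mirrors a common JVM default).

```python
# A rough sketch of the heap-sizing arithmetic behind the JVM's
# -XX:MaxRAMPercentage flag: given a container memory limit, reserve
# a slice for off-heap use (metaspace, thread stacks, direct buffers)
# and hand the rest to the heap. Percentages are illustrative.
def max_heap_mib(container_limit_mib: int, ram_percentage: float = 75.0) -> int:
    """Return the max heap (MiB) for a given container memory limit."""
    if not 0 < ram_percentage <= 100:
        raise ValueError("ram_percentage must be in (0, 100]")
    return int(container_limit_mib * ram_percentage / 100)

# e.g. a pod with a 2 GiB memory limit:
print(max_heap_mib(2048))        # 1536 MiB heap, 512 MiB of headroom
print(max_heap_mib(2048, 50.0))  # a more conservative 50% setting
```

The practical takeaway from this kind of sizing exercise is that the heap must always leave headroom under the Kubernetes memory limit; a heap sized right up to the limit gets the pod OOM-killed before the JVM ever throws an `OutOfMemoryError`.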

[00:07:40] Joe Colantonio Also, as I was looking for news articles on LinkedIn, I came across one that caught my attention, and I think it highlights a significant area more testers need to look at. What is it? Well, this is by Den, who focuses on MCP security, and he points to a series of security best practices designed to enhance software testing protocols. The document serves as a resource aimed at equipping software testing teams with strategies to mitigate potential vulnerabilities early in the development cycle. As I was researching topics around automation and security testing on LinkedIn, I also came across an announcement about a new tool to help with automated security testing. What is it? Check it out. At the RSA Conference in San Francisco, Bright Security introduced BrightStar, an AI-driven platform designed to automate application security testing and remediation. I had never heard of this tool before, so I did a little digging and found out that the system integrates into continuous integration pipelines and developer workflows, and it aims to identify and address vulnerabilities in both AI-generated and traditional code bases. Some of the features include automated test generation, code fix suggestions, and validation of applied fixes, with the goal of reducing manual intervention in security processes.

[00:08:52] Joe Colantonio Just another trend I've been seeing more and more of: tools coming out to automate and test the AI aspects of security, and definitely something you should check out. All right, so that's it for this episode of the Test Guild News Show. I'm Joe, and my mission is to help you succeed in creating end-to-end full stack pipeline automation awesomeness. As always, test everything and keep the good. Cheers.
