About This Episode:
Are your API tests really covering what they should, or are you flying blind?
Could this new AI GitHub reviewer stop bad code from slipping through your pipeline?
And where does your team sit on the Automation Maturity Pyramid?
Find out in this episode of the Test Guild News Show for the week of Sep 21st.
So, grab your favorite cup of coffee or tea and let's do this.
Links to News Mentioned in this Episode
Time | Item | Link |
0:20 | ZAPTEST AI | https://testguild.me/ZAPTESTNEWS |
1:00 | swagger-coverage-cli | https://testguild.me/cd0a8a |
2:09 | AI in QA Webinar | https://testguild.me/jm0irg |
3:23 | Pull Request reviewer | https://testguild.me/amn1eu |
4:32 | Automation Maturity Pyramid | https://testguild.me/66lwsr |
5:24 | Call For Speakers | https://testguild.me/agspeak |
5:58 | k6 Operator | https://testguild.me/mtjd4n |
7:00 | AltTester 2.2.6 | https://testguild.me/kjmsen |
7:51 | Blacksmith Software 10M | https://testguild.me/qwx143 |
[00:00:00] Joe Colantonio Are your API tests really covering what they should, or are you just flying blind? Could this new AI GitHub reviewer stop bad code from slipping through your pipeline? And where does your team sit on the Automation Maturity Pyramid? Find out in this episode of The Test Guild News Show for the week of September 21st. So grab your favorite cup of coffee or tea and let's do this.
[00:00:20] Joe Colantonio Hey, before we get into the news, I want to thank this week's sponsor, ZAPTEST AI, an AI-driven platform that can help you supercharge your automation efforts. It's really cool because their intelligent copilot generates optimized code snippets, while their Plan Studio can help you effortlessly streamline your test case management. And what's even better is you can experience the power of AI in action with their risk-free six-month proof of concept featuring a dedicated ZAP expert at no upfront cost. Unlock unparalleled efficiency and ROI in your testing process. Don't wait. Schedule your demo now and see how it can help you improve your test automation efforts using the link down below.
[00:00:59] Joe Colantonio All right, let's kick things off with a new tool that could save your team from hours of blind spots in API testing. Check it out. Alex reached out to me on LinkedIn to let me know all about this new tool called Swagger Coverage CLI, which has been released to help teams measure API testing coverage across multiple protocols. This command-line tool analyzes how much of an API specification is actually exercised by Postman collections or Newman execution reports. It also supports OpenAPI/Swagger for REST, Protocol Buffers for gRPC, GraphQL schemas, and even CSV-based API documentation, making it wicked useful for organizations running diverse microservices. The CLI produces a unified HTML report that breaks down coverage by protocol and highlights unmatched endpoints, and it features smart endpoint mapping, strict parameter and body validation, and detailed per-API or combined coverage percentages. And testers can run it against multiple API specifications in one pass, which makes it really valuable when managing a large number of services. So if you're doing anything with APIs, it's definitely something worthwhile to check out down below. Let me know your thoughts.
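To picture what a coverage check like this computes under the hood, here's a minimal Python sketch of the core idea: diff the endpoints declared in an OpenAPI spec against the requests a Postman collection actually makes. This is just an illustration of the concept, not the tool's actual code, and the file names and matching rules here are assumptions.

```python
# Illustrative sketch only -- not swagger-coverage-cli's actual code.
import json
import re

def spec_endpoints(spec_file):
    """Collect (METHOD, path) pairs declared in an OpenAPI/Swagger JSON spec."""
    with open(spec_file) as f:
        spec = json.load(f)
    endpoints = set()
    for path, operations in spec.get("paths", {}).items():
        for method in operations:
            if method.lower() in {"get", "post", "put", "patch", "delete"}:
                endpoints.add((method.upper(), path))
    return endpoints

def collection_requests(collection_file):
    """Collect (METHOD, path) pairs a Postman collection actually exercises."""
    with open(collection_file) as f:
        collection = json.load(f)
    hits = set()

    def walk(items):
        for item in items:
            if "item" in item:                     # folders nest recursively
                walk(item["item"])
            elif "request" in item:
                req = item["request"]
                raw = req["url"]["raw"] if isinstance(req["url"], dict) else req["url"]
                path = "/" + "/".join(raw.split("://")[-1].split("?")[0].split("/")[1:])
                # Treat Postman variables like {{id}} as OpenAPI-style {id} params
                hits.add((req["method"].upper(), re.sub(r"\{\{(\w+)\}\}", r"{\1}", path)))

    walk(collection.get("item", []))
    return hits

declared = spec_endpoints("openapi.json")          # hypothetical input files
covered = declared & collection_requests("collection.json")
print(f"coverage: {len(covered)}/{len(declared)}")
for method, path in sorted(declared - covered):
    print(f"unmatched: {method} {path}")
```

The real tool layers smarter endpoint mapping, parameter and body validation, and multi-protocol support on top of this basic spec-versus-traffic diff.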
[00:02:10] Joe Colantonio Of course, AI is everywhere. But how do you separate hype from real value? And that's exactly what our upcoming webinar is tackling in this week's webinar of the week. Let's check it out. We're hosting this webinar along with ConformIQ, and it focuses on addressing a bunch of misconceptions about AI and quality assurance. We're going to examine the gap between AI expectations and practical implementations in software testing. We designed this webinar for organizations that are adopting AI for quality assurance without clear long-term strategies. The content emphasizes that there are no shortcuts to high-quality software, and it promotes what ConformIQ describes as a partnership approach between humans and AI in testing environments. You definitely want to attend to learn how to use AI as a co-pilot to accelerate test creation, increase test coverage, and reduce maintenance requirements. The webinar is going to be presented by Mark Creamer, the president and CEO of ConformIQ, a company that focuses on improving software quality and processes. He told me he designed the session for software testing professionals who really want to understand practical AI applications beyond the current market hype. So you don't want to miss it. Definitely register now using that link down below, and I hope to see you there.
[00:03:23] Joe Colantonio Speaking of coverage gaps, you don't want to miss this next article. Raj told me he just released a new GitHub pull request reviewer called, I don't know how to say it, GHPRAI. And he says the tool addresses a common development problem where manual test coverage reviews miss critical gaps, leading to production failures even when a pull request appears clean and passes continuous integration checks. Raj mentioned he built this to automate code analysis and test generation using local artificial intelligence models. The tool integrates directly with GitHub repositories through webhooks and uses Ollama to run local large language models for code analysis. When a pull request is created or updated, the system automatically clones the repository, analyzes changed files for complexity and security risk, and generates comprehensive test cases. The tool creates a separate branch containing the AI-generated tests and posts detailed analysis reports as pull request comments. It also generates detailed reports, including code quality scores, complexity assessments, security findings, and specific improvement recommendations, alongside the automated test generation.
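To give a rough feel for the flow Raj describes, here's a hedged Python sketch of a webhook handler that pulls a pull request's diff and sends it to a local model through Ollama's HTTP API. This is not GHPRAI's code: the route, model choice, and prompt are all illustrative, and a real tool would go on to push a test branch and post the analysis as a PR comment.

```python
# A rough sketch of the webhook -> local LLM flow -- not GHPRAI's actual code.
import subprocess
import tempfile

import requests                      # pip install flask requests
from flask import Flask, request

app = Flask(__name__)
OLLAMA_URL = "http://localhost:11434/api/generate"   # Ollama's local HTTP API

@app.route("/webhook", methods=["POST"])
def on_pull_request():
    event = request.get_json()
    if event.get("action") not in {"opened", "synchronize"}:
        return "ignored", 200

    repo = event["repository"]["clone_url"]
    sha = event["pull_request"]["head"]["sha"]

    # Clone and grab the PR head's diff (assumes the branch lives in this repo)
    workdir = tempfile.mkdtemp()
    subprocess.run(["git", "clone", repo, workdir], check=True)
    diff = subprocess.run(["git", "-C", workdir, "show", sha],
                          capture_output=True, text=True).stdout

    # Ask a locally running model to review the change and propose tests
    resp = requests.post(OLLAMA_URL, json={
        "model": "codellama",        # model choice is an assumption
        "prompt": f"Review this diff for complexity, security risk, "
                  f"and missing tests:\n{diff}",
        "stream": False,
    })
    analysis = resp.json()["response"]

    # A real tool would push a test branch and post this as a PR comment instead
    print(analysis)
    return "ok", 200

if __name__ == "__main__":
    app.run(port=8080)
```

Running everything through a local model is the interesting design choice here: nothing in the diff leaves your machine, which matters for teams that can't send proprietary code to hosted AI services.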
[00:04:32] Joe Colantonio Okay, new tools are great, but where do you actually stand in your automation journey? Well, a new framework tries to answer that. David Ingram has introduced what he's calling the Automation Maturity Pyramid, which is a framework for measuring the effectiveness and impact of test automation suites. It goes over how the pyramid consists of four progressive phases that build upon each other. Phase one focuses on confidence, which establishes trust in test results by eliminating flakiness, ensuring environment stability, and creating strong data strategies. Phase two addresses short-term impact, making automation immediately useful by implementing CI/CD strategies that provide fast feedback. Phase three is all about speed of development, optimizing for efficiency as suites grow. And the final phase is long-term impact, which focuses on sustainability through metrics-driven improvements. Really cool article. Definitely check out more down below. Let me know your thoughts.
[00:05:25] Joe Colantonio And if you have insights of your own that you want to share, here's a chance to join the stage at our 10th annual Automation Guild happening in February 2026. I think I mentioned this last week, but I want to remind you that we have officially opened our call for speakers for our 10th annual event. We poll our guild every year to find out what they're struggling with, and based on that information, we have a bunch of topics they tell me they want to learn more about. So if you're an expert in any of the topics I've listed on the call for speakers page, we'd love to hear from you. Definitely submit your idea if you haven't already. I'd love to see it. And you can find that once again down below.
[00:05:58] Joe Colantonio If performance at scale is a pain point for you, especially on Kubernetes, then you don't want to miss this next article. It's about how the k6 Operator has officially reached 1.0 status. In this announcement, they go over how an experiment by k6 developer advocate Simon in 2020 has evolved into an open source project that simplifies running and scaling distributed performance tests directly on Kubernetes clusters. The k6 Operator functions as a Kubernetes operator that uses custom resource definitions to manage test execution. It defines two main CRDs: TestRun, which declaratively runs k6 tests in Kubernetes and represents the simplest way to run distributed k6 tests with open source tooling, and PrivateLoadZone, which registers a private load zone that allows Grafana Cloud k6 to execute tests inside Kubernetes clusters using a simple k6 cloud run command. And this version also introduced a bunch of improved versioning practices. If you're a performance tester, it's definitely something to check out.
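For reference, a TestRun resource looks roughly like the manifest below, based on the operator's documented examples; the resource name, parallelism value, and ConfigMap name are illustrative:

```yaml
# Roughly what a TestRun resource looks like (values here are illustrative)
apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: checkout-load-test
spec:
  parallelism: 4              # fan the test out across 4 runner pods
  script:
    configMap:
      name: my-k6-script      # a ConfigMap holding your k6 script
      file: test.js
```

Apply it with kubectl like any other resource, and the operator spins up the runner pods, coordinates the distributed test, and tears everything down when it finishes.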
[00:06:59] Joe Colantonio All right, where are my game testers at? Don't feel left out, because this next one's for you. I actually found this on LinkedIn on Ru's page, talking about how AltTester has released version 2.2.6 of its game testing automation platform. And what's cool is they've introduced an AI extension designed to accelerate test script creation for game testers. This update addresses the time-consuming nature of repetitive scripting that often limits testers' ability to focus on exploration, creativity, and complex testing scenarios in game environments. It explains more about how the new AI extension helps testers generate test scripts more quickly by leveraging AI. The extension specifically targets the dynamic requirements of game testing, where complex game mechanics and vast virtual worlds create unique automation challenges. If you're doing anything with game testing, it's definitely something you should check out down below.
[00:07:51] Joe Colantonio All right, last up is our Follow the Money segment. This announcement mentions that Blacksmith Software has just raised 10 million in Series A funding to address speed and cost issues in GitHub Actions CI pipelines. Based on this PR article, they claim they've doubled CI speed and cut compute costs by up to 75%. They do this by avoiding renting generic cloud hardware like AWS and instead using high-performance, gaming-grade CPUs specifically tuned for speed. This approach provides predictable performance with no queuing and maximizes hardware utilization. And teams can migrate with just one line of code and begin experiencing faster builds within minutes. It also talks about how the company has introduced a new observability feature that gives teams instant clarity to diagnose failures. The platform unifies test results from parallel runs into a single searchable view, helping teams identify real failures, flaky tests, and infrastructure issues without the overhead of traditional monitoring suites.
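That "one line" is presumably the runs-on field in a GitHub Actions workflow, which selects where a job executes. The sketch below shows the shape of the change; the exact Blacksmith runner label is an assumption on my part and will depend on the runner size you pick, so check their docs for the real labels:

```yaml
# Before: runs-on: ubuntu-latest
jobs:
  test:
    runs-on: blacksmith-4vcpu-ubuntu-2204   # label is illustrative; see their docs
    steps:
      - uses: actions/checkout@v4
      - run: npm test
```

Because the rest of the workflow stays untouched, the migration cost really is close to a one-line diff per job.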
[00:08:49] Joe Colantonio All right, for links to everything of value we covered in this news episode, head on over to those links down below. And that's it for this episode of the Test Guild News Show. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack pipeline automation awesomeness. As always, test everything and keep the good. Cheers!