About This Episode:
Are you stuck in a test migration maze?
Can you really shift left… or are you shifting into chaos?
And is Copilot’s new browser the ultimate shortcut for automation engineers?
Find out in this episode of the Test Guild News Show for the week of July 6. So, grab your favorite cup of coffee or tea — and let’s do this!
Exclusive Sponsor
Discover ZAPTEST.AI, the AI-powered platform revolutionizing testing and automation. With Plan Studio, streamline test case management by importing directly from your ALM tool of choice and leveraging AI to optimize cases into reusable, automation-ready modules. Generate actionable insights instantly with built-in snapshots and reports. Powered by Copilot, ZAPTEST.AI automates script generation, manages object repositories, and eliminates repetitive tasks, enabling teams to focus on strategic goals. Experience risk-free innovation with a 6-month No-Risk Proof of Concept, ensuring measurable ROI before commitment. Simplify, optimize, and automate your testing process with ZAPTEST.AI.
Start your test automation journey today—schedule your demo now! https://testguild.me/ZAPTESTNEWS
Links to News Mentioned in this Episode
0:18 | ZAPTEST.AI | https://testguild.me/ZAPTESTNEWS |
0:57 | Cypress to Playwright | https://testguild.me/r1muz4 |
2:08 | Shift-Left Webinar | https://testguild.me/voq0z6 |
3:09 | Mobile AI | https://testguild.me/8n88zy |
4:41 | Copilot Browser | https://testguild.me/jccw1a |
6:02 | AI Context Engineering | https://testguild.me/c5aidu |
7:15 | System Knowledge | https://testguild.me/xclkln |
8:20 | OWASP AI Guide | https://testguild.me/5a0k98 |
News
[00:00:00] Joe Colantonio Are you stuck in a test migration maze? Can you really shift left, or are you just shifting into chaos? And is Copilot's new browser the ultimate shortcut for automation engineers? Find out in this episode of The Test Guild News Show for the week of July 6th. Grab your favorite cup of coffee or tea and let's do this.
[00:00:18] Hey, before we get into the news, I want to thank this week's sponsor, ZAPTEST.AI, an AI-driven platform that can help you supercharge your automation efforts. It's really cool because their intelligent Copilot generates optimized code snippets, while their Plan Studio can help you effortlessly streamline your test case management. And what's even better is you can experience the power of AI in action with their risk-free six-month proof of concept, featuring a dedicated ZAP expert at no upfront cost. Unlock unparalleled efficiency and ROI in your testing process. Don't wait. Schedule your demo now and see how it can help you improve your test automation efforts using the link down below.
[00:00:57] Joe Colantonio All right, let's kick things off with a question I know many testers have on their minds: should you finally make the leap from Cypress to Playwright? Here's what you need to know before you jump. Check it out. On Medium, Akhilesh recently outlined an approach for migrating test scripts from Cypress to Playwright using AI-powered prompt engineering. The article describes how testers can use a single, carefully crafted prompt to automatically convert Cypress code to Playwright, leveraging AI models like GPT-4. The method involves providing context-rich instructions to the AI, specifying the input (Cypress code) and the desired output (the equivalent Playwright code), along with a detailed breakdown of the transformation rules. Akhilesh shares example prompts and suggests using tools such as ChatGPT or similar large language models to perform the conversion. He also emphasizes that this approach can help teams transition to Playwright without manually rewriting large test suites, potentially saving you a bunch of time. However, he notes that testers should still validate and maintain the generated code to ensure it works as intended. Check it out for yourself using the special link down below, and let me know in the comments if it works for you.
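To make that concrete, here's a minimal before-and-after sketch of the kind of conversion such a prompt is meant to produce. The login test, selectors, and URL are hypothetical examples of mine, not taken from Akhilesh's article:

```typescript
// Input you'd paste into the prompt: a (hypothetical) Cypress test.
// describe('login', () => {
//   it('logs in a valid user', () => {
//     cy.visit('/login');
//     cy.get('#email').type('user@example.com');
//     cy.get('#password').type('s3cret');
//     cy.get('button[type=submit]').click();
//     cy.url().should('include', '/dashboard');
//   });
// });

// Output the AI should emit: the equivalent Playwright test.
// Review it by hand before trusting it in CI.
import { test, expect } from '@playwright/test';

test('logs in a valid user', async ({ page }) => {
  await page.goto('/login');
  await page.locator('#email').fill('user@example.com');
  await page.locator('#password').fill('s3cret');
  await page.locator('button[type=submit]').click();
  await expect(page).toHaveURL(/.*dashboard/);
});
```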
[00:02:07] Joe Colantonio Next up is the webinar of the week. Shift left is the mantra, but most teams get stuck in the weeds. Find out why, and what you can do to fix it, in our upcoming webinar with Blink.io happening next week on July 15th. Shift left sounds good on paper, but in reality, from my experience, most efforts fail due to scale. Developers are expected to own automation, but they lack the time, tools, and governance to do it well. The result? Fragile test code, duplicate efforts across teams, and automation that's too flaky to trust. And 90% of end-to-end tests are never really wired into pipelines as quality gates. We're going to go over things like why shift-left efforts often fall short in real-world environments, what's holding back scalable, reliable end-to-end test automation, how GenAI can help your teams build production-grade automation without the overhead, and how you can leverage AI test engineers to help you write, maintain, and heal tests so your developers don't have to. If you haven't already, I highly recommend you register for this webinar happening next week using the link down below. Hope to see you there.
[00:03:09] Joe Colantonio All right, so let's talk about mobile. Generative AI is making big promises in mobile app testing, but is it ready to actually help you ship with confidence? Well, in this recent Medium article by Amrit, he examines how generative AI can be applied to mobile app testing, outlining its potential to speed up and strengthen the testing process. The piece explains that traditional mobile app testing often struggles with the variety of devices, operating systems, and frequent updates. Generative AI, known primarily for creating text and images, is now being explored to generate test cases automatically, simulate diverse user interactions, and even help with UI validations. The article highlights that generative AI can analyze app requirements and produce detailed test scenarios, reducing manual effort and broadening coverage. It could also simulate real-world user behavior, which is critical for uncovering issues early. However, he also notes that generative AI solutions still need careful human oversight to validate outputs and avoid false positives or irrelevant test scenarios. Amrit emphasizes that teams should evaluate the reliability of their AI-generated test cases and integrate them carefully with their existing automation frameworks rather than fully replacing traditional approaches. And I think it's a good point. Maybe generative AI can help with some of the things software test automation engineers need to do when they automate mobile apps. But the best thing is to use your mind and work with the AI to make sure the output is correct, and that it's actually saving you real time and not just adding more overhead. Definitely check it out for yourself and let me know your thoughts.
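As a rough illustration of the test-scenario-generation idea, here's a hedged sketch of a prompt builder you might put in front of whichever LLM your team uses. The requirement, device list, and JSON shape are my own assumptions, not from Amrit's article:

```typescript
// Hedged sketch: build a prompt that asks an LLM for mobile test scenarios
// as structured JSON you can review before automating anything.
interface TestScenario {
  title: string;
  steps: string[];
  expected: string;
}

function buildTestGenPrompt(requirement: string, devices: string[]): string {
  return [
    'You are a mobile QA engineer. Return ONLY a JSON array matching:',
    '{ "title": string, "steps": string[], "expected": string }[]',
    `Requirement under test: ${requirement}`,
    `Target devices: ${devices.join(', ')}`,
    'Include at least one negative case and one interrupted-session case.',
  ].join('\n');
}

// Send this to your model, parse the response into TestScenario[], and
// review every scenario by hand before adding it to your automation suite.
console.log(
  buildTestGenPrompt('User can reset their password from the login screen', [
    'Pixel 8 / Android 14',
    'iPhone 15 / iOS 17',
  ]),
);
```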
[00:04:40] Joe Colantonio All right, on to one of my favorite topics, and that is tooling. Copilot's new built-in browser might sound like a dream for automation engineers, but how can it really help you in your day-to-day workflows? GitHub has just announced that its Copilot coding agent now includes an integrated web browser. This new capability allows Copilot to autonomously search the internet to gather relevant information, retrieve documentation, and find code examples in real time while assisting developers and testers. The browser feature is designed to enhance Copilot's ability to solve more complex coding tasks that require context beyond local code or pre-trained knowledge. For example, the agent can now look up obscure API references, check library updates, or verify best practices directly from the web without human prompting. GitHub notes that this browsing behavior is controlled to reduce security risks, and queries are visible to users to maintain transparency. The update represents a significant expansion of Copilot's functionality, shifting it from a static code-completion tool to a more autonomous coding assistant capable of live research and self-directed learning.
[00:05:50] Joe Colantonio It's just one more thing software testers need to be aware of: what the developers might be using. And you may need to start considering the potential for inconsistent or unverified sources to impact the final code base you're testing. As we zoom out, AI isn't just about prompts anymore. There's a new essential skill you'll want to build to stay ahead as well, if you haven't already. In a recent article, Phil introduces the concept of context engineering as a new discipline aimed at improving how large language models perform in production environments. Phil explains that while prompt engineering focuses on crafting instructions for single queries, context engineering involves designing and managing the broader set of information, such as documents, user profiles, and dynamic data, that an LLM uses to generate responses. The article emphasizes that context engineering helps LLMs make more accurate and relevant decisions by shaping the data they draw from rather than just refining the prompt. Phil also outlines several strategies for context engineering, including retrieval-augmented generation, embedding-based context selection, and using context agents to personalize outputs dynamically. For software testers and developers working with AI-based features, understanding context engineering is crucial for building more reliable and use-case-specific applications. This approach also addresses common issues like hallucinations and inconsistencies in LLM outputs that are a major concern in critical systems.
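To give a feel for one of those strategies, here's a minimal sketch of embedding-based context selection: rank candidate documents by cosine similarity to the query embedding and keep the top few for the model's context window. It assumes the embeddings are precomputed by whatever embedding model you use; the code is illustrative, not from Phil's article:

```typescript
// Minimal sketch of embedding-based context selection. Embeddings are
// assumed to be precomputed elsewhere by your embedding model of choice.
type Doc = { id: string; text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score every candidate document against the query and keep the top k.
function selectContext(queryEmbedding: number[], docs: Doc[], k = 3): Doc[] {
  return [...docs]
    .sort(
      (a, b) =>
        cosine(queryEmbedding, b.embedding) - cosine(queryEmbedding, a.embedding),
    )
    .slice(0, k);
}

// Usage: const context = selectContext(embed(query), knowledgeBase);
// (embed() is whatever embedding call you use) then prepend
// context.map(d => d.text).join('\n') to the model prompt.
```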
[00:07:14] Joe Colantonio And while we're talking about skills, here's a reminder from the field: deep technical system knowledge is becoming a make-or-break factor for testers. On the QT blog, Richard Bradshaw really emphasizes the importance of having strong technical system knowledge as a quality assurance professional. The article explains that testers who understand a system's technical architecture, including hardware, operating systems, and network dependencies, can design more effective tests and uncover issues that purely functional testing may overlook. It also points out that gaps in technical understanding can lead to missed defects, especially in areas like system integration and performance, and it encourages testers to move beyond black-box approaches and engage more deeply with the underlying system components. It also highlights that as systems become more complex, technical awareness becomes essential for preventing costly failures and maintaining product reliability. Finally, the article suggests that investing time in learning about system internals can improve collaboration with developers and system architects and, by doing so, hopefully strengthen overall software quality.
[00:08:20] Joe Colantonio All right, AI and security is always in the spotlight, and OWASP just dropped a new AI testing guide. So let's unpack what's inside and why it matters to you. Check it out. OWASP has published the AI Testing Guide, a new resource aimed at helping software testers and security professionals evaluate AI-enabled systems. The guide addresses the unique challenges of testing artificial intelligence and machine learning models, which differs significantly from traditional software testing. The document covers critical areas including data quality and integrity, model robustness, adversarial testing, and explainability, and it outlines specific strategies for assessing the security and reliability of AI models, emphasizing the need to test not only the code but also the data pipelines and model outputs. The guide also highlights potential risks such as biased training data, unexpected model behavior, and vulnerabilities to adversarial attacks. It encourages testers to adopt a comprehensive, risk-based approach that accounts for both technical and ethical considerations when working with AI systems.
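And in the spirit of the guide's robustness and adversarial-testing themes, here's a hedged sketch of a simple stability check: apply small, meaning-preserving perturbations to an input and flag any change in the model's output. The classify() wrapper and the perturbations are hypothetical stand-ins of mine, not an API from the OWASP guide:

```typescript
// Hedged robustness sketch: perturb an input in ways that should not change
// its meaning, then verify the model's label stays stable.
async function classify(text: string): Promise<string> {
  // Replace with a real call to your model; a fixed label keeps this runnable.
  return 'positive';
}

const perturbations: Array<(s: string) => string> = [
  (s) => s.toUpperCase(),       // case change
  (s) => s.replace(/ /g, '  '), // whitespace noise
  (s) => `${s} !!`,             // trailing punctuation
];

async function checkRobustness(input: string): Promise<boolean> {
  const baseline = await classify(input);
  for (const perturb of perturbations) {
    const mutated = perturb(input);
    const label = await classify(mutated);
    if (label !== baseline) {
      console.warn(`Unstable: "${mutated}" -> ${label}, expected ${baseline}`);
      return false;
    }
  }
  return true;
}

checkRobustness('The checkout flow worked perfectly').then((ok) =>
  console.log(ok ? 'Output stable under perturbation' : 'Robustness issue'),
);
```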
[00:09:25] Joe Colantonio Alright, and for links to everything we covered in this news episode, head on over to the links in the comments down below. So that's it for this episode of the Test Guild News Show. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack pipeline automation awesomeness. And as always, test everything and keep the good. Cheers.