GenAI Learning Journey, Contract Testing, AI Safety Testing and more TGNS131

By Test Guild
[Episode artwork: TestGuild News Show, covering weekly DevOps, automation, performance, and security testing news.]

About This Episode:

What famous tester just published their GenAI / LLM learning experience journey?

How can contract testing bridge the gap in integration testing?

Have you seen the new open-source platform that was released for AI safety testing?

Find out in this episode of the Test Guild News Show for the week of Aug 4th. So, grab your favorite cup of coffee or tea, and let's do this.

Exclusive Sponsor

This episode of the TestGuild News Show is sponsored by the folks at Applitools. Applitools is a next-generation test automation platform powered by Visual AI. Increase quality, accelerate delivery and reduce cost with the world’s most intelligent test automation platform. Seeing is believing, so create your free account now!

Applitools Free Account https://rcl.ink/xroZw

Links to News Mentioned in this Episode

Time   News Title                        Link
0:22   Register for News Show            https://testguild.me/newsub
0:29   GenAI / LLM learning journey      https://testguild.me/a5u6vb
1:36   Contract Testing                  https://testguild.me/50ynkv
2:46   Free Browser Plugin               https://testguild.me/eolafs
3:33   Dependency Inversion Principle    https://testguild.me/yq7tdj
4:27   GameDriver 2024.07!               https://testguild.me/p422qs
5:07   GameDriver Course                 https://testguild.me/gamecourse
5:23   Checkly                           https://testguild.me/o7ujsd
5:46   Full Stack Tests Quickly          https://testguild.me/zw86uq
6:56   Synthetic Monitoring              https://testguild.me/czeljh
8:20   NIST AI safety testing            https://testguild.me/grxjzv

News

[00:00:00] Joe Colantonio What famous tester just published their GenAI/LLM learning experience journey? How can contract testing bridge the gap in integration testing? And have you seen the new open-source platform that was just released for AI safety testing? Find out in this episode of the Test Guild News Show for the week of August 4th. Grab your favorite cup of coffee or tea and let's do this.

[00:00:20] Joe Colantonio Before we get into it, if you haven't already, make sure to subscribe to the Test Guild LinkedIn News Show newsletter (I have the link for it down below) and never miss another episode.

[00:00:29] Joe Colantonio First up is a GenAI/LLM learning journey. In this blog post, the renowned agile tester Lisa Crispin discusses the growing impact of generative AI and large language models on software testing. Lisa emphasizes how these technologies are helping testers by automating mundane tasks and generating valuable insights. She also highlights the importance of continuous learning and adaptability for testers to leverage these advancements effectively, and she provides examples of how GenAI can assist in areas like test case generation, defect prediction, and even simulating real-world usage to uncover hidden issues. Lisa encourages testers to embrace these technologies, give them a try, and integrate them into their own workflows. However, the post also cautions against overreliance on these tools and stresses the need for human oversight to ensure quality and reliability. The key takeaway, as Lisa points out, is to embrace these technologies while maintaining a critical eye on the quality and reliability of their outcomes. Thank you, Lisa, for this post. To learn more, check out the link down below.
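To make that concrete, here is a minimal sketch of the kind of test case generation Lisa describes. It assumes an OpenAI-style chat client and an illustrative feature description; her post doesn't prescribe any particular tool or model, so treat this as one possible approach, not her method:

```python
# Sketch: asking an LLM to draft pytest cases from a plain-English feature
# description. Assumes the official openai package and an OPENAI_API_KEY
# in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

feature = (
    "A discount service applies 10% off orders over $100, "
    "but never stacks with a coupon code."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model works here
    messages=[
        {
            "role": "user",
            "content": (
                "Generate pytest test cases (happy path, boundary, and "
                f"negative) for this feature:\n{feature}"
            ),
        }
    ],
)

# The generated tests still need the human review Lisa calls for.
print(response.choices[0].message.content)
```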

[00:01:36] Joe Colantonio How can contract testing help you with integration tests? Well, let's check out this latest post by Bas, all about contract testing and how it has emerged as a critical approach to integration testing. The post goes over how contract testing provides an efficient and robust method to validate the interactions between different services, and how, as microservice architectures increasingly become the norm, traditional integration testing falls short in covering these interactions. Contract testing addresses this by defining and enforcing clear contracts between services, ensuring they communicate correctly without requiring complete end-to-end tests. The article outlines the mechanics of contract testing and emphasizes its critical role in the software development lifecycle: by validating interactions at the micro level, developers can catch errors earlier, reducing costly fixes later in the pipeline. Bas also points out how contract testing supports parallel development streams by decoupling service dependencies and enhancing productivity. Definitely another must-read by Bas. You can check it out down below.
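To show the core idea in miniature, here is a hand-rolled sketch of a consumer-driven contract check. Real projects would use a dedicated tool (Pact is a common choice), and the contract shape, endpoint, and field names below are all made up for illustration:

```python
# Sketch: the consumer publishes the response shape it relies on, and the
# provider's test suite verifies its actual response against that shape,
# with no full end-to-end environment required.
CONSUMER_CONTRACT = {
    "endpoint": "/users/42",
    "expected_fields": {"id": int, "name": str, "email": str},
}

def provider_response(endpoint: str) -> dict:
    # Stand-in for calling the provider service under test.
    return {"id": 42, "name": "Ada", "email": "ada@example.com"}

def verify_contract(contract: dict) -> list[str]:
    """Return a list of violations; an empty list means the contract holds."""
    body = provider_response(contract["endpoint"])
    violations = []
    for field, expected_type in contract["expected_fields"].items():
        if field not in body:
            violations.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            violations.append(f"wrong type for field: {field}")
    return violations

assert verify_contract(CONSUMER_CONTRACT) == [], "provider broke the contract"
```

Because the provider runs this check in its own pipeline, it catches breaking changes before any consumer integration test ever runs, which is exactly the decoupling Bas describes.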

[00:02:46] Joe Colantonio I also came across a new tool that you should definitely check out. Sanjay Kumar just announced the release of a new browser extension called Check My Links, and this free tool quickly scans web pages to identify broken, invalid, and redirected links, providing valuable insights in just a few seconds. It's available for Chrome, Edge, Opera, and Brave browsers, and it helps maintain website integrity by detecting link issues. Some of the features Sanjay points out: it highlights valid links in green and invalid links in red, it allows users to customize link highlight colors, you can export link data for further analysis, and it includes settings to exclude certain pages from scans. Sanjay is always developing new tools, so definitely keep your eye on him, and check this one out as well. Let me know your thoughts about it in the comments down below.
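For a sense of what a link checker like this does under the hood, here is a small standard-library sketch of the concept. It is not Check My Links itself (that's a browser extension), and the classification logic is simplified:

```python
# Sketch: collect every href on a page, probe each one, and classify the
# result as ok, redirect, or broken.
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import Request, urlopen

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def check_page(page_url: str) -> dict:
    html = urlopen(Request(page_url, headers={"User-Agent": "link-check"})).read()
    parser = LinkCollector()
    parser.feed(html.decode("utf-8", errors="replace"))
    results = {}
    for href in parser.links:
        url = urljoin(page_url, href)
        try:
            resp = urlopen(Request(url, method="HEAD"))
            # urlopen follows redirects; a changed final URL means we were redirected.
            results[url] = "redirect" if resp.geturl() != url else "ok"
        except (HTTPError, URLError) as err:
            results[url] = f"broken ({err})"
    return results

print(check_page("https://example.com"))
```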

[00:03:33] Joe Colantonio What is the Dependency Inversion Principle? To me it sounds like an 80s synth band, but it's not. It's actually something that can help software testers. What is it? Let's check it out. This one is by Kristin Jackvony, who covers one of the last pieces of her SOLID principles series: the Dependency Inversion Principle. Kristin breaks down the principle, which states that high-level modules should not depend on low-level modules; instead, both should depend on abstractions. That means the core code base should rely on interfaces or abstract classes rather than concrete implementations, which encourages a more flexible and decoupled architecture. She also breaks down how, when applied to software testing, this principle can drastically improve the maintainability and scalability of tests. This looks like the last post in her series on SOLID principles, so make sure to check out all of Kristin's other posts on this topic.
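Here is a minimal sketch of the principle from a tester's point of view. The names are illustrative, not from Kristin's post:

```python
# Sketch: the Dependency Inversion Principle in miniature. The report
# generator (high-level module) depends on an abstraction, not on a
# concrete database, so a test can swap in a fake without touching
# production code.
from abc import ABC, abstractmethod

class UserStore(ABC):  # the abstraction both sides depend on
    @abstractmethod
    def count_active(self) -> int: ...

class SqlUserStore(UserStore):  # low-level detail, used in production
    def count_active(self) -> int:
        raise NotImplementedError("would query a real database here")

class FakeUserStore(UserStore):  # low-level detail, used in tests
    def __init__(self, active: int):
        self.active = active

    def count_active(self) -> int:
        return self.active

def activity_report(store: UserStore) -> str:  # high-level module
    return f"{store.count_active()} active users"

# The test never touches SQL: dependency inversion keeps it fast and isolated.
assert activity_report(FakeUserStore(active=3)) == "3 active users"
```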

[00:04:27] Joe Colantonio Next up is all about GameDriver, which just announced the release of its latest version, 2024.07, designed for Unreal Engine and Unity. This update brings a variety of new features aimed at enhancing automated testing for game developers, testers, and QA professionals. The release focuses on improving workflow efficiency, expanding platform support, and integrating more seamlessly with CI/CD pipelines. It also covers some key additions, including enhanced interactions with game objects, updated support for various versions of Unreal Engine and Unity, and advanced debugging tools. These improvements are set to streamline testing processes, allowing developers to identify and resolve issues more quickly.

[00:05:07] Joe Colantonio And if you've never heard of GameDriver before and don't know how it can help you with testing, definitely check out our free training course on getting started with video game testing using GameDriver to get you up to speed on this awesome utility as well.

[00:05:20] Joe Colantonio Next up is a follow-the-money segment. Checkly has been on a roll: they just announced a $20 million Series B funding round. Tim goes over how this financial boost is expected to help Checkly enhance its platform, which enables developers to detect and resolve issues at unprecedented speeds, up to ten times faster than traditional methods, according to them. Definitely a cool solution. If you haven't checked it out, check it out as well, and you can find more about this announcement in the links down below.

[00:05:47] Joe Colantonio Up next is how to build automated full-stack tests quicker from manual test runs. This is by Nuwan, who posted on LinkedIn about a tool I'd never heard of before. He mentions that he has worked in organizations ranging from million- to trillion-dollar scales, consistently finding testing to be a critical yet suboptimal area. As developer work increasingly relies on fast, inexpensive, but often inaccurate large language models, testing is becoming the main bottleneck in the development process. Because of this, he sees the role of humans in testing changing: even while some advocate for fully AI-driven testing, a more practical approach involves humans guiding the process through exploration and insights while AI handles repetitive tasks like automating and scaling. And that's where he mentions the tool I'd never heard of before: TestChimp, which he points to as an example of what he thinks is a balanced approach, combining human intuition and creativity with AI efficiencies. The platform's goal is to create seamless integration of human and AI efforts in the testing process. I've never used it, but if you have, let me know your thoughts on it in the comments.

[00:06:56] Joe Colantonio This next article is all about observability, specifically synthetic monitoring with OpenTelemetry. I've noticed over the years that observability has become a more and more critical component of ensuring robust and efficient systems. A recent pivotal development in this field is the integration of synthetic monitoring techniques with OpenTelemetry, an open-source project providing a unified set of APIs, libraries, agents, and instrumentation to collect distributed traces and metrics. The post goes over why maintaining optimal performance and reliability of software is critical, and why synthetic monitoring, which involves simulating user interactions and transactions to test application performance, pairs so well with OpenTelemetry's flexible open-source tooling. What's great about this approach is that it enables organizations to preemptively detect performance issues and outages in their software, ensuring a superior user experience. It provides a proactive layer of observability, allowing teams to understand system behavior under predefined conditions without waiting for real users to encounter problems. I think this is an invaluable asset for diagnosing and resolving potential bottlenecks that every tester should learn about, and you can find out more in the links down below.
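Here is a minimal sketch of what a synthetic check instrumented with OpenTelemetry can look like in Python. The console exporter stands in for whatever tracing backend you actually ship to, and the probed URL and span names are illustrative, not from the article:

```python
# Sketch: a scripted probe that emits an OpenTelemetry trace per run, so
# synthetic checks land in the same backend as real user traffic.
from urllib.request import urlopen

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("synthetic-monitor")

def synthetic_check(url: str) -> None:
    # Each scheduled run becomes a trace, so latency and failures are
    # visible before any real user hits the problem.
    with tracer.start_as_current_span("synthetic.homepage-check") as span:
        span.set_attribute("check.url", url)
        with urlopen(url) as resp:
            span.set_attribute("http.status_code", resp.status)

synthetic_check("https://example.com")
```

Run on a schedule (cron, CI, or a monitoring runner), this gives you the proactive, predefined-condition observability the article describes.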

[00:08:20] Joe Colantonio And with all this talk around AI, I found an open-source platform designed to enhance the safety testing of artificial intelligence systems. The National Institute of Standards and Technology (NIST) has announced the release of an open-source platform designed to enhance the safety testing of AI systems. This new initiative aims to provide a standardized method for evaluating AI technologies, ensuring that they function reliably and free from bias. By creating this platform, NIST seeks to offer developers, researchers, and policymakers tools to assess the safety and effectiveness of AI applications comprehensively. The tool, called Dioptra, is an open-source, web-based tool first released in 2022 that NIST has built upon; it seeks to help companies training AI models, and people using those models, to assess, analyze, and track AI risks. The framework also includes a suite of testing tools geared toward identifying potential vulnerabilities and ensuring that AI technologies comply with established safety standards. So if you're a tester, you should definitely leverage these new tools to identify and mitigate risk, ensuring that AI applications are developed to be both secure and unbiased.
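To give a flavor of one kind of check such platforms automate, here is a tiny metamorphic "bias probe": swap demographic terms and assert the model's decision doesn't change. The model here is a made-up stand-in function, not Dioptra's API; it only shows the shape of the test:

```python
# Sketch: a metamorphic bias probe. Swapping demographic words in the input
# should never flip the model's decision; any flip is a reportable failure.
SWAPS = [("he", "she"), ("mr.", "ms.")]

def model_predict(application_text: str) -> str:
    # Hypothetical stand-in for a real model: approves anything that
    # mentions a stable income, regardless of wording.
    return "approve" if "stable income" in application_text else "deny"

def swap_words(text: str, a: str, b: str) -> str:
    # Word-level swap so we don't mangle substrings inside other words.
    return " ".join(b if word == a else word for word in text.split())

def bias_probe(text: str) -> list[str]:
    """Return descriptions of any demographic swap that flips the decision."""
    baseline = model_predict(text)
    failures = []
    for a, b in SWAPS:
        if model_predict(swap_words(text, a, b)) != baseline:
            failures.append(f"decision flipped when '{a}' became '{b}'")
    return failures

application = "mr. smith reports a stable income and he rents his home."
assert bias_probe(application) == [], "model shows demographic sensitivity"
```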

[00:09:36] Joe Colantonio Alright, for links to everything of value we covered in this news episode, head on over to the first comment down below. That's it for this episode of the Test Guild News Show. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack pipeline automation awesomeness. As always, test everything and keep the good. Cheers.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Mateo Rojas Carulla TestGuild DevOps Toolchain

AI and the New Era of Cybersecurity Threats with Mateo Rojas-Carulla

Posted on 12/11/2024

About this DevOps Toolchain Episode: Today, we're exploring a topic that's becoming more ...

Discover-Future-Trends-in-Automation-at-Automation-Guild-Feature-Image

Discover Future Trends in Automation at Automation Guild

Posted on 12/08/2024

About This Episode: I'm your host, Joe Colantonio, and I am thrilled to ...

Evan Niedojadlo TestGuild DevOp

From Code to Leadership with Evan Niedojadlo

Posted on 12/04/2024

About this DevOps Toolchain Episode: Today's episode delves into the journey of transitioning ...