AI for Test Coverage, Why Playwright is Slow, Crowdstrike and more! TGNS129

By Test Guild

About This Episode:

Do you know how much of real production usage your tests cover?

Why is page.goto() slowing down your Playwright tests?

What are some key testing lessons you can learn from the recent Microsoft/CrowdStrike incident?

Find out in this episode of the Test Guild News Show for the week of July 21. So grab your favorite cup of coffee or tea, and let's do this.

Exclusive Sponsor

This episode of the TestGuild News Show is sponsored by the folks at Applitools. Applitools is a next-generation test automation platform powered by Visual AI. Increase quality, accelerate delivery and reduce cost with the world’s most intelligent test automation platform. Seeing is believing, so create your free account now!

Applitools Free Account https://rcl.ink/xroZw

Links to News Mentioned in this Episode

Time News Title Rocket Link
0:29 Infinitest: Infinisights https://testguild.me/vq6a5z
2:00 page.goto() slowing down Playwright https://testguild.me/e3y84f
3:01 SeaLights acquired for $150 million https://testguild.me/30bqqu
4:19 Cypress latest releases https://testguild.me/3kt7od
4:55 DevAssure https://testguild.me/edgccl
5:51 AI-Focused Engineering Team https://testguild.me/8dsusd
6:41 Continuous Web Performance Monitoring https://testguild.me/ubtf0o
7:34 We live in a universe filled with Fragile Systems https://testguild.me/y34qx1
8:31 Testing Lessons from CrowdStrike https://testguild.me/f5embi

News

[00:00:00] Joe Colantonio Do you know how much of real production usage your tests actually cover? Why is Page.goto slowing down your Playwright test? And what are some key testing lessons you can learn from the recent Microsoft CrowdStrike incident? Find out in this episode of The Test Guild News Show for the week of July 21st. Grab your favorite cup of coffee or tea and let's do this.

[00:00:20] Joe Colantonio Before we get into it, if you haven't already, please make sure to subscribe to our Test Guild LinkedIn News Show newsletter. I'll have the link down below so you never miss another episode.

[00:00:29] Joe Colantonio Next up, I came across a blog post that reveals gaps in test coverage that most software testers are experiencing. Let's check it out. This post by Infinitest highlights a critical issue regarding test coverage in software development. It suggests that many software teams may not know how much of their real production usage is actually covered by their test suites, despite having a large number of tests, such as 5,000. This gap in understanding can lead to undetected issues and potential system failures. If that's you, listen up, because I just learned about a new feature that addresses this, part of the complementary capabilities that infinitest.io has recently added to their existing record-and-playback solution. With this feature, Infinitest generates insights based on the data collected by their agent, and there are two types of insights. The first is events, which shows how many unique user events exist in your tests and which are missing. The second is flows, which shows the most common user journeys, generated by their proprietary algorithm, and how many of them are part of your tests. What's great about these two insights is that you can simultaneously see what your current test coverage is and what is missing, based on the most current and common user actions. Sounds awesome, right? I know a lot of testers are struggling with this. Don't be one of those testers with a bunch of tests that don't really address the risk of what your customers or users are actually doing. Check it out for yourself: head on over to testguild.me/testcover and see it for yourself.
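
To make the idea concrete, here's a minimal, hypothetical sketch of what event-based coverage insight boils down to: compare the unique user events observed in production against the events your test suite actually exercises. The data shapes and function names here are illustrative, not Infinitest's actual API.

```typescript
// Hypothetical sketch: which production events does the test suite cover?
interface CoverageReport {
  covered: string[];
  missing: string[];
  coveragePct: number;
}

function eventCoverage(
  productionEvents: Set<string>, // unique events seen in real usage
  testedEvents: Set<string>      // events your tests actually trigger
): CoverageReport {
  const covered = [...productionEvents].filter((e) => testedEvents.has(e));
  const missing = [...productionEvents].filter((e) => !testedEvents.has(e));
  return {
    covered,
    missing,
    coveragePct: (covered.length / productionEvents.size) * 100,
  };
}

// Even 5,000 tests can miss the events real users trigger most.
const prod = new Set(["login", "search", "add-to-cart", "checkout", "apply-coupon"]);
const tested = new Set(["login", "search", "checkout"]);
console.log(eventCoverage(prod, tested));
// -> missing: ["add-to-cart", "apply-coupon"], coveragePct: 60
```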

[00:02:01] Joe Colantonio Did you know that developers recently identified that the use of page.goto() in Playwright tests is contributing to slow test execution times? What can you do about it? Let's check it out. In this article on Checkly, Stefan goes over why page.goto() is slowing down your Playwright tests. It details how the page.goto() function in Playwright, which navigates to a specific URL, has been pinpointed as a performance bottleneck: it slows down tests because it waits for the entire page to load, including resources, before completing. The article has a lot of real code examples that show you how to optimize the performance of your Playwright tests with alternative methods, such as selective waits for the specific elements or conditions your test scenario requires, rather than waiting for the full page load. Implementing this can lead to remarkably faster test executions. If you are using Playwright, this is definitely a must-read, and you can find it down below.
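
To illustrate the general pattern (not necessarily Stefan's exact examples), here's a minimal Playwright sketch. page.goto()'s documented waitUntil option lets you return control at 'domcontentloaded' (or the even earlier 'commit') instead of the default 'load' event, and then you wait selectively for only the element the test needs. The URL and element names are placeholders.

```typescript
import { test, expect } from '@playwright/test';

test('dashboard loads without waiting for every resource', async ({ page }) => {
  // By default, page.goto() resolves only after the 'load' event, which
  // includes images, fonts, and third-party scripts. 'domcontentloaded'
  // hands control back much sooner.
  await page.goto('https://example.com/dashboard', { waitUntil: 'domcontentloaded' });

  // Then wait selectively for just the element this test actually needs,
  // instead of the full page load.
  const heading = page.getByRole('heading', { name: 'Dashboard' });
  await heading.waitFor();
  await expect(heading).toBeVisible();
});
```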

[00:03:01] Joe Colantonio Next up is a Follow the Money segment. What company was just acquired for $150 million? Let's check it out. Tricentis, which pretty much every company requires, has just announced an acquisition: the AI-powered continuous testing platform SeaLights, for $150 million. This strategic move aims to enhance Tricentis's test automation capabilities, and the deal reflects Tricentis's commitment to expanding its AI-driven testing solutions, targeting improved efficiency and comprehensive coverage across the software development life cycles everyone has to deal with. This acquisition underscores the growing importance of AI in the software testing industry, and testers should focus on familiarizing themselves with AI and continuous testing technologies. What I really like about SeaLights is that when you check in code, it can tell you which tests map to that code check-in, rather than just running all your tests. It's a really cool solution. It's actually one of the last things I did before I was laid off almost five years ago: I ran a proof of concept for using AI to identify tests to run for code check-ins, and SeaLights was the winner; they were selected. So I'm really excited for folks like co-founder Eran Sher, who's been on the podcast multiple times, and everyone else on the SeaLights team. Congratulations, and check it out in the links down below.
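
For the curious, here's a minimal, hypothetical sketch of the test-impact-analysis idea behind tools like SeaLights: given a coverage map of which tests exercise which files, select only the tests affected by a check-in. This is the general technique, not SeaLights' actual API or data model.

```typescript
// Hypothetical sketch of test impact analysis.
type CoverageMap = Record<string, string[]>; // test name -> files it exercises

function testsForChange(changedFiles: string[], coverage: CoverageMap): string[] {
  const changed = new Set(changedFiles);
  return Object.entries(coverage)
    .filter(([, files]) => files.some((f) => changed.has(f)))
    .map(([testName]) => testName);
}

const coverage: CoverageMap = {
  'checkout.spec': ['src/cart.ts', 'src/payment.ts'],
  'search.spec': ['src/search.ts'],
  'login.spec': ['src/auth.ts'],
};

// A check-in touching only payment code selects only the checkout tests.
console.log(testsForChange(['src/payment.ts'], coverage)); // ['checkout.spec']
```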

[00:04:19] Joe Colantonio Also up: Cypress.io has introduced a new version, 13.12, of its testing tool. This update includes enhanced features and improved performance, and the new version focuses on stability improvements, making automated tests more efficient for developers and testers. Some key features are support for Angular 18 component testing and signals support for Angular component testing with versions 17.2 and above. So testers using Cypress should definitely check out this update to benefit from the enhanced stability, and also explore the newly added features to help streamline your testing process.
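
For context, this is roughly what an Angular component test looks like in Cypress. cy.mount and componentProperties come from Cypress's documented Angular component-testing adapter (registered from 'cypress/angular' in the support file); the GreetingComponent itself is hypothetical, just for illustration.

```typescript
// In a *.component.cy.ts spec, assuming the standard Cypress Angular setup.
import { Component, Input } from '@angular/core';

// A hypothetical standalone component, just for illustration.
@Component({
  selector: 'app-greeting',
  standalone: true,
  template: `<h1>Hello, {{ name }}</h1>`,
})
class GreetingComponent {
  @Input() name = 'world';
}

describe('GreetingComponent', () => {
  it('renders the supplied name', () => {
    // Mount the component in isolation and pass inputs directly.
    cy.mount(GreetingComponent, {
      componentProperties: { name: 'Test Guild' },
    });
    cy.get('h1').should('contain.text', 'Hello, Test Guild');
  });
});
```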

[00:04:55] Joe Colantonio Also on LinkedIn, I came across a new tool I've never heard of before, and it's interesting because it answers a question I got on a recent webinar. What is that question, and what does this tool handle? Let's check it out. This was posted by George, who mentioned on LinkedIn that he discovered a tool called DevAssure. What is it? Well, according to him, it uses gen AI to generate test cases from UI mockups and feature specs, integrating seamlessly with popular tools like Figma. On a recent webinar, someone asked, "Hey, I have a Figma diagram. Can I add this to an AI tool, and will it tell me what test cases to run?" At that point, the answer was no, but here's a tool that can actually do it. Its AI-powered code analyzer identifies functional bugs earlier, and its codeless test IDE facilitates automated testing. They call themselves the first AI buddy for a bunch of different testing activities, and they also have built-in visual regression and accessibility testing. So definitely something to check out.

[00:05:52] Joe Colantonio With all the talk about AI, what's required to support an AI-focused engineering team? Let's see: in this next article, Adam goes over exactly that. The article covers how, as AI technology continues to grow, supporting an AI-focused engineering team requires a blend of specialized skills, collaborative tools, and robust infrastructure. It details how companies need to invest in hiring individuals proficient in machine learning, data science, and software development. Equally important is the establishment of a collaborative environment that encourages knowledge sharing and continuous learning. He also has some interesting thoughts on testing and the future of testing in these types of environments. Definitely a great read by Adam; check it out, once again, in the links down below.

[00:06:41] Joe Colantonio As you know, I love performance testing. This next article goes over how you can use continuous web performance monitoring and why it's so important. It's by Tammy, who emphasizes the significance of continuous web performance monitoring. In this LinkedIn post, Tammy highlights how modern tools like SpeedCurve can offer real-time insights into website performance, enabling businesses to make timely improvements. She also stresses that ongoing monitoring is crucial for maintaining optimal user experiences, which directly impact customer satisfaction and, more importantly, conversion rates. The article itself goes into more detail on how continuous web performance monitoring is essential for identifying and addressing performance issues promptly, and testers should really integrate real-time monitoring tools to ensure websites remain fast and efficient. Another great read; thank you, Tammy. Definitely check it out and let me know your thoughts down below.
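
As a small taste of the kind of real-user signal a monitoring tool such as SpeedCurve collects, here's a minimal in-browser sketch that observes Largest Contentful Paint with the standard PerformanceObserver API. The '/metrics' endpoint is a placeholder for wherever you collect your data.

```typescript
// Observe Largest Contentful Paint (LCP) and report it for monitoring.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1]; // the latest LCP candidate wins
  // Ship the metric to a collection endpoint (hypothetical URL).
  navigator.sendBeacon(
    '/metrics',
    JSON.stringify({ metric: 'LCP', value: lcp.startTime })
  );
}).observe({ type: 'largest-contentful-paint', buffered: true });
```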

[00:07:34] Joe Colantonio All right, talk about burying the lede, but this is probably the biggest news last week, and it was all about Microsoft and CrowdStrike, with multiple blog posts that go over why this happened, how to avoid it, and whether testing, or the lack of it, had anything to do with it. Here are a few key articles I found that go over this in more detail. In this post, James highlights how we have to deal with a bunch of fragile systems nowadays. The post notes that software systems, despite advancements, remain susceptible to failures, and that complexity, design, and unexpected interactions between components often lead to system breakdowns. The key point I think James makes is that, as CrowdStrike has demonstrated, we are only one software patch away from disaster at a scale not possible before in human history, because of our high dependency on software-defined systems. As testers, you should definitely take note. But what can you do about it?

[00:08:31] Joe Colantonio Well, here's another post by Lee that goes over some key lessons you can take away from this incident. He lists actions that organizations, including CrowdStrike, can implement to enhance software quality and reduce the likelihood of future incidents. The first is strengthening testing procedures. The second is embracing secure development approaches. The third is cultivating a culture of ongoing improvement. The fourth is cross-functional collaboration, and the fifth, which he goes into in detail, is strengthening monitoring and response. The last one, of the many I could have chosen, is by Jason. The reason I chose this one is that it has over 100 reactions and over 23 comments. Jason wrote about being disappointed to see some testers take pleasure in the CrowdStrike issue, and obviously, some testers agreed with him and some did not. So read the comments and let me know: is this something you think could have been avoided if better testing had been in place? I'd love to know your thoughts, so let me know in the comments down below.
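
On that monitoring-and-response lesson, one widely used defense against pushing a bad update everywhere at once is a staged rollout with health gates. Here's a hypothetical sketch of the idea: deploy in rings, check a health signal after each ring, and halt before the blast radius grows. The names and thresholds are illustrative, not anything from Lee's post or CrowdStrike's actual process.

```typescript
// Hypothetical staged rollout with health gates.
interface Ring {
  name: string;
  fraction: number; // share of the fleet that receives the update
}

const rings: Ring[] = [
  { name: 'canary', fraction: 0.01 },
  { name: 'early', fraction: 0.1 },
  { name: 'broad', fraction: 1.0 },
];

async function deployWithGates(
  deployTo: (ring: Ring) => Promise<void>,   // pushes the update to a ring
  errorRate: () => Promise<number>,          // current fleet health signal
  maxErrorRate = 0.001
): Promise<void> {
  for (const ring of rings) {
    await deployTo(ring);
    // Gate: stop widening the rollout the moment health degrades.
    if ((await errorRate()) > maxErrorRate) {
      throw new Error(`Halting rollout at ring "${ring.name}": error budget exceeded`);
    }
  }
}
```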

[00:09:32] Joe Colantonio All right, for links to everything of value we covered in this news show episode, head on over to the links in the comments down below. That's it for this episode of the Test Guild News Show. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack pipeline automation awesomeness. As always, test everything and keep the good. Cheers.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Chris Hood TestGuild DevOps Toolchain

Putting Customers First: A DevOps Imperative with Chris Hood

Posted on 08/28/2024

About this DevOps Toolchain Episode: In today's session, we are thrilled to be ...

A person is speaking into a microphone on the "TestGuild News Show" with topics including weekly DevOps, automation, performance, and security testing. "Breaking News" is highlighted at the bottom.

Software Testing Poker, Playwright Studio, FinTech Test Automation TGNS134

Posted on 08/26/2024

About This Episode: What does poker have to do with software testing? Have ...

Rudolf Groetz TestGuild Automation Feature

Low-Code Test Automation a Journey from Skepticism to Success with Rudolf Groetz

Posted on 08/25/2024

About This Episode: In this session, Rudolf Groetz shares how Raiffeisen Bank International ...