Playwright vs JMeter, 3 Automation Questions You Need To Know and More TGNS119

By Test Guild

About This Episode:

Do you know the three pivotal questions to ask when assessing your automation strategy?

What is the Open-Closed Principle, and how is it applied to software testing?

Why would you ever compare Playwright vs JMeter?

Find out in this episode of the Test Guild News Show for the week of May 5. So, grab your favorite cup of coffee or tea, and let's do this.

Exclusive Sponsor

This episode of the TestGuild News Show is sponsored by the folks at Applitools. Applitools is a next-generation test automation platform powered by Visual AI. Increase quality, accelerate delivery and reduce cost with the world’s most intelligent test automation platform. Seeing is believing, so create your free account now!

Applitools Free Account

Links to News Mentioned in this Episode

Time — News Title
0:00 Register for LinkedIn Newsletter
0:19 The Open-Closed Principle
1:39 Amazon Q
2:27 3 Must-Ask Automation Questions
3:19 Free AI Tools
4:06 Snowflake Arctic LLM
5:25 Artillery Cloud
6:18 Grafana Cloud Synthetic Monitoring
7:33 Playwright vs JMeter
9:13 Oasis Secures $35M


[00:00:00] Joe Colantonio Do you know what the three pivotal questions are to ask when assessing your automation strategy? What is the open-closed principle, and how is it applied to software testing? And why would you ever compare Playwright versus JMeter? Find out in this episode of The Test Guild News Show for the week of May 5th. So grab your favorite cup of coffee or tea and let's do this.

[00:00:19] Joe Colantonio First up is what is the open-closed principle in software testing? Let's check it out. This is part of a series that Kristin Jackvony has been working on, and this one is all about the open-closed principle, which is a key concept in the SOLID principles of software development, tailored specifically for testers. She goes into detail on why this concept is vital for maintaining the stability of systems while allowing for the evolution and scalability of your testing frameworks. The blog uses a practical example involving a login class commonly used by testers for automating login functionality. Initially, the class accepted a username and password, enabling automated login. However, when a new feature involving a challenge question is introduced, modifying the original class directly would disrupt existing tests that rely on it. To address this, an extended class, LoginWithChallenge, is created, which incorporates the new feature without altering the original class, demonstrating adherence to the open-closed principle. So why should you care? Well, adopting the open-closed principle can significantly reduce the risk of disrupting existing automated tests when new features are integrated, and it also encourages a more sustainable and manageable approach to developing and maintaining your test automation frameworks. You should definitely check it out, and you can find it in the links down below.
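To make the pattern concrete, here's a minimal Python sketch of the idea described above (the class and method names are my own illustration, not code from the original blog): the challenge-question feature is added by extending the login helper rather than modifying it, so existing tests that use the original class keep working.

```python
class LoginPage:
    """Original automation helper: logs in with username and password."""

    def login(self, username, password):
        # In a real framework this would drive the UI or an API;
        # here we just return a status string for illustration.
        return f"logged in as {username}"


class LoginWithChallenge(LoginPage):
    """Extension: adds the new challenge-question step WITHOUT
    modifying LoginPage, so tests that rely on it are undisturbed."""

    def login(self, username, password, challenge_answer=None):
        result = super().login(username, password)
        if challenge_answer is not None:
            result += f", challenge answered: {challenge_answer}"
        return result


# Existing tests keep using LoginPage unchanged;
# new tests for the challenge feature use the extended class.
print(LoginPage().login("joe", "secret"))
print(LoginWithChallenge().login("joe", "secret", "blue"))
```

The class is "closed" to modification (its signature and behavior never change under existing callers) but "open" to extension through subclassing, which is exactly the trade-off the post argues keeps automation suites stable as features evolve.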

[00:01:39] Joe Colantonio All right, I found this next news article by scrolling through my LinkedIn feed, where I saw that Brad Johnson posted an article about a new Generative AI-powered assistant for businesses and developers. Amazon Web Services has officially launched Amazon Q, a Generative AI assistant aimed at improving productivity in software development and business operations. The reason I bring this up is that Amazon Q enables highly accurate code generation, testing, debugging, and implementation based on developers' inputs. It also facilitates access to business data such as company policies, product information, and business results by connecting to enterprise data repositories for summarization, trend analysis, and dialog. Definitely something you should be aware of. Check it out and let me know your thoughts.

[00:02:27] Joe Colantonio And Paul Grizzaffi is back in the news with another rocking blog post. Let's check it out. In this recent blog post on responsible automation, Paul Grizzaffi, an expert in automation and testing, shares insights from his many years of consulting and practice in test automation across a bunch of different organizations. Paul suggests three pivotal questions to guide companies in assessing their automation strategies. What are they? First: do I have the right automation? Second: do I need more automation? And third: do I have too much automation? These questions help determine whether your existing automation aligns with your business goals, whether additional automation could enhance efficiency, and whether current automation levels exceed the value they provide. Definitely a must-read, and you can find it in the links down below. And thank you, Paul, for the shout-out.

[00:03:19] Joe Colantonio Okay, so this next news article isn't really news, but it's news to me. What do I mean? Let's check it out. I actually went to STAREAST last week, where I got to meet Stacy Kirk face to face for the first time, and she showed me a bunch of free tools her company offers. As you can tell, there are several different tools here. The first is rapid test case generation. Another creates UI test automation code, helping you streamline your test automation code generation. They also have a tool to create API test automation code, plus an "ask your test automation expert" tool. I saw a quick demo and only played around with these briefly, but I think they could give you some benefit. So check them out yourself and let me know in the comments which free AI tool you like best.

[00:04:06] Joe Colantonio Once again, going through my LinkedIn feed, I came across another LLM that I think you need to be aware of. This article was posted by Unmesh, who pointed to how Snowflake has unveiled a new language model called Snowflake Arctic, developed by Snowflake's AI research team and designed for enterprise use with remarkable cost efficiency and openness. Arctic is set to help transform how businesses deploy large language models by significantly reducing the financial and computational burden typically associated with such technologies. This LLM excels at complex tasks such as SQL generation, coding, and following detailed instructions, and according to Snowflake, it outperforms other models that were trained with much higher budgets. What's even better? The model is fully open source under the Apache 2.0 license, providing unrestricted access. So what's in it for you? Well, it's a good opportunity to learn from an open-source community, expand your skill set in areas like the mixture-of-experts (MoE) architecture and enterprise-focused AI applications, and enhance your job prospects and professional growth as demand increases for skills in testing sophisticated, cost-effective AI solutions within enterprise environments. Definitely a must check out, and you can find it down below.

[00:05:25] Joe Colantonio Also, a few days ago, Hassy announced that Artillery.io has officially launched Artillery Cloud, now out of beta and available for general use. So what is Artillery Cloud? Let's check it out. Artillery Cloud is designed for teams conducting continuous load testing, providing advanced tools for visualizing and analyzing results. The platform helps users make sense of extensive raw data from load tests, identify and resolve performance bottlenecks, and keep external stakeholders updated on testing progress. Artillery Cloud also includes features such as summary reports of key performance metrics, custom charts and dashboards for application-specific metrics, and options for annotating and sharing test results. It also supports Playwright for browser performance metrics and user experience visualizations, enhancing the utility of load testing in real-world scenarios.

[00:06:18] Joe Colantonio Also, Grafana Labs has made a big announcement, which I saw on Leandro Melendez's (Señor Performo's) LinkedIn, all about how Grafana Cloud Synthetic Monitoring introduces new capabilities that enable more complex and realistic simulations of user interactions with applications. The post goes into detail on how this update introduces multi-HTTP and scripted checks powered by the k6 performance testing tool. This allows developers, platform engineers, and site reliability engineers to create detailed, multi-step simulations that mimic actual user journeys, ensuring that applications perform reliably under varied conditions. Previously, Grafana Cloud Synthetic Monitoring used the Prometheus Blackbox exporter for basic protocol tests. The new k6 integration expands the scope to include advanced scripting and multi-step testing, providing deeper insights and more robust monitoring of user experiences. It also adds the ability to define detailed assertions for each step of the user journey, so testers can pinpoint issues more precisely, leading to quicker problem resolution. Definitely check it out and let me know your thoughts in the comments down below.

[00:07:33] Joe Colantonio Also, OctoPerf just posted a link to a blog post they wrote, I believe last month, on how Playwright compares to JMeter. What does it all mean? Let's check it out. As you know, Playwright is known for its end-to-end testing capabilities that automate real browser actions, and it presents an alternative to JMeter, which generates virtual users at the protocol level. While JMeter captures every request generated by the browser and replays those requests to simulate load, Playwright operates by actually launching a browser and executing user actions. The comparison reveals that Playwright offers more realistic user interaction by incorporating actual browser responses, but it requires significantly more system resources. Conversely, JMeter, though less resource-intensive, involves complex script management, including handling dynamic parameters, mainly token correlation. The article concludes that despite the higher resource demands, the benefits of using Playwright could outweigh the costs, particularly for testing scenarios where user experience and interface interactions are critical. However, combining the strengths of both Playwright and JMeter could be a strategic approach, allowing testers to leverage the realism of browser-based testing while generating the bulk of the load with protocol-based virtual users. Actually, I'll be honest, this is nothing new. I was doing this back in 2000 with WinRunner as a GUI Vuser, running it from within LoadRunner, using both the LoadRunner protocol and WinRunner to get the real user experience. So it's good to see these tools finally catching up almost 25 years later. And if you're using Playwright or JMeter, you should definitely check it out.

[00:09:13] Joe Colantonio And lastly, in security news: Oasis Security, a leader in non-human identity management (NHIM), has secured a $35 million extension to its Series A funding. The platform offers real-time analysis of how non-human identities are used, assisting operations in identifying security risks and facilitating quick remediation through automated responses and prioritization.

[00:09:38] Joe Colantonio All right, and for links to everything of value we covered in this news episode, head on over to the links in the comments down below. So that's it for this episode of The Test Guild News Show. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack, full-pipeline automation awesomeness. As always, test everything and keep the good. Cheers.
