Front-end Performance User Experience Testing

By Test Guild

About This Episode:

As we head into the holiday season, the topic of front-end performance testing will become even more critical. In this episode, you’ll learn ways to start optimizing your application front end to give your users a quicker experience. Discover tools and techniques to get your team involved in your front-end testing efforts earlier in your SDLC.

Full Transcript Joe Colantonio

Joe [00:00:16] Hey, it's Joe, and welcome to another episode of the Test Guild Automation podcast. Today, we're going to be going over what I think is a really important topic for testers, and that is front-end performance. Don't be scared by the performance aspect. You don't need to be a performance testing engineer in order to do these types of activities. I'm actually going to go over a blog post I recently wrote called The Complete Front End Performance Testing Guide. I have a link for that in the show notes. It's based on a session we had at the 2020 PerfGuild conference where Andy Davies, who is a performance testing professional, went over what his session was about and some tools that can help you, especially as we head into Black Friday. You want to make sure your applications are ready to handle the increased load they're probably going to experience during the holiday season. And if you haven't already, I just wanted to let you know you have one week left to get a special early bird discount on the Automation Guild tickets for 2021. Head on over to testgu-21304.arnt-nyc.servebolt.cloud and register for the fifth annual online conference dedicated 100 percent to helping you succeed with all things automation testing related. Our goal is that you learn a tip, tool, technique, or best practice that's going to help a new or existing automation project and set you up for success in the New Year. You definitely don't want to miss it. We have Simon Stewart, the creator of Selenium, joining us, and it may be one of the last sessions he gives at an online conference. We also have the creator of Appium, plus a bunch of awesome sessions. So you definitely want to check it out. Head on over to testgu-21304.arnt-nyc.servebolt.cloud and register right now while the early bird ticket sale is going on.

Joe [00:02:03] The Test Guild Automation podcast is sponsored by the fantastic folks at SauceLabs. Their continuous testing platform helps ensure your favorite mobile apps and websites work flawlessly on every browser, operating system, and device. Get a free trial at testguildcom.kinsta.cloud/saucelabs and click on the exclusive sponsors section to try it for free for 14 days. Check it out.

Joe [00:02:32] All right, so as I mentioned, as we head into the holiday season, the topic of front-end performance testing for consumer-facing businesses like retailers, publishers, financial companies, all types of companies, is going to become even more critical. Also, I'm sure most folks are already planning for how to handle higher traffic than normal on their sites in this Covid-19 world we're living in. I expect an increase in traffic on sites as people are more reluctant to go out and shop in public and are going to do more and more things online.

Joe [00:03:07] So I think this holiday season we're going to see even bigger issues with performance, and you want to be on top of it before it affects your application. And I know this can be very frustrating. Our teams might have initially put a lot of effort into making our sites wicked fast. But one of the challenges you're probably facing is how to stay on top of things once your application is released into the wild. You've probably had multiple iterations and patches released, and over time you've probably lost track of how performant our application actually is for our end users. I speak with many different developers and testers who struggle with things like how to monitor their site speed over time and how to know what effect a new release or code change will have on the front-end performance of their application. So what I've done is I've actually taken the advice I found from many performance experts I've interviewed on my…if you haven't listened to it yet, the Test Guild Performance podcast, and once again I have a link for that in the show notes. But a lot of this is actually taken from Andy Davies, who is a co-author of Using WebPageTest: Web Performance Testing for Novices and Power Users and a speaker at this year's 2020 PerfGuild. He did an awesome session around this topic, and a lot of what I'm covering in this podcast episode is based on Andy Davies' work. So let's get into it.

[00:04:34] The first thing that Andy pointed out, and something you need to be aware of, is the psychology of web page response times. So let's start a little bit with human psychology. If you step back maybe 50 years to around 1968, there was a guy called Bob Miller who was researching how we respond to delay. And he found that as long as you receive a response within a tenth of a second of an action, you perceive that as being instant. If you press a button and a light comes on in one hundred milliseconds, you perceive that as instant. But as the delay creeps up to around a third of a second, you begin to notice the lag, and if you receive a response within a second you can seamlessly carry on and it doesn't interrupt your flow. So that's good. But the longer that delay becomes, the more likely you are to do what we call bounce. And what I mean by bounce is when you go to a website, it doesn't respond quickly, and you go directly off to another website. That's bad because obviously people aren't spending time on your website, and if it's an e-commerce site, they're not going to buy anything because they're not spending time on your site. So the bounce rate is very important. And in 1968, they found that the limit before someone bounces or loses interest is around 10 seconds. A few years ago, Microsoft did similar research and found the limit was more around the seven to eight second mark. So people are going to stick around for even less time as your page is loading. So user response time from a front-end performance perspective is very critical. Now, when we talk about performance, a lot of people are thinking about stress testing and load testing, and that's critical as well, but that's not what we're talking about here, even though that ultimately will probably impact the front-end performance your users see. What we're really talking about is how your user interacts with your application, one user at a time, from a desktop browser or a mobile device. And you might be wondering, well, why does application front-end response time even matter to me? I'm a tester. What do I care? Well, maybe I'm cynical, but I think everything comes down to money. If your application is not responsive to your end users, your company is losing money, and if your company is losing money, guess who's going to be impacted? You. If you're an employee, a lot of times they'll just go, oh, we're going to cut headcount in order to bring up our share price or something, and the easiest way to do that is to get rid of people. So, in my view, the more performant your application is, the less likely that is to happen. And so you definitely want to be aware of this.

Joe [00:07:17] Money is not the only issue, but it is one of the leading factors and something to be aware of. So if you make people wait on your website, if you deliver a slow experience, it's fundamentally going to have negative implications for your business, and you can actually see this when you look at real data of the experience people get on websites and how that influences their behavior. So once again, I'll have a link to this in the show notes, but there's a chart that Andy gave called “How Speed Affects the User's Behavior on Your Website,” and in the chart, people with faster experiences view more pages on a website. Sounds obvious, but a lot of people don't think about that. So if you're a retailer or an e-commerce site, that means they look at more products, which probably leads to more sales. And if you work for a publisher that relies on advertising, that means they're going to read more stories, which means you can serve them more ads. So you're getting a benefit because the speed is encouraging them to stick around on your site and interact more with your site, which ultimately leads to the kind of behavior that makes money for your company. He also showed another slide, “How Bounce Rate Affects Your Website.” As I mentioned, bounce rate represents how many people come to your site, visit one page, and then leave. So a high bounce rate means that folks are leaving your site without taking any action, which is not good. And you'll see in this chart, which hopefully you'll reference in the show notes, that the bounce rate is lowest at the three-second mark and climbs after that. So the longer you make people wait, the more likely they are to only visit one page, or they're going to bounce. The chart also shows a conversion rate, and the conversion rate is how many people spend money or bought things. In the chart, you'll be able to see that virtually nobody converts below three seconds, probably because few pages complete loading before three seconds. And the longer you make someone wait, the less likely they are to convert. The four to seven-second mark is really the critical zone: in that range, conversion rates drop from five percent to four percent, and that translates into lost revenue.

Joe [00:09:34] Andy also had an example of how he helped a retailer improve their site speed, and once again, there will be a chart that shows this. What they did is they targeted Android-only users, and he made some changes that improved the median experience for those Android visitors by four seconds or so. That's all they did, one small change. And he said they saw revenue from those visitors increase by 26 percent just by making that small tweak.

Joe [00:10:07] So far, a lot of this probably sounds more theoretical than practical, but I'm going to show you some tools that Andy shared with me that can help you focus on how to get that response time down. Before we dive into that, I just want to talk a little bit more about how people currently think about performance and why I think there are some blind spots a lot of people have as they're developing software. Your team probably, as I know my teams did, spends a lot of time and a lot of money building and tuning server farms and databases and testing their capacity to ensure that you can serve up that initial HTML payload really quickly to your site visitors. And that's critical, I'm not downplaying it, but that's the majority of what they're focusing on. If you look at some of the top sites, and Andy showed some UK sites in his example, you can see how long it took the back end to generate the initial response. In the chart, you see these little pink lines and then you see a blue line for the front end. So you can visualize this: the pink line represents the back end and the blue line represents all the other resources that make up the payload that you're serving up to your end user, so your images, your scripts, your stylesheets, all the things you need to complete that payload. When focusing on performance, it's essential once again not to ignore the back end, because until the back end delivers a response, there's no work for the front end to do. But in this chart, you see that the majority of the work that affects the actual experience, and that's what we're talking about, end-user experience, is actually happening in the browser. So in order to measure front-end performance and get front-end performance insight, you need a mental model that's going to help you understand how the metrics you gather map to the actual visitor experience. Andy shared a mental model of the front end, which makes up the majority of the payload and the time that people spend waiting for your site to load. It's a really cool visual Andy gave, and once again, that will be in the show notes. The image represents the visual cues a visitor might have that things are working properly as your page is loading, starting with the point where the browser bar changes to the website's address. But at what point does the page actually become useful? That's critical, because if it doesn't become useful quickly, users are most likely going to bounce because it's taking too long.

Joe [00:12:36] So you want to make sure that the first thing that appears, even if the site is still loading, lets someone start interacting with your site, at least so that they won't bounce. And that's going to be different for you based on what your application does compared to someone else's; different sites are different. For a news site, it may be when someone can start to read the news; that's the first useful thing your site does, and you want to make sure that is loading as fast as possible. For a retailer, it could be when a product image appears so the visitor can see that they're on the right page. So you really have to consider at what point your page becomes usable. And in the example Andy gave, it was pretty late, because the menu button wasn't immediately available for the type of action that user was expecting right away. So when you're thinking about front-end performance, you're thinking about how long that page takes to load before it becomes useful. You can do a lot of things where, even though other things are still loading up, as long as you have something happening first that allows users to start interacting with your site, that's going to help your user experience as well. So you want to know how long it takes for the page that you're testing to become usable and what happens in that beginning phase. And there are actually two broad ways you can measure how a page performs: synthetic and in the wild. Synthetic is a lab-style environment where you have defined test setups and known conditions, and then you have in the wild: a real person's browser, whatever phone they're using, connecting over whatever network they're using. Both approaches have their place, but what we're going to talk about here is the lab approach, because that's the one that most closely fits how we build performance into our initial workflow. It's critical, as your software development lifecycle is going on, like anything, that you shift left and get everyone involved, so it's not too late in the process where the application is already developed and only then are you starting to test its performance. You can't test in performance to an application that does not perform, just like you can't test in automation to an application that's not automatable, right? So you want to get involved early in the software development lifecycle, and the best way to do that is through synthetic testing. In-the-wild testing has its place too, and if you're doing a type of crowd testing, I've seen it help a lot of people as well. But we're going to just focus on the synthetic part in this episode. So how do you start thinking about front-end performance? If you do nothing else, the critical takeaway from this episode is that building performance into your software development lifecycle as early as possible is essential.

Joe [00:15:17] And there are some specific points at which you should be thinking about performance. When you're still in the planning phases, think about how big your pages are going to be and what they're going to be composed of, because the bigger they are, the longer they're going to take to load. When you're running a test on each build, you want to understand whether the build has made the site faster or slower. That's critical as well. I used to use a tool called Serenity, and it wasn't detailed on exactly what was causing performance issues, but it gave me the time each test took, maybe a minute. And I could tell over time that if it took three minutes, then we might have an issue. It gave a really high-level indicator, a flag that, hey, we need to look into this, because traditionally this test only took a minute and now it's taking three. It just gives you a place to start being able to pinpoint that you may have a performance issue. That's just one quick example. Also, when you're tracking your releases, you want to use an external APM tool to understand how your performance is changing now that you've released that software into the wild, and checking your releases using synthetic and real user monitoring tools to understand what folks are experiencing in the wild is going to be critical as well. That way you can take that information back in as you're developing software and build it into your software development lifecycle as you iterate. So if you decide the application is going to include lots of fonts, lots of scripts, and lots of images, and when you release it into the wild you discover that users are experiencing slow response times, you should revise your choices and slim your site down. You just want to be on top of things and not be caught off guard, so being a little proactive is going to go a long way. Also, like anything with testing, integrating performance into your continuous integration and continuous delivery pipeline is a must, so you can test against your performance KPIs and ensure that you're delivering a good performance experience to your users. Just like you integrate your automated tests and your unit tests into your pipeline, you also want to start integrating these high-level performance criteria so that you're not caught off guard by your performance as it's released to your customers.
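As a rough illustration of that relative-indicator idea, here's a minimal sketch (not the actual Serenity feature, just the general pattern) of timing an existing automated scenario and flagging when it drifts well past its usual duration. The baseline number and the runScenario placeholder are hypothetical.

```typescript
// timing-check.ts -- sketch only: time an existing end-to-end scenario and warn
// when it runs well past its historical baseline. Baseline and tolerance are
// illustrative values, and runScenario() is a placeholder for your own test.
const BASELINE_MS = 60_000; // what this scenario has historically taken
const TOLERANCE = 1.5;      // flag anything 50% slower than the baseline

async function runScenario(): Promise<void> {
  // Placeholder for an existing automated test (Selenium, Cypress, etc.).
}

async function main(): Promise<void> {
  const start = Date.now();
  await runScenario();
  const elapsed = Date.now() - start;
  console.log(`Scenario took ${elapsed} ms (baseline ${BASELINE_MS} ms)`);
  if (elapsed > BASELINE_MS * TOLERANCE) {
    console.warn('Duration is well above baseline -- investigate a possible performance regression.');
  }
}

main();
```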

Joe [00:17:38] So let's talk a little bit about how you actually start building performance into your software development lifecycle. You're probably asking right now, okay, how do I begin building performance engineering into my developers' workflow? First, we're going to take a look at a few tools that you can use to test performance in a synthetic, lab-type environment, and then we're going to cover a little bit of how these tools can integrate into your CI/CD process, so that ideally, on every check-in, you understand whether you've had a positive or negative impact on performance. When you think about performance, I think a lot of people think, oh, it's this long, big performance engineering process where you have to put in a huge load and use a lot of performance tools. What we're going to talk about here are tools that pretty much anyone can use to understand their performance from a one-user perspective. James Pulley brought up a good point that a lot of times when people think about performance, they think about multiple users and have no idea how their application even performs for one user. It seems like common sense to know how it performs for one user, but a lot of teams completely bypass the one-user case and jump straight to thinking about how it performs with multiple users without focusing on this first critical step. And because it's just one user, you can start testing as early as possible, which, as I mentioned, is going to be critical.

Joe [00:19:05] So the first tool you want to use is Google Lighthouse, which is actually built into the Chrome DevTools. If you have Chrome installed, you can go to your web application, right-click on it, and inspect the web page, and you'll have a tab that allows you to experiment straight away and explore the features that Lighthouse offers. When you open up DevTools, you'll have an option for Lighthouse; it used to be called the Audits panel. When you start to use Lighthouse, you can begin examining the page's performance, also accessibility, which is a big topic nowadays as well, and some SEO best practices. You can also choose to test the page in a mobile scenario, where it uses an emulated mobile device that has a small screen size, a slowed-down CPU, and a slower network, or you can test as a desktop device. Now, all this is free and baked right in. All you need to do is go to a website, right-click on it, click on inspect, and you'll have a tab called Lighthouse. In Lighthouse, you can generate a report, and it'll identify common problems that affect your site's performance, accessibility, and user experience, along with suggestions to fix them. So it's a really, really useful tool that you definitely want to start using right now and showing your developers right now, and like I said, it doesn't take any fancy tooling or fancy knowledge. Anyone can start doing this right away. So I think it's a quick win for your team if you're not even using it right now.
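If you'd rather script that audit than click through DevTools, the same checks can be run headlessly with the Lighthouse npm package. Here's a minimal sketch, assuming Node with the lighthouse and chrome-launcher packages installed; the URL is just a placeholder.

```typescript
// run-lighthouse.ts -- sketch: run the same performance audit the DevTools
// Lighthouse panel runs, but from a Node script against a headless Chrome.
// Assumes: npm install lighthouse chrome-launcher
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function audit(url: string): Promise<void> {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      output: 'json',
      onlyCategories: ['performance'],
    });
    // Category scores come back as 0..1; multiply by 100 to match DevTools.
    const score = (result?.lhr.categories.performance.score ?? 0) * 100;
    console.log(`${url} Lighthouse performance score: ${score}`);
  } finally {
    await chrome.kill();
  }
}

audit('https://example.com');
```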

[00:20:51] Another way I've seen people use Google Lighthouse in the development process is to actually leverage their existing automated tests to capture this data. As I mentioned, one way I did that is I used a tool called Serenity, and Serenity gave me a relative way to track performance over time. But if you actually want to tap into Lighthouse, you can use its API. One tool that comes to mind is Cypress; I think Cypress actually has an audit plugin that allows you to use Lighthouse. So as part of your flow, you can tap into this plugin and, as your test is running, capture Lighthouse metrics. This is awesome because your automated tests are probably already part of your CI/CD, and just by adding this extra feature, you get this high-level performance data as well. So that's wicked cool. Also, I keep mentioning relative performance. It's not diving into every single transaction; if you're using a performance tool like LoadRunner, a lot of times you create transactions for every piece of your website. Lighthouse instead gives you more of what they call a performance score when it audits the site. The score is useful because you receive a metric, and metrics are great because you can track them over time. And this can be useful when discussing performance with your company stakeholders. You could say, look, the performance score that we got from Lighthouse is 20. It could be bad, it could be good; it all depends on your application and what it's doing. But what's great is that you at least have a number, and once you have a number you can track against that number, and that's critical. You can start using it to have conversations, and having conversations is awesome because people are usually driven by numbers. You can say, hey, look, here's our performance number, we're at 20. And over time, if it gets worse, we can start asking, hey, why did it get worse? How can we make it better? So you can really use it as a simple way to measure over time whether you're getting better or worse, or as a comparison against your competitors. Like I said, because Lighthouse is built in, you can actually go to your competitor's webpage, right-click on it, and see how they do compared to your website. So that gives you a competitive edge as well. Also, what's cool about Google Lighthouse is that it has other metrics besides the main performance score that you can use to capture the experience visitors have when a page loads. The three main ones to dive into are First Contentful Paint, which is basically when content starts to get painted on the screen; Speed Index, which measures how long it takes for the visitor's screen to go from blank to being fully complete and stable (the faster the better, obviously), so it's a great metric when it comes to the experience of the user; and Time to Interactive, which tries to measure when a visitor can start interacting with your application, which goes back to what we talked about earlier: how soon can your application become useful? Because that's going to help determine whether or not a user bounces off your site.
All of these, once again, are metrics captured directly within Google Lighthouse, a completely free tool that you can start using right away within Google Chrome, and you can use these metrics to start judging your visitors' experience. So once again, you can use the total performance score that the audit gives you as a higher-level metric to track relative performance over time, and then you can also go deeper and use these lower-level metrics to see where you're getting better or worse. Lighthouse also generates suggestions on things you could do to make the page better, and it has a diagnostics section you can use to see some of the reasons why you received the performance score that you did. Overall, Lighthouse gives you awesome top-level metrics to track and sub-metrics you can use to understand how to make your page faster.
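If your team already runs Cypress, here's a hedged sketch of what a plugin-based check on those same metrics might look like. It assumes the cypress-audit (Lighthouse) plugin is installed and registered in your Cypress configuration, and the threshold values are illustrative, not recommendations.

```typescript
// cypress/e2e/home-performance.cy.ts -- sketch only: assumes the cypress-audit
// plugin is wired up, which adds the cy.lighthouse() command to your specs.
describe('home page front-end performance', () => {
  it('stays within our Lighthouse budgets', () => {
    cy.visit('/');
    cy.lighthouse({
      performance: 70,                 // minimum overall performance score (0-100)
      'first-contentful-paint': 2000,  // ms budget for First Contentful Paint
      'speed-index': 3400,             // ms budget for Speed Index
      interactive: 5000,               // ms budget for Time to Interactive
    });
  });
});
```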

Joe [00:25:05] Another tool that does something similar, and uses Lighthouse behind the scenes, is PageSpeed Insights. Once again, I'll have a link to the article I wrote that covers all of this, with links to all of these tools as well. PageSpeed Insights also gives you a little more detail, so you should definitely check it out.
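PageSpeed Insights also exposes a public v5 HTTP API that returns the Lighthouse lab results (plus field data) as JSON, so you can pull that same score from a script. A minimal sketch, assuming Node 18+ for the built-in fetch; for anything beyond occasional runs you'd also pass an API key.

```typescript
// psi-check.ts -- sketch: query the PageSpeed Insights v5 API, which runs
// Lighthouse on Google's infrastructure and returns the results as JSON.
const PSI_ENDPOINT = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

async function psiPerformanceScore(url: string): Promise<number> {
  const params = new URLSearchParams({ url, category: 'performance', strategy: 'mobile' });
  const res = await fetch(`${PSI_ENDPOINT}?${params}`);
  if (!res.ok) throw new Error(`PSI request failed: ${res.status}`);
  const body = await res.json();
  // The lab (Lighthouse) category score is reported as 0..1.
  return body.lighthouseResult.categories.performance.score * 100;
}

psiPerformanceScore('https://example.com').then((score) =>
  console.log(`PageSpeed Insights performance score: ${score}`)
);
```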

Joe [00:25:24] Now, the next tools Andy brought up are not all free; some are paid, but stay with me. These are just some tools; there are lots of them, and I'm not saying you have to use these specific ones, but I'm going to give you a reason why having some kind of paid solution is critical, whatever it may be. Let's go over the ones that Andy recommended and why he pointed out paid solutions. The first one, which is actually not that expensive, is called DebugBear. It helps you track your Lighthouse score over time, and it also produces dashboards (I'll have an image of this in the post that I link to). It just makes it really easy to see how all your key front-end metrics are changing over time. He also pointed out another one called Treo, which is another product that creates Lighthouse dashboards for you, as well as tracking performance over time. It gives you a snapshot of the scores and timings from the latest test along with your main front-end score. These tools are great because they let you leverage Lighthouse not only for a snapshot but also as part of your SDLC to track how performance is changing over time. Another tool he called out is WebPageTest. This is a free tool, and it's often referred to as the Swiss Army Knife of performance testing tools. It uses real browsers, not just Chrome like Lighthouse; you can test in Firefox, Edge, Chrome, and other browsers, and you can test on real mobile devices. And similar to Lighthouse, you can test from multiple locations around the world; in his comparison, WebPageTest had four locations while the paid solutions like DebugBear and Treo had about 10 or 13, so obviously the paid options give you more locations. Also, what's cool about WebPageTest is that it has a lot of options, including capturing video, which is really handy. After the test is complete it generates a report, and you receive a set of metrics about the page that are slightly lower level and more detailed than Lighthouse's. One of the first things Andy pointed out with WebPageTest is something called the filmstrip view, which he said he uses all the time. It's very effective at showing key performance areas because everybody basically understands a filmstrip. He said it actually helps build empathy for what visitors experience, because you can see over time exactly what they're seeing and relate to it: wow, after five seconds they still can't interact with the site, I would be frustrated, I can empathize with them, what can I do to make it better for our end users? So WebPageTest gives you a richer, wider set of metrics than Lighthouse, and it also gives you the ability to really drill down into a test, glean more information, and actually understand why you got the results you did. And similar to there being commercial variants of Lighthouse, there are also commercial equivalents of WebPageTest. I mentioned WebPageTest is free, but there are also paid options. There's something called SpeedCurve, which uses the same engine as WebPageTest under the hood but also tracks performance over time, and you can set it up to send alerts hourly or daily. And another one is called Calibre.
So both of these will help you track your site's performance over time, whether in a staging environment or a real, live environment.
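To give a feel for driving WebPageTest from code, here's a hedged sketch using the official webpagetest npm wrapper. The location name, connectivity profile, and result fields shown are typical examples rather than guaranteed values, and you'd need your own API key.

```typescript
// wpt-run.ts -- sketch: kick off a WebPageTest run via the "webpagetest" npm
// wrapper and read a couple of median first-view metrics from the result.
import WebPageTest from 'webpagetest';

const wpt = new WebPageTest('https://www.webpagetest.org', process.env.WPT_API_KEY);

wpt.runTest(
  'https://example.com',
  { location: 'Dulles:Chrome', connectivity: '4G', video: true, pollResults: 5 },
  (err: Error | null, result: any) => {
    if (err) throw err;
    const firstView = result.data.median.firstView;
    console.log('Speed Index:', firstView.SpeedIndex);
    console.log('Start render (ms):', firstView.render);
  }
);
```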

Joe [00:29:15] So why do I mention these paid tools, DebugBear, Treo, SpeedCurve, and Calibre? The reason, as I mentioned earlier, and hopefully you stuck around for it, is that they all have APIs. And why is an API critical? When a product has an API, it allows you to easily invoke a test on demand and get the results back when it completes, which means you can then integrate it into your CI/CD pipeline; making the API calls from something like Jenkins is very easy. So with a paid product that offers these APIs, you can start integrating them into your CI/CD pipeline over time, and as you make changes you can track how those changes are affecting your Lighthouse score or your absolute raw timings in WebPageTest. That's really critical, because you can start integrating these insights into your team's CI/CD pipelines, and a paid solution allows you to access this functionality. And Andy mentioned that building performance testing into your continuous integration cycle is, in his experience, the best way to stick to good performance practices, because obviously, if your team is getting feedback after every check-in, they're going to be more aware of how the way they're coding and developing software is impacting performance and ultimately the end-user experience, which then impacts the money or the value your application is giving to your users in the wild.
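As a sketch of that CI/CD gate idea, the script below shows the general shape: fetch a performance score from whichever API you've chosen and fail the build when it drops below a budget. getPerformanceScore() is deliberately a placeholder, and the budget value is arbitrary.

```typescript
// ci-perf-gate.ts -- sketch: a pipeline step that fails the build when the
// front-end performance score falls below an agreed budget.
const BUDGET = 70; // minimum acceptable Lighthouse-style performance score

async function getPerformanceScore(url: string): Promise<number> {
  // Placeholder: wire this up to your chosen API (Lighthouse run programmatically,
  // PageSpeed Insights, WebPageTest, DebugBear, SpeedCurve, Calibre, ...).
  throw new Error('connect this to your performance tool of choice');
}

async function main(): Promise<void> {
  const url = process.env.DEPLOY_PREVIEW_URL ?? 'https://example.com';
  const score = await getPerformanceScore(url);
  console.log(`Performance score for this build: ${score}`);
  if (score < BUDGET) {
    console.error(`Performance budget of ${BUDGET} not met -- failing the build.`);
    process.exit(1); // a non-zero exit makes Jenkins (or any CI) mark the stage as failed
  }
}

main();
```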

Joe [00:30:45] If you actually want to see the demo that Andy gave at PerfGuild, once again, just go to the show notes of this episode, where I'll have a link to the actual session so you can see this in action. So some parting words of wisdom. As you know if you've been listening for a while, I always ask my guests for one piece of actionable advice. Andy's one piece of actionable advice is this: at the beginning of this episode, we mentioned the massive improvement in user experience that Andy got from the response time change in the Android retail example. But he said it's not always that simple. More often, you make a series of small, incremental gains that eventually add up to larger ones, and the same is true for building front-end performance into your workflows. He highly recommends you start simple. Start perhaps with PageSpeed Insights, Lighthouse, or one of the commercial services. Set some limits around where you currently are and use them to help educate your team and to see over time whether things are getting better or worse as you make changes. Then, over time, you can start working on more advanced speed improvements. Just having that high-level performance metric can help create more conversations, and you have a number that you can actually go to and ask, is it getting better? Is it getting worse? It's just an indicator; it's not the be-all and end-all of performance, but at least it gives you something to start with. And just having that one performance score that Lighthouse gives you is an awesome starting place. Also, remember, Google uses page performance as one of the ranking factors in its search algorithm. So if search is important to your application, your company is really going to want to be invested in the front-end performance of your application as well, because obviously, you want your site to rank high when people do Google searches.

Joe [00:32:34] So hopefully you found value in this episode. I mentioned a lot of things in this episode, so in order to get it all, you need to head on over to testguildcom.kinsta.cloud/a328. And while you're there, make sure to click on the try it for free today link under the exclusive sponsors section to learn all about SauceLabs' awesome products and services. And as I mentioned, if you're listening to this on the day it's been released, you have one week, one week, for the early bird specials. It's going to be the biggest Automation Guild ever. It's the fifth annual online conference dedicated 100 percent to automation testing. We've been doing this since 2017; it's not taking advantage of Covid-19, as you see with all these other events going on. We've been doing this for five years, and we have it down to an awesome system that our attendees seem to really like and find very engaging. If you haven't checked it out already, head on over to testgu-21304.arnt-nyc.servebolt.cloud and definitely register for Automation Guild 2021. So that's it for this episode of the Test Guild Automation podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers!

 

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

Resources

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}