Reliable & Fast Synthetic E2E Monitoring with Tim Nolet & Hannes Lenke

By Test Guild

About This Episode:

How’s your automated API and E2E monitoring strategy? In this episode, Tim Nolet and Hannes Lenke, co-founders of Checkly, share why active monitoring is essential for modern development and DevOps, how Puppeteer and Playwright fit in, and why monitoring production is great, but catching bugs before they hit production is even better. Discover all the benefits of leveraging your existing automated tests for synthetic monitoring in your pre-production and production environments. Listen up!

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Tim Nolet

Tim Nolet, Founder of Checkly

Tim is the founder & CTO of Checkly. He has over 20 years of experience as an Ops Engineer turned Developer.

About Hannes Lenke

Hannes Lenke

Hannes is co-founder and CEO of Checkly, the API and E2E monitoring platform for modern Developers. He started his career as a Developer and decided ten years ago to found TestObject, a mobile testing SaaS. He sold this company to Sauce Labs in 2016 and became their General Manager in Europe. Three years later, Hannes decided to join Checkly.

Connect with Hannes Lenke

Full Transcript: Hannes Lenke and Tim Nolet

Joe [00:00:55] Hey, Hannes and Tim! Welcome to the show.

Hannes and Tim [00:01:34] Hi, Joe. Thanks for having us.

Joe [00:01:36] Awesome. Great to have you all. So I'm really excited to talk to you. Playwright has been a really hot topic I've been hearing more and more about, and usually it's Selenium you hear about. So I'm just curious to know, before we really dive into it: what's the deal with headless automation? I think you also do something with the theheadless.dev site, which I think is all about headless automation as well.

Tim [00:01:57] Yeah, that's correct, Joe. So it's a long story. If we're going to talk about Playwright, we should probably also talk about Puppeteer in the same breath, I guess. To be very short, we think, or I think specifically when I started Checkly, my company, that Playwright and Puppeteer are both automation frameworks to talk to browsers from JavaScript or Python. They are revolutionizing what you can do with browsers in code. So it's not only testing, it's not only monitoring, which is what we specifically do. There's a whole bunch more you can do, and it's hard to answer in one sentence what the revolution is, because there are so many different topics. So I guess we have enough to chat about. Faster is a term you hear more often, more stable, easier to use, more modern. It's a little bit hard to pinpoint exactly what "modern" means here. I think that's it in a nutshell.

Joe [00:02:56] Hannes, I don't know if you have anything to add. I've been a big believer in a more headless type of automation. A lot of people, even the creator of Selenium, told me, you know, if you have a thousand Selenium tests you are doing automation wrong. So any thoughts on that?

Hannes [00:03:09] Yeah. So I mean, what headless automation basically changes is, first of all, it's faster to execute, and it's easier. Headless is nothing more than executing your browser without all the UI, which is essentially what makes it faster. And if you are able to execute automation scripts faster, then you are able to do a lot of sophisticated things in, for example, your development workflow without long waits, essentially. And this is just opening a whole space of use cases. It's not only Puppeteer and Playwright that you can execute headless; there are also a lot of other frameworks. And essentially that's one ingredient that is changing how developers look at automation, too.

Joe [00:04:01] So I think another thing people struggle with as we go towards headless is the programming skills that are needed, because sometimes it's overcomplicated, or it's JavaScript and people aren't familiar with JavaScript. With Puppeteer and Playwright you really need to know how to develop in JavaScript. So I've been seeing a lot of recorders coming out. I believe you also have a headless recorder, and if so, can you tell us a little bit more about what the headless recorder is all about?

Tim [00:04:28] Sure. Maybe I should pick that one up. I built our current headless recorder offering. First, a small correction, Joe: Puppeteer and Playwright are also drivable now from Python and, I think, from C# as well. There's quite some development there. However, the main projects are being developed, Puppeteer by Google and Playwright by Microsoft, as JavaScript, or actually TypeScript, frameworks. And many of the use cases and examples you will see are in JavaScript, of course. But yes, there is still a little bit of, well, I wouldn't call it a stumbling block, but you need to be able to code to drive Puppeteer and Playwright. However, you can generate code, and that's what our headless recorder does. It's a Chrome extension. It's on the Chrome Web Store, it's on GitHub, it's an open-source thing. We're doing pretty well; we have twelve thousand GitHub stars, so it's a lot of virtual Internet money that we have there. And what we essentially do is: you press record, you click on a bunch of stuff on your website (it only works on websites right now, not mobile), then you click stop, and we generate that code for you. So the output of our recorder is just more code. This is kind of important, because we do believe that's what you want in the end. With the recorder, I didn't want to build a thing that hides the code from you, or that uses some proprietary format that you then execute. It literally just…it poops out JavaScript for you. And sometimes you need to do a little bit of editing, because the world isn't perfect. But yeah, we're doing quite well, and we have some bigger plans for it to make it even greater and nicer and fulfill some other use cases.
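
For readers who haven't seen recorder output: the generated script is ordinary Puppeteer code, roughly along these lines. This is a hedged sketch, not actual recorder output; the URL and selectors are made-up placeholders.

```javascript
// Illustrative sketch of the kind of script a click-through recorder emits:
// plain Puppeteer calls that navigate, click, and type. Because it is just
// code, you can edit it afterwards like any other script.
async function recordedFlow() {
  const puppeteer = require('puppeteer'); // required lazily so the sketch reads standalone
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/');            // recorded navigation
  await page.click('#nav-products');                  // recorded click
  await page.type('#search', 'insurance');            // recorded keystrokes
  await page.click('button[type=submit]');
  await browser.close();
}
```

The point Tim makes holds here: the output is plain JavaScript, so there is no proprietary format between you and the framework.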

Hannes [00:06:10] So essentially it's a kickstart, right? A kickstart for you to get started with your scripts, and then you can take that and turn it into a larger automation suite, which could test your websites or monitor your websites.

Joe [00:06:29] I assume this is open-source, and if so, why invest in an open-source project? I know you guys are very busy. You also have a thriving business. So why funnel off resources to go to an open-source type project?

Hannes [00:06:41] So, I mean, our users are mainly JavaScript developers, or let's call them JavaScript experts. They need an easy way to get started, and the easier the way in is, the better, because then users get to success faster. And this is why we're investing in open-source and investing in the headless recorder. Another initiative we're driving is theheadless.dev, which is essentially a portal to teach you and educate you on headless automation, Puppeteer, and Playwright. We hope that these initiatives help the community to grow. The community around Puppeteer and Playwright is relatively young but fast-growing, and we hope we can add something here. We think that our investment will help the community grow even faster, and this is important for our business success in the end, too.

Tim [00:07:44] I think there's one thing I can add there. The headless recorder that we have right now and theheadless.dev, which is this portal or knowledge base, fulfill part of the lifecycle when you, as a user, adopt a framework like Playwright or Puppeteer, which we have strategically invested in for our paid product. This is very core to what we do, so we are all in. What you typically see is that there's someone who is interested, heard the name, wants to play around. So there's a part of the education that we need to do; that's theheadless.dev. The next part is helping people who are really going to try it out to write their first code; that's where our headless recorder comes in. Hopefully we're doing a good job, and we're nice people, and we're a good business, so they will find a use for our product. And then, of course, there is the question of what you do with those. I run a bunch of tests, or checks as we call them, and then I have some output, and something breaks or something goes right, and you want to look at what the results are. That's kind of the next step that we're working on right now. So we see the open-source initiatives as really good for, as Hannes mentioned already, the onboarding of people that might later want to use our product. But also in general, even if they never use our product at all, we benefit if the community grows. The level of knowledge goes up in the wider community.

Joe [00:09:14] So, you know, Hannes, I think you've mentioned developer or dev three or four times. A lot of times when people think of automation tools in testing, they automatically go to a tester or QA. So do you actually see developers embracing these types of tools?

Hannes [00:09:28] Yes. What we see in the market is that more teams are picking up DevOps, and with DevOps they are also responsible for the reliability of their application. That means reliability in the development phase, but also in production. So they get called if something is wrong in their specific service, on their specific page. So there's a need for tools that fit into the developer workflow. Traditional testers are working together with developers, and they want to adopt tools that their colleagues can also understand and help with when building out an automation suite.

Tim [00:10:12] And I can make it even more practical. When I started coding Checkly roughly two years ago, I had some other jobs before that, of course. And these were jobs at mostly startups, small dev teams, maybe the whole company forty people. We didn't have QA people. That was not on the hiring list. We had engineers, and engineers did everything. So it seems, and this is maybe anecdotal, but there's definitely a trend where it's just not something that smaller businesses or SMBs invest in right now. They expect developers to do it. You do the QA. You do the monitoring. This was really the underlying trigger for me to actually start building Checkly: taking on that role where you wear multiple hats. I'm the coder, I'm the tester, I'm the operator. In the end, I'm doing everything. And if these tools talk to each other and I can use the same language, the same frameworks, and the same things that I know, there's less overhead for me and less complex context switching.

Joe [00:11:15] So I think another thing people struggle with is that there are so many tools out there. How do they know what to choose? You have Selenium, you have Puppeteer, you have Playwright. Are there any criteria you use, or give your customers, for when you should use Playwright in one situation and Puppeteer in another? Or is it just personal preference?

Tim [00:11:36] That's a good one. I'll stick to Playwright and Puppeteer, because I'm pretty sure Hannes has way more knowledge about Selenium, having worked at one of the bigger vendors that has that at its core. Playwright is relatively new. I wouldn't call it a fork of Puppeteer, but many of the folks that worked on Puppeteer are now working on Playwright. So there's definitely a large overlap in the API, not so much in the actual code on the back end. What you see happening right now is that Playwright is on a rocket ship. They are developing really fast and adding all kinds of, let's call them higher-level features, that have to do with recording videos, recording test scenarios, stuff like that. This is all very recent, and some of that shipped like two weeks ago. What I see right now is that you can't go wrong with either. They have very similar features. But, and hopefully the people at Google will not get angry, I think the Playwright team is really nailing their feature roadmap. I have been in situations already at our company where there was a support ticket, something didn't work, a page didn't render correctly, and I said, why don't you flip from Puppeteer to Playwright? And the bug went away. It worked. That was, for me, a big signal; the reliability went up, basically. The rest we'll have to see. I think the coming six months are going to be very interesting in terms of how both frameworks level up and maybe start, you know, going in different directions. That's possible. I don't know yet, because I don't have all the details from the companies building this. We're actually quite in the middle of it. And there are other players too. Cypress is a big one, of course; they have their own thing. I'm very curious about what's going to happen there in the coming six to twelve months.

Hannes [00:13:27] I can just add that Playwright is making more of an effort to support cross-browser testing, which could be the main differentiator in the end for testers who really want to make sure their application looks right on all the different browsers out there.

Tim [00:13:42] That is a good one. I completely forgot that the big selling point when Playwright came out of the gate was: hey, we support Firefox, WebKit (so the Safari engine), and Chromium, all at the same time. And I think they're almost at complete feature parity right now across the three big browser families.

Joe [00:14:02] Nice. So Hannes, I'm just curious to know from your experience then. I think a lot of teams have a lot of investment in Selenium UI tests. Do you see teams replacing Selenium with Puppeteer or Playwright, or do you see a mixture: maybe 10 percent Selenium end-to-end, more front-end type tests, and then lower-level, faster headless testing using tools like Playwright and Puppeteer?

Hannes [00:14:26] Yeah, good question. What I've seen in the past is that especially newer projects, projects with more modern applications, start to look left and right and really try to understand what the right framework is. These are the teams that normally adopt new technologies like Puppeteer, Playwright, or Cypress. Whereas when you have already invested in a lot of Selenium tests, or other frameworks, and they've worked out for you on a larger scale, why should you change? Automation, and coming up with a big automation suite, is always an investment, and you should always ask: does it make sense to throw that investment away, and do I gain enough benefit from starting something from scratch? It's like building software. This is a question teams should be asking themselves, and sometimes the answer is no: stick with what you have and just be happy with it.

Joe [00:15:28] So we talked a lot about the testing piece. And we've already mentioned DevOps and monitoring and the whole lifecycle. A big shift I've been seeing is people focusing on the full pipeline for the software development lifecycle, and a big shift towards monitoring: utilizing the scripts to give them not only functional test information but also some sort of production or health information. You talked a little bit, I think in the pre-show, about synthetic monitoring. So before we get into the monitoring piece, for folks that may not know: what is synthetic monitoring?

Hannes [00:16:04] So we talked about automation scripts, and synthetic monitoring is, in the end, a scheduled automation script. Something which runs on a schedule, maybe every minute, maybe every 10 minutes, you name it. Synthetics enable you to have an artificial user. It's not a real user interacting with the application, but something artificial: a script which simulates the interactions with your application on the production system. And if that goes wrong, then you might want to take a look at it, and you might want to get a call to fix the issue. That's what a synthetic is. The benefit is that if these artificial users face an issue, it's not a problem for you as a company in the same way. If a real user faces an issue, you maybe lose a customer, or lose a user, or at least lose the trust of that customer. And this is why synthetics are important for companies.
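
Stripped to its essence, a synthetic check is exactly what Hannes describes: a script with assertions, run on a schedule, reporting pass or fail. A minimal sketch of that idea (the function names are illustrative, not Checkly's actual API; the page fetcher is injected so the logic is visible without real network access):

```javascript
// One synthetic check run: fetch the page (via HTTP or a headless browser),
// run every assertion against the result, and report pass/fail plus timing.
async function runCheck(fetchPage, assertions) {
  const started = Date.now();
  try {
    const page = await fetchPage();              // the "artificial user" visiting the app
    for (const check of assertions) check(page); // each assertion throws on failure
    return { ok: true, durationMs: Date.now() - started };
  } catch (err) {
    return { ok: false, durationMs: Date.now() - started, error: String(err) };
  }
}

// Scheduled, this becomes synthetic monitoring, e.g. every 10 minutes:
// setInterval(() => runCheck(fetchHomepage, homepageAssertions).then(report), 10 * 60 * 1000);
```

Everything else in a monitoring product (locations, alerting, dashboards) is built around this small loop.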

Tim [00:17:08] Yeah, correct. And maybe just a very practical example that I always give customers and potential customers: hey, you've got a web app that requires a login. You log in, like in any SaaS tool, your dashboard loads and fills up with a bunch of stuff that needs to work, right? Yeah, that needs to work. Good. Then you create a script, use whatever you want, we recommend Playwright or Puppeteer, that does that action. Create a test user or something like that, but make it representative of your application. That script logs into your application, loads the whole screen, and then you write a bunch of assertions: hey, this needs to be there, that needs to be there. It's basically an end-to-end test, and you run it every 10 minutes. What's interesting is that doing this gives you a whole bunch of information that your application is working, because in most sophisticated applications, the screen that loads makes a whole bunch of background calls to fetch data from services, from your database, et cetera. If you have a green checkmark on a dashboard, that tells me: yes, this page is working exactly as I want, right now, and in 10 minutes, and in 10 minutes after that. It gives you a lot of confidence, because, well, we all know that unit tests and end-to-end tests, while you run them before production, involve mocking. They involve all manner of things where it's similar to real life, but it's not really real life. And you can get that last bit of reliability, or trust that you just shipped the correct release, just by having it monitored on a loop every 10 minutes. That's basically it in a nutshell.
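
The login-and-assert check Tim describes might look roughly like this in Playwright. A sketch only: the URL, selectors, and environment variables are placeholders for a hypothetical app, while the Playwright calls themselves (goto, fill, click, waitForSelector) are the real API.

```javascript
// Hedged sketch of a scripted login check. Everything app-specific below
// (URL, selectors, credentials) is a placeholder, not a real application.
async function dashboardCheck() {
  const { chromium } = require('playwright'); // required lazily so the sketch reads standalone
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();
  try {
    await page.goto('https://app.example.com/login');
    await page.fill('#email', process.env.CHECK_USER);    // dedicated test account, never a real customer
    await page.fill('#password', process.env.CHECK_PASS);
    await page.click('button[type=submit]');
    // Assertions: the dashboard and the widgets that depend on backend calls must render.
    await page.waitForSelector('#dashboard');
    await page.waitForSelector('.account-summary');
    await page.waitForSelector('.recent-activity');
  } finally {
    await browser.close();
  }
}
```

Run on a 10-minute schedule, a green result from this script implies the login flow, the page, and the backend calls behind those widgets are all healthy.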

Joe [00:18:48] Great. Now I'm older, so I want to see if this is the same scenario. Back in, like, 2000, we had tools like WinRunner. We'd do a kind of business process monitoring, where we'd install our scripts on machines and physically ship them to our insurance agents, and they'd plug them in. Those tests would run every 10 minutes, and if we got, like, two reds in a row, we'd get alerted, and we'd know that there was something wrong with the website. Obviously, technology has changed and things have become more effective, but is this the same type of principle?

Tim [00:19:20] This is exactly what we do minus the shipping of the box. Yeah. So it is the exact same thing.

Joe [00:19:27] Cool, cool. So are you then running in different locations as well? Can you do that? Because you're able to use the cloud now, you could run scripts in maybe Asia, and in Europe and the US, and then determine if you have issues in certain areas as well.

Tim [00:19:41] Yeah, I think one of the interesting bits, which we actually didn't discuss yet but has to do with these new frameworks, is that we leverage AWS Lambda, so we can run wherever Amazon runs. We have, I think, 20 locations now. We just added Milan, in Italy, and South Africa was added recently, which is great. The interesting thing is that the workloads we run are fairly lean, because of these frameworks; they're more resource-efficient. So there's very, very little overhead for us as a small company to distribute those workloads all over the globe. We give the user a nice UI: they can click their little country flag and run these checks from anywhere on the planet. And you get interesting extra information. Not only does my app work, but how fast does it load? Are there any weird issues going on with proxies around the world? And we do see quite some interesting bits where, you know, an application works from India but doesn't work from, I don't know, the US, because some part of the Internet in between broke. That happens quite easily.

Joe [00:20:51] Yeah, one hundred percent. So, you know, I know you have a tool, Checkly. We talked a little bit about it, and obviously this seems like the functionality it handles. Can you give us a high-level overview of what a lifecycle would look like for someone using Checkly? Because you did mention that the lifecycle is important. I'm just curious to know, end-to-end using your solution, how do you unite everything, what does it look like, and what's the output?

Tim [00:21:13] Checkly does two things: we do synthetic monitoring using Puppeteer and Playwright, and we do API monitoring, because some applications are APIs; they don't have anything in a browser. That's kind of the core of Checkly, because you want to have both next to each other. Weirdly enough, this is sometimes overlooked: either it's an add-on to an existing API platform, or, the other way around, there's a synthetic monitoring platform where you can add a little bit of API monitoring. We just treat them as exactly the same; they are our two core tenets. So what happens is: you log in, you create a new check. That's what we call it, hence Checkly. If you want to do a browser check, as we call it, so render something in Chrome and assert on it, we give you a code editor, a whole bunch of examples, and of course the recorder and all the educational material, and you code your first check. This could literally be as simple as "go to my homepage" and, using an assertion library (we just use the very well-known open-source ones that are out there), assert that the title is correct, or that a specific name is on the screen, or something like that. This could be five lines of code. You press save, and we run it for you every 10 minutes. We can give you a whole bunch of fancy options, but that's the core of it. The last step that every customer of ours takes is: when it breaks, where should we alert you? That's where the monitoring part comes in, and we have basically all the options you would expect there: email, SMS, Slack. PagerDuty and Opsgenie are services we integrate with, so typically for the on-call service you want to get a message there when things break.

You can add to this the API part. If I showed you a screenshot, people would recognize it: oh, that's one of those API request builders like Postman or Insomnia, where you put in a URL, edit all the headers, add authentication, all that kind of stuff. And what you end up with, in the end, is a very nice-looking dashboard that shows you all the key health data of your application and APIs. The API checks we can run every minute for you; the browser sessions that we spin up with Playwright, we run every ten minutes, and for some customers actually every five minutes, I'm not lying, because that's probably more than enough. And that's our starting point. We have a whole line of extra things if you want to take it to the next level. This comes down to a lot of extra scripting you can do with your own code, and actually triggering these suites of tests, because you can call them every time you deploy something. A lot of our customers use GitHub or another service and deploy to, I don't know, AWS or Heroku, or services like Netlify or Vercel. And what we allow you to do is not only run these things every ten minutes or every minute, but trigger them explicitly every time you do a new deploy. So this is kind of like an end-to-end test that comes right after your deployment, and hopefully also catches a lot of bugs before they really start annoying your customers.

Joe [00:24:26] So I guess one objection someone could raise is: because you're in production, this may just be a superficial test, but a lot of the business value might be in something like adding an insurance policy. And obviously, you're not going to delete a policy or do destructive types of activities in production. How do you handle that, so you're not just getting high-level information but actually getting at the critical areas of your business? Does the tool help you with that, or are there different techniques you can use to get around it?

Tim [00:24:54] Yeah, so this is a very good question. On the browser checks, what we call an end-to-end test with Puppeteer or Playwright, you basically have all the power there. You can say: hey, let's create an insurance policy in the script, do a bunch of checks, like, okay, that should be there, that should be there, and then delete it again in exactly the same script. So you can do a little setup and teardown. That's pretty powerful already. Interestingly, for the API part, we did exactly the same. We allow you to run a script before the API check and after the API check; we follow the exact same pattern that you see with traditional testing or end-to-end testing: setups and teardowns that create a well-known situation which you can act upon, and then remove it. We do that ourselves with our own APIs, and lots of our customers are using it, because, well, it's kind of the holy grail of testing in production, if we can call it that. Two remarks, though. One: use a test user. Do not use an actual customer, that's bad, or some admin user, that's not good either. So we advise you to always create a test account. Two: keep it simple. I've seen some users, and I see this in the wild too, go overboard. They do too much, and that last five to 10 percent is maybe not necessary. Keep your scripts as tight and as short as you can, because they will get more complicated, and the Internet can be flaky, and there might be some glitch, and then you need to add more error handling inside your script, and it can get bloated very quickly. So those are always the two tips: use a test user, and keep it as short as possible.
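
The setup/teardown pattern Tim describes (create a throwaway resource, assert against it, then remove it even if the assertions fail) has the same shape in browser and API checks. A generic sketch, with all three steps injected as placeholders:

```javascript
// Create a test resource (e.g. a test insurance policy under a test account),
// run the check against it, and always tear it down afterwards so the
// production system stays clean even when an assertion throws.
async function withTestResource(create, check, remove) {
  const resource = await create();   // setup
  try {
    return await check(resource);    // act + assert (may throw)
  } finally {
    await remove(resource);          // teardown runs on success and on failure
  }
}
```

The `finally` block is the whole trick: a failed assertion still removes the test policy, so repeated runs never leave junk behind in production.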

Hannes [00:26:38] And if all this doesn't work, luckily these frameworks have some capabilities to mock behavior. So there's the possibility to intercept requests and mock parts of your website on the fly.
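
In Playwright, the interception Hannes mentions is done with `page.route`, which lets a check answer selected requests with canned responses instead of hitting real (possibly destructive) endpoints. A hedged sketch with a made-up URL and endpoint; `page.route` and `route.fulfill` are the real Playwright API:

```javascript
// Sketch: mock one backend API during a browser check so the page renders
// without touching the real service. URL, endpoint, and selector are placeholders.
async function checkWithMockedApi() {
  const { chromium } = require('playwright'); // required lazily so the sketch reads standalone
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();
  try {
    // Answer any call to the (hypothetical) quotes API with a canned payload.
    await page.route('**/api/quotes/**', route =>
      route.fulfill({
        status: 200,
        contentType: 'application/json',
        body: JSON.stringify({ quote: 123.45 }),
      })
    );
    await page.goto('https://app.example.com/quotes');
    await page.waitForSelector('.quote-value'); // page rendered using the mocked data
  } finally {
    await browser.close();
  }
}
```

Puppeteer offers the equivalent via `page.setRequestInterception(true)` plus a `request` event handler.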

Joe [00:26:52] How long have you been running? I think two years. So do you have any feedback on how this has helped companies, like real-world case studies or things you may have found, where they said, oh wow, we implemented it here and we've got X amount of, I don't know, whatever?

Tim [00:27:08] So I think there are two cases. Vercel, formerly named Zeit (they rebranded this year), is an early customer of ours. They use it for every deployment that they do on their front end. They use it quite extensively; they have hundreds of checks running against specific API endpoints on the customer side, but they're also using this triggered setup. On every deployment, they see their results in GitHub, because we integrate with that. So in a pull request, you get a nice little green checkmark: everything is correct. And if it's not correct, we show you what's wrong. So that's a pretty massive use case they have there. Then there's another company with a really interesting product. They're in the security space, and they blew my mind. What they did was the following. They have some software that runs on-premise, which they have no control over. It's one of their products; they can't really get to it. One of their customers installs it, and then if something breaks, they're like, yeah, well, you know, we can't really do anything about it. But they do have access to a health endpoint. So what they did was integrate our product into their rollout. When one of their customers installs their product, a check gets created through our API, because we have an API for everything. And if something fails, a webhook gets fired, because we have webhooks, and that webhook calls their system. That system then sends a nice branded message to their customer: hey, it seems there's something wrong with that installation that we don't have access to. You might want to check this, and this, and this. So that was, for me, a completely amazing use case of how they automated their reliability process for that specific product through our APIs and webhooks. Pretty cool.
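
The security company's flow is plain plumbing: a failing check fires a webhook, and their system turns the payload into a branded customer message. A sketch of that translation step (the payload fields here are invented for illustration; Checkly's actual webhook schema may differ):

```javascript
// Turn a monitoring webhook payload into a customer-facing notification.
// The payload shape below is hypothetical, not a documented schema.
function buildCustomerMessage(payload) {
  if (payload.status !== 'failing') return null; // only notify on failures
  return {
    customerId: payload.customerId,
    subject: `Heads up: "${payload.checkName}" looks unhealthy`,
    body:
      `Our monitoring noticed a problem with your on-premise installation ` +
      `(check "${payload.checkName}" failed at ${payload.failedAt}). ` +
      `You might want to verify the service is running and reachable.`,
  };
}
```

An HTTP handler receiving the webhook would call this and hand the result to whatever sends the branded email or in-app notice.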

Joe [00:28:55] So, you know, once again, you've mentioned reliability a few times. Is this because you've seen a push towards reliability engineering, and do you see this falling into that type of silo? Or who would be the main person that would utilize this and put it in place?

Hannes [00:29:10] So the main push we are seeing is, as I mentioned before, that developers and development teams are now responsible for the reliability of their site. We mention reliability quite often because of basically two things. First of all, the monitoring has to be reliable. You don't want to wake up in the middle of the night because of a flaky check somewhere. So the underlying checks, scripts, frameworks, and technologies have to be very reliable across all of our customers. The second thing is that reliability of digital products is becoming more and more important; customers expect reliable products. And our customers understand that. They understand that their websites, their digital products, have to be reliable all the time. So yes, there is a shift towards reliability, maybe towards SRE, but even more a large shift in the expectations of customers and users.

Joe [00:30:17] So we've touched on a lot here. You have a newer tool, and we've touched on pretty much all the buzzwords out there. But one thing we didn't talk about is AI. A lot of times people slap AI onto a new product; I don't think I saw that on yours. Any thoughts around AI, or could you see it playing a role anywhere in the functionality on your roadmap, if it's not already there?

Tim [00:30:37] Yeah, certainly, here's the scoop: we're going full AI. Everything is AI. No, I think it neatly segues into what Hannes just mentioned. At our core, we're selling reliability, we're selling trustworthiness: I can trust my process, my apps, because I'm using Checkly and they have my back. There's a big reason why we're not using AI, or maybe machine learning, which could actually be a little bit closer. There are some interesting players that are using machine learning to update test code, basically. That seems like an interesting thing. I don't know anyone that uses it, by the way, but still, you know, I'm not dissing it at all. I honestly just see no real usage for AI for us yet; I also don't know a lot about it. But there's one thing on the machine learning side that's very interesting: screenshot diffing is a hot topic, I think. What are they called, that company with the nice hedgehog logo?

Joe [00:31:41] Percy?

Tim [00:31:42] Percy. Yeah, they were recently snapped up by BrowserStack. I don't know if Percy uses machine learning, I've no idea, but they do screenshot diffing: looking at a screenshot to see if something has changed. That is a very good use case for something like machine learning, because how do you know whether a difference in a screenshot is actually meaningful? I think we're going to see some interesting things there. But for now, we don't have any big plans in that direction. Hannes, or did I miss our board meeting where we decided we're going somewhere else?

Hannes [00:32:13] You're right. So the important piece here is that we're integrating open frameworks, open Puppeteer, open Playwright. And there's always the possibility that someone implements AI or machine learning there in a way that helps our users, and then the likelihood is that it really helps our users as we integrate it (unintelligible).

Joe [00:32:35] Okay, Hannes and Tim, before we go, is there one piece of actionable advice you can give someone to help them with their automation testing efforts? And what's the best way to find you, contact you, or learn more about Checkly?

Hannes [00:32:45] So my advice is: please check out our open-source projects. One is theheadless.dev; just read about Puppeteer and Playwright there, and maybe go through the basic tutorials. Maybe also check out the headless recorder and see how that Chrome extension helps you create your first scripts. And maybe also have a look at Checkly, at checklyhq.com. That's how you can reach us. We're also on Twitter, and we use Intercom, so it's easy to reach us everywhere.

Tim [00:33:19] That is correct. I have nothing to add to this. Headless dev is, by the way, theheadless.dev; we got a Google .dev domain. It's pretty cool.

 

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
