Automation Engineering Productivity with Dushyant Acharya

By Test Guild

About This Episode:

Software development is moving faster than ever in a continuous delivery model, and traditional test automation is sometimes not enough to keep up. In this episode, technology leader Dushyant Acharya shares how the engineering productivity paradigm helps expand test automation and make it more effective. Discover how to increase software development productivity across the development workflow, including test automation, infrastructure, and efficient product delivery.

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Dushyant Acharya


Dushyant is a technology leader with expertise in building and running high-performing engineering teams that deliver quality products at scale. He currently runs the Payment Platform's Engineering Productivity and DevOps teams at Ripple Labs, which aims to provide a frictionless experience for global payments on blockchain technology.

Dushyant also holds a strong academic background, with a master's in Software Systems from BITS Pilani, India, and an MBA from the Haas School of Business, UC Berkeley.

Connect with Dushyant Acharya

Full Transcript Dushyant Acharya

[00:00:01] Intro Welcome to the Test Guild Automation Podcast, where we all get together to learn more about automation and software testing with your host, Joe Colantonio.

[00:00:16] Joe Colantonio Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. Today we'll be talking with Dushyant all about engineering productivity, machine learning, and all kinds of things. He's a technology leader with expertise in building and running high-performing engineering teams that deliver quality products at scale, which I think is really unique because a lot of people stumble on this, and we're going to touch on it. He currently runs the Payment Platform engineering teams for Engineering Productivity and DevOps at Ripple Labs, which aims to provide a frictionless experience for global payments on blockchain technology. That's another thing we might touch on, so you definitely want to listen to this episode. Dushyant also holds a strong academic background with a master's in software systems and an MBA, so he has a lot of knowledge. You don't want to miss this episode. Check it out.

[00:00:59] Intro The Test Guild Automation Podcast is sponsored by the fantastic folks at SauceLabs. Their cloud-based test platform helps ensure your favorite mobile apps and websites work flawlessly on every browser, operating system, and device. For a free trial, visit testguildcom.kinsta.cloud/saucelabs and click on the exclusive sponsor's section to try it free for 14 days. Check it out.

[00:01:26] Joe Colantonio Hey, Dushyant, welcome to the Guild.

[00:01:32] Dushyant Acharya Hey, Joe! I'm really glad to be here, and it was a very generous introduction. Thank you very much for your kind words.

[00:01:38] Joe Colantonio Yeah, I really love your background. There are a lot of things we could touch on. The first thing that caught my eye was that you talked about something called a productivity paradigm. When people take on test automation, sometimes they think of it just as functional automation. I'm curious: what is the productivity paradigm, and how does it relate, if it relates at all, to test automation?

[00:01:58] Dushyant Acharya You know, Joe, I really like that question. Let me start by saying that in all the years I've been doing quality-related work, the work has been evolving a lot. I started with manual testing ages ago, and now people don't like to do manual testing, for the right reasons, and automation rightly focuses on how you actually test your feature. But automation limited to the feature itself is not sufficient, because nowadays what we're looking for is continuous delivery of your product. How fast can you deliver it? Is it monthly, weekly, or even daily? When that happens, you're not just focusing on one piece of automation; you're focusing on the entire pipeline of your code: what happens after the developer writes it, how quickly it can get to production, and where you have chances to increase productivity. So when I talk about productivity as a paradigm, about engineering productivity as a space, that's how I try to look at it.

[00:02:53] Joe Colantonio Nice. So I think what would throw a lot of people off is that a lot of times they struggle just with functional UI automation. How could they scale? As you said, we have to scale; we're trying to deliver safer, quicker, and faster. But if people are struggling with just UI automation, how can they expand beyond it to help with the other activities that may be slowing them down?

[00:03:11] Dushyant Acharya Well, if I had to give the easy answer, the key is to break down your problem into multiple pieces. You cannot solve everything together; you can't aim too high and wait for one big delivery that automates everything. If you already have the UI covered, then think about what's next. Decide what to automate next by looking at what is failing first. So start measuring; being data-driven in that kind of situation helps you a lot. Start seeing where your team is going wrong, where the failures are coming from, and where you're spending more time in delivery, and try to automate that part first. Is it the back end, the database, is it performance, or is it just compatibility between how systems interact? It will become clear once you start collecting data and breaking down your problem into multiple pieces.
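As a concrete illustration of that "start measuring" advice, here is a minimal sketch that ranks CI pipeline stages by failure rate and time spent, so the next automation target comes from data rather than guesswork. The run records and stage names are made up for illustration; real numbers would come from your CI system's API.

```python
# Minimal sketch: aggregate CI results to find which pipeline stages fail
# most and eat the most time. The input records below are invented examples;
# in practice you would export them from your CI system.
from collections import defaultdict

runs = [
    {"stage": "ui-tests", "passed": False, "minutes": 42},
    {"stage": "ui-tests", "passed": True, "minutes": 40},
    {"stage": "db-migration", "passed": True, "minutes": 5},
    {"stage": "perf-tests", "passed": False, "minutes": 65},
    {"stage": "perf-tests", "passed": False, "minutes": 63},
]

totals = defaultdict(lambda: {"failures": 0, "minutes": 0, "runs": 0})
for run in runs:
    stats = totals[run["stage"]]
    stats["runs"] += 1
    stats["minutes"] += run["minutes"]
    stats["failures"] += 0 if run["passed"] else 1

# Rank stages by failure rate, then by total time spent. The stages at the
# top of this list are the first candidates for automation or stabilization.
for stage, s in sorted(
    totals.items(),
    key=lambda kv: (kv[1]["failures"] / kv[1]["runs"], kv[1]["minutes"]),
    reverse=True,
):
    print(f"{stage}: {s['failures']}/{s['runs']} failed, {s['minutes']} min total")
```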

[00:03:59] Joe Colantonio Nice. I know a big piece of the paradigm is getting developers involved, because, like you said, it's not just about focusing on UI automation. So what kinds of activities can people be looking at if they're not aware of them? Maybe they just know they can automate a browser but aren't aware of other opportunities. Are there any areas you think are quick wins that people don't think about when it comes to automation?

[00:04:22] Dushyant Acharya As someone who's working on automation, you have to look past your own test cases; that's the idea. If I'm the engineer working on automation, my focus isn't only the feature test, that is, "Hey, what is my feature doing, and is it doing what it's supposed to do?" I'm also asking how I can test my feature faster, how I can build my feature faster without issues, and how I can actually get it across the finish line, in all of the non-production and production environments. During that process, my automation is not only focused on one feature; it's focused on the experience my engineers have and my code has on its way to the customer. On the other hand, if I'm a developer, we're now asking developers to look beyond their code. You're not just responsible for writing code; you have the responsibility to write quality code, which means you need to do some kind of validation of your code, and you need to understand what the feature is and how you are testing it. Yes, automation gives you all the tools you need to do that job, but you cannot ignore it. Not anymore.

[00:05:29] Joe Colantonio Great. So a question we get asked all the time, especially when doing continuous delivery, is how do we measure success? How would you measure it? I know it's a terrible thing to say "bugs found," because that's always been a terrible metric. But with delivering software faster, what would you usually measure teams by to see how they're coming along in this paradigm shift?

[00:05:48] Dushyant Acharya Oh, that's an interesting question. You know, I'll go back to my studies on that. Sometimes the measure you put in place creates the wrong incentive: the team focuses on the measure itself, so instead of helping you do something better, it drives the wrong behavior. My first instinct, for example, would be that a good thing to measure is how many releases I'm doing, how many deliveries without any issues. But then people may start cutting smaller and smaller releases, and you'll be doing more releases, not releases without issues, while not actually delivering substantial feature work. So what we try to look at is the experience of each stakeholder. If I'm an engineer, what is my experience? If I created the code, what metrics tell you, "Hey, I need to spend two days writing test cases to test the feature I wrote in two days"? Two days of testing on top of three days of development is not good. But if those two days can become two hours, that works. So the experience of the people involved may be the better metric. And one experience you're always looking at is definitely your customer's, so it comes back to your customers' experience and how frictionless the deliveries they're getting are: how many of them, and how big they are. If I have to tell my customer, "Hey, expect some downtime because I'm releasing something," that's obviously not a good experience. What if I can release it without the downtime? That is what I think is worth measuring.

[00:07:23] Joe Colantonio That's a great metric. I know a trend I've been hearing more about is site reliability engineering, where teams are able to measure calls from customers when something went wrong, and that could be a good way to determine how well the team's delivering value: calls going down, customers becoming more satisfied. Do you do any type of survey? Is there anything less scientific you use to gather that data from customers?

[00:07:45] Dushyant Acharya We do. We do surveys. As I mentioned, the customers are very important. The other thing that comes in handy is to actually start with: who is your customer? I wear two hats, engineering productivity and DevOps. When I have my engineering productivity hat on, my customers are the engineers, and they're focused on their own productivity. It's about continuous integration and testing, it's about the pipeline, it's about test automation, it's about the tools that help them do their job; they are my customer, so my focus should be on them. What are their pain points? What are they looking for? When I'm wearing my head-of-DevOps hat, I'm working with engineers as well as with operations and SRE, so it changes a little bit: what are they looking for? Sometimes security, reliability, and everything else come as part of the package, and you have to figure out how you deliver that. The only way to get there is to talk to your customers. Your external customers are very, very important, too, but with internal customers, I think simply talking helps. That's something people avoid in many places, and it's surprising how easy it is to talk to people in your company and get the input, right?

[00:08:57] Joe Colantonio That's interesting: talking. That's a good point. Sometimes the lowest-tech thing is the hardest thing to do, for some reason.

[00:09:03] Dushyant Acharya Yeah.

[00:09:03] Joe Colantonio Cool. So you mentioned you manage multiple teams, and I assume you always need to be researching or finding tools that could help the company or help your teams, as you say, become more productive. How do you do that? How do you know what's worth focusing your time on with all the tools in the market, and all these claims about AI and machine learning? Not to exaggerate, but how do you know which tools to focus on that are actually going to add value to your team?

[00:09:31] Dushyant Acharya Well, to be honest, I don't spend as much time researching as I'd probably like to. You always have to balance how much you have to deliver on a given day against how much time you can spend on something. That can become an excuse, "I don't want to improve because I don't have time," so what I try to do is find a middle way, and there are a couple of things I look at. First, if you've actually identified the pain point, it makes more sense to focus on how to fix it. Is that a new tool, an existing tool? Sometimes we change tools, and sometimes we want to write something ourselves. If you actually have no idea, then you go into real market research. I did that once when I was working on machine learning testing: how do I make my test cases intelligent when I don't know what the output is while testing a machine learning algorithm? We researched multiple things, and then we realized, "Hey, my product is already using Google Dialogflow, so why don't we use the same thing, since we already have the expertise?" It may not be the right test solution for you, but for my team, in my context, it was the perfect solution, because we were experts in it. That made it very easy to keep the test automation going.

[00:10:51] Joe Colantonio That's a good point. I think you did a session on AI and machine learning, and one of the talking points was: how do you test something when you don't know the outcome? Any tips on that? It seems like you're using blockchain, and that's a newer technology people are kind of unsure about. As you try to release software quicker, obviously you need to cover more test cases. But tests used to be deterministic; you could only automate things that are deterministic. Nowadays it sounds like you need to handle workflows you may not anticipate. So how do you handle that?

[00:11:21] Dushyant Acharya That's another interesting question. Yes, at my previous company I was responsible for building a test automation framework for a machine learning application, and I gave a talk last year about it. If I have to answer in short, what I'd tell you is that this is a progression in test case and test design. Even before AI and machine learning came along, many test engineers were starting to think, "How do I let my test case fail gracefully?" Do I expect my test case to pass 100%, everywhere? A few years back, the only way my test case would pass was if A equals B, or A is not equal to B; it was white or black, working or not working. Very simple. But as you get into AI and machine learning, it becomes more complicated, because you don't have one right answer; you have multiple possible right answers. The question is how you let your test framework gradually understand that and pass or fail accordingly. When I started implementing this, initially it was just adding conditions to the assertions: if this, then that. I have a really good example. Say I'm testing my log-in and I want to verify the log-in is successful; then I'm expecting some kind of welcome message, and if it is unsuccessful, I'm expecting some kind of error message. Now, I may not know exactly what that error message is if it's generated by a machine learning model, but I know it will be a message that says something like "invalid" or "welcome, you logged in." So I can look for those keywords in the message and infer what the message says. Or I can go further and use a machine learning algorithm myself: take the application's output as my input, run it through my own algorithm, and get a confidence level. I don't insist that only a 100% match passes; I can decide that a 70% match also passes. The second approach is what we actually ended up implementing. If I had to summarize: how do you decide whether your test case passes when the answer is not between zero and one? Your choices are infinite, and you start thinking, "Do I have more than 70% confidence in the match? Then my test case passes."
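To make the idea concrete, here is a minimal sketch of a confidence-threshold assertion along the lines Dushyant describes. This is illustrative only, not his team's actual framework: the helper names, the keyword lists, and the 0.70 threshold are assumptions, and Python's standard-library SequenceMatcher stands in for a real similarity model.

```python
# Sketch of a "fail gracefully" assertion for non-deterministic output.
# Illustrative only: names and threshold are assumptions, not real code
# from Ripple or Dushyant's framework.
from difflib import SequenceMatcher


def similarity(expected: str, actual: str) -> float:
    """Return a 0.0-1.0 similarity score between two strings."""
    return SequenceMatcher(None, expected.lower(), actual.lower()).ratio()


def assert_fuzzy_match(expected: str, actual: str, threshold: float = 0.70) -> None:
    """Pass if the actual message is close enough to the expected one,
    instead of requiring an exact A == B comparison."""
    score = similarity(expected, actual)
    assert score >= threshold, (
        f"Match confidence {score:.2f} below threshold {threshold:.2f}: "
        f"expected ~'{expected}', got '{actual}'"
    )


def classify_login_message(message: str) -> str:
    """Keyword variant for the log-in example: infer success or failure
    from stable keywords rather than the exact wording."""
    text = message.lower()
    if "welcome" in text or "logged in" in text:
        return "success"
    if "invalid" in text or "error" in text:
        return "failure"
    return "unknown"


if __name__ == "__main__":
    assert_fuzzy_match("Welcome, you logged in", "Welcome back! You are logged in")
    assert classify_login_message("Invalid username or password") == "failure"
```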

[00:13:38] Joe Colantonio That's a great point. I always worry not about the tests that are always passing but about how they handle failure, and it sounds like that's what people should be focusing on: how do tests fail gracefully? Like you said, maybe AI and machine learning is one way to help with that as well. So how did this work for your company? A lot of people say it's overrated, that it doesn't work. Did you have success actually implementing this technique?

[00:14:01] Dushyant Acharya Very good question. I wish I had a diagram handy right now, but a machine learning application is just an application; it just happens to use machine learning. Your input and output may differ based on the algorithm's influence, but you should still think of your system as a normal system. If you look at the actual machine learning, you have the input data the machine learning algorithm is going to use, and you have the algorithm giving you some kind of rules for how to act. But then you have to connect it back into the system, where the input from your customer goes to that algorithm and produces some output. At the same time, you capture those outputs somewhere so you can go through them again; they have to cycle back into retraining the algorithm as well. It's a complex pipeline, and if you think about it, it's really a data pipeline, because it's very heavy on data: if you're not recycling your data through the entire cycle, your algorithm doesn't improve with execution. That's one part of it. Once you see the system as a pipeline of data moving across, with machine learning as one important piece of it, automation becomes a combination of multiple kinds of automation, most of which you've probably already solved separately in your own work. Then the only piece left, testing the machine learning algorithm itself, becomes much easier to solve, and you ask, "How do we solve that piece?" with one, two, or three options. So when I said we implemented this, it's like implementing five or seven different things and combining them, and then making sure you have enough confidence to release the product without impacting customers. If I tried to test each and every thing, I'd probably do one release every decade or so, because the possibilities are always infinite. Instead, I make sure I run end-to-end testing, I give myself reasonable room for failure, and when I actually deploy something, the customer is happy.

[00:16:11] Joe Colantonio Nice. So the discussion that always comes up when talking about delivering quickly is manual testing, which you mentioned. We need to minimize it, obviously. But does manual testing have a place in the modern paradigm you're describing, for example, exploratory testing? For your teams, is there always going to be a slice that can and will only be done manually by a tester?

[00:16:31] Dushyant Acharya That's a really good question. We've come a long way in recent years specifically. In the earlier days, when we used to say, "Hey, I need to do manual testing," there were a few reasons for it. Reasons like: I don't really know how my customer uses the product, only my customer knows, so it has to be manual. Or: this system is so complicated, it has to be deployed on its production system fully configured, I can't access it through an API or a program, and the flow is so complicated I can only access it manually. Fortunately for us, we don't have those kinds of limitations anymore. So now I don't say, "Don't do manual testing." I ask, "Why do you need to do manual testing?" Because if you can explain that, I'm pretty sure I won't have to tell you not to do manual testing; you'll tell me. The idea is: are you asking that question? Are you doing manual testing because you don't have time to write the automated test case? That's a different problem. That's not "manual testing is important"; that's "as an organization, I'm not putting enough energy and resources behind writing the right automation." And sometimes, and this is a valid question I still hear, if there's one feature you're deploying, you only have to run the test once in the next six months, it takes one day to test manually, and it takes one month to write the automation, then you probably don't have the ROI to automate it. But then the question is: am I saying manual testing is needed, or am I asking the right question, which is why it takes one month to write that automation? I think everyone is getting better at asking the right question and automating things the right way, and that's actually helping. It's a long answer, but I'd say it's a transition. I don't see people coming to me now saying, "I absolutely need a manual test." They come and explain why they need it, and then we figure out the next step together.

[00:18:34] Joe Colantonio That's a good point, because thinking back over my long career, a lot of testing came down to infrastructure and environments: we need to run on this configuration. Now cloud, Kubernetes, and Docker containers take care of a lot of the things that had to be manual, because there used to be a lot of manual steps involved, and it sounds like you can now automate them. So in your productivity paradigm, and I think you touched on infrastructure, is this part of what solves some of these issues? Maybe people don't realize they can now automate certain things because we have solutions like infrastructure as code. And who handles that piece of the automation in the pipeline?

[00:19:09] Dushyant Acharya It does. It does include that. When I think about engineering productivity, it includes improving infrastructure as well, and that is the reason my DevOps team is a component of engineering productivity. The way I like to answer that question is: if you don't have the right infrastructure, and the ability to create that infrastructure on demand, you will not be able to use automation effectively. Now, who does that? Different companies have solved it differently. I prefer having the automation team work with DevOps as part of it. There are multiple ways of spinning up environments, but the concept behind them, as you mentioned, is that my environment is immutable. I may use the cloud, I may use Kubernetes, but what I keep using is something I deploy once and don't have to change. Giving that ability to my automation framework is essential, because if I have to perform an enormous set of manual steps before the automation runs, then the automation is not really automation. And you need to give that ability back to the developers so they can do it within the pipeline as well, and give them multiple options. It's not always a luxury to skip some of the testing: much of my background is in fintech and finance, and in fintech you have compliance and security, so you don't want to miss the testing, but you don't want manual steps in between either. The only way to do that effectively is to provide infrastructure that is as close to production as possible, with the ability to control it programmatically.
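As one concrete illustration of on-demand, production-like infrastructure inside a test run, here is a minimal sketch using the open-source testcontainers-python library. It is not the setup described at Ripple; the image tag, table, and query are placeholders chosen for the example.

```python
# Sketch of "infrastructure on demand" inside a test: a throwaway,
# production-like Postgres is created for this test alone and destroyed
# when the block exits. Requires: pip install testcontainers sqlalchemy
# plus a local Docker daemon. Schema and values are placeholders.
import sqlalchemy
from testcontainers.postgres import PostgresContainer


def test_orders_table_roundtrip():
    with PostgresContainer("postgres:16") as postgres:
        engine = sqlalchemy.create_engine(postgres.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text(
                "CREATE TABLE orders (id SERIAL PRIMARY KEY, amount NUMERIC)"))
            conn.execute(sqlalchemy.text(
                "INSERT INTO orders (amount) VALUES (42.50)"))
            total = conn.execute(
                sqlalchemy.text("SELECT SUM(amount) FROM orders")).scalar()
        assert float(total) == 42.50
```

Because the environment is created and torn down inside the pipeline itself, the same test runs identically on a developer laptop and in CI, with no manual setup steps in between, which is exactly the property Dushyant argues for above.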

[00:20:53] Joe Colantonio That's a great point about regulations. Where I worked was regulated health care, and a lot of times when you're audited, they want to see the environment you tested on, and that was really difficult. But now, with these types of solutions, you can take a snapshot and actually spin it up. So rather than saying you can't automate something because there are manual steps, it sounds like you probably could automate it, and it would help you because you'd have an audit trail of what you did and where everything is. So I guess my next question, which I always get asked, is: I'm a tester, I've been a tester for 20 years, am I becoming obsolete? You've mentioned developers a bunch of times, and the automation pipeline. Is there still a need for an automation engineer or a tester? And if so, since I know from LinkedIn that you hired a few weeks ago and have an opening, what kinds of skills do you look for? What should people focus on as we move toward this paradigm, which I think really is a shift more and more people are going to experience?

[00:21:44] Dushyant Acharya That's a good question again, Joe. There's always going to be a need for people who have an eye for quality and an eye for automating what is effective, repetitive, and already established in the system; that's what automation essentially is. What has been changing over the last decade is the question itself. It used to be: is it manual, is it UI automation, or is it API automation only? Is it running every night, is it running 100% successfully or not? Now it's: is it running as part of the build every day or not? And how many people are writing test cases versus how many people are enabling others to write good test cases? So the nature of the actual work is changing a little, and at the same time the scope is increasing. If you look four years back, we did not ask developers to write test cases; we hired a few engineers to write test cases, and those engineers could not move past test cases, they had to focus on test cases only. Now it's a win-win, because we create a framework that is efficient for the developer to use, and at the same time automation engineers have more scope. They're not limited to feature tests; they're looking at the product first, they own the tools, they own the pipeline and actually manage it. They're responsible for putting the code in the hands of the customers without any manual interventions, so it gives them more options. So yes, there will always be a need for people with the right mindset and the right skill set. When I hire for engineering productivity, most of the people who come to interview, and most of the people I've hired in the last one or two years, came from test automation, and that's no accident, because that background helps you understand the new paradigm. If you don't know how things used to be done, it's very difficult to know how to change them for the next five years. At the same time, when I recruit for DevOps, I look for people who are very focused on infrastructure and its concerns: if you're in charge of shipping the containers, you're managing how to find the issue when something goes wrong.

[00:24:04] Joe Colantonio Nice. So as we get near 2022, I've been asking speakers for their thoughts on a tool or technique they think is going to be big in 2022 that people should know more about. Do you have any predictions for 2022 on technologies you think are really going to take off?

[00:24:18] Dushyant Acharya Oh, that's a tough question, and I'm not sure I'm prepared for it. There are so many things at the top of my mind, but a couple stand out. One: if I think about automation, I don't think we're as data-driven as we should be as an industry, and anything that produces better metrics helps. So data-driven test automation, and being data-driven in engineering productivity and DevOps, would be a cool skill set to have in 2022. On the other side of the house, I think …(??) is gaining a lot of popularity and a lot of people are jumping on it. I won't say it's upcoming, because people are using it already, but my vision for next year is that we'll be using it even more: not just the Ops team, but the automation team will know it as well, and your developers will know it as well. If you want to create your local environment using the same logic, that will probably be possible for most companies in 2022.

[00:25:27] Joe Colantonio Love it. Love it. Okay, Dushyant, before we go, is there one piece of actionable advice you can give someone to help with their paradigm shift and automation testing efforts, and what's the best way to find or contact you?

[00:25:37] Dushyant Acharya Let me give you the easy answer first: LinkedIn. I'm very active on LinkedIn, so find me there; I really like connecting with new people. Ripple is hiring a lot right now. We have a lot of open positions, really amazing people, and great work, so look through the openings, reach out to me, and I'm happy to connect you with the right people. On the advice side, I'd say not one but a couple of things. One thing I have learned from most of my interactions is how people get nervous about job descriptions and titles and assume they wouldn't be able to do the job. If you know half of the things in the job description, be comfortable enough to go and talk to the recruiter. Ask the actual question: "Hey, what is this job? I have this part of the skill set already. Are the other things something I can learn on the job, or are they a hard prerequisite?" Don't assume you are not qualified just because you don't know everything in the job description; that's giving up your chance. The second thing, for people trying to get help from others, and something I've learned recently, is that when you reach out to someone, think about what your proposition is and what they are looking for. Most people do that, but also tell them why they should help you, why they are the right person for whatever you're asking. Give them that one extra paragraph to make it personal, so they feel it's personal and actually respond to you. I think that should be all.

[00:27:02] Joe Colantonio Great advice. I just thought of another question, though, as you were speaking about how, even if you don't match all the requirements, you should definitely apply for the job. When I look at Ripple, it's financial transactions using blockchain. Just that alone makes me, not scared, but unsure whether I'd be qualified. Are there any tools someone should know about for testing blockchain?

[00:27:23] Dushyant Acharya Well, before I answer that, Joe, let me tell you that my previous company was an AI and machine learning company, and before I started working there, I had zero experience testing machine learning algorithms. Ripple is a crypto and blockchain company, and on the first day of the job I started learning what crypto and blockchain are. So it's not necessary to know everything, because you have a specific role: there are things you need to know and things you can learn on the job. The very important point for me is how you connect with the hiring team and the recruiter and make sure you know what they're looking for. If I'm hiring for a certain role, I can teach you many things, but maybe those aren't really in the job description. So if you're looking at a new domain or a new technology, don't get scared; that's one thing. And if nothing else, find it in YouTube tutorials: there are good people who talk for 30 minutes and explain the big ideas. That usually helps. Get the theory, and then actually start talking to people to understand what the role really is.

[00:28:26] Joe Colantonio Thanks again for your automation awesomeness. For notes on everything of value we covered in this episode, head on over to testguildcom.kinsta.cloud/a369, and while you're there, make sure to click on the "try it for free today" link under the exclusive sponsor's section to learn all about SauceLabs' awesome products and services. And if the show has helped you in any way, why not rate and review it on iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:29:08] Outro Thanks for listening to the Test Guild Automation Podcast. Head on over to testguildcom.kinsta.cloud for full show notes, amazing blog articles, and online testing conferences. Don't forget to subscribe to the Guild to continue your testing journey.

 

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
