From Chaos to Clarity: Improving Software Testing Practices with Prathyusha Nama

By Test Guild

About This Episode:

In today’s episode, we are thrilled to have Prathyusha Nama, a seasoned automation leader with over a decade of experience, particularly in mobile automation.

Check out this week's sponsor, BrowserStack App Automate: https://testguild.me/appautomate

Prathyusha delves into the recent Microsoft/CrowdStrike incident that caused widespread disruption, and how it highlights the urgent need for robust testing practices.

We’ll explore the pitfalls of outsourcing testing, the significance of testing in any industry, and why no single individual can manage it alone. Prathyusha shares insights on contract testing, shift-left, and backward compatibility testing, practices that could help avoid such mishaps. Additionally, she discusses the emotional impact of software failures, the nationwide attention they grab, and the necessity for companies to reevaluate their quality assurance priorities.

Prathyusha also reflects on her experiences mentoring tech professionals and their common challenges, such as setting up automation environments and lack of recognition within organizations. We discuss how contract testing can ensure API compatibility across different teams, preventing disastrous deployment failures.

Listen in to hear Prathyusha discuss essential strategies, such as selective rollouts, comprehensive testing, and a detailed rollback plan.

Exclusive Sponsor

In today's app-driven world, businesses must ensure flawless user experiences, but testing can be a challenge with:

Limited device coverage
Slow, unreliable setups
Few automation options

BrowserStack App Automate solves this with its cloud-based app automation platform. It's simple to use and packed with features for confident app releases.

With App Automate, you can:

Integrate test suites in minutes via the BrowserStack SDK.
Run thousands of parallel tests across 20,000+ real devices.
Easily integrate with CI/CD pipelines, test frameworks, and more.
Test apps in internal or staged environments, even behind firewalls.
Debug instantly with video recordings, logs, and AI-powered error categorization.

Their updated Builds Dashboard also features advanced debugging like flaky test detection and quality profiles. Plus, with Real Device Cloud, you can test key use cases like camera injections (QR/barcode scanning) and simulate timezone, GPS, and network conditions.

Boost your app quality and competitive edge with App Automate—your one-stop solution for scaling mobile releases. Support the show and try it for yourself now: https://testguild.me/appautomate

About Prathyusha Nama


Prathyusha Nama is a Test Architecture Manager at Align Technology, Inc., with over 10 years of experience specializing in automation and test architecture. Recognized for innovative approaches to mobile automation, Prathyusha has led impactful initiatives that streamlined testing processes, reducing debugging time and driving efficiency gains across teams. Prathyusha also received the prestigious ‘Leadership Award' for their work on mobile automation during the COVID-19 pandemic.

A strong advocate for quality excellence, Prathyusha plays an active role as a mentor on ADPList, has been named a ‘Top 1% Impactful Mentor in Engineering’ and one of the ‘Top 10 Female Mentors in Architectural Engineering and Full Stack,’ and serves as a judge for cloud-based categories at industry awards events and as a reviewer for technical publications. Their expertise spans AI-driven testing, cloud computing, and intelligent automation strategies. Additionally, they have spoken at events such as the International Conference on Automation, AI, and the Future of QA, highlighting the revolutionary impact of AI in testing.

Prathyusha has published articles in leading journals and platforms focusing on cutting-edge automation strategies and AI-driven defect prevention.

Connect with Prathyusha Nama

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:00] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

[00:00:35] Joe Colantonio Hey, it's Joe, and welcome to another episode of the Test Guild Automation podcast. Today, we'll be talking with Prathyusha all about from chaos to clarity: navigating software mishaps with defective automation. We're basically going to cover the Microsoft/CrowdStrike incident that happened a few months ago. If you don't know, Prathyusha is a distinguished leader in the field of automation with over ten years of hands-on experience and a deep specialization in mobile automation. Currently serving in the Quality Center of Excellence for Test Architecture, she plays an instrumental role in advancing testing practices at her current company. So stay tuned to hear a really important topic that I think is going to change the industry as it applies to testing. You don't want to miss it. Check it out.

[00:01:16] Hey, in today's app-driven world, businesses must ensure flawless user experiences, but testing can be a challenge with limited device coverage, slow, unreliable setups, and few automation options. And that's where today's episode sponsor, BrowserStack, comes in. Their App Automate solution solves many of these challenges with its cloud-based app automation platform. It's simple to use and packed with features for confident app releases. With App Automate, you can integrate test suites in minutes via the BrowserStack SDK, run thousands of parallel tests across 20,000-plus real devices, easily integrate with CI/CD pipelines, test frameworks, and more, test apps in internal and staged environments, even behind firewalls, and debug instantly with video recordings, logs, and AI-powered error categorization. Their updated Builds Dashboard also features advanced debugging like flaky test detection and quality profiles. Plus, with their Real Device Cloud, you can test key use cases like camera injection (QR/barcode scanning) and simulate time zones, GPS, and network conditions. Boost app quality and competitive edge with App Automate, your one-stop solution for scaling mobile app releases. Give it a try and support the show using the link down below.

[00:02:36] Joe Colantonio Hey, welcome to the Guild.

[00:02:40] Prathyusha Nama Hey Joe, this is Prathyusha. It's really an honor; I've been following you on the podcast and everything, but it's really nice to see you in person and be on the show. So thank you.

[00:02:52] Joe Colantonio I appreciate it. Thank you. Thank you. I guess before we get into it, what really called your attention to CrowdStrike? Because I know you contacted me and said, hey, I'd like to cover this. What was it about it that made you go, maybe this is something we need to dive into a little bit more?

[00:03:08] Prathyusha Nama If you look at a lot of industries nowadays, like airlines and other companies, they're outsourcing most of their testing practices to a different vendor, a different company. There are a lot of integration gaps, and a lot of emphasis is not placed on testing. Most of the time, the focus on UX/UI or the development side is not matched on the testing side. In these types of incidents, the entire world is in chaos. Something is broken, you're just getting a negative message, airlines are unable to do what they're supposed to be doing, and everybody is confused about what's going on. But if you dive deep into what could have caused the problem, people who have been in this field for quite some time can make an educated guess about what could have gone wrong on both sides. It's not anybody's fault, to be very honest with you. And I really appreciate both CrowdStrike and Microsoft for the quick turnaround time, the way they were able to fix it, staying in contact with their customer base, and providing updates. That was all really good. But I think this incident has made a lot of experts and companies reevaluate their whole testing processes. That is something I've always been interested in throughout my career. I've seen it happen quite a bit that the funding is usually not good enough on the testing side of projects. And most of the time we also have a notion that one person can do the whole job. I mean, why do we need a person just to see if it's working or not? It's not that simple, and that's what I really wanted everybody to be aware of: the importance of testing. Even in our day-to-day lives, you wouldn't buy something without visually seeing it, touching it, or testing it in some way, right? Testing is part of our daily lives. I really wanted to emphasize how important testing is for any industry, not just software. For anything you put out there, it's mandatory to ensure it's all working and up and running. This topic blends with all the things we would otherwise talk about on different occasions, but all of it comes into the picture when we look at incidents like this. So that caught my attention, and I really wanted to talk about it.

[00:05:47] Joe Colantonio Could this have been foreseen? Or does it highlight all the interdependencies we have in software, not knowing that our software relies on some other piece of software, and that if that goes down, who would have known that airlines would be affected? Was this avoidable, or do you see it as something we now know about, so now we need a plan based on the information we have?

[00:06:09] Prathyusha Nama It could have been avoided, because we have lots of testing that we do, right? You follow best practices like contract testing, you have shift left, you have backwards compatibility testing. It basically comes down to the emphasis on integration testing, and on companies following a certain process. It's not that simple that you would just update something. The issue here was basically the contract between what the software system on the Microsoft side, the client, was expecting versus what the CrowdStrike security update was providing. The input parameters that were provided were not something Microsoft was expecting. This could have been avoided if they had contracts in place to make sure, when they do the initial deployment on their dev or staging environments, that everything is working fine. If there was backward compatibility testing in place, any new update would be checked to make sure it doesn't break the existing versions of all their client systems. And of course there's the shift-left practice that is very popular these days, where companies are moving toward identifying issues sooner rather than at the very end of the game, close to production or, in some instances, in production. There are a lot of practices already being done. It's not something new that just came out of this incident; these things have already been there, companies are following them, it is all being done. It's just a question of why they should be implemented as part of our process, and how strong we can make the testing process. It could have been avoided. Especially with vendors, and the amount of outsourcing that happens, it is really important to make sure we have strong practices in place. And none of it actually has to be done manually. You wouldn't even expect that something should be verified manually; we have a lot of automation improvements, we have AI/ML, we are trending toward new technology. A lot is happening. So it could have been avoided. It just takes emphasis on implementing strict policies for testing and a strict release plan for anything, even a minor update. Especially for cybersecurity firms, it is really, really important that they focus on testing practices and procedures.

[00:08:40] Joe Colantonio Absolutely. Before we go any further, the crowd that listens to this is pretty tech savvy, but for those who don't know, what is contract testing? You mentioned it a few times as something that probably could have helped in this incident.

[00:08:52] Prathyusha Nama To break it down, let's take the example of an API. You have the API provider, and a consumer that's going to use these APIs, right? But the development of the API happens on one team, and the team that consumes those APIs is a different team. How do you ensure compatibility? At the beginning of the project, you would say: okay, as per the requirements, my API is going to give you this response. It's going to have X, Y, and Z as mandatory fields and something else as optional. That's the requirement for the provider API. On the consumer side, all this communication happens at the beginning of the project, and they all agree: okay, this is what it is. Then once the consumer team starts developing their application, based on what was discussed in the original implementation details, X, Y, and Z as mandatory and something else as optional, they develop on their own. Now imagine there is no contract testing, and something changes on the provider side of the API. They decide to add one more mandatory field, but they forget to notify the consumer, or there is no communication happening between the consumer and the API that is being developed. The consumer team just ends up releasing their product, and once the API deployment happens, it's not going to work in production, because you don't always have a staging environment available before you go to production. Sometimes API teams run their own release schedules while the client applications stick to theirs. Without contract testing, it would be way too late by the time they identified that something was broken because someone did not stick to the contract that was agreed upon. These contract tests are usually run at the beginning of the development cycle, where both the consumer and the provider have contract tests written on their own side: the consumer makes sure the expected response is what they're getting, and vice versa for the API that is being developed. For these types of tests we have Pact, including the bidirectional contract flow with Pact; there are some popular contract testing tools being used by the majority of companies. And this can also be leveraged not just for APIs but for other types of testing as well.
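
To make the consumer/provider idea above concrete, here is a minimal consumer-side sketch using the pact-python library. The service names, endpoint, and fields ("MobileApp", "UserAPI", /users/123) are hypothetical, purely for illustration:

```python
# Minimal consumer-driven contract test sketch with pact-python.
# "MobileApp", "UserAPI", and the /users/123 endpoint are hypothetical.
import atexit
import requests
from pact import Consumer, Provider

pact = Consumer("MobileApp").has_pact_with(Provider("UserAPI"), port=1234)
pact.start_service()                 # spins up Pact's local mock provider
atexit.register(pact.stop_service)

def test_get_user_honors_contract():
    # The agreed contract: id, name, and email are mandatory fields.
    expected = {"id": 123, "name": "Alice", "email": "alice@example.com"}

    (pact
     .given("a user with id 123 exists")
     .upon_receiving("a request for user 123")
     .with_request("GET", "/users/123")
     .will_respond_with(200, body=expected))

    with pact:  # verifies the declared interaction actually happened
        response = requests.get(f"{pact.uri}/users/123")
        assert response.json() == expected
```

The generated pact file is then replayed against the real provider in its own CI, so a new mandatory field on either side fails a build long before a production deployment, which is exactly the failure mode described above.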

[00:11:09] Joe Colantonio So yeah, it sounds like a great thing to use if you have a lot of third-party dependencies in your application: you don't actually need their application, but you'll know what kinds of responses and calls they're making.

[00:11:22] Prathyusha Nama Yeah.

[00:11:23] Joe Colantonio Nice. All right. I want to dive into a few other things, but just in case I missed it, do you know what caused the CrowdStrike outage, what it eventually turned out to be? Was it a testing issue?

[00:11:32] Prathyusha Nama Yeah, it was. Basically, there was an update to the CrowdStrike software version; they had a new parameter added. But it looks like they did not take into account what the possible effects of this update could be. They just went ahead and deployed it. Microsoft, on the other hand, was not expecting that update. That version of the Microsoft system was not compatible with the version that was deployed, and that caused an out-of-bounds memory exception that eventually crashed the system. You just get an error message on the screen when you try to access the applications. So yes, it was a testing issue.
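
The root cause described here, a client reading an update with more fields than it expects, can be illustrated with a small pre-deployment gate. Everything below (field names, version numbers, the validate_update helper) is hypothetical for illustration, not CrowdStrike's actual update format:

```python
# A hypothetical pre-deployment check: validate an update's fields against
# what each installed client version expects before shipping it.

EXPECTED_FIELDS = {   # fields each deployed client version knows how to read
    "7.15": ["pattern_id", "severity", "target_process"],
    "7.16": ["pattern_id", "severity", "target_process", "regex_filter"],
}

def validate_update(update: dict, client_version: str) -> list[str]:
    """Return a list of problems; an empty list means safe to ship."""
    expected = EXPECTED_FIELDS.get(client_version)
    if expected is None:
        return [f"unknown client version {client_version}"]
    problems = []
    missing = [f for f in expected if f not in update]
    extra = [f for f in update if f not in expected]
    if missing:
        problems.append(f"missing fields for {client_version}: {missing}")
    if extra:
        # an older client reading past the fields it knows is exactly the
        # out-of-bounds class of failure discussed in the episode
        problems.append(f"unexpected fields for {client_version}: {extra}")
    return problems

# Gate the rollout: check against every client version still in the field.
update = {"pattern_id": 291, "severity": "high",
          "target_process": "svchost.exe", "regex_filter": ".*"}
for version in EXPECTED_FIELDS:
    for problem in validate_update(update, version):
        print(f"BLOCK deployment: {problem}")
```

Run as written, this blocks the deployment because the hypothetical 7.15 client would receive a regex_filter field it has no slot for.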

[00:12:11] Joe Colantonio Gotcha. Microsoft, it's an OS. How would they have known? They must have so many third-party dependencies. I guess it goes back to what you said: you need to have a good testing procedure or testing process in place. Let's start there. What do you think would be a good testing process to have in place? Because your title is Quality Center of Excellence for Test Architecture, which already tells me a company takes it seriously. But for some reason I'm seeing quality centers and centers of excellence disappear. What do you think companies need to have in place in order to have a strong testing process?

[00:12:44] Prathyusha Nama There should be a process. For example, on the Microsoft side, there was no way for them to know that CrowdStrike was going to deploy a software version that would break things. If they had known, they wouldn't have let it happen. Part of the issue is also that the cybersecurity software, CrowdStrike, was given access to the Microsoft kernel, which is where a lot of your important stuff lives, so if something goes wrong there, it's going to crash. We grant that access because cybersecurity products need that level of access to handle hacking and so on. So especially for these types of things, it is even more important to have strict processes in place. Like I was talking about with compatibility testing: have some set of automated tests, the very critical ones, that run regularly, probably every day, so that if something changes, even a very minor update to a dependency, you can identify it sooner. If they had tests in place before the upgrade happened, they would have seen an issue with a minor upgrade. In terms of, what if I have version 1.0 but I'm getting version 1.2? If that was failing, they would have caught it sooner, and they would have collaborated with CrowdStrike beforehand: this is what we are noticing on our side. We just have to make sure that is taken into account in the tests written on each end, so it doesn't happen in production.

What we see here is a lack of coordination in terms of testing, right? There are certain tests that CrowdStrike runs; it's not that they just did the migration blindly. They do have tests running. It's just that they did not take into account what the effect could be if one of their clients has a version that is two levels below the version they are putting out. Will it still work, or would it cause any problems? That is making sure your backwards compatibility testing is also in place. It's not one place where this process needs to change. Everybody, the consumer, CrowdStrike, Microsoft, needs to have these processes in place. Because most of the time, for regular company release cycles, for any feature that goes out the door, there are multiple rounds of testing before it gets to production, probably months of time: you have your sprint-level testing, you have regression, integration, user testing, all of that, and then you do a production smoke test before you make it live.

Also, especially for these types of huge upgrades, we could roll out to a specific set of users instead of rolling out to the whole user base. For example, at companies that deal with doctors and patients, like my company, we have different regions; it's not going to be in just one country. We have to be HIPAA compliant, and the FDA regulations are different for each country. So instead of rolling out a feature to the whole doctor or customer user base, we do a selective rollout, say 100 people. How is it going? Is it fine? Are there any issues? Because at that point, instead of affecting millions of users, in the worst case you would have a hundred users affected if something goes wrong. That type of strategy is something you need to have in place, along with first-hand testing immediately after your software is updated, and a rollback plan: if something goes wrong, you immediately find out and roll back to the previous version, ideally during off hours, so at least the intensity of the issue, or the affected user base, would be much smaller.

These types of processes are not something new; they're all already there. We just have to have them documented and have gatekeepers. For example, we talked about the Center of Excellence. The reason a Center of Excellence is formed is to make sure we are following all the best practices in software testing, making sure we follow everything before products get out the door, making sure they're bug free. What else can we do? How can we improve? Whatever the latest trends coming up in the industry, how do we incorporate them into our release cycles? Because it's equally important for our features to work. Even if it's a cool feature, if it doesn't work once it gets to production, there is no point. So yeah.
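
A minimal sketch of the selective-rollout-with-rollback strategy described above, assuming users have stable IDs and that deterministic hashing is used for bucketing; the function and flag names are illustrative, not any specific feature-flag product's API:

```python
# Deterministic percentage rollout with a rollback dial.
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a user into 0-99 and compare to the dial."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Dial the feature up gradually; setting percent back to 0 is the rollback.
ROLLOUT_PERCENT = {"new_sensor_config": 1}   # start with ~1% of users

def serve_config(user_id: str) -> str:
    if in_rollout(user_id, "new_sensor_config",
                  ROLLOUT_PERCENT["new_sensor_config"]):
        return "v2"   # canary cohort gets the new version
    return "v1"       # everyone else stays on the known-good version

canaried = sum(serve_config(f"user{i}") == "v2" for i in range(10_000))
print(canaried, "of 10000 users canaried")
```

Because the bucketing is deterministic, the same small cohort sees the new version on every request, so problems surface in a contained group and turning the dial back to 0 instantly restores everyone to the previous version.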

[00:17:33] Joe Colantonio I love that answer, because now you're talking about shift right, feature flagging, and canary testing. Like you said, rolling it out to only a few users or a hundred-user segment and then being able to roll it back would probably have resolved this quickly. That's a great technique.

[00:17:49] Prathyusha Nama Yeah.

[00:17:50] Joe Colantonio How hard is it to get started with that, then? It must be a big commitment, right? Or are people afraid to do that in production? Is that why you think there may have been a gap there?

[00:17:59] Prathyusha Nama Yeah, possibly. Or maybe they never even thought about it; they wouldn't have guessed that something like this could happen. I mean, Microsoft has been in the industry for so long. Did we ever see this happen before? We did not. No one would anticipate it. It's just that now that it has happened, everybody is talking about it. If it hadn't happened, we wouldn't talk about it; we would just go about our business the usual way. That is the only thing I think could have been the issue. They have certain basic checks that they do on each end, but the possibility that something like this could go wrong they probably did not take into account. Now that it has happened, everybody will start to think about it. Even in the statement Microsoft issued after the summit on understanding what this issue was about and how it could have been avoided, you can see that most of the things we've talked about are already being taken into account. I feel like no one anticipated this; the attitude was, as long as stuff is working and our critical path to production is all good, we are fine. But these types of things especially come into the picture when we are dealing with third-party applications whose teams we don't usually talk to on a day-to-day basis.

[00:19:20] Joe Colantonio Right. How do you then balance that? Because you could test for everything. I know a lot of people are risk-based, but how do you balance the need to test versus the need to get it into the hands of the customer, and do it cost-effectively?

[00:19:32] Prathyusha Nama So yeah, like I said, you could leverage automation for cost cutting instead of having people test manually. If you would otherwise need 4 or 5 people on each side, that could be reduced quite a bit by relying on automation and CI/CD, where you have regular runs. You don't even have to have a touch base with those teams; as long as both parties have their critical tests up and running on their CI, they get results faster, and the integration points are all touched, not just one system. For example, if you have 2 or 3 teams working together on an application, a UI team, a back-end team, an API team, all of them will have their own tests, really good ones, that verify they do what they're supposed to do. But imagine there is no integration testing happening between them: every UI test sticks to UI checks, API to API, and database to database. If they don't do any other testing, making sure backwards compatibility and integration are in place and the end-to-end flow is all good, we definitely won't see that product working in production, unless there is someone making sure all the teams are connected, all of the testing happens, and everything is working fine. I think that is what's required here in order to do it cost-effectively. With AI and ML, a lot is coming up in self-healing automation. We could look at which parts of it can be used for security, because not everything can be; it has a lot of other complications. It's not as simple as API and UI testing, but it's definitely worth exploring, and innovation should happen there, where we could leverage some of those concepts to make sure issues are identified automatically instead of someone finding them manually. As long as the problem somehow comes to you automatically, fixing it and making sure everything looks fine is doable.
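
One way to realize the "critical tests running regularly on CI" idea is a scheduled cross-team compatibility check against a dependency's published contract. The endpoint, required keys, and version policy below are hypothetical, a sketch of the pattern rather than any team's actual setup:

```python
# Sketch of a nightly cross-team compatibility check run from CI.
REQUIRED_KEYS = {"status", "schema_version"}   # the agreed contract

def check_contract(body: dict) -> list[str]:
    """Return problems found in a dependency's health/contract response."""
    problems = [f"missing key: {k}" for k in REQUIRED_KEYS - body.keys()]
    version = str(body.get("schema_version", ""))
    if not version.startswith("1."):
        # fail loudly when the dependency jumps a major version
        problems.append(f"unexpected major version: {version!r}")
    return problems

# In CI this payload would come from something like
#   requests.get("https://staging.example.com/api/v1/health").json()
nightly_payload = {"status": "ok", "schema_version": "2.0"}
for problem in check_contract(nightly_payload):
    print("ALERT:", problem)   # page the owning team before release, not after
```

Run nightly, a check like this turns "we found out in production" into a morning alert, without either team having to coordinate release schedules.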

[00:21:39] Joe Colantonio Nice. Have you had time to mess around with AI technology, and how has that helped you?

[00:21:44] Prathyusha Nama I did, out of my personal interest, but nothing I was able to do within my current organization, because we are a HIPAA-compliant, FDA-regulated company. When I was on the mobile team, we were looking at exploring all the cloud-based solutions for running our tests on simulators and emulators, especially because we have photo capturing, which is AI and AR related. We heavily rely on machine learning for our image processing, and that requires testing to be very efficient with actual phones and actual cameras. Though we have automation that does this, in order to test it across multiple devices we tried exploring a lot of cloud-based solutions, but the approval was not in place because of the security policies and HIPAA compliance. So I was not really able to apply them in my job. But out of my interest, I did look at a lot of stuff. There is one conference that happens every year, the North Carolina SQA conference, where a lot of vendors come in, BrowserStack, Applitools, everybody, and there are a lot of sessions on AI/ML as well, on how and what these tools can do. So yeah, whatever research I've been doing, all the papers I've published, it's all out of my personal interest.

[00:23:08] Joe Colantonio Nice. A lot of times I think companies see testing as a cost center, not a value add. Do you think this will help? I think I saw CrowdStrike was being sued for billions of dollars. Do you think this helps testing in the long run, or do you think, like any other incident, it will be forgotten in another month or so?

[00:23:24] Prathyusha Nama I don't think this is going to be forgotten that easily, especially given that there is an emotional piece involved in this incident, right? I mean, if it were just computers shutting down, who cares, people would forget. But it's not that. There were travelers who were affected. Imagine the emotions there: there could have been people who were planning to get married, or going to meet their loved ones, their families, something really important in terms of their life events. Would they ever forget what happened? They missed it because the airlines were unable to get things running on time. There's an emotional attachment at this point, so it's not easily forgotten. And for the companies, even though the affected customer base was very small relative to Microsoft's overall user base, it still got nationwide attention. Everybody was asking, what is going on? So it's not such a small issue that they could release one press release and be fine about it. I don't think Microsoft is taking it lightly either; look at the things they're planning on doing in the analysis report. I really appreciate the way both companies addressed this, because it was never about whose fault it was; there was no blame being pushed around. It was more about collaboration, trying to fix the problem for their customers, which shows they really give importance to incidents like this. I think this is going to set a new tone for a lot of companies. Like you mentioned, testing is often treated as a cost center, and most of the time you don't get a lot of funding for the quality side of resourcing or staffing. Things will change for a lot of companies; no one would want to be in such a spot. I think this is probably going to set a new stage for quality, and we will probably get more importance in the future.

[00:25:26] Joe Colantonio Awesome. Okay Prathyusha, before we go, is there one piece of actionable advice you can give to someone to help them with their automation testing efforts and what's the best way to find or contact you?

[00:25:35] Prathyusha Nama For me, actually, I am also a mentor on ADPList, which is a free platform for tech professionals. If anyone's interested in finding out more about automation, needing help getting started with automation, or leveling up, I am a mentor on ADPList. You can always reach out and book a session. It's free; you don't have to pay anything. I have a lot of mentees I'm currently mentoring. Some of them just graduated and are looking for automation roles; some of them are at the beginning of their automation journey, wanting to understand more about how they can get involved and grow into senior management roles and things like that. So yeah, that's one way you can reach out to me. You can also reach out to me on LinkedIn.

[00:26:19] Joe Colantonio I need to ask a follow-up question on that, then. On your coaching calls, is there a common theme you hear all the time, something people are struggling with in automation?

[00:26:28] Prathyusha Nama Yeah. Most of the time, what I hear from them is the initial struggle of setting it up. They have a lot of environments, a lot of test data, a lot of dependencies on other teams. There's the struggle of getting that whole flow going, like a full-stack role. We always talk about full-stack development, but we never really talk about the full-stack role in the software testing industry. When I started my career, I was the very first QA engineer the company had for the mobile team. They did not have any testing presence in the company for mobile applications, so it was on me to figure out how to ensure the quality of the product. I was the one who did the POCs to pick a tool and language that would work, finished setting up the frameworks, made sure they were scalable and working across all the environments, set up CI/CD, and did the DevOps part. I essentially did a full-stack role in my field, doing everything from A to Z until it was all done and up and running. We don't really talk about these types of roles, but that is what is being asked of people. That is the struggle I see from most of my mentees. They say, my job is to just write code, but I'm expected to know CI/CD, scripting, Python, too many languages. A developer can stick to one specific technology and language, but for them it's Python, Selenium, JavaScript, all these languages. They ask, am I really on the right track? Do I want to pursue my career in this? What do I do? And most of the time, lack of recognition is the biggest thing I see from my mentees. They feel very demotivated. They don't feel like they're bringing value to the company, because in any meeting they go to, they always see the developers getting credit for what's done, and the product owners being appreciated because the product is delivered and the UX/UI is good. But for all of that to happen, testers were part of the project, and they never really get that credit. So those are the common things I hear from all of my mentees.

[00:28:41] Thanks again for your automation awesomeness. Links to everything we covered in this episode can be found at testguild.com/a518. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe. My mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:29:15] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com, where you become part of our elite circle driving innovation in software testing and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real-world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.

[00:29:59] Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Görkem Ercan TestGuild DevOps Toolchain

Simplifying the AI/ML to Production Pipeline with Görkem Ercan

Posted on 10/16/2024

About this DevOps Toolchain Episode: Today, we're joined by Görkem Ercan , Jozu's ...

A person is speaking into a microphone on the "TestGuild News Show" with topics including weekly DevOps, automation, performance, and security testing. "Breaking News" is highlighted at the bottom.

AutomationGuild Voting, Performance in SDLC, Playwright Updates and More TGNS138

Posted on 10/14/2024

About This Episode: Do you know that voting for Automation Guild 2025 session ...

Ryo Chikazawa TestGuild Automation Feature

Combining AI and Playwright using Autify Genesis with Ryo Chikazawa

Posted on 10/13/2024

About This Episode: Want to know more about the Power of AI to ...