How Generative AI is Elevating Testing Quality in Healthcare with Richard Kedziora

By Test Guild

About This Episode:

Today, host Joe Colantonio sits down with Richard Kedziora, the co-founder of Estenda Solutions, to delve into the role of generative AI in testing healthcare technology. With over 30 years of experience in healthcare tech, Richard shares his insights on how AI transforms patient outcomes by enhancing efficiency, improving decision-making, and alleviating administrative burdens.

In this episode, we explore:

  • The importance of interacting with AI and how it can replace outdated practices.
  • Real-world applications, such as ambient listening and language translation in healthcare.
  • Concerns about AI reliability, data privacy, and regulatory compliance.
  • How AI contributes to better software development and testing.
  • The future of AI in healthcare with emerging technologies like wearables and robotic assistants.

Join Joe and Richard as they discuss how leveraging AI can revolutionize healthcare, making it more effective and patient-centered. Don’t forget to rate and review us on iTunes, and visit the Test Guild website to join our vibrant community of automation enthusiasts!

About Richard Kedziora


Richard Kedziora is the co-founder of Estenda Solutions, a leading company specializing in custom software and data analysis for healthcare and medical companies. With a remarkable journey spanning over 30 years, he possesses a deep understanding of designing, developing, and deploying successful software projects. His extensive experience enables him to provide valuable guidance and innovative insights, resulting in cost-effective solutions that improve patient outcomes.

Mr. Kedziora received his M.B.A. from West Chester University and a Bachelor of Science in Computer Science from Duquesne University, where he received the Excellence in Computer Science Award. He has spoken at numerous technical and healthcare conferences on a variety of topics and has written or co-authored multiple articles focused on healthcare information technology, several published in peer-reviewed scientific journals.

Connect with Richard Kedziora

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:00] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

[00:00:34] Joe Colantonio Hey, do you want to learn more about how generative AI can really elevate your testing quality in health care, and AI in general? Well, you're in for a special treat, because we have RJ joining us, who is the co-founder of Estenda Solutions, a leader in custom software and data analysis for health care, with over 30 years of experience in designing and deploying software projects. He's passionate about creating cost-effective solutions that really improve patient outcomes. He also holds multiple degrees and is a frequent speaker and published author. As you can tell, Richard brings a wealth of information and knowledge in health care tech to the show, so you don't want to miss it. Listen up. Check it out.

[00:01:11] Joe Colantonio Hey, RJ, welcome to The Guild.

[00:01:15] RJ Kedziora Thanks for having me. I've got to live up to that now.

[00:01:18] Joe Colantonio For sure.

[00:01:19] RJ Kedziora The challenge is on. Challenge accepted.

[00:01:22] Joe Colantonio All right. So it seems like AI really is your sweet spot. At a high level, how did you get into health care, and then how did you get so passionate about AI in health care?

[00:01:33] RJ Kedziora The thing I always find interesting is people seem to forget that AI has been around for a long time, or maybe GenAI just brought it into the common culture. I was fortunate in high school through the 80s that I was involved in technology. I had a computer, was going to go to college for that, to get that Ph.D. in artificial intelligence, but then took a job and started earning a living. I always had my hands in it, though, developing expert systems over the years, and dabbled in multiple fields. Railroad car scheduling was intriguing; I used to work for U.S. Steel way back in the 90s. But I got to health care through some consulting that I was doing, work for a lot of the pharmaceutical companies. I live in the Philadelphia area and it's called the .... All their headquarters are right there, so it was very easy to work for a bunch of those different companies over the years as a consultant. And then, trying to figure out where I wanted to go in life and career and make an impact on the world, health care really was it. I'm not a doctor, I don't play one on TV, but I understand technology, and that technology is about people. So now, through Estenda, the company we founded 20-odd years ago with a good friend, Drew Lewis, we can develop systems that help lots of people. Doctors, nurses, they do incredible things, but it's an n of 1. It's one person making a difference, and they do make a difference. But I can help develop systems that are impacting more and more people, which is just an amazing feeling when you go to sleep at night.

[00:03:17] Joe Colantonio Love it. As you're developing software for these types of companies, do you keep quality in mind? Is testing a big part of what you all do? How does that work, and how is it specific to health care? Maybe it's more critical than, say, a simple application that's just a website or something.

[00:03:35] RJ Kedziora Testing and quality is at the heart of everything we do, for exactly the reason you just said: it is about people. It's about their health and wellness. And our typical project is not billing. It is important to get the billing right, but that's not our typical project. We focus more on what I think of as the innovative patient side of things, dealing with data, particularly as it comes off of medical devices: blood pressure, blood glucose, your heart rate. We take that information and marry it together with the medical record information, your labs, and medications to be able to make recommendations. We work with startups and it's like, we have all this data, let's find patterns and show them to the doctor. Well, you need to do a little bit more than that these days, because doctors don't have a lot of time. Testing and the quality of those applications is paramount. Estenda is ISO 13485 certified, which is a standard that says we have a good software development process: well documented, SOP-driven, template-driven, checklist-driven. And as we do code reviews, it's like, let's make sure we are asking the questions each time to make sure that we are putting out a quality product. And then, even beyond the actual software testing, there's the algorithm testing, and things usually go into a clinical trial. We have Ph.D.s on staff that can help design those clinical trials to make sure the solutions do work in the real world.

[00:05:16] Joe Colantonio I used to work for a huge health care company, and I always thought, this is awful because it's really hard to innovate. I don't know if it's because it was an enterprise or because it's so regulated. How hard is it, then, to create software that is innovative and still addresses everyone's concerns about getting audited and things like that?

[00:05:35] RJ Kedziora And we get audited all the time. We do self-audits. We have independent third parties that come in and audit us. Customers audit us. It always makes it interesting, but that keeps us honest. It's like, okay, we're going to go through this process, we're going to do it right. And we do have these conversations, particularly with the startups that are trying to just get into health care, and they're like, wait, this is what we need to do? And it's like, yes, you need to have this documented process. You can't just start writing software and see what happens, because there is that level of scrutiny, and it can impact the health of people. There are different levels of that, obviously; for some systems that are not directly impacting patient health, there's less of a threshold. But yeah, it does make a difference, and it doesn't squash innovation. You just have to be aware of this up front as you're developing these solutions and thinking about it. The FDA, if you need to go that far and you're going to need FDA approval to sell your particular product, whatever you're putting on the market, they're your friend. They're there to help you, to make sure that you're doing this right, so that you're not surprised. If you do think you're going down this road, talk to the FDA. There are lots of consultants out there, obviously, that can help you understand that journey, us included. Just make sure that you're aware of what you need to do. I wouldn't say it squashes innovation by any means. You just have to be aware.

[00:07:12] Joe Colantonio I always think this area is also ripe for disruption, especially with AI. Do you see AI taking us to a new level, a new era of health care, where maybe we can get insights, more cost-effective health care? Or is that pie in the sky, and it's still bureaucracy and roadblocks?

[00:07:32] RJ Kedziora We're starting to see it already. Health care is the last industry to really embrace data. There's a lot of technology in health care, the MRIs, the imaging machines, X-rays, incredible technology from that perspective, but it's the last industry to really embrace data. Now that we have the medical record systems out there that are capturing more and more information, and all of the wearables that are on the market, just the sheer number of them, the amount of information that's available to the professionals is opening up a world of possibilities. And on that note of being cautious in health care, the early use cases are around note taking and summarizing content. Low risk. As you go into any project: what is the risk of this project? What is going to be the impact of it? That drives your overall strategy. If I'm making clinical decision algorithms to treat cancer, that's one thing. If you're just saying, okay, here are your steps for the day, there are different thresholds of what you have to worry about and think about.

[00:08:45] Joe Colantonio Are there specific use cases where you've seen generative AI being applied to health care? For example, back in the day, we used to have to do translations into different languages. We had an Excel sheet, we'd go to an offshore company that knew the language and say, this is how it should look, and then we'd go look at the application and make sure it matched. It seems like with AI it would be very easy to just say, take this UI, translate it into this language, and boom, I have the correct translation. Or is that not so easy, or has that already been done for a while?

[00:09:16] RJ Kedziora The biggest, earliest use case that's getting picked up quickly, I referenced it a second ago, is the idea of ambient listening. Doctors, nurses, health care professionals don't want to sit there and type on a keyboard and handle their notes. Let the technology listen to the conversation in the room, take those notes, and enter the information into the medical record system. That's quickly becoming, I would think, a top use case in medicine, and it's being rolled out faster and faster these days to free up doctors to do what they want to do. I've heard it called pajama time: when you're done seeing patients and you've gone home, after dinner, at the end of the night, now you're entering all your notes into the system. Instead, go have fun with the family, go out with friends, and let the technology take that on for you. Low risk again. And then likewise, there's summarization of that information in the EMR, which is just interesting. You referenced the idea of language translation, and I think that is a powerful use case. But as more and more people start looking at what these systems are capable of, you have to think about the training data. These systems have gone out and just vacuumed up everything on the Internet, and the vast amount of information on the Internet is in English. There have been various research reports showing that translation into, say, Spanish or German or Japanese is not as good as English because of that smaller training dataset. People are taking this challenge on and creating better and better systems. But that was one of my early assumptions. My wife happens to be a nurse manager at a practice and they do bring in language translators, and I was like, oh my God, GenAI is going to solve your problem. But you're not able to just do that. That's where you can't rely on it, because if you don't know Spanish, it's going to sound good to you. But then that Spanish speaker is going to be like, what are you talking about? It's going to lose those nuances that a native speaker, or someone with the ability to translate between the languages, would catch. I'm not going to pick them up. I don't know Spanish. I'm like, hey, it did a great job, but it might not have. You do have to be cautious of that. We're still in the early days. It's funny to think about, but we are in the early days, maybe two years now since GenAI hit the market, and we're still testing and trying things out and figuring out what it is good at.
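Richard's caution about translations that merely sound good to a non-speaker suggests one simple safety net testers sometimes add: a round-trip (back-translation) check. The sketch below is a minimal illustration of that idea, not Estenda's process; `translate()` is a hypothetical stand-in for whatever translation model or service you call, and the similarity threshold is arbitrary. A native speaker still has the final say.

```python
from difflib import SequenceMatcher


def translate(text: str, source: str, target: str) -> str:
    """Hypothetical wrapper around your translation model or service."""
    raise NotImplementedError("plug in your own model or API here")


def back_translation_check(original_en: str, threshold: float = 0.6) -> bool:
    """Translate EN -> ES -> EN and flag translations that drift too far.

    A crude heuristic: it catches gross meaning loss, not lost nuance,
    so it routes suspicious strings to a human reviewer rather than
    replacing one.
    """
    spanish = translate(original_en, source="en", target="es")
    round_trip = translate(spanish, source="es", target="en")
    similarity = SequenceMatcher(
        None, original_en.lower(), round_trip.lower()
    ).ratio()
    return similarity >= threshold


# Example (once translate() is implemented):
# ok = back_translation_check("Take one tablet twice daily with food.")
# print("passes rough round-trip check" if ok else "needs human review")
```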

[00:11:50] Joe Colantonio How do you avoid overreliance, then? Oh, it's the all-powerful oracle, I'm going to apply it. Its translation is beautiful, I'm sure. It's an AI, it's an LLM, it's been trained. I'm rolling with it.

[00:12:03] RJ Kedziora I think of it as an intern. Think of it as that first-year person. It's very capable, but it also sounds very trustworthy, very authoritative, when it just spits something out. It's very easy to be like, okay, all right. But you have to go back and double-check it. We do use it a lot, huge advocate for it, but it's an 80% solution. Never just sit there and be like, yeah, okay, let's add X, Y, and Z. There are various stories where lawyers and other professionals have gotten in trouble because they just took it at face value and didn't cross-check the information. In the case of the lawyer, it made up case references and the lawyer didn't go check that those case references were real. You have to be careful of it from that perspective. Same thing in the field of medicine. There's a story from a couple of months ago: someone asked how many rocks a day they should eat, and it said eating three rocks a day would be healthy. You and I know you're not going to eat rocks, but insert some other food item that's not as good for you, Snickers, candy bars, and maybe it doesn't give you quite the right advice. You do have to be cautious with it, and use it, because it's taking over the world. When Oprah had a special on a couple of weeks ago, that's when I knew, okay, we hit a flashpoint. Now Oprah's talking about it.

[00:13:32] Joe Colantonio You make a good point, though, garbage in, garbage out. Like, how many rocks? Do you have any suggestions for testers to make sure that when they interact with GenAI they're using good prompting techniques so they're getting good output?

[00:13:46] RJ Kedziora Yeah, there are a couple of things to think about when you're testing GenAI systems. One, the hallucinations. Is it just making stuff up? You've got to be aware of that as you're testing, and how do you mitigate against it? Is there bias in the data that was used? If you start out with a generic large language model and then you fine-tune it and train it on your specific information and material, say you work for a major health system, what information do you have available to feed into the model to make it more specific to you? Is it accurately representing that information? You've got to be careful of that as well. The challenge here is that it's non-deterministic. I can ask a question, and five minutes later you can ask the same question again, and you're going to get a different answer. So it's not just testing, it's also monitoring over time to make sure that it is performing as you expect.
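Because the same prompt can return a different answer five minutes later, one pragmatic pattern is to assert on required facts across repeated runs rather than on an exact string. Here is a minimal sketch of that idea, assuming a hypothetical `ask_model()` wrapper around whatever LLM is under test; the run count and the required phrases are illustrative only, and the same check can be rerun on a schedule as the monitoring Richard mentions.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around the LLM under test."""
    raise NotImplementedError("call your model or API here")


def check_consistency(prompt: str, required_phrases: list[str], runs: int = 5) -> dict:
    """Run the same prompt several times and measure how often each
    required fact appears in the answer (1.0 = present in every run)."""
    hits = {phrase: 0 for phrase in required_phrases}
    for _ in range(runs):
        answer = ask_model(prompt).lower()
        for phrase in required_phrases:
            if phrase.lower() in answer:
                hits[phrase] += 1
    return {phrase: count / runs for phrase, count in hits.items()}


# Example: every run of a dosing summary should mention these facts.
# rates = check_consistency(
#     "Summarize the dosing guidance for drug X.",
#     required_phrases=["not for use in pregnancy", "maximum daily dose"],
# )
# assert all(rate == 1.0 for rate in rates.values()), rates
```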

[00:14:52] Joe Colantonio Nice. So in the pre-show, you mentioned health care at a very high level. There are a lot of different verticals within health care, and I assume each vertical has its own standards, its own regulations. Can we use AI to say, hey, we tested this, we're in clinical trials, does this meet all the standards that we have to adhere to? I don't know, maybe that's too vague, but can it help you do something like that?

[00:15:17] RJ Kedziora This is where the bias in the training data comes in, where you mentioned garbage in, garbage out, but it's even down to how you code your medical information at your institution. We've seen examples of this, and the FDA has even called it out, that the industry has to be somewhat self-accountable here: you can't just take an algorithm that was generated in one hospital and move it into another hospital without testing it. What are the specifics of your population? I live in the suburbs of Philadelphia, and there's a very specific demographic of people that live here. Now take that and move it down south, and there's a different demographic, or move it out west, or to Canada or Mexico or Japan. There are different demographics of people. Will the same algorithms that were developed here work for those populations? Probably not, which is why you need to test, and as you're developing them, make sure that you are using a dataset that is representative of a wider population.
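One concrete way to act on "test it on your own population" is to break validation results down by demographic subgroup instead of reporting a single aggregate score. A minimal pandas sketch, under assumed column names (`age_group`, `sex`, `label`, `prediction`) and a hypothetical results file; a real project would use clinically appropriate metrics rather than plain accuracy.

```python
import pandas as pd

# Assumed columns: demographic attributes plus ground truth and model output.
results = pd.read_csv("validation_predictions.csv")  # hypothetical export
results["correct"] = results["label"] == results["prediction"]

# Accuracy per subgroup; large gaps between groups are a red flag that the
# algorithm may not transfer to a different patient population unchanged.
by_group = (
    results.groupby(["age_group", "sex"])["correct"]
    .agg(accuracy="mean", n="size")
    .reset_index()
)
print(by_group.sort_values("accuracy"))
```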

[00:16:22] Joe Colantonio Do you get pushback on using AI, especially with data, because of HIPAA compliance? Like leaking real patient information through test data somehow, or having these LLMs trained on it and all of a sudden they could maybe spit out real patient data when you're prompting them, who knows?

[00:16:39] RJ Kedziora That is, in my opinion, one of the overblown concerns.

[00:16:43] Joe Colantonio Really. Okay.

[00:16:44] RJ Kedziora Of something bad actually happening. I'm not saying it's never going to happen, but it's an overblown concern that if I'm a doctor and I put in some information, somebody else somehow gets at it, and RJ Kedziora, male in his fifties, comes up with some specific information, and then that person knows who I am and can do something with it. I would not enter that specific information. You can use the system when you have a specific question; you don't have to put my name in it to do that. You don't have to put an identifier in there to get meaningful feedback from it. With HIPAA, if you're a health care provider, we sign business associate agreements all the time, and that makes us accountable for the use of this information as well. The large language model companies are now signing business associate agreements. They weren't early on, but they are signing them now, and they're able to hold your data separately so that it's not incorporated into the model, to address these concerns. As they roll out new models, make sure you know what your business associate agreement actually covers. Does it cover the use of the model that just came out last week? Probably not. You do have to be aware of it and think about it. It's all about risk is what it is.

[00:18:10] Joe Colantonio When you're developing software yourself for all these different clients, do you actually use GenAI, maybe to create user stories or test scenarios from code? How do you actually use AI in your day-to-day delivery?

[00:18:23] RJ Kedziora I do. It starts very early on. Think about trying to figure out what your system is going to do, what the user requirements are; you're going to be interviewing the experts or the users of your system. I've been doing these types of interviews for decades, and I know I can do them. But by entering the input parameters into, say, ChatGPT or Claude, it spits out questions a lot faster and gets me going. It's that 80% again: I can then go and tweak those questions and make them different. It's an efficiency thing. So, starting very early on in user interviews: say I'm developing a system that's addressing prostate cancer at a specific institution, with a doctor who has 30 years of experience and has written journal articles. You can give it those articles and ask, what questions do I need to ask him to start getting to the core of the system that we're going to develop? Writing requirements is the same way: you record those interviews with all of the different people, and then the AI systems are able to extract the meaningful pieces of information from them. Again, it's an efficiency thing, making life a lot easier. Lots of opportunities to use it from that perspective. And then once we do have requirements created, we do a lot of human review, but you can also plug them into the system. I always love asking the question, what did I forget? What is the other thing that I'm forgetting? I referenced our process: it's very SOP-driven, template-driven, checklist-driven, to make sure that we're not forgetting all the little things you need to think about when you're developing a system, but you can use the AI to help you with that. There's a standard, ISO 25000, that talks about things like scalability and usability. As I'm writing those prompts, it's like, okay, based on this ISO standard, which covers all of these things, what am I not remembering to ask? And it can put those things together.
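As a rough illustration of the "what did I forget?" prompt Richard describes, here is a minimal sketch that uses the ISO/IEC 25010 product quality characteristics (from the ISO 25000 family he mentions) as the checklist. The `complete()` function is a hypothetical stand-in for ChatGPT, Claude, or whichever model you use, and the requirements text is illustrative.

```python
# ISO/IEC 25010 product quality characteristics, used here as a review checklist.
QUALITY_CHARACTERISTICS = [
    "functional suitability", "performance efficiency", "compatibility",
    "usability", "reliability", "security", "maintainability", "portability",
]


def complete(prompt: str) -> str:
    """Hypothetical wrapper around ChatGPT, Claude, or another LLM."""
    raise NotImplementedError("call your model or API here")


def review_requirements(requirements: str) -> str:
    """Ask the model which questions the draft requirements leave unanswered,
    grouped by quality characteristic."""
    prompt = (
        "You are reviewing draft software requirements for a healthcare "
        "application.\n\n"
        f"Requirements:\n{requirements}\n\n"
        "Using the ISO/IEC 25010 quality characteristics "
        f"({', '.join(QUALITY_CHARACTERISTICS)}), list the questions I am not "
        "remembering to ask, grouped by characteristic."
    )
    return complete(prompt)


# Example:
# print(review_requirements("Clinicians can view a patient's glucose trend ..."))
```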

[00:20:37] Joe Colantonio It's a huge timesaver.

[00:20:38] RJ Kedziora Which is really nice. But then we can continue down the chain. Now I have requirements; now I need to write tests, from the software developer perspective or from a QA perspective. I've done an example a couple of times now where you take a screenshot, drop it into one of the models that has vision capability so it can ingest that image, and say, write test cases for this. That's what truly amazed me months ago when I first did it. Typically, if you have a data input field and you put a little asterisk next to it, which means it's a required field, it knew that, and so it was writing test cases like, okay, username is a required field. It even helps with security testing, SQL injection. You have to make sure that SQL injection is not a problem, and if you're newer to the field, maybe you don't remember to test for SQL injection or you're not familiar with it. You can very easily ask, what is SQL injection? How does it work? to really help you out from that perspective. And then the last thing I'd say is test data generation. Particularly in health care, with the wearables, with electronic medical record systems, there's so much data that you need to be able to test these systems. It's beautiful for creating data for you.
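For readers who want to try the screenshot-to-test-cases flow Richard describes, here is a minimal sketch using the OpenAI Python SDK as one example of a vision-capable model. The file name, model choice, and prompt wording are assumptions, not his setup; other providers expose similar image inputs, and the output still needs the human review he emphasizes.

```python
import base64

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Assumed input: a screenshot of the screen you want test cases for.
with open("login_screen.png", "rb") as f:
    screenshot_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any vision-capable model would do
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": (
                    "Write test cases for this screen. Include required-field "
                    "checks, boundary values, and basic security checks such "
                    "as SQL injection attempts in the input fields."
                ),
            },
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{screenshot_b64}"},
            },
        ],
    }],
)

print(response.choices[0].message.content)
```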

[00:22:02] Joe Colantonio 100% agree, and I love the concept of doing these interviews. It's almost like, if you've ever heard of behavior-driven development, you can create personas of people based on those interviews and then be able to say, okay, as John, I want to do this and this, and probably come up with some really good ideas. So you're not being replaced. It's almost like it's giving you more ideas: okay, how can I make sure I'm creating test cases for all the different things persona John is going to do?

[00:22:28] RJ Kedziora Absolutely. And that's the beauty of it, because it does help build that creativity, that brainstorming. It's like, okay, talking to John, who's a surgeon, generate me ten questions. Okay, generate me ten more questions, and ten more, and it will keep generating more questions for you. There will be repetition in that, but it gets the creative juices flowing and reduces your time to get to the point where you have a good set of questions.

[00:22:57] Joe Colantonio And also, I like the point about security testing and things. You probably could say, hey, what kind of tests am I missing? And it may say you're missing security testing. And if you don't know, like you said, you could say, okay, give me some examples of good security tests that I can run against this specific application, for sure.

[00:23:12] RJ Kedziora And it's fascinating. Because we do health care and work with wearable data, and I happen to do triathlons, I'm very much focused on heart rate and things like that. I've been able to say, okay, I have a person who's out of shape and 60 years old; generate me a heart rate profile of that person running a mile. And it generates that data: for every minute, here's a specific value. It starts out at, say, 90 beats per minute, goes up, and then towards the end of the run it comes back down. Now I have a nice profile of this person's heart rate over time. Then I say, okay, generate me the same data for a 30-year-old athlete who's in really good shape, and it just amazes me, because it drops the heart rate down. It doesn't understand heart rate, it doesn't understand exercise parameters or age or anything like that, but it was able to generate a good representative dataset that differentiated between an out-of-shape 60-year-old and a fit athlete in their 20s or 30s. That's very meaningful for helping test systems.
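Teams that want deterministic fixtures to sit alongside LLM-generated data can approximate the same shape Richard describes (ramp up from resting, hold near an effort ceiling, recover at the end) directly in code. A minimal sketch; the resting and peak values below are illustrative test parameters, not clinical guidance.

```python
def heart_rate_profile(resting_bpm: int, peak_bpm: int, minutes: int) -> list[int]:
    """Synthetic minute-by-minute heart rate for a steady run:
    ramp up over the first third, hold near peak, then ease off at the end."""
    profile = []
    ramp_end = max(1, minutes // 3)
    for minute in range(minutes):
        if minute < ramp_end:                      # warm-up ramp
            frac = (minute + 1) / ramp_end
            bpm = resting_bpm + frac * (peak_bpm - resting_bpm)
        elif minute >= minutes - 2:                # brief cool-down at the end
            bpm = peak_bpm - 0.3 * (peak_bpm - resting_bpm)
        else:                                      # steady effort
            bpm = peak_bpm
        profile.append(round(bpm))
    return profile


# Out-of-shape 60-year-old vs. a fit 30-year-old (illustrative values).
print(heart_rate_profile(resting_bpm=90, peak_bpm=150, minutes=12))
print(heart_rate_profile(resting_bpm=60, peak_bpm=175, minutes=8))
```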

[00:24:28] Joe Colantonio Now that's interesting. I know someone who got into running, and he had a coach that slowly eased him in with a plan: this week we do this, day one, day two, day three. It almost sounds like you could use this as a coach as well, pretty much.

[00:24:40] RJ Kedziora Yes. You can do that.

[00:24:43] Joe Colantonio Nice. With multimodal coming out, do you find that sometimes you can feed it an image of a screen and say, can you help me from a user's perspective? Can you tell me how my user would interact with this, or anything I should test to make sure my user is going to have a good experience? Do you get that broad?

[00:25:01] RJ Kedziora Yeah, we do. Partially it started out just as curiosity, like, will this actually work? And it does; it's fascinating. It's able to differentiate the fields. And I think a key thing is that if it's not interpreting something correctly, what is that telling you? Is it well designed? Do we have to reconsider how we're using or positioning a particular element if it didn't recognize it? Maybe it is still a good design and the AI just couldn't interpret it properly. But it should cause you to ask that question: if the AI system can't handle this, what is a human going to do with it?

[00:25:42] Joe Colantonio 100%. So RJ, for your company itself, maybe step back a little to what you all do. If someone's listening and thinking, I'm trying to develop a health care application, maybe I need a little help here with my AI strategy or my development strategy, is that something you could help them with?

[00:25:56] RJ Kedziora Yeah, Estenda is a professional services company, so we don't sell products. We help other companies develop their products and develop strategies. We work with a mix of large corporations and government agencies, particularly their R&D departments. If we work with health systems, it's going to be with the Ph.D. researchers, usually very cutting edge, which is where our sweet spot is. You might not quite know what you want to do; we're going to help you figure out the system and how it's going to operate. We've had a couple of clients over the years put our names on patents because we've been so much a part of that process of creation. We never own anything; you as the client own everything, but it lets you explore lots of things. And as we have those AI conversations, which is where a lot of conversations start today, one of the first questions is always around, what's your data strategy? What's your quality strategy? Some companies, particularly on the startup side, don't have that figured out yet, and you're not going to get very far if you don't have your data strategy and your quality strategy in place. As you start thinking about AI, it starts with the data. Garbage in, garbage out, like you said.

[00:27:15] Joe Colantonio For sure. What do you consider cutting edge? Is there anything you see on the horizon? GenAI, I get it, blah blah blah. But is there something where you go, okay, people need to be aware of this because it's about to take off, or anything you've been messing around with in the lab?

[00:27:27] RJ Kedziora The big aspect of where all this is going, I think, is the intersection of wearables and AI. More and more of these devices are coming out; they're more and more capable, generating more and more data. It's the intersection of the wearables and the AI to interpret that data and give it back to the person. It's getting care out of the four walls of the hospital, for me as an individual, and, as the population ages, this concept of care at home. As my parents are getting older, fortunately they're still doing pretty well, but you don't want to have to move them into an assisted living home. They're very comfortable in the home where they live. How can we apply these tools, the technology, to understand their living environment? That's where a lot of this is going, beyond the four walls of a hospital. So you know your mother got out of bed this morning, that she used the facilities, she brushed her teeth, she opened the refrigerator, she turned the coffee on, she turned the TV on. She's having a good, normal day. But you're not sitting there with cameras in the room observing her, so she doesn't feel like she's being watched, and you're protecting her privacy a little more at home where she's comfortable. And then one day it's like, okay, wait a minute, it's 9:00. She's usually up at six in the morning. She didn't get out of bed, she didn't turn on the coffee, she didn't turn on the TV. Let me pick up the phone: hey, Mom, how are you doing? A very simple use case of that technology, which is where this is going.

[00:29:03] Joe Colantonio All right. So this may be a little wild, but have you seen the clips of the Tesla robot? How close are we to caregiving where not only do we have wearables, but we can also use some sort of robotic assistant? I don't have kids, so I'm betting on, or hoping for, these robots to really do that for me in 20 or 25 years.

[00:29:21] RJ Kedziora The technology can help you. I am deeply fascinated, particularly by what Elon did recently. It does turn out there was a little magic behind the scenes in what he was showing, and it wasn't as straightforward as it looks in those videos. But to his point, the reason he was doing it, and the reason he was comfortable doing it, was: look at how far we've already come. This is around the corner. And it really is. Japan as a culture has really embraced this, so they are at the point where they do have robots in the home to help with care. They're not totally autonomous, not just running around like the Jetsons or anything like that, not at this point. But look to Japan if you're really curious where this tech is going and what it's becoming capable of; Japan is really embracing the technology. It's coming. It's not quite right around the corner, but it's not that far down the road to having a robot provide care at home. And you have to think about it like self-driving cars. A lot of people are scared or apprehensive of self-driving cars; you don't have control anymore. But what's better at driving the car, you or the computer that's in the car? And it's the computer that's better at driving.

[00:30:42] Joe Colantonio 100%. Okay, RJ, before we go, is there one piece of actionable advice you can give to someone to help them with their AI testing efforts? And what's the best way to contact you or learn more about Estenda?

[00:30:52] RJ Kedziora To get a hold of us, our website is Estenda.com, and my LinkedIn always works; it's RJ Kedziora. I'd love to have these conversations. It always fascinates me what people are coming up with and thinking about. My one piece of advice is: use it. It's that simple. But you need to do a little learning and education. There was this idea around crafting the right prompts, how do I write a better prompt? One of the fascinating things is to just ask ChatGPT or Claude or any of these solutions to write you a better prompt. You don't need to become a prompt expert anymore. Ask it to help you. Ask it to ask you questions to get to a better answer. If you're not sure what to ask or how to use it, ask it to help you, and it will, which is one of those things that's amazing. And lastly, I would say it's not going to replace you, but people using it are going to replace you. So get to know it.

[00:31:55] Thanks again for your automation awesomeness. The links to everything we covered in this episode are over at testguild.com/a521. And if the show has helped you in any way, why not rate it and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:32:29] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com where you become part of our elite circle driving innovation, software testing, and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.

[00:33:13] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
A person is speaking into a microphone on the "TestGuild News Show" with topics including weekly DevOps, automation, performance, and security testing. "Breaking News" is highlighted at the bottom.

Playwright Postman API Convertor, TestZeus, AutomationGuild 25 TGNS143

Posted on 12/02/2024

About This Episode: Do you know what must attend online automation conference is ...

Daniel Knott TestGuild Automation Feature

Removing Pain Points in Mobile Test Automation with Daniel Knott

Posted on 12/01/2024

About This Episode: In Today's special session, we are honored to have Daniel ...

Andrew Duncan TestGuild DevOps Toolchain

How to Building High-Performing Engineering Teams with Andrew Duncan

Posted on 11/27/2024

About this DevOps Toolchain Episode: Today's guest, Andrew Duncan, founder of Vertice Labs, ...