About This Episode:
Manual regression testing isn’t going away—yet most teams still struggle with deciding what actually needs to be retested in fast release cycles.
See how AI can help your manual testing now: https://testguild.me/parasoftai
In this episode, we explore how Parasoft’s Test Impact Analysis helps QA teams run fewer tests while improving confidence, coverage, and release velocity.
Wilhelm Haaker (Director of Solution Engineering) and Daniel Garay (Director of QA) join Joe to unpack how code-level insights and real coverage data eliminate guesswork during regression cycles. They walk through how
Parasoft CTP identifies exactly which manual or automated tests are impacted by code changes—and how teams use this to reduce risk, shrink regression time, and avoid redundant testing.
What You’ll Learn:
Why manual regression remains a huge bottleneck in modern DevOps
How Test Impact Analysis reveals the exact tests affected by code changes
How code coverage + impact analysis reduce risk without expanding the test suite
Ways teams use saved time for deeper exploratory testing
How QA, Dev, and Automation teams can align with real data instead of assumptions
Whether you’re a tester, automation engineer, QA lead, or DevOps architect, this episode gives you a clear path to faster, safer releases using data-driven regression strategies.
Exclusive Sponsor
Sponsored by Parasoft — Empowering QA teams with data-driven Test Impact Analysis.
If your manual regression cycles feel too long, too noisy, or too risky, Parasoft’s Test Impact Analysis gives you clarity on exactly which tests matter. By mapping real code changes to specific test cases, teams can cut redundant testing,
focus their effort, and release with confidence.
Try Parasoft’s Test Impact Analysis tools for manual testers: https://testguild.me/parasoftai
About Wilhelm Haaker

Wilhelm Haaker, Director of Solution Engineering at Parasoft, manages a global team of solution engineers dedicated to Parasoft’s web and cloud solutions. His team helps organizations modernize their software development
and testing processes and optimize test automation for Agile transformation, API first, and cloud migration initiatives.
Connect with Wilhelm Haaker
- Company: www.parasoft.com
- Blog: www.parasoft.com/blog
- LinkedIn: www.wilhelmhaaker
About Daniel Garay

Daniel Garay is an experienced Quality Assurance Director with a demonstrated history of working in the computer software industry. He is skilled in Scrum, test cases, Agile methodologies, and test automation.
Connect with Daniel Garay
- Company: www.parasoft.com
- Blog: www.parasoft.com/blog
- LinkedIn: www.danielgaray
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:02] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.
[00:00:35] Joe Colantonio Hey, is your team still slogging through massive regression suites, feeling the pressure of shortened release cycles, or wondering how to trust what's safe to skip? If so, this episode is going to save you hours, maybe even days, on every release. Today, I'm joined by Wilhelm and Daniel from Parasoft, and we're digging into something every tester deals with but very few actually optimize: manual regression testing. Listen and discover how to run fewer tests while staying confident nothing critical slipped through. Why are most teams still guessing what to test, and how can data finally replace those guesses? And how do you know exactly what manual tests or automated tests are impacted by each code change? Wilhelm even walks us through a real-world example using Parasoft's continuous testing platform, showing how code coverage and test impact analysis completely reshape the regression process. If you've ever worried about missing a defect that kept you up late at night, or wasting time retesting everything just to feel safe, you're not going to want to miss this one. Listen all the way to the end, check it out.
[00:01:43] Hey, before we get into it, this episode of the Test Guild Automation Podcast is sponsored by the awesome folks at Parasoft, empowering QA teams with data-driven test impact analysis. If your manual regression cycles feel too long, too noisy, or too risky, Parasoft's test impact analysis gives you clarity on exactly which tests matter. By mapping real code changes to specific test cases, teams can cut redundant testing, focus their efforts, and release with more confidence. But as you know, I always say seeing is believing. So try it for yourself using that special link down below. Support the show and let me know how it worked for you.
[00:02:23] Joe Colantonio Guys, welcome to The Guild.
[00:02:26] Wilhelm Haaker Hey, good to be here.
[00:02:27] Daniel Garay Good to be here. Thank you for the invite.
[00:02:30] Joe Colantonio Absolutely. Really excited about this topic. I know we're gonna go over how a lot of people still struggle with manual regression testing, and the challenge is how to get ahead of it. I think this is a really timely topic, especially in the age of AI. I guess before we get into it, maybe a little bit more about yourself. So Wilhelm?
[00:02:45] Wilhelm Haaker Sure. Yeah, so Wilhelm Haaker, I'm the director of solution engineering here at Parasoft, focused on our functional solutions. Everything from API testing, service virtualization, code coverage, and happy to be here today.
[00:03:00] Joe Colantonio Great to have you. Hey Daniel, how you been?
[00:03:02] Daniel Garay I'm good. I'm good, Joe. So I'm director of QA here. Been in the industry for over twenty years, which is crazy to think, but it's been twenty years now. Managing the team here in the U.S. and Poland, responsible for the metrics that we need to meet on each release. We obviously have a good collaboration with development, and obviously leading the collaboration moving forward within the QA team.
[00:03:36] Joe Colantonio Love it. So you mentioned you've been doing this for twenty years. I've been doing it a little bit longer. The funny thing is, the topic we're talking about is manual regression testing. Why is it such a challenge, especially as releases get faster and faster, especially with AI and vibe coding? I've been vibe coding and I just go right to production, even though it's not mission critical. I should know better. I can imagine what people are doing in the real world. Maybe a little bit about what makes manual regression testing so challenging still, to this day, with fast release cycles.
[00:04:06] Daniel Garay This industry is always talking about, oh well, manual regression testing is going out the door. But I mean, there's always still some form of manual regression testing, right? And when you're releasing, the first thing you think about is the scope of what you need to manually test. I mean, you have a large set of tests that you need to actually execute and run, but you only have a small amount of time. I've never heard upper management come to me and say, hey, you know what? You have all the time in the world, make sure this release is good, and we're good. I mean, I don't know. I've never heard those words. I don't know if Wilhelm, if you, Joe, you have, but from the QA side, I've never heard that. It's about defining the scope of manual testing, the tests that you want to run to make sure that the quality of the product is at a solid state. Always remember, from a QA perspective, it's a black box. We don't see the code. We don't know what changes under the hood. We have to define the scope of testing with our experience, our knowledge of the product. And from there, try to fit it in that time frame that upper management gives us for the release. That's always the most difficult portion of it. It is defining that scope within that time frame, because that time frame is always minimal. It's just trying to reduce that amount of stress and everything else, trying to get that in there. I mean, in my opinion, that's always the most difficult part: defining that scope and hitting all the spots that you need to hit, right?
[00:05:52] Wilhelm Haaker And I think it's not just the time frame also, but the how often you're doing testing as well. And it's not always just a normal release cycle, right? Sometimes you'll see a bug fix that needs to go out quickly into production and it's not your normal release cycle. And so that time frame is even more condensed in those scenarios. And so that time constraint really forces teams to sort of think about that scope, right, or that subsetting of the tests very critically to balance, how much testing we're able to do to fit within the time frame versus what is the perceived quality that we can try to reach.
[00:06:40] Joe Colantonio Do you still see testers trying to justify why they need to do more regression testing? Is it still a hard sell? Like a lot of times when I was working, they saw it more as like a cost center rather than a value add. How can we flip it to say, it's needed and we're helping? We're adding to I don't know, ROI or something in the long run because what we're doing is baking in value, not necessarily taking away precious time for you to get it out the door.
[00:07:08] Wilhelm Haaker I think that depends on the project and the company and industry to some degree. Everyone has a different risk tolerance when it comes to defects in their software. Certainly I think that's an easier argument if someone's gonna die if there's a bug. And if it's like a hobbyist vibe-coded side project that has low risk, then the tolerance for defects is higher: test in production, why bother? So I think everyone has a different answer to that question depending on what the software is that they're testing.
[00:07:50] Joe Colantonio Well, I think risk is the right answer. That's a good point because, like you said, if you're vibe coding something like me, who cares, push it to production. But I used to work in healthcare, couldn't do that. Hopefully they're not doing that now. For sure.
[00:08:02] Daniel Garay I think it's all about minimizing the risk, right? I mean that's the whole point of quality assurance. We're trying to minimize the risk as best we can. The best you could, I mean the whole purpose of QA is to minimize the risk. If we could do that the best we can, then we go forward from that.
[00:08:20] Joe Colantonio Speaking about risk, a lot of times testers think if I run X amount of tests, thousands of tests, then it'll be better. But nowadays, 'cause we do need to release faster, how can testers maybe run fewer tests but still feel confident that nothing critical slips through? Is it 'cause they looked at risk first and they know these tests cover those risks, or is there any kind of rule of thumb that you use?
[00:08:40] Wilhelm Haaker It comes down to balancing the time, right? And so to sort of be able to cut down how many tests you're running and still maintain high confidence, you really have to have a data-driven approach. And so one strategy for kind of applying data to this problem of scoping what tests to run appropriately is to leverage code coverage. And normally code coverage, you think of it as a developer metric. You capture it when you do unit testing. But code coverage can be captured from any type of testing even system level testing, browser testing. And so measuring code coverage while tests are taking place lets you sort of have a data point so that when code is changing from one time you ran the tests to now the test environment has a new build. Some code changed in that build, you can look at data to say, okay, well, based on what code changed from the last time I did my testing, what subset of tests are impacted by the code change. It's not vibes, it's not feelings, it's not committee. You're looking at code coverage data and code changes, and the system tells you like these are the tests that have been impacted, you should probably rerun those.
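To make the idea Wilhelm describes concrete, here is a minimal sketch of the concept, assuming you already have per-test coverage data and a list of what changed in the new build. The names and data structures are hypothetical illustrations, not Parasoft's implementation: the impacted subset is simply the tests whose previously covered code intersects the change set.

```python
# Hypothetical sketch of test impact analysis; not Parasoft's implementation.
# coverage_map: code units (files/methods) each test exercised on its last run.
# changed_code: code units that changed between the previous build and this one.
from typing import Dict, Set

def impacted_tests(coverage_map: Dict[str, Set[str]],
                   changed_code: Set[str]) -> Set[str]:
    """Return the tests whose previously covered code overlaps the change set."""
    return {test for test, covered in coverage_map.items() if covered & changed_code}

coverage_map = {
    "bill_pay":       {"BillPayService.pay", "AccountRepo.find"},
    "transfer_funds": {"TransferService.transfer", "AccountRepo.find"},
    "new_account":    {"AccountService.create"},
}
changed_code = {"BillPayService.pay"}  # what changed between V1 and V2

print(impacted_tests(coverage_map, changed_code))  # {'bill_pay'}
```

In this toy example only the bill pay test touches changed code, so it is the only one flagged for rerun; everything else keeps its previous result.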
[00:10:11] Joe Colantonio All right, sorry. This just would be my pet peeve when I was an automation engineer.
[00:10:16] Wilhelm Haaker Yeah.
[00:10:17] Joe Colantonio They check in code, right? And they say run all your tests, right? You don't know. You can have like a test that has no impact whatsoever, but it fails, and you're trying to debug it while it's failing, but it has nothing to do with what was checked in. I'm not saying it doesn't matter, but it kind of makes what's going on really noisy. And a lot of people miss out on what you just said. It'd be really nice to know that you checked in this code and therefore tests A, B, and Z will be able to test that particular code change. And that would save time. Am I right or wrong? Or why aren't more people doing this?
[00:10:50] Wilhelm Haaker I think part of it is there's some setup involved, and the open source options that are available are kind of limited. Code coverage is always thought of as like a reporting metric. And so this is kind of like a unique or interesting application of code coverage to solve a problem around test optimization or test subsetting in a data-driven way. And so it's not really about, from a tester's perspective, worrying so much that there's code coverage in the picture whatsoever. That's kind of in the background. From the tester's perspective, it's like, what are my tests? When is there a new build in the test environment? And what tests do I need to rerun? The fact that code coverage is being collected in the background is sort of irrelevant from the tester's standpoint.
[00:11:44] Daniel Garay I don't think it's irrelevant. I think it's just an additional benefit, right, Wilhelm? I mean, yeah, we're not considering that, but it's just an additional benefit because the higher the code coverage, we know there's less risk involved. It doesn't mean we have to do less testing, it just means there's less risk involved. And again, as I stated before, it's all about reducing the risk. We want to reduce that risk.
[00:12:12] Joe Colantonio 100% agree. Well, I'm gonna put you on the spot. I was wondering if you can actually walk us through a day in the life of a tester. I know we talked about test impact analysis, like how that all works. What do they focus on when it involves regression testing? A real-life kind of view into what that might look like.
[00:12:29] Wilhelm Haaker Okay, perfect. Here we're looking at a list of tests that I've imported for my demo application, Parabank. If you've seen a Parasoft demo before, you've probably seen a lot of Parabank. The question is, okay, I did a test run on V1 of my Parabank system yesterday. And I've got most of the tests passed, and now I have this request loan test that failed. And so let's think about it from that perspective of we don't have a lot of time. And now development pushed V2 of Parabank into the test environment. And I mean, it is a demo, so there are only a few tests in here, but you could imagine the test suite could be much larger for a large system. And so the question is, I can't rerun everything. What should I rerun? And so that's what the system really allows us to identify. And so here I can see that my bill pay test has been impacted by code changes. And the other tests are kind of inheriting the test status from the previous test run. And so as a tester now, I know that I don't have to run the transfer funds or new account tests. I just have to focus on bill pay and probably request loan, because that test failed before and hopefully development fixed that. If I were to come in and say, okay, let's do a new regression session, and this is for V2 of Parabank. And I start that session, I go, okay, well, I want to start my bill pay test. I start the test and I come over and I start testing my application. And what's happening in the background while I'm interacting with my Parabank application here is that we have a coverage agent that's deployed with the application. All of the code that's being exercised as I'm performing these actions is being associated to this bill pay test. Now I stop the test, say we passed, come in, and okay, let's do request loan, and we'll fill in some data here for request loan. And so some things to think about: parallel testing is supported, right? So if you have a QA team and they're testing the application at the same time, you want to be able to differentiate which code coverage is coming from which test or which tester. That's something that is part of the solution. Microservices architectures are supported as well. So if you have several services and they're all kind of grouped together as the system that you're testing, it's not like one monolithic application container.
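A rough sketch of what that background attribution could look like conceptually is shown below. The class and method names are invented for illustration and are not Parasoft CTP's or the coverage agent's API; the key idea is that coverage hits are bucketed by the tester and test currently in session, which is what makes parallel testing workable.

```python
# Illustrative sketch only: not Parasoft's coverage agent or CTP API.
from collections import defaultdict
from typing import Dict, Set, Tuple

class CoverageSession:
    """Attributes exercised code to whichever test each tester is currently running."""

    def __init__(self, build_version: str):
        self.build_version = build_version
        self.active: Dict[str, str] = {}                        # tester -> running test
        self.per_test: Dict[Tuple[str, str], Set[str]] = defaultdict(set)

    def start_test(self, tester: str, test_name: str) -> None:
        self.active[tester] = test_name

    def record_hit(self, tester: str, code_unit: str) -> None:
        # Called for each code unit the instrumented application executes.
        test = self.active.get(tester)
        if test:
            self.per_test[(tester, test)].add(code_unit)

    def stop_test(self, tester: str) -> None:
        self.active.pop(tester, None)

session = CoverageSession("parabank-v2")
session.start_test("daniel", "bill_pay")            # testers can run sessions in parallel
session.record_hit("daniel", "BillPayService.pay")
session.stop_test("daniel")
print(dict(session.per_test))                       # {('daniel', 'bill_pay'): {'BillPayService.pay'}}
```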
[00:15:42] Joe Colantonio All right, so really quick question. You mentioned something about testers working on multiple tests at the same time because you can run in parallel. I used to work at a company that had eight sprint teams, and so there was a lot of people doing the same tests, but they didn't necessarily know because they weren't talking to each other. Would this be able to pinpoint, hey, we're running all these tests, but these eight teams are running 40% of the same tests that we could just get rid of and make the sprint even quicker, almost, for the verification at the end of the validation?
[00:16:16] Wilhelm Haaker Certainly, we measure that data or collect that data to do analytics like that. One part of the solution that I'm not showing here is sort of all of the reporting of test results and code coverage of that. From a test impact analysis perspective, we're really just focusing on what tests do I need to run or rerun. There's a whole other reporting side to this that we could get into. But yeah, I mean we collect the data, so that's something you could do.
[00:16:54] Joe Colantonio Does it know about duplications? 'Cause you were just recording a step. Will it know, like, you don't need to run this other test 'cause it's basically touching the same back end or the same code that was checked in? There's no need to do all these tests when that's covered by this one.
[00:17:08] Wilhelm Haaker Right. That's part of the data we collect, right? So for each test case that we run with the solution in place, we see what code is getting associated to that test case. And so you could imagine if you have lots of overlap, we have that data to show the overlap. But in terms of coming back to that scenario of I don't have enough time and I need to figure out what are the right things to test: now that I finish this test session for just bill pay and request loan, I know that for the current version of Parabank that's in the test environment, I've verified all these test scenarios are passing across multiple builds. Some of these tests were verified in V1; bill pay and request loan were verified in V2. And I can be confident that I don't need to rerun these other tests in V2 because the code didn't change for those tests between those two builds. And so that's really what gives you that optimization and confidence to say, well, maybe I should consider bringing this into the sprint activities and not just doing manual regression testing at the end. If the system can tell me, hey, in each sprint, development is working on this set of user stories, and from our large suite of regression tests, we know which ones to rerun based on code change, you might even be able to start bringing manual regression testing earlier in the process as well.
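The same per-test coverage data can also surface the kind of redundancy Joe's eight-teams question raises. Here is a hedged sketch of one way to flag heavily overlapping tests; the overlap threshold and names are illustrative assumptions, not a description of a specific Parasoft feature.

```python
# Hypothetical sketch: flag test pairs whose covered code overlaps heavily.
from itertools import combinations
from typing import Dict, Set, List, Tuple

def find_overlaps(coverage_map: Dict[str, Set[str]],
                  threshold: float = 0.8) -> List[Tuple[str, str, float]]:
    """Return pairs of tests whose covered code has Jaccard overlap >= threshold."""
    overlaps = []
    for (a, cov_a), (b, cov_b) in combinations(coverage_map.items(), 2):
        union = cov_a | cov_b
        if union:
            score = len(cov_a & cov_b) / len(union)
            if score >= threshold:
                overlaps.append((a, b, round(score, 2)))
    return overlaps

# e.g., find_overlaps(coverage_map) might report ("team1_bill_pay", "team3_bill_pay", 0.92),
# a candidate pair for consolidation across teams.
```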
[00:19:04] Joe Colantonio What I like about this is I know a lot of teams, they actually have new regression, new code every day. It's almost like you can save so much time because okay, we have all this new code every day, but I know this code is the only code that changed that I need to really test, right? Am I following along correctly?
[00:19:23] Wilhelm Haaker Right. And that's why people try to automate tests, right? With automated tests, it's easier to keep up with the rapid change. But as Daniel was saying, manual regression testing still happens in many projects. We were at a conference recently and I was up on a panel, and I asked the entire room, I said, hey, raise your hand if you're working on a project where you do manual regression testing, and every single hand went up in the room. We were kind of surprised. Despite AI, automation, automate everything, I think we still see that manual regression testing is a part of many projects. And so how do we optimize that as part of the overall strategy? I think that can pay some dividends for a lot of people.
[00:20:20] Daniel Garay Yeah. But even as automation engineers, right? I mean, even automation engineers have to do some form of manual testing before they automate everything. Even automation engineers are doing some form of manual testing. You have to keep that in mind as well. I mean, everybody's doing some form of manual regression testing, even though our industry's like, oh, automation's getting rid of manual testing, AI is getting rid of manual testing. Sorry, I had to put my glasses on. But yeah, there's always gonna be some form of manual testing going on, even though I know everybody keeps saying it's going out the door.
[00:21:08] Joe Colantonio It's a good point. People don't realize automation testing is expensive, especially when you need to do maintenance, if you have like thousands of tests that aren't necessarily as stable as you would think. So even though people say manual testing is expensive, this seems like it could almost reduce automation testing as well, because once again, you're only focusing on the tests that really need testing. Does that make sense? Is that on the right track there?
[00:21:31] Daniel Garay Yes, exactly. I mean, the whole point of this, which is test impact analysis, is to help the tester visualize, to see, okay, this is what's impacted and this is what's not. You visually see that. You have that confidence of looking at it, and I mean, as QA engineers, when we're scoping out what we need to validate and what we need to test, we're not guessing. I would never say it's guessing; it's more that we're using our experience with the product, our knowledge of the functionality, and we define what needs to be tested. And we may even reach out to developers and say, hey, do you think this is what needs to be validated? But with this, it gives us the ability, as Wilhelm mentioned, it's data driven, right? It's data driven, so just like everything else is data driven, code coverage, software development, it's data driven. I mean, all these industries are data driven. Baseball, I love baseball. Data driven. My .... just won the World ... I had to throw that in there. But everything's data driven. Now, you could say manual testing is data driven. Think about that. It's data driven. Now, from a manual testing standpoint, you could see, okay, this is what's impacted, and this is what's not impacted. So now you could actually tell what you need to validate and what you don't have to worry about. From a QA perspective, I think it just alleviates so much of the stress, and it gives you just some more confidence that what you're actually validating is what you need to validate. You don't have to worry about all these additional items. This is what you're testing, this is what will reduce the risk, which we talked about, and you move forward. And you have that confidence, because, I mean, Joe, you don't understand how many, like I said, I've been in this industry so many years. When I test and I'm done, I have sleepless nights sometimes, because did I let something go through the cracks? Right? Everybody has that. And that's, I mean, if you're not involved and you're not dedicated with what you do, then you might not have that. With every release, I'm just worried about it, like, did something fall through the cracks, and hopefully this reduces that. That's the whole point. I'm not saying this is gonna be the be-all and end-all and solve everything, but it reduces the risk.
[00:24:32] Joe Colantonio We're talking about manual testing, but doesn't it help with automation testing too? Because you know you don't have to run these automated tests. I mean, even though we're talking about manual regression, I would think AI can create thousands of AI slop tests that it says you need to run. But unless you have this, visually, to say, nope, you just need to actually run these tests because this is the only one that was impacted, it's gonna save time for the whole team, I would think.
[00:24:57] Wilhelm Haaker Yeah, that's very true. With the solution we're talking about today, we've been primarily talking about manual regression testing, but there's nothing that stops this solution from being applied to automated tests as part of your pipeline as well. When I think about what you were saying earlier about automated tests also having a high cost: if I look at those Selenium test frameworks, even Playwright, the new kid on the block, it's still a lot of maintenance work as the number of tests you have grows. And so reducing, okay, for each build, how many tests I run, in that same data-driven way, I mean, not only does the test run finish faster and feedback get delivered to development faster, but also the amount of maintenance work that you have to worry about for any given build goes down. Because even if you have some tests that need updating and need some maintenance, maybe you don't have to worry about updating them right now because they aren't impacted, and you can update them later when you have more time and those tests need to be rerun. So it helps the automation engineers be more focused, getting the right automated tests to run quickly and reliably as well.
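For the automated side Wilhelm mentions, one common pattern is to deselect unimpacted tests at collection time in the pipeline. Here is a hedged pytest sketch of that pattern; "impacted_tests.txt" is a hypothetical artifact produced by an earlier impact-analysis step, and this is the general idea rather than Parasoft's actual integration.

```python
# conftest.py -- sketch of running only impacted tests in CI, under the assumptions above.
import pathlib

IMPACTED_FILE = pathlib.Path("impacted_tests.txt")  # hypothetical pipeline artifact: one test name per line

def pytest_collection_modifyitems(config, items):
    if not IMPACTED_FILE.exists():
        return  # no impact data available: fall back to running the full suite
    impacted = set(IMPACTED_FILE.read_text().split())
    selected = [item for item in items if item.name in impacted]
    deselected = [item for item in items if item.name not in impacted]
    if deselected:
        config.hook.pytest_deselected(items=deselected)  # report skipped-by-selection tests
        items[:] = selected                              # run only the impacted subset
```

Falling back to the full suite when no impact data exists keeps the safe default, so the optimization only ever narrows the run when it has evidence.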
[00:26:27] Joe Colantonio Right. What Daniel said about being data driven, I think, is helpful because a lot of times, as testers, you're kind of guessing, right? If you're kind of running blind, you're kind of guessing, like, okay, this code was introduced, I assume it's gonna impact this and that, but a lot of times you're wrong. I don't know if you've ever done model-based testing. A lot of times the model is completely not correct, and then you have correct and incorrect assumptions coming out of it. Testers and developers are notorious for not giving good estimates of how long it's gonna take to deliver something. I assume this would help with that as well, right? 'Cause now, like Daniel said, you have real data to say, okay, I know I only need to run three tests rather than 20, so therefore it's only gonna take two days rather than twenty days. I don't know, I'm making it up.
[00:27:10] Daniel Garay Yeah, yeah. It's completely data driven. I mean, as QA engineers, I always look at it like we're kind of the last line of defense. Before it goes out to the public, it kind of falls on our shoulders. So this will help that. I mean, it just eases the whole portion of our testing, to understand, okay, what exactly has changed. We have data behind it. It's not an educated guess, because essentially that's what we're doing. When we define scopes of validation, when we're testing, they're educated guesses when you really get down to it. Now at least it's data driven here, and we've reduced that. And hopefully it gives them some kind of confidence that, okay, we covered what we need to cover, and we move forward. I mean, we have to continue to move forward because we can't just stay static. We're moving forward and we're going on from there.
[00:28:26] Joe Colantonio All right, bad host. Before Wilhelm even showed his screen, what were we looking at? What is this tool? I don't think we even introduced what the tool was or what it was called.
[00:28:41] Daniel Garay I touched on it, but Wilhelm, I'll let you talk about it.
[00:28:45] Wilhelm Haaker So we're talking about Parasoft CTP, which stands for Continuous Testing Platform. It's part of our larger suite of products that covers everything from service virtualization, API testing, and more. And so there's a specific capability for test impact analysis. What I showed earlier, we're looking at the CTP UI from kind of the manual tester's perspective, but automated tests, like I said, are supported as well. Whether you're a Playwright user or a Selenium user, or pick your favorite automation tool for testing, we have APIs that you can plug into your testing process and kind of enable test impact analysis for any testing that you do.
[00:29:45] Joe Colantonio All right, if someone's listening to us and they're like, oh, this sounds incredible, but will it work for me? Or how hard is it to implement? I mean, do they need to have your whole ecosystem? How hard is it to get started? What is the support if they use, say, C# or Java or Python? You're gonna get all these types of questions. What does it work with? How hard is it to get up and running and set up?
[00:30:07] Wilhelm Haaker Yeah, I can answer that. Since the solution is based on code coverage, we have coverage agents that are language specific. And so today we support Java and C#. Spring Boot and Spring applications are very common, and in the .NET world we see a lot of usage as well. Like I was saying, from a test framework perspective, it's test framework agnostic. You can kind of plug it in with any test framework. And then from a setup perspective, CTP is a web server, supports Kubernetes if you want to deploy it in a container in your environment; it's a web app. Have your users log in and import their tests. Maybe you have manual tests that you've defined in Jira Xray or Azure DevOps test plans, for example. You can export those tests, import them into CTP, and then start the journey of collecting or measuring code coverage from the test sessions that you perform, and then have the system tell you, hey, we deployed version two of the app in the test environment. The coverage agent needs to be deployed with it. There's some DevOps work with the engineer that's responsible for deploying your application in the environments. And then, once our coverage agent is part of that deployment process, CTP talks to the coverage agent, finds out what changed, and goes from there in terms of telling you which tests are impacted. I mean, there's some pieces involved to get it deployed, but it's pretty reasonable.
[00:32:05] Joe Colantonio Very cool. All right. So I love speaking with vendors just because you talk to all the verticals. A lot of times people are just pigeonholed looking at their own thing. They don't see what others are doing, what they've accomplished. Have you had any big wins that you've seen teams get out of this tool? I don't know, savings, time savings, quality, or something else?
[00:32:27] Wilhelm Haaker Yeah, I think time savings and improvements on quality are kind of the obvious ones to think about. And when you think about reducing how many tests you need to run and being confident in the cuts that you make, you can actually better utilize that time savings. More exploratory testing, more time for testers and developers to test a particular area of the application more deeply or more critically. That time savings doesn't necessarily mean drinking a Mai Tai on the beach and not working as much, but it could mean that, right? Or yeah, I mean it could, but usually it means, hey, we can focus more deeply in certain areas and, to Daniel's point, work on reducing the risk even further. And I think Daniel, you said it best when it came to sort of the role of the tester and stress and stuff, but I've seen that be a big win as well.
[00:33:44] Daniel Garay Yeah, I mean, Wilhelm touched on the stress point. There are two aspects of it. From a company perspective, you're saving time and you're improving quality. But from a QA perspective, when you're trying to get something out the door, the product out the door, I mean, generally speaking, that's the more stressful time for the engineer, the QA engineer, whether it be a patch release, a major release, a minor release, whatever it may be. If you have data-driven information that tells you this is what needs to be validated, it at least, in my opinion, reduces my stress in that, okay, this is what needs to be validated. This is what the focus is on. You get that peace of mind, from my perspective, that, okay, I'm good with what I'm covering. From an engineering standpoint, you have that peace of mind, but from a company standpoint as well. You have two aspects: the engineer, the individual, and the company. The company's saving time because now that engineer could spend additional time doing other tasks. You have that time perspective, and you can improve additional aspects of the quality, the infrastructure, whatever it may be. So the company's benefiting from it and also the individual. There's two aspects of that.
[00:35:23] Joe Colantonio Okay guys, before we go, is there one piece of actionable advice you can give to someone to help them with their automation or manual testing efforts? And what's the best way to find or contact you or learn more about Parasoft CTP?
[00:35:34] Wilhelm Haaker All right, I'll go first. Parasoft.com. We have a lot of information, a lot of extra videos, blogs, and white papers to check out on the test impact analysis topic, testing, and test automation in general. You can reach out to us there, set up a trial, get a demo, come talk to us. We'd love to hear what you're doing with testing today.
[00:36:03] Daniel Garay I mean, obviously Parasoft.com, we have our products there and everything. But from a QA engineer perspective, I started in tech support and moved up through QA management to director. I always tell my QA engineers, stick to your guns. You know more than you think you know. Work with development. If you can't work with development, then there's something to be said about that. I mean, the great thing with Parasoft is we have a great collaboration between development and QA. If you don't have that, work on that. You gotta have a good collaboration with development. Otherwise, QA just becomes that much more difficult. Work with development, be confident in what you do, continue to QA, respect yourself, and move forward. That's what this industry is about. That's my view.
[00:37:05] Joe Colantonio Love it and I highly recommend everyone checks it out using that special link down below.
[00:37:10] Thanks again for your automation awesomeness. Links to everything we covered in this episode can be found by heading over to testguild.com/a455. And if the show has helped you in any way, why not rate it and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
[00:37:47] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com where you become part of our elite circle driving innovation, software testing, and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.
[00:38:31] Intro Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.