DevOps Software Testing Pitfalls and How to Avoid Them with Jay Aigner

By Test Guild

About this DevOps Toolchain Episode:

Today, we have a special guest – Jay Aigner, a seasoned software testing and quality assurance expert. Jay brings a wealth of knowledge from his experience founding and running a top-tier QA agency.

In this episode, we delve into topics highly relevant to your daily work in DevOps software testing and quality assurance. We'll discuss the importance of maintaining a paper trail for daily updates, the intricate process of evaluating and selecting automation tools, and the dynamic nature of tool selection. We'll also explore the significance of proofs of concept (POCs), the challenges in integrating automation into software development, and the critical role of communication and alignment within organizations.

Jay shares practical insights on balancing manual and automated testing, navigating common pitfalls in CI/CD pipelines, and the evolving landscape of QA, including the impact of AI and future trends. Whether you’re dealing with poor releases, bandwidth issues, or need expert advice on tool selection and implementation, this episode is packed with actionable takeaways to help enhance your QA processes.

Try out SmartBear's BugSnag for free today. No credit card required. https://links.testguild.com/bugsnag

TestGuild DevOps Toolchain Exclusive Sponsor

SmartBear’s BugSnag: Get real-time data on real-user experiences – really.

Latency is the silent killer of apps. It’s frustrating for the user, and under the radar for you. It’s easily overlooked by standard error monitoring. But now SmartBear's BugSnag, an all-in-one observability solution, has its own performance monitoring feature: Real User Monitoring.

It detects and reports real-user performance data – in real time – so you can rapidly identify lags. Plus gives you the context to fix them.

Try out SmartBear's BugSnag for free today. No credit card required.

About Jay Aigner


Jay has extensive experience founding and running a $2M+ ARR quality assurance agency, JDAQA.com, over the past several years. He has over 60 employees across 3 hubs (the US, a co-located team in Mexico, and an organic team he built in the Philippines). Jay has a bachelor's degree in computer science from Full Sail University and has worked across multiple industry verticals and roles, including QA engineering, product management, and senior technology executive in the fintech space.

Connect with Jay Aigner

Rate and Review TestGuild DevOps Toolchain Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:01] Joe Colantonio Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability for some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast and my goal is to help you create DevOps toolchain awesomeness.

[00:00:19] Hey, today we'll be talking with Jay all about software testing and DevOps pitfalls that I know a lot of people probably encounter, and how to avoid them. If you don't know, Jay has extensive experience founding and running a quality assurance agency, jdaqa.com. Over the past few years, he's worked with multiple different companies in different verticals. He knows his stuff. He knows how people are doing things. He has over 60 employees across 3 hubs, and he has worked in multiple industries, verticals, and roles, including QA engineering, product management, and senior executive of tech in the fintech space. As you can tell, he's a really well-educated guy that knows his way around the software development lifecycle. You don't want to miss this episode. Check it out.

[00:01:03] Hey, if your app is slow, it could be worse than an error. It could be frustrating. And one thing I've learned over my 25 years in the industry is that frustrated users don't last long. But since slow performance isn't sudden, it's hard for standard error monitoring tools to catch. That's why I think you should check out BugSnag, an all-in-one observability solution that has a way to automatically watch for these issues: real user monitoring. It checks and reports real-user performance data in real time so you can quickly identify lags. Plus, you can get the context of where the lags are and how to fix them. Don't rely on frustrated user feedback. Find out for yourself. Go to bugsnag.com and try it for free. No credit card required. Check it out. Let me know what you think.

[00:01:55] Joe Hey, Jay, welcome to The Guild.

[00:01:58] Jay Aigner Joe. So nice to be on, buddy. How are you?

[00:02:01] Joe Colantonio Great. Great to have you. I've been following you for a while on LinkedIn. We've been talking back and forth a little bit. I guess before we get into it, Jay, I'm just curious to know why start a quality assurance agency?

[00:02:10] Jay Aigner Why not? It's a good question. I did QA for a long time and saw the need. Outsourced QA can be a really good thing for some companies. I did a lot of freelancing, and it just kind of came about that I had too much work and not enough Jay to go around, so I started hiring people that I used to work with, hired my old boss, hired a bunch of people, and it just grew from there. I think it's just filling a need that was there.

[00:02:36] Joe Colantonio Nice. When you work with different clients, different organizations, when they think of software testing, do they ever associate it with DevOps, or are they usually two different things?

[00:02:46] Jay Aigner It's a great question, and it depends on who you're talking to in that org. I would say most of the time they see them as separate. Being in the space, I know how close-knit those two things are, especially when it comes to automation. Typically they think of them separately unless we're just talking about automation. And I always tell people that the goal at the end of the rainbow is having all your test automation looped into your DevOps pipeline, so everybody can just hit a button and enjoy automation.
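To make that hit-a-button goal a little more concrete, here is a minimal sketch, not from the episode, of the kind of smoke suite a CI/CD stage could run on every build. The URL, endpoints, and environment variable are hypothetical placeholders.

```python
# Minimal smoke suite a pipeline stage could run on every build (hypothetical app).
import os

import pytest
import requests

BASE_URL = os.environ.get("APP_BASE_URL", "https://staging.example.com")  # assumed env var


def test_homepage_is_up():
    # Fast availability check that gates the deployment stage.
    resp = requests.get(BASE_URL, timeout=10)
    assert resp.status_code == 200


def test_health_endpoint_reports_ok():
    # Hypothetical /health endpoint; swap in whatever your platform actually exposes.
    resp = requests.get(f"{BASE_URL}/health", timeout=10)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"


if __name__ == "__main__":
    # The exit code propagates to the pipeline, so a red suite blocks the release.
    raise SystemExit(pytest.main([__file__, "-q"]))
```

Wired into a pipeline, this is the "button": the same tests run locally, on a pull request, or right before a deploy.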

[00:03:11] Joe Colantonio Love it. And I know you do a lot, not only functional automation, but you also do security testing and performance testing. I would think those would lend themselves more to the DevOps pipeline. Do you work with companies to integrate them into the pipelines? Maybe just a security scan or a quick performance test? How does that work?

[00:03:30] Jay Aigner Typically, we have a really strong partner for security testing; that's kind of fallen outside the umbrella of things that we actually offer on a day-to-day basis. But for the performance testing stuff, yes, it can be linked into your cycle. The only problem with most performance and regular automated testing is it can get expensive quickly if you're doing it in the cloud. Depending on the size of your platform, your budget, and all these different things, yes, you can loop all sorts of performance tests into your deployments. It just depends on how important that is versus budget. And it's typically on the initial build that you really want to make sure the thing you've built is ready for 100,000 users to log on. Once you've proven that, it's not as pertinent to re-verify it on every single build, unless you've obviously made some infrastructure or backend changes that could mess up that functionality.
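As a rough illustration of that kind of load check, reserved for the initial build or for builds that change infrastructure rather than every deployment, here is a minimal Locust sketch. The host, endpoints, and user counts are assumptions for illustration, not anything prescribed in the episode.

```python
# Minimal Locust load test (hypothetical endpoints and traffic mix).
from locust import HttpUser, task, between


class BrowsingUser(HttpUser):
    # Each simulated user pauses 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task(3)
    def view_dashboard(self):
        self.client.get("/dashboard")

    @task(1)
    def search(self):
        self.client.get("/search", params={"q": "reports"})


# Example headless run against a staging host (100 users, spawned at 10/s):
#   locust -f loadtest.py --headless -u 100 -r 10 --run-time 5m \
#          --host https://staging.example.com
```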

[00:04:22] Joe Colantonio Nice. When you go into an organization, who do you normally deal with? What are the titles nowadays? It's been a while. Are there still QA people? Are they part of a developer team? Or, once again, I guess it depends.

[00:04:34] Jay Aigner Yes, there's still a QA team, under a log somewhere. You can pick it up next to the worms and the bugs. You can find QA. There are still QA teams; nothing has changed as far as development team makeup from our perspective. We typically will work with the CTO, often a newer CTO who's come in and has to answer a lot of questions he may not have answers for: why are the builds taking so long to get out? Why are so many bugs getting into production? Why is QA always the bottleneck? All these different questions. Chief information officers a lot these days, more than I think we've seen previously. VPs and directors of engineering, that engineering leadership level. And then if we're doing more of a staff augmentation situation, we'll work with QA managers or directors of QA to manage the resources that we bring in. We can either provide kind of a full stack, where you don't need to manage our resources and we come in and solve the problem for you, or we can provide you some really good engineers who work with your team and are managed by your team.

[00:05:31] Joe Colantonio Nice. What are some common pitfalls that you usually have to deal with when you walk into an organization, especially when trying to incorporate testing into your builds, into your CI/CD pipelines?

[00:05:40] Jay Aigner The biggest one is typically the most non-technical one: communication. Anybody who's been in an org of any size probably knows that communication can break down anywhere, but, and maybe I'm biased, it feels worse when it's in QA because you're kind of the last line of defense against everything. If there's poor communication between you and the developers, or you and the product team, or you and the stakeholders, or you and the customers, it will come out and it will probably cause a pretty big issue. Even communication among the QA teams themselves, or the members of the QA team. Then there's having the wrong people in the wrong seats. This is something I see at a lot of places that have somebody on staff and need to free up the rest of the QA team. They need to implement automation and they go, well, Bob's worked here for 10 years, maybe he can handle it. And nothing against Bob. Bob is usually a great guy and he's very smart. But it's not his expertise to pick the right tool, implement it, and implement it in a way that's componentized, so there are individual chunks of it along the way that you can independently execute. Because a lot of times people try to implement automation in DevOps or in QA as kind of an all-or-nothing, and if you run out of budget, or you run out of time, or you run out of patience, you end up with nothing. We try to promote a very modular way to build tests and chain them together to get your end-to-end testing done, but at any point you can stop and run those tests and get the value out of having them, instead of having this big overarching thing. So it's having the wrong people in the wrong seats who don't really understand how automation should fit into a software development lifecycle. And then just expectations, I think. Expectations of what can be delivered by when. One of the biggest pitfalls we see is the development team, product team, or executive team telling QA how long something will take to test. You will almost assuredly never have a good outcome when that's the case, because QA people know how long something takes to test. We're all guilty of giving a little bit of buffer to make sure we have some time to do stuff, but it's usually pretty accurate, and more accurate than an estimate coming from somebody who has little hands-on experience with what needs to be done. So: communication, people in the right seats, and then setting the right expectations for timelines and delivery.
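One way to picture that modular, independently executable style is small test steps that deliver value on their own but can also be chained into an end-to-end flow. The sketch below is illustrative only; the base URL, endpoints, and credentials are made up, and a real suite would target the client's actual platform.

```python
# Modular test "chunks" that run on their own or chained end to end (hypothetical app).
import pytest
import requests

BASE = "https://staging.example.com"  # assumed environment URL


@pytest.fixture
def session():
    s = requests.Session()
    yield s
    s.close()


def login(s, user, password):
    # Reusable step: a small chunk that many tests can share.
    r = s.post(f"{BASE}/login", data={"user": user, "password": password})
    assert r.status_code == 200


def add_to_cart(s, sku):
    assert s.post(f"{BASE}/cart", data={"sku": sku}).status_code == 200


def test_login(session):
    # Valuable on its own: if the budget stops here, this still runs every build.
    login(session, "qa_user", "not-a-real-password")


def test_purchase_end_to_end(session):
    # The same building blocks chained into an end-to-end scenario.
    login(session, "qa_user", "not-a-real-password")
    add_to_cart(session, "sku-123")
    assert session.post(f"{BASE}/checkout").status_code == 200
```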

[00:07:53] Joe Colantonio I would think it's difficult stepping into different cultures and trying to promote that. As a company, how do you get buy-in from, say, Bob, who's disgruntled now, like, who the heck is this guy telling me how to do this? Is that something you have to deal with? Or do you just go in, here's what you do, see you later, type of deal?

[00:08:08] Jay Aigner That's a great question. It's the agency conundrum, I think: how do you have your own culture as an organization, which we like to have, but also integrate smoothly with somebody else's? We certainly have those situations where Bob's unhappy. I feel bad we're picking on Bob. Bob's not happy and we have to be very diplomatic. But it goes back to the communication and personality piece of it. We've gotten better over the years. There have certainly been times where we came in and hit brick walls because we didn't set the expectation upfront that whoever's hiring us and whatever leadership is writing the checks has to support this effort together, because if they don't, then why are we there? There have been times, for sure, but we try to be diplomatic. We don't come in to take anybody's jobs. We don't come in and tell people they're wrong. We're just there to lend our expertise in how to set up automation, or how to manually test your platform in a way that keeps your customers coming back.

[00:09:07] Joe Colantonio Love it. I don't know why I'm fascinated by having a QA agency. It's so hard to get buy-in as a QA engineer, a test engineer, because you're always seen as a cost center when you work for a company. Who contacts you? Does someone just say, hey, these QA people are a cost center, let's get an agency in here to do it, even though I'd think that would also be seen as a cost? Or does something usually happen poorly, like something gets released and it's, oh my gosh, we need to get quality in line here? How does that usually work?

[00:09:34] Jay Aigner I think it's usually multiple bad releases and a feeling, from leadership especially, that something has to change. A lot of it is maybe bandwidth. We don't always come in and replace a team; we come in and augment teams all the time as well. We just landed a client that has a full QA team. They have automation, they have manual, but they needed help with tool selection, they needed help with implementation, and they need help with maintenance and all these different things that they may not have experience with. A lot of the time it's explaining how to better do the things they're already doing, and not in a confrontational way. But when people come to us, it's typically newer CTOs who have come into an org and are starting to get their hands around what's going on there, or it's high-growth SaaS companies. They just got their Series A or Series B, they need really good people now, and they don't feel like going through hiring. And it's also more cost effective. There are two reasons why. One is that if you're having your project managers and your developers and all these other people test the stuff for you because you don't have a good QA team in place, then you're spending 20% to 30% of their time, which is typically expensive, on testing. If you look at the numbers, it doesn't make sense to have a higher-paid person doing something you could pay somebody less to do. And then, because we're not W2 resources, we're scalable. You're not bringing on 10 QA people with benefits because you have a big release coming up; you literally bring us on. We can scale up when you have a big release, do some automation on the stuff you need, then scale back down and go into a maintenance mode where we just help support whatever the next leg of development is going to be. We don't go for the bottom of the barrel. We're not a cost-savings-led organization, because of the expertise we have. But it's certainly a secondary benefit that you can save a good amount of money if you don't want to spend the time finding people who know what browsers to test, what devices to use, how to build automation, and all these different things that a lot of times CTOs don't really care about. They just know there's a problem and they don't want to build a QA team to fix it. They come to us and we can help them out.

[00:11:34] Joe Colantonio Nice. Makes sense. I'm always curious to know, people sometimes hate the term best practices, but if you're dealing with all these different companies, are there any best practices or standard life cycles that everyone follows? Or do you have a group that does different processes for each organization? Does that make sense? Are there actual best practices, like if people follow A, B, C, and D, they'll be okay? Or is it just, oh, this company needs A, this other company needs to do it this way? How different is it gig to gig or company to company?

[00:12:03] Jay Aigner It's a great question too. I would say best theories more than best practices. It's more abstract sometimes than just A, B, C, D; it's more like, how are all the pieces working together? How are they interacting? How are the requirements being handed off? But there are some best practices to it, right? You need information to be able to test. You need to make sure you're testing at whatever cadence matches development. And you need to be able to support deployment, pre- and post-deployment, and feature testing. There are standard pieces of every process. A lot of people write requirements, but not everybody does it well, and sometimes you need to lean on the different areas and say, look, your guys are struggling to test stuff because you're not handing them all the information they need to do it. The requirements are really thin, or nonexistent, or they're changing all the time. There are a lot of different reasons why things could happen. But there are some standard things, like the requirements delivery cadence, that you can wrap into any process. And yes, to your point, everything is different from the start, which kind of makes it hard to productize that service from a business perspective. But we've done well supporting the full stack: leadership, strategy, processes, people, and then kind of letting it rip.

[00:13:27] Joe Colantonio Nice. I know a lot of internal testers, or people who have to do testing, do a poor job of highlighting the value they add. And I assume, as a quality assurance agency, that's something you have to execute on. Even for someone who's not an agency or a contractor, are there things they could do to highlight what they've been working on to the higher-ups, like the CTOs, to show them, hey, we are adding value? I assume you have to do that constantly.

[00:13:52] Jay Aigner Yeah, I mean, I think everything needs to have a paper trail. For us, the biggest analog to this was providing daily updates from every person on the team, regardless of what they did that day, big or small. If you're doing that consistently, it's hard for somebody to point a finger and say these guys aren't doing enough. If every single day all you're doing is what you're being asked to do, you execute and deliver, and then you say, here's what I did today. If you have management or engineering leadership over top of you, they should be looking at those updates to decide whether that's the right stuff or the wrong stuff to be working on. That's not always your job as a QA person; you're doing what's been handed to you. I think over-communicating and leaving a paper trail is probably the easiest way.

[00:14:38] Joe Colantonio Nice. I think another pitfall, and something else a lot of people struggle with, and I think you mentioned this is a service you offer, is tool evaluation. Once again, you work with different companies, different verticals, different everything. Is there a go-to toolset or a set of tools you use, or does it once again depend on the organization and what language they use? How do you evaluate tools, I guess, is the first question.

[00:15:02] Jay Aigner It has morphed over the years. I think at any given time we'll probably have a pretty standard suggestion list. We are partners with most tools out there, and as I say that, I realize there were probably a hundred new tools in the past few months that I'm not partnered with, so maybe I can't say that anymore. But with some of the major tools companies out there, I just reached out early on in the company, a lot of it before they had solidified their partnership programs, and we've embedded ourselves as pretty good partners to a lot of companies. Because we have a good relationship with a few of the main tools companies, we know what they're good at and what they're not good at, so we can pretty quickly see what a company could need. And if we have a really good relationship, that typically means we can make sure there's really good support on both sides, which is, I think, probably one of the underappreciated things when it comes to picking a tool for automation, because you will have questions, you will have things that don't work, things will break, and you will get stuck. You need to be able to talk to whoever it is at whatever company you're working with. You need somebody you can reach out to and say, hey, what do we do here? Sometimes we play that liaison role for our clients, where they just say, guys, this isn't working, what do we do? It starts with a basic understanding of the platform, their tech stack, their processes, and then marrying them up with one of our tools. And if it's not one of our partners, it doesn't matter. We don't get kickbacks from our partners; we just partner with them because we've worked with them for so long. So if it's not a partner of ours that ends up being the better tool, we're all for it. Let's figure it out and go for it.

[00:16:37] Joe Colantonio I assume a lot of these tools are from vendors. Do you ever get into the, oh, we only use open source tools, therefore... Or do they just listen and say okay? Or does it usually work out that they go with a vendor-based solution anyway?

[00:16:52] Jay Aigner Again, I think it's based on the time. Ten years ago it was something different, five years ago something different, and now it's something different. If somebody comes to us and says, hey, can you write us a bunch of tests in Selenium? I would go, why? Why would we do that today? I'm going to be honest with people. You could probably burn a ton of hours and make a nice contract with somebody if you just went and wrote a bunch of Selenium test cases. But why would you do that with the plethora of tools available that handle all of that spin-up and tear-down and all this stuff? It's really ease of use, what's most cost effective for our clients, what's easiest for us to use. A lot of different factors come into it, but it's really a constant revolving door. Not every day, but we try to keep up with what the best stuff is. Virtualized browsers, for example: five years ago that was not really a thing people used very much. And there was a lot of Ghost Inspector with the record-and-playback kind of model, which they still do, and I love Ghost Inspector and know Justin over there very well. Then there are some of the more advanced features of something like Reflect, and I know Todd and his guys over there.

[00:18:05] Joe Colantonio Great guys.

[00:18:06] Jay Aigner The virtualized browser is, to me, the wave of the future. Why would you go back in time to use something else if you can use a tool that lets you instrument and come up with any browser configuration you want, and, oh, by the way, connect to real device farms while you're doing that so you can test on real devices? I would say it's a constant evolution. There's no consistent stack. And whoever comes up with the next thing that's going to help us or help our clients, we're going to check it out.

[00:18:33] Joe Colantonio Nice. And just so people know, Reflect was purchased by SmartBear, who's our sponsor. And Jay, I don't think, knew that. So good on Jay for mentioning Reflect, awesome.

[00:18:42] Jay Aigner I did not know that. Fantastic.

[00:18:45] Joe Colantonio Cool. When you get to the point of deciding whether you have the right tool in place, do you have to do a proof of concept? Is that something you provide as well? Maybe you know the tool will work, but you don't really know until it's in their workflow. Is that something you normally have to do as well?

[00:18:59] Jay Aigner Typically we'll do a POC as part of the SOW on some slice of functionality. It's typically something easy but something demonstrable, so we can show off, yes, you can log in with it; yes, you can change a password. And then maybe something that's a little less canned, where it's interacting with some part of their system to do something specific to their platform. People like to see that to go, okay, this can fit all our edge cases, it can do the things we need it to do. We'll typically do a pretty light POC just because we're confident in the tool, and we just want the client to be confident too.
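For a sense of what such a light POC slice might look like, here is a hedged Playwright sketch covering the two demonstrable steps Jay mentions, logging in and changing a password. The URLs, selectors, and credentials are hypothetical placeholders, not anything from the episode or a real client platform.

```python
# Light POC sketch: log in and change a password in a real browser (hypothetical app).
from playwright.sync_api import sync_playwright, expect


def run_poc():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()

        # Demonstrable step 1: log in.
        page.goto("https://staging.example.com/login")
        page.fill("#email", "qa_user@example.com")
        page.fill("#password", "old-password")
        page.click("button[type=submit]")
        expect(page.locator("#dashboard")).to_be_visible()

        # Demonstrable step 2: change the password.
        page.goto("https://staging.example.com/settings/password")
        page.fill("#current-password", "old-password")
        page.fill("#new-password", "new-password")
        page.click("#save")
        expect(page.locator(".toast-success")).to_be_visible()

        browser.close()


if __name__ == "__main__":
    run_poc()
```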

[00:19:35] Joe Colantonio How do you know what to test? Is it up to them to tell you what to test? Or do you tell them, hey, we're only focused on risk, or this test is too long, it's going to be too slow, it doesn't make sense to automate? Is there a lot of back and forth that way as well?

[00:19:49] Jay Aigner It should be. If we're doing our job, there should be. I would say it's rare that we just go in and automate things they tell us to automate. Part of our process with automation is basically an assessment. We give you a QA roadmap and say, here's where you are today, here's where you want to get to, and here are the steps in between, based off the demos and stuff we've seen from you guys; here's how we should go about it. Now, as we're doing that, if we get into some functionality that's overcomplicated to try to emulate via automation, then we'll say, look guys, we just want you to know we'd recommend you do this manually, because between spinning this up, getting the test data where it needs to be, getting the environment in the state it needs to be, and all these different things to make this one specific test work with automation, it may be much, much less of a headache to do it manually. Part of the initial process is just doing that. Now, in our staff augmentation positions, where we just have people working with other people's teams, they give the direction and our guys go in and do what they're asked to do. But in that strategic role, we're certainly assessing which things should and shouldn't be tested.

[00:20:55] Joe Colantonio This might be a weird question, Jay. You have a quality assurance consultancy, and when I talk to testers they're like, oh, our jobs are being replaced by developers, no one cares about quality anymore, and things like that. Clearly, if you have a consultancy, that's not true. Are you seeing demand for it? Is it a hard sell? I thought I saw somewhere that you're doing pretty well, that you have good ARR-type revenue coming in. How does that work?

[00:21:19] Jay Aigner I wish I knew a better analogy than this, but I always tell people QA is like a funeral home: you're going to visit me eventually. You have to have it. There's no world where it doesn't exist. I don't care how great AI is. I don't care how great your developers are. When you get to a certain size and a certain amount of deployments, you need a dedicated person or people or team, in-house or out; it doesn't matter what it is, you're going to need it. Even when venture funds, quote unquote, dried up over the last couple of years and the panic set in with the economy and stuff, I felt like that stuff is out there on the fringes and we have so much green pasture in the middle. There are so many mid-market companies, so many startups going into Series A, Series B, that it hasn't really concerned me. I can see why people worry, and as a current and former QA guy, I think QA people are a little paranoid by design. That's why they're in QA and that's what makes them really good QA people. If they didn't care and they were very relaxed and thought everything was fine all the time, there would probably be a lot of bugs in production under their watch. But I would say, for the most part, it's not going anywhere. The demand has not decreased; it just goes up the more software is made. So I don't see much of a decrease in demand.

[00:22:40] Joe Colantonio How about the rise of AI, generative AI, GitHub Copilot, all these things? Do you see that reducing errors? Or, once again, are people probably creating more code that needs more testing? Where do you see that heading?

[00:22:52] Jay Aigner I mean, I will stand on this hill until I get struck by AI lightning, but I'm not afraid of AI when it comes to testing. I don't think it's going to take anybody's jobs. Maybe there becomes a point where it can actually understand context better, which I think is one of the key things that QA people understand really well, maybe more so than anybody else in a company: contextually understanding the features of a platform, interacting with developers and product people, all the context you have to understand to make a software product, and then the context of having customers and how they use the platform. Until there's some magical AI that has locked-in context about what was made this week versus what was made last week and what changed, all this different stuff, I'm not really worried about it. I think there will be some cool tools. I've seen some cool demos of really cool stuff with generative AI when it comes to testing, but I think it's just like every other advance in technology: there will be people who use the new tools, and there will be people who lose jobs because of the new tools. If people didn't learn how to use computers when computers came out, they were kind of left by the wayside. I feel the same way about AI.

[00:24:07] Joe Colantonio Love it. Speaking of AI and the future of testing and DevOps, I assume, being the founder of a company, you need to be a few steps ahead to see where the market is going. How do you stay up to speed on future trends? Are there any future trends you see that companies are going to be facing that you're getting on top of, anything like that?

[00:24:28] Jay Aigner I have heard enough murmuring about cloud-back-to-on-prem stuff that it has piqued my interest a little bit, because I mentioned earlier that just running automated testing in the cloud can be expensive, so running your entire business in the cloud can be exponentially expensive. There's potentially a world where we're doing more on-prem. Not physically on premises, because we're fully remote, but living on servers that are owned by a company instead of in the cloud may be something that's coming up. I don't see any massive changes right now. I think the big AI gold rush has hit. I think there'll be a bunch of them that stand out over the next year or two as the models progress and stuff. But nothing has jumped out at me that really makes me think there's massive change coming in software development.

[00:25:21] Joe Colantonio All right. You need to recommend tools. How do you know what's smoke and what's not? Because you must get hit up all the time: hey, Jay, you need to check this solution out when you go to your next client, this is going to solve everything. How do you know what's the real deal?

[00:25:33] Jay Aigner I mean, we'll try anything, man. Internally. I'm a tools nerd, man, I love tools. I love them so much. Especially, I know you love tools because I've seen some of your episodes where you get to talk about these things. We try anything. I mean, we've got our stable of things that work, and if it ain't broke, don't fix it. But if there's something that's promising and that could solve some problems that we're having and we do try it and it works, then, we'll give it a shot. But yeah, I mean, there's something new every day that somebody wants you to give a spin on. We don't close anything out. But we have so many clients on different tools already that we're not in a hurry to swap anything else into our portfolio.

[00:26:16] Joe Colantonio Right. Okay, Jay, before we go, is there one piece of actionable advice you can give to someone to help them with their DevOps testing efforts, and what's the best way to find or contact you, or learn more about jdaqa.com or JDAQA?

[00:26:28] Jay Aigner I would say, understand the full scope of the project before you start. A lot of those projects seem to get bogged down because of some piece in the middle that you didn't anticipate, that you didn't think about. And don't chase the shiny tool or process; there are a lot of things that have worked for a very long time. Stick to the basics, put a nice DevOps pipeline in place with whatever your stack is, and just try to understand the scope of what you're doing so you can piecemeal it out, have milestones to hit along the way, and deliver a full end-to-end roadmap instead of just a broken half of it. And if you want to reach out to me, I'm on LinkedIn, Jay Aigner, and if you want to check us out, jdaqa.com.

[00:27:12] Remember, latency is the silent killer of your app. Don't rely on frustrated user feedback. You can know exactly what's happening and how to fix it with BugSnag from SmartBear. See it for yourself. Go to BugSnag.com and try it for free. No credit card is required. Check it out. Let me know what you think.

[00:27:33] And for links to everything of value we covered in this DevOps Toolchain Show, head on over to testguild.com/p152. So that's it for this episode of the DevOps Toolchain Show. I'm Joe, my mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers!

[00:27:55] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the FAM at Testguild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.
