DevOps Success: Merging Automation, Culture, and Performance Testing with Lee Barnes, Scott Moore and Paul Grizzaffi

By Test Guild

About this DevOps Toolchain Episode:

Welcome back to the DevOps Toolchain podcast!

Today, we dive deep into DevOps and its pivotal role in today's software development landscape.

I'm your host, Joe Colantonio. In this episode, “DevOps Continuous Automated & Performance Testing,” we're bringing together a powerhouse panel for an awesome roundtable discussion that took place at this year's Automation Guild conference.

For those who missed the Automation Guild conference in February, don't worry! You can still gain instant access to all the recordings, providing a wealth of knowledge and insights. Plus, you'll have 24/7 access to our private community and monthly training sessions. To take advantage of this, head over to automationguild.com and register today.

This session focused on the critical but often misunderstood practice of continuous automated and performance testing within the DevOps framework.

In this episode, we're not just talking theory. We're delving into the real-world challenges that companies face when trying to embrace true continuous testing and agile development. Our guests share their experiences, pointing out the common pitfalls that you need to avoid in your own DevOps journey.

From highlighting the need for automation and AI as essential tools rather than roadblocks to discussing the significant impact of culture, collaboration, and alignment with business goals, this episode aims to dissect how to build high-performing DevOps teams and pipelines that genuinely deliver.

Listen up to absorb invaluable advice on establishing an efficient DevOps culture, cutting through the hype to find substantial value, and ultimately accelerating your company's journey from DevOps immaturity to maturity.

TestGuild DevOps Toolchain Exclusive Sponsor

BUGSNAG:  Get real-time data on real-user experiences – really.

Latency is the silent killer of apps. It’s frustrating for the user, and under the radar for you. It’s easily overlooked by standard error monitoring. But now BugSnag, an all-in-one observability solution, has its own performance monitoring feature: Real User Monitoring.

It detects and reports real-user performance data – in real time – so you can rapidly identify lags. Plus gives you the context to fix them.

Try out BugSnag for free today. No credit card required.

About Lee Barnes

Lee Barnes

Lee Barnes is the Chief Quality Officer at Forte Group with over 25 years of experience as a software quality and testing professional. He leads large-scale test automation and performance testing initiatives for many Fortune 500 companies and has delivered in-depth presentations and training on software testing, test automation, performance testing, and mobile quality.

Connect with Lee Barnes

About Scott Moore

Scott Moore

With over 30 years of IT experience across various platforms and technologies, Scott Moore is an active writer, speaker, influencer, and the host of multiple online video series, including “The Performance Tour”, “DevOps Driving”, “The Security Champions”, and the SMC Journal podcast. He helps clients address complex issues concerning software engineering, performance, digital experience, observability, DevSecOps, and AIOps.

Connect with Scott Moore

About Paul Grizzaffi

Paul Grizzaffi

As a QA Solution Architect at Nerdery, Paul Grizzaffi is following his passion for providing technology solutions to testing, QE, and QA organizations through automation assessments and implementations, as well as activities benefiting the broader testing community. An accomplished keynote speaker and writer, Paul has spoken at both local and national conferences and meetings. He was an advisor to Software Test Professionals and STPCon, and he is currently a member of the Industry Advisory Board of the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas, where he is a frequent guest lecturer. When not spouting '80s metal lyrics, Paul enjoys sharing his experiences and learnings with other testing professionals; his mostly cogent thoughts can be read on his blog https://responsibleautomation.wordpress.com/.

Connect with Paul Grizzaffi

Rate and Review TestGuild DevOps Toolchain Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:01] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability, from some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast, and my goal is to help you create DevOps toolchain awesomeness.

[00:00:19] Joe Colantonio Hey, welcome back to another episode of the DevOps Toolchain podcast. Today, we're gonna dive into the world of DevOps and how automation plays a huge role in today's development landscape. In this episode, titled DevOps Continuous Automation and Performance Testing, we're bringing together a powerhouse panel for an awesome roundtable discussion that took place at this year's Automation Guild Conference. If you don't know, you can still get access to all the recordings for the event that took place in February. Plus, get 24/7 access to our private community and our monthly training sessions. To do so, all you need to do is head on over to AutomationGuild.com and register today.

[00:01:01] In this session, we're going to focus on the critical but often misunderstood practice of continuous automation and performance testing within a DevOps framework. We'll discuss the real-world challenges companies face in embracing true continuous testing and agile development, pointing out the pervasive confusion and pitfalls you need to know how to avoid. From highlighting the need for automation and AI as essential tools rather than roadblocks, to discussing the significant impact of culture, collaboration, and alignment with business goals, this episode aims to really dissect how to build high-performing DevOps teams and pipelines that truly deliver.

[00:01:43] Hey, if your app is slow, it could be worse than an error. It could be frustrating. And one thing I've learned over my 25 years in the industry is that frustrated users don't last long. But since slow performance isn't sudden, it's hard for standard error monitoring tools to catch. That's why I think you should check out BugSnag, an all-in-one observability solution that has a way to automatically watch for these issues: real user monitoring. It checks and reports real-user performance data in real time so you can quickly identify lags. Plus, you can get the context of where the lags are and how to fix them. Don't rely on frustrated user feedback. Find out for yourself. Go to bugsnag.com and try it for free. No credit card required. Check it out. Let me know what you think.

[00:02:48] Hey, welcome, everyone to The Guild.

[00:02:50] Paul Grizzaffi Hey, hey, Glad to be here.

[00:02:52] Joe Colantonio Hey, Lee. Scott. Scott's traveling, probably, as always, on his Perf Tour.

[00:02:57] Paul Grizzaffi You're in the hotel, right?

[00:03:00] Joe Colantonio Very nice. Well, it's great to have you all here. Like, I really want to do like a deep dive round table and kind of free for all ask us anything around continuous automated and performance testing in DevOps. Before we do though, let's get like a real bio around the room. So Lee, we'll start with you and then we'll go across to the bottom here.

[00:03:20] Lee Barnes Sure. So Lee Barnes, chief quality officer at Forte Group. I've been involved in software quality and testing in some way, shape or form for over 30 years now. But don't hold that against me. Most of that time has been focused on test automation and performance testing.

[00:03:37] Joe Colantonio Love it. And Scott?

[00:03:39] Scott Moore Scott Moore. Been doing performance testing about as long as Lee has. And I think that's what everybody knows me for. And happy to be here today.

[00:03:48] Joe Colantonio Great to have you. And the one and only, Paul.

[00:03:51] Paul Grizzaffi Yeah, I'm that automation guy. Paul Grizzaffi, I'm a solution architect, specifically a QE solution architect, at Nerdery. Just joined them back in December, and, yeah, I'm in the 30-year range as well. I've done most of the things. I've been the in-the-trenches guy. I've been a manager. I've been a director of QA and of automation, and I've helped people just make good automation decisions. That's where my passion is. So I like the whole consulting gig.

[00:04:21] Joe Colantonio Absolutely. You guys age well. I have more whites and grays than all three of you put together. You're doing good here.

[00:04:27] Paul Grizzaffi Yeah, I don't know, I'm a little.

[00:04:29] Joe Colantonio Hard to tell with Paul. I just want to go around and maybe get your pulse on the current state. You're all experts, you all talk to a lot of clients, and you've been in the industry for years, so I'm trying to get a feel for where you see the industry going with continuous testing. I know Scott has some views on the different types of Ops and how he sees performance fitting in. But Lee, I guess I'll start with you to get things started, and then we'll see if anyone has particular questions as we go along. Maybe kick the ball on this one and get it going.

[00:05:02] Lee Barnes Sure. The current state, I think, is really all over the board from the different organizations and situations that I see. I see some organizations that are very mature in terms of continuous testing, and not just functional testing but performance, security, accessibility, and other types of testing as part of their pipelines, thinking very critically about the types of tests that they include. And I see others that are less mature, maybe taking more of an old-school approach, trying to force traditional large-scale performance tests into a pipeline, not thinking critically about the scope of testing they include, and ending up with longer feedback loops. Just to start there, it's kind of all over the board. But I do see a positive trend, for sure.

[00:05:53] Joe Colantonio So Scott, you travel all over the world, and as we can tell, you're traveling now. Any thoughts on the current state, what you've been hearing on the road?

[00:06:01] Scott Moore Well, I talk to a lot of the vendors who are talking to customers, trying to get them to buy their software. And the conversation right now is all about continuous, right? These companies have tried to move to a DevOps operating model, some more successfully than others. The most interesting thing that I hear is that most of the companies who are already doing DevOps, and I'm talking about the continuous part of DevOps, aren't doing it very well. But those companies who are doing it well are getting amazing results from it. Unfortunately, that represents maybe 5% of the people they're talking to, and even the salespeople I'm talking to are very concerned about that. I think part of it is this: it's not the fault of the people who wrote the original documentation around what DevOps and continuous should be. It's not their fault that people didn't go back and read their material and implement it that way, and instead made it up and did it their own way. But we're learning from that. The good news is that if you do this right and you do it well, you will see some huge benefits, which we can talk about in this session.

[00:07:14] Joe Colantonio And Paul?

[00:07:16] Paul Grizzaffi Yeah. So I'll echo what both Lee and Scott have said. It's really all over the place. Some people are seeing really, really good results from the ability to have these builds, in quotes, "tested" at deployment time in their pre-production environments. A lot of other companies and organizations have what I'm going to call misplaced expectations about what can be done. Like, hey, we're going to run all of our automation. Well, that's probably not going to be economical, or at least time sensitive. Kind of like Lee was saying, what's going to happen? How long are we prepared to wait to say a deployment is good enough to do something else with it? If we're pushing to actual production, we probably need to wait longer. If we're pushing to a pre-production environment like a staging or a pilot or a QA or a test environment, we're probably okay waiting a little less time. And I think companies are sort of struggling with what that looks like for them, because it really is context specific.

[00:08:24] Joe Colantonio Absolutely. Scott just said about 5% of people are probably doing this. Lee, you work for Forte Group. Is this what you're hearing as well from your customers and clients? Is there a want and a need?

[00:08:36] Lee Barnes I think there are a lot of organizations out there taking a very tool- and technology-focused approach to DevOps and not focusing on what the end result should be, which is delivering value much more quickly to your customers. So they end up implementing tools and tests and different things in their pipeline, but there's really not an overall objective to what they're doing. And very often, especially when it comes to testing, they're just not thinking in a continuous testing mindset about what needs to happen for that to be successful. The feedback loops are long. They've still got the same old flaky tests they're trying to put in the pipeline. They're breaking the build, not because anything is wrong with the application, but because they've got issues with their test suite. So yeah, I'm seeing the same thing for a lot of reasons, not only testing, of course, but definitely a very small percentage getting the results that they want.

[00:09:35] Paul Grizzaffi And I'll double down on what Lee's talking about there, and that is the difference between tooling and collaboration. The whole thing about DevOps is that it's more about a culture. And in order to get that culture where you want it to be, a lot of times you need tooling and automation and other things along the way. But just because you have a pipeline that runs your unit tests, that doesn't mean you have DevOps, right? It's more of a collaboration to reduce the friction between Ops and Dev, and in the big picture, to reduce the friction in getting the stuff out the door more economically. And a lot of companies and organizations haven't figured that part out.

[00:10:24] Joe Colantonio Scott, you have some thoughts on Ops, DevOps, DevSecOps, DevPerfOps. Is that something you think people are confused by, or do they not understand what they're trying to do?

[00:10:34] Scott Moore Absolutely, they're confused by it. And it goes back to what I just said about being pure DevOps versus what some companies have tried to implement. The same thing happened with the agile movement. There was the Agile Manifesto, and then there was what people interpreted as the Agile Manifesto and turned into scrum ceremonies. So let's think about the topic of this, which is continuous automated performance testing in DevOps. If you're going to do this continuous testing in DevOps, it's because you have a continuous pipeline of code. You're either developing software with code or you're developing software somehow. Maybe you're writing the code. Maybe you're configuring the code or doing something with it. But what are we trying to get out of doing DevOps? It's because we want faster feedback, right? Because we're going to fail fast and fail extremely early. And then from a performance standpoint, we want these automated tests to be there to find those performance bottlenecks as early as possible, to keep the pipeline flowing as fast as possible. That's what developers want, to get that out there. One common problem would be a company saying, okay, well, we haven't done DevOps before. We're going to go continuous and we're going to get this fast-moving machine, but we expect to take the same amount of code, the same amount of features, the same amount of tests that we used to do or build, and continue to do that in this DevOps model. People doing true DevOps would look at that and say, are you crazy? They want five lines of code to go in there, much smaller pieces to go through. Because then you think about the benefits of DevOps: you're reducing the risk. If you only put one line of code out and something breaks, you know what did it. You can fix it a lot faster, right? And that's what we want at that same pace with regard to automated performance tests: much smaller batches, right, to make it go smoother. That's just an example of the benefits you get; if you do it wrong, you're going to mess it up.

[00:12:31] Joe Colantonio So speaking of benefits, either Paul or Lee, any thoughts on other benefits you get when this is done correctly? Is it just a matter of delivering software faster, and all we care about is the end user?

[00:12:43] Lee Barnes Certainly we're delivering value faster, but we're also delivering quality software faster if we're doing it right, as Scott alluded to, and we're working in small batches. So it's much easier, first of all, to understand what we should be testing, and of course, if there are issues, to find out where the issue is. I think one of the reasons I see so few organizations reaping the benefits is that they're not really doing continuous testing or agile development; they're doing waterfall development two weeks at a time. However much code or work they think they can fit in those two weeks, that's what they fit in. There's really nothing continuous about that. Paul, do you have some comments on that?

[00:13:26] Joe Colantonio Yeah, Paul?

[00:13:27] Paul Grizzaffi Well, sure. So related to that, the idea that a bug found earlier or an issue found earlier is cheaper, that's not necessarily universally true. There's a book called The Leprechauns of Software Engineering by Laurent Bossavit. I highly recommend it. He goes through the math of debunking some of these things that we just think are intrinsically true. And he doesn't say that they're always false, right? But he says, hey, you've got to look at what you're doing in all your situations. But certainly, fewer lines of code being put out means that any issues are most likely related to those fewer lines of code rather than more lines of code. In these large interconnected systems that we have today, it's very difficult for us to narrow those things down. So it's actually even more important for us to release smaller things, or at least deploy smaller things into our testing environments, so that we have a smaller amount of code to roll back or to debug or whatever. Then we can go in and look at these larger interconnected systems and say, was this an interconnected-system problem that was coincidentally happening when we rolled this code out, or is this a code problem?

[00:14:48] Joe Colantonio How do you know? I guess, what do you have to have in place to make sure that the results you're getting can be understood and acted on in a way that makes sense, rather than just "that's an issue, that's a problem"? What do you need in place so that the results are being acted on, and people understand them and are able to debug and troubleshoot quickly?

[00:15:11] Paul Grizzaffi In my smaller world, it's about the logs. We've talked about software development and automation development being basically the same thing, just with different audiences. The important difference that I find there is that the outcome of app development is something for the user. The user doesn't want software; they want their detergent to be delivered to the door. They're out of Tide, and they want the Tide to be here. And the only way we have to do that today is through software. The outcome of using an app is not an order number; it's the actual thing showing up where you want it to be. And I apologize for the dog. For us, our application development has to do with providing information so that decision makers can say we have an unnecessary risk, or we don't have an unnecessary risk, right? So we have to focus on those logs and any of the other telemetry along the way. Now, some other people have challenged me on this, and I agree with their challenges, because with different tools, sometimes you just get some of that for free, and with some of the other tools you have to bake it in. But what you can't get for free is context sensitivity. Hey, this login button wasn't here when I tried to click it, versus HTML element with ID ABC123 wasn't here, right? One is going to get you to your feedback slightly faster, and it's also going to get any fixes out where they need to go slightly faster. So that's the way I always couch it: it's in the information you get out. In my world that's automation, but I'm going to somewhat assume that it works in the performance world as well.
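To make Paul's point about context sensitivity concrete, here is a minimal Python sketch of a test helper that reports failures in business terms alongside the raw locator. The page object, locator strings, and names are hypothetical stand-ins rather than any particular framework's API:

    import logging
    from dataclasses import dataclass, field

    logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
    log = logging.getLogger("ui-checks")

    @dataclass
    class FakePage:
        # Stand-in for a page/driver wrapper; real code would wrap Selenium, Playwright, etc.
        name: str
        elements: dict = field(default_factory=dict)

        def find(self, locator):
            return self.elements.get(locator)  # None when the element is absent

    def require_element(page, locator, friendly_name):
        # Look up an element; log failures in business terms first, raw locator second.
        element = page.find(locator)
        if element is None:
            log.error("'%s' was not present on the %s page (locator: %s)",
                      friendly_name, page.name, locator)
            raise AssertionError(f"{friendly_name} missing on the {page.name} page")
        log.info("'%s' found on the %s page", friendly_name, page.name)
        return element

    if __name__ == "__main__":
        sign_in = FakePage("Sign-in", elements={"id=abc123": "<button>"})
        require_element(sign_in, "id=abc123", "Log in button")   # passes
        require_element(sign_in, "id=signup", "Sign up link")    # fails with a readable message

The failure message a decision maker sees ("Sign up link missing on the Sign-in page") carries the context Paul describes, while the raw locator is still logged for whoever has to debug it.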

[00:17:08] Lee Barnes Yeah, one thing that I'm starting to see more of, Paul, and Scott, you could probably comment on this better than I can, is observability and APM tools in lower environments, where they were reserved for production for the longest time. Yes, we have tests driving the application and performing actions, inputs and outputs. But you can't just rely on what the tests can pick up, which is important, right? As you alluded to, Paul, you can put information in logs so you can understand what happened, but there's also information that might not be related at all to your test, at least not directly, that you can get from observability and APM tools, even in a functional test. And of course, it's much more relevant to performance.
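As one concrete illustration of Lee's point, a functional test can emit its own trace span so that observability or APM tooling in a lower environment can line up what the test did with what the system did. This is only a minimal sketch, assuming the opentelemetry-api and opentelemetry-sdk Python packages; the console exporter stands in for whatever backend you actually use, and the span and attribute names are made up:

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

    # Wire a tracer provider that prints spans to the console for this sketch.
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("functional-tests")

    # Wrap a test scenario in a span so APM/observability tools can correlate it.
    with tracer.start_as_current_span("checkout.smoke") as span:
        span.set_attribute("test.case", "single_user_checkout")
        span.set_attribute("test.environment", "staging")
        # ... drive the application under test here; services instrumented with
        # OpenTelemetry can show up alongside this span ...

If the services under test propagate trace context, the test's span and the system's spans land in the same trace, which gives you the kind of information beyond the test's own inputs and outputs that Lee is describing.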

[00:17:56] Scott Moore I do see that. I see observability being used. I see blue-green environments being used with feature flags and those types of things. In DevOps it's all about speeding up the flow, speeding up the flow. And we have a mindset in pure DevOps shops that everybody should be a systems or software engineer, instead of being a developer or a tester or whatever. Everybody should be a software engineer and should be able to do a lot of this work. And specialists, while they're needed, are there to educate and stay out of the way of developers as much as possible, with as little friction for the developers as possible. And this is kind of sticking in my craw right now. I'm having conversations with people like Bryan Finster, who ran the DevOps dojo for Walmart. He has done DevOps in very large organizations successfully. He will tell you: hey, performance engineers, I love you, you're specialists, as long as you stay out of my way and make it as easy as possible for me to do as much performance testing as I can in my own tools. But when you start doing it for me, I now hate you, because you're ruining my quality of life. There are things that we will be doing as performance engineers at a bigger scale, probably in parallel with what they're doing. But when we become friction in that pipeline flow, we become an issue for the people doing DevOps. That's a challenge for us, and it's something we have to think about and have a conversation about. I think it's going to be pretty important in the near future.

[00:19:29] Paul Grizzaffi But extrapolating that into my world, right. The test automation world and the automation for testing world, yes. We don't want to be friction. We don't want to be a roadblock. However, if we see that the train is going off the rails, we should not facilitate it going off the rails. We have to be friction at that point to say, yeah, I see what you want to do, but it's not healthy for the delivery. It's not healthy for the product or the organization. Do you agree?

[00:20:00] Scott Moore Oh, absolutely. And I think that's where those additional metrics around the integration and the deployment come into play. You're actually going to have metrics that say, how many times did you have to have manual intervention to make the deployment work? We've got a quality issue when there are too many of those. And when those rates get too high, that's when we begin monitoring the actual pipeline itself. And I think that's where the conversation starts.

[00:20:25] Lee Barnes Scott, I'm interested in what you said about "get out of my way." I've seen a lot of organizations that want the specialists to build them a framework, build them an infrastructure, to let the delivery teams, the software engineers, implement performance and own performance, or implement test automation and own it themselves, and then maybe come in to tweak things or improve things or provide support. I've seen that work occasionally, but in most cases I've seen it really go off the rails. I'd be interested in whether the two of you are seeing that as well, and what your thoughts are on that.

[00:21:00] Scott Moore I do see it. I do see it going off the rails. That is the problem, and that's why only 5% of organizations are seeing these huge results. It's because they're running into these issues between the development and the quality organizations, still taking ideas from the past and trying to morph them into what they think DevOps should be. And my problem is, I don't understand a developer saying: I want to do it all, I think I can do it all, but make it easier for me to do it all without having to know it all. Then at the end of the day, we're going to write a report about cognitive overload and how I need platform engineering to make it easy for me to sleep, because I'm working 18 hours a day, seven days a week. It's like, well, you want to do it all, and you want us to stay out of your way to get this fast flow, but now your brain is cognitively overloaded, and now we're writing about how the mental health of developers is being affected by this. You're taking on a lot. How can we help you do this? We can both work together to do it. It's a conversation that's long overdue. And that's where I think the problems we've faced with organizations are. Not only have we overused the word friction, but I think that's where the friction is.

[00:22:10] Joe Colantonio All right. First question: what kinds of tests are best suited for CT pipelines? Do you employ different tests, or a number of tests, for different environments?

[00:22:20] Paul Grizzaffi For me, the way I always look at this is, how long are you willing to wait? The more you are willing to wait, the more things you can test. The less you are willing to wait, the fewer things you can test, or the more money you need to spend to get all those things done. Say I have 40 tests that I want to run, 40 automated scripts, and I'll let the performance experts talk about the performance part after this. You have 40 tests you want to run, and they're going to take a minute each. That's 40 minutes. That's a long time. Nobody wants to wait 40 minutes to find out that their deployment was not egregiously broken. Let's parallelize that. Awesome. That's going to cost you cash, right? You have to go to a cloud provider, or you have to run your own grid, or you have to have extra horsepower and other things there. So it's a cost-benefit analysis you'll have to do, but you also have to understand how long you're willing to wait to say it's okay to go to the next stage of whatever. And yes, different tests and different pipelines, even for the same environment on a single deploy, might be valuable, right? Maybe you have a smoke test, whatever that means for you, to say this is not going to fall over if I have one user trying to log in. But we have a bunch of other stuff we want to check, and we don't want to pin everybody's work on getting all those other 39 or 30 things done. We run those out of band and we still report back, but it's not stopping us. We've decided, based on the risk versus the reward versus the value, to run that extra set of tests out of band. And that has to be context dependent for each team: how long they're willing to wait and how risk averse they are about moving on to the next step in testing or delivery.

[00:24:28] Lee Barnes Yeah, I think, Paul, that's a great point about risk. You're balancing the need for fast feedback with gaining confidence that you can move the build to the next stage, but you've got to balance that confidence against the risk. I mean, what confidence means in one environment, where maybe the software is injecting drugs into someone, versus another one, where it's a little icon on a social media app, is a whole different story.

[00:24:53] Paul Grizzaffi Absolutely. And also the cost, because, hey, I can't wait 40 minutes, but I can wait 20 minutes, and it's going to cost me X dollars to run those in parallel. Am I willing to wait the 40 minutes, or do we say, let's spend the money and wait only 20 minutes? Because there's a lot of opportunity cost that we get rid of by getting the results faster.
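Paul's trade-off is easy to rough out with back-of-the-envelope numbers. The following Python sketch is purely illustrative: the suite size, runner rate, provisioning overhead, and the dollar value of a minute of blocked pipeline are all invented, so substitute your own figures:

    TESTS = 40                    # automated checks in the gate
    MINUTES_PER_TEST = 1.0        # average duration of one test
    RUNNER_RATE = 0.10            # $ per runner-minute (hypothetical cloud/grid price)
    STARTUP_MINUTES = 2.0         # per-runner provisioning overhead (hypothetical)
    WAIT_COST_PER_MINUTE = 3.00   # $ value of a minute of blocked pipeline/people (hypothetical)

    def gate_economics(runners: int) -> dict:
        # Wall-clock time: tests split evenly across runners, plus spin-up overhead.
        wall_clock = (TESTS * MINUTES_PER_TEST) / runners + STARTUP_MINUTES
        # Compute bill: every runner is paid for the full wall-clock duration.
        compute_cost = runners * wall_clock * RUNNER_RATE
        # Opportunity cost of everyone waiting on the gate.
        waiting_cost = wall_clock * WAIT_COST_PER_MINUTE
        return {
            "runners": runners,
            "wall_clock_min": round(wall_clock, 1),
            "compute_cost": round(compute_cost, 2),
            "waiting_cost": round(waiting_cost, 2),
            "total_cost": round(compute_cost + waiting_cost, 2),
        }

    if __name__ == "__main__":
        for n in (1, 2, 4, 8):
            print(gate_economics(n))

Running it shows the pattern Paul describes: more runners shrink the wall-clock wait, the compute bill creeps up with per-runner overhead, and whether that trade is worth it depends entirely on what a minute of waiting costs your team.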

[00:25:18] Lee Barnes Yeah. And think about the types of tests. In my experience, the closer you are to the commit, the shorter the tests, things like unit tests. As you move on, you get API and integration tests, and as you move down the pipeline, the environments get more production-like and your tests get a little bit longer, so you have fewer of them. And again, ask which ones are critical. You always want to make sure those tests are adding value. I just ask myself the question: hey, do I need the information this test provides? If the answer is not a resounding yes, it's probably going to be left on the cutting room floor.

[00:25:52] Scott Moore If you're asking me, I would just echo what you said. I also think that from a performance standpoint, you can still use the same rules that you've used all this time when you were doing testing, using the Pareto principle, the 80/20 rule, but also use the three filters: business risk, the most common things people are doing, and how heavy on back-end resource usage you expect a business process or a test case to be. And then, obviously, you have to start small and build up from there. But those are the three filters I would use. I think you look at each environment and see where that applies, and I think that's how you start building your tests for performance.
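Scott's three filters lend themselves to a simple first-pass scoring exercise. The Python sketch below is only an illustration: the flows, the 1-to-5 scores, the equal weighting, and the 20% cut-off are all invented, so treat it as a starting point rather than a method anyone on the panel prescribed:

    from dataclasses import dataclass

    @dataclass
    class Flow:
        name: str
        business_risk: int      # 1-5: impact if this flow is slow or breaks
        usage_frequency: int    # 1-5: how often real users exercise it
        resource_weight: int    # 1-5: expected back-end resource intensity

        def score(self) -> float:
            # Equal weights for the three filters; tune these to your context.
            return (self.business_risk + self.usage_frequency + self.resource_weight) / 3

    candidates = [
        Flow("checkout", 5, 4, 4),
        Flow("search", 3, 5, 5),
        Flow("edit profile avatar", 1, 2, 1),
        Flow("nightly report export", 4, 1, 5),
    ]

    # Start small (Pareto-style): take roughly the top 20% of flows, at least one.
    ranked = sorted(candidates, key=lambda f: f.score(), reverse=True)
    top_n = max(1, round(len(ranked) * 0.2))
    for flow in ranked:
        marker = "performance-test first" if flow in ranked[:top_n] else "later"
        print(f"{flow.name:24s} score={flow.score():.2f} -> {marker}")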

[00:26:35] Joe Colantonio Right. This is a good follow-up, Tom. Thank you. I was about to ask: what does the roadmap look like to move from an immature organization, say the 95%, to a mature one, the current 5%?

[00:26:47] Lee Barnes As Paul alluded to, DevOps is more than just continuous testing, right? And it's more than just technology, more than just a CI/CD pipeline. It's a culture. That's part of it: first of all, realizing that it's a culture and knowing what you're trying to accomplish. But specifically from a testing perspective, I wouldn't start from scratch in terms of thinking about what types of testing you do and how you do your testing, but certainly take a step back and keep the test objectives and the risk analysis in mind. Really take a close, critical look at the types of tests you have and when you're running them. Do you really need all the test suites you have? How can you maximize the value you get from the minimal amount of testing you can do? And a big part of the DevOps culture is continuous improvement. So as Scott alluded to, fail fast, right? Try something; if it's not exactly what you want, tweak it, or, especially if it's early in the process, throw it out and go a different direction. Don't be afraid to experiment. Absolutely.

[00:27:52] Scott Moore I think success in DevOps means efficient and reliable pipelines that go fast, and there are metrics that you can put around that. For example, I actually wrote some of these down. In the continuous integration stage, as you've moved over to DevOps, you want to be looking at your build success rate: are you getting 95% or above on successes when you put a build out? And what is the mean time for a build to get to a target of X number of minutes? Obviously, these are going to be specific to the goals of your organization, but to use some examples: your test pass rate is going to be part of that pipeline stability; frequency of integration, where a higher frequency would indicate that you're getting faster feedback, so that would be, hey, we're maturing because our feedback is much faster based on the number of times we're integrating per day, or whatever that unit is. On the delivery side: deployment frequency; deployment lead time, the average time from code commit to production, where you want the lowest lead time possible; deployment success rate; change failure rate; and mean time to recovery. When you put out a build or put out a deployment, are there incidents that you have to recover from, and what's the mean time between failures? Those are just some examples. You start putting these metrics together, and over a time period you can see trends that look good. They show you how you're maturing in DevOps.
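Several of the delivery-side numbers Scott mentions can be computed from nothing more than a log of deployments. Here is a minimal Python sketch; the record shape and sample data are invented, so adapt the field names to whatever your CI/CD tooling actually exposes:

    from datetime import datetime
    from statistics import mean

    # Hypothetical deployment log: commit time, deploy time, whether it failed,
    # and how long recovery took when it did.
    deployments = [
        {"commit": datetime(2024, 4, 1, 9, 0),  "deployed": datetime(2024, 4, 1, 11, 0),  "failed": False, "recovery_min": 0},
        {"commit": datetime(2024, 4, 2, 10, 0), "deployed": datetime(2024, 4, 2, 10, 45), "failed": True,  "recovery_min": 35},
        {"commit": datetime(2024, 4, 3, 14, 0), "deployed": datetime(2024, 4, 3, 15, 30), "failed": False, "recovery_min": 0},
    ]

    window_days = 7
    deploy_frequency = len(deployments) / window_days
    lead_times = [(d["deployed"] - d["commit"]).total_seconds() / 60 for d in deployments]
    failures = [d for d in deployments if d["failed"]]
    change_failure_rate = len(failures) / len(deployments)
    mttr = mean(d["recovery_min"] for d in failures) if failures else 0.0

    print(f"Deployment frequency: {deploy_frequency:.2f} per day")
    print(f"Average lead time   : {mean(lead_times):.0f} minutes (commit to production)")
    print(f"Change failure rate : {change_failure_rate:.0%}")
    print(f"Mean time to recover: {mttr:.0f} minutes")

Tracked over successive windows, these are the kinds of trend lines Scott suggests watching to see whether the pipeline is maturing.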

[00:29:20] Paul Grizzaffi That's the way I look at all the technology stuff, but I'm a bit of a heretic there, and I like to talk about things that aren't just ones and zeros. And I'm not saying that y'all aren't, because I know that both Lee and Scott understand the bigger picture here, but a lot of times organizations don't. We're going to automate because we're DevOps. We're going to automate because we're scrum. We're going to automate because we read this book. No matter what you're doing, and no matter how much you have automated or in your pipeline or whatever, if you're getting more value out of it than you're putting in from an effort standpoint, you're still to the good. You don't have to think, well, I'm not going to get any value unless I have my whole pipeline laid out the whole way. No, that's not actually it. Everything you do that leaves you better off than you were from a value standpoint is better. And if you can't get to the 5% you want to get to, or whatever percentage it is, what's the difference to you between 45 and 50? Is there a huge business gain in that 5%? If there is, strive for it. If there's not, and you may have to sit down and do some math and some beard stroking to figure it out, then maybe you could spend that effort somewhere else to make your entire delivery process more valuable, more economical, and just better for your org, and therefore for your users or your customers.

[00:30:59] Lee Barnes Just real quick, to tie what Scott and Paul said together: Scott, you talked about higher-level metrics, and Paul, you talked about value. That's one reason I hate counting metrics for automation, like the number of test cases or the percentage automated. It tells you nothing about the value of what you're doing, and you end up just automating for the sake of automation. If you're moving the right needles, guess what? You can be pretty confident that you've got an effective test automation implementation and leave it at that. Take the win and keep going forward.

[00:31:33] Paul Grizzaffi Agreed. And I would expect that it works the same way in the performance engineering world, right? In performance testing, if whatever you put in gives you back more than you put in, it's probably providing some level of value, right?

[00:31:49] Scott Moore Well, on the performance side, you're going to know, because it's going to be more efficient, which means it's going to cost less. And if you're in the cloud, you're going to see that right away. It's faster.

[00:31:57] Paul Grizzaffi Oh yes. Yes, that's true if you're in the cloud. Now you've got actual monthly dollars that are coming out. Yeah, that makes a lot of sense. It's a little more esoteric in the test automation world.

[00:32:10] Joe Colantonio Absolutely. What does move the needle, then? Pam wants to know, and I'm interested in hearing, what does work. If you were to set up a perfect team, project, and pipeline with high quality, what would it look like and why? What do you see from high-performing teams? Is it all about culture? A lot of times people say it's all about the tooling, when really the culture is kind of messed up no matter what's in place. Any thoughts?

[00:32:33] Lee Barnes I'm sure we're all thinking the same thing, which is: it depends. The perfect consulting answer. But the perfect team is perfect for your organization in your context. I guess two things. First of all, understand where you are, right? If you want to get to the perfect team, understand where you are, where your pain points are, and what your objectives are, and hopefully those objectives align with business value. Then assemble the right culture, the right processes, and of course, the right people with the right skills, who understand that probably the most important thing is to continuously improve. Whatever you've done in the past, don't let that be your hammer. With everything you're looking at now, be willing to learn together as a team and move forward. It's kind of a high-level, fluffy answer, but I think it has more application across different contexts than a specific technical answer would.

[00:33:35] Scott Moore In thinking about this, if I were to have the perfect team with the perfect results, these are the qualities I would think of. It would be individuals who, first of all, have a passion for what they do, whatever that area is, and like to collaborate with each other and communicate very well. But then they all have a business-context attitude. In other words, they're trying to think, how can we meet whatever business objective has been set up by whoever is running the ship? They told us what the business objective is, and every one of the departments, every one of the areas, wants to align with that. And let me say something about tools real quick, and it's probably the most important thing I'll say in this session. Choosing which tools to use to build this process and make it work for us in DevOps can be a very complex issue. If your company chooses some technology or tool for any reason other than that it aligns as closely as it can with the business goals, because of which vendor is taking you out to lunch or, more often, who you're playing golf with, you are wasting company treasure. And I'll leave it at that. I'll let you think about that.

[00:34:43] Paul Grizzaffi I heard both Lee and Scott say the word business, and I cannot echo that enough, because as a delivery team, your company, your org, your corporation has some business goals, and probably your organization has a subset of those, and maybe your team has a subset of those. Every line of code written needs to be in direct, or at the very least indirect, support of those goals. Testing has to be in line with those goals as well, and automation has to be in support of testing. Automation is not a thing on its own. Automation is a facilitator. It's a force multiplier. It is a crank turner for the human beings to be more efficient or more effective. And if we're doing things that are just cool but aren't in line to help us deliver on those business goals, then we're misplacing our effort, our cost-benefit analysis is going to go to hell, and things are just not going to be good at that point. I'm going to 100% get behind Scott and Lee and say yes: business, business, business. It's not fun to talk about, but it's important.

[00:35:58] Joe Colantonio All right. So for the current state, it sounds like we have a lot of work to do, a long way to go. I think we'll do a speed round on trends to close this out. We covered a bit of all this, but I'm just curious to get your quick thoughts on maybe some trends. Observability, for example; I've heard OpenTelemetry is a trend I've been hearing more and more about. Anyone else have any other trends you think people here should investigate and research a little bit more after this discussion?

[00:36:25] Scott Moore Well, you can't have a discussion about trends in any of this without talking about AI, how AI is going to make a difference, and whether or not it's going to take anybody's jobs. You've got to learn how to use some of this stuff. You will begin to see AI interwoven into pretty much every automation tool and every observability platform, more and more, to find anomalies and to find issues faster. And it will also be another force multiplier. It's coming. You've got to learn how to use it. That's probably the next big thing.

[00:36:55] Joe Colantonio So you're pro-AI, Scott. You believe it really can help with this type of...

[00:36:59] Scott Moore Of course. There are things that it can do very, very well. One of those is picking out anomalies. Well, what do we do as performance engineers? We look at analysis. We're trying to figure out why this thing did what it did. And there are some relationships it may be able to pick out that we didn't even think about. That's a force multiplier.

[00:37:15] Paul Grizzaffi Scott, I'm your huckleberry. You and I talked about this on the roadshow, right? And yes, I am not anti-AI, but what I expect to happen is very much like when the CASE tools came out, very much like when record-and-playback came out: they're going to disproportionately affect people. Whoa, we can get the AI to do all our testing for us, so we're going to lay off all our testers, or all our call support people, or all our people that do X, Y, Z. And it's going to fail, because it's not an all-or-nothing thing, at least today, right? It's not happening in my career time, probably not in my kids' career time, that AI is going to actually fully replace a job. We're going to have all these people disproportionately affected, and then slowly they're going to be brought back in, or they're not going to be, because they're going to say, to hell with this, I'm done with this whole operation. We're going to have this shift where it all goes to the machines. The machines in some facilities will fail, and in some facilities they will be awesome. We'll learn from it. But people get hurt along the way, and then we'll wait for the next big trend to come, and then we'll disproportionately jump into that as well. Again, I'm not an anti-AI guy. I'm an anti-"let's get on the AI bandwagon because it can do all the things" guy.

[00:38:41] Lee Barnes Yeah, I agree 100% with what Scott said: it's a force multiplier, a facilitator. But it seems, especially in the testing world, for some reason we have a propensity to buy snake oil. And I don't know why that is, but I see a lot of tools being bought and technology being replaced because of some wild claims about AI, tools that will probably help in some way but certainly not do what they say they're going to do. I agree with both of you. It's an important advancement for sure. But we just have to be careful.

[00:39:15] Paul Grizzaffi And that's the thing, really. It's not the testing discipline, not the testing world, if you will, that's buying into the snake oil. It's the people who don't understand testing and automation, what it takes and how it works, that are buying into the snake oil and disproportionately affecting people. So yes, but I think it's a different audience that's buying into that snake oil.

[00:39:48] Remember, latency is the silent killer of your app. Don't rely on frustrated user feedback. You can know exactly what's happening and how to fix it with BugSnag from SmartBear. See it for yourself. Go to BugSnag.com and try it for free. No credit card is required. Check it out. Let me know what you think.

[00:40:09] And for links to everything of value we covered in this DevOps Toolchain show, head on over to TestGuild.com/p141, and while you're there, make sure to click on the SmartBear link and learn all about SmartBear's awesome solutions to give you the visibility you need to deliver great software. That's SmartBear.com. That's it for this episode of the DevOps Toolchain show. I'm Joe. My mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers.

[00:40:43] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at TestGuild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}