About this DevOps Toolchain Episode:
In this enlightening Test Guild DevOps Toolchain episode, Joe Colantonio interviews seasoned testing expert and Enterprise Solution Architect Rich Jordan on “DaveOps” – a term coined to stress the importance of tailoring DevOps principles to individual contexts. The conversation focuses on the pivotal roles of visualization, standards, and collaboration in managing IT complexity. Rich argues for understanding systems in depth to ensure effective DevOps delivery. He emphasizes the value of Behavior Driven Development (BDD) and early conversations to fill knowledge gaps and improve efficiency. The final segment dives into the potential of AI and machine learning in DevOps, the crucial role of continually updated, unbiased domain knowledge, and the concept of “digital twins.” Rich imparts that the key to successful DevOps is embracing flow, feedback, and experimentation, and continually reassessing processes and strategies.
TestGuild DevOps Toolchain Exclusive Sponsor
SmartBear believes it’s time to give developers and testers the bigger picture. Every team could use a shorter path to great software, so whatever stage of the testing process you’re at, they have a tool to give you the visibility and quality you need. Make sure each release is better than the last – go to smartbear.com to get a free trial instantly with any of their awesome test tools.
About Rich Jordan
Rich is a 20-year testing professional who recently joined Curiosity Software after 15+ years leading teams and creating organization-wide testing strategies within a large financial services organization, where he also led the Test Engineering Community of Practice. As well as creating the organizational strategy, Rich led DaveOps – the organization's central technical testing team building capabilities in Data, Automation, Virtualized Environments, Operations, Performance & Security – in recent years using modelling as the foundation to face into many of the organization's deep-rooted challenges. Rich has been a regular industry speaker, including at Gartner and the DevOps Enterprise Summit.
Connect with Rich Jordan
- Company: www.curiositysoftware.ie
- Blog: www.thecurioustester
- LinkedIn: www.richjordan
- YouTube: www.CuriositySoftwareIreland
Rate and Review TestGuild DevOps Toolchain Podcast
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:01] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability, from some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast, and my goal is to help you create DevOps toolchain awesomeness.
[00:00:19] Hey, it's Joe, and welcome to another episode of the Test Guild DevOps Toolchain. Today, we'll be talking to Rich Jordan, all about what DaveOps is. That's right. I said it correctly, DaveOps. Rich, if you don't know, is a 20-year testing professional with years of experience leading teams, creating organization-wide testing strategies within a large financial services organization, leading its Test Engineering Community of Practice, and creating the enterprise DaveOps strategy. Rich has been a regular industry speaker, including at Gartner and the DevOps Enterprise Summit, and he's currently an Enterprise Solution Architect at Curiosity Software. If you haven't checked out our webinar with Curiosity Software on model-based testing, I'll have a link to it in the show notes. Definitely check it out. And you don't want to miss this episode. Check it out.
[00:01:03] This episode is brought to you by SmartBear. As businesses demand more and more from software, the jobs of development teams get hotter and hotter. They're expected to deliver fast and flawlessly, but too often they're missing the vital context to connect each step of the process. That's how SmartBear helps. Wherever you are in the testing lifecycle, they have a tool to give you a new level of visibility and automation, so you can unleash your development team's full awesomeness. They offer free trials for all their tools. No credit card required. And they even back it up with their responsive, award-winning support team. Shorten your path to great software. Learn more at SmartBear.com today.
[00:01:46] Joe Colantonio Hey, Rich, welcome back to the Guild.
[00:01:49] Rich Jordan Hey, thanks for having me, Joe. It's great to be here.
[00:01:51] Joe Colantonio Great. I guess before we get into it, is there anything I missed in your bio that you want The Guild to know more about?
[00:01:57] Rich Jordan You pretty much covered it. I've worked in test for far too many years, and quite a lot of my recent journey, especially in financial services, has been around large transformations – Agile, DevOps, all of those things – facing into them from a testing context and a broader quality aspect. So yeah, let's talk.
[00:02:17] Joe Colantonio I guess right off the bat, I kept saying DaveOps, and it's a term I haven't heard before, so can you explain it – what is DaveOps?
[00:02:25] Rich Jordan You probably haven't, because we made it up – it was a play on words. As I say, I worked in financial services, and I led what tended to be the technical testing elements within those large organizations. DaveOps actually had two meanings. The more PC meaning was the teams I led: Data, Automation, Virtualized Environments, Operations, Performance, Security – DaveOps. The other one was really an icebreaker in a lot of meetings, internal or external. Within large financial services organizations, they're all going through transformations and they're all doing DevOps. And when you scratch the surface, what you find is you've got central DevOps teams that are putting together tooling chains and rollouts. They've probably taken a lot of the processes they used to have, or they've brought in a lot of open-source build tools and testing tools and things like that, and they're clumping them all together into a tooling chain. And hey presto, they raise the flag: we've succeeded in the DevOps movement. The reality is the pipeline would chug along for a couple of weeks or months, and then complexity would bite. That pipeline would come to a crashing halt, and you'd get drafted in as the technical testing element of the organization – "we want to do test automation," that's how they would get you in. And it's, hang on a minute, you've got a pipeline, but you don't really know what you're putting into it, so it's rubbish in, rubbish out. It's like the U-bend on a toilet: if you keep chucking rubbish down that U-bend, it's going to become blocked. And so we'd get into a much more fundamental conversation about what it means to face complexity within organizations. You get into the Three Ways of DevOps – and not a lot of people actually talk about the Three Ways of DevOps, they just talk about these tooling chains. So you've got your flow, your feedback, and your experimentation. You go into flow and you start asking: explain to us what your flow is. One of the subheadings of flow, if you like, is that teams shouldn't allow for local optimization, and you find that within organizations there's local optimization all over the place. They're all chucking rubbish over the fence to the next person and wondering why those pipelines are not working. This is where we would come in. DaveOps – the play on words, a bit of a joke – was to come in and take a step back: let's go a bit slower, work out what we're actually doing here, get some grounding in place, and then we can start to automate that lean process. Once it's understood, then execute it. And that's not just a testing thing. You get into the whole "when does testing become quality" narrative, and into it from an architectural perspective, for example. Local optimizations: actually, we've got monolithic systems here which we're trying to crowbar through this pipeline, and the two don't seem to get along. And if you're starting to bring in a lot of new technology – a lot of these organizations will be doing microservices, or aiming for a looser architecture in the long term, or doing event-driven architecture – they've still got monoliths.
And, hang on a minute, the technology allows you to do things in a far more isolated way, but you're not doing them in practice because you haven't got the architectural disciplines to make that technology work for you. Quite often we'd find that the test approach was also agnostic of, or totally ignored, the technology being brought in. Quite a lot of the test teams aligned to those DevOps teams – yes, they had to test those systems, yes, they had SDETs – were doing a lot of the same tests they would always do, big end-to-end UI-based tests, and wondering why it wasn't working. It's the hardest place to automate.
[00:06:25] Joe Colantonio Grounding in place, getting the grounding in place. Are you saying people just jump to DevOps and try to ram things through without actually following what makes sense for their company?
[00:06:36] Rich Jordan Yeah, you've got this interesting dynamic. You've got companies with a central DevOps team that's building open-source tooling to chain together toolsets that other teams within the organization will use. Then you've got this other dynamic of those other teams: they've got this accountable freedom, if you like, or federation. They don't want to use those toolsets – they're rejecting them and starting to bring in their own chosen toolsets. So you've got this strange dynamic of tool proliferation. And you get into CI/CD, which is an obvious toolchain buzzword that goes about. I remember having a conversation with architects to ask: what is that team actually continuously integrating? Because yes, they've got lots of tools, but where are the boundaries in the system they're actually developing? And you'd get a blank look all the time. So yeah, it comes back to this architectural grounding – it's just not there. Yes, you're using the technology, but you're just shoving the same old rubbish through new pipelines or new technology. You keep doing the same thing and expecting a different answer – the definition of insanity. And yet we do that a hell of a lot. DevOps is just the next thing on the block – Agile before that, and maybe something else before that, I don't know – but we seem to keep repeating the same problem.
[00:07:56] Joe Colantonio Absolutely. I worked for a healthcare company and they had a very old application. It was, we're going to do CI/CD now, even though we didn't release to the customer very often – we released quarterly because we were regulated. So they'd say, oh, we're CI/CD, and they tried to force us to fit into a setup our architecture wasn't built for. Nothing changed; we just put in CI/CD and tried to ram it through. Is that what you're seeing a lot of companies doing?
[00:08:22] Rich Jordan Absolutely, yeah. To my point, people want to do DevOps and they want to raise that flag as quickly as they can because they've brought in a lot of new tools. I've got very similar experiences to what it sounds like you've had. You know you've got a monolith there, and you know you're carrying technical debt. Your organization no doubt would have been quite siloed, and therefore Conway's Law suggests that we don't really understand the interfaces particularly well. And we almost don't want to face any of that stuff. We just want to shove what we're doing at the moment through this automated chain – and yet we were always struggling, before that chain came along, to actually automate the build or understand what versions we were trying to put in. All of a sudden, because we've now got a name to give it, it's all fixed? We've got some open-source tooling – what's going to be different? Not a lot. And this comes back to: how many teams actually trying to do DevOps, whether you've got a central team or these federated teams, even appreciate those fundamentals of DevOps, let alone actually try to face into what they've got today – the challenges, the monolithic architecture, the interfaces we don't really understand? How are we going to unleash flow so we can overcome some of these challenges? I don't think there's a lot of consideration of any of these things.
[00:09:39] Joe Colantonio Yeah, once again, we didn't have an automated build – we had to manually go in and put it together – and with CI/CD it was insanity what they were trying to do. What would you recommend, though? It almost sounds like you need to shift left some standards or set of things to be in place before you even get to that point. What would those be?
[00:09:57] Rich Jordan I think there is a fundamental problem around risk and transparency – what we do or don't understand about our systems today. That includes architecture and technical debt around interfaces, but also ways of working: how do you go slower to go faster? I think there is very much a delivery or feature-factory mentality in IT today – we've got to get the next thing out tomorrow or the company will die. Will it really? But because of that, we're paying a premium for the problem we've got, packing a little bit more mud onto that big ball of mud. And so for us – and I don't want this to sound like a sales pitch, but it's going to go into modeling a little bit – this is where we get into ways of working. I know you had John Ferguson Smart on a couple of weeks ago talking about BDD. He shared with me a white paper, probably a couple of years ago now, called BDD at the Heart of DevOps, which fundamentally talked about collaboration – doing very early collaboration upfront to understand what we're trying to do. And this is what we started to do, but with modeling. Where the BDD followers would go down the Gherkin route, we went down the visualization route, creating models in our early conversations. In that white paper – and I'd encourage people to go look at it – there's a table about the maturity of those conversations, around eliminating, I'll call them defects, but they were never defects because they were found early on – around getting consensus and eliminating ambiguity in early conversations. As you mature those conversations, things become a lot more clear and transparent, a lot more structured and robust around how things are supposed to work. It sounds very simple, but that, coupled with the visualization of the model, really works. And to come back to the complexity problem, there's a book that percolates a lot around DevOps, which is Team Topologies. A lot of it talks about the platform team and how to structure teams, and in there is a section on cognitive load, which I think is really interesting. IT systems are by their very nature complex beasts, and they change. Our brains couldn't possibly cope with the kind of change that's going on, the combinatorial explosions and all that kind of thing. So what have we got – living specifications or anything like that – to help us overcome that cognitive load? The reality is a lot of teams are using Visio diagrams or Word documents or head knowledge. There are a lot of old systems that have got 25-year SMEs getting by on head knowledge. But when you start to have these structured early BDD conversations or visualizations, what you find is there are lots of gaps – in head knowledge, in documentation, in all of those things – that today, if you're not doing this, creep into the development lifecycle or the testing lifecycle or, worse still, into production.
And actually, by having these structured conversations with a living specification supporting you, you eliminate a hell of a lot of those things. Because you've got so much knowledge built into those visualizations, you find that your three amigos conversations or your feature mapping conversations don't need to go over the fundamental stuff all the time – they can build on new conversations, because everybody's already got this brilliant grounding, these visualizations of what they're doing. And I think this is quite interesting: we talk about stakeholders, and straight away we go to users quite a lot, but there are many stakeholders. One of the areas I managed was security testing. I remember sitting in a conversation with a security function that sat outside our team, and we started talking to them about this collaborative design we wanted to do with visualizations. They remarked very quickly: you know what, we've never had this type of conversation before. We always have to go back to basics about how the system works, but because you've got all of these things, we don't need to do any of it. We know it's all there, so we can start to talk about the stuff we want to put in, or the stuff we know you're doing – how does that actually work? It was a lightbulb moment, and I think that happens a lot with these collaborative sessions, the visualizations, and maybe even Gherkin. On the face of it, it sounds quite simple, but until you've done those things and lived it, you don't really appreciate how revolutionary that way of working is.
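To make the modeling idea concrete, here is a minimal Python sketch of generating test cases from a model treated as a living specification. The login flow, the step names, and the depth limit are all invented for illustration – this is a sketch of the general technique, not Curiosity's implementation:

```python
# Minimal model-based test generation sketch. The model is a directed
# graph of a hypothetical login flow; every simple path from the start
# node to a dead end becomes a test case.

LOGIN_FLOW = {
    "start": ["enter_credentials"],
    "enter_credentials": ["credentials_valid", "credentials_invalid"],
    "credentials_valid": ["dashboard"],
    "credentials_invalid": ["show_error"],
    "show_error": ["enter_credentials", "account_locked"],
    "account_locked": [],
    "dashboard": [],
}

def generate_paths(model, node, path=None, max_depth=10):
    """Yield every simple path through the model as a list of steps."""
    path = (path or []) + [node]
    # Skip steps already on the path so we only enumerate simple paths.
    next_steps = [s for s in model.get(node, []) if s not in path]
    if not next_steps or len(path) >= max_depth:
        yield path
        return
    for step in next_steps:
        yield from generate_paths(model, step, path, max_depth)

if __name__ == "__main__":
    for i, case in enumerate(generate_paths(LOGIN_FLOW, "start"), 1):
        print(f"Test case {i}: " + " -> ".join(case))
```

A richer model would attach data and expected results to each step, but the point survives even in this toy: the test cases fall out of the model, so updating the model updates the tests, which is what makes it a living specification rather than shelfware.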
[00:14:49] Joe Colantonio Yeah, because it almost sounds low-tech. When you think of DevOps, people don't usually talk about BDD or modeling – they think, oh, that's part of development or testing, nothing to do with me. But you just brought up two examples, DevOps and security, where I don't think most people think of conversations, BDD, or modeling as being implemented – and there are benefits, right?
[00:15:08] Rich Jordan Absolutely. Just to elaborate on that a bit more, let's get into the platform as well. We talk about the test pyramid quite a lot, but I almost don't think that goes far enough in terms of where the platform comes into it. And so, compliance as code. I think it's quite interesting – we're in the age of cloud, but whether it's cloud or on-prem infrastructure, it's infrastructure; it tends to be the same sort of stuff you're deploying. So, how should you actually configure a RedHat Linux server? Ask that question of a lot of people and they haven't got it right – and they should have. The whole idea of the platform team is that this is abstracted away from everybody else. But ask the platform team: how should you build it? How do you configure it? Do they know? Chances are they don't; there isn't a great deal of standards built around it. Actually, there's an industry standard for how you should configure a hell of a lot of the tooling that tends to go into a DevOps setup. But the problem is, how do you take that? There are a hell of a lot of configurations that go into a RedHat Linux server, and that's where you get into the cognitive load problem. If you can get that into a model, which can then generate a Python script to do compliance as code, you've got a conveyor belt of platforms you can churn out just like that. And you've got a brilliant change mechanism as well: when RedHat delivers the next version – I think they do it on a quarterly basis – there's a standard you can compare and contrast against, and you can build that into your models very, very quickly. That cognitive load and that cycle of change are fully managed in the flow process. It sounds quite boring, but if you don't do that, how the hell are you going to keep that pipeline flowing?
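As a rough illustration of what such a generated compliance check could look like, here's a minimal Python sketch. The baseline settings below are invented for the example – they stand in for whatever hardening standard a model would encode, not a real benchmark:

```python
# Minimal "compliance as code" sketch: compare a server's sshd settings
# against a baseline that a model of the standard might generate.

# Hypothetical baseline -- illustrative values, not a real standard.
SSHD_BASELINE = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def parse_sshd_config(path="/etc/ssh/sshd_config"):
    """Read 'Key value' pairs, ignoring comments and blank lines."""
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) == 2:
                settings[parts[0]] = parts[1].strip()
    return settings

def check_compliance(actual, baseline):
    """Return a human-readable finding for every deviation from baseline."""
    return [
        f"{key}: expected {expected!r}, got {actual.get(key, '<unset>')!r}"
        for key, expected in baseline.items()
        if actual.get(key, "<unset>") != expected
    ]

if __name__ == "__main__":
    findings = check_compliance(parse_sshd_config(), SSHD_BASELINE)
    for finding in findings:
        print("NON-COMPLIANT:", finding)
    print("compliant" if not findings else f"{len(findings)} finding(s)")
```

Running a check like this in the pipeline is what turns the standard into the change mechanism Rich describes: when the next OS or standard version lands, the baseline regenerated from the model changes, and the build reports the drift automatically.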
[00:16:57] Joe Colantonio Not even boring – it sounds basic, not technical, but it's one of the hardest things to do, the culture piece, it seems. So how do you get buy-in? Like, hey, this is quality – sure, it's DevOps, it's security, it's platform, but it's actually quality as well. I find it hard; people are so siloed. How do you get everyone to say, okay, this is actually part of quality, let me jump in earlier, rather than when it gets to me at the end of the pipeline?
[00:17:23] Rich Jordan Through experimentation, I think – and there are many ways to do it. The platform example is quite an interesting one. We couldn't convince the platform build team to adopt a standard, but we needed to test it. So we said: if you're not going to adopt a standard, we don't know what we're testing, so we're going to adopt the standard in test. And if we test against the standard and you're not building against it, you're going to find problems. It's back to flow and local optimizations – we're going to find stuff that will have to go up to the product owner to either accept or reject, and you're creating that problem, because we don't know what we're supposed to be testing unless we've got some kind of standard. So we did that. It wasn't quite as stick-over-carrot as it maybe sounds, because very quickly we started executing things – it was automated through compliance as code, it happens just like that, you just keep checking lots of configurations. The actual benefit was that we could go back with a hell of a lot of intelligent questions about how something should be configured, which they'd not thought about. And it was, oh, I didn't even realize that was there – yes, that sounds like a really good thing to start adopting. And it's like, yeah, we're already building these assets in test, why don't you start to adopt them as well? I'm thinking of another example where you've got upfront transparency about the risks of the technical debt you've got. It was around implementing an event-driven architecture in an organization using a DevOps delivery methodology, but bolted onto what was a monolithic system. Now, everybody knew that. So the conversations very early on got to: we're going to have a problem here because we know we've got this debt. So, architects, can we get to a place where we're talking very early doors about loose coupling? Because we want to keep that loose coupling in place – if we don't, that new thing you're bringing in is just going to be monolithic too, and the whole pipeline, the ephemeral environments we want to operate with, is never going to be a reality. Two polar opposites in terms of how we implemented it – a very similar approach, very different technologies. But it's horses for courses. I think you need to use and abuse the relationships you've got, or the standards that are available, when your peers in development or architecture aren't up for the best ways of working, if you like. There's always catching them out: this is the bar we're testing against, and you're falling below it.
[00:20:18] Joe Colantonio Absolutely. So talk a little bit about flow – you talked about experimentation. A lot of times I also see people resisting feedback for some reason: this is the CI/CD process, this is what the textbook says, I don't see modeling or BDD or conversations in the book. Do you see that stage as a bottleneck as well?
[00:20:38] Rich Jordan Oh, absolutely. I've got a sweeping generalization here, but we always used to work with quite a lot of partners. We talk about BDD – BDD has become synonymous with Gherkin, and to me that's a misinterpretation of what BDD is. "But we're doing it, we use BDD automation." No, no, no, that's not what it means – you're going to build up the same problems you've always had. So, do you want to have a conversation? Some of the time the answer is no, because that's the freedom they've got, and you've got to let them make the mistakes and then come in and say: yeah, this is what you should have done, really. This is how we get things going, this is how we increase flow, this is the feedback you need. That's where DaveOps came in. But I think it's interesting that you need the right culture within the organization to allow feedback to be productive, especially if you're working with partners. Within the financial services organization I was at, I was leading the community of practice for test and test engineering, and we created three mechanisms to increase collaboration and innovation. To a certain extent, you need a certain amount of momentum to overcome the diffusion-of-innovation curve – you get certain areas that are dispassionate – so how do you create social proof? We put in place a cadence of sessions. One of them was a fortnightly show-and-tell where we'd invite people to come along and present back, whether they were doing it well or whether they had challenges. We had the organizational strategy, which talked a lot about collaboration and model-based testing, about automation frameworks, the freedom to use what you want to use, and the metrics we collect. We invited teams to present on adherence to it, but also on deviation from it: how did it work, and why did it work, when you deviated? Because we weren't stuck in our ways about how we wanted it to work – we wanted to keep evolving and be the best it could be, because admittedly it shouldn't stand still; we should always be experimenting. That show-and-tell was called the Rebel Alliance – a DevOps reference for anyone familiar with The Phoenix Project; that's what the breakaway group in The Phoenix Project was called. And I guess even the name was thought through, in terms of being a bit rebellious, challenging the status quo, and facing into some of the apathy that I think a lot of large organizations have. Then we had the Testing Times, a newsletter we'd send out. Again, it was all about the teams writing articles for us, explaining what they were doing, raising their profile, and encouraging people to speak out and be open about the challenges and successes they were having. The last one was the Friday conversation. We had a kind of internal Twitter technology, so it was a very open conversation around challenges that we knew we had in the organization.
Things like: why don't people tend to care about nonfunctional requirements? Then you'd have a very robust conversation about where nonfunctional requirements exist, what the alternative to a nonfunctional requirement is, and what the mentality is around even understanding the nonfunctional elements of certain systems. So I guess we were cultivating a culture where you could have those kinds of conversations. I think as a tester you've almost got to be brave: if you don't speak up to a certain extent, then as a tester – whose job it is to challenge and give feedback – you're stuck, and nothing's going to change.
[00:24:29] Joe Colantonio Absolutely. Rich, we've touched a lot on the culture and soft skills side of what's called the DevOps toolchain. You mentioned modeling. I don't know if, when you worked at your previous company, AI and machine learning were around much, or how big they were. Do you see any type of tooling now helping with the DevOps process that you wish you'd had when you were in that role? Do you see it growing, or is it still just a buzzword – does it all still come down to culture?
[00:24:54] Rich Jordan I think a lot of it is culture. The interesting thing – I don't know whether you've seen any of my Model GPT videos lately, but we built something quite quickly that talks to AI, using generative AI to generate models. A lot of what I've talked about in terms of the collaborative conversation – you have the three amigos; you could have a fourth amigo, with generative AI playing a role in the brainstorming. I think one of the challenges – there's a lot of marketing euphoria, I think that's the term at the moment, around the possibilities of AI. And the problem everybody will have is context. ChatGPT can do some wicked things, right? But it's got the context of the world, and with the greatest will in the world, none of your organization's documents are in that world. So how does ChatGPT, or any generative or large language model, get the context to be truly useful? We're doing a lot of work at the moment around how you train up or fine-tune language models. And I always come back to the fundamentals: in order to train those things up in the right context, we need a lot of domain knowledge that you trust, that is unbiased, and that is always up to date, so you're not going to get hallucinations. In an ideal world, the world would be models: you'd have a hell of a lot of living specifications with rules baked in, extractable from models, that you could feed into those large language models. Otherwise, you've got a problem: you've got a load of out-of-date documentation, a load of Visio diagrams, lots of things stuck in people's heads. How do you train it up to give it the context to be useful? Now, you've got some kinds of declarative AI that do stuff – I guess you can find things on screens that have changed – but is that truly groundbreaking? Actually, I think the moment comes once we get over how we give it the context of the organization's domain, and then we get into the cycle: you get the domain context and you keep feeding it that context in that cyclical motion of change – continuous integration. Each time a change happens, the thing should have more context. To me, there's an interesting dynamic here in terms of how you train it and how you keep on training it. Because with the traditional ways – Visio diagrams or Word documents – unless you keep that document up to date, which you've never done in the past, that large language model is going to be out of date. You need that living specification as a core part of your changing, living ecosystem. The definition of "digital twin," I think, has been stolen for something else, but I think it fits far better here: how do you keep this large language model up to date with the context it needs to stay relevant, so it can do generative AI while staying aligned with how the system should work and how the system should change?
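As a sketch of that grounding loop, here's what feeding model-extracted rules into a prompt might look like in Python. The rules, the prompt shape, and the call_llm stub are all hypothetical – the point is only that regenerating the rules from the living model on every change keeps the LLM's context current:

```python
# Hypothetical grounding loop: rules extracted from a living model are
# prepended to every prompt, so changing the model changes the context
# the LLM sees on the next run.

# Stand-in for rules exported from a model of the system under test.
MODEL_RULES = [
    "An account is locked after three failed login attempts.",
    "Locked accounts can only be unlocked by a service agent.",
    "A password reset invalidates all active sessions.",
]

def build_prompt(question, rules):
    """Ground the question in the current, model-derived domain rules."""
    context = "\n".join(f"- {rule}" for rule in rules)
    return (
        "Answer using only the following domain rules:\n"
        f"{context}\n\n"
        f"Question: {question}\n"
    )

def call_llm(prompt):
    # Hypothetical stub -- wire up whichever LLM provider you use.
    raise NotImplementedError

if __name__ == "__main__":
    print(build_prompt("What happens after a third failed login?", MODEL_RULES))
```

Whether the rules are injected into prompts, used for retrieval, or used as fine-tuning data, the design choice is the same one Rich describes: the model, not stale documents, is the source of truth the AI is kept in sync with.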
[00:28:12] Joe Colantonio Love it. Okay, Rich, before we go, is there one piece of actionable advice you could give to someone to help them with their DevOps efforts? And what's the best way to find or contact you, or learn more about Curiosity?
[00:28:22] Rich Jordan I'll recap one of the bits I mentioned earlier, which is that your DevOps test approach should really be looking at flow, feedback, and experimentation. If you're still doing a test approach that you were probably doing 15 years ago, or you've just adopted BDD, Gherkin, and Selenium doing UI-based testing only, then you're probably doing it wrong. Take a step back and have a look at flow – do a quick review of the flow you've got in your team at the moment, and don't just look at testing, I mean the whole line. You've probably got some knowledge gaps in there that you really need to address. Otherwise, you keep living with the problem, and the problem's only going to get bigger the longer you leave it. You need to make the change now – the longer you put it off, the harder it's going to get. In terms of contacting us, we post a lot of content on LinkedIn and social media all the time. We've always got videos coming out – as I say, there's a video around Model GPT, which is the latest thing we're exploring at the moment – and there are a hell of a lot of videos on the content I've been talking about today, blogs, and things like that. So, lots of content. If you want to reach out, feel free. Hopefully, I'm not scary, I'll be honest.
[00:29:34] And for links to everything of value we covered in this DevOps Toolchain show, head on over to TestGuild.com/p119. While you're there, make sure to click on the SmartBear link and learn all about SmartBear's awesome solutions that give you the visibility you need to deliver great software. That's SmartBear.com. That's it for this episode of the DevOps Toolchain show. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers.
[00:30:07] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at TestGuild.com. And if you're in the DevOps, automation, or software testing space – or you're a test tool provider – and want to offer real-world value that can improve skills or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.
Sign up to receive email updates
Enter your name and email address below and I'll send you periodic updates about the podcast.