About this DevOps Toolchain Episode:
In this episode of the DevOps Toolchain podcast, host Joe Colantonio chats with Derek Ferguson, the Chief Software Officer at Fitch, about the dynamic world of DevOps transformation. They explore the intersection of technology and strategy, emphasizing how to navigate regulatory landscapes while fostering innovation.
Derek shares insights from his diverse career, from his early days in the tech industry to leading the creation of the ESG-focused Sustainable Fitch platform.
Tune in to discover Derek's perspective on marrying traditional financial metrics with environmental data, the evolving role of AI in development and production, and adopting new tools like Terraform to create seamless DevOps pipelines.
TestGuild DevOps Toolchain Exclusive Sponsor
SmartBear Insight Hub: Get real-time data on real-user experiences – really.
Latency is the silent killer of apps. It’s frustrating for the user, and under the radar for you. Plus, it’s easily overlooked by standard error monitoring alone.
Insight Hub gives you the frontend to backend visibility you need to detect and report your app’s performance in real time. Rapidly identify lags, get the context to fix them, and deliver great customer experiences.
Try out Insight Hub free for 14 days now: https://testguild.me/insighthub. No credit card required.
About Derek Ferguson
Derek is the Chief Software Officer at Fitch, where he leads the development of software used by investors to identify opportunities and quantify risk. Fitch’s products have won 27 awards — and counting — in the last two years alone.
Derek also led the creation of the software systems behind Sustainable Fitch, a groundbreaking platform that compiles granular and transparent ESG ratings for some 1,000 entities, helping investors make smarter decisions with their money. It won the ESG Research of the Year for Fixed Income award two years in a row.
Outside of software creation, Derek is the author of the bestselling Broadband Internet Access for Dummies, an expert in masterclasses hosted by GOTO Chicago, and a speaker at prestigious events, from JavaOne to TechEd and beyond.
Connect with Derek Ferguson
- Company: www.fitch.group
- LinkedIn: www.linkedin.com/in/derekmferguson
Rate and Review TestGuild DevOps Toolchain Podcast
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:00] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability, from some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast, and my goal is to help you create DevOps toolchain awesomeness.
[00:00:18] Joe Colantonio Hey, today we'll be talking with Derek Ferguson all about DevOps transformation, culture, strategy, and a whole bunch more having to do with technology. If you don't know, Derek is the Chief Software Officer at Fitch, where he leads the development of software used by investors to identify opportunities and quantify risk. Really interesting topic; we're going to dive into that as well. If you don't know, Derek also led the creation of the software systems that enable Sustainable Fitch. I guess we'll talk about what that is as well. He's also a bestselling author and a speaker at many events. He has a really diverse background, a lot of things. You don't want to miss this episode. Check it out.
[00:00:55] Hey, before we get into this episode, I want to quickly talk about the silent killer of most DevOps efforts, and that is poor user experience. If your app is slow, it's worse than your typical bug. It's frustrating. And in my experience, and that of many others I've talked to on this podcast, frustrated users don't last long. But since slow performance is subtle, it's hard for standard error monitoring tools to catch. That's why I really dig SmartBear's Insight Hub. It's an all-in-one observability solution that offers front-end performance monitoring and distributed tracing. Your developers can easily detect, fix, and prevent performance bottlenecks before they affect your users. Sounds cool, right? Don't rely anymore on frustrated user feedback; as I always say, try it for yourself. Go to smartbear.com or use our special link down below and try it for free. No credit card required.
[00:01:51] Joe Colantonio Hey Derek, welcome to The Guild.
[00:01:55] Derek Ferguson Thank you very much for having me.
[00:01:57] Joe Colantonio Awesome. When I looked at your bio on LinkedIn, you seem to have done everything. Just curious to know, how did you get to where you are now as Chief Software Officer at Fitch?
[00:02:07] Derek Ferguson So I started my career, at the risk of dating myself, back as the internet was coming into vogue, for an internet service provider here in Chicago called InterAccess, the first company in the world to roll out commercial DSL access, actually. Great opportunity straight out of college. A startup, it let me get involved in a whole bunch of things I might not otherwise have been able to get involved in that early in my career. There was a heavy partnership with Microsoft as part of that, which led to writing my first book, which was actually a study guide for one of their exams. And out of that, I wrote some more books, started doing the conference speaking thing. And then, the way I tell the story, I was just sitting at my desk one day and I got a call from a recruiter saying, today's your lucky day, Bear Stearns wants to talk with you. And my answer was, who's Bear Stearns? I'd never heard of them, because for techies, that world wasn't really in our line of sight. But I ended up working for them, then JPMorgan Chase. And I've been with Fitch for a little over five years at this point.
[00:03:11] Joe Colantonio Nice. What is Sustainable Fitch? What's that platform?
[00:03:14] Derek Ferguson Sustainable Fitch is a really interesting endeavor. It's part of Fitch Solutions, which in turn is part of Fitch Group, which is owned by the Hearst Corporation. What Sustainable Fitch is focused on is ESG ratings for companies. So as investors look at their portfolios and think about where they want to put their capital, they have another lens through which they can look at things: how does this company impact global warming? What's its governance level in terms of accounting standards? What are its social policies, and all that sort of stuff? It's giving investors one additional lens through which they can think about what they want to do with their capital.
[00:03:59] Joe Colantonio I would assume, and I could be wrong, that software like that would be heavily regulated. Am I right or am I wrong?
[00:04:05] Derek Ferguson Well, the interesting thing about the Fitch Group is we've got two portions of the organization. One portion is Fitch Solutions, which tends to be our data sales and the less regulated portion. Then you've got Fitch Ratings, which of course is a traditional ratings agency and is extremely regulated. Obviously, from a standpoint of DevOps, that has all sorts of different implications. Something like Sustainable Fitch, a new build, less regulated, we can move a lot faster in terms of the way that we roll out new technologies. Whereas on the Fitch Ratings piece, because of the regulatory impacts, you really have to be careful and thoughtful in terms of how you roll out things like continuous delivery, for example, because there are some key sign-offs that you have to give some thought to up front.
[00:04:55] Joe Colantonio Absolutely. When you were creating Sustainable Fitch, did you create it with DevOps in mind? Like, did you think, this is going to have to release quickly, so I have to make sure it's going to fit into the pipeline and be able to handle rapid releases, things like that?
[00:05:08] Derek Ferguson Yeah, so it was really interesting, because with the original Sustainable Fitch system, the initial attempt used a lot of the existing tools and technologies that were ready to hand for the tech team. And as we looked at it, we recognized that there were some definite opportunities for uplift when you think about some of the data models behind environmental data. That's kind of new for a ratings agency. We're used to dealing with financial data points, revenues, taxation, all your standard accounting stuff; when you get into environmental data, you're talking about things like carbon footprints. There was the need to iterate and evolve much more rapidly than we had before, because it wasn't just a new set of technology, it was really a new business. And that's a lot of what drove the need to adopt some of the newer technologies we find ourselves using: not just the tooling, Terraform scripts to let us do infrastructure as code, but also a whole bunch of work with the data technologies we use, to make sure that the schemas are a lot more resilient to versioning as we move forward, and that we can back out changes a lot more reliably than we could in the past.
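As one illustration of what "schemas resilient to versioning" can mean in practice, here's a minimal sketch of an additive, versioned record format: old readers ignore new fields and new readers default missing ones, so a change can be rolled back without breaking consumers. This is a generic pattern with invented field names, not a description of Fitch's actual data stack.

```python
from dataclasses import dataclass

@dataclass
class EntityRating:
    # v1 fields: standard financial data points
    entity: str
    revenue: float
    # v2 addition: environmental data, defaulted so v1 records still parse
    carbon_footprint_t: float | None = None

def parse(record: dict) -> EntityRating:
    """Tolerant reader: keeps only known fields, defaults anything missing."""
    known = EntityRating.__dataclass_fields__.keys()
    return EntityRating(**{k: v for k, v in record.items() if k in known})

# A newer producer can add fields freely; this reader neither breaks nor needs a redeploy,
# and rolling the producer back simply leaves the defaulted fields unset.
print(parse({"entity": "Acme", "revenue": 1e9, "esg_score": 72}))
```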
[00:06:34] Joe Colantonio All right, so it sounds like you're working with newer technologies as well, plugging them in. Are there any best practices you've found, so that as you're developing, you're developing with the future in mind? That, hey, down the road, we're probably going to have to plug in newer technology or change our data technology or schemas or things like that?
[00:06:50] Derek Ferguson What comes to mind is probably encapsulation, but I would say you want to make sure that you have the right level of encapsulation. I have worked in my career with organizations that have been so obsessed with not becoming locked into a vendor that they have not done what I would describe as the right math around vendor lock-in. If you're working with a technology which is on the bleeding edge and perhaps only delivered by one or two vendors, it behooves you to think about whether you're sticking to the standards or using more of the vendor-specific functionality offered by those vendors, and to steer clear of the latter. It's very easy to get stuck with a specific vendor choice when you start using vendor-specific extensions. On the other hand, I've been at the other extreme, where you're dealing with a large vendor that's been around for decades and is unlikely to go anywhere in your future, and yet there's this drive, and you see it a lot with cloud providers, right? Some organizations are like, I don't want to be specific to this cloud provider or that cloud provider. And to a certain extent, you can see where, if there's a standard for using a certain technology, stick to that. On the other hand, if you can derive value from a cloud provider, and they're established, and they have something which is unique and gives you competitive advantage, I think it's really easy to go too far and get into an area of paranoia where you don't want to adopt something just because you can't find it elsewhere. It's striking that balance, right?
[00:08:29] Joe Colantonio Absolutely. 100%. But you also mentioned Terraform. Maybe talk a little bit about why Terraform. Did you have infrastructure issues that you saw needed to be automated and managed, and you thought, hey, to make this DevOps pipeline really flow, we're going to have to do something here, and you chose Terraform?
[00:08:45] Derek Ferguson Yeah, the vision has been and remains getting to a point where developers can send their stuff to production immediately when they're done with it, and not wait for any human being to get involved and click buttons X, Y, and Z to let them go. And circling back to the compliance question that you asked earlier, that might mean something as fundamental as changing the regulatory sign-offs we're used to from being at the end of the pipeline, against the finished functionality, to being against the tests that make sure the functionality does what it says it does. With something like Terraform, it's not so much a human-being sort of thing, in terms of processes and saying, yeah, this is okay. It's just the idea that if a developer is going to be empowered to release their stuff to production as soon as they're done with it, the package that they're sending has to do everything soup to nuts to get that out there. Not just lay down the code on infrastructure that is expected to be there, but also build the infrastructure, right? That's a big part of our looking at, and embracing of, something like Terraform: to say, hey, let's empower the developers to declare, here are the changes that we need in our infrastructure in order for this new component to go out and run.
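For readers curious what that "package does everything soup to nuts" idea can look like, here's a minimal sketch of a CI step that applies a developer's bundled Terraform changes before deploying their code. It assumes the terraform CLI is on the path; the directory layout and the deploy_app.sh step are hypothetical illustrations, not Fitch's actual pipeline.

```python
import subprocess
import sys

def run(cmd: list[str], cwd: str) -> None:
    """Run a command, echoing it, and abort the pipeline if it fails."""
    print("+", " ".join(cmd))
    result = subprocess.run(cmd, cwd=cwd)
    if result.returncode != 0:
        sys.exit(result.returncode)

def deploy(package_dir: str) -> None:
    # Hypothetical layout: the Terraform the component needs ships beside its code.
    infra_dir = f"{package_dir}/infra"

    # First, build or update the infrastructure the new component expects...
    run(["terraform", "init", "-input=false"], cwd=infra_dir)
    run(["terraform", "plan", "-input=false", "-out=release.tfplan"], cwd=infra_dir)
    run(["terraform", "apply", "-input=false", "release.tfplan"], cwd=infra_dir)

    # ...then lay the application code down on it (placeholder for the real deploy step).
    run(["./deploy_app.sh"], cwd=package_dir)

if __name__ == "__main__":
    deploy(sys.argv[1])
```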
[00:10:10] Joe Colantonio How does that work? Does that help?
[00:10:11] Derek Ferguson I would say it's early days right now, but it's definitely a net positive. You get into this thing where you try to figure out, and this is true of all of DevOps, right? Is it better to take the developers and give them the training they need to get more knowledge of the operate side, or to take the operate folks and give them the knowledge they need to get into the developer side? Who owns those scripts? You see permutations of that with QA also. Do you want the sort of thing where a given technical resource is a jack of all trades who knows a little bit of everything, or do you have specialties? Thus far, I think one of the nice things about Terraform is that the deep integrations it already has with our choice of cloud platform have made for a pretty comfortable learning curve for the techies who've embraced it so far. I won't say that it's seamless, but as we were looking at our different options, that close pairing between the tool and our choice of infrastructure, and we tend to be an AWS shop primarily, has really worked out well for us.
[00:11:31] Joe Colantonio Back in the day, we had like a DevOps CI/CD team, and they were kind of a roadblock almost, because developers would try to push things in and you'd have this group that said no or yes. So it sounds like it's a fine line between the power you give developers and having some sort of oversight, I guess, from a team that handles the infrastructure piece.
[00:11:49] Derek Ferguson Exactly, exactly. And I think one of the interesting things, and maybe this is something you're gonna get to later on, but up to this point, we have really relied on pull requests and human beings from different roles to take a look and say, well, yeah, that seems like a safe thing to do, or, hmm, think twice about that. It's not to say that we're going to move away from that, but with the advent of generative AI and its capabilities, I'm really interested in the idea of AI pull request reviewers that can be given specific instruction sets to look for things that would have been very difficult to do programmatically before. Like, you've got all the security scanning software, and it knows how to look for very specific patterns in the code, and that's fine and everything. But I always think about the case where somebody has, maybe not intentionally, implemented something which fits the design, and the design fits the requirement, but the requirement was something that should never have been done in the first place. Code review is rarely gonna catch something like that, right?
[00:13:00] Joe Colantonio That's true, yep.
[00:13:01] Derek Ferguson But if you have this sort of neutral AI arbiter that looks at every pull request, and you've given it instructions, under no circumstances, for example, should private data be published out on the internet, that gives you another sort of last-chance exception that looks at everything that goes through, really understands more of what's going on holistically, and gives an independent third-party opinion of, is that what's really going on? More big-picture stuff. I'm super excited about that.
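As a thought experiment, an AI pull-request reviewer along those lines might look something like the sketch below, which feeds a diff and a plain-English policy instruction to OpenAI's chat completions API. The model name, the policy text, and the surrounding scaffolding are illustrative assumptions, not a description of Fitch's setup.

```python
from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The kind of organization-specific instruction Derek describes: a rule that's hard
# to express as a static-analysis pattern but easy to state in plain English.
POLICY = (
    "You are a code reviewer of last resort. Under no circumstances should "
    "private or customer data be published to the public internet. Flag any "
    "change that could violate this, even if it matches its stated design."
)

def review_pull_request(diff_text: str) -> str:
    """Ask the model for a holistic, policy-driven opinion on a PR diff."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whatever your org uses
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": f"Review this diff:\n\n{diff_text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("example.diff") as f:  # hypothetical diff exported by your CI job
        print(review_pull_request(f.read()))
```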
[00:13:28] Joe Colantonio All right. Yeah. So let's get into AI. It was one of the things I wanted to ask you about, since it seems like you are trying different things. AI has been around for a while, but it really took off with ChatGPT around November '22, into '23. How are you applying AI, then? Are you still in the early phase of evaluating it and seeing how it's going to help, or have you actually built it in? Like that example you gave, is that something you've already done?
[00:13:54] Derek Ferguson There are a couple of aspects of AI that I would probably flag up. One is its use for coding assistance, the sort of Copilot use case. We have embraced that. It's rather interesting, and you see the same pattern across the industry, that I think many of us thought adoption would be immediate as soon as these copilot tools arrived. I mean, I'll just say for myself, on weekends I do a lot of personal programming, no surprise there. I love it. To me, I cannot possibly get enough assistance with the stuff I'm doing. If it lets me move faster, that's great. Not everybody has that same immediate embrace of the AI coding tools, right? It can sometimes be a matter of habit, that folks just know how to do something and they want to do it the way that they want to do it. And who is this algorithm to be telling me otherwise? Maybe in some cases it's the shortcomings in the tools. To me, I don't mind working with an AI and having to ask it five times to get the right answer, because if I have to ask it five times but at the end of the day it gives me two pages of code to do what I want, well, that cost me three minutes of arguing with the computer, and I got two hours' worth of code out of it. That's a great trade-off, right? But like I said, we're not unique in that. Everything that I read across the industry reflects the same slower-than-anticipated adoption on that side of the tools. The other side, of course, is that one of the things we do as an organization is publish and sell research. Typically, our customers in the past have used standard search to find, based on keywords, the information they were interested in. Now we're bringing out this next generation of what we call genies, which allow our customers to ask questions the same way you now can with something like Google. So we have AI adoption both on the patterns-and-practices and technology side and on our product side, and, of course, I'm much more involved in the patterns and practices than in the product decisions.
[00:16:08] Joe Colantonio I think AI is very underhyped, even though people say they're sick of hearing about it. It's almost like, I don't know, you probably remember when the internet first came out, people said, ah, there's no real value, businesses aren't making money. And then all of a sudden, it is what it is now. People undervalued it, even thought it was all hype, but it became real. I think AI has taken off even quicker than that. How do you balance AI with human expertise, then, between people who maybe over-rely on it and people who say, I'm not going to use it at all?
[00:16:40] Derek Ferguson It's an interesting question. So there are two paradoxes I recently became aware of. I'd been telling folks pretty much the same story about AI for the past, I guess, 18 months, and then I happened to read, oh wait, these are paradoxes that pre-existed. These things I'd just been saying in plain English have names. The first one is what's known as Meno's paradox. Apparently this was a Plato thing, but he talked about how, if you're trying to figure out the answer to a question, the paradox is that you sort of need to know what an answer looks like in order to find the answer. But if you know what the answer looks like, why didn't you just, you already had it. I talk about this a lot with the adoption of AI, and perhaps managing expectations and also managing fears, because technologists sort of fall into two camps. There are the folks who underestimate it and aren't worried at all. And then there are the folks who, I hesitate to say overestimate it, but have what I would regard as unwarranted fear. Because, in alignment with this paradox, I know when I do my weekend coding and I'm using ChatGPT left, right, and center, it accelerates me tremendously. But if I think just for a second about the guidance that I'm giving the tool, and what a non-technologist would have to do in order to get the same results out of it, it's not there. And I don't think it will ever quite get there. To the extent that somebody is willing to go with something very opinionated, like template development, if I want a website and I don't want any customizations from it, you can get that. That's easy, but you can get it without AI today. I think, for non-technologists, AI is going to accelerate that and make those sorts of things a lot better. But then you get into another set of phenomena, covered by something called the Jevons paradox, which is that the more capacity you have, the more demand comes out of that capacity. The way I see this playing out is, AI is gonna give folks the ability to create so much more with so much less time than they had before, but that's just the start of all of these new systems. We'll see software everywhere, everywhere, everywhere, say a hundred times more software than we have today. But then it needs customization. It needs maintenance. It gets into the integrations that weren't considered and can't be known by an AI in advance, which is going to drive the need for, I think, more software engineers. I tend to be an optimist in general, but as for adoption and what lies in the future for AI tools, I can see it creating a lot more software than we've ever had before, and along with that, I think, will come increased demand for folks who can take it to that next level. And you mentioned the internet. Something else I always tell folks, which really dates me, is that my senior project when I graduated from university was creating a website. But I'm not talking a fancy interactive website here, I'm talking static HTML pages. I think maybe we had the scrolling banner at the top or the dancing baby. Now we accept that, well, gosh, you should be able to do that in half a day at most, right? It's going to be the same thing with, build a mobile application with these ten functions. Today it seems impossible to us that that could be a half-day task. It will get to that point, but as a result, our businesses are just going to ask ten times more of us, right? Maybe twenty times. Maybe that's more than you were asking, I apologize.
[00:20:33] Joe Colantonio No, I love it. A lot of great points there. I love the paradoxes you brought up. I guess one thing is, I was pretty optimistic as well, thinking it wouldn't impact jobs. But then, like last week, I saw Zuckerberg saying he's actually going to replace mid-level engineers with AI. I don't know if it's all bluster, if it is real or not. But any thoughts on that? Because to me it's almost a paradox as well, where, in order to get the most from AI, you need to be a good coder, a good tester, a good DevOps person, because it's spinning things out, and how do you know what it's spinning out is what you really need? It works best if you know coding and things like that. So I don't know about being completely replaced, if that's even possible. But maybe I was wrong, because there it is, that news story.
[00:21:18] Derek Ferguson Yeah, it's back to Meno's paradox, right? In order to find a solution, you need to know what good looks like. I can't comment on specific statements by any specific company, but I think in general, just as a technologist, it seems to me like pundits get a lot of value out of making sensationalistic statements. There's a certain amount of bravado. You have to take all of that sort of stuff with a grain of salt, because the person who comes out and says, hey, I'd like to talk about how AI is gonna make some slight changes in some of the stuff we do some of the time, isn't gonna get nearly as much media coverage as the person who says, yeah, your job's gonna be gone in two years. That's the media thing: if it bleeds, it leads.
[00:22:17] Joe Colantonio Right, yes, for sure.
[00:22:18] Derek Ferguson Yeah. So there's a lot of that sort of stuff going on, which isn't to underplay the potentially civilization-changing capabilities of AI, particularly once quantum computing gets to a point where you put those two together and can theoretically find the answer to anything in a trivial amount of time. But from my standpoint, I fall back on that Jevons paradox: the more capacity you give folks, the more demand comes out of it. Apparently one of the examples that drove that home was fuel-efficient cars. People were sick of paying so much for gasoline, so they were driving less, and then fuel-efficient cars came out, and the thinking was that would really help folks with their gas prices. But people just said, oh well, if I had a hundred dollars or whatever to spend, now I can use that hundred dollars and go twice as far, because my car gets twice the mileage. Same thing, I think, with AI. It's just going to drive a lot of demand.
[00:23:20] Joe Colantonio Interesting. I totally agree. I'm from the testing side, and I think the more code you have, the more it's going to need to be tested. Whether it's AI-assisted or not, that's fine, but I don't think replacement is optimal. But like you said, with the quantum chips, I think NVIDIA came out with a quantum chip. If it does replace coders and testers, then I assume it could replace anything. Then we're all going to be in the same boat, reading Plato, like you said, and probably just staring at our belly buttons and pondering things, I guess.
[00:23:49] Derek Ferguson I think the other thing is, I just find ChatGPT, or even Claude, any of the different models, so super interesting. You can do just so much interesting stuff with them. There is that bump in quality of life where, like with the internet, it's easy to forget how much better things are once they start changing, and to forget what things used to be like. My daughter was talking about going to the library a few weeks ago. Oh, wait, yeah, that's something we used to have to do to get access to any information. I don't know about you, but I couldn't live without generative AI assistance at this point in time. I mean, it is my go-to for every answer.
[00:24:34] Joe Colantonio I use it for everything, every aspect of my business. It's crazy. I mean, I don't necessarily trust it all the way, but it's like a complete assistant when you work alone. It helps to bounce ideas off of and go, that's a good idea, that's a bad idea. I use it every day. I think it's awesome.
[00:24:52] Derek Ferguson Yeah, me too.
[00:24:52] Joe Colantonio I guess, I kind of got off on a different path there. Let's go back to DevOps really quick. In the pre-show, you were talking about DORA metrics, and a lot of people are always looking for metrics and indicators of how their DevOps efforts are going. Can you talk a little bit about the DORA metrics? We can wrap it all up with maybe how they can assist people listening with their DevOps efforts.
[00:25:13] Derek Ferguson If you are going to embrace the path of continuous delivery, and I sort of see things like Agile, microservices architecture, the DORA metrics, and cloud adoption as pieces that are all intended to fit together, because unless you have, I won't necessarily say a microservices architecture, but a decently decoupled architecture, it's gonna be difficult for you to have teams that operate independently. And unless you have teams that operate independently, you're gonna find it very difficult to get that sort of constant releasing into production, because everything remains this, we all have to hold hands and jump at the same time to get anything to production, because everything's intermingled, right? The DORA metrics have presented, not just to us, but to many organizations in the industry, a great way to empiricize our progress in the areas where we want to move forward with continuous delivery, I would say. And predictably, what we see when we look at those metrics is that the applications we have which are newer and more decoupled find it a lot easier to adopt that working style where they're pushing things out constantly. When you look at something like lead time to release, you see that stuff shipping left, right, and center. The stuff with older technology and the older way of doing things finds it a lot harder to do those constant releases, because every piece of technology that gets rolled out within those applications is linked to everything else, and it's harder to say, when you do this, is it going to affect ten different things? One thing that I did find surprising as we were rolling out the DORA metrics, and it was a real discovery for us when we got serious about tracking major incidents, is that two of the four DORA metrics are really about your incidents: how often do things go down, and how long does it take you to recover when they do go down? If you had asked me prior to our adopting and studying those, I would have thought that when things break, it was typically a result of code that we were pushing. Like, you know, hey everyone, be more careful with the code that you're pushing, we've had a bunch of outages recently, or something like that. In actuality, what you find, and what a lot of organizations find when they do stuff like this, is that your tech debt can be a liability just sitting there, because other things change in your environment, operating systems upgrade or whatnot. A lot of the outages that you find, a lot of the expensive outages, are a result of that tech debt more often than of code pushes, because when you're pushing code, and you'll know this coming from a testing background, that's where your eyes have been. You've done the testing on it, that's what your focus is, that's why you're being careful. You actually stand a pretty good chance of success with that stuff going out the door. What DORA showed us was, actually, you know what? We need to be looking more at the stuff that hasn't changed in X amount of time, because unless you keep some of that stuff up to date, it can become a liability to your uptime.
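To make the four DORA metrics concrete, here's a small self-contained sketch of how a team might compute them from deployment and incident records. The record shapes and numbers are invented for illustration; real pipelines would pull these from CI and incident-management systems.

```python
from datetime import datetime
from statistics import mean

# Invented record shape: (commit_time, deploy_time, caused_incident)
deployments = [
    (datetime(2025, 1, 6, 9), datetime(2025, 1, 6, 15), False),
    (datetime(2025, 1, 7, 10), datetime(2025, 1, 8, 11), True),
    (datetime(2025, 1, 9, 14), datetime(2025, 1, 9, 16), False),
]
# (outage_start, outage_end) for the incidents tied to those failures
incidents = [(datetime(2025, 1, 8, 11), datetime(2025, 1, 8, 13))]

# 1. Deployment frequency: deploys per week over the observed window.
window_days = (deployments[-1][1] - deployments[0][1]).days or 1
frequency = len(deployments) / (window_days / 7)

# 2. Lead time for changes: average commit-to-production delay, in hours.
lead_time = mean((d - c).total_seconds() / 3600 for c, d, _ in deployments)

# 3. Change failure rate: share of deploys that caused an incident.
failure_rate = sum(failed for *_, failed in deployments) / len(deployments)

# 4. Mean time to recovery: average outage duration, in hours.
mttr = mean((end - start).total_seconds() / 3600 for start, end in incidents)

print(f"Deploys/week: {frequency:.1f}, lead time: {lead_time:.1f}h, "
      f"failure rate: {failure_rate:.0%}, MTTR: {mttr:.1f}h")
```

Note how the last two metrics are exactly the incident-centric ones Derek calls out: they surface tech-debt breakage that has nothing to do with the code you just pushed.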
[00:28:34] Joe Colantonio That's a good point. Also, I would assume your software relies on a lot of third-party services or consumes others' data. Maybe I'm wrong. So it's almost outside your control when you're coding, I'm making assumptions here. And if that's the case, how do you test for or anticipate issues with the non-deterministic kind of dependencies that a lot of modern software relies on nowadays?
[00:28:58] Derek Ferguson It's an excellent point, and I would say you would be right, were it not for the fact that we are sort of data experts; that is an aspect of our business that has been the same for many, many years. Data cleaning and data structuring and all that sort of stuff, we've got that down, and we've had it for a while. When we think about the testing of it, the one area of acumen that I would really like to get to is this idea of point-in-time testing and more unit testing, because no matter how hard we try, getting every possible test case into our data is not always feasible. Trying to push more back to the development side and saying, you can do a lot more on the unit-testing side with your code to feed in and mock up different scenarios than we're ever likely to get into the database, I think is almost more important than trying to get every different permutation of data. Something that's interesting: I've worked in both. In finance, there's fixed income and there's equities. In equities, there are test symbols that are recognized out in the marketplace where you can send a test trade, like, I want to buy a thousand shares for $50 of, I think it's a symbol like XZVBT or something like that.
[00:30:43] Joe Colantonio Okay, cool.
[00:30:44] Derek Ferguson That doesn't exist, and the markets have agreed that it will just be a test symbol, and they'll give you back fills. For something like synthetic transactions running in production, that's great, and I think most of the major equities trading organizations will have something that, every minute, goes through and buys 500 shares of this test symbol, sells 500 shares of it, and just keeps doing it. For whatever reason, it doesn't seem like that has caught on as much in the fixed-income space. And that's a real irritant, and something where I would like to see more of an industry-wide change to get that same acumen, because testing in QA and UAT and all that sort of stuff is great, but ultimately what you want is, in production, that constant monitoring and assurance that, hey, everything is still running, particularly when you're worried about breakages caused by tech debt as opposed to changes. Because if you know that something might break just by sitting there, you want something that's constantly going through and doing that test loop. It's much harder to arrange, I would say, in fixed income than it is in global equities. Make of that what you will.
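In spirit, the equities-style synthetic check Derek describes boils down to a loop like the hypothetical sketch below: a round trip on a recognized test symbol once a minute, alerting when a leg fails. The place_order function and the symbol are stand-ins, since, as he notes, fixed income has no agreed equivalent.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("synthetic-monitor")

TEST_SYMBOL = "XZVBT"  # stand-in for a market-recognized test symbol

def place_order(side: str, symbol: str, qty: int) -> bool:
    """Stand-in for a real order-entry API call; always 'fills' in this sketch.
    In practice, wire this to your trading system and return whether it filled."""
    return True

def heartbeat() -> None:
    """One round trip: buy then sell the test symbol, alerting if either leg fails."""
    for side in ("buy", "sell"):
        if not place_order(side, TEST_SYMBOL, 500):
            log.error("synthetic %s of %s failed; page on-call", side, TEST_SYMBOL)
            return
    log.info("synthetic round trip OK")

if __name__ == "__main__":
    while True:          # the constant production test loop
        heartbeat()
        time.sleep(60)   # once a minute, as in the equities example
```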
[00:31:59] Joe Colantonio Yeah, great, great point. I've been seeing a lot of applications of AI in production as well, to listen to what users are really doing with the application. When you shift left and you're telling developers to do unit tests, they're not just testing things to test things; they're testing the things that you know your users are really doing. And if you don't have cases around those, then you probably want to handle that for sure.
[00:32:19] Derek Ferguson Usage analytics is huge, and we talk a lot lately about DORA Plus, which is this concept that, when you've got the DORA metrics, you're measuring what we in software engineering would consider to be goodness, but DORA Plus extends that to the business and asks, are we really building the stuff that our customers want? And that sort of usage analytics stuff that you talk about is super useful there, to say, we've added these three buttons, and this button gets clicked 10,000 times a month, but the second button hasn't been clicked by anyone. Why did we build this? Stuff like that. Super useful.
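As a trivial illustration of the kind of usage analytics that feeds that "why did we build this?" conversation, here's a sketch that tallies clicks per feature from an event log and lists shipped features nobody has touched. The event shape and feature names are invented.

```python
from collections import Counter

# Invented event shape: one record per UI interaction.
events = [
    {"feature": "export-button", "user": "u1"},
    {"feature": "export-button", "user": "u2"},
    {"feature": "share-button", "user": "u1"},
]

clicks = Counter(e["feature"] for e in events)

# Features that shipped but were never clicked: the candidates to question.
shipped = {"export-button", "share-button", "compare-button"}
unused = shipped - clicks.keys()

print("clicks per feature:", dict(clicks))
print("never clicked:", unused)
```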
[00:32:59] Joe Colantonio Absolutely. Okay Derek, before you go, is there one piece of actionable advice you can give to someone to help them with the DevOps pipeline, CI/CD efforts, and what's the best way to find or contact you?
[00:33:09] Derek Ferguson I would say, and it will seem perhaps not as relevant specifically to DevOps but to our topic earlier about AI, this is what I'm telling everyone right now: dig in and understand more of what's under the covers with AI. It's the same as every technology that has ever come along. The more you understand how to build the tools that you're using, as opposed to just being the end user of them, the better you're positioning yourself, in the labor market and competitively. Someone said, your job won't be taken by AI; it will be taken by somebody who knows how to use AI better than you do. I think that's the advice for 2025 in general: understand the internals, and that will allow you to be a better user and apply it in more places. As far as my contact information goes, you're welcome to connect with me on LinkedIn, send me a message or whatever. And I would be remiss if I didn't mention my employer. We just found out today that Fitch has been listed for the third year in a row by Built In as a best place to work in technology in Chicago and New York. So we're always on the lookout for great talent, and definitely ping me if you're looking.
[00:34:27] Joe Colantonio Great point. We'll definitely have a link to that, because I know a lot of people are looking for new opportunities. So definitely sounds like a great company, great technology, and a great organization and people to work with. So thank you, Derek, for that.
[00:34:39] Derek Ferguson Thank you.
[00:34:40] All right, before we wrap it up, remember, frustrated users quit apps. Don't rely on bad app store reviews. Use SmartBear's Insight Hub to catch, fix, and prevent performance bottlenecks and crashes from affecting your users. Go to SmartBear.com or use the link down below, and try it free for 14 days, no credit card required.
[00:35:01] And for links to everything of value we've covered in this DevOps ToolChain show, head on over to testguild.com/p181. That's it for this episode of the DevOps ToolChain Show. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers.
[00:35:23] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com where you become part of our elite circle driving innovation, software testing, and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.
[00:36:06] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.