Shift-left DevOps Testing with Vishnu Nair and Geetanjali Gallewale

By Test Guild

About this DevOps Toolchain Episode:

Welcome to another episode of the DevOps Toolchain podcast! In today's episode titled “Shift Left DevOps Testing,” we will dive into shifting testing to the left and its significance in the DevOps and Agile methodologies. Our guest, Vishnu Nair, is a QA advocate with extensive experience designing and writing test cases across various technologies. Alongside him is Geetanjali, a seasoned technology lead with a wealth of experience in automation testing. Both guests have hands-on experience implementing DevOps and DevOps testing in a complex enterprise setting. Together, they will share real-world experiences and insights on implementing DevOps successfully and elevating your testing practices. Join us as we explore the different aspects of shift-left DevOps testing and learn how it can enhance your development process. Take advantage of this episode, which is packed with valuable insights and practical tips!

TestGuild DevOps Toolchain Exclusive Sponsor

Get real-time data on real-user experiences – really.

Latency is the silent killer of apps. It’s frustrating for the user and under the radar for you. It’s easily overlooked by standard error monitoring. But now BugSnag, one of the best production visibility solutions in the industry, has its own performance monitoring feature: Real User Monitoring. It detects and reports real-user performance data – in real time – so you can rapidly identify lags. Plus, it gives you the context to fix them. Try out BugSnag for free today. No credit card required.

About Vishnu Nair

Vishnu Nair is a QA advocate, currently working as a QA Architect at Deutsche Bank. Vishnu has over a decade of experience in designing, curating, and writing test cases across different technologies, including Big Data, UI, and backend services. He also has expertise in Performance Engineering, Resiliency Testing, and Security Testing, as well as experience with QA tools and frameworks across different functional domains and technology stacks. He has previously been a member of a Testing Center of Excellence, where he designed and scaled test automation frameworks and established QA best practices and standards. In addition to his expertise in automated testing, he is delighted to work with teams to find innovative ways to solve complex software problems.

Connect with Vishnu Nair

About Geetanjali Gallewale

Geetanjali Gallewale is a hands-on, seasoned technology lead with a core development and automation testing background at leading global banks. She has a wealth of experience driving strategic change and organizational transformation, executing complex client engagements, and delivering enterprise-scale test automation programs for leading global banks and IT organizations.

She is an engineering leader with vast experience in Agile software development, cloud, DevOps, and test automation, delivering highly effective and creative solutions to business and technology challenges. She has personally built UI, middleware, and backend automation frameworks multiple times.

Recognized for exceptional problem-solving abilities and a collaborative approach, she partners effectively with cross-functional teams across different geographies to deliver robust test automation solutions. She also has an exceptional academic record, ranking second across all branches at her university in both her undergraduate and postgraduate studies.

Connect with Geetanjali Gallewale

Rate and Review TestGuild DevOps Toolchain Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:01] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability for some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast and my goal is to help you create DevOps toolchain awesomeness.

[00:00:19] Joe Colantonio Hey, it's Joe, and welcome to another episode of the Test Guild DevOps Toolchain Podcast. Today, we'll be talking all about shift-left DevOps testing. If you don't know, Vishnu is a QA advocate currently working as a QA Architect for a very large bank, and he has over a decade of experience in things like designing, curating, and writing test cases across multiple technologies, including big data, UI, and backend services. He also has expertise in performance engineering, resilience testing, and security testing, and he has a lot of experience with other QA tools and frameworks across different functional domains and technology stacks. Really excited to have him back on the show. Joining him, we have Geetanjali, who is a seasoned technology lead with a lot of experience in development and automation testing. She has a wealth of experience driving strategic change and organizational transformation, executing complex client engagements, and delivering enterprise-scale test automation programs across leading global banks and IT organizations. Really excited to have them both on the show because they both have hands-on experience implementing DevOps and DevOps testing at a really complex enterprise company. If you want to learn from real-world experience how to really implement DevOps and how to really up your testing in this area, you don't want to miss this episode. Check it out.

[00:01:38] Hey, if your app is slow, it could be worse than an error. It could be frustrating. And in my experience, frustrated users don't last long. But since slow performance is silent, it's hard for standard error monitoring tools to catch. That's why BugSnag, one of the best production visibility solutions in the industry, has a way to automatically watch for these issues: real user monitoring. It detects and reports real user performance data in real time, so you can quickly identify lags. Plus, get the context of where the lags are and how to fix them. Don't rely on frustrated user feedback. Find out for yourself. Go to BugSnag.com and click on the free trial button. No credit card required. Support the show and check them out.

[00:02:25] Joe Colantonio Hey, Vishnu and Geetanjali. Welcome to the Guild.

[00:02:30] Vishnu Nair Hey. Hi.

[00:02:30] Geetanjali Gallewale Hi, Joe. Thanks.

[00:02:32] Joe Colantonio Great to have you. I really love having people on with real experience implementing automation and DevOps in the real world, so I'm really excited to dive into this topic. I guess before we get into it, though: people may have heard of shift-left testing, but maybe not necessarily shift-left DevOps testing. Is it the same? Is it different? How would you explain what shift-left DevOps testing is?

[00:02:53] Geetanjali Gallewale So if you see, Joe, shift left is a wide concept. Across industries, shift left is used with different nomenclature - whether you look at the auto industry or any other, everyone talks about shift left. But when it comes to the IT industry, shift left usually means shifting testing to the left, so whenever we talk about it, everything gets correlated with testing. With DevOps or Agile methodology as well, high-performing teams need it as a guideline and principle, but there are still challenges in making it a habit. And every time, shift left comes in as what we might call a savior - people say shift left is good, let's shift testing to the left. So yes, most of the time we say shift left is correlated with testing, but I would say it's not only testing; there are many things that, utilized properly with DevOps and Agile, make shift left work well. Testing is just one part of it, I would say.

[00:03:58] Joe Colantonio Absolutely. I have this debate all the time. A lot of people say testing has nothing to do with DevOps, but I think it clearly does. I think a lot of people just focus on functional automation as testing; they forget about all these other types of automation activities and testing activities. So maybe you can give us a few examples of other, more DevOps-related activities that you can shift left, and then we can dive into each one of those.

[00:04:21] Geetanjali Gallewale When we talk about DevOps and Agile, most of the time everyone talks about the feedback loop - continuous feedback. How do you get that continuous feedback? Most of the time it gets correlated with continuous testing: that's how you get the feedback, and for that, everyone says automation has to be there, and then you have a feedback loop. With DevOps or Agile, it also comes down to faster delivery time, and usually the conversation is only about the functional testing part. But when we go with shift left, we have to go beyond functional testing and shift left not only functional testing but also integration testing, regression, and some basic performance testing for stability before it goes to production.

[00:05:13] Vishnu Nair Basically, in our project the main idea with the shift-left approach was to shift everything - the testing part as well as the DevOps part - to the left side of development, so they go hand in hand. We are following a V-type model where the testing happens along with the requirement analysis itself. If we talk about Agile principles and follow the Agile ceremonies, as soon as the PI plannings are happening, the testing mentality should also be in place. During requirement analysis, it's not only the testers - everyone's mindset should be focused on quality. Everyone should be aware of what testing is happening, and the main focus should be on the quality of the product as well as the development.

[00:06:07] Joe Colantonio I found it very difficult - maybe it was the company I worked for - getting everyone to buy into testing in general. How do you get people to embrace this DevOps approach, where more and more it includes developers and everyone else on the team? How do you get your culture, your team, to really buy into this whole shift-left type of approach?

[00:06:26] Geetanjali Gallewale Yeah, so it's not that easy, Joe, because it needs a mindset change. And when we're talking about the mindset change, it's not only mindset - there is a skill gap as well. When we're talking about shift left in an Agile team, everyone is a team member; we don't have silos like QA or Dev, everyone should start working on it. So there are different challenges when you start bringing these concepts into a team, and we faced similar challenges when we started with the shift-left approach in ours. Challenge number one is the mindset. Initially, even though we had DevOps and Agile, each sprint team was working in what I'd call a mini-Agile model that still looked like an SDLC model: development is done, then the tester picks up the story, then the testing happens, and so on. As Vishnu mentioned, we asked the team to start from the requirement phase itself, so everyone is on board. The second challenge was breaking that mindset, and to break it we needed to build QA skills in each and every person. So we started with knowledge sessions on how a QA mindset can be developed - how developers, instead of only writing the code, can think about breaking the code itself. That is the area where we worked. Apart from that, we worked on the technical side, and Vishnu can give more detail on how we got testers and developers involved in testing using the programming itself.

[00:08:12] Vishnu Nair The main functional part was to train the developers - changing the mindset so that everyone on the team has a QA mindset and can break the code. That was the functional part. On the technical part, what we incorporated was a framework focused less on writing framework code and more on writing good tests for the code. This project is elaborate - we have big data, ETL, backend, UI, everything is in place - it has a huge footprint, very big teams are working on it, and a lot of QAs are involved. So we needed a framework we could design so that the people who are testing - and when I say testers, that's a generic term, anyone can test, the Dev team or the tester - focus only on writing the test cases, rather than depending on the framework itself or writing any code on the framework side, so that we get higher productivity on the feedback loop. For this, we used the Karate framework, a DSL where we write with keywords to build the test code. We have the same framework being used for UI, API, backend, ETL, as well as big data automation. And since we only have to take care of the keywords, if we place them correctly, the test code itself gets developed. In that fashion, we used the framework to generate automation scripts faster. This way we were able to start writing the test cases on the pre-development side itself: as soon as the story is in place and the developers are working on the development side, the testers can write the test cases in parallel. Rather than focusing on the framework and how the execution will happen, they now focus on writing the test scripts only.

[00:10:18] Geetanjali Gallewale I want to add that the framework is designed so there is very little to learn - a low learning curve. That is another process change we applied: if anyone wants to onboard onto it, he or she doesn't have to spend much time learning and can start immediately.

[00:10:35] Joe Colantonio It's a good point. Especially when you're dealing with developers - oh, the developers, they should be able to pick up Selenium or something and code it. In my experience, that is not the case. So even using a keyword-driven framework - a lot of people think it's just for business users, but it sounds like it's also a driver for developers and testers, so you're speaking the same language and not getting caught up in the technical geekery of testing, almost.

[00:11:01] Vishnu Nair Right. The Karate keywords really help us here. We are using the keywords so that people can understand what exactly is happening with the server. Whoever is reading the test code is aware of what exactly is being tested. So everyone is involved - the Dev team, the stakeholders, the project managers, everyone on the testing side - and they know exactly what's going on, because we are following the acceptance criteria. Based on the acceptance criteria, we write the test case, so everyone knows what exactly is being developed and what exactly is being tested.
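To make the keyword-driven approach concrete, here is a minimal sketch of what a Karate feature file along the lines Vishnu describes might look like. The endpoint, locators, and field names below are hypothetical, not taken from the project discussed in the episode; the point is that the scenarios read like the acceptance criteria themselves, whether the step under test is an API call or a UI interaction.

```gherkin
Feature: alert service - sketched from hypothetical acceptance criteria

Background:
  # the base URL would normally come from karate-config.js per environment
  * url 'https://example.org/api'

Scenario: an ingested trade produces an alert (API)
  Given path 'alerts'
  And param tradeId = 'T-1001'
  When method get
  Then status 200
  And match response[0] == { id: '#string', tradeId: 'T-1001', severity: '#string' }

Scenario: the alert is visible on the dashboard (UI, same framework)
  * configure driver = { type: 'chrome' }
  Given driver 'https://example.org/dashboard'
  And input('#search', 'T-1001')
  When click('#search-button')
  Then waitFor('#results-table')
  And match text('#results-count') contains '1'
```

Because the steps are plain keywords, anyone reviewing the story - developer, tester, or product owner - can check the scenario against the acceptance criteria without knowing the framework internals.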

[00:11:35] Joe Colantonio All right. Like I said, you work for a huge bank, a big, big organization. How big is this group? Do you have an architect on top? Do you actually lead the effort with each of the teams? Do you have to know who has training and what they need to know? How does it work when you have a new hire - what do they go through? How do you get everyone involved, how big is the team, and how do you make sure everyone's on board and using all the right processes you have in place?

[00:12:02] Geetanjali Gallewale Yes, ours is a big setup. We have around 100-plus team members involved in our project, which is in the surveillance space. And when we're talking about surveillance, it's a huge volume of data we're getting, which needs to be processed and turned into alerts, and after getting the alerts, reported to the business for further action. It's an enterprise system. As you mentioned, we have around six sprint teams working on it, each on different features. On top of that we have an architecture layer, on top of that a business layer, and on top of that the management layer. And when I talk about the business layer or the architect level, it's across the globe, because for a big bank, surveillance has a presence across the world - we have branch data coming in from everywhere, so the data is huge. And when we're talking about data, it's unstructured data coming from the bank in different languages, because we're surveilling everything that flows: audio and video recordings as well as email, chat, and social network data, and even data from regulatory systems. So for us, the backend means ETL jobs and databases, as well as the ... where everything is processed and structured, and then it comes to the UI. If you look at our project, 70 to 80% is backend and just 20 to 30% is on the UI side, and we needed a framework where everything is covered. That's where, as Vishnu clearly mentioned, the QA skill set matters - one person can work everywhere, on backend testing as well as ETL - and the same goes for the development team. We need to onboard everyone on every side, and yes, as you mentioned, from the training perspective, whenever a new person is onboarded we have to give them an idea about everything.

[00:14:10] Vishnu Nair To add to that, we run regular sessions - monthly, we have one session across the entire team - and whoever is newly joining, we make sure they join it. We also have a feedback loop: we invite people who have worked with shift left previously, and they share what their experience with the shift-left approach was like. That builds confidence in the newly onboarded members about how well it is working right now and how they can contribute to it.

[00:14:46] Joe Colantonio I guess I don't know why I'm fascinated by this, but once again, I'm going back to your banking company. A lot of times when people take on Agile and shift left - all the latest and greatest - it seems like they have to be more like Google or something, where you have maybe newer technology. But I used to work for an insurance company, I used to work for a health care company, and they have older technology, and people usually say, oh, we can't do CI/CD, we can't do DevOps, because we have all this legacy software, we have all this regulation. Working for a banking company, you probably have all the regulations to work around as well. ETL is not simple. You're not doing simple browser-based, front-end-type automation. Did you have any challenges to overcome in that respect, where people were saying, well, wait a minute, we can't move this fast or we can't shift left because we have to follow these different processes, because we're regulated, or we have to worry about being audited, or anything like that?

[00:15:44] Vishnu Nair Right, we had these challenges, since we have audits as well, and we have certain limitations on adopting a new kind of framework or any new open source. Being on the regulated side, we have to make sure that whatever we are using is on stable versions and is secure. So yes, we had those initial challenges. How did we overcome them? We actually have a very good DevOps team and architect team, and we went back to where we started: what challenges do we have, and what solutions are already available in the bank? Using those, we accommodated a few changes ourselves. The DevOps teams helped us set up the CI/CD pipelines, and they still provide support for whatever the teams need, whether it's on the UNIX side or the database side. The DevOps and architect teams definitely give us good feedback and help so that we are able to achieve this.

[00:16:52] Joe Colantonio You both mentioned feedback multiple times. People talk about shift left, and a lot of times they don't realize that when you shift left, you're also getting information from production after you release and building that back into the system from the beginning. So how do you measure how you're doing? Do you have any metrics or KPIs you use to realize, okay, we did this release, it went well or it didn't go so well, let's learn from it and build that in when we start from requirements again with our next sprint or next feature?

[00:17:23] Geetanjali Gallewale Initially we used to do what you mentioned, but now we have controlled environments available to us for validation. When I talk about the controlled environment, we get a prod-like structure: it's a prod-like replica where we run the feedback loop and where we get production-grade data - production-grade manufactured data - where we can accommodate large volumes as well, so we can do performance testing and get continuous feedback. From the regulatory perspective, yes, there are still restrictions, but with the controlled environment we can have the feedback loops available.

[00:18:02] Joe Colantonio All right. So how do you create a realistic controlled environment? Because once again, when I was in health care, we shipped to these different hospitals, their environments were completely different, or it was too expensive. What are you using to build those environments to make sure they're accurate?

[00:18:17] Geetanjali Gallewale It's a replica of production - it is like a production environment, only with restricted access. When we talk about companies like Google or Amazon, they have a replica that everyone, including the whole team, can access and use. In our case, it's a controlled environment where only limited users have access, from the architects down - a small team who can execute and replicate there. In the integration environment, everyone can do that. It has a similar structure; from the budget perspective, yes, we get the budget to replicate it and have the production-like environment, but it comes with restricted access and limitations. A prod replica, you could say.

[00:19:02] Vishnu Nair To add to that, we have a dedicated team doing the masking and all of that so the production data can be replicated. We are not keeping all of the production data - it's just a small portion of production-like data that we can mimic in our environment, run our automation test cases against, and get better feedback from.

[00:19:31] Joe Colantonio Because that was my follow-up - the data, especially in banking. How do you get realistic data that is like production when you obviously can't use real customer data, which is regulated in banks, other than masking? Is there any more you can share about how that is done?

[00:19:47] Geetanjali Gallewale We have separate CDO teams available to us - the central data organization - which provides us data in all the environments: manufactured data whenever it is required in the lower environments, and masked data when it comes to the controlled environment.
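Neither guest goes into the masking mechanics, but as a rough illustration of the idea - replacing identifying fields while keeping the shape of production-like records - a sketch might look like the following. The field names and rules are entirely hypothetical and are not the bank's actual tooling.

```scala
object MaskingSketch {

  // Hide all but the last four characters of an account identifier.
  def maskAccount(account: String): String =
    "*" * math.max(account.length - 4, 0) + account.takeRight(4)

  // Replace the local part of an email address with a fixed token,
  // keeping the domain so downstream parsing still looks realistic.
  def maskEmail(email: String): String =
    email.split("@", 2) match {
      case Array(_, domain) => s"masked@$domain"
      case _                => "masked@invalid.example"
    }

  def main(args: Array[String]): Unit = {
    println(maskAccount("DE89370400440532013000")) // masks all but "3000"
    println(maskEmail("trader@example.com"))       // masked@example.com
  }
}
```

In practice this kind of logic typically lives with the data-provisioning (CDO) team rather than inside the test framework itself.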

[00:20:05] Joe Colantonio Very cool. Now, we talked about Karate. Are there any other tools or things that help enable you to shift left with DevOps that you find helpful?

[00:20:17] Vishnu Nair Karate is the main tool we are using for the automation, but for performance testing, we have huge data and we have to do some infrastructure testing as well as performance testing on the API side, which the UI also relies on. For performance testing we definitely rely on JMeter, where we write custom jobs for the infrastructure validation. Infrastructure validation is not something we do on a regular basis; it's based on infrastructure changes, if any - for any upgrade of the infrastructure, we run the performance scripts on the infrastructure side. For the UI and API side we have LoadRunner, which we use to execute and mimic the user experience. And at the sprint level, we are also executing our performance scripts using Karate itself: we write the script in the Scala format, and Gatling - through Karate's built-in integration - helps us test performance within the sprint timeframe.

[00:21:20] Joe Colantonio Who handles the performance and resilience testing? Is it just testers, or do you also expect the developers to contribute? Is it the whole team, as just part of a feature - okay, in the sprint, this is part of the definition of done, and it doesn't matter who does it, but it needs to get done?

[00:21:35] Vishnu Nair In the initial phase, when we were starting, before the shift left, the testers were the only part of the team doing that. But now, since we are moving with the shift left, anyone who has worked on the scripting can do it, because once the feature has been designed in Karate, you can just call that feature from the Scala side and run the performance test. So anyone who is working on that feature, whether it's a Dev member or a QA member, can execute the performance test cases.
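For readers wondering what "calling the feature from Scala" looks like, here is a minimal sketch based on the public Karate-Gatling integration. The feature path and load profile are made up for illustration; the key point is that the same functional feature file is reused as the performance scenario.

```scala
import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._
import scala.concurrent.duration._

class AlertsPerfSimulation extends Simulation {

  // Karate-Gatling protocol; URL patterns could be added here for reporting.
  val protocol = karateProtocol()

  // Reuse the functional Karate feature as the load-test scenario.
  val alerts = scenario("alerts api")
    .exec(karateFeature("classpath:alerts/alerts.feature"))

  // Hypothetical load shape: ramp to 10 concurrent users over 30 seconds.
  setUp(
    alerts.inject(rampUsers(10).during(30.seconds)).protocols(protocol)
  )
}
```

The same scenario that gives functional feedback in the sprint can then be pushed through a small load profile, which is the plug-and-play reuse between functional and performance testing that Geetanjali recommends later in the conversation.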

[00:22:07] Joe Colantonio Cool. All right. If someone listens to this and thinks, this sounds great, I work for a banking organization, I want to do this as well - is there any caution you would give them, or any advice on how to make it a smoother transition than some of the things you had to deal with when you first rolled this out yourselves?

[00:22:25] Geetanjali Gallewale Yeah. A couple of things I would highlight. First, whenever you are starting from scratch, don't think in silos where the developer will only do development. Build a T-shaped team where the developer has at least some knowledge about testing and about the architecture, and so on. And when it comes to the tester, they should have a development background as well, so he or she is able to contribute on the dev side whenever required, at least from the analysis side. A T-shaped team is a must when you go with shift left - that is the key. Second, go for a framework and automation feedback loop that can be reused from functional testing to performance testing without major changes. Whatever changes are required when moving from functional testing to integration testing, it should be just plug and play from the functional test script - that gives a big advantage for shift left. Third and last, don't focus only on build automation. When you're talking about shift left, continuous testing is a must, and that feedback loop from continuous testing is a must. When teams are developing from scratch, these three points should be looked into.

[00:23:50] Vishnu Nair My giveaway would be that performance should also be included in this testing, mainly from the infrastructure side, because infrastructure is a costly environment and we invest so much in it. You don't want to get everything onto it and only at the end decide that this infrastructure doesn't support the data it was supposed to. So infrastructure testing should also be a key part whenever an infrastructure is being designed - it should be tested as well.

[00:24:25] Joe Colantonio How do you improve the system? Say there's a new tool or technique - AI is all the rage. Do you research different technology and say, okay, let's try to incorporate this now and see how it can help us? How do you continually improve your process to make sure it keeps getting better as well?

[00:24:44] Vishnu Nair We are analyzing what's in the market and trying to implement the current trends. AI is a big play right now, and it definitely has something to offer on the testing side as well. As a team, we analyze what can be improved on the AI side, and as individual contributors we go back to the market and keep up continued conversations with team members working in other organizations - how they are working with AI tools, if they are using anything - and we try to get feedback from them on how well it works, so that when we come back to our organization and see there is a chance to implement those changes, we can incorporate them.

[00:25:27] Joe Colantonio All right. So one other thing I'm dying to know: obviously you need money to do these types of initiatives. How do you convince your C-suite, your CEO, your executives - hey, we're going to do this, and this is why it makes sense? Is this coming from them telling you, hey, we need a better DevOps process? Or is it coming from you all saying, hey, this is what we're going to do, and then you have to get buy-in from your team and their executives to do it?

[00:25:51] Geetanjali Gallewale I would say it works both ways. In the bank we have hackathons and innovation competitions going on throughout the year where we can put forward our ideas; the best ideas get selected and get a budget to implement them. Even in the hackathons we gather ideas, and the best ones we circulate across the team to develop, and whatever gets developed can be used - it can even become a product for the bank.

[00:26:26] Joe Colantonio Very cool.

[00:26:27] Geetanjali Gallewale We have these things implemented.

[00:26:30] Joe Colantonio That's nice. All right, before we go, is there one piece of actionable advice you can give to someone to help them with their DevOps efforts? And then what's the best way to find or contact either one of you?

[00:26:41] Geetanjali Gallewale I would say, if you want to go with DevOps, one good piece of advice is to start with the V-model, where you have the TDD and BDD approaches combined: TDD covers the unit testing, and BDD goes hand in hand with TDD for the acceptance tests. Work in that way and you can achieve shift left better.

[00:27:04] Vishnu Nair My advice would be that, rather than focusing on manual testing, the effort should mostly be on the automation side. Even though the ROI on automation is lower in the initial phase, as the project grows the ROI ends up much higher on the automation side. So it's better to have automation working in parallel. Manual testing is beneficial in some cases, but it's always better to have an automation script to get the full benefit.

[00:27:38] And for links to everything of value we covered in this DevOps Toolchain show, head on over to TestGuild.com/p126, and while you are there, make sure to click on the SmartBear link and learn all about SmartBear's awesome solutions that give you the visibility you need to deliver great software. That's SmartBear.com. That's it for this episode of the DevOps Toolchain show. I'm Joe. My mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers.

[00:28:11] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the FAM at TestGuild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
A person is speaking into a microphone on the "TestGuild News Show" with topics including weekly DevOps, automation, performance, and security testing. "Breaking News" is highlighted at the bottom.

SimpleQA, Playwright in DevOps, Testing too big? TGNS140

Posted on 11/04/2024

About This Episode: Are your tests too big? How can you use AI-powered ...

Mudit Singh TestGuild Automation Feature

AI as Your Testing Assistant with Mudit Singh

Posted on 11/03/2024

About This Episode: In this episode, we explore the future of automation, where ...

Eli Farhood TestGuild DevOps Toolchain

The Emerging Threats of AI with Eli Farhood

Posted on 10/30/2024

About this DevOps Toolchain Episode: Today, you're in for a treat with Eli ...