Performance Testing iOS Apps Using Appium and Xcode Instruments with Sandeep Dinesh

By Test Guild

About this Episode:

Did you know that Appium has a built-in integration with Xcode Instruments? In this episode, Sandeep Dinesh shares how to leverage this integration to measure your iOS app’s performance on a real device or a simulator. Discover mobile app performance testing tips, what to measure, and key performance indicators to be aware of. Listen up!

TestGuild Performance Exclusive Sponsor

SmartBear is dedicated to helping you release great software, faster, so they made two great tools. Automate your UI performance testing with LoadNinja and ensure your API performance with LoadUI Pro. Try them both today.

About Sandeep Dinesh


I am Sandeep, and I have close to 17 years of professional experience as a Software Development Engineer in Test, working across CI/CD and DevOps. I am passionate about writing code to make software testing more efficient and effective. During my free time I like to watch movies, read trivia, solve puzzles, and spend time with my family and friends.

Connect with Sandeep Dinesh

Blog: sandeepqaops.medium.com

LinkedIn: k-sandeep-dinesh-3b609016/

Full Transcript Sandeep Dinesh

Intro: [00:00:01] Welcome to the Test Guild Performance and Site Reliability podcast, where we all get together to learn more about performance testing with your host Joe Colantonio.

Joe Colantonio: [00:00:17] Hey, it's Joe, and welcome to another episode of the Test Guild Performance and Site Reliability podcast. Today, we'll be talking with Sandeep Dinesh all about performance testing iOS apps with Appium and Xcode Instruments. Sandeep has close to 17 years of experience as a software developer in test, build and release, and DevOps. And he's really passionate about continuous testing and writing code to make software testing more effective and efficient. I'm really excited to have him on the show because he's a tester, but he also does performance tests, and we're going to dive in a little bit on why he thinks performance testing is important and how he incorporates it into his normal day-to-day automation testing plans. We're really excited to have him on the show today. You don't want to miss this episode. Check it out.

Joe Colantonio: [00:01:00] This episode is brought to you by SmartBear. Listen, load testing is tough. Investing in the right tools to automate tests, identify bottlenecks, and resolve issues quickly could save your organization time and money. SmartBear offers a suite of performance tools like LoadNinja, which is a SaaS UI load testing tool, and LoadUI Pro, an API load testing tool, to help teams get full visibility into UI and API performance so you can release and recover faster than ever. Give it a shot. It's free and easy to try; head on over to SmartBear.com/solution/performancetesting to learn more.

Joe Colantonio: [00:01:46] Hey, Sandeep, welcome to the Guild.

Sandeep Dinesh: [00:01:49] Hi, Joe, thanks for having me.

Joe Colantonio: [00:01:50] Awesome to have you on the show today. I saw a post on LinkedIn that grabbed my attention about you doing performance testing with mobile apps. Before we get into it, though, is there anything I missed in your bio that you want the Guild to know more about?

Sandeep Dinesh: [00:02:00] Oh, no, no, we're good. OK, thanks for, I mean, this was like a great introduction for me. Thank you so much.

Joe Colantonio: [00:02:06] Cool. Awesome. So Sandeep, I guess you have a lot of experience with testing in general. How did you get into performance testing? Because a lot of people sometimes completely ignore it.

Sandeep Dinesh: [00:02:16] I've been mainly working as a software developer in test, and most of the time the focus of most of the companies I was working in was like, OK, we want to get to this much test coverage. We want to make sure we have CI/CD working fine. We want to know how the software is behaving functionally with the backend servers or even the clients, maybe like the browser client or maybe even the mobile client. So I've been working in my current organization for almost one and a half years. We have a very small mobile app, which is both iOS and Android. And we have got to a point where we are reasonably stable in terms of test automation and functional coverage. And that's where we really thought, OK, hey, what's the next thing which we can do for this product? The next thing, obviously, is how well it is performing in live systems. And the biggest challenge is that now we have iOS, and people who have worked in iOS automation would know that it's really tough, maybe going through things like Appium or, you know, all these open-source tools, because the platform, in general, is sort of, you know, limiting you in a lot of ways compared to something like ADB or all the programmer-friendly APIs and tools which the Android project has. So I was working on this feature for almost three or four months, just to figure out how to get my existing automation tests to also capture the performance metrics as well. It's been a long time since I wrote something, and I really wanted to make sure that it is definitely going to be useful for people. That's why I wrote this piece of material.

Joe Colantonio: [00:03:56] Very nice. And I assume the reason why you wrote it as well is that there probably wasn't a lot of information out there on how to do this. So was that one of the challenges?

Sandeep Dinesh: [00:04:03] It was, sort of, because we have some examples in Appium Pro, but that's actually a Java thing. But then how do we adapt it more towards, what you say, maybe the JavaScript world, or even more information about how the Xcode instruments work? Those are some things which I wanted to make sure of, because the current example which we have in the documentation is something like, OK, we can use some specific things like Activity Monitor, Time Profiler, and things like that, which are Xcode-specific instruments out of the box. What I wanted to do was write something which is useful for JavaScript, and also, you know, we have the capability to sort of add custom profilers. For this example, I tried, and this was a real-life scenario which we wanted to do: in my app, I wanted to check the GPU, and what is the FPS? Because those are the things that really matter as performance metrics for a mobile app. So all of those things, how we can sort of measure them using a single software test. That was my main motivation to write about it in a more, sort of, you know, detailed way.

Joe Colantonio: [00:05:11] Do you also have an Android version of the app as well?

Sandeep Dinesh: [00:05:14] Oh, we do. And for Android, we probably can get most of this information using ADB, because one good thing about Android as a platform is that you can basically use the shell commands themselves, maybe like ps or top and all those sorts of things, and then it's basically about getting that information in some CSV format. And then there is another thing specifically, the dumpsys command, which we can use for the GPU profile, and based on that we can get the stats. So that's what I'm saying, the point is that Android is much more programmer-friendly. I'm not saying whatever we have now is bad, but it's almost like Apple's ideology: OK, this is what the customer would want. And Xcode Instruments is actually a fabulous tool, you know, you get everything, the profiler literally has a lot of graphs, it has a lot of statistics, you can drill down further and further on all of the things. But then when it comes to a test automation requirement, where I want to know the trend of how the software is behaving. Typically, what we will do is run performance tests periodically, collect the stats, and then store the key stats somewhere, and eventually you can retrieve that data and see how the trend is going, whether we are improving performance-wise or degrading. For those sorts of things, we really want that sort of programming flexibility, and that's something which is still missing in iOS, I feel. The tool is fabulous, but Android more or less gives you all the options to write the code the way you want, I mean, however you design it, while iOS is like, OK, this is the API which you have, these are the formats of data you will have, and you probably need to work backward from there. Those are my personal views.
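
For readers who want to try the Android route Sandeep describes, here is a minimal Node.js sketch of it (the package name com.example.myapp is hypothetical, adb is assumed to be on the PATH, and the exact top flags vary by Android version):

const { execSync } = require('child_process');
const fs = require('fs');

const pkg = 'com.example.myapp'; // hypothetical app package name

// CPU/memory snapshot via the device shell's top command (single pass).
const top = execSync('adb shell top -b -n 1').toString();
const appLine = top.split('\n').find((l) => l.includes(pkg)) || 'app not running';

// Frame rendering stats (janky frames, percentiles) via dumpsys gfxinfo.
const gfx = execSync(`adb shell dumpsys gfxinfo ${pkg}`).toString();
const janky = (gfx.match(/Janky frames:\s*(\d+)/) || [])[1] || 'n/a';

// Append the headline numbers as a CSV row so trends can be tracked over time.
fs.appendFileSync('android-perf.csv', `${new Date().toISOString()},${janky}\n`);
console.log(appLine, `| Janky frames: ${janky}`);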

Joe Colantonio: [00:07:04] This might be a dumb question. I'm thinking of web testing, where people say, well, Chrome performs the same as Firefox. So does performance vary for the same exact app on iOS compared to Android? Could the performance be significantly different?

Sandeep Dinesh: [00:07:17] I haven't really worked on separate native apps on both iOS and Android for a comparison. We have one codebase where platform independence is guaranteed using React Native, so React Native by itself sort of takes care of how the app displays itself, manifests itself, on both iOS and Android. And of course, I have noticed that there are marked differences in the way the underlying API behaves. It's more or less like the JVM, per se: when you say React Native, you basically have the same code which runs on different environments, whether it is iOS native or Android native, but then how does it actually translate? And that's what we were really interested in, because in the past, when I was working in one of my previous organizations, we had separate native clients for both platforms, and the performance tests for them were different. Over here we have React Native, so it adds another layer of, sort of, performance measurement. The level of control we have and the level of measurement we need to do are really critical in that sense.

Joe Colantonio: [00:08:23] So you mentioned a few monitors already, GPU and FPS, and I'm not sure how many people are familiar with those terms. Maybe we could dive in a little bit more into the main monitors you look at and maybe what each one does, that'd be awesome.

Sandeep Dinesh: [00:08:32] Sure. So GPU is the graphics processing capability of the mobile device. And specifically, we want to look at the frames per second, which is basically the number of frames of animation or of the app rendering; it's a measure of how the app renders itself. Sometimes, if you have played games or things like that, you see that suddenly the game slows down and, you know, people say, OK, there is a lag, there is a drag. There is a lot of, you know, terminology people use about the way the app's display is being shown. So it's a measure of that, you know, it's a measure of how many frames per second we are currently showing, and the best case is 60. But then obviously human eyes don't need that. We really need to make sure that we have on average around twenty-four frames per second; if it's more, maybe 30, 32, it's actually pretty good. So we want to make sure that whenever we execute a particular performance scenario, when you're playing a game or maybe you're navigating from page to page, the GPU information is captured, and Xcode Instruments has a GPU instrument which you can basically add as one of your instruments. And we want to make sure that, you know, the GPU is not really being pushed; if it stays at around less than 50 percent, that's actually pretty good. And then you want to make sure that you have on average around 30 frames per second.
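
As a small illustration of the thresholds mentioned here, a test could evaluate collected samples like this (the data shape is hypothetical; how you obtain the samples depends on your instrument setup):

// Rough health check based on the guideline figures from the episode:
// average FPS around 30 or better, GPU utilization ideally under ~50%.
function checkRenderingHealth(fpsSamples, gpuUtilizationPct) {
  const avgFps = fpsSamples.reduce((sum, v) => sum + v, 0) / fpsSamples.length;
  return {
    avgFps,
    fpsOk: avgFps >= 30,           // ~24 fps is the bare minimum, 30+ feels smooth
    gpuOk: gpuUtilizationPct < 50, // rough guideline, not a hard limit
  };
}

console.log(checkRenderingHealth([58, 60, 31, 29, 42], 44));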

Joe Colantonio: [00:10:04] Now, you did mention that Appium gives you some information and Apple itself gives you some information. Do you have a report at the end, or are you able to merge both of them so you get a good kind of performance overview?

Sandeep Dinesh: [00:10:14] No. So just to explain how Appium does this: Appium, as you may know, is actually something like a small web server which sits in between the mobile app and the client, which is our test, which you are, like, you know, running. But in order to get to the app, it needs a driver. So for iOS, they have something called the XCUITest driver, it's basically an Xcode UI test server; it used to be a separate project in itself. But over the last couple of years, since there was no new development happening there, Appium has, sort of, you know, added it to one of their own open-source projects. So Appium runs on a set of, what is it, W3C-standard commands, and that gets translated into exactly what you want. Let's say when you say Appium click, it actually gets translated by the driver into the iOS code. So this driver has an integration with the Xcode instruments. And they have a utility, they have opened up, like in the example I have given, a couple of JavaScript commands which say, OK, start the performance recording and stop the performance recording, and you basically give which instrument is going to run. And then the data format is, sort of, like a buffer. It collects all this data, and then we just need to decode it from base64, and then it gets translated into a zip format, which is more or less like the .trace file, which you can, sort of, you know, if you have Xcode installed on your Mac, you can basically double click on it, and this file would render itself, how, you know, how the app was behaving and things like that. It's more or less like, let's say, if we didn't have Appium and you just wanted to profile your app: what you do is you open the project in Xcode and then you click on one of the top menus, you click Profile. What it does is it builds the app, and when you are doing this, you can choose which are the things you're going to profile. And then the app is, you know, sort of, ready with that information, and it has all those, what they say, monitors triggered and attached to it. And basically, you can do all your operations and then stop this activity, and then you have this information. It's basically Appium, sort of, providing you the same thing programmatically. If you want to do this using Appium, you can do this. You don't need to go through the Xcode method. If I didn't know anything about this, as a tester, or as a test automation engineer, I would probably think, OK, I'll try to automate Xcode, you know, opening the project and things like that. You don't need to do this. You can do it like this. So that is the strategy.
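
Here is a minimal sketch of that integration using a WebdriverIO-style Appium client (the exact client call syntax may differ; the 'mobile: startPerfRecord' and 'mobile: stopPerfRecord' extensions are what the XCUITest driver exposes, and the base64 payload it returns is a zipped Instruments trace):

const fs = require('fs');

async function profileWithInstruments(driver) {
  // Start Instruments recording, scoped to the app's own process.
  await driver.execute('mobile: startPerfRecord', {
    profileName: 'Activity Monitor', // an out-of-the-box Instruments template
    pid: 'current',                  // profile just the app under test
  });

  // ... run the normal test steps you want to measure here ...

  // Stop recording; the driver hands back a base64-encoded zip of the trace.
  const b64 = await driver.execute('mobile: stopPerfRecord', {
    profileName: 'Activity Monitor',
  });
  fs.writeFileSync('perf.trace.zip', Buffer.from(b64, 'base64'));
  // Unzip and double-click the .trace file to open it in Instruments.
}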

Joe Colantonio: [00:12:53] Really good. I have a link to your blog post in the show notes because it's just four steps, so it seems fairly easy. Could you talk at a high level about what those four steps are, to get someone started who feels like it's too hard for them, or, you know, doesn't know if they could do this?

Sandeep Dinesh: [00:13:05] Yeah, I mean, that is the whole point, I really wanted to show that it's that easy, because, like, obviously we don't know much about Xcode instruments in general, or maybe how easy it is to convert your existing test automation code into a performance test using this. In my example, it's a simple login test, written how you typically would write it; on top of that, I'm just saying, OK, add a couple of hooks around whatever you want to really measure. You just need to start the performance recording and stop the performance recording once it's done. The only thing is that whatever template you're using for the Xcode instrument needs to capture whatever you want. So the first thing which you need to do is you open up Xcode, then add a blank, sort of, what do you say, performance template. Add all the instruments that you want, maybe you want Activity Monitor, you need the GPU, you need the Allocations — the allocations are for memory —, the disk I/O, and, like, one crucial thing which people don't realize is the thermal state of the iOS device. To be honest, Xcode really gives a lot of good features there, you know, because a lot of times you see that the mobile just heats up when you're playing a game or you are using an app for a long time; all of those things can be measured. So you add all those, you know, instruments, would you say, together, create a template, set that aside, note down its name, and then create a small utility which would basically just call this Appium script, what is it, the driver execute script: start the performance with this template, and make sure that the PID, the process ID which you are using, is current. That's just something they have written in their code like that. If you don't pass that process ID as current, what would happen is that it would profile the complete system, what you say, and that data will be huge, which you probably are not interested in. And then once your test is done, you just need to make sure that you stop the recording, and, like, you know, in your Appium logs you can actually find that this much data, these many bytes, have been returned. You just need to make sure that the data is captured, convert it to a format that you want, and then you're good to go. It's that easy.
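
Putting those four steps together around an existing login test might look roughly like this (mocha-style hooks and a global driver session are assumed, and the custom template name MyPerfTemplate is hypothetical — it stands for whatever you saved in Instruments with the Activity Monitor, GPU, Allocations, Disk I/O, and Thermal State instruments added):

const fs = require('fs');

describe('login performance', () => {
  before(async () => {
    // Steps 2-3: start recording with the custom template, app process only.
    await driver.execute('mobile: startPerfRecord', {
      profileName: 'MyPerfTemplate', // the template you created in Instruments
      pid: 'current',                // avoid capturing the whole system
    });
  });

  it('logs in', async () => {
    // Step 1: the existing functional login steps go here, unchanged.
  });

  after(async () => {
    // Step 4: stop, decode the base64 payload, and keep the trace archive.
    const b64 = await driver.execute('mobile: stopPerfRecord', {
      profileName: 'MyPerfTemplate',
    });
    fs.writeFileSync('login-perf.trace.zip', Buffer.from(b64, 'base64'));
  });
});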

Joe Colantonio: [00:15:17] Very cool. So do you have a separate test just for performance, or do you instrument some of your normal functional tests and just add extra features to capture the performance as well?

Sandeep Dinesh: [00:15:28] The current strategy we have is that we have some functional tests which are, like, single-user use cases, or maybe where I'm not really loading a folder or a place like that with multiple pieces of data. Maybe it could be just like, you know, if we are trying to automate a feature like a gallery. A functional test would be just, OK, I have a single picture or a single video there, how quickly does that gallery open up; that would be more or less a functional test. But for a performance test, I really want to make sure that, OK, maybe there are hundreds of images there or hundreds of videos there, and I scroll through, I zoom through, all those sorts of things. So some of the tests, the functional tests, we definitely want to keep as single-user or basic things, and, you know, the performance tests can go beyond that. For those sorts of things, we reuse some of the functional tests, but otherwise we have some extra scenarios, and that's a good place where you really need to have a discussion with your app developers, because maybe they know some part of it. Of course, in an ideal world, we all have enough time, and you have time to open and read the code which the developer has written. But that's where, in good teams, they really help each other and say, OK, hey, these are the performance scenarios which I can foresee, is that good enough, or do you want extra features to be, you know, tested? Or even the developer can say, OK, hey, this is a small code trick which I have done, which probably I'm not so sure about. So the whole point is we want to try and avoid bugs rather than, OK, hey, I caught this bug or I didn't catch that bug, that sort of thing.

Joe Colantonio: [00:17:09] How often do you run these tests? Is it on every check-in? Is it, like, every quarter? How often?

Sandeep Dinesh: [00:17:16] Well, the company that I'm working in is very small, and we want to really push automation to the core, and we want to shift as much as possible to the left; with shift left we really mean business there. So, of course, functional tests are run almost every day. These are part of the, sort of, release tests where, I mean, even before our product actually goes out to the market, we want to make sure that, you know, we do performance tests every cycle. But then the ideal candidate for a performance test would be more or less a staging environment, pre-prod, where we really are sure. But of course, when you write smart code, we want to make sure that it can run on maybe the develop branch, too. So that's the strategy we are currently taking. But ideally, it should be on a staging environment, pre-prod.

Joe Colantonio: [00:18:04] So are there any lessons learned? Like, maybe you started the project and then after, you were like, I should've done this differently, or, I wish I knew this before I started this particular project?

Sandeep Dinesh: [00:18:13] That is actually sort of interesting, because when I started my career as a performance test engineer, way back in 2004, I was just doing LoadRunner. And at that time, when my team asked me to do things, it didn't really make much sense to me how critical some of the steps we have are, or things like that. OK, so at that time, back in the day, I was doing server performance tests, but out here, when I did the app performance, I forgot all that. Of course, we are all humans. Most of the time I was doing development work and automation, that's why, of course, I forgot some of the performance concepts. So I was doing scenarios one by one, and most of the time I was just noticing that the performance of the app was really going bad. And I was, like, really worried. Well, that can't be true. Then obviously I understood that each of the scenarios is sort of different. We want to make sure that they are independent, and we want to give the best chance for the app every time we are doing the test. So that is a lesson which I learned; basically, the next time, what I did is make sure that, you know, the system has all its hardware capability available. Maybe it could be through rebooting, reinstalling the app, making sure that the connectivity is perfect, all those things, just to make sure that, you know, the performance data which you collect is really valid. Otherwise, if I just took that data to people, they probably would panic: oh, come on, it can't be that bad. So it was a nice experience, and, you know, it is good to learn that early rather than, you know, collect all this data, publish it, and people get the feeling that the product is really bad. We want to make sure that we are doing the right things at the right time.
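
A simple way to apply that lesson is to give the app a fresh start before each performance scenario, for example (WebdriverIO-style Appium commands assumed; the bundle id and build path are hypothetical):

async function freshAppStart(driver) {
  const bundleId = 'com.example.myapp';          // hypothetical bundle id
  await driver.removeApp(bundleId);              // clear any leftover app state
  await driver.installApp('/builds/MyApp.ipa');  // hypothetical build artifact
  await driver.activateApp(bundleId);            // relaunch before measuring
}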

Joe Colantonio: [00:19:49] Absolutely. So I guess another thing, I come from a similar background, I did mostly performance testing earlier in my career, but back then it was putting a load on a server. So you had multiple users and that was the performance, how well it did under load. When you do mobile testing, do you ever do concurrent users or is it always a one-user type of performance test?

Sandeep Dinesh: [00:20:08] Yeah, I mean, it really depends on the app. Most of the apps which we have don't support concurrency. Maybe Google Chrome, or any browser, is sort of a concurrency example, where you can have multiple tabs and each tab remains a separate process in itself. But with most apps, if you open a second instance, they don't generally allow it, because they somehow feel that maybe that's not the right thing to do. But yes, the major concerns when we are doing performance tests for apps are basically how much memory it is consuming, how much CPU time it's been using, how many disk writes it's doing, all those sorts of system resources you are using, and of course, the GPU. GPU is so critical, I can't emphasize it enough, because previously I was working on game testing as well, and back in the day I was doing slots testing. Slots are like, you know, if the reels are spinning and they suddenly go into slow motion, the customers are going to be super, super pissed. So we really want to avoid that. So the major thing is we need to worry about these critical metrics. At the same time, we also want to test things like when the app goes into the background: how much memory is it releasing back, you know, because in the real world the app doesn't reside on its own. People have multiple apps, you know, they're seeing multiple notifications, so how does the system prioritize all those sorts of things? We want to make sure that the customer experience is good. So that's why, like you said, the major difference is that, you know, most of the time you don't have multiple users. I'm sure maybe the browser testing teams really need to worry about that; they might be doing that sort of, you know, concurrent multiple-user scenario. That would be interesting, too; maybe that's something we will also be interested in, maybe we will try it, let's see.

Joe Colantonio: [00:21:53] It would be awesome to have you back on the show for that. So do you actually test on a real device? Is it a simulator, is it an emulator? Is it all the same, or does it matter what or where you're testing against?

Sandeep Dinesh: [00:22:02] Oh, no, no, no. I forgot to mention that. Thanks for reminding me, Joe. So simulators are sort of the real thing, but not the real thing. What happens is, what do you say, the system under test as well as the test driver are on the same machine. So it's not perfect, but it is a good thing, probably, which we use for test automation, because the simulator or the emulator which we have, they generally don't come with the real-life problems of actually having a real device. I think it doesn't do, you know, system updates on the fly, it doesn't get notifications from other apps and things like that. And also, at least for the Xcode thing, a lot of features like the GPU or even thermal stats or things like that are not really there; the instruments would simply fail, it would say, OK, I can't collect this because it's a simulator. But the good thing is, if you have a decent, one of the latest, iOS devices, using Xcode Instruments you can get most of the information that you want.

Joe Colantonio: [00:23:02] Now, this is a weird question. I've seen this a lot, but I've never asked anyone about using CodeceptJS. What is CodeceptJS?

Sandeep Dinesh: [00:23:09] Oh, yeah, I'm not really a JavaScript expert. I mean, it's one of the frameworks, it's more or less like TestNG or pytest or whatever we have. It has certain, what you say, good features, and that's the reason our team uses it. From my observation and usage, it's more or less like a set of instructions that you queue up, plus things like automatic retries and the skipping of certain scenarios are handled in a much cleaner way. It has very good integration with most of the current technologies like Puppeteer, WebDriver, Selenium, even Appium, and Detox. In that sense, it's actually very good, and the fact that our core is React Native and we are a React Native shop, it actually works very well for our technology. And of course, there are some things which I find, coming from a Java and Python background, that are tough to get done there, because it just goes in, like, marching order: OK, I see this, I see this, I see this. And if it has seen a step once, it won't do that step again. That's why I said it has a set of orders, and it only follows that; if an error suddenly pops up, it just fails. It just retries the same step, but then you can't really write sort of loopy code, which we, you know, of course, in most of our test automation code, we need to put a lot of bandages on to make sure it works, because at the end of the day, maybe if a login fails, we just can't stop there; we try to make sure that we give the best chance for the code to go forward. In that sense, the concept is very strict. But of course, another way of looking at it is to say, hey, we have a bug there, what's the point in going further? Right. So it's like that.
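
For readers unfamiliar with it, a CodeceptJS scenario for the kind of login test discussed earlier looks roughly like this (the locators and scenario are hypothetical; retries and skips are configured in the framework config rather than in the test body):

Feature('Login performance');

Scenario('user can log in', async ({ I }) => {
  I.fillField('~username', 'demo');     // ~ prefix = accessibility id locator
  I.fillField('~password', 'secret');
  I.tap('~loginButton');
  I.waitForElement('~homeScreen', 10);  // wait up to 10 seconds
});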

Joe Colantonio: [00:24:56] Very cool. Okay, Sandeep, before we go, is there one piece of actionable advice you can give to someone to help them with their iOS performance testing efforts or strategies? And what's the best way to find or contact you?

Sandeep Dinesh: [00:25:06] Sure. So definitely, if you are using maybe Appium or any such technology, try to use this. There are other methods, too, by which we can get the same thing done, because I was exploring further, and recently I was working more on webview automation, because what happens is that we might not get those performance metrics from the app process sometimes. Sometimes it gets delegated to the app's webview, the Apple webview which you have. So specifically that process needs to be instrumented and found out. That would definitely be a separate process; I mean, I would say a separate blog in itself. Currently I'm working on it, and I can't really share many details because I'm still learning, but whatever I have shared with my developers, they're quite happy with what they have seen. Basically, what you need to use at that point of time is maybe the Xcode command line by itself. Of course, in iOS, I would say it has less flexibility. Using ADB, let's say, you can run the top command and then collect the stats, but in iOS some information is not really exposed or maybe not advertised that well. But they still have things like xctrace, Xcode's trace tool, by which you can achieve the same thing, and you probably can automate it, and you can attach. It's actually very neat as well. You know, you can probably give the process name, not just the process ID; you can give either of them to attach, and it will still run with the same, what you say, instruments. So that's something I would say: probably don't lose hope, and maybe just try to do this manually first. Listen to podcasts and talk to people. I would say ask these questions in web forums and things like that. Maybe that'll definitely help, because I for sure learned a lot from, I mean, some of the other bloggers or people who actually worked on this. And that's why most of us are really happy to share this information. And I know that this information is scarce, but then maybe try to ask the question and maybe seek help. We might have answers, and maybe I can also learn a lot of new things, since a lot of people would be able to share that information.
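
The xctrace route Sandeep mentions can be driven from the same Node.js tooling, for example (template, process name, and output path are examples; a --device flag may be needed when targeting a physical device):

const { execSync } = require('child_process');

// Attach Instruments to a running app by process name for 30 seconds.
execSync(
  "xcrun xctrace record --template 'Activity Monitor' " +
  '--attach MyApp --time-limit 30s --output myapp-perf.trace',
  { stdio: 'inherit' }
);
// The resulting .trace file opens directly in Instruments for drill-down.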

Joe Colantonio: [00:27:17] Awesome, Sandeep, the best way to find or contact you?

Sandeep Dinesh: [00:27:20] Sure, my LinkedIn or my Medium, and my email address, please. I have shared that information out there, openly. A lot of people actually directly talk to me as well; like, I did a lot of GitHub Actions integration for iOS simulators and Android emulators, and they did reach out to me. And I try my best to help, or at least point them in the right direction so that, you know, they can try to solve the issue.

Joe Colantonio: [00:27:42] Thanks again for your performance testing awesomeness. If you missed anything of value we covered in this episode, head on over to TestGuild.com/p67, and while you're there, make sure to click on the Try Them Both Today link under the Exclusive Sponsor section to learn more about SmartBear's two awesome performance test solutions, LoadNinja and LoadUI Pro. And if the show has helped you in any way, why not rate and review it on iTunes? Reviews really do matter in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Performance & Site Reliability podcast. I'm Joe. My mission is to help you succeed in creating end-to-end, full-stack performance testing awesomeness. As always, test everything and keep the good. Cheers.

Outro: [00:28:26] Thanks for listening to the Test Guild Performance and Site Reliability podcast. Head on over to TestGuild.com for full show notes, amazing blog articles, and online testing conferences. Don't forget to subscribe to the Guild to continue your testing journey.

  • Rate and Review TestGuild Performance Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Bas Dijkstra Testguild Automation Feature

Expert Take on Playwright, and API Testing with Bas Dijkstra

Posted on 04/14/2024

About This Episode: In today's episode, we are excited to feature the incredible ...

Brittany Greenfield TestGuild DevOps Toolchain

AI-Powered Security Orchestration in DevOps with Brittany Greenfield

Posted on 04/10/2024

About this DevOps Toolchain Episode: In today's episode, AI-Powered Security Orchestration in DevOps, ...

A podcast banner featuring a host for the "testguild devops news show" discussing weekly topics on devops, automation, performance, security, and testing.

First AI software tester, Will You Be Replaced and more TGNS116

Posted on 04/08/2024

About This Episode: Will you be replaced by AI soon? How do you ...