Evolving Test Automation with Ethan Chung

By Test Guild

About This Episode:

The success of DevOps is intrinsically linked to test automation. But how do you quickly scale your automation effort to keep up with the speed of delivery? In this episode, Ethan Chung, a Solutions Architect at Keysight Technologies, will share a comprehensive DevOps strategy that he has seen work for other companies. Discover the power of automation intelligence, team collaboration hacks, handling the ever-expanding test surface, and much more.

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Ethan Chung


Ethan is a leader of Solution Engineers across EMEA and APAC with a technical background across automated testing, APM, software development, and research. Ethan's team focuses on building end-to-end testing solutions encompassing end-user interactions down to hardware solutions across enterprise environments.

Having experience with clients in banking, high tech, defense, healthcare, and retail, Ethan enjoys consulting and building DevOps and testing strategies across multiple industries focusing on digital transformation.

Connect with Ethan Chung

Full Transcript: Ethan Chung

[00:00:01] Intro Welcome to the Test Guild Automation podcast, where we all get together to learn more about automation and software testing with your host, Joe Colantonio.

[00:00:16] Joe Colantonio Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. Today we'll be talking with Ethan Chung all about the evolution of test automation to accelerate continuous delivery. It's a really timely topic, so I'm really excited to talk to him about it. Ethan, if you don't know, is the leader of Keysight's Solution Engineers across EMEA and APAC, with a technical background across automation testing, APM, software development, and research. He has a lot of experience in all the areas I think we're going to touch on today. Ethan's team focuses on building end-to-end testing solutions encompassing end-user interactions down to the hardware solutions across enterprise environments. So hopefully we'll touch on that as well. You don't want to miss this episode. Check it out.

[00:00:55] Joe Colantonio The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Their cloud-based platform helps ensure your favorite mobile apps and websites work flawlessly on every browser, operating system, and device. Get a free trial. Visit testguildcom.kinsta.cloud/saucelabs and click on the exclusive sponsor's section to try it free for 14 days. Check it out.

[00:01:22] Joe Colantonio Hey, Ethan. Welcome to the Guild.

[00:01:27] Ethan Chung Thank you very much. Nice to be here. This is exciting.

[00:01:31] Joe Colantonio Awesome. Yeah. Great to have you. Before we get into it, is there anything I missed in your bio that you want the Guild to know more about?

[00:01:36] Ethan Chung No, I think that wraps up what I'm doing day to day nowadays.

[00:01:39] Joe Colantonio Awesome. Awesome. That's what I love about talking to folks who work for vendors. You speak to a lot of customers, so you probably see a lot of issues happening at an enterprise level, really tricky, knotty issues. So where did you see test automation maybe a few years ago? And what are you hearing from customers, or maybe some challenges they may be running into now, with test automation in general?

[00:01:58] Ethan Chung So I think test automation has gotten more and more complicated over the years, just as the environments have. Traditionally, testing, for any developer first starting out writing Hello World, starts with logging, starts with unit testing. It's very functional and it's against a very specific application. And even as applications grow, breaking down into multiple tiers across different architectures, fundamentally it's still one application running underneath, and you generally have single control. Most large companies, no matter how big they are, still have their own in-house development environment, so they don't really have to worry about hidden black-box environments. In the last few years, particularly with everything being SaaS, SaaS, SaaS and everything moving to the cloud, a lot of environments are getting more challenging. Traditionally you hired a couple of developers and, depending on how you tested it, you knew exactly how your application behaved. These large enterprise firms no longer rely on that; they're all moving towards off-the-shelf products, a lot more packaged solutions, whether that's Salesforce, SAP, or Oracle. Those big names have something for pretty much everything now, particularly the add-ons and extensions. So testing has become not "how do we test our own application" but really "how do we test things that have evolved way, way beyond the scope of what we can manage."

[00:03:17] Joe Colantonio I absolutely agree with you. I've been seeing this more and more. I don't see a lot of hardcore software development going on; it seems like people are just pulling GitHub libraries and piecing them together to create a solution. Back when I started, we just had a server on a raised floor and everything lived there. But now you have all these services everywhere, and you may not even know what's being consumed by what. So when you say black boxes, is it also in the sense that an application could be consuming third-party services that someone doesn't even know about, because we've kind of lost control of our codebase?

[00:03:47] Ethan Chung I think it's a mix of a lot of different factors. If you're an in-house dev shop, every developer does this: you go online somewhere and basically pull code and borrow a solution, but ultimately that still sits inside your repo. Now, though, black-boxing can even be an internal problem. You have multiple teams, you have microservices launching applications, and they may or may not talk to each other. So there's black-boxing of actual code between different departments. Then, moving up, you're buying add-on solutions for pre-existing products that you can't manage. Salesforce is the classic one, where you have your core Salesforce and then loads and loads of extensions that sit on top of it. These can be Salesforce-provided or, more often, third party with enhanced solutions on top. And of course there's the completely hidden layer: anything military, a lot of healthcare, a lot of financials, it's just a pure black-box environment where you have no access to the code. You can barely even connect to some of these environments other than through a remote connection. You can see what's live, but you can only interact from a user perspective or a tester perspective. So it's coming from a lot of different positions now.

[00:04:56] Joe Colantonio You know, I've spoken to a lot of vendors and they keep bringing up these packaged solutions. Why is there a trend of companies starting to use more and more packaged solutions? I know enterprises always have, but is it speed to market? Is it security? Why are we seeing this trend?

[00:05:14] Ethan Chung There's the usual sales answer, right? It's faster, it's quicker, it's easier, yada yada. But ultimately, a lot of our customers already have Salesforce, they have SAP, and they don't want to build a brand-new solution across the system. With Salesforce, we say there's a lot of customization, but fundamentally there's always a basic operating layer it all feeds into. If you go into an accounts page within Salesforce, you generally have around the same kind of data, whether you're Ericsson or Sony or any other enterprise firm in the world. So there's no real point rebuilding your foundations. A lot of approaches in testing are now really focused on the user experience because, with the complexity of black-box environments, you don't really know what's happening beneath. As long as your users are happy, you're not going to get tickets thrown in on the complaint lines. So as we move towards that space, the packaged solution is just a wrapper around what your environment is doing to make sure it's working for the user. And frankly, that's the only place you can actually interface with it anyway.

[00:06:18] Joe Colantonio Absolutely. What I think is interesting, maybe just because of my experience, is that I speak with a lot of testers, and a lot of them don't mention packaged solutions when they talk about automation. I wonder if they're just focused on web-based, browser-based testing and don't even realize that their company has a need for automating these other solutions. Is that a common thing you hear?

[00:06:36] Ethan Chung That's a huge thing, especially with testing. Everyone focuses very much on what they can do and what they can physically connect to. When you're a dev or a tester, you focus on what you're told is the most important thing to deal with, because it's the only thing you can touch. So why would you go outside of your world and struggle with something that, from a development or test perspective, is not your problem, right? When I did any development, it was, "That's my code, I deal with my code, and I make sure my code is working well, but I'm not going to work out what other departments are doing and fix their code, because that's not my day-to-day job and everyone's so busy already." So the packaged solution has almost become a siloed item rather than an all-in-one solution, because everyone has to build solutions for their own little slice. Packaged solutions have the same problem as well: a packaged solution around, say, Salesforce or Oracle is still ultimately just around that product itself, so it becomes quite segregated from other testing solutions.

[00:07:36] Joe Colantonio But part of the problem is that because we've lost control, people want to focus only on what they do have control of. With CI/CD, continuous integration and continuous delivery, a lot of teams have been moving that way. Do they also incorporate packaged solutions, or is that another challenge that a lot of people shy away from incorporating into their full pipeline automation lifecycle?

[00:07:55] Ethan Chung Well, I think packaged solutions are ultimately just a means to an end. They're not magic beyond the fixable problems of any kind of software. When we look at anything we're testing, one of the questions is: how do we start getting value within a week, a month, a year? There's a big transformation, and a packaged solution is just "how do I get the basics first?" Within this space, we sometimes position packaged solutions as something else we have to manage, but ultimately a packaged solution should be a way of getting you to that end result faster. It doesn't really matter what sits in the middle. If you go with Selenium, there are 101 different packaged solutions around web testing and how it should be done. But all of it has one real focus: how do I get from A to B faster? And that B can be testing, or modeling the application across different applications. I think a lot of teams, though, keep packaged solutions at arm's length because "we have our own solution that works well."

[00:08:55] Joe Colantonio Do you also think that because it's a packaged solution, people just assume it's already been tested? In a sense, why test a packaged solution? Oracle must spend, not thousands, but millions and billions of dollars testing their solution. So why should I have to test it?

[00:09:09] Ethan Chung I think there's a huge problem around that, particularly with releases. One of the most expensive times in any company is when a key piece of software breaks down, the SAPs and Salesforces of the world. If a company is using Salesforce or SAP or anything Oracle, it's so embedded within the infrastructure that if it goes down, it's a P1: productivity stops, the factories are down. And when that happens, the business goes, "Hey, we're paying you so much money. You should come and fix this." But there's a natural wait time between the problem and the actual solution, because you have to bring in your consultants, and if it's third-party managed, you're always blind until you bring someone in. A lot of these packaged testing solutions actually do a really nice job of stepping into that gap, because even large companies may not have an in-house SME for these off-the-shelf products. Having that packaged solution means you get visibility before something breaks, before it goes into production, because you have way, way deeper visibility into it. There's also the sense that the third party in a large organization should be managing that testing, but they only test their own piece of infrastructure. SAP is never going to be testing Oracle software, right? So how do you make sure you have testing that fits across all of these items? Because it's generally the integration that causes problems now. I mean, it's rare we ever hear of Salesforce or any of the SAPs of the world crashing. It makes news nowadays when a new release has hugely problematic issues; people become aware very quickly. But it's the integrations that get specific companies into a lot more trouble.

[00:10:48] Joe Colantonio Do you have an example of an integration issue? Is it because people aren't testing the API connection, or where's the breakdown at the integration level?

[00:10:55] Ethan Chung We find it particularly around the data perspective. Oracle systems and Salesforce systems have fundamentally different data schemas, right? And there's that communication when you're converting or just mapping data over, when you're doing checks, or when you're connecting an SAP backend with a Salesforce system. How does that data connect together when it transfers? None of these are generally production-breaking problems in the sense that the application doesn't load. It's that the data fields no longer make sense; it doesn't work from a user perspective. We see a lot of these kinds of issues, and they're even more difficult for packaged solutions to deal with, because Salesforce can take in data that's technically correct, but you don't actually have a clue whether it's right until a user gets on it at a production-level view.

[00:11:45] Joe Colantonio So how do people usually overcome that challenge?

[00:11:47] Ethan Chung This is where the model-based approach works extremely well. I talked about the complexity of testing environments, moving from unit up to functional. The next step, which I personally believe in, is the model-based approach. Whether you have your own custom front end, which most companies have for their users to jump onto, plus an off-the-shelf backend, database, et cetera, the key is one testing solution that can orchestrate the testing across all of them. With a model-based approach, you have one section of the model that focuses purely on the browser, which is one tech stack, and how the user interacts with it. And the second you put a purchase order through from the browser, you jump straight into your SAP environment. You check that purchase order to make sure it's the correct one, that the data flow matches the actual items. You do a business logic validation there: the currency is correct, the actual numbers are correct, the order numbers are fulfilled, anything you need from a run-the-business perspective is still active. And that bypasses the problem of how do I make sure different environments are working in sync.
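To make that cross-system check concrete, here is a minimal sketch of the kind of business-logic validation Ethan describes: comparing an order as the web UI shows it with the record the backend stored. The record shape and field names are invented for illustration; a real model step would pull the backend view through whatever interface the environment exposes.

```python
from dataclasses import dataclass

@dataclass
class OrderRecord:
    order_id: str
    currency: str
    total: float

def validate_order(ui: OrderRecord, backend: OrderRecord) -> list[str]:
    """Compare the order as seen in the web UI with the backend record."""
    errors = []
    if ui.order_id != backend.order_id:
        errors.append(f"order id mismatch: {ui.order_id} vs {backend.order_id}")
    if ui.currency != backend.currency:
        errors.append(f"currency mismatch: {ui.currency} vs {backend.currency}")
    if abs(ui.total - backend.total) > 0.005:  # tolerate rounding differences
        errors.append(f"total mismatch: {ui.total} vs {backend.total}")
    return errors

# Example: the front end shows EUR while the backend stored USD, the
# classic "application loads fine, data makes no sense" failure mode.
ui_view = OrderRecord("PO-1042", "EUR", 199.99)
backend_view = OrderRecord("PO-1042", "USD", 199.99)
print(validate_order(ui_view, backend_view))
```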

[00:12:57] Joe Colantonio I guess the challenge with that is that on the web you can usually understand the elements underneath; usually you have developers create IDs and things. I think a lot of people shy away, once again, from the backend, because it's almost a black-box system like you mentioned a few times; it's not really accessible. So how does the model know, "here's the web, here's the backend, and here's how you can access that backend data"?

[00:13:16] Ethan Chung You either have to go for an overlaying model approach that has access to all the tooling. So from the web perspective you traditionally use something object-oriented, and then for the backend, within the same model, you go through a DB connection or some other third-party tool. You stitch all these tools together, and that's how you build the wider picture. The only problem with this is you then have one set of developers for the Selenium testing, one set of DB administrators, one set of SAP SMEs: three teams just to do the testing itself. Some model approaches now use a purely image-recognition or OCR approach instead. That way, as long as you can get a user connected to the systems, you can work from the browser as the user on the actual website, then go straight into SAP the exact same way, and run the queries via the database tooling. That gives you how the actual flow works, and then you inject a layer of: how do I interact with the objects, how do I interact with the database in the backend, how do I interact with the applications? Because ultimately, particularly in these black-box environments, you have to have the option of accessing it purely through the eyes of the user. Without it, you always have gaps inside any complex system.
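Eggplant's engine for this is proprietary, but the core idea of driving any screen purely through pixels can be illustrated with open-source pieces. A rough sketch, assuming pyautogui and pytesseract (and the Tesseract binary) are installed; this is not how Eggplant itself is implemented:

```python
import pyautogui
import pytesseract

def click_text(target: str) -> bool:
    """Find `target` on the current screen via OCR and click its center."""
    screenshot = pyautogui.screenshot()  # PIL image of the full screen
    data = pytesseract.image_to_data(
        screenshot, output_type=pytesseract.Output.DICT
    )
    for i, word in enumerate(data["text"]):
        if word.strip().lower() == target.lower():
            x = data["left"][i] + data["width"][i] // 2
            y = data["top"][i] + data["height"][i] // 2
            pyautogui.click(x, y)
            return True
    return False  # text not visible; the caller can scroll and retry

# The same call works against a browser, an SAP GUI, or a remote
# terminal session, because it only ever sees pixels, never object IDs.
click_text("Submit")
```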

[00:14:38] Joe Colantonio I've heard of model-based testing for a while. A lot of people back in the day would say, "Well, my model is only as good as how I model it." Is there anything new with model-based testing that helps, where rather than making assumptions, something guides you, saying "no, this is what's really happening, maybe in production," that you may not be aware of?

[00:14:58] Ethan Chung Yeah. That's a huge change, particularly with packaged solutions building these models. You hit the nail on the head: my model is only as good as how well we keep it up to date. If you look at any organization's Visio diagrams, they're generally not that up to date with the infrastructure or how the application actually looks, and you need to know the people in between. However, pretty much every packaged solution has some layer describing how the application should look. It's a little bit like marking your own homework, but it gives you that view, the digital twin of the application itself. You go into a website, you have your site map, you know generally what web applications look like with some level of confidence; you go into your backend applications and generally they're structured in some way. How we take that structure and convert it into the model becomes key. You have these multiple, incredibly complex environments: any Salesforce site will have hundreds or thousands of unique pages, and every year it gets more and more complex because there are more fields and items. By simply taking that structure from Salesforce, and I say simply, the engineers are hard at work building it, you ultimately get a skeleton of the application itself. Then all that logic becomes one step easier to build into the model, so you don't have to put in the labor of building the model yourself. That's a huge move now for the model-based approach with packaged solutions, because the longest time drain used to be "how do I map this out without being an SME in the product itself?"

[00:16:29] Joe Colantonio Yeah, but here's another challenge that's probably solved now: once you've mapped it out and a change occurs, how do you know the change occurred? You're like, "Oh darn, I have a thousand tests on that model mapped, and they added a new field." I would think nowadays AI can maybe update your tests on the fly. I don't know, how does that work?

[00:16:45] Ethan Chung That's a really good point, because when the environment changes, it's really hard to know why things changed, with microservices as the prime example. We know the industry loves microservices, right? Because it's fast and Agile and all the buzzwords you can imagine. But when you change things, the QA team that has to make sure it's working might not even get notified other than a notification in the pipeline, and generally the only issues detected are via monitoring tools. When you're updating via the model itself, you can see the difference between the different versions, so it automatically goes, "Hey, this location changed. This is probably the most important location within your application to test right now." When you do an integration, let's say with Salesforce, one of the biggest breakers with extensions is that the testers don't know there's an extension, because in some systems any user can just rock up to the Salesforce admin, add an extension, make changes, customize the fields. When that happens, with any kind of model tool you can just diff against the change. Knowing where the model has changed, using, frankly, quite simplistic machine learning models, tells you the most important areas to weigh. And with the digital twin, you navigate there first and check all the surrounding impacts. Because your model is the same thing as building a path, right? You know which path is the most important one, and you start triggering those actions first.
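The diff itself can be surprisingly simple. As a minimal sketch, suppose the model is just a map of screens to the fields they contain (a real model-based tool tracks much richer structure than this):

```python
def diff_model(old: dict[str, set[str]], new: dict[str, set[str]]) -> dict[str, str]:
    """Return the screens that changed between two model versions, and why."""
    changes = {}
    for screen in new.keys() - old.keys():
        changes[screen] = "new screen"
    for screen in old.keys() - new.keys():
        changes[screen] = "screen removed"
    for screen in old.keys() & new.keys():
        added, removed = new[screen] - old[screen], old[screen] - new[screen]
        if added or removed:
            changes[screen] = f"fields added={sorted(added)}, removed={sorted(removed)}"
    return changes

old_model = {"Accounts": {"name", "owner"}, "Orders": {"id", "total"}}
new_model = {"Accounts": {"name", "owner", "region"}, "Orders": {"id", "total"}}
print(diff_model(old_model, new_model))  # flags Accounts as the place to test first
```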

[00:18:05] Joe Colantonio Well, that's interesting. I used to work for a large enterprise and we used to do patches, but we never knew what tests to run for a patch. It sounds like with model-based testing you can almost say, "Well, we introduced this change and the model's telling us it's only affecting this area. So I know, therefore, that if I run tests A, B, and Z, it's covered." Is that true?

[00:18:23] Ethan Chung Yeah, exactly. It's one of the trickiest things. There's a lot of disconnect between testers and devs about where things actually get impacted. Ultimately, nobody wants the dev to spend an hour or two every release updating the testers just to keep them up to date. Some companies try to do it with good update notes and tracking via the dev notes, and that's what they can do, but that's always down to interpretation, and that's effectively what the model removes. It doesn't matter what the testing solution is underneath; the testing is ultimately the arms and legs, but the model itself is the brains for where people should be going first.
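Continuing the sketch above, once the diff has flagged the changed screens, picking "tests A, B, and Z" is just an intersection between each test's path through the model and the changed set. The test catalog here is invented for illustration:

```python
tests = {
    "test_create_account": ["Login", "Accounts"],
    "test_place_order": ["Login", "Orders", "Checkout"],
    "test_edit_region": ["Login", "Accounts", "Settings"],
}

def select_impacted(changed_screens: set[str]) -> list[str]:
    """Return the tests whose journeys touch any changed screen."""
    return [name for name, path in tests.items() if changed_screens & set(path)]

print(select_impacted({"Accounts"}))
# -> ['test_create_account', 'test_edit_region']: run these first after the patch
```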

[00:19:00] Joe Colantonio We've spoken a lot about automation and ERP. I speak with a lot of testers about whether manual testing is going away. What are your thoughts on manual testing as we move to CI/CD, where we're shipping faster and quicker and need to get into the hands of our customers? Is the shift basically to end-users doing our manual testing while we monitor production and pivot as quickly as we find an issue? How does that work nowadays?

[00:19:24] Ethan Chung I think this is a very interesting topic, right? This whole shift where end users become your testers: you can do blue-green deployments and just check it out. I think it's great if you have a massive resource pool of people that will basically stomach it for you. For some companies, Facebook and Amazon are probably the best examples, they can push out so many variations a day, gather the feedback, and then decide from that. It's great in principle, and if you have over 10,000 users actively using your application, you can take a slice of it comfortably. However, particularly in enterprise and highly sensitive environments, defense, medical, you can't really have your patients or your doctors be the guinea pigs making sure that nothing is going to break, right?

[00:20:10] Joe Colantonio Right. Right.

[00:20:10] Ethan Chung And mimicking a user flow, I think it's the same thing in the monitoring space, the APM space. The answer is always: how do we test this the way the user was doing it, triggering those actions? I don't know any CIO or CTO who's going to be okay with impacting the user experience, no matter how small that percentage is. And that's probably the biggest gap. If you have that massive user pool and it's okay for you to take a bit of a hit on user satisfaction, shipping something they may not like so you can calculate the best actions, then yes, it's a great solution. But I think very, very few companies ever get the privilege of doing that.

[00:20:47] Joe Colantonio Yeah, absolutely. So where does monitoring come into play then? I know you mentioned monitoring in the pre-show. Where do you see monitoring fitting in?

[00:20:54] Ethan Chung I think monitoring is the other half of the application picture. If you map it to the testing perspective: unit testing is making sure the joints and ligaments are working; functional testing is the whole body moving forward; and with a model-based approach, you're literally making that human run around and jump. Monitoring is great because it's the sensor on your heart, your insulin level, your beta and alpha waves, making sure everything's generally working once it's all put together. It's there to make sure everything's running operationally, but it's not going to replace testing; you need them to work in sync. It's one of the reasons monitoring became huge in the last few years, and everything's about forming baselines. But fundamentally, the problem always ends up at the end user, and you generally don't want the end user to be the person making sure things work. You need something that taps below the knee, like doctors do to make your leg bounce, to check things are working consistently. Testing is that reflex check, and monitoring is the measuring device making sure everything's healthy while you run the test itself.

[00:22:04] Joe Colantonio Nice. You did mention UX a few times. I've been hearing more and more about accessibility testing now. Do you see that as one of the key advantages, not only shipping faster, but making software more accessible, with a good user experience, responsive? Is that something that has bubbled up as an issue, something that needs more attention as we create software faster and quicker?

[00:22:28] Ethan Chung Oh, hugely. In the past, if we go back 5, 10 years, everyone would go, "How is the database doing? Are we having memory leaks, are we having issues?" Now, if your database is at 95%, some DBA somewhere might be stressing, and there might be a better way to fix it in the background. But ultimately, the company isn't sweating anymore, because as long as your user experience is running fine, no one cares, right? Even if you're hitting max capacity and the threshold is showing red at 99%, from a business value perspective, great: it means we've provisioned that machine just perfectly, as long as it doesn't hit 100%. So the UX is really what triggers the question: okay, when do we need to start changing the background? When do we need to change the environment? A lot of the time in the past we'd set arbitrary thresholds: we're hitting 80% now, therefore we need to dynamically scale up. Even with the new infrastructure solutions we go, okay, great, we're on Kubernetes, we're hitting XYZ utilization in the pods, therefore we're going to scale up. However, that doesn't really make sense beyond making machines for the sake of making machines that may or may not fit. It's really good from a preparation perspective, but when you need to know what's actually impacting the business, the UX point of view, can the user still do what they need to do, becomes king.

[00:23:45] Joe Colantonio Absolutely. And I was just thinking that one of the evolutions from when I started is that I focused on UI automation, just one little small piece of automation. We've talked really broadly: cloud infrastructure, UX, databases, backends. Who does all this now? I would think the evolution is that a tester now is someone doing almost full-stack automation, pipeline automation. Is that how you see it, as where we're going now?

[00:24:10] Ethan Chung 100%. With testing, there were loads of times where you focused on your little piece of the pipe, solved that problem, and made sure it worked. However, now testers are becoming SMEs of the business more than of the actual technology space itself. We're seeing a lot of new titles now: business analysts, user journey experts, some kind of new branding name. But ultimately they're just the people that make sure the full experience is working, whether the stakeholders are internal or external. Their real value is that they know how the application should flow. What was previously the domain of manual testers, the regression testing of going down multiple paths clicking through, those skills are now transitioning into the model-based approach, where you go, "Okay, if we need to build a model, what negative testing do I need to do? What logic do I need? What business assumptions do I have to encode within the application itself?" Those people are converting into the model testers, because they're the only ones that know what to click and how to test for things that no one else would test for.

[00:25:12] Joe Colantonio Great. I know we've talked about a lot of different things, but I don't think we've touched on Eggplant. Eggplant has been around for a while. Like I said, I worked for a large enterprise and we had some really, really old backend systems, and the only way to test them was to use a technology like Eggplant to interact with them. So what's been up with Eggplant? I don't know if people know you've been acquired by Keysight. Maybe say a little bit about Eggplant, where you're all at and where you've been?

[00:25:40] Ethan Chung Sure. Eggplant has historically always been about testing as a user, right? We have a test tool that can use OCR and image recognition to navigate an application as a user would. But 5, 10 years ago, it was very linear. You mapped a journey exactly as a user would do it: you click here, Eggplant clicks here; you go here, Eggplant does that. It just replayed the same path as a normal journey. One of the big developments in the last few years has been moving towards an automation intelligence approach. Effectively, we map the environment out into a digital twin with all the logic required to move through it, using the exact same old Eggplant, right? That's the bread and butter of Eggplant; it knows how to interact with the system. But now it's moving towards: how do I build the testing solution to be as reusable, as generalizable, as possible? When I build a new testing solution on an Oracle application, I'm not going to go, "Hey, I want you to click on this specific field, the P.O. number or query number, and then find the name of the thing." Instead, you make a generic search-field item inside the model that searches for whatever text the user inputs, finds that thing on the right-hand side, and clicks on it to take an action. That way you can apply it in the model to do anything, because it's all just different scenarios a user can go through. With the AI and machine learning side of things, that model becomes a journey the machine learning can actually create the paths for itself. So it's this huge move away from just standard regression tests. We still need those for the most important business journeys, but how do we automatically create the regression tests once we have the model mapped out? It's machine learning and the buzzwords and nice things around it, but ultimately it's just a really fancy way of saying, "I'm going to make decisions that should be better than a human's." We automatically know what journeys you can do, and then we optimize against those journeys, so instead of 1,000 regression tests you're potentially building 100,000 regression tests that cover every scenario across every single application you have.
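The "model generates the journeys" step is easiest to picture as path enumeration over a graph of screens. A minimal sketch with a made-up model; real tooling layers weighting, coverage optimization, and learned priorities on top of this:

```python
def all_journeys(graph: dict[str, list[str]], start: str, end: str,
                 path: list[str] | None = None) -> list[list[str]]:
    """Depth-first enumeration of every loop-free path from start to end."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    journeys = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # skip cycles so journeys stay finite
            journeys.extend(all_journeys(graph, nxt, end, path))
    return journeys

model = {
    "Login": ["Home"],
    "Home": ["Search", "Account"],
    "Search": ["Product"],
    "Account": ["Product"],
    "Product": ["Checkout"],
}
for journey in all_journeys(model, "Login", "Checkout"):
    print(" -> ".join(journey))  # each path is a candidate regression test
```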

[00:27:45] Joe Colantonio I was also noticing on your website that you mention performance and load testing. I'm not sure if it's part of the solution, but I was thinking while we were speaking earlier: with packaged solutions, one of the biggest things I hear people having issues with is, "How the heck do I do a proper performance test or load test?" So can Eggplant also help you perform load testing of packaged solutions?

[00:28:05] Ethan Chung Yeah. The way we pair Eggplant Performance with a packaged solution is simple. Eggplant uses the packaged solution to build a model around the application. So when Salesforce gets released, we automatically build a model of what it should look like. To run the tests themselves, we have the users' flows and the business logic we test against, and then we have Eggplant Performance, which load tests the actual environment. If you put X amount of users against the environment, is your test still running the same way? Think of it as your user journey going through, and then suddenly there are 10,000 users on this Salesforce environment. Is the experience still the same? Are the timings the same? Is the infrastructure still responding the same way? We can pair those two data sets together, and that effectively ties the end-to-end solution while also going further down. The model builds the horizontal aspects of the applications, while the performance testing goes vertically down into the actual applications, the machine level, hitting the bare bones under load, as well as anything DB, API, or SaaS across the system itself.
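The core of that pairing, running the same journey with N virtual users and comparing the experience to a single-user baseline, can be sketched in a few lines. Everything here is a placeholder: the journey body stands in for real scripted steps, and a production load test would use a dedicated tool rather than threads:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def user_journey() -> float:
    """Stand-in for one scripted journey; returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.05)  # placeholder for real UI/API steps
    return time.perf_counter() - start

baseline = user_journey()  # single-user timing

# Simulate 100 concurrent "users" running the exact same journey.
with ThreadPoolExecutor(max_workers=100) as pool:
    timings = list(pool.map(lambda _: user_journey(), range(100)))

p95 = statistics.quantiles(timings, n=20)[-1]  # 95th percentile
print(f"baseline={baseline:.3f}s  p95_under_load={p95:.3f}s")
if p95 > baseline * 2:
    print("user experience degraded under load")  # same journey, slower UX
```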

[00:29:12] Joe Colantonio Right. Another thing that came to mind with performance testing, and really any type of automation, especially since the domains include aerospace, defense, financial services, and healthcare, is test data. I can't find anyone who can say, "Here's a solution to help you with test data." Do you have any suggestions or advice around that? Does Eggplant help with test data as well?

[00:29:30] Ethan Chung Test data is one of the trickiest parts, because ultimately we can generate test data, right? You can get a developer to stand up a data pipeline that creates something, but test data is always going to be extremely domain-specific. That's why you need the business analyst who knows the regression tests and how to model them. Although we can create data directly in the DB or use programming knowledge to build out a lot of the test data, one of the things we commonly do is just use standard Excel, because anyone, whether very technical or non-technical, can build the dataset themselves, and a lot of the manual testing world already focuses on building that dataset in Excel. We take that as the data entry point and then let Eggplant itself choose which of that data to use. It's one of those things where a lot of customers come to us and go, "Hey, can you generate a data set?" Well, we can generate any data; it's just programmatic data generation, right? As long as it's a string, as long as it's not some kind of automatic 3D model of something, we can generally just make data for you. But where do you want that data? What should it look like? That's when you once again need the mindset of someone who has experienced the product itself, so we can incorporate that knowledge into the testing solution.
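As an illustration of that Excel-first pattern, here is a minimal sketch that pulls rows from a spreadsheet and hands one to a test run. The file name and layout are invented, and it assumes openpyxl is installed; Eggplant's own data handling works differently:

```python
import random
from openpyxl import load_workbook

# Load the analyst-maintained spreadsheet (hypothetical file with a
# header row followed by one test record per row).
wb = load_workbook("orders.xlsx")
ws = wb.active

rows = list(ws.iter_rows(values_only=True))
headers, data = rows[0], rows[1:]

# Let the harness pick which record to feed into this journey, the way
# Ethan describes letting the tool choose from the dataset.
record = dict(zip(headers, random.choice(data)))
print(f"running journey with test data: {record}")
```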

[00:30:44] Joe Colantonio I guess the other problem would be dependencies. With a packaged solution, maybe it's invoicing, I don't know, but to have an invoice you need to have a product, and that product needs to have been purchased and be in a certain state. Does the tooling help you at all with the modeling, to say, "Okay, you're going to need this type of data already there before we do this other type of test run"?

[00:31:03] Ethan Chung So let's take something like invoicing, right? Whether it's SAP or anything else, ultimately we actually wait until it's converted into a real product. One of the chicken-and-egg problems of testing packaged solutions is: how do I test something that doesn't exist yet?

[00:31:17] Joe Colantonio Right, right.

[00:31:18] Ethan Chung So it actually has a bit of a feedback loop. We take the production-level outcome, or a staging environment, to form the model itself, because you cannot test something until it exists. We have to wait for the stage where that thing exists, where production or the staging environment exists first, and we use that structure to form the model, which can then feed back into an earlier testing stage.

[00:31:40] Joe Colantonio It seems like you really have a holistic solution here. Another buzzword I've been hearing more about is RPA. How's RPA different from normal automation testing? I guess it helps you automate business processes that may not necessarily be tests, but it helps you move faster by automating certain things that maybe were data-intensive, I assume.

[00:31:58] Ethan Chung Yes. We actually do have some customers that use the tool for RPA. It generally isn't our main focus, because it's a very niche thing. RPA is usually very tied to specific tools, because fundamentally, if you tie into the application itself, you can always brute-force things faster than coming in from the outside world. However, for older applications, we have some COBOL environments we have to deal with, which no new RPA solution is going to build support for anymore, right? So we can just duplicate what a user does to move through the application itself and carry out all those RPA processes. We actually find a lot of the testing is effectively RPA in a way, because it's just duplicating user behavior through the application itself.

[00:32:40] Joe Colantonio Okay, Ethan. Before we go, what's one piece of actionable advice you can give someone to help evolve their test automation efforts? And what's the best way to find or contact you or learn more about Eggplant?

[00:32:49] Ethan Chung Sure. The first actionable piece of advice is: don't think of testing as piecemeal items. We get loads of tests that focus only on the unit level, only functional, only regression. Ultimately, these are all steps in the journey towards the full automation story you need. You can't have the model-based approach unless you have the unit testing, the functional, and the regression to build upon. So the journey is not just "how do I get to X and Y in the future," because you need that foundation first. That's probably the first thing for most of our clients. They come to us and go, "Hey, how do we get the end result?" Well, you need to know what you have first so that you can build towards it. To reach out to me, LinkedIn is probably the easiest, and for Eggplant, whether you want a technical discussion or anything else, we have our Eggplant website, which is easily accessible. Just ping me a message.

[00:33:40] Joe Colantonio Thanks again for your automation awesomeness. For notes on everything of value we covered in this episode, head on over to testguildcom.kinsta.cloud/a368, and while you're there, make sure to click on the "try it for free today" link under the exclusive sponsor's section to learn all about Sauce Labs' awesome products and services. And if the show has helped you in any way, why not rate and review it on iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe. My mission is to help you succeed in creating end-to-end full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:34:21] Outro Thanks for listening to the Test Guild Automation Podcast. Head on over to testguildcom.kinsta.cloud for full show notes, amazing blog articles, and online testing conferences. Don't forget to subscribe to the Guild to continue your testing journey.

 

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}