Using Your Environment Data for a Better Testing Outcome with Wayne Ariola

By Test Guild

About This Episode:

Do you know how to use your environment data to create better testing outcomes? In this episode, Wayne Ariola, a recognized thought leader on software testing, shares his suggestions for leveraging an open-testing platform.  Discover some ways to re-think your software testing lifecycle with an eye towards more comprehensive automation. Listen up!

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Wayne Ariola


Wayne Ariola is a recognized thought leader on software testing topics such as continuous testing, risk-based testing, service virtualization, API testing, and open testing platforms. Wayne has created and marketed products that support the dynamic software development, test, and delivery landscape. He has driven the design of many innovative technologies and received several patents for his inventions. He has been a contributor to the software testing space for many years and in the software industry for more than 20 years.

Connect with Wayne Ariola

Full Transcript: Wayne Ariola

Joe [00:01:29] Hey, Wayne! Welcome to the Guild.

Wayne [00:01:33] Hey Joe! Thank you so much. Obviously a big fan of the podcast, and I'm really excited that this is my first time.

Joe [00:01:39] That's awesome.

Wayne [00:01:40] You know, this is…I hope to do it again, but I really do enjoy the content and it's actually on my iPhone. So thanks for offering.

Joe [00:01:49] Thank you! Awesome. Great to have you on the show. I've been wanting to have you on for a while. We chatted previously on a few different things. I'd like to dive into that. Before we get into it, is there anything I missed in your bio that you want the Guild to know more about?

Wayne [00:02:00] No, but could you follow me around and just say that all the time for me? That'd be awesome. No, I guess the one thing I'd like to differentiate here is that the software testing space and vendors have had a mixed relationship over the years, and it's something that's pretty interesting. I think we've finally come to a point where open-source tools are so productive that they're a great way to assist organizations with acceleration. From my perspective, and I've had multiple patents in the area, some of them are even expressed in open-source tools these days, right? No matter what the organization, I really want to foster more collaborative thought around how to achieve what you need to achieve. I think we're trying to figure out how to stretch things and then we're catching up, stretching and catching up, instead of leapfrogging to the next point. So my whole thing is, I hope we can talk about some topics which assist organizations to leap into that future state rather than trailing along behind development and the evolution of architecture. That's kind of my thing these days: how can I help you move forward faster?

Joe [00:03:12] Great. You bring up a good point; I was just having this discussion with someone. I ran an online conference, and I have sponsors that are vendors. I actually come from an enterprise development background, so I'm used to working with vendors, and I love vendors. But everyone always gets upset; they always want to use open source. What's the happy medium, then, between vendors and open-source solutions nowadays?

Wayne [00:03:31] It's a fantastic question, and I think what you need to realize is that in the software testing space itself, Joe, we've really radically focused on the mousetrap, right? Whether it's building a new mousetrap or a better mousetrap. When you talk to anybody about automation, it's really around this pretty isolated thing of generating a reusable script, right? Whether it's no code, low code, or coded, it's always really isolated. I think open source has come into the picture and created a really valuable array of tools that are even optimized for particular architectures, right? So there's no doubt the more technical an organization is, the more focused they are going to be on using those kinds of architecturally optimized tools, and I'm talking about things like Playwright or React, the more technical frameworks that also have testing components associated with them. I think today the question of how to partner with dev test teams is changing dramatically. I look at all these tools as nodes, right? Nodes in the process, nodes that feed particular sets of automation. And the question now is, "How do I best orchestrate that information, not creating this kind of divergence between a developer's asset and a tester's asset, but looking at those nodes of information in a much broader picture?" Not to get too ethereal here, because sometimes this floats away from you, and it can. But the idea now is really, how do I get this puzzle put back together? So I'm actually building upon the information that's already there in order to produce a better picture for the business to understand risk, right? That's what it's all about today. So I think that's where the vendors are coming in and thinking, "Okay, what really do we need to do? Do we really need to build a better mousetrap and build another mousetrap and another mousetrap?" I just don't think so.
I think it's now about creating a platform for more comprehensive effectiveness in terms of what I'm trying to accomplish.

Joe [00:05:46] Absolutely. And I love how you use the term "node", because I think the concept you have is based on a kind of hub that assists testers, and I think you call it an open testing platform. I guess before we dive in: what is an open testing platform?

Wayne [00:05:58] So an open testing platform, thank you for using that word, is a collaboration, right? It basically inverts the concept of what we do today around building scripts. It abstracts logic into a digital twin of your critical transactions. I'm going to have to unpack all these terms, and I love the term "digital twin". Some people get it; some people don't. I know if I had to have a digital twin, Joe, it would look a lot better than I do now. So a digital twin gives you the ability to abstract your logic away from the test script. Today, we're building test scripts, and those test scripts are buried and very tightly associated with the tool, right? What you need to do now is abstract that into a collaborative form. And that collaborative form is a representation of the critical transactions associated with your business, which is the digital twin. And the good thing about the digital twin, Joe, is that (A) it gives humans a way to collaborate against something visual associated with the transactions. And what I'm talking about is a transaction horizontally across the organization, not necessarily trapped within the vertical of a single application. I know the phrase "end to end" is sometimes used with different contexts in our industry, but we've got to be looking at the entire experience from a user perspective, whether it stems from a business process or from a user engagement, and then having that model in a form in which humans can collaborate, which is number one and critical. But number two, since we are now in the data age, having that model open and receptive to having machines contribute to it as well. Meaning the model not only needs to be influenced by a human who is looking to engage or to understand the next requirement coming down the pipe.
But we also need to mine and curate data from production and from the dev test infrastructure, and inject it into the model to assist us with prioritization, visibility, reuse, and maintainability, to give us a method in which we can then push the logic down to the nodes, whatever open-source or commercial tool you might want to use, to be much more productive. So the idea of the open testing platform is really to create this new method of work, which is called "Inform, Act, Automate".
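As a rough illustration of the digital-twin idea Wayne describes, the transaction model lives apart from any one test tool and gets pushed down to tool "nodes" through adapters. A minimal, hypothetical Python sketch (all class and function names invented for illustration, not from any real product):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One step in a business transaction (e.g., 'submit order')."""
    name: str
    system: str  # which application/service the step touches

@dataclass
class Transaction:
    """A 'digital twin' of one end-to-end business transaction,
    held independently of any concrete test tool."""
    name: str
    steps: list = field(default_factory=list)

def export_for_tool(txn: Transaction, tool: str) -> list:
    """Push the abstract model down to a concrete tool 'node'.

    The same model can target Selenium, Cypress, etc.; only this
    adapter layer changes, never the transaction logic itself.
    """
    return [f"[{tool}] {txn.name} :: {s.name} ({s.system})" for s in txn.steps]

# A horizontal transaction crossing several systems, not one app vertical.
checkout = Transaction("checkout", [
    Step("add item to cart", "storefront"),
    Step("pay", "payment gateway"),
    Step("confirm email", "notification service"),
])

selenium_plan = export_for_tool(checkout, "selenium")
cypress_plan = export_for_tool(checkout, "cypress")
```

The point of the sketch is the inversion Wayne mentions: swapping the execution tool only swaps the adapter, while the collaborative model of the transaction stays intact.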

Joe [00:08:36] Awesome. So Wayne, I love this approach, because a lot of times people ask about tools, which automation tool to use. And it seems like we're at a point now where there are so many tools: use a tool that helps you. But people are missing out on all the data these tools are creating. It sounds like this information you're talking about is where the critical information is. The tool is just driving things, but the data is where all the gold is. People are missing out on all this other information these tools are creating; they're just adding another tool rather than saying, "Okay, wait a minute, what are we doing with all this data? What are some insights we can look at?" So what kind of insights can people focus on if they focus more on the data being produced?

Wayne [00:09:14] So, great question. And I think when you look at an open testing platform, and back to this idea of "Inform, Act, Automate", what we've got to realize is that our job as testers is actually much, much more complex than anyone else's job within the cycle. (A) we have to interpret the logic, and (B) we have to be creative around how we're going to exercise how that logic is implemented, right? And then we need an environment, and then we need data, and then we need to run it. Then we've got to go prove that whatever we fished out is actually an error or defect. We have this whole downstream cycle that creates most of our work: proving back to the organization that what was uncovered is actually a priority to attack, right? So if you look at the role of testing, it is truly a more insane process than you would typically think when people just say the word "test". So when you ask what kind of information, or what does it do? It promotes a pattern. The first pattern is "Inform". When I've worked with organizations, the first thing I began to notice in terms of what was delaying the process of testing was essentially late changes. That can be late changes to the environment, which was one of the main issues. It could be delays in requests for test data. It could be changes in the requirements, features being pulled at the last minute, or feature flag changes. I can go through a list of 70 of these things, by the way, which are the kinds of late-stage impacts that stall our process, right? And when you ask about the conditions that caused the delay, the one thing that became extraordinarily clear to me was that there was some point in the system where that change was evident. And that change was evident a long time before the tester actually knew about it. And this is where the idea of curating data for testing really starts to make sense.
So if you go into an organization, just as an example, you will see that there's an array of pre-production tools, an array of dev test infrastructure tools, and an array of production tools. The level of access to those tools is variable across the organization. I see organizations that have very mature application performance monitoring infrastructures, but I also see that testers don't have access to them, or only have access to a dashboard. The idea of a dashboard is good, but a dashboard still requires me to go to it, click on it, interpret it, and then infer what I need to do to take action. So "Inform" inverts that process. It basically focuses on what the tester is doing today. And let me give you an example. Let's say the tester is assigned to a test, and that test is also tied to a requirement. We could monitor the infrastructure for pull requests, right? And automatically notify the tester that a change has been made underneath one of the requirements they're working on. This information alone is absolutely amazing, because you can understand whether there's been a code change and you need to update something. That's the most basic one. But let me continue. Let's say that particular requirement or application has a shift in persona or a shift in usage. This data is in your application performance monitoring solution. It's sitting there, but the tester is usually not aware of it. If you're addressing personas, you're usually building them into your test design way before their time. We can automatically give you that information because you are associated with a particular set of requirements, and we can correlate that and produce the data for you in your feed. But we should only do it for the requirements that are in play, that are in sprint, that are hot, that are live.
Don't get dumped on with a massive pile of data about things you can't act upon; keep it isolated to what is going to impact your work in the next X days, and wherever you are in that sprint is essentially the framing of it. But curating this data for change is, I think, the first step. Give me the information that's pertinent to what I'm working on now, which is really this "Inform" pattern.
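The "Inform" pattern Wayne describes, watching pull requests and notifying only the testers whose in-sprint requirements are touched, could be sketched roughly as below. The mappings and names are hypothetical; in a real setup they would be mined from your version control and test management systems rather than hard-coded:

```python
# Hypothetical "Inform" hook: map a pull request's changed files to
# requirements, and notify only testers whose in-sprint work is affected.

# Assumed mappings; in practice these come from ALM/VCS integrations.
REQUIREMENT_FILES = {
    "REQ-101": {"src/checkout.py", "src/cart.py"},
    "REQ-202": {"src/search.py"},
}
TESTER_ASSIGNMENTS = {"REQ-101": "joe", "REQ-202": "dana"}
IN_SPRINT = {"REQ-101"}  # curate: only notify about live, in-sprint work

def notifications_for_pr(changed_files: set) -> list:
    """Return curated notifications for one pull request's changed files."""
    out = []
    for req, files in REQUIREMENT_FILES.items():
        touched = files & changed_files
        # Skip requirements outside the sprint so testers aren't flooded.
        if req in IN_SPRINT and touched:
            tester = TESTER_ASSIGNMENTS[req]
            out.append(f"@{tester}: {req} touched by PR ({sorted(touched)})")
    return out

# A PR lands that touches checkout code and the README.
msgs = notifications_for_pr({"src/checkout.py", "README.md"})
```

Here only REQ-101 produces a message; REQ-202 is filtered out because it is not in the current sprint, which is exactly the curation step that keeps the feed actionable.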

Joe [00:13:43] It's almost like real-time bubbled-up insights. So like you said, I used to create these complex dashboards and people like, you know, it's really (unintelligible) to point them to. And looking at it this way, it's like, in sprint, I'm working on a feature. I guess my team wasn't very agile, because we had like eight different sprint teams, and one was building a component that another team would consume. And if they didn't finish it, we (unintelligible) up to the sprint. So it sounds like this is kind of like consuming that data in real-time so that you're not waiting around to find this data later, after the fact.

Wayne [00:14:13] Yeah, absolutely. Now, let me give you another pattern. I love this particular pattern. Let's say I'm newly assigned to a requirement or test in your agile test management system. I would automatically get a notification in Slack or HipChat or whatever notification or email system you're using: the developer or developers working on the code that impacted the results of the last test, what had changed, the unit tests associated with it, and what their results were. And I would feed that right into your channel. The benefit of this is that you've basically eliminated the archeology associated with uncovering where the bodies are buried for this requirement, right? You get that information, right? Now, this is a very, very simple pattern, but it's just amazingly valuable to highlight that information. And the reason why I love the idea is that it's so basic, so visceral, that it actually makes a lot of sense. We just haven't connected things up. So information can be really highly curated for testers. We've always thought in terms of the dashboard. Now we're switching that to make it more proactive. An open testing platform should have hooks that allow you to correlate these artifacts to give you those information patterns. Now, as I said before, I've uncovered 70 different inform patterns that I think testers would want to know about. These could be config changes in test environments, changes in data patterns, impacts to test data config, right? Everything associated with you being able to execute the test.
But we always have this aha moment when we're talking to clients, and we get this, "Can you?" And by the way, you know what they're thinking because their voice goes higher: "Can you?" And I swear to you, everybody has their own environmental conditions which have forced delays in the process. So openness in the way we're articulating these patterns is going to be critical.

Joe [00:16:24] Absolutely. So if you have a lot of automated tests and they fail because this insight is not being shared among the teams, can you automate the automation, in the sense that the automation can be alerted in real-time to changes in the data and the environment and things like that?

Wayne [00:16:40] Absolutely. So this is the second piece of the puzzle. The pattern starts with "Inform". And what the inform piece opens up is this idea of collapsing the information you're getting into "Act". Like I said before, when you're in a dashboard view, you basically need to interpret the dashboard, understand the dashboard, and then decide what you're going to go attack. So what we've done with the open testing platform is collapse this. Information is presented to you with the model, the digital twin, giving you the ability to take action against the digital twin in a panel. So, for example, the persona conversation. We've noticed with application X that there's a change in the usage pattern of the persona using it, or a change in the endpoint or device that is most prominent, or a change in the browser, or in one of the vectors associated with the persona. We can feed that into the digital twin to give you a choice: you could get the notification, you could create a subset of tests that address that persona, you could update your test data to better reflect the persona, or you could do a combination, right? So what we're trying to do is collapse this idea of feeding you information with the opportunity to act upon it. Now, once you go through that act motion a number of different times, Joe, I think what you're going to realize is, you know what, some of those things are pretty predictable. We can actually connect live data into automation. So you never really want to automate first, I would say. You've got to learn the pattern and understand its impact. And once that pattern becomes repeatable and noticeable, taking the step to automate it is probably the next most logical step.
I think one of the things that I've noticed in our industry is that we tend to want to automate, and we take massive strides to automate something very complex, but it produces very brittle automation that we're always fixing. So that becomes the next failed project. Going through this pattern of looking at the data, creating the connection between the data, and acting upon it within an open testing platform gives you, I think, the first two critical steps toward the final step, where we know we can automate because we've seen the pattern happen over and over and over again.
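The "Act" step, where a monitored usage shift drives selection of a persona-specific test subset instead of a full rerun, might look something like this hedged sketch (the catalog and event shape are invented for illustration; a real feed would come from an APM tool):

```python
# Hypothetical "Act" step: an APM feed reports a persona usage shift;
# select only the tests tagged for that persona rather than everything.

TEST_CATALOG = [
    {"id": "T1", "personas": {"mobile-shopper"}, "browser": "chrome"},
    {"id": "T2", "personas": {"desktop-admin"}, "browser": "firefox"},
    {"id": "T3", "personas": {"mobile-shopper", "desktop-admin"}, "browser": "safari"},
]

def select_tests(usage_event: dict) -> list:
    """Pick the test subset matching the persona reported by monitoring."""
    persona = usage_event["persona"]
    return [t["id"] for t in TEST_CATALOG if persona in t["personas"]]

# A usage shift arrives from monitoring: mobile shoppers now dominate.
subset = select_tests({"persona": "mobile-shopper", "share": 0.62})
```

Once this selection proves itself repeatedly against real events, wiring the feed directly to the runner is the "Automate" step Wayne describes: automate only after the pattern is observed and stable.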

Joe [00:19:02] Absolutely. I guess the next tricky thing, when you talk about automation, is that people automatically think functional automation. But I would think you'd have to act on things that can be automated but aren't necessarily tests. So, almost like robotic process automation or something. How do you define automation at this phase, then? Is the automation just for testing, or what is it?

Wayne [00:19:21] That's such a great point, because as soon as you say "automation" in our industry, and by the way, coming from your background, you think of performance testing. I'm thinking functional testing. And by the way, the developer's thinking unit testing.

Joe [00:19:35] Right.

Wayne [00:19:36] And we're all talking about the same word, which is test and test automation. You're absolutely right. Automation has multiple levels. There's automation around what, which I would say is understanding and optimizing what needs to happen. There's automation around how, and there's automation around when. And all of those, by the way, Joe, are separate, orchestrated triggers in terms of what we do. The how, which is mostly the test scripts ("How am I going to drive that interface?"), is what we primarily focus on. However, these other two components we've kind of taken on as adjunct types of projects. But this is where the brittleness comes into the process. Now, DevOps and enterprise DevOps have forced us to really look at these peripheral subtasks and try to understand whether we can draw them into the process to be much more efficient. Ideas around containerization, around leveraging virtualization, around how to aggregate the artifacts in order to have more sustainable automation in aggregate, have certainly forced us to look at this. But when you start looking at the pieces, the idea still comes down to this point: the more we try to aggregate and automate, the more brittle it has become. And this is why the open testing platform has this concept of bringing the pieces together in the form of information, so you can decide what pieces you need to actually automate in the long run. I guess in summary, and I've never really said it this way before, Joe, but I think we took a massive leap forward because we had the tools to do so. But in doing so, we really didn't understand the brittleness of the components we were automating from a process perspective. And this is what throws a lot of pause into the process.

Joe [00:21:25] So this may seem like circular logic, but you talked about a process, so is an open testing platform a process, or is it a tool? Is it a framework? How could someone say, "Well, this sounds awesome. How do I actually implement an open testing platform in my organization?"

Wayne [00:21:40] An open testing platform, by my definition, is a concept, and it has three pieces. It has a mechanism to curate data. It has a mechanism to act against that data. And it has a mechanism to automate the edges when you need to do so. This might be a combination of solutions that you already have, but this pattern is the pattern that's going to allow us as dev testers to keep pace with the ever-increasing rate of change. Even though there's a lot of great progress in AI, I don't believe that without this open testing platform pattern we'll be able to really achieve the business objective and enable AI the way it could be. So it's primarily a concept. Now, if there were an organization out there that said, "Hey, I am an open testing platform vendor or open testing platform infrastructure," I would say it would have connections into your broader environment which you can manipulate. It would curate those data patterns, so the data is impactful to the tester and quality-focused, and then it would have an automated way for you to take action. It's not about a dashboard; it's about taking action. And then it would have kind of ubiquitous tentacles into the open-source infrastructure. So the good thing about it is, if you wanted to use a project like Selenium, you would be able to do so. If you wanted to say, "Hey, by the way, we're making architectural changes, and now something like Cypress is a better fit," you could potentially make that shift without having to look at a load of scripts and ask, "How do I make these valuable in my (unintelligible)?"

Joe [00:23:34] Is there one piece of actionable advice you can give to someone to help them with their open testing platform efforts? And what's the best way to find or contact you?

Wayne [00:23:41] Well, despite the fact, Joe, that we just gave away our age to the audience, this is what I would recommend: keep a log of what is causing delays, right? And when something causes a delay, as simple as it might be, think back to where that data might reside, so you would be able to be more informed about that delay. And by the way, you're not allowed to go only as far back as "the developers said so." You need to get to: there's a pull request on a particular piece of code, potentially, or there's a feature flag, or there's something in a system that would alert you to the change. And what you're going to find for each one of your delays is that there's some system you could actually mine that holds this data, and an open testing platform is going to assist you to make sure that that information is available to you in the future. So that's my only bit of advice.

Joe [00:24:42] Awesome. And Wayne the best way to contact you and find out more about OTP?

Wayne [00:24:46] So we've written a lot about this at Curiosity Software. If you go to the site, you'll see a lot of information on an open testing platform, how to potentially enable it, and a lot of data on the "Inform, Act, Automate" pattern. One of the more interesting pieces is the list of inform patterns. I've shared it with a lot of testers in our industry, and this is where the fun comes in: you're going to go through that list and go, "Oh yeah. Oh yeah. Oh yeah. That happens to me." But once you start understanding how those patterns are curated, what's going to happen is you're going to have an explosion of ideas: "What if I combine X with Y? I would then know when my environment (unintelligible), or if the database isn't reset, or," you know, the litany of things that actually (unintelligible). So it's kind of a fun journey. It might involve picking through the skeletons, so I apologize. But what you're going to realize is that a lot of this information that's out there can actually be expressed and used to assist you, rather than becoming barriers to progress.

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
