
Future of AI in Testing (Will You Be Replaced?)

By Test Guild

Hey, I just wanted to share a webinar I was a guest on a few weeks back with Kobiton, along with Frank Moyer (CTO at Kobiton) and Mush Honda (VP of Testing at KMS Technology), on the future of AI and testing (will you have a job tomorrow?) and how it's going to affect your career. Let me know what you think. Watch it now:

Here are Some Key Takeaways From the Future of AI and Testing Webinar

Frank Moyer's View:

I'd like to start with the opportunity that sits in front of us as testing professionals with machine learning: both the impact this technology will have on our daily lives and the effect it will have on our careers.

Testing AI Luddites

Some liken the recent advances in machine learning to the invention of the steam engine or electricity and the industrial revolution of the 1800s. At that time, machines were replacing manual labor jobs, doing them more efficiently and at a lower cost. Over the long term, the industrial revolution led to tremendous improvements in quality of life. Over a 50-year period, the real wage (salary adjusted for inflation) doubled, and a lot of that was due to the efficiencies gained through the industrial revolution.

However, in the short term, there were some significant impacts. Unemployment skyrocketed. Exact numbers are tough to come by, but estimates range between 25 and 50 percent, especially among those whose jobs were being replaced by machines. Over time, those unemployment numbers returned to lower levels as these manual laborers re-skilled and found new jobs.

Machine Learning NOT like the Industrial Revolution

I believe machine learning will be significantly different from the industrial revolution. I think it's going to have a much more significant positive impact on our spending power. More importantly, unlike the industrial revolution, where the machine took over the human's job, machine learning gives us testing professionals the ability to work in tandem with a tool to become better testing professionals and improve the quality of our products.


Man and Machine

So I'm going to talk a little bit about this concept of working in tandem with the machine.

Where machine learning and the professional work together to improve the productivity of a professional task, we refer to this as intelligence augmentation. Unlike some machine learning you may have heard about, such as autonomous driving, which some say will replace people who drive as a profession, industries like testing will advance through intelligence augmentation. A good example from outside the testing space was a presentation by Nicholai Rostrum on one of their portfolio companies, Sibley. They have a mobile app that helps audiences find affordable and effective mental health coaching through conversation. The AI platform takes all of this conversation data and offers suggestions to Sibley's mental health coaches to help them deliver a consistent message over time. These coaches aren't licensed therapists, but they're trained continually by the machine.

If we look at testing today, let's face it: testing budgets are tight. Development is pushing new releases at a blistering pace, and it's impossible for us as testers to keep up.

By pairing testing professionals with machine learning capabilities, we will be able to do more with less and drive this future of testing: intelligence augmentation.

We're already beginning to see this in action. Applitools Eyes, a product in the market today, takes human annotations (tester annotations on the user interface) as input to train a machine learning algorithm and make it better. It reduces the amount of time a testing professional has to spend verifying how a mobile application renders on different screen resolutions.

It's a challenging and time-consuming task for a human, but a very effective process for a machine to carry out.
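
To make this concrete, here is a minimal sketch of what a visual checkpoint might look like with the Applitools Eyes Python SDK for Selenium. Treat the details as illustrative rather than a definitive integration, and check the SDK docs for the current API:

```python
# A minimal visual-testing sketch using the Applitools Eyes Python SDK
# (pip install eyes-selenium). Illustrative only; consult the SDK docs
# for the current API and configuration options.
from selenium import webdriver
from applitools.selenium import Eyes

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_API_KEY"  # placeholder credential

try:
    # Start a visual test: app name, test name, and viewport size.
    eyes.open(driver, "Demo App", "Login page renders correctly",
              {"width": 1024, "height": 768})
    driver.get("https://example.com/login")  # placeholder URL
    # Capture a checkpoint; the service compares it against the baseline
    # and flags visual differences for a human to confirm or reject.
    eyes.check_window("Login screen")
    eyes.close()  # raises if unreviewed differences were found
finally:
    eyes.abort()   # cleans up if the test did not end normally
    driver.quit()
```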

A Testing AI Matrix

I'd also like to clarify what I mean by intelligence augmentation and expand on it a little more, because in my opinion it goes beyond the scope of a single organization.

Eventually, we will all participate in a testing matrix where intelligence gathered from one organization will help improve the quality of tests for other organizations. You know we're seeing early signs of this already happening.

Jason Arbon, the CEO and founder of Test.AI, presented at an event in Las Vegas in June. They crawled 30,000 apps from the App Store using a type of machine learning algorithm called reinforcement learning to train the machine on what the login to an application generally looks like. There are a few different types of logins you see in an app: one-time passcode, two-factor authentication, and username and password. It learned and generalized those types of logins so that you don't need to write any scripts for logging in.

You can merely have the machine draw from its wealth of knowledge about how to test that flow. And I think it also goes beyond the scope of just test data. Data pulled from other parts of the organization can reveal cross-correlations. There's valuable information in Google Analytics that we can now tap into for our testing, to move from test coverage to user coverage, which is really what matters most.
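
Training a reinforcement learner across 30,000 apps is well beyond a blog snippet, but the core idea of generalizing "what a login screen looks like" can be illustrated with a toy rule-based stand-in (entirely hypothetical, not Test.AI's actual model):

```python
# Toy illustration of generalizing login-screen recognition.
# Real systems train ML models on thousands of apps; this hypothetical
# rule-based stand-in only shows the concept.
LOGIN_HINTS = {"username", "email", "password", "passcode", "sign in",
               "log in", "otp", "verification code"}

def looks_like_login(screen_labels):
    """Score a screen by how many of its element labels match login hints."""
    labels = {label.strip().lower() for label in screen_labels}
    hits = sum(1 for hint in LOGIN_HINTS
               if any(hint in label for label in labels))
    return hits >= 2  # crude threshold: two or more hints => login screen

# Example: element labels scraped from two app screens.
print(looks_like_login(["Email", "Password", "Sign In"]))  # True
print(looks_like_login(["Home", "Search", "Cart"]))        # False
```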

So finally, where does this lead us?


Future Activities of Testers

From a workflow perspective, the professional tester will focus on the parts of testing that are most difficult for a machine to do: running exploratory tests, validating and correcting the decisions made by the computer, overseeing and directing the work done by the machine, and analyzing the anomalies it identifies.

The machine should be given the tasks it does well: comparing screenshot images, identifying and advising the tester on flows that should be explored, analyzing the exhaust from test executions (the amount of data coming out of our tests will grow exponentially over the coming years), and identifying anomalies that need to be analyzed.

At a macro level, this will drive a higher quality product overall and less frustration for testers doing their jobs.

Machine learning will drive change to what manual testers do today. Back to my three comments here: rote tasks will be executed by machines, and testing professionals will advance their careers into more professional, human-centered tasks.

On the Kobiton side, we are launching a product called Application Health Check that is not meant to replace existing testing processes, but rather to augment them. One of our professional testers executes a test against a single device, and we then use that single-device execution to run the test across a multitude of devices, reporting back different facets of the application, including user interface issues and performance issues. If the system is crashing, we'll provide crash logs, along with system metrics like battery, network, CPU, and memory usage.
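
Kobiton hasn't published its internals, so here is only a conceptual sketch of the fan-out idea: record one test run, replay it across many devices, and collect per-device findings. Everything below is a stub, not a Kobiton API:

```python
# Conceptual sketch of fanning one recorded test out across many devices.
# replay_session() is a stub placeholder, NOT a Kobiton API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RunResult:
    passed: bool
    ui_issues: list = field(default_factory=list)
    crash_log: Optional[str] = None
    metrics: dict = field(default_factory=dict)  # battery, network, CPU, memory

def replay_session(session, device):
    """Hypothetical stand-in for replaying a recorded session on a device."""
    return RunResult(passed=True, metrics={"cpu_pct": 12, "mem_mb": 210})

def health_check(session, devices):
    """Run one recorded test everywhere and aggregate per-device findings."""
    return {device: replay_session(session, device) for device in devices}

report = health_check("login-flow-recording", ["Pixel 7", "iPhone 14", "Galaxy S23"])
for device, result in report.items():
    print(device, "passed" if result.passed else result.crash_log, result.metrics)
```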

So that is all for me. I'll entertain questions and comments from the panel.

Joe Colantonio's View:

So I think we're all pretty much in agreement that AI can't replace testers. I guess it all depends on how you define what a tester is. When I think of a tester, I think of a domain expert, and I don't see us being replaced anytime soon. But a lot of tester activities, as Frank pointed out, are going to change, and we need to embrace that.

I remember as a kid, if someone had told me a computer could beat someone in chess, I would have said they were crazy. But sure enough, a computer, IBM's Deep Blue, beat Garry Kasparov at chess.

We then keep moving the line in the sand.

Well, a computer will never learn how to drive. We don't have to worry about that.

Once again companies like Tesla are now using AI to do the driving for us.

If You Can't Beat AI, Join Them

We need to be careful that we don't hide our heads in the sand and think that what we do is so unique that it's not replaceable. I believe there are certain things we do that can't be replaced, like creativity and being domain experts.

I can also think of a recent example of this in Detroit, which made the most cars in the world back in the day. If in the '70s and '80s we had asked those workers, "Do you think a robot will replace you?" they would have said, "No, that's impossible!" Of course, now probably a majority of cars are put together, assembled, by robots. If those people had known this ahead of time and embraced and learned robotics, rather than fighting it or saying it would never happen, they'd be in a better place.

AI Testing Hype

I think a lot of this is hype right now. I'm not saying AI is going to replace anything immediately, but keep your ears open, continuously evolve with the technology, and try to learn what's real, what's not, and how you can benefit from it.

A lot of people ask: is AI real, or is it all hype? Is it the real deal?

Follow the money! A lot of companies are getting funded if any of their products have the term AI in them. You can see that some testing companies are already getting funded by Google. So follow the money: some investors believe in the technology, but you also need to remember that vendors may be hyping up some of these tools right now. If you try some of the tools out there today, they're great, but they don't necessarily do what you think they're going to do. So based on that first iteration alone, I don't see testers getting replaced anytime soon at all.

A lot of it is hype, but a lot of it's not.

Levels of AI/ML

There's a lot of cool stuff out there. I think what people get hung up on is that there are two definitions of AI, and a lot of people go to the first one: they think of HAL from the book/movie 2001, an omniscient artificially intelligent machine that can pass a Turing test.

That's not what we're talking about here.

We're talking more about machine learning. Machine learning is something you need to train: it's about getting software to act without being explicitly programmed. If a program can improve based on its experience, then it can adjust itself and learn.
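
Here's a minimal, concrete example of "improving based on experience": a scikit-learn model whose behavior comes from training data rather than hand-written rules (toy data, illustrative only):

```python
# "Learning from experience" in miniature: no explicit pass/fail rules,
# just a model fit to labeled examples (pip install scikit-learn).
from sklearn.linear_model import LogisticRegression

# Toy training data: [response_time_ms, error_count] -> 1 = buggy build
X = [[120, 0], [150, 1], [900, 5], [1100, 7], [130, 0], [950, 6]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# The model now classifies unseen builds based on what it has "experienced".
print(model.predict([[140, 0], [1000, 6]]))  # expect [0 1]
```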

That's the type of technology we're talking about.

It's more machine-learning-based technology. Once you get that definition in place, you won't get sucked into the hype; in many cases, machine learning is basically statistics.

Machine Learning and Test Data

As Frank mentioned, we're delivering software faster and faster, and we have all this data. So what's great about machine learning? It's made for exactly this kind of data analysis.

So as we start pushing things to production, I see a lot of technologies being able to analyze how our build is behaving in the wild before our users find issues, and then being ready to modify itself or roll back automatically. That is one of the benefits I see from machine learning.

Frank also mentioned visual testing. I think visual examination is an excellent example of how man and machine can work together. Visual testing takes images of your application, and when it finds a difference it bubbles that up as an insight, but it takes a user to look at that data and decide whether it's a real issue. The machine triggers the insight, saying, "Hey, this doesn't seem right; this has changed over time." But it takes a human to look and say, "Is this a bug, or is it part of a new release, in which case of course it's acting this way?"
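
The machine's half of that partnership can be as simple as a pixel diff that surfaces changed regions for a human to judge; here is a minimal sketch using Pillow (assuming two same-size screenshots already saved to disk):

```python
# Minimal visual-diff sketch with Pillow (pip install Pillow).
# The machine flags the changed region; a human decides bug vs. intended change.
from PIL import Image, ImageChops

baseline = Image.open("baseline.png").convert("RGB")  # assumed screenshot files
current = Image.open("current.png").convert("RGB")

diff = ImageChops.difference(baseline, current)
bbox = diff.getbbox()  # bounding box of all non-identical pixels, or None

if bbox is None:
    print("No visual change detected.")
else:
    print(f"Changed region at {bbox} -- flag for human review.")
    current.crop(bbox).save("changed_region.png")  # evidence for the tester
```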

AI Crawling

Also, there are things like AI crawling, where the technology can crawl your website and give you insights into test areas you may not have coverage for already. Once again, that's being able to run a test in the background over and over, have the AI learn your application for you, and analyze gaps you may not have noticed as a tester. A tester then decides whether the missed path should be added to the test suite. This is another machine learning benefit where I see these tools adding value to our testing efforts right now.
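
A stripped-down version of that crawling idea: walk the site, collect reachable pages, and diff them against the paths your suite already covers. This toy sketch uses requests and BeautifulSoup and stays on one domain; real AI crawlers also learn element semantics:

```python
# Toy coverage-gap crawler (pip install requests beautifulsoup4).
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=50):
    """Collect same-domain pages reachable from start_url."""
    seen, queue = set(), [start_url]
    domain = urlparse(start_url).netloc
    while queue and len(seen) < max_pages:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            soup = BeautifulSoup(requests.get(url, timeout=5).text, "html.parser")
        except requests.RequestException:
            continue
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc == domain:
                queue.append(link)
    return seen

# Hypothetical URLs: pages your suite visits vs. pages the crawler found.
covered_by_tests = {"https://example.com/", "https://example.com/login"}
gaps = crawl("https://example.com/") - covered_by_tests
print("Pages with no test coverage:", sorted(gaps))
```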

Another area where these technologies can help is in running our tests. I don't know about you, but when I get a new build, a lot of times folks will say, "Oh, just run everything." But if something fails, how do I know it correlates with what's been changed? It's not very strategic.

If you're doing continuous integration and continuous testing, you're probably generating a bunch of data from your test runs, but who has time to go through all of it? I think machine learning is perfect for that scenario. It can analyze the build, see what's been checked in and what modules have changed, and tell you, based on that info, which tests you need to run to cover the changes. What I love about this approach is that it's both strategic and atomic: if something fails, you know it's a real failure, not just noise.
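
You can approximate that "run only what the change touches" idea without any ML at all: ask git what changed, then map changed files to the tests that exercise them. The mapping below is a hand-made assumption; real tools learn it from coverage data:

```python
# Change-based test selection sketch: pick only tests mapped to changed
# modules. MODULE_TO_TESTS is a hypothetical example mapping.
import subprocess

MODULE_TO_TESTS = {
    "src/checkout.py": ["tests/test_checkout.py"],
    "src/login.py": ["tests/test_login.py", "tests/test_session.py"],
}

def tests_for_latest_commit():
    changed = subprocess.run(
        ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    selected = set()
    for path in changed:
        selected.update(MODULE_TO_TESTS.get(path, []))
    return sorted(selected) or ["tests/"]  # unknown change: fall back to everything

print("Tests to run:", tests_for_latest_commit())
```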

Machine Learning Anomaly Detection

These types of machine learning tools are great at giving you insights. They use data over time to analyze and alert on any deviations from past run histories. For example: "Hey, this image looks different; is this a real issue?" You as a human have to go in and say, "You know, I didn't notice that; that is a real bug." These tools also keep track of performance over time. They can tell you that for the past month this function took, you know, three seconds to run, but now, with this latest build, it's taking 20 seconds.
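
At its simplest, that kind of alert is a statistical outlier check over past run history; here's a plain-Python sketch where a z-score threshold stands in for the learned models real tools use:

```python
# Minimal performance anomaly detection over past run history.
from statistics import mean, stdev

history_secs = [3.1, 2.9, 3.0, 3.2, 3.1, 2.8, 3.0]  # past month of runs
latest_secs = 20.0                                    # this build

mu, sigma = mean(history_secs), stdev(history_secs)
z = (latest_secs - mu) / sigma

if z > 3:  # more than 3 standard deviations slower than usual
    print(f"Anomaly: {latest_secs}s vs. typical {mu:.1f}s -- investigate.")
```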

So those are some fresh insights these tools can start bubbling up to you right now: "Hey, here's an issue; let's check this out and see if there's something there." You as a tester then need to go in and look at it. So I think it's critical to remember that it's man and machine working together; it's not about replacing testers. Our jobs as domain experts stay much the same, but we'll expand our knowledge with the newer skills we need, like learning to train testing algorithms. Before an AI algorithm goes live, you will need to prepare test data to make sure it's doing what it should, and verify that.

Stuff like that is going to be, I think, a big growth area.

Testers in Data Science

Also, anything in data science is going to be helpful. I think those types of skills are going to change the automation landscape as we know it. As I said, if you use these tools right now, they're great, but as you can tell, I'm not afraid of any of them replacing me. It's just that a lot of what we do is going to change, and we really need to adapt: incorporate these tools into our current environments, learn what they can actually do, and leverage them to help us. They're not necessarily going to replace us, so I wouldn't be scared of that at all.

Machine Learning Assisted Test Maintenance

Also, as automation engineers, how many times have you run a script that broke because someone changed an ID? We spend a lot of time on maintenance doing these updates. With machine learning, these types of tools can identify fields and elements and self-correct at runtime, adjusting to changes without human interaction. That's going to help a lot with maintenance and reliability.
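
You can approximate the self-healing idea by hand: try the primary locator, then fall back to alternates known for that element. In this Selenium sketch the fallback list is hard-coded; real ML-based tools rank candidate locators automatically:

```python
# Self-healing locator sketch for Selenium (pip install selenium).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, locators):
    """Try each (by, value) locator in order; log when a fallback heals."""
    for i, (by, value) in enumerate(locators):
        try:
            element = driver.find_element(by, value)
            if i > 0:
                print(f"Healed: primary locator failed, used {by}={value}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"All locators failed: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL
submit = find_with_healing(driver, [
    (By.ID, "submit-btn"),                   # primary: breaks if the ID changes
    (By.NAME, "submit"),                     # fallback 1
    (By.XPATH, "//button[@type='submit']"),  # fallback 2
])
```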

There are a lot of areas like this where machine learning can learn from our data and give us insights as testers that help with our other test activities. I ran an online conference called the AI Summit Guild in May, where I had some sessions specifically on AI-based automation and how the experts thought it was going to affect the testing space, and all of them agreed on the same thing: the machine will be a helper to testers, not a replacement.

Mush Honda's AI View

So, as we've all said, I think we agree that this is something that will be an enabler. The obvious question in the mind of a tester would be: in all of these different situations we've talked about, where will this give me the best bang, the best ROI, for the investment I potentially need to make in training the algorithms and the tools that use them? How does it make my job and my responsibilities as a tester better? And can I be preemptive? I feel the machine learning options coming up in these tools would be best suited for testers if there were a way for them to give us preemptive recommendations.

AI Automation Execution Help

Imagine looking at the execution of all the testing that has happened, whether through manual interaction or automation tools, and getting an assessment that says: given all of the changes in this release, you have executed these tests; however, this area of the application seems to have been touched less, and based on prior data there have typically been critical bugs there. Tools set up to analyze that sort of data and advise us preemptively, before a lot of effort and energy goes into executing the testing, would be the maximum add-on.

Automation AI to Reduce Test Flakiness

The other area I see is being able to leverage tools that reduce execution brittleness, where you're almost in a reactive mode when something fails. What is your confidence that the failure is genuinely driven by a defect introduced into the system, versus being related to a broken test script?

How soon can tools that encounter such brittleness recover and continue to run? In the current state of affairs, teams that run a lot of automated regression tests find that, even though we want them to run quickly and on demand, they do take some time. I've known teams that leave them running overnight just because of the complexity or breadth of the application. Imagine not having to watch for error messages hitting our inbox in the middle of the night saying something is wrong; better still, imagine the tool making an assessment and saying, "Yes, we were able to recover, rerun, and overcome that hurdle." That confidence in the entire execution, and in the tasks and activities of the testing team, is where I see machine learning adding significant value.
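
Automatic recovery of this kind often starts as simple retry-with-reporting: rerun a failed test a bounded number of times and record whether it recovered, so the morning report separates likely flakes from likely defects. A plain-Python sketch:

```python
# Retry-and-report sketch: distinguish "recovered after rerun" (likely
# flaky) from "failed every attempt" (likely a genuine defect).
import time

def run_with_recovery(test_fn, attempts=3, backoff_secs=2):
    failures = []
    for attempt in range(1, attempts + 1):
        try:
            test_fn()
            if failures:
                print(f"Recovered on attempt {attempt}; earlier errors: {failures}")
            return True
        except AssertionError as err:
            failures.append(str(err))
            time.sleep(backoff_secs)
    print(f"Failed all {attempts} attempts -- likely a real defect: {failures}")
    return False

# Example: a test that always passes recovers immediately.
run_with_recovery(lambda: None)
```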


Train your AI

The other aspect of what we want to talk about today is how we train the machines and adapt the algorithms. How do you do supervised learning? How do you reinforce that learning? All of that is something I believe testers need to look at and adopt. I know Joe also talked about becoming more familiar with the role data scientists play these days. Will there be a way for us to get more involved, understand a little of the prediction models and things of that nature, and then leverage that knowledge to train the machines and algorithms to be more effective?

WEKA

Speaking of the Automation Guild, I was at one of the sessions where a tester talked about a tool called WEKA, from the University of Waikato in New Zealand, which is more of a data analysis tool. By letting us build logic to parse through the defects reported historically on a particular application, a tool like that enables us to make preemptive recommendations or suggestions: say, what are the chances that a bug is identified when a particular feature has been updated over the last six months, six weeks, or six days? Tools like that are something we as testers should try to adopt and learn, and apply so that our analysis and assessment of the testing situation becomes a more informed decision.
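
The same kind of preemptive analysis can be sketched in a few lines of pandas over a historical bug export. The file name and column names below are assumptions about your tracker's export format, not a standard schema:

```python
# Defect-history analysis sketch (pip install pandas).
import pandas as pd

# Hypothetical CSV export with columns: feature, severity, reported_on.
bugs = pd.read_csv("defects.csv", parse_dates=["reported_on"])

# Bugs per feature over the last six months: where should testing focus
# when one of these features changes again?
recent = bugs[bugs["reported_on"] >= bugs["reported_on"].max() - pd.Timedelta(days=180)]
risk = (recent.groupby("feature")["severity"]
              .agg(bug_count="count")
              .sort_values("bug_count", ascending=False))
print(risk.head(10))
```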

Automation AI Assessments

Thinking about how to prepare for this, I would say look at your tasks rather than just the execution of testing. As a tester, examine all of the assessments you do. If you were to train a new member who joined your team to take on the same responsibilities and be a good tester within the organization, what are the thought processes and the risk analysis you would teach? If there's a way to take a deep, granular dive and say, "Here are the four or five things I always look for," then try to see if there are tools out there you can model these on. These kinds of initiatives should be part of any testing team's plans, especially to support today's number one demand: speed, for time to market of applications.

Machine Learning Tools Are Testing Enablers

Finally, the other thing I think about as a tester is that I look at all of the machine learning tools and the upcoming AI toolsets as enablers, similar to what today's automation tools are. I remember very distinctly, several years ago when the concept of automation tools came in, we were having a very similar conversation where everybody thought the days of testers were over: we now have tools that can go through and do the automation. But just as the role of the tester evolved into also being a software developer in test or a software engineer in test, this will follow a similar pattern. We'll need to understand how to train and use mathematical models so we can effectively scrutinize the algorithms giving us results or recommendations, and make sure they reflect the same thought process we use as testers to make risk assessments.

Would it eliminate defects? I think that's another conversation we should be having and thinking about. I don't believe that will be the case. I think the outcome of using ML/AI tools, or any automation tool, is to catch defects and become aware of them sooner, so we can try to avoid them.

Are all the defects in our applications critical to fix? Maybe not, right? There can be another discussion about whether you even want an application with zero defects, if the industry or the particular application doesn't warrant it, or whether there's more flexibility around the severity of defects. My take is this: whether you're an engineering-based tester who is more hands-on with automation and scripting or not, don't look away from AI tools. Whether or not you're an automation person, you have to adopt these tools and see them as another way to make your tasks simpler and let you focus on other activities, such as creativity, to be applied when you're testing a system.

More on AI Automation Awesomeness

So what do you think? Let me know. For a deeper dive into AI test automation, get instant access now to all the sessions from the AI Summit Guild. Learn from the best, like Angie Jones, Tariq King, Jason Arbon, Dan Belcher (Mabl), Geoff Meyer (Dell), Oren Rubin (Testim), Adam Carmi (Applitools), Noemi Ferrera, and Jonathon Wright, on how to apply AI to your testing efforts now.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}

What is Synthetic Monitoring? (2024 Guide)

Posted on 04/18/2024

I've found over the years many testers are unsure about what is synthetic ...

SafeTest: Next Generation Testing Framework from Netflix

Posted on 03/26/2024

When you think of Netflix the last thing you probably think about is ...

Top Free Automation Tools for Testing Desktop Applications (2024)

Posted on 03/24/2024

While many testers only focus on browser automation there is still a need ...

Discover Why Service Virtualization is a Game-Changer:  Register Now!