AI Test: How It’s Changing Test Automation (+6 Examples)

By Test Guild

What’s the first thing that comes to mind when you hear the words “Artificial Intelligence?” Most folks automatically think of sentient machines, like HAL from the movie 2001: A Space Odyssey or Skynet from The Terminator.

If you’re a tester and that’s the concept stuck in your head, it’s going to trip you up.

Remember: there are two kinds of AI that most folks are familiar with.

The first is strong AI: the kind of machine the Turing Test imagines, one you can converse with and not be able to tell whether you’re speaking with a human or a computer, because it generates human-like responses.

The other is a more statistics-based, machine-learning AI. If you use the latter definition of AI, you can see how machine learning is already transforming the way we do test automation.

In this post, we’ll talk about:

  • The machine learning kind of AI in test automation
  • Six examples of automation testing scenarios that are already leveraging AI.

(This article is based on my article “How AI Is Changing Test Automation,” originally published on TechBeacon.com.)

First, what is Machine Learning?

Before we look at some automation testing examples impacted by machine learning (ML), let’s try to define what machine learning actually is.

At its core, ML is a pattern recognition technology.

So naturally, it’s excellent at finding patterns in data. It can then use the patterns your ML algorithms identify to predict future trends.

Of course, to do this you need a data set so large that it would be impossible for a person to go through it all and spot every recurring pattern.

That’s why ML is so powerful.

It can consume tons of complex information, find patterns that are predictive, and then alert you to deviations from them. Angie Jones shares a hands-on demo of this in her AI Summit Guild session, “Show and Tell: Explain Machine Learning to Me Like I’m Five.”
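
To make that concrete, here’s a minimal sketch using scikit-learn. The features, data, and labels are made up purely for illustration: it trains a model on historical test-run data, then predicts whether a new change looks risky.

```python
# A minimal sketch of ML as pattern recognition, using scikit-learn.
# All feature names and data below are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Historical test runs: [lines_changed, files_touched, avg_duration_seconds]
X = [
    [5, 1, 30],
    [250, 12, 95],
    [12, 2, 33],
    [480, 25, 140],
]
# Labels: 0 = run passed, 1 = run failed
y = [0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Estimate how risky a new change (40 lines, 3 files, ~50s runs) looks
print(model.predict_proba([[40, 3, 50]]))
```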

Real-World Examples of Machine Learning Applications

To see machine learning in action, you need to look no further than the smartphone you’re probably carrying.

Phones that have voice recognition software (like Cortana or Siri) apply AI to mimic the human voice interactions you take part in.

Another frequent use case you’ve probably experienced is the list of recommendations served up when you’re shopping on Amazon, which machine learning generates based on your previous purchases.

By now you’re probably saying, “Okay! Enough already. Show me some automation examples!”

Fair enough. Read on.

Visual Validation Tools

As we’ve already covered, ML is a pattern recognition technique. So what kind of patterns can it recognize? One that is becoming more and more popular is image-based testing using automated visual validation tools.

ML-based visual validation tools like Applitools can find differences that human testers would most likely miss.

“Visual testing is a quality assurance activity that is meant to verify that the UI appears correctly to users,” explained Adam Carmi, co-founder and CTO of Applitools, a dev-tools vendor. Many people confuse it with traditional functional testing tools, which were designed to help you test the functionality of your application through the UI.

“With visual testing, we want to make sure that the UI itself looks right to the user and that each UI element appears in the right color, shape, position, and size,” Carmi said. “We also want to ensure that it doesn’t hide or overlap any other UI elements.”

In fact, he added, many of these types of tests are so difficult to automate that they end up being manual tests. This makes them a perfect fit for AI testing.

This has already changed the way I do automation testing. I can create a simple ML-based test that automatically detects visual bugs in my software, helping validate the visual correctness of the application without me having to explicitly assert everything I want it to check.
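
Here’s a minimal sketch of what such a test can look like, based on the Applitools Eyes SDK for Selenium in Python. The API key and URL are placeholders, and exact method names can vary between SDK versions, so treat this as illustrative rather than definitive:

```python
# A minimal visual validation sketch using the Applitools Eyes SDK
# for Selenium in Python (details may differ across SDK versions).
from selenium import webdriver
from applitools.selenium import Eyes

eyes = Eyes()
eyes.api_key = "YOUR_APPLITOOLS_API_KEY"  # placeholder

driver = webdriver.Chrome()
try:
    # Start a visual test session with a fixed viewport size
    driver = eyes.open(driver, "My App", "Home page looks right",
                       {"width": 1024, "height": 768})
    driver.get("https://example.com")  # placeholder URL

    # One call captures the whole page; no explicit assertions needed.
    # The service compares the screenshot against the learned baseline.
    eyes.check_window("Home Page")

    eyes.close()  # raises if visual differences were found
finally:
    eyes.abort()  # cleans up if the test ended before close()
    driver.quit()
```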

Pretty cool!

Not Testing UI Interfaces

Another ML change that is impacting how we do automation is the absence of a user interface to automate. Much of the testing is back-end-related, not front-end-focused.

In fact, in her TestTalks interview, “The Reality of Testing in an Artificial World,” Angie Jones, an automation engineer at Twitter, mentioned that much of her recent work has relied heavily on API test automation to help her ML testing efforts.

Angie went on to explain that in her testing automation, she focused on the machine learning algorithms. “And so the programming that I had to do was a lot different as well. … I had to do a lot of analytics within my test scripts, and I had to do a lot of API calls.”
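
Here’s a hypothetical sketch of that style of test: rather than driving a UI, it calls an API repeatedly (the endpoint and response fields are invented for illustration) and asserts statistical properties of the model’s output instead of exact values:

```python
# A hypothetical API-plus-analytics test in the spirit Angie describes.
# The endpoint and "confidence" field are made up for illustration.
import statistics
import requests

scores = []
for item_id in range(1, 51):
    resp = requests.get(f"https://api.example.com/predictions/{item_id}")
    resp.raise_for_status()
    scores.append(resp.json()["confidence"])

# ML outputs aren't exact, so assert properties of the distribution
# rather than specific values
assert all(0.0 <= s <= 1.0 for s in scores), "confidence out of range"
assert statistics.mean(scores) > 0.5, "model confidence drifted too low"
```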

Becoming a Domain Model Expert

Besides not having a traditional user interface to test against, training an ML algorithm requires that you come up with a testing model.

This activity needs someone with domain knowledge; many automation engineers are getting involved with creating models to help with this development endeavor.

With this change comes a need for folks who not only know how to automate, but can also do more headless automation, along with analyzing and understanding complex data structures, statistics, and algorithms.

Running More Automated Tests That Matter

How many times have you run your entire test suite because of a tiny change to your application, simply because you couldn’t trace which tests that change affected?

Not very strategic, is it?

If you’re doing continuous integration and continuous testing, you’re probably already generating a wealth of data from your test runs. But who has time to go through it all to search for common patterns over time?

Wouldn’t it be great if you could answer the classic testing question: “If I’ve made a change to this piece of code, what’s the minimal number of tests I should run to figure out whether the change is good or bad?”

Lots of companies are leveraging existing AI tools that do just this. Using ML, they can tell you with precision the minimal subset of tests needed to cover the changed code.

They can also analyze your current test coverage and flag areas that have little coverage, or point out areas in your application that are at risk.
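
A toy version of the idea, with a hand-built coverage map standing in for what these tools learn from historical runs, might look like this:

```python
# A simplified sketch of change-based test selection. Real AI tools
# learn the test-to-code mapping from historical runs; this hand-built
# coverage map just stands in for that learned model.
coverage_map = {
    "test_login": {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_profile": {"auth.py", "profile.py"},
}

def select_tests(changed_files):
    """Return the minimal set of tests touching any changed file."""
    return {test for test, files in coverage_map.items()
            if files & set(changed_files)}

def uncovered(all_files):
    """Flag files that no test currently exercises (a coverage risk)."""
    covered = set().union(*coverage_map.values())
    return set(all_files) - covered

print(select_tests(["auth.py"]))            # {'test_login', 'test_profile'}
print(uncovered(["auth.py", "report.py"]))  # {'report.py'}
```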

Geoff Meyer, a test engineer at Dell EMC, will talk about this in his upcoming session at the AI Summit Guild. He will tell the story of how his team members found themselves caught in the test-automation trap: They were unable to complete the test-failure triage from a preceding automated test run before the next testable build was released.

What they needed was an insight into the pile of failures to determine which were new and which were duplicates. Their solution was to implement an ML algorithm that established a “fingerprint” of test case failures by correlating them with the system and debug logs, so the algorithm could predict which failures were duplicates.

Once armed with this information, the team could focus its efforts on new test failures and come back to the others as time permitted, or not at all. “This is a really good example of a smart assistant enabling precision testing,” Meyer said.
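
Here’s a toy illustration of the fingerprinting idea: normalize away the volatile details in each failure’s log text and hash what’s left, so duplicate failures collapse into one bucket. Real systems like the one Meyer describes correlate system and debug logs with ML; this just shows the gist:

```python
# A toy failure "fingerprint": strip volatile details from the log
# text, then hash it so duplicate failures land in the same bucket.
import hashlib
import re

def fingerprint(log_text):
    # Mask timestamps, hex addresses, and line numbers
    text = re.sub(r"\d{2}:\d{2}:\d{2}", "<TIME>", log_text)
    text = re.sub(r"0x[0-9a-fA-F]+", "<ADDR>", text)
    text = re.sub(r"line \d+", "line <N>", text)
    return hashlib.sha1(text.encode()).hexdigest()

seen = set()
for failure in ["12:01:33 NullPointer at 0x7f3a line 88",
                "14:22:10 NullPointer at 0x91bc line 88"]:
    fp = fingerprint(failure)
    print("duplicate" if fp in seen else "NEW: triage this", fp[:8])
    seen.add(fp)
```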

Spidering AI

In my opinion, the most popular AI automation area right now is using ML to automatically write tests for your application by spidering it.

For example, newer AI/ML tools like Mabl have a feature that allows you to simply point the tool at your web app, and it will automatically begin to crawl the application.

As the tool crawls, it also collects data: taking screenshots, downloading the HTML of every page, measuring load times, and so forth. Then it runs the same steps again and again.

So over time, it builds up a data set and trains your ML models on the expected patterns of your application.

When it runs, it compares the application’s current state to all the known patterns it has already learned. If there is a deviation (for instance, a page that usually doesn’t have JavaScript errors but now does, a visual difference, or slower-than-average performance), it will flag it as a potential issue or insight.

Some of these differences might be valid; for example, a legitimate new UI change. In that case, a human with domain knowledge of the application still needs to go in and validate whether the issues flagged by the machine learning algorithms are really bugs.
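
A bare-bones sketch of that flagging logic, using page load times as the learned pattern (the URL, baseline numbers, and threshold are all hypothetical):

```python
# A bare-bones anomaly check: flag pages whose load time deviates
# from the baseline built up over previous crawls. All numbers and
# URLs here are hypothetical.
import statistics
import time
import requests

history = {"https://example.com/": [0.21, 0.25, 0.23, 0.22]}  # prior runs

for url, past in history.items():
    start = time.monotonic()
    requests.get(url)
    elapsed = time.monotonic() - start

    mean, stdev = statistics.mean(past), statistics.stdev(past)
    if elapsed > mean + 3 * stdev:
        print(f"Potential issue: {url} took {elapsed:.2f}s "
              f"(baseline {mean:.2f}s)")
    past.append(elapsed)  # fold this run into the baseline
```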

Although this approach is still in its infancy, Oren Rubin, CEO and founder of machine learning tool vendor Testim, says he believes that “the future holds a great opportunity to use this method to also automatically author tests or parts of a test. The value I see in that is not just about the reduction of time you spend on authoring the test; I think it's going to help you a lot in understanding which parts of your application should be tested.”

ML does the heavy lifting, but ultimately a human tester does the verification.

More Reliable Automated Tests

How often do your Selenium or UFT tests fail due to developers making changes to your application, like renaming a field ID? It happens to me all the time.

Our current solutions rely mainly on a single selector or path; that is, on just one way of actually finding fields in our applications.

But AI tools can make use of machine learning to automatically adjust to these changes. This makes our tests more maintainable and reliable.

For example, current AI/ML testing tools are able to start learning about your application, understanding relationships between parts of the DOM, and tracking how things change over time.

Once it starts learning and observing how the application changes, it can then decide automatically at runtime which locators it should use to identify an element, all without you having to do anything.

And if your application keeps changing, it’s no longer a problem because by using machine learning the script can automatically adjust itself.
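
Here’s a simplified stand-in for that behavior in plain Selenium: try several locator strategies for the same element and fall back when the primary one breaks. Real tools rank candidates with a model learned from the DOM; this static, hand-written list just shows the mechanism:

```python
# A simplified stand-in for ML-based "self-healing" locators: try
# multiple strategies and fall back when the primary one breaks.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

LOGIN_BUTTON = [  # hypothetical locators, ordered best-first
    (By.ID, "login-btn"),
    (By.CSS_SELECTOR, "button[data-test='login']"),
    (By.XPATH, "//button[normalize-space()='Log in']"),
]

def find_with_fallback(driver, candidates):
    for by, value in candidates:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # this locator broke; try the next known pattern
    raise NoSuchElementException(f"No candidate matched: {candidates}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL
find_with_fallback(driver, LOGIN_BUTTON).click()
```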

This was one of the main reasons Dan Belcher, co-founder of testing tool company Mabl, and his team developed an ML testing algorithm. In my recent interview with him, he said, “Although Selenium is the most broadly used framework, the challenge with it is that it's pretty rigidly tied to the specific elements on the front end.

“Because of this, script flakiness can often arise when you make what seems like a pretty innocent change to a UI,” he explained. “Unfortunately, in most cases, these changes cause the test to fail due to it being unable to find the elements it needs to interact with. So one of the things that we did at the very beginning of creating Mabl was to develop a much smarter way of referring to front-end elements in our test automation so that those types of changes don't actually break your tests.”

Don’t Panic!

Gil Tayar, in his Automation Guild 2018 session “Not Only Cars: AI, Please Test My Apps,” gave some of the best advice I’ve heard about AI: Don’t panic!

As you have seen, machine learning is not magic. AI is already here.

Are you scared?

Are you out of a job? Probably not.

So let’s do what we do best. Let’s stop worrying and keep automating.

AI Summit Guild

Also, be sure to check out the one-day conference dedicated 100% to AI test automation: https://aisummitguild.com
