Top 10 Reasons for Flaky Test Automation


If you're like most software professionals, you're always looking for ways to improve the quality of your code and tests. But even with the best intentions, it's easy to let your automated tests fall by the wayside. Maybe you don't have enough time, or your tests are too flaky to be worth running.

Flaky tests are the bane of automation. They're unreliable and frustrating, and they make it difficult to know whether our tests provide any value.

Armed with this guide, you'll be able to create reliable, consistent test scripts that help ensure the quality of your software products.

So, let’s begin.


Top Reasons for Flaky Automated Tests
Not having a framework
Using hardcoded test data
Using X, Y coordinates or XPath for element recognition
Using shared test environments
Having tests that are dependent on one another
Tests not starting in a known state
Tests not managing their own test data
Not treating automation like any other software development effort
Failure to use proper synchronization
Badly written tests
In a Nutshell

Top Reasons for Flaky Automated Tests

There could be more reasons why your automated tests are flaky, but here are the top 10 that we see most often.

By understanding these issues, you can build more reliable and stable test automation.

1. Not having a framework

In the software industry, flaky test automation is a common occurrence that frustrates testers. So, how can we prevent it?

Often, tests fail simply because no proper framework was put in place to begin with. It's best to plan the process up front: decide what is needed, how to do it right, and how long the test automation effort will take.

2. Using hardcoded test data

There are many benefits to automating your tests, but if you use hardcoded test data, you may be undermining those benefits. Hardcoded data is written directly into the code as part of the automated test instead of being provided by an external source. Almost any test automation engineer will tell you that this can produce flaky results.
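A minimal sketch of the difference, where `apply_discount` and the `discounts.json` file name are hypothetical stand-ins for real application code and a real external data source:

```python
import json
import pathlib
import tempfile

def apply_discount(price, rate=0.10):
    """Toy function under test; stands in for real application code."""
    return round(price * (1 - rate), 2)

# Hardcoded version: the expected value is baked into the test body.
assert apply_discount(100) == 90.0   # breaks the moment the rate changes

# Data-driven version: expectations live in an external file that can be
# updated per environment without touching the test code.
cases = [{"price": 100, "expected": 90.0}, {"price": 59.99, "expected": 53.99}]
data_file = pathlib.Path(tempfile.mkdtemp()) / "discounts.json"
data_file.write_text(json.dumps(cases))

for case in json.loads(data_file.read_text()):
    assert apply_discount(case["price"]) == case["expected"]
```

When the discount rate or the test accounts change, only the data file needs to be edited, not every test that mentions the value.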

3. Using X, Y coordinates or XPath for element recognition

Have you ever written a test automation script that relied on specific coordinates or XPath to identify an element on the page? And then, when you ran the script, it failed because the element wasn’t where you thought it was supposed to be?

We're all looking for ways to speed up test runs and make them more reliable. But relying on X, Y coordinates or brittle XPath expressions to identify elements on a web page is itself a common source of flakiness: the moment the layout or DOM structure shifts, the locator stops matching.
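A small illustration of why absolute paths break while stable attributes survive. Python's `xml.etree.ElementTree` stands in here for a real browser DOM, and the `submit-btn` id is a hypothetical example:

```python
import xml.etree.ElementTree as ET

# Two versions of the same page: in v2 a wrapper <div> was added around the button.
v1 = "<html><body><button id='submit-btn'>Go</button></body></html>"
v2 = ("<html><body><div class='toolbar'>"
      "<button id='submit-btn'>Go</button></div></body></html>")

absolute_path = "./body/button"               # brittle: encodes the exact DOM layout
by_stable_id = ".//button[@id='submit-btn']"  # resilient: keyed on a stable attribute

assert ET.fromstring(v1).find(absolute_path) is not None  # works today...
assert ET.fromstring(v2).find(absolute_path) is None      # ...breaks after a layout change
assert ET.fromstring(v2).find(by_stable_id) is not None   # still found
```

The same principle applies in Selenium: prefer a unique id or name locator over a full XPath that spells out every ancestor element.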

4. Using shared test environments

A defined, consistent environment lets you trust that your test results are accurate and reproducible. But what happens when that shared environment becomes unreliable?

When other teams change data or configuration underneath you, your tests start failing for reasons that have nothing to do with the code under test, and it's hard to get back on track.

5. Having tests that are dependent on one another

At first, chaining tests together can seem efficient: if one test fails, the rest fail too, and you know where to start investigating. But when your tests depend on one another, a single failure cascades through the whole suite, tests can't run alone or in parallel, and your automation becomes flaky.

6. Tests not starting in a known state

When creating tests, the initial state of the system must be known. This means all variables are set to specific values and no unexpected changes have carried over since the last time the test was run. If this isn't done, your tests may be unreliable and produce inaccurate results.

7. Tests not managing their own test data

When it comes to test data management, many teams rely on shared, pre-seeded data and assume it will stay consistent. It rarely does, and the result is flaky automation. When each test creates, owns, and cleans up its own data, you avoid those inconsistencies and keep every test's results accurate.
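A minimal sketch using an in-memory SQLite database as a hypothetical isolated store; the `orders` table and values are illustrative:

```python
import sqlite3

# The test creates the records it needs in an isolated store and cleans
# up afterwards, instead of reading shared, pre-seeded data.
def test_order_total():
    conn = sqlite3.connect(":memory:")   # isolated database per test
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    conn.execute("INSERT INTO orders VALUES (1, 19.99)")   # the test owns its data
    (total,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
    conn.close()   # cleanup: nothing leaks into the next test
    return total

assert test_order_total() == 19.99
```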

8. Not treating automation like any other software development effort

You can't just automate everything, assuming that the automation will work as expected. Automation needs to be treated with care and consistency because it can quickly become flaky and unreliable if not done correctly.

9. Failure to use proper synchronization

Tests are run to ensure that the code works as intended and meets the requirements. But modern applications load and respond asynchronously, so it is no secret that proper synchronization, waiting for the application to reach the expected state instead of sleeping for a fixed amount of time, is key to a stable automated test suite.
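The core idea can be sketched as a polling wait with a timeout; the `wait_until` helper and the 0.2-second simulated delay are illustrative. In Selenium, `WebDriverWait` with expected conditions plays the same role:

```python
import time

# Poll for a condition with a timeout, instead of sleeping a fixed amount
# and hoping the application is ready by then.
def wait_until(condition, timeout=5.0, interval=0.05):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulate an async operation that finishes at an unpredictable moment.
ready_at = time.monotonic() + 0.2
assert wait_until(lambda: time.monotonic() >= ready_at)
```

A fixed `sleep(5)` either wastes time when the app is fast or fails when it is slow; the polling wait adapts to both.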

10. Badly written tests

One of the most common causes of flaky tests is simply a poorly written test. Tests that are not well written can produce inaccurate results, and they can be a real pain to maintain.
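A well-written test tends to be small and directed: one behavior, one check. A sketch, with `add_to_cart` as a hypothetical function under test:

```python
# One focused check per test, instead of a single sprawling test that
# exercises login, search, checkout, and email notifications in one go.
def add_to_cart(cart, item):
    """Toy function under test; stands in for real application code."""
    return cart + [item]

def test_add_to_cart_only():
    # Get in, verify one behavior, get out.
    assert add_to_cart([], "book") == ["book"]

test_add_to_cart_only()
```

When a test like this fails, the failure points at exactly one behavior, which makes flakiness far easier to diagnose.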

In a Nutshell

Test flakiness can be a huge productivity killer in your software development process. Fixing these ten issues will help you produce reliable, accurate test results, and it lets you ship high-quality software that confidently meets your customers' needs.


  1. I’d say your list is pretty much on target. Only thing I could add is that people have a tendency to make their tests too complex and do too much. They need to break them down and make them more “directed” in their purpose. Use the KISS method of design/construction of an automated test. Get in, get it done and then get out.

    Also to the point of not starting in a known place, better known as State Restoration, people tend not to clean up after a test has been run (true State Restoration). Use the SEARCH method; Setup, Execute, Analyze, Report, Cleanup and Home.

  2. I agree with a lot of the 10 points except part of 3. I use xpath to locate elements almost exclusively and it is only a source of ‘flakiness’ when the attribute is changed. I should say I don’t use brittle xpaths, I use * and contains a lot. I try to stay away from using class names, mostly using our own ‘qaid’s for location of key components. One of the biggest sources of flakiness has been our test bed machines (cluster of VM systems ) having systemic issues. (chromedriver debris left after a test closes) And as we’ve honed our test image and clean up processes, that has been less and less an issue.

    Good to see you are still around, I haven’t been on SQAForums much since the change

    1. Hi Steve! Good point – I guess I would call these good practices in general. In my experience with new development on a green field application the XPath is always changing. If you have an application where the XPath is always stable then I agree use it but I would still recommend using a unique ID or Name before you did. I can never remember my SQAforums user name and password but I should to go on it more often :)

  3. Hey Joe

    Great post.


    I would like to add on to point number 6: “Scripts not executing from known points”. In other words, not making scripts generic enough to account for unexpected states and outcomes.

    For example, suppose a script inputs data into a search screen but does not enter data into all of the fields and selects the result. That is all good and well until the data changes and more than one result is returned and the unwanted result gets selected.


  4. Hi,

    I was struggling with flaky tests as well. It may be a huge pain. All mentioned reasons in this article are so true. Knowing potential causes helps a lot with fighting with them.
    I was also using test quarantine and retests. You can read more about here:
    or here:

    For me very helpful was the article from Google about their battle with flaky tests:
