Automation Testing

15 Reasons Why You Should (or shouldn’t) Automate a Test

By Test Guild

Just because you can automate tests doesn’t necessarily mean that you should. In some cases, manual testing may be a more effective way to cover certain aspects of a project accurately.

In this post, you’ll learn:

  • some factors to help determine which test cases should not be automated
  • several guidelines to help identify good candidates for test automation
  • where to find more automation testing tips

INDEX

Tests That Should Be Automated
Pros of Automation over Manual Process
Tests That Should NOT Be Automated
Pros of Manual Over Automated Process
Conclusion

(FYI: I originally wrote this article in 2010, but the automated testing principles I cover are timeless and still apply)


Tests That Should Be Automated


Here are some signs that a test may be a good candidate for automation:

  1. Tests that need to be run against every build/release of the application, such as smoke, sanity, and regression tests.
  2. Tests that use the same workflow but different input data on each run (data-driven and boundary tests); see the sketch after this list.
  3. Tests that need to gather multiple pieces of information at runtime, such as SQL query results and low-level application attributes.
  4. Tests that can be used for performance testing, such as stress and load tests.
  5. Tests that take a long time to perform and may need to be run during breaks or overnight; automating them makes better use of that time.
  6. Tests that involve inputting large volumes of data.
  7. Tests that need to run against multiple configurations — different OS & Browser combinations, for example.
  8. Tests during which images must be captured to prove that the application behaved as expected.
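
To make #2 concrete, here is a minimal data-driven test sketch in Python using pytest's parametrize marker. The discount_price() function and its 10%-off rule are hypothetical placeholders rather than anything from this article; the point is simply that one workflow is exercised with many input/expected-output pairs, including boundary values.

```python
import pytest


def discount_price(price: float, quantity: int) -> float:
    """Hypothetical business rule: 10% off when buying 10 or more items."""
    total = price * quantity
    return total * 0.9 if quantity >= 10 else total


@pytest.mark.parametrize(
    "price, quantity, expected",
    [
        (10.0, 1, 10.0),    # typical value
        (10.0, 9, 90.0),    # just below the discount boundary
        (10.0, 10, 90.0),   # boundary value: discount kicks in
        (10.0, 11, 99.0),   # just above the boundary
        (0.0, 5, 0.0),      # edge case: free item
    ],
)
def test_discount_price(price, quantity, expected):
    # Same workflow, different data on every run: a classic automation candidate.
    assert discount_price(price, quantity) == pytest.approx(expected)
```

Each tuple becomes its own test case, so extending coverage is just a matter of adding another row of data, which is exactly the kind of repetition that pays off when automated.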

Pros of Automation over Manual Process

  • Time-consuming: Manual testing can be time-consuming and may not be suitable for testing large applications or for running frequent CI/CD regression tests.
  • Prone to human error: Manual testing is prone to human error and may not be as reliable as automated testing.
  • Limited coverage: Manual testing is limited by the tester's skill and experience and may not cover all possible scenarios and combinations of input data.

Important: Remember that the more repetitive the test run, the better it is for automation.


Tests That Should NOT Be Automated


Here are some signs that a test may be better suited to manual testing:

  1. User experience tests for usability (tests that require a user to respond as to how easy the app is to use).
  2. Tests that you will only run one time. (This is a general rule. I have automated one-time tests for data-population situations in which the steps can be automated quickly and, when placed in a loop, can produce thousands of records, saving a manual tester considerable time and effort; see the sketch after this list.)
  3. Tests that need to run ASAP, leaving no time to automate them first.
  4. Tests that require ad hoc/random testing based on domain or subject-matter expertise, and tests without predictable results. For automated validation to succeed, a test needs predictable results that produce clear pass/fail conditions.
  5. Tests whose results must be manually “eyeballed” to determine whether they are correct.
  6. Tests that cannot be 100% automated should not be automated at all — unless doing so will save considerable time.
  7. Tests that add no value.
  8. Tests that don't focus on the risk areas of your application.
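
As a concrete example of the exception noted in #2, here is a rough sketch of the kind of “one-time” data-population script that is still worth automating. The REST endpoint, payload fields, and token below are illustrative assumptions rather than a real API; the idea is simply that a short loop can create thousands of records and save a manual tester hours of typing.

```python
import requests

BASE_URL = "https://qa-env.example.com/api/customers"  # hypothetical test endpoint
API_TOKEN = "replace-with-a-real-test-token"           # hypothetical credential


def populate_customers(count: int = 5000) -> None:
    """Create `count` throwaway customer records in the test environment."""
    session = requests.Session()
    session.headers.update({"Authorization": f"Bearer {API_TOKEN}"})

    for i in range(count):
        payload = {
            "name": f"Test Customer {i:05d}",
            "email": f"test.customer.{i:05d}@example.com",
            "active": True,
        }
        response = session.post(BASE_URL, json=payload, timeout=10)
        response.raise_for_status()  # stop early if the environment rejects the data


if __name__ == "__main__":
    populate_customers()
```

Even though the script may only ever run once, the loop does in minutes what would take a tester days by hand, which is the “saves considerable time” exception in action.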

Pros of Manual Over Automated Process

  • Allows for more flexibility: Manual testing allows the tester to explore the application and come up with new test cases on the fly, which is not possible with automated testing.
  • More accurate: In some cases, manual testing may be more accurate than automated testing as it allows the tester to use their judgment and experience to identify defects that may not be caught by automated tests.
  • Cost-effective: Automated testing requires a significant investment in tools and resources, which may not be justified for all types of testing. Manual testing may be a more cost-effective option in some cases.

Conclusion

Hopefully, this short post gave some brief insight into what should and should not be automated. For more tips on automation testing, be sure to subscribe to the Test Automation Podcast.

What would you add or remove from this list?

Let me know.

Let’s Talk!

  1. Does QTP support phone-based Interactive Voice Response recognition
    (i.e., verify that the content of the announced prompt matches expected results)?
    If yes, please let me know how to do it.
    Otherwise, is there any tool that supports this?

  2. As you already alluded to with your statement “unless doing so will save a considerable amount of time,” I would take your last 3 points under “Tests that should not be automated” with a grain of salt. We have often automated tests that have high re-run rates or that save a lot of time, as you pointed out, but that have less-than-predictable results or results that are difficult to validate via QTP. We flag the results with a warning or failure to prompt the test executor to review the results manually and either accept the failure or manually pass the test. It would be better if QTP/QC had a status of “Manual Review” or similar so we didn't have to use the micFail status, but…

    Thoughts on this approach?

  3. Ben » Hi Ben, have you tried going into your QC Project Entities\Run\Status field settings and adding a new item named “Manual Review”? You could then create some QC OTA code that sets this value for you if the QTP script needs to be reviewed.

  4. Hi Joe,
    For the last year I've been doing QTP scripting to automate a Java desktop-based client/server application. We use the BPT model and a data-driven framework.
    I want to know how we can decide which framework/methodology we should use, or which best fits the application.
    Could you explain this in a new article? Or please suggest a good book on the topic if you know one.
    Thanks,
    Vikas

  5. Vikas Gholap » Hi Vikas – that's a tough question to answer because it all depends on the technology and the people using it. I would set up a small pilot program using both approaches (just a small example of each one) and get feedback from the users to decide which approach they prefer.
