15 Reasons Why You Should (or shouldn’t) Automate a Test


Below are some factors to consider when identifying which manual tests should, or should not, be automated. Just because you can automate something doesn’t mean you should. Here are some guidelines to help identify good candidates for test automation:

Tests that should be automated:

  • Tests that need to be run against every build/release of the application, such as smoke, sanity, and regression tests.
  • Tests that use the same workflow but different input data for each test run (data-driven and boundary tests).
  • Tests that need to gather multiple pieces of information during runtime, such as SQL query results and low-level application attributes.
  • Tests that can be used for performance testing, like stress and load tests.
  • Tests that take a long time to perform and may need to be run during breaks or overnight. Automating these tests maximizes the use of time.
  • Tests that involve inputting large volumes of data.
  • Tests that need to run against multiple configurations — different OS & Browser combinations, for example.
  • Tests during which images must be captured to prove that the application behaved as expected.

Important: Remember that the more repetitive the test run, the better a candidate it is for automation.
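The data-driven item above (same workflow, different inputs per run) can be sketched as a small harness that feeds one test routine a table of cases. This is a hedged illustration, not code from the article; the function under test (`apply_discount`) and its boundary values are hypothetical examples:

```python
# Hedged sketch of a data-driven test: one workflow, many input rows.
# `apply_discount` is a toy stand-in for the real system under test.

def apply_discount(price, percent):
    """Toy system-under-test: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

# Input table: data-driven plus boundary cases.
CASES = [
    # (price, percent, expected)
    (100.0,   0, 100.0),   # lower boundary: no discount
    (100.0, 100,   0.0),   # upper boundary: full discount
    (80.0,   25,  60.0),   # typical value
]

def run_suite():
    """Run the same workflow against every data row; collect pass/fail."""
    results = []
    for price, percent, expected in CASES:
        actual = apply_discount(price, percent)
        results.append((price, percent, actual == expected))
    return results
```

Adding a new test run is then just adding a row to `CASES`, which is exactly why this shape of test repays automation so quickly.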

Tests that should not be automated:

  • User experience tests for usability (tests that require a user to respond as to how easy the app is to use).
  • Tests that you will run only once. (This is a general rule; I have automated one-time tests for data-population situations in which the steps can be automated quickly and, when placed in a loop, can produce thousands of records, saving a manual tester considerable time and effort.)
  • Tests that need to run ASAP.
  • Tests that require ad hoc/random testing based on domain knowledge/expertise.
  • Tests without predictable results. For automated validation to succeed, a test needs predictable results that produce clear pass and fail conditions.
  • Tests whose results must be manually “eyeballed” to determine whether they are correct.
  • Tests that cannot be 100% automated should not be automated at all, unless doing so will save a considerable amount of time.
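The data-population exception mentioned in the “one-time tests” bullet looks something like the following. This is a hedged sketch under assumed details: the record shape and the `create_user` step are hypothetical stand-ins for whatever UI or API steps the real script would automate:

```python
# Hedged sketch: automating the steps of a "one-time" test and looping
# them to populate thousands of records, instead of entering them by hand.
# The record fields and create_user() step are hypothetical.

def create_user(store, index):
    """Stand-in for the automated UI/API steps that add one record."""
    store.append({"id": index, "name": f"user{index:05d}", "active": True})

def populate(count):
    """Repeat the single-record steps `count` times."""
    store = []
    for i in range(count):
        create_user(store, i)
    return store

records = populate(5000)  # thousands of records in seconds, not days
```

Even though the script runs only once, the loop multiplies a few minutes of scripting into hours of manual data entry saved, which is the "considerable amount of time" escape hatch in action.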

What would you add or remove from this list? Let me know.

7 responses to “15 Reasons Why You Should (or shouldn’t) Automate a Test”

  1. Does QTP support phone-based Interactive Voice Response recognition
    (i.e., verifying that the content of the announced prompt matches the expected results)?
    If yes, please let me know how to do it.
    Otherwise, is there any tool that supports this?

  2. As you already alluded to via your statement “unless doing so will save a considerable amount of time,” I would take your last three points under “Tests that should not be automated” with a grain of salt. We have often automated tests that have high re-run rates or save a lot of time, as you pointed out, but that have less-than-predictable results or results that are difficult to validate via QTP. We flag the results with a warning or failure to prompt the test executor to review the results manually and either accept the failure or manually pass the test. It would be better if QTP/QC had a status of “Manual Review” or similar so we didn’t have to use the micFail status, but…

    Thoughts on this approach?

    • Ben » Hi Ben, have you tried going into your QC Project Entities\Run\Status field settings and adding a new item named “Manual Review”? You could then create some QC OTA code that sets this value for you when the QTP script needs to be reviewed.
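    The “flag for manual review” pattern Ben describes is tool-agnostic. As a hedged illustration only (this is generic Python, not QTP/QC OTA code, and the names are hypothetical), a three-state verdict might look like:

```python
# Hedged sketch (NOT QC OTA code): a three-state result so runs whose
# output is hard to validate automatically are routed to a human
# instead of being hard-failed with micFail.

from enum import Enum

class Status(Enum):
    PASS = "pass"
    FAIL = "fail"
    MANUAL_REVIEW = "manual review"

def evaluate(actual, expected, deterministic=True):
    """Return PASS/FAIL when results are predictable; otherwise flag
    the run so the test executor reviews it and sets the final verdict."""
    if not deterministic:
        return Status.MANUAL_REVIEW
    return Status.PASS if actual == expected else Status.FAIL
```

    The design point is that the automation still does the repetitive driving of the application; only the final verdict is deferred to a person for the unpredictable cases.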

  3. Hi Joe,
    For the last year I have been doing QTP scripting to automate a Java desktop-based client/server application. We use the BPT model and a data-driven framework.
    I want to know how we can decide which framework/methodology best fits an application.
    Could you explain this in a new article? Or, if you know one, please suggest a good book on this.

    • Vikas Gholap » Hi Vikas – that’s a tough question to answer because it all depends on the technology and the people using it. I would set up a small pilot program using both approaches (just a small example of each one) and get feedback from the users to decide which approach they prefer.

