Just because you can automate something doesn't necessarily mean you should. In some cases, manual testing is the more accurate way to evaluate certain aspects of a project.
In this post, you’ll learn:
- some factors to consider to help determine which manual tests should or should not be automated
- several guidelines to help identify good candidates for test automation
- where to find more automation testing tips
(FYI: I originally wrote this article in 2010, but the principles I cover are timeless and still apply.)
Tests that should be automated:
Here are some signs that a test may be a good candidate for an automated test:
- Tests that need to be run against every build/release of the application, such as smoke, sanity, and regression tests.
- Tests that run the same workflow with different input data on each run (data-driven and boundary tests).
- Tests that need to gather multiple pieces of information at runtime, such as SQL query results and low-level application attributes.
- Tests that can be used for performance testing, such as stress and load tests.
- Tests that take a long time to perform and may need to be run during breaks or overnight. Automating these tests maximizes the use of time.
- Tests that involve inputting large volumes of data.
- Tests that need to run against multiple configurations — different OS & Browser combinations, for example.
- Tests during which images must be captured to prove that the application behaved as expected.
Important: Remember, the more repetitive the test run, the better a candidate it is for automation.
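To make the data-driven point above concrete, here is a minimal sketch in plain Python. The function under test, `validate_age`, and its accepted range (0–120) are hypothetical examples; the pattern is simply the same workflow driven by a table of boundary inputs.

```python
def validate_age(age):
    """Hypothetical function under test: accepts ages 0-120 inclusive."""
    return 0 <= age <= 120

# Same workflow, different input data on each run: each boundary value
# plus one value just outside each boundary.
cases = [
    (-1, False),   # just below the lower boundary
    (0, True),     # the lower boundary itself
    (120, True),   # the upper boundary itself
    (121, False),  # just above the upper boundary
]

def run_boundary_tests():
    # Collect every case whose actual result differs from the expected one.
    failures = [(age, expected) for age, expected in cases
                if validate_age(age) != expected]
    return failures  # an empty list means every case passed
```

In a real framework you would express the same idea with a parameterized test (for example, pytest's `@pytest.mark.parametrize`), so adding a new data row is all it takes to add a new test.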
Tests that should not be automated:
Here are some signs that a test may be better as a manual test:
- User experience tests for usability (tests that require a user to respond as to how easy the app is to use).
- Tests that you will only run one time. (This is a general rule. I have automated one-time tests for data population situations in which the steps can be automated quickly and when placing in a loop can produce thousands of records, saving a manual tester considerable time and effort).
- Tests that need to run ASAP.
- Tests that require ad hoc/random testing based on domain knowledge/expertise.
- Tests without predictable results. Automated validation requires deterministic outcomes in order to produce clear pass/fail conditions.
- Tests whose results must be manually "eyeballed" to determine whether they are correct.
- Tests that cannot be 100% automated should not be automated at all, unless doing so will save a considerable amount of time.
- Tests that add no value.
- Tests that don't focus on the risk areas of your application.
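The "predictable results" point is the crux of most of the list above: an automated check can only pass or fail if the expected outcome is deterministic. A minimal sketch, using a hypothetical `format_price` function as the thing under test:

```python
def format_price(cents):
    """Hypothetical function under test: formats an integer number
    of cents as a dollar string."""
    return f"${cents / 100:.2f}"

def check_format_price():
    # Deterministic expectation -> an unambiguous pass/fail condition
    # that a machine can evaluate without a human "eyeballing" it.
    return format_price(1999) == "$19.99"
```

Contrast this with a question like "does the page layout look right?", which has no single expected value to assert against and is better left to a manual tester.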
Hopefully, this short post gave you some insight into what should and should not be automated. For more tips on automation testing, be sure to subscribe to the Test Automation Podcast.
What would you add or remove from this list?
Let me know.