Just because you can automate tests doesn't mean that you should. In some cases, manual testing is the better way to check certain aspects of a project accurately.
In this post, you’ll learn:
- some factors to help determine which test cases should not be automated
- several guidelines to help identify good candidates for test automation
- where to find more automation testing tips
INDEX
Tests That Should Be Automated
Pros of Automation over Manual Process
Tests That Should NOT Be Automated
Pros of Manual Over Automated Process
Conclusion
(FYI: I originally wrote this article in 2010, but the automated testing principles I cover are timeless and still apply)
Tests That Should Be Automated
Here are some signs that a test may be a good candidate for an automated test:
- Tests that need to be run against every build/release of the application, such as smoke, sanity, and regression tests.
- Tests that use the same workflow but different input data for each run (data-driven and boundary tests); see the sketch after this list.
- Tests that need to gather multiple pieces of information during runtime, such as SQL query results and low-level application attributes.
- Tests that can be used for performance testing, such as stress and load tests.
- Tests that take a long time to perform and may need to be run during breaks or overnight; automating them maximizes the use of testing time.
- Tests that involve inputting large volumes of data.
- Tests that need to run against multiple configurations (different OS and browser combinations, for example).
- Tests during which images must be captured to prove that the application behaved as expected.
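For the data-driven case above, here's a minimal sketch of what that looks like in QTP-style VBScript: one workflow, looped over every row of a data sheet. The sheet name ("Action1"), the column names, and the login objects are hypothetical placeholders, not references to any real project:

```vb
' Data-driven sketch: run the same workflow once per data-sheet row.
' The "Action1" sheet and Username/Password columns are placeholders.
rowCount = DataTable.GetSheet("Action1").GetRowCount

For row = 1 To rowCount
    DataTable.GetSheet("Action1").SetCurrentRow row
    userName = DataTable("Username", "Action1")
    password = DataTable("Password", "Action1")

    ' Drive the same workflow with this row's inputs
    ' (the window/object names are placeholders too).
    Dialog("Login").WinEdit("Agent Name:").Set userName
    Dialog("Login").WinEdit("Password:").Set password
    Dialog("Login").WinButton("OK").Click

    ' Validate this row's expected result here, then reset the
    ' application for the next iteration.
Next
```

The same pattern covers boundary tests: the boundary values simply become more rows in the sheet.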
Pros of Automation over Manual Process
- Time-consuming: Manual testing can be time-consuming and may not be suitable for testing large applications or for running frequent CI/CD regression tests.
- Prone to human error: Manual testing is prone to human error and may not be as reliable as automated testing.
- Limited coverage: Manual testing is limited by the tester's skill and experience and may not cover all possible scenarios and combinations of input data.
Important: Remember that the more repetitive the test run, the better it is for automation.
Tests That Should NOT Be Automated
Here are some signs that a test may be better left as a manual test:
- User experience tests for usability (tests that require a user to judge how easy the app is to use).
- Tests that you will only run one time. (This is a general rule. I have automated one-time tests for data-population situations in which the steps can be automated quickly and, when placed in a loop, can produce thousands of records, saving a manual tester considerable time and effort; see the sketch after this list.)
- Tests that need to run ASAP (writing the automation usually takes longer than simply running the test by hand).
- Tests that require ad hoc/random testing based on domain or subject-matter expertise, or tests without predictable results. For automated validation to be successful, a test needs predictable results that produce clear pass/fail conditions.
- Tests whose results must be manually "eyeballed" to determine whether they are correct.
- Tests that cannot be 100% automated should not be automated at all, unless doing so will save considerable time.
- Tests that add no value.
- Tests that don't focus on the risk areas of your application.
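To illustrate the data-population exception above, here's a minimal VBScript sketch that loops a simple record generator to build thousands of rows of seed data. The output path, field layout, and record count are all made up for the example:

```vb
' One-time data-population sketch: generate thousands of records in a loop.
' The output path, columns, and record count are placeholders.
Const OUTPUT_PATH = "C:\testdata\users.csv"

Set fso = CreateObject("Scripting.FileSystemObject")
Set outFile = fso.CreateTextFile(OUTPUT_PATH, True) ' True = overwrite

outFile.WriteLine "username,email,role"
For i = 1 To 5000
    outFile.WriteLine "user" & i & ",user" & i & "@example.com,standard"
Next

outFile.Close
```

Even though a script like this runs only once, a few minutes of looping beats hours of manual data entry.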
Pros of Manual Over Automated Process
- More flexibility: Manual testing lets the tester explore the application and come up with new test cases on the fly, which is not possible with automated testing.
- More accurate: In some cases, manual testing may be more accurate than automated testing as it allows the tester to use their judgment and experience to identify defects that may not be caught by automated tests.
- Cost-effective: Automated testing requires a significant investment in tools and resources, which may not be justified for all types of testing. Manual testing may be a more cost-effective option in some cases.
Conclusion
Hopefully, this short post gave some brief insight into what should and should not be automated. For more tips on automation testing, be sure to subscribe to the Test Automation Podcast.
What would you add or remove from this list?
Let me know.
Does QTP support phone-based Interactive Voice Response (IVR) recognition (i.e., verifying that the content of the announced prompt matches expected results)? If yes, please let me know how to do it. Otherwise, is there any tool that supports this?
Hi Raman,
Since IVR is a pretty specialized interface, I don't think QTP supports it out of the box. You may be able to roll your own DLL to get the functionality you need and have QTP call the DLL. I think NuEcho may have an IVR solution called NuBot: http://www.nuecho.com/content/view/28/147/lang,en/
Since you mention "usability testing" in your article, I'd suggest you take a look at http://www.userfeel.com.
As you already alluded to with your statement "unless doing so will save considerable time," I would take your last three points under "Tests that should not be automated" with a grain of salt. We have often automated tests that have high re-run rates or save a lot of time, as you pointed out, but have less-than-predictable results or results that are difficult to validate via QTP. We flag the results with a warning or failure to prompt the test executor to review them manually and either accept the failure or manually pass the test. It would be better if QTP/QC had a status of "Manual Review" or similar so we didn't have to use the micFail status, but…
Thoughts on this approach?
Ben » Hi Ben – Have you tried going into your QC Project Entities\Run\Status field settings and adding a new item named "Manual Review"? You could then create some QC OTA code that sets this value for you when the QTP script needs to be reviewed; a rough sketch is below.
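As a hedged illustration only: something along these lines might work from inside a QTP script that is connected to QC. It assumes the "Manual Review" item has already been added to the Run\Status list, and that QCUtil.CurrentRun is available in your QTP/QC versions:

```vb
' Sketch: flag the current QC run for manual review.
' Assumes "Manual Review" was added to the Run\Status list in QC
' Project Customization, and that QTP is connected to QC.
If QCUtil.IsConnected Then
    Dim currentRun
    Set currentRun = QCUtil.CurrentRun      ' OTA Run object for this execution
    currentRun.Status = "Manual Review"     ' set the custom status value
    currentRun.Post                         ' write the change back to QC
Else
    Reporter.ReportEvent micWarning, "Manual Review", _
        "Not connected to QC; please review this run's results by hand."
End If
```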
Hi Joe,
For the last year, I have been doing QTP scripting to automate a Java desktop-based client/server application. We use the BPT model and a data-driven framework.
I want to know how we can decide which framework/methodology is the best fit for an application.
Could you explain this in a new article? Or, if you know of one, please suggest a good book on the subject.
Thanks,
Vikas
Vikas Gholap » Hi Vikas – that's a tough question to answer because it all depends on the technology and the people using it. I would set up a small pilot program using both approaches (just a small example of each one) and get feedback from the users to decide which approach they prefer.