One question that generated quite a bit of discussion during the Automation Guild Conference and Community Day One roundtable was:
“What kind of report and metrics should you provide to your management on the state of your automated tests?”
Thus began a lively discussion about which metrics, if any, you should use, and about the real value of your test automation scripts that you should be highlighting to management.
Following is an overview of what the panel thought:
What’s the Value?
Ali Khalid brought up the excellent point that the goal of any report to management should not be just the pass/fail rate, but should primarily be to demonstrate the value that test automation is adding.
Simply sending a pass/fail report to your team does nothing to convey the contribution your tests are making to the bottom line of your software development efforts.
Test Automation Bottom Line
What will add value to the bottom line, however, is identifying your defect removal efficiency: how many bugs (using both manual and automated tests) was your team able to catch before they reached production?
That's the metric you really want to employ.
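Defect removal efficiency is usually expressed as the share of defects caught before release out of all defects found, including those that escaped to production. Here is a minimal sketch of that calculation in Python; the function name and the bug counts are hypothetical, so substitute numbers from your own bug tracker:

```python
# Defect Removal Efficiency (DRE): share of defects caught before release.
# The counts below are hypothetical; pull yours from your bug tracker.

def defect_removal_efficiency(caught_before_release: int, escaped_to_production: int) -> float:
    """DRE = defects found before release / total defects found."""
    total = caught_before_release + escaped_to_production
    if total == 0:
        return 1.0  # no defects found anywhere, so nothing escaped
    return caught_before_release / total

# Example: 47 bugs caught by manual + automated testing, 3 found in production.
dre = defect_removal_efficiency(caught_before_release=47, escaped_to_production=3)
print(f"Defect removal efficiency: {dre:.0%}")  # -> 94%
```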
Only highlighting the pass/fail number may encourage management to obsess over getting more passing tests instead of finding issues.
You don’t want your metrics to be used to chase the wrong things.
Green Tests Add Value, Too
According to Oren Rubin, CEO of the startup Testim.io, many folks overlook the value of their passing tests.
Although the main goal of testing is to catch bugs, as more and more teams move towards CI/CD it’s also about increasing your team’s development velocity.
Nowadays we need to develop and deliver software faster.
Having passing automated tests that give developers confidence in the quality of their code is a huge win. The other benefit of automated tests is that they allow teams to deliver software more quickly.
How do automated tests give your developers confidence?
Automation Testing Safety Net
Ideally, your test automation acts like a safety net, giving your developers confidence that the changes they’ve made to the code base didn’t break anything.
Quality is a big part of refactoring code. Without a reliable set of automated tests in place to catch breaking changes, developers would be reluctant to touch existing code. That’s why your test automation is like a safety net: making changes to the code and seeing the tests pass afterward encourages developers to refactor with confidence.
Test Automation Metrics Are a Team Effort
The best way to come up with test automation metrics is to gather the whole team together and make sure they understand what they want to improve.
Good conversation starters might be, “Were there any bugs that were found in our released product?” and “How did we miss them?”
If a bug is found, chances are there is a test that could have caught it.
Metrics are a Personal Thing
There are no cookie-cutter metrics. What’s most important to teams varies from company to company.
Gather your team together and ask them what measurement(s) they prefer to use to keep themselves on track.
Some good questions are, “Do we need more coverage?”, “Do we need the suite to run faster?” and, “How can we make it faster?”
Wrong Metrics
Angie Jones mentioned that every manager she has ever had has asked for the wrong metrics in regard to automation.
They tend to ask the wrong questions, ones that ultimately add no value to the development bottom line, like “How many tests do we have automated?” Or they set unhelpful goals, like having X number of tests automated, or a certain percentage of the backlog automated, as opposed to figuring out the real value of automation.
The value of automation is not the number of tests!
If you have just 20 automated tests, but they give you the coverage that lets the team move faster because of the confidence you have in what you’re building, that’s so much more valuable than a raw test count.
Adding more tests just for the sake of adding more tests does nothing but create noise, and it also drags down that whole velocity piece.
More Automated Tests – More Problems
Whenever you start talking about adding more automated tests, keep in mind that more tests mean longer run times and more time spent on maintenance.
This is actually a good metric to use: how much maintenance time does your automated suite need?
That's something a lot of managers don't consider up front.
Maintenance Cost of Test Automation
Try to keep that metric visible to your team as well, so they understand they’re not just coding new tests; maintaining what you have also takes time and effort. You should also make sure you’re keeping the tests stable in order to achieve the velocity you need.
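One lightweight way to keep maintenance cost visible is to report, each sprint, what share of the team’s automation time went to maintaining existing tests versus writing new ones. A minimal sketch, with hypothetical figures you would source from your own time tracking or sprint notes:

```python
# Track how much of the team's automation time goes to maintaining
# existing tests versus writing new ones. Figures are hypothetical.

sprints = [
    {"sprint": "2018-03", "new_test_hours": 30, "maintenance_hours": 12},
    {"sprint": "2018-04", "new_test_hours": 24, "maintenance_hours": 20},
]

for s in sprints:
    total = s["new_test_hours"] + s["maintenance_hours"]
    share = s["maintenance_hours"] / total
    print(f"{s['sprint']}: {share:.0%} of automation time spent on maintenance")
```

A rising maintenance share over several sprints is often the first signal that the suite is becoming unstable or carrying redundant tests.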
Missed Out on the Automation Guild 2018 Conference?
Did you miss the Automation Guild 2018 Conference? No worries. You can still register and get access to all the conference recordings, including the Day One roundtable with Angie Jones, Ali Khalid and Oren Rubin.
I think it is also valuable to consider the time gained by automating regression tests normally done by manual testers. It could be beneficial for management to see, with every test report, more than just pass and fail: also an estimate of the average time it takes for someone to perform a test manually, along with the automated test execution time. I would also like to see the difference between these two metrics reported as time gained/saved. That speaks volumes to the added value of automation in general and will give management confidence in the automation effort. I also agree with the point that more tests = more problems. Value can be added by squeezing the effectiveness out of each test. This can be done by removing redundant tests, which will decrease total test suite execution time (good for CI/CD) and give you fewer tests to worry about, so that you can spend time making the remaining tests more robust.
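To make the time-gained/saved idea from the comment above concrete: it can be reported as the estimated manual execution time minus the automated execution time, summed across tests and multiplied by how often the suite runs. A minimal sketch, with hypothetical test names and timings:

```python
# Estimate time saved by automation: the difference between the average
# manual execution time and the automated run time, summed across tests
# and multiplied by how often the suite runs. All timings are hypothetical.

tests = [
    # (test name, manual minutes, automated minutes)
    ("login_flow", 10, 0.5),
    ("checkout", 25, 2.0),
    ("search_filters", 15, 1.0),
]

runs_per_week = 20  # e.g. the suite runs on every merge to main

saved_per_run = sum(manual - automated for _, manual, automated in tests)
print(f"Time saved per run: {saved_per_run:.1f} minutes")
print(f"Time saved per week: {saved_per_run * runs_per_week / 60:.1f} hours")
```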
“Without a reliable set of automated tests in place to catch breaking changes, developers would be reluctant to touch existing code.” I think this is one of the best side benefits of test automation: confidence that developer changes didn’t break the code. That confidence allows the whole team to move faster to market, and it is driven by a good suite of automated tests.