
Test Automation Efforts Visibility

By Test Guild

I was recently a guest on a QASymphony webinar called Creating Visibility and Speed with Automation and Test Case Management. As I was putting my presentation together, I had the opportunity to look back over my past testing projects and think of ways I could have made my test automation efforts more visible.

Prior to my current project, I always felt like the testing and automation efforts I was involved with weren’t visible enough to everyone on the team. It seemed no one would listen to the testers or look at the test results. At times I honestly felt invisible with my automation efforts, UNTIL the last minute, right before a release.


Then suddenly everyone would start taking an interest in testing: asking why a certain test wasn’t passing, why they didn’t know about this or that, or how a certain requirement wasn’t being tested, even though the testers had made their best effort to keep everyone in the loop.

I know I’m not the only one that has felt this way. In fact, one of the polling questions we asked on the webinar was, “How visible are your current testing and automation efforts to everyone in the organization?” A good number of folks (43.5%) answered “Somewhat visible,” and 26.5% answered, “Not very visible.” I think we can do better. So when I started on a new Agile project I began to focus on a few specific areas that would help give my testing efforts more visibility.


In this article, I’d like to share how you can create visibility and speed with your automation and test case management efforts, making your testing more visible throughout your organization by focusing on three main areas of your testing process: Planning, Writing, and Execution.

  1. Test Automation Planning


I’ve been involved with test automation for more than 15 years, and I’m amazed that, to this day, there are misconceptions about test automation that most managers aren’t even aware of. The first thing you need to do to make your test automation efforts visible is to set your manager’s expectations. This is key, because without your manager’s support, testing and automation efforts rarely get the attention they need, and managers often don’t realize that automation is a development effort. Like any development effort, it’s not simply “one and done”; test cases need to be maintained over time.

If you don't maintain your tests, it's going to be a nightmare to manage them. Managers need to know this up front. Managers can also sometimes come up with crazy metrics, like “90% of our tests should be automated.” It’s up to you to push back and educate them that such a target isn’t necessarily achievable, or even a good thing.


Managers also need to know that testing and automation is now a team effort. Gone are the days of silos where everyone had a clearly defined role. Automation and testing are now activities that need to be performed by everyone on the team rather than by a solo tester. (For more examples, check out my 5 Things Your Boss Doesn’t Understand about Test Automation.)

Staffing and Skills


With the current trends in software development practices, as well as the shift in focus toward customer-driven development models, test automation is more important than ever. Agile and DevOps demand more automation, and practices like continuous integration and delivery require automated tests that can be run quickly and reliably.

I would go so far as to say that one can't be successful in any of these practices without some degree of test automation in place. In my experience, teams often have more developers than testers, in which case the developers need to take responsibility for most of the team's testing and automation efforts. The responsibility for testing lies within the Agile team as a whole. Test activities are planned and controlled in the same way as all other activities within a sprint, and each member of the team can (and should) carry out testing tasks according to his or her individual skills.

If testers are the only ones responsible for creating tests, it's going to slow down your development process. Distributing testing to developers helps speed up your efforts, but more importantly it makes your testing strategy visible to everyone. Everyone now has skin in the game. One way to get your developers to start creating tests—bridging the gap between developers and less technical testers—is to use behavior-driven development practices.

Use TDD and BDD


Walk up and down the corridors of most companies and you’ll increasingly hear terms like TDD, BDD and Red-Green-Refactor. You may be wondering, “What is this all about, and why should I care?” Test-Driven Development (TDD) uses tests as a way to design code: you create the test first, before any actual production code is written, then try to make the test pass by writing production code that fulfills it. Tests are created using developer-friendly tools and languages, so they fit into the developers’ existing ecosystem. TDD helps make defects visible to developers, sometimes before any code is even written. This visibility creates a quicker feedback loop compared to waiting days for a QA person to run a test and then give their feedback.
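To make the red-green loop concrete, here is a minimal sketch in pytest style. The `calculate_discount` function and its discount rule are hypothetical, invented purely for illustration; in a real project the production code would live in its own module rather than below the tests.

```python
# RED: the tests come first, before any production code exists.
# Running them at this point fails, which is the signal to start coding.
def test_discount_is_ten_percent_for_orders_over_100():
    assert calculate_discount(order_total=150) == 15.0

def test_no_discount_for_small_orders():
    assert calculate_discount(order_total=50) == 0.0

# GREEN: write just enough production code to make the tests pass,
# then refactor with the tests as a safety net.
def calculate_discount(order_total):
    if order_total > 100:
        return order_total * 0.10
    return 0.0
```

Because the test describes the behavior before the implementation exists, a defect shows up the moment the implementation diverges from it.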


While TDD can be considered a low-level approach written from the perspective of a developer, Behavior-Driven Development (BDD) is more of an Agile, “as a user” approach. Basically, you’re writing tests as stories. The focus is on the user, and on having a discussion before creating a single line of code. When it comes to uncovering bugs, you can’t get much earlier in the development process than that.

The real value here is the communication. If you work for a large company with teams spread out across the globe, one of the biggest roadblocks to success is communication. With BDD you get the side benefit of executable specifications, but the most significant benefit is the ability of your teams to collaborate and communicate better with one another.
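In Python, tools like behave and pytest-bdd wire Gherkin scenarios to step functions. To keep this sketch self-contained, it mimics the Given/When/Then structure in plain pytest instead; the scenario text, the `AuthService` class, and its methods are all hypothetical examples, not a real library's API.

```python
# Scenario, as the team might write it together before any code exists:
#   Given a registered user "alice"
#   When she logs in with the correct password
#   Then she is granted access

class AuthService:
    """Hypothetical production code driven out by the scenario above."""
    def __init__(self):
        self._users = {}

    def register(self, name, password):
        self._users[name] = password

    def log_in(self, name, password):
        return self._users.get(name) == password

def test_returning_customer_logs_in():
    # Given a registered user "alice"
    auth = AuthService()
    auth.register("alice", "s3cret")
    # When she logs in with the correct password
    logged_in = auth.log_in("alice", "s3cret")
    # Then she is granted access
    assert logged_in is True
```

The value isn't the syntax; it's that the scenario is readable by the product owner, the tester, and the developer alike, so the conversation happens before the code does.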

Make your automation efforts part of the DoR and DoD


A team’s Definition of Ready (DoR) is a checklist meant to bake quality into their user stories. The DoR applies when a team takes a story from the product backlog to work on in the current sprint: using it, the team can tell whether a user story is ready to be worked on. This also forces them to view their stories from a tester’s point of view. If they can’t create suitable test cases, or if it’s unclear what criteria should be used to determine whether the functionality is actually passing or failing, then the story is probably not defined well enough and should be rejected as not ready.

The Definition of Done (DoD) is another checklist, describing the criteria the team has to meet before a story can be closed and included in the sprint review. The DoD should include things like the required types of tests, test coverage, and the conditions that define a passing test. The DoD adds visibility to your product’s quality and customer satisfaction.

When you work on a scrum team, your definition of “done” should always include test automation as part of your story. I’d even go so far as to say that if a developer changes, say, the login page in your application, it’s their responsibility to go into the framework and make the corresponding change in that page object. That's how much developers need to be involved with automation.
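A page object is what makes that responsibility practical: the selectors live in one place, so the developer who changes the login page knows exactly where to update the automation. The sketch below is hypothetical, loosely modeled on how Selenium-style page objects are commonly written; the selectors, the `type`/`click` driver interface, and `FakeDriver` are invented for illustration.

```python
class LoginPage:
    """Hypothetical page object: one home for the login page's selectors.
    A developer who changes the page updates these, not every test."""
    USERNAME_FIELD = "#username"
    PASSWORD_FIELD = "#password"
    SUBMIT_BUTTON = "#login-submit"

    def __init__(self, driver):
        self.driver = driver  # e.g. a thin wrapper around a real WebDriver

    def log_in(self, username, password):
        self.driver.type(self.USERNAME_FIELD, username)
        self.driver.type(self.PASSWORD_FIELD, password)
        self.driver.click(self.SUBMIT_BUTTON)

class FakeDriver:
    """Stand-in for a real browser driver, just to keep the sketch runnable."""
    def __init__(self):
        self.actions = []

    def type(self, selector, text):
        self.actions.append(("type", selector, text))

    def click(self, selector):
        self.actions.append(("click", selector))
```

Every test that logs in goes through `LoginPage.log_in`, so a changed selector is a one-line fix instead of a suite-wide hunt.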

  2. Writing Test Automation

Test Mix


A testing pyramid is a way to visually break down the type of test case distribution you should have for testing your development efforts. That breakdown resembles a pyramid.

Unit Tests make up the largest section of the pyramid, forming a solid base. Unit tests are the easiest to create and yield the biggest “bang for the buck.” Since unit tests are usually written in the same language an application is written in, developers should have an easy time adding them to their development process.

The middle of the pyramid consists of Integration Tests. They’re a little more expensive, but they test a different part of your system. GUI testing is at the top of the pyramid, and represents a small piece of the total number of automation test types that should be created.

Sounds reasonable, but I think a concept like the testing pyramid can sometimes be misused and misunderstood. Honestly, I sometimes feel the testing pyramid is misleading, for two main reasons. The first is that it disregards market risk: how do you test whether the project itself is a good idea? That question operates at a level above system tests, and the pyramid has nothing to say about it.

The pyramid model also implies volume. It’s basically saying you should have more unit tests than integration tests because unit tests are cheaper to write, but I think that misses the point. It shouldn’t be about what it costs to write and maintain your tests, but rather what kinds of risks those tests are addressing. I actually did a TestTalks interview with Todd Gardner of TrackJs around this concept (he calls it terrible testing) that you should definitely check out.

Make your Tests More Visible with Code Reviews


You should treat your test code just like you treat your development code. That means using the same exact practices and developer tools for both. Code reviews are an awesome way to ensure that everyone is following the processes you’ve laid out, and it will also make visible any deviations from the best practices you have in place.


What I've typically seen at large enterprises is a test automation lead outside of the sprint teams who is involved in all test automation code reviews. This individual helps guide the teams in automation and testing best practices. A checklist is created for the reviewers to check off, helping them determine whether the correct test coverage is in place. Developers on individual sprint teams submit their tests for code review, and the lead reviewer, a tester, can then make sure the checks are correct and that there are no other scenarios the developers might have missed that should also be tested.

  3. Executing Automated Tests

Make Your Tests Visible with Continuous Integration


Another great way to make the status of your project more visible to everyone on the team is to use continuous integration. Rather than waiting long periods of time to be sure your application is in a working state, CI tools can help ensure your application is never broken and can be delivered to your customers at any time. Everyone should be ashamed to break the build, and CI will alert the whole team when something isn’t right with your application. Among other things, CI:

  • Provides fast feedback about the quality of the team’s code
  • Displays the real-time status of your project on wall displays
  • Helps detect conflicts one team might have introduced that impact another team’s code
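As a rough illustration, here is what a minimal pipeline might look like in a GitHub Actions-style workflow. This is a hypothetical sketch: the job name, the Python/pytest toolchain, and the file paths are assumptions that would differ from project to project.

```yaml
# Hypothetical CI sketch: run the whole test suite on every push,
# so a broken build is visible to the entire team within minutes.
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run automated tests
        run: pytest --junitxml=results.xml
```

The published test report (`results.xml` here) is what feeds the wall displays and dashboards mentioned above.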

Manage Automation Tests with Metrics


I’m not a huge believer in lots of metrics, but here are a few I think can add value and make your test automation efforts more visible:

  • Mean time to diagnosis (AKA the Alan Page Object)

Mean time to diagnosis is key: when a test fails, you need to know why as quickly as possible! How long does it take you to find out why your tests are failing? If you have a lot of tests that run every night in a continuous integration environment and it takes you a day to find out why one or more of them failed, a developer has already pushed new code by then; a new build is out, and your tests are now running against that build. You've already lost a day of effort, and ultimately the battle. If you actually have to start debugging your tests and stepping through them, you're going to have an issue.

  • Automated to manual ratio

You need to keep track of how many manual vs. automated tests you have. The more manual tests you have, the longer it will take you to verify and validate your application before you can release it. This knowledge helps you keep a pulse on how long your release efforts will take. It also helps you gain visibility into whether teams are automating (or not automating) the right things.
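The metric itself is trivial to compute once you track the two counts; a sketch, with made-up numbers:

```python
def automated_ratio(automated_count, manual_count):
    """Fraction of the regression suite that is automated."""
    total = automated_count + manual_count
    if total == 0:
        return 0.0
    return automated_count / total

# Example: 320 automated checks vs. 80 manual ones in the regression suite.
ratio = automated_ratio(320, 80)  # 0.8, i.e. 80% automated
```

Tracked over time per team, a flat or falling ratio is an early signal that manual verification is quietly becoming the release bottleneck.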

  • Flaky Rate

This one will drive you crazy: flaky tests are those that pass, fail, then pass again for seemingly no reason. These tests will kill your automation efforts, because your team will begin to lose trust in your automation and start ignoring your tests. This metric helps you ensure that your tests are reliable.
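One simple way to put a number on flakiness, assuming you record recent outcomes per test (the data shape below is an assumption, not a standard format): a test whose recent runs contain both passes and failures against the same code counts as flaky.

```python
def is_flaky(results):
    """A test is flaky if its recent runs mix passes and failures."""
    return len(set(results)) > 1

def flaky_rate(history):
    """history maps test name -> list of recent outcomes, e.g. ["pass", "fail"]."""
    if not history:
        return 0.0
    flaky = sum(1 for results in history.values() if is_flaky(results))
    return flaky / len(history)

history = {
    "test_login":  ["pass", "fail", "pass"],  # flaky
    "test_search": ["pass", "pass", "pass"],  # stable
}
rate = flaky_rate(history)  # 0.5, i.e. half the suite is flaky
```

A rising flaky rate is the early warning that trust in the suite is about to erode; quarantining and fixing those tests should jump the priority queue.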

  • Bugs found by automation

This is a good metric for gauging how much value your automation efforts are actually delivering.


Becoming Visible

I’ve covered several areas in the software development lifecycle where I think greater visibility can be added to make your testing and automation efforts more awesome.


I believe that if you focus on raising the visibility of your tests in the planning, coding, and execution stages of your application, you’ll see an increase in team ownership of testing. An added benefit is that everyone will have a better idea of the quality of the application they are developing. This in turn will make your teams happier, your customers happier, and ultimately you happier, like it did me.

To view the webinar in its entirety, check out the replay here:

Creating Visibility and Speed with Automation and Test Case Management

  1. Almost nobody talks about this subject and it is one of the most important things for a software development engineer or test engineer involved in test automation to know.

    I wrote about the importance of highly visible test results here:
    https://www.tesults.com/blog?id=the-importance-of-highly-visible-automated-test-results

    However, I did not cover some of the points you did. The point about flaky tests is one I missed. Everyone is quick to blame the testing infrastructure or the tests themselves for flakiness, and sometimes developers even want the flaky test removed rather than investigated or fixed. Often, though, there are real underlying problems causing the flakiness, such as poor memory management. The best way to combat apathy and avoid turning a team off automated test results altogether, besides keeping track of a ‘flaky rate’ metric as you suggest, is to ensure the test harness is capable of recording crashes and available call stacks, and that a test failure reason is presented to the team where possible. Flaky tests are actually some of the most crucial tests, because they identify hard-to-diagnose issues that would be difficult to spot without the high repetition automation provides.

