
Testing Lessons Learned at Microsoft with Alan Page

By Test Guild

After speaking with Alan on my test automation podcast, Test Talks, I walked away with four main principles that I'm calling the “Alan Page Principles for Testing” (APPT):

  • Be a good system thinker
  • The cure for flaky tests
  • Mean time to diagnosis
  • Exploratory debugging


Be a Good System Thinker – The Tester Mindset

In an Agile, shift-left world, many testers might be asking, “What is a tester supposed to do now?”

Great testers are good system thinkers: they investigate the software and find the holes in the larger system. They see the system as a whole. They can figure out how piece “A” integrates with piece “B,” what that integration should look like, and know when it's working and when it isn't. They also tend to have specialties in non-functional areas such as performance testing, and they can help developers write better tests and make sure their teams are doing things correctly.

At Microsoft, development teams own the short tests, such as unit tests, which are heavily biased toward pass/fail verification. The Quality Team writes longer tests that are more investigative and don't necessarily have pass or fail criteria, but instead produce lots of information you need to interpret, such as reliability tests. Long tests are more in-depth. Alan gives some good examples of long tests against Xbox and Kinect in our Test Talks interview. Listen now:
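To make the short-versus-long distinction concrete, here is a minimal sketch in Python with pytest. The media_player fixture and its methods are hypothetical names used only for illustration, not anything from the interview.

    import logging
    import time

    def test_play_starts_playback(media_player):
        # Short, verification-biased test: a single clear pass/fail check.
        # (media_player is a hypothetical pytest fixture.)
        media_player.play("intro.mp4")
        assert media_player.state == "playing"

    def test_playback_reliability(media_player):
        # Longer, investigative test: it gathers data for a human to
        # interpret rather than reducing everything to one pass/fail.
        logging.basicConfig(level=logging.INFO)
        for minute in range(8 * 60):  # run for roughly eight hours
            time.sleep(60)
            logging.info("minute=%d dropped_frames=%d",
                         minute, media_player.dropped_frame_count())

The first test answers “does it work?”; the second produces a trail of measurements that someone has to read and interpret.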

Flaky Tests


95% of the time, 95% of test engineers will write bad GUI automation, simply because it's a very difficult thing to do correctly. Those of us who have written lots of automated tests over the years have come to the conclusion that it's very difficult to write trustworthy automation where, if a test fails, we automatically know it's a product bug. Most of the time we have to rule out a test bug first, before we can say whether it's a product bug or not. We start saying silly things to ourselves like, “The test failed. I'll just run it again. Awesome, it passed! It's fine!” Wrong!

If you have tests that are failing but should be passing and you're okay with that, are you also okay with having tests that are passing but should be failing? Those are harder to find, so it's important to have reliability on both sides. Until you can trust that your failing tests aren't flaky, you can't trust your pass rate, either. The more you can trust your failing tests, the more you can trust your passing tests.

Alan is not against automation per se; rather, he is against automation that attempts to simulate what the user does, which is unfortunately what most teams do when they start automating. This might be okay for simple Selenium tests that act as quick verification checks, but the more complex the scenario, the more error-prone GUI automation becomes.

A better use for automation, or “programming-assisted testing,” is to make it easier to do complex actions reliably. These kinds of tests tend to be written at the API level, avoiding the GUI, for example by using a model-view pattern that keeps the logic out of the UI. If you ever have a choice between automating through the UI directly or without having to manipulate the UI, always go with the latter. What you want your automation to do is verify functionality; that's the important thing for automation to figure out. Checking whether the UI itself looks okay is the kind of thing the human eye is good at, like catching color and layout issues. If you can manipulate things at a layer other than the UI, your tests will tend to be more reliable and less flaky.
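As a rough illustration of testing below the GUI, here is a sketch in Python using the requests library. The /api/orders endpoint, the example.test host, and the response fields are assumptions made up for this example, not a real API.

    import requests

    BASE_URL = "https://example.test"  # placeholder host for the sketch

    def test_order_total_via_api():
        # Drive the same behavior through the service layer instead of
        # clicking through a checkout UI: no locators, no waits, no flaky
        # rendering to fight with.
        response = requests.post(
            f"{BASE_URL}/api/orders",
            json={"items": [{"sku": "ABC-1", "qty": 2, "price": 9.99}]},
            timeout=10,
        )
        assert response.status_code == 201
        assert response.json()["total"] == 19.98

The functional check is identical to what a UI test would assert, but the path to it is far shorter and far more deterministic.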


Mean Time to Diagnosis

When creating a test, a good question to ask yourself is, “Can I make this test fail?” — just to make sure that you know what it looks like when a test fails. You can then diagnose from the failure. And it's not enough for the test to simply fail — you should be able to easily figure out why it failed. Alan calls this Mean Time to Diagnosis (MTTD).

MTTD is essentially a testing metric that tells you how long it takes you to figure out why a test fails. If you need to hook up a debugger or run the test again, you've lost the battle. You win the battle when the test fails and you look at the log and within two minutes or less can say what the issue most likely is.
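One practical way to drive MTTD down is to make every assertion message carry the context you would otherwise dig out with a debugger. A minimal sketch, assuming a hypothetical login_service fixture:

    def test_lockout_after_three_bad_logins(login_service):
        # login_service and its result fields are hypothetical; the point is
        # that each failure message says enough to diagnose from the log alone.
        for attempt in range(1, 4):
            result = login_service.login("alice", "wrong-password")
            assert not result.succeeded, (
                f"attempt {attempt}: login unexpectedly succeeded "
                f"(session_id={result.session_id})"
            )
        status = login_service.account_status("alice")
        assert status == "locked", f"expected account 'locked', got '{status}'"

If this test fails, the log already tells you which attempt misbehaved and what state the account ended up in, so nobody has to re-run it or attach a debugger just to start investigating.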


Exploratory Debugging

Write your test, and right where your test determines pass or fail, set a breakpoint and see what the variables look like. What's going on when the test passes or fails? That should give you an idea of things that you may want to log. There may be an error path in the code you're testing that you might want to exercise to get more test ideas.
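A minimal sketch of that workflow in Python, assuming a hypothetical cart fixture; the built-in breakpoint() call drops you into the debugger right at the pass/fail decision:

    def test_discount_applied(cart):
        # cart and its methods are hypothetical names for illustration.
        cart.add("ABC-1", qty=3)
        total = cart.total()
        # Pause right where the test decides pass or fail; inspect
        # cart.items, cart.discounts, and total, note anything worth
        # logging permanently, then remove the breakpoint before check-in.
        breakpoint()
        assert total == cart.subtotal() - cart.discount_amount()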

This isn't full Test-Driven Development (TDD) for your test code, but you should still see what your tests look like when they fail while you're writing them. Make sure you understand what that failure looks like and how it may influence the other tests you are writing.


Testing Knowledge Bombs

Alan drops many more testing knowledge bombs in episode 44, Alan Page: Testing Software at Microsoft – Lessons Learned, so make sure to give it a listen and be prepared to get blown away.


Learn More About Alan Page

Books:

Podcast:

  • AB Testing <– A must-listen podcast about software testing

Blog:

  • angryweasel.com

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
