Podcast

44: Alan Page: Testing Software At Microsoft – Lessons Learned

By Test Guild

Have you ever wondered how testing is done at big companies like Microsoft? Curious to know what it takes to succeed with testing and automation across a huge enterprise that has a bunch of different products and technologies? Alan has seen it all and shares his years of testing wisdom. Your ears will be ringing from all the test automation awesomeness knowledge bombs Alan will drop on you in this episode.

About Alan Page


Alan Page has been a software tester for nearly 20 years. He was the lead author of the book How We Test Software at Microsoft and contributed chapters to Beautiful Testing and Experiences of Test Automation: Case Studies of Software Test Automation. He also writes about a variety of software engineering subjects on his blog at http://angryweasel.com/blog.

Alan joined Microsoft as a member of the Windows 95 team, and since then has worked on a variety of Windows releases, early versions of Internet Explorer, and Office Lync. Notably, Alan served for over two years as Microsoft’s Director of Test Excellence.

Alan is currently working on something new at Microsoft. He has held many roles there; his most recent was as a Principal Software Design Engineer in Test (SDET) on the Xbox team, designing and implementing test infrastructure and tests, and coaching and mentoring testers and test managers across the Microsoft organization.

Alan also leads company-wide quality and testing focused communities made up of senior engineering employees.

Quotes & Insights from this Test Talk

  • Exploratory Debugging – write your test, and right where your test determines whether it passed or failed, set a breakpoint and look at the variable values. What’s going on when it passes or fails? This will give you an idea for things to log — or you may find an error path that you don’t have a test for.
  • The difference between short tests and long tests
  • Mean Time to Diagnosis – a testing metric: when a test fails, how long does it take you to figure out why?
  • If a test fails and you need to hook up a debugger or run the test again, you’ve lost the battle
  • You win the battle when you can look at the log and, within two minutes or less, say what the issue most likely is.
  • Flaky tests – if you have tests that are failing but should be passing and you’re okay with that, are you also okay that you have tests that are passing but should be failing? Those are harder to find, so it’s important to have reliability on both sides. Until you can trust that your failing tests aren’t flaky, you can’t trust your pass rate, either.
  • Much, much more!
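The "exploratory debugging" idea above can be sketched in a few lines. This is a minimal illustration, not Alan's actual code; the function and values are hypothetical. The point is to pause right where the test decides pass/fail and inspect the values involved:

```python
# Hypothetical sketch of "exploratory debugging": set a breakpoint at the
# line where the test determines pass/fail, then inspect the variables.

def parse_price(text):
    # Hypothetical function under test: turns " $19.99 " into 19.99.
    return float(text.strip().lstrip("$"))

def test_parse_price():
    result = parse_price(" $19.99 ")
    expected = 19.99
    # breakpoint()  # uncomment to drop into pdb and inspect result/expected
    assert result == expected, f"parse_price returned {result!r}, expected {expected!r}"
```

Stepping through at that breakpoint shows what the values look like when the test passes or fails, which suggests what to log and may reveal error paths you have no test for.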
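Mean Time to Diagnosis improves when a test logs enough context that a failure can be explained from the log alone, with no debugger and no rerun. A hedged sketch (function and values are hypothetical):

```python
import logging

# Log inputs and results so a failure can be diagnosed from the log alone.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("checkout-tests")

def apply_discount(price, percent):
    # Hypothetical function under test.
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    price, percent = 80.0, 25
    log.info("inputs: price=%s percent=%s", price, percent)
    result = apply_discount(price, percent)
    log.info("result=%s", result)
    # The assertion message itself carries the diagnosis.
    assert result == 60.0, f"apply_discount({price}, {percent}) returned {result}, expected 60.0"
```

If this test fails, the log and the assertion message together say what was fed in and what came out, which is usually enough to name the likely issue within the two-minute window Alan describes.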
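One simple way to surface flakiness is to rerun a test many times and check that its outcome is consistent. A minimal sketch (helper names are made up for illustration):

```python
def run_repeatedly(fn, runs=50):
    """Run a test function repeatedly and count passes and failures.
    A trustworthy test is all-or-nothing; mixed results mean flakiness."""
    passes = fails = 0
    for _ in range(runs):
        try:
            fn()
            passes += 1
        except AssertionError:
            fails += 1
    return passes, fails

def stable_test():
    assert 1 + 1 == 2

passes, fails = run_repeatedly(stable_test)
is_flaky = passes > 0 and fails > 0
```

This only catches tests that fail when they should pass; tests that pass when they should fail need fault injection or mutation testing to expose, which is why they are the harder half of the reliability problem.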

Resources

Connect with Alan Page

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.


Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

Special offer for TestTalks listeners: get 20 hours of automated testing for free when you sign up with promo code testtalks14 (more info).

