Hunting Sasquatch Automation: Finding Intermittent Issues and More
Paul covers many things in this episode, including hunting down elusive, intermittent issues that can cause your automation to randomly fail due to actual product defects. You’ll also discover how to socially engineer your test automation efforts with empathy and conversations in order to build relationships that will help your automation thrive. This episode is chock-full of good test automation practices, so listen up!
About Paul
Paul Grizzaffi is the Automation Program Architect and Manager for Revenue Cycle Technology at MedAssets, a healthcare performance improvement company. His career has focused on the creation and deployment of automated test strategies, frameworks, tools, and platforms. He holds a Master of Science in Computer Science and is a Certified ScrumMaster (CSM) from Scrum Alliance. Paul has created automation platforms and tool frameworks based on proprietary, open-source, and vendor-supplied tool chains in diverse product environments (telecom, stock trading, e-commerce, and healthcare). He is an accomplished speaker who has presented at both local and national meetings and conferences; he’s also an advisor to Software Test Professionals and STPCon. Paul looks forward to sharing his experiences and expanding his automation and testing knowledge into other product environments.
Quotes & Insights from this Test Talk
- Periodic automation, in the context of what we’re talking about here, typically runs in a continuous integration environment. It’s good at catching certain classes of errors and issues, but other types of issues, such as timing-related problems, race conditions, and other intermittent failures, are harder to catch that way. If you rerun your automation, a subset of your automation, or perhaps even different types of automation at different periods, not necessarily on an event boundary, you have the opportunity to catch these types of issues more often. (A scheduling sketch follows this list.)
- As soon as you have that first win, where the periodic automation said, “I see a problem that the humans are not seeing because I’m looking for it more often,” to personify the automation here, people start to take notice. They start to give it a little more credence. The first one is hard, it really is.
- Automation gets stale. Things that were valuable before are no longer valuable; perhaps we should cull those scripts. There are also features that have expanded beyond what a script covers, and if we didn’t go back and expand the script’s coverage, we’re now assuming that script is doing something it is not, so we don’t have the coverage we expected to have. Problems can escape us that way as well. Looking at the results as a whole for trends is very interesting, but auditing individual things along the way is very valuable too.
- The more I can make the tool, the framework, and all the ancillary pieces, like the error messages, conducive to helping you be more effective and more efficient, the better. I’m going to bake those things into the framework, and I’m going to coach you: “Here’s a better way to write your assertions,” one where you do get, “Here’s what I was looking for; here was the actual value,” as opposed to a true/false binary type thing. (See the assertion example after this list.)
- I use that word “empathy” a lot because I have to put myself in the position of the different people that I work with: everybody from the person who’s just going to kick off the automated scripts and consume the results, to the people who are working on the framework with me, to the people who are actually funding me to build frameworks and deliver automation capability into their teams’ hands. I let them know, “Here’s what you were getting, and if we don’t do this thing you’re asking for, I can give you this other thing, and I can quantify it by reducing your opportunity cost or by helping you get to your next milestone quicker.”
- I would say start thinking about it less from a test case count, test script count, percentage-done standpoint, and think more about, “How can I provide some value using technology? I’ve only got X amount of time to spend helping me get my job done. Where can I most effectively spend that?” If it’s building a smoke suite that’s going to run on continuous integration check-ins, great, go do that by all means. But if it’s something that’s going to build a dashboard for you or do a giant data conversion for you, spend your time doing that, because you can potentially save hours, days, or weeks of effort doing these types of things, and I’ll submit that that is automation, because a computer did the bulk of the work for you.
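To make the periodic-automation idea concrete, here’s a minimal sketch of running a suite on a timer rather than on a commit event. It’s illustrative only: Paul doesn’t name a specific tool in the episode, and the pytest command, suite path, and log file below are assumptions.

```python
import subprocess
import time
from datetime import datetime

SUITE = "tests/smoke"        # hypothetical suite path; substitute your own
INTERVAL_SECONDS = 30 * 60   # run every 30 minutes, not on an event boundary

while True:
    started = datetime.now().isoformat(timespec="seconds")
    # Re-running the same suite on a timer surfaces timing-related,
    # intermittent failures that a single post-commit run can miss.
    result = subprocess.run(["pytest", SUITE, "-q"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        # Timestamp each failure so intermittent issues can later be
        # correlated with deploys, load, or time of day.
        with open("periodic_failures.log", "a") as log:
            log.write(f"{started}\n{result.stdout}\n{'-' * 40}\n")
    time.sleep(INTERVAL_SECONDS)
```

In practice you’d hand this job to your CI system’s cron-style trigger, but the idea is the same: the value comes from looking more often, off the event boundary.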
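And for the assertion advice, here’s a small sketch using Python’s built-in unittest; `compute_total` is a hypothetical stand-in for your code under test.

```python
import unittest

def compute_total():
    # Hypothetical stand-in for the code under test.
    return 41.25

class CartTotalTest(unittest.TestCase):
    def test_binary_style(self):
        # Fails with only "False is not true": no hint of the actual value.
        self.assertTrue(compute_total() == 42.50)

    def test_expected_vs_actual(self):
        # Fails with "41.25 != 42.5", so triage can start immediately.
        self.assertEqual(compute_total(), 42.50)

if __name__ == "__main__":
    unittest.main()
```

The second style bakes the “here’s what I was looking for, here’s the actual value” message into the failure itself, which is exactly the kind of framework-level help Paul describes coaching people toward.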
Resources
- Paul’s STPCon 2016 session: I’m Hunting Sasquatch – Finding Intermittent Issues Using Periodic Automation
- Experiences of Test Automation: Case Studies of Software Test Automation
Connect with Paul
- Twitter: @pgrizzaffi
- LinkedIn: https://www.linkedin.com/in/paulgrizzaffi/
May I Ask You For a Favor?
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.
Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show, and I read each and every one of them.
Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!