It’s the middle of 2016 – a good time to take a mid-year pulse of the test automation community. In this episode we TestTalk with Greg Paskal, author of Test Automation in the Real World and Director of Quality Assurance – Automation at Ramsey Solutions, A Dave Ramsey Company, all about his latest “Testing in the Real World” survey to find out what folks are currently succeeding at and struggling with in their test automation efforts.
About Greg Paskal
Greg is currently the Director of Quality Assurance – Automation at Ramsey Solutions, A Dave Ramsey Company. He is also the author of multiple white papers on test automation and testing in general.
Greg recently published his first book, “Test Automation in the Real World”, sharing insights from over 30 years of automated testing development. He has spoken at many conferences including StarEast, StarWest, QAI and QA TrailBlazers.
Greg is known in the testing industry as being passionate about mentoring and training others in software testing and other technical areas. Greg hails from California but currently lives in Nashville, TN with his wife Cindy and daughter Hannah.
Quotes & Insights from this Test Talk
- I created the survey really because, now that I'm getting into the open source tools with Selenium, I'm starting to see some commonalities between both tool sets. You might have mentioned earlier how some people will look at a certain tool set and say, “Hey, this one's the silver bullet. It solves all these problems.” I've heard people talk that way about Cucumber and tools in the Selenium space, as if “this solves all the problems.” Capybara is another one. Everybody and their grandma's talking about how that solves all these problems, but when you get into it, you're finding similar problems across both tool sets.
- Consider this: What type of automation do we get if we don't have a solid QA background? It's probably going to lead to a lot of automated activity and those sorts of things that might not be maintainable over the long haul. We've all encountered this. I've been at a number of shops that have hired someone to come in and automate the entire regression suite, and then they leave and it's completely unmaintainable. It's just a lack of experience in what it takes to keep this stuff running over the long term, but, boy, there are a lot of people who would love to make a living doing that.
- If you're 100% self-taught, man, I've got to hand it to you, give you kudos for doing that, but if you don't have a methodology you're following, where that's going to become trouble is when you've got to hire someone else who has to continue to maintain that automation. You may come up with a great process that's simply not maintainable over the long term, and you wouldn't even know it.
- A lot of test automation dies on the vine. What you just laid out is the scenario I've seen time and time again. A guy has the best intentions, and he works his tail off for a year, and then, unfortunately, that automation can't be maintained and sustained over the long haul. It just wasn't built with any kind of experience or guidance along the way. I write about this in my book; it's what I call using a tool “as discovered.” If you start using a tool as discovered versus as designed, it leads you into all kinds of wild goose chases.
- I call this approach the “desert island scenario” where if you were stuck on a desert island and you couldn't have anything else, and Selenium was there, or whatever tool you use, well so be it. That's what you've got to use, but we aren't like that. We're so connected, so, at the very least, I would challenge folks to take a look at what's available out there. Consider the long-term history of these tools. Are they growing? Is it the right thing for my company? I'm about to invest maybe a whole year of dev time in it and hiring folks, and that thing's got to be sustainable over the long run.
- It does make you question where we hit that ROI. If we wind up chasing these wild goose chases of false fails, there's some balance in there. Most automation engineers are pretty familiar with the false fail. When they happen, I'll have my guys kick off the scripts a second time to make sure whether it's a real defect or not. If it's not, that's something I'll go and attend to, to try to shore up the automation if there's something wrong with the synchronization or object recognition.
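The re-run practice Greg describes can be sketched in a few lines. This is a minimal illustration, not from any specific framework; `run_with_retry` and `flaky_check` are hypothetical names, and a real suite would wire this into its runner (or use a plugin such as pytest-rerunfailures) rather than a bare function:

```python
def run_with_retry(test_fn, retries=1):
    """Run test_fn; on an AssertionError, retry up to `retries` more times.

    Returns (passed, attempts) so a reviewer can see whether the pass
    came on a re-run -- a hint of flakiness (synchronization or object
    recognition) worth shoring up, rather than a real defect.
    """
    attempts = 0
    while True:
        attempts += 1
        try:
            test_fn()
            return True, attempts
        except AssertionError:
            if attempts > retries:
                return False, attempts

# Example: a deliberately flaky check that fails only on its first call,
# simulating a synchronization-related false fail.
calls = {"n": 0}

def flaky_check():
    calls["n"] += 1
    assert calls["n"] > 1, "simulated synchronization flake"

passed, attempts = run_with_retry(flaky_check)
print(passed, attempts)  # passes on the second attempt
```

The point of returning the attempt count is Greg's distinction: a second-run pass is not a green light, it is a signal to go fix the underlying synchronization or locator problem.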
Resources
- Book – Test Automation in the Real World
- Test Automation in the Real World Survey Results
- TestTalks episode 19 Removing “Test” from Test Automation
- TestTalks episode 8 Helpful Tips When Implementing Test Automation
Connect with Greg Paskal
- Blog: gregpaskal.com
- Email: greg[@]gregpaskal.com
May I Ask You For a Favor?
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.
Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!