
112: Real World Test Automation Survey Results with Greg Paskal

By Test Guild

It’s the middle of 2016 – a good time to take a mid-year pulse of what’s going on in the test automation community. In this episode we’ll Test Talk with Greg Paskal, author of Test Automation in the Real World and Director of Quality Assurance – Automation at Ramsey Solutions, A Dave Ramsey Company, all about his latest “testing in the real world” survey, which looks at what folks are currently succeeding with and struggling against in their test automation efforts.

About Greg Paskal


Greg is currently the Director of Quality Assurance – Automation at Ramsey Solutions, A Dave Ramsey Company. He is also the author of multiple white papers on test automation and testing in general.

Greg recently published his first book, “Test Automation in the Real World”, sharing insights from over 30 years of automated testing development. He has spoken at many conferences including StarEast, StarWest, QAI and QA TrailBlazers.

Greg is known in the testing industry as being passionate about mentoring and training others in software testing and other technical areas. Greg hails from California but currently lives in Nashville, TN with his wife Cindy and daughter Hannah.

Quotes & Insights from this Test Talk

  • I created the survey really because, now that I'm getting into the open source tools with Selenium, I'm starting to see some commonalities between both tool sets. You mentioned earlier how some people will look at a certain tool set and say, “Hey, this one's the silver bullet. It solves all these problems.” I've heard people talk that way about Cucumber and tools in the Selenium space, as if they solve all the problems. Capybara is another one. Everybody and their grandma is talking about how it solves all these problems, but when you get into it, you're finding similar problems across both tool sets.
  • Consider this: What type of automation do we get if we don't have a solid QA background? It's probably going to lead to a lot of automated activity, and those sorts of things might not be maintainable over the long haul. We've all encountered this. I've been at a number of shops that have hired someone to come in and automate the entire regression suite, and then they leave and it's completely un-maintainable. It's just a lack of experience in what it takes to keep this stuff running over the long term, but, boy, there are a lot of people who would love to make a living doing that.
  • If you're 100% self-taught, man, I've got to hand it to you, kudos for doing that. But if you don't have a methodology you're following, where that's going to become trouble is when you've got to hire someone else who has to continue to maintain that automation. You may come up with a great process that's simply not maintainable over the long term, and you wouldn't even know it.
  • A lot of test automation dies on the vine. What you just laid out is the scenario I've seen time and time again: a guy has the best intentions, he works his tail off for a year, and then, unfortunately, that automation can't be maintained and sustained over the long haul. It just wasn't built with any kind of experience or guidance along the way. I write about this in my book, in what I call “don't use a tool as discovered.” If you start using a tool as discovered versus as designed, it leads you into all kinds of wild goose chases.
  • I call this approach the “desert island scenario” where if you were stuck on a desert island and you couldn't have anything else, and Selenium was there, or whatever tool you use, well so be it. That's what you've got to use, but we aren't like that. We're so connected, so, at the very least, I would challenge folks to take a look at what's available out there. Consider the long-term history of these tools. Are they growing? Is it the right thing for my company? I'm about to invest maybe a whole year of dev time in it and hiring folks, and that thing's got to be sustainable over the long run.
  • It does make you question where we hit that ROI. If we wind up chasing wild goose chases after these false fails, there's some balance in there. Most automation engineers are pretty familiar with the false fail. When they happen, I'll have my guys kick off the scripts a second time to make sure it's a real defect or not. If it's not, that's something I'll go and attend to, to try to shore up the automation if something's wrong with the synchronization or object recognition.
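
That rerun-and-shore-up practice maps to two common techniques: replacing fixed sleeps with explicit waits for synchronization, and rerunning a failed script once before reporting a defect. The sketch below is a minimal, hypothetical illustration using Selenium WebDriver in Python; the URL, the "results" locator, and the chromedriver setup are assumptions, not anything described in the episode.

```python
# Minimal sketch (hypothetical URL and locator): handling the "false fail"
# pattern described above with an explicit wait for synchronization and a
# single rerun before treating the failure as a real defect.
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def results_panel_visible(driver, timeout=10):
    """Wait for the results panel instead of sleeping for a fixed time."""
    wait = WebDriverWait(driver, timeout)
    return wait.until(EC.visibility_of_element_located((By.ID, "results")))


def run_check(url="https://example.com/search"):  # hypothetical page
    driver = webdriver.Chrome()  # assumes chromedriver is on the PATH
    try:
        driver.get(url)
        results_panel_visible(driver)
        return True
    except TimeoutException:
        return False
    finally:
        driver.quit()


if __name__ == "__main__":
    # Kick the script off a second time before calling it a real defect,
    # mirroring the practice described in the quote above.
    passed = run_check() or run_check()
    print("passed" if passed else "real defect - investigate")
```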

Resources

Connect with Greg Paskal

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.


Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

Leave a Reply



  1. Joe & Greg,
    Great talk! I like how you both point out that Selenium is an API and that it takes putting it together with other tools to complete the overall solution. I found it both interesting and disheartening, like you said, that the reason was that it was “free”. As we all know, nothing is free, and the “costs” of implementing automation can vary in both where they show up and how much they amount to (time and money).

    I’m not knocking Selenium; it has matured fantastically compared to a few years ago. People are building out add-ons for different technologies every day, and it has gained ground on the commercial tools because of that. But as with anything else, there isn’t a silver bullet here.

    In general, though, I would like to have seen the survey ask more questions about project implementation success and failure: why and why not, what went right and what went wrong. We need to understand more about how people are doing with the tools and process so that we can start to come up with solutions and methods to help them out.
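
The point in this comment that Selenium is only an API is worth making concrete: WebDriver drives the browser, but the runner, assertions, fixtures, and reporting come from whatever you assemble around it. Below is a minimal, hypothetical sketch (Python with pytest-style asserts; the URL and page are assumptions) showing where that dividing line sits.

```python
# Minimal sketch, assuming a hypothetical login page: Selenium WebDriver only
# drives the browser; the runner, assertions, and teardown come from the
# framework you build around it (a pytest-style test here).
from selenium import webdriver


def test_login_page_title():
    driver = webdriver.Chrome()                  # browser control: Selenium's job
    try:
        driver.get("https://example.com/login")  # hypothetical URL
        assert "Login" in driver.title           # verification: the test framework's job
    finally:
        driver.quit()                            # cleanup: your framework/fixtures' job
```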

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
