145: Automation Testing Maturity Curve with Danny McKeown

By Test Guild

In this episode we’ll be test talking with Danny McKeown, a test automation architect at Paychex, about Staying Ahead of the Mobile and Web Test Maturity Curve. Learn how Paychex iteratively built a well-defined web and mobile app test automation architecture. Listen in to hear some of Danny’s lessons learned about the strategy and structure of their “wicked cool” automation solution.

About Danny McKeown

Danny McKeown has more than thirty years of technical and management experience in information technology. He has been with Paychex for thirteen years and has spent the last seven as its test automation architect, where he has been instrumental in implementing a secure integration of Paychex's automation framework, which leverages Selenium and other vendor technologies. In addition, Danny is an adjunct lecturer in the Rochester Institute of Technology software engineering department and serves on the advisory board of the International Institute for Software Testing. He is very active in the continuous delivery and test automation space and presents regularly at the STAREAST conference and other forums.

Quotes & Insights from this Test Talk

  • That was one of the first things we actually put in place: my role as an architect. Then the next thing is you want to have a strategy. What we did is we honed in on the test automation pyramid introduced by Mike Cohn, and we constantly reinforced the concept that you have to have different tiers of testing, with dependencies from one tier to the next. Specifically, as you may know, the automation pyramid has unit testing at its base, so the developers start with unit tests as the first quality gate, and the next step, based on how you get through that gate, is a lot of non-UI testing.
  • But what we did early on is we pretty much jumped over the non-UI tier and went right to the UI, which is very typical of any functional automation group. We learned the hard way, and I try to warn companies that are just starting off in automation: don't make that mistake, because what you get is a lot of false negatives. The test environment may not be whole, and there are other factors that may cause problems, so you end up chasing ghosts. They look like GUI failures, but when you get to them, they have nothing to do with the application or the UI layer; the diagnostics are harder to pinpoint, and you waste a lot of time drilling down. So we learned the hard way: get that API, non-UI layer built first. Test the integrity of the web servers, test the integrity and availability of the application, the firmware, and the hardware, and if we're doing cloud solutions, going outside, make sure that cloud connection is there. Then, and only then, once those tests pass, do we proceed to the UI layer. (The pre-flight sketch after this list shows the general idea.)
  • There's also the question of how much effort we put into our pre-production environments, and we have many of them. The one nearest production, our "plus one," is where the good majority of the regression tests happen. The "plus two" systems are where the projects are really happening, and you may even have a "plus three" that's more of a development test environment. Every one of those environments has its challenges in terms of how much effort, resources, and, quite frankly, money we want to put into understanding it. But what I can tell you is that in every one of those environments we will run some type of non-UI or precondition automated test, just to verify some basic integrity before we proceed. When we do end-to-end tests, which go across many systems, that vetting is even more important, because each test could run for over an hour, and the stability of these non-production environments can change: the patching or installed apps may not be as planned as they would be in a production environment. So to answer your question, we don't use a lot of off-the-shelf or external tools today. We are gravitating toward looking at those more and more, and maybe putting monitoring around our non-production environments, but as of today we really just run pre-scripts to validate the integrity of the environment.
  • So what we did in our next generation is build our own tool, which under the hood integrates the different vendors through their APIs. We present the applications and the UIs to our testers in one uniform way, regardless of which vendor we're integrating with behind the scenes, so they all know how to execute test automation across the enterprise. The one big problem we really had at the time was that testers move between different areas of Paychex, and we didn't want them to have to learn different test automation tools. (The adapter sketch after this list shows the shape of that idea.)
  • One of the things we did, though, which may be unique to Paychex, and I think every company has to figure out what this looks like for them, is recognize that we had a lot of people running around saying, "We're doing automation." And they all were: development does automation by creating unit tests; we have a build automation group involved in the build, deploy, and test pipeline; and we have testers who, as we mentioned, work at the higher level, consuming our test automation framework, exercising both the UI and the non-UI layers, and relying on the test automation framework team to build tasks for them.
  • The best advice I would give somebody is: use your automation. Whatever you build, whether it's a big script or a small script, and independent of the technology or vendor you used, I've seen too many scripts just sit there not being run. Run it once a day or once a week, and when it doesn't pass, find out why. Record that and share it with people: "Look, I run this script once a day, and it fails four of the five days a week, and it's not because of the application." When I run something manually, I can see that the environment is slow and compensate for it; if the environment is down, I won't even try my manual test, I'll wait until the environment is back up. An automated script won't compensate like that, so its record shows what is really happening. That lets you start talking to IT in a factual way, as opposed to an emotional way, about the changes we needed to make automation successful and valuable. (The small logging sketch after this list shows the kind of record I mean.)
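
To make the "test the environment before the UI" idea concrete, here is a minimal pre-flight sketch in Python. The endpoint URLs, names, and timeout are hypothetical placeholders rather than Paychex's actual checks; the point is only that a pipeline can gate the UI stage on a script like this.

```python
# Minimal pre-flight integrity check, run before any UI suite.
# All URLs below are hypothetical placeholders.
import sys
import urllib.error
import urllib.request

# Hypothetical health endpoints for the systems the UI suite depends on.
PRECONDITION_CHECKS = {
    "web server": "https://env-plus-one.example.com/health",
    "application API": "https://env-plus-one.example.com/api/status",
    "cloud connection": "https://cloud.example.com/ping",
}


def environment_is_healthy(timeout_seconds: float = 5.0) -> bool:
    """Return True only if every dependency answers with HTTP 200."""
    healthy = True
    for name, url in PRECONDITION_CHECKS.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout_seconds) as resp:
                ok = resp.status == 200
        except (urllib.error.URLError, OSError):
            ok = False
        print(f"{name}: {'OK' if ok else 'FAILED'} ({url})")
        healthy = healthy and ok
    return healthy


if __name__ == "__main__":
    # Exit non-zero so a CI pipeline skips the UI stage instead of
    # chasing "ghost" GUI failures caused by a broken environment.
    sys.exit(0 if environment_is_healthy() else 1)
```

A pipeline would run this script first and only start the UI tier on a zero exit code, so a red UI test can be trusted to be about the application rather than the environment.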
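
The uniform front end over multiple vendor tools that Danny describes is essentially an adapter pattern. Since the episode doesn't detail the actual Paychex tool or the vendor APIs, everything below is a hypothetical stand-in that only shows the shape of the idea: testers call one interface, and the framework routes to the right vendor behind the hood.

```python
# Adapter sketch: one uniform interface over hypothetical vendor tools.
from abc import ABC, abstractmethod


class TestRunner(ABC):
    """The single interface every tester sees, regardless of vendor."""

    @abstractmethod
    def run_suite(self, suite_name: str) -> bool:
        ...


class VendorARunner(TestRunner):
    def run_suite(self, suite_name: str) -> bool:
        # In a real tool this would call vendor A's API; stubbed here.
        print(f"[vendor A] executing {suite_name}")
        return True


class VendorBRunner(TestRunner):
    def run_suite(self, suite_name: str) -> bool:
        # In a real tool this would call vendor B's API; stubbed here.
        print(f"[vendor B] executing {suite_name}")
        return True


# Testers pick a suite, not a tool; the framework maps suites to vendors.
RUNNERS = {"payroll-ui": VendorARunner(), "mobile-smoke": VendorBRunner()}


def execute(suite_name: str) -> bool:
    return RUNNERS[suite_name].run_suite(suite_name)


execute("payroll-ui")  # A tester never needs to know which vendor ran it.
```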
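
Finally, the "run it and record it" advice boils down to keeping a factual log of every scheduled run. This sketch (the file name and fields are hypothetical) appends one row per run; over weeks, that file becomes the data for the factual conversation with IT.

```python
# Record each scheduled automation run so failure patterns become data.
import csv
from datetime import datetime, timezone


def record_run(script_name: str, passed: bool, note: str = "") -> None:
    """Append one result row to a running log (hypothetical file name)."""
    with open("automation_runs.csv", "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            script_name,
            "pass" if passed else "fail",
            note,
        ])


# Example: a nightly job would call this after each scripted run.
record_run("login_smoke", False, "environment timeout, not an app defect")
```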

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}

267: Smart Test Execution with Eran Sher

Posted on 08/25/2019

Do you run an entire automation test for every build because you don’t ...

266: Automation Journey and TestNG with Rex Jones II

Posted on 08/18/2019

In this episode we’ll test talk with Rex Jones about his automation testing ...

265: TestProject a Community Testing Platform with Mark Kardashov

Posted on 08/11/2019

In this episode, we’ll talk to Mark Kardashov, CEO and Co-Founder of TestProject, ...