
122: WalmartLabs Test Armada with David Cadwallader

By Test Guild


In this episode, we’ll be test talking with Dave Cadwallader, a senior engineering manager in automation infrastructure, about a fleet of proven, open-source test automation tools developed by WalmartLabs that can help you succeed with automation.


Get ready to discover lessons learned about building and maintaining code and infrastructure that support massive-scale testing at Walmart, as well as tips for helping front-end web development teams succeed.

About David Cadwallader


Dave Cadwallader is currently a senior engineering manager of automation infrastructure at Walmart eCommerce. His goal is to help teams automate the tedious elements of testing so they can focus on the fun stuff and catch bugs sooner.


Dave is also a co-creator of TestArmada, an open-source fleet of testing tools for making cross-browser, end-to-end testing fast, user-friendly, and valuable at scale across large teams, without those annoying false positives.

Quotes & Insights from this Test Talk

  • Basically, a few of my colleagues and I took it upon ourselves to do a little weekend hackathon project and try to put together some automation for testing checkout across multiple browsers. We made some good progress, and then we just gained traction from there. Eventually, that led to me having my own team, building a whole organization around test automation, and then starting to roll that out into multiple parts of the Walmart ecosystem. Now, it’s actually a part of our continuous delivery pipeline. It’s a mandatory part of what we do every day.
  • For the managers and the VP-level persona, we really pitched this idea of continuous delivery. At the time we started this effort, Walmart had release cycles that would sometimes be one or two months apart, and whenever we needed to do a release there was always a massive amount of stress and coordination, getting everybody onto this consolidated release train. We had people staying up until 4 in the morning running a test cycle during off hours, when it was less likely to affect customers, just to deploy something.
  • One of the goals with TestArmada is we didn’t want to be just another test automation library, so we didn’t try to reinvent that wheel. We internally chose Nightwatch as our testing library of choice because we’re a Node.js shop, so everybody at our company is already thinking in Node, and because we thought the syntax of Nightwatch was very familiar and easy to grasp for developers and QA people alike. We had to think about the persona of a QA person who didn’t have strong development experience, and who might not immediately use, appreciate, or understand some of the more advanced JavaScript constructs like promises and callbacks. (A minimal Nightwatch-style sketch follows this list.)
  • When we first implemented rerunning failed tests multiple times before reporting a failure, we felt ashamed of ourselves. As an engineer, you don’t want non-determinism; you don’t want errors that you feel you can’t control. We spent months really trying to pin down all of the different sources of flaky behavior in Selenium. We talked to Sauce Labs about it, we talked with people in the community, and it was just a game of whack-a-mole, right? (A retry sketch follows this list.)
  • The reason it’s so important for us not to have false positives is that we have shifted to relying on test automation as hard gates, as we call them, for everything from deployments to pull requests. We believe firmly in shift-left or, as we like to call it, “Close the barn door before the horse bolts.” We run the entire cross-browser functional smoke test suite for an application, using Sauce Labs and TestArmada, on every single pull request.
  • I think my one piece of advice would be: as developers, it’s very tempting to write our tests from a developer’s point of view, thinking, “Can I test everything that’s going on under the hood? Can I test every technical aspect of my page?” Sometimes what we’ve seen is that this gets test writers and developers tripped up in a state of, “I’m trying to test something really complicated, and I can’t wrap my head around it,” so what we really try to steer everybody towards is: test the things a user would test. Test from the perspective of a user of the website.
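
Here is a minimal sketch of what the Nightwatch syntax Dave mentions looks like. The URL and selectors below are hypothetical, invented only to illustrate the chainable command-and-assertion style that the team found approachable for developers and QA people alike:

```javascript
// Minimal Nightwatch-style test (sketch only; the URL and selectors are hypothetical).
// Nightwatch passes a `browser` object whose chained commands read almost like a
// plain-English script, which is what made it easy to grasp without deep knowledge
// of JavaScript promises or callbacks.
module.exports = {
  'checkout page shows the order summary': function (browser) {
    browser
      .url('https://www.example.com/checkout')   // hypothetical page under test
      .waitForElementVisible('body', 5000)       // wait up to 5 seconds for the page
      .assert.visible('#order-summary')          // hypothetical selector
      .assert.titleContains('Checkout')
      .end();                                    // close the browser session
  }
};
```

A file like this would typically be run with the Nightwatch runner against a local Selenium setup or a cloud provider such as Sauce Labs.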
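The retry approach Dave describes (and is reluctant about) boils down to a small wrapper: a test is only reported as failed once it has failed every one of a fixed number of attempts. The following is a generic sketch of that idea, not TestArmada’s actual implementation; `runWithRetries` and `runTest` are hypothetical names, and `runTest` is assumed to be any async function that rejects when a single test run fails:

```javascript
// Sketch of "rerun before reporting failure" (not TestArmada's actual code).
// A one-off flaky failure is retried; only a test that fails on every attempt
// is reported as a real failure.
async function runWithRetries(runTest, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await runTest();                      // one attempt of the test
      return { passed: true, attempts: attempt };
    } catch (err) {
      lastError = err;                      // possibly flaky; try again
    }
  }
  // Every attempt failed, so report the failure.
  return { passed: false, attempts: maxAttempts, error: lastError };
}
```

The trade-off is the one Dave acknowledges: retries absorb one-off Selenium flakiness, but they can also mask genuinely intermittent bugs, which is why the team first spent months chasing the underlying sources of flaky behavior.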

Resources

Connect with David Cadwallader

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

Sponsored by Sauce Labs

Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

Comments


  1. TestArmada is a great tool; we have been using it at GoDaddy for over a year, and it is one of those rare tools that lives up to its promise of reliability and parallelism. Thanks, Dave, for your efforts and this podcast.

    I would like to point out one thing that was discussed during the podcast: the ‘test pyramid’ and how it is OK to have an hourglass shape. The test pyramid talks about E2E and integration tests and how they should be minimal, while Dave argues for 100% functional test coverage. All the tests he mentioned for PRs, build validation, etc. are functional tests because they use mocks; they are not the E2E tests that the test pyramid would like to minimize.

    E2E tests are hard, a necessary evil, and they have to be kept minimal; otherwise we would be bogged down with flakiness. I just don’t want folks to misunderstand what he meant, and it would be nice if he clarified it as well.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
