
126: Making Cross-Browser Tests Beautiful with Meaghan Lewis

By Test Guild


On this show we’ll address an extremely common problem most automation engineers have to deal with: running cross-browser tests. Meaghan Lewis is a test automation engineer at Lever and a former employee of Thoughtworks (the folks that gave us Selenium). She’ll be giving us a sneak peek at her Selenium Conference presentation, Making Cross-Browser Tests Beautiful.

About Meaghan Lewis


Meaghan Lewis is a passionate quality engineer who loves all things testing. She is skilled in automation for both web and mobile applications and an advocate for embedding quality throughout software delivery practices. Meaghan currently works at a startup in San Francisco and is continuing to evolve her craft.

Quotes & Insights from this Test Talk

  • This session is about my experience at the first startup I worked at, and how the decision came about that we needed cross-browser tests. I had built an automation suite running in Chrome, and that was really great at first. The team was really happy and the tests were catching issues, but customers were complaining that a certain feature didn’t work in Internet Explorer – users weren’t having the same experience across all browsers. That prompted me to think about a solution, and that’s where cross-browser testing came in. I already had this automation running for Chrome, so I figured it would be really easy to have these same tests run for Internet Explorer, for Firefox, for Safari, and I started building up those tests. In doing that, I found it wasn’t as easy as I assumed just to switch out a driver – say, instead of using a Chrome driver, we’re going to use a Firefox driver (see the driver-selection sketch after this list) – there were some issues along the way. I’m going to talk about some of the pitfalls people commonly run into, and solutions for getting past them, so you end up with a really robust set of cross-browser tests.
  • Should you be testing on all browsers? I guess it really depends. The most important thing for me, and why I wanted to do cross-browser testing in the first place (we were getting lots of customer complaints of “this feature isn’t working” or “this doesn’t look right”), was that I didn’t want that to happen anymore. I wanted a good safety net to be able to say, “Yes, I have tested functionality across all browsers, and yes, it does work,” versus getting complaints when I can’t answer for sure because I don’t have the confidence to know whether things are actually functioning or appearing correctly. In my opinion, that’s why it’s important to run your tests across all the different browsers; or at least, if they’re not automated, to do some manual checks to make sure that core functionality is working.
  • As for the tool I’ve used most commonly – especially since I’m working at companies that only have MacBooks and I don’t have access to IE at all – I’ve really come to love Sauce Labs. Sauce Labs gives you lots of great options for using all these different platforms and browsers like IE, and you have all the different versions of IE at hand, I think even going back to IE6 – maybe you have users still on IE6! (Laughs) I think that has been a great way for me to manually test features across browsers and platforms.
  • IE drivers are just really slow. I mentioned I use Sauce Labs, and with those virtual machines it makes the experience even slower (see the Sauce Labs and explicit-wait sketches after this list). In terms of waits, I generally try to stick to explicit waits, which are tied to a particular web element: you’re waiting until that element is clickable, until it’s visible, until it’s displayed, and you can wait for up to a certain amount of time, say ten seconds. I think that’s better than using an implicit wait, which just waits for a set amount of time rather than for an event to happen or an element to be located.
  • I try, especially at the UI level, to stick to the big user journeys. For example, on my last project I was working on a loan application. The first and most important step was that you could submit a rate check and get rates back, so that was one user journey. The next part was that you could actually submit the application and have it go into review, and that was another user journey. Generally, I want those big-picture tests at the top, at the UI level; but as much as possible I try to push more tests down to the integration level, which typically doesn’t go through the UI, with the most validation happening at the unit test level. That’s very specific: thinking about what each function should return, testing the code and seeing that it works as expected. I try to push things out of the UI as much as possible.
  • I am currently on a project where we are not running our automation in continuous integration, for various reasons, and I would definitely recommend, whenever possible, running your tests in CI, and running them often. Make sure you think about how to run your tests from the very beginning; otherwise you might end up in a situation where you’re not running your tests in CI and you’re not getting that really crucial feedback you need by being able to run your tests on a schedule. That would be one piece of advice that I have. The biggest point would be: run your tests often. You spend a lot of time writing these tests and making sure they run well, so get the most value out of them by keeping them up to date, and make sure that safety net is always out there to catch issues.
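
As an illustration of the driver swapping Meaghan describes, here is a minimal sketch of selecting a WebDriver implementation by name with Selenium’s Java bindings. It is not code from the episode; the DriverFactory class and the browser names it accepts are assumptions made for the example.

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.ie.InternetExplorerDriver;
    import org.openqa.selenium.safari.SafariDriver;

    // Hypothetical factory: the suite asks for a WebDriver by name, so the same
    // tests can run against Chrome, Firefox, IE, or Safari without other changes.
    public class DriverFactory {

        public static WebDriver create(String browser) {
            switch (browser.toLowerCase()) {
                case "firefox":
                    return new FirefoxDriver();
                case "ie":
                    return new InternetExplorerDriver();
                case "safari":
                    return new SafariDriver();
                case "chrome":
                default:
                    return new ChromeDriver();
            }
        }
    }

A test would then create its driver with something like DriverFactory.create(System.getProperty("browser", "chrome")), so the browser is chosen per CI job rather than hard-coded.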
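
For running those same tests against IE without a local Windows machine, a common pattern with Sauce Labs (consistent with what Meaghan describes, though not shown in the episode) is to point Selenium’s RemoteWebDriver at the Sauce Labs hub. The hub URL, credentials, and capability values below are placeholders; check the current Sauce Labs documentation for the exact form your account and data center need.

    import java.net.URL;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class SauceLabsExample {

        public static void main(String[] args) throws Exception {
            // Placeholder credentials and endpoint; the real hub URL depends on
            // your Sauce Labs account and data center.
            String hubUrl = "https://YOUR_USERNAME:YOUR_ACCESS_KEY@ondemand.saucelabs.com/wd/hub";

            // Ask for a specific browser/OS combination hosted by Sauce Labs.
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("browserName", "internet explorer");
            caps.setCapability("platform", "Windows 10");
            caps.setCapability("version", "11.0");

            // The test code itself is unchanged: it simply talks to a remote browser.
            WebDriver driver = new RemoteWebDriver(new URL(hubUrl), caps);
            driver.get("https://example.com");
            System.out.println(driver.getTitle());
            driver.quit();
        }
    }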
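
Finally, a small sketch of the explicit-wait style Meaghan prefers, using Selenium’s WebDriverWait and ExpectedConditions (Selenium 4 Java bindings; the locator and the ten-second timeout are just illustrative).

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class WaitExample {

        // Explicit wait: block until one specific element is clickable, for at
        // most ten seconds, then interact with it. The wait ends as soon as the
        // condition is met. The "submit" id is a made-up locator for the example.
        static void submitWithExplicitWait(WebDriver driver) {
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            WebElement submit = wait.until(
                    ExpectedConditions.elementToBeClickable(By.id("submit")));
            submit.click();
        }

        // By contrast, an implicit wait is a blanket timeout applied to every
        // element lookup, with no tie to a particular element or event:
        //   driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(10));
    }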

Resources

Connect with Meaghan Lewis

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.


Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
