Automation Testing

How to Make Your Selenium Tests Cross-Browser Ready

By Test Guild

Have you ever written an automation suite against one particular browser and just assumed that if it ran without issues there, it would run fine against any other browser? No big deal, right? Wrong!

It’s just one of the reasons I was so happy to have Meaghan Lewis on Test Talks to chat about her Selenium Conference presentation, Making Cross-Browser Tests Beautiful. She also discusses some of the common pitfalls people run into, as well as fixes for them.

Should You Test Your Application Against All Browsers?

When I asked Meaghan whether she thought automation engineers should run their entire test suite across browsers, she said that, like most things, it really depends on your situation.

The most important thing for her, and why she wanted to do cross-browser testing in the first place, was that her team was getting lots of customer complaints: a feature wasn’t working or something didn’t look right, for instance, and she didn’t want that to happen anymore.

Her team came to the conclusion that they wanted to have a good safety net to be able to say, “Yes, we’ve tested functionality across all browsers, and yes, it does work,” versus getting complaints when they couldn’t give a definitive answer.

In her opinion, that’s why it’s important to run tests across all different browsers; or at least, if they’re not automated, to perform some manual checks to ensure that this core functionality is working.

Why Do Browsers Behave Differently?

My team has a requirement to test against older versions of IE, and it’s frustrating because the tests may run 100% against Chrome but are a disaster against IE.

Meaghan has seen the same thing; she often finds that a type of functionality available in one driver isn’t available in another.

For example, IE driver doesn’t have the same feature set that Chrome driver has. When using native events, such as clicking certain elements, IE doesn’t interact well with certain option or select tags, which are common when you’re working with dropdowns. Chrome might support those native events, but in IE you might have to find a different way to interact with the dropdown.
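
To make that concrete, here is a minimal sketch of the kind of fallback Meaghan describes, written with WebDriverJS (the selenium-webdriver npm package). The helper name, selectors, and event-dispatch workaround are illustrative assumptions, not her exact code.

```javascript
// Hypothetical helper: click an <option> natively where the driver
// supports it (e.g., Chrome), and fall back to JavaScript in drivers
// such as IE that mishandle native events on <option>/<select> tags.
const { By } = require('selenium-webdriver');

async function chooseDropdownOption(driver) {
  const option = await driver.findElement(
    By.css('select#shipping-method option[value="express"]') // placeholder selector
  );
  try {
    await option.click(); // the native-event path
  } catch (err) {
    // Fallback: select via script and fire a change event so the page
    // reacts as it would to a real click. createEvent/initEvent is used
    // because older IE lacks the Event constructor.
    await driver.executeScript(
      "arguments[0].selected = true;" +
      "var evt = document.createEvent('HTMLEvents');" +
      "evt.initEvent('change', true, false);" +
      "arguments[0].parentNode.dispatchEvent(evt);",
      option
    );
  }
}
```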

All these drivers are open source, so you have the option to go in and play with them, learn about their differences, and maybe even contribute to making them more similar.

Another excellent example (and something I came across in my own application) was when I was running tests in Safari. Safari driver doesn’t allow you to upload files, so if you have an application in which you need to upload files with automation, you have to think about how to work around that. There are slight differences that you sometimes stumble across without even knowing it.
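
A minimal sketch of how you might guard against that Safari limitation follows; the browser check and the file-input selector are assumptions for illustration.

```javascript
// Send the file path straight to the <input type="file"> element, which
// avoids the native OS dialog WebDriver cannot drive; skip the step in
// Safari, whose driver may not support file uploads at all.
const { By } = require('selenium-webdriver');

async function uploadFile(driver, filePath) {
  const caps = await driver.getCapabilities();
  if (caps.getBrowserName() === 'safari') {
    console.warn('Skipping upload: not supported by Safari driver');
    return; // cover this path manually, or on another browser
  }
  const input = await driver.findElement(By.css('input[type="file"]')); // placeholder selector
  await input.sendKeys(filePath);
}
```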

How to Design your Selenium Tests to Work Across Browsers

Something Meaghan has found generally works best is to use a CSS selector that is the best representation of that element.

Meaghan believes you can always use IDs, names, and classes, but something she has tried to stick to is CSS selectors. She discovered this when she was effectively forced into using CSS selectors while transitioning to a JavaScript testing framework.

For example, on her last project she was using Nightwatch, where CSS selectors are the supported locator strategy, and she had really great results with it.

Most Selenium engineers run into issues with radio button, dropdown, and checkbox field types, but Meaghan has found that using a CSS selector that sits at the center of the locator works great.

So in general (see the sketch below), try to:

  • Use the same locator for all browsers
  • Stick to CSS whenever possible
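
Here is a minimal sketch of what that looks like in practice with WebDriverJS: one CSS-only locator map shared verbatim by every browser run. The app URL, selectors, and flow are hypothetical.

```javascript
const { Builder, By } = require('selenium-webdriver');

// One set of CSS locators, reused for every browser; the dropdown entry
// is included to show the "problem element" types covered by CSS too.
const locators = {
  plan: By.css('select[name="plan"]'),                   // dropdown
  terms: By.css('input[type="checkbox"][name="terms"]'), // checkbox
  submit: By.css('button[type="submit"]'),
};

async function runSignupCheck(browserName) {
  const driver = await new Builder().forBrowser(browserName).build();
  try {
    await driver.get('https://app.example.com/signup'); // placeholder URL
    await driver.findElement(locators.terms).click();
    await driver.findElement(locators.submit).click();
  } finally {
    await driver.quit();
  }
}

// The same test, the same locators, three drivers.
(async () => {
  for (const name of ['chrome', 'firefox', 'internet explorer']) {
    await runSignupCheck(name);
  }
})();
```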

Timing Issues

Once again, I’ll refer to my team’s requirement to run our tests against IE and Chrome. One of the sprint teams recently told me their tests will only work against IE – not Chrome – which is the opposite of most situations.

When I asked why, they didn’t know.

After I looked into it, it turned out that they weren’t using any waits, and because fields were loading faster in Chrome, their tests were failing. To avoid this in your own tests, always take into consideration that some drivers are faster than others.

When I mentioned this to Meaghan, she said she had also experienced this before, and that IE drivers are just really slow.

In terms of waits, she generally tries to stick to explicit waits, which are confined to a particular web element. For example, you wait until the element is clickable, visible, or displayed, and tell WebDriver to wait up to a set amount of time.

I agree with Meaghan’s assertion that it’s better to use an explicit rather than an implicit wait, as the latter waits for a certain amount of time, rather than for an event to happen or an element to be located.
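
In WebDriverJS, an explicit wait looks roughly like the following; the selector and the ten-second ceiling are illustrative choices, not prescribed values.

```javascript
const { By, until } = require('selenium-webdriver');

async function clickSaveWhenReady(driver) {
  // Wait up to 10s for the element to appear in the DOM...
  const button = await driver.wait(
    until.elementLocated(By.css('button.save')), // placeholder selector
    10000
  );
  // ...then for it to become visible before clicking. This absorbs the
  // speed difference between fast drivers (Chrome) and slow ones (IE).
  await driver.wait(until.elementIsVisible(button), 10000);
  await button.click();
}
```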

Strategy for Testing Against All Browsers

Meaghan’s team tries to stick to a core set of tests that they run against all browsers at least once a day.

Her team realizes it’s something they don’t need to do all the time, so as a general rule they maintain varying levels of test suites that run at different cadences.

For example, they have a small smoke test suite that runs after each commit to the project, and those tests give her team a high-level overview that the application is working as they expect. They also run a full regression test once a night.
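
One simple way to support that kind of cadence is to let CI choose the browser and suite per run. A minimal sketch, where the BROWSER and SUITE environment variables are a hypothetical convention rather than a standard:

```javascript
const { Builder } = require('selenium-webdriver');

// Hypothetical convention: per-commit CI jobs run `SUITE=smoke BROWSER=chrome`,
// while the nightly job loops the full matrix with SUITE=regression and
// BROWSER set to each supported browser in turn.
const suite = process.env.SUITE || 'smoke';
const browser = process.env.BROWSER || 'chrome';

async function buildDriver() {
  // Every test builds its driver here, so one env var switches browsers.
  return new Builder().forBrowser(browser).build();
}

module.exports = { suite, buildDriver };
```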

One Piece of Actionable Advice 

Meaghan’s one piece of actionable advice, something you can put into place right away to create reliable automated tests against multiple browsers, is this: whenever possible, run your tests in CI, and run them often.

“Whenever possible, run your tests in CI, and run them often.” ~ @iammeaghanlewis

Also be sure to think about how you will run your tests from the very beginning; otherwise you might end up in a situation where you’re not running your tests in CI and not getting the crucial, frequent feedback that comes from running your tests on a schedule.

The most important point is: run your tests often.

You spend a lot of time writing those tests and making sure they run well, so you want to ensure you’re getting the most value out of them by keeping them up-to-date, and seeing to it that you’re keeping that safety net in place to catch any issues that might arise.

More Selenium Cross-Browser Testing Awesomeness

Listen to my full Test Talks interview with Meaghan Lewis for more Selenium cross-browser testing awesomeness:

Joe Colantonio: Hey, Meaghan. Welcome to Test Talks.

Meaghan Lewis: Hi, Joe. Thank you.

Joe: It’s awesome to have you on the show. Today I’d like to talk about some of your experience with Test Automation, as well as your upcoming presentation “Making Cross-Browser Tests Beautiful” for the Selenium conference. I think that’s happening in the next few weeks in London. Before we get into it, though, could you tell us a little more about yourself?

Meaghan: Sure! I’m a Quality Engineer based out of San Francisco, California. I have been a QA now for the past five years or so. I started my career working as a consultant for the company ThoughtWorks. I had a lot of great experiences with different companies, built up some really good automation experience, both for mobile and desktop applications, and for the past couple years, I have been working for startups in San Francisco. I’m currently working at a company called Lever, and helping to build out the QA team here from the ground up.

Joe: I’ve heard a lot about ThoughtWorks. I think that’s where Selenium originated, and there are some new applications coming out of there for software development, like Gauge. What was it like to work at ThoughtWorks?

Meaghan: It was such a great experience for me. I was there for a little over three years. During the time I was there, I worked on over eight projects, ranging from a couple of months to a year at times. It was great to be able to go to a project and maybe learn a different programming language, or a testing tool. ThoughtWorks also builds their own in-house tools. You mentioned Gauge, that’s one of them; there’s Go, which is a CI tool, and there’s Twist, which is another testing framework. It was a combination of the in-house tools and the consulting aspects, but they were really great.

Joe: Very cool. I know you were also an intern at IMS, some Logic companies (JP Morgan). What was your experience like working for a larger company versus a smaller startup?

Meaghan: Quite different! (Laughing) Quite different. I think the reason I wanted to go to a startup was that there would definitely be a lot more responsibility, a lot more room to help me grow my career, and a chance to get really good hands-on experience with automation, and even with setting up infrastructure, because there might not be anyone else to do that. I’m used to integrating my tests with a continuous integration server, and having to figure out how to do that. I found it to be a lot more versatile, whereas with a larger company, even with ThoughtWorks, the processes are so established already that there’s still room to make an impact, but not quite as much as working at a startup, where everything is very fast-paced and you get to make all the decisions.

Joe: Awesome. I’m a little jealous. I’ve only worked at large, large companies, where I’m only a little cog, so I always fantasize, “I’m gonna go to a startup, and move to California…” (Meaghan laughs). It sounds like you got heavily into automation, like you said; you started focusing on mobile testing, and I assume this is where your presentation came from: “Making Your Cross-Browser Tests Beautiful”. Can you just give us a high-level overview of what the session’s going to be about at the Selenium conference?

Meaghan: Sure. This session is about my experience at the first startup I worked at, and how the decision came about that we needed cross-browser tests. I had built an automation suite running in Chrome, and that was really great at first. The team was really happy, the tests were catching issues, but customers were complaining that a certain feature didn’t work in Internet Explorer – users weren’t having the same experience across all browsers. That prompted me to think about a solution, and that’s where cross-browser testing came in. I already had this automation running for Chrome, so I figured it would be really good and easy to have these same tests run for Internet Explorer, for Firefox, for Safari, and I started building up these tests. In doing that, I found it wasn’t as easy as I assumed, just to be able to switch out a driver – say, instead of using a Chrome driver, we’re going to use a Firefox driver – there were some issues along the way. I’m going to talk about some of the pitfalls people might run into that are extremely common, and solutions for how to get over them and end up with a really robust set of cross-browser tests.

Joe: Awesome. This is one of the reasons I’m so excited to get you on the show. My team is actually struggling with this, and I’m curious to get your opinion. Our application only officially supports IE.

Meaghan: Oh, wow!

Joe: Selenium against IE is awful, but the application is really an Angular JS application; there’s no reason that we can’t run against another browser. The only issue is, we have an integration with a thick client application, so we need an ActiveX control for those types of tests, and that’s why we only support IE. What I’ve told the teams is, “Look, if it’s not browser-specific, we’re just testing a flow, a functionality; we’re looking for a doctor, we get a doctor back. It shouldn’t matter what browser we use it in, so let’s use Chrome; it’s more stable, it’s more reliable.” In your experience, why is it that these browsers are so different? Is that approach an issue? Or do you think people should be testing against all browsers on all platforms?

Meaghan: I’ll start with the last part of the question – do I think people should be testing on all browsers? I guess it really depends. (Laughs) The most important thing for me, and why I wanted to do cross-browser testing in the first place (getting lots of customer complaints of “this feature isn’t working,” or “this doesn’t look right”) – I didn’t want to have that happen anymore. I wanted to have a good safety net to be able to say, “Yes, I have tested functionality across all browsers, and yes, it does work,” versus getting complaints when I can’t answer for sure, when I don’t have the confidence to know whether things are actually functioning or appearing correctly. In my opinion, that’s why it’s important to run your tests across all different browsers; or at least, if they’re not automated, to do some manual checks to make sure that this core functionality is working. As far as why they behave differently, I think it’s just the way they were created. (Laughs) Even talking specifically about using Selenium, and using these different browser drivers, they just function a little bit differently. You may find that some kind of functionality that’s available in one driver isn’t available in another. For example, IE driver just doesn’t have all the same feature set that Chrome driver might have. So, for example, using some native events – like clicking certain elements – IE just doesn’t interact well with certain option tags or select tags, which are typically common when you’re interacting with dropdowns. You might have to find a different way to interact with the dropdown; Chrome might support these native events, but you might have a little bit of difficulty doing that in IE. All of these drivers are open source, so you have the option to go in and play with them and learn about their differences, and maybe even contribute to make them more similar. There are just some things that are a little bit tweaky. Another really good example, and something that I came across in my application, was when I was running tests in Safari. Safari driver just doesn’t allow you to upload files, so if you have an application where you need to upload files with automation, you have to think about how you can get around doing that. There are slight differences that you sometimes stumble across, and might not even know. (Laughs)

Joe: Awesome, great advice. You mentioned that you found issues where your customers have complained that something doesn’t look right to them, and that’s actually something that burned us. Someone was using IE9 and said, “Hey, this doesn’t look right,” but even our tests wouldn’t have been able to find the issue with that browser, because it looks different. How do you handle that? I know with manual testing we have exploratory testing (all our exploratory testing is done in IE), but do you use any specific tools, like Applitools, or any visual validation testing you recommend?

Meaghan: I personally don’t do any type of visual testing, but I think something like that would definitely come in handy and save you a lot of time from constantly doing manual verification of those visual differences. The tool I’ve used most commonly, especially as I’m working at companies that only have MacBooks and I don’t have access to IE at all, is Sauce Labs; I’ve really come to love it. Sauce Labs gives you lots of great options for using all these different platforms, using browsers like IE, and you have all the different versions of IE at hand, I think even going back to IE6 – maybe you have users using IE6! (Laughs) I think that has been a great way for me to manually test features across browsers and platforms.
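
If you have never pointed Selenium at a service like this, a minimal WebDriverJS sketch looks something like the following; the endpoint, credentials, and capability values are placeholders rather than exact Sauce Labs settings.

```javascript
const { Builder } = require('selenium-webdriver');

// Build a remote session on a grid instead of a local driver, which is
// how you reach browsers (old IE, Safari) you do not have on your machine.
async function buildRemoteDriver() {
  return new Builder()
    .usingServer('https://USERNAME:ACCESS_KEY@ondemand.saucelabs.com/wd/hub') // placeholder credentials
    .withCapabilities({
      browserName: 'internet explorer',
      version: '9',           // any version the grid offers
      platform: 'Windows 7',  // a platform you lack locally
    })
    .build();
}
```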

Joe: Awesome, I love that answer! Sauce Labs is the exclusive sponsor of Test Talks, so they’re going to love that. That was totally unscripted, so thank you.

Meaghan: It’s great. (Laughs)

Joe: Awesome. Do you have any tips on choosing the best locators that will work against all browsers?

Meaghan: Sure. Something that I’ve found generally works the best is to use some kind of CSS selector that will be the best representation of that element. I think you can always use IDs, names, classes; but something I’ve tried to stick to recently is using CSS selectors. I think in a way I was forced into using CSS selectors as I transitioned to a JavaScript testing framework. In my last project, I was using Nightwatch, where CSS selectors are the supported locators, and I found really great results with that. If there are just a couple of problem elements – radio buttons, the dropdowns I mentioned before, check boxes – usually, if I find a CSS selector that’s at the center of the locator, those work great.

Joe: Awesome. Besides locators, I know a lot of people struggle with synchronization issues. Developers will test against Chrome, and everything runs really fast against Chrome when we run a regression test, but then you go against IE and all of a sudden, it’s like “Why is this failing? It was working!” But it’s because IE, for some reason, is slower. Any ideas or suggestions on how we can do better, explicit or implicit waits, anything like that?

Meaghan: Oh, yeah. I’ve definitely experienced this before, and IE drivers are just really slow. I mentioned I use Sauce Labs, and with those virtual machines, it makes the experience even slower. In terms of waits, I generally try to stick to using explicit waits, and those are confined to a particular web element. You’re waiting until this element is clickable, until it’s visible, until it’s displayed, and you can wait for a certain amount of time, so maybe up to ten seconds. I think it’s better than using an implicit wait, which just waits for a certain amount of time, rather than for an event to happen or an element to be located.

Joe: Awesome. I’m just trying to picture your workflow. When you run your regression test, it sounds like you point to a service like Sauce Labs – do you run your whole regression against all the browsers, or do you have a strategy in place where you just run a core set of tests against all browsers?

Meaghan: We try to stick to a core set of tests that we run against all browsers, and generally we do that once a day. We realize that’s something we don’t need to do all the time, but we have varying levels. We have smoke tests, for example, that run after each commit to the project, and those give us a really high level overview that the application is working as we expect. We also run a regression once a night.

Joe: Awesome. I love that idea of quality gates, almost, and the cadence it creates in the environment.

Meaghan: Right, right. It is great. There are so many options for how you can run your tests, and that’s how I’ve found it to be valuable: run certain fast and valuable tests often, and just stick to running other things once a night, or once a day, or whatever the cadence is.

Joe: Cool. What’s your approach to automation? Do you try to automate everything through the UI, or do you try to encourage more unit API level tests? What is your test mix like?

Meaghan: That’s a great question. Are you familiar with the test pyramid?

Joe: Yes.

Meaghan: Okay. That is something that is cemented in my head, and I will always have that idea going forward, but basically I try, especially at the UI level, to just stick to the big user journeys. For example, on my last project, I was working on a loan application. The first and most important step was that you can submit a rate check and get rate checks back, so that was one user journey. The next part was that you could actually submit the application and the application would go into review, and that was another user journey. Generally, I want those big-picture tests at the top of the pyramid; but, as much as possible, I try to push more tests down to the integration level, which typically doesn’t go through the UI, with the most validation happening at the unit test level. That’s very specific thinking about what each function should return, or testing the code and seeing that the code works as expected. I try to push things out of the UI as much as possible.

Joe: I’m curious to know, also, how your life as an automation engineer works within your particular company now; does everyone do automation, or are you the only person doing automation? Do you recommend a certain approach that agile teams may take towards automation to be successful with it?

Meaghan: That’s another great question. Something that I’m doing at my current company is my QA team is pretty small – we have a QA team of two, for fifteen developers – and something that has been working really great for us is to pair on the happy path scenarios with developers as we’re working on a project. We just make sure that, over each quarter, we’re pairing at least once with every developer. I think that’s really great for QA, and that’s great for developers, as well. Sometimes, there’s just too much to do, we can’t possibly be writing all the tests, so it’s really important for my QA team to make sure that we are working with these developers, and that they are empowered to write these Selenium tests as well. So far that’s been working really great for us. I think it’s also good, just in case – for example, Selenium conference is next month, so I will be out, and I think my QA counterpart will be out on vacation – it’s really great to know that tests are going to be written, even though we’ll be away. Just empowering the whole team to be able to do this is really working out.

Joe: Awesome, I think that’s a great approach. You mentioned earlier that you were using Nightwatch.js in your previous position; what makes you select a certain tool? Is it based on the technology stack your developers are using, so that if you did leave, like you mentioned, your developers would be able to use the same types of tools and technologies they’re familiar with?

Meaghan: Yes, that definitely plays a huge part in the decision. On my last project, I was working in Angular and using Node for the backend, and I think there were really only a handful of options at that point. It came down to Protractor, which is heavily tied to Angular, and Nightwatch, which is written in Node and would be very relatable for the developers as well. We just made a big pro-and-con list and went from there to decide which framework to use. Nightwatch seemed really cool, so we went with it, and that way we were able to write tests directly in the same project, in the same repository, that all the other code goes into. That’s something else I think is really crucial: if you want your whole team to be able to write and run these tests easily, pick a framework that’s at least in the same language and that is going to be relatable to your team.
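
For readers who haven’t seen Nightwatch, a minimal sketch of a test like the rate-check journey Meaghan mentions might look like this; the URL and selectors are hypothetical.

```javascript
// Nightwatch test file; it lives in the same repository as the app code.
module.exports = {
  'user can submit a rate check': function (browser) {
    browser
      .url('https://app.example.com/rate-check')        // placeholder URL
      .waitForElementVisible('form.rate-check', 10000)  // explicit wait
      .setValue('input[name="loan-amount"]', '10000')
      .click('button[type="submit"]')
      .assert.visible('.rate-results')
      .end();
  },
};
```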

Joe: In your perfect automation world, what is your favorite automation technology that you love using, or language?

Meaghan: That’s a really good question! I’ll have to think about that! (Laughs) At least for right now, I’m a really big fan of WebDriverJS. I’ve been working in JavaScript shops for the past few years, so that’s something I really like right now. I think WebDriverJS is really cool, and I’ve found it really nice to work with.

Joe: For some reason, I’m hearing more people talk about JavaScript, JavaScript, JavaScript frameworks everywhere. It probably makes sense if people don’t have that experience, that they might want to check out Nightwatch or WebDriverJS. Is there one thing you see over and over again, in your experience, that most test automation engineers are doing wrong, or that you think most companies just don’t understand about test automation?

Meaghan: Kind of. This relates to a question that you asked a few minutes ago, about how much needs to be automated at the UI level. That’s a problem I’ve seen a lot, and it continues to happen – people try to automate everything at the UI level, and I don’t think that’s always appropriate. These tests are the most costly, in terms of the time they take to run and the time it takes to get feedback to your team. Without keeping the idea of the test pyramid in mind, it often inverts and you end up with the ice cream cone pattern, where you have all these UI tests at the top level, and not a lot of other integration or unit tests. I think that can have some pretty bad results. I’ve been on projects before where I’ve seen the UI tests take hours and hours and hours to run, and that isn’t doing anyone any good at that point. Especially if you’re having failures in these super long test runs, I think over a certain amount of time people get desensitized to the value of the UI tests in general. I always try to keep this idea of having just the most important user journeys automated at the UI level, and avoid the pattern of automating everything just because you can.

Joe: I love, love that advice! I definitely recommend that to anyone listening as the way to go. Alright, Meaghan, are there any books or resources you would recommend to someone wanting to learn more about automation?

Meaghan: Probably one of my favorite books, one that I’ve been reading recently and always find myself coming back to, is Fifty Quick Ideas to Improve Your Tests [by Gojko Adzic]. That is such a great book, and they are just fifty quick ideas. I think the book is a pretty easy read, but I find a lot of value in keeping these things in mind, especially as I join a new project and I’m thinking about where to start. “What am I going to automate? How is that going to look? What is my interaction with my team going to be like? How do I want to collaborate with people? How do I want to make sure that I am building a good foundation to have these tests be long-lasting and continue to be valuable over time?” I’ve learned so much great information from that book, I always keep it handy.

Joe: Awesome. I love that resource, I love the book. I love the way that book is written, too – like you said, it’s small, digestible pieces, and it’s really readable, and fun. I definitely agree with that. Meaghan, before you go, is there one piece of actionable advice that you would give to someone to improve their automation efforts? And let us know the best way to find or contact you.

Meaghan: There’s one thing that comes to mind. I am currently on a project where we are not running our automation in continuous integration, for various reasons, and I would definitely recommend, whenever possible, running your tests in CI, and running them often. Make sure that you think about how to run your tests from the very beginning; otherwise you might be in a situation where you’re not running your tests in CI, and you’re not getting that really crucial feedback you need to get often by being able to run your tests on a schedule. That would be one piece of advice that I have. The biggest point would be: run your tests often. You spend a lot of time writing these tests and making sure that they run well, so you want to make sure you’re getting the most value out of them by keeping them up-to-date, and by making sure you constantly have that safety net in place to catch issues.

  1. Great interview as always, the absolute right set of questions and apt responses. I agree with the recommendations and understand why one needs to do it that way. I suppose most users end up creating their own libraries based on Selenium for managing the differences, to make the tests easier to develop. As a test tool developer leveraging these strengths of Selenium (IBM RTW Web UI Tester), we decided to codify the differences encountered in cross-browser testing into the product itself. As a result, we see fewer cross-browser testing problems from our customers. For features that might be completely missing, like native dialogs that pop up from the browser (and differently in each browser), one has to resort to code that can interact at the native level. Java Robot APIs, or something more accomplished like IBM Rational Functional Tester inserts, prove very useful. Coupled with their integration with CI tools, I agree it’s a great setup for testing.

    I enjoy your interviews, Joe. They provide excellent insight from daily users.
    Ashish

  2. Automated cross-browser testing is hard, especially if you have to support multiple browsers. The instability of the tests can increase at an exponential rate. I always try to consider whether more effort should be spent providing better test coverage on one browser, or whether we should spend more time covering all the browsers but less functionality. Maybe one day, when Selenium is a W3C standard, we won’t have to settle?

