Automation Testing

Take the Guessing Out of Your Mobile Testing Coverage

By Test Guild

Having issues with your mobile testing coverage? Not sure what combination of devices and operating systems you should test your mobile apps against?

Do you waste your time testing configurations that only a small minority of your users actually use or care about?

Figuring out the must-test configurations for your apps can be hard – but things just got easier with Eran Kinsbruner’s mobile testing digital test coverage index reports and online test coverage tool.

I learned about these in my latest episode of TestTalks, in which we discussed mobile testing. Here are some key points from our conversation to help make your mobile testing efforts as painless and effective as possible.

Which Mobile Devices/OS Should You Test?

There are so many devices we need to test against: dozens of different OSs, iPhones, other mobile devices and patches. How does a company nowadays put together a comprehensive test strategy for delivering awesome experiences for its applications on any device?

Eran hears this question all the time, and one tip he recommends for finding the right testing mix is to use real-user monitoring solutions.


You Don’t Know What You Don’t Measure

There are a few ways you can monitor how your users are actually using your application. One way to achieve this type of insight is to simply embed an SDK into your mobile application. This allows your apps in the wild to report back to a dashboard, which you can analyze on an ongoing basis to see which devices and operating systems are hitting your websites or servers on a daily basis — and from which geographies.
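If your analytics platform can export that usage data, even a small script can turn it into a first-cut device matrix. Below is a minimal sketch in Python; the CSV file name and its columns (device, os_version) are placeholders for whatever your analytics tool actually exports.

```python
# A minimal sketch: turn raw usage analytics into a prioritized device matrix.
# The CSV columns (device, os_version) are hypothetical -- substitute whatever
# fields your analytics SDK or dashboard actually exports.
import csv
from collections import Counter

def top_configurations(analytics_csv, top_n=10):
    """Count how often each (device, OS) pair appears in exported usage data."""
    counts = Counter()
    with open(analytics_csv, newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["device"], row["os_version"])] += 1
    total = sum(counts.values())
    # Return the most-used configurations with their share of overall traffic.
    return [(device, os, round(100 * n / total, 1))
            for (device, os), n in counts.most_common(top_n)]

if __name__ == "__main__":
    for device, os, share in top_configurations("usage_export.csv"):
        print(f"{device} / {os}: {share}% of sessions")
```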

Based on this data you can make meaningful decisions about your testing strategy. Another way is to use a tool like the Digital Test Coverage Optimizer.

Digital Test Coverage Optimizer

Besides analytics, Eran has also created a FREE tool called the Digital Test Coverage Optimizer. This tool allows you to enter information about your mobile usage, such as:

• Where is your target audience?
• What are your primary device types?
• What are your primary operating systems?

For example, let's say you want to support just the US and Canada. You'd plug those countries into the tool, select the form factor (smartphone or tablet) and the operating systems you'd like to support, and it would recommend the top 30 devices based on priority.

You don't need to take all 30 if you don't wish to, but you get a prioritized list based on different factors such as quality, usage and market adoption, and you can simply use it as a baseline.
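To make the idea of a prioritized list concrete, here is a toy Python sketch of weighted scoring across criteria like usage, quality risk and market adoption. The weights, scores and device entries are invented for illustration; the actual optimizer derives its rankings from market data.

```python
# A toy illustration of weighted device prioritization. All numbers below are
# invented for the example -- the real tool derives them from market research.
WEIGHTS = {"usage": 0.5, "quality_risk": 0.3, "market_adoption": 0.2}

candidates = [
    {"name": "Samsung S6 / Android 5.1.1", "usage": 0.9, "quality_risk": 0.8, "market_adoption": 0.7},
    {"name": "iPhone 6S / iOS 9.3",        "usage": 0.8, "quality_risk": 0.4, "market_adoption": 0.9},
    {"name": "LG G4 / Android 6.0",        "usage": 0.3, "quality_risk": 0.5, "market_adoption": 0.4},
]

def score(device):
    # Higher score = higher testing priority.
    return sum(WEIGHTS[k] * device[k] for k in WEIGHTS)

for rank, device in enumerate(sorted(candidates, key=score, reverse=True), start=1):
    print(f"{rank}. {device['name']} (score {score(device):.2f})")
```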

This custom report can also help you have some great conversations within your organization — whether it's with your marketing, product management or whatever team — to help you decide on the best mobile testing strategy to put in place.

You Should Test a Real User Experience

One thing is for sure: if you're trying to decide between using emulators or real devices for your testing, always go with real devices.

The reason is very clear. End users today are not exploring your application through emulators; they're using real devices in real network conditions, with different background applications and competing processes running (camera, GPS, and so forth).

If you want to test for a real user experience against real conditions, on the right flavors of the operating systems that Samsung, Apple and others are deploying on their devices, you need to have the real devices.

For example, most carriers in the US and around the world are not just taking Android out of the box; they're flavoring the OS to their needs. Verizon's Samsung S6 running Marshmallow, which was just deployed, will probably be different from what you'll get from Android Studio on an emulator.

So you ideally want to test on exactly what your end users are eventually going to use and experience.

Location Matters when Testing Mobile Apps

Location, location, location! It really does matter.

Whether you're using Sauce Labs or Perfecto, these services offer real devices in data centers in numerous locations around the world.

For instance, Perfecto has 13 different data centers, in locations from Brazil to Mexico to the US, as well as in Germany, the UK, India, China and Australia.

In each of those geographies, end users have different usage scenarios and use different network carriers. Different carriers and operators serve the UK and the US, with different 3G and 4G connections.

Remember — it does matter where your users are operating their devices to interact with your mobile apps.

Eran says he knows that the Vodafone UK flavor of Android 5 or 6 will be different from what you'll see from Verizon here in the US, because each carrier tweaks the OS it ships with its phones, and once it's tweaked, it's no longer exactly the same.

When creating a valid mobile testing strategy, you should always take location into consideration.

How to Create Robust Automation

We've covered monitoring and non-functional testing, but there's another key piece to mobile testing: how to create more robust tests.

I think we can all agree that at the end of the day, the amount of time we have to test our apps before they’re released is always shrinking.

For these test scenarios or test execution cycles to run on an ongoing basis, on every commit a developer makes, you need robustness in your test automation.

One way of achieving this robustness is to put more validations into your test automation scripts. There's no doubt that using object IDs, whether through XPath or other identifiers, is the best approach in mobile testing today.

However, you should also make sure you can visually verify the expected next screen before moving on to the next test. If you add some prerequisites between test scenarios and include validation points on a visual basis, you get two in one.

For example, to ensure that a login was successful, you perform a visual validation right after logging in. If you find it didn't succeed, you simply stop the execution and report the defect back to the engineers or developers.

You basically get two action items here: one, validation that your test passed, and two, assurance that you're not breaking or wasting the time of your overall CI or test automation flow. This can be achieved using both object IDs and something like Applitools or other visual techniques.

Using both approaches will definitely put robustness into your test execution and test scripts.
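As a concrete illustration, here is a hedged sketch of that two-in-one approach using the Appium Python client (2.x-style API). The capabilities, locators, baseline screenshot path and the naive hash comparison are all placeholders; in practice a visual testing tool such as Applitools would do the comparison.

```python
# A minimal sketch of combining object-ID lookups with a visual checkpoint.
# Capabilities, locators and the baseline path are placeholders; a dedicated
# visual tool would replace the crude hash comparison used here.
import hashlib
from appium import webdriver
from appium.webdriver.common.appiumby import AppiumBy

caps = {
    "platformName": "Android",
    "automationName": "UiAutomator2",
    "app": "/path/to/app-under-test.apk",   # placeholder
}
driver = webdriver.Remote("http://localhost:4723/wd/hub", desired_capabilities=caps)

def screen_matches_baseline(baseline_png):
    """Crude visual check: compare a hash of the current screenshot to a stored baseline."""
    current = hashlib.sha256(driver.get_screenshot_as_png()).hexdigest()
    with open(baseline_png, "rb") as f:
        baseline = hashlib.sha256(f.read()).hexdigest()
    return current == baseline

# Functional step: locate fields by ID/XPath and log in (locators are examples).
driver.find_element(AppiumBy.ACCESSIBILITY_ID, "username").send_keys("demo_user")
driver.find_element(AppiumBy.ACCESSIBILITY_ID, "password").send_keys("demo_pass")
driver.find_element(AppiumBy.XPATH, "//*[@text='Log in']").click()

# Visual validation before moving on: if the expected home screen is not shown,
# stop the run instead of letting later steps fail for confusing reasons.
assert screen_matches_baseline("baselines/home_screen.png"), "Login screen check failed, aborting run"
driver.quit()
```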

Discover More Mobile Testing Coverage Awesomeness


Here is the full transcript of my talk with Eran Kinsbruner:

Joe: Hey Eran. Welcome to Test Talks.

 

Eran: Hey, hi Joe. Thanks for inviting me. I'm thrilled to be on your show; hopefully this will be insightful content for your audience.

 

Joe: Awesome. Well, it's great to have you on the show. Before we get into it, though, I guess can you just tell us a little bit more about yourself?

 

Eran: Absolutely. I've been in this testing space, mostly mobile, for the last 17 years. Along the way I've been a CTO for mobile testing and a director of QA. I worked at Sun Microsystems back in the J2ME era, then moved to Symbian and other platforms that have since died off. For the last couple of years, I've been the mobile technical evangelist at Perfecto, working on thought leadership material and getting hands-on experience with mobile testing, both from a functional testing perspective and from a non-functional one: performance, user-condition testing, and so forth. Hopefully I can share some of those insights. This is a crazy, very dynamic and fragmented market in which most organizations today find it very hard to operate.

 

Joe: Actually, that's what I'd like to focus on with the first few questions: there are so many devices we need to run our applications against, so many OSs, so many iPhones, so many different mobile devices and patches we need to test against. How does a company nowadays put together a comprehensive test strategy for delivering awesome experiences for its applications on any device? I know back in the day it used to be just a website, but now there are so many variables we need to think about. Do you have any high-level best practices you think everyone should have as part of their test strategy?

 

Eran: Yes. I think this is the most commonly asked question, and it's the biggest challenge in today's market: coping with the variety of devices, operating systems, different parameters, screen sizes, and the different network conditions that are also part of this coverage mix. Over the last couple of months I've invested a lot of time and energy in developing a strategy for selecting, in a wise way, the devices and operating systems for your test strategy.

 

I publish an index report on a quarterly basis, and quite recently also an online responsive tool that gives this guidance. Basically, how it works is that it collects a lot of usage data from the market, coming from different web traffic and transactions, gives scores to the device and OS parameters, and based on that provides a recommendation.

 

So, for example, I know from market research that Android 5.1.1 Lollipop on the Samsung S6 has a long history of issues. Some of the issues are around battery drain; others are around reading from and writing to the SD card. So this would definitely be one of the combinations I would recommend testing on. And this is all captured in my tools and my overall methodology.

 

But to your question, I would also add that organizations that are struggling today should simply take analytics into account as well.

 

They either do it through a real-user monitoring solution, where they simply embed an SDK into the mobile application, which reports back to a dashboard they analyze on an ongoing basis to see which devices and operating systems are hitting their websites or servers on a daily basis, and from which geographies.

 

Based on this data, they make decisions. So that's another way of looking at the coverage conversation.
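For illustration, this is roughly the kind of event such an embedded SDK might report back; the endpoint and field names are invented, since real RUM SDK payloads vary by vendor.

```python
# A sketch of the kind of event an embedded monitoring SDK might send to a
# dashboard. The endpoint and field names are hypothetical.
import json
import urllib.request

event = {
    "device": "Samsung Galaxy S6",
    "os": "Android 6.0.1",
    "carrier": "Verizon",
    "country": "US",
    "screen": "checkout",
    "event": "session_start",
}

req = urllib.request.Request(
    "https://analytics.example.com/collect",     # hypothetical endpoint
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # commented out: the endpoint above does not exist
```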

 

Joe: The tool you mentioned, I believe it's called Digital Test Coverage Optimizer, is that correct?

 

Eran: Yes. This tool is a free online tool. It's responsive, so you don't need a desktop; you can use it from your tablet or your smartphone. You come with your own information; you know which geographies you would like to support. Let's say you want to support just the US and Canada. You plug those countries into the tool, you select the form factor, whether it's a smartphone or a tablet, and which operating systems you would like to support, and it will recommend the top 30 devices based on priority. You don't need to take all 30 if you don't want to, but you can definitely get a prioritized list based on different criteria around quality, usage, market adoption and so forth, and simply use this as a baseline. You don't need to take everything as is; you can take it to your counterparts and have a conversation within the organization, whether with marketing or product management, and make the decision based on this tool.

 

Joe: Once again, I believe it's a free tool, and I believe there are four steps: just like you said, you enter your information and it tells you the most common configurations you should be running against. I guess my question is this. You brought up something earlier, a metric where you knew Lollipop had a certain number of issues, and that's why it got a higher score and bubbled up to the top of the list of ones you should run against.

 

Along with metrics, do you actually tell people, before they even get to testing (I know you have a large user base, so you must have a lot of metrics), "Hey, Lollipop, here are the top five issues you should be developing for"? That way they can be a little proactive and don't have to wait until testing to find out, hey, this drains your battery, you may want to use a different mechanism to take care of that.

 

Eran: I was thinking about this direction, Joe, and I think you can find so many examples out there. You can find iOS 8.4.1 on an iPad 2, which also had some issues with WiFi connectivity. There are many combinations. These are also embedded in the tool, but what I also do, as a practitioner myself with a lot of experience in testing, is remember that each application has its own unique use cases and operates in different conditions. Maybe just knowing that Android 5.1.1 Lollipop on a Samsung S6 is problematic is not enough of a threshold for you to test on, because you have your own analytics and you see that 80% of your audience is coming from iOS on a different device.

 

I can give you a specific example: one of our clients, coming from Brazil, doesn't care about iOS. Ninety percent of his users come from Android, so if I were to give him an iOS example, it would mean nothing. So what I am recommending is, first, actually use tools like analytics. I would use your overall App Store ratings and reviews, and try to find, based on analytics and the user traffic coming from your end users, the most relevant combinations for your application, because maybe that buggy combination doesn't even show up in your analytics, and then it's going to be a false direction for you to take.

 

Joe: And the tool, once again, shows the top ten devices; I think it also gives the top 25 and top 30. So it breaks it down really nicely. But like I said, you can probably reverse-engineer it: you go to the tool, you find out the top devices it recommends testing against, and you make sure your developers focus on those particular ones where possible.

 

I guess at that point, what do you recommend? Should people test against a real device or an emulator? Do you think it matters? Or is that one of the values of using a service like Perfecto Mobile, where you're actually testing on an actual physical device?

 

Eran: It's a good question, Joe. From a testing perspective, at the end of the day, at least my answer (and by the way, if you go to [inaudible 00:08:11] and Forrester and other external vendors, they would say the same) is this. Perfecto, by the way, supported emulators and simulators until quite recently. Based on conversations we had with developers and testers, we found that the overall time spent testing on emulators was very minimal compared to testing on real devices. The reason is very clear. End users today are not exploring your application through emulators. They're using real devices in real network conditions, with different background applications and competing processes running, like the camera, GPS, and so forth.

 

It made no sense to keep increasing the investment there. There are tools out there; Genymotion is a large vendor in the market today providing access to simulators and emulators, and developers can also get emulators through their IDEs, from Xcode or Android Studio. But at the end of the day, if you want to test for real user experience against real conditions, on the right flavors of the operating systems that Samsung, Apple and others are deploying on their devices, you really need to have the real devices, based on the go/no-go decision you would like to make.

 

I can give you another specific example. Most of the carriers in the US, and in the world, are not just taking Android out of the box. They're flavoring it to their needs. The Verizon Samsung S6 running Marshmallow, which was just deployed, will probably be different from what you will get from Android Studio on an emulator. It will be a different version. So you want to test on what your end users are eventually going to use and experience.

 

Joe: That's a great point. I think a lot of people get so focused on the geography where they're actually located that they forget about other areas and other countries, and all their conditions.

 

When someone's testing for, like you said, real network conditions, is that emulated? Or if they use a service like Sauce Labs or yourselves, where they're running on a real device, do they get to choose where that device is located? Does that matter?

 

Eran: Yes, it matters a lot. Whether you use Sauce or Perfecto, they're using real devices in data centers in different locations around the world. For Perfecto, we have 13 different data centers, from Brazil to Mexico to the US east coast and west coast, Germany, the UK, India, China and Australia.

 

First of all, in all of these geographies the end users have different usage scenarios, and they are using different network carriers. You have different carriers and different operators operating in the UK and the US, a different 3G connection and a different 4G connection. It does matter where your users are operating and which devices are there, because, as I mentioned, I can assure you that the Vodafone UK flavor of Android 6 or 5 will be different from what you will see from Verizon here in the US. They tweak it, and once they tweak it, it's not exactly the same.

 

Joe: Another thing I'd like to bring up: I think a lot of times people get very focused on automated testing, but a lot of these services, like yours, also allow people to test these devices manually; it doesn't necessarily have to be an automated test. Is that correct?

 

Eran: Yeah, you can do both manual testing and automation. Unfortunately, one of the biggest challenges in today's mobile, or digital, space is the conversion of manual tests to automation. Perfecto provides both manual and automated testing; you start with manual testing. If you start correctly and design the tests correctly, you can probably start thinking about moving them to automation, whether through behavior-driven, data-driven or keyword-driven approaches; there are many practices. But at the end of the day, I think there are huge problems in today's test designs, in how you design your mobile application test plan, and this is key to sustaining the test automation going forward.

 

We see many organizations today starting with mobile testing manually; they say, "Okay, it's working," then they start automating. After the first cycle, or after the first version, they find that it stops working for them. Why? Bad practices. They are developing the tests in a disconnected manner.

 

Let me just explain what I mean. You have three different layers in the mobile testing space today: the application under test, the test itself, and the mobile device. Each mobile device has different supported capabilities, and you may have an application where you want to test a specific feature on two different devices, one that supports the feature and one that doesn't. Let's take the iPhone 6S as an example. The iPhone 6S comes with Force Touch, a unique gesture. If you are Facebook, and you want to test this specific capability of writing a post without logging into Facebook, just doing it from the home screen, you can do it with Force Touch, right? But this is an additional capability on top of the regular write-post flow in the Facebook UI. So this is one example where the manual test or the automated test is going to break, because you have different device capabilities and the test simply reaches a dead end. It cannot grow into an enhanced capability.

 

You can take this example to many, many other areas in mobile testing. So to make it work for you, both from a manual testing perspective and especially from an automation perspective, you need to design the tests, the application and the device capabilities as one from the beginning. Make sure that you are always connecting the dots between these three layers.
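One simple way to keep the test, the app and the device capabilities tied together is to declare what each test requires and filter the device list accordingly. A small illustrative sketch follows; the device entries and feature names are made up.

```python
# A sketch of capability-aware test routing: the test declares what it needs,
# and the runner only sends it to devices that support those features.
# Device entries and feature names are illustrative.
DEVICES = [
    {"name": "iPhone 6S", "features": {"force_touch", "touch_id"}},
    {"name": "iPhone 5S", "features": {"touch_id"}},
    {"name": "Nexus 5X",  "features": {"fingerprint"}},
]

def devices_supporting(required, devices=DEVICES):
    """Return only the devices whose capabilities cover the test's requirements."""
    return [d["name"] for d in devices if required <= d["features"]]

# The Force Touch quick-post test only runs where the gesture exists;
# everywhere else, the plain UI flow covers the same functionality.
print(devices_supporting({"force_touch"}))  # ['iPhone 6S']
print(devices_supporting(set()))            # all devices: the standard flow
```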

 

Joe: That is a great point for mobile testing and mobile automation. It's the same with regular automation: you should design for testability up front, because once you've delivered an application and it's already developed, it's so hard to bake that back in.

 

So how does someone do that, though? Like you said, there are so many different devices; how do you know which device handles which particular activity? Do you recommend separate tests for separate device groups, or one test that would somehow handle, with some sort of logic, all the different types of actions that could be performed regardless of what version it's running against?

 

Eran: Exactly, Joe. Exactly. I think you just nailed the point. But to make it work, I think you need to have an ongoing conversation, a handshake, between the testers and the developers. What do I mean? The developers will definitely put all the capabilities the application uses into its metadata: if you are talking about an Android application, there is the manifest file, which is inside the application itself; if it's an iOS application, it's the Info.plist application file.

 

So the developer knows, when he's developing the application, that he is actually required to declare which permissions the application needs and which features, such as Touch ID, Force Touch and other gestures, are required or supported by the application, so the information exists today. What is left for the tester, the test automation developer, to do is simply a dynamic scan of these two files, and then map the test cases to the capabilities, as you mentioned, by groups. You have different groups for different tests. By the way, this ties back to the coverage conversation we had at the beginning. If you have the index, it gives you the top 10, top 15 or top 25 devices. If you also group these devices by features and capabilities, and send the tests to these devices according to capability, you're connecting all the dots. You have very good coverage, based on capabilities, on supported features, and on the right devices and operating systems.
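Here is a minimal sketch of that scan in Python, assuming you have the decoded (source) AndroidManifest.xml and the Info.plist on hand; the manifest inside a built APK is binary-encoded and would need apktool or aapt first.

```python
# A minimal sketch of scanning the two files mentioned above for declared
# features. plistlib handles both XML and binary plists directly.
import plistlib
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def android_features(manifest_path):
    """Collect uses-feature and uses-permission declarations from a source manifest."""
    root = ET.parse(manifest_path).getroot()
    return sorted(
        el.attrib.get(ANDROID_NS + "name", "")
        for el in root.iter()
        if el.tag in ("uses-feature", "uses-permission")
    )

def ios_capabilities(info_plist_path):
    """UIRequiredDeviceCapabilities lists hardware/OS features the app needs."""
    with open(info_plist_path, "rb") as f:
        info = plistlib.load(f)
    return info.get("UIRequiredDeviceCapabilities", [])

# These declared capabilities can then drive which device group each test is sent to.
print(android_features("AndroidManifest.xml"))
print(ios_capabilities("Info.plist"))
```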

 

Joe: I also noticed you recently gave a presentation on mobile test automation and the need for continuous quality, and I think this might touch on it. One of your first points, I think, was why velocity requires a properly maintained function library. Is this how it ties in: if you have a properly maintained function library, will that help you get the different test coverage you need without being so tied to a specific implementation?

 

Eran: Exactly. Exactly. And as the mobile application or the mobile operating system changes, you want a function library that is disconnected from the test logic itself. So you would like to maintain, and I would actually use a different term here, an object repository. Objects tend to change. When you have an application and a new screen or a new feature arrives, the object elements of the application might change. You want to maintain an object repository that is disconnected, or separate, from the tests, and the objects are also tied to methods or function libraries. That way you can reuse them in different scripts and different test scenarios, so that if something changes, you just change it in the function library or the object repository and the tests continue to run.

 

So let's assume you have 100 test cases that leverage one function, one function library or one object repository, let's say just for the login feature. In that case, if you change just the method or the login object identifier and so forth, all the inheriting test scripts get this adjustment automatically, and that allows you to continuously support your test suite and run it continuously as part of continuous integration, and this is key for velocity.
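A bare-bones sketch of that separation might look like this in Python: locators live in one central repository, the login routine lives in one shared function, and every test reuses them. The locator values and driver API here are placeholders.

```python
# A sketch of the "object repository + function library" idea: locators in one
# place, the login function in one place, reused by every test. Locator values
# and the driver interface are placeholders.
LOGIN_LOCATORS = {
    "username": ("accessibility_id", "username"),
    "password": ("accessibility_id", "password"),
    "submit":   ("xpath", "//*[@text='Log in']"),
}

def login(driver, user, password, locators=LOGIN_LOCATORS):
    """Shared function library entry: perform a login using the central locators."""
    by, value = locators["username"]
    driver.find_element(by, value).send_keys(user)
    by, value = locators["password"]
    driver.find_element(by, value).send_keys(password)
    by, value = locators["submit"]
    driver.find_element(by, value).click()

# Any of the (hypothetical) 100 test scripts then just calls:
#   login(driver, "demo_user", "demo_pass")
# and automatically picks up locator changes made in this one module.
```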

 

Joe: I definitely agree. There's another point you bring up: it's such a fast-paced environment, and things are changing all the time. How does someone keep tabs on what's happening in the market so they can plan ahead for testing, maybe for a new release coming down the road or a new technology? Do you have any tips on how someone can stay continuously updated on the changing market?

 

Eran: Yes. In the index I address this question through a market calendar, which I maintain. There are some rules. People often come to me and say, "This mobile space is crazy, it's unpredictable," and when I hear that I automatically say, "Listen, it's crazy, but it's kind of predictable." Why? You know that in specific months every year, you have the same events. Every February or March, every year in Barcelona, you have Mobile World Congress. And at that event, every year for the last (I can't remember how many) years, you have the new Samsung S6 or S7 being launched, you have the LG series being launched and so forth. Every September and October you have the major iOS release, with new iPhone devices and iPads being released. Every October you have the next Android version being released.

 

So you have some market events which are kind of predictable. Nothing changes them. September 2016, more or less, is the iOS 10 release; I know it today, five or six months in advance. I already have a heads-up. I know that if I am about to launch my mobile application, let's say, two or three months from today, I might want to wait for June (June is just around the corner), because I'm going to see the iOS 10 beta, which is public, in just a few weeks. I can already start testing on that iOS beta and make sure that whenever I deploy my application, I have at least seen some snapshot of how it works on the next generation of Apple's OS. Same with Android N, Android's next release. We already have two different previews that Google has made available, and you can already run them on real devices. All the Google Nexus devices today can support the developer preview of Android N, which is going to launch in October.

 

So, to your question of how you can plan ahead to address these constant changes, this constant noise in the market: one of the ways is to map these different calendar market events. Some of them are device launches; some of them are operating system beta or GA launches. Make sure that your release cadence takes them into account. You might even want to delay a release targeted at the same month as the next iOS or Android release, just to give yourself a few weeks of validation and sanity, so you're well prepared to support the new GA. Otherwise, you deploy something, then the new iOS GA comes out and you see that your app is crashing or not supported, and the pain is going to be much greater for you.
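The market calendar idea can be as simple as a small data structure you check your release plan against. The dates below are rough placeholders rather than an authoritative calendar.

```python
# A small sketch of the "market calendar" idea: keep recurring platform events
# in data and flag any that land near a planned release. Dates are placeholders.
from datetime import date

MARKET_EVENTS = [
    ("Mobile World Congress (new Android flagships)", date(2016, 2, 22)),
    ("iOS major release + new iPhones",               date(2016, 9, 16)),
    ("Next Android version GA",                       date(2016, 10, 1)),
]

def risky_events(release_date, window_days=30):
    """Events within `window_days` of the planned release deserve extra validation time."""
    return [name for name, when in MARKET_EVENTS
            if abs((when - release_date).days) <= window_days]

print(risky_events(date(2016, 9, 30)))
# Both the iOS and Android launches fall inside the window: consider delaying,
# or budget a beta-validation pass before shipping.
```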

 

Joe: Absolutely, great advice. Like you said, there are events that happen almost seasonally, and development should be tied in to that. Management should really encourage developers and testers to plug into the different conferences that are going on, to stay on top of the news in the mobile sector.

 

Eran: Definitely. And also from the beta versions. They don't need to wait for the GA; they should get it already from the beta version.

 

Joe: Good point. Now, I know we've talked mainly about functional testing, but how does someone address performance testing with regard to mobile? In the old days we used to use LoadRunner, and we'd have an emulator where we could emulate the different bandwidths and network conditions we were running our tests against. So how does someone test performance in this type of environment?

 

Eran: That's a very complicated task to achieve, but it's doable. There are many tools today that help assure client-side performance. As you mentioned, in the past when you were testing web performance you had tools like LoadRunner, which is still available today (StormRunner is the new flavor), and you would simply mimic different transactions with different numbers of users at scale, virtual users if you like, and you would get your thresholds, your metrics and everything.

 

For mobile, it's quite different. Why? Because these mobile clients all come with a very constrained set of capabilities: screen size, battery, different competing applications which you cannot really control, and the network conditions themselves, which also impact the overall user experience. And then you have the server side, which is also something you need to monitor.

 

So how do you do it? The recommendation is to take one or two devices, let's say one iOS and one Android. Try to measure the timers, the user experience of a transaction. Let's take a login or an end-to-end purchase flow on a retail native application. You run it through an automation process using tools such as BlazeMeter or [Neotis?] or even LoadRunner. You capture the transaction and the pcap file, the network capture file, and you have the timers, how long it takes. Then you replicate it: because it was recorded on one or two real devices, the tools today, LoadRunner included, can replicate that single execution from one device across, let's say, 100,000 devices. So once you are able to replicate one mobile device 100,000 times using these kinds of tools, you only get the user experience through one mobile device, but you get it right: you get how long it's going to take to do a login in a real network environment while 100,000 other users are doing the same from their mobile devices. So it's doable.

 

It's not easy, but you can do it today with BlazeMeter and Perfecto, you can do it with HP LoadRunner, and some others. But again, the key is to record the first transaction from a real device, not an emulator. You want to record it on one real device, get the full end-to-end transaction and the timer, the time it took to complete the transaction, and make sure it's acceptable from one device before you actually scale it up. Let's say it's acceptable, it takes two seconds and that's okay by you; then replicate it 100,000 times and see whether the time it takes degrades when the network is being loaded by more devices.
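As a sketch of that first step (timing a single real-device transaction against an acceptable threshold before scaling it out with a load tool), something like the following works; run_login_transaction stands in for your actual scripted flow against a real device.

```python
# A sketch of baselining one real-device transaction before replicating it at
# scale with a load tool. `run_login_transaction` is a placeholder for the real
# scripted flow (e.g. an Appium login script against a physical device).
import time

ACCEPTABLE_SECONDS = 2.0

def run_login_transaction():
    # Placeholder for the real end-to-end flow on a real device.
    time.sleep(1.2)

start = time.perf_counter()
run_login_transaction()
elapsed = time.perf_counter() - start

print(f"Single-device login took {elapsed:.2f}s")
if elapsed > ACCEPTABLE_SECONDS:
    print("Baseline already too slow: fix this before replicating it 100,000 times")
```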

 

Joe: I guess along those lines, I don't know if this is a myth, so I'd just like to get your input on it. I've heard that a lot of times when people are doing performance testing of a mobile app, they confuse it with the usage of the web app. So they may test functionality that is very popular in their web app but not so much in their mobile app, and where they go wrong is that they have the wrong breakdown, the wrong scenario: they're running a test against the mobile app as if they were running against the web app, when users behave differently on the two platforms.

 

Eran: I know that this is definitely a challenge, and a risk you can run into. I think that as long as you are recording your performance test from a mobile device, where you have the user agent itself, you know it was recorded on a Samsung device or an iPhone, and you can manage it correctly within the load test framework. But again, I agree with you, there are a lot of practitioners out there who are challenged by this scenario, and my recommendation to them is to start small. Take one scenario on one mobile device, see that it works and that it doesn't mix with and confuse your other existing test scenarios from web or other performance testing, then scale it up and analyze it on an ongoing basis.

 

Joe: So far we've talked about best practices for mobile testing, functional testing and performance testing. There's one other piece I've been hearing more and more about, and that's alerting and monitoring once the app is in production. Is that a trend you've been seeing as well? I noticed you mentioned a few times that network congestion differs from one country to another, and even from state to state. So I see more and more companies now not only doing functional and performance testing, but being more proactive and actually monitoring these apps in the wild. Is that a trend you've been seeing more and more of?

 

Eran: For sure, for sure. Actually, what I am seeing in the market is divided into two different practices. One of them is real-user monitoring, and this is real production monitoring. You actually inject an SDK library as part of your APK or IPA, your native applications, which are out there in the app stores. Once users are logged in to the application, this SDK library starts reporting back to the dashboard, to the servers, to the NOC, to whoever is monitoring in the organization: which device is being used on which network, which user flows, if it crashes, where it crashed and so on. So you know everything, live from production, and this is a very proactive approach to monitoring and to quality. You get an alert at the same moment the end user experiences the crash or the performance issue, and you can act upon it before it goes out on the social networks as "such-and-such is down." We know about these kinds of things.

 

There is another approach, which I am also seeing a lot, and this is pre-production monitoring. It's still done in production environments, but in a synthetic way. This is the difference between real-user monitoring and synthetic monitoring. Synthetic monitoring does the same job, but it's done in a closed lab where you can control everything. You mimic the production environment. Once the application is ready, you set up a very small number of devices; you don't need too many. You take two Androids, two iPhones and so forth, and you continuously run your test scenarios, 24/7, on the most important transactions. You collect the data. The data can show performance degradation; you put some thresholds in place, and if they are exceeded you get an alert. If the application crashes, you get an alert, and you get it on an ongoing basis.

 

Once it's in your lab, as opposed to external monitoring, you can definitely get much more insight. Why? Because you get the failure on devices that are under your control. You have full access to the logs, you have full access to the device itself, you know what happened, you know which scenario was running, and you control what was actually running on the device as part of your test execution. When you are doing it in a real-user monitoring scenario, in production, with an end user you cannot even get in touch with, then okay, it crashed, but you only know that your application crashed on the Vodafone UK network on a Samsung S5. You have the environment profile where it crashed, but you have no access to what drove the crash. You don't know which applications were running in the background, you don't know the battery level, you don't know the other supporting debugging elements that could help you drill down into and resolve the issue.

 

These are the two approaches, and I'm hearing more and more about both of them. I would say that neither one is better than the other. My recommendation is to combine them: do it in the lab and also as part of real-user monitoring in production.
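On the synthetic side of that combination, the lab loop can be as simple as the sketch below: run the key transaction on a couple of lab devices around the clock and alert on crashes or threshold breaches. The transaction, device list and alert hook are all placeholders.

```python
# A sketch of a synthetic (lab) monitoring loop: run the most important
# transaction continuously, record the timing, and alert on failures or
# threshold breaches. `run_checkout`, DEVICES and `alert` are placeholders.
import time

DEVICES = ["Samsung S6 (lab)", "iPhone 6S (lab)"]
THRESHOLD_SECONDS = 3.0

def run_checkout(device):
    # Placeholder for the real scripted transaction against a lab device.
    time.sleep(0.5)

def alert(message):
    print(f"ALERT: {message}")   # in practice: notify the NOC, post to chat, etc.

def monitor_once():
    for device in DEVICES:
        start = time.perf_counter()
        try:
            run_checkout(device)
        except Exception as exc:          # a crash is an immediate alert
            alert(f"checkout failed on {device}: {exc}")
            continue
        elapsed = time.perf_counter() - start
        if elapsed > THRESHOLD_SECONDS:   # performance degradation alert
            alert(f"checkout on {device} took {elapsed:.1f}s (> {THRESHOLD_SECONDS}s)")

while True:                               # run 24/7, every five minutes
    monitor_once()
    time.sleep(300)
```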

 

Joe: Thank you Eran. Before we go, is there one piece of actionable advice you would give someone trying to improve their mobile testing efforts? And let us know the best way to find or contact you.

 

Eran: Sure, Joe. So on top of the other pieces you talked about, coverage and monitoring and non-functional testing, there is also the robustness of your test automation. At the end of the day, test cycles are very limited in time. I'm hearing of cycles that need to run in 48 hours, and sometimes even less. For these test scenarios or test execution cycles to be executed on an ongoing basis, on every commit the developer makes, you need robustness in your test automation, and one way of getting this robustness is to put more validations into your test automation scripts.

 

There's no doubt that using object IDs, whether through XPath or other identifiers, is the best way today in mobile testing, whether with Appium or other tools. But you also want to make sure that you are able to visually analyze the expected next screen before you move on to the next test. If you put some prerequisites between the test scenarios and add some validation points on a visual basis, you get two in one. On one end, you get the ability to transition safely to the next test step. For example, after login, you want to make sure the login was successful, so you do a visual validation. If it didn't succeed, you simply stop the execution and report the defect back to the engineers, to the developers.

 

So you get two action items here. You get, one, validation that your test passed, and two, you are not breaking or wasting the time of your overall CI or test automation flow, by combining object IDs with visual validation through [Oseeya?] or other visual techniques. Using both will definitely put robustness into your test execution and test scripts.

 

So, several things. I actively post on Twitter; you can search for me by name, Eran Kinsbruner. I have my own mobile testing blog, Mobile Apps Testing dot com, where you can find me. Simply search Google for Eran Kinsbruner and you'll get most of my contact information. I'm active on LinkedIn, on Twitter and on my blog.

 

