
129: End-to-End Mobile DevOps with Guy Arieli

By Test Guild


I don't think there’s a more popular topic in test automation nowadays than DevOps. While many folks I speak with have a process in place for most of their applications, one area some are struggling with is how to apply DevOps best practices to their mobile app initiatives.


That’s why I was happy to speak with Guy Arieli, CTO of Experitest — a company that provides a bunch of cool quality assurance tools for mobile DevOps. If you’re involved with mobile development of any sort, you won’t want to miss this episode.

About Guy Arieli


Guy Arieli has a strong track record in technology innovation, with more than 18 years of development and hands-on experience in test automation. Prior to Experitest, Guy spent several years in management positions at HP (formerly Mercury), Cisco, and 3Com. In addition, Mr. Arieli founded the largest local test automation services company, TopQ (formerly Aqua Software), and sold it to Top Group, a leading publicly traded technology group. Guy leads the largest test automation forum online and is a keynote speaker at events worldwide. He holds a B.Sc. from Israel's world-renowned Technion.

Quotes & Insights from this Test Talk

  • If your website sucks, no one really knows about it. This is not the case if you have a bad mobile application. If you have a low-quality mobile application, very quickly it will get rated, and no one will download it. The visibility of your result is a different thing. The other thing is the complexity of distributing your release. In the case of a website, if you have a bug, you can fix it, and on the next refresh all your customers will experience the fix; in mobile, this turnaround can take a few weeks. You need to upload your application to the app store, then either Apple or Google needs to approve it, and then your customers need to download it. This cycle is much longer, and because it's much longer, if you have a problem in your application you will face longer downtime, or longer reputational damage, than with your website. So this is one part of your question. The key, obviously, in order to avoid those big pitfalls, is to test your application and automate it. I don't see any other way but to have a real, high-scale automation project.
  • One thing that is also obvious in mobile is that the size of the testing matrix is much bigger. Let's say I want to cover 80% of my customers, in terms of device manufacturer, device version, and so on. When it comes to web, I can do that using something like three, four, or five different combinations of browser version plus operating system. When it comes to mobile, we need at least three times that number in order to reach 80% coverage.
  • Usually what I see my customers doing is going to their marketing department, where they have all the BI from the existing version of the application. They know exactly how many people are using their application on an iPhone 7 with iOS 10, or an iPhone 6 with iOS 9; they can drill down to that level. They say, “I'm good with 80% of my customers, and in order to cover 80%, I will need this many devices.”
  • Usually when you release an application to the app store, especially if you are a big enterprise, you really think your application is good. Sometimes you are aware of some issues, but you find them to be minor. We've tried to analyze why people in these big organizations upload their application and still end up with a bad rating. What we've found is that there are three main reasons, split roughly equally between them. The first is device fragmentation. A third of the issues you will find are related to device fragmentation: the enterprise released a version it didn't test on the specific combination of device and operating system used by a particular customer. The second is what we call network virtualization, or network conditions. This is one of the main pitfalls I see today in test automation, and it's so easy to fix. Mobile automation is generally hard; it's not easy to get to a point where you have hundreds or thousands of tests that are continuously executed and stable. But once you have such an environment, adding network virtualization, or different network conditions, is a very easy thing. This is something I encourage everyone to do, because, as I said, a third of those issues come from people testing their application in a Wi-Fi environment, with very high bandwidth and low latency, while their customers use the application in a 3G environment, sometimes with bad reception, and the difference in application experience between those two environments can be huge. The third part is, obviously, some functional problem that you didn't encounter.
This is how the issues are distributed across the gap between how I perceive my application and how my customers perceive it.
  • We provide a network virtualization solution as part of our solution, and it gives you the exact template. You can say, “I'm using AT&T, my phone is in New York, the server is hosted by Amazon, they are located in Europe, I'm using 3G, and I'm seeing three bars in my phone – simulate the condition for that, and now I want those conditions to be dynamic, so now I’m entering an elevator – I will be in the elevator for thirty seconds.” Those scenarios are something that can be easily simulated when you have a network virtualization tool.
  • I think: be a good programmer and know how to design code, because in the end, when you build an automation project, you are building a product that can grow to a huge amount of code. You need to think about it from the beginning as if you are designing a product that is going to scale and is going to contain a lot of code. In the end, the problems automation projects usually run into are scaling problems that would have been avoided if the project had been designed as a software product from the beginning.
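The "drill down to 80%" exercise Guy describes is essentially a greedy cumulative-share calculation over device analytics. A minimal sketch, using made-up market-share numbers (the device/OS percentages below are illustrative assumptions, not real analytics data):

```python
# Hypothetical device/OS usage shares pulled from app analytics.
# The percentages are made-up illustrations, not real market data.
usage = {
    ("iPhone 7", "iOS 10"): 0.28,
    ("iPhone 6", "iOS 9"): 0.17,
    ("Galaxy S7", "Android 7"): 0.15,
    ("Galaxy S6", "Android 6"): 0.10,
    ("Pixel", "Android 7"): 0.08,
    ("Nexus 5X", "Android 6"): 0.06,
    ("iPhone SE", "iOS 10"): 0.05,
}

def devices_for_coverage(usage, target=0.80):
    """Greedily pick the most-used combos until cumulative share reaches target."""
    picked, covered = [], 0.0
    for combo, share in sorted(usage.items(), key=lambda kv: -kv[1]):
        if covered >= target:
            break
        picked.append(combo)
        covered += share
    return picked, covered

combos, covered = devices_for_coverage(usage)
print(f"{len(combos)} device/OS combos cover {covered:.0%} of users")
```

With these sample numbers, six device/OS combinations are needed to pass the 80% mark, which lines up with Guy's point that the mobile matrix is roughly three times the handful of combinations a website needs.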
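Guy's point about network conditions is easy to make concrete: the same payload that feels instant on office Wi-Fi can take tens of seconds on a weak 3G link. A back-of-the-envelope sketch of that difference, where all bandwidth and latency figures are illustrative assumptions (a network virtualization tool would impose such profiles for real rather than just modeling them):

```python
# Back-of-the-envelope fetch-time model: connection round trips
# plus payload size divided by bandwidth. All profile numbers are
# illustrative assumptions, not values from any real tool.
PROFILES = {
    # name: (bandwidth in Mbit/s, round-trip latency in ms)
    "office_wifi": (100.0, 10),
    "3g_good": (2.0, 150),
    "3g_weak": (0.4, 400),
}

def transfer_seconds(payload_mb, profile, round_trips=5):
    """Estimate seconds to fetch payload_mb megabytes over the given profile."""
    bandwidth_mbit, latency_ms = PROFILES[profile]
    handshake = round_trips * latency_ms / 1000.0   # connection setup cost
    transfer = payload_mb * 8 / bandwidth_mbit      # payload on the wire
    return handshake + transfer

for name in PROFILES:
    print(f"{name}: {transfer_seconds(2.0, name):.1f}s for a 2 MB screen")
```

Under these assumed profiles, a 2 MB screen that loads in a fraction of a second on Wi-Fi takes on the order of forty seconds on weak 3G, which is exactly the kind of gap that only shows up if tests run under simulated network conditions.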

[tweet_box design="default"]If your website sucks, no one really knows about it. This is not the case if you have a bad #mobile app @Experitest[/tweet_box]

Resources

Connect with Guy Arieli

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.


Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!
