About This Episode:
Have you ever seen or used robot-powered mobile testing? Discover how Mobot's mechanical robots automate app tests that are impossible via emulators, eliminate manual testing, get products out faster, and improve app quality. In this episode, Eden Full Goh, the Founder and CEO of Mobot, shares how this automation technology is the missing link in your automation testing plan. Listen in to hear more about the future of mobile testing.
Exclusive Sponsor
The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!
About Eden Full Goh
Eden Full Goh is the Founder and CEO of Mobot. Mobot (YC W19) is building the future of physical testing in the cloud, leveraging mechanical robots to help engineering teams automate regression testing for software that runs on proprietary hardware, starting with iOS and Android.
As a former Product Manager, Eden worked on software spanning the energy, healthcare, and government sectors at Palantir Technologies and Butterfly Network. Before dropping out to accept a Thiel Fellowship, she studied Mechanical Engineering and Computer Science at Princeton University. During her fellowship, Eden founded SunSaluter, a global non-profit that deployed an open-source solar panel tracker design in 19 countries, impacting 17,000+ people.
Connect with Eden Full Goh
- Company: www.mobot.io
- LinkedIn: edenfull
- Twitter: edenfull
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:01] Speaker Welcome to the Test Guild Automation podcast, where we all get together to learn more about automation and software testing. With your host, Joe Colantonio.
[00:00:16] Joe Colantonio Hey, it's Joe, and welcome to another episode of the Test Guild Automation podcast. Today, I'm talking to Eden all about mobile app testing powered by, you got it, real robots. Really excited about the show today. Eden is the CEO and founder of Mobot, a mobile app testing service which is powered by supervised mechanical robots. If you don't know what that means, don't worry. You're going to find out. The daughter of brave refugees, Eden is proud to be Chinese-Canadian, born and raised in Calgary, Alberta, and she attended Princeton University and received the prestigious Thiel Fellowship. So she has a lot of awesome experiences behind her. I'm really excited to get into her founding story and what Mobot is all about. You don't want to miss this episode. I think this technology is really groundbreaking, something you probably haven't heard about before. So you want to stick around. Listen up!
[00:01:04] Speaker The Test Guild Automation podcast is sponsored by the fantastic folks at Sauce Labs. Their cloud-based test platform helps ensure you can develop with confidence at every step from code to deployment, for every framework, browser, OS, mobile device, and API. To get a free trial, visit testguild.com/SauceLabs and click on the exclusive sponsor's section to try it for free today. Check it out.
[00:01:31] Joe Colantonio Hey Eden, welcome to the Guild.
[00:01:36] Eden Full Goh Thanks for having me.
[00:01:37] Joe Colantonio Awesome. Awesome to have you. I'm really excited about what you're all doing at Mobot. Before we get into it, though, is there anything in your bio that I missed that you want the Guild to know more about?
[00:01:45] Eden Full Goh Yeah. I mean, I started out as a product manager, and that was ultimately what inspired me to start Mobot. But a lot of that came from my early experiences studying engineering, building a nonprofit, dropping out of college, and getting a Thiel Fellowship. Happy to dive into any of that if that would be relevant.
[00:02:04] Joe Colantonio Yeah, I think it's a really cool, inspiring story. So I thought we'd start maybe from the beginning. You were at Princeton, you dropped out and created a nonprofit. Could you talk a little bit about that?
[00:02:16] Eden Full Goh Yeah. So it actually started earlier for me, in high school.
[00:02:19] Joe Colantonio Wow.
[00:02:19] Eden Full Goh I competed in a lot of different science fair competitions and did a lot of research around solar and renewable energy, and I carried this research work with me into college. And so during a summer internship, I had a chance to travel to Kenya and actually deploy a very early prototype of a solar panel tracking system I developed at the time. And 10 to 15 years ago, solar panels were still really expensive. So the idea of having this technology that could optimize the rotation of a solar panel, so it would collect more energy, was actually really compelling, especially in developing countries where the cost of energy is so high. So I ended up dropping out of college and getting a Thiel Fellowship to start this nonprofit and scale operations around it. And I think that was where a lot of my early founder experiences came from, and I still use a lot of those skills and lessons today.
[00:03:15] Joe Colantonio Nice. So what are those skills and lessons that maybe you learned from the first startup that you've brought over to your next one?
[00:03:21] Eden Full Goh I think the biggest thing is you really have to be building a solution that people care about, and it has to be solving a real need. And it can't just work in this ideal, beautiful lab setting. I remember when I deployed the first version of SunSaluter, which was the name of the solar panel technology. I had developed it in a lab at Princeton, and I thought, okay, I understand what the use cases are. I understand what people are going to want from this. And I remember going to the village in Kenya, and there were cows and children, and the well water was dirty, and it was dusty. And I realized, oh, I didn't actually design the prototype to account for all of this, the rough terrain, the different conditions. And I think that's something that's still relevant today: you have to be constantly iterating, and you have to iterate with your users and your customers, not just have your own thesis for the world and develop things in a vacuum.
[00:04:19] Joe Colantonio Absolutely. So, obviously, you're very smart. Of all the areas and problems you could have focused on, what made you come into the software testing space? How did you come into that?
[00:04:31] Eden Full Goh Yeah. So for me, I think that early nonprofit hardware experience really gave me an appreciation for solving problems in a physical way. I switched gears a little bit after my fellowship and got a job as a forward-deployed product manager at a company called Palantir. I worked there for a couple of years, helping them build web applications that spanned different industries, whether it was health care, insurance, government, or energy. And in working with those engineers, they wrote all of their own web tests. Of course, I would show the final product to customers, but I didn't really have to worry about QA. Where things got interesting for me was a couple of years after working at Palantir: I switched jobs and started working at a company called Butterfly Network, and they are building a portable ultrasound probe that plugs into an iPhone or Android device. So all of a sudden I was going from the world of web apps to mobile, but not only that, mobile with additional hardware peripherals. And so the way that I thought about the interactions between a product manager, an engineer, and a QA team, everything I understood about the software development lifecycle, kind of went out the window. I realized things work very differently. Of course, there are different preferences, different engineering teams, and different cultures. But I do think, fundamentally, when you move from the web to the mobile and hardware world, it's a different tech stack. And so that was ultimately what inspired me to start Mobot.
[00:06:00] Joe Colantonio Nice. Most people would be scared. You had a job, and then you said, oh, I'm going to found another company, I'm just going to leave and do it all over again, which is not an easy thing to do, especially in this industry, and probably harder with hardware. It's probably very, very difficult. So what gave you, not the energy, but what made you make that decision?
[00:06:22] Eden Full Goh Yeah. So the biggest thing for me was seeing how easy testing is supposed to be for the web, coming from my first job where I didn't have to think about QA, and then transitioning into this job where, I remember, on my second day of work, the engineers were like, hey, we're really sorry. We have some automated unit tests that are running. There's a bunch of probes and a bunch of phones sitting in a rack somewhere in a server room. But it's not a replacement for manual testing. So before you take this prototype to the hospital, because this was an FDA-cleared medical device and we had real doctors who would try out new releases, they were like, before you take this to the hospital every week, could you please just run a round of manual testing? Just pretty please make sure everything works. And I was like, wait a minute. What is the point of having your automated CI tests? What is the point of having all those other unit tests, if I still, at the end of the day, have to go and run these manual tests? And the more research I did into this problem, the more I couldn't find a clear-cut, clean, automated solution like you have for the web, and even for other desktop apps and RPA solutions. None of them really worked for mobile, none of them worked for hardware. And because this was something I kept having to do week after week, after several months of doing this, I'm naturally a problem solver. I started my own nonprofit trying to fix a different set of problems. I'm like, wait, why can't I just also solve this problem physically? I remember on my desk I had, like, ten different iPhones, and I was literally assembly-line style, just tapping my way down. And I was like, if I can do that with my finger, can we get a robot finger to do that? Because we can't get rid of the physical part. We can't get rid of the mechanical part. So I should just get a mechanical robot to do all this. And I couldn't stop thinking about that idea. I remember I came up with the original idea around 9 PM. I was taking a shower at night, knowing I was going to have to go back to doing more late-night QA that night, and I was like, oh no, this is the worst part of my job, having to do the manual testing. And I just remember this idea popping into my head in the shower, and I couldn't stop thinking about it. And after several weeks of that, I realized no one else was going to start this company, and I wanted to be the one to do it.
[00:08:42] Joe Colantonio Nice. So for the particular specialty you focus on, you obviously needed a hardware background. Were you the one that created that prototype from scratch and made sure it worked, and then started to scale it up slowly?
[00:08:56] Eden Full Goh Yeah. So the original prototype for Mobot came from these robots you can buy off the shelf that are for signing greeting cards. You can actually automate a robot to sign all your Christmas cards for you. So I basically took one of those robots, and we swapped out the pen for a stylus that I bought on Amazon for, like, ten bucks. And that was the first version of a robot that we used for testing. One of the things we wanted to focus on was, if we're going to need many of these robots running many devices, we want to keep the robot and the hardware part of this very low-cost, very affordable. That's not the innovative part of what we're doing. You can totally buy a robot that will test a touchscreen. There's OptoFidelity, there's Matt, there's Tapster, and there are all these other robots out there. But what really makes Mobot compelling is that we make this job super easy for your everyday software engineer. We're providing infrastructure as a service, so we don't want the engineer to have to worry about programming the robot, maintaining the robot mechanically, fixing the end effector if it broke or isn't calibrating. There's a lot of complexity in maintaining this whole process, and we want to take that burden off of the engineer's hands. With the original version of the robot, I was just thinking, let's go off the shelf and customize the parts over time. Since then, of course, we've built our own commoditized robot, and there are custom parts to our platform and our hardware. But the original prototype was me building a web app that controlled an Arduino-driven pen plotter, a greeting card robot. And we found our first customers. And the robot would only work on my personal iPhone. Calibration was so painful and so hard, so I got all of our original tests for our first customer to run only on my personal iPhone.
[00:10:48] Joe Colantonio Wow. I can imagine it's hard to get a script to run against multiple devices, so that's a unique thing. How do you calibrate that for all these devices? Because it sounds like a mash-up, almost, of robot technology, like you said, but also a service like Sauce Labs or BrowserStack, where you're running on real devices in the cloud, everything like a phone, a watch, who knows what else. I'm sure there are all kinds of use cases that come up, like car companies and things that probably have hardware that needs to be interacted with somehow. So how do you calibrate a robot to do that?
[00:11:24] Eden Full Goh Yeah, so that is actually where we spend a lot of our engineering team's time: not so much the robot itself, but how do you build software that tells the robot what to do, and how do you scale it reliably across all of these different mobile devices, because there are different flavors of Android that run on different manufacturers, and iOS can be a struggle. We've spent a lot of time building out that technology so that we are able to reliably obtain screenshots and the video feed off of each device that we test with. We actually have a native mobile app that we've built ourselves that essentially registers that, hey, this device is now connected on this robot, and it runs its own little two-minute calibration routine to figure out, okay, where is the top left, the bottom right, everything in between, making sure that before a test run starts on a robot, we have actually registered that device, we've calibrated it, we're ready to obtain screenshots off of it, and we have that test script ready to run.
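To make that calibration idea a bit more concrete, here is a minimal sketch of how a two-point calibration could map screen pixels to robot coordinates once reference points like the top-left and bottom-right corners are known. The class, the numbers, and the linear-mapping approach are illustrative assumptions, not Mobot's actual routine.

```python
# Minimal sketch of device calibration: once the routine has located the
# screen's top-left and bottom-right corners, any screen pixel can be
# mapped linearly into robot work-area coordinates. All names and numbers
# here are hypothetical, not Mobot's implementation.
from dataclasses import dataclass


@dataclass
class Calibration:
    px_top_left: tuple[float, float]        # screen pixel of top-left corner
    px_bottom_right: tuple[float, float]    # screen pixel of bottom-right
    robot_top_left: tuple[float, float]     # robot XY (mm) that taps top-left
    robot_bottom_right: tuple[float, float] # robot XY (mm) for bottom-right

    def pixel_to_robot(self, x: float, y: float) -> tuple[float, float]:
        """Linearly interpolate a screen pixel into robot coordinates."""
        (px0, py0), (px1, py1) = self.px_top_left, self.px_bottom_right
        (rx0, ry0), (rx1, ry1) = self.robot_top_left, self.robot_bottom_right
        fx = (x - px0) / (px1 - px0)    # fraction across the screen width
        fy = (y - py0) / (py1 - py0)    # fraction down the screen height
        return rx0 + fx * (rx1 - rx0), ry0 + fy * (ry1 - ry0)


# Example: a 1170x2532 screen registered against a robot's work area.
cal = Calibration((0, 0), (1170, 2532), (12.0, 30.0), (95.0, 210.0))
print(cal.pixel_to_robot(585, 1266))    # screen center -> robot XY
```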
[00:12:28] Joe Colantonio Alright. Cool. What language is used to calibrate a robot? Is it Python?
[00:12:32] Eden Full Goh Yeah. So for us right now, if you look at it at the most granular level, everything translates to X, Y, Z coordinates for the robot. And what we are trying to do over time, especially as we think about ways we can integrate with other testing tools, is build more and more intelligent ways of telling the robot how to get to those X, Y, Z coordinates. But at the very low level, it's essentially a sequence of ordered instructions you are giving the robot: X, Y, Z, this is where you go, this is what you tap. And we've built our own internal tools to make it very easy to program those robots, so we are able to very quickly execute a series of tests. And then our customers receive a screenshot-by-screenshot replay as a test report, containing timestamps and logs of everything that was actually seen on screen, as if a manual test was completed. Because what we're essentially doing with Mobot is automating manual testing, and we're starting with mobile as the core platform today, because that's where a lot of the pain is being felt in the industry. But the reason I started the company, inspired by my background, was that I wanted to build a solution that can test medical devices, and, as you mentioned, automotive, your self-checkout kiosk at the grocery store, your Apple Watch, and your other peripherals. I think software is becoming more physical, and we're going to need to build more innovative testing frameworks and tools that cover beyond just simulated testing or virtualized device testing in the cloud.
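As a rough illustration of that "sequence of ordered instructions" idea, the sketch below runs a test as a list of low-level steps, resolves each step to robot coordinates, and captures a screenshot after every step to build the replay-style report Eden describes. The StubDriver, the instruction format, and the pixel-to-robot mapping are hypothetical, for illustration only.

```python
# Hedged sketch: a test as an ordered list of instructions, each resolved
# to robot coordinates, with a screenshot captured after every step.
# The driver API and instruction format are hypothetical.
import time

test_script = [
    {"action": "tap", "x": 540, "y": 1650},    # e.g. tap a "Log in" button
    {"action": "swipe", "x1": 540, "y1": 1800, "x2": 540, "y2": 600},
    {"action": "tap", "x": 540, "y": 2100},    # e.g. tap "Submit"
]


class StubDriver:
    """Stand-in for a real robot driver; prints instead of moving."""

    def move_and_tap(self, x, y):
        print(f"tap at robot ({x:.1f}, {y:.1f})")

    def drag(self, start, end):
        print(f"drag from {start} to {end}")

    def capture_screenshot(self):
        return "screenshot.png"    # placeholder artifact path


def run(driver, script, to_robot):
    """Execute each step; to_robot maps screen pixels to robot XY
    (e.g. the pixel_to_robot calibration mapping sketched earlier)."""
    report = []
    for step in script:
        if step["action"] == "tap":
            driver.move_and_tap(*to_robot(step["x"], step["y"]))
        elif step["action"] == "swipe":
            driver.drag(to_robot(step["x1"], step["y1"]),
                        to_robot(step["x2"], step["y2"]))
        time.sleep(1.0)                         # let the UI settle
        report.append(driver.capture_screenshot())
    return report          # screenshot-by-screenshot replay, as described


run(StubDriver(), test_script, lambda x, y: (x * 0.08, y * 0.08))
```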
[00:14:05] Joe Colantonio Cool. I'm probably missing a piece there, then. So you have a robot, you have a cloud solution with all the devices, and you mentioned you're automating the manual tests. So do you actually create the tests for a company, or does someone have to create, like, an Appium script or an xUnit test or something and send it to you? How does that work?
[00:14:25] Eden Full Goh Yeah. So just to make sure we're all on the same page because I know this is kind of a new concept.
[00:14:29] Joe Colantonio Yeah.
[00:14:29] Eden Full Goh The best way to imagine Mobot is as a fleet of mechanical fingers in the cloud, if you will. We're building shelves of robots in our office, where every robot is mounted with a device; we have over 200 iOS and Android devices in our inventory. And essentially we have these little mechanical robot fingers, or end effectors, that are tapping on every screen. The reason we are doing testing in this way is that it enables us to test things that normally can't be automated: multi-device messaging back and forth, push notifications, location services, Bluetooth, all of that. And in building out that platform, it means we are remotely delivering, essentially, almost like manual testing to the engineering teams we work with. Normally, they send us a batch of their test cases. It's usually things like push notifications and location services, like I said, and we will actually set up, maintain, and run those tests on our robots. So we don't sell robots, we don't ship robots to anyone. We run them all in our office, and then we send you a test report in the cloud that shows you the outcome of everything, with logs of what was actually executed by the robot. It does feel, in many ways, almost like you're hiring a team of robots remotely to do the work for you. And so we do the setup and the maintenance currently, but in the long run, what we're looking to do is build out an API and our own integrations with existing testing tools. So one day it would be awesome to be able to build an Appium test that runs on a real mechanical robot in our infrastructure. But that's probably a year or so away still.
[00:16:11] Joe Colantonio Nice. So I think you mentioned something that's kind of critical: besides the cool factor of using robots, you're able to automate things that you can't automate with a typical existing automation script. Is that correct?
[00:16:22] Eden Full Goh That's correct. So I think the most common test cases right now are going to be things like phone calls and device-to-device messaging, where you have dependencies on one device before you can run on another. A very common example is a ride-sharing or ride-hailing application. You have to have a driver account that's logged in and available, and then you have to have a rider account that's in the same GPS location. And then the rider requests a ride, and the driver gets a push notification. The app behavior might be different if the driver's app is backgrounded or foregrounded. Or maybe you're using a map application, and you want to make sure the audio or the alerts from the driver app are actually being rendered correctly, even when you have a different app in the foreground. So there's actually a lot of complexity here that traditional testing frameworks assume away: your app is always open. But that's just not the way people are using mobile apps and using technology. And like I was mentioning earlier, the broader theme behind all of this is that software is becoming more physical. The way you use it is manifesting more physically and more dynamically. So we are trying to build a testing tool that tests the way humans are actually using technology. And we see this as a complement to your existing XCUITest or Espresso or Appium testing frameworks. We're not here to replace those tests. What we've observed in the market is that even though all those tests are running, someone usually is still doing some kind of manual testing, whether it's your marketing team or your QA team or the engineers themselves, before a release is pushed to the App Store or Google Play. And so Mobot is really trying to replace that final line of defense, because most people have felt that physical testing is still needed for a lot of these edge cases.
[00:18:11] Joe Colantonio Absolutely. So it sounds like someone has a pipeline, they go through it, they run the unit tests, the integration tests, the automated tests, and right before they release, they send it to you. You run your tests, you send the results. How do you know when something fails if you're just going by coordinates? Like, how does the robot know you actually didn't tap on the right thing, or that the right thing didn't appear for you to tap on?
[00:18:31] Eden Full Goh Yeah. So the way we do testing is we normally try to build a baseline based on a stable staging or production build, so we know what's supposed to happen in advance, because we're checking for regressions. And what we'll do, as a basic sanity check, is: if your app entirely disappeared off the screen and you see the home screen, it probably crashed. But if we're seeing something slightly different, or if you tapped on something and you're still seeing the previous screen, probably whatever you tapped on didn't work. So we're essentially using those screenshots as points of reference. But the other thing that makes Mobot compelling to a lot of our engineering teams is that the process is not 100% automated, and we're not pretending that it is. What really makes engineering teams want to use Mobot is that we have these human-supervised mechanical robots doing the work. So if a robot gets stuck, a member of our team is nearby. We're watching the robots. We can quickly retrain the robot, we can do a manual intervention, we can perform any actions that might require some human context, and we can get the robot back on track. This way we can filter out any annoying false positives and false negatives that pop up in results, and I think that's really been one of the things that makes automated testing challenging. Using human common sense, we can filter out some of those results, so that by the time an engineering team actually receives a test report from Mobot, they feel good that there's an actual bug here. When we say there's a bug, it's a real bug. It's not that your UI changed because of a rebrand or something. It's something actually legitimate that warrants your attention now.
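To sketch how screenshots could serve as points of reference, here is a minimal pixel-diff triage, assuming Pillow and NumPy are available; the thresholds and the three-way pass/fail/human-review split are illustrative assumptions, not Mobot's actual heuristics.

```python
# Hedged sketch: compare a step's screenshot against a baseline from a
# known-good build, and route ambiguous diffs to a human supervisor.
# Thresholds here are illustrative, not Mobot's actual heuristics.
import numpy as np
from PIL import Image


def diff_ratio(baseline_path: str, capture_path: str) -> float:
    """Fraction of pixels that differ noticeably between two screenshots."""
    a = np.asarray(Image.open(baseline_path).convert("L"), dtype=np.int16)
    b = np.asarray(Image.open(capture_path).convert("L"), dtype=np.int16)
    if a.shape != b.shape:
        return 1.0                    # resolution mismatch: treat as failed
    return float(np.mean(np.abs(a - b) > 16))   # per-pixel tolerance, 16/255


def triage(baseline_path: str, capture_path: str) -> str:
    ratio = diff_ratio(baseline_path, capture_path)
    if ratio < 0.01:
        return "pass"             # effectively identical to the baseline
    if ratio > 0.60:
        return "fail"             # wrong screen entirely, e.g. a crash to home
    return "human-review"         # ambiguous: a supervisor decides
```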
[00:20:11] Joe Colantonio Nice. So that's what you mean by human-supervised; I've noticed you always mention human-supervised. It's very cool. So you must have a huge team, then. I can't imagine, how do you have all these people on-site with the robots and all the different use cases? I guess people have to be on-site as well, because you can't just have someone remote, can you? Or do they log in and monitor what the robot's doing remotely?
[00:20:36] Eden Full Goh So the long-term vision is, we want to have, I don't know, a warehouse in Kansas with thousands of robots, where maybe you only need one person manning the warehouse and everyone else can watch robots from home. And eventually, we want the engineering teams we work with to be able to watch the robots themselves. That involves a lot of infrastructure, probably live video feeds. We're not quite there yet. So you're right, at the current stage of our company, we are around 30 people, and a significant portion of our team is the operations team that's watching the robots run. And that process is helping our R&D as well, because every single time a robot makes a mistake, or something falls out of calibration, or a robot stops working, we can funnel that feedback directly to our engineering team. So there is a very tight feedback loop there, and we are learning a lot about our technology as we're building it. But in the meantime, we're using this as an opportunity to essentially validate the use cases and understand what people actually want to be tested.
[00:21:36] Joe Colantonio Yeah, right.
[00:21:38] Eden Full Goh And I think, because we work with such a technical audience, when you throw out buzzwords like automation and AI and machine learning, real engineers know it's not perfect. It's not some sentient thing that's going to test your stuff. And so I feel like by taking this authentic approach, where we tell people we know automation is not perfect, and we're here to protect you with human supervision when things don't quite go right, that actually helps the engineering teams feel better about using Mobot. So it is a combination: we're building this automated platform, but we also have really strong customer success, so that we deliver a good service and good support, especially when the time comes that we find some sort of crash or some sort of memory leak, and it only happens on a very specific SDK on certain manufacturers of Android. You want to have someone come and investigate and work with you and understand what's happening. And so we provide that as a part of our inclusive solution. It's not just an automated tool. We are your partner in the engineering process as well, and we're working with you to figure out what's going on. We have devices you don't have. We'll give you the information you need. We'll help you find the logs and the video reproduction that you need. There's a lot that we do to support our customers. And of course, over time, all of this is going to get automated and improved. But I think it's very valuable to be iterating alongside them, because then every question they ask, every time we learn about a new type of testing, a new use case that people need automated in the physical world, we can add that to our roadmap.
[00:23:09] Joe Colantonio Love it. So I guess also, what I'm thinking is, maybe I'm wrong, do you find that a lot of people have a lot of confidence in their automated tests, like Appium tests, because they're emulating a swipe or some sort of action, but when you actually perform that action as a real user, as a robot, it bubbles up a bug that won't be found by an automated script? Are there any use cases like that? So maybe people are like, well, why do I need this? I can emulate gestures, and they don't realize that actually having someone do it, or a robot performing it, will find bugs they won't find even with that emulated automation.
[00:23:46] Eden Full Goh Yes. That is actually, in addition to the stuff we can do that regular emulators don't, and I can talk about that in a second.
[00:23:52] Joe Colantonio Yeah.
[00:23:52] Eden Full Goh There are sometimes cases where something passed in an emulated test, or even a virtualized real-device test in the cloud, but either that device farm didn't have the actual device we're testing on, or there is something about the real-world touchscreen gesture that doesn't get caught if you do the testing with just a simulated event. One example: we work with a lot of fintech applications, so, signing up for a bank account. One of our customers has this flow where you select that you want to sign up for an account, you enter your Social Security number, and there's two-factor authentication. The two-factor authentication part is hard to automate because we are sending real SMS; we get that deep link, you relaunch the deep link, and you want to make sure you get taken back into the app with all the account information you just filled in. And so we are testing that flow. But then, as a final step in that flow, there's actually a signature pad where you draw your signature, and that's the signature that's going to go on the debit card the bank mails you. And we discovered an issue with the way that signature pad was rendering. The engineering team found it really hard to build that signature pad in the native mobile app view, so they built it as a web view that would render natively. It feels native to the end user, but when you draw lines on the signature pad, we found that on a couple of devices the lines weren't being recognized. So you kept feeling like you were drawing and nothing was happening as an end user. But if you just simulated that event in the cloud, that was something that wasn't picked up by their automated Detox tests. So that was something that we automated. And so there are kind of two different things there. There's the two-factor end-to-end flow that we were able to help them test, which they don't normally test with their emulation and their Detox runs. But then there was also this web view issue that we caught that other tests missed. And we also work with a lot of different customers that need browser testing inside the native Facebook or Instagram app or LinkedIn. You click on an ad, you sign up, and you check out, and it's all inside the Facebook app. It's really hard to automate browser testing inside someone else's native app, because you don't control that browser and it's not open source. So there are a lot of things we can do beyond your traditional native app testing as well, that Mobot is compelling for.
[00:26:07] Joe Colantonio Nice. So, maybe some other use cases that people may not know about or think about, where you think this would be a great solution for them? If they're listening, why should they contact Mobot?
[00:26:16] Eden Full Goh Yeah. I think the broader thing is, we can test anyone's app without needing you to install an SDK or needing access to your source code. So, especially for different kinds of browsers on mobile: Mobile Safari is hard to test for. And then think about things like, you built a Shopify web portal and you want to make sure it renders correctly on all these mobile web browsers; it's really hard to write automated tests for Shopify. That's something we can do. Or, for example, your mobile application needs to talk to someone else's app. Not only what I was saying before about native in-app browser testing, but let's say you have a map application, and you want to make sure that, okay, I click on this address, it should open correctly in Google Maps with the correct route and the correct metadata that's been embedded; all of that should open to the correct Google Maps link. That means the Google Maps app has to be installed on that device. There are all these dependencies that are hard to work with in a traditional device farm, and that's why I think we are a great complement. Any time you need to use someone else's app, or you need a native share sheet. Deep links are very hard to test, especially because there's attribution, there's a lot of marketing data being passed through; Mobot is very good for testing stuff like that. And I think longer term, where we're going, and we're already starting to do this, is we have customers that have Apple Watch applications. So there is this companion app where, yes, a lot of stuff happens on the mobile app itself. You register for an account, you pay for in-app purchases, you check out with a real credit card. But then you start your workout, and you want to make sure, okay, did the workout correctly load on the Apple Watch? Once you run through the workout on the Apple Watch, the data has to sync back to the mobile app. And then, not only that, it now has to sync to Apple Health. So you might want our robot to go to Apple Health and make sure your run or your bike ride was successfully logged, all the GPS data. And so we can automate tests with Apple Health that normally you wouldn't be able to automate, and we're also automating the Apple Watch part. As we scale long term, we basically want to be able to give the robot anything, and it should test it like a human user. But currently, it's still very mobile-centric.
[00:28:32] Joe Colantonio Love it. Love it. And I like how you keep mentioning it complements testing, so you're not trying to replace everyone; it just adds better coverage. And I would think, since you come from an FDA background, like you mentioned, medical devices and things like that, this would be a critical piece a lot of companies are probably missing. So it can just complement what you're doing and give you better coverage, it sounds like to me. Nice.
[00:28:55] Eden Full Goh That's right. I feel like I've done enough real-world testing, working with engineers, going to a hospital in a hurry, making sure everything is correct, and working with end users, to know that true automation, where you can leave it unattended without a human, I'm not going to say it's a fantasy, but it's an ideal. I feel like it's an ideal that we all aim towards, but any real, informed professional in the QA industry knows that it's actually truly hard to do. And so, three to five years from now, that is something that we'd like to build towards. But really, we've embraced human intervention and feedback from customers, feedback from the folks watching our robots run as part of our operations team. I think that's an integral part of the process. We're not trying to hide that part of the process behind a curtain; we're not embarrassed by it. We think that's what is going to make Mobot more compelling than just the regular manual testing you're doing yourself, and more compelling than trying to automate some of that in a device farm. So I do think the right balance is: you should be doing those tests anyway. If you can automate something in a device farm, you should. But let Mobot handle all the other stuff. And I think, if you sum up all of the test cases that are happening outside of a device farm setting, in the next 5 to 10 to 20 years, that's only going to grow, because technology is changing. And who knows? Fifty years from now, we might have a contact lens in our eyeball that runs software. It's going to be really hard to build an Appium test, or whatever the testing framework is, for that. And we want Mobot to be the solution that tests things the way that humans use them.
[00:30:38] Joe Colantonio Yeah, absolutely. I think it's only going to get more popular over time. I keep thinking of the Internet of Things, so, like you said, a contact lens. It could be all kinds of different hardware devices. Software is leading the world, and all these devices rely on it, but you need to interact with it as a human. So this is awesome, awesome stuff. Okay, Eden, before we go, is there one piece of actionable advice you can give to someone to help them with their mobile robot testing efforts? And what's the best way to find or contact you, or learn more about Mobot?
[00:31:05] Eden Full Goh Yeah, I think the biggest thing, especially as our industry and our technology evolve, is that there's not going to be a one-size-fits-all solution. I meet a lot of engineers and QA professionals that are like, I'm looking for something that can test web and mobile and physical. And I'm like, I don't think I know any solution on the market that's going to do all three and cover all of your needs. I think a good palette, a good portfolio of QA tools, is going to involve a web-first tool; there are a lot of excellent ones out there. And there is a time and place for running automated UI tests, whether it's XCUITest, Espresso, Appium, whatever it is. And then I do think there's a final place for physical testing, for UI testing. So the one thing I would want to share with everyone is: be open-minded. Yes, there's this convenience of, I could just pick one tool and it'll solve all my problems for me magically. But if you've run a QA process for a few years, you'll know that it's a combination of tools that will solve your problem, and that's okay; and so is welcoming human intervention and feedback into that process. I think CI/CD is something that's great to aspire to, but I also think you should be realistic about what's actually happening, rather than just falling back on ideological DevOps best practices; that will come with technology, with time, and with innovation. But at the stage where we're currently at, and the value that Mobot delivers, we're a complement to everything else. And so I think it's good to think about having a portfolio of tools rather than one size fits all.
[00:32:46] Joe Colantonio Awesome. And what's the best way for people to find out more about Mobot?
[00:32:49] Eden Full Goh Yeah. Feel free to check us out online at Mobot.io. You can sign up for a demo, and you'll probably have me or a member of our team show you what our robots look like. If you'd be interested in seeing your mobile app or your mobile web app, or whatever your use case is, run on our platform, we'll show you a demo that has our robots running your app, and we'll see what we can find and show you the use cases and test cases that are going to be compelling for your team. We'd love the opportunity to share that with the Test Guild audience and to really learn from every engineering team, because we hear a lot of valuable feedback about what tests and automation you currently have set up, and where does Mobot play a role? Where is manual testing still playing a role? And there is a place for exploratory tests or usability testing, to figure out what that balance of test coverage looks like for every engineering team. It's really interesting to learn. So I love meeting engineering teams and QA teams who are building out their processes.
[00:33:52] Joe Colantonio Thanks again for your automation awesomeness. For links to everything of value we covered in this episode, head on over to testguild.com/a401, and while you're there, make sure to click on the try-it-for-free-today link under the exclusive sponsor's section to learn all about Sauce Labs' awesome products and services. And if the show has helped you in any way, why not rate it and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation podcast. I'm Joe. My mission is to help you succeed in creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
[00:34:35] Speaker Thanks for listening to the Test Guild Automation podcast. Head on over to testguild.com for full show notes, amazing blog articles, and online testing conferences. Don't forget to subscribe to the Guild to continue your testing journey.