AI for Automation Testing and RPA using Aito With Tommi Holmgren

By Test Guild

About This Episode:

Looking for a way to add machine learning to your existing software automation and RPA projects? In this episode, Tommi Holmgren, founder of Aito, shares his passion and vision for building next-generation machine learning tools for RPA developers. Discover how to quickly test, deploy, and maintain a machine learning classifier for your automation testing workflow. Listen up!

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Tommi Holmgren


Tommi has 20+ years of software and tech leadership positions under his belt, with two startup exits. Now he is building next-generation machine learning tools for RPA developers at Aito.ai.

Connect with Tommi Holmgren

Full Transcript

Joe [00:01:31] Hey Tommi! Welcome to the Guild.

Tommi [00:01:35] Hi, everyone. Hi, Joe.

Joe [00:01:37] Awesome, great to have you on the show. Before we get into it Tommi, is there anything I missed in your bio that you want the Guild to know more about?

Tommi [00:01:43] No, I think it's an interesting coincidence that I was very heavily involved in testing and software test automation, but that was maybe 15 years ago, when test automation was really just beginning. And now I'm kind of seeing the same stuff with the RPA-related topics that have been popping up over the last while.

Joe [00:02:03] Absolutely. I didn't actually know you started off as a test automation engineer. So I guess how does it compare to RPA? A lot of people are getting confused. Is it the same thing or is it different or can they be used interchangeably?

Tommi [00:02:15] I think for me, my background started as a database engineer, not so much as a testing engineer, but I ended up working with a software test consulting company for quite a while. I see probably the same platforms I used back then, so it's kind of natural that people working with the same tools get involved in both of the product fields, test automation and RPA. So I think the tools kind of drive the trend here.

Joe [00:02:40] Very nice. So, you know, one of the trends, obviously, is AI, and especially what I've been seeing in functional automation is AI with automation testing. I do a report every year on the biggest trends, and for 2021 I was doing a search on AI with automation and came across your technology, and I thought, why have I never heard of this? And you had a cool example of how to use this with Robot Framework. So I guess at a high level, could you tell us a little bit about what your company does and how it helps with RPA?

Tommi [00:03:10] Yes, so Aito is about a four-year-old company. We have a background in the consulting business in the Nordics and in Europe, so we've seen the digital transformation projects within large companies for quite a while. There was an innovation in machine learning in how to make things easier. And I think our track is, in a way, very typical for start-ups: we were doing something else in the beginning, and we've been drawn into the RPA field by our customers and users. But essentially we make it super simple and easy for RPA engineers or software robot engineers to deploy machine learning as part of their workflows, basically on any RPA platform that they might be using.

Joe [00:03:50] Alright, so maybe I'm wrong. It's almost like a software-as-a-service type thing where any company can consume your solution with an API and implement it with their technology. Is that how it works?

Tommi [00:04:01] Yeah, it is a software-as-a-service solution. It's basically an API. That's all we provide. So essentially the API provides predictions, and a typical use case would be replacing…so instead of writing complex rule-based logic in the automation, you use machine learning to replace those rules. So typically these are decisions made by humans in categorizing things, like putting, let's say, customer service or IT service desk tickets into the right categories, or automating purchase invoicing, and so forth. So there are obviously a number of possibilities. But very typically, Aito as a product replaces some decision that's either done by humans or needs a massive rule base to be automated.
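To make that concrete, here is a minimal sketch of what such a prediction call could look like from Python with the requests library. The instance URL, API key, and the `tickets` table with its `title`, `description`, and `category` columns are all made-up examples, and the exact request and response shapes should be checked against Aito's current API documentation.

```python
import requests

# Hypothetical instance URL, API key, and table/column names -- adjust to your own setup.
AITO_URL = "https://my-instance.aito.app"
API_KEY = "MY_READ_ONLY_API_KEY"

def predict_ticket_category(title: str, description: str) -> dict:
    """Ask which category a new service-desk ticket most likely belongs to."""
    query = {
        "from": "tickets",                              # historical, already-categorized tickets
        "where": {"title": title, "description": description},
        "predict": "category",                          # the column we want a value for
    }
    response = requests.post(
        f"{AITO_URL}/api/v1/_predict",
        json=query,
        headers={"x-api-key": API_KEY},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["hits"][0]                   # top hit: predicted value plus its probability

top = predict_ticket_category("VPN connection drops", "Cannot reach the office network")
print(top)   # e.g. {"$p": 0.87, "feature": "Network"} -- exact shape depends on your schema
```

The idea is simply that the historical, already-categorized tickets live in Aito, and the query asks which category value is most probable given the fields of the new ticket, replacing the hand-written rules Tommi mentions.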

Joe [00:04:46] So you would think with RPA solutions, they'd have some sort of AI baked into them. What does this give them that maybe a typical RPA solution doesn't have? Or maybe you could use this to augment what they have already?

Tommi [00:05:00] Very typically, when an RPA team or, let's say, an enterprise customer starts looking at more complex automation covering processes instead of simple tasks, and they encounter a use case that requires machine learning, I think the first reaction always is, “Hey, we need data scientists. We need to make a data science project out of it.” But I see the field of machine learning as dualistic in a way: there's, of course, the research and custom aspect of machine learning where you need the data scientist. But a lot of the automation-related needs are quite repeatable, and you can use products to get your use case done with way less cost, way less effort, and smaller teams. And that's where we provide our tool for the RPA engineer. And in those projects, there's typically no data scientist needed to do the machine learning.

Joe [00:05:52] Very cool. So, you know, I know you also work with Python. A lot of my audience uses a tool called Selenium, which does browser-based automation, and I always get asked, how can I add AI to my Selenium tests? I know they have a Python language binding. If someone was using just a straight-up Python language binding, can they use Aito to do what they need?

Tommi [00:06:13] Yes, so at Aito we have a Python SDK, so we offer everything, all the functionality, over Python, like a fully featured Python SDK. You can basically send in the datasets, manipulate your data, and make the predictions using our Python SDK. And if there are any users of Robot Framework or any of the commercial tools that are built on top of Robot Framework, we've been shown to work really well in that ecosystem, which is basically Python-based.

Joe [00:06:43] So do you actually have a lot of users that use Robot Framework, and if so, what do they typically use Aito for?

Tommi [00:06:51] So that's…I would say one of the up-and-coming ecosystems. Obviously, we see a lot of usage in commercial enterprise tools like UiPath; especially being from the Nordics, UiPath has a really strong market share here. So a lot of the corporate users would predominantly be using one of those tools. But in discussions with customers and RPA consultancies, I think there's a growing interest towards open source tools like Robot Framework. And there are some really interesting companies around that ecosystem, like Robocorp from our native Helsinki and a couple of others, who are pushing that ecosystem further. And I think that's a really interesting alternative to the commercial tools.

Joe [00:07:37] So I know when I speak with fellow engineers, a lot of times when they hear AI they think they have to really be AI experts, that it's overly complex. But it seems like yours uses kind of a unique approach, using a database that most developers are familiar with and an API for doing queries. So how much AI knowledge does someone need to actually be successful using Aito?

Tommi [00:08:00] If you can deal with probabilities, that's the starting point. That's actually one of the most complex things that we've seen, and we're trying to constantly make it easier and easier. We take an approach where we want to reduce the complexity of machine learning, not expose everything that is inside the box, like inside the data science box, but rather bring out the elements that automation engineers need in their decision making, like what their confidence is in some probabilities. If you can work with those, I think you can achieve a lot already with the tools that are available in the market.

Joe [00:08:36] So how flexible is it? I'm just thinking of Selenium; a lot of times people have issues identifying elements on a page. Could they easily develop their own algorithm that kind of figures that out? I mean, how would they do that?

Tommi [00:08:50] Our use cases mainly revolve around tabular data, so we kind of replace the element that needs a human review or a human decision. A typical use case, and I think the most popular in our customer base at the moment, would be automatically deciding the general ledger accounts of incoming purchase invoices. We don't do the OCR or any of those processes; there are a lot of other tools for that available in the market. Once the OCR has read it in, extracting the elements of an invoice, Aito can predict things like what's the cost center, who's the human reviewer this invoice belongs to, what's the GL account the invoice needs to be filed in, and then the automation takes care of filing the invoice into SAP or whatever tools are in use in the enterprise. But we are simply that prediction endpoint, the API that can provide the prediction of a missing data point or value based on the history data.
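As a rough illustration of what that looks like from the automation's side, the sketch below builds the kind of predict queries Tommi describes, one per missing field, from OCR-extracted invoice data. The table and column names (`invoices`, `gl_account`, `cost_center`, `reviewer`) are hypothetical, and each query body would be POSTed to the prediction endpoint as in the earlier sketch.

```python
# Hypothetical invoice fields extracted by an upstream OCR step; table and column names
# are illustrative, not any real ERP or Aito schema.
invoice = {
    "vendor": "Acme Office Supplies Inc",
    "description": "Printer toner and paper, March",
    "amount": 412.50,
}

def build_predict_query(known_fields: dict, target_column: str) -> dict:
    """One predict query: given the known invoice fields, ask for one missing field."""
    return {
        "from": "invoices",          # historical invoices with these target columns already filled in
        "where": known_fields,       # what the OCR step extracted from the new invoice
        "predict": target_column,    # e.g. "gl_account", "cost_center", or "reviewer"
    }

# Each query body would be POSTed to the _predict endpoint, as in the earlier sketch.
for target in ("gl_account", "cost_center", "reviewer"):
    print(build_predict_query(invoice, target))
```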

Joe [00:09:52] So I don't mean to keep bringing it back to functional automation; it just happens to be the space I'm in. I know a hard thing that a lot of people struggle with is test data. So I wonder if this could be used if you're trying to dynamically create an automated script that's interacting with a website, if it can be context-aware enough to know, “Okay, you're on a patient page. So, therefore, here's some patient data I can predict you're going to need or help you populate using Aito.” Just a use case that popped into my head.

Tommi [00:10:23] We have not done such a case, and maybe some creative engineer will figure out how to do this, but I think we've been a bit further away from purely test automation use cases, more toward RPA use cases where the typical need is a prediction of what action should be done with the data that is at hand in the automation. So I think it's kind of an opposite or different paradigm to use it in test automation that way.

Joe [00:10:53] So it can help you with things that maybe you thought you couldn't automate before. It's not necessarily testing. To take an example like cleaning up a CRM, there are a lot of things that would take a lot of effort to do manually, not testing-wise, but automation-wise.

Tommi [00:11:09] That's a really good example of a use case. For example, in the CRM, we have a user who's using Aito to manage the cleaning of their CRM. A typical problem would be that you end up with duplicate entries in the CRM or any master database. So what they do is double-check, using Aito, when somebody's trying to enter a new data point into the CRM, whether it is likely to already exist with a slightly different variation of the text. Like, for example, if the name is misspelled in one place, or one place has Incorporated and another has Inc, or, you know, New Jersey is written as NJ in some other place. So Aito can be used to identify those matches and help retain cleaner master data or records (unintelligible).
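A hedged sketch of how that duplicate check might be wired up is below. It assumes a similarity-style query against a `_similarity` endpoint and a hypothetical `accounts` table mirroring the CRM; the endpoint name and query shape should be verified against the current Aito API docs before relying on them.

```python
import requests

AITO_URL = "https://my-instance.aito.app"    # hypothetical instance
API_KEY = "MY_READ_ONLY_API_KEY"

def likely_duplicates(company_name: str, state: str, limit: int = 5) -> list:
    """Before inserting a new CRM account, list existing rows that closely resemble it."""
    query = {
        "from": "accounts",                                   # CRM master data mirrored into Aito
        "similarity": {"name": company_name, "state": state}, # fuzzy match on these fields
        "limit": limit,
    }
    response = requests.post(
        f"{AITO_URL}/api/v1/_similarity",
        json=query,
        headers={"x-api-key": API_KEY},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["hits"]                            # ranked by similarity score

# "Acme Inc" in "NJ" should surface an existing "Acme Incorporated" / "New Jersey" row, if one exists.
for hit in likely_duplicates("Acme Inc", "NJ"):
    print(hit)
```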

Joe [00:11:52] All right. So here's a good use case then for functional testing. A lot of times you need to do setup and teardown actions where the environment needs to be in a certain state or data needs to be cleaned up after each run. It seems like this might be a good solution for those types of activities, where it's not necessarily a test, but a manual process you're probably doing to get your environment or data into a state that can then be consumed by your automation script.

Tommi [00:12:18] Yeah, that might be a better match for functional automation. Yeah, yeah.

Joe [00:12:23] So how much overhead does this have? If someone's worried about time, does it consume a lot of extra time to run, or how does it work?

Tommi [00:12:34] The basic paradigm of Aito's usage is that it's a SaaS solution, so everything runs in the cloud and you get an API endpoint for the predictions. So there are typically two activities that you need to consider when using us. One activity feeds in the data points and keeps Aito's dataset up to date; that dataset is basically a replica of your master data that is relevant for the predictions, and you need to keep it current on an ongoing basis. Typically our customers implement test framework (??) robots for that purpose: they take data from whatever is the master data and put it in Aito. Then the real-time part of usage is the prediction, the inference part, and that's also an API endpoint. If you have, let's say, typical automation data, it's rarely huge, so we're reasonably talking about a maximum of gigabytes of historic data. The prediction response times are somewhere between a hundred and three hundred milliseconds, so when the automation workflow runs, it's quite quick to return those predictions, a couple of hundred milliseconds, not even seconds. And with those kinds of times, we haven't seen it really be an issue, because the RPA workflow often is not real-time. It runs in the background; it's getting (unintelligible) in some way. So it's not like humans would be waiting for the activities.
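The first of those two activities, keeping Aito's copy of the master data current, might look roughly like this in Python. The batch-upload path (`/api/v1/data/<table>/batch`) and the read-write key are assumptions to verify against Aito's data API documentation; the table and rows are illustrative.

```python
import requests

AITO_URL = "https://my-instance.aito.app"   # hypothetical instance
RW_API_KEY = "MY_READ_WRITE_API_KEY"        # uploads are assumed to need a read-write key

def sync_rows_to_aito(table: str, rows: list) -> None:
    """Mirror a batch of master-data rows into Aito so the prediction dataset stays current.

    In practice this would run as a scheduled robot or job that pulls new and changed
    rows from the real system of record and pushes them here.
    """
    response = requests.post(
        f"{AITO_URL}/api/v1/data/{table}/batch",   # assumed batch-upload path; check the data API docs
        json=rows,
        headers={"x-api-key": RW_API_KEY},
        timeout=30,
    )
    response.raise_for_status()

# Example: mirror yesterday's processed invoices into the table used for predictions.
sync_rows_to_aito("invoices", [
    {"vendor": "Acme Office Supplies Inc", "amount": 412.50, "gl_account": "6420"},
])
```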

Joe [00:13:59] Nice. I guess with prediction and inference, a lot of times it comes up that you need a lot of data to train in order to get something to work. How does that work with Aito? Does it take care of that for you, or is it a different type of AI where you don't necessarily need a large data set to make decisions on?

Tommi [00:14:18] It's a great question. It's very commonly asked by the customers and users, like, “I have this data. Is it enough?” And I think a lot of the news and articles and talk around AI tend to be focused on deep learning and big-data-related topics, which obviously need a lot of data for you to get any reasonable prediction accuracy. We operate on naive Bayes-based algorithms, and we've shown it to work quite well with rather small amounts of data. But of course, you still need data. Your data needs to be somewhat high quality; it needs to have something to predict from. Of course, it can't operate out of nothing. But returning to the same invoice categorization or invoice prediction case, we can do a lot even with a couple of thousand previous invoices. You already get towards good predictions, but obviously the more you have, the better the results.

Joe [00:15:21] So I guess, while we're on the subject of invoicing, is there a way you can run this in Jenkins as a job that looks and says, “Oh, I see you have this incoming invoice, let me match it for you automatically”? Is that a common use case, where someone's running it on a certain time frame to do certain functions?

Tommi [00:15:38] Yeah, quite often. For example, those invoice cases would be scheduled daily, so all the invoices that came in the previous day are automatically categorized and processed at 8:00 a.m. And then the ones where the prediction confidence is not high enough for automatic processing will be sent to an accounting team for manual review and manual processing. So it's very typical that, based on the decision or the prediction made by machine learning, the automation workflow diverts into two paths: one is automatic and one is manual.
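The two-path split itself is plain workflow code rather than anything Aito-specific. A minimal sketch, with an assumed confidence threshold and stand-in functions for the ERP and review-queue steps, could look like this:

```python
CONFIDENCE_THRESHOLD = 0.9   # assumed value, tuned per process -- not an Aito default

def post_to_erp(invoice: dict, gl_account: str) -> None:
    # Stand-in for the real RPA step that files the invoice into SAP or another ERP.
    print(f"Filed invoice {invoice['id']} to GL account {gl_account}")

def send_to_review_queue(invoice: dict, suggested: str) -> None:
    # Stand-in for routing the invoice to the accounting team with the suggestion attached.
    print(f"Invoice {invoice['id']} queued for manual review (suggested GL {suggested})")

def route_invoice(invoice: dict, prediction: dict) -> str:
    """Divert the workflow into an automatic or a manual path based on prediction confidence.

    `prediction` is assumed to be the top hit of a predict query, e.g. {"$p": 0.95, "feature": "6420"}.
    """
    if prediction["$p"] >= CONFIDENCE_THRESHOLD:
        post_to_erp(invoice, gl_account=prediction["feature"])
        return "automatic"
    send_to_review_queue(invoice, suggested=prediction["feature"])
    return "manual"

# A confident prediction goes straight through; an uncertain one goes to a human.
print(route_invoice({"id": "INV-1001"}, {"$p": 0.95, "feature": "6420"}))
print(route_invoice({"id": "INV-1002"}, {"$p": 0.61, "feature": "6420"}))
```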

Joe [00:16:12] So, do you have any real-world use cases where someone's done that? How much time have you seen saved using this type of approach?

Tommi [00:16:21] I see the time saving in two different ways. There's obviously the underlying customer use case and the savings potential that comes from, for example, automating a process or task in a company. And those savings are pretty big; we're talking about massive potential time savings in, for example, accounts payable or customer service teams. But for us, the value that we bring is typically the time saved in the process of implementing the whole automation. So our benefit for the RPA consulting company or the RPA team comes from how easy it is to implement. And we talk about a design driver of being 10 times faster to production than using custom-made machine learning algorithms.

Joe [00:17:07] So I think this would be an easy sell. How hard is it to convince people? How hard is it for someone listening to go, “This sounds awesome, I want to try it”? How hard is it to get started?

Tommi [00:17:17] I would say that our initial struggle was that we were rather technical, so you needed to learn to use Aito's API and its query language. And that created kind of a barrier to getting on board quickly and seeing the value super quick. So what we've done within the last months, and we're rolling out new features continuously, is try to lower the barrier to starting as much as possible and show users the value of Aito super quickly in the early stages of usage by giving them visual charts on the automation potential, the savings, and how the data contributes to their predictions. So by making those tools and UIs available, we can really show quickly to the users what they would be able to get out of it, what the value is, and what they can expect before they actually implement it in their workflows.

Joe [00:18:14] I'm just curious to know, how much awareness is out there in the market, do you think? I mean, since you're a partner with these RPA solutions, they must be able to say, “Hey, you can use Aito to take care of these other use cases.”

Tommi [00:18:27] So even though there's a lot of talk about RPA, and it's been the biggest growth area in enterprise software in the past couple of years, I feel that a lot of the usage is still quite basic. A lot of the RPA automation is still very rule-based; it's quite basic tasks that are being automated. And I think only within the last year has machine learning-driven automation, which allows you to cover large chunks of the process or more complicated processes, really started happening. So the market is definitely maturing as we speak, and those use cases are becoming available. The customers are getting more and more interested in the use cases. So it's a really good time to be out with a machine learning-related tool, and it's definitely easy to generate interest; there are a lot of companies looking into it at the moment. And I think the trick here is to make the onboarding and usage as easy as possible so that as many as possible can deploy their use cases quickly and easily.

Joe [00:19:31] I think there's still a lot of skepticism for some reason in the space. How do you help people get over that skepticism? How accurate is your solution? I think you said a lot of people are using it for easy things, but it probably could be applied to more complex things. How complex do you think you could get using a technology like this?

Tommi [00:19:49] So how we tackle this problem, or this topic, is with transparency. We transparently show how well Aito performs in your case. The user flow that we are deploying as we speak is a new first step into any new use case using Aito: you can basically drag and drop a dataset into our console and we run some evaluations on the dataset against your prediction targets, a chosen column from your data basically. And we transparently say that out of the box you can get to this level of automation or right predictions, this many errors we would be making, and this is the amount that falls below the confidence threshold. So you see those numbers. And of course, some customers say that, you know, it's not good enough, and then they always have the option to go with customized solutions from data scientists. You can definitely find a way to make more accurate models using specialist resources. But our bread and butter would be those cases that you can actually implement without a single line of data science code written for your case.
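If you would rather measure that accuracy yourself instead of relying on the console evaluation, a rough client-side version is to replay a holdout of historical rows through the prediction endpoint with the known answer hidden and count how often the confident predictions are right. The sketch below does that for the invoice GL-account example; the threshold, table, and column names are assumptions, and it reuses the same predict-query shape as the earlier sketches.

```python
import requests

AITO_URL = "https://my-instance.aito.app"   # hypothetical instance
API_KEY = "MY_READ_ONLY_API_KEY"
CONFIDENCE_THRESHOLD = 0.9                  # assumed cut-off for automatic processing

def predict_gl_account(invoice_fields: dict) -> dict:
    """Top prediction for one invoice's GL account, same query shape as the earlier sketches."""
    query = {"from": "invoices", "where": invoice_fields, "predict": "gl_account"}
    response = requests.post(f"{AITO_URL}/api/v1/_predict", json=query,
                             headers={"x-api-key": API_KEY}, timeout=10)
    response.raise_for_status()
    return response.json()["hits"][0]

def evaluate(holdout: list) -> None:
    """Replay historical invoices with the known GL account hidden and report rough numbers."""
    automatic = correct = 0
    for row in holdout:
        fields = dict(row)
        actual = fields.pop("gl_account")                 # hide the known answer
        hit = predict_gl_account(fields)
        if hit["$p"] >= CONFIDENCE_THRESHOLD:             # would have been processed automatically
            automatic += 1
            correct += int(hit["feature"] == actual)
    total = len(holdout)
    print(f"Automatable: {automatic}/{total}, correct among those: {correct}/{automatic},"
          f" sent to manual review: {total - automatic}/{total}")
```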

Joe [00:20:56] So I guess that's a good point. If someone tries using Aito and they have a use case they've been trying to work on, do you offer a tier where they can get that type of data scientist's help, or is that something they still need to get in-house?

Tommi [00:21:10] We don't offer that. There are plenty of great data science consultancies in the market, and we are happy to help our customers find a good one. We've decided to work solely on the product and put all our efforts into making our product as easy to use as possible, finding the use cases where it works really well, and leaving the consulting and custom-made stuff to other people.

Joe [00:21:36] Fair enough. What protocol does it use? Is it just HTTP?

Tommi [00:21:39] Yeah, it's just HTTP, and the current version of the API takes a…so basically you put a query, a database SQL type of query, in the body of the HTTP call, where you say that from this dataset, with these knowns, you want to predict certain features, a certain column. But in that area we are also introducing a kind of simplified version of the API, which removes the need to use a body in the call completely, and everything will be parameters.

Joe [00:22:13] I think I also forgot to ask: is this an on-prem or off-prem solution? Is it both?

Tommi [00:22:18] Currently cloud only, everything in AWS. It was a simple decision for the speed and ease of our own development and deployments. But of course, this is something we are looking into in the future, to offer on-prem possibilities.

Joe [00:22:35] You're pretty early here. You're not that early on, but it seems like you're one of the first companies out here that I'm aware of with a solution like this. Where do you see the future going, and what's on your roadmap? Can you give us a little hint of maybe where you think you're going and what you see could be possible if people get on board now?

Tommi [00:22:53] I think for our roadmap, if you look at the short term, we really want to bundle and integrate our offering super well with the mainstream RPA platforms, so that using us will be like a couple of clicks from UiPath or from Robocorp or Robot Framework, and it would be a breeze to start using machine learning predictions in those RPA platforms. That's absolutely a priority for us now. Looking a bit longer ahead, we currently support only classification, so you can only do classification-type machine learning with Aito today. And we have a lot of ideas for how we can start broadening the spread of possible use cases by offering other machine learning algorithms or prediction targets, like regression or something else.

Joe [00:23:44] So I guess the reason you started with classification, I assume that's the biggest use case that you saw to start off with, maybe?

Tommi [00:23:51] There's kind of a dual answer to that. One would be that the technology that we've created was just more suitable for classification. It's like a Finnish approach, an engineering approach: first you get the technology, and then you see if anybody knows how to use it. But the second is that we've also seen that in the RPA world, a lot of use cases kind of boil down to classification. You need to predict a category of something, like urgency, or the team a customer service ticket belongs to, and so forth. So I think a lot of use cases, even though they don't sound like classification at the beginning, can still be implemented with classification-type predictions.

Joe [00:24:34] Nice. So Tommi, I have my last question. Before I ask that question, though, is there anything I missed that you think I should have asked, that you think people need to know about?

Tommi [00:24:40] No, I don't think you missed anything, but I think what's really important to reinforce is that RPA teams really do have the tools available, and it's not just us. There are other tools in the market; you can even look at the offerings from Google, AWS, and Microsoft. There are tools to implement machine learning super easily nowadays. And we've seen here in many cases that when you look at the question of, “Yes, we need machine learning for this use case, or we need to implement some AI to get this done,” the instant first reaction is, “Hey, we need to get our data scientists in and we need to make a massive project out of it.” No, I don't think you need to make a massive project out of all of these things. So that mindset of, “Hey, can we use the tools out in the market which do the simple things for us with one API?” — that's something we kind of want to preach.

Joe [00:25:37] So I think I forgot to mention it is a paid solution, but you do have a free plan where people can use it as a sandbox. So if they're thinking, “Hey, I should try this route,” before you get data scientists involved, try the sandbox out yourself and see what you can get before you scale up or get them involved.

Tommi [00:25:54] Yes, we have a free sandbox tier, so you can get that. You can go to aito.ai, create your account in our console, and get the free sandbox. The new features that we are bringing, which basically give you a full-on evaluation of your dataset, prediction targets, and the potential accuracy, those will actually be completely free. So in our business model we only charge for live predictions; when you operate something in production, that's when you pay for Aito. We keep the whole first phase of usage, getting an evaluation of your dataset and predictions, free.

Joe [00:26:29] Awesome. Before we go, is there one piece of actionable advice you can give to someone to help them with their AI automation efforts? And what's the best way to find or contact you or learn more about Aito?

Tommi [00:26:56] Tangible advice: I think take the API and try it with your data set. That's the thing to do. You can find me on LinkedIn or Twitter, or you can reach me through email at tommi@aito.ai, whatever you prefer. I'm there. I'm ready to help.

 

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.



{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}