About This Episode:
Want a way to accelerate your test coverage and eliminate test maintenance? In this episode, Paul Grossman, an SDET at Utopia Solutions, and Artem Golubev, co-founder at testRigor, share a tool that lets you easily create automation using a behavior-driven, plain-English approach to writing tests. Learn the benefits of this approach and hear real-world implementation stories of how it has helped others with their test automation.
The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!
About Paul Grossman
Paul Grossman has been delivering hybrid test automation framework solutions for nearly two decades in ALM, BPT and TAO. He has beta-tested many releases of HP QTP / UFT and in 2009 was a runner-up in HP’s Test Automation White Paper competition. He is a five-time HP Discover/Mercury World Conference speaker, and has spoken at Maryland’s QAAM and Chicago’s QAI QUEST and CQAA, always with a live demo. He freely shares his real-world technical experience. His framework designs focus on speed, accuracy, and scalability. Paul is currently investigating LeanFT/UFT Pro along with Gallop's QuickLean for UFT to LeanFT script conversion.
Connect with Paul Grossman
- LinkedIn: pmgrossman
- YouTube: channel/UCdWjM7zA49KW1fXVm1cqJ4Q
- Github: qtpmgrossman
Check Out Paul at Vivit
Virtual Community Days – Agile, DevOps and Testing
December 1st – 4th, 2020
“Secrets of Test Automation: UFT, testRigor and the Magic Object Model”
Friday, Dec 4th, 9:15 AM – 10:00 AM EST
About Artem Golubev
Connect with Artem Golubev
- Company: www.testrigor.ai
- Blog: testrigor.com
- LinkedIn: /in/agolubev
- Twitter: testrigor
- Github: artgo
Full Transcript Paul Grossman and Artem Golubev
Joe [00:01:08] Hey, Paul, and Artem! Welcome to the Guild.
Paul [00:01:53] Hey, Joe, it's a pleasure to be here.
Artem [00:01:56] Yeah, pleasure to be here.
Joe [00:01:57] Awesome. Great to have you both. I guess I'll just get into it. Artem, let's start with you. Is there anything I missed in your bio that you want the Guild to know more about?
Artem [00:02:05] No. You're good.
Joe [00:02:05] How about you, Paul? I know you're always doing something new.
Paul [00:02:10] I am. But you covered most everything. The newest thing I'm playing around with is Test Rigor, which we'll probably talk more about today. I spoke at a couple of conferences just a few weeks ago. So I'm still doing that. So I'm happy.
Joe [00:02:26] Paul, how about Vivit? I think you're speaking at an upcoming conference with Vivit Worldwide.
Paul [00:02:30] Yes. I'm speaking at Vivit Worldwide to talk a little bit about the magic object model, the idea behind that, and also about how Test Rigor itself is very close to that model and does a lot of cool things out there. So I would say check out that conference. It's going to be a ton of fun. Oh, it's virtual, by the way. It's VR. So if you've got an Oculus headset, you can actually sit in a seat in an auditorium virtually and watch the presentations. It's really cool.
Joe [00:02:57] Very nice. Very cool. So let's dive into the meat of the topic now. How to create automation tests faster and with less maintenance? So I guess first, before we get into it, why do you think there's a reason to create automation faster nowadays? Artem, you do work for a software company. Have you heard from other companies that there's a need for this type of ability?
Artem [00:03:20] Yes. Overall, the world is moving faster and faster. And if you're not fast enough, you're just losing out to your competition. And then think about it: the efficiency of building things is improving over time. Whatever used to take us hours and days and weeks now only takes minutes. In our case, it takes minutes to build a test case on our system, versus almost a week in certain cases for Selenium.
Joe [00:03:56] So Paul, working hands-on with the tools and with automation, do you find over the years that not only do more companies want automation, but there's more pressure on the software developer, or the software tester, to create the automation faster than in the past?
Paul [00:04:15] That pressure is always there. Obviously, I can tell you something. About 10 years ago I was working with a manager. We were, of course, we're always competing in the field and our competitors would come into a client and say, hey, we're going to do a proof of concept. Give us about a month or two. And then at the end of that, you pay us for all that. And if you like what you see, then keep paying us, and then we'll continue working on it. My manager would come in there and say, give us three days and we'll show you what we can do. You don't have to pay for it. If you like it, continue on. And that gave us a real big advantage. But nowadays with Test Rigor, I can tell you that you could be sitting down at that meeting and say in 30 minutes, I'm going to show you something. And if you like it, let's continue on. It's just the whole proof of concept just from the beginning, is just getting tighter and tighter to be competitive in the field.
Joe [00:05:09] Nice. So Paul, as you know, another big struggle with automation is maintenance. So as we move faster and faster, I assume that's becoming more of a challenge. Thoughts around test maintenance and maybe how that's slowing teams down as well?
Paul [00:05:23] Yeah, when you get applications being changed from release to release, you've got a bunch of things changing: your element identifiers, your descriptions, your DOM is changing, and then you've got to go and try to adjust your (unintelligible) your Selenium. If you've got the page object model, you're describing your elements inside your classes, and if they're not generic enough to handle changes from release to release, you're spending a whole lot of your time doing maintenance. You might also be challenged by either the existence or removal of iframes. That used to be a big hot thing years ago. So that's where a lot of your maintenance comes in. And if you can avoid that as much as possible, it makes your whole endgame so much better, because then most of your time is spent writing more and more test scripts or analyzing your results.
Artem [00:06:21] Now, I'll give you another anecdote about that. Let's say you're building a website, a small e-commerce website, Amazon 2.0, and you have a Buy button there for each product. And you did a great job and you built 10,000 functional end-to-end test cases for your website. Then guess what? You migrated to a new way of purchasing products on your website. Now you have a shopping cart, and it's not just click Buy; it's click Add to Cart, click Cart, and click Place Order instead, plus filling in anything that's on the form. In our case, on our platform, it takes seconds to adapt, because our underlying language is just one-to-one with end-user actions. So you can literally do things like find-and-replace, even if you didn't have it extracted into subroutines. So you'll be able to quickly adapt. Versus, think about adapting 10,000 functional end-to-end tests in Selenium, right?
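Artem's find-and-replace point can be sketched in a few lines of Python. The step wording and the migration below are hypothetical illustrations, not testRigor's actual syntax, but they show why a one-action-per-line representation makes a flow change mechanical rather than a rewrite:

```python
# Toy sketch: when tests are stored as plain-English end-user actions
# (one action per line), migrating from a one-click Buy button to a
# cart-based flow is a mechanical substitution across all test files.

OLD_STEP = 'click "Buy"'
NEW_STEPS = ['click "Add to Cart"', 'click "Cart"', 'click "Place Order"']

def migrate(test_steps):
    """Replace the old one-click purchase step with the new cart flow."""
    migrated = []
    for step in test_steps:
        if step == OLD_STEP:
            migrated.extend(NEW_STEPS)  # expand one step into three
        else:
            migrated.append(step)       # all other steps are untouched
    return migrated

test = ['open url "https://shop.example.com"',
        'enter "toaster" into "Search"',
        'click "Buy"']
print(migrate(test))
```

Run over ten thousand such files, this is a one-line script; the equivalent change across ten thousand Selenium page-object tests is a refactoring project.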
Joe [00:07:29] All right. So I guess another issue I see is that obviously, the quicker you need to create tests, the more you need people who are able to create these tests. And when I was in a large company, of course, they used Selenium, which is not a problem. That's fine. But they also chose Java, which required people to be programmers, and a lot of testers, maybe good testers, are actually, if you're being honest, mediocre programmers. And Java of all languages seems to be an even harder language to really pick up. So have you seen issues with this approach? And I know there are a lot of people down on codeless technology. So Artem, let's start with you. Any thoughts on codeless, or issues that you saw that maybe codeless could solve, things like that?
Artem [00:08:14] Yeah, so. Well, whatever people call codeless, where you just click through the stuff, is very, very limiting. And then remember that example where you need to adapt from a Buy button to a completely different flow with Add to Cart and click Cart and those kinds of steps. What are they going to do, recreate 5,000 end-to-end tests from scratch? It's just impossible. So in our case, what we're giving, basically, is manual QA people the ability to build proper tests with our expression language. It's in plain English, expressing end-user actions. So you can record it using a browser plugin, but then it's code, quote-unquote, in this plain-English form. So you can modify it, you can improve it, you can actually maintain it. And that's the key.
Paul [00:09:10] And I'd also say that with our tools today, we probably have to mention Cucumber and Gherkin.
Joe [00:09:15] I was just about to bring that up.
Paul [00:09:17] Yeah. With Cucumber and Gherkin, we're trying to make the creation of the test cases as easy as possible. I mean, you've only got four or five different initial statements — Given, When, Then, And, But — and some of those are very rarely used. But then behind the scenes, you have to build all that code. So you do have a balance: people don't necessarily have to be quite as technical in order to build those Gherkin test cases, but you also have to have your developers on the other side building all the support and whatever new kind of interesting functionality needs to be supported inside of that particular framework.
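For listeners unfamiliar with Gherkin, a minimal scenario built from the keywords Paul lists might look like the following. The step wording is illustrative; each Given/When/Then line still needs a matching code-side step definition, which is the developer work Paul describes:

```gherkin
Feature: Submit an email address
  Scenario: Missing email shows a validation message
    Given I am on the home page
    When I enter "Paul" into the name field
    And I click the "Submit" button
    Then I should see the message "Please enter an email address"
```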
Joe [00:09:50] So Artem, you actually developed a test tool called Test Rigor. And obviously it's through multiple iterations now, and it seems really impressive. I'm just curious to know, with all the other tools out there…I was just on a panel yesterday with the folks from Selenium and Cypress. Playwright now is the new darling of the automation scene. Why another test tool? Where do you see your tool fitting a need that maybe these other tools don't? Not saying one's better than the other, but what hole do you think your tool fills that some of these others may not be a good use case for?
Artem [00:10:22] Yeah, great question. So basically, what I have seen is that a lot of tools are great for unit testing; people have got unit test tools covered right now, and it's a good thing. It's just hard to come up with anything better than that. Likewise, API testing is all covered with all kinds of tools, and you don't need to do anything else. Sometimes even integration testing is fine. However, what we have noticed is that when it comes to end-to-end testing, especially involving UI, the challenge is that the UI often changes so fast. It places just an insane amount of maintenance tax on the people who are building those tests, up to the point where for the first year people may be very happy building the tests and it's all great stuff. Starting from the second year, 50 percent of the engineers' time is spent on test maintenance. And we have seen use cases where maintenance just killed the whole automation effort and companies switched back to 100 percent manual testing just because they couldn't keep up with the maintenance. So maintenance is the problem we are actually trying to help people solve. It's a big problem for UI testing, and this is where we're focusing.
Joe [00:11:48] So before we get into it a little bit more: can you use it for API testing? Or is it just an end-to-end UI functional browser-based automation tool?
Artem [00:11:57] You can call an API as part of an end-to-end test, but we're not a pure API testing tool; there are better tools for just pure API testing.
Joe [00:12:05] Awesome. And Paul, as I mentioned in the beginning, you've worked with probably every automation tool out there. So you have a bag of tricks with everything. I saw a few demos you did with this, and I'm just curious to know how real those demos were, because you were able to use Test Rigor in under 10 minutes and you automated a site and reran it without doing anything else. So was there any sleight of hand in those videos? Tell me a little bit about your experience with the tool itself.
Paul [00:12:28] Okay, so there is some time compression on that. The 10-minute video took about 20 minutes to write out and edit from start to finish. I would tell you, one of the reasons I was really amazed by Test Rigor was that just a few weeks ago, Artem reached out to me, as I think he saw me talking about this magic object model where you just describe what you're looking for: go click on the OK button. And I said, I've got this candy mapper website. It's a challenge website for anyone to use their tool and just try to accomplish some of the test cases. The initial walkthrough on that is just to go and clear a pop-up, enter in a name, submit an email, verify there's a message saying you've got to put an email address in there, and then click another button and verify that it got submitted. It's like ten steps. With most tools you play around with, just getting through those ten steps will take you quite a bit of time. Artem was actually kind of walking me through it, giving me a little bit of help on that. But within about 10 minutes, we basically did a record, we tweaked some of the text that was inside of the recording, and it was basically done. And it goes through and clears everything, hits everything. I don't have to worry about element descriptions or other items. One of the other things I really liked, and this is a feature that is specific to the candy mapper website, is the pop-up. When you hit the website the first time, it throws a cookie in there that says this is your first time here, and shows the pop-up, right? Then you go hit it the second time without clearing cookies, and I know all the automation engineers, we always clear cookies, but if you don't, you'll never see that pop-up again. That can be a big challenge for automation engineers, where we're saying, well, we might see a pop-up, we might not, but don't fail the test if it does pop up there. And if it's not there, don't fail because you didn't see it.
In this test case demo, we basically say: if it exists, clear it, and if you didn't see it, move on, but don't fail anything. And that's one of the things that really kind of blew me away. Again, the time it took to create that test was about 10 minutes. And that's when I kind of went, wow, this is amazing. This is cool.
Artem [00:14:45] And yeah, I think the most interesting part, which I enjoyed, was when we found that close button, remember? It was like a specific challenge on the site. Do you want to talk about it?
Paul [00:14:57] Sure. So in the demo, it is in fact that pop-up that comes up there. We did do a record and playback. And to be clear, that's just a feature used to learn how to use the tool. You don't want to use it every day, but it is something you need in order to learn how to do stuff. It created a line that just said click, and gave a really weird description. We went down and looked at the line, and I said, you can just describe it and say: click the button on the upper right-hand corner of the pop-up. And that's all you needed to do. And it figured it out. It's like, okay, it clicks on the button and closes it. So we translated it into plain English and it still worked. So that's another thing that I really found fascinating in that tool.
Joe [00:15:40] So there's a new feature in Selenium 4, relative locators. Is that what you're talking about, Artem, where you say to the left of this or to the right of that, but it knows it automatically?
Artem [00:15:49] It's a completely new level. Whatever is introduced in Selenium 4 is just kind of child's play compared to what our system can do. In our case, we were saying, hey, click on the button on the top right. I'm not sure if that's even supported in Selenium. Wherever the header is, on the top right of that pop-up was the close button. So it clicked on the first button on the top right. But in our system, you can also say, hey, click on the second button on the top right, or hey, click on a button which is below this and to the right of that, and so on and so forth. So it's just a completely new level compared to Selenium. And again, in Selenium you are working with low-level stuff; when you're working with Test Rigor, you're working with visible elements from the end-user perspective on the screen.
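Positional phrases like "the second button on the top right" ultimately reduce to geometry over element bounding boxes. Here is a toy Python sketch of that idea; the element names and coordinates are made up, and this is not testRigor's or Selenium's actual algorithm:

```python
# Toy illustration of spatial locating: given candidate buttons with
# screen coordinates, pick "the n-th button on the top right" by sorting
# on y ascending (topmost first), then x descending (rightmost first).

def top_right(elements, n=1):
    """Return the n-th element (1-based) ordered top-first, right-first.

    Each element is a dict with 'name', 'x', 'y' (top-left corner).
    """
    ordered = sorted(elements, key=lambda e: (e["y"], -e["x"]))
    return ordered[n - 1]

buttons = [
    {"name": "close",    "x": 580, "y": 10},   # top-right corner
    {"name": "minimize", "x": 540, "y": 10},   # same row, further left
    {"name": "submit",   "x": 300, "y": 400},  # middle of the page
]
print(top_right(buttons)["name"])        # the close button
print(top_right(buttons, n=2)["name"])   # the second from the top right
```

The real tools resolve this against live element rectangles in the DOM, but the ordering idea is the same.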
Joe [00:16:45] So I guess, because this is audio, people can't see it. We'll have links to Paul's demos in the show notes. But how do you write a test then if you're not using code? There has to be some sort of context when you're running the test, and there has to be some sort of syntax, even if it is English. So does someone just use record and playback with a Chrome plugin, or can they just start writing "click" and "enter" and it automatically translates that into what it needs to do?
Paul [00:17:11] I can answer that one. The answer is both. Yeah, like I said, there is a recording plugin that you can go in with, hit everything, and it'll generate stuff for you. But you can also create your own test case and just start writing out exactly what you want it to do. And then it goes and executes, grabs screen captures, shows you what it hit, and it goes on from there. Artem, anything else you want to add to that?
Artem [00:17:35] Yeah. So we actually use a combination of natural language processing and a parser, and there is documentation and so on and so forth. The reason why we built it this way in the first place is that the goal of our system comes down to two rules: each end-user action is one new line, and all parameters are in double quotes to remove ambiguity. That's it. That should be enough for you to be able to express what you want to do. And this is the exercise which we're doing over and over and over again with new customers. In most of the cases, they are just able to come in and write the tests without reading (??) even the documentation whatsoever. But that's not the point. The point is that we believe we're the only true BDD/ATDD tool on the market, period, because this is the only tool where your program manager can come in and express an executable specification for engineers without involving QA engineering to write new code.
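Putting those two rules together — one end-user action per line, parameters in double quotes — a test for the candy mapper walkthrough Paul described earlier might read something like this. The exact wording is illustrative, not copied from testRigor's documentation:

```
open url "https://candymapper.com"
click the button on the top right corner of the pop-up if it exists
enter "Paul" into "Name"
click "Submit"
check that page contains "Please enter an email address"
```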
Joe [00:18:49] I haven't seen the tool. So what happens when people hear this is they think that if they have 100 tests, each of those 100 tests will have its own duplicated steps. So when you're recording and you have a login, when someone records their own test, they're gonna have a login as well. Does it realize there's another login, or does it not matter because the tool will take care of it anyway if anything changes? How do you make things reusable, or does it matter in your particular flow for your tool?
Artem [00:19:17] Oh, you can do it like in any other code. You can extract functions, and you can extract them post factum. If you already recorded a bunch of stuff, you can extract methods and functions like that. But most importantly, unlike no-code record-and-playback tools, in our case you can start from the place where you want to start, somewhere in the middle. If you already have a function to get there, you can record only the last part, then post factum add that function to get you to the right state, add validation, and that's it. Just like that. You don't have to start from scratch every single time.
Paul [00:19:59] Yeah. And what they're called is rules, but I kind of refer to them as business rules. So you can take those little bits of the plain English, four or five lines, exactly as in the great example you gave, doing the login, and pull that out and say, I want the login. We could create, let's say, login admin, login user, login manager, login superuser. And then each one of those would do the process for a specific test. And again, just like modular design, if something changes in the way that login occurs, a password changes, a user account changes, you change it in just one of those components, and every test that references it is now updated.
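As a sketch of Paul's point, assume a hypothetical rule named "login as admin" has been defined once from its four or five underlying plain-English steps. Every test can then invoke it as a single line, and a password change is edited in one place:

```
login as admin
open url "https://app.example.com/reports"
check that page contains "Quarterly Summary"
```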
Joe [00:20:40] Nice. So for some reason, I was thinking of the scenario of passing context between browsers, which is difficult in a lot of tools I've used. So with Test Rigor, are you able to say, like, start up a second browser? How complex can your tests be using Test Rigor?
Artem [00:20:56] It's trivial, actually. You can open a new tab, you can start up a new browser, or you can run multiple browsers at the same time. And when you start up a new browser, it starts up in a completely brand-new separate instance, completely unrelated to the original one. So you can actually test things like chat by logging in with two different users at the same time in the same session, and then test that user one sends a message to user two and user two immediately sees the message. So this is also quite a unique feature in our system.
Joe [00:21:31] That's the exact scenario I was thinking of. I used to work for a company that made radiology software. So as a radiologist, you had the machine open, and as a…I forgot what the other scenario was. As a patient or something, you're reviewing it, and you had to pass context. It was really difficult. So it sounds like it handles that. Very cool. I guess the next question is the Chrome browser extension: does it work with all browsers, or do you write your test in Chrome and then it's able to run against any other browser?
Artem [00:21:57] So for recording, we only have a Chrome extension right now, but then you can run it at the same time on any browser you'd like, including everyone's favorite, IE11. And if you're using an external infrastructure provider like BrowserStack, you can even enjoy running it on IE6.
Paul [00:22:22] And if you're a Mac fan, you can run it on the Macintosh as well.
Joe [00:22:25] And just a shout out for my sponsor.
Artem [00:22:27] Yeah, you can run on Safari and all those kinds of stuff.
Joe [00:22:31] You can run it against SauceLabs as well then, I guess, if you can run against other cloud providers.
Artem [00:22:34] You can run on SauceLabs, anything that they support. Yeah, go SauceLabs.
Joe [00:22:42] Also, a big trend is running in CI/CD I assume. Is there a command-line option to be able to interact with Test Rigor to make it part of your pipeline?
Artem [00:22:50] Yeah, absolutely. It's as simple as copy-paste of a bash script that will trigger Test Rigor and wait until it is done.
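The trigger-and-wait pattern Artem describes can be sketched as a small polling loop. Everything below — the endpoint URL, the status strings, the function names — is an assumption for illustration, not testRigor's documented API; Artem describes it as a copy-paste bash script, and the same shape ports directly to shell:

```python
import time

SUITE_STATUS_URL = "https://api.example.com/suites/12345/status"  # hypothetical

def get_status():
    # Real code would HTTP-GET SUITE_STATUS_URL and parse the response;
    # stubbed out here because the vendor's actual API is not documented
    # in this episode.
    raise NotImplementedError("wire this to the vendor's status endpoint")

def wait_until_done(poll_seconds=30, get_status=get_status, timeout=3600):
    """Poll the suite status until it settles, then return it.

    Returns "passed" or "failed"; raises TimeoutError otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("passed", "failed"):
            return status
        time.sleep(poll_seconds)  # back off between polls
    raise TimeoutError("test suite did not finish in time")
```

A CI pipeline step would call this and fail the build whenever the returned status is "failed".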
Joe [00:22:59] And so what are some unique features here? I think you told me it does something with email validation and two-factor authentication, I guess. Paul, anything you saw here that you don't see often in other tools that it can handle?
Paul [00:23:11] Yeah, the email validation is pretty cool. You have it from day one. You have it send an email over to the Test Rigor email account. It validates on that side that it sees the expected text, and then it responds back to you saying that it either did or did not find the expected data inside the email.
Artem [00:23:32] Yeah. So we run our own email, so you can generate a new email address every single time you run the test, which is absolutely a super important feature to test things like sign-up, right? Because you cannot test sign-up with the same email every single time; you must use different emails. And in our case, you can do it. Also, from a unique feature perspective, our goal is end-to-end testing, remember, everything end-to-end. So the goal is to help people automate anything that a human can do. For example, we can test two-factor authentication literally with text messages. We can check your text message, extract the code from the text message, and use it as part of the login. We can validate that your file download was successful, and check the contents of your downloaded file, be it a report in XLS, CSV, or whatever. We can, of course, help you validate that your charts and diagrams look a certain way, and so on and so forth. We have tons of different features. We can even help people test phone calls and things like audio. For example, if you build your own podcast website where you yourself are playing audio, we can help you test whether your audio is actually running, the sound is actually playing, and it is exactly the right sound.
Joe [00:25:02] Okay, how do you do that? Are you using a third party to do that or is that your own technology?
Artem [00:25:08] This is our own. We basically got into this because we have customers in the telecom space, and we built a bunch of stuff for them. Some well-known telecom companies are actually using us to test various stuff right now.
Joe [00:25:25] It almost sounds like an RPA tool. So is this kind of the same model, Paul? Or, I guess, could you use it not necessarily for functional automation, but for automating things that aren't necessarily a functional test?
Paul [00:25:37] Yeah, I think I'm just starting to get into the robotic process automation field. One of the tests that we were looking at putting together was basically searching for a book title on Amazon, pulling off the title and the price, then jumping over and logging into Salesforce and putting in a request to go buy this book at this price. With that, I mean, you could set up automation of a lot of different existing manual processes and let them get taken care of. I can see, like, the H.R. department getting a request for a job: somebody wants a job, and H.R. has to send out an email that says, okay, fill out this information. Let automation handle that process, look at it and pass it back, maybe even search a little bit through the messaging of the email and tweak what sort of response the H.R. department will have to give them, and let that be fully automated. I see a whole lot of upside in the RPA field on that.
Joe [00:26:42] So, Paul, I also saw something in the demo. It looked like it was running on a separate server. You're running your test, and it was running in the cloud. So it just came to my mind: are you able to run tests in parallel? Does it automatically handle running your tests in parallel, in almost different Docker instances or however it runs, to make your tests run faster in parallel?
Paul [00:27:01] It does run in parallel. And I'm going to have Artem explain that a little bit more in detail.
Artem [00:27:06] Yes, we are like serverless testing for you, serverless test execution. Usually, you would have to deal with all this infrastructure setup. You would manage your own servers even if you don't know that kind of stuff. It's a pain. And what if your Chrome updated and then it screws things up because of your Chrome driver and all that kind of stuff? In our case, it's just completely serverless, running in parallel automatically, out of the box. All you need to do is write a test, and it's ready to go; it will just run.
Joe [00:27:43] Alright. So I guess the next question: people are in love with open-source tools, and I don't even know the model here. Is this open source? Is it reasonably priced? Is it more like an enterprise solution? Who's your target audience that would make a good fit for your solution?
Artem [00:27:58] Well, we are partially open source. We have this plugin, which is open source, and some bits and pieces which are open source. And we are free in the sense that we only charge for infrastructure. We don't charge for the framework itself or anything like that. You can have unlimited tests, unlimited users, what you would expect from open source. We're not fully open source; however, it does not matter, because what we help companies do is automate away a lot of manual testing. A lot of manual QA testers who were executing test cases manually, we help them basically automate their own work. And then they need a simplified tool with a UI where they can do it easily. It's not about open source or not, so you can just play with it and those kinds of stuff; it's about an effective tool which actually helps companies.
Joe [00:28:57] That's cool. Yeah. Oh, that reminds me: A.I. Is your tool an A.I. solution? Did you just slap A.I. on, or does it actually do some sort of machine learning behind the scenes?
Artem [00:29:08] We use a huge number of different models. So when you're talking about A.I., you need to understand that A.I. is a general term. You can use it for anything you want; your own hello-world program is already A.I. by definition, because the definition is so broad. You can check it on Wikipedia. But in our case, we do use machine learning indeed. Probably the most common example would be if we want to allow you to click on a cart icon by how it looks. So if it looks like a shopping cart, you should be able to click on it. And the only way to make it work is to use our machine learning-based models to detect and classify this type of icon. So there are tons of different models which we use in specific cases. There are about five or six of them where you literally cannot do anything other than machine learning. But we're not, you know, general A.I. So don't mix it up with general A.I.; we're not general A.I. We just use machine learning models in the places where they should be used.
Joe [00:30:25] Okay, Paul and Artem, before we go, is there one piece of actionable advice you can give to someone to help them with their automation testing efforts? And what's the best way to find and contact you to learn more about Test Rigor? Let's start with you, Paul, and then we'll end with Artem.
Paul [00:30:36] I'd say always be open to lots of different tools that are available out there. I'm always excited about anything that will make my life easier and faster and remove all the maintenance and let me get things taken care of quicker and faster.
Joe [00:30:50] And Paul, the best way to find you and contact you?
Paul [00:30:52] Sure. Well, you can get me by email at email@example.com. I'm on LinkedIn. I'm also on Twitter @DarkArtsWizard. And if you want to see some of these cool videos, check out YouTube and just put in Paul Grossman, the Dark Arts Wizard, and that will take you directly to my cool videos. And by the way, I'm working with Utopia Solutions, and my boss, Lee Barnes, is a really great guy. So I've got to give a shout out to my boss.
Joe [00:31:16] Awesome, awesome. Cool. Artem?
Artem [00:31:18] Okay, so basically, I believe that you should use the right tool for the job. Whatever is the right tool for the job, and it may not necessarily be Test Rigor, who cares? Whatever will help you, make your life happier, and make you work more efficiently, this is what you should use. I believe in that.
Joe [00:31:41] So Artem, best way to find and contact you?
Artem [00:31:44] Yes. Feel free to connect with me on LinkedIn. You can find me Artem Golubev, Test Rigor. You will be, probably, hopefully (unintelligible). Artem Golubev of Test Rigor. And yes, please also check it out online. You can go to testrigor.com and request a free trial. We have a 30-day unlimited free trial. Thank you.
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.