When to choose a low-code automation tool with Diana Oks

By Test Guild

About This Episode:

When should you consider using a low-code test automation solution? In this episode, Diana Oks, an automation engineer at Vulcan Cyber, explains the benefits of using a low-code solution and how it can simplify your workflow. Discover what low-code automation tools are, how to choose one and how they can make your life easier. Listen up!

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Diana Oks


Diana is an Automation Engineer at Vulcan Cyber. She graduated from the Technion with a double bachelor's degree in physics and materials science and engineering. After graduation, she came across a career retraining course to become a software developer. Diana got her first job on the automation team at Philips (the medical division). She liked this area better than development and decided to stay. Diana also started her MBA (Master of Business Administration) this year, and has two kids and an American Bulldog.


Full Transcript Diana Oks

Intro: [00:00:01] Welcome to the Test Guild Automation podcast, where we all get together to learn more about automation and software testing with your host Joe Colantonio.

Joe Colantonio: [00:00:16] Hey, it's Joe. Welcome to another episode of the Test Guild Automation podcast. Today, we'll be talking with Diana Oks all about when to choose low-code automation tools, as well as other automation topics, and we may even dive a little bit into security testing. We'll see. Diana has over six years of experience as an automation engineer. She currently works as an engineer at Vulcan Cyber, which is a startup based in Israel. She has a really cool background: a double degree, I believe one is in engineering and one is in physics. We'll dive into that as well and see if I got that right. She spoke at our Automation Guild conference, was an awesome speaker, and dropped a lot of knowledge, so I wanted to share her with you as well. You don't want to miss this episode. Check it out.

Joe Colantonio: [00:01:00] The Test Guild Automation podcast is sponsored by the fantastic folks at SauceLabs. Their cloud-based test platform helps ensure your favorite mobile apps and websites work flawlessly in every browser, operating system, and device. Get a free trial, just go to TestGuild.com/SauceLabs and click on the Exclusive Sponsor section to try it for free for 14 days. Check it out.  

Joe Colantonio: [00:01:28] Hey, Diana, welcome to the Guild.  

Diana Oks: [00:01:32] Hi Joe. Great to be here. Thank you for having me.  

Joe Colantonio: [00:01:34] Awesome. Awesome. Great to have you. As I said, you were a speaker at Automation Guild, so I'm really excited to have you join us on the podcast. I get asked about this all the time: boot camps. Do they really help? How do I choose one? You have a background in physics and materials engineering, so how did you get into automation and software development? And how did you know, "here's a good boot camp that's actually going to give me the skills I need to find a job if I invest in it"?

Diana Oks: [00:02:01] While you're looking for a job, these places kind of reach out to all the graduates: do you want to try this out? It was not an innovative idea, but there were only two places doing it, and I applied to both of them and tried and tested them. One did not accept me, but the other actually took me. So this is how it started. I didn't know if it was a good thing; it was kind of a leap of faith, because they really promised a job. It wasn't "we'll try to help you," it was more "we will find you a job," because you have to sign an agreement to work through them, maybe for a lower wage than is usual in the industry. But still, they find you a job, and you sign to work two years through them. By the end of those two years, you're free to choose whether to stay with that company or sign with another one. So you end up with the knowledge, with experience, and you're a free agent to go after that.

Joe Colantonio: [00:03:00] So at the company you're at now, I believe you were their first hire in testing or QA. Is that correct?

Diana Oks: [00:03:07] It's the first time I'm using Testim at a company. I actually wasn't aware of Testim before I came there. I did real testing before, but with my own framework; usually it was created inside the company, and the company was responsible for it. It depends on the company, actually, and it was a different kind of testing. For example, Philips, where I worked, was a medical company (not Philips the lighting company), and they had a lot of teams doing manual testing. At some point they decided to move that to automated testing, because it was really too much to test manually. Some tests were only run once every two years, just to give you a sense of the number of tests. There it was mostly desktop applications, so the whole infrastructure was built inside the company, and it had to work with their tools and not outside tools. So think of everything you know about web testing or mobile testing; it really doesn't apply there. You need to kind of invent your own infrastructure and things that will work. On the other hand, when you work with companies that are web-based or the like, you take everything you know with you, because essentially automation, or testing for that matter, is not about mobile versus API. You test the methodology. You can apply the same rules, the same knowledge, the same base rules to anything you test, so it's more about how you take your knowledge and transfer it to other places.

Joe Colantonio: [00:04:36] Absolutely. So I guess a lot of times when you join a new company, they already have a framework in place, or, you know, they say, "These are the tools we use." What was it like at Vulcan? When you joined, did they already have something in place, or were you charged with finding a solution to help them with their unique testing needs?

Diana Oks: [00:04:52] Okay, so Vulcan started as that kind of "we're friends and we have an idea" story: we take a few people we know and we create this real platform that does what we envisioned. But at some point your product becomes a real thing and you try to sell it. You know, every startup has this: we have this idea, we think it looks real, and now we want to start selling it to customers. And at that point you have to be really, really responsible. You can't say "it might work." It has to work. You have clients now, and you have this responsibility to make your product work. So about two years after the startup was founded, they came to the conclusion that they didn't have any QA. The QA was done by the product team, and they really couldn't do a thorough job. They'd kind of sweep through the app, everybody in charge of their own area, checking if this works and that works, and it wasn't working. So before even considering me, they took Testim for a tryout, I think, to see if they could make it work. When I came, they told me, "We tried this, but we don't have anyone who's in charge of it. So at this point, you can either choose to use it or create your own framework. We will go either way." I was actually interested, so I looked into it, because apparently Testim is a real company; I didn't know that. I found a contact person there through WhatsApp and asked him, "Hey, I know you work there. Can you tell me about it? Can you walk me through it?" And he said he could do even more: he could schedule a demo, tell me what they do, and let me ask questions. So we scheduled a demo and he answered all my questions. I do have to say that I've had bad experiences using Selenium. Not because it's a bad tool, but because a lot of the time, when developers build things, we're not always able to test them easily. You start to look for all these hacks and workarounds, and the Internet is full of them: how do I test this unique dropdown? They pile up into a lot of workarounds for really cool features that the user sees but that are really hard to test. So I had all these small questions: how do you handle this, and how do you handle that? I assumed Testim would have at least a partial solution to all sorts of these issues, and if they could provide some answers, I thought I could use it. At that point, I wasn't sure if we were going to do only UI or API as well. If I was doing both, then Testim wouldn't be a good answer. But if I was doing only UI, maybe Testim was a good answer. So it was a matter of deciding how to proceed. And since the company didn't have any coverage whatsoever, and the few tests there weren't good enough, it was a matter of: okay, if we go the code route, it will take me about two or three months to build a good framework, because a good framework has to work not only with the tests themselves but also with what we already have in the company. For example, we are using GitHub, so it has to be part of that Git setup; should I use Python, or should I use something else? There are all sorts of decisions you need to make. And if you use open source like Selenium, it's not really free, because your time is not free. You need to take ownership of the project and make it really fit your product. So it's not free in that sense; it takes a lot of other time, all the workarounds, and you haven't even written a single test yet.
So this was all part of the decision, and where I landed was that the solution was Testim: let the testing provider provide the testing.

Joe Colantonio: [00:08:38] Nice. Now, I know a lot of people go either hardcore open source or, you know, the complete other way. But I think you brought up some good points on why you might want to look into options other than open source. I believe even Testim behind the scenes might use Selenium, but it has extra functionality built in that handles all these scenarios, like you mentioned, so you don't have to create them from scratch. So I guess one thing that might be concerning: you said you weren't sure if you were going to do API testing. What happens if you invested in a vendor-based tool like Testim, and then maybe a year down the road they say, "Oh, by the way, we also need to do API testing"? Does that make it more difficult now that you're tied into a vendor rather than an open-source solution, or is that something that doesn't even matter?

Diana Oks: [00:09:19] So here we are, about a year down the road, and I think we have good coverage, but now we have areas we're not handling, and we can't handle them through the UI. So we do need to make those API tests, and we really cannot use Testim at this point, so we need to create our own framework. That does not eliminate all the work done in Testim, because it's really in place. We have about two hundred tests that really test the UI, and it will probably grow even more, because the UI keeps changing and we can do even more elaborate tests. But we cannot ignore the API, so we will have two automation projects. I think everybody kind of thinks we need to have one solution for everything, but you may have more than one solution, and that's okay. Think of it like the developers have their unit tests and I have my UI tests, for example; it's not the same thing. So what? It shouldn't always be "I want one thing that fits it all," because you might not get that in tools. You need to understand that some tools will check the boxes on one thing and other tools will check the boxes on another thing, and that's okay. I don't think it's supposed to scare us. We should be able to use more than one tool, and we'll create more than one framework to match.

Joe Colantonio: [00:10:40] Nice. So do you have an approach now for when the team is running the tests? How do they know when to use API testing, what tool to use for API testing, and when to use Testim? Do they have a method to know, "Okay, this is a UI test, I know I need to use Testim; this is an API test, now I need to go to this other solution to do API testing"?

Diana Oks: [00:10:59] So actually, I guess it depends on company policy, or maybe the company agenda. But for me, if I can do it in the UI and it will cover more bases, then I'll probably do the UI test, because I can do it easily using Testim. However, maybe the test is prone to fail, or isn't really accurate on the UI side. For example, if I need to check data, I don't really need to open my browser and look at the whole dataset. There is no added value in opening the browser and checking the data, unless I want to know it's in the right place, or how it looks, or whether it corresponds with something. But if I only want to see the data, like the JSON itself, then the API would do this quickly and easily, and I don't have to do all that extra work. So it all depends on your definition of done, and on how you define your success: testing the data, or testing the data in some place or location? Is it within the boundaries, is it in the dropdown? It really all comes down to your definition.
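
That distinction, checking the JSON itself versus checking how the UI renders it, is easy to picture in code. Here is a minimal sketch of an API-level data check in Python with pytest and requests; the endpoint, token, and field names are hypothetical placeholders, not Vulcan's or Testim's actual API.

```python
# A minimal API-level data check: no browser, just the JSON payload.
# The URL, token, and response fields below are hypothetical examples.
import requests

BASE_URL = "https://app.example.com/api/v1"    # hypothetical API root
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def test_vulnerability_list_has_expected_fields():
    resp = requests.get(f"{BASE_URL}/vulnerabilities", headers=HEADERS, timeout=10)
    # The "definition of done" here is purely about the data, so an API
    # call answers it in milliseconds; no need to open a browser.
    assert resp.status_code == 200
    payload = resp.json()
    assert isinstance(payload, list) and payload, "expected a non-empty list"
    for item in payload:
        assert {"id", "severity", "asset"} <= item.keys()
```

If the question were instead "does this data show up in the right dropdown?", the same check would belong in a UI test, which is exactly the definition-of-done line Diana is drawing.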

Joe Colantonio: [00:12:04] So in this example, the vendor-based half of the testing is with Testim, but it could be any vendor-based tool. I think Testim, though, is one of those solutions that kind of markets itself as a low-code tool. And a lot of times automation engineers are not only turned off by vendor-based tools; when they hear "low code," they're turned off even more. So could you tell me a little bit more: what is a low-code solution or automation tool? What benefits does it give you? And what are some things people believe about low code that are myths, that really aren't true?

Diana Oks: [00:12:34] Absolutely. So low code does not necessarily mean no code at all, because I can assure you there are a lot of code snippets inside; these tools really cannot check all the boxes Selenium can, and they can't cover out of the box all the validations that my app or your app has. But it does provide a really simple way to create UI tests through clicking and recording, and then just rerunning the recording. And the added value is the screenshots: every step has a screenshot, and you can add a description of what it does. You can also add, define, or redefine your selector or CSS, so basically you have control over every step. Your steps, in that sense, are recorded instead of you writing them. So if you're scared it's not personalized enough, you can personalize it. You can determine what it does, you can determine the validations you have, you can add it to suites, you can add it to a group; it depends on what you need to do. And you don't need to be scared of duplication, because they have, for example, this whole feature of shared groups, which I use a lot. I have, say, 10 or 15 tests that go through the same flow and check the same box, so I just reuse the same step and I don't need to create it multiple times. Or say you have a login for your application: you don't need to record it a hundred times. You just record it once and reuse it in all of your tests. So you don't have to be scared of that, because on one hand it's easy to handle, and on the other it's really easy to maintain. And it's understandable by people who aren't the ones writing your tests; if it's simple enough, anybody can understand it, it's kind of self-explanatory.

Joe Colantonio: [00:14:36] That's also a big concern people have, that we'll have a lot of duplication. So how does it know there's already a login? If you have a big team and someone says, "I'm going to do this script," and they start recording and they do a login, but a login already exists, does it tell you, "Hey, by the way, this already exists"? Or how does that work?

Diana Oks: [00:14:52] So they have this feature. Just to clarify, we're talking about Testim because this is what I use, but there really are other tools as well; I'm totally for those too, I just speak from my own experience. So in Testim, let's talk about login, because I think it's the most common step. They have this base URL: they open the browser and navigate to this base URL. Our application, for example, has a landing page, which is the login page. So it's like five steps: click the username field, type in your user, click the password field, set your password, and click login. Five steps, and then validate that you were logged in, because if you weren't, it should fail there. Then you save this as your shared group, and you can call it in every test you create. It will show you that you have X tests using it, for example. You just have to remember: if you change the user, it will change the user across all the tests using that shared group, so don't change shared groups without understanding the repercussions.
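
For readers who think in code, the shared-group idea maps directly onto the reusable login helper you would write in a hand-rolled framework. Here is a minimal sketch in Python with Selenium; the URL, element IDs, and credentials are hypothetical placeholders.

```python
# A code-based analogue of a "shared login group" (Selenium, Python).
# The URL, element IDs, and credentials are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

LOGIN_URL = "https://app.example.com/login"  # hypothetical landing/login page

def login(driver, username: str, password: str) -> None:
    """The five 'shared group' steps: navigate, fill credentials, submit, verify."""
    driver.get(LOGIN_URL)
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login-button").click()
    # Validate we actually logged in, so every caller fails fast here if not.
    assert "dashboard" in driver.current_url, "login did not reach the dashboard"

# Every test calls login() instead of re-recording the same steps, so a
# credential change happens in one place. The same caveat Diana gives
# applies: changing it here changes it for every test that uses it.
driver = webdriver.Chrome()
login(driver, "test-user", "s3cret")
driver.quit()
```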

Joe Colantonio: [00:15:59] Absolutely. So how do you create a test? Is it always recording? The framework has been around for a year now, so you must have a bunch of functionality built up. Does the UI have a dropdown where it shows all the existing functions or methods you can use already? How does it work once you've been at it a while?

Diana Oks: [00:16:16] So, two options. If you have a similar scenario, you can clone your test and add just the few steps that are really different. Or if you need to create your own step, you create a new step, and they have a box with a dropdown that gives you two or three options; they've changed it a little bit over time. You either record your next step, or you use Testim's built-in actions, like validate, or add something that's really incorporated into the application, or you can go to your own saved shared steps and use them. So you just have a dropdown with all your shared steps to choose from, and you can basically build your own test out of your own shared steps.

Joe Colantonio: [00:16:59] Cool. I also know that when a lot of people use a code-based framework for automation, like Selenium or whatever open-source tool, a lot of times they struggle with things that are time-consuming, like waits. They don't have the right wait mechanisms in place to wait for something to be ready to be interacted with, because it's not on the page and fully loaded, and so they end up with these weird flakiness issues. So do you have to worry about that with a vendor-based tool? Does the vendor-based tool take care of waiting for you automatically, or do you have to think, "Okay, now I need to add this wait, or that wait, for this type of assertion"?

Diana Oks: [00:17:39] They have two mechanisms, I think, and we can talk about them both. One is the classic wait: they have built-in steps like wait for a certain element to be visible, or wait for it not to be, wait for text, wait for a download, and stuff like that, which is incorporated into their application. On the other hand, they have this mechanism where you tell it how long a step can take before it fails, so it can retry during that whole time. If you set, say, ten seconds, it will retry for ten seconds and then fail. Or you can say, wait until this element appears and then try the next step. Both will give you the same result, but if we're talking about the right way to work, then "wait until the element is there, then test the element" would be the more correct flow. Think of them as two means to achieve the same end; both will work.
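
The first mechanism, an explicit wait for an element to become visible before the next step runs, is the same idea as Selenium's explicit waits. A minimal sketch in Python follows; the URL and selector are made up.

```python
# Explicit wait in Selenium (Python): poll up to 10 seconds for an element
# to become visible, then interact with it. URL and selector are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://app.example.com")  # hypothetical app under test

# "Wait until this element appears, then try the next step": WebDriverWait
# retries the condition until it passes or the 10-second timeout elapses.
results = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, "#results"))
)
results.click()
driver.quit()
```

The second mechanism, a per-step retry budget, is what low-code tools typically apply for you automatically on every step; in Selenium terms it plays a role roughly similar to an implicit wait, just broader.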

Joe Colantonio: [00:18:34] Cool. So how do you run your tests with the vendor-based tool? Does it integrate with CI/CD? Do you have that in place at your company? What's your workflow for your automation?

Diana Oks: [00:18:42] I actually really enjoy working with the UI. I know most people don't always, but it has its own benefits, because the UI means everybody can open a URL and see it for themselves. That matters because, as I said, we're a small team and I'm the only one who writes the tests, so I'm the only one who knows what these tests are doing. But I need to share them with my developers, and they need to know what those tests are doing in case they fail. So it's easy: I send you the URL for the failed test, and you can see the failure, where it failed, and the screenshots. It's easy to understand. On the other hand, for CI/CD, we have an integration in GitLab; I know it's not the most popular tool, but we use it. Testim has CLI options so you can actually call your project and run it by suite name or plan name. So essentially it works both ways. Using the UI, I have suites scheduled to run at set times; I actually have this on my end. And when my developers are doing a deployment, they have a pre-deployment suite we call, for example, and that we can use only from the CLI. I think that's kind of its purpose. But I really like the UI.

Joe Colantonio: [00:20:08] Cool. Can we talk a little bit about what you're testing? Because I'm looking at Vulcan.io now and it looks like it does some sort of security automation. So it's like you're automating stuff that's being automated. How does that work? What's that like?  

Diana Oks: [00:20:21] So, a little bit about cybersecurity, which is a fancy name for a lot of things. Essentially every company, or at least the large companies, has scanners in their environment: scanners like Tenable, Qualys, or Rapid7 that scan all their assets. Every computer they have, every server they have, whether it's cloud-based or a physical server, it all gets scanned through their network, and the scanners let them know whether they have vulnerabilities and which assets are actually vulnerable. What we do is, we do not scan those systems; we only take the information from their scanners. For that, we have one team that does all the integrations, and we have about thirty connectors. Some of them are scanners, others just bring in the data, and others bring in and fix the data, so it really depends on the type. Then we have our system that does the whole prioritization and scores vulnerabilities with our own algorithms, so we can prioritize what is important to fix first, what is not important, and what can be postponed. The most important thing for companies that have vulnerabilities is to know what must be fixed, what can be postponed, and how a fix will affect them. Because think about it: you have a large company, and you need to reboot some software, some browser, some operating system. It will affect your team, and some data might be lost, so you consider, am I willing to take this risk or not? Sometimes it's as easy as changing your password, but other times it's a bit more complicated than that. So companies need to understand what is important, and using the algorithms they can actually tune this themselves, for example by tagging, by prioritizing, by weighting, by asset types, all sorts of things. This gives you a list of what is at risk: critical, high, medium, for example. And the third part is that after all this data is in the system, you can choose to take action. By taking action, it means you can either choose manually what to do, or you can use a kind of playbook we have in the system, where you define a rule and it will run automatically. Once something matches, for example a new vulnerability is found on your asset, it will take one of the actions you defined. An action in that sense might be to open a ticket if you have Jira or ServiceNow, or send an email, or deliver a fix. Fixes are usually a little more complicated, because it's usually different people in a company who are responsible for scanning versus tracking versus fixing; the whole security picture is more complicated than that, and it's not always the same people responsible for scans and for fixes. So it's this platform that allows them to interact, to understand what is important and what can be fixed, using the same system to understand where and how your environment is vulnerable. And the second thing is that we can deploy fixes, so we can deploy Ansible scripts or Chef scripts or SCCM scripts; this is the take-action part.
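
The playbook idea, define a rule once and have any matching new finding trigger a predefined action, is a small pattern worth sketching. This is an illustrative model of the concept only, not Vulcan Cyber's actual implementation; all names are made up.

```python
# Illustrative sketch of a "playbook" rule engine; not Vulcan Cyber's code.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Finding:
    asset: str
    cve: str
    severity: str  # e.g. "critical", "high", "medium"

@dataclass
class PlaybookRule:
    name: str
    matches: Callable[[Finding], bool]
    action: Callable[[Finding], None]

def open_ticket(finding: Finding) -> None:
    # Stand-in for a Jira/ServiceNow integration.
    print(f"Ticket opened: {finding.cve} ({finding.severity}) on {finding.asset}")

RULES: List[PlaybookRule] = [
    PlaybookRule(
        name="ticket-critical",
        matches=lambda f: f.severity == "critical",
        action=open_ticket,
    ),
]

def on_new_finding(finding: Finding) -> None:
    """When a new finding arrives, fire the action of every rule it matches."""
    for rule in RULES:
        if rule.matches(finding):
            rule.action(finding)

# A new critical vulnerability appears on an asset and triggers the rule.
on_new_finding(Finding(asset="web-01", cve="CVE-2021-44228", severity="critical"))
```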

Joe Colantonio: [00:24:08] So how do you automate tests for that flow? It sounds very complicated, because it sounds like you need to have something in a certain state, and you need a bunch of data: I found this vulnerability, and this vulnerability can be fixed, or this vulnerability can be delayed. How are you automating or testing that? And how long do your tests take to run? Because it sounds like a long flow that you need to automate.

Diana Oks: [00:24:28] It really depends. As I said, we can divide it into three parts. First, how do we bring in the data? It's important to test that you bring in all your data. Then, once the data is in the system, you need to make sure it's really handled correctly. For example, fixes: how do you match fixes to vulnerabilities? Think about it: you have a Mac computer, and suddenly it gets a Windows fix. That's awkward, and you want to avoid it. So we have a really big research team, and it has this task of bringing the right solutions, the right fixes, the right remedies for the vulnerabilities. They actually built, if you look at our website, this Remedy Cloud, where you can look up fixes for vulnerabilities just by their CVEs. It's a major project and it's free, so you're more than welcome to use it if you want. It's really hard work, and the research team does a lot in that sense to make sure we bring the right remedies, the right fixes, to the vulnerabilities. So there's a lot on our side, including the algorithms that merge all of those solutions.

Joe Colantonio: [00:25:49] So Diana, I've worked for a lot of enterprise companies, big, big, huge companies, and I've always had better luck with vendor-based solutions there. Whereas, you know, you have these small, one-or-two-person-type companies, and they have no problem using an open-source tool like Selenium. So I was curious to get your opinion: when you're choosing a tool, does it matter if you're a big corporate enterprise company as opposed to a one- or two-person-type operation?

Diana Oks: [00:26:15] Absolutely. It's really a different way of working and thinking. Bigger enterprises usually have bigger teams, or many teams, each tasked with a different chunk of the work; sometimes teams don't even test the same things in the application, and each is in charge of different areas. And then you have a whole framework team that builds the tools, for example; I've had that. So it's usually not the same people, not the same group, not the same areas. But if you're one person, or a team of two in a small company, then you are the answer. You are the person who builds the whole framework. You design it, you use it, you make it work for you, and you are in charge of it. You need to interpret your tests for your developers, for example, and work with them, or alongside them, to plan the tests and see what features they're building. So you're really hands-on through this whole process while it's being planned and executed. Once the feature is out, or even before it's out, you already know what to test and how to test it, and you have a say and can give feedback. Sometimes I really have to stop a deployment because I think we have a critical bug, a showstopper, for that matter. So you really have a say, and it's a really big responsibility, because essentially you are the one person, and sometimes they'll call you in the evening: "Hey, we have this problem. Can you please help us, or rerun the tests, or check what's the matter?" And it happens. So I think this is the biggest difference: you have fewer people in charge of all the testing in the company.

Joe Colantonio: [00:27:57] OK, Diana, before we go, is there one piece of actionable advice you can give to someone to help them with their low-code automation testing efforts? And what's the best way to find or contact you?

Diana Oks: [00:28:06] Please reach out on LinkedIn. I think I answer everybody who asks a question. I've used it to contact other people myself, so I vote for the community. I can't tell you to use this tool or that tool, but I think if you have the opportunity to use AI tools, or tools that incorporate AI and make our lives easier, you should totally consider using them. I'm all for making our lives easier. I think the main goal is not to create more code, but to make tests more reliable and more approachable, so we should totally use tools that help us do that. At the very least, don't reject a tool unless you really have a good, valid reason to.

Joe Colantonio: [00:28:53] Thanks again for your automation awesomeness. If you missed anything of value we covered in this episode, head on over to TestGuild.com/a350, and while you're there, make sure to click on the Try it for Free Today link under the Exclusive Sponsor section to learn all about Sauce Labs' awesome products and services. And if the show has helped you in any way, why not rate it and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation podcast. I'm Joe. My mission is to help you succeed in creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

Outro: [00:29:34] Thanks for listening to the Test Guild Automation podcast. Head on over to TestGuild.com for full show notes, amazing blog articles, and online testing conferences. Don't forget to subscribe to the Guild to continue your testing journey.

 

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
