TestOps with Oren Rubin and Maor Frankel

By Test Guild

About This Episode:

Running automation testing across the enterprise is hard. There must be a better way. In this episode, Oren Rubin, founder of Testim.io, and Maor Frankel, a Senior Software Engineer at Microsoft, share how to scale testing using TestOps principles. Discover how to leverage control, management, and insights to unjumble your automation's growing complexity. This is not theory: Maor shares how he and his team successfully implemented TestOps at Microsoft. Listen up!

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Oren Rubin


Oren is the Founder and CEO of Testim.io.

Connect with Oren Rubin

About Maor Frankel

Maor is a Senior Software Engineer at Microsoft.

Connect with Maor Frankel

Full Transcript: Maor Frankel and Oren Rubin

  • Intro: [00:00:01] Welcome to the Test Guild Automation podcast, where we all get together to learn more about automation and software testing with your host Joe Colantonio.

    Joe Colantonio: [00:00:17] Hey, it's Joe, and welcome to another episode of the Test Guild Automation podcast. Today, we're talking with Oren and Maor all about TestOps and how it can help you scale your automation efforts. Oren, if you don't know, is the founder and CEO of Testim.io. He's also done a lot in the open-source community and has been speaking to the automation community for at least 20 years. Maybe I'm overstating it, but he's been around for a bit, so he knows a lot about automation. And Maor is a senior software engineer at Microsoft. I love Microsoft. When I started, I did Microsoft Visual Basic support online for a third party, and I always wished I worked for Microsoft; they were the place to be in the 90s. So I always love having someone from Microsoft on the show, and it's an honor to have Maor join us. They both have a deep understanding of how enterprise-strength automation works at scale, so you really don't want to miss this episode. Check it out.

    Joe Colantonio: [00:01:10] The Test Guild Automation podcast is sponsored by the fantastic folks at Sauce Labs. Their cloud-based test platform helps ensure your favorite mobile apps and websites work flawlessly on every browser, operating system, and device. To get a free trial, just go to TestGuild.com/SauceLabs and click on the Exclusive Sponsor section to try it free for 14 days. Check it out.

    Joe Colantonio: [00:01:38] Hey, guys, welcome to the Guild.

    Maor Frankel: [00:01:41] Hey.

    Oren Rubin: [00:01:42] Hey, Joe.

    Joe Colantonio: [00:01:43] Hey, guys, good to have you on. I know I did a short intro about you. Is there anything I missed in your bio that you want the Guild to know more about? Oren, let's start with you.

    Oren Rubin: [00:01:51] I think you're spot on. I've been basically building tools for developers for the last 20 years.

    Joe Colantonio: [00:01:59] Awesome. How about you Maor?

    Maor Frankel: [00:02:02] Yeah, I mean, I work as part of a group called Microsoft Security. We build security products for enterprises and organizations, and I'm part of a team called Frontend Infra. We provide different tools for all our developers, and that includes the infrastructure for test automation, currently using Testim.io.

    Joe Colantonio: [00:02:23] Oh, very cool. We're going to get into that, but the show is not about Testim.io or any product. Like I said, Testim.io is awesome, but a lot of the tools and techniques we're going to talk about today can be applied to any situation. So you want to stay tuned all the way to the end to hear the awesomeness coming your way. I guess the first thing is, as you know, there are a lot of buzzwords around, and I saw TestOps and thought, okay, what the heck is TestOps? I've heard of AIOps, BusinessOps, DataOps. So why another Ops? Oren, let's start with you.

    Oren Rubin: [00:02:52] Sure. I agree that there are so many different types of Ops, and even with TestOps specifically, people describe it in different ways. I think the best way to describe it is that it's about scale. Scale means: how do I do something not just now, when I have ten tests, but when I have ten thousand tests, and keep adding those tests and maintaining them at the same speed? It's all about scale. Makes sense?

    Joe Colantonio: [00:03:23] Speaking of scale, yeah, absolutely. I'm sure at Microsoft, an enterprise company, there's a lot of automation and a lot of development. So Maor, did you think about TestOps when you implemented your solution, or where did TestOps come into play?

    Maor Frankel: [00:03:34] Yeah. I have quite a lot of experience with automation from other companies I worked at, but when I came to Microsoft, the first thing we had to think about when we talked about automation was TestOps, was scale, because the number of people working on our application and our codebase, and the size of the application itself, is huge. So when we thought about what technology we wanted to use and how we wanted to approach our solution, the first thing we considered was how to handle that amount of scale, with that many developers working on the same thing at the same time.

    Joe Colantonio: [00:04:08] Oh, right, cool. So is there a reason I've been hearing more and more about scale? Are there any trends going on? I know integrating things into CI/CD has been one. Oren, any thoughts?

    Oren Rubin: [00:04:18] Yeah, I think we see that a lot more. In the past, when the majority of testing was done manually, you heard more about how to scale manual testing, right? You want to make sure that you cover everything. Now the trend is moving more toward automation. And now you don't have ten tests, you have thousands of tests, and you need to figure out how to scale that.

    Joe Colantonio: [00:04:43] Absolutely. And Maor, is that the reason you thought, we need to scale better, we need a process to do this better?

    Maor Frankel: [00:04:50] Yeah. I think, like everything, applications in general, but specifically web applications, have been getting bigger and bigger, and the number of hands handling a given application keeps increasing. So it makes sense that you need tools to manage that. Writing a testing framework for a team of five or ten people, where everybody can communicate, is one thing. Doing it for dozens of developers, sometimes hundreds, who can't necessarily communicate with each other all the time? That's a completely different thing, and you have to think about it differently.

    Oren Rubin: [00:05:24] Maor, I think you're also referring to shift left, the fact that you want to test earlier.

    Maor Frankel: [00:05:30] Yeah.

    Oren Rubin: [00:05:30] So it's about scale not just in the number of tests, but in the number of times you run them. You're running them earlier and earlier to find the bugs much faster.

    Maor Frankel: [00:05:40] Yeah. And we have that approach, too. When automation started, the idea was just to do end-to-end tests and test everything. Today that's changed a bit, and people try to catch bugs as soon as possible. That also relates to scale, because in a big organization you don't want to be handling regressions that other people caused; you want to catch the bugs that you introduced, and that has to happen even before the code goes in. So you definitely have to start thinking about how you manage that and how you move things earlier in the process.

    Joe Colantonio: [00:06:12] Yeah, absolutely. I used to work for a large enterprise company that went from releasing once a year to wanting to release every six weeks. So there were more people involved, because they wanted developers to do testing as well. We had more tests, and they were bad tests because everyone was writing tests, and more releases, which made it really hard to scale. Are those the typical kinds of challenges holding back the teams you've spoken with? Oren, I know you speak with a lot of companies.

    Oren Rubin: [00:06:40] Yes, we see that a lot. You said people want to get to six weeks; I'd say those are the slower teams. We see companies that want to be on continuous deployment. That means they can release at the click of a button: a developer changes a few lines of code, and in one click they want to build everything, test everything, and deploy it.

    Maor Frankel: [00:07:04] Yeah, I mean, we release versions to production every few minutes; there's a new version, we deploy parts. So every few minutes production is changing, and that has to be tested continuously to make sure there are no regressions. That can't be done manually.

    Joe Colantonio: [00:07:18] Absolutely. I don't know how long you've been at Microsoft. I know back in the day they're the ones who started the SDET role, where they had a tester embedded with the team. But I've spoken with a lot of engineers, like Abel Wang, and it sounded like developers are now responsible for testing, and that was kind of a big shift.

    Maor Frankel: [00:07:33] Yeah.

    Joe Colantonio: [00:07:34] That probably causes issues with scale as well, because they're just learning how to test. Is that something new you saw when you joined Microsoft, or is it a common thing?

    Maor Frankel: [00:07:41] Yeah, yeah.

    Joe Colantonio: [00:07:42] So you've seen that maybe at other companies?

    Maor Frankel: [00:07:43] There are still test engineers and manual testers at Microsoft, but Microsoft has definitely changed the focus and wants to give more of that control to the developers. That doesn't mean there are no test engineers, but you want the developer to be able to test his own changes. That's what everyone's saying: we want to shift left. We want to give the developer the tools to test his own code before it even gets to anybody else, and Microsoft has definitely adopted that approach. And it's a challenge. Not all developers are in a state of mind to think about automation and testing, and not all of them take to it naturally. But it's definitely something Microsoft takes very seriously, and it's widespread throughout the company. It's not easy, for sure.

    Joe Colantonio: [00:08:25] Right, no doubt. And Oren, when I was researching TestOps, of course Testim came up, and a few other companies as well. And like you said, they all have different slants on it. Yours has four core tenets, and I thought we could go through each tenet to see what you see and how it really works in the wild at Microsoft. So the first one was planning. What are your thoughts on what people need to do to plan, or on anything that's holding people back from planning to make their tests more scalable?

    Oren Rubin: [00:08:48] When I'm talking about this, I'm a person inside the organization who wants to push test automation to the limit and scale it. So this is where we want to help: how can I plan who's doing what? Wait a second, do we have test automation SDETs who are helping out? How can we assign work? Those are new features that need to be tested; who covers them? Are there already reusable components we can use, or which ones do we need to build? Planning doesn't just mean building new stuff. Which tests do we currently have technical debt on? Which are the flaky tests we need to fix and make sure they're no longer flaky? Which tests are quarantined because of a specific bug that was introduced? You need to manage all of that and say: hey, what am I doing today, what am I doing this week, and what is my team going to do? That's really the thing you want to manage, and you want to make sure everyone knows what's going on.

    Joe Colantonio: [00:09:51] Absolutely. Maor, when I worked for, once again, a big enterprise company and developers started testing, we always had these discussions: is this covered by a unit test, or do I need to create a full-blown functional test? Should it be a unit test or an integration test when using microservices? So how do you go about planning what to test and who's going to test it, that type of thing?

    Maor Frankel: [00:10:11] Yeah, that's a good question: where should a test live? Should it be a unit test or an integration test? Generally, our approach is similar to what Abel said: shift left. Whatever you can test, the sooner and smaller the better, and the further up you go is where integration happens. When you want to test a certain component, that would usually be a unit test. If you want to test the integration between two components, usually a page or a feature, that would be a UI test without, for instance, a back end, using mocked data. And when you want to test the integration between the front end and the back end, you'd probably have a full-blown end-to-end test that covers the whole scenario. That's the rule of thumb. But that's very naive; obviously it gets much more complicated than that, but it's the approach we try to follow.
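
    To make that rule of thumb concrete, here is a minimal sketch of the two middle layers Maor describes, written with Playwright's test runner: a UI test that mocks the backend to check integration between frontend components, and a full end-to-end test that hits the real backend. The URL, route, and selectors are hypothetical.

    ```typescript
    import { test, expect } from '@playwright/test';

    // UI-level test: the backend is mocked, so this only checks the
    // integration between frontend components (a page or a feature).
    test('cart page renders items from a mocked backend', async ({ page }) => {
      await page.route('**/api/cart', route =>
        route.fulfill({
          status: 200,
          contentType: 'application/json',
          body: JSON.stringify({ items: [{ name: 'Widget', qty: 2 }] }), // mocked data
        })
      );
      await page.goto('https://app.example.com/cart'); // hypothetical URL
      await expect(page.locator('.cart-item')).toHaveCount(1); // hypothetical selector
    });

    // Full end-to-end test: nothing is mocked, so this checks the
    // integration between the front end and the real back end.
    test('checkout succeeds against the real backend', async ({ page }) => {
      await page.goto('https://app.example.com/cart');
      await page.click('#checkout'); // hypothetical selector
      await expect(page.locator('.order-confirmation')).toBeVisible();
    });
    ```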

    Joe Colantonio: [00:10:57] Absolutely, it does get really complicated. Another thing I saw: we moved from vendor-based tools, which I loved because they had a full end-to-end suite where you write a test and can map it, to open source, and then you had to create these weird integrations to map what you're testing, what the lifecycle was, and what the results of the test execution were. So I think test management becomes an issue when you're scaling as well. Oren, any thoughts on that?

    Oren Rubin: [00:11:22] Yeah, we do see that, especially with tools like Selenium, which is, by the way, an amazing tool. Everyone keeps asking me about Selenium and thinks I don't like it; I love Selenium. It's a great tool, but what it provides is the browser automation. It doesn't provide, as you said, the management: how do you know the state of your tests, their history? Those are things Selenium doesn't provide out of the box, and it's not intended to. So those are the things that are missing when you're trying to scale.

    Joe Colantonio: [00:11:57] So Maor, how did you handle the management of tests when you started implementing your solution?

    Maor Frankel: [00:12:02] Yeah, I'll make Oren a bit happy and promote Testim.io, because, like Oren said, we had the option of writing our own infrastructure. Selenium and other automation tools generally give you the ability to write the tests and an API for them, but they don't give you the whole infrastructure around them to manage and run them, and that's where Testim.io helped us. Our application was written very fast and didn't have automation from day one, so we were looking for something that could give us the whole suite of tools we needed around automation: Selenium grids, test management, user management, all of that. Testim gave us most of that out of the box, so we had a shortcut. But definitely, a big part of automation today is not just writing the tests; it's maintaining the whole thing, and there are so many tools around it that you need to manage.

    Joe Colantonio: [00:12:56] It's crazy, though, Maor. As Oren mentioned, Selenium is awesome; no one's dogging Selenium. But there are other tools coming along as well, and I know Microsoft has invested heavily in Playwright. So when you're developing tests, does management tell you what tools or techniques to use? Do you have to use Playwright, or have you used Playwright or anything like that?

    Maor Frankel: [00:13:16] Yeah, no. Maybe a while ago Microsoft had that state of mind; that was before my time. But Microsoft today is a completely open company: use whatever tool you need to get the job done, and do it the best way. I use a Mac personally, and there really are no constraints on what tools to use. Obviously you have to justify it, you can't just do whatever you want, but if you have a good justification, use anything. Playwright has had a lot of momentum; when we started our infrastructure, it was just getting going. I know they're working on some of the tooling around it too, so it's looking very promising. But again, even with Playwright, you're still going to have to do some work to build the infrastructure around managing your tests. There's still a lot of work to be done, for sure.

    Joe Colantonio: [00:14:00] So Oren, kind of going down the rabbit hole here: does Testim only support Selenium, or if someone is using Playwright, do you give them the management tools to help with their tests as well?

    Oren Rubin: [00:14:10] We've started adding more support for Playwright and Puppeteer. For Playwright, we even released an open-source version of root cause analysis. One of the aspects we help with is understanding, when there's a failure, what's going on. It gives you the screenshots and the logs, the console logs, the network logs, and pinpoints it, saying, hey, here's the bug exactly. So that's one of the tools we help with; we even open-sourced it, and there's a free version for Playwright.
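
    The open-source root cause tool Oren mentions captures this kind of failure evidence. As a stand-in illustration of the same idea, Playwright's built-in tracing records screenshots, DOM snapshots, and network activity so a failed run can be replayed. A minimal sketch, with a hypothetical URL and selector:

    ```typescript
    import { chromium } from 'playwright';

    (async () => {
      const browser = await chromium.launch();
      const context = await browser.newContext();
      // Record screenshots, DOM snapshots, and network activity for the run.
      await context.tracing.start({ screenshots: true, snapshots: true });
      const page = await context.newPage();
      try {
        await page.goto('https://app.example.com/login'); // hypothetical URL
        await page.click('#submit');                      // hypothetical selector
      } finally {
        // Save the trace whether the steps passed or failed; inspect it
        // later with: npx playwright show-trace trace.zip
        await context.tracing.stop({ path: 'trace.zip' });
        await browser.close();
      }
    })();
    ```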

    Joe Colantonio: [00:14:42] Alright, cool. Let's get back to the topic at hand; I just had to touch on Playwright a little, because it's a burning topic a lot of people want to know about. So the next tenet, after planning and management, was control. Once again, at my old company, when they started, they had me on every single code review for every developer who created a test, and they'd mark me as a roadblock because they couldn't get to me. Every sprint review: Joe's a roadblock, couldn't get my test done. So how do you handle the control aspect of testing? Oren, and then we'll go to Maor. Any thoughts on that?

    Oren Rubin: [00:15:12] Yeah, I think that's important: you want to keep high standards. Control is a word that has a little bit of a negative connotation.

    Joe Colantonio: [00:15:21] Right.

    Oren Rubin: [00:15:21] I think what people are trying to do is actually make it into a good thing. You want to make sure that you stay with high-quality tests and high-quality code. So with that intention, how can we keep standards high? Code reviews and test reviews are great things that help. Making sure that nobody writes directly to, for example, main or master, and that a peer looks at the change: those kinds of things really help you keep those standards. That's the general part. We're also trying to provide more value by scanning people's tests and saying: hey, wait a second, you wrote the same thing someone else did; you have code duplication, and that's not a good practice. So we'll prompt and ask: are you sure you want to add that instead of reusing the same login method somebody else used?

    Joe Colantonio: [00:16:22] Absolutely. Actually, I want to dive into that soon; the next question will probably be on insights, which I think could help you have better control. Before we do, though, Maor, how do you handle this at Microsoft? I'm sure people are always curious how they test at Microsoft or Google, these big companies. Do you have any tips on how you do it? Is it part of your definition of done? Do you all create automated tests for every piece of code you write? How does it work there?

    Maor Frankel: [00:16:46] Yeah, well, we have a kind of checklist you have to go over when you write a feature and complete it. We try not to enforce things. The biggest balance, as you said, is that we don't want to become a roadblock. On the one hand, we want control; we want to make sure we don't get bad code or bad tests and things like that. On the other hand, we don't want to block people; we want to let them move. So we have to find the balance. Generally, we let the teams manage things themselves: within each team, they can review their own tests, their own code, their own changes. We give them the freedom to decide what's important for them and what's not. We do have guidelines they can follow, but we don't enforce them, because we don't want to slow anyone down. That's the way we do it at Microsoft. And as we said, we have the levels: unit tests, write as many as you can; they're pretty quick and pretty easy. We want to make sure you have a CI test if you have an integration between components, and if you have an integration with the back end, you need an end-to-end test for it. We can't always enforce that, but at least when we get bugs and things like that, we know where we're missing coverage. That's our approach.

    Joe Colantonio: [00:17:53] Alright, cool. And Oren, as people are running their tests and trying to get a handle on them, you mentioned finding insights, like whether a team is not reusing code. We had this issue as well: each sprint, teams would each write their own login just because they didn't want to touch anyone else's code. It was sometimes hard to find, because they'd just rename it something different. Do you have any help with this?

    Oren Rubin: [00:18:20] We all know that. We all know that, yeah. This is what happens when you're talking about scale. If there are just ten tests and ten reusable components, obviously that's not an issue. But when you're talking about a huge application with thousands of tests and thousands of reusable steps, how do you find them? How do you know how to change something safely? I think it's very important for the platform to help out with that, because as a human it's very hard for me to scan a thousand tests. But a machine can scan a thousand tests and find out that, hey, these exact steps you're using right now have an identical match over there called login: please use that, try it, and see that it works. That's something machines can help us with, and I guess this is where AI comes into play; it's very easy for machines to point that out. And we've released something that helps even on the fly. Imagine you start a test that begins with the login: you go to the login page, and something pops up and says, OK, someone already wrote a login step that fits right here; do you want to use it? So instead of guessing or going over thousands of functions, someone suggests: hey, there's exactly one login that fits this page, do you want to use it? Yes or no. Very simple.
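
    In a code-based framework, the hand-rolled equivalent of that suggestion is a single shared step that every test imports instead of re-implementing (and renaming) its own copy. A sketch, with a hypothetical URL and selectors:

    ```typescript
    import type { Page } from '@playwright/test';

    // The one shared login step. Tests import this instead of each
    // writing (and renaming) their own version of the same logic.
    export async function login(page: Page, user: string, password: string): Promise<void> {
      await page.goto('https://app.example.com/login'); // hypothetical URL
      await page.fill('#username', user);               // hypothetical selectors
      await page.fill('#password', password);
      await page.click('button[type="submit"]');
      await page.waitForURL('**/dashboard');            // confirm the login landed
    }
    ```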

    Joe Colantonio: [00:19:51] Absolutely. Now, another issue I've seen a lot: if you had a test covering functionality that had a bug in it, or that was flaky over time, it was hard to know; the test would pass, fail, pass, fail. Are you able to surface those types of insights as well, using a tool like, say, Testim?

    Oren Rubin: [00:20:09] Yeah, it's very easy to… I think there are two things. One is making sure you're aware: hey, show me all my tests that are currently flaky, meaning a test that passes only on the second run; I tried it, it passed, and then it failed. Those kinds of things are easy to find. Then AI can help even more by telling you: wait a second, there are a hundred failures here. I ran a thousand tests and a hundred failed; do I have to go through a hundred tests one by one? That's a lot. If AI can tell you that ninety of them actually failed for the same reason, at exactly the same step in the application, that saves you a great deal: you don't have to go over ninety tests, just one. And imagine it also tells you the test failed yesterday for the same reason, and here's the bug report someone filed. Someone already investigated that issue, so you don't even need to investigate that one test; you can see the reports from yesterday. So that's a big help.
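
    The grouping Oren describes can be pictured with a naive sketch (the types and fields here are hypothetical, not Testim's API): bucket failures by the step and error they stopped at, so ninety identical failures collapse into one investigation.

    ```typescript
    // Hypothetical shape of one failure record from a test run.
    interface Failure {
      testName: string;
      failedStep: string;    // e.g. "click #checkout-button"
      errorMessage: string;
    }

    // Failures that stopped at the same step with the same error are
    // very likely the same underlying bug, so investigate one per group.
    function groupFailures(failures: Failure[]): Map<string, Failure[]> {
      const groups = new Map<string, Failure[]>();
      for (const f of failures) {
        const key = `${f.failedStep} | ${f.errorMessage}`;
        const bucket = groups.get(key) ?? [];
        bucket.push(f);
        groups.set(key, bucket);
      }
      return groups;
    }
    ```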

    Joe Colantonio: [00:21:06] Absolutely. Another thing: obviously, in order to scale, you're going to need tools, and I always ask people about tooling. Maor, what tools did you use to help you? I'm sure when you started, before you were able to scale, you had issues that you somehow resolved in order to scale better. So what kinds of tools helped you scale all your tests?

    Maor Frankel: [00:21:25] Yeah. One thing we found important with scaling, not just tests, by the way, is federation, because we have so many different teams with so many different developers that we can't control everything. We can't be one central point where everything is controlled; we have to federate and allow teams to manage things themselves. We also want an ownership model, where the teams and the developers who write a test own that test. So if something happens to one of those tests, you know who to approach, right? If you've got thousands of tests and one of them fails, what are you going to do? Most likely you have no idea what that test is about. Another thing we found: we try as much as possible to run specific tests, with fine granularity, to make sure that what we test is exactly what changed. The more tests you write, the more flakiness you're going to have, and the more difficult it becomes to find out whose change broke what. The more specific you can be about what you run, the higher the resolution you get from your tests, and the easier it is to maintain everything and actually find bugs, rather than having flaky tests that just make noise and make everybody lose faith in the system.
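
    Testim records the owner as test metadata; in a code-based runner, one common stand-in is tagging tests by team in the title, so a failure report names the owner and a CI job can run just the affected team's tests. A sketch using Playwright's --grep filter (the tag, URL, and selectors are hypothetical):

    ```typescript
    import { test, expect } from '@playwright/test';

    // The @team-checkout tag names the owning team, so a failure report
    // says who to approach. Run only this team's tests with:
    //   npx playwright test --grep @team-checkout
    test('checkout completes with a saved card @team-checkout', async ({ page }) => {
      await page.goto('https://app.example.com/cart'); // hypothetical URL
      await page.click('#pay');                        // hypothetical selector
      await expect(page.locator('.confirmation')).toBeVisible();
    });
    ```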

    Oren Rubin: [00:22:33] Following on from what Maor said, by the way, I think it's super important when we talk about end-to-end testing. There are two meanings when people say end-to-end. Some people mean: I want all my components tested, front end and back end, all of it at the same time, and I'm testing one specific path, one scenario, but it goes all the way from the back end to the front end. As opposed to end-to-end meaning the user scenario is end-to-end: I want to log in, add to cart, do a checkout, and make the payment. That's a user-experience end-to-end, and it could run only on the front end if all the network is mocked. Versus, hey, we're checking only the login, or only the checkout: a mini scenario, but with all the components, front end and back end. Having said that, just reinforcing what Maor said: if you have a test that checks the checkout, test the checkout. Try not to test every other feature at the same time, and understand the difference.

    Joe Colantonio: [00:23:41] Great advice. So, Maor, I know you've mentioned Testim a few times in this interview. Is there anything specific in Testim you use to help you with TestOps, per se?

    Maor Frankel: [00:23:48] So, yeah, first of all, as I said, the whole concept of a test owner is something we find useful. That's been a big change for us, because before, when a test failed, everybody was like: all right, who do we need to approach, do we go to the infra team? And then we had to figure out who wrote that test, who owns it, and what happened. Now we don't even need to be involved: people can go straight to the person who wrote the test and say, hey, I think the test you've written is wrong or broken, or maybe you can help me figure out how to fix it. That's been a big improvement. Pull requests are also a big help, because people who aren't familiar with writing tests can ask somebody: hey, can you look at this test before I put it in, review it, and tell me if I'm doing something wrong or could do something better? That's been a big improvement for us too. Testim also has the concept of branches, where developers can make their changes separately and not put them straight into the master version of the tests. That way they don't add noise to the system before they're sure they've finished what they were working on. And the biggest thing they've added is the whole concept of quarantine and evaluating tests. Evaluating especially is really good, because tests generally take some time to become stable, and that's usually where people lose faith in the system: tests get added all the time, those tests aren't stable, and so the system is never stable. With evaluating, Testim has the concept of a test that runs but doesn't fail the suite. You can say: all right, put that test in evaluation, let it run, let it mature for a while, and when I feel it's stable enough, I'll move it to active, and then it can run and actually give value. And there's also quarantine, where you can just say: this test has a problem, let's put it aside until somebody can fix it.

    Joe Colantonio: [00:25:31] I love that feature. Once again, from my experience, we'd have to tag something as flaky, but it was really hard to track and a lot of manual work when I used to do it. And it was always hard to tell management, well, these five tests failed because of this; we'd have to keep track in an Excel spreadsheet or something. So Oren, could you explain this feature? How easy is it to use for putting things in quarantine and so on?

    Oren Rubin: [00:25:54] I'll start with quarantine. Every test has a status, so you can move it into quarantine. Say, for example, you have a bug and you're not going to fix it right now, and you don't want your CI to keep failing. You want to say: hey, I want to release, so I'm going to quarantine that test, and that means the test will not even run. And why is that different, by the way, from just taking the test out some other way and saying, I don't want to run this test? This special tagging helps you, first of all, not forget what happened. I've seen it in so many companies; people say, oh my God, this feature is amazing, because when someone took something out, they'd forget to bring it back. And with thousands of tests, who can remember to put that one specific test back? Oh no, wait a second, now we have only nine hundred and ninety-nine tests; we forgot one. When you have a hundred and twenty developers, you can forget these small things. Keeping track and understanding, this test belongs to one team, this test is flaky, giving meaning and semantics to those labels, really helps. As for evaluating: when you write a test, how do you know when it's ready? On one hand, you want to connect it to the CI to see it run. On the other hand, wait a second, if I put it in the CI too early, it could break my builds, because it may not be ready. How do I know? Do I want to build a schedule that runs it every night for the next week, and then I'll know my test is ready? So the concept is a test lifecycle: instead of saying my test is a draft, I say I think it's ready and put it in evaluating. That means it's going to run in the CI, but it's not going to fail the build. I can track it: I can come back in two days, look at it, and say, hey, show me my tests which are evaluating. If I see a test has run a hundred times and works every time, sure, let's move it over and say it's CI-ready. And if it's flaky, then we go back to that test and fix it, to make sure it really is CI-ready.
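
    Testim manages these statuses natively; in a hand-rolled suite, the same lifecycle can be approximated with a status registry the runner consults: quarantined tests are skipped but stay on the books, and evaluating tests run but cannot fail the build. A rough sketch, where the registry, test names, and runner are all hypothetical:

    ```typescript
    type TestStatus = 'active' | 'evaluating' | 'quarantined';

    // Hypothetical status registry; in practice this might live in a
    // config file or in each test's metadata.
    const statuses: Record<string, TestStatus> = {
      'checkout with saved card': 'active',
      'new signup flow': 'evaluating',        // runs, but cannot fail the build
      'legacy report export': 'quarantined',  // known bug: skipped, not forgotten
    };

    async function runSuite(tests: Array<{ name: string; run: () => Promise<void> }>) {
      let buildFailed = false;
      for (const t of tests) {
        const status = statuses[t.name] ?? 'active';
        if (status === 'quarantined') continue; // skipped, but still tracked
        try {
          await t.run();
        } catch (err) {
          if (status === 'evaluating') {
            console.warn(`[evaluating] ${t.name} failed (build not affected)`, err);
          } else {
            console.error(`[active] ${t.name} failed`, err);
            buildFailed = true;
          }
        }
      }
      if (buildFailed) process.exit(1); // only active failures break the CI
    }
    ```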

    Joe Colantonio: [00:28:09] I definitely love this feature. A hundred and twenty developers: how did you resist the urge to develop a solution yourselves? A lot of people say, oh, we only do open source; our developers will create our own solution. What made you say: hey, there's a solution out there, it's called Testim, and it has all the functionality we actually need?

    Maor Frankel: [00:28:26] Well, like I said, when I got to the team we didn't have any automation, we already had quite a large-scale product, and we needed to get up to speed fast. Testim was a good option because it allowed us to do that. Also, there are so many things around testing where, yes, you can do it yourself, but do you really need to? Do you really need to manage your own grid, your own test runner, your own screenshot recording? You can do all that, but how much are you going to gain by handling it? You're going to have to maintain it and support it over time, and that takes resources and shifts your focus away from what you really want to do, which is writing your tests. Obviously, using a third-party tool limits you in certain ways, but it's a balance: you say, all right, what I get here is worth giving up some of that control, and I'm not sure I want control over all those aspects anyway. So that's what convinced us: the ramp-up speed, which was very fast (we had hundreds of tests written in a couple of months), and how easy it is for developers to onboard, because it's a UI tool. And over time we get things for free from Testim that they develop for us or for other customers. We don't have to invest in that, and it lets us focus on other things.

    Joe Colantonio: [00:29:36] Okay, Oren and Maor, before we go: is there one piece of actionable advice you can give someone to help with their TestOps efforts, and what's the best way to find or contact you? Maor, we'll start with you and end with Oren.

    Maor Frankel: [00:29:45] Well, first, how somebody can approach me: I can publish my email, no problem. Anybody who has questions can send me an email or reach out on LinkedIn or whatever. As for advice on handling TestOps, wow, that's a good question. One thing I can say is that there's no perfect solution; you're always going to have to accept trade-offs. If you want control, you have to accept that you're going to slow things down; there's no way to control everything and still move fast. And if you want to move fast, you have to accept that things are going to break, but if you move fast, you can also fix them fast. So find the balance that works for you, and accept that it's never going to be perfect. Do what's right for you, your team, and your organization's DNA.

    Joe Colantonio: [00:30:30] Great advice. And Oren?

    Oren Rubin: [00:30:33] My suggestion is land and expand. Don't be slow, don't be flaky. Start small: have ten tests running in the CI, and then grow; have eleven tests. Trust is the most important thing. Always keep the CI super clean, so everyone in the organization trusts the tests. That's my advice: make sure everyone trusts the tests.

    Joe Colantonio: [00:31:04] Thanks again for your automation awesomeness. If you missed anything of value we covered in this episode, head on over to TestGuild.com/a353, and while you're there, be sure to click on the Try it for Free Today link under the Exclusive Sponsor section to learn all about Sauce Labs' awesome products and services. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation podcast. I'm Joe, and my mission is to help you succeed in creating full-stack automation awesomeness. As always, test everything and keep the good. Cheers!

    Intro: [00:31:47] Thanks for listening to the Test Guild Automation podcast. Head on over to TestGuild.com for amazing blog articles and online testing conferences. Don't forget to subscribe to the Guild to continue your testing journey.

  • Rate and Review TestGuild

    Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
