Analyzing Automation Results with Nikita Sidorevich

By Test Guild

About This Episode:

Want to know the best way to increase the visibility of your test automation? In this episode, Nikita Sidorevich, a product manager at Zebrunner, shares why automated test reporting is critical to the success of your project.

Discover how to analyze and debug your failing tests more quickly and detect flaky tests earlier. Also, hear how to get your whole team involved using flexible, team-specific automation test results dashboards.

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Nikita Sidorevich


Connect with Nikita Sidorevich

Full Transcript Nikita Sidorevich

Intro Welcome to the Test Guild Automation podcast, where we all get together to learn more about automation and software testing with your host, Joe Colantonio.

Joe Colantonio Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. Today, we'll be talking with Nikita about analyzing automation results. Nikita is an engineer with a passion for web development, design, best practices, and software development methodologies. He's also currently the product manager at Zebrunner, which is a tool that helps with smart analysis of test results. I think reporting is a highly overlooked area of automation, and that's the type of thing we'll be diving into today. So if you want to learn how to shorten your release cycle and deliver high quality, you don't want to miss this episode, check it out. The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Their cloud-based test platform helps ensure your favorite mobile apps and websites work flawlessly in every browser, operating system, and device. Get a free trial, visit and click on the exclusive sponsors' section to try it for free for 14 days. Check it out.

Joe Colantonio Hey Nikita, welcome to the Guild.

Nikita Sidorevich Hi Joe, hi everyone.

Joe Colantonio Awesome, awesome to have you. Really excited to have you on the show today. Nikita, before we get into it, is there anything I missed in your bio that you want the Guild to know more about?

Nikita Sidorevich Yeah, sure. It's probably a good idea to talk a little bit about myself. My name is Nikita, and I started my journey in this industry as a software developer. In fact, this year marks my 10-year anniversary in the industry. I started out working for a huge service provider company, and for about seven years I worked as a contractor, mostly for enterprise-scale organizations. I guess it would be fair to say that during that time I realized how messy automation can be. But let's keep that for later. Besides development, I also did a bit of DevOps and TestOps, and later I started to manage teams and projects. About three years ago I joined the Zebrunner team, and in fact, back then it wasn't really called Zebrunner, it was just an internal tool we were developing for our own needs. A lot of things have happened since then, but now, three years later, here I am playing the role of Zebrunner product manager.

Joe Colantonio Very cool. So you mentioned that, having been a developer for 10 years, automation can become messy. What are some of the reasons for that?

Nikita Sidorevich I guess maybe 10 years ago automation really became a trendy thing. And, you know, the higher the complexity of the application you're developing, or of your business, the more test cases have to be automated. That's especially critical for some domains; in the financial sector, for example, pretty much everything needs to be automated. The problem is that automation, by definition, was meant to ease things, to simplify things for you, to speed up processes that were done manually before. And I believe the biggest problem is when automation becomes less effective than it should be. That effectiveness is especially important, because I've seen situations where testing becomes a bottleneck and test maintenance becomes a nightmare, when you have to deal with a lot of unstable tests whose results aren't really reliable. That's when you have to think about the efficiency of your automation, and maybe you're doing things wrong if you got into such a situation.

Joe Colantonio Absolutely, and I think the efficiency part is a big issue. A lot of times it's great to create automated tests, right? But then what do you do after the tests are run? You have to run them in parallel. You have to run them continuously, and they're failing and they're flaky.

Nikita Sidorevich Yeah, correct.

Joe Colantonio I found that working for an enterprise, most of my time was spent actually debugging failures, and most of them weren't even real failures. So I think that's where reporting comes in. Any thoughts around that?

Nikita Sidorevich Yes, in fact, I have a very similar feeling, because I remember my days working as part of large teams, and we had a set tradition, I would say. Every morning started with a test run results review, and it could take up to two hours. We even had a dedicated person responsible for that, called the QA Captain. The QA Captain would typically start his workday by opening the Jenkins report, for example, and go through all of those automated test results. And it wasn't always clear what went wrong, whether it was a latency issue or indeed an application bug. So a lot of debugging took place, and sometimes we had to work as a whole team, both test automation engineers and developers, just to make some sense out of those reports. We were literally wasting hours of our time just doing that.

Joe Colantonio One hundred percent. I love that you're a developer, so I'm just curious to get your take on this with automation. There's been this push towards AI machine learning, but I think a lot of it's been marketing focus. But you're an actual developer. How much do you think AI can actually help with automation?

Nikita Sidorevich I guess the thing is that AI is a very broad term, right? And I would say, as a developer myself who has taken part in a lot of development conferences, I try to be proactive and read a lot of stuff on the Internet. It seems like nowadays AI is turning into a buzzword, and it's not like that happened yesterday; it's been like this for at least the last few years. I believe there are indeed aspects of AI and machine learning that can help test automation engineers take care of their issues with automation. For example, when you have reappearing issues causing your tests to fail, sometimes you have to process too much information, way more than you can keep in your head, and that's when machine learning can come in handy. Some aspects of the analysis can be delegated to machine learning: you can train those algorithms, train those networks, and at some point you can really see the benefits of incorporating machine learning, especially for test run results review. So that's what I think about that.

Joe Colantonio Yeah, I totally agree. I think the thing people miss is that machine learning is probably good for analyzing lots of data over history or over time, things like performance logs or even automation logs. A test run seems to be a perfect use case for machine learning, is that true?

Nikita Sidorevich Correct. Yeah, exactly.

Joe Colantonio Cool. So how then could machine learning help with analyzing automated test results? What are some key areas where you think people could actually benefit from using something with machine learning built in to help with reporting and test results analysis?

Nikita Sidorevich Yeah, so if we think about the routine that test automation engineers go through, we work with a lot of artifacts. Most of those artifacts are text-based. Now we have stuff like videos and screenshots, but we still have to analyze a lot of text. And even when it comes to images, we can still benefit from machine learning: we can do visual regression testing automatically with the help of systems built on top of AI. When it comes to text analysis, we can actually do a lot of interesting stuff in the area of failure classification, because failure is a very broad term once again. You have to be very sure about what is causing a failure, so you have to do some sort of classification first. Without this classification, it's really hard to understand what your reaction should be: if it's an application issue, for example, you should probably file a bug and go to the developer and say, hey, this is something that needs to be fixed. If, for example, a test needs to be updated, once again, you want some guidance telling you that. So I would say classification is where it gets really helpful. All of those artifacts that we used to process manually, application logs, test logs, stack traces, responses from the back-end, that kind of stuff, all of this can really be processed by a machine learning system, and it can help us classify those issues. And with that classification in place, we have an idea of what needs to be done about those issues, so we can actually focus on what matters instead of spending our time analyzing those results manually.
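Even without machine learning, the kind of failure classification described here can be illustrated with a minimal rule-based sketch. The rules, categories, and exception names below are illustrative assumptions, not Zebrunner's actual logic; a trained model would learn such patterns from labeled historical failures instead of a hand-written table:

```python
import re

# Illustrative triage rules: pattern found in the stack trace -> category.
# A trained classifier would replace this hand-written table.
RULES = [
    (r"TimeoutException|ReadTimeout|connection refused", "infrastructure/latency"),
    (r"NoSuchElementException|StaleElementReference", "test needs update"),
    (r"AssertionError|expected .* but (was|got)", "possible application bug"),
]

def classify_failure(stack_trace: str) -> str:
    """Map a raw stack trace to a coarse failure category."""
    for pattern, category in RULES:
        if re.search(pattern, stack_trace, re.IGNORECASE):
            return category
    return "needs manual review"
```

The payoff is the same as in the conversation: the category tells you the reaction, file a bug, fix the test, or check the environment, before anyone reads the full trace.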

Joe Colantonio Absolutely, and I love that this helps when you come in in the morning once the test results are done: you can look at them and know which is most likely a real issue, as opposed to maybe a flaky test or a known issue. That's what it sounds like to me.

Nikita Sidorevich Yeah.

Joe Colantonio So I guess along those lines then, once again, when I was working for a large enterprise, there'd be a bug introduced during a sprint, and people knew about that, but we'd still run our tests after every check-in. I'd have to remember, oh, this is one of the tests that failed because of that; this is actually not a real issue, it's a known issue they're fixing, so I don't have to worry about it right now. But then I'd have to tag it in the code, and then it would get untagged, and it became a nightmare. So can machine learning, or can reporting, help sort and categorize this as well?

Nikita Sidorevich Yeah, absolutely. Basically, even the way you described it, it's a set of repeatable steps you had to do every day. This is obviously something that can be done automatically, and a lot of reporting tools are actually doing that. I don't think this particular use case calls for machine learning; machine learning can be incorporated if you're using probability-based algorithms, so it can be used in this scenario, but most of those steps don't need it. For example, let's say you have a test that failed yesterday, and someone went ahead and filed a bug in a project management tool, a bug tracker, and it was linked to yesterday's failed test execution. The next day, if the issue wasn't really addressed, if it wasn't fixed, it feels really pointless to go through the same routine once again. So this is something that reporting tools help you speed up, to take off your plate.
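The auto-linking described above can be sketched as fingerprinting a failure and looking it up against already-filed bugs. The fingerprinting scheme, store, and ticket IDs below are hypothetical illustrations, not how Zebrunner actually implements this:

```python
import hashlib
import re

def fingerprint(stack_trace: str) -> str:
    """Stable ID for a failure: strip volatile details such as line
    numbers and hex addresses, then hash what remains."""
    normalized = re.sub(r"\d+|0x[0-9a-f]+", "N", stack_trace.lower())
    return hashlib.sha1(normalized.encode()).hexdigest()

# Hypothetical store, filled in when someone links a failure to a ticket.
known_issues = {}

def triage(stack_trace: str) -> str:
    """Return the linked ticket for a known failure, or flag it as new."""
    ticket = known_issues.get(fingerprint(stack_trace))
    return f"known issue: {ticket}" if ticket else "new failure: needs review"
```

Once yesterday's reviewer links a failure (`known_issues[fingerprint(trace)] = "BUG-123"`), today's recurrence resolves to the same ticket automatically, even if line numbers shifted in the meantime.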

Joe Colantonio Right yeah I guess also maybe not machine learning specific, but historically, having that data, knowing that maybe this test failed two months ago, they fixed it, and then it failed again. And you don't remember what they did to fix it. You'd be able to go into a report and be able to kind of find that information, I would think if it was set up correctly.

Nikita Sidorevich Yeah. This correlation is especially important indeed. If you see a failing test that has the same root cause as one that was failing, for example, six months ago, and maybe there was a bug filed in a bug tracking system six months ago and someone fixed it, then by having this correlation in place, it would be really easy for you to find what actually fixed that bug, and speed up not only the failure analysis process but the process of fixing the issue itself.

Joe Colantonio Absolutely. I guess, once again going back to my work experience, after every sprint we'd have to have a review with our management over the automation results and what happened. We'd have to look at the trends, and I had to keep track of everything in Excel spreadsheets to try to explain why test runs were less than they were before, what the failure rate was, why we failed on one day but not the other. I guess a lot of people don't leverage reporting to get that type of information, or to put up a dashboard for management as opposed to technical people. Is that something you see as a need for reporting as well?

Nikita Sidorevich Yeah, that is true. I would say that actually depends on the business, on the organization. From my experience, smaller businesses, smaller organizations, start-ups especially, don't really pay much attention to the analysis process. And I believe one of the reasons is that they don't really do much automation as a small organization or startup; they're focused more on delivery, sometimes forgetting about automation and quality, which is unfortunate. But once you have a running business, once you have a complex software solution running and making you money, you're more and more concerned about automation, and you definitely want more transparency and visibility into what's going on. In our experience, we've been talking a lot to people from the community and to representatives of organizations large and small, and we believe it becomes especially important once organizations reach a certain scale, especially enterprise-scale organizations that have thousands of tests running. When you have thousands of tests running, even if you don't have that big a percentage of failures, say five percent of tests failing every day, which is terrible if you ask me, but five percent out of ten thousand tests means the reviewing part can be a nightmare. And not only is the reviewing part problematic, you also want to understand what is actually going on. You want to know which tests are unstable, which tests are flaky, so you can have an idea of what needs to be done, how those issues can be addressed, and whether maybe you should refactor some of your tests. Ideally, you want your regression reports to always be green. In our experience, that mostly comes at a certain level of organizational maturity, so it's not really applicable to all organizations out there.
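One way to surface the unstable, flaky tests mentioned above is to mine the stored pass/fail history for tests that flip-flop. This is a simplified sketch with assumed thresholds, not Zebrunner's actual heuristic:

```python
from collections import defaultdict

def flaky_tests(history, min_runs=5, low=0.2, high=0.8):
    """history: iterable of (test_name, passed) tuples, oldest first.

    A test is flagged as flaky when its failure rate sits strictly
    between 'always passes' and 'always fails'; a test failing 100%
    of the time is broken, not flaky.
    """
    runs = defaultdict(list)
    for name, passed in history:
        runs[name].append(passed)
    flagged = []
    for name, results in runs.items():
        if len(results) < min_runs:
            continue  # not enough data to judge stability
        fail_rate = results.count(False) / len(results)
        if low <= fail_rate <= high:
            flagged.append(name)
    return flagged
```

The `min_runs` guard matters: flakiness is a statement about a test's history, so a dashboard can only flag it once enough runs have been stored.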

Joe Colantonio No, absolutely. And like you said, when you have large test suites that run for an enterprise, even a small failure rate like three percent is a lot of tests to have to debug and maintain. A lot of times I saw the team struggling to do it in time before the next test run, so eventually the sprint teams, you know, they didn't know what was going on, so they would just ignore the results. It seems like reporting could help you say to each sprint team: look at this dashboard for your team, these are actual failures that aren't known issues, they're not flaky tests, you should look at these right away. I would think that would help get teams more involved as well, and not just ignore the results.

Nikita Sidorevich That's a very good point. That actually makes me think about how some organizations do automation like this: there is a development team that does the actual development, and then there is another team, which can be located in another country or even another continent, and that is the team doing automation, really out of sync with the development. And like you said, it doesn't really motivate developers to care about automation much, because they're like, OK, that's that guy's problem. Once those things start running out of sync, that's when it becomes especially problematic. Once again, we come back to automation needing to be effective, and in a scenario like this, the effectiveness of automation is probably not very high, unfortunately.

Joe Colantonio Absolutely. So at the very beginning of the episode, you talked about how you work at Zebrunner, but it wasn't Zebrunner when you started; you were working on something, and eventually whatever you were working on turned into the product Zebrunner. We never actually talked about what Zebrunner is at a high level. So what is Zebrunner, I guess, is the main question?

Nikita Sidorevich Yeah, that's a very good one. As I said, Zebrunner initially started as an internal tool for a company that was in fact offering services in quality assurance; basically, it was a service provider. Our goal back in the day was to make the test automation process as transparent for our customers as possible, and that's how the idea behind Zebrunner was born, even though it wasn't called Zebrunner back then. It all started with a set of relatively simple static reports and a very basic dashboard providing a very high-level view: how many failing tests you have in a test run, or, for example, what the daily pass rate is, that kind of stuff. Pretty high level. And the thing is that while this organization was onboarding more and more customers, this tool came in really handy for our teams. At some point there was a dedicated team of a few developers working on it half-time, and they started to evolve this still-unnamed tool. Around the same time, we made it open source. We focused on engineer productivity and more advanced analytics, beyond what I just mentioned. In terms of test results review, we added features speeding up the process for engineers. We created a whole bunch of integrations with the standard project management and test case management toolset. Then we improved our dashboards, building analytics on top of the historical data collected by Zebrunner. Basically, Zebrunner became very popular among our dev teams and among our customers' engineers, and that's when automation engineers decided to use it as well. I believe that's when we realized we wanted to move further as a standalone product company, to bring Zebrunner out to the world. These days, in fact, Zebrunner is more than just a reporting tool; we try to tie a lot of aspects of testing together in one place. We have our own open-source test framework, for example; we have our own mobile device farm; we have our own managed, scalable Selenium grid. These are basically the things most organizations are using: they have to do some sort of web testing, so they can buy services from companies like BrowserStack or LambdaTest, and sometimes they use real devices; manual cases usually run on real devices, but nowadays that's all automated as well. So you need something to tie everything together, and this is where Zebrunner comes into the game: the reporting and analytics part of Zebrunner is what ties all of those aspects together in one place, under one umbrella.

Joe Colantonio I love that story, because it's not a product that was just made up, like, maybe there's a need for this, let's create it, put it out in the field, and see what the result is. It was actually based on feedback you received from multiple clients; you built an internal tool in-house and then realized you could help other people. So I love that that's how it originated.

Nikita Sidorevich Yes. And we're still actively talking to people; it's not like we decided we have enough data, we're fine with what we have, and we're going to evolve Zebrunner in a certain way. We are doing a lot of interviews. We are talking to developers, engineers, product managers, product owners, all kinds of people from the industry, to understand what kinds of issues they are facing in their automation process, in their software delivery process. We are trying to hear their needs, to hear about their pains, and that's what we do to get an idea of how we should evolve Zebrunner in the future.

Joe Colantonio Nice. I know a lot of people who listen to the show, a lot of Guilders, are really into helping make products better. How do you do that? Is there a formalized process? If someone says, oh, I have an idea that I would love to have in a reporting tool, can they reach out to you? Is there a website or URL where they can submit feedback?

Nikita Sidorevich Yeah, absolutely. It's worth mentioning that Zebrunner is offered in three options, and that's important in the scope of the question you asked. First, Zebrunner is offered as a free, open-source community edition; it's open source on GitHub, we have some repositories there, and everything is public. So you can go ahead and submit an issue. You can also find a channel to join: we have our own Telegram channel and our own Slack workspace, so we do instant messaging with our community members as well. That's one way of reaching out to us. And then there is a cloud offering of Zebrunner, and there is a form you can fill out if you have a broader question about test automation in general, or about a particular need, so you can reach out to us. Describe your situation and we'll get back to you. We'll schedule a call, and we'll be happy to talk not just about Zebrunner but about how you do automation in general, and see if we, as a company providing a solution, can help you deal with that problem.

Joe Colantonio Nice. Now, I know there are other solutions out there as well, things like Allure and ReportPortal. How is Zebrunner different? If someone's trying to decide among all these solutions, which one should they go with? Anything you think differentiates Zebrunner from these others, not making it better or worse; just wondering, if someone had a certain need, would Zebrunner match that need better than these other solutions?

Nikita Sidorevich Yeah, that's a very good question. Zebrunner is more of a live dashboard rather than a static one, compared to, for example, Allure. A lot of our analytics is actually built on top of the historical data we store on the Zebrunner end, and we use it to build analytics, predictions, and trends. That's one thing that differentiates us from tools like Allure. We also rely heavily on machine learning, which is not something you see often, and we do a lot of classification with machine learning. So basically, a lot of the things that improve engineering productivity and speed up the test results review process are built on top of machine learning algorithms.
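The "trends on top of historical data" idea boils down to aggregating stored results over time. As a minimal sketch (the record layout here is an assumption for illustration, not a real schema), a daily pass-rate trend might be computed like this:

```python
def daily_pass_rate(results):
    """results: iterable of (date_str, passed) pairs from stored runs.
    Returns {date: pass rate in percent}, ready to plot as a trend."""
    totals, passes = {}, {}
    for date, passed in results:
        totals[date] = totals.get(date, 0) + 1
        passes[date] = passes.get(date, 0) + int(passed)
    return {d: round(100.0 * passes[d] / totals[d], 1) for d in totals}
```

A static report can only show one run's numbers; keeping the raw per-run history around is what makes this kind of cross-run aggregation, and predictions built on it, possible.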

Joe Colantonio Is that functionality built into the open-source one, or is it only for the paid solution?

Nikita Sidorevich Unfortunately not. The machine learning is something that is only available to our cloud customers. We are thinking about open-sourcing the machine learning part; that is probably something that is going to happen, but we're still considering it. Instead of machine learning, though, we have a lot of non-probability-based algorithms that are also used to, for example, track known issues, do auto-linking based on historical data, link test executions to a test case management system, that kind of stuff. So the core functionality is still in the open source. There are some features in the cloud option that are not available in open source, and I would call those enterprise-grade features, because aside from machine learning, for example, we have a thing called Projects. The concept is very similar to Jira's, and it's really helpful when you're operating as part of a larger organization running multiple projects at the same time: all of those projects can coexist in the same workspace, basically a workspace inside a workspace. That's not something you have in open source, and usually it's not relevant for a smaller organization developing one product. So some of those features, as I said, are more enterprise-grade, and all of the core functionality is still available in the open-source offering as well.

Joe Colantonio You mentioned that it works with Jira; I believe it has a bunch of different integrations. If someone doesn't find an integration, how hard is it to implement their own integration with Zebrunner, if they have some other type of solution that's currently not supported?

Nikita Sidorevich Well, it's probably important to understand how Zebrunner works first, because there are two types of integrations when it comes to Zebrunner. The first type is what actually allows you to push data to Zebrunner. Our initial goal was to make this process as simple as possible, and that's why we created a bunch of out-of-the-box integrations; we call those reporting agents. Right now we have integrations for the most popular Java frameworks, so those work out of the box. And guess what, we have support for Cypress; we are going to announce this agent very soon, we're just wrapping up the documentation. If you're doing automation in .NET, we have NUnit support, and Python engineers can use our pytest agent. And if you use something I didn't mention, or something that is not listed on our website, all of this reporting stuff is built on top of a REST API; there is no magic behind it. We have a REST API guide allowing you to create your own integration, so if you are using a test framework that is not currently supported by Zebrunner, it is really easy to create your own. We provide a guide, and in fact, if the integration is in demand, we also offer our own help, so it can be a shared effort to create that sort of integration. That's the first kind of integration, the one that allows you to push data to Zebrunner. And then, as a reporting and analytics tool, Zebrunner itself is integrated with a lot of other tools from different categories. Those can be project management tools like Jira and test case management tools; for example, we have an integration with TestRail, we have an integration with XCUITest, we have integrations with popular Jira plug-ins such as Xray, and different integrations are around the corner.
We are also integrated with the major managed Selenium grid vendors, so we have integrations with BrowserStack, SauceLabs, LambdaTest, and more. Actually, if you're willing to become our partner, we have partnership agreements with many of those providers. Usually how it happens is that representatives of their teams reach out to us, and we discuss what we want to do on our end and what can be done on their end, so that both their customers and our customers and users can benefit from the integration; it happens as a collaboration of two development teams. Anyway, we have a lot of public APIs, so if you want to incorporate your own tool, for example an in-house bug-tracking tool or something like that, we will be happy to provide an API guide. This is not something that's available publicly, because every request is special when it comes to this kind of integration, so you need to reach out to the team via the support channels I mentioned before, and we'll be happy to help and to provide API documentation. And once again, if this is something in demand, if you're not the first person to ask us about this integration, we'll be happy to provide our own developers to help you create it.
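Since the reporting agents are just wrappers over a REST API, a custom agent mostly comes down to assembling and posting the right payload per test. The endpoint path and field names below are made-up placeholders for illustration, not Zebrunner's actual schema:

```python
import json

def build_result_payload(test_name, status, duration_ms, stack_trace=None):
    """Assemble the JSON body a hypothetical custom agent would POST to
    a reporting endpoint (e.g. something like /test-runs/{id}/results).
    Field names here are illustrative, not the real API contract."""
    payload = {
        "name": test_name,
        "status": status,            # e.g. "PASSED" or "FAILED"
        "durationMillis": duration_ms,
    }
    if stack_trace is not None:
        payload["stackTrace"] = stack_trace  # only attached on failure
    return json.dumps(payload)
```

A real agent would hook this into the test framework's listener/plugin mechanism (a TestNG listener, a pytest hook) and send the body with an HTTP client after each test finishes.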

Joe Colantonio So we've talked all about reporting, but when I was researching Zebrunner, I kept running across the phrase "handles parallel test running." So how does the Zebrunner reporting tool help with parallel testing?

Nikita Sidorevich Well, there is one more aspect of Zebrunner: we have a thing called the launcher. The launcher is a user-friendly UI that allows you to actually schedule and execute test runs. There are two things that are probably important about launchers. The one we have right now is relatively simple; it only supports a limited number of test frameworks, because what the launcher does is create a level of abstraction between a user and the particular test framework you're using. It allows you to configure your test runs via the UI, and that's where you can control the level of parallelization. That's one thing. The second thing about launchers, and this is something really exciting that we hopefully will release by the end of this year, is a whole new, brand-new revision of launchers. We still want to keep this layer of abstraction between a user and a test written by the developer, but it's going to be built on top of containers; basically, we want to run tests in containers. This idea of a containerized environment makes things simpler for us and for engineers: if you have, let's say, a Docker image that needs to be used to execute your tests, you can do some configuration on Zebrunner and then schedule your test runs via this special UI. I would say it's not really easy to explain in a few words, unfortunately, but the main idea is that we want to make sure a test automation engineer doesn't need to go to Jenkins, for example, to control the parameters of their test launch, et cetera. Everything happens within the same ecosystem, so you don't need to switch between different systems a lot.
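The container-based launcher idea can be sketched as: take an image plus a parallelization level, and emit one container invocation per shard. The image name, environment variables, and CLI shape below are all invented for illustration, not the actual launcher design:

```python
def launcher_commands(image, suite, shards):
    """Emit one `docker run` command line per parallel shard, so each
    container executes its own slice of the suite independently."""
    return [
        f"docker run --rm -e SHARD_INDEX={i} -e SHARD_TOTAL={shards} "
        f"{image} run-suite {suite}"
        for i in range(shards)
    ]
```

A scheduler or CI job would then start the N containers concurrently, and the reporting agent inside each one would push its shard's results back, which is what keeps the whole flow inside one ecosystem instead of hand-editing Jenkins job parameters.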

Joe Colantonio So I first heard about Zebrunner when I was doing my new show, when I came across your framework, Carina. And you mentioned Carina very briefly earlier in this episode. So could you just explain really quickly what Carina is, for folks who may say, "Oh, I've never heard of that"?

Nikita Sidorevich So Carina is basically a test framework that is built on top of TestNG, and it combines support for API testing and web testing. We basically just added some popular features to TestNG to have a framework that would cover them all. As I said, API testing: for example, some teams use Rest-assured for API testing, so we kind of incorporated some of those concepts into Carina. Our idea was to create a framework that addresses, if not all, then at least most of the concerns related to different kinds of testing. This is the idea we initially had when we started to work on Carina. It is something that we still use daily, and we have a decent community around it. It's not something that only we use; a lot of people around the world are using Carina, and they are contributing to it. And from our experience, it turned out that it's really easy to get started with Carina, especially if you are not an experienced test automation engineer, because you have everything in one place. You don't need to learn two or three different tools to test APIs and then to do a web test. Basically, we have everything in one place, under one umbrella.

Joe Colantonio Now, it's also open source as well, correct?

Nikita Sidorevich Yes, it's one hundred percent open source. There is no paid version of Carina, nothing like that; it is completely free.

Joe Colantonio So Nikita, I know you work with a lot of customers that actually use Zebrunner. Any use cases or anything interesting you see your clients using Zebrunner for that is really working for them, or any success stories you'd like to share?

Nikita Sidorevich Yeah, absolutely. In fact, I can share two stories demonstrating two different perspectives on Zebrunner. When we started our conversation, we were talking about how messy automation can be. That's not always the case, especially when you are starting a new project and have a chance to do everything the right way, from scratch. I have one story about such a scenario. Basically, one of our customers just started to build automation, and they have been using Zebrunner since day one. In our experience, they mostly use it as a collaboration tool. It is basically something that is used by their engineering staff, test automation engineers, and developers. They use Zebrunner as a shared space where they can both review the results and then bring the developers in, so the developers can see the artifacts related to a particular test execution, get an idea of what went wrong, and speed up the process of detecting and discussing the issue. So it's an engineering team that is using Zebrunner on a daily basis. That would be one scenario. The other scenario is more complicated, I would say. There is one large company that is using Zebrunner, and they basically started using Zebrunner when they already had a lot of automated test cases in place. If I'm not mistaken, they had around two hundred and fifty test cases automated, and by that number you can imagine how large this organization is. They were doing a lot of outsourcing for test automation, and at some point in time they realized that they didn't have an idea of what was going on with automation, how reliable it was, or how they could combine and aggregate the results provided by different teams in one place. That's when they started to incorporate Zebrunner, basically to keep everything in one place.
And the management is actually using those dashboards. Some of those dashboards are pretty advanced these days. For example, right now they can give you an idea of how your pass rate changes over time, and in a way that can help you understand the ROI of your test automation. Basically, you can see how many new test cases are automated and how many of the existing test cases are unstable, so you can get an idea of where you need to invest your engineering time to fix those problems. Zebrunner for them is a tool that helps them clear up this mess and, yeah, make test automation great again, I guess. They're happy using Zebrunner these days. So I would say those would be the two most representative stories about customer success with Zebrunner.
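The pass-rate-over-time dashboard Nikita describes boils down to simple arithmetic over aggregated run results. As a minimal sketch of that calculation (the data shape and field names here are invented for illustration, not Zebrunner's actual API or schema):

```python
from datetime import date

# Hypothetical run records: one entry per test run, with passed/failed counts.
runs = [
    {"day": date(2024, 6, 1), "passed": 180, "failed": 20},
    {"day": date(2024, 6, 8), "passed": 190, "failed": 10},
    {"day": date(2024, 6, 15), "passed": 195, "failed": 5},
]

def pass_rate(run):
    """Percentage of tests that passed in a single run."""
    total = run["passed"] + run["failed"]
    return run["passed"] / total * 100

# Build a (date, pass-rate) trend suitable for plotting on a dashboard.
trend = [(run["day"].isoformat(), round(pass_rate(run), 1)) for run in runs]
for day, rate in trend:
    print(f"{day}: {rate}% pass rate")
```

A rising trend like this one (90% to 97.5%) is the kind of signal the dashboards surface: it suggests stabilization work is paying off, while a flat or falling line points at where engineering time should go.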

Joe Colantonio Very cool. Nikita, before we go, is there one piece of actionable advice you can give to someone to help them with their automation testing reporting efforts? And what's the best way to find or contact you or learn more about Zebrunner?

Nikita Sidorevich Talking about finding me and learning about Zebrunner, you can always go to our website, where you can find all of the links related to all of the products we support and maintain, such as Carina and mCloud, which is basically our managed device farm. Or you can join our community in Telegram or in Slack, or we can start a conversation on GitHub. You can get in touch with us and we'll be happy to chat. Usually, our response time is really quick, so it's not like you'll have to wait days for an answer; it's usually a matter of hours. And once again, we are very supportive, especially when it comes to people who are willing to use Zebrunner. If something is not working for them, if they don't understand something, we are there to explain things to them. We are there to help. And usually, we can give you advice even on how to build and structure your automation. One of the things we did, and this is actually going back to the first part of the question about advice for test automation, is some sort of audit of test automation. For example, people were coming to us saying, hey, we don't know if we are doing automation the right way, can you guys help us? So we were doing a sort of audit, and basically we were giving recommendations like best practices in terms of writing tests so they can be easily parallelized. For example, we were giving advice on how to not end up with flaky tests. And I would say that you should really use reporting if you have a lot of tests; this is something I said already. If you have a lot of tests, if you are struggling with the analysis process, if you feel like you are dealing with the same routine again and again and again, a reporting tool is probably something that can really, really help you make your automation effective again.

Joe Colantonio Thanks again for your automation awesomeness. If you missed something of value we covered in this episode, head on over to TestGuild.com. And while you're there, make sure to click on the try for free today link under the exclusive sponsors' section to learn all about SauceLabs' awesome products and services. And if the show has helped you in any way, why not rate and review it on iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation podcast. I'm Joe. My mission is to help you succeed in creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

Outro Thanks for listening to the Test Guild Automation podcast. Head on over to TestGuild.com for full show notes, amazing blog articles, and online testing conferences. Don't forget to subscribe to the Guild to continue your testing journey.

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
