Keep Track of Your Automated Tests using Delta Reporter with Juan Negrier

By Test Guild

About This Episode:

Want to know how to monitor all your automation test results in one place? In this episode, Juan Negrier, a co-founder of Delta Reporter, will discuss why he created this solution and how it can help you. Discover how it can help your organization with automation testing, main features, possible implementations, and ways to deploy it. Listen up!

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Juan Negrier


Juan Negrier is the co-founder of Delta Reporter, a solution that integrates the results from all the different tests that are run when there is a new release. Juan has more than 7 years of experience working as a QA Engineer at companies like Everis, Yapo.cl, and Apple. He is currently working at Distilled SCH in Ireland, where Delta Reporter is being used to control the quality of most of the releases.

Connect with Juan Negrier

Full Transcript Juan Negrier

 

Joe [00:01:30] Hey, Juan! Welcome to the Guild.

Juan [00:01:33] Hey, Joe! Thank you for inviting me.

Joe [00:01:35] Awesome, awesome to have you. I guess before we get into it, is there anything in your bio that I missed that you want the Guild to know more about?

Juan [00:01:40]  Well, I don't know really. I don't know. I'm not sure really.

Joe [00:01:44] Okay. Sure. So I guess, as I mentioned in the intro, Delta Reporter. I guess people are curious to know, at a high level, what is Delta Reporter?

Juan [00:01:51] Oh yeah. It's basically a tool that you can run on a server, on any (unintelligible), on Docker, on whatever you have. Actually, what we recommend is to deploy it into EKS using Helm. Basically, when you have it alive on a server, you use one of the plugins we have for your framework, and then you can send the data from your tests directly into Delta Reporter. And the cool thing about it is that you can send data for different types of tests. I mean, if you have testing details (??), contract tests, or unit tests, you can send those to Delta, and also your end-to-end tests; if you are using (??) WebdriverIO you can also send that. So in Delta, you can basically integrate all these results. We have things like a pyramid, like the famous testing pyramid, and we're actually able to draw that for the launch. So basically, you can see your launch, and then you can see if you are actually achieving this pyramid of testing, with a lot of unit tests, a lot of contract tests, and just a small amount of end-to-end tests or maybe start (??) reverse or whatever. So basically, in just one view you can check what your cost of quality is, how your test execution is going, and how everything is, basically.
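
To make the plugin flow concrete, here is a rough sketch of wiring a Delta-style reporter into a WebdriverIO config. The reporter name, options, and endpoint below are illustrative assumptions based on the conversation, not Delta Reporter's documented settings:

```javascript
// wdio.conf.js -- hypothetical sketch of registering a Delta-style reporter.
// The reporter name ('delta') and all of its options are assumptions for illustration.
exports.config = {
  // ...your existing capabilities, specs, framework settings...
  reporters: [
    ['delta', {
      host: 'https://delta.example.com', // your Delta Reporter instance (assumed option name)
      project: 'My Web App',             // groups results in the dashboard (assumed)
      testType: 'End to End',            // lets Delta slot these runs into the test pyramid (assumed)
    }],
  ],
};
```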

Joe [00:03:08] Nice. So I'd like to drill down into this a little bit more. But, you know, what were you struggling with that made you want to create this? Were there any other tools that you were looking for that you couldn't find? And you're like, “Oh, we're going to have to create it ourselves.” Like, why take the time to create your own open-source project like this?

Juan [00:03:23] Yeah, that is a very good question. Thank you. The reason is that we were using Allure before, and the JUnit (??) reports normally in Jenkins, but we had the problem of how to keep track of the history of the tests, because what happens is that we have a lot of processes. We also have flaky tests. We also have tests that you need to maintain, and then when they fail you need to be fast at figuring out if the test is failing, or if the system is failing, or maybe it's the environment that is the issue. So basically we went looking for something that was able to give us feedback immediately, or at least make it easy to get the information we want in order to figure out this kind of stuff, basically the root cause of all the failures. And we were looking at some tools. For instance, we really tried to use one tool that is called ReportPortal, which is amazing. It's really amazing. But the problem is that when we wanted to implement it, we hit some problems with the other teams, like the SRE and DevOps teams, because the tool was so (??) that the effort to deploy it was basically not a priority for them. So basically it was only up to us in QA (??), and, I don't know, we just believed that probably it was easier for us to just do something ourselves, just to improvise, basically. We didn't think much about it. We just wanted to make it more simple, easy to deploy and everything, and then, I don't know, we just did it, really.

Joe [00:04:53] Nice. I was going to ask about ReportPortal because it sounds like it's similar to ReportPortal. So what makes it different? Is it easier for you to deploy without getting your other team members involved? Is that the reason why you went with it again?

Juan [00:05:06] Yeah, exactly. For instance, in Delta Reporter, we don't have user management at the moment. We are thinking about it, but we don't have that at the moment. So, for instance, if you're using the JUnit reporter or Allure or something like that, or Timeline Reporter, which is also very good, and you just want something simple, where you don't need a huge system to control everything, probably Delta is good for that, because you can basically integrate everything. It's easy to deploy. It's small. So that's the difference we have with ReportPortal, basically. But ReportPortal is amazing, by the way; it's really impressive, the work (unintelligible).

Joe [00:05:43] Absolutely. I see other people, too, when they create dashboards, use something like the ELK Stack. So Elasticsearch on the back end as the database, and Kibana on the front end. Is this something similar, or does Delta Reporter make it even easier and more tester-focused? I don't know if you've seen Elasticsearch and what the differences between the two are or not.

Juan [00:06:03] Yeah, I have never used Elasticsearch, but I know what it is. We're using it in my company, actually. Yeah, I think that it's a lot simpler, really. For instance, the back end is just running Python with Flask, and the front end is using Next.js. And in order to store everything, the approach is really, really simple. Even the screenshots, since they are compressed and everything, we're actually storing those directly in the database. And we are always erasing stuff and everything. So it's really simple. We are not using anything fancy for the moment. It's really simple to deploy and really easy to use. (Unintelligible) which is the other good thing.

Joe [00:06:44] That's probably a key feature, though, the simplicity, because you don't want to get bogged down in making awesome tech that no one uses. It seems like you created it to be easy, but it gives you all the information your team needs to be successful. That's what it sounds like.

Juan [00:06:57] Yeah. Yeah, pretty much. It's something like that. So basically if it's a war and you need a tank then it's basically the simplest tank ever, but it does its best.

Joe [00:07:07] So I guess the next question is, once you do have it implemented because we should dive into that also, how does it help your organization with automation testing? So you created it and now it's in your organization. How is it helping you and your team?

Juan [00:07:19] Yeah, basically the thing is that in my company we have basically two systems that we maintain, but they have different architectures. One is monolithic and the other is based on microservices. So it happens that Delta is helping us with all these releases. For instance, for the monolithic system, we are basically releasing this big piece with all the components in one place, and we always run all the tests, you know, basically the integration, end-to-end, and everything. So basically the cool thing with Delta is that when we have a release and something fails, and we need to check whether it is okay to release or not, it's just basically one click to go to Delta and then check all the history of the tests, which tells you exactly what failed; it's also cool to see the integration of the end-to-end tests. And recently we have a new feature that we call smart links. These are basically links where you can pass data from the test. So one thing we did with this feature is to create a link to Kibana, where we have all the logs for the system. And it's cool because basically when you click on this Kibana button, you are sent to Kibana and we filter on the timestamp where the test was running, an end-to-end test. So then we can check the logs there as well. So it's pretty amazing, because basically if something fails we have access to the logs too, as well as screenshots and everything. So figuring out what was broken is faster than before. (Unintelligible).
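
As a rough illustration of the smart links idea, a link template could interpolate run metadata, such as the test's start and end timestamps, into a Kibana URL. The schema below is hypothetical; only the Kibana time-range query syntax is real:

```javascript
// Hypothetical smart link definition -- not Delta Reporter's actual schema.
// Placeholders like {start_time} and {end_time} would be filled in from the
// metadata of the specific test run the button appears on.
const kibanaSmartLink = {
  label: 'Logs in Kibana',
  url:
    'https://kibana.example.com/app/discover#/' +
    "?_g=(time:(from:'{start_time}',to:'{end_time}'))",
};
```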

Joe [00:08:49] Now that is awesome because a lot of times you're like, “Okay it broke. Now I need to go into this other system. I need to find it.” So because it's timestamped you just click on the link. It takes you directly there. That's pretty awesome. What other information can you pass with a smart link? Can you do something like performance, like high level, how long it took the test to pass? Things like that.

Juan [00:09:07] Things like that. And also on the client. For instance, we have a client for WebdriverIO, and for that client we have an integration with visual regression testing through another tool called Spectre, which is very worthwhile. I'm also the maintainer (??) for the visual regression plugin for WebdriverIO, the one for Spectre. So basically what I did is to create a link that is sent back to Delta with the URL for Spectre. And basically that is done with a command (??) on the client for WebdriverIO, where you can send any data back to Delta and then use this data in the smart link. So in my case, what I did was to send the URL for Spectre, where the screenshots of the test run are stored, and then I just create the button using this data. You have the button to go to the logs in Kibana, and then you have this button to go to Spectre, this visual regression tool. So you can also check the screenshots for the visual regression tests.
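
For example, a test could push the Spectre run URL back to Delta right after the visual check, and a smart link template could then reference that value. The command name and payload shape here are assumptions, not the plugin's verified API:

```javascript
// Inside a WebdriverIO test -- a sketch only; 'sendDataToDelta' is a
// hypothetical custom command standing in for the real plugin call.
const spectreRunUrl = 'https://spectre.example.com/runs/1234'; // e.g. returned by the Spectre client
browser.sendDataToDelta({
  key: 'spectre_url',   // a smart link template could then interpolate {spectre_url}
  value: spectreRunUrl,
});
```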

Joe [00:10:04] That's nice. So it sounds like it works with WebdriverIO. What else does it work with, then? Say I'm using another tool. Can I implement it easily using something like Serenity or TestProject?

Juan [00:10:18] Yeah. What we're focusing on (??) now is basically creating more plugins for other test frameworks. We're now working on the one for JUnit. Actually, Christian Moisa is another collaborator and he's working on that. And then we have collected (??) the JUnit one. We have WebdriverIO. We have (unintelligible) TestNG. And yeah, we are now looking into which ones can also be implemented. We're looking into Jest (??). Jest is going to be the next one. So, yeah, basically we're just looking at and studying (??) which frameworks are used first. Obviously, we're setting priority for the ones we use in our work. But yeah, I mean, we're just looking into that, actually.

Joe [00:10:59] Awesome. Because it is open-source if someone wants to create their own plug-in, how easy is it for them to do that?

Juan [00:11:04] Yeah, from what I saw, normally you extend the test framework you're using… for instance, for WebdriverIO it was using these hooks for when the test starts, when the test is finished, and everything. And then you take the test name and the status, stuff like that, and send that information back to Delta, and then control everything. It can actually be complicated, really. When I was doing the one for TestNG it was actually complicated, because when you are running several tests in parallel, they run (??). You need to use x amount of stuff and it's messy. So we still have issues with big players (??), actually, because it's messy. (Unintelligible).
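
A minimal sketch of that hook-based approach, using WebdriverIO's reporter interface (which really does expose hooks like onTestEnd); the Delta endpoint and payload shape are assumptions for illustration:

```javascript
// custom-delta-reporter.js -- skeleton of a plugin built on WebdriverIO's
// reporter hooks. The endpoint path and payload are hypothetical.
const WDIOReporter = require('@wdio/reporter').default;

class MyDeltaReporter extends WDIOReporter {
  onTestEnd(test) {
    // Forward the test's name and outcome to a (hypothetical) Delta endpoint.
    // Assumes Node 18+, where fetch is available globally.
    fetch('https://delta.example.com/api/v1/test_outcome', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ name: test.title, status: test.state }),
    }).catch(console.error);
  }
}

module.exports = MyDeltaReporter;
```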

Joe [00:11:49] It sounds a bit difficult. So if someone was to try this, then it takes some time. It's not like an easy, easy one and done type of deal, it sounds like.

Juan [00:11:58] Yeah, but it can be done. That's the good thing.

Joe [00:12:02] Cool. So also, how does it integrate with your system? Is it called from your CI/CD system? Like, if you're using Jenkins, how do you integrate it? You said it works with the pyramid. So how do you get your developers to have their JUnit or unit tests send data to Delta? Do they have to code anything, or is it just a matter of plugging it into, say, Jenkins, so when it runs it just knows how to collect the data?

Juan [00:12:25] Yeah, that is a good one as well, because actually we have in our backlog to create the integration with Jenkins, like a direct link or something like that to Jenkins from Delta. Because what we are doing at the moment is just to use a simple Slack button. Basically, we have channels where we have all the releases and stuff, and we just have buttons there that link from Slack back to Delta. That is what we're doing at the moment. It's part of the plan to actually do something for Jenkins. For Jenkins, what we are doing at the moment is just sending information about the pipeline back to Delta. So basically it's possible to put a button on Delta to move from Delta to Jenkins using this smart links feature, but not from Jenkins to Delta; that is what we need to do now. And I don't know if it's possible maybe to do the same for Azure or Travis or (unintelligible). We need to take a look at that.

Joe [00:13:18] Nice. So how long have you been developing this? Is it fairly new? Is it a year, months?

Juan [00:13:24] I've spent more than a year on it, like a year and a month, actually.

Joe [00:13:28] And you've actually been using it in-house for like a year and you're getting a lot of benefits from it, it sounds like.

Juan [00:13:34] Yeah, yeah. We started using it probably around six months ago or something like that.

Joe [00:13:40] Cool. So I love the smart links feature. What are some other features in the tool that you think people would really benefit from?

Juan [00:13:47] It's also possible to leave notes on the tests. For instance, imagine that you are troubleshooting what is happening with the test, and there is something that is probably a bit alien. So it's good that you can actually leave a note; since we don't have users or login at the moment, you need to leave your comment and then your name. It's the only way at the moment, but that is a feature we have. And also, since we have basically a structure that is based on tests, and then we have the test that is run for the specific test run, the notes (unintelligible), it's stored on the higher-level test (??). So basically, if you make a new run and the same test is running again and again, you're going to see the note again on the test. It's the same with test history. Test history basically shows the results for that test across test runs (??). And also, with the test history, you can open the previous tests, check why a test went from pass to fail (??), and check the screenshots too, to see if maybe it's the same failure or something. So it's not only checking if the test failed; it's also checking information about the test, so you can see whether the failure may be similar or not. And we also have a feature called (unintelligible) to set the resolution for the test. You can imagine that maybe you're going on vacation while this test is failing, and you want to be sure that people don't waste time looking at it again. You can set the resolution, saying that the test is flaky, or there's a probe (??) effect, or it needs more research, or maybe it's an environment issue; maybe, as you mentioned, Elasticsearch was not (unintelligible) was it. So, yeah, basically those are the features we have in Delta at the moment. Something we want to implement, since we have a back end based on Python, is to see if we can use something like scikit-learn or pandas in order to automate some of this stuff. Basically, parse, for instance, these logs we have and raise an alert, or maybe suggest an automatic resolution. That would be amazing to do, and we are trying to examine how we can do that.

Joe [00:15:48] Very cool. So it uses Python behind the scenes, and Python has a lot of awesome libraries. So if someone's trying to implement it themselves, I guess they would have to use Python, is that correct?

Juan [00:15:56] Yeah, exactly. I was taking a look and it looks like it's not going to be that difficult. It looks like the libraries (??) we have in Python for machine learning are actually really strong. So probably it's not that difficult to do. But, you know, priority is something, so it's like…

Joe [00:16:10] Right. I love the concept of notes and being able to flag things as flaky. I used to work for a large enterprise. We had like eight sprint teams. They'd have a test fail and say we're going to fix that the next sprint. And so you'd always have to write it down in an Excel file. And then when you're reviewing every day…

Juan [00:16:27] Oh, yes.

Joe [00:16:27] …why it failed, I'd have to go, “I forgot why,” because I couldn't annotate it in Jenkins. It sounds like this type of notes feature can help you with that.

Juan [00:16:35] Yeah, true, because for us it was the same, keeping us (??) with everything. And also, keeping track of data like this (??) is normally messy and everything, so, I don't know. I mean, for us it's been great, to be honest, because we can now keep track of all this information in one place. Also, since we have this difference between environments, with the microservices that are not like the monolith, some approaches that we apply for the monolithic one don't work for the microservices. I don't know, given the (unintelligible) as well, so…

Joe [00:17:02] Cool. I like the approach of using Python behind the scenes, because it sounds like it's easy to expand it out to other things, since there are so many libraries in Python that can help. So you talked a little bit about plans for the future, but is there anything else, maybe years down the road, you'd like to see implemented, that you probably can't do right now but would definitely like to get to, you know, in the near future, basically?

Juan [00:17:23] Yeah, I don't know. It's complicated to say, really, because for one thing, we want to really keep it simple. So I don't know, really. I mean, as you mentioned, Python has so many possibilities that probably an expansion to something else is really doable, and probably not that difficult as well. We are still studying whether we want to implement users. We probably do, actually. So basically, for years to come, I really don't know; probably starting on automatic analysis of logs and everything. For instance, in our case, in our company, we are actually storing information in some data sources, so basically we are also thinking now that maybe we can actually connect to some of the sources where we store the logs and get data from there. I'm not talking about logs from the test. I'm talking about logs from the system while the test is running.

Joe [00:18:11] Right.

Juan [00:18:11] So that will we be…

Joe [00:18:13] That would be a killer.

Juan [00:18:14] …also amazing to use the process and get something like five or more or smart links (??).

Joe [00:18:19] Yeah, that would be very cool. So if someone wants to implement Delta Reporter, what kind of infrastructure do they need? Do they need to have anything in place, like containers? What do you need to get running if someone wants to do this from scratch, just take Delta Reporter and start using it within their company?

Juan [00:18:35] Yeah, what we have at the moment is a Docker Compose file (??). That is the easier way to run it, really, because it's just Docker Compose (??) running everything. There is actually a starter script in our main repo. So that is actually cool, because if someone wants to just test this on a local machine and see how it works and everything, that is the way it works (??). Then for something more enterprise, we have a Helm repo, so basically you deploy it into Kubernetes. It is actually really, really easy. It's just creating a profile, pointing to this repo in Helm, and then just deploying it. That will deploy a microservice for the back end, a service for the front end, and then it deploys the database. That is really what we have at the moment. I'm thinking about the rest. But I think, yeah, at least we have the configuration as well to run it serverless. But we haven't tried it, to be honest. In theory it will work, but I can't make promises at the moment.
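
For a rough idea of the two paths described here, the commands might look like the following; the repository URL and chart name are assumptions, so check the project's README for the real ones:

```sh
# Local trial with Docker Compose (repo URL assumed for illustration):
git clone https://github.com/delta-reporter/delta-reporter.git
cd delta-reporter
docker-compose up -d   # brings up the back end, front end, and database containers

# Kubernetes install via the Helm repo (repo URL and chart name assumed):
helm repo add delta https://delta-reporter.github.io/helm-charts
helm install delta-reporter delta/delta-reporter
```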

Joe [00:19:33] Nice. So if someone's using Selenium, let's say with Java, would they implement this using JUnit to be able to plug into Delta Reporter?

Juan [00:19:40] Yeah. Yeah. Exactly, with JUnit it actually… Sorry, TestNG is the one that is working. (Unintelligible) is on the (unintelligible) at the moment, but it's going to be ready probably during this month. So for Java, it's one of JUnit and TestNG. We have both flavors.

Joe [00:19:55] Nice. So you are a QA engineer, and I'm really impressed when people have an issue in-house, they create a solution, and then they open-source it. It seems like there's a lot of extra effort there, that you went out of your way to help the community. So how much work did it take? How much programming did you need? Say someone listening to this is like, “Juan inspired me. I want to make my solution open source as well.” How much programming knowledge do they need to do this, and how much extra work are you putting in to make this work? You're working full time and you're working on this almost like a side hustle.

Juan [00:20:29] Yeah, that is an amazing question, because part of the reason we started the project was also that we wanted to upskill (??) our coding abilities, basically. And in my case, it's kind of normal for me to build tools like this. I always end up making something with Python or JavaScript or whatever. So what I would suggest to someone who is kind of like me, who is interested in doing something like that, is: don't be fearful of all that stuff. Just do it. And about open source and everything, I think it's good to be honest with the company about what you're working on; request the time, talk about your plan to open source it, to have it independent from the company or maybe sponsored (??) by the company, and be honest about things like that. You really want to improve your coding skills, and you also want to fix a problem, because in the end it is a win-win for everyone: a win-win for the community, for the company, for yourself. So I think it's something that anyone can do, really. In the case of Delta, it's not actually that complicated, because Flask, which is what we're using in our case, is well known for being really easy to use for microservices. It's not really that complicated. So it's actually very good to begin with for creating microservices, because it's so simple to use. There are a lot of tutorials as well, and if there is any problem, the community for Flask is also big and amazing. And the other solution we used was the Next framework, and Next is nice. There is also a community around it. It was more challenging for me, to be honest, because it was also my first time using React, and it's totally different from vanilla JavaScript, I can tell you. But still, after you understand the concepts and apply what you are learning, it's amazing. And if I need to say something, I think it's something really beautiful: you're improving yourself, you're giving back to the community, and you're also fixing a problem as well.

Joe [00:22:38] Yeah, it really is a beautiful thing, like you said, because you are improving yourself and you're giving back. A lot of times people feel like they're just waiting around to do something. But if you're doing something every day, and you create a solution, it kind of incentivizes you to keep working on it as well. So, absolutely, a win-win for everyone.

Juan [00:22:54] Yeah. Exactly that's true.

Joe [00:22:56] You made it open source now, and say someone wants to contribute. How does that work? Does someone make a pull request, they make a change, and then do you review it before you incorporate it into the core? I guess there's a section called Delta Core. How does someone contribute to Delta Reporter is my main question.

Juan [00:23:15] Yeah, it works something like that. Basically, you create a PR with the change, with an explanation of what was changed. Then I think we have some checks that are automatic, basically to check that it compiles, stuff like that. And basically it can be assigned to me. At the moment we have three people working on Delta: myself, Aleksandra Pishcheyko, and Christian Moisa. So any one of us can review it, and basically then it's just merging it, checking that everything's okay, and creating a new release on Helm as well with the template. So that (unintelligible).

Joe [00:23:49] Have you got any feedback since you made it open source from the community?

Juan [00:23:53] Not really, not really. Actually, we are trying to make it more visible, in a way, basically to see if we can generate some kind of community behind it, in order to improve it and show the value of it and everything. So basically, that is our intention now. Actually, yeah, when you were asking me about plans for the future: probably something like creating a community behind it.

Joe [00:24:18] Okay Juan before we go, is there one piece of actionable advice you can give to someone to help them with their automation testing efforts? And what's the best way to find or contact you?

Juan [00:24:28] Okay, what advice can I give? My best advice is to always look for tools. There are always amazing tools out there. And I think it's a good practice to actually take some time, check all the news about automation, and check the latest technology we have in testing. Probably that's the best advice I can give to anyone. As for the best way to contact me, normally I'm pretty active on LinkedIn. That is my main network at the moment.

 

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
