Automation in the Modern Age using BigTest with Taras Mankovski


About This Episode:

Minimize your testing feedback loop while at the same time expanding the size of the systems you can test. Find out how with Taras Mankovski, CEO of Frontside and Product Manager for BigTest. Discover a new, open-source testing platform for the cloud-native world that allows you to create test harnesses that invert the testing pyramid. Also, learn how the creator of Capybara, Jonas Nicklas, contributed his experience to help eliminate flakiness in BigTest. Listen up!

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Taras Mankovski


Taras is the CEO of Frontside and Product Manager for BigTest, a new open-source testing platform for the cloud-native world.

Connect with Taras Mankovski

Full Transcript: Taras Mankovski

Joe [00:01:14] Hey Taras! Welcome to the Guild.

Taras [00:01:18] Hey, Joe, it's a pleasure to be here.

Joe [00:01:19] Awesome, I guess before we get into it, is there anything I missed in your bio that you want the Guild to know more about?

Taras [00:01:24] Well, I mean, I'm an engineer first and foremost, and I've been kind of swimming in the business world for a long time. It's been a really fun journey. And this project, BigTest, has been really a passion project and a love project, I guess.

Joe [00:01:40] Nice. So I just heard about BigTest recently. For the folks that don't know, how would you explain BigTest at a really high level?

Taras [00:01:47] I would say it's a test suite that developers and QA can share. The goal is to create a tool that developers want to use: something that is fast and reliable, but that tests the application the way QA tests it, the way users use it, so that you can have shared confidence across your whole organization. QA and developers are looking at the same tests and they know what's being tested. You don't have to compromise reliability and speed for developer experience.

Joe [00:02:24] Nice. Now, I guess, why was this developed then? Is this an algorithm? Is it a tool or is it something like an add-on to other tools?

Taras [00:02:31] I would say it's a combination. Ultimately what we want to get to is a cohesive package that you could use to do testing right off the shelf. But we are a small team, so bringing that to the world requires a lot of iterative development. The other part is it requires a lot of research, because to improve certain areas of testing, we need to rethink the fundamental practices around testing. A lot of the assumptions we base our technologies on these days come from ideas that have been around for a very long time, but they haven't really been updated to match the evolution happening in the cloud-native world. Everything is changing: we are shipping faster, we're shipping more frequently, applications are being shipped as microservices as opposed to a monolith. All of these things require that we apply the same kind of rigor that we see in architecture to automated testing, and make sure that we have solutions that match the modern world. The approach we're taking is to first develop the things that we know. A lot of the ideas we have in BigTest we've been experimenting with or using on projects for over three years. For example, one of the aspects of BigTest that's coming out right now is BigTest Interactors, which is essentially for web applications. It's a way for QA and developers to share composable page objects that are part of the component library. So you've got an API that you're providing as part of your component library, and this API is used for testing these components in tests. The benefit is that you have a very reliable test, because if the component library changes, your tests don't break: the tests use interactors that are kept in sync with the components.
So you've got this connection between the test suite and the component library. And this tool is available today: it works with Jest and Cypress, and also with the BigTest runner, which is in alpha right now and very much a work in progress. Our goal is to make these aspects of the framework available for use with other tools, but then eventually offer a cohesive package that you could use off the shelf.
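The interactor idea described above can be sketched in a few lines. This is a hedged illustration, not the real BigTest Interactors API: the `createInteractor` shape, the options it takes, and the fake DOM used below are all simplified assumptions made for the example.

```javascript
// Sketch of the interactor idea: the component library ships a small testing
// API alongside each component, so tests never hard-code their own selectors.
// If the component's markup changes, only the interactor definition changes.
function createInteractor(name, { selector, actions }) {
  return (label) => {
    // Locate exactly one matching element by its visible label.
    const find = (root) => {
      const matches = [...root.querySelectorAll(selector)].filter(
        (el) => el.textContent.trim() === label
      );
      if (matches.length !== 1) {
        throw new Error(`expected one ${name} "${label}", found ${matches.length}`);
      }
      return matches[0];
    };
    const api = { exists: (root) => Boolean(find(root)) };
    // Expose each action (click, fill in, …) against the located element.
    for (const [actionName, fn] of Object.entries(actions)) {
      api[actionName] = (root) => fn(find(root));
    }
    return api;
  };
}

// The component library would export this next to the Button component itself:
const Button = createInteractor('Button', {
  selector: 'button',
  actions: { click: (el) => el.click() },
});
```

A test then says `Button('Submit').click(root)` instead of querying the DOM directly, which is how the test suite stays in sync with the component library.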

Joe [00:04:55] So you mentioned there are some assumptions people make that are kind of old and that maybe hold back their testing efforts. What are some of those assumptions, do you think?

Taras [00:05:03] I think one of the biggest things…so we had this previous version of BigTest that was called Frankenstein. It was a collection of tools: to be able to test the application, you had Mocha and expect, which were used for writing the tests and assertions, you had Karma for the actual runner, and there was webpack in there somewhere to build the application. You've got these five things that are supposed to work together to give you the ability to run your tests in a browser. What we found is that you actually can't build a good developer experience with the system composed that way, because in every aspect of the architecture you have a silo where data is trapped inside of that piece. The runner has all the information about starting a browser, injecting the test, and opening the web page; the runner owns all of that. Then you've got the test suite, all of the definitions of the actual tests that you want to run, as part of Mocha, and that information is not available to the runner. It's actually not available until you open up the browser and execute the script. And the assertions are not part of the data structure at all, because you only really know what the assertion is when you're actually executing the test. So you've got all of these different aspects of the system. The other piece is that if you have something like React Testing Library, where you have a mechanism for asserting something in the DOM, that thing knows about what you're touching in the DOM. But each of these things works separately, and you don't actually have a cohesive data structure that represents all of those things together.
So what happens is that if you want to create a really precise, comfortable, and responsive developer experience, a test writing and test running experience, doing that when you have these five silos is actually really difficult, because each one of them brings its own quirks, and you have to mitigate those quirks in the glue. It's really hard to make a cohesive experience when you've got four or five things glued together and the glue itself is really hard to work with. So when we realized that it wasn't possible to create a good developer experience that way, we basically had to start from scratch. And at that point we said, "Well, if we're doing this from scratch, then what other assumptions can we rethink?" So, for example, one of the things we've done with the test definition syntax, and fundamentally one of the things that's unique about BigTest, I would say, is that the tests are not scripts. They're essentially a data structure. The entire state of the runner, the builder, the test execution, down to the interactors, every single aspect of the test suite is available as part of a data structure. We can actually query that state using GraphQL; that's the mechanism built into the runner itself. We use that to build a CLI tool, so the way of interacting with the BigTest system is through GraphQL. When you want to run a test, you run a mutation, and the mutation executes a test on a specific browser. The runner is responsible for using WebDriver to spin up the actual browser; once the browser is spun up, there's a mechanism for injecting the test, and then it executes that test.
There's an interpreter inside of the testing system. So the entire architecture has been rethought. What that allows us to do is have really granular control over how we execute the actual tests, and having rethought that architecture, it brings us to the point where we can design our tests in such a way that they're very, very reliable.
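The "tests are data, not scripts" idea above can be made concrete with a small sketch. The field names and shapes here are illustrative assumptions, not BigTest's actual schema; the point is that every step and assertion is inspectable before anything runs, which is what makes the state queryable (for example over GraphQL).

```javascript
// A test expressed as a data structure rather than as an executable script.
const signInTest = {
  description: 'sign-in form',
  steps: [
    { description: 'fill in email', action: 'fillIn', locator: 'Email', value: 'me@example.com' },
    { description: 'press submit', action: 'click', locator: 'Sign In' },
  ],
  assertions: [
    { description: 'shows the dashboard heading', locator: 'Dashboard' },
  ],
};

// Because the suite is data, tooling can answer "what does this test do?"
// without executing anything — the kind of question a GraphQL query against
// the runner's state could answer.
function outline(test) {
  return [
    test.description,
    ...test.steps.map((s) => `  step: ${s.description}`),
    ...test.assertions.map((a) => `  assert: ${a.description}`),
  ];
}
```

A script-based suite can't be enumerated this way, because the steps only exist while the script is running; a data structure can be listed, diffed, and scheduled ahead of time.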

Joe [00:09:03] Yeah, I was curious to know about this approach. I think you've actually addressed one of the biggest concerns a lot of people have with automation, especially developers: what gets them discouraged really quickly is flakiness. So can you tell us about how this approach helps with flakiness? I think you call it the convergence strategy. In your blog post, you talked about how you retain references and things like that to help you interact with the web UI.

Taras [00:09:25] Yeah, yeah, absolutely. When we started working with the Frankenstein BigTest, we were inspired by the work that happened in the Capybara community, because Capybara has a mechanism for convergence, which is essentially rechecking if something is available on screen. If you're asserting, for example, that a button is visible, you can attempt to check that several times until you reach a point where you're like, "Okay, well, I found it," or "It's been two seconds, I'm not going to find it, there's a problem here." That's the fundamental convergence mechanism, and it was inspired by Capybara. What was really surprising, but also kind of serendipitous, is that because we were inspired by Capybara, the creator of Capybara actually joined our effort. Jonas Nicklas, the original creator of Capybara, is working with us to build BigTest; he's building the browser agent. What he brought was an understanding of how that mechanism works and the limitations of what they have in Capybara, and so in this second iteration of the approach, in BigTest, we're able to have very tight control of the run loop. There are two things that are important to understand about how the runner executes a test. There are two distinct kinds of things: asynchronous actions that you can perform, and assertions. Actions are steps, like wanting to click on a specific thing. But before you can click on that thing, you have to make sure that the element is visible on the screen, so you can attempt to check for that element's presence. Interactors are the mechanism that actually does that.
So if you use interactors with Jest, for example, they have this convergence mechanism without needing the whole BigTest runner, so you can actually get the benefits of that stability. The architecture of interactors is such that they run in the same JavaScript run loop as the application itself, so there's no way for time to slip. There isn't any asynchrony between the moment that you check a precondition and the moment that you actually invoke an action or satisfy the condition, so there is no way for the JavaScript application to escape that window. And that's the really big difference. There are some really excellent tools that have become available recently; Playwright is really powerful for spinning up browsers and has an API for controlling automation, and it's a really awesome tool. But because your execution script is running in a different environment than your browser, there's always a moment when there could be a mismatch, where the precondition that you've checked is no longer true. And that's where flake comes in. It's in that moment where you've lost control of the run loop that you don't know if the precondition is actually still the case. When you continue, the application is not quite in the same state, but you don't really know that. That's where you get these weird problems where, if you run the application on your computer, the test works perfectly because you have a really fast CPU, but when you run it on CI, because CI is much slower, your tests are now failing. You get this weird problem where you can't really test on CI. We're all familiar with these kinds of problems, and I think it really comes down to the proximity at which the test runs relative to the application.
So the closer you can run those together — if you can run them in the same run loop, that's when you get the ability to have really tight control and, as a result, very, very reliable tests.
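The convergence mechanism described above can be sketched as a small helper. This is an illustration in the spirit of Capybara's rechecking behavior, not BigTest's or Capybara's actual implementation: the assertion is retried until it passes or the timeout elapses, and the last real failure is what surfaces.

```javascript
// Retry an assertion until it passes or a timeout elapses, instead of
// failing on the first check or sleeping a fixed amount of time.
async function converge(assertion, { timeout = 2000, interval = 10 } = {}) {
  const start = Date.now();
  for (;;) {
    try {
      return assertion(); // assertion passed: stop immediately, no wasted time
    } catch (error) {
      if (Date.now() - start >= timeout) {
        throw error; // give up, surfacing the real assertion failure
      }
      await new Promise((resolve) => setTimeout(resolve, interval));
    }
  }
}
```

On a fast machine the assertion passes on the first check; on a slow CI machine it simply rechecks a few more times, which is exactly how convergence absorbs timing differences between environments.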

Joe [00:13:05] Nice, and I think this helps a lot of people who use Selenium and are familiar with the stale element reference error. It sounds like this would help with those types of issues.

Taras [00:13:13] Yeah, it's very similar. We actually thought about how we could make this work with Selenium, but unfortunately the architecture of Selenium is not really compatible with this approach. And that's where I think the biggest value we can bring to the QA world is. It feels to me like, unfortunately, the investment in developer experience hasn't carried over. If you look at how much work has gone into perfecting the developer experience of Jest, it's a really, really comfortable test runner; the single reason why developers love Jest is that it just fits like a glove. But that hasn't really translated to the QA world. I think it's a real shame, because those two groups, as an industry, are kind of missing out on the opportunity to bridge that gap. That's what I'm hoping we will be able to address by making a tool that matches developers' needs but at the same time provides the kind of confidence that QA expects.

Joe [00:14:13] Absolutely. Now, I know originally Selenium was tied to the browser, like it ran inside the browser, and I think they went away from that when they created WebDriver. And then Cypress came out doing what I think Selenium originally did. So it sounds like running the test in that same browser instance gives you more control, and BigTest gives you that plus other functionality to help you control the DOM at runtime that you lose out on using a solution like Selenium.

Taras [00:14:40] Yeah, yes. I would say that BigTest is kind of a hybrid, in the middle between Cypress and WebDriver. We're using WebDriver, and BigTest has what we call an agent architecture, which is actually something similar to Selenium. We have what we call web agents, and the web agent essentially runs inside of the browser. Right now, we're using WebDriver to open web browsers. So it's kind of like the way that Cypress works, but inside of WebDriver: we use WebDriver to control the browsers, and what's good about Cypress is that it is a script that you load and it runs inside of the browser environment, which I think is really valuable. So we have an interpreter for the test. When you write a test, right now you write JavaScript, but that can change, because at the end of the day the result is a data structure. We take that data structure, load it into the browser, and inside of the browser we have an interpreter that runs through the data structure, executing every step inside of that environment. One of the reasons why Selenium is really powerful is that its agent architecture allows you to use Selenium with mobile applications, for example. It's a similar idea, and I think with BigTest we're ultimately going in the same direction: we're going to have native agents that are essentially interpreters for the test data structure and execute natively inside of the mobile environment. That's going to give you the benefits you get from the proximity of execution to the application, while having the same kind of ergonomics and speed that you get when you're using the browser.
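The agent-as-interpreter idea above can be sketched briefly. This is an illustrative assumption about the shape of such an interpreter, not BigTest's actual agent code: the test arrives as data, and a small loop inside the application's environment walks the structure, executing each step with a table of known actions.

```javascript
// Build an interpreter from a table of actions. The agent receives a test
// data structure and executes each step inside the environment it runs in,
// which is what keeps execution close to the application.
function makeInterpreter(actions) {
  return async function run(test) {
    const results = [];
    for (const step of test.steps) {
      const perform = actions[step.action];
      if (!perform) throw new Error(`unknown action: ${step.action}`);
      await perform(step); // e.g. click an element, fill a field
      results.push({ step: step.description, status: 'ok' });
    }
    return results;
  };
}
```

Because the interpreter is just a loop over data, the same test structure could in principle be handed to a web agent, a mobile agent, or any future agent that knows how to perform the named actions in its own environment.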

Joe [00:16:24] Nice. Now, you talked a little bit about how it helps with waiting, since waiting causes a lot of flakiness. But there are other issues that also cause flakiness. So is there any other functionality that helps eliminate the flakiness a lot of people see with automation?

Taras [00:16:37] Yeah. At Frontside we talk a lot about having different knobs to turn that allow you to adjust the size of the test. When we talk about the size of the test, the name BigTest refers to being able to increase the size of the test without increasing the cost of the test. For example, if you're testing a single component, that is a small test; if you're able to render the component as part of the application, that is a bigger test. Now, there's another element to this that makes tests really flaky, which is what happens when you hit the HTTP layer. We talk about cross-boundary communication: when you start talking to the server, that's another aspect that tends to increase flakiness. There are ways to manage that, but the challenge is that a lot of those things are not really separable from the way the application is implemented. And this is where I think we start to enter the cloud-native world, because in the cloud-native world, the separations that we had before, with a developer team and a QA team and an ops team all working separately, don't really work anymore. To be able to go faster, you need to be able to leverage each other's work much more effectively. Where this starts to overlap is that you can actually architect your application differently to make the tests more reliable. And when you think of QA and development as separate things, that knob is definitely not available to you.
But one of the things I have been using a lot to mitigate that: even if you can't adjust your application, one thing you can adjust is to simulate the HTTP layer. When I say simulation, I mean having a high-fidelity simulation of your server. There are tools like Mirage that allow you to essentially create an in-browser server that simulates how your server will behave. You can make the test even bigger if you run that same simulation on the server. Now, if you can control both your browser test and the simulation of your server, then you have a lot of control over how your application is going to behave in very specific test scenarios, and you have a lot of power there. Being able to adjust the knobs on these things, and being able to introduce these tools and techniques without having to completely change all of your tooling, is really what BigTest wants to explore. We want to make it super accessible, so that there is a simple path where a team can say, "We don't need HTTP simulation right now, but it's something that I know, if I wanted to plug it in, would be really easy to plug in." That's what speaks to this idea of making it easy to expand and adjust the size of the test without increasing the cost.
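The HTTP-simulation idea can be sketched as a tiny in-process server. This is an illustration in the spirit of tools like Mirage, not Mirage's actual API: route handlers stand in for the real server, so the test controls exactly what the "network" returns, with no real requests and no network-induced flakiness.

```javascript
// A simulated server: a lookup table of route handlers that answers requests
// in-process. Tests plug in the data they need for a specific scenario.
function createSimulatedServer(routes) {
  return async function simulatedFetch(method, path) {
    const handler = routes[`${method} ${path}`];
    if (!handler) return { status: 404, body: null };
    return { status: 200, body: handler() };
  };
}

// The test declares exactly what the "server" should return for this scenario.
const fetchSim = createSimulatedServer({
  'GET /api/users': () => [{ id: 1, name: 'Ada' }],
});
```

In a real setup the application's fetch layer would be pointed at the simulation; the knob Taras describes is that the same route definitions could later run as an actual server, making the test bigger without rewriting it.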

Joe [00:19:31] Now besides that, does it also help with speed? Because another thing developers tend to complain about is the slowdown: checking code into CI, it runs these tests, and the tests take too long. Does BigTest help with that issue as well?

Taras [00:19:42] Yeah, absolutely. Running the test interpreter in the browser alongside the application is super fast; that's the fastest way you could possibly run the tests, because you're essentially running them at the same speed the application runs. Unfortunately, most of the real slowdowns are outside of the test. If you have a single-page application using microservices and you have 20 or 30 requests to render a single screen, no matter how fast your automated testing framework is, your test is going to be slow. So simulating those HTTP requests, so that they're handled in the browser, can be a way to control the speed of the test. BigTest does offer ways to speed up tests, but to really increase the performance of your tests, you need to think about the application and testing a lot more holistically.

Joe [00:20:37] So if someone's listening, they're like, this sounds great, but it sounds complicated, or I have a bunch of tests already, do I need to start over? Like, if you already have a lot of Jest tests or Cypress tests, are you able to incorporate BigTest fairly easily?

Taras [00:20:51] You can start using aspects of it. For example, one of the organizations we work with, called FOLIO, which is an open-source platform for managing libraries, is using BigTest with Jest and React Testing Library. They're rendering the components with React Testing Library and using interactors to interact with the component elements, because the interactors ship together with their component library. So you can use the interactors separately. We also have a project happening right now to create a simulation platform that is going to work alongside existing Cypress and Jest tooling. We're working on making pieces of this available. But the reality is that it is complicated, and I think that's just inevitable. Having a good test suite is fundamentally complicated, and that's why we are working hard to find ways to make these things more accessible: there's just built-in complexity that needs to be understood well, and it's really hard to understand because there are so many pieces involved.

Joe [00:21:55] So this is a random question. I was at a conference and I kept hearing about Flutter, Flutter, Flutter, and how hard it was to test. And I was looking over BigTest, and I could have sworn it mentioned something about Flutter. Do you know anything about this? Does BigTest help you test Flutter native apps?

Taras [00:22:08] Not at the moment. But one of the nice things about being a consulting company is that we work with companies who have these kinds of big problems, and one of the problems they have today is being able to test Flutter applications. It's actually on our roadmap. We haven't started the project yet, but our goal is to create a Flutter agent. The nice thing about Flutter is that it has a lot of the mechanisms available for us to do this; we just need to do the actual implementation. We know what a Flutter test looks like, so it's just a matter of writing the agent that will be able to interpret the test data and execute a test. We have a clear path for how we're going to get there. We just haven't started the actual work of writing that agent. But we're itching to start on it.

Joe [00:22:54] Now, this is all open source, and I guess it's pretty early; I think you're still in alpha, is that correct? And eventually, if more users start using it and giving feedback, it's probably going to get even tighter.

Taras [00:23:05] Yeah, it's an open-source project. I would say the design of this thing has been ongoing for five years, and the latest iteration of BigTest has been under development for the last two years. It's an ongoing research project, ongoing development, and an ongoing implementation effort, putting this into practice with our clients and others. It's available open source, and we want people to be able to use it, but we try to be really responsible about how we communicate what's ready. For example, interactors: we have customers that are using them, it's a solution that works, and it's in beta. The API is fairly stable and we don't expect very many changes, but it's not 1.0 yet. We're getting there. The runner is in a completely different state: we know what we want it to be, but we haven't gotten to a point where it matches our expectations, so it's in alpha right now. For companies that are interested in finding better ways to do testing, it's a good potential conversation to have about whether you can use BigTest. But there are pieces in different stages, and I encourage people to look at each aspect separately and see what value it can provide to their particular application in isolation.

Joe [00:24:17] That was going to be one of my questions. If someone's listening, who do you think would be the perfect audience to start utilizing it right away? Or is it just for research right now, with the potential to eventually help expand their framework using BigTest?

Taras [00:24:30] Yeah. The interactors are ready to use. I think there are some very interesting possibilities around accessibility, and one of the things we're looking at with interactors right now is improving keyboard navigation. We want to make the keyboard interactor more similar to how people actually work with the keyboard. There is something strange about the way we write tests, because tests say things like: you select the input field and you type your input into the field. But quite often, people navigate with their keyboard. So we want to make a keyboard interactor that allows people to have really granular control over how they input information into forms, and that will allow us to do some really detailed accessibility testing. That's what we're thinking about right now. But working with interactors for writing regular tests and interacting with the existing elements of the application, that's available; people should start using that today. You can use it with Cypress and Jest right now, and if it doesn't work for whatever reason, we have folks who are ready to take feedback and make it better. So it's a really good time to start using interactors in particular.

Joe [00:25:32] Very cool. Okay Taras before we go, is there one piece of actionable advice you can give to someone to help them with their automation testing efforts? And what's the best way to find or contact you?

Taras [00:25:42] Yeah, one thing that I think would be really helpful, and that can apply to any test suite: if you have to wait for something, like an explicit wait, find some way to introduce convergence. Have some piece of code that says, I'm going to check every ten milliseconds to see if that thing showed up, because that fixed wait period is a fixed cost. It's like paying for parking where you might only need five minutes but you have to pay for an hour; that's how a fixed wait works. If you can replace that wait with convergent behavior (React Testing Library offers this, where you can say wait for this element to become visible), that is way better than waiting 300 milliseconds or 500 milliseconds. If you can do that, that in itself is going to bring value to your test suite, just by adjusting that one thing. And to find me, I'm Taras on Twitter and Taras on GitHub, and taras@frontside.com on email. I talk about testing all the time and everyone on my team loves talking about testing. So any time we can talk about testing, we're happy to.
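The "pay for parking" advice above can be put into code directly: replace a fixed sleep with a loop that checks every ten milliseconds. `waitUntil` here is an illustrative helper written for this sketch, not a specific library API.

```javascript
// A fixed wait always costs its full duration, whether or not the thing
// you were waiting for appeared earlier.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// A convergent wait checks a condition every `interval` milliseconds and
// returns the moment it becomes true, paying only for the time it needed.
async function waitUntil(condition, { timeout = 2000, interval = 10 } = {}) {
  const start = Date.now();
  while (!condition()) {
    if (Date.now() - start >= timeout) {
      throw new Error('condition not met within timeout');
    }
    await sleep(interval);
  }
}
```

With `await sleep(500)` you always pay 500 milliseconds; with `await waitUntil(() => elementVisible)` you pay only until the condition turns true, and the timeout still bounds the worst case.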

 

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
