Secure Automation Testing at Scale Leveraging SBOX with Michael Palotas and Lee Walsh

By Test Guild

About This Episode:

Welcome to the TestGuild Automation Podcast! In this episode, host Joe Colantonio is joined by automation experts Lee Walsh and Michael Palotas to discuss the fascinating world of secure automation testing at scale, leveraging the power of Sbox. Michael, the Head of Product at Element 34, brings his wealth of experience from companies like eBay and Intel, while Lee, the Director of Customer Success at Element 34, adds his expertise from his time at BrowserStack.

Throughout the conversation, the guests explore the importance of security and compliance in automation testing, the benefits and challenges of using different test tools like Selenium and Playwright, and the factors enterprises must weigh when choosing an automation solution. They also dive into the features and advantages of Sbox, Element 34's flagship product, including its ability to run tests within the customer's firewall, ensuring data privacy.

Join us as we uncover the secrets to successful automation testing at scale and gain insights from industry leaders who have experienced its challenges and triumphs firsthand. Let's dive in and discover the fantastic world of secure automation testing with Sbox!

See a demo for yourself now

Exclusive Sponsor

Discover TestGuild – a vibrant community of over 34,000 of the world's most innovative and dedicated Automation testers. This dynamic collective is at the forefront of the industry, curating and sharing the most effective tools, cutting-edge software, profound knowledge, and unparalleled services specifically for test automation.

We believe in collaboration and value the power of collective knowledge. If you're as passionate about automation testing as we are and have a solution, tool, or service that can enhance the skills of our members or address a critical problem, we want to hear from you.

Take the first step towards transforming your and our community's future. Check out our done-for-you awareness, lead generation, and demand packages, and let's explore the awesome possibilities together.

About Michael Palotas


Michael Palotas is Head of Product at Element 34, the market leader for enterprise testing grid infrastructure solutions inside the corporate firewall. He was the head of test engineering at eBay, where he was instrumental in the design, development, and open-sourcing of Selenium grid, Selendroid, and iOS drivers. Michael also worked as a software engineer at Intel and Nortel, so he knows his stuff about enterprise application testing.

Connect with Michael Palotas

About Lee Walsh


Lee Walsh also joins us. Lee is currently the Director of Customer Success at Element 34. Lee and his team's primary focus is to ensure Element 34 customers are getting the most out of Sbox, Element 34's flagship product. Before Element 34, Lee was a Team Lead at BrowserStack, so he has great knowledge regarding test automation.

Connect with Lee Walsh

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.


Secure Automation Testing at Scale Leveraging SBOX with Michael Palotas and Lee Walsh

[00:00:04] Get ready to discover the most actionable end-to-end automation advice from some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.

[00:00:25] Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. And today we'll be talking with Michael and Lee, all about Secure Automation Testing at Scale, leveraging something called SBOX. If you do anything with enterprise testing, you want to listen to this episode. If you don't know, Michael is currently the head of product at Element 34, which is the market leader for enterprise testing grid infrastructure solutions that run inside the corporate firewall, which I know is a big, big deal for enterprise companies. He was the head of test engineering at eBay, where he was instrumental in designing, developing, and open-sourcing the Selenium Grid. He really has a deep understanding of how this all works. He also worked, I think, on Selendroid and iOS drivers. Michael also worked as a software engineer at Intel and Nortel, so he really knows his stuff when it comes to enterprise application testing. Also joining him, we have Lee Walsh. Lee is currently the director of Customer Success at Element 34. So he speaks to a lot of people, and I'm excited to get his insight on this. Lee and his team primarily focus on ensuring that Element 34 customers are getting the most out of SBOX, which is Element 34's flagship product, which we can learn all about, and why you need to know about it, especially if you're working in the enterprise. And prior to Element 34, Lee was a team lead at BrowserStack, so he has some understanding of the space as well. So, he has a lot of knowledge, a lot of great information around automation testing and how to scale at the enterprise. Don't miss this episode. Check it out.

[00:01:57] This episode of the TestGuild Automation Podcast is sponsored by the Test Guild. Test Guild offers amazing partnership plans that cater to your brand awareness, lead generation, and thought leadership goals to get your products and services in front of your ideal target audience. Our satisfied clients rave about the results they've seen from partnering with us, from boosted event attendance to impressive ROI. Visit our website and let's talk about how Test Guild could take your brand to the next level.

[00:02:30] Joe Colantonio Hey, guys. Welcome to the Guild.

[00:02:34] Michael Palotas Hi. Thanks for having us back.

[00:02:36] Lee Walsh Hey, Joe.

[00:02:37] Joe Colantonio Good to have you, Michael, Hey, Lee.

[00:02:38] Lee Walsh Yeah, thanks for having me.

[00:02:39] Joe Colantonio Absolutely. So, Lee, this is your first time on the show? But Michael, I know we talked, I think, in 2020, so welcome back. I'm just curious to know, for people that may have missed that episode, which, I can't believe, was almost three years ago: at a high level, what is Element 34, or what is SBOX? Maybe we can just ease into it that way and give people a little flavor of what's in store.

[00:03:00] Michael Palotas Yeah, SBOX is basically a behind-the-firewall, or on-prem, test automation infrastructure solution that runs completely securely inside your firewall. No data goes out, and no external access is required from the outside. That's really, in a nutshell, what it is.

[00:03:24] Joe Colantonio Awesome. And so, once again, I want to remind listeners that we spoke with you before. You created one of the first implementations of Selenium Grid. I'm curious to know how that influenced the solutions that you're working on now.

[00:03:36] Michael Palotas Yes, what you said is both correct and incorrect. Let me provide some background to clarify. Back in the day, I oversaw quality engineering at eBay International. Given the scale and pace at which we operated, automation emerged as a crucial factor in delivering top-quality software to our customers. Keep in mind, this was in the early 2000s. We began our search for a suitable tool and settled on Selenium. Our initial approach to automation was like most—adding one test after another and running them sequentially. However, we soon realized that this method wouldn't scale. We needed a way to run tests in parallel and across different browsers. The challenge was that there wasn't a tool available, even within the Selenium ecosystem, that met our needs. So, we decided to develop our own solution.

During this time, we were in contact with key figures from the Selenium project, like Simon Stewart. Once our internal solution was ready, we discussed the possibility of integrating it into the Selenium project. This collaboration led to the birth of Selenium Grid. Now, to address the point of contention: while I played a pivotal role in the process, I didn't personally write the code for Selenium Grid. That credit goes to François Reynaud, our current VP of Engineering, along with Christian Rosenboldt and Kevin Menard. My primary responsibility was overseeing the open-sourcing aspect, ensuring the integration of Selenium Grid into the Selenium project, and facilitating its open-sourcing.

[00:05:27] Joe Colantonio Michael, when did you first work on Selenium Grid then? It's been quite a while what, 10 or 12 years at least?

[00:05:31] Michael Palotas Yes, that was around... I think we introduced it at the very first Selenium conference in San Francisco. I believe that was 2011 or 2012. Yeah, so it's been a while. And of course, a lot has happened in the Selenium space and with Selenium Grid. But yeah, it's been around for a while, and we're happy that we made our contribution there and helped change the world.

[00:05:56] Joe Colantonio Awesome. Lee, I want to get you in there quick. Just a random thought: you're from an infrastructure company, another company that did kind of what you all are doing now at Element 34. What got you interested in this space, and what are your thoughts on Element 34 and why it might be different from what you've worked on in the past?

[00:06:15] Lee Walsh Yes. So previously, I worked with a traditional SaaS solution in this industry. Your testing infrastructure is great: you get access to it, you run tests either manually or in an automated fashion. But where that infrastructure sits is a key component, and it would sit in the vendor's public cloud. Over time, I saw a use case of some potential customers not being able to use this because they wanted to use real data or they needed to test their early-stage environment. So that got me thinking. I did a little bit of searching, came across Element 34, and then was put in touch with Michael and Francois. And then, yeah, straightaway I understood the key benefits of the product and the offering that they have, and it was something that I wanted to be part of as well.

[00:07:02] Joe Colantonio Do you speak to a lot of customers? How many people are still using an in-house Selenium Grid, from your experience? Is it a lot of people, or?

[00:07:09] Lee Walsh Yeah, you'll find that there are people that still use an in-house solution as well. So they'll have a combination: whether that's something like SBOX or a grid they build themselves, they'll have their own at certain points for certain use cases. But they'll also have had the build-versus-buy conversation, and they may be sitting in the buy camp as well. So it's a mixed bag, but you'll see people making use of their own grid and making use of a vendor, whether that be a public cloud offering or something like SBOX.

[00:07:39] Joe Colantonio And Michael, what do you think some of the benefits are, then, of an in-house grid versus an enterprise solution, which I think is what kind of separates you? I've spoken to a lot of people over the years, and it seems like your sweet spot really is the enterprise, and it's something critical that a lot of people may not be aware of.

[00:07:55] Michael Palotas Yeah, maybe one way to look at it is why people create their own Selenium Grid in-house in the first place. Right? Maybe if we start from that perspective, what I would typically find is that it's super easy to get started with. Basically, you download the software, you spin it up, and there you go. You've got something to show, you've got something to run tests against. So that's great. It becomes a bit more problematic when we look at things like maintenance. All right. That's a whole different story. There's a lot happening all the time in the browser ecosystem, and in the Selenium space as well. New releases very frequently. It becomes very cumbersome and very time-consuming to keep a Selenium Grid up to date once you have it. And effectively that's what you want. You want to make sure you're running your tests, of course, against all the browser versions, but most importantly, you want to test against the new stuff. All right. So that's where people usually start. And now, where SBOX comes in, or how we solve some of those issues, is that we are the only enterprise solution that was built from the ground up to be inside the customer's firewall. Typically, if a company decides to build their own grid, they are probably looking at having it inside their own firewall. And so they may just go off and build it on their own, but of course, they end up with all those maintenance issues. And that's exactly what we solve, because we basically give you all the bells and whistles and the convenience of the SaaS products, because those products are great, right? But with the difference that it's running completely inside your own firewall. So there's no data going to the outside. You don't need any external access from outside to get back in. So that's the key differentiator.

[00:09:59] Joe Colantonio Nice. So what are some key benefits, then, that people at the enterprise may need the solution for? They already have something working. I mean, yeah, sure, we must maintain it, but there are other pieces they may not think of, like compliance issues or things like that, that would make them say, okay, we need this other solution to help us.

[00:10:20] Lee Walsh I can probably address that. There are some advantages, some more obvious than others. When we speak with potential customers, we normally mention five key considerations; there may be more, but those key considerations relate to our offering. The conversation would be quite similar when looking at building versus buying a solution, and, if buying a solution, where it should be hosted. Michael touched on it already, but the first two considerations are the security and compliance pieces. If you're testing within your own corporate network, you're staying secure, because you don't need to open that network like you would to a traditional SaaS solution. Remember, if everything involved in testing is hosted internally, why would our testing infrastructure be any different? You may have an answer to that question, but it's a question that needs to be answered nonetheless. And if we want to test as early and often as possible, we need to know what's involved in opening that environment. Some will create tunnels and so on, but it's something we need to be aware of, and given where SBOX is installed, this is not a concern compared to others. The second piece I touched on there was compliance, and this is straightforward: if no data is leaving my network, there's little risk of breaching any data privacy concerns or regulations. Also, if we're using real data, we may have agreements with some of our customers that that real data can't be used outside of our network. Those are the two main ones that we see come up.

Performance is another key consideration: where the infrastructure is hosted will impact performance. So logically, if I'm testing within my own network versus reaching out to a solution that may be hosted in a different country or on a different continent, we're going to see different results. We're not going to say exactly what that difference will be, but less latency normally equals improved performance and a reduction in tests potentially failing due to timeouts or other scenarios caused by performance issues. And then the last two, which sometimes are overlooked, are scalability and cost efficiency. Scalability, from an infrastructure standpoint rather than a test suite standpoint, is: what's involved for me to run my tests at scale? How many tests can I run in parallel or concurrently? What kind of queuing system do we get access to? These are all important for me as I grow on my automation journey, and potentially for my organization as more people make use of the solution I've chosen. And then the cost efficiency part is straightforward: the cost of scaling or continuing this journey. There are also hidden costs inside of infrastructure, but we won't touch on them today.
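Lee's latency point is easy to check from your own network: every WebDriver command is an HTTP round trip, so latency to the grid is paid on each click, find, and assertion. A minimal stdlib sketch (the grid status URL below is hypothetical, not an SBOX endpoint from the episode):

```python
import time
import urllib.request


def round_trip_ms(url: str, attempts: int = 3) -> float:
    """Measure rough HTTP round-trip latency to an endpoint such as a grid's status page.

    Returns the mean latency in milliseconds over the successful attempts,
    or infinity if the host never responded.
    """
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                resp.read()
        except OSError:
            continue  # unreachable host or timeout: skip this sample
        samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples) if samples else float("inf")
```

Comparing this number for an in-network grid versus a cross-continent SaaS endpoint makes the "less latency equals improved performance" argument concrete, since a suite of thousands of tests multiplies that round trip many times over.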

[00:13:05] Joe Colantonio No, it's a good point. A lot of times people say, oh, I have an open-source solution, so it costs me nothing, when in fact it does cost them very much. I think the cost efficiency one may trip people up. Can you explain a little bit more what that means? Because if someone says, I'm using open source, so it doesn't cost anything, what is the cost efficiency? Is it ease of maintenance as well? And if your tests run faster, then you get results in less time, which is going to save money.

[00:13:31] Lee Walsh Yeah. So, cost efficiency: say we go from 10 tests to 1,000 tests. Let's take that example. How much is my infrastructure going to cost me to host that solution internally? Also, do I need a bigger team to maintain this? So there are the maintenance and management costs, and the work required to troubleshoot any issues we see in our own grid. These are all part of that cost that you may not initially consider as you start off or as you grow. But further down the road, it starts to become more and more of a pain or a headache, and something that starts to go out of control pretty quickly.

[00:14:03] Joe Colantonio Absolutely. I used to work for an enterprise company doing radiology equipment, and it was hard to even get people to, like I said, open up the tunnels and the ports for an outside solution, and there were compliance issues with the FDA. So we didn't have an in-house solution, but it sounds like a perfect thing for that particular environment. So I guess, in that case, we had thousands of tests. What would we have needed to do to get our tests to work with the SBOX solution?

[00:14:30] Lee Walsh From a test script perspective, there are very few changes that need to be made. So take someone that has Selenium test scripts already written in the language they've chosen or the framework they're making use of: Selenium is Selenium, whether I run it locally, against a traditional SaaS solution, or on something like SBOX. The important piece is the driver initialization, that is, where we're pointing our test scripts to. Once I change that from, let's say, a local ChromeDriver to the SBOX hub address, I can now start to execute my tests. Another thing you might have in play already is test authoring, so you might be using something to build your test scripts. In the end, tools like that normally create or leave you with a Selenium test script, so within the UI you can then go and point it to a hub address as well. So very little change is required to move across or make use of something like SBOX. And I think that's why this market is so competitive as well. How easy it is for me to go from one solution to another means that everyone in this industry needs to make sure they're on top of their game.
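The initialization change Lee describes can be sketched in Python with Selenium's bindings. The hub hostname below is hypothetical (SBOX's exact endpoint isn't given in the episode); the point is that only the driver setup changes, not the tests themselves:

```python
from typing import Optional


def hub_url(host: str, port: int = 4444) -> str:
    """Build a WebDriver hub endpoint; /wd/hub is the conventional grid path."""
    return f"http://{host}:{port}/wd/hub"


def make_driver(hub: Optional[str] = None):
    """Return a local Chrome driver, or a Remote driver pointed at a grid hub.

    The test scripts stay identical; only this one initialization differs.
    """
    from selenium import webdriver  # imported here so hub_url stays dependency-free

    options = webdriver.ChromeOptions()
    if hub is None:
        return webdriver.Chrome(options=options)  # local ChromeDriver
    # Same tests, different target: an in-network grid such as SBOX
    return webdriver.Remote(command_executor=hub, options=options)
```

Usage would be something like `driver = make_driver(hub_url("sbox.internal.example"))`, where the hostname is purely illustrative of an address inside your own network.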

[00:15:37] Joe Colantonio Absolutely. So I alluded to some benefits and some issues of running at an enterprise. One of them was security risk: they said you can't do that for security reasons. I know security is a huge requirement for a lot of enterprises. So, Michael, do you have any insights on how being on-prem helps with some of these security concerns you probably hear all the time from the enterprises you work with?

[00:15:58] Lee Walsh Yeah. So with SBOX being inside the customer's network already, we naturally have much deeper integration points with the rest of the customer's development infrastructure, be it the ability to hook into the customer's IdP, or identity provider, like Active Directory or OpenID, or much more complex integrations like Kerberos/NTLM, which may not be possible if you're using a SaaS solution that sits outside of your network. The other talking point here is that for SaaS solutions to get into your network from the outside, you potentially have to open your firewall, whitelist IPs that they provide to you to let them back into your infrastructure, or create those tunnels that we spoke about. With SBOX, you don't have any of that headache, given where we're located, or where we're installed; we have no idea how many tests you're running or anything like that. It's completely airtight.

[00:16:53] Joe Colantonio Absolutely. Michael, any insights around security? From your point of view?

[00:16:59] Michael Palotas Yeah. I mean, I usually see security and compliance kind of go hand in hand in enterprises. The compliance part is more about what we touched on, the data: depending on what kind of data you're using, you may or may not be allowed to send it elsewhere. And then, of course, the security part is what Lee just mentioned. To use solutions that are sitting outside of your firewall, you have to drill a hole into your firewall. You must let them back in, and that's typically done through these tunneling mechanisms, or essentially a VPN. And we all know that once you're on the VPN, you can get anywhere else you want as well. There are certain risks involved when you do that. If you're maybe a small startup, that's maybe not so much of a concern. But if you're an enterprise, if you're a government organization, it's absolutely key that those things are taken care of.

[00:17:56] Joe Colantonio So, just a random thought: does this only work with Selenium, or can you also scale, like, a Playwright test or any other software? Is this a strictly Selenium-based grid solution?

[00:18:08] Michael Palotas No, it's not. I think last time when we spoke it was, and back then the product was also called Selenium Box. A lot has happened over the last three years, from a product perspective and from a company perspective as well. I think probably the biggest pieces that we added in terms of the product were, on one hand, the ability to run Appium as well, for the whole mobile side. But then we also added Playwright about a year ago, I believe, because we saw there's traction in the market for Playwright. Some customers were starting to ask about it. So we listened to our customers and we implemented that. We actually have quite a few customers that are using both Playwright and Selenium, and it seems to work quite well. It's funny, because when you look at some of these comparisons of the two, it oftentimes sounds like it's an either-or kind of decision. And what we're seeing is that it's more of an "and" kind of thing. Maybe for some teams, Selenium is better. For other teams, Playwright may be the better solution. So what we're trying to do is provide one place where all the tests can be run.
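SBOX's exact Playwright wiring isn't described in the episode, but the general pattern for running Playwright against a remote, in-network browser is to connect over a WebSocket endpoint rather than launching a browser locally. A generic sketch with a hypothetical endpoint, assuming Playwright's Python bindings:

```python
def page_title_via_remote(ws_endpoint: str, url: str) -> str:
    """Connect Playwright to an already-running remote browser server and fetch a page title."""
    from playwright.sync_api import sync_playwright  # imported lazily; only needed at run time

    with sync_playwright() as p:
        # connect() attaches to an existing browser server instead of launching one
        browser = p.chromium.connect(ws_endpoint)
        page = browser.new_page()
        page.goto(url)
        title = page.title()
        browser.close()
        return title


def is_ws_endpoint(url: str) -> bool:
    """Playwright's connect() expects a ws:// or wss:// URL, not http://."""
    return url.startswith(("ws://", "wss://"))
```

As with the Selenium case, the test logic is unchanged; only the connection target moves from a local launch to an endpoint inside the firewall.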

[00:19:22] Joe Colantonio Interesting. So if someone has a mixed test suite of Selenium and Playwright, do they run the same on SBOX, or do they need to do anything different to get them to work? Or is it consistent: you just point the driver to your environment and you're off and running?

[00:19:38] Lee Walsh Yes, it's a seamless process for all of the integrations. It's where the initialization is happening. So again, if I have test scripts built out already, or if I was to look at a test script right now, large parts of it would not need to change. The initialization piece is key. For Appium as well, we would just point to the hub address, and obviously we would specify the app we want to test. But the script itself would run wherever it needs to run; it's really just about pointing it to SBOX in this case.

[00:20:10] Joe Colantonio Nice. With a lot of these other solutions, you're able to run on different devices because it's in the cloud, it's a SaaS solution. So is SBOX not focused so much on the devices, or how does that work? Does that make any sense?

[00:20:25] Lee Walsh Around the maintenance piece, I suppose Michael kind of touched on this area here, but one of the benefits of our product is that, yes, it is installed within your corporate network, but you do get the benefits that you would see with a traditional SaaS solution from a maintenance standpoint. Once SBOX is installed, the maintenance part is automated, which gives you access to the latest and greatest, or to whatever your customer base is using to interact with your website or native application. The first installation is done very quickly, and from that point onwards you can run tests in parallel and pick up new browsers, mobile devices, and so on.

[00:21:02] Joe Colantonio Awesome. A problem I know a lot of people have is: yeah, I could scale my tests here, but my tests are a mess, so they're not going to be able to scale. Do you have any advice on how to help people get to the point where they can really benefit from something like SBOX?

[00:21:19] Lee Walsh Yeah. So the scaling part is a key one. I touched on scalability earlier from an infrastructure standpoint, but as the test landscape starts to evolve, you'll move from running tests sequentially to, once it scales, running more tests simultaneously, concurrently, and in parallel. Parallelizing tests can be hard. What typically happens is that we take the tests that we have built, which were not designed to run in parallel, and we start to fire them all off. The best-case scenario is that they all start to fail. The worst case is when some of them start to fail and some of them start to pass: we start to see flaky results and we need to start troubleshooting more. So, focusing initially, when we're building out our tests, on making them more atomic is critical, because that ensures we're ready to then run them in parallel, or that we're preparing for that scale further down the road. I think it's something you need to be aware of at the start: making sure they're more atomic.

[00:22:15] Michael Palotas This is something that even back in the eBay days we got wrong. We didn't think about it when we started with automation. We just wrote tests one after another, not really thinking about what happens when you say, I have a thousand tests now, and here we go, let's just run them all at the same time. I think a lot of it has to do with which data you are using for testing. If you're sharing data between tests, that is typically catastrophic; your tests will have those flaky behaviors. Sometimes it may work, sometimes it doesn't. That's absolutely not what we want.
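The shared-data pitfall Michael describes has a simple antidote: have each test create its own data instead of reusing fixed records. A minimal sketch, where the user fields and the test body are illustrative placeholders, not anything from a real suite:

```python
import uuid


def unique_user(prefix: str = "testuser") -> dict:
    """Generate per-test data so parallel tests never collide on shared records."""
    uid = uuid.uuid4().hex[:8]  # short random suffix, unique per call
    return {
        "username": f"{prefix}-{uid}",
        "email": f"{prefix}-{uid}@example.com",
    }


# Sketch of an atomic test: it creates its own user, exercises one flow,
# and depends on no state left behind by any other test.
def test_signup_flow():
    user = unique_user()
    # ... drive the browser through signup with user["username"] ...
    assert "@" in user["email"]
```

Because no two tests ever touch the same record, the suite can be fired off in parallel on a grid without the sometimes-pass, sometimes-fail behavior that shared fixtures cause.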

[00:22:55] Joe Colantonio Well, that's a great piece of advice. I always recommend people, when they're starting, to run right away in CI/CD and start running in parallel just to find these issues for sure. Absolutely. A lot has changed. Like I said, it's been three years. I know you mentioned some new features since the last time you've been on, like the ability to run Playwright, which seems like something I've been seeing a trend in as well, as a lot of people now use Playwright. So, Michael, is there anything else built into SBOX that maybe wasn't there three years ago that we haven't covered?

[00:23:28] Michael Palotas That's a good question. I mean, our engineering team worked very hard over the last three years. And typically, it's two parts. One is to keep up with what's happening in the browser and ecosystem space. But then, of course, the other one is to add more features. So we've done a lot of work in, for example, adding OpenID Connect as an identity provider system, just as an example, and adding Kerberos/NTLM to be able to mimic the user you're running as and tap into your enterprise identity system, things like that. This kind of also goes along with what Lee mentioned: because we are sitting inside your network, we have much deeper touchpoints than you can have when you're coming from the outside. And that allows us to integrate much deeper and tighter with the rest of your enterprise development and test infrastructure.

[00:24:26] Joe Colantonio Do you all work with, like, SaaS-based applications? Has someone had to scale a bunch of tests against Oracle or SAP? Do you care what application they test?

[00:24:38] Michael Palotas No. Not at all, mate. That's completely up to the customer to decide: what they want to test and where that application sits. Maybe one thing to clarify is that when we say on-prem or behind a firewall, it can absolutely be in your cloud as well. Most of our customers are running SBOX in their cloud, in AWS, Azure, or Google Cloud, or that kind of infrastructure, right? So we don't really care where the rest of your development pipeline is. But that said, typically the customers that come to us want to have everything in one place, which is behind their firewall.

[00:25:23] Lee Walsh And to add to that, there are certain scenarios that make the considerations we spoke about earlier more relevant than others, versus, say, testing something that's already available in production, used as a monitoring tool or something like that. So some of those key considerations become a lot more relevant depending on what you're testing as well.

[00:25:44] Joe Colantonio I'll be honest, one trend I've been seeing: if I go to any testing company's site, not yours, it's non-stop AI, machine learning, even though they may have been around for 12 years; all of a sudden I see AI and machine learning popping up. So, any thoughts about machine learning and AI, especially how it applies to a grid type of infrastructure?

[00:26:01] Michael Palotas Yeah, I mean, we're watching the space as well. And yeah, I think in general it oftentimes feels very confusing with all the solutions that are out there. I think it starts not even with AI and machine learning, but just with test automation. It can be a bit of a loaded and bloated term that's used to describe a lot of things, and it's sometimes hard to understand: what exactly am I getting from this solution or that solution? And of course, AI is one of those shiny new things. Everybody's adding it into their pitch to make the product look attractive. What we're seeing with AI is that, in fact, our customers are using or starting to adopt AI and machine learning. And this is something that we didn't really think of, to be quite honest. What we're hearing from those customers that are using AI in their products is that in most cases they have to test with real, or at least realistic, data in order to train their models, in order to make sure that their algorithms work. And so they're saying, we have no choice other than actually using real data to do that. And with that, being behind a firewall becomes a must-have. So that's something quite interesting that we're seeing, which drives even further adoption of what we're providing.

[00:27:29] Joe Colantonio Absolutely. Once again, I've worked at a medical company and we used patient data. You couldn't use it as-is; you had to normalize it. So this seems like a great point. If they were creating an in-house model, they definitely would want an in-house grid solution to run their tests against. Great point. So based on that and other things we talked about, what is the future of Element 34 and SBOX? Anything on the roadmap you can reveal or tell us about?

[00:27:55] Michael Palotas Yeah. I mean, there are lots of things on the roadmap. Our product vision is to ensure that security, scalability, and performance are always best in class. Whatever we do, it's always going to be centered around that. We're experts in the space, we understand what problems our customers face, and we listen to our customers. So we're going to implement what our customers need. Obviously, technology keeps evolving. Playwright is one good example: three years ago it wasn't there, and today it's a major player. We monitor that space, and as things change, we bring them into the product and add support for them. But first and foremost, as I think any software product company should do, you should listen to your customers. What do they need? That's the most important thing. From that, we put together a roadmap, in very close collaboration with all of our customers. We're constantly listening to them and seeing what they need, and that really helps drive the roadmap. All in all, we're super excited for the next phase of the company. We see amazing potential in what's happening in the market and what we can bring to solve those issues. So we're looking into a bright future.

[00:29:23] Joe Colantonio Nice. I know I keep mentioning it, but I really think the enterprise is kind of overlooked, or a lot of people just pick up an open-source thing. If anyone cares about security, risk, compliance, scalability, performance, or cost efficiency, all the things we talked about, it really applies to the enterprise, and they should definitely be checking out SBOX. So before we go, what's the best way to find or learn more about SBOX or Element 34? If someone listening to this says, oh my gosh, I'm an enterprise, I need to try this?

[00:29:52] Michael Palotas Well, I think the easiest way is to go to our website, where you can request a demo or request to speak to somebody. We're super happy to show you a personalized demo and see if it fits your needs.

[00:30:07] Joe Colantonio Awesome. And before we go, Michael and Lee, I like to ask this one question: what is one piece of actionable advice you can give to someone to help them with their automation testing grid efforts? I'm also putting together a book of the best answers for folks, so give me your best answer. Let's start with Lee and we'll end with Michael.

[00:30:25] Lee Walsh To keep things relevant to what we've just spoken about, take flaky tests, which we touched on a little bit. There's a whole host of different reasons why you might see flaky tests. Try to understand what the cause is. Is it down to performance? Is it down to the data that you're using? If it is, then potentially it's something like SBOX that you need to look at. So review your flaky tests, understand why they're failing, and come to a solution at that point.

[00:30:55] Michael Palotas Yeah. I'd like to give one piece of advice to anybody in the software space, not just around automation: don't just jump on the first bandwagon when you read about a new tool that came out or that somebody found. I always advise watching the space and seeing how particular tools evolve, because we've all been there. You read about a cool new tool, you bring it into your company, you start really relying on it, and then at some point you find out it's actually just one person looking after it. That person moves on, the project dies, and you have a problem, right? So that's one thing that I think can prevent a lot of headaches and a lot of rework: choose your tools wisely, and don't just jump on the next best thing that you may have read or heard about.

[00:31:49] Thanks again for your automation awesomeness. Links to everything we covered in this episode are in the show notes. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:32:25] Hey, thanks again for listening. If you're not already part of our awesome community of 34,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over and let's make it happen.
