How to Build a Culture of Comprehensive Quality with Matt Gilbert

By Test Guild

About This Episode:

On this episode of the TestGuild Automation Podcast, host Joe Colantonio welcomes Matt Gilbert, founder of Clear Insite, for a conversation packed with practical wisdom for testers looking to break into, or advance in, modern software quality practices.

Drawing on his experience working across industries such as insurance, healthcare, and SaaS, Matt shares how he transitioned from an aspiring developer to a passionate advocate for testing. They dive deep into the challenges of uniting disparate testing tools, making complex data from performance, accessibility, and security checks accessible to all team members, and the critical importance of educating both testers and business leaders about quality risks.

Matt discusses how Clear Insite was born from the need to simplify usability and performance feedback, making actionable insights available to everyone, regardless of technical background.

Plus, you’ll hear Matt’s take on using AI responsibly in regulated spaces and his advice for testers who want to expand their role beyond functional testing. Whether you’re new to automation, a testing veteran, or curious about the growing intersection of AI and quality engineering, this episode has actionable strategies and real-world stories you won’t want to miss.

About Matt Gilbert


I am an accomplished Software Testing professional with over a decade of extensive experience spanning several industries, including Insurance, Startups, SaaS, and Healthcare, among others.

Throughout my career, I have taken on diverse roles and contract work, each adding a new facet to my testing proficiency. My skill set encompasses a wide array of testing techniques such as API, Integration, Performance, Accessibility, UI, Usability, Mobile, and Contract testing.

An integral part of my expertise lies in the development of Test Automation Frameworks, where I have leveraged programming languages like Java, C#, TypeScript, and Python. I am also well-versed in using AI-driven automation tools like Testim and Mabl.

In my recent role as a team lead, I have fostered and enhanced individual testing capabilities within my team. This role involved nurturing growth through one-on-one mentoring, facilitating group training sessions, and creating an open and collaborative environment that encouraged exchanging ideas and best practices. My focus as a coach has been not only to improve the technical competence of the team members but also to develop their problem-solving abilities, critical thinking skills, and adaptability in the face of evolving technologies and methodologies.

My coaching extends beyond my immediate team. I have shared my insights and knowledge with the broader community through my blog. I believe in the power of continuous learning and make it a point to stay updated on the latest trends in software testing to ensure the advice I give is current and relevant.

You can check out some of the thoughts, techniques, and strategies I share with my team and the wider software testing community on my blog at https://beardedtester.substack.com/.

Connect with Matt Gilbert

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:35] Hey, I'm hearing a lot from different testers wondering how they can get real analysis and clear insights around AI and testing, security, performance, accessibility, all the things, and that's what this episode is about. You're in for a special treat because we have Matt Gilbert joining us. If you don't know, Matt is a versatile professional who's made an impact across many different industries; he's worked in insurance, startups, SaaS, and healthcare. He's also the founder of Clear Insite, a web testing platform focused on usability and performance. And beyond building tools, Matt is a prolific blogger who shares insights on everything from testing fundamentals to leadership, team dynamics, and the role of AI in testing. He's passionate about turning complex testing ideas into practical, actionable steps that anyone can apply, and you'll hear plenty of that on this show as well. You won't want to miss it. Check it out.

[00:01:24] Joe Colantonio Hey Matt, welcome to The Guild.

[00:01:28] Matt Gilbert Hey, Joe. Thanks for having me. It's great to be here.

[00:01:31] Joe Colantonio Awesome to have you. So you have quite a background. I'm always curious to know, especially with founders, I guess the first thing is how did you get into testing?

[00:01:40] Matt Gilbert So very early on in my career, I knew, well, I thought at the time, that I wanted to be a developer. I enjoyed building things and seeing how things worked, and I still do every day. But just out of college, I heard about software testing, and I felt like it was a good stepping stone to eventually get into software development. So I took a job, very early on, my first professional job, I would say, and that's where I got my feet wet with software testing, learned what it was, and maybe picked up a few bad habits along the way. But I'm here now and better for it, so I love every bit of it.

[00:02:20] Joe Colantonio Nice. So why do you think it's so hard to get these tools to work with one another? Is it because they're all open source? Is it because a lot of companies just don't have the expertise to implement these different types of testing that you probably don't even think about?

[00:02:35] Matt Gilbert I think it's a little bit of both. For accessibility testing especially, that's not something that's very commonplace nowadays, and that's very concerning considering it's a legal requirement in some respects. In healthcare, where I am, our space is very regulated, and we have to ensure that we're making our websites and our tooling accessible for a lot of different people of varying backgrounds. And performance is one of those things that you don't really notice until somebody brings it up to you, and typically at that point it's users, people who are using the site and giving you feedback, saying, hey, this is slow, or why is this happening? And at that point, it's kind of too late.

[00:03:21] Joe Colantonio Absolutely. You've worked in many different industries. Do you think a lot of people just forget about security, accessibility, and performance because maybe it's not a risk area for them, or maybe they don't know it's a risk area for them?

[00:03:32] Matt Gilbert Yeah, I think people probably don't realize how impactful it is to users and, honestly, their business. Security is the obvious one: if you're hacked, that's really bad. You're going to lose customers, you're going to lose money or data, and you're potentially going to have some fines associated with that. And performance and accessibility are just not something you think about every day. As an end user visiting a website yourself, you might think about it, but as you're building an application or a website, it's not the first thing that comes to mind. The first thing that comes to mind for a lot of people is, does this do what I want it to do, not does it perform well, is it secure, is it accessible, is it usable, and all those things.

[00:04:21] Joe Colantonio So why is that? I guess even if there is a solution, how does someone know it's a problem in the first place? If someone's listening, they may not even know, oh, I need to worry about security or accessibility. How do you get over that hump of not only knowing it's an issue, but also, here are some tools-

[00:04:37] Matt Gilbert It's really just about education. With Clear Insite, that's the path I took: what does performance look like for your website or your webpage, and what do these values mean? You and I know that FCP, LCP, CLS, all those values are impactful to users, but it's about really getting granular, deciphering those values, and letting business users know that X value is bad because of Y, right? So it's really about education.
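For readers who want to see what collecting those values looks like in practice, here is a minimal sketch using Google's open-source web-vitals npm package; the /api/vitals reporting endpoint is a hypothetical placeholder.

```typescript
// A minimal sketch of capturing the metrics Matt mentions in the browser,
// using the "web-vitals" package (npm i web-vitals). The /api/vitals
// endpoint below is a hypothetical placeholder.
import { onFCP, onLCP, onCLS, type Metric } from 'web-vitals';

function report(metric: Metric): void {
  // "rating" is 'good' | 'needs-improvement' | 'poor', the kind of
  // plain-language framing that helps non-technical stakeholders.
  const summary = {
    name: metric.name,   // e.g. 'LCP'
    value: metric.value, // milliseconds (unitless score for CLS)
    rating: metric.rating,
  };
  navigator.sendBeacon('/api/vitals', JSON.stringify(summary));
}

onFCP(report); // First Contentful Paint
onLCP(report); // Largest Contentful Paint
onCLS(report); // Cumulative Layout Shift
```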

[00:05:13] Joe Colantonio Sometimes I get a lot of flak for some of the views I put out there, like when I say a tester should be able to do security testing and some high-level performance testing. Oh no, God forbid: I'm a performance engineering professional, and therefore only I shall do performance. I don't know if you get any pushback like that. Do people say you need to stay in your lane, you're a functional tester, focus on that functional work and leave all these other aspects?

[00:05:32] Matt Gilbert I've been fortunate enough in my career that my managers and directors, especially at my current job, are very appreciative, and they want us to continue learning and growing. If that's learning about security testing and helping augment the team we already have that does that, as well as teaching others how to do it, that's something everyone's always encouraged me to do. But I have heard of that being the case at other companies, of individuals who have been kind of stifled into a particular role or forced to do manual testing all the time. You have to expose yourself to different areas and learn and grow, otherwise you're going to get pretty stagnant in your career.

[00:06:21] Joe Colantonio Yeah, 100% agree. I guess maybe another hurdle some testers have to get over is that a lot of the information returned is really technical. So how do they wrap their heads around that technical feedback and frame it so that management can make good judgments, or drive better outcomes, from the information? Both sides of it: the tester who may be like, ah, I have no idea what this is, number one, and number two, once I understand it, how do I explain it to my manager?

[00:06:49] Matt Gilbert The information that the testers are providing to the business, or the information that they're getting from the system under test? Gotcha. It's going to come from experience and just doing it. When you're testing a system and you're getting data back from it, you have to dig into it a little bit and do a little bit of your own research to understand what the data is, where it's coming from, why it's from that place, and all the interdependencies that go along with that. And then when it comes to relaying that to the business, one thing I've found helpful is leveraging AI. You've had a few people on your podcast talking about AI in the past few months or year, and everybody has a different perspective on it. But my thought is that it's going to help augment, and already has helped augment, a lot of different roles. I don't think it's at the point where it's going to replace anybody, but it is very good at text generation. If you give it a clump of text and say, hey, analyze this, or explain this in a way that makes sense to X, Y, and Z users who might not be technical, I think that's a good starting point. Then you, as a testing professional, can look at that output; if you agree with it, continue, and if not, you can tweak it a little bit. I think it's a good tool in that aspect.
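As a rough illustration of the workflow Matt describes, here is a sketch assuming the official OpenAI Node SDK; the model name, prompt wording, and function name are illustrative assumptions, not something Matt specifies.

```typescript
// A minimal sketch of "explain this test output for a non-technical
// audience", assuming the OpenAI Node SDK (npm i openai). Model choice
// and prompt are illustrative assumptions, not a recommendation.
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function summarizeForBusiness(rawResults: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content:
          'You rewrite technical test output for non-technical business ' +
          'stakeholders. Explain impact and risk in plain language.',
      },
      { role: 'user', content: rawResults },
    ],
  });
  // Per Matt's point: treat this as a draft the tester reviews and
  // tweaks, not a final answer.
  return response.choices[0].message.content ?? '';
}
```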

[00:08:20] Joe Colantonio You mentioned it's not all-knowing; it produces output, but it's up to you to look at it, apply your expertise, and work with it, almost.

[00:08:27] Matt Gilbert Exactly.

[00:08:30] Joe Colantonio I guess, since you are in healthcare, do you get any pushback, like, you can't use AI because we're regulated, or anything like that?

[00:08:38] Matt Gilbert Yeah, there are a few areas where, obviously, we're not supposed to use AI. And a lot of the AI tooling that we're using now is protected; none of the data that we pass in is being used to train any public models. Going a little bit further, there are guardrails put in place. If patient data were to get in there, there are guardrails you can set up to not allow that to happen. That way, you're only passing data to the model that you know is not going to get you fined, basically.
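As a toy illustration of that kind of guardrail, here is a sketch of a pre-send redaction filter. Real healthcare deployments rely on managed guardrail services and far more robust detection; the patterns and names here are illustrative assumptions only.

```typescript
// A toy guardrail: scrub likely identifiers before any text reaches a
// model. These regexes are illustrative only; production systems use
// dedicated PII/PHI detection, not a handful of patterns.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, '[REDACTED-SSN]'],               // US SSN
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, '[REDACTED-EMAIL]'],       // emails
  [/\b\d{3}[\s.-]?\d{3}[\s.-]?\d{4}\b/g, '[REDACTED-PHONE]'], // US phones
];

export function redact(text: string): string {
  return REDACTIONS.reduce(
    (acc, [pattern, label]) => acc.replace(pattern, label),
    text,
  );
}

// Only redacted text is ever sent onward, e.g. to the hypothetical
// summarizeForBusiness() sketch above:
//   const safe = redact(notesFromTheSystemUnderTest);
```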

[00:09:33] Joe Colantonio Who put in those guardrails, out of curiosity?

[00:09:33] Matt Gilbert Oh, that would be the infrastructure team, usually.

[00:09:33] Joe Colantonio All right. Once again, going back to performance: sometimes by the time testers get involved, functional testers, it's not that it's too late, but the architecture is already in place and decisions have already been made that determine how performant it's going to be. How do you make it more shift-left, let's say?

[00:09:54] Matt Gilbert Yeah, you want to provide that feedback as early as possible. Ideally, you also want to educate your developers on what good performance is and why it matters. I've been fortunate enough to work hand in hand, collaboratively, with a lot of developers who want to do the right thing the first time rather than just throw something over the wall and hope I won't complain about it or send it back or whatever. A lot of it just comes with education and working with developers. And that's where a relationship, and being able to talk their language, is super important. If they respect you and you have that mutual respect, then they're going to be more receptive to actually taking your feedback and applying it.

[00:10:41] Joe Colantonio It seems like your developers are pretty open-minded already, but do you know any ways maybe to encourage leaders to create a culture that's more open, maybe to quality?

[00:11:32] Matt Gilbert Yeah, it really starts at the top, but it also starts with people like me showcasing the value that we provide every single day. That's the collaboration we have with product when it comes to requirements analysis, and collaborating with developers, making sure they have the tooling they need to perform their testing well. You talked about shifting testing left earlier; that's a lot of what my role is now, building tooling for developers to actually help them perform testing, and do it well, before it even gets to me. And then it comes down to education. The things I'm doing every single day to help improve requirements and user stories all trickle down to the end result: a releasable product that users want to use and that performs well. Hopefully, business users see that side of it. It's really developer-focused right now: a lot of the existing tools are less UI-based, to help get data into the system and handle data creation, and then there are some UI-based tools as well, just to visualize certain aspects of the system and query and search for things you're looking for without jumping through 10 or 12 different hoops to get there.

[00:12:57] Joe Colantonio I guess another thing I see testers getting caught up on is that they put guardrails on themselves: I can't do this, I can only do that. It seems like you have your hand in everything. I assume when you got hired, they didn't say, oh, you're going to write tools for your developers and do this and that; I assume you just started doing it. How do you get to do cool stuff and not be so pigeonholed? I don't know if that makes sense.

[00:13:22] Matt Gilbert Like I said earlier, I've been really lucky at my current job to have the bandwidth to actually go and do those cool things. I enjoy building things, I enjoy coding; it's something I like doing. And I think you just have to do it. There's something we do at my current job called Passion Project Fridays. We do that once a month, and this started out as a small project that I would iterate on each of those Fridays, and eventually it grew beyond just a simple POC. This particular project was grabbing data from our repositories for our user stories; we use Azure DevOps for that. We store our user stories in there: our acceptance criteria, all the descriptions, all of the tasks. The tool was basically a way for testers to get that data into a model, and the model would then generate some test scenarios and spit them out into an Excel file. From there we could tweak, modify, delete, and add to those scenarios, and it saves us 8 to 12 hours a week on generating them. And those are test charters, I don't know if you're familiar with those, which are less like test scripts (do this, then click this button) and more high level: okay, as an X user doing Y thing, what happens? So that's where it originated. From there, it was just talking to the right people and showing them that it's something you're actually passionate about, that you care about, and that adds value in the end. And that's where I'm at today.
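As a rough sketch of the first step Matt describes, here is what pulling a user story's fields out of Azure DevOps might look like via its REST API. The organization, project, PAT variable, and work item ID are placeholders, and the model call and Excel export (which would need a library such as exceljs) are omitted.

```typescript
// A sketch of step one of Matt's charter generator: fetch a user story
// from Azure DevOps so its description and acceptance criteria can be fed
// to a model. Org/project/PAT are placeholders; requires Node 18+ (fetch).
const ORG = 'your-org';         // placeholder
const PROJECT = 'your-project'; // placeholder
const PAT = process.env.AZURE_DEVOPS_PAT ?? '';

async function fetchUserStory(id: number) {
  const url =
    `https://dev.azure.com/${ORG}/${PROJECT}/_apis/wit/workitems/${id}` +
    `?api-version=7.0`;
  const res = await fetch(url, {
    headers: {
      // Azure DevOps PATs use Basic auth with an empty username.
      Authorization: `Basic ${Buffer.from(`:${PAT}`).toString('base64')}`,
    },
  });
  if (!res.ok) throw new Error(`Azure DevOps returned ${res.status}`);
  const item = await res.json();
  return {
    title: item.fields['System.Title'],
    description: item.fields['System.Description'],
    acceptanceCriteria:
      item.fields['Microsoft.VSTS.Common.AcceptanceCriteria'],
  };
}
```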

[00:15:08] Joe Colantonio Nice. I know for a lot of people just doing functional testing, flaky tests take up a lot of their time. I could imagine if you're adding in security scanners, accessibility, and performance, you have all these results; how do you prioritize that? Is that something you run into: okay, great, now you've been doing all the things, but now you have to deal with all the things?

[00:15:28] Matt Gilbert Right. As you know, there's no limit to the number of tests we could run; testing is endless. It's up to us to decide what tests we're going to run and what things we're going to check. It's context-based: depending on the system, depending on the timeline, depending on how much time I have, that all determines the scenarios I hit. It's risk-based testing. Performance, maybe, if I have the bandwidth, unless it's a customer-facing web application with SLAs that need to be met, in which case performance is obviously going to be up there. But again, that's contextual. It really comes down to the tester needing to understand all of those details and apply them to their testing flow.

[00:16:13] Joe Colantonio For a lot of people, it's a lot to ask to get them to understand how to do all this and get it up and running. So, why Clear Insite? It seems like you have a kind of unique model where you don't have to invest billions of dollars just to get a result and see if it's worthwhile. Maybe you could tell us a little bit more about that.

[00:16:30] Matt Gilbert Yeah, I was hoping to do a full live demo today, and then I was playing with it beforehand and it didn't go so well, so I can't actually show everything it's doing.

[00:16:41] Joe Colantonio When you do get it working, you can do a quick video and send it my way, and I'll add it to the podcast. All right, like, check out the video or something. Let's check out a quick demo.

[00:16:49] Matt Gilbert Do you want me to show the accessibility or performance? Let me see if I can share, actually.

[00:16:55] Joe Colantonio Oh, I mean, that'd be better, because I think most people know the functional testing.

[00:17:00] Matt Gilbert I know we were talking earlier about potentially making this different subscriptions and allowing people to log in and run their own tests. The reason I didn't do that is that this is all running locally. As you can see, I have my own APIs set up, and I don't have to pay huge AWS bills to host all this infrastructure; it's just running in a Docker container. Pretty simple. I can go out here; I already have myself selected as a client. Ideally I would have more out there, but times are tough. So I go out here, choose my category, I'm going to do accessibility, and then I'm only going to run it on one page. Normally, if I don't have this selected, it's going to go through a bunch of different subpages and perform the checks on those as well. Right now it's spinning up a browser in Playwright, navigating to the web page, and running the validations on that page. Completed. We get HTML reports; let me actually re-share. This is the report it just generated, July 16th, 3:29 PM. It gives you a high-level view of the passed evaluations and failed evaluations. The raw analysis is really cool to look at; you can see what type of coverage you got. This is using Axe, if you're familiar with that. On the webpage, it looked at 367 elements and found one violation. My accessibility is good, at least on this page.
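For context, a check like the one Matt demos can be reproduced with Playwright plus axe-core. Here is a minimal sketch assuming the @axe-core/playwright package; the URL is a placeholder, and this is not Clear Insite's actual implementation.

```typescript
// A minimal Playwright + axe-core audit, assuming @axe-core/playwright
// (npm i -D @playwright/test @axe-core/playwright). URL is a placeholder.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('page has no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://example.com');

  const results = await new AxeBuilder({ page }).analyze();

  // Each violation carries the rule id, impact, and a helpUrl pointing at
  // the axe documentation, the same docs Matt links to from his report.
  for (const v of results.violations) {
    console.log(`${v.id} (${v.impact}): ${v.helpUrl}`);
  }
  expect(results.violations).toEqual([]);
});
```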

[00:18:33] Joe Colantonio And it's all fixed, I like that. Show me something so I understand it, at the very bottom there.

[00:18:40] Matt Gilbert Oh! Down here. This links you to the Axe documentation for the particular rule that was failing. We have the different rules here; this is the region rule, and then we have the different category types. Again, going back to the education aspect of it, hopefully this is useful for end users. Performance and security are set up similarly, and then there's the functional test generation, which I'm not able to demo right now; there were some AWS errors I was getting. But those tests are basically all packaged up into a zip file, and it gives you a README along with a package.json, so a user can download that, do an install, and then kick off the tests in a matter of a minute.

[00:19:27] Joe Colantonio How often would someone use this or need this, do you think? Is it every release? Every quarter? Every build?

[00:19:37] Matt Gilbert For accessibility and security, and honestly performance testing too, it's probably not something you're going to want to be doing every week. I would say it's, again, contextual: as often as you need it, right? Running these accessibility checks gives you a good baseline on the state of your application as it is today. Then, if you want to iterate on that, check it again a few releases down the road just to make sure you haven't regressed anywhere. I would say that's reasonable to do, but it's not something you need to run more than every month, maybe even every other month.

[00:20:13] Joe Colantonio Okay, Matt, before we go, is there one piece of actionable advice you can give to someone to help them with their testing efforts, and what's the best way to find you, contact you, and learn more about Clear Insite?

[00:20:23] Matt Gilbert Yeah, I would say, whether you're just getting started out in testing or you've been testing for a while like me: explore, create, and experiment. Try different tools, see what you like, and don't give up. As far as getting in touch with me, I'm on LinkedIn, I'm blogging on Substack, and you can visit clearinsite.io, and that's I-N-S-I-T-E.

[00:20:49] Joe Colantonio We'll have links to all this awesomeness in the comments down below.

[00:20:51] Thanks again for your automation awesomeness. Links to everything we covered in this episode are over at testguild.com/a555. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:21:26] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com, where you become part of our elite circle driving innovation in software testing and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real-world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.

[00:22:09] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Two men are featured in a promotional image for "TestGuild Automation Testing" discussing Playwright and AI in QA, with the text "with Ben Fellows.

Playwright, Cursor & AI in QA with Ben Fellows

Posted on 08/31/2025

About This Episode: In this episode of the TestGuild podcast, Joe Colantonio sits ...

Why AI + DevSecOps Is the Future of Software Security

Why AI + DevSecOps Is the Future of Software Security With Patrick J. Quilter Jr

Posted on 08/27/2025

About this DevOps Toolchain Episode: Support the show – try out Insight Hub ...

A man with glasses and a beard speaks animatedly into a microphone. Text reads "TestGuild News Show: Weekly DevOps Automation, Performance Testing, and AI Reliability. Breaking News.

Playwright MCP, Cypress FHIR API, AI Test Management and More TGNS167

Posted on 08/25/2025

About This Episode: Is AI the future of Test management? Have you seen ...