The Secret Life of Automation with Michael Bolton

By Test Guild

About This Episode:

In today's episode, host Joe Colantonio is joined by renowned software testing expert Michael Bolton, who returns to the show after an eight-year hiatus.

Join Michael and me at the Breakpoint online event: https://testguild.me/breakpoint

In part 1 of a two-part series, Michael dives deep into software testing, reflecting on what's changed (and what hasn't!) since their last chat, the impact of emerging technologies like machine learning and AI, and what it really means to “do” software testing in 2025.

Michael explores the true essence of testing beyond the buzzwords, challenging the industry's love affair with automation and redefining its role as a tool to augment, rather than replace, human insight. Michael shares insights from his latest travels, collaborative projects, and the exciting new class he's co-developed with James Bach, focusing on smarter, tool-empowered testing practices.

Prepare for a candid, thoughtful discussion full of practical advice, personal anecdotes, and a few laughs along the way. This will include live demos, tool recommendations, and real-life stories highlighting the “secret life” of automation many teams overlook.

Also, make sure not to miss our next episode next week, in which Michael takes a deeper dive into AI's role in testing.

You don't want to miss it – listen up.

Episode Sponsored By Browserstack

This episode is sponsored by Breakpoint 2025 by BrowserStack — the premier virtual event for developers and QA professionals.

Join thousands this May for three days of expert talks, hands-on workshops, and the latest innovations shaping the future of testing, including AI in testing, scaling automation, accessibility, and more.

Hear from leaders at Atlassian, Amazon, Walmart, Reddit, and others, and learn how BrowserStack's platform gives teams instant access to 3,500+ real devices and browsers to streamline testing workflows.
Bonus: I’ll also be speaking on How to Become an AI-Driven Testing Leader!

It’s completely free and fully online — register now and discover what’s next in testing.

Register for Breakpoint 2025: https://testguild.me/breakpoint

About Michael Bolton


Michael Bolton is a consulting software tester and testing teacher who helps people to solve testing problems that they didn't realize they could solve. In 2006, he became co-author (with James Bach) of Rapid Software Testing (RST), a methodology and mindset for testing software expertly and credibly in uncertain conditions and under extreme time pressure. Since then, he has flown over a million miles to teach RST in 35 countries on six continents.

Michael has over 30 years of experience testing, developing, managing, and writing about software. For over 20 years, he has led DevelopSense, a Toronto-based testing and development consultancy. Prior to that, he was with Quarterdeck Corporation for eight years, during which he managed the company's flagship products and directed project and testing teams both in-house and around the world.

Connect with Michael Bolton

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:06] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

[00:00:34] Hey, joining us today is renowned software testing expert Michael Bolton, who returns to the show after an 8-year hiatus. In this part one of a two-part series, Michael dives deep into software testing, reflecting on what's changed and what hasn't since our last chat, and also the impact of technologies like machine learning and AI, and what it really means to do software testing in 2025. Also, make sure not to miss our next episode next week, in which Michael takes a deeper dive into AI's role in testing. You don't want to miss it. Listen up. And before we get into it, I just want to share with you a quick shoutout to an event that I'm speaking at along with Michael Bolton.

[00:01:12] Hey, before we go into today's episode, I'm happy to let you know that Breakpoint 2025 by BrowserStack is almost here. What's it all about? Well, this May, join thousands of developers and QA engineers from around the globe for three days packed with expert talks, interactive workshops, and the latest testing trends shaping the future of technology, hosted by BrowserStack, the leading cloud-based testing platform. Breakpoint 2025 is your chance to explore cutting-edge innovations in AI and testing, scaling automation, accessibility, and the real-world stories behind the tools you rely on every day. You'll hear directly from industry experts at top companies like Atlassian, Amazon, Walmart, Reddit, Glassdoor, and Publicis Sapient. Plus, you'll connect with the testing community and learn how BrowserStack's platform empowers teams with instant access to 3,500 real devices and browsers to scale their testing workflows. Whether you're leading a QA team or deep in automation workflows, Breakpoint is your opportunity to discover what's next in testing, connect with peers, and walk away with actionable takeaways that make a difference. And I'm even more excited because I'm actually going to be speaking at this event on how to become an AI-driven testing leader. Best of all, it's absolutely free, fully virtual, and tailored for busy professionals like you. Don't miss out on Breakpoint 2025 by BrowserStack, where testing meets the future. Register now using the special link down below to secure your spot, and I hope to see you there.

[00:02:40] Joe Colantonio Hey Michael, welcome back to The Guild.

[00:02:45] Michael Bolton Thanks very much. It's great to be here.

[00:02:47] Joe Colantonio Great to have you. It's been a while. I just looked at my calendar. I think last time we spoke officially on the podcast was 2017, so I can't believe it's been that long, but it has, so that's crazy.

[00:02:59] Michael Bolton Eight years. The kids are in school already at this point.

[00:03:04] Joe Colantonio Wow, that's nuts. So I guess a lot has changed and a lot hasn't changed in software testing. I want to ease into some topics I want to get into, but like, I guess what have you been up to in the past 8 years? Anything new or exciting?

[00:03:18] Michael Bolton Well, there was this pandemic thing, as I seem to recall. Lots of stuff. Obviously, in 2017, or was it 2018, I was working on a project with a colleague that incorporated a certain form of machine learning, certain forms, in fact, several of them, of machine learning and AI. And we could see this starting to creep up in the Zeitgeist. But then in November of '22, along came this version of ChatGPT, which took the world by storm and got everybody terribly, terribly excited. And I started getting pretty seriously bored with it within a year, but the rest of the universe seems to have gone nuts for it. I've been traveling, of course, in the periods before and after the pandemic. I just finished a trip that included Abu Dhabi and New Zealand and Istanbul and a couple of places in Romania. The other thing that I've been interested in lately and developing with my colleague James Bach is a class; we're now teaching a new thing on testing and automation, avoiding the traps. At some point, I want to reconsider that title so that it reflects a little bit more of what we're actually promoting and advocating in terms of tool use. But I've been having a lot of fun teaching that class and showing people what we've learned about how to use certain kinds of tools, especially focused on data. That's been a lot of fun lately.

[00:05:04] Joe Colantonio Love it. I'm going to dive into some of those points, but I think you have a unique, or maybe not so unique, philosophy on what software testing is. I think a lot of people nowadays don't actually have a really solid background in software testing. I want to start by defining what software testing is, and then start diving deeper into these other forms of testing. So I guess the question is, how do you define software testing?

[00:05:31] Michael Bolton Well, testing is evaluating a product by learning about it through experiencing and exploring and experimenting. Now, that includes a ton of other stuff: examining, explaining. It includes making conjectures about the product, making inferences about it, modeling it, questioning it, studying it, deliberately manipulating it, generating ideas, overproducing ideas, abandoning ideas that we've overproduced, refining ideas, expanding on ideas that were refined, and then re-refining ideas that we've expanded on, navigating, map-making, collaborating with other people. Significantly, importantly, looking at products with a critical eye, looking at products with the perspective that there's trouble there to be found, and that trouble will hurt our businesses and their clients. When there are problems in the product, problems that matter to people, we want to be aware of them and find them so that the business can decide to address them in whatever way the business wants to. And, of course, resourcing and using tools and developing tools is part of that in a natural kind of way. We don't put tools at the center of things. We use tools to augment our capabilities as people, as investigators, as testers. But the essence of testing is that learning about the product to find the problems that matter.

[00:07:13] Joe Colantonio Love it. I think this is a great starting point for the rest of our conversation. So the first one: I think there's a lot of misunderstanding sometimes when people hear your views on software testing. It almost sounds like you're against automation, but you're not. You just mentioned that you're all for instrumentation or tooling to help augment, but not replace. So is that correct?

[00:07:33] Michael Bolton Yeah, a big problem with automation as commonly conceived out there in the wild, so far as I can tell, is that automation refers to one kind of tool use in testing, and that is operating the product via some mechanism, and then every now and again making an assertion about the product's output or about its behavior somehow, and reducing testing to that. You could certainly do a certain amount of testing that way, but it's really, really limited, because what that amounts to much of the time is testing that demonstrates that the product can do something, and not a probe into finding problems about the product that would threaten its value. Now, I did a brief video on this. It's nicely brief, it's only about 3 minutes, where one of the things that I talk about is this weird delineation we have in the software world between functional requirements and non-functional requirements, where just by calling something non-functional, we seem to sort of dismiss it a little bit, seems to me at least. Now, it is really important to make sure that the functions in the product do their thing. That's because all of the requirements for the product, pretty much, depend on a function to make something change or happen in the product. In software, we need functions to make that happen. That's how code works. And that's important because if a function doesn't do what it's supposed to do, some requirement or another probably won't get met. Somebody intended for that function to do something to help meet a requirement. And if the function doesn't do it, the requirement doesn't get met. But there's an asymmetry here. The asymmetry is that just because functions can be observed to produce correct output does not mean that the requirements will be met. There are all kinds of requirements associated with a product, and many of them, by the way, I want to make it clear, many of those requirements are not written down. They're not specified in advance. They are not made explicit. But everybody wants a product to be capable of doing something. Everybody wants a product to be usable by their lights, in terms of their notion of what's usable. Everybody who uses the product wants it to be usable from their perspective. People want products to be reliable. They want them to be esthetically pleasing, and to solve a problem in some kind of unique or interesting or valuable way. They want the products to engage them and to entrench them to some degree. They want their products to be performant, to be secure, to be scalable, to be configurable and installable and uninstallable. And certain other people, right, people inside the project, want the product to be supportable, maintainable, portable, localizable, and testable, that was the one I was forgetting. Supportability, testability, maintainability, portability, and localizability. All these things are things that can be tested, but it's difficult for me to imagine how they could be checked without experiencing those products, without interacting with them. And my colleague James has said over the years that much of what passes for test automation these days amounts to some kind of conspiracy to make sure that nobody actually interacts with the product directly. I mean, that's a bit of overreach, but only a bit, so it seems to me.

[00:11:46] Joe Colantonio And the problem with that is people are just relying on the tooling and the biases of the tooling rather than actually getting their hands dirty. Is that why?

[00:11:55] Michael Bolton There's a big difference between checking the output from functions in a product and testing the product. I think one of the big problems happens because programming is glamorous in our business, right? We are software development people, and the most important people in making software happen are the developers, arguably, because if you don't have developers, you don't have code, and there's no product. That focus on programming, I think, has sort of infested the testing world. And we've kind of forgotten about people's experiences with the product. And we've also forgotten about the necessity, I would say at least, of performing experiments on the bits of the product as they're being built and on the whole built product. There's a certain crowd of folks who think that as long as we've got a build pipeline and the product passes its automated checks, it's always automatically 100% ready to deploy. And as a tester, as a sort of amateur epistemologist, I've got to be really careful about that, because the built product has a reality that its components do not have. The built product has a reality that the last version of that built product didn't have. Sometimes, frequently it seems to me, we want to interact with the built product using human eyes and human minds and human fingertips, to get experience with the product and to perform experiments on it, as something that we would do in a high-risk kind of situation where we want to be careful about the difference between the bits and pieces of the product that we've tested nice and thoroughly and that we've checked really thoroughly, and the product as it's actually been built this time. We don't want to inflict that on people if there are going to be problems with it. It goes back to what I was saying before: much of what's going on in what people call automation, one kind of use of tools for testing, is a demonstration that everything's okay. And there's something to be said for that. But as a tester and a skeptic and somebody who worries about the possibility that there could be problems hidden, lurking, that we haven't noticed yet, I really believe in engagement with the product. And it doesn't have to happen every time, and it doesn't have to happen on every build. But sometimes we've got to take the built product and experiment with it while the developers go on and make other incrementally different builds with little changes in them along the way. And every now and again, we pull out a complete build and really take it for a good workout.

[00:15:08] Joe Colantonio When should we do that, though? Is there a rubric or a way of knowing? I guess risk, is risk an indicator? Like, we're messing around with the payment plan here, so I'd better check it out because it's important to my company. Or how do you know?

[00:15:21] Michael Bolton You never know for sure. That's the thing. There are heuristics you can apply. A heuristic is a fallible means of solving a problem or, as we used to say, making a decision; but making decisions is solving a problem. There are decisions we can make about the state of the product. And some of those decisions are going to be influenced by past experiences of bugs getting past us. Some of those are going to be influenced by our reasoned belief that there are no problems in the product, and no reason to believe that there might be problems. When risk is elevated, when money is on the line, when human health, safety, opportunity, liberty, when those things are on the line, it might be important to give our product some exercise before we inflict its consequences on people.

[00:16:16] Joe Colantonio Absolutely. Do you have an automation strategy you usually recommend, an approach people can use so that they don't go all in on just automating and miss out on experiencing the software as a user would?

[00:16:27] Michael Bolton Well, yeah, several. The first thing is that we want to look really carefully at what is being automated. In the world of automated checking, what is being automated? Input to the product and operating the product, that can be done algorithmically, mechanistically. Then the product operates and we apply a decision rule to the product. And that decision rule basically says: if the output from the product is consistent with some pre-described, presumably desirable result, or some calculated result that comes from a table or from an algorithm someplace, apply that decision rule. And if the decision rule comes out favorably, then we turn on a little green light. And if it comes out unfavorably, we turn on a little red light. That's the part that's being automated in much of what people are calling automation these days. But there's a bit beforehand, as you mentioned: strategizing, deciding what we are going to choose to apply automated checks to. The framing of a risk in terms of a question that we might like to ask of the product, an experiment that we'd like to perform on it. The framing of that question into a yes-or-no question, because computers do the binary thing. The encoding of that into a bit of code that makes the check happen. All of these things require substantial amounts of skill, which is why I'm kind of jaundiced about the no-code or low-code people. There's always code. It's either code that you write and control, or code that you don't write and don't control, but there's always code. So the no-code notion to me is a non-starter. But then, after the check is performed, a light glows red, and at that point the testing begins again. There was testing work that was going on before the check, and now after the check there's more testing work, which includes observation, analysis, investigation, discovery, learning, and then relaying what we've learned to the people who need to make some kind of change in order for that presumably desirable outcome to appear. Now, that could involve fixing the product code. It could involve fixing the check code. It could involve realigning those two things with the environments that they're running on. It could include all kinds of investigation and rework. But that rework happens when we've made errors, and that's what we're trying to trap with these automated checks. The strategy associated with automated checking is: let's look at the risk of not being aware of some function somewhere in the product doing something that it shouldn't, or not doing something it should. That's a great thing to do. But there are lots of other ways to apply strategy to different kinds of tooling, and more to the point, strategies for applying tooling to different aspects of testing. We could have a look at those. One of the first things to do is to review what is available to us, what we can do. There's a computer and people using it. What can we do with that? Well, one of the things that we could do is we could simulate the user. Now, I was having a chat with a coaching client yesterday in which I asked her, what is being simulated here? And she said, well, user actions. And at that point, I wanted to inject a little bit of caution, because there's a fellow that we learned a lot from, a sociologist by the name of Harry Collins. And he points out a distinction between action and behavior. Behavior is just a physical thing that happens in the world, something you could look at and see.
For example, here's an example of a behavior. I take the glasses off, and you can see that behavior there. Most people would call that blinking. But then there's a different kind of blinking, which goes like this. That's what we call a wink. And that distinction really helps to underscore this important point: an action is behavior plus intention. When we're simulating the user, we say that we're simulating the user in a suite of automated checks, in a set of end-to-end checks, for instance. We say that we're simulating user actions, but we should be careful about that, because we're simulating user behaviors. The machinery doesn't know anything about the action, doesn't know about the intention, doesn't know about the desire, doesn't know about the user's concept of what would make them happy or not happy, what would be useful versus problematic. It's important for us to recognize that the behaviors are what we're simulating, not the actions. That gets abstracted away in the user simulation. Anyway, that's one of the things you can do, and that's what people commonly talk about when they're talking about automation. But there are so many other things that we can do. Static analysis has been around for a long time, and it gets a bit of attention, but it's sort of getting brushed aside in the excitement about the tools that automate browsers, for instance. But we can also simulate the system. We can use tooling to do that for backend simulations, mocks, and aspects of the product that we want to interact with. If we're testing an API, one of the things that we can do is set up something that queries systems that aren't necessarily the real thing. And that's all right. That's okay. And certainly, once we're testing a system that's behind an API, there may be other systems behind that which we can also simulate. We can simulate systems. That's cool. Another aspect of tooling that we can apply is monitoring and analyzing and observing the behavior of the product in production. That's largely associated with things like logging and log files and gathering logs and analysis, looking at the interactions between people and product, watching the JSONs going back and forth, doing that sort of stuff. Another thing we can do, very helpful, is tool-supported test design, including things like generation of data, modeling of flows, mapping various perspectives on the products in various ways. One of the big ones, one of the ones that we're kind of excited about these days, is what we would call augmented experience. Now, James has invented and developed some tools which we're finding really, really handy. One of which is a coverage recorder, which indicates the various places that you've hit as you've interacted with the product and then graphs and notes it. I wanted to show you an instance of that.
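To ground the shape Michael describes, operate the product mechanically and then apply a decision rule that turns a light green or red, here is a minimal sketch in Playwright-style TypeScript. It is not anything demonstrated in the episode; the URL, selectors, and expected value are hypothetical.

```typescript
import { test, expect } from '@playwright/test';

// A minimal automated check: drive the product mechanically, then apply one
// decision rule to its output. The URL, selectors, and "expected" value below
// are illustrative only.
test('quote total matches the pre-described expected result', async ({ page }) => {
  // Input and operation: simulated user *behaviors* (navigation, typing, clicks),
  // with no knowledge of any user's intention or notion of "usable".
  await page.goto('https://example.test/insurance/quote');
  await page.fill('#vehicle-year', '2020');
  await page.click('button#get-quote');

  // Decision rule: compare the product's output to a pre-described, presumably
  // desirable result. Green light if it matches, red light if not.
  await expect(page.locator('#quote-total')).toHaveText('$1,200.00');
});
```

Everything before the check (choosing the risk, framing the yes-or-no question) and everything after a red light (investigation, analysis, reporting) stays human testing work, which is the point of the passage above.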

[00:24:24] Joe Colantonio That'd be cool to have that.

[00:24:28] Michael Bolton Yeah, let me do that. Here's a dopey kind of automation that I apply all the time. It's called a batch file, and it just takes me where I want to go right away. As silly as it may sound, this saves me many minutes a day, so it's a way of doing that. Now, that doesn't solve the problem of me not having been here in the last few days. Let's look at that. Oh, let's look at the screen analyzer. That's what we're looking at. Whoops, CD screen analyzer, how about that? All right. Okay, let's see. Okay, so here we go. Thank you very much. As predicted, the coffee. Now, let's see here. I want to see if I can put these all on the same screen. Yeah, I can too, cool. Let's look at this. One of the things that we can use tooling for is what we would call augmented experiential testing. We have a little sample here. What you're seeing is a record of data gathered by James's coverage analyzer. And what this does is it goes through a record using this particular little tool here. You can't see it in action at the moment, but it's just something you pop up and you associate it with a given URL. It's trying to grab this one right now. What it does is it identifies everything you've clicked on and everything that you've interacted with, and captures that, and then logs it, and then graphs it. Here you can see stuff where, for instance, a bit of testing happened on the Tricentis sample insurance app there. And on the left-hand side, you can see elements in the DOM with which James interacted at a particular time. What's happening here is that we're tracking a person's interaction with the product. And then if we go back up here to the thing that looks like a home, there we go, we can reset the axis. And you can see what happened over this period of, what, I guess it was an hour and 30 minutes or so of interaction with this product. You can see where it got hit. Here is something from one of James's students. This was her interaction with the product. And you can see, first of all, that she took a break at a certain point, but you can also see very different patterns of what got accessed and what didn't. In fact, her visits here seem to be to a bunch of places where James didn't go, or at least he went at different times for different amounts of time. By the way, the tool also captures notes and bugs.
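James's coverage recorder isn't publicly available, so the following is only a rough sketch of the general idea, not the actual tool: capture which DOM elements a tester touches and when, so the record can be graphed afterwards. Everything here, including the cheap selector builder and the `dumpCoverage` helper, is hypothetical browser-side TypeScript.

```typescript
// Sketch of the idea behind a coverage recorder (not James's tool): log which
// elements a person interacts with, and when, for later graphing.
type TouchRecord = { at: string; event: string; selector: string };

const touches: TouchRecord[] = [];

function cssPath(el: Element): string {
  // Cheap identifier: tag plus id/class if present; a real tool would do better.
  const id = el.id ? `#${el.id}` : '';
  const cls = typeof el.className === 'string' && el.className.trim()
    ? '.' + el.className.trim().split(/\s+/).join('.')
    : '';
  return el.tagName.toLowerCase() + id + cls;
}

for (const type of ['click', 'input', 'change']) {
  document.addEventListener(type, (e) => {
    const target = e.target as Element | null;
    if (!target) return;
    touches.push({ at: new Date().toISOString(), event: type, selector: cssPath(target) });
  }, true); // capture phase, so we see events before the app handles them
}

// Later: dump the log for graphing (time on one axis, elements on the other).
(window as any).dumpCoverage = () => JSON.stringify(touches, null, 2);
```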

[00:28:08] Joe Colantonio Oh, cool.

[00:28:09] Michael Bolton So it's something else you can do when this window is up here. There's a way by which you can grab the notes and you can do a sort of console stamp, and there we go. Whoops. That doesn't work at the moment because I'm showing you the, I'm not on a product that I'm testing, I'm showing the output.

[00:28:30] Joe Colantonio Right, right, right.

[00:28:30] Michael Bolton But that was a bad time to do that. But anyway, at this point, you can drag down or drop down a thing and put in a note or a bug, or the time of the session start or the time of the session end, and that'll get recorded as well. Here is an instance of automation being run on exactly the same thing. And notice, the number of things that got hit here by the automation is substantially lower than the number of things that got hit here. And it did so in a very repetitive and uniform kind of way, which, for routine output checking, is probably not that bad an idea. But it does show you that, by its nature, the automation doesn't explore. It doesn't investigate, it doesn't ponder, it just looks for some kind of checkable result and then off it goes. That's one kind of augmented experiential testing. Now I'd like to show you another. This is kind of funny.

[00:29:46] Joe Colantonio Is this a free Chrome plugin by any chance?

[00:29:50] Michael Bolton It is a Chrome extension. Let me- You wanna ask me that again?

[00:29:56] Joe Colantonio Sure. So James, is this available for anyone from the Chrome store?

[00:29:58] Michael Bolton That'd be Michael.

[00:29:59] Joe Colantonio Oh, sorry, Michael. I'm sorry.

[00:30:05] Michael Bolton Not to worry, give me a chance to laugh and clear this thing out of my lungs there. You can do that as an outtake. That'd be funny.

[00:30:15] Joe Colantonio So Michael, is this available on the Chrome Store?

[00:30:17] Michael Bolton No, it's not. Not just yet. James and I provide it to our students after a class. We're a little leery about it being ready for prime time. Some of the tools, though, are available. Let me give you, let me show you an example of a place where, yeah. Here's an example of something that is available, I think, I hope. Are you actually gonna come up there, Bob? Oh dear, the whole thing is down. That's a bit tragic, because it's such a wonderful tool. Oh, there we go. Okay, okay. Let me run something, and I'm not sure if the echo cancellation will allow this to happen properly, but we'll see. So this is a program called Chrome Log Watch. And it's a little way of sort of augmenting your testing as you go. What this does is it looks at the Chrome debug log. Now the question is, is the debug log even enabled? This is what I absolutely love about live demos. For some reason, when I start the browser, the browser is not bringing up the logging. Why is that? I believe it's because it's running. Let me do this. Let me see if this works. Cut this. Here's a tool that we use all the time. And this is the kind of tool that people don't talk about very much. Do you know about Search Everything?
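Chrome Log Watch itself isn't shown here, but the general idea, watching Chrome's debug log while you test, can be sketched. The sketch below assumes Chrome was launched with --enable-logging --v=1, which writes chrome_debug.log into the user data directory; the Windows-style path, the polling interval, and the whole watcher are assumptions for illustration, not Michael's tool.

```typescript
import { statSync, createReadStream } from 'node:fs';

// Sketch of a log watcher (not Chrome Log Watch): poll Chrome's debug log and
// print anything newly appended while you test. Assumes Chrome was started with
// --enable-logging --v=1; the path below is a typical Windows location and may
// differ on your machine.
const LOG = `${process.env.LOCALAPPDATA}\\Google\\Chrome\\User Data\\chrome_debug.log`;

let offset = 0;
try { offset = statSync(LOG).size; } catch { console.error(`No log yet at ${LOG}`); }

setInterval(() => {
  let size: number;
  try { size = statSync(LOG).size; } catch { return; } // log not there yet
  if (size < offset) offset = 0;                        // log was truncated or restarted
  if (size === offset) return;                          // nothing new appended
  createReadStream(LOG, { start: offset, end: size - 1 })
    .on('data', (chunk) => process.stdout.write(chunk.toString()))
    .on('end', () => { offset = size; });
}, 1000);
```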

[00:32:36] Joe Colantonio I do not.

[00:32:37] Michael Bolton Oh, it is really cool. Simple tool, and it's free. It's a free tool. And what it does is it monitors every single file on your system in real time. Well, it models them, that's not quite right. It looks at the file system in real time and indexes every single file on your system. What I'm looking for is the link to Chrome that comes on the Start menu rather than, ah, you see, these are a bunch of things that the operating system appears to have put in against my will. I want to take those out. You can do various kinds of file manipulation with this. And I want to start Chrome from this one. First thing is I want to open it and see. Yeah, you see, that's not coming up. Somewhere along the line, and this happens all the time, this is part of the secret life of automation. The secret life of automation is that whatever looks good in a demo got there by painstaking rehearsal and probably by editing. Whatever looks smooth. This is one of the things that we talk about a lot in our classes, the secret life. Testers are very tempted to hide the secret life for lots of reasons. If you don't mind, we'll go into a bit of a detour about that, and then we'll return to our regularly scheduled programming. There are elements of a secret life to automation. Now, why is that? There are some answers. And that isn't right either. Why is that not right? It's supposed to be this one. That's not, ah, see, secret life again. We're going to address this. And why might there be secret processes? That is the slide that I'm looking for. Right here, oh, it's 85, it's not 70 at all, so 85, here we go. There's a big secret life in automation, and there's a secret life in all kinds of processes, but in the world of automation and tool use and stuff, there are processes that we don't talk about very much, that are kept secret. Now, why is that? Well, one of the reasons is that people don't notice their process. For instance, in an automated check, Joe, let me ask you: what, specifically, is being automated?

[00:35:36] Joe Colantonio What you tell it to automate, step by step.

[00:35:39] Michael Bolton So, but specific, what specific things are being automated?

[00:35:47] Joe Colantonio The user interface, the interaction between a browser or an application under test, the objects, the interaction with an object.

[00:35:56] Michael Bolton What kind of interaction specifically?

[00:35:58] Joe Colantonio Usually clicks, tabs, enter, minimize, maximize, close, opens.

[00:36:07] Michael Bolton Interaction via a simulated.

[00:36:10] Joe Colantonio Simulated.

[00:36:12] Michael Bolton Yeah. And that simulation is really important. There's a little video I can show you about that, because, believe it or not, hitting a real key on a real keyboard is different from a tool operating a virtual key on a virtual keyboard. When we hit click, we are not actually performing exactly the same thing as a user would; we are catching the process somewhere along the way, such that we call a click function somewhere in the browser. But we had a wonderful experience one time with a web-based product that had an email field in it. And one of the testers, this was happening in a class, one of the testers reported that the 8 in his email address, his email address was something like BobSmith1980@gmail.com, he would type BobSmith19 and he'd hit the 8 key and it wouldn't register. This seemed bizarre to us. So we set up, I think at the time we were using Selenium, Playwright does the same thing, to just pop a string of digits in there. And there it goes: one, two, three, four, five, six, seven, eight, nine, zero. But then when we tried to do it on the actual keyboard, it didn't work. Well, it turns out, and I've got some video I could show you about that, and we're jumping around quite a bit, but it turns out that there was code in the application that was attempting to filter out stars, because stars aren't allowed in email addresses. Yep, yep. So the developer had trapped the downstroke of the 8 key and was filtering that out, forgetting to check whether the shift key was also down. So the consequence of this was that anybody who had an 8 anywhere in their email address was unable to enter their email address because of this filtering function.
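The bug Michael describes can be reconstructed as a hypothetical keydown filter: the handler traps the physical 8 key (the key that produces '*' with Shift on common layouts) but never checks the Shift modifier, so a plain '8' is swallowed too. This is a sketch of the pattern, not the actual application's code, and the field id is made up.

```typescript
// Hypothetical reconstruction of the filter in Michael's story (not the real code).
// Intent: block '*' because the developer believed stars aren't allowed in
// email addresses. On common layouts, '*' is Shift+8.
const emailField = document.querySelector<HTMLInputElement>('#email');

emailField?.addEventListener('keydown', (e: KeyboardEvent) => {
  // BUG: this traps the physical 8 key without checking whether Shift is down,
  // so a plain '8' is swallowed along with '*'.
  if (e.code === 'Digit8') {
    e.preventDefault();
  }
  // Closer to the intent: filter on the character actually produced, e.g.
  //   if (e.key === '*') e.preventDefault();
});
```

A tool that fills the field or injects the keystroke at a different layer than the physical keyboard can slide past a handler like this, which is consistent with the Selenium run in Michael's story happily entering digits that a real keyboard could not.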

[00:38:30] Joe Colantonio Oh my gosh, that's wild.

[00:38:32] Michael Bolton Yeah, now the key thing to notice there is that automation would be unable to detect that problem, or at least automation of the kind that most people implement using the popular Selenium, Playwright, Cypress kinds of tools, or indeed anything that is driving the browser by interacting with stuff at the browser level like that. Now, that's not to say that these problems are necessarily common, or that the developer was particularly swift in this case. That's a whole different deal. But it is the kind of problem that automation would not alert us to. But maybe a good way to do this is to start again with the secret process. I'll do it this way, I guess. There's a secret life to work, any kind of work, really. A lot of what we do, we do because people are hiring us so that they don't have to think about it; they don't want it to be their job, so they hire us to do it, which is great. Yeah, that's fine, it's a good thing. But there are some processes that we like to talk about, and then there are other processes that we don't like to talk about so much. We prefer to keep them secret. Partly because we don't necessarily want to expose people to the sausage being made, but also because sometimes there are certain things that we might not be aware of, or certain things we might be a little bit embarrassed about if people were to know about them. Here are some reasons why there might be secret processes, and we're focusing in particular on the world of automation. One thing that we hear from a lot of testers when we quiz them, kind of interrogate them on what they're doing, is that there are lots of things that they actually do that they've not noticed before, that they're not taking account of. For instance, I was speaking with a tester yesterday and asking her about what she does. And she says, well, I get a set of checks that have been developed by other testers and I turn those into code. Okay. Is that really all that happens? And she said, yeah, yeah. Well, okay, so what else happens? What else is happening in there? Let's break that down a little bit. And she says, well, I start the product up and I interact with it and I make sure that I can do the things that the other testers were able to do. I said, oh, okay. There's actually some interaction that you're going through yourself. What happens when you find a bug? I asked, and she said, well, if the check crashes or if it yields a bug, well, then I investigate it and I try to figure out whether it's a bug in the check code or a bug in the product code or something about that. Her epiphany at that point was, oh, wow, if we really break it down, there's lots of stuff that I've never noticed before. There's lots of stuff in the process of developing and applying tools that people don't talk about because they don't think about it. It just goes below the level of their consciousness and they're just doing work. That's a big factor. Sometimes there's something going on that they don't really know how to describe very well, and that's partly because lots of what we do is involved in, is embedded in, tacit processes, stuff that even we have not made explicit to ourselves. We don't notice it, and then when we do notice it, well, then we don't have words for it, we don't have terms for it. Part of what James and I do a lot in our work is we try to make visible and legible and comprehensible things that are sort of below the surface of what we think about and talk about much of the time. We work on that. We try to name heuristics that people apply, for instance.
Another reason that people are kind of secret about some of the work they do is that when we describe it to people, they don't get it. They don't have the lived experience that would be required for them to understand what's going on and why it's important and why it makes sense. Some things are secret because we don't record them. And we don't want to record them. That feels icky to lots of people. Some processes are forgotten. Now, one of the things we say about note-taking and testing is, listen, it's perfectly okay not to keep records and not to take notes as you go. Perfectly okay to do that, as long as you're cool with forgetting lots of stuff. I mean, that's my experience and your experience too: the stuff that we don't diligently record, we forget about. And lots of the time that's perfectly okay. And sometimes it's important. Sometimes people say, well, managers actually don't care what I do. They just want results and to hear about that. Or even worse, they say they just want numbers. I don't think that's true. Actually, they don't want numbers one way or the other. What they want is a feeling. They want a belief. They want an understanding. That was something Jon Bach, James's brother, once said to me. What do they want to know? They want to know of me, as a tester: am I on top of it? Well, that's true to some degree, but I think what managers really want to know is, from the manager's perspective, am I on top of it? Managers want to be aware of the status of their product. Now, there are some people who have a sort of cynical belief that managers don't actually want that. And to a certain degree, that is always true. Nobody wants to hear about problems in the product. Nobody wants to hear about that. It's just that pragmatic, responsible people would probably prefer to know about them so they could deal with them, rather than being oblivious to them in such a way that they could cause harm. This is one thing that afflicts me all the time, and that is that I'm shy about the code that I write, because, among other things, I cut my teeth in a company where the programmers were assembly-language-level programmers. And I feel like if I showed them my code, first of all, it wouldn't be up to their exacting standards, but also it would look simplistic, like newbie stuff and that sort of thing. And in a way that's just kind of silly, because these guys were a lot smarter than that. They recognized that, hey, code is what you need to get the job done. When you do that, don't worry too much about it. If it's not frightfully important that it be super, super precise or super clean or super elegant, then it's not a big deal. But as humans, we kind of get shy about that. A lot of the time, people are embarrassed about the way they test as being redundant. And there's actually something to that sometimes. I'm sure you've heard stories of people who are testing at the GUI level things that are not only easier to test at the API level, but have been tested already at the API level; but because the GUI automation is dazzling, they sort of get away with that. It looks good. The opinion of the person who's not looking too closely is, well, if it looks good, it must be good. Sometimes the process involves recognizing that there's maintenance stuff. Sometimes people decline to talk about certain aspects of their process because it's really hard to do and they can't tell when it's gonna end. So they just dampen that conversation.
Here's something that I actually often have a hard time talking about. I don't know sometimes how to talk about things in a general way, such that other people won't come along and say, well, that doesn't apply to us. Well, that may actually be true, but it does apply usefully in this sort of circumstance. So that's something that all of us need to get over a little bit and recognize: hey, if we found a useful way of approaching a problem and using tools to do it in a certain situation, we can talk about how useful it is in that kind of situation and not worry about the circumstances where it's not. Sometimes the code that we inherit is opaque. Sometimes we're secretive about our process. We like to tell people, oh, well, I read the requirements documents and I read the specifications and I read the API documentation. Well, no, actually, we don't do that sometimes. A lot of the time you just sort of dive in and try to get to work. And that, again, can be a little bit embarrassing. Lots and lots of text on this slide. But basically, we're trying to help people become aware of their processes in our work. And there are certain obstacles to that, with respect to knowledge and awareness of the work; the nature of the work itself, right, this sort of higher-level strategizing part of it; the relationship to the work that people have, how much agency they feel they have over it, how much they have to be accountable for it; the actual data about it, the data that needs to get collected; and then our own feelings and the feelings of our clients about that. And so sometimes those processes are secret. The outcome of that is that a process may not be described in a legible way. Legibility is an important kind of idea in our work. Legibility is almost literally readability: the ability to observe something and to make sense of it because it's been presented in a way that's easy to see and easy to wrap your mind around, easy to understand. One of the things that's kind of seductive about automated GUI-level output checking is that it is quite legible on a certain level. You can see stuff going by on the screen. You can see stuff that looks like user interaction. You can see browser behaviors flying by. And that looks impressive. It looks like a superhuman doing things at superhuman speeds. But there's lots of stuff that might be going on in that product that the dazzling nature, the legible nature, of that stuff doesn't illustrate so well. The processes underneath that, the processes that lead to it, or the processes of what we do with the information after the checks run, those are not so legible, not so visible. For instance, when I was asking my student the other day about, well, what actually goes on in your work, she didn't talk at all about the processes that involve investigation of what happens after a check has run red or after a check has crashed. She made a distinction between her work and what the other people in her crew do. She is the automation tester and they are the manual testers. Now, this is a distinction that bugs me no end. And the reason that it bugs me is because it seems to me that neither one of those terms, manual nor automated, does very useful work in describing what we actually do, especially the manual bit. By manual, people often mean, well, let me give you a little visualization of this. By manual, people often mean, and where did that slide go? Oh, there it is! And that is slide number 50, so it seems.
So people talk about manual testing. Now, what do they actually mean by that? For a long time, it seemed to me simply that when people said, I do manual testing, what they meant was: I don't write code to operate the browser. But it bugged me one day, because a tester was saying, I'm worried that I'm gonna get replaced because I'm a manual tester and my company is going whole hog into automated testing. And, first of all, testing can't be automated. All those things that I mentioned earlier on about investigating, critical thinking, risk analysis, interacting with the product, and that sort of stuff. Interaction with the product on a sort of surface-y level, that can be automated, but all of the thinking stuff and all of the investigative stuff and all the analytical stuff that goes into testing, that can't be automated. Those are distinctly human processes. So it bugged me when she said, I'm a manual tester. I said, no, you're not a manual tester. You interact with the product. You get experience with it. You're an experiential tester. And that sort of started turning on a bunch of light bulbs for me. How can we express what testers actually do, so that we recognize how kind of bankrupt the term manual testing really is? What do people actually mean when they say manual testing? Sometimes what they mean is interactive testing. They're interacting with the product directly and they gotta be there. There are some processes that go on within testing work that can indeed be done in an unattended, non-interactive kind of way, right? You give a process to a machine to perform, and it performs that process, and you come back later and you look at the output and you look at the logs and you look at the state of the application and the data afterwards, and you make some decisions about what to do next. Whereupon you're back into interactive work again. Another thing you could say, though, is that you interact experientially with the product. That is to say, your encounter with the product is practically the same as that of a particular user that you had in mind. If somebody were watching you working, they'd have a really hard time figuring out the difference between you, the tester, and some contemplated user, except for one thing, maybe, and that is that when the contemplated user encounters a problem, the user is going to say, damn it!, and try to work around it. Whereas a tester is going to say, whoa, that's interesting, and is going to start investigating. But when we talk about experiential testing, we're talking about testing where we are interacting with the product in a way that reproduces, or that anticipates, the way that somebody affected by it is going to be interacting with it. Sometimes we say user; I wanna open up the fence on that idea and say that an ops person is a user of a product. The person who is the client of somebody who is directly using the product is also a user. We want to have a very expansive notion of that. Now, contrast experiential testing with instrumented testing, where something gets in between the person interacting with the product and the product itself. And automated checking is like that, right? There's a medium, something in between, that is going to alter, or change, or distort, or accelerate, or intensify, or extend, or enable, or disable, or limit, or hyperspeed the interaction with the product. A lot of problems in automated checking are a consequence of the tool running too fast for the product to respond.
The distortion that we're seeing in instrumented testing can be a feature; it can also be a bug. This is not a value-laden statement, that the experience is distorted. It's just changed in some way from the naturalistic experience of the person using the product. When we say manual testing, what we could also be talking about is testing where the choices have not already been made for us, where we're exercising agency over what's going on and our procedures might be open rather than constrained by something. The essence of exploratory work is making choices. And we do that in automation work all the time. We make decisions about how to write and maintain our scripts. Another way we put it is: there's no script that tells you how to write a script, and there's no script that tells you how to investigate a problem that you've found, that your check has revealed. That's an issue. And then finally, sometimes this is on the list and sometimes it's not, but let's put it on the list for today: some interactions with the product that we call manual testing are sort of transactional. We don't learn anything from them in particular. We're just walking through a routine that somebody set down beforehand, and the test doesn't affect the tester. But to us, if it's going to be a test, it has to affect the tester in some way. At the very least, the tester needs to learn something from the experience of performing the test. People who are heavily engaged in developing and maintaining and executing automated checks are doing this kind of stuff all the time. So-called test automation work involves an enormous amount of what people would call manual testing. And the trouble with manual, to us, is that it hides what's really going on. Manual, after all, means using your hands. But the hands are merely an input mechanism. They're not the essence of what's going on. So we just encourage people to drop the idea of manual or automated testing and talk about testing instead, and then talk about how we would apply tools in testing.
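One concrete instance of the instrumented distortion Michael mentions, a tool racing ahead of the product, and the usual way to rein it in: wait on an observable condition rather than reading the output immediately. A minimal Playwright-style TypeScript sketch with hypothetical URL and selectors, not anything from the episode.

```typescript
import { test, expect } from '@playwright/test';

// The instrumented medium can run faster than the product. Hypothetical example:
// clicking "Search" and immediately reading results races the product's own work.
test('results are read only after the product has produced them', async ({ page }) => {
  await page.goto('https://example.test/search');
  await page.fill('#query', 'policy 1980');
  await page.click('button#search');

  // Racy version (distortion as a bug): no human would read this fast.
  //   const rows = await page.locator('.result-row').count();

  // Waiting on an observable condition keeps the check honest about timing.
  await expect(page.locator('.result-row').first()).toBeVisible({ timeout: 10_000 });
  const rows = await page.locator('.result-row').count();
  expect(rows).toBeGreaterThan(0);
});
```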

[00:58:58] Thanks again for your automation awesomeness. Links to everything we covered in this episode are over at testguild.com/a543. And if the show has helped you in any way, why not rate it and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:59:33] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com, where you become part of our elite circle driving innovation in software testing and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real-world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.

[01:00:16] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Test-Guild-News-Show-Automation-DevOps

AI Test Management, AI Prompts for Playwright, Codex and More TGNS158

Posted on 05/19/2025

About This Episode: Have you seen the lates AI Powered Test Management Tool? ...

Showing 81 of 6864 media items Load more Uploading 1 / 1 – Judy-Mosley-TestGuild_AutomationFeature.jpg Attachment Details Judy Mosley TestGuild Automation Feature

Building a Career in QA with Judy Mosley

Posted on 05/18/2025

About This Episode: In today’s episode, host Joe Colantonio sits down with Judy ...

Jacob Leverich TestGuild DevOps Toolchain

Observability at Scale with AI with Jacob Leverich

Posted on 05/14/2025

About this DevOps Toolchain Episode: In this episode of the DevOps Toolchain podcast, ...