Testing Beyond Bugs to Unlock Growth with Gojko Adzic

By Test Guild

About This Episode:

Today, we have a special guest, Gojko Adzic, who introduces his new book “Lizard Optimization.”

Check out BrowserStack Test Management testguild.me/browserstack

Join us as Gojko walks us through the significant value testers can bring to organizations beyond merely reporting bugs and evaluating product readiness. We'll delve into his rich experiences with user behavior at scale and the formidable challenges he's faced while offering practical strategies to unlock remarkable product growth.

In this episode, we discuss Gojko's innovative four-step lizard optimization method and the hidden power of edge cases. Gojko emphasizes testers' fundamental role in product innovation, mainly through experimentation and user behavior analysis.

We'll also touch on the balance between addressing accessibility needs and identifying obstacles in user workflows to enhance usability for everyone.

Listen in as Joe and Gojko explore the insightful intersection of user expectations and software design, leveraging customer support data for product improvements and the transformative potential of monitoring production environments. Whether you're a software tester, developer, or product manager, this episode is packed with actionable insights and thought-provoking perspectives that promise to elevate your approach to software development and user engagement. Don't miss out!

Exclusive Sponsor

[BrowserStack logo]

Are you tired of managing your test cases using multiple tools and spreadsheets? Introducing BrowserStack Test Management, an AI-driven solution that I think will really help your QA teams.

Imagine a world where test case authoring is a breeze, automation coverage is tracked effortlessly, and your entire testing process is streamlined in one modern, intuitive platform. That's the power of BrowserStack Test Management.

With AI-powered features, you can generate test cases based on Jira stories, get smart recommendations, and identify the most relevant tests to run. Plus, their automation-first approach means seamless integration with your existing frameworks and CI/CD tools.

But don't just take our word for it – BrowserStack is a Leader in the Test Management G2 Grid®️ Report for Winter 2024.

Ready to transform your testing process? Learn more about BrowserStack Test Management and support the show by heading over to https://testguild.me/browserstack and see it for yourself.

About Gojko Adzic


Gojko Adzic is a partner at Neuri Consulting LLP. He is one of the 2019 AWS Serverless Heroes, the winner of the 2016 European Software Testing Outstanding Achievement Award, and the 2011 Most Influential Agile Testing Professional Award. Gojko’s book Specification by Example won the Jolt Award for the best book of 2012, and his blog won the UK Agile Award for the best online publication in 2010.

Gojko is a frequent speaker at software development conferences and one of the authors of MindMup and Narakeet.

Connect with Gojko Adzic

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:00] Then testers are selling themselves kind of short or cheap. Testers can provide a lot more value to the organization by figuring out these things, rather than just reporting bugs or evaluating whether the product is ready for a release or providing kind of feedback on that quality, I think.

[00:00:19] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

[00:00:54] Joe Colantonio Hey, if you're ignoring test result outliers or weird random bugs, you're missing out. Here's why. Today, we'll be talking with Gojko Adzic all about his new book, Lizard Optimization, on unlocking growth by engaging long-tail users. You don't want to miss it. Check it out.

[00:01:11] Hey, are you tired of managing your test cases using multiple tools and spreadsheets? Introducing BrowserStack Test Management, an AI-driven solution I think will really help your QA teams. Imagine a world where test case authoring is a breeze, automation coverage is tracked effortlessly, and your entire testing process is streamlined in one modern, intuitive platform. That's the power of BrowserStack Test Management. With their AI-powered features, you can generate test cases based on Jira stories, get smart recommendations, and identify the most relevant tests to run. Plus, their automation-first approach means seamless integration with your existing frameworks and CI/CD tools. So, ready to transform your testing process? Learn more about BrowserStack Test Management and support the show by heading over to testguild.me/browserstack and see the difference for yourself.

[00:02:08] Joe Colantonio Hey, Gojko, welcome back to The Guild.

[00:02:09] Gojko Adzic Hey, Joe, thanks for inviting me again.

[00:02:12] Joe Colantonio Great to have you. Maybe give us a little background on why you wrote about lizard optimization, and then we'll talk about what that even means.

[00:02:18] Gojko Adzic For the last couple of years, I've been building my own software and operating it, and it's been a tremendous joy to do that. But building anything is a struggle, and you need to do good testing, you need to do good product management. As a self-funded founder, I'm the developer, the tester, the support person, everything. At some point, you just start experiencing crazy stuff from people. The product had 9 million active users last year, and at that scale, people do crazy things. Some of these crazy things are just crazy things where you can't really figure out why somebody would ever do something like that. Some of these crazy things are actually really good ideas. Some people struggle using the software because the software is bad: it might pass all the testing and be functionally where you want it to be as a product, but it's not where the users want it to be, and then people do crazy things with it. I wrote the book because I was really inspired by what happened. The product started growing nicely, and then one of my competitors stole my traffic. They got good funding and invested much more in marketing, and the product started slowly dying, although it had looked like it was on its way to success. I was trying lots of things, focusing where I thought I could make the biggest impact, which is the mainstream: the main group of users, the main features, helping people do more of that. And nothing really helped. I was actually going to put the product aside and abandon it. In preparation for that, because I was also doing support and didn't want to disappoint existing customers, I tried to smooth out all the sharp edges, everywhere people were getting stuck, getting hurt, or not experiencing good UX and having to contact me about it. And really surprisingly, doing those things started to unlock a lot of growth. A couple of months later the product started growing like mad, and from November 2021 to November 2022 the usage increased 500 times, which is insane. Seeing something grow 500 times in 12 months is crazy. It means that things that would normally happen once every two years start happening every day.

[00:04:58] Joe Colantonio Wow! That is incredible growth. Can you talk a little more about the hidden power of edge cases that you were seeing with all that traffic?

[00:05:06] Gojko Adzic Say you have a crazy edge case: somebody reports a bug and says, well, I tried to press this button and the thing exploded. You do some analysis, and the developer concludes, well, that's such a crazy edge case, it happens once every three years, it's not worth fixing. Fast forward, and it happens twice a day. Now you need to fix these things, because there are so many more people doing them. And it was just a stream of crazy things that kept happening. For example, the product started as a way to create videos from PowerPoint presentations very easily, and some people kept building blank videos, which made no sense at all. They were paying me to build blank videos. Digging deeper into that, I figured out that I had made text-to-speech conversion so easy that they were building blank videos just to extract the audio track and use it as voice messages, or use it for something else. They didn't really need the video, they needed an audio file. It was less than 1% of usage, but it was one of these crazy things where I couldn't really figure out what was going on. The book is called Lizard Optimization because when you find something like that, it feels like it's not being done by humans; it's a little bit like somebody following some weird lizard logic. You can't really understand it. But if you try to figure out what these people are doing, they don't look crazy anymore. It's not a lunatic just creating blank videos, it's a person who really likes the text-to-speech conversion and just does not need the video, so there's no reason to put them through hoops: creating a blank PowerPoint, removing all the visuals, building a video just to extract the audio track. I made a very simple screen for people to just use the text-to-speech function that was already there, and that's part of what really unlocked so much growth, because at the moment more than 99% of the usage of the product is people creating audio files without video. And that came from asking, what are the crazy people doing?

[00:07:13] Joe Colantonio From my experience as a tester, how do you report something like this without it being shut down with, oh, a real user wouldn't do that, or that's unrealistic?

[00:07:23] Gojko Adzic Discovering something like that is often a struggle, and then when you do discover it, it's an uphill battle with product management, because product management wants to stay true to the vision. At least when I was working as a consultant with companies, you have this very strong vision of a product and you don't want to get sidetracked. And when people actually do something like this, it's often considered a stroke of luck or serendipity. My experience is that it's a genuine product growth strategy. It's not something that happens as a stroke of luck, and it's not something we should fight against. I started researching this and found that lots and lots of other people do something like this, they just don't really think about it as a proper strategy, so it's done haphazardly, and again with a lot of struggle. One of the best examples I found was PayPal. At the start, they wanted to create a mechanism for transferring money between Palm Pilot devices. That was in 1998, 1999. When they built the Palm Pilot app, because the web was emerging, they built a marketing website to promote the app. The idea was that people would use the website to learn about the Palm Pilot app, then download it to their Palm Pilot and use it there. In order for people to experience some value, they built a way to transfer money on the website itself, and a bit later they had a million and a half active users on the website and about 12,000 active users on the Palm Pilot app. Product management was fighting against it all the time. They were trying to shut down the people using the website on eBay and prevent them from using it, until they realized that actually nobody has Palm Pilots, but everybody is using the website, and they restructured the whole thing. Now they're one of the biggest payment processors in the world, because they were able to stop thinking about people using the website as crazy edge cases, stop fighting against them, and embrace that as a way of discovery. If you look at science, a lot of scientific discoveries happen by chance, or are helped by experimentation, by looking at something that's maybe not obvious, something that happens as a fluke. I remember, when I was still teaching people testing, when you do an exploratory test and you're testing for some purpose but you see something else and say, well, that's interesting, that's usually one of the most interesting discoveries you can have. There's lots of stuff hidden behind a side effect that happens to be interesting, and lots of knowledge you can discover there. This whole experience of doing exploratory testing has given me a different perspective on product management. I think lizard optimization is a combination of exploratory testing and product management, and that's what really helped me unlock exponential growth.

[00:10:46] Joe Colantonio A lot of people listening probably know about pivoting, but you actually have a four-step method to help people with this, because like you said, otherwise it seems almost random, you're just looking at outliers. What is the four-step process for lizard optimization? I think the first one is learning how people are misusing a product. How do you do that without going down a rabbit hole?

[00:11:09] Gojko Adzic There are four steps, and to make them easy to remember, the initial letters roughly spell out lizard: learn how people are misusing your product, zero in on one behavior change or whatever you want to improve, remove obstacles to user success, and then double-check that no unintended or second-order effects happened. And there are lots and lots of ways you can discover how people are misusing your product. For example, for ...., I have lots of monitoring around what people are doing with the site, especially when they get into an error condition; that's really interesting. For example, we have a dialog where people can upload different types of documents and the product imports them, and occasionally people upload crazy things. On a text-to-speech conversion screen, people keep uploading Android package files. It makes no sense at all, I can't explain it; my best explanation is that it's one of those hacking techniques where you leave a USB stick outside an office and hope somebody picks it up and plugs it in. But some people upload things that do make sense, we just didn't support them. We had a bunch of people uploading subtitle files from videos, and that's one where, okay, I can see the logic in it. It was a really rare edge case, but I could see the logic: you upload a subtitle file because you want to do text-to-speech conversion on it. And there were enough of these people that we thought, let's implement it, let's support that, let's see what happens. That was a direct result of us capturing these workflow errors. So you upload an unsupported file and you get an error saying this file is not supported, but I also get monitoring and aggregate statistics on what people are uploading, so we can track what crazy things people are doing. So I built this subtitle-to-audio conversion, with timestamps and making sure everything is aligned, and that turned out to be one of the most profitable things ever in the product, because people in large e-learning departments in enterprise organizations do these training videos all the time and need to translate them into different languages. You want a voiceover in a different language, and even with a text-to-speech engine, which speeds up how quickly you generate the audio, you're still left with hours and hours of editing the video to align everything nicely. But if you upload the subtitle file, you get back a synchronized version, and it saves you hours of time. Not a lot of people needed that, but those of our users who needed it used it quite a lot, and a couple of our biggest-spending customers came to the product because we had this subtitle-to-audio conversion that nobody else had at the time. And again, that was a direct result of discovering what the crazy people were doing on the site. I think there's a balance here with user research. It's common sense now in the product community that you have to do a lot of research with your users, because you can't represent your users yourself, but there's a limit to what you can do with research upfront; at some point you need to stop and build the product.
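(To make the idea concrete, here is a minimal sketch of what that kind of workflow-error monitoring might look like. The handleUpload and topUnsupportedFormats functions, the supported-extension list, and the in-memory tally are assumptions for illustration only; the episode does not describe the actual implementation.)

```typescript
// Hypothetical sketch: count unsupported upload types so the unexpected "lizard"
// signals can be reviewed later, rather than just rejecting the file and moving on.
const SUPPORTED_EXTENSIONS = new Set([".pptx", ".docx", ".txt"]);

// In-memory tally; a real product would send these counts to its analytics store.
const unsupportedUploadCounts = new Map<string, number>();

export function handleUpload(fileName: string): { accepted: boolean; message: string } {
  const dot = fileName.lastIndexOf(".");
  const extension = dot >= 0 ? fileName.slice(dot).toLowerCase() : "";
  if (SUPPORTED_EXTENSIONS.has(extension)) {
    return { accepted: true, message: "File queued for conversion." };
  }
  // Record the rejection so unexpected formats (for example .apk, or .srt before it
  // was supported) show up as an aggregate trend, not just a one-off error message.
  unsupportedUploadCounts.set(extension, (unsupportedUploadCounts.get(extension) ?? 0) + 1);
  return { accepted: false, message: `Files of type ${extension || "unknown"} are not supported.` };
}

// Periodic review: which unsupported formats do users keep trying to upload?
export function topUnsupportedFormats(limit = 5): Array<[string, number]> {
  return [...unsupportedUploadCounts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit);
}
```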
And whether you involved the right people in the research, whether the audience changes, whether the audience's preferences change over time, becomes a really interesting question. When I originally built the product, nobody really needed subtitle importing. But then it turned out that having a text-to-speech function that was simpler than the video function meant people tried to do lots of these text-to-speech things from weird formats, and somehow we arrived at this subtitle import. If I had just focused on the initial research, we would have completely missed it, because we weren't researching that; we were researching a different audience. This kind of optimization, where we look at the unexpected things people are doing with our products, how people are hacking the product, how people are misusing the product, can complement that initial research. It can point to our blind spots, to changing preferences, or to changing audiences. Your product audience is one thing this year; then you have a lot of growth, and it's a different audience, their preferences are different, their needs are different, and technology moves very quickly. The research you did two years ago might no longer be valid, and looking at what unexpected things happen with your software, or with usage of the software, is a really interesting complement. There was a lovely video I found while researching this topic, from Rachel Neumann in Australia, who was at Eventbrite, I think, about bridging the gap between product management and product support. Her idea was that you have this product support or customer success function where people contact them, report problems, complain about bugs and all this other stuff, and that usually goes to testers, the testers do some testing, then it goes to developers to fix, and so on. And she pointed out that these people are speaking to hundreds of thousands of your users every day, or every month, or every year, depending on your product's volume. It's such a rich data set that needs to be mined for product ideas, and lots of organizations have this big disconnect between product management and support, which I think is very weird, because you can get all these ideas to complement your research effectively for free. Not every customer call about a bug is a bug. Not every bug is important. And not every unimportant bug is that unimportant; maybe it's a hidden gem that you need to know about. Bridging this gap is easy for me, because I do everything in this product, but larger organizations really should look at this disconnect. I remember, maybe even 10 years ago, we were working as consultants with an insurance company here in England to help them rebuild their testing function, because lots of testers had left very quickly and they were left without internal testing knowledge. They tried to hire new people to take over and wanted to make sure everything got started quickly. My colleague helped them hire a head of testing, and this lady turned out to be such a wonderful cross-functional expert in everything that she ended up becoming the head of product, because she learned so much about the whole product by approaching it from a testing perspective, and she had this mentality that you need to evaluate everything and figure out how things work.
Most of your audience on this podcast are going to be testers, and I think if you look at something like that, testers are selling themselves short, or cheap. Testers can provide a lot more value to an organization by figuring out these things, rather than just reporting bugs, or evaluating whether the product is ready for a release, or providing feedback on quality. I'm thinking about it like this: when there's a problem with the product (as a consultant, I like to think in quadrants), it might be that the product is doing something the user expected or didn't expect, and it might be that the product is doing something that product management expected or didn't expect. Did we want the product to do this, yes or no? Did the users expect it to happen, yes or no? If the product does something that you didn't expect it to do, as a tester or developer or product owner, and the users didn't expect it either, that's probably a bug. That's a straight problem. But then you have these cases where the product does something you wanted it to do, but the users expected something else. Is that a bug? Is that a UX issue? Is that unexplored potential for growth? Is that very interesting feedback telling us we don't know something about the audience, or that we've missed something? And then you have these strokes of luck where the product does something the users expect, but it's not what you wanted. Sometimes that's a security issue, sometimes people have figured out how to exploit you, and sometimes it's again an interesting area of growth, because it might be that you think it's a bug, but it's actually not a bug; that's how people expect it to work. So there's a lot more here than just "is it a bug or is it not a bug." I found a book called Mismatch by Kat Holmes while researching this; it's a book about accessibility in software, and it has this concept of a mismatch, which is a much more neutral word. It's not a bug, it's not a problem, it's not an issue, it's a mismatch between what the product does and what the users want it to do or expect it to do, or what the users' capabilities are. That mismatch can come from lots of different sources. You can have a mismatch with somebody's cognitive abilities: you give them too much text and they can't read it quickly enough. Or it might be a UX mismatch where the button is in the wrong place. Thinking in terms of mismatches is really, really interesting, because then you can figure out whether you want to address it, whether it's worth addressing, whether it's something we should do differently, and so on. And one thing I really love about the whole mismatch concept is one of the principles of inclusive design from Microsoft, where the author worked: "solve for one, extend to many." When you solve a problem like that, you don't solve it just for that particular group; you try to improve the product so it generally improves for everybody.
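(The quadrant Gojko describes above can be read as a simple two-by-two classification. The sketch below is only an illustration of that idea; the type, field, and category names are invented, not taken from the book.)

```typescript
// A minimal sketch of the quadrant: classify an observation by whether product
// management intended the behavior and whether the user expected it.
type Observation = {
  intendedByProduct: boolean; // did we want the product to do this?
  expectedByUser: boolean;    // did the user expect it to happen?
};

type Classification =
  | "likely bug"                       // nobody wanted or expected it
  | "mismatch: UX or growth signal"    // we wanted it, users expected something else
  | "surprise: exploit or opportunity" // users expect it, we never intended it
  | "working as intended";

function classify(o: Observation): Classification {
  if (!o.intendedByProduct && !o.expectedByUser) return "likely bug";
  if (o.intendedByProduct && !o.expectedByUser) return "mismatch: UX or growth signal";
  if (!o.intendedByProduct && o.expectedByUser) return "surprise: exploit or opportunity";
  return "working as intended";
}

// Example: the product intentionally builds videos, but some users only wanted the
// audio track. A mismatch worth investigating, not a bug.
console.log(classify({ intendedByProduct: true, expectedByUser: false }));
```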
Thinking back, some of the best bugs I found working on other products, and some of the most interesting experiences improving them, came exactly from that: you find a weird thing, a weird bug, but then you don't fix just that bug, you fix a whole class of different things. You improve it in a way that makes the product generally better, and I think that compounds growth, which is really, really interesting. The book has some statistics on this. It's focused a lot on accessibility and people's capabilities, but there are lots of parallels we can draw. It talks about three types of disabilities that cause three types of mismatches. One is permanent: I think they said there are about 16,000 people in the U.S. with a permanent disability of one of their arms, because the arm was amputated or because of some kind of disorder, so you suddenly have somebody who cannot use your software with two hands. That's not something people often test: can you use your software with one hand? And the book talks about how, if you just look at that population, it's often not economically justifiable to fix; it's one of those things where you say, well, that's not the use case we want to cover. But "solve for one, extend to many" means you then look at two more types of mismatches. There are temporary ones: not somebody who has permanently lost an arm, but a mother holding a baby who doesn't have a free arm, or somebody carrying a bag from a shop who can only use your software with one hand. And then there are situational ones, where somebody's profession prevents them from using both arms because they have to hold something. In total, the book says there are something like 30 million of these people in the U.S. at the moment. So by optimizing the software to be usable with one hand, you're not making it better for 16,000 people, you're making it better for 30 million people, and that's a totally different economic proposition. Drawing the parallel with lizard optimization: if you find somebody doing something crazy, like trying to upload subtitle files or trying to create blank videos, and you make the software better for those people, you don't just make it better for them, you make it better for lots of other use cases. That's why I think this is a wonderful way of thinking about product development experiments: figure out how people are misusing the software, and then say, out of all these misuses we caught (and there's going to be lots and lots of noise, you need to find the signal in the noise, you can't fix everything), this is the one worth pursuing from a product management perspective, and let's design some experiments around it. That becomes a really interesting aspect of the whole process: we collect lots of noise, and from the noise we need to draw some signals. And as my thinking evolved around this, I realized that when you have a mismatch like that, people often think, oh, I don't want to build more features, we don't want to invest in this, because it's adding more complexity to the software.
But if you think about it, it's not adding complexity to the software, it's removing obstacles that the product is unintentionally putting in front of users. My product was forcing people to create a blank PowerPoint, upload it, and then download an MPEG-4 file just to extract the audio. Those are obstacles in the users' workflow; they're not additional features, that's something completely different. Removing obstacles to users' success is really the key, because then we can look at what user success looks like and how we make them more successful. And this is where growth really comes from, because when you get people to be more successful, they stay longer with the product, they bring their friends, they recommend it. The last step is double-checking whether we've caused any additional unintended impact. This is really, really important, because the whole premise is that people are doing something we don't understand or didn't expect. That means our research, our internal knowledge, our data, our mental models about our users are wrong, and when we try to do something else, they might still be wrong; the users might be some other kind of crazy. One really good example: I was trying to optimize payment success rates, especially for people in Europe. Companies in Europe have a bit more bureaucracy than in the U.S., generally; they need their tax numbers on the invoices, they need a proper invoice, things like that. I'm using an American payment processor, and they don't really understand Europe. They have a field where you can put in a tax ID, but they don't validate it correctly, so lots of people were dropping off because they couldn't input the number correctly, and a non-trivial number of people were selecting Russia as their country just because the payment processor doesn't validate Russian tax numbers. Then I would have to go and say, well, this is a French company, you selected Russia as your tax country, this is insane. So I thought, what about just removing the tax number field there and collecting it later, in a step on a web page that I control? You go through the payment process, you can put in the tax number later, and that generates the invoice. That way I was able to configure the payment processor so it didn't prevent these people from succeeding. But the result was actually that fewer people were paying, because they were expecting the tax field, and it meant a lot of them had to send me an email asking where to put the tax number after I removed it. It was something where I thought it was a good idea, and actually it wasn't. So double-checking, measuring, and proving what the effects are, and whether there are second-order unintended effects, is also really important, because it helps us improve the loop and get insights about what we want to do next.
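(The double-check step is essentially a guardrail comparison: measure the metric you care about before and after the change and flag regressions. The guardrailRegressed helper, the threshold, and the numbers below are illustrative assumptions, not details from Gojko's actual setup.)

```typescript
// Hypothetical sketch of the "double-check" step: compare a guardrail metric
// (here, paid-conversion rate) before and after an experiment, and flag regressions.
type Period = { visitors: number; conversions: number };

function conversionRate(p: Period): number {
  return p.visitors === 0 ? 0 : p.conversions / p.visitors;
}

// Returns true if the change should be revisited because the guardrail dropped
// by more than the allowed relative margin (5% here, an arbitrary choice).
export function guardrailRegressed(before: Period, after: Period, maxRelativeDrop = 0.05): boolean {
  const baseline = conversionRate(before);
  if (baseline === 0) return false;
  const drop = (baseline - conversionRate(after)) / baseline;
  return drop > maxRelativeDrop;
}

// Example: removing the tax-ID field looked like a win, but conversion dropped.
const before = { visitors: 2000, conversions: 120 }; // 6.0%
const after = { visitors: 2000, conversions: 96 };   // 4.8%
console.log(guardrailRegressed(before, after));      // true -> investigate or roll back
```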

[00:29:24] Joe Colantonio Nice. I love the four-step process, and I love the idea of a mismatch review process with lizard optimization. That's awesome stuff. But take the people selecting the Russia field: how do you avoid the default thinking of, oh, our users are dumb, and glossing over it, rather than saying this is probably actually an issue?

[00:29:45] Gojko Adzic Well, it's very, very easy to say users are dumb, especially if you're in an industry where you need a decent IQ to do the work you do. It's very easy to say users are dumb, users are stupid. But one of the things I really want to do with the book is help people understand that people are not dumb; they might just be following a different logic. That's why I like the term lizard: it's not human logic, it's lizard logic. And if you think about something as not being done by humans but by lizards, then let's go and understand what the lizards are doing and what their logic is. Part of that is accepting that we can't represent our users. One of the biggest mistakes product managers make is thinking that they can speak for the user. We can do lots of research, we can do monitoring, we can do lots of interesting stuff, but we do not speak for the user; the user speaks for the user, and figuring out what the users do, or want to do, is a really important part of our job. If the users are stupid, if the users are dumb, then it's a mismatch between the users' cognitive ability and the difficulty of the user interface; so maybe the user interface should be simpler if the users really are dumb. But maybe the users are not dumb; maybe the users are just doing something different. One really interesting example: one of my products is a mind mapping tool. We originally built MindMup expecting that professionals would use it; we built it from scratch ...., and the idea was that people would create mind maps to brainstorm during testing exploration, during product management, and things like that. But we built the Google Drive integration at a very interesting time, when Google was promoting their Chromebooks and their services in schools, and they were giving hardware away, I think for free, to get people hooked on their services in schools. All of a sudden, the usage in schools started to increase quite a lot. The software was designed for adults, for professional users who are generally comfortable using a keyboard, and all of a sudden we started getting all these weird metrics that would suggest the users are dumb. But the users were not dumb, the users were kids. That's a totally different use case: they do want to put in images rather than words. So as the usage in schools started to grow, we removed a lot of the complexity from the software. We made the software significantly simpler; we did a major redesign to make it not that smart anymore. And as a result, the product grew significantly. I think that made it a better product, because if you make something easy for kids to use, then adults are going to use it easily as well. Sometimes you do have people who are just plain dumb, of course, but I think in the large majority of cases that's a cop-out. It's basically saying our UX is bad, this feature is too complex to use, we've messed up; it's just a lot more comfortable to say the users are dumb than to say we've messed up.

[00:33:23] Joe Colantonio I love this book. I know it was written mainly for software product managers, but as you've mentioned multiple times, I think testers play a significant role in implementing this to help expand usage of the product and its features, especially at the beginning of planning for a sprint: hey, here's an idea we found in production, let's run an experiment. Is that something you see a tester being involved in as well? Or marking something not as a bug but, like you said, as a mismatch, so at the end you have a database of all these mismatches you can go through for commonalities, things that aren't really bugs, but that could help make the product more usable or give your customers better features?

[00:34:04] Gojko Adzic The unfortunate part of the industry is that testers are kind of at the bottom of the food chain, and I always thought that was crazy, because people with critical thinking skills, with the skills to evaluate a product from lots of different perspectives, can put on a different hat. Michael Bolton talks about a builder's hat and a critic's hat, and if you can put on a critic's hat and evaluate or criticize the product like a food critic or a movie critic, then you have really interesting skills that can be used to drive a lot of product innovation. Should testers be doing this? Absolutely. Whether organizations are going to let them do it is a totally different question, and that's part of what people have to fight for in their role. I came into testing from the development side of things. I came into testing because I was running a small team, and when we built stuff and it went out, it wasn't supposed to come back. I had to learn how to test the software, not just build it, because it was my money if it came back and we had to fix it. So for me it was slightly easier to argue from lots of different perspectives. But I think it is possible for testers to assume more responsibility and get more responsibility: in particular, as you said, running experiments, figuring out how to evaluate things, taking responsibility for that. That's really interesting. More and more, I think we will see product experimentation as an important part of development. I quite like how that area is growing, and there are lots of interesting voices in that community becoming more and more popular. Product managers are learning that there's a need to do this, and testers can provide quite an interesting, valuable service, very complementary to their skills, if they learn how to do product experimentation.

[00:36:13] Joe Colantonio Absolutely. Something else I love about the book: AI is all the rage, but this book is a very human-in-the-loop type of thing, where I think only a human can evaluate what is a mismatch and what's not. Do you agree with that? Or do you think maybe AI could step in, find these anomalies, and flag them as potential experiments?

[00:36:31] Gojko Adzic Oh, that's interesting. From my perspective, AI so far has mostly generated support work for me. I regularly have people who try to use the API in a way that doesn't even compile or doesn't work with my API, and when I ask them, have you even read the documentation, they say, no, I tried to use ChatGPT to generate an API call. And then it uses a different type of authentication that's not supported, it's calling non-existent URLs, and things like that. I think lots of people who are inherently lazy are using ChatGPT to be even lazier. I've not really seen a big benefit from these generative AI things, but at the same time, my product is based on text-to-speech, which is AI, not generative AI but old-school AI. I think figuring out the signal in the noise is a big challenge. It's a big challenge for me, especially when I look at all the data I'm collecting, because when you start collecting data about how people are misusing your products and you have millions of people using your product, there's going to be a lot of weird misuse. We're talking about things that are very, very long-tail in the usage curve. That probably is a good use case for AI: to figure out the signal in that noise and propose some trends or patterns that you might not obviously see. For me, the challenge is always keeping up with everything that is going on, and something like that might be a good use case for AI. The other stuff is probably more human, at least for now.

[00:38:24] Joe Colantonio Absolutely. Okay, Gojko, before we go, is there one piece of actionable advice you can give to someone to help them get started with lizard optimization? And what's the best way to find or contact you and get our hands on your new book?

[00:38:36] Gojko Adzic The best way to get your hands on the book is to wait until the 1st of September, when the book comes out, and then get it on Amazon or at a bookstore near you. The lizardoptimization.org website is going live tonight in preparation for that, so that's going to be a good way to find it. As for one piece of actionable advice: if you currently don't have monitoring in your software to tell you when people are getting stuck or doing something unexpected, put that kind of monitoring in. You will be surprised how many interesting ideas come as a result. That, I think, is the biggest lesson for me: when I started putting in analytics to tell me where people are getting stuck and where people are hitting problems, all sorts of weird ideas started materializing, and I realized how much I had missed in research. Lots of companies that do have product analytics are mostly looking at things they expect to happen: how many payments do we have, the user funnels we expect to be there, how people are using this feature or that feature. Putting in analytics around what's happening that we don't really expect, and where our blind spots are, is really, really interesting. There are some interesting ideas in that market segment from all the observability folks, and that's again some overlap where testers can help: figuring out, from testing in production, from observing production, what we missed in the original product. That was my initial approach from a testing perspective: if you monitor the software in production, you can look at things you've missed during testing before the release, the problems that slipped through. But sometimes those are problems in the sense that the product is not doing what it was supposed to do, and sometimes the problem is that the product is not doing what the users expected it to do. You wouldn't catch that with testing at all; you can only catch it by monitoring production. The observability folks have developed lots of interesting tools to catch technical exceptions, but these are user exceptions, business exceptions, workflow exceptions, and capturing and analyzing those is, I think, a really good first actionable step for people to take.
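(One possible shape for that kind of monitoring: record "user exceptions" as structured events, separate from technical exceptions, so they can be aggregated and mined later. The event shape and the reportWorkflowException helper below are invented for this sketch; no specific tooling is named in the episode.)

```typescript
// Hypothetical sketch: treat "the user got stuck" as a first-class event,
// distinct from crashes, so it can be aggregated and reviewed for product ideas.
type WorkflowEvent = {
  kind: "workflow_exception";       // user/business exception, not a technical one
  action: string;                   // what the user was trying to do
  reason: string;                   // why the product blocked or surprised them
  details?: Record<string, string>; // optional context for later analysis
  timestamp: string;
};

// Stand-in for a real analytics or observability pipeline.
function emit(event: WorkflowEvent): void {
  console.log(JSON.stringify(event));
}

export function reportWorkflowException(
  action: string,
  reason: string,
  details?: Record<string, string>
): void {
  emit({
    kind: "workflow_exception",
    action,
    reason,
    details,
    timestamp: new Date().toISOString(),
  });
}

// Example: the upload dialog rejects a file type the product does not support.
reportWorkflowException("upload_document", "unsupported_file_type", { extension: ".apk" });
```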

[00:41:10] Thanks again for your automation awesomeness. For links to everything we covered in this episode, head on over to testguild.com/a509. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:41:45] Thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com, where you become part of our elite circle driving innovation in software testing and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real-world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.

[00:42:29] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.
