
Periodic Automation: Finding Intermittent Issues [PODCAST]

By Test Guild

Welcome to Episode 90 of TestTalks. In this episode, we'll discuss how to hunt down elusive, intermittent issues that can cause your automation tests to randomly fail due to actual product defects.



Intermittent issues are an automation killer. You know the kind of tests I’m talking about…the ones that pass most of the time, but fail every so often. You rerun them, and they pass. But you wonder each time: is it just a flaky test, or is there really a defect in my application that’s causing the occasional failure? 


In this episode, Paul Grizzaffi shares his approach to uncovering race conditions in your product: situations in which it's valid for events to arrive in a non-deterministic order, but the software doesn't account for every ordering. You'll also discover how to socially engineer your test automation efforts with empathy and conversations in order to build relationships that will help your automation thrive.

In this episode, you'll discover:

  • What periodic automation is
  • Why error messages, logs, and results need to be delivered in a format and vocabulary that the people using them are accustomed to
  • Why empathy and building relationships are just as important as your automation framework
  • Tips to improve your automation efforts
  • Questions to ask yourself, like “How can I provide some value using technology?”, to help build real quality into your products
  • Much, much more!

Join the Conversation

My favorite part of doing these podcasts is participating in the conversations they provoke. Each week, I pull out one question that I like to get your thoughts on.

This week, it is this:

Question: What techniques do you use to troubleshoot intermittent issues? Share your answer in the comments below.

Want to Test Talk?

If you have a question, comment, thought, or concern, you can let us know by clicking here. I'd love to hear from you.

How to Get Promoted on the Show and Increase your Karma

Subscribe to the show in iTunes and give us a rating and review. Make sure you put your real name and website in the text of the review itself. We will definitely mention you on this show.

We are also on Stitcher.com, so if you prefer Stitcher, please subscribe there.


Read the Full Transcript

Joe:         Hey, Paul. Welcome to Test Talks.

Paul:        Hey. Thanks, Joe. Thanks for having me.

Joe:         It’s great to have you on the show today, but before we get into it, could you just tell us a little bit more about yourself?

Paul:        Sure. I am the Program Architect and Manager of Automation at MedAssets. MedAssets is actually being acquired right now, so we’ll have … probably have a new company name here pretty soon, but I spent my entire career on automation and automation-related things largely by accident right out of college. I was recruited by the company that became Nortel to join a test automation team. Over time, I just found that I had a knack for it, and I really love doing it. Yeah. That’s really basically it.

Joe:         Awesome. As you mentioned, you have a lot of experience with test automation, so I just want to touch on a few topics with test automation and see where it leads us.

Paul:        Absolutely. It sounds good.

Joe:         Awesome. First, I know that you’re presenting at STPCon this year.

Paul:        Yes. Yes, I am.

Joe:         I think I noticed one of your sessions is called “Using Periodic Automation.”

Paul:        Yes.

Joe:         What do you mean by periodic automation I guess?

Paul:        Periodic automation in the context of what we’re talking about here is typically in a continuous integration environment. The basic steps are developer writes code, developer checks in code, code is compiled. Perhaps, there are some unit checks that run. Perhaps, there are some automated functional test scripts that execute. Then, you get a, “Yes, this check-in was good,” or, “No, this check-in had some problems. Go solve it.”

It’s good at catching certain classes of errors and issues, but other types of issues are more timing related: race conditions and other intermittent issues that are a little harder to catch that way. But if you rerun your automation, or a subset of your automation, or perhaps even different types of automation, at different periods, not necessarily on an event boundary, you have the opportunity to catch these types of issues a little more often.
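To make the idea concrete, here is a minimal sketch of a periodic runner in Java (Joe mentions a Java/JUnit framework later in the episode). This is an illustration, not Paul's actual setup: the smoke-suite kickoff is a hypothetical placeholder for however your shop launches a suite.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicRunner {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run on a time period rather than an event boundary (like a check-in):
        // kick off the suite immediately, then again every 2 hours, around the clock.
        scheduler.scheduleAtFixedRate(PeriodicRunner::runSmokeSuite, 0, 2, TimeUnit.HOURS);
    }

    private static void runSmokeSuite() {
        // Hypothetical placeholder: launch your real suite here
        // (a JUnit launcher, a CI server's REST API, a shell command, etc.).
        System.out.println("Smoke suite kicked off at " + java.time.Instant.now());
    }
}
```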

Joe:         It’s so interesting. I’m thinking of an example. I know sometimes on Wednesdays, like at 9:00, our company runs some sort of extra scan that will knock out our automation. So those are the types of events you’re trying to figure out? Some sort of anomalies with your traffic during a given day? How does that work?

Paul:        That’s definitely one type of these intermittent issues that I do touch on, sort of your environment issues or your configuration-type issues. The ones I was really focusing on, and what’s at the core of the presentation, is a race condition actually in the product, where you’ve got a situation in which it is valid for events to come in a non-deterministic order, but perhaps the software itself does not account for all of those orderings. Once in a while, you see this error, this issue, that something didn’t behave exactly the way you expected, and you say, “Hey, did I just see that?” You go back, and you try it again, and it works the way you expect, so you say, “Well, maybe I did something wrong. Let me write that down, and I’ll let somebody know if I see it again.”

Maybe you don’t see it again, but if you’re running your automation frequently, more frequently than a human is just going to be able to do it based on the fact that we humans are fallible, right? We make mistakes. We need to eat. We need to sleep. We have time with our families and such. Automation doesn’t have those considerations. It can run on a schedule around the clock provided you’ve got the bandwidth to look at the results and the bandwidth to actually do the executions. If you’re looking more often, you’re more likely to find these types of issues.

Joe:         Cool. I guess my next question is this. Maybe I’m being a little cynical here, but I have a hard time getting people to look at real issues with automation in the CI environment. How do we get them to look at what appears to be an almost flaky test, one that seems to pass consistently but then fails every so often? How do we know that, “Hey, this is a real issue,” and that the team really needs to spend time figuring out why that occurred?

Paul:        The way I had success with this, and the story I’ll be telling at STPCon, is something that happened at my previous job at GameStop: we did go through all of that. We went through the, “Oh, that’s an automation problem.” Eventually, somebody not running automation saw the problem, and it became, “Oh, that’s just a problem in QA.” “Oh, wait, wait, wait. One of the devs saw it.” “Oh, that’s just a problem with IE9. We don’t have that with IE8.” “Oh, wait, so and so.” Slowly, this onion starts getting unraveled.

As soon as you have that first win or the periodic automation said, “I see a problem that the humans are not seeing because I’m looking for it more often,” to personify the automation here, once you have that first win, people start to take notice. They start to give it a little more credence. The first one is hard, and it really is.

Like you said, it’s difficult to get people to look at these things, so having a good relationship between the test organization and the development organization is actually key to being able to make this a success. Without that, you will start to devolve into what I call “failure fatigue,” where you get desensitized to the failures, and you start to ignore things and make assumptions: “Oh, that thing came back red again? That happens once in a while. Ignore it. It’s probably okay.” Then, you start to miss other problems that are masquerading as the previous problem that you became desensitized to.

Joe:         It’s a great point, and I’m just thinking of my experience with performance testing back in the day, when we’d run a test and then take something like the 90th percentile of the response time. But then, you have these outliers, and sometimes we’d ignore those outliers when a request would just randomly take longer than normal. I guess what you’re saying is it probably makes sense to look at those outliers, because they may very well be a real issue that you’re just ignoring.

Paul:        Yes, because you really do need to look at all of your results. I call it “auditing your results.” When you get results back from any sort of tool, your automation tool for example, you’re going to get potentially some failures, some reds, some “Hey, I have a problem” results that you need to go and look at. Obviously, you want to focus on those, because those are the things the tool is flagging as anomalies, things you perhaps were not expecting. But you do need to audit and look at your successes, your passes, as well, because the passes can show you the delta between what happened when it worked and what happened when the tool said it did not work.

Also, automation gets stale. Things that were valuable before are no longer valuable; perhaps we should cull those scripts. There are also features that have expanded beyond what a script does, and if we didn’t go back and expand the script’s coverage, we’re now assuming that the script is doing something it is not, and we don’t have the coverage we expected to have. We can get problems that escape us that way as well. Looking at the results as a whole for trends is very interesting, but looking at and auditing individual things along the way is very valuable as well.

Joe:         Cool. I guess that leads me to my next question. The only metric I’ve ever heard that made sense for automation is “mean time to diagnose,” which I heard from Alan Page: the amount of time it takes you to figure out what the issue is in your environment. Do you have any tips for where to look and how to find issues within automation scripts when there is an error like this? Do we look at log files? What’s your approach to that?

Paul:        The main focus I have, and it’s the focus I’ve had with almost all of the initiatives I’ve started since I became a leader for companies doing the automation, not just one of the team members following another leader, is the error messages, the logs, the results. Those all need to be delivered in a format and a vocabulary that the people consuming those logs, messages, and errors are used to. Something like “Log-in link was not there when you tried to click it” is very helpful, especially when it’s associated with a screenshot. “HTML element with ID 74618 not available” is not as helpful.

There’s a lot more digging, and potentially involving other people, to figure out what that particular error meant. When you start needing multiple people to look at something just to decipher the first step in the archeology of figuring out a problem, the failure fatigue starts to set in earlier, because humans’ tolerance for that is very low, right? It’s tedious. It’s “extra effort.” Putting the effort in up front to make these messages more digestible by the appropriate audience is always going to help you debug these things, because you’re looking at them with a vocabulary and a jargon that you’re used to using.
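As one concrete illustration of delivering errors in the consumer’s vocabulary, here is a small sketch in Java using Selenium WebDriver. Selenium isn’t named in the episode, so treat the API choice, the wrapper class, and the “login-link” locator as assumptions for illustration: a helper that turns a raw locator failure into the kind of message and screenshot Paul describes.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

import java.io.File;

public class FriendlyActions {
    private final WebDriver driver;

    public FriendlyActions(WebDriver driver) {
        this.driver = driver;
    }

    /** Clicks an element, translating raw locator failures into tester-friendly messages. */
    public void click(By locator, String humanName) {
        try {
            driver.findElement(locator).click();
        } catch (NoSuchElementException e) {
            // Capture a screenshot so the reader can see what the page looked like.
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            throw new AssertionError(humanName + " was not there when we tried to click it"
                    + " (screenshot saved to " + shot.getAbsolutePath() + ")", e);
        }
    }
}
```

A script would then call something like `actions.click(By.id("login-link"), "Log-in link")`, so a failure reads “Log-in link was not there when we tried to click it” instead of “HTML element not available.”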

Joe:         Another great point. This could be as simple, I think, as … We use Java in our framework, and we use JUnit, so a lot of times, when you go to JUnit, there are all these assertions, and you can overload them. At the minimum, you can just call a bare assertTrue, and a lot of times, we have teams that just do that. But then, when you look at the logs, you just see assertTrue, right? There’s another overload of the same assertTrue where you can pass in a message like, “Patient name Joe wasn’t available,” right? Just passing that extra message would make debugging that much easier, so I think little things like that help. I don’t know if you find that in your own experience.
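For reference, here is the side-by-side Joe is describing, as a minimal JUnit 4 sketch. The test class, the searchResultsContain helper, and the patient name are hypothetical illustrations, not the show’s actual code.

```java
import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class PatientSearchTest {

    @Test
    public void patientAppearsInSearchResults() {
        boolean found = searchResultsContain("Joe"); // hypothetical helper for illustration

        // Bare form: a failure log shows only "AssertionError", with no context.
        // assertTrue(found);

        // Message overload: the failure log now explains itself.
        assertTrue("Patient name 'Joe' wasn't available in the search results", found);
    }

    private boolean searchResultsContain(String name) {
        return true; // stand-in for a real lookup against the application under test
    }
}
```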

Paul:        That’s exactly the kind of thing that I’m talking about, and it will be different based on the language you use, the tool you use, and who your audience is. If your audience understands code 748 because that’s something meaningful to them, by all means, deliver the information that way. Just know that when you bring someone else in, you’re going to have to teach that particular vocabulary, but that’s part of doing the job anyway, right? Nobody goes into a job and has to learn zero things to be productive at that job.

There’s always a learning curve for something, but I’ve always liked to take the philosophy of, “I don’t want the tool to be your job. I want the tool to help you do your job.” The more I can make it such that the tool, the framework, and all the ancillary pieces like the error messages are conducive to helping you be more effective and more efficient, the better. I’m going to bake those things into the framework, and I’m going to coach you: “Here’s a better way to write your assertions,” where you do have the “Here’s what I was looking for; here was the actual value,” as opposed to a true-false binary type of thing.

It’s a little more typing, but in the end, it winds up being far less work, because really … and you know this. You’ve been involved in automation for years. The cost of automation is not in the creation, right? It’s in the maintenance over time, the care and feeding of the initiative, and of the framework, and of the scripts. That’s where you really are going to spend your money, so you want to take an approach that minimizes what I call the “total cost of ownership” of your automation initiative.

Joe:         Awesome. Great stuff. Now, Paul, I just want to expand on that a little bit. As you mentioned, you’ve been involved in automation, but you’ve also been at a leadership level, more as an architect, high level, moving people, coaching them. My question is: how do you move an organization towards these best practices for automation, knowing that it is an effort? It’s not a one-and-done thing when you write a script; it actually is an effort, like you said, and there is refactoring that has to be done. Just like any software project, automation is the same way. How do you coach teams, or do you have any tips and [weekend 00:12:10] approach on managers to enlighten them with this type of knowledge?

Paul:        Some of it really is social engineering. I go in, and I really try to empathize with, “What is your plight?” I don’t go in and say, “How many test cases do you have in your regression suite that you want to automate?” I really talk about it and say, “Where are your pain points? What sucks about what you do? What are you having to do 7 times a week, that takes an hour each time, that maybe we could point some technology at and better that particular part of your laden life?”

I like to look at things like that because it builds that relationship, and you start building that trust. It also comes from a different place: instead of focusing on this fairly nebulous and pyrrhic goal of 100% automation of my regression, it focuses on, “Let’s solve the problem with technology.” Those problems also tend to be a bit smaller and a bit more digestible, so as long as you have a leadership culture where they’re not counting success by the number of test cases automated, and they’re counting productivity and value provided, you’re going to make small wins this way. Then say, “Let me help you create all of those user IDs that you’re doing by hand right now. Let’s write a little script in Perl, Python, Ruby, you name it, that generates these things for you and gives you back an hour a day.”

That’s a big gain, not only to that person, but to that person’s management. When you do stuff like that, it tends to be pretty cheap. Pretty cheap, pretty quick, delivered quickly, adds value, and it’s what we jokingly call the “drug dealer approach.” The first one is free. The next one is going to cost you. We start building this repertoire and repository of little tools that we can then tinker with together, or we can say, “Hey, look. We’ve shown that these approaches work. Now, let’s take a bigger bite of this elephant, right? We’re going to eat the elephant in multiple bites, but let’s take a big bite out of this effort, this elephant that you’re facing with …” whatever the particular problem is in that organization.
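Paul mentions Perl, Python, or Ruby for the user-ID generator; since this episode’s other examples lean on Java, here is the same idea as a tiny Java sketch. The “qa-” prefix and the ID format are made up for illustration.

```java
import java.util.UUID;

public class UserIdGenerator {
    public static void main(String[] args) {
        int count = args.length > 0 ? Integer.parseInt(args[0]) : 10;
        for (int i = 0; i < count; i++) {
            // Prints IDs like "qa-3f2a1b": unique enough for a test environment,
            // and much faster than creating them by hand every day.
            System.out.println("qa-" + UUID.randomUUID().toString().substring(0, 6));
        }
    }
}
```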

Usually, it falls around, “I have a thousand things to do during a test period, and I get to about 200 of them,” so we start looking at things to say, “Let’s increase that coverage. Let’s help you get more of the job done,” and people buy into that. They want to help you. You get this camaraderie, what I call “friendlies.” You get a bunch of friendlies that are willing to help you get them over that hump.

Joe:         I guess I have a few things that I want to pull out and expand on a little bit more. The first one is I actually know of a company and a project that is very metric-driven, and it’s metric-driven from the top down, with really bizarre metrics like, “You shall have 80% automation,” but they ignore things like testability. When they hear automation or testing, they think, “We need to automate this testing.” They never think about, “Is this application even automatable, or is it even testable?” because no one ever thought about automation or testing as they were developing it. So when we get it, it’s hard to automate, because it doesn’t have, like you said, unique IDs, unique fields.

Have you experienced this, and how would you coach someone to get around it? Is it the same thing you mentioned earlier: just helping, showing people, and using that approach rather than hitting them over the head, demonstrating the benefit and showing them why making something automatable or testable would benefit the overall result?

Paul:        That is my general approach, but that doesn’t work for everyone every time, to be really extreme and a little on the cynical side there. There are some people you just can’t reach, like they say in Cool Hand Luke. There are some people you just can’t reach, and perhaps it’s a bad fit for you. Perhaps you’re a bad fit for them. I have left and changed organizations because we were bad fits for each other. Not that they were doing anything wrong, but the approaches they wanted weren’t things that I believed in, so I felt I needed to make a change. But we can work on: what is it that you’re trying to get at? What’s the difference between 79% automation and 80% automation?

If they can really come in and say, “Look, we have this risk analysis, and we got these actuary guys in. We did all this math, and for our product, and our business, and our software, that difference is a million dollars a quarter,” then maybe we have to make that push, because the business really, really depends on it, and it’s really going to make a business distinction to bump up that one percentage point. But if we start really delving down, most people have wet fingers in the breeze: they read something in CIO Magazine, or their boss wanted it that way, so they inherited it. If you start peeling apart these assumptions and having these conversations, you can get to the core of, “What is it you really want? What is it you’re really trying to do?”

If what they’re trying to do is drive down the number of defects, then let’s talk about, “Why are we having defects? Why are they not getting caught earlier? What can we do to get these things caught earlier, before they make it out to the customer?” and really appeal to these guys on a level that they care about. Do they care about money? Do they care about time? Do they care about the count of a particular thing, and the way that you manage that count, or did you count something else? It’s really, really context-specific, and you have to go in, have that level of conversation with them, and really try to empathize with them.

I use that word “empathy” a lot because I have to put myself in the position of the different people I work with: everybody from the person who’s just going to kick off the automated scripts and consume the results, to the people who are working on the framework with me, to the people who are actually paying me, funding me, to build frameworks and deliver automation capability into their teams’ hands. I let them know, “Here’s what you’re getting, and if we don’t do this thing you’re asking for, I can give you this other thing, and I can quantify it by reducing your opportunity cost or by helping you get to your next milestone quicker.”

Again, empathizing and talking with them on a plane that they care about, because if you go to them and explain automation theory and testing theory, their eyes are going to cross after a few minutes, and they’re going to quit listening to you, because that’s a means to an end. The end is what they want, so let’s talk about how we achieve what they want.

Joe:         Great. I guess the reason I brought that up is I just had a conversation with someone. They wanted me to give them my metric for testability. I said, “It’s a good practice. You need to make the application automatable and testable in order to automate it,” but he wanted me to give him some sort of metric he could use to go back to them and say, “Well, this metric needs to be defined.” It’s just crazy, so anyway.

Paul:        That’s very interesting. I saw James Bach give a talk, I think it was called “The Heuristics of Testability,” where he talks about different types of testability. I think if you Google that, you can actually get the slides and stuff.

Joe:         Nice.

Paul:        That might be of help for you in that particular conversation.

Joe:         Awesome. Okay, so a few more things I’d like to touch on. You mentioned maybe creating automation as almost a helper tool, and I think I saw this in one of your YouTube training videos. What do you mean by that?

Paul:        Myself and some of the people I’ve worked with over the last few years have written a different definition of automation, and it’s not completely unique. You can see people like Richard Bradshaw talking about “automation in test” as opposed to “test automation.” What I mean when I talk about automation here is taking technology and applying it judiciously in order to help the test organization be more effective or more efficient. Sometimes, the right answer is to create a smoke or sanity suite that can run on every code check-in.

Sometimes that’s the right answer, this test-case-based automation, but sometimes the right answer is, “Oh, let’s build you a context-sensitive diff between these thousand different result files that you get in, and build a dashboard to show trends up or down.” That particular example I built here at MedAssets took me about a day or so, maybe a day and a half, to put the first cut together, plus or minus a couple of features and a couple of bug fixes. It’s still in use two and a half years on, because it gave these guys a way to view the results of these massive data runs that they didn’t have before. They could look at day-minus-one trends: did we trend with more successes or more failures, and on which part of the code?

That was an invaluable use of the time, because they’re getting so much value back out of it. I would suggest that that is automation, and it was automation in test, because it helped the testers understand their results, but it had nothing to do with taking a functional script and having the computer run it. And there’s nothing wrong with that. We do that here, right? We’ve got the traditional automation that we do here, but we think about automation a little differently, more around assistance, and ask, “How can we be of assistance to these testers to help them get their jobs done better?”
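Paul’s diff-and-dashboard tool isn’t public, but a sketch of the day-minus-one idea might look like this in Java, assuming line-oriented result files in which failed checks contain the word “FAIL” (that format is an assumption for illustration, not MedAssets’ actual one).

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class ResultTrend {
    public static void main(String[] args) throws Exception {
        // args[0] = yesterday's result file, args[1] = today's result file
        long yesterdayFails = countFails(Paths.get(args[0]));
        long todayFails = countFails(Paths.get(args[1]));
        long delta = todayFails - yesterdayFails;
        System.out.printf("Failures: %d -> %d (%s%d day-over-day)%n",
                yesterdayFails, todayFails, delta >= 0 ? "+" : "", delta);
    }

    private static long countFails(Path resultFile) throws Exception {
        try (Stream<String> lines = Files.lines(resultFile)) {
            return lines.filter(line -> line.contains("FAIL")).count();
        }
    }
}
```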

Joe:         I 100% agree with you, and actually, it’s funny. I put together a presentation a while ago, and I … That was one of my points, and when I looked at some of your videos from YouTube, it’s pretty much the same points that I made, so it’s interesting how a lot of different people are coming up with the same exact things about automation. I think that’s a good thing.

Paul:        I agree. It’s quite refreshing to see other people because you come to these realizations sometimes without seeing what other people are doing, and then you start doing it, and you wonder, “Gee, am I an outlier?” and you talk to the people around you, and gee, you’re an outlier. That could be good or bad, right? So then, it just starts working out well, and you start seeing other people that were doing what you’re doing. Some after you, some before you, and you start building on that momentum to say, “Wait a minute. It’s not just me. Let me go see what they’re doing, and I can incorporate some of that in with what I’m doing, and let me share vice versa, and we can make this whole thing a lot healthier than it has been in the past.”

Joe:         Do you have any tips on how to make our test automation scripts more maintainable and more reliable?

Paul:        The main thing I have to say there is treat it like it’s software, because it is. I coach basic software encapsulation and abstraction principles in your framework. If you’ve got something you’re going to refer to more than once, like the ID or the name of the HTML element for your log-in link, put that in one place. There are a lot of people who are really, really opinionated about where that should go: whether it should be baked into the code in one place, whether it goes in an object repository, whether it goes in a database.

I’m very moderate when it comes to that, because what I choose to do here at MedAssets smells a lot like what we did at GameStop, but it’s not the same, because I’ve got different products, different teams, a different organization, different business owners. Not everything translates exactly the same way in every organization, but the basic principle of “treat it like software” always works out pretty well for me. That’s the main thing I would say: treat it like software.

Make reusable components, whatever that means in your world, whether that’s using something like SpecFlow or Cucumber and reusing the steps, or whether it’s building your own multi-step methods for objects that you can call in your own DSL-esque language, like we did at GameStop. It really depends on who’s writing the stuff, who’s executing it, who’s consuming the results, and who’s maintaining it. Is all of your automation a one-stop shop, the “automation gumball machine” as I like to call it, where a test case comes in and a test script goes out?

Then perhaps you can get away with a little more … shall we say, programmer-specific APIs, where it doesn’t have to be quite so friendly to someone who isn’t a programmer, because they’re going to see very little of it. If you’re doing what we’re doing here at MedAssets and what I did at GameStop, where I’m providing the base layers of the framework and the education on how to use that framework to write your own test scripts, I want to make that learning curve as shallow as possible, so the teams can get up and running quickly, keep their maintenance low, have all of the stunt programming done by my team, and keep the cost of ownership down, again with that mindset of, “I don’t want the tool to be your job. I want you to use it to do your job.”
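Here is a minimal sketch of the “put it in one place” principle Paul describes, baked into the code as one option among the ones he lists. It uses Java with Selenium’s By locators; the page class and the “login-link” ID are hypothetical.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    // The log-in link's locator lives in exactly one place. If the element's
    // ID changes, only this line changes, not every script that clicks it.
    private static final By LOGIN_LINK = By.id("login-link");

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    /** A reusable multi-step method that scripts call instead of raw locators. */
    public void openLogin() {
        driver.findElement(LOGIN_LINK).click();
    }
}
```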

Joe:         Yeah. Once again, I definitely agree with you there 100%. I’m not going to go off on a rant, but sometimes …

Paul:        Feel free, it’s your show.

Joe:         Oh my gosh. Sometimes, people … Certain companies obsess over, “We need a perfect framework. We need a framework,” and yet they never use the principles you mentioned, like “treat it like code.” Rather than take these high-level best practices, sometimes they’re just looking for some magic tool that’s going to save them, rather than doing the hard things like treating it like software development, using a layered approach, don’t repeat yourself … I don’t know. It just drives me nuts.

Paul:        Dorothy Graham has a really good layered architecture diagram in one of her books, “Experiences of Test Automation,” that I actually thought I had invented until I saw it in her book and went, “Oh, wow, she beat me to it.” She’s also a far better writer than I am. There are models for this stuff out there, and you don’t have to do it exactly that way, because again, the layered architecture that we have here is not the same as what I had at my previous jobs, but the principles are the same.

Joe:         Absolutely, and that’s another point … I forgot my point. The point is everyone is not the same, so what’s right for your company and your group may be different than for another company and another group, but some of these high-level principles and theory usually apply to all situations, or most situations.

Paul:        That’s the best advice I give: look at what’s out there. Listen to what I have to say, sure, but take it with a grain of salt. I’ve done some things for some companies, and we’ve had some amount of success with those things. I would not Xerox what I’m doing and overlay it onto company B blindly, because it’s probably not going to work as well there, not without some tweaks, not without some understanding of the context.

The principles, the stuff out there by some of the leaders in automation and testing, they’re sound. Go and read those. Go and think about them, and then internalize them, and make them appropriate for your company, for your organization. That’s a word I like to use a lot along with “empathize.” I like to use the word “appropriate.” Do things that are appropriate, and then when they become not appropriate, evolve to something that is appropriate now.

Joe:         That’s a great, great, great point, because a lot of times, people just follow something that has been done for years and never question it. Like you said, maybe you should question it and ask, “Is this still valid? Do we still need to do this?” I think people sometimes neglect that piece. It seems so simple, but it’s something you can get caught in: it’s just the way we’ve always done it. Sometimes, questioning it, “Is this still valid? Should we still be doing this?”, is a good question we should be asking.

Paul:        Definitely. You should always audit and self-reflect at points throughout your initiative’s maturity, because almost everything has a shelf life. When it starts getting stale, you need to do something to freshen it up.

Joe:         Is there anything new that you think is coming up that most testers need to be aware of or should get a handle on before it really explodes?

Paul:        I don’t know that it’s new, because again, this is something that I’ve been doing for a while, but I’m starting to see other people doing it now, or only now finding out that they’ve been doing it for a while. It’s this notion of applying technology as automation: getting away from thinking that everything in automation is test-case-based and that test cases are automation. That’s the starting point there.

That’s valid, but really, taking test cases and turning them into test scripts is an implementation of automation. I think we’re starting to see more people understand and undertake this notion of, “Let me apply technology judiciously to make my laden life better.” At the end of the day, that really is automation, because if you’re not doing it, air quotes, “by hand,” and a computer is doing it for you, then I submit to you that that really is automation, and it’s automation in test, as Richard Bradshaw calls it, as opposed to just automation in testing.

Joe:         Yeah. Absolutely. If you can automate it and it’s going to help your software development life cycle, then automate it. It doesn’t need to be an end-to-end functional test. It could be, just like you said, writing scripts to help the build be automated, so you don’t have to do it manually. Just helping people get the job done quicker in a repeatable way. A lot of times, there are a lot of benefits to that that people are missing out on, because they’re just focusing on some weird, never-ending functional tests when they could make people’s lives easier just by automating some of these other things that aren’t traditionally known as testing or test automation.

Paul:        Exactly. The other thing that I’m seeing, and again, it’s not around technology, it’s more around the mindset of using the technology, is that automation traditionally has been this pyrrhic effort of all or nothing: if I don’t have 100% coverage, or if I don’t hit my goal of 80% or whatever it is, then I just haven’t done it at all. Following along with this judicious application of technology, I think people are saying, “Well, if I can get the hard 80% done and the easy 20% is what’s left, then that’s probably still a win.”

I would suggest that you’re making the absolute right business value decision there, and you should keep up that particular thought process because when you pitch it that way to the people who hold the purse strings, eventually, they will get it. If you start talking about reducing opportunity cost and reducing total cost of ownership, they start to appreciate that because you’re speaking not only about things they care about, you’re speaking in a language that they do understand.

Joe:         Awesome. Yeah. I’m still waiting for that trend to hit my company, but maybe someday like you said.

Paul:        Yeah. I see it out there, right? I see it in the Twitterverse, and I see it somewhat less on LinkedIn, but there are inroads being made by certain people, and they’re starting to talk about them at conferences. They’re starting to talk about them on webinars. This information is starting to seep out and slowly infiltrate the mindset of test cases, and regression, and “I spend all my time on regression, and I don’t get any time to do exploratory testing,” whatever the right mantra is for what people have today. If we start applying this with a value mindset, as opposed to an automation old-definition mindset, I think we’re going to start seeing more value, more progress, and a more judicious use of human effort as well.

Instead of dealing with this brittle mammoth behemoth that you’ve created, you’ve been a little more deliberate about what you understand is going to be maintainable going forward as opposed to, “That’s going to fail 8 out of 10 times, and the failure fatigue I’m going to get from that is going to be so great that I’m going to become desensitized to it, so let’s just not do that. Let’s do something else with our technology.”

Joe:         Okay. Paul, before we go, is there one piece of actionable advice you can give someone to improve their test automation efforts, and let us know the best way to find or contact you?

Paul:        From an improving standpoint, from a conceptual standpoint, I would say start thinking about it less from a test-case-count standpoint, a test-script-count standpoint, or a percentage done, and think more about, “How can I provide some value using technology? I’ve only got X amount of time to spend helping me get my job done. Where can I most effectively spend that?” If it’s building a smoke suite that’s going to run on continuous integration check-ins, great. Go do that by all means. But if it’s something that’s going to build a dashboard for you or do a giant data conversion for you, spend your time doing that, because you can potentially save hours, days, weeks of effort doing these types of things, and I’ll submit that that is automation, because a computer did the bulk of the work for you.

From an actual technology standpoint, most of the failures or challenges I see with test automation approaches come from not treating the initiative like the software development initiative it should be, so I definitely encourage you to look at it as a software development initiative. If you are actually building a framework and you’re not an experienced programmer, practice programming. Practice programming first. Learn to program. Learn some basic program design skills. I’m not talking about getting a degree in computer science; I’m talking about making the mistakes. That’s how I learned, right?

I have a degree in computer science, but the way I got to be an established professional software developer is I made mistakes in college, and I made mistakes in grad school, and I made mistakes in my professional life. Learn from those mistakes. Improve. Make new mistakes because if you’re not making some mistakes, if you’re not failing along the way, you’re not pushing the envelope enough, but you do need a basis. You need an understanding of how programs and software needs to be put together to make it sustainable, to make it supportable and maintainable. As far as contacting me, you can follow me on Twitter @pgrizzaffi, P-G-R-I-Z-Z-A-F-F-I, or you can look me up on LinkedIn.


{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
