Software Testing

Why Testing is Harder than Developing [PODCAST]

By Test Guild

Welcome to Episode 100 of TestTalks. It’s not often you get a chance to hear from a real-life Obi-Wan Kenobi or Yoda. But that’s what this experience will be like as we Test Talk with someone I consider a living legend — Jerry Weinberg.

I can’t believe this is the 100th episode of TestTalks! Thanks to all of you, I’m able to talk with some of the coolest folks in the software testing and development space — so I was ecstatic to land an interview with Jerry Weinberg to celebrate this landmark episode. Jerry shares his years of wisdom on what it takes to be successful in testing and offers some of the best analogies I’ve heard from any of my guests.

Jerry has worked on some of the toughest software development projects, including NASA’s Mercury Program, where he assembled what was probably one of the first test groups in history to work on the design. He also implemented the space tracking network, focusing on the first multi-programmed operating systems. Jerry definitely knows what he’s talking about—he’s lived it! So listen up and bask in the testing force.

Listen to the Audio for Why Testing is Harder than Developing

In this episode, you'll discover:

  • The “secret” to achieving maximum productivity in your efforts
  • Why testing is harder than development
  • Whether automation can solve all your testing problems
  • Tips for improving your software testing efforts
  • Which areas of an application to check based on RISK
  • Much, much more!

“If we call a dog's tail a leg, how many legs does a dog have?” (@JerryWeinberg, #TestingWisdom)

Join the Conversation

My favorite part of doing these podcasts is participating in the conversations they provoke. Each week, I pull out one question that I like to get your thoughts on.

This week, it is this:


What role does automation play in your test team's development process?

Share your answer in the comments below.

Want to Test Talk?

If you have a question, comment, thought or concern, you can share it by clicking here. I'd love to hear from you.

How to Get Promoted on the Show and Increase your Karma

Subscribe to the show in iTunes and give us a rating and review. Make sure you put your real name and website in the text of the review itself. We will definitely mention you on this show.

We are also on Stitcher, so if you prefer Stitcher, please subscribe there.

Read the Full Transcript


Joe: Hey, Jerry, welcome to Test Talks.


Jerry: Well, welcome to New Mexico.


Joe: It's an honor to have you on the show. You have so many books out there, probably around 50 I think, but today I'd like to focus on Perfect Software and Other Illusions About Testing, and touch on whatever other topics may come up.


Jerry: Okay, fair enough.


Joe: Cool, so first I guess I have to ask, how are you so productive as a developer and writer? What's the secret?


Jerry: I don't know if there's one secret. Maybe the secret, at a sort of meta level, is to stop looking for secrets and just figure out one little improvement at a time. Get rid of things that are using your time that are not productive and that you don't like to do. Part of it is you have to love what you're doing, and if you don't love it, then it's pretty hard to bring yourself back to it. Writing, for instance, and writing code and so on, is easy to postpone, so you really have to learn to love it. When people tell me, "Oh, I hate writing and it's very hard for me," I'm like, well, then maybe you're just not cut out to be a writer. You find a way to love it. Same way with testing: find a way to love it, and then, instead of excuses not to do it, you'll find excuses to do it.


Joe: Yeah, that's great advice. I started reading your Fieldstone Method book, Weinberg on Writing, and that's actually one of the first things you say: you need to love what you're writing. Also, and I know this might be weird, but I've been applying the Fieldstone Method to other areas of what I've been doing. For example, I started doing a video training course and I was struggling with it, but after reading this book I was able to apply its principles to the video training. Based on what my energy level was at a given time, I knew I had to follow a certain pattern, and it really was helpful. I just want to recommend this book to anyone who's looking to write better, or actually to do anything in life better. The Fieldstone Method seems able to be applied to many things.


Jerry: It's realistic. I mean, you take the world as it comes. We would like to be able to plan everything in advance and know exactly: do this on this day, at this hour, on this minute, and so on, but life doesn't work out that way. I was thinking about that yesterday. I had to go deposit some legal papers at a county courthouse, thinking I've got to get there before they break for lunch, and I had plenty of time and everything, great plan, everything is fine. Then when I got close, there was a big sign that said "detour on this road" and I couldn't get there. I had to go around the long way, and of course I was late. Everyday stuff like that happens, so think about people who make really solid plans about things, and this definitely applies to software testing. You're testing because you're trying to discover things, and when you discover something, then your plans change, or ought to change, and if you just plow ahead with what you thought at the beginning, you're not doing a very good job.


I guess that's the secret of being productive, if there is a secret: adapt to what is, as opposed to what should be. Another way to describe testing is that it's finding out what actually is, as opposed to what's supposed to be. A program is supposed to work in a certain way, and the tester finds out, oh, it doesn't work in that way. When you report that, then somebody does something about it, and if you live your life the same way, you'll be pretty productive.


Joe: What you just mentioned may not seem intuitive to most people. Most people think that testing is very structured and that you have to follow a set of procedures, but it almost sounds like, just as you can't plan certain things in life, you may not necessarily be able to plan your testing effort perfectly. That's okay; that's what testing is, it's exploring and finding things out. What are your thoughts on that?


Jerry: Well, Woody Allen said, and Yogi Berra said, and a number of other people said, "I can predict anything except the future," and that's the problem. We'd love to be able to predict the future, we think, although if you knew the day you were going to die, I don't know if that would be good or not …


Joe: I know. That's true.


Jerry: To some extent we can predict the future quite a lot, so we get the illusion that we can predict it all the time and predict it very precisely. You can predict it and you can plan; just be ready, if your plan isn't working out, to make a revision of the plan, to plan to re-plan. It's like all this business about the waterfall method of software development. There's a waterfall method in testing too, and people can say bad things about the waterfall method, but the waterfall method is great if you can do it. Right? If it works, then it's great. If you can do one thing after another and you're not running into any difficulties, then that's fine; that's the fastest way to get things done, and no anxiety or anything else. As soon as something happens that you didn't plan on, then you have to drop that plan in favor of a new plan, which is: what am I going to do next? It applies to development, applies to testing, and applies to life.


Even though we would wish that we wouldn't have to do that, we think we'd like to plan and everything goes according to plan and that would be very nice and sometimes it happens, and then it's very nice. I mean I had arguments with people who are advocates or devotees of agile adaptive programming. They say things always change. Maybe not always, see that's a prediction about the future too and you're predicting that things are always changing, sometimes they don't change, so that prediction is wrong. I had a student who was in the business of converting COBOL programs into COBOL programs. You know you can come up with a different version of a compiler, a little different standard, and people have this huge investment in their COBOL programs, so he wrote software that did 99% of the job and then a little handwork and the customers didn't want any changes. They wanted exactly what they had before, and he could get fixed price contracts and he could work to schedule and deliver error free code and so on.


It can happen, but to plan on that happening and not be ready to [inaudible 00:07:26], that just isn't smart, that's all.


Joe: Absolutely. I totally agree. Another concept you go over in Perfect Software and Other Illusions About Testing is product risk, and that a lot of people, when they're testing, if they're not thinking about risk, may not really be testing at all. Can you explain what risk has to do with testing, and how risk comes into play when someone's planning testing?


Jerry: Yeah, I'll try. The first thing you have to do, and I find that this is not done very well, is ask yourself whose risks you are concerned about. For example, I make a piece of software, test it, and deliver it to the world, and it doesn't work right. If it doesn't work right, what's the cost? The first question is: whose cost? As the developer, I might lose my job, and I might have to spend a lot of time doing work over. As the company or organization behind the development, I might lose my client, or customer, or lots of customers if I deliver software that doesn't work, so there's a risk I could go out of business. Then there's the risk to the users of the software, which can be way bigger. I've worked on systems where if the system didn't work right, somebody died. The developer didn't die, the tester didn't die, so the risks to them were different.


I mean, if you're a developer and you build some software and somebody dies because of it, well, you're still alive, so your risk isn't as great. You might get a bad reputation, you might lose your job, you might feel bad. Those are all risks for you, but your customer is dead, and that's a pretty big risk. I don't know how you put that on a money scale. One of the problems we have in the industry is that testers are testing to get rid of risks for the developers or the company that's selling the software, as opposed to risks to the customers. Sometimes they have to do that. I'll give an example; I think it's in Perfect Software. My niece is a writer. In her first book she used Microsoft Word, and when the book came out, she found out that every several hundred words, maybe several thousand words, I don't remember, a word was just missing from the text. The book was printed, and many copies reprinted, and they didn't notice it until after they were printed.


Well, I happened to be consulting with Microsoft at the time, working with the Word people and some others, and I told them about this. I was visiting and I said, "Look what happened." They said, "Oh yeah, we know about that. It's a bug, and a really large document is going to lose a word now and then." I said, "Well, when are you going to fix it?" They said, "Well, we're not going to fix it," and I said, "What do you mean you're not going to fix it?" They said, "Well, see, it only happens if you're doing something really big, and most people using Word are just writing a letter or memo, maybe a small article, and it never happens to them. We don't recommend that people draw up a whole book in one Word document anyway, so there might be a thousand people in the world that are affected by this, and we've sold a hundred million copies of Word."


"The risk of trying to fix that is that we're going to make a worse error somewhere else, and it's going to affect a whole lot more people. It's just too bad for those thousand people; they are at risk, but it's no big risk to us. Okay, so we lose a thousand customers out of a hundred million, and they're going to ask for a refund; that's fine." See, they were doing a risk assessment, and it doesn't sound right when you hear somebody say, "Well, we know the errors are there; we're just going to leave them there, because the risk of trying to fix them is greater than the reward of fixing them," but that's very logical, right? It's very good engineering, in a way. We would like to believe that we could do otherwise, but we can't.
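Microsoft's reasoning here is, at bottom, an expected-cost comparison. As a sketch only, with every number invented for illustration (none of these figures come from Microsoft or the interview), the shape of the tradeoff looks like this:

```python
# All figures are hypothetical, purely to illustrate the shape of the tradeoff.
affected_users = 1_000        # people writing book-length documents
refund_per_user = 200         # dollars lost per refunded customer
p_bad_fix = 0.01              # chance the fix introduces a worse bug
bad_fix_cost = 50_000_000     # dollars if that worse bug ships to everyone

cost_of_leaving_bug = affected_users * refund_per_user   # 200,000
expected_cost_of_fixing = p_bad_fix * bad_fix_cost       # 500,000

# Under these made-up numbers, shipping the known bug is the cheaper risk.
print(cost_of_leaving_bug < expected_cost_of_fixing)  # True
```

The arithmetic is trivial; the point is that somebody has to choose whose losses go into the calculation, which is exactly Jerry's "whose risks are you concerned about?" question.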


Joe: Absolutely. Risk does come into play a lot. I actually work for a medical company, and we have certain software that, if it doesn't behave correctly, could kill someone, so we have to worry about the FDA. How do you deal, though, with managers who say, "Well, we'll just test everything"? This has been going on for years, this idea, even though you can't test everything. How do you explain that to a manager or an organization with really legitimate risk, where they could do bodily harm to someone or actually cause death? How do you still say you can't test everything? How do you have that conversation, I guess?


Jerry: I have clients who are in the medical business also. Some of them make medical devices, some of them devices that are internal to the body, some of them devices that are external, and some make medical test equipment that goes, say, in the doctor's office or in a hospital. Those who make equipment understand that you can't make a machine that's perfect. I think they understand that absolutely.


Joe: Some of them don't, I don't know.


Jerry: I mean, you know, they work to confine the risk as small as they can, but they look at what it costs them if the device fails. Let's say somebody makes a medical device that's implanted in somebody's body, and one person dies. Well, okay, so they'll sue you, and maybe they'll win $10,000,000; well, that's a cost of doing business. It's like having equipment in your office: your keyboard fails and you're going to have to buy a new keyboard. That risk is not very great, and most people just live with it, but if somebody dies because your keyboard fails, then it's a much bigger risk. Again, you calculate, and you do what you can to make the right decision. It's the best you can do, but if you think you can do it perfectly, that there will never be any risk of a failure, then you're just wrong. Physics tells us that perpetual motion machines can't be built, and so on, and you live with it; we live with imperfection all the time.


Some people get very upset by it. Some places it's more important, some places it's less, but there are people who have a perfection rule, a perfection need, where any little imperfection bothers them as much as any big imperfection. These people do not do well in testing. For example, one of the problems you get into is a tester who says, "I have to test everything," or a manager who tells you that. The tester feels you never finish; well, yeah, you never finish, it's just part of the business. It's just like being a doctor or surgeon: once in a while somebody dies from the surgery that you do. That's a critical moment in your career, and you could say, "Well, somebody died, I'm going to stop being a doctor." You just have to get through that; I don't know what else to tell you. To appeal to a manager who says, "Test everything," you have to be ready with examples of what it would involve to even come close to testing everything.


We have an algorithm that we use. I used it for people who want to build a piece of software or a machine that does certain things, and sometimes the problem they're trying to solve is so big that we can't do it, like the problem of testing everything. It's fascinating. I mean, Charles Babbage, you know who Babbage is, right?


Joe: He invented the computer?


Jerry: Yeah. Almost 200 years ago he made a mechanical computer, a pretty simple one, and he demonstrated to people, and he wrote this up and it's documented, that you could have this machine turn out the numbers from one to a million in succession, in sequence. Then it'd go to a million and one, and then the next number would be what? A million and two? You have a million cases where it always gives the successor number; it's like the sun coming up in the morning. But the next number is not a million and two, it's a million and three, or something like that. He demonstrated this and showed people. So we've known for hundreds of years that no matter how much you've observed and tested, you're never sure. We have examples of that, for example, in the security business: somebody plants a time bomb in your software, right? You can test and you can test, and you can use it. I give examples in the book; in my books I kept track of typos for over 30 years.


Then all of a sudden somebody came up with a new one that nobody had found, and literally a million people had read that book. If your manager can't accept this and downgrades you because of it, you're working in the wrong place. You try to teach them, and if they don't get it, then instead you teach the risk assessment: "Okay, we're going to allow an error to get through once in a while. What happens when that happens? Can we fix it? Can we compensate the people? If something irreversible happens, like somebody dies, can we live with that? If not, we have to get out of that business." That's just life.
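Babbage's counting machine translates directly into modern testing terms: a bug can hide behind any finite number of passing cases. A minimal illustration (the function and the million-case threshold are invented here, not from any real system):

```python
def next_number(n):
    """Behaves like a plain counter for the first million inputs,
    then silently changes its rule, like Babbage's demonstration machine."""
    if n <= 1_000_000:
        return n + 1
    return n + 2  # the hidden rule change that no test has reached

# A million consecutive passing observations...
assert all(next_number(n) == n + 1 for n in range(1, 1_000_001))

# ...prove nothing about observation 1,000,001.
print(next_number(1_000_001))  # 1000003, not 1000002
```

This is the same reason a planted time bomb in software can survive any amount of use: testing samples behavior, it never exhausts it.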


Joe: This concept of certainty, I just thought of this, could actually be kind of dangerous. The reason I think of this is that I was speaking to another guest who said they had 100% test coverage, so management said the release was going to be successful. They released it, and then no one bought the application. Based on that metric, it should have been successful. You will never really have that certainty, and if someone thinks they do, that certainty could be dangerous.


Jerry: Definitely. It could be funny if your … My skin doctor removed a little growth from my skin a few years ago, and before she did it, I said, "Oh, is it serious?" "No, it's just minor surgery," she said. I said, "What is minor surgery?" "That's surgery done to anybody else." That concept is important in the testing business: who is going to suffer, and can you live with it or not? I have so many cases in mind of people who were so sure that there weren't any errors in the software, and then something turns up, and because they were not prepared for the idea that something might go wrong, that's what caused the trouble. It's what I call the Titanic effect; you'll see it in one of my books. The reason the Titanic sank, and the reason it was a disaster, was that they believed it was a sink-proof ship. As they went through the icebergs, they said, "Well, nothing can sink this ship, so we don't have to worry about hitting an iceberg or something bad."


They were careless. That's the thing you have to watch out for: you always have to reserve a little bit of your mind, your emotions, for the idea that something could go wrong. Probably something will go wrong; are you prepared to respond to that? This explains a lot of practices that we try to teach people. For example, why do you care if your software is maintainable? If you make it once and it's perfect, it doesn't have to be maintainable, but if it's not maintainable and something goes wrong, somebody goes in there and tries to fix it and they can't figure it out, and then a little problem becomes a huge problem. For maintenance and all kinds of other stuff, this belief that you do things perfectly is what gets you into trouble.
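The 100% coverage story Joe mentioned is worth making concrete: coverage measures which lines ran, not whether anyone checked the right things. A small invented example (the `clamp` function is hypothetical):

```python
def clamp(value, low, high):
    """Hypothetical function under test: confine value to [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

# These three checks execute every line and branch: 100% coverage.
assert clamp(-5, 0, 10) == 0
assert clamp(15, 0, 10) == 10
assert clamp(7, 0, 10) == 7

# Yet full coverage never asked what clamp(7, 10, 0) does with inverted
# bounds (it quietly returns 10), or whether clamp was what users wanted.
print(clamp(7, 10, 0))  # 10
```

A coverage number is a claim about the tests, not about the product, which is why certainty built on it is the Titanic effect in miniature.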


Joe: Absolutely. Besides risk being an indicator of what testers should focus on, are there any questions a tester can ask before testing to make sure they're focusing on the right things?


Jerry: I think you need to highlight certain things, like: I need to just sit down and talk with the developers and ask them what went on, what happened, what was interesting, and so on, in an informal way. This gives you a lot of clues about where you might be having trouble. For certain kinds of talk, you listen for very specific things. For instance, and I'm sure you know this, if you're talking to a developer and the developer says, "Well, one thing for sure is you don't have to test block X, because that one really is solid." You're laughing, right? Why are you laughing?


Joe: You know that's the first thing you should check for sure.


Jerry: Yeah. Why is that? Because that developer has a confidence that's unjustifiable, right? Which means that they haven't really tested it very well themselves and they don't know it very well. The opposite thing is also true. When something goes wrong and you're trying to pinpoint it, which is another part of testing, you know there's something wrong, you talk to the developer, and the developer says, "Well, I know that it's in block X." That's the place you don't have to look, because they've been looking in block X and they haven't found it, so that's the last place you want to bother looking. You look where the finger is pointing, and then you go somewhere else. Those are the kinds of things you get just from talking to people. And you talk to the customers too: if you've got a working product, you try to get real people to sit down with it, and you just sit next to them and watch what they do. Don't tell them things, just watch what they do.


Talking to developers is not nearly sufficient, though, because you're testing not just for errors in the code. Take the case you gave, where nobody bought the stuff. Maybe it didn't have any code bugs or errors in it, except it wasn't what the customer wanted.


Joe: Right.


Jerry: [inaudible 00:23:44] once told me something, and I had trouble believing it at first, but I came to realize how wise it was. He said, there's no such thing as a wrong program, there are only different programs. I gave an example, I think in the book. We were doing some training; we were training computer operators when multiprogramming, multitasking systems first came into use. If a system crashes while it's running 10 different programs, it's very hard to restart it correctly, and it's a real problem for the operators. We were developing operator training, and we wanted to train them how to respond to crashes, so we tried to write a program that would crash the operating system at random times, as a training tool. We couldn't do it. We couldn't make the operating system crash, and we tried and tried. Then somebody said, well, actually, we have a program that crashes the operating system. This was at a university, and the people in the Physics department kept crashing the operating system on us; they had some big FORTRAN program that did this.


We went to them and we got a copy of this program, and we used it to train operators. The Physics department wanted to simulate the hydrogen molecule or something like that, and the program never worked for them, but it worked perfectly for us, right? Did the program have a bug or not? See, you can't answer that question unless you know who is going to use it and what they are going to use it for. You need to talk. It's good, I'm just looking at your logo here, and this is Test Talks: talking is a big thing, and of course listening, really, for testers. Good testers have to know how to talk, of course, because they have to present their results to people who don't want to believe them. "There's something wrong with the thing you produced?" "Oh no, there isn't." You have to be a convincing talker, but you have to be an even better listener, because you're testing to find out if what this thing does is what people really want.


It's another reason why testers have to be involved in the development process right from the beginning, when you're starting to do requirements and market surveys and so on, because you have to know what it is you're trying to accomplish with this piece of software, or you can't test it. There was a story from years ago at IBM. IBM produced FORTRAN, which was going to solve all programming problems, of course, and it hasn't done it yet. In Europe they wanted Algol; they had developed Algol, and they wanted Algol compilers for their computers. IBM kept hearing that, but they wanted to push FORTRAN. So some guy in a Scandinavian country, I think it was Denmark but I'm not sure anymore, developed an Algol compiler by himself and put it out there, and IBM said, now we can tell people we have an Algol compiler, and so they'll buy our computers. They gave him a monetary award for doing this.


At first they gave him a small award, but after a year the compiler was almost error-free, and this was unheard of: most software that IBM produced got a lot of field reports on errors, but this one, in its whole first year of operation, produced only one error report from the field. Then they gave him a much larger award. Well, the thing was, it turned out that the error was that the thing wouldn't install in the system. That was the only error. Of course it was the only error, because you couldn't run the thing.


Joe: Oh, that's funny.


Jerry: All right? The first level of the story is you can't just count errors. On the other hand, if you think more deeply about it, IBM actually got what they wanted, because they could say they had an Algol compiler, and so people bought their computers, and IBM actually believed it, all right? They weren't lying to their customers. They said, "We have an Algol compiler." It turned out that none of the customers actually cared that much about an Algol compiler; well, they tried it, it didn't install, so they went ahead and used FORTRAN. Was this program wrong, or was it right?


Joe: That's a great story. It reminds me of metrics for some reason. If you have a bad metric and people believe it, like: no bugs reported, so we have no bugs, we're successful.


Jerry: Oh yeah. IBM used to say of the IBM 650 that it had never made an undetected error. What does that mean? My favorite metric story is from out here in New Mexico. I get students from Seattle, for example, and they're enjoying the beautiful weather we have, and I say, "Oh, the thing is, New Mexico is much cloudier than Seattle." They say, "What? What are you talking about? Look, the sun is shining." I say, "Well, no, let's go look at the sky and see how many clouds you see." "There's one, oh, there's another one, oh yeah, there's a third one, oh, there's five clouds." How many clouds are there in Seattle? "One." It covers the whole sky. And so that's a metric.


Joe: Absolutely. That's awesome. You were the architect for the Mercury project's space tracking network; that was a few years ago. Is there anything in testing that you can't believe is still misunderstood between developers and testers?


Jerry: Well, I'll tell you, I don't know if there's something that's misunderstood by all developers and testers, but many, maybe the great majority of them. We made the first separate testing group that I know of historically, I've never found another, for that Mercury project, because we knew astronauts could die if we had errors. They're stuck out in space; not only die, but die slowly while they run out of oxygen, because we couldn't bring them back. That's very bad publicity for IBM when two billion people are watching this space launch and they're saying, well, the poor guy is going to die in a few days as he runs out of oxygen, and it's because of a bug in an IBM computer. It's not the kind of advertising you want, so whether we cared about their lives or not, I don't know, I cared and I think most of our people cared, but as a business decision, just like in your medical business, we couldn't afford to have people dying; it's bad for us.


We took our best developers and we made them into a group whose job was to see that that didn't happen. They built test tools and all kinds of procedures and went through all kinds of thinking and so on. The record over half a century shows that they were able to achieve a higher level of perfection than had ever been achieved before or since, though it wasn't quite perfect. A number of other places in IBM, a lot of places in the federal division, adopted our tools, adopted our approaches and so on, and other people came from the outside and we showed them what we did, we taught them. I kind of faded out of the testing business for a few years, and when I came back I found that lots of people had set up separate testing groups, and I thought, that's great, our message is getting out and things are getting better everywhere. It was a few more years before I woke up to the fact that the reason I had missed what was really going on is that, as a consultant, I only get either the top places or the bottom places.


Most people who are developing software are just kind of average, and they would never pay to have someone come in to advise them, to consult with them. They think they know what they are doing. The top people know that they don't know what they're doing, and they hire consultants. I worked with the best companies in the world, and we were able to improve what they were doing. I also got to work with the worst ones, because there were lawsuits; somebody was suing them, many people were suing them, and I'd get called as an expert witness or to try to settle some terrible thing. So I had a really biased look at what was going on in the testing business, and all these middle people I wasn't seeing. I just heard, well, they have testing groups now, so they've learned. No. That wasn't true. What they had learned was, they thought, we can hire people to do testing cheaper than we can hire developers. In my early days, developers tested their own stuff.


We did the programming, reviewed our stuff and so on, but it was all developers; there was nobody with the title of tester. Now we have people with that title, and managers who believe you can hire monkeys to bang on keys and that's how you test programs. We've been through 40 or 50 years of that thing. People in the middle still do not know, even if they have a test group, that testing is harder than developing. This is what they don't know. If you want to have good testing, you need to put your best people in testing: your smartest people, and maybe a little different type of person, someone, as we said, who listens better, talks better. It's a very exceptional kind of person that makes a great tester. If you believe that, then you should reward them better than you reward the developers. Instead, I go around and I find that people habitually pay their testers less than they pay their developers. That's the number one thing that is not understood.


Joe: That's a great point. I actually see this almost every day. We have developers that do the test automation, and they write test automation code. At the beginning you'd think it would be great: they can use all these best practices, it's going to be better than what a tester could write. But then I look at their code thinking, there's no way they could be developing our application, because if this is how they write their code, we're in trouble.


Jerry: Exactly. What makes you think that your test code is better than the system code?


Joe: Exactly.


Jerry: I see it all the time. Again, it goes back to talking, to just listening to how they talk. Generally speaking, self-developed test code, and even the test cases, are done carelessly compared with the way they develop the software, even when that's done poorly. They don't review their test plans, their test cases, their test procedures, or their test tools. Why would you think your testing is good enough to test with when it itself has not been tested?


Joe: Exactly. That's another point you bring up in your book: do you think test automation can ever replace a real tester?


Jerry: No. At first glance you guys might not like to hear me say this, because you're in that business, but it's another example of what anthropologists call name magic. Abraham Lincoln had this riddle. He said, "If you call a tail a leg, how many legs does a dog have?" What's the answer? Well, some people would say five, and Lincoln would say, "No. Calling it a leg doesn't make it a leg." In the same way, calling it automation doesn't make it tester automation. It's just a label; it sells well, and so on. To do successful test automation, there's so much other stuff surrounding it. You may have tools that automate certain tests, which is what I think you guys deal with, but that doesn't automate testing. Suppose, for example, we're talking about baseball, and somebody says, "We have an automatic batter," but they call it automatic baseball. Well, there's more to baseball than batting.


There's running, there's catching, there's throwing, and there's knowing the rules. Naming something does not make it that thing; calling it automatic baseball does not make it automatic baseball. Right? I see a lot of clients who get into trouble because some manager thinks, "Okay, I can buy this thing called a test automation tool, and now I don't need testers. I just need one guy to press buttons, and it will automatically test everything." Well, no, that isn't the way it works, and you guys know that. I imagine people who sell test automation tools get really frustrated by this, because somebody buys a tool and then complains, "Well, it didn't work," when what that really means is they didn't work. Right? They didn't use it properly, they didn't use it in the right cases, they tried to use it in cases where it didn't apply.


You can automate some of the tasks involved in testing, maybe many of them, and save yourself a lot of effort and do things very reliably, and that's great. But it doesn't do the whole job.


Joe: I definitely agree with that wording. I don't know if you've ever heard of Richard Bradshaw, but he says "automation in testing" rather than "test automation," to put the emphasis on the fact that it helps you with some aspect of testing, but it isn't itself testing; it's checking. Testing is much more than validating something automatically; it's thinking about all the other things that go on around it. Using a different word almost creates a mind shift.


Jerry: That's great. I hadn't heard his expression. Just by adding that word "in"?


Joe: Yeah.


Jerry: It could be very helpful. Which is also an example of how a tiny change can make an enormous difference.


Joe: Right. Okay, Jerry, before we go, is there one piece of actionable advice you can give someone to improve their testing efforts? And let us know the best way to find or contact you.


Jerry: Well, for each person there's one way, but it will be a different way for different people. For example, and this is a commercial message, so you can edit it out, I have written a number of books that help people do a better job of testing, but they're on different subjects, because, as we just talked about, there are many different things that have to be done in testing. You have to find the right one for yourself. At a meta level, the one thing you need to do to become a better tester is to discover the one thing you need to do to become a better tester. Okay? I know that sounds silly, but think about it; it's true. That's your first test. Unfortunately, people tend to do the opposite. I don't know if you've ever worked out in a gym, but if you go to one you might see bodybuilders there, and one thing you notice about bodybuilders is that they want to win some bodybuilding competition.


They've been told, "You have great biceps," and you see these guys, and they do have great biceps, because in the gym they spend most of their time developing their biceps. Then they get into a competition, and they get a good score on biceps, but they've got skinny little calves, and so they never win. What they should be doing is looking at the worst part of their body from a bodybuilding point of view, and spending most of their time working on that; not all of their time, but most of it. It's the same way in testing. For example, I've met many testers who do a beautiful job of detecting and isolating errors in code, but the way they talk to their customers, that is, to the developers and their managers, is so offensive or so unclear that they're not successful testers. Those people need to study and practice how to talk to people, so they'll be listened to and believed.


Or some talk pretty well, but when they write up an error report it's just not clear, in English or whatever language they're using. Those people need to work on their writing. Then there are people who are not good problem solvers and tend to overlook things; they need to work on their problem-solving skills. All of these things, and some others, are things I've written books about and put together in what I call the Tester's Library, a bundled collection of my books in e-book form at a nice discounted price, to encourage testers who want to improve themselves to get the whole set. It isn't everything; I don't cover everything, but it covers a lot of different things: writing, talking and feedback, problem solving, problem definition, and so on. Leadership, too. Those are all things a good tester needs to work on.


The little trick I give people is this: when you find yourself saying, "Well, that's one thing I don't need to know about," stop, catch yourself, and go learn about it, because it's exactly like the finger pointing we talked about before. When the developer says, "Well, that's a module you don't need to look at," that's the one you look at first. Do the same thing with yourself: when you say, "That's a skill I don't need, it has nothing to do with testing," then it does, and you'd better work on it. I have two websites. My principal website has links so you can get to me and all my books, and I have a separate website for my fiction.



