
5 Proven Secrets of Test Automation (#2 is a MUST)

By Test Guild

In the latest episode of TestTalks, Paul Grossman reveals his top Test Automation Secrets (aka the “Not to Do” list).

Here are the highlights, but be sure to read the full transcript below to discover even more automation awesomeness:

1) No documentation

This may seem simple, but I can’t count the number of times I’ve seen an automation project fail because no one could understand what it was doing, or because it broke each time it was updated. If you work for a large firm that is doing enterprise application development and have many teams contributing to your automation efforts, your testing libraries can quickly grow out of control.

If your engineers are not providing readable names for their variables and methods, along with comments, it will be difficult to keep your automation maintainable over the long run. When it comes to automation, you want to make it easy to tell what the expected result is and what the actual result captured was. This info needs to be documented!
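That expected-versus-actual record can be as simple as a helper that every verification goes through. The sketch below is purely illustrative; the `check` helper and its log format are hypothetical, not something from the interview:

```python
# A tiny self-documenting check helper: every verification records
# a readable label, what was expected, and what was actually observed,
# so failures are understandable long after the run.
# (Hypothetical helper for illustration.)
results = []

def check(label, expected, actual):
    """Record a verification with its expected and actual values."""
    status = "PASS" if expected == actual else "FAIL"
    results.append(f"{status}: {label} | expected={expected!r} actual={actual!r}")
    return status == "PASS"

check("login page title", "Welcome", "Welcome")
check("account balance after deposit", 150, 145)

for line in results:
    print(line)
```

The point is not the implementation but the habit: every check leaves behind enough context to diagnose it.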

When Paul inherited an existing automation framework, he made sure that any time a developer or tester changed the system, the change was clearly documented.

2) No modular design

Paul made a great point when he mentioned that automation is programming. The best practices your developers use for your application development efforts should also be the same ones used in your automation efforts.

Remember: you’re developing an application that will test another application. One of those good practices is having a test automation framework that has a modular design.

Basically speaking, encapsulating behavior into functions in your framework as you go along is the best way to achieve a modular design.

For example, if you’ve got 15 tests and they all log in and something changes or breaks, you don’t want to have to change all 15 scripts; you’ll want to change only the one segment that is being called by all those scripts so they can all get up and running as quickly as possible.

I’m a big believer in what I call a layered approach, where you have one place designated so that if anything changes, you need only go to that one place that handles it.

3) No team member support or management support

Occasionally, when someone begins promoting test automation, you may have a problem with the manual testers. That’s because the first thing a manual tester is going to think when an automation engineer shows up is that they’re going to lose their jobs.

Paul points out, however, that you sometimes need to educate your teams and managers to let them know that there are a couple of things manual testers do better than automation: they use their brains, they actually check things, and they perform exploratory testing. Exploratory testing is done better by manual testers.

Automation is better at performing tasks that are repetitive; we humans are generally bored by doing the same thing over and over and over again. Sometimes we get tired, sometimes we skip steps, and then we look and say, “Oh my gosh! I just found a defect.” How did you find it? Many times it turns out it is not actually a defect, because we mistakenly skipped a mandatory step during our manual test.

There are often times that something cannot be 100% automated, or may not be a functional test.

Because of this, a quick way to get your manual testers to become believers in automation is to start automating not only tests but utilities for the manual testers.

Once you show them the time savings they’ll get from these scripts, they’ll usually fall in love with automation.

4) No Detected Defects

One of the clearest indicators of the health of your test automation efforts is the number of bugs it catches.

Granted, automated regression tests in general don’t catch as many bugs as an exploratory test can, but if you’ve been running a test suite for years and it hasn’t found any defects, that’s a definite concern.

Paul recommends that your teams actually demo what the script is doing, because the test may not even be testing what you expected it to, or may lack good software traceability.

5) No Metrics, No ROI

A frequent problem when you’re doing automation or testing is that people don’t see the value of it. With application development, management can see output, whereas with testing they don’t really see value because nothing tangible is being created.

If you can somehow quantify what you’re doing, that’s one key software testing metric you can use to justify or show management the work you’re doing.

Keeping track of your automation ROI metric is a good way of showing what you're saving the company in the long run.

Read Full TestTalks Interview Transcript with Paul Grossman

Joe: Hey Paul welcome to TestTalks.

 

Paul: Thank you very much Joe I’m really pleased to be here.

 

Joe: Awesome. It’s great to finally have you on the show, but before we get into it can you just tell us a little about more about yourself?

 

Paul: Sure. Well I’m trying to go by the [inaudible 00:00:13] of the dark arts automation engineer. I was going to go by Evil Tester but Alan Richardson already took that one. I’ve been an automation engineer for about 15 years; I started with WinRunner initially and moved into QTP and UFT. I’ve worked with Quality Center, which is now ALM, BPT, and BPT for SAP. I use [inaudible 00:00:34] as my glitter glue in my automation projects to make things look cool. I’m trying to add LeanFT in there as well. Something you may not know about me that’s interesting: my degree is from the prestigious Savannah College of Art and Design. I was very lucky to meet a guy named Steve LaVietes who taught me a lot about C.

 

He went on to do some stuff for Sony Pictures in special effects on a bunch of movies, and I went on to do this stuff, automation. Which I love, man.

 

Joe: A lot of creative people tend to get into automation. I went to Berklee College of Music, I know a lot of musicians are into test automation and development. Why did you get into test automation and how did you get into test automation?

 

Paul: That’s a great little story. I got in because of a small practical joke that I played on my manager.

 

Joe: Okay.

 

Paul: I started out basically … I’d been playing around with coding and programming since the 80s on the Commodore 64 and Amiga. As I got out of college I found WinRunner and I used the 2-week license to go and automate a game, basically a slot machine; go click at it with XY coordinates, very old-school automation. Then I got a little more advanced and I went, “Okay, I think I can automate this Blackjack game and put card counting into it.” I thought that was great and I decided to go and show this to my manager. Now, the thing about it is that when I was working at this lab we had systems that would … well, they’d shut down every once in a while. They were big systems; they would take about 5 minutes to shut down and 5 minutes to boot up.

 

I had 10 minutes to do whatever I wanted to do while waiting for the system, so I started working on my side job, which is learning magic. Here I am in the lab, practicing rope tricks, putting knots into ropes, and that takes some dexterity and timing. Who had better timing was my manager, who happened to walk in at that very moment and see me doing rope tricks in the middle of my lab. She looked at me and she said, “Is it possible that you could do something a little bit more useful than that?” I said, “Yeah, absolutely.” I realized my system was about 5 seconds from shutting down. I pulled up my little hand, snapped my fingers at my system, and went bang! System powers down, shuts off.

 

She says, “Is it supposed to do that?” Now, my system was actually connected to an uninterruptible power supply; it can detect when the system’s powered down and it sends a message that says, “Power me back up again.” I told her, “No, it’s not supposed to do that.” I snapped my fingers and the system powered up, turned on the lights, powered up all the drives. She looked at me and said, “You guys in QA are a little off-kilter here. I am not coming back out here again.” That long story explains why, the next day, I walked into her office and said, “Listen, I’d really like to try and do some automation. I’ve been doing some stuff, not on company time.”

 

I showed her these 2 videos I had of these little games that I had automated and I said, “I’d like to try and do this for the company here.” She looks at me and she says, “Do you know who the expert is in test automation here at the company?” I said, “No, I don’t but I want to meet them. I’ll be a sponge, I will suck up all that knowledge and they will be my mentor.” She looks at me and she goes, “You are the expert in test automation here because you are the guy who opened the box.” I started thinking to myself, “I was also the guy who opened my mouth.” She puts me on … basically makes me the lead test automation engineer of a defunct automation group.

 

There’s nobody left; there were 3 guys who were there, 1 had already left the company, a second had moved into another area, and the third was retiring that day. I looked at this project and what they were doing and saw that there were maybe some things they were doing that I didn’t want to repeat. They had been working for like 2 to 4 years, depending on who you talked to, and there were like 5 things. I said, “I’ve got to make myself a not-to-do list. I’ve got to find ways to overcome all those things that they were doing, and not do them myself.” That’s where we come up to the 5 secrets of test automation.

 

Joe: Today I really want to dive into your 5 secrets of test automation. I guess you also call it the not-to-do list. Let’s take it point by point. I think the first one is no documentation.

 

Paul: Correct.

 

Joe: What’s that all about?

 

Paul: These guys basically gave me the directory of where their code was and that was about it. There wasn’t a whole lot of documentation inside the code that gave me any clue as to what they were attempting to do. One of the things we, you and I, are taught as programmers, any programmer, is to go and document your code. You’re writing your functions: put in headers, document who did it and what it’s supposed to do. In automation we want to make sure that we know what our expected result is, what our actual result is, and where it’s going out. They didn’t have any documentation to go with it. What I started to pick up heavily at that company was that every time a developer came over and made a change on the system, they had to write down their name, write down what the change was, and put it into this book.

 

They had tons and tons of books. I started the same habit of writing down everything I tried with automation. All my little experiments, what worked, what didn’t, what might be neat or cool and started putting these into books. Today I've got about 15 books over 15 years just filled with stuff. If I might steal a line from somebody I very much think is a great person I write down everything and I share the good stories. There’s our documentation. We handle that first.

 

Joe: Is this still a practice you do currently?

 

Paul: Yes.

 

Joe: It is.

 

Paul: Every day I write down everything I do. In my current project I’m basically using Notepad++ and documenting everything I’m doing and keeping track of … I actually do the daily stand-ups in our agile meetings and keep track of it that way.

 

Joe: I also notice you’re very active on Facebook. There’s a test automation … Advanced Test Automation Group. You seem to document there also different things that you’ve been experimenting with. It’s a useful resource. It seems like you really do it in multiple places. You document what you’ve been doing and try to share what you’ve been learning. I think that’s really beneficial to everyone.

 

Paul: I’ve given it my best shot. I’m just here trying to help people out. I did have a different opinion years ago; I thought, “Man, if I let anyone else know what I’m doing there’s going to be just more competition for me.” The way the internet turned out, I’m more of the idea that we need to share information and get people to really know what are good practices, what are bad practices, what to do and what to avoid.

 

Joe: I guess another one of your top 5 secrets of automation was no modular design.

 

Paul: Yeah. These guys, when they’d written it, basically did a lot of their scripts with just record and playback. I've just recently been to a conference where I met another one of my mentors, someone I follow a lot, Linda Hayes, and she was saying we have to really build a framework around what’s going on. Now, I do want to separate out what’s changed over the last few years with the word framework. Usually when you talk to someone who has been working in Selenium quite a bit they’ll say, “What framework are you using? Are you using JUnit? Are you using TestNG?” Those are well-built, pre-built frameworks for taking care of things in Selenium.

 

When I’m looking at a framework I’m looking at a whole collection of really good ideas and I’m going to give you example Joe. I’m going to give you a little test on your knowledge here. I know you’ve worked with QTP and UFT, let’s say you’ve just written a script that has just clicked a button. What would you say is the very next thing that you should probably do almost every time after you’ve clicked a button?

 

Joe: I would make sure I verify after that point that something occurred that I anticipated.

 

Paul: Verifying. We might also, in order to verify that we’re on the next page, be looking for an object. We also might want to at least sync on the page or the browser to make sure that it’s been built before we move on, or we’ll overrun our thing.

 

Joe: Yeah.

 

Paul: Here’s the idea: if we’re doing that, we have to remember every time, I’ve got to put a sync after every click, sync after every click. Why do that when you can write a function called Click? You pass it an object and instead it automatically goes, “Oh, I’m going to click on something and I’m going to sync on something.” That’s a small little good practice, at least in UFT, to follow. You take all these little good ideas, you put them together, and you start basically building a framework out of really good ideas of things that you need to do in order to interact, and not throw a big mass of runtime errors on your system. That’s one thing I try to avoid: I don't want my system to stop.

 

I want it to keep going, get to the next test and start running. Basically, encapsulating behavior into functions in a framework as we go along is the way to achieve modular design. Also make sure that changes don’t have to be made in multiple areas; of course the best example is a log in. If you’ve got 15 tests and they all log in, and something breaks with the log in or changes, you don’t want to change all 15 scripts. You want to change 1 segment that is being called by all those scripts so that they all get up and running as fast as possible.
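The Click wrapper Paul describes is UFT-specific, but the shape of the idea is tool-agnostic. Here is a sketch in Python with a hypothetical driver interface standing in for the real tool's API:

```python
import time

def click(driver, locator, timeout=10.0, poll=0.5):
    """Click an element, then sync: wait for the page to report ready
    before the script moves on. (Hypothetical driver interface, a
    stand-in for UFT's or Selenium's real API.)"""
    driver.click(locator)
    deadline = time.monotonic() + timeout
    while not driver.ready():
        if time.monotonic() > deadline:
            # Log and keep going: one missed sync shouldn't stop the run.
            print(f"WARN: page not ready after clicking {locator!r}")
            return False
        time.sleep(poll)
    return True

# Minimal fake driver so the sketch runs without a real browser.
class FakeDriver:
    def click(self, locator):
        pass
    def ready(self):
        return True

print(click(FakeDriver(), "btnLogin"))  # -> True
```

Every script calls this one `click`, so the click-then-sync discipline lives in exactly one place.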

 

Joe: Absolutely. I’m a big believer in what I call … it’s like a layered approach where you have 1 place that if anything changes you just go to that 1 place that handles it. You don’t want to change on multiple places or hard code things in all your tests so if anything changes, if you had 2000 tests you don’t have to touch 2000 tests. You just change that 1 place that you have it. I definitely agree with that approach.

 

All right, so after we have documentation in place and we have a modular design, a layered approach, what are your thoughts on no team member support and no management support?

 

Paul: Yeah, those were the worst things I had. Well, first of all, you knew I probably did have support from my manager, because she put me on that defunct project. I was able to actually turn that around, spoiler alert. What happened was that when we join as an automation engineer, occasionally we may have a problem with the manual testers. Because what’s the first thing a manual tester is going to think when an automation engineer shows up?

 

Joe: They’re going to lose their job for sure.

 

Paul: That’s right, we’re going to automate everything, because they already heard from their manager they’re going to automate 100% of the entire application, which is a completely different story. Here’s the thing, Joe: there are a couple of things that manual testers do better than automation. Do you have anything in mind that you could think of that manual testers do better than automation?

 

Joe: They use their brains, they actually check things, exploratory testing.

 

Paul: Excellent, exploratory testing is exactly what I was looking for; automation isn’t really all that great at it. There is a guy out there named Cem Kaner, if you ever look him up. He says he can do a lot of exploratory testing with automation, fantastic brilliant guy, but for the most part you’re right. Exploratory testing is done better by manual testers. Also, automation is better at doing stuff that is repetitive, and us humans, we’re bored by doing the same thing over and over and over again. Sometimes we get tired, sometimes we skip steps, and then we look and go, “Oh my gosh, I just found a defect.” How did you find it? I skipped this 1 step here, that’s what it was, I can go back. I didn’t actually find a defect.

 

All right. The important thing there was that what I started to do was create not only tests but I created utilities for my manual testers. They had what we would call the FNG tests. FNG stands for the Freaking New Guy because this was the 1 test that nobody wanted to do because basically you had to sit there for 30 minutes. You get to push this button over and over and over again while the rest of us go off to lunch, we’ll see you in 30 minutes new guy. Everybody hated to do that. I said, “That’s a fantastic test or even just a process to automate.” That’s the first thing I did, is I automated that. They said, “Great, you got about 30 …” we covered 30 minutes we can go off and do more smoke breaks … I’m sorry, smoke tests. There we go.

 

The thing about that is we started to actually realize a little bit of an ROI out of that. The way we can do that is, let’s say we go out to Google and you google a text like, “Average salary of a manual tester in USA.” It comes back around $66,000 a year. I’m going to round that down and say it’s about $30 an hour. I’m going to add in some additional stuff: the guys might be working overtime, they might have a 401K, they might get some other benefits, so let’s say it’s about $40 an hour. I just saved half an hour of these guys sitting there hitting the button a bunch of times, and they can go off and do something a little bit more effective. We got ourselves a $20 return on investment. Woo-hoo!
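Paul's back-of-the-envelope math works out as stated; spelled out (the figures are his round numbers, not a salary survey):

```python
# $66,000/year over ~2,080 working hours is about $32/hour; Paul rounds
# down to $30, then adds overtime and benefits to get a loaded rate.
annual_salary = 66_000
base_rate = annual_salary / 2080     # about $31.73/hour
loaded_rate = 40                     # loaded hourly rate, per Paul

hours_saved = 0.5                    # the 30-minute FNG test
roi_per_run = loaded_rate * hours_saved
print(f"${roi_per_run:.0f} saved per run")  # -> $20 saved per run
```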

 

Basically I was able to get team member support by automating the things that they thought was a major pain point and building more of those little tools to get them on my side. Then they started coming back and saying, “Can you do this? Can you do that? This will be great if we didn’t have to sit down and do this, all this extra stuff.” That’s how we get the team member support.

 

Joe: I love this concept of like you said working with someone to say, “What are you doing that’s taking up your time and can we automate it?” It may not necessarily be an end to end solution. For example I worked for an insurance company and we had people that would have to rate these huge policies. They were manually creating all these different policies for all these different vehicles, all these different farm equipment just so they can rate it for every state. The thinking part of it was actually to check the rating and it was very, very difficult to do so you really needed a human person to do it. All the other stuff was busy work.

 

What I did, the first thing I did, was I just automated that piece where it populated all that data for them. Then from there on that’s how I started to build more and more successes. It sounds like your approach is one way to get buy-in: work with your team members, find their pain points and automate them. It may not be a traditional end-to-end solution, but as long as it can help them do their job quicker and faster, that's a win.

 

Paul: Absolutely. By the way I’ve been there, done that as well Joe.

 

Joe: I guess the next point then is No Detected Defects. What do you mean by no detected defects?

 

Paul: These guys, like I said, they ran for about 2 to 4 years depending on who you talked to. When you ask people, “Well, what did they find? What was out there? What could they attribute to their project?” they went, “I have no idea what those guys did.” They actually had no defects, and not only that, they didn’t even have any demonstrations of the script running. Now, me, I am a crazy demo guy and I love to get up and show people, “Hey, check this out, check that out, watch me try and catch this object with just a string. You just gave me this string of okay and I found a web button and I clicked it.” That’s cool, that’s neat. I do these demos all the time. I’ve been at conferences, I’ve been out at HP Universe and HP Discover, and 1 of the things that I do is a live demo.

 

I love showing off stuff. I’m not the PowerPoint guy. I come out and say, “Let me show you how I did this, let me show you how I did that.” Let me just cover 3 little things here on how defect detection and demos actually work to turn this into a major success. The first thing I was doing was I said, “I’m going to create a new client.” That’s part of 1 of the first tasks, create a new client. Then we basically turned that into an endurance test. Create another client, another client, another client and let that run and run and run. It ran, our first test ran for about 15 minutes before it just basically blew up and died. What it turned out was that we’d used up all the memory.

 

We’re like, “Man 15 minutes it shouldn’t blow out all the memory.” What happened was that we found out that there was a memory leak and I go and talk to the developers and I said, “Check this out guys there’s a memory leak over here.” They looked at it and I’m going to ask you again Joe what do you think was the first thing that the developers thought about that memory leak that I had found?

 

Joe: That it works on their machine. I don’t know.

 

Paul: That’s a very good … I like that one so I might … The second and most common answer is, “Oh I know what caused that. It was the test automation tool that you installed on that system that’s what’s causing the problem.”

 

Joe: That’s another point.

 

Paul: Yeah so I sat down with them and I said, “Okay yeah you can look at this, our memory and you can see it’s adding up and the only problem is that this particular system I’m working with here doesn’t have the automation tool installed on it.” They went back and checked and came back and had a new build and we ran our little endurance test and it ran for 2 hours and then blew up and found another memory leak. We ran it again and it ran for 8 hours and blew up and we got another memory leak. Long story short over several months we got to the point to where it was running 24 hours a day continuously. Creating thousands and thousands of clients and it was solid.

 

It wasn’t eating up any more memory and we’re like, “This is great, fantastic.” Next thing we come across is that the developers come back to us and say, “Well that was a pretty good finding … thing you found there but we got a little bit more of a challenge. We have a defect that we believe we’re hearing about where the system doesn’t come to a ready state. We try and turn it on and it just sits there and just basically counts numbers up to 99 and just never gives us the interface.” They said, “Is it possible that you can automate the system so you could shut it down and bring it back up again and see if you can replicate that issue?”

 

I’m going to ask you Joe do you think based on the description I just gave you, do you think it’s possible we could use an automation tool to power off a system and bring it back up again?

 

Joe: Sure.

 

Paul: I like that answer.

 

Joe: Yes, absolutely, everything is possible.

 

Paul: Of course, it would have been a lousy answer if I couldn’t do it. I’m going to say the reason you said yes is because you remembered just moments ago I told you that story about my manager and the joke. The system that I was working with had that ability: if you powered it off, the UPS sent the [inaudible 00:19:22] command and powered it back up. I sat there and I'm like, “Okay, this is like 8 lines of code. I just have to increment a number, save it, shut down the system, and then take my automation script, my WinRunner script, stick it into the startup directory and just let it go.” It ran and cycled through, and keep in mind this is 10 minutes every cycle.
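Those 8-or-so lines boil down to: load a counter, increment it, save it, power the system down, and let the UPS plus the startup directory re-run the script on boot. A sketch of that loop, with the actual shutdown stubbed out (the counter file and the `shutdown` hook are illustrative, not Paul's original WinRunner code):

```python
import os
import tempfile

def reboot_cycle(counter_file, shutdown):
    """Increment a persistent reboot counter, then power the system down.
    The UPS powers it back up and this script re-runs from the startup
    directory, so the counter tracks how many cycles have completed."""
    count = 0
    if os.path.exists(counter_file):
        with open(counter_file) as f:
            count = int(f.read())
    count += 1
    with open(counter_file, "w") as f:
        f.write(str(count))
    shutdown()  # stubbed here; the real script powered off the box
    return count

# Simulate three boot cycles with a no-op shutdown.
path = os.path.join(tempfile.mkdtemp(), "reboots.txt")
for _ in range(3):
    n = reboot_cycle(path, shutdown=lambda: None)
print(n)  # -> 3
```

Because the counter survives each power cycle, the script can pinpoint patterns like "fails on every 63rd reboot."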

 

It got through the first time 47 cycles before it actually came up to a point where it wouldn’t come to a ready state. I’m like, “Great cool, I think I found it, let's replicate it.” I ran it again and about a day later it’s now done 63 reboots. I’m like, “Oh boy this could be a truly random problem.” We tried it 1 more time, get to the end of the day 63 reboots again well now we’ve got a pattern. I know it’s every 63rd reboot and I show it to the developers and I have them sit down in front of the system. I said, “Okay reboot this system 2 more times.” They reboot it and they’re like, “Okay Paul what are you doing here?” I said, “Yeah we’re going to reboot it and you watch and see if the system actually comes to a ready state.”

 

The second time we rebooted, it sat there and just wouldn’t go, and the reason was because I had already sat there manually and restarted the system. Of course they didn’t know that. The first thing they said to me when they saw this issue come up was, Joe, what did they say to me?

 

Joe: I think it’s that automation tool that did it.

 

Paul: Okay, again, this is not the system with the automation tool on it. I did that one across the hall; this is the one I manually did, and yes, it’s still a problem. Okay, great, fantastic, we found that. The third thing we came across was a problem where we were finding a data file that was being corrupted, and we identified it, we saw it. It was actually an image file that was corrupted. I’m looking at that, and I show it to the developers. Now, I’ve never seen a developer do a back flip in the lab before. He sat there and went, “Oh my gosh, we’ve heard about this one. We’ve never seen it, we’ve never seen it here. We can’t seem to replicate it, but people occasionally hear about it. How'd you do that?”

 

At this point we basically have 4 systems running and I said, “Listen, let's replicate this. I’ll take the 1 system that created this error, and I’m going to have our little offsite system we call the black hole, which basically took in data and just deleted it, check and see: is this a bad image? Send me an email if it is.” We ran it for like a day and a half and nothing happened on emails. I got managers telling me, “Can you turn on those other 3 systems? Because we really need to get our testing done.” Great, I turn on 4 systems, let it run, and within 15 minutes I get an email that says, “Hey, we got a bad image.” Okay, bad image, fine, I go look at it, and it’s a completely different system that generated this.

 

I go on back and show it to the developers and Joe I got to tell you something, in the back of the room I heard this little mumbling. It’s a quiet mumbling, can you guess what the mumbling might have said? I think it’s that automation and then he shut up. The lead developer comes up to me and I'm going to tell you this guy struck me very much like Jim Carrey. It’s the only way I can explain it. He looks and goes, he says, “Don’t worry about him he’s the new guy. We have pretty much faith that if you found the defect it’s on our side we’ll take care of it.” They did, they went and found these things. All 3 of these systems … Oh I need to ask you this, 1 other question Joe.

 

Now this issue what it turned out to be was that we were throwing a lot of data down the network and we were basically corrupting 1 of the files that went down the network on this huge pipe. Do you know what kind of testing that is when we are throwing a lot of data down a network?

 

Joe: Performance.

 

Paul: Yes performance testing, but wait a second what tool do I work with? I work with UFT and Winrunner what kind of tools are those?

 

Joe: Functional testing tools.

 

Paul: That’s functional testing, so I’ve accidentally done performance testing with a functional test tool. Here’s the thing … The other question I’ve got for you, Joe, is what do you think of a system … Let’s take that second error, the reboot one. Do you think a system that doesn’t come to a ready state every 63rd time is a high-level issue, or is it a low-level one, would you say?

 

Joe: Like I said it depends how critical is that piece of functionality? Would you lose a million dollars if it goes down?

 

Paul: Yeah, that’s a possibility. I like that answer. I’m wondering if you have seen some of the stuff I’ve posted already. Yeah, so in this case they said no, that’s low level. I call it the Pony level of defects: basically, you’re going to get a pony for Christmas before that defect gets fixed. That lasted for about a week, until we found out that 1 of the clients was training the owner on their brand-new equipment. This was equipment that got installed in the building and cost about $100,000 to get in there. They were training the owner on the new system and guess what? They trained him how to turn the system on, and on that 63rd reboot, he was not happy.

 

He’s got this whole new $100,000 thing, he turns it on, the light doesn’t go on; what is going on here? They just about lost the sale, but they were able to recover; they came back and said, “Okay, yes, we’ve solved this.” Immediately that defect went from pony level all the way to showstopper. Here’s the point: we got 7 showstoppers out of that entire project. We sat there and said, “You know what, I’m going to estimate that if these things hadn’t been found by test automation you would have lost $100,000 in sales for each one. I’m going to add up $700,000 for those 7 showstoppers; the other ones we found I’m not even going to count, it's fine.”

 

The second thing we talked about was where we were basically counting our metrics, and the metric we were counting was: how many hours was our endurance test running? We had basically logged every hour in an Excel sheet somewhere, and it turned out that at the end of 3 years with 4 systems running we had logged about 35,000 hours of runtime. We basically multiplied; we’re going to go back a moment and say, okay, let’s take that number, about $40 an hour, for a manual tester to sit down and do the exact same thing and get very bored. You multiply $35,000 … I’m sorry, 35,000 hours by $40 an hour and you get about $1.4 million. Plus you add in the $700,000 of defects that we’d found, and that basically showed an estimated return on investment on our project of about $2.1 million.
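The totals in that story add up as stated:

```python
# Paul's estimated ROI: endurance-test hours valued at the loaded
# manual-tester rate, plus the estimated value of the showstoppers.
hours_logged = 35_000     # runtime over 3 years on 4 systems
manual_rate = 40          # loaded hourly rate from earlier
showstoppers = 7
est_lost_sale = 100_000   # estimated lost sale per showstopper

labor_savings = hours_logged * manual_rate    # $1,400,000
defect_value = showstoppers * est_lost_sale   # $700,000
print(f"${labor_savings + defect_value:,}")   # -> $2,100,000
```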

 

That’s what I showed to our department head, who, before he’d seen that, had clearly told me he wanted to replace me with somebody else. After he saw it he said, “Can I have those PowerPoint slides so that I can show my managers what you did?” I’m like, “Great.” If I had been smart I would have said, “How about a bonus, like 10% of that $2.1 million,” and it wouldn’t have [inaudible 00:26:28]. Anyway, that’s where we got; we got our metrics, and once we got the metrics we figured out our return on investment and we basically got management on our side. After that I stayed another year working on that, and I’ve been basically following those 5 rules ever since.

 

Joe: Like you said, that was the fifth bullet: know your metrics, know your ROI. I guess the problem when you’re doing automation or testing is that a lot of times people don’t see the value. In development they see something, they see output, but with testing they don’t really see value because you’re not visibly creating anything. What you’re saying is that if you can quantify what you’re doing and actually put a dollar amount on it, that is one key metric you can use to justify your work and show management what you’re saving the company in the long run.

 

Paul: Absolutely. Sometimes I think of this as the George Bailey approach: you have to imagine, well, if we didn’t have automation, what would things be like? Then of course there’s the other question that comes up when we talk about money: are our test automation tools (I’m going to exclude Selenium from this) expensive? The first thing we think is, “Yeah, that’s a lot of money for those tools.” Let’s say they’re about $10,000 a pop for a license, and that can sound really expensive. Then again, think about a manual tester: for $10,000, is anyone going to show up for a whole year and do manual testing? Probably not. That’s below poverty level.

 

That’s another way to think about it. Then consider the second year of having an automation tool: usually you’re just paying the maintenance fee, which is 10%. Now it’s what, $11,000 total that we’ve paid over 2 years, which works out to the equivalent of a manual tester’s salary of about $5,500 a year. No way am I going to show up for that, am I?
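His licensing comparison works out like this, using his round numbers (a $10,000 license plus a 10% annual maintenance fee):

```python
# Two-year cost of a commercial automation tool, per Paul's round numbers.
license_cost = 10_000
maintenance = 0.10 * license_cost  # 10% fee in year two: $1,000
two_year_total = license_cost + maintenance  # $11,000
per_year_equivalent = two_year_total / 2  # $5,500 a year, far below a tester's salary
print(per_year_equivalent)  # 5500.0
```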

 

Joe: That reminds me of something. I don’t know if you remember, but you sent me an email and said something I thought was really funny: you called Selenium the WordPerfect of automation tools. I just wanted to get that in here. What do you mean by Selenium being the WordPerfect of automation tools?

 

Paul: I suspect I was having a bad day and was trying to rant somewhere. One of the other things happening around that time was that I was taking courses on Selenium. I’ll tell you, Alan Richardson has a wonderful Selenium course on Udemy, and I happened to be taking it. In one of the first few lessons he’s showing, “Okay, now when you’re using Selenium you’re going to do an assert, and if the assert passes, that’s great. If the assert fails, Selenium just stops dead.” I’m like, “Wait a second, what? It stops? No, my frameworks don’t stop, they keep going. If the object doesn’t exist, I just write another message: the object doesn’t exist, let’s get on to the next test.”

 

I actually expressed those views on the site, and Alan responded right back to me. He said, “We’re teaching you the building blocks. The right way to do this is to put in a try-catch error handling block. You’re going to catch the problems and continue, make a framework that actually continues on and doesn’t stop.” I realized he was right, and I’m like, “Oh yeah, okay, all right.” He was teaching the building blocks, but I was an automation engineer in a completely different tool, asking questions that jumped ahead of the entire building. I was basically looking at Selenium and just seeing some old-school stuff going on.
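The catch-and-continue pattern Alan describes, where a missing object is logged as a failure instead of halting the whole run, can be sketched generically. This is an illustrative Python sketch of the idea, not code from the course, and the exception and helper names are made up for the example:

```python
# Sketch of "catch and continue": a failed object lookup records a failure
# for that test and moves on, rather than stopping the entire run.

class ObjectNotFound(Exception):
    """Raised when a UI object cannot be located."""

def run_tests(tests):
    """Run every (name, test) pair; a failed lookup is logged, not fatal."""
    results = {}
    for name, test in tests:
        try:
            test()
            results[name] = "pass"
        except ObjectNotFound as err:
            results[name] = f"fail: {err}"  # report and keep going
    return results

def login_test():
    raise ObjectNotFound("Login button does not exist")

def search_test():
    pass  # pretend this one interacts with the app successfully

results = run_tests([("login", login_test), ("search", search_test)])
print(results)  # the search test still ran despite the login failure
```

In a real Selenium framework the same shape applies, with the lookup error being Selenium’s element-not-found exception instead of the stand-in class above.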

 

It felt like the old XY-coordinate coding, only with XPath, and the way I perceived it, Selenium stopped at your very first error. I’ve changed my mind as I’ve come along, but it’ll still take me a little while to warm up to Selenium; I’m going to get there. Joe, one of the things I really do like is LeanFT, which I’ve been looking at. I know you’ve been talking about it with some of your other guests on the show. One thing I’m excited about is that it gives people like me, who’ve been working in descriptive programming, the opportunity to use that approach to create our objects. I also know a lot about regular expressions, and I’m hoping I can squeeze some of that in there too.

 

Move away from XPath and let LeanFT be a stepping stone between UFT and full-blown Selenium. I see that as a really great opportunity from HP, and I can’t wait to start working on a project with LeanFT.

 

Joe: All right, Paul, are there any books or resources you would recommend to testers to help them learn more about testing or test automation?

 

Paul: Absolutely. There’s one great guy out there named Tarun Lalwani who wrote 2 fantastic books on automation with QTP and UFT. For Selenium, I can highly recommend Alan Richardson, the EvilTester; he has a course on Udemy and actually responds to questions you post on it. There’s also a company in New York, if you happen to be in the area, called RTTS. It’s run by Ron Axelrod, with Selenium taught by Nelson Moteiro, and if you aren’t in New York they also run the courses live online. They’re a great resource, and not only because I took their course last year.

 

10 years ago I also took courses from RTTS, back when I had to switch from WinRunner and learn QTP. Great guys out there. One last thing I can tell you about: there’s a company that teaches testing in general, all sorts of aspects of it, called the International Institute of Software Testing, run by Dr. Magdy Hanna. The courses there are prerecorded, but they do occasionally have classes taught live in cities across the country. The schedule is over at testinginstitute.com, and this is a little self-serving because I’m actually one of the instructors.

 

A lot of great people out there. They’re all experienced in their particular areas, and if students happen to show up in Vegas, San Diego, or Chicago, I’m going to show up too.

 

Joe: Awesome, and a shameless plug: I had Tarun Lalwani, one of the people you mentioned, on TestTalks episode 10.

 

Paul: I do remember that.

 

Joe: And Alan Richardson was on episode 4. Okay, Paul, before we go, is there one piece of actionable advice you can give someone to improve their test automation efforts? And let us know the best way to find or contact you.

 

Paul: Sure, I can do that. I’m actually going to give you a list of 10 things, and when I say 10 things, I mean the mentors I’ve followed over the last 15 years. Here are the 10 people. First of all, there’s this guy called Joe Colantonio; he’s a great guy. He has a fantastic website with very insightful stuff, and his TestTalks podcasts are just amazing. I love listening to him.

 

Joe: Thank you.

 

Paul: Linda Hayes: she wrote the Automated Testing Handbook, which has been out of print for several years. I looked it up on Amazon because I had to replace a copy that got destroyed recently, and it looks like it will set me back 80 to 300 bucks to get it. Still fantastic information. She’s the first one who taught me that you need to validate that an object exists and is enabled before you go and interact with it, in order to get rid of the error messages that pop up. A guy who put that particular theory to the test was Jimmy Mitchell; he provided each team a [inaudible 00:34:30] library as part of an automation tutorial way back in San Jose at the Software Test Automation Conference and Expo in 2002.
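The rule Paul credits to Linda Hayes, checking that an object exists and is enabled before interacting with it, can be sketched as a small guard helper. This is a generic illustration of the idea with made-up class and function names, not code from her book:

```python
# Guard pattern: verify an object exists and is enabled before acting on it,
# so a bad state produces a clear test result instead of a raw error popup.

class Element:
    """Minimal stand-in for a UI object with the two states that matter."""
    def __init__(self, exists, enabled):
        self.exists = exists
        self.enabled = enabled
        self.clicked = False

    def click(self):
        self.clicked = True

def safe_click(element):
    """Interact only if the object exists and is enabled; report otherwise."""
    if element is None or not element.exists:
        return "fail: object does not exist"
    if not element.enabled:
        return "fail: object is disabled"
    element.click()
    return "pass"

print(safe_click(Element(exists=True, enabled=True)))   # pass
print(safe_click(Element(exists=True, enabled=False)))  # fail: object is disabled
```

In a real tool the two checks map onto whatever existence and enabled-state properties the automation framework exposes for its objects.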

 

A fantastic guy; if you can find him, look him up. Adam Gensler created the SAFFRON framework, which stands for Simple Automation Framework For Remarkably Obvious Notes. It’s not a full framework; it’s actually less than 100 lines of code, but it’s brilliant and elegantly written. It’s hidden somewhere on HP’s site, and you can google around and find copies out there. Wonderful insights on how to do stuff. Another guy is Wilson Mar at wilsonmar.com. He talks a lot about automation, tools, and insights, and if there’s anything else in your life you need advice on, he’s got pages on that as well.

 

I already mentioned Tarun Lalwani; he also has his website, knowledgeinbox.com, a great place to ask questions and get insightful, excellent answers. I keep pestering him every time I talk to him on Facebook, saying, “You’ve got to write that Selenium book.” Come on, man, write it. Another guy is Dani Vainstein; he wrote what they call fresher courses on advanceqt.com. That’s where I was hanging out years ago, and it’s still up and around. People don’t visit it all that much because we all moved to Facebook, but it still has courses and great information out there. I’m actually working with him right now on a project, so I’m really happy to have one of my mentors on board.

 

2 more guys: Marcus Merrill and Will Rowden. They created softwareinquisition.com. Now, there’s a problem, because that website doesn’t exist anymore, but if you go to the Wayback Machine, also known as the Internet Archive (archive.org), and look up softwareinquisition.com, you will find all their pages and a lot of really interesting secrets about test automation that I never knew about. Brian Herington has a code library out there. I’m going to keep this one a secret, but if you figure out where it is, you’re going to find that automation with VBScript is a piece of cake. Boyd Patterson at pattersonconsultant.com creates Test Design Studio, an IDE that actually has static code analysis for VBScript.

 

Nobody else has that; I’ve googled it, and it’s not there. The last person I want to mention is Lee Barnes. He’s a conference speaker, and if you look on LinkedIn you can find his 2-minute rants; he’s got 5 of them. He talks about automation, what the Utopian view of things should be, the reality that things aren’t quite what they should be, and finding the middle ground between them. That’s my list, so now I’ll tell you where you can find me. First of all, I’m too busy to have a blog of my own or even write a book, but you can reach me by email at, imagine that, qtpmgrossman@gmail.com. You can also find me on Facebook in, as you mentioned, the Advanced Test Automation group.

 

There are about 5,000 people in there. If you have a question on any particular tool, throw it out there and someone is going to have an answer for you. I’m also on LinkedIn, and as I said, I’ve got courses at testinginstitute.com.

 

 

  1. Thanks Paul and Joe for sharing such useful information. Hope to meet you in person soon.

  2. It was a pleasure speaking with you as well, Joe. Let’s do it again sometime soon.

    Paul
