
An Automation Success Story [PODCAST]

By Test Guild

Welcome to Episode 76 of TestTalks. In this episode, we discuss how to succeed with implementing an enterprise automation solution with Neil Suryanarayana. Discover what steps Neil took to implement a successful, large-scale automation effort for his company.


Automation is hard. Sometimes you don't even know where to start. It gets even more difficult when you have multiple teams that need to get involved with your testing. There are so many things that can go wrong. Don't lose control of your enterprise testing efforts. No worries! Get ready to learn some actionable best practices that you can use in your own automation efforts to help lead your team to success.

Listen to the Audio

In this episode, you'll discover:

  • Why management buy-in to automation is so critical to success
  • Why not every automation project is a good fit for Selenium
  • How to evaluate which test tools your team should invest in
  • Tips to improve your enterprise-wide automation efforts
  • Why collaboration is key to testing and automation

[tweet_box design=”box_2″]A #testing framework is really like a rule book. We did not want every team defining their own standards and guidelines.[/tweet_box]

Join the Conversation

My favorite part of doing these podcasts is participating in the conversations they provoke. Each week, I pull out one question that I'd like to get your thoughts on.

This week, it's this:

Question: What tools (other than Selenium) do you use to help with your test automation efforts? Share your answer in the comments below.

Want to Test Talk?

If you have a question, comment, thought or concern, you can do so by clicking here. I'd love to hear from you.

How to Get Promoted on the Show and Increase your Karma

Subscribe to the show in iTunes and give us a rating and review. Make sure you put your real name and website in the text of the review itself. We will definitely mention you on this show.

We are also on Stitcher.com so if you prefer Stitcher, please subscribe there.

Read the Full Transcript

Joe:         Hey Neil. Welcome to Test Talks.

Neil:        Thanks, Joe. Thanks for letting me on.

Joe:         Awesome. It's great to have you on the show, but before we get started, could you just tell us a little bit more about yourself?

Neil:        Sure. My name is Neil. I've been in the software testing space for about a decade now. I am very passionate about driving agility using automated testing. My experience crosses multiple domains: I have provided testing and test automation services in networking and virtualization, healthcare, e-learning, and insurance.

I am currently a full-time test manager working with Physicians Mutual, based in Omaha, Nebraska. I am also a leader of the Vivit Nebraska chapter, which is an HP user group, so I get to drive some learning in town.

Joe:         Awesome. So Neil, I really wanted to have you on the show because I'm always curious to know how other automation engineers are implementing test automation at their companies, and it sounds like you've done a lot of cool things at Physicians Mutual. Can you tell us a little bit more about your current testing efforts and what you've done there?

Neil:        Absolutely, Joe. I could go on for hours talking about automation and our implementation. That's something I take a lot of pride in. First of all, we started off with a clean slate here at Physicians Mutual, with a comprehensive tool evaluation about three years ago. We considered the best of both open source and vendor-based tools, and asked what licensing model best fits our talent landscape.

The testing group's mission was really to enable quality and Agile alignment. I must add, our VP for enterprise architecture development is a strong proponent of engineering for testability. I could not ask for a better foundation as we were getting started with our test automation adoption.

Just to give you a brief overview of where we are in terms of our maturity: we started by engineering a robust framework from the ground up using HP UFT and Quality Center, so we got an opportunity to apply all our learning from other implementations. I must add, HP's UFT itself was a great marketing move, with the intent to provide a unified IDE for both GUI and program-interface testing.

So we basically wanted to inherit the same concept and apply it to our keyword or [inaudible 00:02:27] framework. We wanted to build a bridge between all the GUI functions and the API components. I might get technical here, but let me give you a sample use case, a typical insurance domain use case. Let's say you want to connect to a database, fetch a value, and then invoke a web service using a token based on the value that you fetched. Then navigate through a set of web pages, validate something on a page, and then probably update a database. As you see, it's all intertwined, a spiderweb, all in a single use case. This is what we've accomplished using this framework, and this is really central to our testing organization.
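
To make Neil's use case concrete: his framework is built on UFT and VBScript, but here is a rough sketch in Java (the language his team later used for its OTA work) of what that kind of keyword-driven chaining looks like. Every class, keyword, and method name below is invented for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of keyword-driven chaining: each keyword maps to a
// reusable driver function, and one test case strings them together across
// the database, web-service, and GUI layers via a shared context.
public class KeywordDriver {

    private final Map<String, Function<Map<String, String>, String>> keywords = new LinkedHashMap<>();

    public KeywordDriver() {
        keywords.put("DB_FETCH", ctx -> {
            ctx.put("token", "TOKEN-FROM-DB");      // stub for a JDBC fetch
            return "fetched token from database";
        });
        keywords.put("WS_INVOKE", ctx -> {
            ctx.put("quote", "QUOTE-FROM-SERVICE"); // stub for a web-service call
            return "invoked service with " + ctx.get("token");
        });
        keywords.put("UI_VALIDATE", ctx ->
            "validated quote " + ctx.get("quote") + " on the web page"); // stub for GUI steps
        keywords.put("DB_UPDATE", ctx ->
            "updated database with final status");  // stub for a JDBC update
    }

    public void run(String... steps) {
        Map<String, String> context = new LinkedHashMap<>(); // shared across layers
        for (String step : steps) {
            System.out.println(step + ": " + keywords.get(step).apply(context));
        }
    }

    public static void main(String[] args) {
        // The insurance use case from the interview, as one intertwined flow:
        new KeywordDriver().run("DB_FETCH", "WS_INVOKE", "UI_VALIDATE", "DB_UPDATE");
    }
}
```

The point of the sketch is the shared context: the web-service keyword consumes the token the database keyword fetched, which is exactly the "spiderweb" Neil describes living inside a single use case.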

Joe:         So there are a few things I really want to pull out of what you mentioned. One, which I really like, is that your manager, the VP, is a big proponent of testability. And-

Neil:        Absolutely.

Joe:         -I think that's key, like you said. If you don't have that, it doesn't matter what tools you have in place or how many tests you have automated; if the developers aren't thinking about testability, I don't think you can be successful. Have you seen the same thing on your end?

Neil:        Absolutely, I certainly have. I'll second that. One of the core Agile tenets from the Agile Manifesto is individuals and interactions over processes and tools. So this is more a cultural adoption than a tooling change. No matter how many tools you change or how many tools you evaluate, I don't think you will get there unless you have that cultural change and the management buy-in.

So I must say, that has been a great win for us in terms of driving testability itself, wherein we get some of our developers to write some test code. We have them engineer their production code in a way that is conducive to testing, all in an effort to shift testing left.
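
As a loose illustration of what "engineering production code conducive to testing" can mean, here is a minimal dependency-injection seam in Java. The insurance-flavored names are invented; the pattern, not the code, is the point.

```java
// Hypothetical example of a testability seam: the calculator depends on an
// interface rather than a concrete rate database, so a test can inject a
// stub and exercise the business logic without any live systems.
interface RateSource {
    double baseRate(String planCode);
}

class PremiumCalculator {
    private final RateSource rates; // injected, not hard-wired

    PremiumCalculator(RateSource rates) {
        this.rates = rates;
    }

    double monthlyPremium(String planCode, int age) {
        // toy formula, purely for illustration
        return rates.baseRate(planCode) * (1 + age / 100.0);
    }
}

public class TestabilityExample {
    public static void main(String[] args) {
        RateSource stub = planCode -> 50.0; // test double instead of a database
        double premium = new PremiumCalculator(stub).monthlyPremium("TERM-10", 40);
        System.out.println("premium = " + premium); // deterministic: 70.0
    }
}
```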

Joe:         Awesome. Another thing you brought up was your tool evaluation, and it's curious that you didn't pick an open source tool like Selenium. So I just want to know, what was your thinking behind that? I know you work for an insurance company, and they probably have a lot of different types of applications. What made you choose HP over, say, Selenium?

Neil:        Yeah, that's a great question. After a comprehensive tool study, what we realized was that although there are so many tools out in the market, whether vendor-based or open source, we wanted the tool that best suits our needs. Number one was our Agile project management tool; we had Rally in house. So we wanted a solution that would really integrate and form a bridge between Rally and our test automation harness, because Rally is really the repository for user stories, features, manual test cases, and all that. How do we really build a [inaudible 00:05:04] path from automation back to the user story and the feature? An out-of-the-box integration between HP Quality Center and Rally gave us that. In addition to that: how long would it take for us to get a framework like that in place? That was one factor that drove us to move toward HP. Another thing was building the skill and talent landscape.

More importantly, something I wanted to avoid was getting bogged down in managing and maintaining multiple frameworks. Selenium is a tool that supports only browser-based testing, and when you take an [inaudible 00:05:39] workflow, like the example I gave you, you're talking about databases, web services, and some native desktop applications. We didn't want to end up maintaining multiple frameworks and stacking them up one after the other. We wanted one unified solution that would give us the biggest bang for our buck, and that drove the move to HP.

Joe:         Awesome. I like how you went into that evaluation without bias. A lot of people try to use the latest tools, or what they think is the hippest tool. I went through the same thing. I saw a new project that was supposed to be all new HTML5 development, and we decided to go with Selenium, which is great with BDD. But over time we came to realize we had to integrate with older technologies. Lo and behold, we couldn't do it because Selenium doesn't handle them, and we had to do all these weird things, like using Sikuli, to try to work around it. I like how you said you have all these other applications and you want just one framework, and that's one of the main reasons you chose that particular tool set.

Neil:        That's true. Hey, LeanFT is probably a solution to all that; that's a recent move from HP.

Joe:         That was my next question. I've done an early evaluation of LeanFT; I've used it for a few things. I'm going to do a little proof of concept against our application and see how it works. Have you tried LeanFT? Do you think that's a direction your company would go? Are you pleased with what you have so far?

Neil:        Frankly, we are pleased with what we have so far, but that's not to say we would stop exploring. We do have it downloaded, still version 12.5, and it's something we are evaluating at this point, but it's certainly on the roadmap.

Joe:         So Neil, a lot of companies are different, a lot of teams are different. I know on my team, the teams I work with, they have a lot of sprint teams with a lot of developers and not so many testers. So that's another reason why we chose a tool like Selenium, because we can use the same programming language that the developers are using, like Java.

Neil:        Mm-hmm (affirmative).

Joe:         So I'm just curious to know. In your company, how are the groups set up? Do you have an automation group and then they're the ones that handle the automation? Do you have people on sprint teams? How does that work?

Neil:        Yeah, so the way our talent landscape is structured, all of our testing resources from the test practice are embedded within the project teams. We have about 12 project teams, so testers are part of the daily Agile ceremonies. They're responsible for the automation within the iteration and also for the regression automation as we go along. We do not have a shared services team or a team that provides independent verification and validation. As for me, as a practice leader and test manager, part of my responsibility is driving the designs and giving direction in terms of test automation and technical insights for the teams.

Joe:         That's a great point. How do you handle all these different teams with automation? Because I know one team might do automation one way, and your team may do something another way. How are you keeping things consistent? How are you driving that effort across the teams?

Neil:        That's a great question. We addressed this right from the get-go, when we started with the framework design and deployed it for the enterprise. The framework is really a rule book. What we said was, “Every team using automation really needs to join the buzz and join the party, and pretty much adopt this framework.” We did not want each of the project teams defining their own standards and their own guidelines. We started out by defining a set of operating procedures for everybody to use. The framework itself offers that repeatability aspect for the teams, so that they don't have to re-engineer the same code again and again. We have a set of common library functions and support library functions. This really has been helping us keep everything centralized.

Joe:         Cool. So it sounds like your framework is written in such a way that it's readable and broken into logical sections, so that when a new team comes on board, the way you have it laid out makes sense: what's what, what to use, and what's already there.

Neil:        Exactly. Reusability was key for us. The driver functions and the support and common functions that we've engineered are the core, the center for everybody using this framework. All the testers in the enterprise use the same libraries. We've also done performance tuning to ensure that doesn't give us any problems as we go along.

Joe:         Cool. Are you using any version control within ALM for UFT? That's one reason we didn't use UFT: we needed version control, and we couldn't turn it on in ALM for every project. Are you using any sort of version control for your efforts?

Neil:        We are using the built-in version control in Quality Center itself, but I might add we maintain only one active production project within Quality Center. I think that is how we got away without needing a full-blown version control tool. The reason we maintain a single production automation project is that we did not want to duplicate the common libraries and the support libraries for every project team. Again, that is redundancy in code, so we did not want to duplicate the driver scripts, the init scripts, and the core of the framework itself. Instead, we maintain one production project, define a folder structure within Test Plan and Test Resources, and bake that into the framework rules. That way we still get some flexibility in terms of versioning, and files get locked when one person is actively modifying them within Quality Center. That's how we operate at this point.

Joe:         Cool. You also mentioned you have integration with Rally.

Neil:        Mm-hmm (affirmative)

Joe:         Are there any reports or metrics you use often for automation, to see how your automation efforts are going, that you find helpful?

Neil:        Most of our reports are automated, and those are generated from Quality Center itself. That, again, is not part of this harness; that is the bridge between Rally and Quality Center. But we do have some API code written against Rally itself that reports off of Rally's test sets. That really was the key for us to build the bridge. Take an example:

We have a test set maintained in Rally that has a mix of both manual and automated tests. Once we have run the automation using Quality Center and UFT, those results are pushed over to Rally, so you have the unified results, all of them, tied to that single test set. And we have code we developed for a dashboard, a wiki that pulls the results from all these test sets per release and per iteration, and then we start reporting on all that. So we get to report our regression system testing reports and project system testing reports, and all of this is from the Rally API.
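
For readers curious about the shape of that bridge, here is a rough sketch of pushing a single automated verdict into Rally over its REST API (WSAPI). The TestCaseResult object and zsessionid header are part of Rally's documented API, but field details vary by version, so verify against your instance; the object ref, build label, and API key below are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: push one automated verdict into Rally as a TestCaseResult, so manual
// and automated outcomes live in the same test set. All values are placeholders.
public class RallyResultPusher {
    public static void main(String[] args) throws Exception {
        String body = """
            {"TestCaseResult": {
                "TestCase": "/testcase/123456789",
                "Verdict":  "Pass",
                "Build":    "nightly-build-label",
                "Date":     "2016-02-11T03:00:00.000Z",
                "Notes":    "Pushed from the Quality Center run"
            }}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://rally1.rallydev.com/slm/webservice/v2.0/testcaseresult/create"))
                .header("zsessionid", "_YOUR_RALLY_API_KEY") // placeholder API key
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // Rally echoes the created result
    }
}
```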

Joe:         So Neil, I also want to explore a little bit more the API testing piece. I actually wrote a book on UFT API, but I don't know many companies that actually use UFT API with the GUI piece. It sounds like you use both. I get this question all the time, how do I create a framework for my UFT API tests? Are you creating reusable methods or calls to web services that you reuse in your workflow? How does that work?

Neil:        Well, Joe, before I answer that, I got to thank you for your book and for your blog post. That was really helpful, and as a matter of fact we got some of the books for training some of our engineers on the team.

Joe:         Awesome, cool.

Neil:        That has been a manifesto for us. [inaudible 00:12:54] I think that was the only help available on the web.

Joe:         Yeah.

Neil:        Yeah, so we [inaudible 00:12:59] to that and, thanks to some of the skill we had on the team, we integrated our API module as part of the framework too. Like I told you, the database component, the UI component, and the API, all three are integrated within the single framework. You really have an end-to-end business flow with keywords. Your first keyword could be an API call, your second keyword could be a bunch of UI operations, and your third keyword could be a database push or fetch.

Joe:         Do you have any tips or tricks to fight off tests being unreliable or flaky? Any tips on how to make your tests more reliable?

Neil:        The tip or trick to really make our UFT scripts robust and ensure they are not brittle is how you engineer your library of common functions. Some of the common functions you can think of: set value, get value, file manipulation operations. If you leave that code at the perusal of each of the teams and each of the engineers, they tend to write those functions in their own way. The approach we took was to define the library, take it in as a part of the framework, and then just [inaudible 00:14:10] those user functions. So essentially, for the engineers using the framework, they are built-in functions. That minimizes the chances of failures as we go along. We don't have to constantly end up debugging, especially when you leave the scripts for an overnight run for continuous integration. You don't really have to deal with that stuff.
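
A minimal sketch of that idea, with invented names: wrap the flaky parts, the waits and the retries, once in a shared library, so every team's set-value behaves the same instead of each engineer hand-rolling their own.

```java
import java.util.function.Supplier;

// Hypothetical shared "common functions" library: synchronization and retry
// logic live in one place, so fixing a flaky wait fixes it for every team.
public class CommonLib {

    // Retry an action a few times with a short pause, instead of failing on
    // the first transient timing issue during an overnight CI run.
    public static <T> T withRetry(Supplier<T> action, int attempts, long waitMs) {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;
                try {
                    Thread.sleep(waitMs);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(ie);
                }
            }
        }
        throw last;
    }

    // The shared set-value keyword every team calls. FakeField stands in for
    // a real UI element handle so the sketch compiles on its own.
    public static void setValue(FakeField field, String value) {
        withRetry(() -> {
            field.waitUntilVisible();
            field.type(value);
            return null;
        }, 3, 500);
    }

    interface FakeField {
        void waitUntilVisible();
        void type(String text);
    }
}
```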

Joe:         You bring up continuous integration. Are you able to integrate your framework within a CI system? What CI system are you using?

Neil:        Oh, absolutely. Speaking of CI, we've used Quality Center's OTA to develop a CI framework from the ground up, to meet the modern automation demands of continuous testing and integration. That's a framework we've engineered ourselves, again: no open source tools or commercial tools, everything coded ground-up using QC OTA and Java. Build Forge is really our build and code deployment tool. Our intent was that after every build deployment we wanted to execute our automation on a specific cadence, on a nightly basis and on a weekly basis. So this is a bridge we've built between Build Forge and Quality Center, using QC's OTA. This framework has the intelligence to create test sets at run time in Quality Center, all using OTA, and also to execute them with [inaudible 00:15:35] multi-threading. Let's say you want to execute two different test sets that run concurrently overnight: that's also possible. We separate the reports as well.

For the e-mail reports that get generated, they are reported by the test set and by the run. In addition to that, we've also added some load balancing, because all of our execution is [inaudible 00:15:54] on a pool of VMs. We get the benefit of load balancing within Quality Center too, so we're not waiting on a specific VM; as soon as another VM is available, it's used for execution.
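
OTA is a COM API, so driving it from Java means going through a COM bridge such as JACOB. The sketch below shows the general shape of what Neil describes: connecting to Quality Center and creating a test set at run time. The connection calls (InitConnectionEx, Login, Connect) are standard OTA, but the test-set factory details vary by QC/ALM version, so treat this as an outline rather than working code; the server URL, credentials, and project names are placeholders.

```java
import com.jacob.activeX.ActiveXComponent;
import com.jacob.com.Dispatch;

// Outline of a Java-to-OTA bridge via the JACOB COM library: connect to
// Quality Center and create a test set at run time, the step a CI trigger
// (e.g., a Build Forge deployment) would kick off. Placeholders throughout.
public class QcOtaRunner {
    public static void main(String[] args) {
        ActiveXComponent conn = new ActiveXComponent("TDApiOle80.TDConnection");
        try {
            Dispatch.call(conn, "InitConnectionEx", "http://qcserver:8080/qcbin");
            Dispatch.call(conn, "Login", "ci_user", "secret");       // placeholders
            Dispatch.call(conn, "Connect", "DEFAULT", "PRODUCTION"); // domain, project

            // Create a run-time test set; exact factory usage varies by version.
            Dispatch factory = Dispatch.get(conn, "TestSetFactory").toDispatch();
            Dispatch testSet = Dispatch.call(factory, "AddItem", (Object) null).toDispatch();
            Dispatch.put(testSet, "Name", "Nightly-Regression-" + System.currentTimeMillis());
            Dispatch.call(testSet, "Post");
            // ...assign tests, pick an available VM from the pool, and execute...
        } finally {
            Dispatch.call(conn, "Disconnect");
            Dispatch.call(conn, "ReleaseConnection");
        }
    }
}
```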

Joe:         Awesome. For the people who don't know, OTA is an API you can use to do things within ALM that don't happen out of the box. It sounds like you created some custom Java code that interacts with that API, so you can then do things within ALM behind the scenes, programmatically. But I've found in my experience that it's not very well documented and it's kind of hard to understand. Any tips around that? Or do you have any sites that you use as a resource for OTA?

Neil:        Sure. I would say joecolantonio.com.

I should admit, that was the only source of information for us on OTA. We built on top of it, so it was really a great start for us to explore and then dig further into the API documentation from HP. We built it brick by brick.

Joe:         I don't know if you know this, but I used to work at an insurance company called One Beacon Insurance for almost six years. I'll be honest, it was one of the most [inaudible 00:17:09] companies or places to work. I don't know why; the [inaudible 00:17:12] insurance company was mid-size, but I was able to do everything. It's different. I work for a large company now and it's great, but it's very narrow. At the insurance company I was able to work on all kinds of projects. A lot of people don't realize that a lot of times insurance is cutting edge. Is there anything on your roadmap, new technologies or improvements to your framework, that you plan on working on in the coming year?

Neil:        Yeah, certainly. With the rate at which technology is evolving, even faster than the rate at which customers' needs are evolving, I think we need to stay ahead of the competition. So that's something we constantly monitor: are we on par in terms of our technology adoption, are we getting the biggest bang for our buck, are we getting the value on investment in addition to the return on investment? Along those lines, from the test practice itself, like I said, LeanFT is something we are interested in exploring, to see what capabilities it can offer at an enterprise level. In addition to that, like I said, I'm a leader for the Vivit Nebraska chapter, so there's a constant synergy going on there too, in terms of discussing mobility, the latest trends in performance, and how we really integrate all these solutions together. When you talk about so many tools and so many frameworks, you're not only using a lot of data, you're generating a lot of data. How is all of this managed, and managed in a centralized way that is efficient for our business partners? We always keep an eye out for that.

Joe:         Great. What are the benefits you've seen out of your automation framework so far? Has it found real issues in production, or before something gets to production? Can you give us any examples of how it actually caught something before it was released to a customer?

Neil:        This is something I am constantly challenged with, in terms of generating metrics on automation adoption and how we are evolving over time. I would say the biggest benefit we've seen over the years is the safety net and increased [inaudible 00:19:14]. We couldn't have dreamed of getting this amount of coverage and automated execution without the framework or the adoption of this technology. We feel that over a three-year span, automated testing within the company has democratized to a point where the project teams have graduated from needing and choosing test automation to loving test automation. We have seen teams adopt this technology not just for business work requests with formal test cases in Rally and such, but also for any activity that can sensibly be automated to save some time. I think this has been a cultural change in a positive way.

Joe:         Awesome. I just want to touch one more time on the testability piece, because I think that's such a key thing for automation. Is there anything you did specifically, or told developers specifically, about what you needed in order to be able to automate their code, that they follow now? Or any tips you can give for other people who are working with developers to say, “Hey look, we need to make it testable and here's why. Here's the benefit to you”?

Neil:        I feel it's about open dialogue and tremendous collaboration between the development teams, the test practice and the testers, and now the dev testers, if you really want to call that a persona. This is something that needs to be driven from management itself. One sample project we had, if I can take an example, is called the application entry framework.

The development teams and the development architecture itself were conducive for us to build a framework so that we don't really wait to automate. You don't [inaudible 00:20:58] wait for the UI to be completely developed, and then start automating toward the end of the iteration, or probably toward the end of your release cadence.

Due to that piece of development engineering, we were able to automate as we went, alongside development, and that gave us the benefit of time to market. We were able to release a few products to production completely tested using automation scripts. I think the key here is collaboration with the dev teams, keeping the dialogue open in terms of technical design reviews, and engineering things in such a way that you think about testability as you start thinking about the design. Testing is not an afterthought anymore; you've got to think about it as you start designing the [inaudible 00:21:43], as you start thinking about the architecture, in order to shift testing left.

Joe:         Great point. I definitely agree with you. It may seem like common sense, but collaboration is really key, and a lot of teams have this dysfunction. It's good to know there are companies out there actually implementing automation and having these conversations, and it sounds like it's working. That's awesome.

Neil:        Exactly. As the Manifesto says, it's customer collaboration over contract negotiation. I think it's collaboration at any level and at any scale. It's not just with the product owners; it's with the developers, it's with this [inaudible 00:22:16]. Collaboration is the key there.

Joe:         Awesome. Is there one thing that you see over and over in your experience that you think most people trying to implement an automation framework are doing wrong?

Neil:        I would probably add, it starts off with defining your business outcomes.

Joe:         Mm-hmm (affirmative)

Neil:        I think more so than adopting a technology or a tool, it's about defining your business outcomes and your problems. You aren't going to solve all of them in a single day or in a single quarter; you've got to build a road map. You've got to build a road map and tackle them quarter by quarter, year by year, and it's all about how we fundamentally enable and embrace a rich set of experiences using these technologies for the business units, more efficiently, more agilely, all in a simpler way.

Joe:         Excellent. Are there any books or resources you recommend for someone trying to get up to speed with testing or automation?

Neil:        Well, I recommend your book, certainly, the book on API testing; that has been a manifesto for us as well. In addition to that, I have found some online resources and blogs really helpful. One I'd offer is KnowledgeInbox and the books by Tarun Lalwani. I found those to be really good in terms of giving practical, real-world examples on automation and coding best practices.

Joe:         Awesome. Great resources. We had Tarun in episode 10. He's a great, great guy. I love his books. He has great resources.

Neil:        I recall I've been reading through all the posts in the SQA forums for over seven years now.

Joe:         Right, yup. He's the leader of the group in SQA, so he's really helpful there too.

Neil:        That's right.

Joe:         Okay Neil, before we go, is there one piece of actionable advice you can give someone to improve their automation testing efforts? And let us know the best way to find or contact you.

Neil:        The best piece of advice I can give is that automation is not a silver bullet, so be cautious and cognizant of what you want to automate, and focus on value on investment rather than just return on investment. The best way to contact me is through my LinkedIn profile. I'm always on social media, and I keep checking my inbox.

 

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}

SafeTest: Next Generation Testing Framework from Netflix

Posted on 03/26/2024

When you think of Netflix the last thing you probably think about is ...

Top Free Automation Tools for Testing Desktop Applications (2024)

Posted on 03/24/2024

While many testers only focus on browser automation there is still a need ...

Bridging the Gap Between Manual and Automated Testing

Posted on 03/13/2024

There are some testers who say there’s no such thing as manual and ...