About This Episode:
Automated pipelines are critical for testers to understand. In this episode, frequent guest Greg Paskal, author of Test Automation in the Real World, shares an example of how his team has achieved automated pipelines using Jenkins and Go. Discover the benefits of this approach, lessons learned, and how you can apply his tips to your own pipeline implementations. Don't miss it!
Exclusive Sponsor
The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!
About Greg Paskal
Greg Paskal is a natural innovator, pioneering new approaches across the Quality Assurance and Test Automation landscapes. Greg enjoys mentoring others to excel as craftsmen of manual and automated testing. Author of Test Automation in the Real World and countless technical publications, Greg can be heard on the TestTalks podcast with Joe Colantonio. Creator of METS, the Minimal Essential Testing Strategy, Greg's approach is recognized and taught by the ASTQB as an effective manual testing strategy.
Greg has spoken at numerous conferences, including Automation Guild, StarEast, StarWest, and QA Trailblazers. He founded the Open Test Technology Forum, encouraging collaboration and a focus on greater quality across the SDLC. Learn more about his work at METSTesting.com, RealWorldTestAutomation.com, and RecognizeAnother.com.
Connect with Greg Paskal
- Company: www.testingintherealworld.com/
- LinkedIn: /gregpaskal/
- Twitter: GregPaskal
- YouTube: channel/UCyi0iE311gzTNEuN9y–RpA
- Github: gregpaskal
Full Transcript: Greg Paskal
Joe [00:01:50] Hey, Greg! Welcome back to the Guild.
Greg [00:01:54] Hey, Joe! It's so good to be back with you and the rest of the listeners. I love coming and visiting your show. And I get to share some new stuff today.
Joe [00:02:01] All right, great. Before we get into it, is there anything I missed in your bio that you want the Guild to know more about?
Greg [00:02:05] Joe, you covered it pretty well. It feels like there's always something new I'm learning, teaching on, or writing about. I've got a couple of new articles in the can right now that will probably come out in the next few months.
Joe [00:02:15] Very cool. So can you give us a little teaser, maybe, on what they're going to be about at a high level?
Greg [00:02:20] Yeah, I've got one that's going to be called Sailboats and Submarines: A Journey into Testing. It's about the idea that as we take a journey into testing things, we can approach it from the sailboat perspective, keeping our eye on the destination and cruising across the top of the water. Or we can approach it from the aspect of a submarine, where we go down into the depths of the application and really study, understand, verify, and validate while we're down there. So it's pretty exciting. And I've got a second one I'm working on called The Testing Grand Illusion. The premise of that article is that, to an untrained observer, testing that reduces risk and testing that's maybe fun but isn't reducing much risk can look exactly the same from a distance. It's about why that matters, and why we should consider those things as we're bringing on test engineers, as we're overseeing them, and as we're testing ourselves.
Joe [00:03:17] Absolutely. And I love the concept of a journey into testing. I believe both you and I have been on pretty much a similar path. We both started with vendor tools, got involved with open source, and had to get acclimated to a new type of testing approach. And then I think we both got pulled into DevOps and those types of activities. Which reminds me, you mentioned to me recently that you started doing things with Jenkins and Go, automating your pipeline. I think this is a trend people will see more of in 2021 and beyond, and something people need to get their heads around. So at a high level, could we just talk a little bit about what you meant by automating pipelines at your current position?
Greg [00:03:54] Yeah, well maybe I need to redefine that or reexplain that.
Joe [00:03:58] Yeah.
Greg [00:03:59] What we wanted to start to consider doing is, when our product went into the build process, we could go ahead and kick off our existing automation and get those results into the hands of the test engineer as soon as possible. Now, we had learned a really valuable lesson in the previous year about, let's just call it, automated monitoring. Joe, you and I know we have at times been sent off on what I call an automation fire drill. It usually starts like this: something bad happens in your company, and the next conversation is someone walking over to the automation team and saying, we need to automate this so it never happens again, right? Been there and done that. I'm sure most of the listeners have, too. So that provided some great foundational training that led to this next generation of what we're doing. I say that because if you think about automated monitoring, especially in production, now we're getting into an area a lot of us have been told not to touch: don't put test automation in production. I'm of the same background; I've heard that many times. But we needed to do an experiment with that. And what we found out is that when the mechanics of a process like that are so complex, it sounds great on a whiteboard, and it sounds great during that fire drill, but when you get to the reality of it, at some point the machine becomes so complex that it stops being reliable. Other than maybe a fire alarm going off often, it didn't bring a lot of wins, and we actually backed that process out. And we found there were some much better ways to do it. Joe, I was able to reach out to a lot of my professional contacts, and I found out that folks were using other tools to solve those things in a much more efficient way. All that to say, when we started to approach this phase two of how do we get automated results into the hands of test engineers early after new builds, what we didn't want to do was repeat that lesson. So we said, let's not put our automated testing in the critical path of a build going out. I think that's a good takeaway I want the listeners to hear. Instead, let's kick off the tests as soon as possible so the data from our automated results is available for analysis. Because that's really where the great win is, from my perspective: automation isn't just about reaching a destination, it's about what we learned along the journey of the tests we executed, and what we provided to the manual test engineer that gave them new insights so they can identify, "Hey, we regressed in this area," or "Sure, the test passed, but it took 30 seconds longer." That's telling us we might have a soft fail in the works. So I hope that kind of sets the mood for where we're headed.
Joe [00:06:25] Yeah. So is this activity separate from your developers? Like, when your developers check in code, would your automated tests run and, because they took so long to run, slow down the code being released to production?
Greg [00:06:37] No, that's a great point. And I can see a lot of folks have probably encountered that. I've had conversations with devs around that, trying to solve that problem. We actually use Go to solve some of that pipeline build stuff. We've got a great DevOps team that's doing an exceptional job with it. I'll tell you how Jenkins came into the picture, because it too is a build tool, as most of the listeners know. We initially had to solve another challenge we were encountering in our automated testing: we wanted to find a way to get certain tests to run early in the morning. We have different types of automation in-house, and we needed a certain type of that automation to run early so that when our engineers came in, they had some insights into the health of their application first thing in the morning. And as you can imagine, we needed a way to execute these tests on some sort of timed or scheduled approach. So for quite a few years, a couple at least, we were leveraging Windows Task Scheduler to do that in our VM environments. Talk about an awkward tool to work with. That is not something you want to play around in all day. You can get the job done, but it can get very complex. So we had been using a tool like that to kick off what we call our automated sanity check, or what some folks would call a smoke check, in the morning and (unintelligible) to another day. What we found was that working with Windows Task Scheduler was a very painful process, and we needed to come up with a better one. One of our exceptional partners here at Ramsey, a gentleman on our I.T. team who has been a great help to me, another Greg by the way, made the suggestion that we consider Jenkins. It has a UI, it's a web-based tool, it runs in a Java framework, and it has all kinds of scheduling capabilities outside of just doing the build process. So much like a lot of the work we've done in automation, where we've found tools that solve other kinds of problems, we were able to leverage a particular aspect of that tool, its scheduling capabilities, to begin to execute this morning sanity test. That's how Jenkins came into the picture for us.
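For listeners who haven't used Jenkins this way: the scheduling capability Greg describes maps onto Jenkins' cron-style "Build periodically" trigger. A minimal illustration of a weekday early-morning schedule (the time shown is hypothetical, not Greg's actual configuration):

```
# Jenkins "Build periodically" schedule: MINUTE HOUR DOM MONTH DOW.
# H spreads the start minute within the hour to balance load.
H 5 * * 1-5   # kick off the sanity suite around 5:00 AM, Monday through Friday
```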
Joe [00:08:40] All right. So maybe I'm way off base here. You remember ALM, right?
Greg [00:08:43] Of course. Yes.
Joe [00:08:45] Right. So, you know, you used to be able to (unintelligible) integrate your automated tests there, and someone could log into a dashboard, click on a machine, point to a computer, run the tests on that computer, and see the results. It almost sounds like that's what you're doing with Jenkins. Am I wrong?
Greg [00:09:00] That's not far from it, Joe, and (unintelligible) coming from my background with tools like QTP and BBT and ALM was understanding some of those capabilities and finding ways to solve those things. So you're right that when we did it through ALM, we kicked the tests off through that tool. In the case of Jenkins, that's actually very possible as well; for those folks looking for an alternative to something like Quality Center/ALM, it provides a means of doing that. It's not its intended use, but you can do it. But one of the things Jenkins does really well is scheduling things that need to run at certain times. So we leveraged that capability and started to execute these tests on a regular schedule so that, again, a test engineer could come in at seven a.m. and there was our initial run of results for the day. You know, Joe, I'm a believer that you run your automation multiple times a day, whether there's a new build or not. Those results, when you look at them over time in reporting tools (as we discussed on our previous podcast about our work with Elastic), give you insights into the health of the application.
Joe [00:09:59] Absolutely. So here's what we used to do. I worked for a healthcare company, and at the end of development we had to go through a verification process for the FDA, so it's really regulated. Rather than waiting until the last minute, we had Jenkins, and the developer would check in code, and the unit tests and everything would run automatically. We'd run some smoke tests, but then we'd kick off a job in Jenkins against a different environment, running against the new build the developers created, and that would run automated tests overnight. So when we came in, we saw the results, and people could start triaging issues to see if they had anything to do with the code the developers checked in. So is that the same process we're talking about here? Like, is there a disconnect between your developers and your testers, or is this specifically just for testers?
Greg [00:10:44] Yeah, you're right. Right now the results are just for our test engineers. We have test engineers dedicated to each of our products, so they can get them. Now, we do have many devs and project people who are interested in those results and get them as well, but the initial target audience is our test engineers as part of our team. The way you explained what you guys were doing is pretty much the same way we're doing it, although we're using Go for our primary build process; we're not using Jenkins as a company for builds. We actually use Jenkins just for the execution of those automated scripts, on the environment that Jenkins itself is running on. That's kind of our 1.0 of this. But we realize there are capabilities for expanding that out.
Joe [00:11:22] All right. So I'm not very familiar with Go. I always thought it was a programming language. So how does that play into this? Are you using it as glue, or as some sort of interface between Jenkins and your tests?
Greg [00:11:33] That's a great question. So there's something called the Go pipeline process, and this is definitely getting outside my area of expertise, but we use Go to do our build process. It's a competitor to Jenkins, I believe, in the way it's sold and in doing automated build processes. The way we couple those two things is that within the Go pipeline there is a configuration file, a YAML file, that we store a URL in. And that URL, if you go to it, just so happens to execute our automated script that is overseen by that Jenkins system. So it's a very easy way for something external to the Jenkins box and the automation execution environment to go ahead and kick a test off. That's our current approach right now as we're evolving it. At some point we may implement an API or other things to do it, but this turned out to be so simple, and it was built right into Jenkins, that we can do it and pass parameters in that run the tests differently. It's actually been exceptional and extremely reliable. I think I've told you we're running about 65,000 automated tests a week now, and almost half a million automated audits on top of that, and the process is going exceptionally well.
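To make the coupling Greg describes concrete: GoCD (the "Go" build server from ThoughtWorks) can store pipeline definitions as YAML, and a task in that pipeline can simply hit a Jenkins job's remote-trigger URL at the end of a build. A minimal sketch; the buildWithParameters endpoint is standard Jenkins, but the pipeline, job, host names, and parameter here are all hypothetical, and authentication is omitted:

```yaml
format_version: 10
pipelines:
  product-a:
    group: products
    materials:
      app:
        git: https://github.com/example/product-a.git
    stages:
      - build:
          jobs:
            build:
              tasks:
                - exec:
                    command: ./build.sh   # the actual build step
                - exec:
                    # After the build, ask Jenkins to run the matching
                    # automated suite, passing a tag as a parameter.
                    command: curl
                    arguments:
                      - -X
                      - POST
                      - "https://jenkins.internal.example/job/product-a-suite/buildWithParameters?TAG=sanity"
```

Because the trigger is just a URL, anything external to the Jenkins box can kick off a run, which is exactly the simplicity Greg is after.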
Joe [00:12:43] So once again, I just want to bring it back to what I understand and see if it's the same. We used Apache Maven, because we used Java, to do this type of build. It managed the project build, the reporting, the documentation; all that information was put in the pom file. Jenkins would read that pom file through a command line, and it knew what to run, what tests to run, what reporting to create, all that type of information. It sounds like Go (not the language, but Go builds, is that what you call it?) is taking the place of what we used, Apache Maven.
Greg [00:13:11] Yeah, I think so. It sounds like that.
Joe [00:13:13] Nice. So now, you said you're running all these tests. What's the process when people come in to view the results? Because, you know, the triaging piece usually takes the most time. And also, I know you were playing around with Elasticsearch a few years ago, creating data lakes and all that. Does that come into play at all with the reporting that's created after all these tests run in the morning?
Greg [00:13:35] It does. It does very much. And I think, again, for those listeners who are leading an automation effort, you want to take the time to build great building blocks for your automated strategy, right? So we have two reporting methodologies in place. Part of this started because we noticed that without reporting, if you don't have anything output at the end of a test run, you're putting all of the responsibility in the hands of the test engineer to interpret results at the command line. And, you know, when you're first starting out, that might be okay, but you want to move past that at some point. It's not okay to consume all that time by making the test engineer read through that, at least not for me, because it's a very easy lift to get some reporting. So our first reporting efforts were all done through email. We were leveraging some of RSpec's capabilities. We still do that; we've customized it now so it's much more intuitive, but we leverage RSpec, since we're a Ruby shop and it's our test framework, to generate an HTML report that we initially just attached to an email at the end of the run. We have a tool that we call Auto Tools; that's just kind of its nickname right now. It's a suite of entry points to execute our automated tests and feed into our framework. That's actually how Auto Tools initially got put into place: to be a wrapper around this automated test execution and send that email out at the end of it. Now it's become much more powerful in the way we use it, but at the end of the day, it still handles some of the basic things that just need to get done.
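Greg doesn't walk through Auto Tools' internals, but the pattern he describes (run RSpec with its built-in HTML formatter, then email the report) is simple to sketch in Ruby. A hypothetical, minimal version; the addresses, paths, and default suite name are made up, and the mail gem is just one common way to send the email:

```ruby
# auto_tools_run.rb - hypothetical wrapper around a suite run.
require "mail"

suite  = ARGV.fetch(0, "spec/product_a")
report = "results/run_#{Time.now.strftime('%Y%m%d_%H%M')}.html"

# RSpec ships an HTML formatter; --out writes the report to a file.
passed = system("rspec #{suite} --format html --out #{report}")

# Email the report (the mail gem defaults to SMTP on localhost).
Mail.deliver do
  from     "autotools@example.com"
  to       "test-engineers@example.com"
  subject  "#{passed ? 'PASS' : 'FAIL'}: automated run for #{suite}"
  body     "HTML report from this run is attached."
  add_file report
end
```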
Joe [00:15:03] Nice. So Greg, just to refresh my memory, your automated tests, are they written in Ruby? And does Ruby work well with Go and Jenkins?
Greg [00:15:10] Yes, we are writing in Ruby, Joe. That's our framework, and our automation is written in Ruby as well. What's nice about Jenkins is that it can touch something at the OS level very easily, and it can do things even outside of that. Because we're running our automated tests, which are Ruby-based, in a VM environment, we actually installed Jenkins on the very machine that executes these scheduled tests. Granted, we have a number of other VMs that our manual engineers use all day (I should say our test engineers; I know "manual engineer" can be confusing. We look at them as test engineers we empower with automated tests). So the test engineers have multiple VMs they can log in to and execute their automation, which all feeds into our reporting through our data lake and through our Kibana tools with Elastic. But again, going back to the recommendation from I.T. for our 1.0 (and I'd say we're transitioning into our 2.0 version of that now), we actually have Jenkins running on the very machines that execute the tests. So it can touch things on the local file system, including kicking off the right Ruby commands, or RSpec commands in our case, to begin the whole automated process for a specific suite of tests for a product.
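Since Jenkins lives on the same VM that runs the tests, the scheduled job's build step can be as simple as shelling out to RSpec against the local checkout. A hypothetical example of such a build step; the path, tag, and output file are invented for illustration:

```
# Jenkins job build step ("Execute shell" / "Execute Windows batch command"):
cd /automation/product-a    # local checkout on the execution VM
rspec spec --tag sanity --format html --out results/sanity.html
```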
Joe [00:16:21] So, Greg, it sounds like Jenkins has a node that reports back up to the main server, I guess. And when you say you have Jenkins local on a VM, you have a Jenkins node on these VMs that then communicates its information to the main Jenkins server?
Greg [00:16:36] Yeah. Since Jenkins is running in a Java environment, I believe this is all happening virtually within that VM itself. But there is a node within Jenkins where the automated tests execute, and that feeds wonderfully into our reporting framework, which again is built in Elastic using our data lake.
Joe [00:16:56] So then, why are you doing this? Is it (unintelligible) to run tests in parallel? Or is it kind of like running in a Selenium Grid, so you're able to run your tests against these particular VMs and spread them out a particular way, rather than use a grid? What is the benefit of using just a node rather than a node in a grid? I don't know if I'm making any sense. Are you almost creating your own Selenium Grid, I guess, but just using Jenkins with the nodes on the different VMs?
Greg [00:17:21] We don't leverage Selenium Grid at all. No, that's actually a different model than what we use. The reason we have a number of VMs is, remember, our target audience is our test team, and we have a pretty large team of 30 engineers now. They're executing their automated tests all day long, so they have a number of VMs they can log in to and manually kick off the automated tests they specifically need. When it comes to Jenkins, these are scheduled jobs; that's how it initially started, scheduled jobs that run in the morning. And now here's really what's at the heart of our conversation: we have the Go pipeline say, "Hey, I'm building this new product, product A, over here. And since I'm building this now, at the end of this build process, go ahead and kick off the automated tests that test product A." Jenkins can consume that request and get it kicked off just like one of our test engineers can. So it allows us, as soon as the build is done, to go ahead and execute a suite of automated tests. And granted, our suites finish in anywhere from five to twenty minutes; we don't have suites that run for hours and hours, so we don't have that particular problem. So that's how we're using Jenkins: to be a listener that executes things when our Go pipeline says, "I finished this build, go ahead and run the automated tests now."
Joe [00:18:39] Nice. So how do you specify which tests to run? Once again, going back to my experience, we used BDD and we had a tag called, I don't know, smoke. So we'd have a build job, and the Jenkins job would run that build and say, just run anything that has the tag smoke, and it knew to run it at a certain time or after a certain action. Is it something similar?
Greg [00:18:57] Yeah, great question, and something we've been working on for about the last two or three months in our next evolution, if you will, of the way we organize our test suites. Initially, for the first few years, we would call a specific test suite out, and we break suites down using the METS methodology; that's our manual test plan strategy in-house. We would say, "Okay, we want to run the account tests for product A," and those would be in a suite designated account for that product. Those products all live in their own repos; they're all GitHub-based repos we use. We actually leverage GitHub a little bit like a Quality Center as well, as far as being a repository to sort things in. It does give us some versioning capabilities, but we use it primarily for distribution and repo purposes. Anyway, going back to your initial question, one of the things we've begun to implement is tagging within our RSpec usage of our automated tests. So if, let's say, within that account suite we have a specific group of tests related to deleting an account, or changing some information for an account user, we can provide specific tagging within those RSpec tests to run them. What we now pass in through our Jenkins call is a tag that says, go run product A and run all the account tests. And it's been worth the effort, with the thousands and thousands of custom tests we have here, to go ahead and align them to this approach, where we're tagging them well according to our test plan strategy.
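RSpec's metadata tags make this kind of suite slicing straightforward. A hypothetical account suite, tagged the way Greg describes (the test names and tags here are illustrative, not his actual suite):

```ruby
# spec/product_a/account_spec.rb
RSpec.describe "Product A accounts", :account do
  it "deletes an account", :delete_account do
    # ... Selenium steps ...
  end

  it "updates account holder information", :update_account do
    # ... Selenium steps ...
  end
end
```

A Jenkins job can then pass the tag through to the runner, e.g. `rspec spec/product_a --tag account`, so one parameterized job can cover every slice of the test plan.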
Joe [00:20:28] Nice. So another issue I ran into back in the day, once again when I was working for a company, when I wasn't solo, and I guess everyone runs into this, is getting test environments to be clean and not flaky. So what we also did with Jenkins is, whenever we kicked off a test run, we created a new, clean instance of the environment. It would run the tests, and when the tests were done, it would shut down that VM. So it'd be a clean start and end, and we knew the environment was pristine because we were using a template of a known good copy that would always be instantiated when the tests ran and torn down when they ended. Are you doing anything like that? Because you did mention VMs a few times.
Greg [00:21:04] Yeah, I am. I love that you bring this up. It's like you read my mail, Joe. We put a high emphasis on consistency in our execution environments. We even put a high emphasis on consistency in our automation development team's environments; there's a certain process we put into place so we stay in sync. That's by design. We don't have a kind of willy-nilly, you-decide-which-IDE-you're-going-to-use approach. We have a certain set of IDEs, a certain set of gems, certain Ruby versions we use. It gives us a very consistent approach to our testing. When it comes to the VMs we have, we don't spin up a new one every time. They're all exactly the same; they've all been cloned off of the master VM we built initially. And it must have been about six months ago, as we were installing software for a new automation engineer we hired, I built a script that basically ensured things were installed well and proper. That gave me this idea: why don't we do something like this with our VMs? So every single morning we run basically a health check on all of our VMs. I get that reported to me, and that data goes into the data lake, and it tells me things like: how much drive space is still available in that virtual environment? What version of Ruby is installed? What's the gem setup? Anything looking unexpected, it will go ahead and highlight. So in a way, we're automating the automation. We've talked about that in the past, and we use our automation skills to ensure even the health of our environments. Now, a lot of folks might be saying, "Well, don't you want to have a lot of environments?" Yes, there is a case for that. But we found that by having some control over the environments, we reduce risk by knowing the conditions we're running under. We run those checks every day, I look at them every morning, and it's helped keep those environments very healthy and in good working order, and it ensures the data our test engineers are getting is consistent with something they've seen before. Joe, you and I have talked about the personality of an application. This helps them determine, "This is the same application I've seen before," or "Hey, this looks different than it normally does. Because we put so much priority on consistency, something's not right. Our application's got a headache this morning. I need to dig in deeper and do some analysis."
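As a rough sketch of the kind of daily health check Greg describes, here's a hypothetical Ruby script that gathers a few environment facts and emits them as JSON for a data lake. The field names are invented, and the disk check assumes a Unix-like shell, so it would need adapting for Windows VMs:

```ruby
# vm_health_check.rb - hypothetical daily environment audit.
require "json"
require "time"
require "socket"

health = {
  host:         Socket.gethostname,
  checked_at:   Time.now.utc.iso8601,
  ruby_version: RUBY_VERSION,                  # interpreter version on this VM
  gem_count:    Gem::Specification.count,      # number of installed gems
  disk_free:    `df -h /`.lines.last.split[3]  # available space (Unix-like only)
}

# In practice this would be shipped to Elastic; here we just print it.
puts JSON.pretty_generate(health)
```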
Joe [00:23:15] Very cool. So are you doing the same thing with the test data your tests actually use? Because, once again, we used a database that was fresh for each instance, so any changes we made weren't baked into the database, since we tore it down and started back up. We removed some flakiness that way. Any tips around test data management, because it sounds like you have a very large number of testers there, any type of test data management solutions in this particular approach you're using?
Greg [00:23:43] We haven't really had too many issues with that. It may just be the nature of the products we're testing. We do use a lot of data. Our sanity tests are all data-driven and account for a large number of the tests we run every day. That data is still, to this day, managed primarily in spreadsheets and put into a CSV format, because it's easily approachable for the test engineer. All of them have worked in Excel or spreadsheets before, and when they have updates, it's very easy for them to work in a spreadsheet, add a new URL and a few things they want to validate, and have our team update that data for them. We haven't really deviated from that because, to be honest, the ROI would be so small; this approach is scaling well for us at the moment. It's likely that somewhere down the road we will need to enhance that process, but why spend the money and time when, at this point, we don't have to? One of the other types of tests we have is a truly data-driven test. I love to do data-driven testing; these are things I learned back working in QTP and with some of its capabilities. We've carried that concept into our automation, leveraging the Selenium API and our Ruby framework. And we use that for things like calculators, since I work for Dave Ramsey and we're helping folks in the area of finances: getting out of debt, working through investments, things like that. We have a lot of calculators. So those we test through true data-driven tests, where we're taking input data and validating expected results at the end. That data is pretty stable and doesn't need to be modified too often.
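A data-driven calculator test in this style is easy to picture: each CSV row becomes its own RSpec example. A hypothetical sketch; the CSV columns and the calculate_payoff helper are invented for illustration, standing in for app-specific driver code:

```ruby
# spec/calculators/payoff_spec.rb - hypothetical CSV-driven test.
require "csv"

RSpec.describe "Debt payoff calculator", :calculator do
  # Read the cases at load time; each row becomes one example.
  CSV.foreach("data/payoff_cases.csv", headers: true) do |row|
    it "returns #{row['expected_months']} months for a balance of #{row['balance']}" do
      # calculate_payoff is a stand-in for the app-specific helper (assumed).
      result = calculate_payoff(balance: row["balance"].to_f,
                                rate:    row["rate"].to_f,
                                payment: row["payment"].to_f)
      expect(result).to eq(row["expected_months"].to_i)
    end
  end
end
```

Because the engineers maintain the spreadsheet, adding a new case is just adding a row; no test code changes.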
Joe [00:25:15] Perfect. So Greg, you do have a big team, like I mentioned. How do you get everyone on board? Any best practices for getting this all implemented and everyone on the same page?
Greg [00:25:24] Wow, that's a great question. So I'll tell you, it can be hard sometimes, right? When we started to implement the morning scheduled automation runs, we decided we would schedule one automated run for the team each morning. But what can happen is, if you, let's just call it, over-automate the process, automating the automation, if you overdo that, you risk removing the eyes, the brains, the intellect of the test engineer, and it's really important never to do that, because that's why you brought automation to the test strategy to begin with. It's part of your strategy. So if you overdo it and expect the automation itself to flag everything, pass or fail, I believe you're probably setting yourself up for a bigger problem, because automation, as much as I love to do it, is still pretty dumb for the most part. It's good at looking at some pretty basic things and going from point A to point B. But man, when you compare that to the intellect of a person, a test engineer who really cares about the work they're doing, there's no comparison. I don't care how much AI you throw at it. A person's intellect and their ability to reason through results is still going to win, and I truly believe that at my core. So we have to keep going back to the team to talk about those things. If we produce results and put them in your inbox, you still have to look at them; you have to make a decision to do that. So we talk a lot about the strategy of testing: automation is just a tool in the hands of the test engineer. It doesn't solve the testing part itself. It just gives you the insight to say, "Things look as I expected," or "They don't, and now I need to dig in further," or "I can have confidence things are in good shape and we can move forward with the new feature I'm testing." It's interesting; it gets to the philosophy of what we do, and it's something I believe in pretty firmly. I think it's helped us keep a good balance. Our testers aren't worried about losing their jobs, because we never position automation as something that's taking away their work. It's a tool that comes alongside to enhance and help them.
Joe [00:27:23] Great advice. Okay, Greg, before we go, is there one piece of actionable advice you can give to someone to help them with their automation testing pipeline efforts? And what's the best way to find or contact you?
Greg [00:27:33] Hey, Joe, that's a great question. The easiest way to contact me is on LinkedIn, and I'm very easy to find on the web as Greg Paskal. That's G-R-E-G P-A-S-K-A-L. You can find my materials at realworldtestautomation.com; you'll find my book there and the thirty-five or so articles I've now written. As far as actionable advice: when it comes to the automation engineer, look at the work you do as a partnership with your test engineer, and even try to set aside the idea of manual versus automated test engineers. At the end of the day, a good automation engineer really needs to be a great test engineer, because you need to understand the craft of testing. You need to be able, when an automation fire drill comes up and someone says, "This awful thing happened. Automate this," to reason like a test engineer and say, "You know, while we could do that, at the end of the day we might miss one of the highest risks I see here," or "This is a better task to be done visually by the test engineer, and we can build automation to come alongside and help them; solely automating it can actually blind you from seeing it again." So try to get to that place. A lot of test engineers aspire to move into automation, and that's a worthy place to move. But never look at it as if you now get to play with Lego blocks all day, doing the fun work. Automation is hard. The easiest thing we do in automated testing is writing the automated tests. There is a lot of hard work it takes to do this consistently and reliably, and to give exceptional results back to the test engineer. That's my advice.
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.