Automation Testing


By Test Guild

Welcome to Episode 72 of the TestTalks podcast. In this episode, we discuss how to test and monitor APIs with Neil Mansilla, head of developer relations at Runscope. You'll discover how to take your API monitoring and testing efforts to the next level.


We are currently living in what some are calling the API economy, but not everyone is ready for this API-centric world of development and testing. Neil explains the many ways a service like Runscope can help. Learn how you can easily set up, monitor, and test your APIs, as well as how to test the websites that consume your APIs using Ghost Inspector.

Listen to the Audio

In this episode, you'll discover:

  • What are microservices
  • How to set up an API test
  • What you should be verifying when testing APIs
  • Tips to improve your API testing and monitoring efforts
  • How to include API tests in your continuous integration environment

[tweet_box design="box_2"]API testing should start from a functional standpoint. It's not just about testing every single method.[/tweet_box]

Join the Conversation

My favorite part of doing these podcasts is participating in the conversations they provoke. Each week, I pull out one question that I like to get your thoughts on.

This week, it is this:

Question: What is your API testing and monitoring strategy? Share your answer in the comments below.

Want to Test Talk?

If you have a question, comment, thought or concern, you can do so by clicking here. I'd love to hear from you.

How to Get Promoted on the Show and Increase your Karma

Subscribe to the show in iTunes and give us a rating and review. Make sure you put your real name and website in the text of the review itself. We will definitely mention you on this show.

We are also on Stitcher, so if that's your preference, please subscribe there.

Read the Full Transcript

Joe:         Hey Neil, welcome to TestTalks.

Neil:        Hi. Thanks Joe. It's awesome to be here.

Joe:         It's awesome to finally have you on the show. People would not believe how many issues I had trying to get this working, so thank you. Before we get started though, Neil, could you tell us a little bit more about yourself?

Neil:        Sure. My name's Neil Mansilla, and I run developer relations over here at Runscope; we're an API testing and monitoring company. I've been here for about a year. Before I joined Runscope, I was at Mashery, which is an API management company, and I managed APIs for about 300 enterprises around the world. I was there for about four years, in dev relations and helping out with developer experience as a PM as well. So, I'm a developer at heart and just a big software geek. I love software, I love technology.

Joe:         Awesome. So what does a VP of developer relations do?

Neil:        Sure. Primarily our role is to make sure that we're helping out developers, or in this case developers, testers, and DevOps folks, in any technical or tactical way they need. We build community tools, and we do a lot of writing, from blog posts and guest blog posts to doing podcasts, for instance. So it's just kind of being on the ground where developers are and helping them solve problems.

Joe:         Awesome. This might be a little off topic, but now I'm just curious: what's the number one issue you think developers usually have with testing? Is there one common issue you see most organizations or developers struggling with?

Neil:        Yeah, I think a lot of developers just aren't testers, right? They struggle with building really good tests. Even myself, I'm a developer, and ideally tests are something you'd want to start with; you want to do some test-driven development and build your tests beforehand. But building really comprehensive tests that stand the test of time, pun intended, that really do help fight against regressions, is hard. Really good tests aren't understood only by the person who wrote them; they can survive you even after you move on to a new project or another company. I think that's a big challenge. Because, again, you don't do it professionally, day in and day out; you want to write code, you don't necessarily want to write tests. It would be tantamount to me not being a writer: writing blog posts is really difficult for me. For core developers, writing tests sometimes just isn't top of mind, or even their top skill.

Joe:         I thought it was just me, because I work for a big company where all the people were older. You're a younger, hipper company, so I can't believe you have the same issues. That's crazy.

Neil:        Yeah, we have old souls here.

Joe:         Awesome. So, before we get into Runscope: I just heard about it maybe two weeks ago. I got a popup for a free t-shirt, so of course I clicked on it. Then I got to reading it and said, "Hey, wait a minute, this actually is my core audience, I think." Why do you think API testing is important? Is that Runscope's main focus, API testing in general?

Neil:        Yeah, API testing and monitoring. We have a pretty deep API background, personnel-wise. As I mentioned, I'm from Mashery, and the head of our communications is also from Mashery. We have a couple of ex-Apigee folks. Our founders, John Sheehan and Frank Stratton, were both very early employees over at Twilio, which is a very API-first company. Being so embedded with APIs, one thing we noticed as developers, and as people who have been in the industry for so long, is that software engineers have been brought up to be really good at building tests for, let's say, code-level coverage. When it comes to web services, or any type of service, that type of testing has not really gained the attention that I think is required. We have this thing, I don't know why they dubbed it this, but they dubbed it the API economy. APIs are at the forefront of mobile app development, and modern DevOps organizations are building on microservices. APIs are everywhere, so it's really important to test these services and not just leave it to code-level tests to be considered comprehensive coverage.

Joe:         Awesome. Now, I've been hearing about microservices from a bunch of different guests, and I'll be honest, I don't know exactly what a microservice is. Is it just a small API? Is it lighter than REST, or…

Neil:        I guess I'll just speak from an example. We have deployed a microservices architecture behind Runscope, behind the web application and behind the testing service that runs in the cloud. It's comprised of close to 70 independent microservices, from a service-discovery service to a queuing service to a mailing service. The idea is that we didn't want to build one big monolithic application that drove everything. We wanted very small, independently deployable, independently testable services that all come together and culminate in what Runscope is today. The benefit is that it helps us scale, not just architecturally in terms of the amount of traffic we can handle, but also in the way we develop software. We can have teams that work on very specific services, completely independent of the other services running behind Runscope. So that's our story: we've adopted microservices.

Now, in the context of APIs, every one of those microservices has its own API. For us, and many others, a lot of those microservices communicate with each other over HTTP. So we use Runscope to monitor our own microservices architecture, and we have several customers that do the same thing. When I mention microservices and monitoring them, they're just APIs as far as Runscope and testing are concerned.

Joe:         How would you use Runscope to test a microservice? How would that work? Could you give us an example?

Neil:        Sure. We've given quite a few talks on this, not because we sell anything specific to microservices; it's just one of those things we have a passion for, sharing some of our findings and experiences. As an example, we have a service that's responsible for queuing and one responsible for actually executing the tests. When people queue up tests, or schedule tests to run to monitor their APIs, there's a service behind the scenes that handles the scheduling and execution. That particular service has an interface over HTTP, and we have Runscope tests monitoring it. So it's very meta, because we're using Runscope to monitor Runscope services. That's just one example.

You have traditional tests at the code level, of course. Let's say you deploy some code, the code passes, and your tests go green, even your integration tests, which are just multiple units communicating with each other. That's great. But standing up that service in staging, standing it up in production, and making sure it's responding properly is different. That service also depends on other services, so it's not just a code-correctness test; you're testing the service itself while it's up and running, with all the other network operations it requires. It's a true functional test against a microservice.

Joe:         So, when you say you're testing it, how are you verifying it? Are you just making sure you get a 200 back? Are you verifying the data being returned is correct?

Neil:        Yeah. You got the t-shirt, "Everything's 200 OK." That's the basic [inaudible 00:07:59] that you want to get back, but no, you can set assertions against any type of response. For instance, we can check on response time, response size, or any piece of the payload. If it's JSON, you can find a particular node inside the JSON response and make sure it has a particular value, or set an XPath assertion against it if it's XML. All those assertions you can set without doing any programming at all with Runscope. However, if you want to, we also support programmatic assertions. We support JavaScript as a language for setting assertions; you can define variables, iterate through payloads, etc., to assert against very dynamic payloads.
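The assertion styles mentioned here (status code, response time, response size, a value inside the JSON payload) can be sketched in a few lines. This is an illustrative sketch, not Runscope's actual assertion engine; the dotted-path syntax and the thresholds are assumptions.

```python
# Illustrative sketch of common API response assertions. Not Runscope's
# actual engine; the dotted-path syntax and thresholds are assumptions.
import json

def check_response(status, elapsed_ms, body_text, json_path, expected):
    """Run basic assertions against a captured HTTP response."""
    results = [
        ("status is 200", status == 200),
        ("responded under 500 ms", elapsed_ms < 500),
        ("body under 10 KB", len(body_text) < 10 * 1024),
    ]
    # Walk a dotted path (e.g. "data.user.id") into the JSON payload.
    node = json.loads(body_text)
    for key in json_path.split("."):
        node = node[key]
    results.append((f"{json_path} == {expected!r}", node == expected))
    return results

body = json.dumps({"data": {"user": {"id": 42, "name": "joe"}}})
for name, ok in check_response(200, 120, body, "data.user.id", 42):
    print(("PASS" if ok else "FAIL"), name)
```

The same list of named checks could be fed to any reporter; a programmatic assertion layer is just code evaluating a captured response like this.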

Joe:         Cool. So is this a cloud-based service where you would log in and run this? Or is it something you would install locally on one of your servers in-house?

Neil:        At its core, it's a cloud-based service. That being said, it can also be used to test services not just in the public cloud, but also in your private network, or even on a local dev box. The same testing agents we have running in the cloud, across 12 different locations between AWS and Rackspace, you can run locally in your own infrastructure to test services on a 10.x network, a 192.168.x.x network, or localhost for that matter. So it supports a kind of hybrid on-prem deployment.

Joe:         Okay, so if I have a continuous integration pipeline, would I be able to plug in to Runscope somehow?

Neil:        Oh, absolutely. A lot of our customers use it exactly as you described. For instance, if you're using Jenkins, we have a Runscope plug-in you can add, but more generally it's just a build step. Every test you define in Runscope has its own trigger URL. You can trigger any test to run, and pass optional parameters if you want to make it more dynamic. If the test fails, the build fails; that's basically how it works. It's not specific to Jenkins, you can use it with any type of CI automation.
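The trigger-URL build step described above can be sketched as follows. The URL and the `result` field in the response payload are hypothetical placeholders, not a documented schema.

```python
# Sketch of a CI build step that triggers a hosted API test via its
# trigger URL and fails the build when the run fails. The URL and the
# "result" field in the response are hypothetical placeholders.
import json
import urllib.request

TRIGGER_URL = "https://api.example.com/tests/abc123/trigger"  # placeholder

def build_step_exit_code(run_result: dict) -> int:
    """Map a test-run result payload to a CI exit code (0 = pass)."""
    return 0 if run_result.get("result") == "pass" else 1

def trigger_and_gate(trigger_url: str) -> int:
    """Fire the test run over HTTP, then gate the build on the outcome."""
    with urllib.request.urlopen(trigger_url) as resp:
        return build_step_exit_code(json.load(resp))

# Sanity check of the gating logic alone (no network involved):
print(build_step_exit_code({"result": "pass"}))  # 0
print(build_step_exit_code({"result": "fail"}))  # 1
```

In a Jenkins shell build step you might end with `raise SystemExit(trigger_and_gate(TRIGGER_URL))`, so a failing test run produces a non-zero exit code and fails the build.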

Joe:         Cool. So when it does fail, is there a dashboard? Are there email notifications? Are people alerted? What happens?

Neil:        Yeah, sure. If it was Jenkins, then if your build failed, any notifications you have tied into the build would trigger as usual. With Runscope itself, we have a whole bunch of notification options as well.

Joe:         So, what are the benefits of someone using a service like RunScope rather than something in-house?

Neil:        Sure. We have quite a few paying customers, even on the large side. One of the first questions we ask is, "What are we replacing? What did you have in there before?" Rarely do we find some type of commercial solution we're replacing. What we often find is one of two things. One is DIY: if it's monitoring, they have some ops person who wrote a script that runs on a cron job and does some type of testing against their [inaudible 00:11:57] Or it ends up being nothing at all; they don't have anything in place. It's kind of like apathy: well, we have code-level tests, and fingers crossed, things seem to be passing, things seem to be working. It's not treated as a first-class citizen. In fact, if you were to ask many who integrate third-party APIs, APIs outside their four walls, "What do you use to monitor them? What do you use to test against them?" you'd be surprised. They're not going to give you great answers, because they just don't have much in place.

Anyhow, that's what we find: there aren't many amazing in-house solutions around API service testing, or monitoring for that matter.

Joe:         Cool. So, what do you tell your developers when they're developing tests? Do you recommend they do more API-level tests rather than end-to-end UI tests? Is there a ratio?

Neil:        No, no. I guess we treat those differently. We do have a service called Ghost Inspector, which is a UI testing tool we can talk about a little bit, but the two are quite different. For API testing, the recommendation we make is that you really take it from a functional standpoint. It's not just about testing every single method you have. For instance, Joe, if you have three resource sets with 10 methods in each one, and you basically have thirty tests that are essentially pings, like "Hey, my service is okay, it's returning a 200," that's just not good enough. That's not how applications operate when they consume these APIs. Instead, they're…

Take a retail scenario, for instance. In one resource set you might be managing inventory, adding an item. In another, a shopping cart resource, you're adding that item to a basket, checking out, and so on. Those are real scenarios with cross-resource functionality going on, and those are the types of tests you really need to be writing, not just basic, independent, method-by-method tests. You need what would be equivalent to an integration test in code, where you take multiple units and mix them together to make sure the functions work well together. It's the same with APIs, methods, and resources: look at how methods interact with one another, and write tests that simulate that. For writing good tests, that's the bare minimum. And it's not mutually exclusive; you want to do both. You want method-by-method testing, but you also want the more traditional integration testing between those methods and resources.

Joe:         That's great, great advice. I think a lot of people miss that. They either do unit testing or end-to-end testing, and they miss that happy medium with integration tests.

Neil:        Yeah. Well, you had mentioned UI testing. Last year we acquired a company called Ghost Inspector, and it does UI testing. Again, this is not my specialty; I've used Selenium before, but just as a total noob. I've played around with PhantomJS and Selenium WebDriver, but I've never taken a deep dive, because building out a comprehensive browser UI test can be pretty complex. Ghost Inspector is a UI testing tool driven off of PhantomJS, and I think there are some other drivers in the background you can use too. Building a test is super simple: there's a Chrome extension you can install to do it non-programmatically. You basically use your browser and traverse through your website or web application, and it creates a test for you. Just from browsing around and clicking, setting assertions is cake.

So anyhow, Ghost Inspector and Runscope API testing work very well together. You can do API testing and UI testing all in the same test suite, which is very useful for end-to-end testing. Imagine, for instance, that I have to log in to an account that has an auth flow. You can jump back and forth between Runscope API testing and Ghost Inspector UI testing, so it's very useful.

Joe:         Is that a headless test?

Neil:        It is. So it's headless, and every Ghost Inspector test is video recorded. I think it's programmed to take snapshots, and it stitches together a video for you and takes some screenshots at the end. You can even do screenshot comparisons to look at the changes between each test run. So yeah, it's headless.

Joe:         Cool. So in Runscope you can almost create a test flow. Could you call an API, use its data, and feed it to your UI test?

Neil:        I know this primarily from the Runscope perspective, and how I've used it is: I've integrated my Ghost Inspector account, and I add a Ghost Inspector test step. A traditional API test is composed of any number of requests, and you can chain requests together to make very complex tests, extracting data from one request and using it in the next, and so on. Those are HTTP calls, and for Ghost Inspector it's no different: I just add a Ghost Inspector step and it runs my Ghost Inspector test. It shoots back a bunch of information; I can extract data from that and use it as my test continues to run on the Runscope side. So yeah, it's pretty useful and pretty easy to use.
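The extract-and-reuse chaining described here can be sketched as follows. The `{{variable}}` syntax and the helper names are illustrative assumptions, not an exact template language.

```python
# Sketch of request chaining: pull a value out of one step's JSON response
# and substitute it into the next request. The {{variable}} syntax and these
# helper names are illustrative assumptions.
import json
import re

def extract(variables, response_body, var_name, json_key):
    """Capture a field from a JSON response body into the variable store."""
    variables[var_name] = json.loads(response_body)[json_key]

def substitute(template, variables):
    """Replace every {{name}} in the template with its captured value."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(variables[m.group(1)]), template)

variables = {}
# Step 1: a login call returned this body; capture the session id.
extract(variables, '{"session_id": "abc123"}', "sid", "session_id")
# Step 2: build the next request's URL from the captured value.
next_url = substitute("https://api.example.com/cart?session={{sid}}", variables)
print(next_url)  # https://api.example.com/cart?session=abc123
```

A UI test step fits the same pattern: it returns a payload, values get extracted into the variable store, and later API steps substitute them in.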

Joe:         Cool.

Neil:        To kind of chain those two together.

Joe:         Awesome. Do you have any recommendations, and this is a pet peeve of mine, for test data management? We run against multiple environments: we have staging, we have development, we have eight sprint teams that have their own environments. Do you have any recommendations you give your developers on how to make their tests reliable against different environments for data and data population?

Neil:        Yeah, sure. Runscope has always been very flexible about running tests with dynamic data that you can send in when you trigger tests. We also just productized something called Runscope Environments, where you can define as many environments as you want. So now you can take the same test and apply different environment settings to it. For instance, rather than a static base URL, a test would use a variable base URL. Depending on the environment you're in, if it's staging, maybe your base URL is staging.joeapi.com, and so on for prod or even localhost. We have it set up so you can reuse the same test from testing on your local dev box, to staging, to pre-prod, to prod, to monitoring on prod. You can even schedule that exact same test to run on a recurring basis with different environment settings.
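The variable-base-URL idea can be sketched like this. The environment names, URLs, and `${base_url}` placeholder syntax are illustrative assumptions, not the exact feature.

```python
# Sketch of the environments idea: one test definition, with per-environment
# settings swapped into a variable base URL. Environment names, URLs, and
# the ${base_url} syntax are illustrative assumptions.
from string import Template

ENVIRONMENTS = {
    "local":   {"base_url": "http://localhost:8000"},
    "staging": {"base_url": "https://staging.joeapi.example.com"},
    "prod":    {"base_url": "https://api.joeapi.example.com"},
}

def resolve(url_template: str, env_name: str) -> str:
    """Expand ${base_url} in a test's URL using the chosen environment."""
    return Template(url_template).substitute(ENVIRONMENTS[env_name])

test_url = "${base_url}/v1/orders"  # the same test definition everywhere
for env in ("local", "staging", "prod"):
    print(env, "->", resolve(test_url, env))
```

Because only the environment settings change, one test definition covers local dev, staging, pre-prod, and scheduled production monitoring.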

Joe:         Awesome. So, I'm going to switch gears a little bit. I've been doing some stalking, so, quick question: is the Internet of Things a real thing? The reason I ask is I noticed you recently won a TechCrunch hackathon for an IoT device and application that monitors your water consumption. So what's that all about?

Neil:        Yeah. So, TechCrunch Disrupt, if you're not familiar with it… I've been to a boatload of hackathons in my day, because I used to be a developer evangelist over at Mashery, where I would evangelize a bunch of APIs owned by different companies. We'd sponsor hackathons, help organize them, and so on. One of them is TechCrunch Disrupt, and it's one of my all-time favorites. I've been to Disrupts in San Francisco, New York, and Berlin, and it's the most competitive hackathon because you've got 60 seconds to get up there and show off what you've got, and that's not a lot of time. At this particular one last year, here in San Francisco, we all wanted to learn how to do some hardware hacking. A couple of guys I worked with over at Mashery were already familiar with Arduinos and the whole hardware thing. A lot of the work we were doing, because Mashery had been acquired by Intel, was around IoT.

So we came up with an idea called "Shower with Friends," which was a way to gamify water conservation. Before you hop in the shower, Joe, it would tell you, "Hey, yesterday you consumed 12 gallons of water with a 15-minute shower. Let's do better this morning." And it would tell you how your friends were doing, to get you to compete. We're in the middle of the worst drought in recorded history here in California, so it was something very near and dear to our hearts, and something we're still living through today. Anyhow, that was the motivation to build this thing.

So yes, to your question, is IoT real? Absolutely. I think it's inevitable. You already had very large industrial equipment manufacturers with their own platforms communicating very well within their own network or their own brand of hardware. With everything now able to be connected to the internet, it was an inevitability. [inaudible 00:21:33] It was real. It still works. We have one guy that still has one [crosstalk 00:21:41] day. We can still see George's water consumption; it's hilarious, actually. The thing is, it's going to be nearly impossible to defend our title this year; like I said, it's one of the most competitive hackathons. But we're thinking of another IoT hack around water. The cat's not out of the bag yet, so I can't really say what it is.

Joe:         Cool, we'll have to have you back on when you do that. So for IoT, testing these would mostly be API tests, is that correct?

Neil:        With IoT… I gave a talk at Mobile World Congress that probably wasn't the most exciting talk, but the point was that there have been I don't know how many attempts to standardize around an IoT framework or IoT communication. There are obviously devices that communicate over HTTP and have APIs, so yes, in that case it's just another web service out there. That being said, there are other protocols, and it's still a battle over what's open and which companies will actually participate in those more open standards. So that's still playing out. But APIs are table stakes now.

If I want an API to manage my inbound and outbound email, an API to manage my calendar, or an API to send and receive text messages or phone calls, these aren't services I'm going to build myself, nor am I going to buy a proprietary solution. I'm going to find an API that does that for me, from someone whose core business it is, and I'm not going to think twice about it. I'm just going to integrate it into my applications, be they mobile, desktop, or server. Because you're relying so heavily on services, whether they're services you roll out or third-party services, you have to keep an eye on these things. Your business completely depends on them being up, right? That's why we exist today; that's the problem we want to solve. We want to make sure services are responding properly, that they're correct, that they're performing properly, and keep an eye on that the same way you keep an eye on your code quality. You have to give the services just as much attention as you would your own source code.

Joe:         Yeah, that's a great point. I think a lot of people forget about that. If they're consuming someone else's service, what happens if that service goes down? A lot of people don't anticipate that. So –

Neil:        Yeah, we see that with AWS today, and AWS is amazing infrastructure, right? We rely so heavily on these cloud services, whether it's infrastructure as a service, platforms as a service, or telephony as a service, and no one can guarantee 100% all the way around. You can deliver more nines after that 99.9% and still need to know when there's degradation, when there's latency, or when there's absolute downtime. So you've got to protect yourself against those types of things, particularly if a service is really deeply embedded. Keeping your eye on that is very, very important.

Joe:         Cool. So along the same lines, does Runscope have the ability to mock an API? Say you need to use a service but you have to pay for it, or you're doing some sort of performance testing. Can you mock APIs within Runscope, I guess, is my question.

Neil:        Yeah, that's obviously something we're interested in looking at. We don't currently support any type of service mocks or service virtualization. We're aware of it, but no, that's not a product feature we currently support.

Joe:         Okay, cool. Okay, Neil, before we go, is there one piece of actionable advice you can give someone to improve their API testing efforts? And let us know the best way to find or contact you, or learn more about Runscope.

Neil:        Yeah, the last piece of parting advice I'd give, and maybe this is bigger than API testing, is that when it comes to designing your API, from the very beginning make sure you're getting as many stakeholders involved as possible. The API can't be looked at as just some technical interface. It's not just that; the API means a lot to your sales team, your business development team, and your marketing team, because it's the way you conduct business, through the APIs and services you're putting out there for your partners and customers to consume. Keep them involved from the very beginning so it's not a mystery whether your service is up and running, or whether it's delivering the data it's supposed to deliver. Really try to get the stakeholders in as early as possible and keep them involved, all the way through testing.

So, anyhow, the best way to get a hold of me is on Twitter at @mansilladev, that's M-A-N-S-I-L-L-A-D-E-V, or of course by email, neil (N-E-I-L) at runscope dot com.

