About This Episode:
You limit your career potential if you only know how to write end-to-end tests via the UI. Automating API tests for your backend services allows you to ship changes confidently. Several popular tools will enable you to do this, but which one should you use, and what are their pros and cons?
Also, check out BrowserStack Automate TurboScale now: https://testguild.me/turboscale
In this episode, I want to share a private monthly guild session we had for Automation Guild 2024 members. If you don’t know, when you buy a ticket to our online Automation Guild (the next one is Feb 10-14), you also get nine monthly sessions, one each month, to keep the learning and the community going after the official event is over.
In this session, Stephen Kilbourn, a longtime guild member and speaker who brings a wealth of experience in automation testing, shares his top three API testing tools: Postman, Playwright, and Jest, and dives into the pros and cons of each one.
Exclusive Sponsor
Setting up, scaling & maintaining your functional automation grids can be an uphill task.
A lot of technical expertise & cross-team dependency goes into achieving an automation setup that is robust & scalable.
What if I told you there is a smarter way to navigate these hassles easily? Let me introduce you to Automate TurboScale!
It is a high-scale desktop browser automation solution that allows you to quickly set up grids on your preferred cloud provider (be it AWS, GCP, or Azure). You can configure metrics like test concurrency, CPU limits, and browser node cooldowns and set up alerts to ensure your grids run with the most optimal resources.
Now, here is the real deal! It even allows you to leverage a testing-ready, fully managed automation grid on the BrowserStack cloud to run your tests. Simply integrate your tests with the BrowserStack cloud using the BrowserStack SDK and get testing-ready in minutes. No code changes, no setup, no maintenance, just pure testing!
Automate TurboScale elevates your debugging capabilities by integrating with Test Observability, which offers advanced test debugging & reporting capabilities. You get instant access to video recordings, logs, quality profiles, custom dashboards, AI-based flakiness detection, and more.
Sounds next level, right?
With all the above features & flexibility in scaling and debugging tests, Automate TurboScale is truly the answer to all your high-scale functional testing needs. Why don't you give it a try yourself now?
Head over to https://testguild.me/turboscale and let me know what you think.
About Stephen Kilbourn
After earning a degree in Electrical Engineering, Stephen worked in the defense industry for 8 years, where he performed systems integration testing, interface design, and requirements management. He then transitioned to consulting, helping clients improve their development processes, create testing frameworks, and improve collaboration to break down silos in their delivery teams.
Connect with Stephen Kilbourn
- Company: www.dialexa.com
- Blog: www.intomydigitalforest.com
- LinkedIn: www.linkedin.com/in/stephenkilbourn
- Twitter: www.twitter.com/krull_etc
- Github: www.github.com/stephenkilbourn
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:00] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.
[00:00:34] Joe Colantonio If you only know how to write end-to-end tests via the UI, you are really limiting your career potential. Automating API testing of backend services allows you to ship changes more confidently and also enhances your opportunities to help your team. There are several popular tools that allow you to do this, but which one should you use, and what are the pros and cons? In this episode, I want to share with you a private monthly guild session we had for Automation Guild 2024 members. If you don't know, when you buy a ticket to our online Automation Guild, the next one being February 10th to the 14th, 2025 (check the links down below), you also get nine monthly sessions after the event, one each month, to keep the learning and the community going after the official event is over. So you have support before, during, and long after Automation Guild is over. In this session, Stephen Kilbourn, a longtime Guild member and speaker who brings a wealth of experience in automation testing, shares his top 3 API testing tools, Postman, Playwright, and Jest, and dives into the pros and cons of each one, along with demos. If you're listening, you'll probably want to check out the video as well.
[00:01:44] Joe Colantonio As you know, setting up, scaling, and maintaining your own functional automation grid can be a pain. A lot of technical expertise and cross-team dependency goes into achieving an automation setup that is both robust and scalable. But what if I told you there's a smarter way to navigate through these hassles with ease? Let me introduce you to Automate TurboScale. It's a high-scale desktop browser automation solution that allows you to quickly set up grids on your preferred cloud provider. You can configure metrics like test concurrency, CPU limits, and browser node cooldowns, and set up alerts to ensure your grids are running with the most optimal resources. Now, here's what's even more awesome. It even allows you to leverage a testing-ready, fully managed automation grid on the BrowserStack cloud to run your tests. No code changes, no setup, no maintenance needed. Automate TurboScale can elevate your debugging capabilities as well by integrating with Test Observability, which offers advanced test debugging and reporting capabilities. You also get instant access to video recordings, logs, quality profiles, AI-based flakiness detection, and a bunch more. Automate TurboScale is an answer to a lot of the complaints I hear from test engineers on my podcast about their high-scale functional testing needs. Why not give it a try for yourself? Head on over to testguild.me/turboscale to check it out now.
[00:03:10] Stephen Kilbourn We'll go through four steps. One is just the introduction of why to automate API testing and a look at the API that we'll test here, and then we'll go through each of these three tools and write the same exact tests with each of them. Why automate API testing? This is the Automation Guild, so I don't think I have to really sell anyone on automating, but APIs are generally going to be what you want to test outside of the front end. Or sometimes the API itself is your product. If you can test it in isolation without the front end, you can often catch regressions faster, and the whole goal of our testing is to prevent regressions or find bugs before they even get out the door in front of your end users. If you're testing at the API level, instead of going from the front end to an API, it makes it a little easier to investigate failures. And I think it's very important whenever you're writing tests to do your best to use the same repo for your test code as your API code. So if you're writing an Express API in Node, I think it's very important to look at ways to keep that test code there. That makes it easier for you to run the API and your test code locally on your machine as you're testing. It makes it so you can give a developer writing a new feature the ability to run tests locally before they make a PR. And it simplifies your setup when you're doing CI; whether you're in GitHub or Jenkins, you don't have to pull down a separate test repo to run the tests against the API. Those are the four big goals that I try to incorporate. The way we'll walk through this is publicly available: if you go to the GitHub link, and I'll share this repo in a bit, you can get your own API key and actually use this for free. I didn't want to do something where you had to do a lot of work to run it locally or where you were limited. The weather API is free; you just sign up and get an API key, because they don't want you to DDoS their free tier. We have two endpoints that I'm going to write tests for, and you can look at them in here. One just gets the current weather for a location. The other is very similar, but you add in the number of days that you want and you get a forecast for that number of days.
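For reference, a minimal sketch of those two calls. The base URL, endpoint paths, and parameter names here are assumptions modeled on the weather API's docs, so check those for the exact values.

```javascript
// Node 18+ (ES module, top-level await). API_KEY comes from the environment.
const BASE_URL = 'https://api.weatherapi.com/v1'; // assumed base URL
const key = process.env.API_KEY;

// Endpoint 1: current weather for a location
const current = await fetch(`${BASE_URL}/current.json?key=${key}&q=London`);
console.log((await current.json()).location.name);

// Endpoint 2: the same call plus a number of days, returning a forecast
const forecast = await fetch(`${BASE_URL}/forecast.json?key=${key}&q=London&days=3`);
console.log(await forecast.json());
```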
[00:05:55] Stephen Kilbourn A general strategy for testing an API, when there's some sort of authentication process, is: one, you want to make sure that if you call the API without a key or token, you get the right kind of error message. Then once you have the key, start going down both the happy path, where you have a valid location and you get all the data and the right kind of response, and the sad path, where you have a bad location. Does it handle that gracefully, the way your documentation says it should? We'll go through those scenarios with each of these three tools. I will start with Postman. I think probably everyone here is familiar with Postman, but Postman allows you to write and make requests. I'm focusing very heavily on REST APIs today, but you can do other things as well. You can make a GET request and write out the query parameters, or a POST with a post body, and see if the API is responding the way you want. But they also have a tab in there, which we'll see in a second, where you can write tests. And then there's Newman, which probably doesn't get the same buzz as Postman, but it's made by the same company, and it's really just a tool that lets you run the collections on your own machine via the CLI, or they give you a snippet of code and you can run it as a Node app. That's what we'll do today. The steps look like this: in Postman, you go to the app or your VS Code plugin, whatever you prefer, you write your tests, validate that everything's working the way you expect, save the tests you write in what they call a collection, and export it in a JSON format. Then you set up Newman to actually run that file and make sure you have the variables that you need. Looking at the app, I've already set this up, and we have our two endpoints, which is how I put our collections together. Then it's those three scenarios that I talked about: one happy path and two different sad paths, one where I have a bad location and one where I have a bad API key. When you use Postman, and probably most of us have sent an API request with it, you can set your URL, your location, and the API key as variables. For this, if you go to the weather API and look at the docs, you can see the API key is actually just a query parameter that you include. When you make the request, you'll get back an object with your location and a lot of keys. And not everyone realizes this, but they have a Scripts tab now. I think it used to be specifically called Tests, but you can write these scripts in JavaScript, and they will run either before or after your request. This is how you can actually write a test with Postman.
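Here's a rough sketch of what such a post-response script can look like in that Scripts/Tests tab. The schema is trimmed down and the variable name is an assumption; Postman's sandbox does bundle AJV via require.

```javascript
// Runs in Postman's sandbox after the response comes back.
pm.test('status is 200', () => {
    pm.response.to.have.status(200);
});

pm.test('response includes the requested location', () => {
    const body = pm.response.json();
    // 'location' is an assumed collection variable name
    pm.expect(body.location.name).to.eql(pm.variables.get('location'));
});

pm.test('current weather matches the expected schema', () => {
    const Ajv = require('ajv'); // bundled in the Postman sandbox
    const ajv = new Ajv();
    const schema = {
        type: 'object',
        required: ['location', 'current'],
        properties: {
            location: {
                type: 'object',
                properties: {
                    name: { type: 'string' },    // flexible: only the type is checked,
                    region: { type: 'string' },  // not the changing values
                    country: { type: 'string' },
                },
            },
        },
    };
    pm.expect(ajv.validate(schema, pm.response.json()), ajv.errorsText()).to.be.true;
});
```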
[00:09:22] Stephen Kilbourn Walking through the happy path in Postman, here are the things I said I wanted to test: I want to make sure the response is a 200, I want to see that the location is included in the response, and that the current weather is actually returned with all of the expected keys. The way you do that with Postman is that they let you import AJV, which is a schema validator, and I can write out all the keys that I want and what the schema looks like for the response. You can see the name, region, country; these are all things that should be in the response, and I can make sure that what comes back really is a string. I don't really care about the specific value, just that it is a string. I try to be flexible on a lot of these, because the local time and the actual weather in this API would be changing each time, and I don't need to check everything here; hopefully I can test that elsewhere at a lower level. Then for your sad paths, you write very similar tests that are a little simpler, because you should just get back an error, and in that error object you'll have a code and a message. You can write tests that expect a response of 400 and then actually check the error object for the code that the weather API defines and the message that they say they'll send back. Forecast is very similar; you go through the same process. What you need now is to export them; in that dropdown you have the option to export and save. Once you save that, you put it in your repo. This repo is public, so you can go look at it later, but I'll pull it up right now, and you can see I've created folders for each of these tools. For Newman, as I called the folder, I saved those two collection files, and then I wrote this one script. Everything from the first line down through the newman.run statements is pretty much copied straight from the docs. They make it pretty straightforward to run your collection; you just need to tell it the path of where to find the JSON collection file.
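For reference, the documented Node snippet for Newman looks roughly like this; the collection filename here is hypothetical.

```javascript
// newman/postman.weather.test.js
const newman = require('newman'); // npm install newman

newman.run(
    {
        collection: require('./current-weather.postman_collection.json'), // hypothetical filename
        reporters: 'cli', // prints the pass/fail summary you see in the terminal
    },
    (err) => {
        if (err) {
            throw err;
        }
        console.log('collection run complete');
    }
);
```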
[00:12:11] Joe Colantonio Stephen, Postman introduced a lot of paid options over the free ones. Is this all free that you've shown right here?
[00:12:17] Stephen Kilbourn Yeah, everything I'm showing right now is free. Once you get to where you want to start sharing the collections, with everyone able to run that same file locally or in their own app, it starts getting different. But the runner itself you can run for free. It's essentially parsing these JSON files and running them on your machine just like Postman does for you.
[00:12:43] Joe Colantonio Perfect.
[00:12:43] Stephen Kilbourn I set up two global variables: the API key and the API URL. The key is the big one that you never want to forget about: don't put your key in your repo. The way to avoid that is to set it as an environment variable. You can put it in a .env file; in my repo I have a sample, and it's just pulling in these key-value pairs. The API URL you don't have to do, but I feel like it's a good practice, because a lot of times you'll want the ability to run against, say, a dev environment, then a test environment, then a production environment. The key you don't want out there publicly, because then anyone can use your secret password. To import it for your test, you just do process.env.API_KEY, and it'll be there. When Newman runs your collection, it knows the idea of global variables and just automatically pulls them in, because you passed all the environment variables in as globals. To run it, you see postman.weather.test.js, so you just run it with node, and to make things easy I put a script in my package.json. I'm going to try switching to a different app right now. All right, we'll see what happens when you run that script: you get some output, in probably about six seconds it ran everything, and you get a list of all the tests that ran with green checkmarks showing they passed. Postman is nice enough to give you some response times. You don't get the nice debugging experience that you get with the next tool we'll look at, but it is fairly straightforward to see what happened. If I had instead written my test with a different expected response, say I change my 200 in the background here to a 250, which doesn't exist, you'll see a failure: it didn't get the response that it should. You're able to start debugging and see what failed, but you have to manually go call the API yourself; you don't have any idea of a trace or a saved response. Some pain points before we jump to the next tool. There's no built-in retry logic with Postman. If it fails and you don't get that 200 or 400 response, there's no built-in way to just call it again. And sometimes that's good; you want it to fail right away. But I'm working in a space where we use a lot of serverless endpoints, and when you do a new deployment, a lot of times your Lambdas will be cold and need to be hit once or twice before they warm up and start working. If you don't manually write that retry each time you need it, it's not going to retry for you, and when you're doing test deployments, you need that. The other thing is that it's a little bit hard to read, because you're looking at a JSON file with the tests embedded in it. It doesn't have the greatest developer experience, because if you need to update a test on the free version, you pretty much have to take the JSON from your GitHub repo into Postman, start editing it, and then export it again, which is prone to errors.
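Stepping back to the variable setup for a second, here's a minimal sketch of that .env-to-globals wiring. The key names are assumptions, and Newman accepts globals in Postman's variable-scope format.

```javascript
// .env (never committed):
//   API_KEY=your-key-here
//   API_URL=https://api.weatherapi.com/v1
require('dotenv').config(); // npm install dotenv

const newman = require('newman');

newman.run(
    {
        collection: require('./current-weather.postman_collection.json'),
        reporters: 'cli',
        // Hand the .env values to the collection as global variables.
        globals: {
            values: [
                { key: 'apiKey', value: process.env.API_KEY, enabled: true },
                { key: 'apiUrl', value: process.env.API_URL, enabled: true },
            ],
        },
    },
    (err) => {
        if (err) {
            throw err;
        }
    }
);
```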
[00:16:52] Stephen Kilbourn There are ways to sync with the paid version, but you won't always work at a company that wants to pay for those features. And then the third one is that it's not universal. It works great when everyone uses the same tool, but sometimes you're working with developers who are used to something else and don't want to use Postman to look at tests; they're used to their own workflow, maybe Paw or some of these open-source tools. Overall, you can do everything needed to test this API with Postman, but there are some things to think about. So now we'll go over to Playwright. I didn't do a polling question, but I'm going to guess everyone here has heard of Playwright. For end-to-end testing, touching the front end, picking selectors, and clicking through, it does a great job. It provides really good tracing mechanisms when your test fails: you can see the screenshots of what happened, but you also get the network calls and responses. That goes into its ability to test APIs, because Playwright has this idea of the request context. Not only can you use the request context to configure things for your test environment before the Playwright tests run, you can actually test API responses, either in conjunction with your UI-based end-to-end tests, or, what I'm doing today, just install Playwright without all the browsers. It's a little lighter weight, and you just make API calls. If you go to their docs, they have a full explanation of how to do this. Going back to that repo, Playwright has its own folder as well. I like the idea of setting up kind of a BDD syntax, a given/when/then, without getting very pedantic about BDD. For our application, I'll collapse this so you can see the flow: I have 'given an authenticated user' and 'given a bad API key'. Then, given an authenticated user, when we invoke the API, that's the happy path, and the sad paths are when we have a bad key or location. Those when statements are actually just making the API call, and this is using that request context. For our API, these are all GET requests, so I just pass in some query parameters: you can see a request.get with a specific URL that has some query parameters.
[00:20:00] And if you're not used to the TypeScript world, these parameters are things that I say have to be a string, but they're optional, because in my tests I want to be able to check what happens if someone doesn't pass the location, or what happens if someone doesn't pass the API key. Even though in the actual API these things are required, in the testing setup I want them to be optional. When I start writing tests, because I've broken it into that given/when syntax, I can make it a little easier to read: when we make that happy path call, it should return a 200 response. I just pass in the request context, the location that I'm testing, which I defined up here as a city, and the fact that I'm an authenticated user. Because this app only uses an API token that doesn't change, the given is really just reading .env, very much like we did in Postman. If you have a different API setup where you're hitting a token endpoint and asking for a temporary token, here is where I would make that API call, save the token, and then pass it along. But since our API key doesn't change from day to day, I just save it as a variable. Once the request is made, I can check that the status of the response is a 200, and here I make sure the body is defined. If I want to check that the location is in the response, I just take that portion of the body and make sure it is what I expect it to be. For the keys here, I did not use AJV, which you could; it's an npm package you can install and use exactly like in Postman. I thought it was helpful to see that you can also just compare them all manually and do a very similar check, making sure that each of those keys matches up with the name and type of value that you wanted. And you'll see the sad path is similar, but now I just want the response to be a 400 and the error object that's sent back to have the right code and the right message. Then, just like with the Postman and Newman setup, you can run the Playwright tests as a script, and I like abbreviating it because it's getting long to type all of these. All right, when you run your Playwright tests in your terminal, you can see it runs pretty fast, because one nice thing about switching over to Playwright is that it can run several things concurrently. It's able to make several API calls at the same time and be a little faster. I think we were at 6 or 7 seconds before; this time we're closer to 3.
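A minimal sketch of that given/when/then shape with Playwright's request fixture. The helper name, environment variable names, and expected error status are assumptions, not the exact code from the repo.

```javascript
// playwright/current-weather.spec.js
const { test, expect } = require('@playwright/test');

const location = 'Dallas'; // any city works for the happy path

// "when": invoke the API via Playwright's built-in request context.
const whenWeGetCurrentWeather = (request, { q, key }) =>
    request.get(`${process.env.API_URL}/current.json`, { params: { q, key } });

test.describe('current weather', () => {
    test('happy path returns 200 with the requested location', async ({ request }) => {
        const response = await whenWeGetCurrentWeather(request, {
            q: location,
            key: process.env.API_KEY, // "given an authenticated user"
        });

        expect(response.status()).toBe(200);
        const body = await response.json();
        expect(body.location).toBeDefined();
        expect(body.location.name).toBe(location);
        expect(typeof body.location.country).toBe('string'); // manual key/type check
    });

    test('sad path returns 400 with the documented error object', async ({ request }) => {
        const response = await whenWeGetCurrentWeather(request, {
            q: '', // missing location
            key: process.env.API_KEY,
        });

        expect(response.status()).toBe(400); // assumed per the API's docs
        const body = await response.json();
        expect(body.error.code).toBeDefined();
        expect(body.error.message).toBeDefined();
    });
});
```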
[00:23:22] Stephen Kilbourn The other thing is you get a report. If you run that command, you'll see that it opens, and I'll jump back to my browser here. If you haven't used Playwright before, this is what a Playwright report looks like. You can jump in and see a list of all the tests in each of the test files, the current and the forecast tests. If I click on a specific test, I'm able to see all the steps and checks that it's doing, and then I can open up a trace. Since all I'm doing is calling a single API, there's not a lot to look at, but it does save the exact API call that it makes. I can see the URL with all the query parameters right there, and I can look at the response headers and the body. If for some reason this fails, I have documentation of why it failed beyond the assertions that I make, and I can actually ask a developer to look with me and say, hey, this is what I'm getting back; do you know what would have caused this failure? The developer experience starts getting a lot better when you have Playwright, and if you have a lot of tests to run, being able to run in parallel is very nice.
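If you want every run to keep that trace, it's a one-line config option. A sketch, with 'retain-on-failure' being the more common choice in CI:

```javascript
// playwright.config.js
const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
    use: {
        trace: 'on', // save a trace for every test; 'retain-on-failure' keeps only failures
    },
    reporter: [['html', { open: 'on-failure' }]], // the HTML report shown above
});
```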
[00:24:53] Stephen Kilbourn Now the pain points. One is that it's a new tool that's unfamiliar to backend developers. If you're not working on a full-stack team, or with people who have tried Playwright, it's just one more tool to teach and onboard everyone with: how do you look at those traces, where do you get the report, how do you navigate it, how do you write tests with this tool? Two, it introduces a dependency. Sometimes you do have to go through security, or your team has to get approval to use Playwright, because you're in a healthcare or banking situation where they need to know everything that's being installed and make sure there's no risk for them. Sometimes it's a lot of extra work just to use something like Playwright, even though it is very popular. And then the third point is that it might be duplicating what the developers are already using. That brings me to Jest. Jest, if you're not familiar, is really well known in the JavaScript and TypeScript world for unit testing. Most likely, if you're working on an app that is Node or React, it's going to have tests using Jest. In fact, Playwright uses a lot of Jest under the hood. But what's less common is what I'm going to demo: pairing it up with API testing. To do that, you need a Node tool or library to actually make the API calls and let Jest handle the assertion side of the test. I'm demoing with Axios. There are some other options, but with Axios, you can go to their website and look at the docs; it's a very simple format for making API requests, either from a browser or from Node, so both frontend and backend apps can use it. With this, we can write tests very similar to the other two methods I demoed. Jumping to the code now, you'll see a folder called jest, and in that folder it looks almost identical to the Playwright one. The given steps actually could be identical if I had been cleaner when I set this up; the given is pulling in that API key. Then you open it up and you have two different test cases. The when is really the bread and butter of making Jest and Axios work. At the start, it is very similar: I have my two when scenarios of calling the current weather API and the forecast weather API. Once again, I need to pass a string for the location and the API key. The difference is that under the hood Axios is passing this header, so you need to make sure you pass this header. These three lines are going to be just like in Playwright; I'm really just making a GET request. But to handle an error, you do need to do a little configuration. I have this method called makeHttpRequest, and it just takes the method and URL string; and, this is extraneous here, but if you were to test some POST endpoints, you'd also need data. When you make the request with Axios, you get back a promise, and if that promise turns into an error and you don't do this step, the test is going to fail. Which is great: if you're expecting a 200 and you get a 400 back, you want it to error, and that works as expected. But when you start testing the sad paths, you need to catch that error, because we want to look at whether you got the error body that you expected and the error code that you expected. So I do a little mapping here, and because you do this little extra work, the actual test itself ends up looking just like your Playwright test.
With the exception that instead of test.describe, you just say describe. You can see I check the response body and the response location and do all those same assertions. And because I did that mapping for the error cases, I can once again catch the error that Axios would return and start investigating it without the test just ending. Once you've made that mapping, you can once again run it as a script, and with Jest, test:jest is how I set it up; it's just running the jest command.
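Here's a sketch of that helper and one sad-path test, under the assumption that the error mapping simply hands back Axios's err.response; the names are illustrative, not the repo's exact code.

```javascript
// jest/current-weather.test.js
const axios = require('axios'); // npm install axios

// Axios rejects on any non-2xx status, so map the rejection back to the
// response object; that lets sad-path tests assert on status and body.
const makeHttpRequest = async (method, url, data) => {
    try {
        return await axios({ method, url, data });
    } catch (err) {
        if (err.response) {
            return err.response; // the API answered, just not with a 2xx
        }
        throw err; // network/setup failures should still fail the test
    }
};

describe('current weather', () => {
    it('returns 400 with the documented error for a missing location', async () => {
        const url = `${process.env.API_URL}/current.json?key=${process.env.API_KEY}&q=`;
        const response = await makeHttpRequest('get', url);

        expect(response.status).toBe(400); // assumed per the API's docs
        expect(response.data.error.code).toBeDefined();
        expect(response.data.error.message).toBeDefined();
    });
});
```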
[00:30:11] Stephen Kilbourn So, second-to-last time jumping windows here, I'll go kill my Playwright report server and run test:jest, and you'll see that it goes through. The tests take a little longer than Playwright, probably very similar to running the Postman collections, but you get a report output that looks a lot like Postman's. You don't get a broken-down average of all the response times, but you do see how long each test takes, and then the total test run time of 5.5 seconds. With that, we've now run the same set of tests, I guess 14 total tests, 3 different ways, and you get a little bit different output from each one. Some are better than others, and it's a little bit different work, but we're able to test that same API with all 3 tools. The pain points of doing it with Jest and Axios: one, testing with this combo is less common, so you're not going to see as much support if you go on Stack Overflow or wherever with problems specific to your tests. That said, because Jest and Axios are both very common on their own, if you have specific issues with either of those, you do have support. The other thing is that it's a little unfamiliar to other QAs. It's great if you're the one person working on it and you get to set it up and hand it off to a developer, because developers tend to be more familiar with Jest. But if you need to onboard other people who aren't familiar with this setup or with Jest, you have a lot of training to do, so it might not make sense in that scenario. And the third thing is the configuration. You need to understand that Axios is designed to fail and throw an error if you don't get a happy 200 or 201 response, so you need to know to set up those helper functions that will catch the errors for you. But if you understand what's going on under the hood, it is possible to use this combo and not have to go through the trials of getting Playwright or Postman approved, or worrying about licensing with a paid Postman account. In some scenarios, it does work for your team. That's the demo I have. I know it's a lot of code pretty fast, trying to cover 3 tools in one presentation, but I wanted to leave some time for questions. The repo is public, and you can reach out to me later too if you play with it and it doesn't work.
[00:33:13] Joe Colantonio Thank you, Stephen, for your automation awesomeness. For links to everything of value we covered in this session, head on over to testguild.com/a513. And while you're there, make sure to check out our awesome sponsor of this episode, BrowserStack, and their Automate TurboScale solution to help you remove the frustration and time of setting up, scaling, and maintaining your functional automation. That's it for this episode of the Test Guild Automation podcast. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
[00:33:49] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com, where you become part of our elite circle driving innovation in software testing and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real-world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.
[00:34:33] Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.