Why You Need to Do API Performance Testing with Patrick Poulin

By Test Guild

About This Episode:

It's almost 2021. Are you still not doing API Automation? Did you know that API performance testing is also critical for delivering reliable APIs to your customers? In this episode, Patrick Poulin, CEO of API Fortress, will explain how to unify your functional and nonfunctional API testing, allowing you to use existing functional tests as load tests. Listen up to learn ways you can gain a holistic understanding of your API performance and how to stress test endpoints and full API flows.

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Patrick Poulin


Patrick Poulin is the co-founder and CEO of API Fortress, an API testing automation platform that was built from the ground up for continuous API testing and unlimited functional uptime monitoring. Prior to API Fortress, Patrick worked as the API evangelist at Getty Images. Before that, he ran the retail vertical for Usablenet where they built the first mobile websites and apps for companies including Tesco, Target, Macy’s, MAC Cosmetics, and 70 other major brands.

Connect with Patrick Poulin

** Register for the Automation Guild Online Conference **

Joe [00:03:11] Hey, Patrick! Welcome to the Guild.

Patrick [00:03:15] Hey! Thanks for having me. Love a good trumpet intro.

Joe [00:03:19] Awesome. So, Patrick, it's been a long time since you've been on the show. What have you been up to? For the folks that don't know, just tell us a little bit more about yourself.

Patrick [00:03:26] Sure. Patrick Poulin, the CEO of API Fortress. We've been in the API testing space since, I'd say, 2015. We were mostly in stealth initially, but the last two years we've been public and doing really well, getting a lot of large and medium-sized customers.

Joe [00:03:42] So, Patrick, I think a lot of times when people think of API testing, they just automatically go to functional testing. So I thought we'd take a little bit different angle here and focus more on performance testing. So where does API testing come into the performance testing picture?

Patrick [00:03:58] Well, I mean, foundationally, APIs run a lot of these web platforms. My background: in my first job in tech, I was building some of the world's first mobile apps and mobile websites. And one of the bigger issues we would have, this was back in the iPhone 3GS days, when networks were not the 5G they are today, or allegedly 5G, everything was just slower. So if your mobile app had a one-megabyte payload, the app might crash; the device didn't have the memory to work with that. The page weight effect was significant. And now, the more statistics we have, the more we prove that you'll lose a conversion if the performance of your application isn't up to snuff, simply because customers have other options.

Joe [00:04:45] So, Patrick, I know you've been going to a lot of conferences as a vendor. Just curious what your pulse is on the industry. Are more and more folks using API testing for performance testing, or are they still neglecting even the functional testing part of their test strategy?

Patrick [00:05:01] I would say in the last year, which is why API Fortress sponsored close to 20 conferences this year, it's really changed, in that API testing has become a much larger conversation in the testing community. Everything is build first, test later. APIs have been around forever, but really, the inflection point happened about two years ago. Early on, we'd be talking to people and they wouldn't even really know what an API is; they would be testing anyway. SmartBear has some pretty interesting white papers out with statistics on how many large, medium, and small organizations don't have a formal API testing process. And in the last year, that's really exploded. We do presentations sometimes on how to go from manual API testing to automated API testing, and two years ago, when we were doing these presentations, there would be just 10 people; now it's standing room only, because it's a thing now. Now that we have so many solutions and have matured so much in terms of UI testing and mobile app testing, it's time for that next phase, which is the actual foundation of those things. It seems crazy to say that people aren't testing them, but I can tell you for a fact that it's pretty rare we speak to an organization that already has perfect functional testing and load testing of their APIs.

Joe [00:06:25] So I would think it would be easier to do performance testing or functional testing of an API, because you usually have the API done before the UI, and usually the API is what drives the UI. So what seems to be the resistance? Just education?

Patrick [00:06:39] Education and a lack of tooling. The tooling now exists. But, you know, LoadRunner (??) came out this year, and API Fortress only added load testing earlier this year. Before that, people were trying to use JMeter to do load testing, and you really had to program it by hand. That's one of the reasons API testing has had some stagnation: you really have to be a lot more developer-centric, so you hire people with a more specific skill set, and those people can be more difficult to find, and they already have a lot of other stuff to work on. So with load testing, the tooling has really come out as of late, and that's been significant. In our platform, the big focus for us is to not have completely disparate products that do different things. So in our platform, we generate an API test, you tweak it, and you can create this really smart end-to-end test. But the benefit is, why not just use that same test as your load test? Previously, people might use JMeter for load testing, but then have to write a completely different sort of test; if they were going to do functional testing, they'd use a completely different tool. So unifying those has been one of our main focal points. And I think that's why this has become a lot more prevalent, and it's just the beginning this year. It's going to be even bigger next year, I'm sure of it.

Joe [00:07:50] You seem to always be adding functionality, and I'm sure you're not just doing it willy-nilly, so obviously there had to have been demand for performance testing. I like how you said you're reusing your assets, almost all in the same ecosystem. So how hard is it to take an existing functional API test and then make it a performance test within your platform?

Patrick [00:08:11] It's the exact same thing. When you go to the load testing option, you just pick one of your tests and you say: run this using these load generators, at this rate, for this duration, ramping up over 30 seconds. We very specifically built it so it's all built off the same thing. That was obviously well received, and it's now a major part of the packages we sell today. What's been really interesting is seeing how people are now leveraging the APIs, because we're a platform first; we have tooling and command-line tooling, all that stuff, but ultimately everything in the platform can be hit by an API, whether it's execution, test generation, data, or notifications. So on an API level, you can actually execute your functional tests, and you can execute your load tests. And that's been really interesting to see. We built that just because we built everything API first, but it's been well received because people wanted to run these load tests as part of their CI pipeline. And it should be noted: when we're talking about these load tests, we're using our existing functional tests and actually running the functional testing along with the load testing. So you get all the statistics on the load testing, but we're also telling you, hey, if there's one object that can't keep up when there are 5,000 concurrent users, that's good to know. And customers report back to us that that's what they find. They'll find a memory leak, or find one database feeding one specific object that just can't keep up under a certain amount of load while everything else responds 200 OK. That specificity is significant. In this day and age, we expect more of ourselves.
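
The pattern Patrick describes, reusing a functional test as the load test so the assertions stay active under concurrency, can be sketched in plain Python. This is an illustration of the idea, not API Fortress's actual implementation; `run_load_test` and its parameters are hypothetical names:

```python
import concurrent.futures
import statistics
import time

def run_load_test(functional_check, concurrency=10, iterations=100):
    """Run one functional test repeatedly in parallel.

    Because the functional assertions stay active under load, you learn
    not just latency numbers but which checks start failing at scale."""
    def timed(_):
        start = time.perf_counter()
        ok = functional_check()          # True if all assertions passed
        return ok, time.perf_counter() - start

    with concurrent.futures.ThreadPoolExecutor(concurrency) as pool:
        results = list(pool.map(timed, range(iterations)))

    failures = sum(1 for ok, _ in results if not ok)
    latencies = sorted(t for _, t in results)
    return {
        "failures": failures,
        "median_s": statistics.median(latencies),
        "p95_s": latencies[int(len(latencies) * 0.95) - 1],
    }
```

A real `functional_check` would issue the HTTP call and validate the payload; here it is whatever callable you pass in, which is exactly the point: the same test drives both modes.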

Joe [00:09:49] So what differentiates a functional API test from a performance API test? Is it just that you're running more of them?

Patrick [00:09:56] No, no. So, a functional API test: everyone has a different definition of functional test, end-to-end test, integration test. But in API Fortress, when we talk about a functional test, that quite literally means something like proofreading a book: the entire payload being validated, not just that every object is there, but also that the data associated with each object is right. And on top of that, one of the big things we push in our platform is business logic testing, which you can only do with an API. So let's say you have a retailer and you search for the color red. We might find red shoes or a red dress. They're both going to have a size object, but that size object has a very different range depending on whether it's a pair of shoes or a dress. A pair of shoes would be like two to 22, although if it's in Europe the shoe size will be like 32 to 52. Those ranges can be validated on the API level, and you can't necessarily validate that in a UI Selenium test. There's always a way, but that's just not how people are doing it. That's the benefit of focusing on API tests: you can catch a lot more. To us, that's how we're defining functional tests. A load test, to us, is running a reproduction of normal user behavior at a much higher level. So for us, search, add to cart, check out, that would be one solid end-to-end test you create in our platform; then you run that exact thing 5,000 times over four minutes, for example. To us, that's the difference: the proofreading of a book versus what happens when you run 5,000 of those at once.
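
That "proofreading" idea, checking not just that fields exist but that their values obey business rules like the size ranges above, might look like this as a toy Python validator. The field names and ranges are illustrative, taken from Patrick's example, not from any real schema:

```python
def validate_product(payload):
    """'Proofread' a product payload: every expected field present,
    AND the data in range. The valid range is business logic: it depends
    on the product category and region, not just the JSON schema."""
    errors = []
    for field in ("id", "name", "category", "size"):
        if field not in payload:
            errors.append(f"missing field: {field}")

    size_ranges = {                     # business rules, not schema rules
        ("shoes", "US"): (2, 22),
        ("shoes", "EU"): (32, 52),
    }
    key = (payload.get("category"), payload.get("region", "US"))
    if key in size_ranges:
        lo, hi = size_ranges[key]
        if not (lo <= payload.get("size", -1) <= hi):
            errors.append(f"size {payload.get('size')} out of range {lo}-{hi}")
    return errors
```

A UI test would only see that *some* size rendered; an API-level check like this catches a size 99 shoe the moment the payload goes wrong.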

Joe [00:11:34] So, Patrick, another thing that usually comes into play is monitoring. Does the platform offer any sort of monitoring, or do you integrate with any monitoring systems, so that as you're generating that load you're able to see different metrics?

Patrick [00:11:45] Yeah, that's actually funny (??). Literally, in the last 12 hours, we just closed two new deals where we went head to head against other vendors, and one of the major reasons we won them is that monitoring is built into the platform. That was actually our initial intent with API Fortress: it's a functional uptime monitoring platform. With the shift to Agile and CI/CD, we had to talk a lot about that other stuff, but as of late it's shifting back to where we originally started: monitoring your internal APIs that are the basis of your internal platforms, or the basis of your customer-facing mobile apps. You need to know those APIs are up, and not just up or down, but functionally up. And so that's been interesting for us. The way we get people interested is by saying we can integrate automated testing into your CI pipeline; that's how we started the conversation with the last two deals we just closed. But then the next piece is, like I said: can I use this test as a load test? Yes. Alright, cool. Can I also use this test as a functional uptime monitor? Yes, you can. And so it's been really powerful for us. One thing that people who aren't specifically in this space don't realize is that APIs can be very sensitive. Most of them are not available outside of an intranet, outside of the firewall. So you need to either get your IPs whitelisted by the customer, which always gets pushback from the security ops team, or just deploy the whole platform internally. And that's a unique differentiator: API Fortress are container experts.
Our whole platform runs out of a Kubernetes or Docker container you deploy on-prem, and you have complete control and ownership of all the data and the results. Everything can hit your internal APIs and microservices that are behind your firewall, because it's all internalized. Nothing calls back out, so security ops doesn't struggle to approve it.

Joe [00:13:39] I actually didn't know you had that functionality. I love the concept of shifting right, where you've written your tests all the way through the lifecycle, you release into the wild, and now you're monitoring, gathering that feedback to inform how you develop again. So it's iterative. It sounds like you have the shift right in place as well. Now, how many companies are using that feature? Is that common?

Patrick [00:14:02] It's becoming more and more common. I would say of all the deals we've closed in the last two months, about 70 percent came in hot because of the monitoring capability. I'll give you one example of one customer. We actually don't just talk to QA teams; our platform is very well received by architects. So, people that are making a big push to APIs might be buying a big API gateway or ESB, like an Apigee or a MuleSoft or a Kong or a Tyk or a Mashery, what have you. They're buying one of those because they're making a big API shift, and it's a chief architect that's looking at that. But as they're purchasing that platform, they realize that those platforms don't have testing. They just have pieces of it; they don't truly have proper API testing. And so we get bought as part of that same purchase. It's one budget: hey, we got this platform to help us build, and we also need another piece to help us test. So we come in as part of that purchase, because the architects say, oh, cool, I can write an endpoint test using my own IDE and execute it from the command line. Then they can hand it off to the QA team, who pick up the endpoint tests that the developers and engineers created and use those as the foundation when they create end-to-end tests. A developer might test just the endpoint he built; QA is looking at how that whole API program works throughout a normal user flow. So a single test that does, again, search, add to cart, check out. That whole flow should be validated in one intelligent test. And then it gets handed off to the DevOps team, and the ops team can come in and say, hey, I'm going to schedule this as a monitor. And it's not just monitoring production.
We're seeing more and more interest in monitoring the different tiers of their staging environments: staging one, pre-prod, gold standard. They want to monitor all those pieces, because everyone is working off the staging environments, not off production. So a lot of this monitoring comes down to monitoring internal environments.

Joe [00:16:00] Awesome. So the full name of this podcast is the Performance and Site Reliability podcast, and I think this type of testing is critical for your site to be reliable. And like you said, you're able to monitor all your different environments. So how does that work? Is there a threshold, so that if you don't get a response back in a certain amount of time, three times in a row, people get alerted? How does that piece work?

Patrick [00:16:23] Well, with API Fortress, for the monitoring we're using a functional test. So if there's any functional issue, which can include just the size object not being in the correct range, if you choose that to be one of the options, you'll be notified of that. And for critical outages we have throttling there, so you don't get inundated with emails or Slack notifications or PagerDuty pings or Jira tickets; all of those are built into the platform as integrations, but there's throttling for major catastrophic events. You really choose whether you're just validating that it's up or down, whether you just want to validate the status (??) and that the latency is below a certain level, or, and this is what we always suggest, a proper functional test. That same test you would use to validate your latest deployment as part of your CI pipeline should be your monitor. And that's really easy with API Fortress, because we have a built-in scheduler: take the test, click on the schedule button, and you're done.
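
The monitor Patrick describes, a functional test on a schedule with a latency budget and throttled alerting, can be sketched like this. It is a minimal illustration of the pattern, not API Fortress's scheduler; the class and parameter names are made up:

```python
class FunctionalMonitor:
    """Reuse a functional test as an uptime monitor: run it on a schedule,
    flag failures or latency over budget, and throttle repeat alerts so a
    catastrophic outage doesn't inundate the team with notifications."""

    def __init__(self, check, latency_budget_s=1.0, alert_cooldown_s=300.0):
        self.check = check                    # callable: () -> (ok, latency_s)
        self.latency_budget_s = latency_budget_s
        self.alert_cooldown_s = alert_cooldown_s
        self._last_alert = float("-inf")
        self.alerts = []                      # in real life: Slack/PagerDuty/Jira

    def run_once(self, now):
        """One scheduled run; `now` is a timestamp in seconds."""
        ok, latency = self.check()
        breached = (not ok) or (latency > self.latency_budget_s)
        if breached and (now - self._last_alert) >= self.alert_cooldown_s:
            self._last_alert = now
            self.alerts.append((now, ok, latency))
        return ok, latency
```

The key point from the transcript survives in miniature: the `check` callable is the *same* functional test you run in CI, just invoked on a clock.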

Joe [00:17:18] So Patrick, I'm just trying to remember our previous conversation. I thought you brought something up where your tool had some sort of functionality that did a lot of things for you automatically, which other tools may not have built in. Was it the creation of...I forget what it was.

Patrick [00:17:34] Yes. We have test creation, so we can create a test from either a payload or from a spec file. It gives you a draft of the test, which is really beneficial for customers that are dealing with, I don't know if you've ever looked at Instagram or Spotify payloads, but they're very long and very nested, and that's incredibly annoying to set up programmatically if you're just working in your IDE. Our platform actually sets that all up for you and then gives you the ability to tweak it from there. One analogy I like to make, when people ask how you write a test: it's sort of the same way that we all learned how to create a website. You looked at a website you liked: oh, this thing has frames, that's amazing. And I probably just dated myself there, but you just right-click, go View Source, and suddenly you understand how the structure of the HTML works. That's how we like to do it on our platform. We create the foundation, then you edit it and make it smarter from there.

Joe [00:18:28] So, Patrick, I think a lot of people sometimes are worried about API testing because they don't necessarily know about all the other dependencies or the microservices that may be needed in order to make a full transaction. Do you usually just focus on one API, or do you create a flow that automatically touches all these other APIs?

Patrick [00:18:46] I mean, we always suggest that people create a test for every endpoint as well as creating entire user flows: take what 80 percent of your customers do and create that flow as a test on the API level. But the thing that's been really interesting: we've been building a lot of stuff since we last chatted, and we have this micro gateway we call AFtheM. It's in our GitHub; it's open source. What it can do is record API calls; it makes it very easy for you to record API transactions. We run into a lot of organizations where, you know, maybe someone has worked there for a year, but the company has been building APIs for 20 years. This actually just happened with an airline we've been talking to lately. They have a mobile app, and the person that does the mobile app testing has been told: alright, cool, we need you to also start testing the APIs. So he's starting this journey, and he knows how to make API calls, but he doesn't know all of the API calls that are involved in the mobile app. He was never given a proper set of documentation that says, hey, the mobile app uses these 19 endpoints. So what we're helping them with is: you can just set up our micro gateway. You do your normal mobile app testing, run your automated suite or just do it manually, and we will capture the API calls behind that and give them to you on a screen we call the logger. Then you can see all of the API transactions that are involved in your normal usage of the mobile app, and you can create tests right from that logger screen.
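
The recording-gateway idea can be shown in miniature: route the app's API calls through a recorder, and the log becomes your de facto endpoint documentation. This toy `CallLogger` is a stand-in for the concept, not AFtheM itself; all names here are hypothetical:

```python
class CallLogger:
    """A toy stand-in for the recording-gateway idea (not AFtheM itself):
    route your app's API calls through a recorder so you discover which
    endpoints the app actually uses, then write tests against that list."""

    def __init__(self, transport):
        # transport: callable (method, url, body) -> (status, response)
        self.transport = transport
        self.log = []

    def request(self, method, url, body=None):
        status, response = self.transport(method, url, body)
        self.log.append({"method": method, "url": url, "status": status})
        return status, response

    def unique_endpoints(self):
        """Every (method, url) pair seen: the missing documentation."""
        return sorted({(e["method"], e["url"]) for e in self.log})
```

Run your normal manual or automated app flows through it once, then read `unique_endpoints()` to learn, for example, that the app really does touch 19 endpoints.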

Joe [00:20:14] That's really cool. So, you talk about educating people going from manual to automated testing; at what point do you recommend performance testing when you're talking to customers now? Are you trying to shift it left, or is it still after the fact, once everything's built?

Patrick [00:20:30] No. I mean, we show them as part of the initial demo, because our load testing isn't meant to replace HP LoadRunner. It's meant to be fairly lightweight but very informative as part of a CI pipeline for testers. And so we show them that this exact same thing can all run as part of your CI pipeline. We try to get them hooked on it right away, because you catch a lot of stuff you didn't expect. There are a lot of memory leaks and just older services that aren't keeping up, and that should be seen before it has to go to the LoadRunner team. At a lot of large organizations, the load testing team is completely separate from the actual testing team. So we're trying to give some of that power back to the testing team.

Joe [00:21:18] No, I completely love this approach. Absolutely. The sooner you can test something, the better. And like you said, it's not an enterprise full-blown performance test, but that's good, because as they're developing, they can do this confidence (??) type of testing and fix things before it gets to the point where it takes months to fix. So do you see a lot of customers finding value that way?

Patrick [00:21:40] Yeah. We actually have a couple of customers that started talking to us just for this capability. They weren't even as interested in the functional testing; we've since won them over on that. But for some reason, they viewed performance as more important than functionality, which is kind of confusing, but that's the world they live in. Different companies have different goals, different types of APIs, and different business intentions.

Joe [00:22:06] So, Patrick, as I said, I know you're constantly iterating on your product; you're always making it better. And I didn't see the study, but I heard a study came out, by Forrester I believe, that did a comparison of different tools. Where did your tool rank compared to the other options out there? Not to say one is better than another, but different ones have different advantages. So what would you say is your advantage over maybe another tool?

Patrick [00:22:33] Sure. The thing with Forrester is that it takes a long time just to be recognized as a player in the space. They actually haven't created an API testing wave in about three years, and they've said they're creating a new one this year. We're very motivated to get them to create a new wave. We would love to see our tool compared with all of the players out there, because we know we built a better thing from scratch. Their tools have 12 years of iteration and a lot of great features, but what we have is what people need for today's Agile DevOps workflows. That's what we're built for. We are a platform that's API enabled. We are not a downloadable application; we have downloadable applications, but that's not what we are. So we're very confident in that head-to-head comparison. Now, I don't know when Forrester is going to do their new wave, but we were not on the original wave they created three years ago, because we were still a bit in our infancy.

Joe [00:23:32] Cool. When you say API testing platform, can you just explain again what that means, for folks that may not know?

Patrick [00:23:40] API Fortress is entirely focused on the testing and quality of your API program. That's our entire focus. There are other tools out there that offer the option; they'll have a tab that says, why don't you write some JavaScript in here, and you can do some testing as well. And it's a very good start. You even have the ability to upload Postman collections into our platform, because Postman collections are awesome. Postman is awesome; Postman is one of the main reasons why we've had such a good year, because they've helped educate the market on what an API is. With that said, we think that, due to our complete, unfettered focus on testing, we have more capabilities for those people that are really looking to create strong automation suites dedicated to APIs.

Joe [00:24:29] So you've basically built from the ground up focused on API testing; that's your bread and butter. But it's a full-stack kind of solution as well, covering you from the beginning all the way to the end.

Patrick [00:24:42] Exactly. We don't have a customer that doesn't use Postman. Everyone uses Postman, and it's great; it's great for what it is. But when customers are ready to take their testing to the next level, we've had great success in being part of those conversations and having those opportunities. Again, without Postman, I'm not sure we would have had the year that we had, because they have truly helped us educate the market about what an API is, and how important and foundational APIs are to everyone's applications. Postman has been a real godsend for all of us in the API community.

Joe [00:25:22] Awesome. So, Patrick, any tips for folks that are trying to get more into performance testing with API tests? Is there a way they can develop their APIs to be more performant, or a way to develop their scripts so they make better performance tests?

Patrick [00:25:37] I mean, I'm not an engineer; I'm sure there are all sorts of tricks engineers can suggest. I used to be a lot more technical than I am now. I'm getting dumber every day, the more I have to do the CEO role. But what I can say is that the one thing they can do is just test at every phase. Every release, test, get those metrics, get those numbers, and you'll have comparables. Then you can see if things are getting slower over time. I'm sure we've all seen that slow degradation happen to some of our favorite applications. I remember back in the Winamp days. Winamp 3 was so good; 3.1 was amazing. Then AOL bought it, it became version four, and suddenly it was a heavy beast. People stopped wanting to use it, because the performance wasn't there anymore. It wasn't the lightweight, great MP3 player that it once was. And then in version five, they rolled back to version three, basically. There's something to be said for performance, and if you don't have those metrics, you won't realize that it's getting slower over time. You need to find a way to measure it, keep a history of it, and make sure everyone's aware of it.
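
The advice here, keep a history of per-release metrics so slow degradation is visible, reduces to a small comparison over that history. A sketch of one possible check (the function name, window, and tolerance are arbitrary choices, not a standard):

```python
import statistics

def detect_regressions(history, window=3, tolerance=1.2):
    """history: list of (release, median_latency_ms) in chronological order.
    Flag any release whose latency exceeds the mean of the previous
    `window` releases by more than `tolerance`x. This is how a slow,
    release-by-release degradation becomes visible instead of invisible."""
    flagged = []
    for i in range(window, len(history)):
        baseline = statistics.mean(ms for _, ms in history[i - window:i])
        release, ms = history[i]
        if ms > baseline * tolerance:
            flagged.append(release)
    return flagged
```

Run it in CI against the stored numbers from each release, and the "Winamp 4 moment" gets caught at the build that introduced it rather than in user churn.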

Joe [00:26:41] Okay, Patrick, before we go, is there one piece of actionable advice you can share with someone to help them with their API testing efforts? Also, what's the best way to find and contact you, or learn more about API Fortress?

Patrick [00:26:53] Sure. We actually created API Testing University, which is sort of a subset of our docs; you can just go to apitestinguniversity.com. We're trying to put up more and more of the information we use to train our customers and the people we work with. They ask us for advice all the time, because we've been doing this forever; we're the experts in the space. So we're helping put more content out there, the sort of stuff we do in presentations, on that website. Take a look there so you can better understand what basic API testing is, and then what more advanced API testing is, where we start talking about being data-driven: not using a small CSV, but actually using a dynamic, huge database, or using an API to feed data into your API tests. Those more advanced use cases are really how you get from a C-minus test to an A-plus test. Apifortress.com, and I'm Patrick@ if you want to email me.
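
The data-driven idea Patrick closes on, feeding rows from a CSV (or eventually a database or another API) into the same test, can be sketched like this. The fixture, column names, and helper functions are illustrative assumptions:

```python
import csv
import io

def data_driven_cases(rows):
    """Yield (query, min_results) from any iterable of dicts: a CSV today,
    a database cursor or another API's JSON tomorrow. The test logic
    doesn't change when the data source does."""
    for row in rows:
        yield row["query"], int(row["min_results"])

def run_search_tests(search_fn, rows):
    """Run the same API test once per data row; return the queries that failed."""
    failures = []
    for query, min_results in data_driven_cases(rows):
        if len(search_fn(query)) < min_results:
            failures.append(query)
    return failures

# A small inline fixture; in practice this could be thousands of rows.
CSV_FIXTURE = "query,min_results\nred shoes,1\nblue dress,1\n"
```

Swapping `csv.DictReader` for a database cursor or an HTTP call that returns a list of dicts is the "C-minus to A-plus" step: same test, much broader coverage.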

Full Transcript Patrick Poulin

 

Joe [00:03:11] Hey, Patrick! Welcome to the Guild.

Patrick [00:03:15] Hey! Thanks for having me. Love a good trumpet intro.

Joe [00:03:19] Awesome. So, Patrick, it's been a long time since you've been on the show. What you've been up to? For the folks that don't know, just tell us a little bit more about yourself.

Patrick [00:03:26] Sure. Patrick Poulin, the CEO of API Fortress. We've been in the API testing space since, I'd say 2015. We're mostly stealth initially. But the last two years we've been public and doing really well, getting a lot of large and medium-sized customers.

Joe [00:03:42] So, Patrick, I think a lot of times when people think of API testing, they just automatically go to functional testing. So I thought we'd do a little bit different thought process here and focus more on performance testing. So where does API testing come into the performance testing picture?

Patrick [00:03:58] Well, I mean, foundationally, API runs a lot of these web platforms. I know my background actually came from I was building my first job in tech. I was building like the world's first mobile labs and mobile websites. And one of the bigger issues we would have was this was back in the iPhone 3GS days when networks were not the 5G they are today or allegedly 5G, but everything is just slower. So if your mobile app had a one-megabyte download or payload as a megabyte, the app might crash. The device didn't have the memory to work with that. There was a lot of…the weight effect was significant. And now the more statistics we have, the more we prove that you'll lose a conversion if the performance of your application isn't up to snuff for customers simply because they have other options.

Joe [00:04:45] So, Patrick, I know you've been going to a lot of conferences as a vendor. Just curious to know what your pulse is on the industry. Are more and more folks using API testing for performance testing, or are they still even neglecting in the functional testing part of their test strategy?

Patrick [00:05:01] I would say in the last year, which is why I get Fortress sponsored close to 20 conferences this year, it's really changed in that API testing has become a much larger conversation in the testing community. Everything is build first test later. And APIs we've been around forever but really, the inflection point happened about two years ago. And so early last year, we'd be talking to people, they wouldn't even really know what an API is. They would be testing anyway. We sponsor SmartBear. SmartBear has some pretty interesting white papers out about the statistics of how many large, medium, and small organizations that don't have a formal API testing process. And in the last year, that's really sort of exploded. Like we do presentations sometimes on how to go from manual API testing to automated API testing. And it's two years ago when we were doing these presentations there would just be 10 people and now it's standing remotely because it's a thing now. Now that we've sort of have so many solutions and been so matured in terms of UI testing and mobile app testing, it's time for that next phase, which is the actual foundation of those things. It seems crazy to say that people aren't testing them, but I can tell you for a fact that it's pretty rare. We speak to an organization that has perfect functional testing and load testing of their APIs already.

Joe [00:06:25] So I would think it would be easier to do performance testing or functional testing of API because you usually have that done before the UI and usually API is a drive in the UI. So what seems to be the resistance, just education?

Patrick [00:06:39] Education and lack of tooling. The tooling now exists, but, I mean, LoadRunner(??) only came out with it this year, and API Fortress only added load testing earlier this year. Before that, people were trying to use JMeter to do load testing, and you really had to program it by hand. That's one of the reasons API testing has had some stagnation: you have to be a lot more developer-centric, so you hire people with a more specific skill set, and those people can be more difficult to find, and they already have a lot of other stuff to work on. So with load testing, the tooling has really come out as of late, and that's been significant. In our platform, the big focus for us is to not have completely disparate products that do different things. You generate an API test, you tweak it, and you can create this really smart end-to-end test. The benefit is: why not just use that same test as your load test? Previously, people might use JMeter for load testing but then have to write a completely different sort of test, and if they were going to do functional testing, they'd use a completely different tool. So unifying those has been one of our main focal points, and I think that's why things have become a lot more prevalent. This year is just the beginning; it's going to be even bigger next year, I'm sure of it.

Joe [00:07:50] You seem to always be adding functionality, and I'm sure you're not doing it willy-nilly, so obviously there had to have been demand for performance testing. I like how you said you're reusing your assets, all in the same ecosystem. So how hard is it to take an existing functional API test and then make it a performance test within your platform?

Patrick [00:08:11] It's the exact same thing. When you go to the load testing option, you just pick one of your tests and say: run this using these load generators, with this many virtual users, for this duration, ramping up over 30 seconds. We very specifically built it so it's all built off the same thing. That was obviously well received, and it's now a major part of the packages we sell today. What's been really interesting is seeing how people are leveraging the APIs, because we're a platform first. We have tooling, command-line tooling, all that stuff, but ultimately everything in the platform can be hit by an API, whether it's execution, test generation, data, or notifications. So on an API level you can execute your functional tests, and you can also execute your load tests. We just built that because we built everything API first, but it's been really interesting to see how well received it's been, that people wanted to run these load tests as part of their CI pipeline. And it should be noted: when we're talking about these load tests, we're using our existing functional tests and actually running the functional testing along with the load testing. So you get all the statistics on the load testing, but we're also telling you, hey, there's one object that can't keep up when there are 5,000 concurrent users. That's good to know, and customers report back to us that that's what they find. They'll find a memory leak, or one database feeding one specific object that just can't keep up under a certain amount of load while everything else responds 200 OK. That specificity is significant. In this day and age, we expect more of ourselves.
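The idea of reusing a functional test as the body of a load test can be sketched in a few lines. This is a hypothetical illustration, not API Fortress's actual implementation: `call_search_api` is a stand-in for a real HTTP call, and the same per-request assertions run under concurrency while latency statistics are collected.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_search_api(query):
    """Hypothetical stand-in for a real HTTP call to your own endpoint."""
    time.sleep(0.001)  # simulate network latency
    return {"status": 200, "products": [{"id": 1, "name": "red shoes"}]}

def functional_test(query):
    """The functional test: validates the payload, not just the status code."""
    start = time.perf_counter()
    body = call_search_api(query)
    latency = time.perf_counter() - start
    assert body["status"] == 200
    assert all("id" in p and "name" in p for p in body["products"])
    return latency

def load_test(iterations=200, workers=20):
    """The load test: the *same* functional test, run concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(lambda _: functional_test("red"), range(iterations)))
    return {"runs": len(latencies), "p50": statistics.median(latencies), "max": max(latencies)}

report = load_test()
```

Because every concurrent run still executes the full assertions, a failure under load pinpoints which object or field can't keep up, which plain throughput tools don't tell you.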

Joe [00:09:49] So what differentiates a functional API test from a performance API test? Is it just that you're running more of them?

Patrick [00:09:56] No, no. Everyone has a different definition of functional test, end-to-end test, integration test. But in API Fortress, when we talk about a functional test, it quite literally means something like proofreading a book: the entire payload being validated, not just that every object is there, but also that the data associated with each object is right. On top of that, one of the big things our platform pushes is business logic testing, which you can only do at the API level. Let's say you have a retailer and you search for the color red. We might find red shoes or a red dress. They're both going to have a size object, but that size object has a very different valid range depending on whether it's a pair of shoes or a dress. A pair of shoes would be like 2 to 22, although in Europe the shoe size would be like 32 to 52. Those ranges can be validated on the API level, and you can't necessarily validate that in a UI Selenium test. There's always a way, but that's just not how people are doing it. That's the benefit of focusing on API tests: you can catch a lot more. To us, that's how we're defining functional tests. A load test, to us, is running a reproduction of normal user behavior at a much higher volume. So search, add to cart, check out: that would be one solid end-to-end test you create in our platform, and then you run that exact thing 5,000 times over four minutes, for example. To us, that's the difference: the proofreading of a book versus what happens when you run 5,000 of those at once.
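The size-range example above is easy to express as code. This is a minimal sketch; the categories, regions, and numeric ranges are illustrative, not a real retailer's rules:

```python
# Category- and region-dependent valid ranges for the "size" field.
# These bounds are made up for illustration.
SIZE_RANGES = {
    ("shoes", "US"): (2, 22),
    ("shoes", "EU"): (32, 52),
    ("dress", "US"): (0, 24),
}

def validate_product(product):
    """Validate presence of fields *and* business logic, the kind of check
    an API-level functional test can do and a UI test usually can't."""
    errors = []
    for field in ("category", "region", "size"):
        if field not in product:
            errors.append(f"missing field: {field}")
    if not errors:
        bounds = SIZE_RANGES.get((product["category"], product["region"]))
        if bounds is None:
            errors.append(f"unknown category/region: {product['category']}/{product['region']}")
        elif not bounds[0] <= product["size"] <= bounds[1]:
            errors.append(f"size {product['size']} outside {bounds}")
    return errors

assert validate_product({"category": "shoes", "region": "EU", "size": 38}) == []
assert validate_product({"category": "shoes", "region": "US", "size": 38}) != []
```

The point is that the same `size` field is valid or invalid depending on context elsewhere in the payload, which is exactly the business logic a schema check alone misses.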

Joe [00:11:34] So Patrick, another thing that usually comes into play is monitoring. Does the platform offer any sort of monitoring, or do you integrate with any monitoring systems, so that as you're applying that load you're able to see different metrics?

Patrick [00:11:45] Yeah, that's actually a funny thing. Literally in the last 12 hours, we closed two new deals where one of the major reasons we won, going head to head against other vendors, was that monitoring is built into the platform. That was actually our initial intent with API Fortress: it's a functional uptime monitoring platform. With the shift to Agile and CI/CD, we had to talk a lot about that other stuff, but as of late it's shifting back to where we originally started: monitoring your internal APIs that are the basis of your internal platforms or your customer-facing mobile apps. You need to know those APIs are up, and not just up or down, but functionally up. So that's been interesting for us. The way we get people interested is by saying we can integrate automated testing into your CI pipeline; that's how the conversation started with the last two deals we closed. Then the next piece is, like I said: but can I use this test as a load test? Yes. Alright, cool. Can I also use this test as a functional uptime monitor? Yes, you can. It's been really powerful for us. The one thing people don't realize, if they're not specialists in the space, is that APIs can be very sensitive. They are often not available outside of an intranet, outside the firewall. So you need to either get your IPs whitelisted by the customer, which always gets pushback from the security ops team, or deploy the whole platform internally. And that's a unique differentiator for API Fortress: containerization.
Our whole platform runs out of a Kubernetes or Docker container you deploy on-prem, and you have complete control and ownership of all the data and the results. Everything can hit your internal APIs and microservices behind your firewall because it's all internalized. Nothing goes back out, so security ops doesn't struggle to approve it.

Joe [00:13:39] I actually didn't know you had that functionality. I love the concept of shifting right, where you wrote your test, took it all the way through the lifecycle, released it into the wild, and now you're monitoring it, gathering feedback to inform how you develop again. It's iterative. So it sounds like you have the shift right in place as well. Now, how many companies are using that feature? Is that common?

Patrick [00:14:02] It's becoming more and more common. I would say in the last two months, of all the deals we've closed, about 70 percent really came in hot because of the monitoring capability. I'll give you one example from one customer. We actually don't just talk to QA teams; our platform is very well received by architects. People that are making a big push to APIs might be buying a big API gateway or ESB, like an Apigee or a MuleSoft or a Kong or a Mashery (??), what have you. They're buying one of those because they're making a big API shift, and it's a chief architect that's looking at that. But as they're purchasing that platform, they realize those platforms don't have testing; they just have pieces of it, but they don't truly have proper API testing. So we get bought as part of that same purchase. It's one budget: hey, this can help us build, and we also need another piece to help us test. The architects say, oh, cool, I can write an endpoint test in my own IDE and execute it from the command line, but then they can hand it off to the QA team, who pick up the endpoint tests the developers and engineers created and use those as the foundation when they create end-to-end tests. A developer might test the endpoint he built; QA is looking at how the whole API program works throughout a normal user flow. So a single test that does, again, search, add to cart, check out: that whole flow should be validated in one intelligent test. Then it gets handed off to the DevOps team, and the ops team can come in and say, hey, I'm going to schedule this as a monitor. And it's not just monitoring production.
We're seeing more and more interest in monitoring the different tiers of their staging environments: staging one, pre-prod, gold standard. They want to monitor all those pieces, because everyone is working off the staging environments, not off production. So a lot of this monitoring comes down to monitoring internal environments.

Joe [00:16:00] Awesome. So the full name of this podcast is the Performance and Site Reliability podcast, and I think this type of testing is critical for your site to be reliable. And like you said, you're able to monitor all your different environments. So how does that work? Is there a threshold such that if you don't get a response back in a certain amount of time, three times in a row, someone gets alerted? How does that piece work?

Patrick [00:16:23] Well, with API Fortress, for the monitoring we're using a functional test. So if there's any functional issue, which can include just the size object not being in the correct range, if you choose that as one of the options, you'll be notified of it. And if there's a critical outage, we have throttling there, so you don't get inundated with emails or Slack notifications or PagerDuty pings or Jira tickets. All of those are built into the platform as integrations, but there's throttling for major catastrophic events. You really choose whether you're just validating that it's up or down, whether you just want to validate that the status code is right and the latency is below a certain level, or, and this is what we always suggest, a proper functional test. That same test you use to validate your latest deployment as part of your CI pipeline should be your monitor. And that's really easy with API Fortress because we have a built-in scheduler: take the test, click on the schedule button, and you're done.
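The two mechanics described here, a functional test promoted to a monitor and throttled alerting for catastrophic events, can be sketched generically. This is an illustrative toy, not API Fortress's scheduler; `ThrottledAlerter.alert` stands in for an email, Slack, or PagerDuty integration:

```python
import time

class ThrottledAlerter:
    """Send at most one alert per `cooldown` seconds so a catastrophic
    outage doesn't flood the channel."""
    def __init__(self, cooldown=300):
        self.cooldown = cooldown
        self.last_sent = float("-inf")
        self.sent = []  # stand-in for email / Slack / PagerDuty delivery

    def alert(self, message, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_sent >= self.cooldown:
            self.sent.append(message)
            self.last_sent = now

def monitor_once(run_functional_test, alerter, latency_budget=0.5, now=None):
    """Run the same functional test used in CI, alerting on a functional
    failure or on latency over budget."""
    start = time.perf_counter()
    try:
        run_functional_test()
        ok = True
    except AssertionError as exc:
        alerter.alert(f"functional failure: {exc}", now=now)
        ok = False
    latency = time.perf_counter() - start
    if ok and latency > latency_budget:
        alerter.alert(f"latency {latency:.3f}s over budget", now=now)
    return ok
```

A scheduler would then call `monitor_once` on an interval against each environment (staging, pre-prod, production) with the same test body.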

Joe [00:17:18] So Patrick, I'm just trying to remember our previous conversation. I thought you brought up that your tool had some functionality where it did a lot of things for you automatically that other tools may not have built in. Was it the creation of… I forget what it was.

Patrick [00:17:34] Yes, test creation. We can create a test from either a payload or from a spec file, and it gives you a draft of the test, which is really beneficial for customers dealing with, I don't know if you've ever looked at Instagram or Spotify payloads, but they're very long and very nested, and that's incredibly annoying to set up programmatically if you're just looking at your IDE. Our platform sets that all up for you and then gives you the ability to tweak it from there. One analogy I like to make is that people ask, how do you write a test? It's sort of the same way we all learned how to create a website. You looked at a website and thought, oh, this thing has frames, that's amazing. I probably just dated myself there. But you right-click, go to View Source, and suddenly you understand how the structure of the HTML works. That's how we like to do it on our platform: we create the foundation, then you edit it and make it smarter from there.
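Generating a draft test from a payload boils down to walking the JSON and emitting one check per field. Here is a minimal sketch of that idea, with a made-up assertion syntax; it is not the format API Fortress actually generates:

```python
def draft_assertions(payload, path="$"):
    """Walk a (possibly deeply nested) payload and emit one draft
    assertion per leaf field: a starting point to tweak, not a finished test."""
    checks = []
    if isinstance(payload, dict):
        for key, value in payload.items():
            checks += draft_assertions(value, f"{path}.{key}")
    elif isinstance(payload, list):
        if payload:  # draft against the structure of the first element
            checks += draft_assertions(payload[0], f"{path}[0]")
    else:
        checks.append(f"assert exists {path} ({type(payload).__name__})")
    return checks

sample = {"user": {"id": 7, "tags": ["a", "b"]}, "ok": True}
checks = draft_assertions(sample)
```

For a deeply nested Spotify-sized payload, this kind of generation produces hundreds of draft checks in one pass, which you then tighten by hand with ranges and business logic.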

Joe [00:18:28] So Patrick, I think a lot of people are sometimes worried about API testing because they don't necessarily know about all the other dependencies or microservices that may be needed to make a full transaction. Do you usually just focus on one API, or do you create a flow that automatically touches all these other APIs?

Patrick [00:18:46] We always suggest that people create a test for every endpoint as well as creating entire user flows, which is what 80 percent of our customers do: creating that flow as a test on the API level. But the thing that's been really interesting, and we've been building a lot since we last chatted, is our micro gateway. We call it AFtheM; it's in our GitHub, and it's open source. What it can do is record API calls; it makes it very easy for you to record API transactions. We run into a lot of organizations where, you know, maybe someone has worked there for a year, but the company has been building APIs for 20 years. This actually just happened with an airline we've been talking to lately. They have a mobile app, and the person that does the mobile app testing has been told, alright, cool, we need you to also start testing the APIs. So he's starting this journey. He knows how to make API calls, but not all of the API calls that are involved in the mobile app, because he was never given a proper set of documentation that says, hey, the mobile app uses these 19 endpoints. So what we're helping them with is: you can just set up our micro gateway, do your normal mobile app testing, run your automated suite or just do it manually, and we will capture the API calls behind it and show them to you on a screen we call the logger. Then you can see all of the API transactions involved in normal usage of the mobile app, and you can create tests right from that logger screen.
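The capture idea, route traffic through something that logs every call so you can discover which endpoints an app actually uses, can be shown with a toy wrapper. This is a hypothetical sketch of the pattern, not AFtheM itself; `FakeBackend` stands in for the real upstream API:

```python
class RecordingClient:
    """Wraps any API client and logs every call that passes through:
    a toy version of the capture-proxy idea."""
    def __init__(self, client):
        self._client = client
        self.log = []

    def request(self, method, path, **kwargs):
        response = self._client.request(method, path, **kwargs)
        self.log.append({"method": method, "path": path, "status": response["status"]})
        return response

class FakeBackend:
    """Stand-in for the real upstream API."""
    def request(self, method, path, **kwargs):
        return {"status": 200, "body": {}}

recorder = RecordingClient(FakeBackend())
# Drive the "app" normally; the recorder discovers which endpoints it uses.
recorder.request("GET", "/v1/search?q=red")
recorder.request("POST", "/v1/cart")
endpoints = {(entry["method"], entry["path"].split("?")[0]) for entry in recorder.log}
```

After a manual or automated session, `endpoints` is exactly the undocumented inventory the airline tester was missing, and each logged transaction is a candidate draft test.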

Joe [00:20:14] That's really cool. So you talk about educating people going from manual to automated testing. At what point do you recommend performance testing when you're talking to customers now? Are you trying to shift it left, or is it still after the fact, once everything's built?

Patrick [00:20:30] No, I mean, we show them as part of the initial demo, because our load testing isn't meant to replace HP LoadRunner. It's meant to be fairly lightweight but very informative as part of a CI pipeline for testers. That's why we show them that this exact same thing can all run as part of your CI pipeline. We try to get them hooked on it right away, because you catch a lot of stuff you didn't expect. There are a lot of memory leaks and just older services that aren't keeping up, and that should be seen before it has to go to the LoadRunner team. At a lot of large organizations, the load testing team is completely separate from the actual testing team. So we're trying to give some of that power back to the testing team.

Joe [00:21:18] No, I completely love this approach. Absolutely. The sooner you can tell something, the better, so you might as well test it. And like you said, it's not an enterprise full-blown performance test, but that's good, because as they're developing they can do this confidence (??) type testing and fix things before they get to the point where they take months to fix. So do you see a lot of customers finding value that way?

Patrick [00:21:40] Yeah. We actually have a couple of customers that started talking to us just for this capability. They weren't even as interested in the functional testing; we've since won them over on that. But for some reason they viewed performance as more important than functionality, which is kind of confusing, but that's the world they live in. Different companies have different goals, different types of APIs, and different business intentions.

Joe [00:22:06] So, Patrick, as I said, I know you're constantly iterating on your product, always making it better. And I didn't see the study, but I heard one came out from Forrester, I believe, that did a comparison of different tools. Where did your tool rank compared to the other options out there? Not to say one is better than the other, since different ones have different advantages, but how would you say yours has an advantage over another tool?

Patrick [00:22:33] Sure. The thing with Forrester is that it takes a long time just to be recognized as a player in the space. They actually haven't created an API testing Wave in about three years, and they've said they're creating a new one this year. We're very motivated to get them to create that new Wave, because we would love to see our tool compared with all the players out there. We know we built a better thing from scratch. Their tools have 12 years of iteration and a lot of great features, but what we have is what people need for today's Agile DevOps workflows. That's what we're built for. We are a platform that's API enabled; we're not a downloadable application. We have downloadable applications, but that's not what we are. So we're very confident in a head-to-head comparison. Now, I don't know when Forrester is going to do their new Wave, but we were not on the original Wave they created three years ago because we were still a bit in our infancy.

Joe [00:23:32] Cool. When you say API testing platform, can you explain again what that means for folks that may not know?

Patrick [00:23:40] API Fortress is entirely focused on the testing and quality of your API program. That's our entire focus. There are other tools out there that offer the option; they'll have a tab that says, why don't you write some JavaScript in here and you can do some testing as well. And that's a very good start. You even have the ability to upload Postman collections into our platform, because Postman collections are awesome. Postman is awesome; it's one of the main reasons we've had such a good year, because they've helped educate the market on what an API is. With that said, we think that due to our complete, unfettered focus on testing, we have more capabilities for people that are really looking to create strong automation suites dedicated to APIs.

Joe [00:24:29] So you've basically built from the ground up focused on API testing. That's your bread and butter. But it's a full-stack kind of solution as well, covering you from the beginning all the way to the end.

Patrick [00:24:42] Exactly. I mean, we don't have a customer that doesn't use Postman. Everyone uses Postman, and it's great for what it is. But when customers are ready to take their testing to the next level, we've had great success being part of those conversations and having those opportunities. Again, without Postman, I'm not sure we would have had the year that we had, because they have truly helped us educate the market about what an API is, how important it is, and how foundational APIs are to everyone's applications. Postman has been a real godsend for all of us in the API community.

Joe [00:25:22] Awesome. So, Patrick, any tips for folks that are trying to get more into performance testing with API tests? Is there a way they can develop their APIs to be more performant, or a way to develop their scripts so they make better performance tests?

Patrick [00:25:37] I mean, I'm not an engineer; I'm sure there are all sorts of tricks engineers can suggest. I used to be a lot more technical than I am now. I'm getting dumber every day the more I have to do the CEO role. But what I can say is the one thing they can do is just test at every phase. Test every release, get those metrics, get those numbers, and you'll have comparables. Then you can see if things are getting slower over time. I'm sure we've all seen that slow degradation happen to some of our favorite applications. I remember back in the Winamp days: Winamp 3 was so good, 3.1 was amazing. Then AOL bought it, it became version 4, and suddenly it was a heavy beast. People stopped wanting to use it because the performance wasn't there anymore; it wasn't the lightweight, great MP3 player it once was. Then with version 5, they basically rolled back to version 3. There's something to be said for performance, and if you don't have those metrics, you won't realize that it's getting slower over time. You need to find a way to measure it, keep a history of it, and make sure everyone's aware of it.

Joe [00:26:41] Okay, Patrick, before we go, is there one piece of actionable advice you could share with someone to help them with the API testing efforts? Also, the best way to find and contact you or learn more about API Fortress.

Patrick [00:26:53] Sure. We actually created API Testing University, which is sort of a subset of our docs. If you just go to apitestinguniversity.com, you can get there too. We're putting more and more of the information we use to train our customers and the people we work with out there; they ask us for advice all the time, because we've been doing this forever and we're experts in the space. So we're putting out more content, the sort of stuff we do in presentations, on that site. Take a look there so you can better understand what basic API testing is and then what more advanced API testing is, where we start talking about being data-driven: not using a small CSV, but actually using a dynamic, huge database, or using an API to feed data into your API tests. Those more advanced use cases are really how you get from a C-minus test to an A-plus test. Apifortress.com, and I'm Patrick@ if you want to email me.
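The data-driven idea Patrick closes with, separating the test body from its data source so a small CSV can later be swapped for a database or another API, looks roughly like this. The CSV columns and the stubbed `search` call are made up for illustration:

```python
import csv
import io

# A small CSV works as a start; the point is that the test body below
# doesn't change when this source becomes a database query or an API call.
DATA = """query,min_results
red,1
blue,1
"""

def fetch_rows(source):
    """Parse the data source into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(source)))

def search(query):
    """Hypothetical API call, stubbed here for the sketch."""
    return {"results": [{"name": f"{query} shoes"}]}

def run_data_driven(rows):
    """Run the same assertion for every row; return the queries that failed."""
    failures = []
    for row in rows:
        body = search(row["query"])
        if len(body["results"]) < int(row["min_results"]):
            failures.append(row["query"])
    return failures

assert run_data_driven(fetch_rows(DATA)) == []
```

Swapping `fetch_rows` for a database cursor or a paginated API feed is how a handful of hand-picked cases grows into the dynamic, high-coverage suite he describes.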

 

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

 
