
209: API Functional Testing with Klaus Neuhold & Dave Karow

By Test Guild


Want a way to create quick, more reliable functional tests without the overhead of full-blown UI tests? Want to develop functional tests that can also be reused as performance tests later in the software development lifecycle? If so, this episode is for you. We'll be Test Talking with Klaus and Dave from BlazeMeter all about API functional testing and the benefits this approach brings to software testing.

About Dave Karow & Klaus Neuhold

Dave Karow is passionate about digital performance, and about proactively pushing apps and infrastructure to prove readiness before revenue is put at risk. A veteran of Keynote Systems, Dynatrace and SOASTA, Dave knows not only that “performance matters” but that we are in the midst of a transformation where performance testing moves from being an optional step at the end of development into a continuous discipline that starts early in the development cycle and stands ready to catch any performance regression at any point thereafter.

Klaus Neuhold is a product manager at BlazeMeter. He has 15 years of experience as a product manager and developer, and has also run his own web consulting business.

Quotes & Insights from this Test Talk

  • When we say functional testing, the first place most people go in their heads is UI functional testing. In this case we're at the protocol level, testing backends rather than the GUI; that's why we call it API functional testing. The basic idea is that you're doing performance testing on BlazeMeter, but before you run, for example, a large-scale load test against your API, you want to make sure it actually works at all. So you do the functional tests first and then move on to the performance tests; that's the workflow we have in mind here.
  • What we do is show all the details for each of the individual requests. First, we give you a high-level overview of how many requests passed and failed. Then you can drill into the details of each individual request: the response code, the details of the request itself like the request body and headers, and the same for the response, including the response body. You also see details about any assertions you defined: did they pass or fail, and what happened? And one thing that's a little different from other functional testing tools, since we come out of the performance testing world, is that we keep some performance metrics in there as well, like response time and latency. So it's not just a functional test in the sense of pass/fail, response codes, and assertions; there's a performance aspect too.
  • For a functional test, you're really just running each request once, but we capture everything you could possibly want from that request. Then we show trends: if you run the test 20 times and there are 10 calls in each run, you see a bar that's partially green and partially red, representing how many calls in that iteration passed or failed. At a glance you can see where things broke and where they got fixed. It provides visibility; you always know the condition of your code at a glance.
  • We are looking into AI, obviously not just in the context of API functional testing but also in the context of performance testing. The approach we want to take is not "we have to do something with machine learning because it's the buzzword everybody's talking about." We want to come at it differently and ask ourselves: what are the problems we're trying to solve? Maybe machine learning is the answer, or maybe it's plain old statistical analysis that isn't as glamorous as machine learning.
  • In our architecture you have Taurus, an open source test automation framework that lets you execute tests on any kind of open source execution engine, whether that's a performance test engine like JMeter, Gatling, or Grinder, or a functional test engine (see the Taurus configuration sketch after this list).
  • Don't boil the ocean. Don't try to create the most amazing test suite; instead, hold yourself to the idea that every API endpoint has at least one smoke test you can run every time you touch that endpoint. It should be complete to the extent that you know what a well-formed response looks like and have some idea of how fast you want it to come back (see the first sketch after this list). You should never have a testing backlog, where you're building code without a test for it; if you build a piece of code, you should have a test for it.
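
To make the "one smoke test per endpoint" advice concrete, here is a minimal sketch in Python using the requests library (runnable with pytest). The service URL, the /status endpoint, and the "version" field are illustrative assumptions, not anything specific to BlazeMeter; the pattern is the point: one call per endpoint, functional assertions on the status code and the shape of the response, plus a rough response-time budget so performance stays visible even in a functional test.

    import requests

    BASE_URL = "https://api.example.com"  # hypothetical service under test

    def test_status_endpoint_smoke():
        """One smoke test per endpoint, run every time the endpoint changes."""
        resp = requests.get(f"{BASE_URL}/status", timeout=5)

        # Functional assertions: response code and a well-formed body.
        assert resp.status_code == 200
        body = resp.json()
        assert "version" in body  # we know what a well-formed response looks like

        # Keep performance visible even in a functional test: a rough
        # response-time budget for this endpoint.
        assert resp.elapsed.total_seconds() < 1.0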
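
And because the same scenario is meant to graduate from a functional check into a load test, here is a sketch of how such a request could be described for Taurus. It builds the configuration as a Python dict and writes it to YAML for the bzt command line; the execution/scenarios/requests layout follows the Taurus documentation, but the executor choice, URL, and assertion are illustrative assumptions.

    import yaml  # PyYAML, assumed installed alongside Taurus (bzt)

    config = {
        "execution": [{
            "executor": "jmeter",   # any supported engine: jmeter, gatling, grinder, ...
            "concurrency": 1,       # one virtual user for a functional-style run
            "iterations": 1,
            "scenario": "api-smoke",
        }],
        "scenarios": {
            "api-smoke": {
                "requests": [{
                    "url": "https://api.example.com/status",  # hypothetical endpoint
                    "assert": [{"subject": "http-code", "contains": ["200"]}],
                }],
            },
        },
    }

    with open("smoke.yml", "w") as fh:
        yaml.safe_dump(config, fh, sort_keys=False)

    # To execute:  bzt smoke.yml
    # Raising "concurrency" and adding "hold-for" later turns the same
    # scenario into a load test without rewriting it.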

Resources

  • State of the Union for API Testing Webinar. Register [here]
  • Taurus

Connect with Dave Karow & Klaus Neuhold

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.


Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}

267: Smart Test Execution with Eran Sher

Posted on 08/25/2019

Do you run an entire automation test for every build because you don’t ...

266: Automation Journey and TestNG with Rex Jones II

Posted on 08/18/2019

In this episode we’ll test talk with Rex Jones about his automation testing ...

265: TestProject a Community Testing Platform with Mark Kardashov

Posted on 08/11/2019

In this episode, we’ll talk to Mark Kardashov, CEO and Co-Founder of TestProject, ...