116: Modern Testing for the 21st Century with Andy Tinkham

By Test Guild

On today’s show we’ll test talk with Andy Tinkham, a software testing expert with over 20 years’ experience, about his views on modern testing. Andy will discuss how testing has evolved, what we as testers need to know, and how we are all responsible for helping to shape the future of testing. Within the philosophy of modern testing is a series of principles that Andy has distilled into four pillars: Context First, Testers Are Not Robots, Using Multiple Lenses for Test Design, and Providing Useful and Timely Information.


If you’re a tester who wants to continue being a tester in the 21st century, you’ll want to listen up! Check it out.

About Andy Tinkham

Andy has nearly 20 years of experience in software testing, predominantly focused on automated testing and performance testing. He spent a few years in graduate school at the Florida Institute of Technology, where he earned a master’s degree in software engineering and progressed toward a PhD in computer science, both focused on software testing. Recently, he’s resumed working on his dissertation in an effort to finish his degree.

Quotes & Insights from this Test Talk

  • Absolutely testing is changing, and we haven't seen the full extent of that yet. The rise of agile methodology, the headlines we see about big tech companies getting rid of their testers, the very “oh no, am I going to have a job?” type articles (where companies aren't getting rid of testing, they're getting rid of people who had the particular job title of tester): all of these pieces are driving a big piece of change. We've got more sophisticated users who have more things competing for their attention. We've got app stores that make it easier for users to switch in the commercial application space. We've got shorter and shorter development cycles.
  • There are testers and test organizations who are not providing enough value to their organization. They may be seen as a roadblock, they may be doing the wrong things, or they may simply not be communicating the value they're providing, so the value is largely unseen. More and more, those roles, those positions, those departments are just going to be cut. Businesses are going to be saying, “Why am I paying for this?” because they can't see the value. As an industry, we share a good chunk of the blame for that.
  • Within the philosophy of modern testing there is a series of principles, which I'll get to later because I've distilled them down into four pillars. The four pillars are: Context First, Testers Are Not Robots, Using Multiple Lenses for Test Design, and Providing Useful and Timely Information.
  • Computers are really bad at noticing the unexpected, because we didn't know to tell them to look for it; it's unexpected. People are really good at that much of the time. While where the line sits is very contextual, for most projects there is a combination of manual testing and automation that makes the most sense to achieve the goal. I have seen teams that are so focused on the stuff that could be automated that they never do the other things. That ties back to the “testing is dead” mentality, where it quickly becomes just a question of, “I could invest how much now to create automation for that, or keep paying six people's salaries for how long to keep running these by hand?” That becomes a financial trade-off, and not one of value, at that point (see the break-even sketch after this list).
  • In modern testing, part of understanding the context first is understanding what the information needs of the team and stakeholders are. It's not generally going to be, “How many test cases do we have?” although they might frame it that way, because many managers have been trained over their careers that that's how you get information about quality. When it comes down to it, most managers probably don't care about the number of test cases; they just use it as a surrogate measure, where more test cases equals more testing. What they are really concerned about is things like, “If we release now, how many phone calls are my support center techs going to get? How bad are the headlines going to be if we have a problem? How much is this going to impact my bonus?” Aspects of quality beyond just the metrics.
  • What we've traditionally done in our reporting is give our team and our stakeholders raw data. We give them counts. We say, “For this feature I have seventy-two test cases. I ran sixty-five of them and sixty passed.” There are so many interpretations of that data. First, why do we have seventy-two test cases? Maybe we should have had seventy-five and we just didn't have time to write three of them. Maybe we should have had two hundred and we haven't had time to write a lot of them! Maybe we only needed sixty in the first place, and so we have extra test cases. Once we get past that, why did we only run sixty-five of them? What about the other seven? Were they not important? Or were they the most important tests, so all we know is that cosmetically the feature looks good, while the seven tests that exercise the functionality, the reason the feature exists, haven't been run yet? Are the seven tests just broken, and we don't care about them? Do they not matter? There's so much there. And only sixty passed. Well, were they the sixty least important or the sixty most important?
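
To make that automation-versus-manual trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. The function name and every dollar figure are hypothetical assumptions for illustration, not numbers from the episode.

```python
# Hypothetical break-even comparison between a one-time automation
# investment and the ongoing cost of running the same checks by hand.
# All figures below are made-up assumptions for illustration only.

def breakeven_months(automation_cost: float,
                     manual_monthly_cost: float,
                     automation_monthly_upkeep: float) -> float:
    """Months until the automation investment beats ongoing manual runs."""
    monthly_savings = manual_monthly_cost - automation_monthly_upkeep
    if monthly_savings <= 0:
        raise ValueError("Automation never pays back: upkeep exceeds manual cost")
    return automation_cost / monthly_savings

# Example: $120k to build the suite, six testers at $8k/month running
# the checks by hand, and $6k/month to maintain the automation.
months = breakeven_months(120_000, 6 * 8_000, 6_000)
print(f"Automation pays for itself after about {months:.1f} months")
```

If the break-even point lands well within the expected life of the product, the automation investment is easy to justify; if not, manual testing or a smaller automation scope may be the better financial call. As Andy notes, though, that calculation is purely financial and says nothing about the value either approach delivers.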

Connect with Andy Tinkham

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

Comments

  1. Hi
    Was it me or is the sound quality not up to scratch?

    What software and networks were you using?
    i.e., Skype over wireless on a dial-up modem?

    1. This was the first episode that I recorded with a new mixer. Some of the settings were off during the recording, which added too much gain to the mix that I was not able to clean up. I’ve since figured out the optimal settings for the mixer, and the newer episodes should sound better.

  2. Adding a comment whilst listening (in Chrome) causes the whole page to reload, and you lose the spot where you were listening. Would be good to have a note saying don’t comment whilst listening?

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
