
Smarter Testing Through Smarter Testers [PODCAST]

By Test Guild

Welcome to Episode 84 of TestTalks. In this episode, we’ll discuss smarter testing and reinventing testing with Christin Wiedemann, Co-CEO & Chief Scientist at PQA Testing.

Find out how to fine-tune your software testing efforts in 2016. You'll be left ready to tackle any testing project in the new year.


When I’m interviewing someone, weird random thoughts sometimes pop into my head. For some reason, as I was speaking with Christin, all I kept thinking about was the book The Martian, in which the main character, a botanist and engineer, becomes stranded on Mars. He then has to use his ingenuity to survive and overcome some crazy obstacles. Christin reminds me of the type of person we’d want to send to Mars to solve these kinds of hard problems.


Listen up and discover Christin's five-step framework that will help us focus on the things that really matter and take our testing to a whole new level. Christin shares some of her best methods of creating testing awesomeness that will leave you satisfied and your customers delighted.

Listen to the Audio

In this episode, you'll discover:

  • The REAL purpose of testing.
  • What question you can ask to find out the WHY behind what you are testing.
  • Different types of risk.
  • Tips to improve your testing efforts.
  • The five As for understanding why you're testing and knowing what you need in order to test effectively.

“Sometimes, we actually test more than we should, or test the wrong things.” ~ http://www.testtalks.com/84

Join the Conversation

My favorite part of doing these podcasts is participating in the conversations they provoke. Each week, I pull out one question that I'd like to get your thoughts on.

This week, it is this:

Question: Why do we test? Share your answer in the comments below.

Want to Test Talk?

If you have a question, comment, thought or concern, you can do so by clicking here. I'd love to hear from you.

How to Get Promoted on the Show and Increase your Karma

Subscribe to the show in iTunes and give us a rating and review. Make sure you put your real name and website in the text of the review itself. We will definitely mention you on this show.

We are also on Stitcher.com so if you prefer Stitcher, please subscribe there.

Read the Full Transcript

Joe:         Hey Christin. Welcome to TestTalks.

Christin:  Thank you very much, Joe.

Joe:         It's great to have you on the show. Today I'd like to talk about some of your OreDev presentations in Sweden about software testing. They were titled Smarter Testing and Reinventing Testing.

Christin:  That's correct. I really enjoyed OreDev and I had a really good time there.

Joe:         Cool, yeah, you gave a great presentation. I like how you interacted with the audience. I'll have a link in the show notes to the presentations; people should definitely check them out. You have a really incredible background. Based on my stalking skills, it looks like you're a particle physicist with a PhD, a developer, a tester, a presenter and CEO of PQA Testing. Did I miss anything?

Christin:  No, but man, you make it sound pretty good actually. I appreciate that.

Joe:         How did you get into testing with that background?

Christin:  That's a question I don't know that I can answer, but I'll try anyway. I'm not really one of those people that set up goals in life. I'm more of a go-after-the-squirrel kind of person. I basically jump. When there's an opportunity, I go for it. I've always been interested in science, and knowledge and learning have always been my passions. I started studying physics and math at university, and what I really liked was astroparticle physics. That basically means trying to find particles from various phenomena out in the universe. I ended up working on an experiment called IceCube, which is really a cubic kilometer of Antarctic ice that's being instrumented. How can you not like that? I got to go to the South Pole three times and work there and build [inaudible 00:01:45]. What I also did was, of course, write a lot of software because, for some reason, there aren't a whole lot of commercial tools for particle analysis. You kind of have to build your own. I ended up working on both the detector simulation and the analysis software. That's really how I learned to program.

Then when I got my PhD in 2007, I started thinking about what to do next. I decided to try to get a job in industry for various reasons. I wasn't sure what I could actually do, because there also aren't a whole lot of industry jobs for particle physicists. I was pretty good with statistics and I thought I could program, so I figured I could be a developer or maybe work with an insurance company building statistical models. Of the two, programming sounded way more interesting to me. I started working as a developer, but realized after a while that it wasn't actually a fit for me. I was used to building my own code the way I wanted it, and having other people give me requirements wasn't quite the same thing. I was asked to help test on a project and I really liked that. To me, testing is detective work. It's problem solving. It's all about being curious and asking questions, so it felt like research, actually. Quickly after that, I changed careers and started working as a software tester. I find that there are a lot of parallels between science and the scientific method and how we test, and that's what's kept me in this field for quite a while now.

Joe:         Awesome. What's harder, testing or particle physics?

Christin:  Depends on who you're asking. Of course, I think both things are really cool. I think one is definitely more intimidating than the other when you talk to people.

Joe:         What does smart testing mean to you? You had an OreDev presentation around this topic, so what is smart testing?

Christin:  I think smarter testing goes back to, again, asking questions. To using critical thinking and really being able to motivate what you do. Explain why you're picking one tool over another, or one approach over another. To me, it also means being able to quickly adapt to the context. Which means that first, you need to be able to understand your context, and then derive your test approach based on that. It kind of goes back to understanding why we test and knowing what you need, and that it varies a lot between the different projects and products that we work on.

Joe:         Awesome. Do you have any tips on how we can ask better questions? I know it seems pretty simple, but sometimes I wonder, are there any critical skills a tester should have, or frameworks they can use, to help them ask better questions? To actually probe developers, or probe a software product, to see exactly what it is they need to test, or why they need to test it?

Christin:  I think there are three things that play important roles. First of all, it's the cognitive skills themselves. Having good cognitive skills, understanding what those are and how to hone them, and understanding critical thinking and what it means, and trying to be aware of our own biases, the assumptions we make, and scrutinizing those. It's also about being trained. Test training. Learning to test better. Reading articles. Watching webinars. Going to conferences. Et cetera. It's also about soft skills. Having good relationships and a good rapport with business analysts, project managers and developers. It's those conversations that will help you ask smarter questions.

Joe:         Awesome, yeah. I definitely agree, but sometimes it's so frustrating as a tester to not just tell a developer, “Your code stinks. Of course, the testing's going to stink because you didn't create a quality product.” Those soft skills, I think, really are critical. I like how you brought that out in your presentation.

Christin:  Yeah, and we need to keep in mind that very few developers actually build in bugs intentionally. That's not really what they're trying to do. They're trying to build a quality product, and we're there to help them, not point fingers.

Joe:         Absolutely. One of the key areas you talked about was risk. This is something I've been hearing more and more about: focusing on risk. Can you explain what that means, to have software risk, and how that applies to testing?

Christin:  There are many different types of risk, of course, and I primarily talk about product risk. When I say product risk, I mean a way in which the software could potentially fail to meet an expectation. Those expectations can come from many places. They can be legal requirements, user expectations, business requirements, technical requirements. A risk is just any way in which the software could fail to meet any of those expectations, and the tests should be designed to try to reveal those failures if they're there. Just to give you a very trivial example, let's say that you're building a web application that has an input field. A potential risk would be that it crashes if you enter special characters in the input field. What I should do as a tester to mitigate that risk is to design a test that would reveal that failure if it's there. That means I should test with special characters. I think testing adds the most value when it is risk-based, when we're really tying a product risk to every test we run, basically.
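
To make that input-field example concrete, here's a minimal pytest sketch of a test aimed at that one product risk. The endpoint URL, field name and list of inputs are hypothetical placeholders, not anything from the episode; the idea is simply that the test exists to reveal the specific failure named in the risk.

```python
# A minimal sketch of a risk-based check: submit special characters to a
# form field and confirm the server does not crash (no 5xx response).
# The URL, field name and inputs below are hypothetical placeholders.
import pytest
import requests

FORM_URL = "https://example.com/signup"  # hypothetical endpoint

SPECIAL_INPUTS = [
    "<script>alert(1)</script>",
    "'; DROP TABLE users;--",
    "日本語",
    "😀",
    "%00",
    "a" * 10_000,
]

@pytest.mark.parametrize("value", SPECIAL_INPUTS)
def test_input_field_survives_special_characters(value):
    response = requests.post(FORM_URL, data={"name": value}, timeout=10)
    # The product risk under test: the application crashes on special
    # characters. Any 5xx status would indicate that failure mode.
    assert response.status_code < 500
```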

Joe:         That's a great point. I had an epiphany while you said that. I work for a company that does a lot of FDA-type software, so they sometimes go overboard with things. With some of our tests it's like, why are we doing this? I think you said something like having a risk statement tied to each test, saying what the risk is of this not passing or failing. That may help drive better testing, but also maybe weed out some of these tests that are just there, that don't really need to be there, that aren't really doing anything.

Christin:  You touched on something very important there, and that's understanding what's good enough, too, and actually not over-testing. A lot of the time, in reality what happens is that we're time-boxed when we're testing, so we have a set amount of time. It can be effort or it can be calendar time, and we just use that until we run out of time and then we stop testing. Sometimes, we don't have enough time.

Sometimes, we actually test more than we should, or test the wrong things. It's easy to fall into the trap of wanting to run a test just to check that it works. If you run a test that passes, if you're going to be rigid about it, the only thing you really learn is that the test passed under those very specific circumstances. Whereas if you run a test that fails, you learn more and it's more interesting. If you have an understanding of what your product risks are and which tests are aimed at which product risks, it's much easier to prioritize and understand which tests are more important to run, which order you should run your tests in, et cetera.

Joe:         You brought up another good point. We just had a review of some bugs that were caught in production. Everyone's like, “Why was this bug not found?” It was such an obscure condition, a timing issue based on someone's specific environment, that I don't even think we could have caught it, but some of the managers made it seem like everything could be caught or everything could be tested. You have a mathematical background. Is that even possible, for everything to be tested in a software application?

Christin:  Absolutely, if you had unlimited time and unlimited resources. In reality, what we can test, what's feasible, is such a small subset of all the different combinations. Take just the simplest web page you can imagine, one that has a couple of input fields, a couple of buttons, or maybe a link. With the different variations of inputs and the order in which you do the actions, it's impossible to cover everything. What you want to make sure is that you cover the most important stuff. That's what I think is the difference between testing and smarter testing.

Joe:         Awesome, yeah, I definitely agree. I think it all goes back to what you said. We need to know why we're testing and what the risk is in what we're testing, rather than not even thinking about it and just writing test cases to say, “We have test cases,” without really understanding what they're covering, I think.

Christin:  It seems like a silly question with an obvious answer: why do we test? There are different things that go into that. I often talk about what I call quality criteria. Some people call them quality parameters. They're the different things that matter in different contexts. Sometimes it's important to have a product that's very reliable. Sometimes usability's more important. Sometimes it's security. Why we're testing really is different each time. Sometimes we are testing because we need to make sure that a product can handle a large load. Other times, maybe we're testing because there are specific legal requirements we need to meet. There are actually different answers to that question. If we don't make sure we ask ourselves why we are testing on a project, we might go down the wrong path.

Joe:         Absolutely. Just going back to risk, how does someone measure risk? What's something that they should look at to know, hey, this is definitely risky? What would you recommend?

Christin:  The classic way to measure risk, or to be able to compare risks, is to use impact and likelihood, where impact measures how bad it would be if this risk was realized, how severe the risk is. Likelihood is, of course, the probability that this is going to happen. Those two things can be quite different. You can have a risk that, if it was realized, would be catastrophic, but it's basically never ever going to happen. What does that mean compared to a risk where the impact is fairly low, but every user who uses your product is going to have that failure happen? Which one do you test for? The catastrophic one, or the one that's not so bad but is going to happen every time someone uses the product? It's hard. You can't give absolute numbers, because that would require having complete knowledge of all the failures that are in the software. They're really guesses, but they still allow you to compare and rank risks against each other so you can do your prioritization.
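
Here is a minimal sketch of that impact-times-likelihood idea in Python. The risks and the 1-to-5 scores are made up purely for illustration; as Christin says, the relative numbers are guesses that let you compare and rank, not absolute measurements.

```python
# A minimal sketch of ranking product risks by impact x likelihood.
# The risks and scores below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    impact: int      # 1 (minor annoyance) .. 5 (catastrophic)
    likelihood: int  # 1 (almost never) .. 5 (nearly every use)

    @property
    def score(self) -> int:
        # Relative score used only to compare risks against each other.
        return self.impact * self.likelihood

risks = [
    Risk("Data loss if the server restarts mid-transaction", impact=5, likelihood=1),
    Risk("Layout breaks on small screens", impact=2, likelihood=5),
    Risk("Crash on special characters in the name field", impact=4, likelihood=3),
]

# Highest-ranked risks are the ones to design and run tests for first.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.description}")
```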

Joe:         Awesome. I know it's been a few weeks, more like a month, since your presentation, so I'm not sure if you remember this. Do you remember the five-step framework you have, the five As, for understanding why you're testing, or knowing what you need in order to test effectively?

Christin:  Since it's a framework I actually use, I'd better remember it.

Joe:         Okay.

Christin:  We talk about the five As at PQA, and that's really just us trying to find a way to remember it. Hence the five As. We have five steps, which are Assess, Align, Advise, Accelerate and Apply. What assess really means is just understanding your context, evaluating your expectations of quality, knowing which quality criteria or quality parameters are the most important on your project, and having a shared understanding of what quality means. Align means asking questions around what level of formality you need for the testing and what the product's needs are for test artifacts. How much traceability do you need? How important is reproducibility? Once you have a good understanding of your context and your definition of quality, how formal you need to be in your testing and what kind of artifacts you need, then you can align with the project type. What kind of industry are you in? There's a difference between working with heavily regulated companies versus, of course, working with digital media companies. Based on your understanding of what the needs of the project are, you can advise, typically the PM or the project owner, on what you think the best test approach is.

You should never start from scratch. You can accelerate your testing by using heuristics, templates, frameworks, [inaudible 00:14:43]. All those little nifty tricks that just help you get up to speed faster. Once you've figured all that out, you just do it. That's the apply step.

Joe:         Awesome, yeah, it's a really handy five-step framework. Once again, I'll have a link in the show notes back to your website, which explains it in more detail. I think it's a really good resource for testers to always have in the back of their mind when they're testing.

Christin:  It just helps you remember things that you might sometimes skip over, so that you don't fall back into the rut of doing things the same way you did last time rather than thinking about what's actually the best approach in this particular context.

Joe:         Is there anything you see over and over again in your experience that you think most testers get wrong or don't fully understand?

Christin:  What I think we do in general, probably as humans, but maybe more so as testers and in some other roles in IT, is that we build or create artificial constraints around ourselves. We create sort of a little testing box for ourselves that we limit ourselves to. We do a lot of thinking inside the box, and we're really good at saying we can't test it this way because of something, or we can't test this because of something else, rather than instead thinking or asking, “What should we test? How should we test it?” There might be constraints, but we should worry about those limits later. First, we should really try to think about what we need to do. What's the right approach and right solution for this context? Then worry about what the constraints and limitations are.

Joe:         As testers, we're getting near the New Year. What do you think a tester should focus on for skills? What skills do you think testers should have that would really benefit them and make their lives easier in their day-to-day work as testers?

Christin:  If I had to pick one, I would definitely pick communication skills. So much of what we do is communication. To me, really the purpose of testing is to provide information. I test the product to learn more about it. I can't prove that it works. I can't guarantee quality. I can't assure quality. I can learn and then share that information, and that's really all communication. Every different report I write is a written communication, so those communication skills, I can't overestimate or overemphasize how important they are.

Joe:         I definitely agree. That's one of the biggest … Even on teams. I work on teams that are spread out across the globe, and we try to use behavior-driven development to collaborate, and yet they don't treat it like a collaboration tool. They treat it like an automation tool. It seems like one of the simplest things, but it's almost like it's the hardest thing to get these teams to communicate with one another. I definitely agree. If testers had that skill and helped facilitate collaboration and communication between different teams, I think that would help a lot.

Christin:  Yeah, I couldn't agree more. Then, of course, honing your testing skills. Again, there are podcasts and you are a perfect example of that. Conferences. User groups. Go out there and talk to other people. Collaborate. Find new, cool ways of testing.

Joe:         Absolutely. I know you mentioned in one of your presentations that you don't feel that testing in general keeps up with development, in the sense that it's not really evolving or doing new things. Besides that, though, are there any new ideas or trends that you've been seeing in testing that you think testers need to be aware of in the coming years, that are going to be important for them to know?

Christin:  There are a lot of things happening, of course, in the industry. The big buzzwords are still big data and the Internet of Things.

Joe:         Yeah.

Christin:  Of course, mobile has been around for quite a while, but there's still a lot happening there too. In those areas, I definitely don't feel that testing has been keeping up. I think we need to make sure that we, again, adapt to what's happening and don't approach projects the same way over and over again just because it worked last time, so I'm sure it's going to work this time too. We need to think about, what are the new challenges? What's different? How might the user's perception of quality be different on a mobile device compared to how it was on a stationary device ten years ago? It is very different. One of the big game changers has, of course, been mobile data. Both the speeds we can get these days and the volumes of data we can use. Are we really taking that into account when we're testing, or to a large enough extent? I'm not sure.

Joe:         I definitely agree. Big data, the Internet of Things. I just think that if most testers had the skills to analyze data, because we're going to have all this data, actually analyzing it and making logical decisions based on that data is going to be a skill that people will need.

Christin:  Absolutely.

Joe:         You mentioned PQA. You're the CEO of PQA. What is PQA?

Christin:  PQA Testing, which stands for Professional Quality Assurance, is a testing services provider. It was founded by Keith McIntosh back in 1997, so it's been around for a long time. I joined when I moved to Vancouver from Sweden in 2011. I'm now the co-CEO, actually, with Keith. We split the responsibilities of the CEO role. My focus is primarily on delivery, so on service delivery. I also have the title of chief scientist. It's in the chief scientist role that I try to come up with new ideas and drive our innovation and R&D work, and also all our training initiatives.

Joe:         Awesome. Training initiatives? Do you offer training services around risk-based testing or software testing in general?

Christin:  We do. We have a set of courses that we offer externally, and we also do, of course, tailored training programs depending on the organization's needs. We also do a lot of internal training. We have what we call the PQA Academy, where we run different sessions focusing primarily on skills that are directly related to testing. For example, we've been working a lot with Python, which I think is a really good language for testers to know to help when we're testing.

Joe:         I love Python. I definitely agree with you. It's so much easier, I think, than Java, and it has so many libraries. I use it for a lot of different things. It's almost like duct tape to me, so I definitely agree.

Christin:  It's a nice, clean language, which I really like as well, coming from a more Linux environment and being used to shell scripting.

Joe:         Right.

Christin:  A lot of what we do is simple manipulation of Excel files, for example. For that kind of stuff, I think that Python is perfect.
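
As a concrete illustration of the kind of lightweight Excel manipulation Christin describes, here is a minimal Python sketch using openpyxl, one of several libraries that can read spreadsheets. The file name, sheet name and column layout are hypothetical.

```python
# A minimal sketch of simple Excel manipulation with Python, using
# openpyxl (pip install openpyxl). The file name, sheet name and
# column layout below are hypothetical placeholders.
from openpyxl import load_workbook

workbook = load_workbook("test_results.xlsx")
sheet = workbook["Results"]

# Count how many rows (skipping the header) have "FAIL" in column B.
failures = sum(
    1
    for row in sheet.iter_rows(min_row=2, values_only=True)
    if row[1] == "FAIL"
)

print(f"{failures} failing tests out of {sheet.max_row - 1}")
```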

Joe:         Christin, I have one more question, but before I ask it, is there any questions you would have liked me to ask or you think I should have asked?

Christin:  No, I just really like that I gave you an epiphany. I don't think that's ever happened to me before.

Joe:         I've been hearing risk, risk, risk, but then when you said that, I went, “Huh.” I'm going to tell that to the managers next time. I'm going to say, “We have these tests. We should have a risk statement. Why are we doing this? Is there a real need to do this? What is the risk if this doesn't pass, and if there's no risk, why are we testing it? You know?”

Christin:  Actually, you made me think of something just now, saying that. A lot of what I talk about when I talk about testing and smart testing and risk-based testing is common sense. A lot of testers are already doing this, so people might wonder why it's so important to call it risk-based testing, or why we need to spell out what our quality parameters are and how we're ranking them. It gives you a way to talk about it to non-testers. It helps you build your argument for why you're testing a certain way or why you need so much more time for a certain test. It also lets you give decision makers options. Rather than saying, “I need five more days to test,” you can say, “Well, if we stop testing now, these are the outstanding product risks. If you want us to mitigate them, I would need five more days to test.” It just makes all the conversations around testing so much easier and much more productive.

Joe:         Once again, I definitely agree. It makes us, rather than QA or testers being the good cop or bad cop, saying, “No, this shall not be released to production yet,” give management valuable information: “Here are the facts. Here are the risks. Here's what's been tested. Here's what hasn't. You, as the manager, make the decision based on that.” I think that opens up the conversation better than just saying, “No, you stink. This application stinks. It's not going to production.” Right?

Christin:  No, because it's not my decision to make.

Joe:         Right.

Christin:  It's somebody else's, and that person wants to be able to make an informed decision. The reason, typically, why we get pushback from managers or other roles is because they are not sure. They don't know. Am I making the right decision now? We're supposed to help them. Talking about product risk gives you a tool to use in those discussions rather than just going back and forth. Well, I don't have enough time to test. Well, you're taking too much time for my budget. Et cetera, et cetera.

Joe:         Absolutely. Okay, Christin, before we go, is there one piece of actionable advice you can give someone to improve their software testing efforts? And let us know the best way to find or contact you.

Christin:  I'm very easy to find. I'm on Twitter and LinkedIn, and, of course, if you go to PQA Testing's website, you can find my email address and my phone number. I always say that I love to talk about testing, and I actually mean it. I don't at all mind if people send me an email or give me a call, and I really want to have more discussions around testing in a public forum, and more diverse discussions. When it comes to a piece of advice, what I would like to see is, again, more different ideas. Daring to try to think of other ways of doing things. We so often talk about Agile as still being something new, and it's fifteen years old. Exploratory testing is still something very strange in certain organizations, and it's been around for thirty years. There are so many ideas that we haven't quite taken to heart yet, but they're already getting outdated. What's the next new thing? That's what I would love to see in the next year.
