Software Testing

What is Pair Testing? (Free Your Testers)

By Test Guild

I spoke with Katrina Clokie during a recent TestTalks interview about the benefits of cross-team pair testing in Agile, and she gave me a great definition of what pair testing is:

At a high level, pair testing is exactly what it sounds like; it's two people on the same machine, trying to complete a test together. The main difference between two people working together and two people pairing is that with pairing, both people are actively working on the problem at the same time.

Pair testing is also much different than simply asking someone over to help you do something. It's more like saying, “Okay, I have this task, will you come and sit with me the whole way through this? We can then take turns at being the person driving the activity.”

It’s an activity where both testers are sharing their ideas, working on the whole thing together.

How Pair Testing Can Help Your Testing

One key benefit of encouraging your teams to try pair testing is that it helps cross-pollinate tester skills across your organization.

In most Agile environments, testers are spread out across many sprint teams, working on many different products and features.

Although the testers may all be in the same department, a tester on one team has quite a different experience from a tester on another team. For example, they may be working on different platforms, with different approaches and/or different tools.

Katrina has seen that pairing two testers together on a task is a great way to achieve knowledge sharing between testers on different teams.

The reason this is so effective is that most testers don’t have visibility of what’s happening on other sprint teams until they get an opportunity to go and actively work with those teams.

Pair Testing Frees Siloed Testers

Speaking as I do with testers from different companies, a theme that comes up over and over again is that they feel like they’re isolated from other testers.

Yes, Agile sprint teams can be more productive than waterfall teams, but a dirty little secret is that it also tends to create very insular teams. It can be difficult for testers on one team to know what's happening on another team.

Pair testing helps break this isolation and creates bonds between teams through the structured knowledge sharing that pairing provides.

Katrina has tried a number of things at her company, like setting up guild communities, which bring the developers, the BAs, and the testers together to get knowledge sharing happening within each discipline. Her company also holds tribe-based meetings, which bring her 18 delivery teams together by splitting them into three tribes.

She believes, however, that pairing has probably been more effective than any other method her teams have tried because it's so hands-on, and so practical about what someone's actually doing.

Will Pair Testing Hurt Your Team’s Velocity?

Whenever I talk to Scrum Masters about doing anything that would take a resource away from their sprint teams, they start to freak out and complain about how it's going to hurt their team's velocity. When testers hear about pair testing, they usually understand its value right away; the concept usually has to be sold to their Scrum Masters and managers, however.

No worries, though: in Katrina's experience, getting management buy-in for pair testing is not as difficult as you would imagine. Katrina always tells management that it won't really affect velocity because work is still happening while this activity is taking place.

One person moves out of their team and onto another team, and that team effectively has two testers for an hour. Then, on the flip side, that person goes back to their own team with the other tester, and that team has two testers for an hour as well.

There's not really any change in resourcing or any impact to the project; it's basically just like shuffling a person across and then shuffling the other person back across.

This has been Katrina's pitch when explaining pair testing to managers: these people are still actively working on their projects; they're just moving an extra person in to assist and learn from for an hour or so, and then vice versa.

Using this logic, pair testing shouldn't be a difficult sell to your teams.

Fresh Eyes Find Bugs

Another benefit to pair testing is that often a pairing session will generate a conversation between testers that would never have taken place otherwise.

A session will frequently uncover an issue the team hadn't addressed: because they know the feature so well and use the product as power users, they can lose sight of how a real user might interact with the system. A fresh pair of eyes can bring up things the team just assumes are understood. It really breaks the "curse of knowledge" bias, where individuals are unable to ignore the knowledge they have that others don't.

This fresh insight often ends up providing valuable feedback for the teams, who can then ask their product owner and business analyst(s) to explore how a user will actually be using the feature.


How to Structure Your Pair Testing

The pair testing initiative Katrina started in her company began with a framework that was more prescriptive than she would have liked, but it was the first time that her teams had done any pairing.

She was keen to emphasize that she wanted both testers to be hands-on during that pairing session. She didn’t want a demo, and she didn’t want them just talking through a feature. She emphasized that the testers should be actively working on a task with both testers contributing.

The process she proposed allowed 10 minutes to introduce the task. The "native" tester would drive for the first 20 minutes, then hand their workstation to the "visiting" tester. The native tester would step back and let the visitor be the driver for another 20 minutes. To finish, they would do a 10-minute debrief. So: a 10-minute intro, 20 minutes of native testing, 20 minutes of visitor testing, and a 10-minute debrief.

She also annotated each of these time blocks with some of the questions she thought might get asked during them.
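To make the structure concrete, here is a minimal sketch in Python of how a pair could print a running agenda for the 60-minute session. The phase names and prompt questions are illustrative assumptions of mine, not Katrina's actual one-pager (which lives on her blog).

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    minutes: int
    prompts: list[str]

# Hypothetical phases and prompts -- see Katrina's blog for her real one-pager.
SESSION = [
    Phase("Intro", 10, ["What is the task?", "What has already been tested?"]),
    Phase("Native drives", 20, ["Why are you testing it this way?"]),
    Phase("Visitor drives", 20, ["What would you try that hasn't been tried yet?"]),
    Phase("Debrief", 10, ["What surprised each of you?", "What will you take back to your team?"]),
]

def print_agenda(phases: list[Phase]) -> None:
    """Print a running agenda for the 60-minute pair testing session."""
    elapsed = 0
    for phase in phases:
        print(f"{elapsed:02d}-{elapsed + phase.minutes:02d} min  {phase.name}")
        for prompt in phase.prompts:
            print(f"    - {prompt}")
        elapsed += phase.minutes

if __name__ == "__main__":
    print_agenda(SESSION)
```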

What she found out was that some people used her proposed framework religiously, and other people ignored it. Then, based on the feedback from the pairing sessions, it became apparent who had ignored it because they weren't seeing the benefits.

She was getting feedback like, “An hour is too long,” or “This session felt like a demo,” and so bringing people back to the framework and saying, “Can you try to do it this way the next time you do it,” was an effective way of getting people into the habit of swapping.

They also refined the process over the first few iterations. Pairing ran on a monthly iteration, so each month the testers would get a new partner.

In the end, the best mix of time Katrina saw for effective pair testing was where the native/visitor testing sessions lasted an hour each as opposed to 20 minutes. She also observed that the framework, once people started to use it, was really useful.
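For teams that want to automate the monthly shuffle, here is a rough sketch in Python of how a coach might generate the pairings so each tester is matched with someone from a different team. The roster names and the greedy matching are my own assumptions for illustration, not something Katrina described.

```python
import random

# Hypothetical roster: tester name -> sprint team (purely illustrative).
ROSTER = {
    "Ana": "payments", "Ben": "payments",
    "Chen": "mobile",  "Dita": "mobile",
    "Eli": "lending",  "Fia": "lending",
}

def monthly_pairs(roster: dict[str, str], seed: int) -> list[tuple[str, str]]:
    """Shuffle the testers and greedily match each with someone from another team."""
    rng = random.Random(seed)  # a new seed each month gives new partners
    unpaired = list(roster)
    rng.shuffle(unpaired)
    pairs = []
    while len(unpaired) > 1:
        first = unpaired.pop()
        # Prefer a partner from a different team; fall back to whoever is left.
        partner = next((t for t in unpaired if roster[t] != roster[first]), unpaired[0])
        unpaired.remove(partner)
        pairs.append((first, partner))
    return pairs

for month in range(1, 4):
    print(f"Month {month}: {monthly_pairs(ROSTER, seed=month)}")
```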

Key Advice For Testers

The most actionable piece of advice Katrina would give someone to improve his or her pair testing efforts is communication.

She feels the biggest difference between testers and the results they achieve comes down to the number of conversations they have with the people around them.

Even if you're the most awesome tester ever, if you're not talking to your developers, product owners, and business analysts about what they're doing and why, your testing will be less effective because you haven't received input about what other people have already tested, why the thing was designed the way it was, how the customers expect it to behave, and what information from the testing will be valuable.

Without having those conversations, you increase the risk of ultimately delivering stuff from testing that people find irrelevant. Using pair testing is one of the most effective ways to encourage this all-important communication between teams.

Discover More Pair Testing Awesomeness

Here is the full transcript of my TestTalks interview with Katrina Clokie:

Joe: Hey Katrina! Welcome to Test Talks.

 

Katrina: Thanks for having me.

 

Joe: Awesome, it's great to finally have you on the show. Before we get into it though, can you just tell us a little bit more about yourself?

 

Katrina: Certainly, I am a testing coach at the Bank of New Zealand in Wellington, in New Zealand. I work with a team of about 30 testers and it’s quite a fun role. It's a mixture of training, automation frameworks, testing practices, and just generally helping people out and making sure they have cake every week. I'm very active in the testing community. I edit Testing Trapeze, I co-founded the WeTest Meetups in Wellington. I speak at a lot of conferences. I write a blog. I try to be active on Twitter, although sometimes I need to take breaks.

 

Joe: Awesome. That's what I love about you. You're very active, you're always giving to the community and there's a lot of things we can talk about. We'll probably touch on a bunch of different things, but today I want to focus on two main things from some of your most recent posts: pair testing and changing culture through testing transformation. I guess before [inaudible 00:01:13] how did you get into testing? I'm always curious to know how someone becomes a tester.

 

Katrina: For me, I did a computer science degree at university and I was a developer. I transferred into a role that was called a solution delivery engineer. I was based in New Zealand, but I was doing mobile network installs, configuration and testing of what we call the intelligent network part of mobile systems. The reason I took that role was that it involved a lot of fun travel, so we were supporting Asia-Pacific and Central and Latin America. I got to do a lot of trips overseas and a lot of really fun technical work. After a few years in that role, I thought about what I had done in my career so far and what I enjoyed the most, and I decided that I would look for a testing role. For me, it was an active decision that I wanted to move into testing because I really enjoy the problem solving. Identifying the bug is one thing, but working out why it's happening and potentially how to fix it is the part that I really enjoyed.

 

Joe: Very creative [inaudible 00:02:39] I love the different mindset you get to use when you're a tester as opposed to a developer. Since you do have the developer background, you're probably familiar with pair programming, but in one of your recent posts, you talked about pair testing. For people that maybe don't have a developer background or aren’t familiar with that term, what is pair testing at a high level?

 

Katrina: At a high level, pair testing is exactly what it sounds like, it's two people on the same machine, trying to complete a test together. The difference for me between two people working together and two people pairing is that with pairing, you are both active on the problem at the same time. It's not like asking someone over to help you do something. It's like saying, “Okay, I have this task, will you come and sit with me the whole way through this? We can take turns at being the person driving this activity. I want your ideas. We can do this whole thing together.”

 

Joe: Cool, so when you say pair testing though, is it pairing two testers together or it could be pairing just two people together to have them test the system? Does that make sense?

 

Katrina: Good point, yes absolutely, so at my organization, it has been testers, but it certainly could be two people from any discipline, who would like to test an application.

 

Joe: Very cool, I heard Lisa Crispin talk about how, in her team, they actually are pairing up testers with developers and they're sharing knowledge between one another. They’re getting the benefit of pair testing, but also getting the benefit of cross-pollination of the different skills. I thought that was kind of cool.

 

Katrina: Yeah, absolutely. The main reason that we've done it in my organization is that same thing, it's about cross-pollination of skills, but it's actually cross-pollination between our testers. The 30 testers that I work with are spread across 18 different agile teams. Those 18 teams are working with maybe five or six different products. Even though the testers are all in the same department, a tester in one team has quite a different experience from someone in another team, like they're working on different platforms, different approaches, different tools. For us, pairing two testers together on tasks was actually a way to do that cross pollination because people didn't have visibility of what was happening in other teams until they got the opportunity to go and actively work in those teams.

 

Joe: I just want to go into a little bit more how you handle, how you coach 18 sprint teams, because I coach eight of them and I find it very difficult when they’re spread across the globe. I guess one of the issues I've seen, and I'm curious to know if you’ve seen the same thing, is we have eight sprint teams, they're developing the same application, but they act like their own little world. Like you said, they have almost different practices, different tools, and the tester almost becomes, I don’t know, almost isolated from the other testers on the other sprint teams. Have you seen that, and what are ways that we can get more of the sprint teams working as one group, which they really are, they're building one product? How do we get that?

 

Katrina: Absolutely, the situation you've just described is exactly what the situation was for us. When I started with the bank in my role, I talked with each of the testers and one of the things that came through really strongly was that testers felt like they were siloed from other testers. They were working in really engaged delivery teams, cross-functional teams, and those teams are really productive and they’re working well, but they get very insular. It can be difficult to know what's going on. I called it a pairing experiment and for us that was about trying to create the bonds between teams through a structured knowledge sharing that pairing provides. It's one of many things that we do, so we also try to have guild communities, so bringing the developers together and the BAs together as well as the testers, and try and get some knowledge sharing happening within disciplines. There are also tribe-based meetings, so we have 18 delivery teams, but they're split into three tribes. You would be part of a group of people who are all working on your product and we try and bring people together in that forum to exchange ideas, but I think that pairing has been probably the most effective and it's actually something that other disciplines in my organization have started to pick up because it's so hands-on and it's so practical about what someone's actually doing.

 

Whereas we're finding that with those kinds of wider-audience meetings or one-hour knowledge sharing sessions, sometimes that's not tangible enough for people to really take away what's actually going on in that team, how they actually work, and what they're actually doing.

 

Joe: Great, so I guess how do you push this initiative? For me, these sprint teams, they're like overwhelmed, sometimes we have this huge definition of [inaudible 00:08:19] that they have to complete, so how do you encourage them and say, “Look, this is a benefit, you're going to learn something out of it, it's going to make it easier for you in the long run”? Do you have any tips for how to encourage [inaudible 00:08:30] actually taking advantage of pair programming or pair testing?

 

Katrina: It was really interesting, I've had a question of this nature every time I've done a talk like this, which is understandable. In my organization, I think I was lucky in that the testers saw the need, and me creating this framework of pairing was in response to something they had brought to me and said, “We feel this way, what can we do?” I didn't have to sell the testers on it necessarily. I did have to sell the management on it slightly, but it turned out to be not as difficult as I was imagining because work is still happening when this activity is taking place. One person comes out of their team and into another team and that team effectively has two testers for an hour, but then the flip side is that that person from that team would then go back with the other tester and that other team would have two testers for an hour. There's not really a change in resourcing or an impact to the project, it's just like shuffling a person across and then shuffling the other person across. That was my pitch on explaining it to managers, that these people are still actively working on their projects, we're just moving in a person to essentially help them and learn from for an hour and then vice versa. That wasn't a particularly difficult sell in the end.

 

We also did regular retrospectives about the activity and I was collecting feedback from the testers about what they had found valuable, [inaudible 00:10:19] they've been part of and how that had influenced the way that they were working in their teams. That feedback was really valuable to share wider as well to keep the momentum going, to say, “Here are some of the things we are getting from this that’s worth the shuffling we're doing each month.”

 

Joe: I think that's a great idea of a way to really talk to managers how this is a benefit. I guess are there any ground rules you have for pair testing? Do people just say, “Okay, let's test,” or like you mentioned you have a framework or that they work an hour on one, at a high level, do you have any ground rules that you think make for a successful pair testing session?

 

Katrina: When I introduced this to the team, I gave a proposed structure of a session. It was quite prescriptive and in some ways more prescriptive than I would have liked to be, but it was the first time that we had done any pairing. I was really keen to emphasize that I wanted both people in the pair to be hands-on during that pairing session. It wasn't a demo, it wasn't just a talk-through. It was about actively working on a task with both people contributing. I have a graphic on my blog, which was the one-pager that I shared with people, and for a 60-minute session I was proposing 10 minutes to introduce the task, 20 minutes for the person we called the native, so the tester who was on that team would test for the first 20 minutes, and then they would swap. The native would give the visiting tester their workstation. They would just kind of step back from it and the visitor would be the driver for 20 minutes. Then at the end, we would do a 10-minute debrief, so 10-minute intro, 20-minute native testing, 20-minute visitor testing, 10-minute debrief.

 

I had put on each of these periods of time some of the questions that I thought might get asked during those time periods. It's almost a heuristic framework for pairing, I like to think, so which of these questions are relevant for what you're hearing and how can you drive this? What I found with the framework was that some people used this religiously and other people ignored it. Then, from the feedback in the [inaudible 00:13:07] sessions, it became apparent who had ignored it because they weren't seeing the benefits. We were getting feedback like, “An hour is too long,” or “This session felt like a demo,” and so bringing people back to this and saying, “Can you try to do it this way the next time that you do it,” was an effective way of kind of getting people in the habit of swapping. I think that after the first few iterations of this, so we did a monthly iteration, every month you would get a new partner. The expectation was you would do an hour where you were the visitor and an hour where you were the native. The framework, once people started to use it, I think was really useful.

 

Joe: Is it mostly that everyone is co-located and that's how it works, or does this work with people that aren’t co-located?

 

Katrina: We are very fortunate that all of our teams are co-located. We're in the same building at least and so we’ve been lucky. I have had some questions as well from people about doing this with remote pairs. I'm imagining that whatever the tools are by which you collaborate remotely now, you could do this with. I'm imagining like screen sharing, video calling, a text chat window, a Flowdock or Slack, whatever the tools are that you're using. You could have a session with the same structure, where you are actively working with someone for an hour and getting some insight into the way that they approach the problem. They would get some insights from you into how you view what they're doing.

 

Joe: Awesome, as your company has been doing pair testing, are there any benefits that you've seen right away or any features or bugs that were caught that you think may not have been caught if you weren't using pair testing? Any stories of like a real success that came out of this?

 

Katrina: It's interesting because when you say that I can't think of any bugs, like I haven't been told specifically about bugs. People will say to me, “My pair thought of things that I never would have thought to do and it was really interesting to me to see how they approach this because it was totally different to what I was going to do.” A lot of the feedback was about just the visibility, so it gave people more confidence in what they were doing to some degree because it was validated by seeing the same things in other teams. Previously, the testers in their delivery teams had developed a test approach for [inaudible 00:16:09] and they don't really know how closely that matched with what other testers were doing. This has been an opportunity to validate some of the ways that they have chosen to approach testing.

 

I think the biggest success stories have probably been in tooling. When I say tools, I'm not talking about automation specific tools — I'm talking about things like Chrome extensions or screenshot tools, where people use them without thinking and someone new will come along and say, “Whoa, what did you just do, how did you do that so fast, like what is that thing?” A lot of those sort of tools have cross-pollinated across the teams through this and really helped to make our testing a bit more efficient. Yeah, I think with the benefits have been around getting a broader scope of thinking for people, giving them some confidence that their approach is aligned with the approaches of other teams and exposing them to new ideas about tools.

 

Joe: Great, I like your point about visibility. This is something we struggle with. Once again, we have eight sprint teams working on the same product, but they have different features. One sprint team may be working on a feature and have no idea how much it actually impacts or integrates with another feature that another team is working on. We have this communication gap almost, and it almost sounds like that is definitely one of the benefits of pair testing. You pull one tester into another team, they may say, “Oh you know what, we're working on this feature and I didn't even realize how much this is actually going to impact your feature; if we both merge it in, this is going to cause an issue.”

 

Katrina: Yeah, absolutely, so I have seen that a pairing session starts a wider conversation because someone will go, “Whoa, I didn’t know about this.” The other thing I've seen is we have a tester who visits the team and has so many ideas from the perspective of someone who's not very familiar with the product that those actually end up getting fed back to our product owner and our business analyst to think about how the feature is actually designed, because when you are working with a product day to day, all the time, you're a power user. You get really used to everything it can do and how it does it. What was interesting about this is it was a skilled set of fresh eyes, so they have all the domain knowledge but they’re unfamiliar with a particular product. The feedback from a person like that was really valuable as well.

 

Joe: Awesome, I definitely see the benefits of this. I'm definitely going to encourage it and try a little experiment on my own and see how this works. You didn't mention tools and I know some people say, “It's not about the tools,” but I love tools, so are there any specific tools that you've seen that you think would really help benefit testers doing pair testing, or do you have a blog post with a list of tools you think would be helpful to someone that's just starting off with pair testing?

 

Katrina: When I talked about tools, it was more about the tools that people use in their testing day to day that they assume everyone else knows about. The tool knowledge we've seen pollinate has been in those kind of quirky cool little things that people find and don't think to tell anyone about. It's only when you're watching someone test and you see how they use these things that you go, “Whoa that looks really useful, what is that?” It gets shared a bit wider. So tools for pair testing, like to aid the actual pairing, we don't have anything. I guess the only thing I can think of is a stopwatch. The only thing we found was that sometimes it was difficult to get the native tester away from their keyboard and to get the visitor to actually step into that role and take ownership and control of what was happening in the activity, and maybe something really visible like a stopwatch app on your phone next to the screen, I can imagine, might be useful. There’s not really any applications or software that are specifically for pairing.

 

Joe: I got you. I do a lot of test automation, teams do a lot of automation, so it would be almost like they may watch another tester use a certain method and not realize that that method was available. It's actually they're learning from one another, “Oh wow, I didn't know you could do it that way or use this to do that.” It's more like not necessarily tools, but maybe approaches or ways that another tester may do something more efficiently that they didn't know about.

 

Katrina: Yeah, absolutely, so some of these sessions have been automation based. We have a lot of UI automation in each of the different products that we support, so some of those sessions have been about sharing coding practices from one [inaudible 00:21:37] across to the [inaudible 00:21:38] of another product. There's certainly been a little bit of exchange like that where we’ve seen utility classes and helper classes, some of those methods cross-pollinate. We've seen general approaches to what are we automating here and why. Some of that started to get a little bit more consistent as people see the decisions being made in other teams. It's not really about the tool. It's more about how people approach automation, how they implement code and the decisions they make about what to include in a suite or not.
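As an illustration of the kind of helper that tends to cross-pollinate between UI automation suites, here is a minimal sketch using Selenium WebDriver in Python. The helper name, selector strategy, and timeout are my own illustrative assumptions, not something mentioned in the interview.

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def click_when_ready(driver, css_selector, timeout=10):
    """Wait until the element matching the selector is clickable, then click it.

    The sort of small utility one team's UI suite might pass on to another
    during a pairing session (name and timeout are hypothetical).
    """
    element = WebDriverWait(driver, timeout).until(
        EC.element_to_be_clickable((By.CSS_SELECTOR, css_selector))
    )
    element.click()
```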

 

Joe: I work with eight sprint teams spread across the globe, and you had another post talking about changing culture through testing transformation that kind of touched on this. Could you just talk a little bit about what this post covers at a high level?

 

Katrina: This was from a talk that I went to in Australia, so I was at Australian testing [inaudible 00:22:41] and Michelle Cross gave a talk about changing culture. Hers was called Transformation of a QA Department. She was speaking about the challenges in her environment of working with an offshore testing team that was located in the Philippines. She was part of an organization in Australia, and some of the cultural differences between those two nationalities essentially. What was really interesting to me was as she was talking, I started to think about the cultural differences between software development teams who are running a waterfall approach and software development teams who run an agile approach, and some of the parallels to the cultural measures that she was thinking about. She talked about power distance and individualism, which are measures of how important it is to have kind of a hierarchy and a leadership structure, how important it is for people to have ownership of their own work versus being part of a bigger thing.

 

She talked about trust and how trust is formed between different cultures and how for some people it's about respecting cognitive skills. If you're a very intelligent person, then I trust you; versus affective trust, where people who are warm and open and create kind of a relationship with you, then you will trust them regardless of whether they have intellectual skills that you respect. As she was talking about all these things, I was thinking about an agile team versus a waterfall team. An agile team for me, there's a lot of collective, people are happy to be part of a “we”, rather than to take ownership of things for themselves. People are happy to be autonomous rather than having a clear hierarchical leadership structure. I think people, maybe their trust is a little bit less split, but they build a really strong trust within that group, within that team.

 

Then, if I were to compare it to some of the waterfall teams that I've worked in, I found people in those teams, they liked to have a really clear ownership of what their role is. They kind of ring-fence their tasks and things go through a process and the piece of it that's yours, you take. My observation is that the people in those teams really like having a management structure, so testers who like to have a test manager who’s the person they report to and who has some, I guess, [inaudible 00:26:07] in the organization, some power there, and they respect them and there’s that strong bond.

 

The structures of both are different and the people who gravitate towards each are quite different. This is what I'm thinking as Michelle's talking, and then I started to think about transformation. Something that's very strong in the New Zealand market, and particularly in Wellington where I work because there are a lot of government departments (it's the capital of New Zealand), is transformation: taking big teams of people who are used to working in a very waterfall way and trying to flip that organization towards running cross-functional agile teams instead. I started to think about that from a cultural perspective, and if I was to say to a Filipino person who has similar traits perhaps to the waterfall teams, where they really trust leadership and they like to have ownership of their own thing, if you say to that person essentially, “You should become more Australian,” that's quite an [inaudible 00:27:31] thing to say to someone. That's kind of what we say to people when we say, “You need to stop the way you're doing this and now do this other way.”

 

I hadn't really thought about transformation in that way before. I had always thought, “Just stop doing it that way and do it this way,” like how hard can it be, somewhat flippantly. When I started thinking about it as a cultural shift, I think it gives me more empathy for what those people might be going through when you say, “Now is the time to transform the way you work.” That's not just a challenge to their role. It's a challenge to their identity and the way that they enjoy interacting with people as well, I think.

 

Joe: Awesome, that's a great point about empathy. I'm kind of guilty of this myself. I'm just thinking of a conversation I had where I just shut the guy down, said, “Oh that's the old way of thinking, we don't do it that way anymore,” and I just moved on, which probably wasn't the most helpful way to really encourage them to move towards agile [inaudible 00:28:40] I’ll take in [inaudible 00:28:42], let go on, but I definitely agree. Also what I thought was interesting is, at my company, I took a course and they had something where you compare your managing style, your style, to how it would relate to other countries. I have a lot of teams in India and what I found out is my style is completely different from what they respect and what the cultural norms are within India. I was having issues getting … I was wondering, “Why is my communication off?” I guess it's the same way, like you said, you work with 18 sprint teams and it may even be just how you approach a certain team, because to me our eight sprint teams are definitely different, even though it's the same product. I may have to approach one team differently than another team, so just being aware that there are not only cultural differences but also almost sprint-team differences. Have you seen that? Does that make sense?

 

Katrina: Yeah, I think from the perspective of the fundamentals of what people value, I find that that's relatively consistent. If people are happy working in our agile teams, generally they're happy about having collective ownership of a problem rather than their own piece of it. They're generally happy to not have to defer to a leader or a manager, to just make decisions within the team, but I think at a higher level than that there are different cultural differences in the way that our teams operate because they're made up of different people. Different people will collaborate together in different ways. New Zealand is kind of interesting in that we have a really high proportion of immigrants and our teams are very cross-cultural anyway. I am a native Kiwi and I am almost in a minority in my organization. We have a lot of people from all over the world, the US, the UK, India, South America, Australia, Asia, every place you could imagine. The cultures that develop in each of the teams as a result of those different inputs can be really, really varied, depending on who's there.

 

I definitely find engaging with different teams and even engaging with different testers in those teams, you have to adjust your approach based on who you're talking to.

 

Joe: Katrina, one thing I forgot to mention was the coding dojos you mentioned in your article about your pair programming experience. Could you just tell us a little bit more about that?

 

Katrina: The testers had got used to pairing with each other and then we got to what is the New Zealand downtime period, so December, January, February is our summer and a lot of people go away on vacation during that period of time. I decided to run dojos, as you would with programmers, and we're with [inaudible 00:31:56] automation to try and get some refactoring work done in a way that everyone would understand what we were refactoring and why. I think that was a lot less challenging for people than perhaps it could have been, because we had done so much pairing that they were used to switching keyboards and staying engaged in a problem and trying to collaboratively solve something together.

 

Joe: Okay, Katrina, before we go, is there one piece of actionable advice you can give someone to improve their testing efforts? Let us know the best way to find or contact you.

 

Katrina: I think the biggest difference I see between testers and the results that they achieve is based on how many conversations they have with the people around them. Even if you're the most awesomest tester ever, if you're not talking to your developers and your product owners and your business analysts about what they're doing and why, your testing is less effective because you haven't taken input about what other people have already tested, why the thing was designed the way it was, how the customers are expecting this thing to behave, and what information from testing will be valuable. Without having those conversations, you increase the risk that you will end up delivering stuff from testing that people find irrelevant.

 

I think that the biggest piece of actionable advice that I would give to a tester is go talk to people, and do it more than you think you should have to, and go and actively seek out what other people are doing and why. How to contact me: I am on Twitter @katrina_tester. I don't think I have my email on my blog, which is somewhat intentional because I get a lot of spam. If you want to contact me, you can contact me on Twitter and then I will message you with my email if we want to have a conversation.

 

  1. Great read and cool concept. All is well for me except the fact that the benefit of pair testing is quite intangible and can only be gained in the long term, e.g. less dependency on senior testers just because of their knowledge of AUTs. But on the other hand it would be tedious, distracting and sometimes overwhelming for testers themselves, and it may be difficult to identify the responsible tester for any particular scenario. Although the net result is a better testing approach with quality test cases or scripts. But, again “but”, it's true that the managers I had worked with were more interested in numbers and not the quality.

