Software Testing

Agile Mob-Based Testing [PODCAST]

By Test Guild
Agile Testing with Lisa Crispin

Welcome to Episode 101 of TestTalks. In this episode, we'll discuss agile testing and agile mob-based testing with Lisa Crispin, author of Agile Testing: A Practical Guide for Testers and Agile Teams, More Agile Testing: Learning Journeys for the Whole Team, and Testing Extreme Programming.


Do you want to rub out bugs like a mafia tester? Mob programming is becoming ever more popular, especially when you want to include more than just two engineers in a code review. With the mob-based testing approach, the whole team works on the same thing, at the same time, in the same space, and at the same computer.


In this episode, you'll discover agile testing techniques like mob testing, plus much, much more about how to catch bugs in your software as early as possible. Listen now to find out how to rub out bugs using agile testing:

Listen to the Audio for Agile Mob-Based Testing

In this episode, you'll discover:

  • What agile testing is.
  • How mob testing can help improve your testing efforts.
  • How to get more women presenting at software conferences.
  • Tips to improve your agile testing efforts.
  • Why an FDA audit is not as scary as you think.
  • Much, much more!

“We need the diversity of different skill sets and different types of experience.” ~ Lisa Crispin

Join the Conversation

My favorite part of doing these podcasts is participating in the conversations they provoke. Each week, I pull out one question that I like to get your thoughts on.

This week, it is this:

Question: What do you recommend to get more women presenting at software conferences? Share your answer in the comments below.

Want to Test Talk?

If you have a question, comment, thought, or concern, you can share it by clicking here. I'd love to hear from you.

How to Get Promoted on the Show and Increase Your Karma

Subscribe to the show in iTunes and give us a rating and review. Make sure you put your real name and website in the text of the review itself. We will definitely mention you on this show.

We are also on Stitcher.com so if you prefer Stitcher, please subscribe there.

Read the Full Transcript

Joe: Hey, Lisa. Welcome to Test Talks.

 

Lisa: Thanks. It's great to be here, Joe.

 

Joe: It's awesome to finally get you on the show. You have so much experience, so today I thought we might just focus on some random questions that I came up with, gleaned from your blogs and books.

 

Lisa: All right.

 

Joe: I guess at a high level, what do you think of when someone says, “Agile testing”?

 

Lisa: Well, mainly I just think of the mindset shift in testing: going from our traditional model of, okay, the code is, quote unquote, finished, and now we're going to find bugs, to preventing the bugs from occurring in the first place, so that testing begins probably even before coding starts.

 

Joe: Awesome. I definitely agree. I think people still struggle with this, so are there any techniques that you find helpful for finding those bugs earlier in the development lifecycle?

 

Lisa: Well, definitely taking the whole-team approach and getting the whole team to talk about quality and what it means to them, and what level of quality, as a team, we are committed to providing in our product. Once we've decided that, then it's a question of deciding what practices we want to try out, what experiments we want to try to make that level of quality happen. For example, the team I worked on before my current job was a team transitioning to agile from, I don't even know what you would call it, complete chaos. We became a self-organizing scrum team, and we talked about, “Well, we really want a very high level of quality as a financial services application, and it's important to people that they not lose any money, for example.” We talked about, “Okay, what practices would lead us to that?”

 

Well, the first few things we thought of were test-driven development, and pair programming, and refactoring, and continuous integration. Those felt like the basics, core practices we needed to master. Then once the developers had the hang of test-driven development, which is hard to learn but well worth the effort, we decided to go up to the next level: let's help guide our coding with customer-facing tests. Acceptance test-driven development, or behavior-driven development, or specification by example, whichever flavor of that appeals to you. In conjunction with coding, let's start writing these higher-level acceptance tests. Make them executable, if possible, and let them guide coding as well as the unit-level tests.

 

In my experience, that really does result in much more robust code, and it's also much more likely that you're going to build what the customer wants, because you get together with the customers. You have your three amigos meetings with a product person, a designer, a developer, a tester, and whoever else you need, maybe an operations person or a data person. Talk about the features you want to deliver and have conversations. Use techniques like example mapping that help you specify rules and examples for each story. There are a bunch of techniques out there to try, and I just feel like each team should try out whatever practices they think might work for them and see. Just keep experimenting and use retrospectives to see what's working well and what's not. Design new experiments to try.
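
To make the idea concrete: in an example mapping session, the team captures each business rule plus a few illustrating examples, and those examples can then become executable, customer-facing tests. The sketch below shows what that might look like as a Cucumber feature file. The feature, the rule, and the amounts are hypothetical, invented here to echo the financial-services example Lisa mentions above; they are not from her actual project.

    # Hypothetical sketch of a customer-facing acceptance test.
    # Rule captured during example mapping: a transfer must not
    # overdraw the source account.
    Feature: Transfer between accounts

      Scenario: Transfer within the available balance
        Given a checking account with a balance of 100.00
        And a savings account with a balance of 0.00
        When the customer transfers 40.00 from checking to savings
        Then the checking balance is 60.00
        And the savings balance is 40.00

      Scenario: Transfer exceeding the available balance is rejected
        Given a checking account with a balance of 100.00
        When the customer tries to transfer 150.00 from checking to savings
        Then the transfer is rejected
        And the checking balance is still 100.00

Each rule from the conversation becomes one or more scenarios, so the same artifact records the business rule in plain language and, once step definitions are wired up, regresses it on every build.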

 

Joe: Awesome, so it sounds like it's not a one-size-fits-all. It really is based on the company's culture, probably, and the different teams and what their preferences are.

 

Lisa: Yeah, and I think also the risks. If I were on a team producing, say, pacemaker software, I'd have a lot different attitude towards quality and maybe even towards automated regression tests than I do on … Currently, I'm working on a project tracking tool. Of course, you'd want it to be good quality, but if it goes down, nobody's going to die. Everything's relative, and we have to focus on value to customer. In some cases, maybe the value to the customer is making sure that there's zero risk. In other cases, value to the customer might be, let's get more and more features and if some of them aren't perfect, that's okay. It's just a balancing act depending on your product and your team and your timelines and things like that.

 

Joe: Awesome. I hope no one from my company listens to my podcast because I always pick on them, but you brought up a great example. I actually work in an environment where our application can be audited by the FDA. The management's like, “Oh, we're agile. We need to do things fast,” but we really can't, because we need to do all this extra stuff that the FDA can supposedly audit at any time. It's almost like we're trying to say we're agile, and we're trying to convince ourselves that we're agile, but we're really not. I just don't know how … Whose role do you think it is to educate the management and your team members on what exactly it is you're trying to achieve to catch risk and increase quality?

 

Lisa: Well, first of all, people do tend to equate agile with speed, and that's not what it's about. Agile, to use Elisabeth Hendrickson's definition, is about delivering value frequently: we're trying to deliver every week or every day, or at least once a month, at a sustainable pace, without killing ourselves over time.

 

The sustainable pace is what captures all the good technical practices that we need, such as continuous integration and test driven development and those kinds of things. A lot of teams get into trouble because they say, “Well, we're going to be agile now,” and they expect things to speed up. The truth is, in my experience, you're going to slow down for quite a long time, but you're going to build a platform of good code and a really good safety net of automated tests and continuous integration that, in the future, will enable you to go faster, but you only get that by focusing on quality.

 

As far as auditing, at my last company we were also subject to auditing by the financial powers that be, and what we found was we could just talk to the auditors, and they would say, “Well, we need to see the traceability of your stories to your tests.” We'd say, “Well, here. We have these FitNesse tests, and they're self-documenting because they're in English, and we can put comments to explain what they are. Here are the inputs, and here are the outputs. We can see that this test is passing currently because it's running several times a day. This clearly shows the system behavior and what was tested for each new feature.” They were totally happy with that. They don't need to see an inch-thick Word document with the test plan in it.
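
The “self-documenting” tests Lisa describes are FitNesse tests. As a rough illustration of why an auditor might accept them, a FitNesse decision table lays out inputs and expected outputs in plain English and shows pass or fail on every run. The fixture name and columns below are hypothetical, invented for this sketch:

    !|Transfer Funds                                          |
    |source balance|transfer amount|approved?|new balance?    |
    |100.00        |40.00          |yes      |60.00           |
    |100.00        |150.00         |no       |100.00          |

Columns without a question mark are inputs; columns ending in “?” are the expected outputs the test checks, so the table doubles as documentation of the business rule and as an automated regression test.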

 

I've talked to people in other companies regulated by the FDA and other agencies who say they've found that too. If they sit down and talk with their auditors, a lot of times the auditors will adjust what they're looking for, as long as they get the information that they need. I was just at the Agile Alliance Technical Conference a couple weeks ago, and I met a lot of people who worked in regulated industries and government agencies, and they were saying that they were able to be agile and make the auditors happy, provide them everything that they wanted. It was really great to hear that there's a lot more flexibility.

 

Joe: That's awesome. That's great to hear. In my opinion, once again, it's probably a culture thing. They treat the FDA like the boogeyman. We have internal auditors that come in. They will ask for insane things, and if we can't meet the internal auditor's demands, they keep increasing all this complexity and all this other stuff we have to do in order to meet what they think the FDA could ask for. It's just craziness.

 

Lisa: Yeah, those cultural problems are … It's really hard to change culture. I would recommend a book by Linda Rising and Mary Lynn Manns called More Fearless Change, and it has a bunch of patterns in it that you can use to try to effect change. I've used it, especially when I've been trying to fight a cultural issue, and I found it very helpful. It doesn't always help me succeed, but at least it helps me make good choices and try out good strategies. A lot of times I can make a little progress. We're all faced with getting stuck in a culture that doesn't want to move forward. Sometimes these little patterns for change can just help us be the change agent.

 

Joe: Absolutely. I know previously you also mentioned how agile doesn't equate to fast; your velocity may slow down. That's another struggle I've seen in teams I've been involved with. I've even had managers say, “Well, I don't want developers even thinking about any sort of testing. They're rock star developers. I just want them coding,” and I thought I was back in the '90s in a cubicle. I guess my point is, who should be testing nowadays in an agile environment? Is it one person's responsibility, or is it really a whole-team effort like you mentioned earlier?

 

Lisa: Well, again, it's one of those questions where it depends, but in the majority of cases that I've seen, I think it's usually good to try to have experienced testers on the team, people with skills in areas, such as exploratory testing, that developers may not have. I definitely think they should be part of the team. I have seen cases where it made sense to have a separate test team. For example, with embedded software, sometimes the development team does not have access to all of the actual devices that they need to test with. They're using emulators or simulators, and they can't do the end-to-end testing. In that case, it might make sense to have a separate test team that works closely with the development team. In general, though, you need to test as you go.

 

Another issue, though, is that in my experience it's also very difficult to find testers with the right kind of attitude and mindset to work on an agile team. It really takes such a mindset shift. It's hard to find those people. It's hard to find people who are great exploratory testers and at the same time have the technical awareness to be able to collaborate easily with the technical team members; otherwise you have communication issues. Since it's hard to hire appropriate people, a lot of teams are finding they don't have enough testers. What do they do? What I'm seeing is they're using the testers they have to train the developers and transfer those good testing skills. Pair with the developers.

 

Sometimes that even means pairing with the developers to write production code and, at the same time, teaching them how to write testing charters. Teach them exploratory testing. Make sure that they're covering adequate test cases with their automated tests. Help them with things like driving development with customer-facing tests, like behavior-driven development, that sort of thing. That's the route my team is taking right now, because we just hired another tester, so we now have three testers for a team of thirty developers. It's really not enough. We're experimenting with each developer pair, and our team does 100% pairing. They do test-driven development, or else they're doing behavior-driven development; they're writing Cucumber tests to capture the more end-to-end scenarios. They do those, and they do exploratory testing on that story before they call it finished.

 

Then we testers, we can't possibly test every single story, so we do testing charters at the feature or epic level. We're testing more layers, and we're probably going to find issues they couldn't find by focusing on one story. It's still an experiment. In some areas it's successful. Sometimes, I worry that the feedback loop is getting too long, because we can't do that exploratory testing at a feature or epic level until a critical mass of that epic has been done. That's a little longer feedback loop. At the same time, until all of those stories are done, you don't have all those layers in there to test.

 

I think it's just a question of experimenting because I know other teams where this model is working very well. I think it just depends, again, on your product. If it's a domain the developers understand, they're going to be able to do a better job of testing it.

 

Joe: Great. You mentioned that there are so few testers on your team, and I've seen the same thing, especially in agile teams for some reason. Most sprint teams have more developers than testers. It sounds like if we use behavior-driven development, we can help guide developers to test their code as they go along, because at some point we just don't have the capacity to do all the testing. Have you seen that as a change from how we used to do things back in the 2000s, that now everyone is involved in testing, and has to be involved in testing, for that very reason?

 

Lisa: I think it's certainly a change from the more chaotic teams I was on in the '90s, but on the other hand, I was on a waterfall team in the early '90s where the developers actually wrote unit test plans, which I think was going a little far. They didn't do test-driven development, but they did automate all of their unit tests. We had continuous integration. We had automated deployment. We had a big enough test team that we were able to do all the exploratory testing and automate sufficient regression testing. It worked great. Yeah, we only delivered every six months to a year, but that was fine for our product. I don't think it really matters if you're waterfall or agile, but I think it is important for everyone to get involved in the testing.

 

I've always gotten involved from day one of every project, even when I was on a waterfall team. Back in the day when we had a requirements phase, I tested the requirements. It was really important to ask questions at those early stages and make sure we were going down the right path, and make sure that we understood the value we're giving the customer. Is this really what they want? I think that's always true, but I do think because of agile, we have a lot more programmers who are, quote unquote, test-infected or test-obsessed. They understand the value of quality, and they understand the value of testing. They're much more open to it. Certainly the developers I've worked with for the past ten, even fifteen, years are generally totally on board with it and willing to do the testing.

 

Now, that said, sometimes they don't want to do some of the tester activities. I feel like one of the ways I add value is in talking to our product owner and our stakeholders, trying to understand what it is they want, or trying to help them articulate the business rules: give us some examples of desired behavior, give us some examples of undesired behavior. And asking the interesting questions, thinking of all the dimensions of quality, all the dimensions of the product. I think that's one area where testers, and also business analysts and people with similar skill sets, can add value. A lot of the programmers I've worked with are uncomfortable doing that. They're happy to sit in a room with someone facilitating that conversation, but if it were just them and the customer, they would have a hard time. I think generally programmers are willing to take on just about any testing activity, but there are some they're happier with than others.

 

They're happy to automate tests because that's really just coding. Specifying the tests, though, they may not feel like they are going to think of … Maybe they can think of the happy-path test but not the boundary conditions or the sad paths and things like that. They just don't think that way. Programmers have to be optimistic. They don't necessarily think of the negative things. That's why I think we do need all these different perspectives on a team. We need the diversity of different skill sets and different types of experience.

 

Joe: Absolutely. Can you share some ways or do you know of some ways that teams can build and share expertise in testing?

 

Lisa: One of the things I do is pairing; that's a big one. Something new that I would like to do more of, I've just started learning about it and trying it, is mob testing. Mob programming is becoming more popular: you have more than a pair, you have a group of developers, and you time-box it and rotate positions. Someone is at the keyboard driving, somebody's the navigator, and then the rest of the people are contributing however they see fit. Then you switch every five minutes or ten minutes or whatever it is. You can apply that to testing too. You can do testing with a group of people. I've found that's an interesting way to help people get experience. Pairing obviously is a great way to transfer skills. On my team, we do something we call group hugs. If we've got a new feature that maybe we think is ready to go out to beta, we'll get a bunch of volunteers from the team.

 

It might be developers, product owners, designers, as well as testers, and we'll get a room together, which might be virtual or an actual physical room, and we'll have charters, so people will pair up on charters and do some exploratory testing. We're all in a room together so we can talk. It's like, “Hey, did anybody notice this behavior?” We keep notes of what we try and what we find. That gives everybody a feel for, “Oh, I didn't think of trying that. That's a great thing to try. I'm going to remember that next time.” It's a great way of not only bringing a bunch of perspectives together so that you come up with better ideas, but it also sticks with people: it's really important to test with these different heuristics.

 

One other thing that I did recently, I got this idea at the AATC conference the week before last: somebody who was talking about how to transfer testing skills to developers said she laminated some heuristic cheat sheets and just left them lying around the workstations, and because they were shiny and laminated, people picked them up. I was like, “That's a great idea.” I printed off some copies of Elisabeth Hendrickson's testing heuristics cheat sheet, laminated them, and left them around the workstations. I have noticed people picking them up. We'll see if that has any effect. In our company, every Wednesday we have a tech talk. The company buys lunch, and somebody can get up and give a talk.

 

I signed up to do that, but instead of a talk, I did an exploratory testing workshop where I taught them how to write a charter using Elisabeth Hendrickson's template from her book, Explore It!. Then I brought a bunch of little toys and little games and things, and I said, “Just pick a game or a toy and write some charters for exploring it. Use your persona,” I told them about personas as well, “and do some exploring based on your charter.” They had a great time with it because they were playing with toys and stuff. They said it really helped them understand better how you would apply that to testing software as well. I think there are a lot of different ways that you can encourage people to sharpen their skills.
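
For reference, the charter template from Explore It! that the workshop used has this shape; the filled-in charter below is hypothetical, invented here as an illustration:

    Explore <target>
    With <resources>
    To discover <information>

    Example: Explore the report export
             With malformed date ranges and very large projects
             To discover how the export handles bad input

The template keeps a charter short and focused: one area of the product, one set of tools or conditions, and one kind of information you hope to learn.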

 

Joe: Awesome. Those are all great tips, and I think you actually came up with a title for this episode: “Mob Testing.” Maybe because I'm Italian, that really resonated with me. “Mob Testing,” rubbing out bugs or something. That's awesome.

 

Lisa: That's not my original idea, but I like to pick up other people's ideas and try them out. The person I learned that from is Maaret, and I can't pronounce her last name, Pyhäjärvi. She's from Finland, and she's co-written a book on mob programming with Llewellyn Falco, and it's been really fun to try it out.

 

Joe: Awesome. You've written two books on agile, Agile Testing and More Agile Testing.

 

Lisa: Actually, I've written three. I co-wrote Testing Extreme Programming back in 2001 with Tip House, and then Agile Testing and More Agile Testing with Janet Gregory.

 

Joe: Awesome. Are there any chapters in those books that you think are misunderstood, or where you think people didn't get the point you were trying to make? Any concepts you think the community would do well to revisit?

 

Lisa: Well, actually, we've gotten pretty much nothing but good feedback on Agile Testing and More Agile Testing. People have said it's really helpful, and what people like best is that we've got so many stories in there from contributors who are practitioners with different specialties in testing and in different domains around the world. I think people learn really well when they read a story of, “Oh, these people are doing what I'm doing, and they had the same problem. What did they do to fix it?” It can really give you ideas. In More Agile Testing, we have people sharing stories of how they tested embedded software, or how they tested in a big enterprise company, or how they tested a data warehouse. Yeah, I haven't had any push-back saying, “Well, I don't really get this concept.” Some people have not really liked our agile testing quadrants, which we adapted, with Brian Marick's permission, but that's just a model. That's how we use the model. We encourage people to take the model and make it work for them. Of course, you need to apply it to your situation. Change it. Make it work for you.

 

I did have a lot of misunderstanding on the first book I co-wrote because people tended to look at that one chapter and not read anything else. Back in the early days of extreme programming, there was a lot of test-driven development at the unit level but not so much at the acceptance level. People were not automating tests above the unit level, which is really a problem because you need those regression tests at the API level and something at the UI level. You don't want to always have to do everything manually. We had a chapter on manual testing, and to make a point, the chapter said no manual testing.

 

Then the next chapter was entitled “What?” and in that chapter, we explained that we meant no manual regression testing: we need to have time for exploratory testing, so we need to automate all the regression tests. Unfortunately, people didn't necessarily keep reading, and they got mad at us because they said, “Well, you're saying not to do any manual testing, and that's bad.” If I could go back and redo that book, I probably would not try to make my point that way. On the other hand, I don't know if it was a result of our book necessarily, but I know a lot of XP leaders at the time were saying, “Yes, we have to automate acceptance tests as well. This is something we have to do, so start by assuming you're going to automate it. Then if there's some reason not to automate it, then don't, but start with the assumption that you're going to automate it.”

 

I think that helped a lot of teams get over the hump of doing something that was really hard, starting to automate their acceptance tests because they felt like, “Well, that's just something we have to do if we're doing extreme programming.” Sometimes, I think you have to make a bit of an arbitrary rule to goad people into doing it. Once they've learned how to do it, and gotten over that hump of learning, then they can adapt and apply it where it's appropriate instead of using it as an excuse. Just like, “Well I don't necessarily have to do this because there's nothing wrong with manual testing.”

 

Joe: Absolutely awesome. Great advice. You also recently blogged about how to get more women presenting at software conferences, and this is actually how I finally got you on the show. Why do you think that in 2016 this is still an issue, especially in the testing community? I've worked with so many great leaders in my career who happened to be women developers and testers, so why do you think this is still an issue?

 

Lisa: Well, I've been puzzling over that a long time, but I think what Maaret said in her blog, and what some other people have said, has a lot of truth to it. I don't believe that any of these conference organizers are doing it on purpose. They're not intentionally slighting women, but they're men, and they happen to know more men. That's who they end up inviting. And if you go to conferences … I was at TestBash in March in Brighton, England, which is a wonderful conference, and there were easily equal numbers of men and women, both speaking and attending. When you go to the pub afterward, the men tend to group into groups, and the women tend to group into groups. This is probably just natural human behavior. We tend to congregate with people who are more like us, perhaps.

 

I happen to know a lot of women, so when I was on the program committee for the Agile Alliance Technical Conference, of course I kept thinking of women to invite, which I wanted to do. All of us on the program committee wanted a diverse lineup. I think we just have to pay attention to our biases. Our brains have biases built in, and we have to fight them all the time. Also, I think women are less confident. Most of the women that I asked to come and speak said, “Well, I don't really have any experience. Everybody has the same experience I have. I don't have anything new to share.” Well, you don't have to have anything new. You have your experience, which is unique. How you overcame a particular testing problem, or what experiments your team tried, other people do want to hear that.

 

Speaking is hard. It takes a lot of extra effort. To encourage these women and convince them that it's worth it … The reason I speak at conferences is so I can go to conferences to learn. I actually don't feel comfortable speaking at conferences. I've gotten competent at it over the years by working very hard at it and learning a lot about how people learn and setting up learning experiences where they can learn things themselves. My motivation is to get to the conference so I can learn myself and bring ideas back to my team and meet a lot of fantastic people because when you have a big network of people, and you have a problem, then you usually know somebody who might know the answer.

 

It takes extra work. Conferences like Agile Testing Days, and I know the organizers because I've helped them, go through a lot of extra effort to try to find women speakers. They even compensate lightning talk speakers to try to get more new speakers and more women to try it. Even with that, they're only getting about a 30% women submission rate on their proposals. It's hard. It's tough, but we just need to keep working at it, and we'll make progress. I'm seeing progress now. I really feel like we're turning the corner on this.

 

Joe: Awesome. I think you've brought up a good point. Maybe a lot of people aren't conscious of what's going on, their biases. It may not even be a bias. I looked at my previous episodes, almost a hundred now, and I'm thinking, “Huh, you know what? My male-to-female ratio is pretty skewed.” I don't know why. I randomly select topics, and I look at authors. I don't even know their nationalities or anything. It just turned out that way. Maybe that is something I need to be a little more conscious of.

 

Lisa: Yeah, I've asked the other people who do podcasts, and they said, “Well, anybody can ask to be on the show.” It's like, “You know what? We women are not going to ask. We're waiting to be invited.” There's a two-way thing. We need to ask more, and then maybe other people need to invite more.

 

Joe: Absolutely. Okay, Lisa. Before we go, is there one piece of actionable advice you can give someone to improve their agile testing efforts and let us know the best way to find or contact you?

 

Lisa: I would say think of your biggest testing problem, and take it to your whole team, your whole delivery team, and say, “We've got this testing problem. Can you help me think of experiments to make this problem smaller?” My website is LisaCrispin.com, and I'm Lisa Crispin on Twitter. My email is lisa@lisacrispin.com. I've tried to make it really simple.

 

Joe: Nice.

 

