Software Testing

Becoming a Test Catalyst – Ways to Ignite Your Team!

By Test Guild

I just heard a great story from Maaret Pyhäjärvi, a collaborative software specialist who appeared on TestTalks, about how one of her developers called her a “test catalyst.” I love that term! I believe more of us need to ignite our teams with infectious testing the way Maaret does. But how?

Here are a few ways you, too, can become a Test Catalyst.

Mob Programming

The first thing Maaret highly recommends to help get your team more involved in testing is Mob Programming. 

Mob Programming is the idea that the entire team works on one computer, so there's just one input device. Team members take turns being the driver of the keyboard, and nothing goes onto that computer without the other members of the team — the navigators – having input.

It's driven by a practice called strong-style pairing, which means that before an idea from my head goes onto the computer, it must go through someone else's hands. It’s basically just taking turns being on that computer. (Tip: Also check out Pair Testing)

Mob Testing

Maaret has also started using Mob Testing in specific Mobbing Sessions, during which her team focuses on either exploratory testing or creating test automation.

It could actually be considered a subset of Mob Programming, because you perform the same types of activities within a Mob Programming session.

Maaret has been teaching other testers how to do the stuff that she does, and she has found that Mob Testing is a great way to teach others what actually goes on in a tester's head and make it visible.

Creating a “Team Testing” Mindset

Sometimes it’s hard to get developers to buy into this testing mindset.

Maaret had also been struggling to find a way to get her developers more engaged with testing, which is one of the reasons she got into Mob Programming; she was trying to find more ways to bring her team together, instead of having the typical mindset of providing testing after the fact.

Mobbing has been a way for her teams to learn about what the different folks on the team do, and using Mob Programming has helped her instill the ideas a tester would typically bring to a session. She’s found that it’s a great way for her to teach her developers what she actually does.

Mobbing has significantly changed the culture of Maaret’s teams for the better. And it’s not only Maaret who has seen this; Mob Testing is a technique that Lisa Crispin also highly recommends. (TIP: Also listen to Lisa talk about Mob Testing)

Exploratory Testing 

Exploratory testing is a great way to bring your whole team together around testing and quality. Teams often add this as a last step — after they’re done with all their development — but as we all know, the sooner you find bugs the better, so this is not always optimal.

Maaret also pointed out the common misperception that you need a GUI or finished product to start exploring.

She began holding sessions during which her team would take something without a GUI and explore it. They would often take something that already had extensive unit tests and extensive testing in general, yet when they started to explore it they were still able to find issues with things like documentation.

While using this technique her team found, for example, issues related to having environments with several different kinds of third party components. They also found various problems with the usability of the API.
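To make the idea concrete, here is a tiny illustration of exploring an API that has no GUI, probing Python's built-in `json` module with edge-case inputs. The charters below are invented for illustration; they are not from Maaret's actual sessions.

```python
import json

# Exploratory charter: feed the API edge cases and note anything surprising.

# Surprise 1: NaN serializes to the bare token NaN, which is not valid JSON,
# so other parsers may reject the output.
print(json.dumps(float("nan")))  # NaN

# Surprise 2: non-string dictionary keys are silently coerced to strings,
# so data does not round-trip unchanged.
print(json.dumps({1: "a"}))              # {"1": "a"}
print(json.loads(json.dumps({1: "a"})))  # {'1': 'a'} -- the key is now a string
```

Neither behavior is a bug, but both are exactly the kind of documentation and usability finding an exploratory session can surface before end users hit them.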

Exploration is critical because of the type of thinking that goes on in exploratory testing: if you don't have it within your team, you’re stuck relying on the end users to come back to you with that feedback, which oftentimes might not even happen. A user may simply stop using your product and never tell you why.

Maaret has also seen how these exploratory testing sessions have helped developers with their more developer-centric tests, like the unit tests written during TDD. Sometimes when she works with developers who do test-driven development (TDD), she notices that there are a lot of commonalities between how she thinks as an exploratory tester and how her developers think while they're doing TDD.

The difference often comes from TDD focusing on the very small bits and pieces one at a time, whereas she’s already strategizing the next steps and risks around the product: the things that could go wrong, both on the bigger scale and in the whole environment. But it's a very similar kind of thinking.

For example, she has one friend she’s been working with quite a bit lately who has told her that he's actually become better at TDD, and has been coming up with some great ideas, simply by learning to perform better exploratory testing. There's definitely shared knowledge in those areas.

Discover more Testing Automation Awesomeness

For more ways to become a test catalyst for your teams, check out my full TestTalks interview: Agile, Lean Startup Mindsets with Maaret Pyhäjärvi.

Joe: Hey Maaret, welcome to TestTalks.

 

Maaret: It's great to be here.

 

Joe: Awesome, it's great to have you on the show. I've had you on my wishlist for a little bit since I've spoken with Lisa Crispin who brought up your name about, I think she was talking about Mob Testing. I think this is probably something we'll touch on, but before we get into it, can you just tell us a little more about yourself?

 

Maaret: I've been doing all kinds of testing work for twenty years now, and I currently work with a small development team. There are about nine developers and myself; I'm the only tester on my team, and we're working on a software product. I'm kind of an all-around testing type of person; I've been building a lot of communities, and I have a lot of contacts and ideas from that side.

 

Joe: Awesome. You have a lot of opinions, and you blog a lot, probably about a lot of different topics we'll touch on today. Some of them are probably going to be how we can work with devs on automation, exploratory testing, and production code.

 

Maaret: Definitely, those sound a lot like my topics.

 

Joe: Awesome. There is one thing, though, that I just noticed recently: you blogged about microskills. I was just curious, it piqued my interest: what are microskills, and how can we use them to work better with developers?

 

Maaret: Microskills is really something very new that I've started thinking about. I've noticed some people, especially at the Agile conferences, talking about microskills. They're about identifying the smallest possible things you can learn and become good at, instead of having these huge things. I was having this session where we were improving our test automation for my team with one of the developers, and I noticed again that, pairing up and having me drive, I was getting more and more comfortable, like drilling through the production code and seeing what is there behind the interfaces, and finding out all kinds of things about the structure.

 

I realized that even that is something we don't talk about. It's like a mini, mini skill: knowing how to be comfortable around that. I just started thinking that there are so many more of these kinds of things that we don't even give credit for, and maybe we should start talking about them more. It's really a new idea that I'm just starting to work with.

 

Joe: Awesome. Kind of reminds me, I'm not sure if it's the same thing as, Janet Gregory brought up the term, “technically aware.” Being technically aware of all these other things? I don't know if it's the same concept or not, but that's just something that-

 

Maaret: It's probably similar, but in the sense that you could actually label the things being technically aware means, so that we can acknowledge and give people chances of learning these very little skills, so that they can go forward in their technical [inaudible 00:02:43]

 

Joe: Awesome. I think it's important, because as you brought up, you're one tester on a team of nine developers, and I'm old enough to remember when I was part of an organization called QA, and we had, I don't know, forty, fifty QA people? Now, I'm outside of sprint teams, but every sprint team has maybe one or two of what they call testers or QA people, and the rest are developers.

 

How do you make that transition? I'm not sure if you ever worked in a waterfall type of environment, but that's something I think a lot of people are struggling with. Even though Agile's been around since, what, 2000, this concept of having more developers than testers still seems to be causing some friction, a misunderstanding of what an actual tester's role is now within Agile.

 

Maaret: Actually, I have worked with very typical waterfall teams. I've usually been moving around every two to three years, and this whole idea of being a context-driven tester has meant for me a curiosity about seeing different kinds of organizations, seeing different kinds of contexts, and trying to figure out, “What does good testing look like at those organizations?”

 

I've also been part of these teams of QA or testing professionals, and I have really started to favor this very close relationship with the developers, and figuring out how, in that close relationship, we actually build quality together. As for my background, I was a tester at first, then I became a test manager, and then I realized, as a test manager in one of these very waterfall type organizations, that if I actually had the hands-on view that the testers do, I would've been able to help with major financial considerations in that organization.

 

The empirical evidence that a real tester, a hands-on tester, has is so valuable that we actually give it too little credit, and it actually drove me back to being a tester. I find that working side by side with the developers is really how you become even better at what you're doing.

 

Joe: Awesome. I'm always curious to know how other testers handle this, though. What I've seen teams struggle with is that they always burden this one person with the quote-unquote “testing activities.” It's really frustrating, because it's almost like they're stuck in these mini-waterfalls, where developers develop up to … they have a two-week sprint, you know, thirteen days of development, and then they give one day to this poor guy: “Okay, you're in charge of testing.”

 

How do we fix that? How do you handle that? I'm just curious to know how your typical experience is in this situation.

 

Maaret: At first, when I joined the organization, I handled it by saying, “I'm just one, and there are a lot of developers, so they can't actually expect any of the testing that they were doing to go before me. There is this whole invisible part of testing that they have never seen, and that's what I will be doing.”

 

I got buy-in from the developers that they would still keep trying to test their best, and one of the gimmicks that I've been using with them is that sometimes I put things into production, and I tell them in advance, and I look at their faces when I say that I will do that. Being available when they ask me is one of the things; other times it's just letting them work on their own. That's been one of my things.

 

We've still been struggling a lot with all these kinds of things; that's kind of how we got into Mob Programming. I was trying to find ways of bringing my team more together, instead of having this “I will provide testing after the fact.” Mobbing has been a way for us to learn about what the different people do, so it's really Mob Programming as it's usually practiced, with me then instilling the ideas that a tester would bring.

 

Also, it's a way for me to teach my developers, “What do I actually do?” Kind of showing what happens when I do exploratory testing.

 

Joe: Awesome. There are a few things there that I'd like to expand on. I've covered this in previous episodes, but I'm curious to get your take on, what is Mob Programming, or Mob Testing?

 

Maaret: Well, Mob Programming is the idea that the whole team works on one computer, so there's just one input device, and we're taking turns being the driver of the keyboard, and nothing goes on that computer without the other members of the team, the navigators, speaking up about it. It's driven by this practice, strong-style pairing, which is the idea that for an idea from my head to go onto the computer, it must go through someone else's hands.

 

It's just, you know, taking turns being on that computer. Mob Testing, you were asking about that; I've been starting to use the word Mob Testing for specific Mobbing Sessions where we're focusing on either exploratory testing or creating test automation. It's part of Mob Programming, really; you would do those kinds of activities within Mob Programming.

 

Training-wise, especially at conferences, I've been teaching a lot of other testers how to do the stuff that I do, and I find that Mob Testing is a great way of teaching others what actually goes on in a tester's head, and making visible the knowledge about testing that we already have in the group.

 

Joe: Awesome. I definitely agree. I don't know if this is controversial; I think because there are so few testers on teams, developers need to contribute to testing, but then some people say, “No, that can never happen, they're two different mindsets, and we're always different, and a developer can never do testing.”

 

What are your thoughts on that? Can a developer do testing, or do you think they are two completely different disciplines that should never meet?

 

Maaret: I actually had an interesting experience with this. I was sitting with a developer, pairing with a developer, on one new feature. At some point I had this practice where, when the developer said he was done with the feature, he would come and demo it to me. Just by speaking up about things I would like to try, I made him try them, so we were pairing this way.

 

In one of these particular sessions, the developer in question was sitting at the keyboard. He was just looking at me and saying, “Hmm, you want me to test this, right?” He would click on the things that I would have wanted him to test without me saying a word, and with me remaining silent, he kept going on and on, adding these bits and pieces, saying to me all the time, “You would want this.”

 

At the end of that half-hour session, he had found many bugs without me saying a word. I asked him, “What was different about me being there, versus me not being there?” He could have tested it by himself as well; he had all the skills and all the knowledge, clearly. I just know that. He said that it just feels different when he thinks about me. We joked afterwards that maybe holding the space is the thing I actually need to provide. Maybe there could be a voodoo doll that looks like me that I can give to all of my team members.

 

For me, that particular experience was one of the defining moments when I realized developers actually can test, but a lot of times we don't encourage them, take the time, or change the perspective, and we even do them a disservice in saying that they cannot do it, because then they won't try, since they will fail anyway.

 

Joe: What are some reasons around this? Do you think some developers don't even think of testing? They don't even want to think of testing, because it's not part of the actual deliverable that they're being judged on within their particular sprint team?

 

Maaret: Probably some people would think that way, but then again, a lot of developers, at least the ones I've run into, are really well aware that it comes back to them anyway. They know it's not the QA people who will be there on weekends and on long nights to actually fix things; it's them. They would actually like to do that; they feel they don't have the tools, or the knowledge, and they feel they don't have the time. This whole velocity-driven mindset often creates this feeling that you would be better if you were faster.

 

Then the wishful thinking of, “Maybe it works, because I designed it to work,” kicks in. With my team, we really handle it a bit differently. We used to do Scrum, we used to have this velocity, and we actually were seeing the very same problems that you were talking about, like being focused on the velocity and often finishing the features at the last moment, right before they were supposed to be done. I was then struggling a lot more with getting things tested.

 

For the last two and a half years, we've been doing continuous delivery. We put things in production every day; the things that we put in production are typically very small, and when we started off with continuous delivery, it was a point when we had no automation whatsoever, and still, these small pieces that I would be exploring with the team worked a lot better. At that point, we moved to no estimates; we don't look at velocity points at all anymore.

 

Actually, we are delivering a lot more, because our time isn't going into trying to figure out random estimates; it's going into figuring out what would be the next smallest piece we can deliver, including testing.

 

Joe: You mentioned automation a few times. Once again, I'm just curious to know, who handles automation in your team? I believe that developers should do automation; I feel it's a programming skill, a developer's skill, and I feel they would be the ones that best create the code for automation. I'm curious to know your thoughts on that.

 

Maaret: Even though I'm a tester, I'm also a programmer. Having special skills in testing, and having so many people without those special skills on my team, we've taken the approach where the developers in my team do most of the automation. I'm often the one who drives that, so I'm the one who helps them figure out time frames in which they could add unit tests that they never had. I'm usually the one who brings in people to show them how to do proper unit tests. I'm usually the person who introduces Mobbing sessions on distributing our Selenium knowledge.

 

A lot of times, my role is finding the next best thing we can do on the automation side, and especially in the Mobbing sessions, I'm often part of that. We've been doing our Mobbing on Selenium, adding our Selenium scripts together, and the interesting part was that not only did two-thirds of my team learn to do Selenium after that, but I actually started doing more Selenium too, because it gave me enough of a view to know what kinds of things I could add there.

 

We've done exactly the same thing with approval tests, which are closer to unit tests, so it's about learning these basic skills, and it could be anyone who has time [inaudible 00:14:19] that does the automation.

 

Joe: Absolutely, that's a great point. Once again, I've mentioned this on previous shows, I feel a tester is almost like a shepherd nowadays, because they know the product pretty much from beginning to end. They tend to be able to guide people: “Here are the best practices, here's what we should be following, here's how you do unit testing.”

 

Before, people were saying, “Oh, QA's dead. Tester's dead.” To me, it's almost like this role has been elevated now as to this person, because there's so few of us on the sprint team, it's up to us to really educate our teams, and to really get everyone involved in the development lifecycle. Part of that is testing.

 

Maaret: I really liked this. My team called me a catalyst.

 

Joe: Nice.

 

Maaret: They said that's what I do. I actually think a lot of testers do that kind of thing. They bring in ideas and they suggest things that could be considered together as a team, they show the things that are broken, and then they initiate the discussions: “Why does this keep getting broken again and again? What could we as a team do about that kind of thing?”

 

The whole idea of empirical evidence, and the whole idea of having all these illusions that you need to identify and [inaudible 00:15:32], that's a big part of the tester identity. It comes very naturally; I also identify illusions related to the way we work and, you know, just open those discussions up so that together we can decide what we would do about them.

 

Joe: Awesome. You keep saying “us as a team;” it really sounds like a whole-team approach, where everyone on that team is responsible for delivering quality. How did you get your team to all embrace this “we all own it, we're all one team, we're all delivering quality here”?

 

Maaret: I would actually have to say that Mob Programming has been the key to that. The fact that we started doing Mobbing once a week, for practice purposes, and started to actually see what the others were doing and what they were struggling with, and seeing the value of having everyone in the same room at the same time, has also helped us appreciate each other when we are not all in one room.

 

Mobbing has changed a lot of the culture in my team, and it's been a really nice experience in many ways. For me as a tester, going into the whole Mob Programming mindset, the idea that I would take my five minutes on the keyboard and have to type something I hadn't done much before, like on the production code side, without really knowing what's going on, that was a big fear of mine when we got started.

 

Over time, I started appreciating the fact that I had special information that none of the others had. There was, for example, one time when, from half a sentence, the guys said that I caught a bug that would've cost them probably weeks of work, and it all came from half a sentence, without any egos bruised. You know, just people talking and realizing I had information that no one else in the room had.

 

That appreciation of having those experiences together, that has really come into the other parts of work, even when we work remotely. Cultures change through working together, and I feel that Mob Programming is a great way of getting that kind of change coming in.

 

Joe: Let's change gears a little bit, and get to your thoughts on exploratory testing. I'm just curious to know, when we get to exploratory testing, isn't it too late to find a bug? What I mean by that is, I know a lot of companies have moved to a more behavior driven development type of approach, where they try to shift the testing earlier and earlier in the development lifecycle, so as soon as they start developing requirements, they're having discussions and trying to hash out things before any code is written.

 

When we get to exploratory testing, I see the most value for my teams when they start doing exploratory testing, but it's almost disappointing to me, because it's like, “How come we didn't catch this earlier?” I know it's a loaded question and it's a long answer to that, but what are your thoughts on exploratory testing, why we need to do it, and what's the right way to do it?

 

Maaret: There are really two bits to this answer. First of all, I live in a world where we do continuous delivery; we release daily. I can go and explore the version that is already in production, and there's a lot of information I can find there that will then impact the features that we are still adding to the product. For me, I look at the product as if it were my external imagination, and even when I'm trying to do BDD style things, I need the external imagination to feed all those ideas that go into the product next.

 

Then the second part is that, even when we actually do the BDD types of things, and this has been a big stress of mine at some point, I feel that we're not perfect enough, we don't understand things well enough yet, we haven't learned enough yet, to actually be complete with the things we're coming up with, the scenarios we're coming up with.

 

At the point when it has been built and you can see it as really part of the bigger scale, you still find out new information. It's a great thing that we're trying our best to put things down before we start implementing, so that we would actually be targeting the same thing. There's still bound to be, in my experience, things that we find later on when we're exploring, when we're leaving the ideas that we earlier came up with, when we're giving them time to sink in, and go together with all of the other experiences and ideas we have.

 

Joe: Awesome. I think that's a great answer. It sounds like, you know, we can't really hash out everything in the beginning; it gets back almost to waterfall, where we thought we could hash it all out with requirements.

 

Maaret: Yeah. Exploratory testing is a really great way of bringing out information for the team that the real end users never give you. For example, one of the things that I've been doing recently is running conference sessions on exploring an API.

 

There's this common misunderstanding that you need a GUI, or you somehow need a finished product, to explore. I started doing these sessions where we would take something without a GUI and we would explore it. We would even take something that had extensive unit tests, and really extensive testing [inaudible 00:21:12] in general, and still, when we've been exploring that, we've been finding issues with the documentation. One of the problems was that the documentation that existed wasn't copy-pasteable, so it made starting to use that tool more difficult than it should be.

 

We found issues related to having environments with several different kinds of third-party components, where there would be problems installing from the typical channels. That was one type of thing. We found various problems with the usability of the API, things that people wouldn't understand about it, and also information that would actually help people figure out that particular API [inaudible 00:22:03].

 

A lot of thinking goes on around exploratory testing; if we don't have that in our team, we're relying on the end users to come back to us with that feedback. That might not even happen.

 

Joe: For some reason, I think of exploratory testing as something that happens after something's already developed, but it almost sounds like you're saying that we could think of exploratory testing even at the unit level. You could write some tests and explore how your code behaves.

 

You could use it in integration testing, when you say you have an API but you don't have the front end yet, and then you can use it later on once you have all the components put together. It's almost like it's not just after everything is done, you can use exploratory testing on almost every stage of development. Did I understand you correctly?

 

Maaret: Definitely. Sometimes when I work with developers who do test-first development, I'm noticing that there are a lot of commonalities in how I think as an exploratory tester and how they think while they're doing TDD. The difference often comes from TDD focusing on the very small bits and pieces at a time, whereas I'm already strategizing the next steps and risks around the product, and things that might go wrong on the bigger scale, and in the whole environment.

 

It's a very similar thing, and I have this one friend that I've been working with a lot. He tells me that he's actually become better even at TDD, in coming up with the ideas that he has, just by learning to do better exploratory testing. There's definitely shared knowledge in those areas.

 

Joe: Awesome. Once again, I have eight to ten sprint teams, and what we've been doing lately is what we call a test fest at the end of a deliverable. After four months, we do a test fest, which is basically exploratory testing; everyone jumps in and tests.

 

How do you get your developers to do the same thing, where you have this whole-team spirit going on already, so exploratory testing is just another activity that the whole team participates in?

 

Maaret: Well, at first we also had these test fest type of sessions. Over time, the developers actually grew really bored with those. They came with whatever knowledge they already had and spent very little time, so with this whole idea of testing being learning in layers, they never actually got to the layers. They were always only scratching at the surface.

 

We started doing those sessions more in the Mob Testing format, where we would actually all work on the same thing, and that way we could bring in the skills and observations that come from the team, but focused on this one task.

 

When you feel you are either learning or contributing, usually you want to come back, but if you're just using your existing skills and not getting any new insights from the other people participating, it might be hard to keep up the energy to go back.

 

Joe: Are there any books or resources you would recommend to a tester to learn about some of the topics that we've hit on so far?

 

Maaret: There are a lot of articles about exploratory testing around. Of course, I write about this stuff a lot on my blog, so that's one place to go and look for things, but they are often more like [inaudible 00:25:30]. I'm thinking through these types of things, so there's a lot there, maybe, toward that goal.

 

On the book side, there are a couple of books on Mob Programming. One of them is by myself and Llewellyn Falco, called “Mob Programming Guidebook.” It's available on Leanpub. It's more of a how-to-get-started book; for the history and practical ideas, there's another book by Woody Zuill that people should also look into.

 

Joe: I'd like to touch a little bit on automation. Like I said, you're a tester, but you also are a developer, you have developer skills. How do you handle automation in your team? How do you help your developers create better automation?

 

Maaret: I've been very privileged in the sense that I get to go to a lot of conferences, and I'm very connected with the community in general. I've had the chance to read a lot and find the different routes. Often, especially after open space conferences, I come back to my team with all these amazing ideas.

 

The most recent one that I came back with is the idea of multi-locators. We're often struggling on the Selenium side with the locators being brittle, maintenance being difficult, and having to go and fix things all the time, which adds up.

 

There seems to be really cool stuff being looked at by Jeff Bosch, and there's some academic research on that side too. Having these reusable ways of finding certain elements with different kinds of locators is definitely one of the things I'm bringing back to my team.
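The multi-locator idea can be sketched without reference to any particular research tool: try several locator strategies in order and use the first one that matches, so a single brittle locator no longer breaks the test. Here is a minimal, framework-agnostic sketch; the `ElementNotFound` exception and the login locators are invented for illustration, and with Selenium you would wrap `driver.find_element` and translate `NoSuchElementException`.

```python
class ElementNotFound(Exception):
    """Raised when a locator strategy finds nothing."""

def find_with_fallback(find, locators):
    """Try each (strategy, value) locator in order and return the first
    element found. `find` is any callable that returns an element or
    raises ElementNotFound; with Selenium you would wrap
    driver.find_element and translate NoSuchElementException."""
    for strategy, value in locators:
        try:
            return find(strategy, value)
        except ElementNotFound:
            continue
    raise ElementNotFound(f"no locator matched: {locators}")

# Hypothetical usage: a stable id first, then progressively weaker fallbacks.
login_locators = [
    ("id", "login-button"),
    ("css", "form button[type=submit]"),
    ("xpath", "//button[contains(., 'Log in')]"),
]
```

The maintenance win is that a redesign that breaks one locator only degrades the lookup to the next strategy instead of failing the whole test run.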

 

Another thing that I've been really into recently is looking at something some developers are working on. They call it theory-based testing, but in my world of testing, we've always called them partial oracles: the idea that we would have these rules where, given any A, you would do B, and you would have these characteristics that are supposed to be true throughout the application, or in a particular area.

 

Automating from that perspective, I think there's a lot of stuff we can do together with the developers. Often I come to my team with these ideas and say, “Hey, I'd like to try this,” and they come with deeper developer skills and say, “Hey, this is how we could try things out.”
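As a sketch of what a partial oracle can look like in automation (a generic example, not Maaret's team's actual code): given any input A, we check properties that must hold of the output B, without ever stating the exact expected answer. For a sorting routine, the output must be ordered and contain exactly the input's items:

```python
import random
from collections import Counter

def check_sort_oracle(sort_fn, trials=100):
    """Partial oracle for any sorting function: for every generated input,
    the output must be ordered and be a permutation of the input. We never
    say what the 'right answer' is -- only properties that must hold."""
    for _ in range(trials):
        data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        out = sort_fn(list(data))
        assert all(a <= b for a, b in zip(out, out[1:])), "output not ordered"
        assert Counter(out) == Counter(data), "items added or lost"
    return True
```

Property-based testing libraries such as Hypothesis industrialize this same idea, generating the inputs and shrinking failures automatically.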

 

Joe: I think a lot of people have a bad feeling about automation because they just focus on end-to-end Selenium tests, or even back in the day, WinRunner and QTP tests. It never was a good experience, because all those end-to-end tests tend to be brittle, and not everything needs to be an end-to-end test.

 

How do you handle this? I know you've talked about API testing. Do you automate things underneath the covers? Or, like you said, you encourage unit testing; do you have a certain percentage you try to have your teams focus on for any sort of automation?

 

Maaret: We're still getting started on the test automation side, so it has been a particularly difficult skill set in my team, for all of the developers. We've had maybe the best experiences in my team with database checks, you know, wanting the database to be in the state it's supposed to be in.

 

There are all kinds of rules on the relationships of things, and we want to check those both in the test environment and in the production environment, and get alerts. That's been the most valuable thing for us. We've been trying to figure out all the different kinds of things we could try.
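A check on the relationships of things, of the kind Maaret describes, can be sketched as a query that hunts for rows violating a rule. The schema below (orders pointing at customers) is invented for illustration; a real check would run against both the test and production databases and raise an alert on violations.

```python
# Hypothetical database check: assert a relationship rule holds,
# here "every order must point at an existing customer". The schema
# and data are invented for illustration.

import sqlite3

def orphan_orders(conn):
    """Return ids of orders whose customer_id matches no customer row."""
    return conn.execute(
        """SELECT o.id FROM orders o
           LEFT JOIN customers c ON c.id = o.customer_id
           WHERE c.id IS NULL"""
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1);
    INSERT INTO orders VALUES (10, 1), (11, 99);  -- order 11 is orphaned
""")
bad = orphan_orders(conn)
# In a real setup, a non-empty result would trigger an alert
# rather than just being inspected by hand.
```

Run on a schedule, the same query doubles as a monitoring check in production and a post-condition check in test environments.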

 

We've tried approval testing on unit tests. Approval testing is this idea of a golden master: instead of defining asserts up front for the things you expect, you put the things you're getting out of your code into a text file, and then you look through them to see whether they're what you expected. If you have manually tested the things and they're already in production, that works well enough for now, and you can add tests quite fast that way.
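The golden-master workflow just described can be sketched in a few lines: on the first run the output is recorded for manual review, and every later run fails if the output drifts. The `verify` helper and file naming here are illustrative, not the API of any particular approval-testing library.

```python
# Minimal golden-master ("approval testing") sketch: record the code's
# current output once, review it manually, then fail whenever it changes.
# The helper name and file convention are hypothetical.

from pathlib import Path

def verify(received: str, approved_file: Path) -> str:
    if not approved_file.exists():
        # First run: record the golden master for manual review.
        approved_file.write_text(received)
        return "approved (first run) - review the file manually"
    if approved_file.read_text() != received:
        raise AssertionError(f"Output differs from {approved_file}")
    return "matches golden master"

# Usage against code under test might look like:
# report = generate_report(orders)
# verify(report, Path("report.approved.txt"))
```

This is why approval tests are quick to add over already-working code: the manually tested, in-production behavior becomes the expected output without writing asserts by hand.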

 

This whole integration of approvals and unit tests, finding the different places where we can put things and the different technologies that help us, that's what we've been doing a lot with my team.

 

Joe: You've been mentioning that your team does continuous delivery. I guess one of my pet peeves is when people call automation "checking," because to me that limits automation. I don't think of automation as just checking; I think of it more like what Richard Bradshaw calls automation in testing. Say you have a set of data you need to populate an environment with so people can do testing. You automate that.

 

You have to deliver something into production, and you need to move an application and build it automatically so you don't do it manually. That, to me, is automation. How are you handling these types of activities for your continuous delivery? How are you able to deliver working software every single day?

 

Do you use any sort of automation, like Chef or Puppet, to help you with those types of activities?

 

Maaret: We work in the Microsoft environment, so TFS gives us the basic tools, very similar in that sense. That's the environment. The whole building and getting things deployed is all automated. Again, it's a bit of a question of preference whether you want to call that testing or development, and sometimes those lines are really, really blurry.

 

You mentioned the word "checking" that is going around. I'm also having a bit of difficulty with that term. For me, it was helpful to understand that when James Bach and Michael Bolton talk about checking, they mean the thing the computer does.

 

When the human creates the checks, that's already testing. There's this weird distinction that the creation of checks is not checking; only the running of those checks is checking. There's this big gap. But I'm also seeing this problem, especially since I work a lot with developers. My colleague Llewellyn Falco often says "testing," and then he talks about specification, feedback, regression, and granularity.

 

I say "testing," and those are not the words I would use to describe exploratory testing, or my kind of testing. I would say it guides us, gives us serendipity, gives us models, gives us deeper understanding.

 

We have testing and testing, and we should find some way of distinguishing between them. Instead of calling one of them checking, I've come around to the idea of calling one "focusing on creating artifacts from testing" and the other "focusing on the performance and exploration side," with the whole infrastructure of creating scripts that help us collaborate better. I really don't care if it's development or testing; I just care that it gets done.

 

Joe: Absolutely, great point. Before we go, is there one piece of actionable advice you can give someone to improve either their tester-developer collaboration or their Agile testing efforts? And let us know the best way to find or contact you.

 

Maaret: There's one thing that I believe everyone should do, and it is strong-style pairing. The idea is that instead of mobbing, you find somebody with special skills that you don't have, and say that you will take the keyboard and just be their hands. It's an amazing way of both building the relationship and learning from that other person.

 

The way to contact me: you can find me online on Twitter, I'm @maaretp, M-A-A-R-E-T-P, and you can also find my email information; maaret@iki.fi is the email that I use.

 
