About This Episode:
When it comes to software development, it's not always easy to articulate what we want from the software. That’s where Behavior Driven Development (BDD) comes in. In this episode, Seb Rose and Gaspar Nagy, authors of the new book Formulation: Document examples with Given/When/Then, explain how all stakeholders need to be involved in creating a product's specification. Discover BDD tips for your entire development process, including the specific technical practices required to drive development using collaboratively authored specifications and living documentation successfully. Listen Up!
The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!
** Buy Formulation: Document examples with Given/When/Then Now **
About Gáspár Nagy
Gáspár Nagy is the creator and the main contributor of SpecFlow, regular conference speaker, blogger (http://gasparnagy.com), editor of the BDD Addict monthly newsletter (http://bddaddict.com), and co-author of the book “BDD Books: Discovery – Explore behavior using Examples” (http://bddbooks.com).
Gáspár is an independent coach, trainer, and test automation expert focusing on helping teams implement BDD and SpecFlow. He has more than 15 years of experience in enterprise software development, having worked as an architect and agile developer coach.
Connect with Gáspár Nagy
About Seb Rose
Consultant, coach, trainer, analyst, and developer for over 30 years.
Seb has been involved in the full development lifecycle with experience that ranges from Architecture to Support, from BASIC to Ruby. He’s a BDD advocate with SmartBear, helping people integrate all three practices of BDD into their development process and ensuring that appropriate tool support is available.
Full Transcript Gáspár Nagy and Seb Rose
Intro[00:00:01] Welcome to the Test Guild Automation podcast, where we all get together to learn more about automation and software testing, with your host Joe Colantonio.
Joe Colantonio[00:00:16] Hey, it's Joe, and welcome to another episode of the Test Guild Automation podcast. Today, we'll be talking with Gaspar Nagy and Seb Rose all about their new book, Formulation: Document examples with Given/When/Then. If you don't know Gaspar, he is the creator of SpecFlow, a widely used BDD framework for .NET, and I think it's a really great implementation of Cucumber. Anyway, he's an independent coach, trainer, and test automation expert focusing on helping teams implement BDD and SpecFlow throughout their companies. He has more than 20 years of experience in enterprise software development, and he's worked as an architect and Agile developer coach. Also, Seb is joining us. He has been a consultant, coach, designer, analyst, and developer for over 30 years, which makes me believe he must have started programming when he was around five or ten. But anyway, he's been involved in the full development lifecycle, with experience ranging from architecture to support, from Ruby to BASIC, all types of different programming languages. He's currently a continuous improvement lead with SmartBear, helping apply the lessons he's learned to internal development practices and product roadmaps. We have two awesome experts on the show. BDD can sometimes be controversial, because I think most people do it wrong. Learn how to do it right. You don't wanna mess this up, so check it out.
Joe Colantonio[00:01:30] The Test Guild Automation podcast is sponsored by the fantastic folks at Sauce Labs. The cloud-based test platform helps ensure your favorite mobile apps and websites work flawlessly on every browser, operating system, and device. Get a free trial: just go to TestGuild.com/SauceLabs and click on the exclusive Sponsor section to try it free for 14 days. Check it out.
Joe Colantonio[00:01:58] Hey, guys, welcome to the Guild.
Seb Rose[00:02:01] Hello
Gaspar Nagy[00:02:02] Hello everyone.
Joe Colantonio[00:02:04] Awesome, great to have you both back on the show. It's been a while, and I thought it'd be a great time to get back together since you wrote a new book. So I guess, Gaspar and Seb, before we get into it, is there anything I missed in either of your bios that you want the Guild to know more about?
Seb Rose[00:02:15] Well, I want to first say I've actually been writing commercial software for 41 years, but I started when I was a little bit older than five, so there you go.
Joe Colantonio[00:02:24] Hard to believe that's awesome.
Gaspar Nagy[00:02:28] Yeah, probably. I just have to think about making it a little bit shorter because as you just said, all of those things, it was a little bit too long maybe.
Joe Colantonio[00:02:36] Now that was awesome. Good. Good to hear. It seems like Cucumber is growing. I guess I must start off a little off-topic, and maybe I'll let it go; I'm just curious. BDD started off, I guess, with Cucumber or Gherkin as a movement to help teams evolve toward writing better software and having better communication about software. And it seems like vendors have gotten more and more involved in this as well. Is there a reason why some of these Cucumber implementations have been acquired by different vendors, or is there a push from customers that is causing vendors to get more involved in BDD?
Seb Rose[00:03:06] Yeah, so I mean, I think the basic momentum behind this has been, to coin a cliché, shift left, and the desire for test automation. BDD has benefited from all the wrong buttons being pressed. I don't know, Gaspar, do you have anything to say? Do you disagree with that?
Gaspar Nagy[00:03:24] No, no, no, definitely not. But I see all these acquisitions and all these things as a little bit of a symptom, or maybe an indicator, that BDD has stepped forward and is now somewhat more mature, which of course has all these disadvantages as well. But I think with these movements, there is a chance that it will reach a wider audience, maybe.
Joe Colantonio[00:03:47] Absolutely. So you've both been involved with BDD for a while, and you've written a few books on BDD as well. So why this book, this book on Formulation? I mean, I'm not even sure what that means. So maybe explain why you wrote this book and what formulation is. Gaspar, let's start with you, or Seb, feel free to jump in.
Gaspar Nagy[00:04:04] I think there are some books about BDD, but not many which focus on this particular topic: how to write good BDD scenarios. That's the very simple way of saying what the topic of this book is. There are not so many around, and I think we have seen that there is a lot of practice gathered by the different BDD practitioners, good ideas and good practices. And we thought, let's try to put them into one place where you can find all these things in a single book. I think that's why we did it. Of course, historically, how it happened is a little bit of a different story. But in the end, I think that's why we are really happy that we have this book.
Seb Rose[00:04:44]Yeah. So I'd like to sort of just I mean, I understand why Gaspar doesn't want to go into the history as such, but you know…
Gaspar Nagy[00:04:50] I just wanted to give it to you that.
Seb Rose[00:04:51] Yeah, give it straight. So, definitely, you know, we think it's a really valuable book, and luckily so do the people that reviewed it and read it. So, you know, that's a good place to start. But that's not why we wrote it. We actually started off with a quote, a statement that Liz Keogh made, it must be getting on for over a decade ago, where she said, you know, BDD is not about automation. First off, you need to get to a shared understanding, have the conversations, then you need to capture the conversations. And once you've got good at capturing the conversations, then you start using the tools to automate them. And our first book was called Discovery, and that's all about having conversations. And formulation is the second practice of BDD, so discovery is the first practice and formulation is the next practice. And that's where you take what you learn when you are having those conversations, and you capture that in business-readable language. And if you use tools like Cucumber or SpecFlow, you use the Given/When/Then Gherkin format. And so it was a logical place to go. You start with the first practice, you write a book about it, and then, oh well, we've got a series coming on, haven't we? So now you can write a book about the second practice. What I think is really good for us is that there is no other book that quite covers this field. There are a couple of books that come close, but this is the first book to cover using Gherkin to capture the shared understanding that comes out of discovery in the shape of concrete examples.
Joe Colantonio[00:06:16] So why do you think there is, or was, that gap? Is it because it's an area people aren't necessarily informed about, or do they skip it, or does it usually get muddled in with discovery and confused in that process?
Seb Rose[00:06:29] Yes. So a lot of people, when they're doing discovery, go straight to Given/When/Then. Even the folks that don't go straight to Given/When/Then… well, it just seems really easy and obvious, doesn't it? You just take the examples and you just write 'given', and you think of all the contexts that you could possibly have; 'when' you do something, 'then' something happens. You go and you look at your test scripts, in whatever test script database you keep them in, and you go, well, you know, this is the setup, we'll call this 'given'; these are the actions that the manual tester used to make, so we'll just turn these into 'whens'; and this is what they check at each point, and we'll turn these into 'thens'. So the thing is, it's really simple to do. It's just really quite hard to do well, in a way that's maintainable and understandable, which anybody that's read through test scripts that have been lurking at the bottom of an ALM for a decade will know. You know, you look at these tests and you go, "Who wrote this? Why did they write this? What does it mean? What does it test?"
Gaspar Nagy[00:07:23] Yeah, absolutely. In addition to that, in many cases the BDD concept comes in through the developers or the test automation engineers. And I think developers typically assess the complexity of something by how complex the language is. And Gherkin is an extremely simple language. It has, I think, less than 10 keywords or so, and there are minimal constructs that you can really have in it. And therefore, that gives the feeling that writing a BDD scenario with Given/When/Then is easy. Actually, it's not that hard, but I think it's not easy in the way people would think, and you really have to get these small tricks and tips into your fingers to be able to make it better.
Joe Colantonio[00:08:01] Absolutely. In theory, this is very simple: you can just open up a text file and write your specifications. So where does the complexity come in, where, like I say, it gets perverted? I don't know why, I guess we'll all dive into this, but it seems like a lot of teams I've worked with use it just as an automation framework, rather than where I think it really shines: communication, finding issues, like you said, as we shift left, before we actually build bugs into the software, and allowing the teams to get a shared understanding. So is there a disconnect? When is it, like, whose fault is it, and in what phase, you know what I mean?
Seb Rose[00:08:35] Oh, you want to point fingers at people…
Joe Colantonio[00:08:38] Not point fingers, but what's…
Seb Rose[00:08:40] Joe, you bad, man.
Joe Colantonio[00:08:42] How can we make it better?
Seb Rose[00:08:45] So I think the biggest, the first disconnect we cover in the book is with a structure that is called a Scenario in Gherkin. You can create big scenarios that cover end-to-end journeys of a user or an actor in the system, and they then turn into these hard-to-read, hard-to-maintain, brittle encapsulations of a single flow through lots of connected pieces of functionality in the system. Whereas the way BDD and Cucumber work really well is where the majority of the scenarios you write are what we call illustrative scenarios, which illustrate a single business requirement or business rule. And this is a realization that has only manifested itself in the Cucumber community in the past four or five years or so. And to give credit where credit's due, it really started when Matt Wynne created example mapping. He was visiting a client and they were having real problems during discovery, not formulation, during discovery. And he happened to have, magically, in his pocket one of those packets of five-by-three index cards that come in four colors. And he just sat people around a table with these index cards. He wrote the story on a yellow card. And then he said, "What are the acceptance criteria? What are the rules that govern this story?" And he wrote those on the blue cards and put them on the table and said, "Does everybody understand what these acceptance criteria, what these rules mean?" And it turned out they all had subtly different understandings of what those rules were. So he encouraged them to use the green cards to write out examples that illustrated their understanding of each of those rules, each of those acceptance criteria. And that's where we got example mapping. And it was, Gaspar, you'll correct me here, but it's only three or four years ago that the Rule keyword got added to the Gherkin language.
Up until we did that, there was no way to take an example map, which has rules and the examples that illustrate them, and convert or transpose that into a feature file, because we had no way in Gherkin to indicate which rule an example was illustrating. We just had free-form text. Now, in Gherkin, you can say: this is a rule that we want to understand, and here are the scenarios that illustrate that rule. And lo and behold, we've now got a really smooth flow through from discovery, where the conversation leads us to create concrete examples, to formulating scenarios that are structured around the rules they are designed to illustrate.
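As a sketch of what this looks like in a feature file (the feature, rule wording, names, and amounts here are invented for illustration; they are not examples from the book):

```gherkin
Feature: Shipping costs

  Rule: Orders of $100 or more ship for free

    Scenario: Order just above the free shipping threshold
      Given Alice has $101 worth of books in her basket
      When she checks out
      Then she pays nothing for shipping

    Scenario: Order just below the free shipping threshold
      Given Alice has $99 worth of books in her basket
      When she checks out
      Then she pays the standard shipping fee
```

Each scenario sits under the rule it illustrates, so a reader can trace the example straight back to the requirement, which is exactly the flow from example map to feature file described above.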
Gaspar Nagy[00:11:30] So the scenarios, before the Rule concept came in, were basically just flying around, and they weren't really connected to the requirement. And I think that highlighted a typical testing problem as well: at least those kinds of tests that we typically have with BDD are functional tests, testing the functional quality of the software. And I think it's easy to see that to be able to check the functional quality, I have to verify that the software really does what it's supposed to, and this is where people typically stop. But I think what is very important is that what it's supposed to do, the expectations, are properly documented. And that's the problem that was also visible, I think, in the BDD community: they were creating tests with the BDD Given/When/Then language, and of course everyone was happy if they turned green and everyone became sad if they turned red, but they were really disconnected from the original expectations, and therefore they weren't really useful, at least not as useful as they could be. So if you are investing the time and the money to make a proper automated test, then it makes it even better if you can really trace it back to the original requirements, and the Rule keyword and all the things related to the phrasing of the scenarios somehow help you with that.
Joe Colantonio[00:12:43] You just mentioned that the first chapter of the book is on formulation, and the section we're in now basically seems to land in chapter two, where you talk about cleaning up scenarios, and you actually talk about example maps, which I'd never heard about before. And I guess, why did you make the second step cleaning up old scenarios? Because it implies that people are already doing BDD, and I'd think it would be a whole mess already, rather than starting with a brand-new scenario. Why start with cleaning up scenarios as the second chapter?
Seb Rose[00:13:09] So that's a really good question. And the answer is, we decided not to do that. We decided to do exactly what you said, which was to start with a greenfield site. And when I came to, you know, start writing that chapter, it just seemed more natural to go and show a team struggling with something that we have both seen so many teams struggle with already. Part of the problem of getting behavior-driven development accepted in organizations is that many people have heard about behavior-driven development, and a lot of them think that it's test automation using Given/When/Then. To get people sucked in, we need to show them straight away that that's not what it is, and that what you can do is use the process of discovery, maybe with example maps or with other forms of structured conversation, to get to a place of shared understanding, so that everybody is pulling in the same direction, and then capture that understanding not just for the team today, but for the team in the future, and for product owners and customers that come along in the future. Because one of the thought experiments that I use at one point in the book is asking people which they would prefer. Conceptually, this is a thought experiment: would you prefer to have unreliable documentation or no documentation? It doesn't take people long to think about it and come up with a settled answer that, honestly, unreliable documentation is completely useless; it would be better to have no documentation. At least then we have to go to the single source of truth, which is the running system, to work out what it does. And so then I follow that up and actually ask the people, "Well, which do you have?" And the answer is always unreliable documentation.
And although discovery and formulation don't get you there yet, when you've read the third book, which we are working on at the moment, which is about automation, you get to something called living documentation, where the scenarios that you formulated in business language, which document how the system is supposed to behave on a requirement-by-requirement basis, actually guarantee themselves to be correct, because underneath them there is automation code that verifies that the system behaves as described. And honestly, when you get to a point like that, you're living in Nirvana Land. This is quite a cliché: everybody gets happy.
Gaspar Nagy[00:15:29] And I think that's one part of it. We were really thinking about how to structure the book, but we found that going through this scenario, taking an old, bad BDD scenario and fixing it, is a very easy way to get straight to the point and give the reader some kind of good high-level understanding. And actually, when we were having these discussions about whether we should start with that or not, we finally realized that both Seb and I are teaching BDD and running trainings, and this is what we have been doing for many years: we have to explain to people how BDD works and how to do proper formulation. And we realized, OK, what are the tools, what was the easiest way for us to get this concept through to the audience, and we saw that this was such a good, strong example, helping people get straight to the high-level concept, that we really started with it. And this is how we are able to introduce these six principles that are afterward detailed as we work through the rest of the book.
Joe Colantonio[00:16:24] Absolutely. You talk about the six principles that make up the acronym BRIEF, actually, and that's a big, big part of chapter two. Really quick, could you just talk about what makes up the acronym BRIEF?
Gaspar Nagy[00:16:34] Yes, we made this acronym so that people can remember the principles more easily, and this is also true for ourselves. B stands for business language: you should use the business language instead of the technical solution language. R stands for real data, which means it's good if these scenarios have concrete, illustrative data that really shows how the system is going to be used. I means intention revealing; maybe that's the most tricky one, but basically it says that whenever we are phrasing the scenario, it should focus on what our intentions were for the different things we are doing there, and not so much on the mechanical steps of how we achieve that. Earlier, people were calling this declarative versus imperative scenarios; that was the wording that was used, but when we were working through the book, we wanted to get rid of this too-technical wording, and that's why 'intention revealing' was basically chosen. E stands for essential: we want to keep only the essential details in the scenario, and push down to the automation layer all those incidental details that are not relevant for illustrating that particular rule. F means focused: the scenario itself focuses on a single rule, and we don't want to solve all the problems with a single overall scenario or something like that. And the final good thing is briefness itself: you have a concise, small scenario that fits on the screen and that you can easily share with the stakeholders. That also seems to be a good principle. So together, that makes the six.
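As a hedged sketch of the contrast these principles describe (both scenarios below are invented for illustration; the domain, names, and numbers are assumptions, not examples from the book):

```gherkin
# Not BRIEF: technical solution language, mechanical UI steps, no revealed intention
Scenario: Discount test
  Given I navigate to "/login" and enter "user1" and "pass1"
  And I click the "Submit" button
  And I add item "SKU-0042" to the cart
  When I click "Checkout"
  Then the "total" field shows the right value

# BRIEF: business language, real data, intention revealing, essential, focused, brief
Scenario: Returning customers get a 10% discount
  Given Tina has already placed 2 orders this year
  When she orders a book priced at $20
  Then she pays $18
```

The second scenario says nothing about screens or buttons; those incidental details would live in the automation layer, leaving the feature file readable by a business stakeholder.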
Joe Colantonio[00:18:03] Absolutely. That's good practice. I'd say that if you can't read it on a screen without scrolling down, it's probably not as concise as it could be, or as descriptive in a way that makes it really helpful for everyone. So that's a good point. Yes.
Gaspar Nagy[00:18:15] It's not brief.
Joe Colantonio[00:18:16] Brief, exactly. But I guess when people hear 'brief', especially when they work in an enterprise, it's like, "Oh no, this scenario is a requirement. It needs to be very thorough; the FDA needs to come in here, and it needs to be really spot on." So is there an issue with people getting confused about what a scenario even is at this point? Is it a requirement? And do you have to cover every conceivable point of what can happen with that particular scenario? Does that make sense?
Seb Rose[00:18:40] The question makes sense. So first off, there's a section in the book where Gaspar makes it really clear that this is not about testing, right? So we're not looking for every edge case to be covered here. What we're trying to do is make the scenarios illustrations of how a particular business requirement or business rule is supposed to behave. Behavior is the key here. And what we need is just enough scenarios so that there can be no misunderstanding about what that rule means, you know, what we expect the system to do, and therefore what work we expect our developers to implement in the course of making that rule part of the system. So we're in a Goldilocks space. If there are no examples, then you just have to read the business language of the rule and understand it, and my experience has shown me that pretty much any rule in any requirement can be misunderstood by somebody. And so you have to say, well, what should happen in this situation, which is maybe an edge case, and then create an example to demonstrate how the rule should behave. Maybe it's a rule based on dates or times; we want to think about time zones, right? You know how many bugs there have been in the world based on a misunderstanding of time zones and daylight saving. So you have an example to show it, and you get to the point where everybody in the room agrees that there can be no more misunderstandings, because the examples cover all of the possible questions they could have. And at that point, you then think about formulating them as scenarios. And every one of the examples that you came up with can become a test, can be formulated as a scenario. But we need to remember, and the teams need to remember, that feature files, and the scenarios that live in them, are there to help document a shared understanding.
So, really, that's business-facing. Maybe some of those edge cases that you went through are not really that interesting to your product owner or your project manager. The examples are still valid, but give them to your developers to code up as programmer tests in JUnit or RSpec or whatever, or get your testers to run them as exploratory tests the first time around. You just want enough in the documentation so that the ambiguity from a business perspective has been covered, because we definitely want our product owners to read these feature files and these scenarios. And product owners are always very, very busy, because they're very, very important people. So you don't want to make them read too much; you want to make them read just enough that they give you feedback.
Gaspar Nagy[00:21:18] Yeah, I get the question many times: "OK, I understand that the scenario you showed me in that simple business domain is so simple, but our domain is much more complex than that." And interestingly, when we start digging into that, and we start working with their own user stories and breaking them down to the rules, in the end, even in a very complex business, the examples don't have to be really complex. And that's the key: to accept that, and to get practice in the focus and the decomposition of the problem. Actually, this is what we are doing there in some ways. And the good thing is that this is useful anyway. So it's not just because it's BDD, and not just because it fits the BRIEF principles: the decomposition of the problem helps the team to write better software in the end, because if you need to describe something over 12 lines, there is no developer who can remember that, at least definitely not me, and I've never really seen others who can either. So that will not work anyway; that code will be buggy in some ways.
Joe Colantonio[00:22:12] Absolutely. And this book follows a fictional team working through this whole process. The questions we've been covering here fit, I think, mostly in your third chapter, on our first feature file. Is there anything else people trip up on with features? Or, I guess, a thought that just came to me: in a COVID world, how much of a competitive advantage would a team have if they actually had nice, brief, clean, readable features, where the team is no longer in the same location but spread all over the place? Because they wrote the features in a way that created a shared understanding, they were able to create better software quicker and faster, because they put the effort in, and when this terrible COVID came around, you know, the team was set up nicely to still be able to successfully deliver software to their customers. Something like that…
Gaspar Nagy[00:22:56] That's absolutely right. I think teams, even if they are co-located and even if there is no COVID, are overestimating their capacity for remembering things. And it's not only about remembering, because maybe we have all heard the same thing, but because we have different backgrounds and different contexts, we have interpreted it differently. And in the end, that will be causing those clashes over whether something is a bug or a feature or whatever. So having those little decisions documented is useful to avoid those kinds of issues. And as you said, it really helps with working distributed, because then you have the core agreements, those pinpoints in the net of the implementation, and if at least those pinpoints are all green at the end, then there is much less chance that we get things wrong.
Seb Rose[00:23:38] You know, there's more as well, in the sense that one of the main pushbacks against any sort of front-loaded process is that it feels like it's more expensive to have meetings and understand something rather than just go and do it. And, well, it's only software, right? We can change it later. And it turns out, it's been said for many decades, that this is not the cheap way of doing it; this is an expensive way of doing it. Shift left has tried to capitalize on that, or at least promote the idea that learning early through deliberate discovery is going to reduce the number of defects that get into the software, and therefore reduce the total cost of ownership of the software. But then you move on to what you see a lot in this day and age, which is teams that come and go quite quickly, because there's a lot of churn in the industry, and products that live for quite a long time and need to be enhanced and maintained as the business world and the regulatory framework change around them. Now you get to the point where you've got bits of software, and people who've never worked on them before come along and go, "Well, wtf, you know, what's this doing? How did it ever work? What were its requirements?" And if there are any documents, and sometimes there aren't any documents at all, well, they're unreliable. So, you know, in your question, Joe, you asked is it cheaper, faster, and better? And the answer is yes, it is. Convincing people of that is still very hard. And people ask why there is no definitive case study with, you know, dollars and cents added up. And the truth is, because it's really hard to have a control case, you know, because the lifetime of a product isn't weeks. You can't do it in a lab and see this saving play out over a five-day controlled experiment with students.
You need to have product owners coming and going, board members changing direction, teams, you know, being outsourced to one location, then brought back in and products changing.
Gaspar Nagy[00:25:46] And you need to have them.
Seb Rose[00:25:48] Exactly.
Joe Colantonio[00:25:50] Absolutely. I actually lived through this. I tried to fight the good fight with BDD, to get it done correctly. And then they got rid of the whole division. They laid everyone off. They outsourced it. And the people got back to me and said, "Hey, Joe, can you help us? We want a new framework to write on." And I'm like, they totally missed the point. They're going to start over, missing the point. And like you said, if they had put the extra effort in up front, when they did this sort of transition, you know, it would have been smoother and they would have gotten the benefit over time for sure. So I guess the next part of this is writing stories. So, chapter four: what do people get wrong when writing stories that you think maybe trips them up as well?
Seb Rose[00:26:26] Yeah. OK, well, I think the list of problems with user stories is long, and then…
Gaspar Nagy[00:26:31] There is no problem with the user stories.
Seb Rose[00:26:33] Well, I mean, where do you start? User stories are also widely misunderstood. User stories come with an acronym, INVEST. The INVEST acronym is one that I've taken to task in a number of presentations and conference talks, because even the acronym INVEST is quite difficult for people to get their heads around. I'd suggest that the main problem with user stories is that people don't appreciate the fact that they were created as a way to defer detailed requirements analysis. If you go back to the XP world, people used to say a user story was a placeholder for a conversation, and that's what discovery is. So BDD takes user stories in, and the conversation happens during discovery. The bit that I think most people may not be very keen on doing, and avoid, is that as part of that conversation, the user story should change. Now you've got a detailed understanding, and you want to split it up into something small: a migration from a large user story, which Jeff Patton, in his memorable asteroids metaphor, called a large asteroid, and over time, as you discuss it and understand the complexity, you break it down into smaller asteroids, until eventually you end up with really small, fast-moving asteroids. And those are the user stories that you want to bring into your iteration or sprint backlog and deliver really quickly. So the biggest problem with user stories is that people don't use them in the way they're intended to be used. We talk about a user story, and people reckon, well, if we use the standard Connextra structure, which talks about who's going to get the value, what we're going to do, and what the value is going to be, "As a… I want… so that…", well, that's a user story, and it never changes.
But actually, the point is we should have the conversation and create many smaller, more detailed user stories that each allow us to deliver incrementally and iteratively another thin slice of functionality so that we can eventually, in a low-risk way, deliver what our customers want.
Joe Colantonio[00:28:32] I guess one thing that you pointed out in this chapter, and I have it in my notes, is that readability trumps automation. As you said, it's not about automation; automation is a benefit of this. So where is the happy medium for people who know they ultimately want to use it for automation, but want to get the readability and the communication as well?
Gaspar Nagy[00:28:54] Yeah, it's a very hard question, but I try to watch myself when I'm implementing something through BDD where I'm the one who is imagining the thing, I'm the one who is formulating the scenarios, and I'm the one who is actually automating it. And I realized that if I try to totally forget about automation for a moment (which is really hard, because, of course, I'm a developer and I know this is going to be automated), try to describe what I really want, capture that as quickly as possible, and then see whether it really conflicts with the automation, then that's the thing that works for me. So for me, there is no compromise: you really have to forget about automation, at least for a moment. And of course, once you have a clear picture of what you want, you can verify whether you are able to make it. Being able to make it means being able to build the software and to build the tests that verify it, and that's a very good feedback channel. But for writing the scenarios, what really works best for me is to forget about automation. I tested this with a couple of teams who were brave enough to try it, and it worked for them. You have to see the two sides of the coin separately, otherwise it just doesn't work.
Seb Rose[00:30:08] Yeah. So, I mean, I'm just going to belabor the point here. It's not a trade-off. Do not let the automation intrude into the scenarios. It's not necessary. Cucumber and SpecFlow both have various under-the-hood automation techniques that allow you to do all the things you thought you wanted to put into the scenario. This is just basic software engineering: abstracting away the details. The scenarios are formulated to talk about the intent, the behavior. The details? You don't need them there. You need them somewhere, right, but you don't need them in the formulation.
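[Editor's note: to make Seb's point concrete, here is a hypothetical sketch, not taken from the book, of the same behavior written twice in Gherkin: first with automation details leaking into the scenario, then formulated declaratively. The URLs, names, and amounts are invented for illustration.]

```gherkin
# Automation details leaking into the scenario (the anti-pattern):
Scenario: Transfer funds
  Given I open the browser at "https://bank.example/login"
  When I type "alice" into the "username" field
  And I type "s3cret" into the "password" field
  And I click the "Log in" button
  And I click "Transfer", type "100" and press "Submit"
  Then the element "#balance" should contain "900"

# The same behavior formulated declaratively: intent, not mechanics.
Scenario: Transferring funds reduces the source balance
  Given Alice has $1,000 in her current account
  When Alice transfers $100 to her savings account
  Then her current account balance should be $900
```

In the second version, the clicks, URLs, and selectors move into step definitions (under the hood, as Seb puts it), where they can be maintained in one place without cluttering the specification.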
Joe Colantonio[00:30:48] Absolutely. I'd love to dive into that, but we're going almost over time, so I want to get to the rest of it. That's a great point. Hopefully, when you release your book on automation, you'll rejoin us and we'll dive into it then.
Seb Rose[00:30:57] Yes.
Joe Colantonio[00:30:58] I guess Chapter 5 seems trivial, but when I worked for a large enterprise (once again, eight to ten separate teams), it became a really tricky point, and that is organization and documentation. So why is it so important? I know from my experience it was hard because you'd have different sprint teams that weren't able to find where things lived, so they would create the same scenario, the same type of thing, over and over again, because they just didn't have time to find it. So I'm curious to know: is that something you've seen as well, and why did you write this particular chapter?
Seb Rose[00:31:27] We wrote this chapter because of a perennial problem. The basic problem, again, I'm afraid, comes back to user stories: the de facto behavior of teams is to have a one-to-one mapping between user story and feature file, which is an anti-pattern. We go into detail in the book about why that is bad. But if that's not the right way of doing it, how should we split things up? And, you know, I'm not a very organized person. If you ever see me on a webinar, you'll see a very black background behind me, and it looks like I'm in a wonderful laboratory. But if you taunt me, I'll go and pull the cord, the black backdrop will go up into the roof, and you will see the chaos and mess that is my library. And that's how many people's documentation is. They're used to having wikis with lots of information that's out of date, links that are broken, and things that should be together sitting in separate places where you can't find them. It's a nightmare. So, like so many things in the world, if you do a little bit of tidying up every time you make a change, then things stay tidy. The same is true of organizing scenarios and feature files: come up with a structure, make it logical based on a simple metaphor (the one we suggest is chapters with sections and subsections), and put things in the correct place when you write them. Then, as you add to the documentation and add more functionality, refactor that documentation to ensure you keep it in a structure that makes sense to the people who are specifying it and implementing it.
Gaspar Nagy[00:33:07] It's very funny that people judge tools, for example, based on how good their documentation is. I have heard many times that this tool is great because it has great documentation, or this tool is not so good because it's impossible to find anything. So generally we know that structured, easily searchable, meaningful documentation is a value. But when it comes to our own specification, our own documentation of the software that we are building and are going to keep building for several more years, we somehow don't see that the same kind of structure and the same values apply there too. And again, it's not complex to achieve: there are a few little tricks you need to follow, and then with basically the same investment you get much more value out of it.
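[Editor's note: as a hypothetical illustration of the "chapters with sections and subsections" metaphor Seb mentions, a feature-file tree might be organized by business capability rather than one file per user story. The folder and file names below are invented.]

```text
features/
├── account_management/            # chapter
│   ├── registration.feature       # section
│   ├── authentication.feature     # section
│   └── password_reset/            # section with subsections
│       ├── reset_by_email.feature
│       └── reset_by_sms.feature
└── payments/                      # chapter
    ├── transfers.feature
    └── refunds.feature
```

Note there is no one-to-one mapping to user stories: when a new story changes existing behavior, the scenarios in the existing files are refactored rather than a new file being added.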
Joe Colantonio[00:33:51] Great. The last chapter touches on something we discussed a little earlier: coping with legacy. As I mentioned, a team I worked with had been doing BDD for about six years; they got rid of the team and moved the work overseas. I'm curious how a transition like that goes: does the new team have to deal with the legacy of BDD scenarios that have been around for five or six years when the original team is no longer there? So, some quick takeaways from this chapter to help people who find themselves in that situation, like, “Oh, my gosh, now what? How do I deal with legacy?”
Gaspar Nagy[00:34:17] I think this was the trickiest chapter in the book, because there are so many problems with legacy and so many interesting challenges; we could have made a full book about that. Fortunately, Michael Feathers' book (Working Effectively with Legacy Code) is still very valid, by the way. So we really tried to focus on what is related to BDD formulation. And I think the most important thing here is that people who are transitioning to BDD and already have some manual test scripts think there is one easy way to transform to BDD: take those manual test scripts, simply transcribe them into BDD scenarios, and automate them. Unfortunately, that does not work. The cost factors and the problems of a manual test script are totally different from the cost factors and the problems of an automated scenario. Manual test scripts focus on the human who needs to execute them, and of course a human can check many things at the same time, because that's what we are good at. For automation, on the other hand, the biggest factor is the cost of creating and maintaining the automation code. Because of that, the focused and illustrative scenarios we were mentioning work much better. And this means that, unfortunately, you cannot simply translate or transcribe one manual test case into one BDD scenario. You need to go through it, think over which business rules it was validating, and maybe make many smaller scenarios out of it, if that is really what you want. But in many cases we have seen that actually looking at one topic or one part of the code and trying to come up with the key business rules, the most important ones you want to cover, maybe that's a better way.
Seb Rose[00:36:02] I just want to emphasize one thing. Gaspar said you can't take a manual test script and automate it using Gherkin's Given/When/Then. Actually, just to be precise: you can do that, and people do do that, and they get into awful trouble doing it, because you end up with something that is just as incomprehensible, except now you have to maintain automation code that is incomprehensible as well. So you can do that. But please don't. Please don't.
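[Editor's note: to illustrate the pitfall Seb and Gaspar describe, here is an invented example: a manual test script transcribed verbatim into Gherkin, versus the focused scenarios you might get by first extracting the business rules the script was actually checking. Products, names, and prices are hypothetical.]

```gherkin
# A manual test script transcribed step by step (please don't):
Scenario: Regression test 17 - discounts
  Given I log in as admin
  And I create a product "Widget" priced at $20
  And I create a customer "Bob" with loyalty level "Gold"
  When I add 3 "Widget" to Bob's basket
  And I apply voucher "SAVE10"
  Then the total is $44
  And the audit log shows the discount
  And a confirmation email is queued

# The business rules it was really validating, as focused scenarios:
Scenario: Gold customers get a 10% loyalty discount
  Given Bob is a Gold customer
  When Bob buys products worth $60
  Then he should be charged $54

Scenario: A voucher is applied after the loyalty discount
  Given Bob is a Gold customer with voucher "SAVE10" worth $10
  When Bob buys products worth $60
  Then he should be charged $44
```

One manual script usually hides several rules at once; each rule gets its own small, illustrative scenario, which is what keeps the automation code maintainable.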
Joe Colantonio[00:36:33] I'm glad you ended with that, because that is one of the biggest points; it's a nightmare, and I've lived through it myself. So, great point. But as I said, it takes a team that actually works through the whole process we've talked about so far. So I guess, to end this: any quick takeaways or best practices, actual actionable advice, you can give to someone listening so they can implement it right away and make their BDD better? I'll start with you, Gaspar, and then we'll end with Seb.
Gaspar Nagy[00:37:00] I think one thing you can take from the book is to listen to yourself and to practice. This is also why we have this imagined team in the book: you see that these are real discussions, you have options, and you figure out which one works better for you. And then, yeah, inspect and adapt. That's a good way to go.
Seb Rose[00:37:21] I'm going to go for a twin takeaway, sorry, Joe. The first is: please read Discovery first. The core practices of BDD are discovery, formulation, and automation, and you really will get much more value out of them by learning them and applying them in that order. And then the actual takeaway I want to give you is that behavior-driven development is a shared endeavor. It's about reaching a shared understanding, which means it's about collaboration. So it's not something that a QA or test department can do on its own. You need to have the development teams involved, and you need your business or product owner bought in as well. If you have people who just go, “That's not my business, I'm not interested, I'm not going to get involved,” you're going to have a painful time of it.
Joe Colantonio[00:38:07] Absolutely. Thank you so much, Seb and Gaspar, for joining us today. I highly recommend you check out the book Formulation: Document Examples with Given/When/Then; there will be a link in the show notes. Just go to testguild.com/a349, and we'll have a link to Discovery there as well. So thank you guys so much, and you're welcome back any time. Hopefully, once that automation book is out, you'll be among the first on the show to talk about it as well. Thank you so much.
Seb Rose[00:38:32] Thanks Joe.
Gaspar Nagy[00:38:33] Thank you very much.
Joe Colantonio[00:38:34] Cheers.
Outro:[00:38:34] Thanks for listening to the Test Guild Automation podcast, head on over to TestGuild.com for full show notes, amazing blog articles, and online testing conferences. Don't forget to subscribe to the Guild to continue your testing journey.
- Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.