Testability Helps with Test Automation
In my personal experience, this has been a difficult concept to explain to developers who aren't used to the practice, so it was cool to hear from a real-world developer who consistently aims to create testable code. In my opinion, this is key to being successful with Agile software testing.
I feel that before a single line of code is written for a new feature, developers should be planning their solution with automation in mind. Developers should ask themselves, “How can I expose a hook or create a unique ID for this element in order to make it more testable?”
A Developer’s View on Testable Code
To get a developer’s point of view, I asked Derick what he meant by making code testable. He explained that the idea is if you want to write test automation for your code, you need to be able to separate it into individual pieces.
You need to be able to take this one function, this one class, this one object, this one whatever — and completely separate it from everything else in your system so that you can verify the behavior of that one element.
Once you have the behavior of the one thing verified, you can then start putting it together with other things that have already been tested. You start integrating those things together and verifying the interaction between them.
Build Small Things
The secret is to build small things that can be combined into larger things. The best way to build small things is to have a good test suite around the small things, so that when you combine them into bigger things you have to write fewer tests on the bigger thing; it's also easier to test the bigger things, because you already have guarantees about how the smaller things work.
You don't want to write code just for testing. You want to write code that is testable but doesn't do any more than is needed. What it often comes down to is breaking these things down into smaller pieces, and testing the individual pieces.
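To make the idea concrete, here is a minimal JavaScript sketch (the function names and pricing logic are invented for illustration, not taken from the interview): two small functions that are easy to verify in isolation, composed into a bigger one whose tests only need to cover the wiring.

```javascript
// Small, isolated piece: easy to verify on its own.
function parsePrice(text) {
  const value = Number.parseFloat(text.replace(/[^0-9.]/g, ''));
  if (Number.isNaN(value)) throw new Error(`Not a price: "${text}"`);
  return value;
}

// Another small piece, also verifiable on its own.
function applyDiscount(price, percent) {
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// The bigger thing composes pieces that already have tests around
// them, so its own tests only need to cover the composition.
function discountedPrice(text, percent) {
  return applyDiscount(parsePrice(text), percent);
}
```

Because `parsePrice` and `applyDiscount` come with their own guarantees, a test for `discountedPrice` does not need to re-cover edge cases like malformed input or rounding.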
Communication: The Soft Skill Critical to Testability
I think it’s kind of funny that many development issues (including creating testable code) tend to have a non-technical component, and addressing that component would fix many of the problems we see in our development process.
For example, Derick points out that when it comes down to code design, the designers don't really care about it. The testers do, however, and when they’re test planning they often have to write tests that interact with the way the code works and is designed.
Derick believes that whenever possible, it is critical that everybody get into the room (or on a call, for remote folks) to make sure the entire team is involved in the discussions, because that leads to better understanding, a better set of questions, and just a better way forward. It also gets everyone thinking about testability right from the start.
Development – It’s a Human Thing
I also agree with Derick’s general philosophy on software development and testable code: there are no technical issues anymore; there are only human issues at this point in time.
He acknowledges that we’re going to run into bugs, and some framework is going to be wonky and whatnot, but even those often come down to human issues, because those crazy frameworks with weird APIs were developed by humans.
There are generally very few technical problems. If you can focus on the human side of things, and really build up collaboration and develop meaningful relationships within your team, you're going to have a much better time with your technical experience than you would otherwise, and you will end up with more testable code.
|Joe:||Hey Derick. Welcome to “TestTalks.” |
|Derick:||Hey. Thanks for having me here. It's great to be on the show. |
|Joe:||Awesome. Now you're pretty much an independent, on your own, developer now, and you had a thing called SignalLeaf at one point, so you've really done hardcore development on your own, almost. You're not necessarily part of a group. I know a lot of companies, they really try to force TDD on their developers, and I'm just wondering, as an independent developer, as you're doing all of this hardcore development, do you still use TDD on your own, or what approaches do you use to actually test the code that you're creating, both for your courses and also for the actual applications that you're creating? |
|Derick:||For unit testing, I still use Jasmine. I've been a big fan of it for a good number of years now. It's not the most up to date framework out there. I know Mocha and Tape and a few others do push forward faster, and more frequently, but I'm not so interested in staying on the cutting edge, or even the leading edge, of the testing world. I'm more interested in long term stability and support, which I find in Jasmine. I've stuck with it because of its simplicity, as well. My experience with Mocha, it was a few years ago, but it was more like, “Okay, here's one piece of your test suite that you installed, and it's called ‘Mocha,' oh, and you want it to actually do some assertions? Well, now you need to go get this piece of your test suite. Oh, and you wanted to use a specific format for your test suite? Well, now you need to go get that piece of your test suite.” |
|It just got frustrating, having to piece together the different things just to run a test, whereas Jasmine is like, “Oh, here's Jasmine. It can do everything you need.” It doesn't do all of it super well, and there are additional things that you can add onto it to make it work better, but from a very basic standpoint, get it installed and get it running, Jasmine is still my go-to tool for its simplicity and for getting things up and running. |
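The all-in-one point is easiest to see in a spec: Jasmine ships `describe`, `it`, and `expect` together in one package. The sketch below illustrates that shape; the `slugify` function is a hypothetical example, and the three tiny stand-ins at the top exist only so the file runs under plain Node. With Jasmine installed (`npm i -D jasmine`), you would delete them and run the same spec via `npx jasmine`.

```javascript
// Minimal stand-ins so this file runs under plain Node; Jasmine
// provides all three of these itself.
const passed = [];
function describe(name, fn) { fn(); }
function it(name, fn) { fn(); passed.push(name); }
function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
    },
  };
}

// A hypothetical function under test.
function slugify(title) {
  return title.toLowerCase().trim().replace(/\s+/g, '-');
}

// The spec itself: runner, structure, and assertions all come from
// one tool, with nothing extra to piece together.
describe('slugify', () => {
  it('lowercases and hyphenates', () => {
    expect(slugify('Hello World')).toBe('hello-world');
  });
  it('trims surrounding whitespace', () => {
    expect(slugify('  Test Talks  ')).toBe('test-talks');
  });
});
```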
|I've been planning that for this year, actually: to produce a series around testing Node.js specifically, starting from the ground up, building a module in test-driven fashion, and eventually publishing it to NPM, so that other people can install it, use it, and run the tests themselves. |
|Derick:||The idea is if you want to write test automation for your code, you need to be able to separate individual pieces. You need to be able to take this one function, this one class, this one object, this one whatever, and completely separate it from anything else in your system, so that you can verify the behavior of this one thing. Once you have behavior verified of the one thing, then you can start putting it together with other things that have been tested, as well. You start integrating those things together, and verify the interaction between those 2 things. |
|Then, you build upon it from there. This ties directly into the YouTube video about building large scale applications, and the secret, as I said in that video, is to build small things that can be composed into larger things. The best way to build small things is to have a good test suite around the small things, so that when you compose them into the bigger things, you have to write less tests on the bigger thing first off, but it's also easier to test the bigger thing, because you have guarantees about how the smaller things work already. |
|Derick:||That's a struggle that I never quite solved in my career as a developer with other companies. In this one project, I had the pleasure and responsibility of being tech lead for a given project, and I was able to use my influence to get the full team, including the testers and designers and everybody, into the same room. In that particular project, I made it a policy to always have everyone on the team involved in design decisions. When it came down to code design, well, the designers don't really care about code design. The testers do, though, because they often have to write tests that interact with the way the code works, and the way the code is designed, but whenever possible, I make sure everybody in the room, everybody on the team was involved in the discussions, because it led to a better understanding, a better set of questions, and just a better way forward. |
|If at all possible, I would recommend doing that. I would recommend having the testers in the same room as the developers, and having the testers and developers involved in the same conversations, because ultimately, the testers have to understand what they are testing, and the developers have to understand what is being tested, as well as what is being developed. It's necessary to have everybody talking and everybody on the same page. |
|Joe:||Awesome. You know, it's kind of funny. It's always a non-technical thing that seems to be the fix for a lot of things. It's the most difficult thing, collaboration and communication, especially if you're on teams that are spread out across the world. I definitely agree with you. Sometimes people ignore what seems easy, because it's the hardest thing. |
|Derick:||Exactly. My general philosophy on software development is that there are no technical issues anymore. There are only human issues at this point in time. I mean, yeah, okay, you're going to run into bugs and this framework is going to be wonky and whatnot, but even those often come down to human issues, because those crazy frameworks that have weird APIs, well, those were developed by humans. There are generally very few technical problems, and if you can focus on the human side of things, and really get collaboration and deep and meaningful relationships built up within your team, then you're going to have a much better time with your technical experience than otherwise. |
|Joe:||Awesome. Great advice. I'd like to switch gears really quick here, and go over to now RabbitMQ. I notice you have a lot of material on RabbitMQ. I guess at a high level, what is RabbitMQ? |
|Derick:||RabbitMQ is a message broker. What that really means is it's a centralized messaging service. You publish a message to RabbitMQ, it stores the message in a queue somewhere, and some other code comes along and picks up the message out of the queue. Think of it like email, or even physical mail, where you can send an email to somebody's inbox, you send it through a bunch of services and networks and what not, and it sits there in their inbox until that person decides to read it. Well, RabbitMQ works in the same way. Instead of an email, you format a specific message using something that the other consumer at the other end will be able to understand, like a JSON document, or an XML document, or a CSV, or whatever it is. |
|You send that as a message through RabbitMQ, which gets put into a queue, which is essentially the inbox, and then some additional code on the other side picks up that message when it can, processes the data, does whatever it needs, and then maybe it just deletes the message and says it's done, or maybe it sends a response back, or kicks off some additional processes, or whatever it is, but the intention and the goal of a messaging service like that is to decouple long running processes, and intensive processes, and processes that need to run on different servers somewhere else. It's just a way to create an architecture that runs distributed across your network, or sometimes even on the same physical machine, but just in a background process. |
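As a rough sketch of the publish/consume flow Derick describes, here is what it might look like with `amqplib`, a common Node client for RabbitMQ. The queue name, connection URL, and JSON message shape are assumptions for illustration, not code from the interview; the `require` calls are deliberately inside the functions so the pure `encode` helper works without a broker running.

```javascript
const QUEUE = 'jobs'; // hypothetical queue name: the "inbox"

// Messages are just bytes to RabbitMQ; producer and consumer agree
// on a format out of band. JSON here.
function encode(message) {
  return Buffer.from(JSON.stringify(message));
}

// Producer side: drop a message into the queue and disconnect.
async function publish(message) {
  const amqp = require('amqplib'); // assumes a broker at localhost
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  ch.sendToQueue(QUEUE, encode(message)); // sits in the queue until read
  await ch.close();
  await conn.close();
}

// Consumer side: pick up messages "when it can" and ack when done.
async function consume(handler) {
  const amqp = require('amqplib');
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  await ch.consume(QUEUE, (msg) => {
    handler(JSON.parse(msg.content.toString()));
    ch.ack(msg); // "deletes the message and says it's done"
  });
}
```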
|Joe:||Because it's not a front end type technology, how would you recommend someone test this? I think a lot of people struggle with this from a testing perspective; they always think end-to-end testing. Especially with asynchronous messages going on, how do you approach testing the RabbitMQ back ends that you're putting together? |
|Derick:||There are a couple of different aspects to it, really. It does get to be quite challenging at times, on all fronts. You have this large system that is running across multiple physical machines, or logical machines, or however you set up your application, and you need to make sure that all of it works together, but you can't necessarily test the full thing end to end in an automated way. Maybe you can if you put in some extensively long delays, or if you have some process to notify the test suite that the back end code is done so it can go check things, but that can get difficult, and can require a lot more architecture and code just for the test suite, which you want to avoid. |
|You don't want to write code just for testing. You want to write code that is testable, but doesn't do any more than it really needs. What it often comes down to is breaking these things down into smaller pieces, and testing the individual pieces, like I said before, but what that means with RabbitMQ and other distributed systems is separating the actual communication mechanism from the data and messages that are being communicated. Sometimes you're writing a framework that sits on top of RabbitMQ directly; I have a small Node.js framework called “rabbus,” the RabbitMQ service bus. When I wrote code and tests for rabbus, I had to interact with RabbitMQ directly, so I had to put in place both a publisher and a subscriber of these messages in order to make sure the messages were going through appropriately. |
|There is a necessary amount of interaction with RabbitMQ there, actually having RabbitMQ set up and running and able to deliver my messages, but I'm isolating that down into this library that I control, the library that I build, so that I don't have to do that in my real application development. In my real applications, I don't test the interaction with RabbitMQ. I test the interaction with an API, where the API implementation will know about RabbitMQ, but in the actual application, instead of checking to make sure that I'm really sending a message across RabbitMQ, I check to make sure that the API for sending the message received the message that it expected. It's the part where we start ripping things apart at the seams. We start building API definitions and programmatic interfaces, so that we can isolate the real network interaction from the code that needs to use it. |
|Joe:||I think I'm following. An API interface, so you can speak to the API rather than speak directly to RabbitMQ? |
|Derick:||Exactly. For example, I have a scheduler system that I built for my client, and it needs to tell an agent (a service worker somewhere else, a process on another box) to go do a job. When I test the code that publishes the request for the job to be done, I don't test to make sure that the actual agent code is doing the work now. I test the “send job” request API. I have an object called “Job Request Sender,” or whatever it is, and I make sure that my scheduler object calls the JobRequestSender.send method. I do that with a mock object. |
|I make sure that I can inject a mock object into the scheduler at test time, so that the scheduler can call the mock job request sender, and then I verify that job request sender had its send method called in my assertions. The real challenge there, when you start looking at testing the individual pieces, is making sure that everything actually does work together as a whole. There's a great gif that I've seen running around where it says something to the effect of, “All the tests passed, ship it,” and what you see in this gif is a picture of a foam alphabet board, like a child's toy, like a 2 or 3 year old child's toy. In this picture of these foam letters, you can see that the letter “D” has been crammed into where the “G” is supposed to go, and the “Z” is crammed into where the “M” is supposed to go, and the “I” is crammed into where the “H” is supposed to go. |
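A sketch of that mock-injection test might look like the following. The `Scheduler` and `JobRequestSender` names mirror the hypothetical objects from the conversation, and the spy is hand-rolled here so the example runs under plain Node; in a Jasmine suite, `jasmine.createSpyObj` and `toHaveBeenCalledWith` would play the same role.

```javascript
// Production code: the scheduler depends on an injected sender and
// never touches RabbitMQ directly. That injection point is the seam.
class Scheduler {
  constructor(jobRequestSender) {
    this.sender = jobRequestSender;
  }
  scheduleJob(jobName) {
    this.sender.send({ job: jobName, requestedAt: Date.now() });
  }
}

// Test-time stand-in: records calls instead of hitting the network.
function makeSpySender() {
  const calls = [];
  return { send: (msg) => calls.push(msg), calls };
}

// The assertion is about the interaction, not about RabbitMQ:
// did the scheduler call send() with the right job?
const spy = makeSpySender();
new Scheduler(spy).scheduleJob('resize-images');
```

The real `JobRequestSender` implementation would know about RabbitMQ, but that knowledge stays out of the scheduler's tests entirely.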
|There's all these letters in the wrong place, but they all technically fit, because they were bent, and warped, and mashed in, so that they look like they fit. You have to make sure that your unit tests are not looking like that when you get to the system as a whole, and the way we do that with larger systems and distributed systems, when we start tearing apart the seams, and just testing the API interactions, is to have better documentation. When you know that you're going to be sending a message across RabbitMQ, you don't test RabbitMQ's ability to send a message. You test the ability for the message producer to send the right message and the message consumer to handle the message appropriately. |
|In order to do that, you need to have a well documented message format, something that can be verified on both ends, to make sure you're sending the same message that the consumer expects to receive, so that the consumer will be able to handle it appropriately. |
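One way to make that "well documented message format" executable, rather than prose that drifts out of date, is a small validator shared by producer and consumer, so both ends check the same contract. The field names below are invented for illustration; in practice you might reach for a schema library such as JSON Schema instead.

```javascript
// The documented message format, as data: field name -> expected type.
const JOB_REQUEST_CONTRACT = {
  job: 'string',
  requestedAt: 'number',
};

// Run by the producer before sending and by the consumer on receipt,
// so both ends verify against the identical definition.
function validateJobRequest(msg) {
  const errors = [];
  for (const [field, type] of Object.entries(JOB_REQUEST_CONTRACT)) {
    if (typeof msg[field] !== type) {
      errors.push(`${field}: expected ${type}, got ${typeof msg[field]}`);
    }
  }
  return errors; // empty array means the message matches the contract
}
```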
|Joe:||Awesome. Maybe I'm wrong, but what I love about this concept is that, once again, you're making your code testable by providing an API that you can interact with, without going into the guts of RabbitMQ. |
|Joe:||What I love about this approach, also, is that a lot of times people are so focused on end to end testing that when you try to build and deploy into a [inaudible 00:23:25] integration environment, it takes 8 to 10 hours, because you have end to end tests. You're testing at the browser level rather than the API level, because no one ever thought about testability by creating an API. Going directly to the API would be quicker, faster, and better than using just an end to end test that probably wouldn't have [some 00:23:42] of this. |
|Derick:||Right. Yeah, absolutely, but it's never a one size fits all situation. I write a lot of unit tests, a lot of isolation tests where I rip things apart, and inject my mocks, and verify the behaviors and interactions on those mocks, but there is always a point at which I can no longer do that, and I have to put in place some kind of interaction and end to end test, and when it comes to writing a route handler inside of Express.js, inside of my web server in Node, at some point I need to be able to take, I think it's called “SuperAgent,” or “SuperTest,” one of those 2. I need to take that testing library to stand up a real instance of my express server, and handle the actual interaction and produce the HTML. |
|There will come a point at which isolation testing and unit testing are no longer sufficient, and you need to go to integration testing and end to end testing, but you can't rely on end to end testing or integration testing as your one and only approach, either. You need to have a good balance between both, and finding that balance is a constant challenge. It'll look different on every project that you work on. |
|Joe:||Absolutely. I completely agree with you. It's all about risk to me. |
|Derick:||Yeah, definitely. |
|Joe:||I spoke with someone, and I've brought this up a few times, where they claimed to have 100% test coverage and the application still failed, because in the end, the customer didn't even want what they created. So it's a good point: focus on risk. It's different per company, and it's not one size fits all, but it's all about finding the right combination, the right portfolio of different percentages of tests that we should run for our particular situation. |
|Joe:||Awesome. Now, for RabbitMQ, once again, I believe you handled this in “Watch Me Code,” but are there any other resources that you have for people that may want to learn more about RabbitMQ and do you actually cover this API type approach that you just explained? |
|Derick:||I do have a good series of RabbitMQ for Node.js developers, which you can find at RabbitMQForDevs.com. Either the number 4 or the word “for.” Either one works. That is really an introduction to RabbitMQ, and how to work with it. It covers installation and basic management and configuration, sending messages, and then getting into the Node.js side of things, including a lot of the common patterns of usage. It doesn't cover testing RabbitMQ or testing code that uses RabbitMQ. That's a more advanced topic that didn't make it into that series. It would have added another 3 or 4 hours of videos and interviews on top of what's already 3 or 4 hours of videos and interviews. |
|Derick:||Yeah. I've been using RayGun.io to monitor application errors in production for many years now. I really like what RayGun provides, the service that they offer. There's a ton of great services out there to capture errors and log those errors for you, but I've been with RayGun forever. They've been a phenomenal company. They continue to provide phenomenal support, and one of the reasons I like them particularly is their deep integration with a lot of different languages and environments. I can put a RayGun client inside of a browser and capture browser errors. Anytime a browser throws an error, it'll email me and tell me, “Hey, there's an error inside this user's browser. It's this browser, it's this page. It's this stack [inaudible 00:27:57]. It happened at this line of code,” and it allows me to be very proactive about solving problems in production, but it also integrates very well with Node.js and ExpressJS. |
|It's really easy to set up one global error handler in Express and have it log all errors to RayGun.io, which I do quite frequently as well, and again, it lets me be more proactive about finding bugs on the server side of things, and getting things fixed before customers even realize there are problems. |
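The "one global error handler" pattern is easy to sketch: Express error middleware is any function with the `(err, req, res, next)` signature registered last via `app.use()`, and the RayGun Node client exposes a ready-made handler for that slot. The version below is a hand-rolled stand-in with the reporter injected, so it can be exercised without Express or a RayGun account; `makeErrorHandler` and the fake `req`/`res` objects are invented for illustration.

```javascript
// Factory for an Express-style error-handling middleware. Injecting
// the reporter keeps the handler testable with a stand-in, mirroring
// the mock-injection approach discussed earlier in the interview.
function makeErrorHandler(reporter) {
  return function errorHandler(err, req, res, next) {
    reporter.send(err);              // e.g. forward the error to RayGun
    res.statusCode = 500;
    res.end('Internal Server Error'); // generic response to the client
  };
}

// Exercised here with hand-rolled stand-ins instead of a live server:
const reported = [];
const handler = makeErrorHandler({ send: (e) => reported.push(e.message) });
const fakeRes = { statusCode: 0, end(body) { this.body = body; } };
handler(new Error('boom'), {}, fakeRes, () => {});
```

In a real app, `app.use(makeErrorHandler(raygunClient))` after all routes would play the role of the global handler Derick describes.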
|Derick:||One thing to do, it's just going to be practice. Just hands down, that is the single best thing you can do is practice, and the best way to practice is to create throwaway projects that are designed to do nothing but let you practice. I do this with every aspect of development, no matter what tools or technology I'm using. If I need to get better at something, I throw all of my actual project code to the side, I stand up a dummy project that is nothing but practice for that tool or technology, and I just pound away at it for as long as I can, or until I feel comfortable enough to go back to my real production code and put it in place. |
|Joe:||Great advice. Derick, the best way to find or contact you? |
|Derick:||Best way is at DerickBailey.com. You can get links to everything from there. You can find my Twitter, links to “Watch Me Code,” all of my eBooks, and screencasts, and everything else is all linked from DerickBailey.com. |