More and more industries and companies are running on software today. Most of the suppliers we subscribe to deliver their goods via online digital products and services.
Yes, the saying is true – Software is eating the world. So would it also be accurate to say that testers are testing the world?
As an engineer in the trenches, you can see how software is consuming more and more, and why our jobs as testers are more important than ever. It's up to us as testers to keep pace with the rapid growth. We are the quality champions and leaders of our software.
Will you allow software to become morbidly obese, with useless features or harmful side effects, or will you stand up for your users and help steer it in a way that benefits your customers and society?
Seems like pretty heady stuff, but as teams push harder and faster to get features into the hands of our customers, we are the last line of defense, educating our teams about the potential unseen consequences of our development decisions.
How do you prepare for this new world of testing?
The digital testing landscape is changing fast, and in this article, I will share some thoughts around the new era of Continuous Testing as well as the key pillars to help plan for this transformation — People, Process and Technology.
Why I'm Writing About Continuous Testing
I feel I’m in a situation that gives me unique insight into the current and future states of the digital testing landscape.
After interviewing over two hundred testing experts on my Test Talks podcast as well as running multiple online Testing Guild conferences, I’m in constant communication with thought leaders in the space, which helps keep my finger on the pulse of the testing industry.
Much of the information I plan to share has been gleaned from those interviews as well as my many years of experience working in automation. And this year, one of the topics that has come up over and over again is the term continuous testing.
So, where do we begin with Continuous Testing?
The best way to know where we are going is to find out where we have been.
Where We Are Now – Agile
Rarely do I speak to a guest on Test Talks that hasn’t worked in an Agile environment. Agile has been around for about seventeen years, but it didn’t start affecting large companies until approximately six years ago. I think it’s safe to say that Agile has now been adopted by the majority of the organizations that are creating software.
But pre-Agile, I remember working in the classic software development Waterfall model.
Within that model, testers only began testing once the software requirements and development were complete, and we as testers had limited input and insight into them. It was basically a world full of silos.
Agile broke those silos down and brought everyone together as one team.
That change was a significant one because, under Waterfall, teams could work for years on a project without speaking to or getting any feedback from their users. As you might imagine, issues found in production were almost impossible to resolve, and fixing deep-seated architectural issues with the software required a great deal of time and money.
With customers’ attention spans becoming ever shorter, an infinite array of software options, and the tolerance for bad software at an all-time low, it’s critical to find a way to get our users’ input much sooner in the software development lifecycle.
Agile was the first wave of modern software development I can remember that addressed the issues presented in the Waterfall development approach.
Agile and DevOps
Once Agile took root, the second wave, DevOps, began to appear.
Again, it took some time for it to gain acceptance, but it ultimately caught on even faster than Agile.
DevOps gave birth to the collaboration between software development and software operations, creating practices like continuous integration and delivery to provide services and products at high velocity. This approach creates a mechanism that allows us to quickly get our products into the hands of our users, which means quicker feedback in order to determine whether it’s delivering the value we’ve promised.
Like Agile did before it, DevOps broke down even more silos between teams, merging software operations with the rest of the team.
My company didn’t start focusing on DevOps until late 2014.
If you look at Google and Indeed trends, you can see how it exploded around 2014.
It's been increasing ever since — as seen in Google Trends and in the frequency with which it appears on job search sites like Indeed.com.
This is the second wave of modern software development.
So, what is the third wave?
I would say it’s Continuous Testing.
Current State of Testing – Continuous Testing
The best definition of continuous testing I've heard on my podcast is the ability to instantly assess the risk of a new release or change before it impacts customers. You want to identify unexpected behaviors as soon as they are introduced. The sooner you find or prevent bugs, the easier and cheaper they are to fix.
This approach is often accomplished by executing automated tests that probe the quality of the software during each stage of the Software Development Life Cycle (SDLC). The mantra of continuous testing is “test early, test often, and test everywhere in an automated fashion.”
Testing begins not after everything is completed but from the start. Each step along the way serves as a quality gate, baking in excellence at each stage of the SDLC pipeline.
A popular term for testing earlier and earlier in the software development life cycle is Shift-left. But I believe that shifting right, with proactive monitoring and alerting after you release your applications into the wild, is just as important. Both shifts, and everything in the middle, make up a continuous testing feedback loop.
In Episode 68 of Test Talks, Jeff Sussna, author of Designing Delivery: Rethinking IT in the Digital Economy, describes that loop thusly:
[pullquote align=”normal” cite=”Jeff Sussna – Ep 68 of TestTalks”]If you’re doing continuous delivery, which means that you’re delivering code changes on a continuous basis, suddenly it means that your software development life cycle is actually part of operations in a strange way. Then finally, because you’re continually processing feedback everywhere you’re continuously testing. Yes, it all becomes this continuous feedback loop–which is exactly why I started off the book by presenting this basic concept of Cybernetics because that circularity and steering through feedback starts to guide everything we do as a business and everything we do as an IT organization.[/pullquote]
Evolution of Automation Testing
As software methodologies change, software testing has to change as well. And just like going from Waterfall to Agile to DevOps, our approach to testing has changed along the way. This is critical to understand in order to get at the core of continuous testing.
Back in the 90’s, most of my testing activities were manual.
Then the second evolution of testing occurred with the introduction of test automation tools. The first iteration of testing tools was made available by vendors like Mercury/HP (WinRunner, QuickTest Professional), Segue (SilkTest), and IBM (Rational Robot). They all came out with solutions aimed at taking some of the manual end-to-end regression tests and automating them. The second iteration took place with the introduction of open-source testing tools like Selenium.
Back in the day, vendor tools locked into their own proprietary systems and methodologies helped feed the silo approach most teams found themselves in. This impeded the whole team’s ability to contribute to the testing effort.
Not only has open-source technology injected new life into the software development community, but it also has forced tool vendors to embrace open-source tools and create integrations with them as well.
This has helped create an environment in which every member of a team, from developer to testers, can use the same tools and technologies. This, in turn, supports more collaboration and better communication in software teams because everyone is now speaking the same language.
This shift has created a new era of continuous testing.
The Birth of Continuous Testing
Now with continuous testing, we're not only running tests in an automated fashion using the same tools and languages as the developers (and leveraging open-source libraries), but we're running them continually, all the way into production, beginning with development. We're not waiting until the end as in the old “waterfall” days.
It’s important to remember that continuous testing is not just about end-to-end, UI test automation. With the need to quickly release software, we can no longer rely on manual and automated UI testing only.
When we talk about automation in the context of continuous testing, it’s the process of automating any manual task that is slowing down the process. It doesn’t need to be a “test.” For example, before my team could do continuous integration, we needed to have an automated deploy job for our software. Having folks manually install the latest build on the nightly automated test environment was not a scalable solution. These types of tasks are critical and need to be automated.
Continuous testing is not just a “tester’s” responsibility.
Developers’ tools have matured enough that a programmer can get real-time test feedback on the effect of an impending change. Tooling is available that will automatically run their unit tests in the background to give up-to-the-minute info about the health of their code. For example, Kent C. Dodds (TestTalks Ep. 195) mentions he has used a tool called Jest that has:
[pullquote align=”normal” cite=”Kent C. Dodds (TestTalks Ep. 195) “]…watch mode, which is like an interactive experience in the terminal. It’s a real game changer for testing workflows–especially if you're really into TDD. For example, it’s capable of only running the tests that are relevant to the files you've changed since your last Git commit, which is mind-blowing… really awesome if you have hundreds or thousands of tests and in a project that takes a long time to run it'll only run the ones that are relevant to your changes, or you can filter it and run specific tasks…or just run them all. [/pullquote]
And although automation testing is a piece of continuous testing, it’s not the only piece. It’s also about a company having a true culture of quality and testing. Quality cannot be tested into a system. It needs to be added from the beginning. Continuous testing is a way to support this practice.
Challenges of Continuous Testing
As we have seen, the challenge with moving toward approaches like continuous testing is that teams need to understand the change is not just about creating automated scripts.
Teams also need to fundamentally change the way they do development and testing to accommodate these fast feedback loops. Teams also need to adjust to moving code out into production in small pieces, rapidly.
For instance, many teams have begun breaking down their monolithic applications into smaller pieces using microservices. A microservices approach gives them small services that are independently deployable and independently testable. This architecture also opens up the possibility of moving away from long-running, hard-to-maintain, UI-based automation to fast, focused, unit and API-based testing.
In continuous testing, the faster you can give your developer feedback on his or her code change, the better off everyone is.
This requires that most of the tests we create and run are at the smallest level possible, to give the quickest feedback possible. I think most folks are familiar with the famous testing pyramid by now, but at a high level: unit tests should make up the majority of your tests, followed by integration/API tests, with only a small percentage of your total test suite being UI-based automation.
Getting teams to recognize this testing shift can be a challenge that ultimately holds them back.
So how do we succeed in the era of continuous testing? There are three main pillars to be aware of as your organization makes the transformation, and they are People, Process and Technology. I will go over these pillars at a high level, but some of the other chapters in this book will take a deeper dive into each area.
If leadership isn’t on board with quality, their teams’ continuous testing efforts will fail.
Support for testing not only needs to come from the top down, it also needs to grow from the bottom up, with developers embracing testing and testers embracing development.
As I mentioned earlier, the power of Agile is that it breaks down the walls, or silos, that most teams used to work within in a waterfall development environment. Removing the separation between testers and developers forces teams to develop and test together in the same sprint.
Everyone on the team needs to take responsibility for his or her contributions to the software development process.
This concept of working together rather than having separate development and QA teams can cause confusion when a firm begins making the move towards continuous testing. Developers need to be educated that “automation” doesn’t just refer to UI tests. They need to be encouraged to embrace test-driven development (TDD) approaches that make their code more testable and automatable.
Testers need to help shepherd their teams along with testing. They should also be technically aware enough to be able to explain to the developers what their expectations are, and to know at what level a test should be done.
The phrase “technically aware” is one I’ve heard from Lisa and Janet Gregory, co-authors of the books Agile Testing and More Agile Testing. So, what does being technically aware really mean?
In More Agile Testing, Lisa and Janet describe technical awareness as something that covers the ideas of technical skills needed for testing and communicating with other members of a development team.
If your team truly understands the whole-team approach of everyone working toward the same goal, then testers and developers can share a task, such as the job of coding an automated test. A technically aware tester can collaborate with the programmer (whose life is programming, and who is really good at it), and writing your tests in the same language your developers use helps your testers collaborate more effectively with your developers.
If testers can’t articulate those things, it makes it difficult for them to, say, approach their managers and tell them why something can or cannot be automated. Or why an approach the development team is taking is not the best option.
So the burden for change is not just on the testers and the developer – it goes all the way up to the C-Suite. Quality needs to be embraced by everyone in order to succeed.
But one major gap I’ve seen that is a big impediment for teams making this change is lack of proper training.
Companies and consultancies often omit technical coaching, leaving the teams to figure it out. In my experience, this is a bad idea.
Stephen Vance, the author of Quality Code: Software Testing Principles, Practices, and Patterns (and a past speaker at Guild Conferences) mentioned in his session that management often expects improvement without providing the support to ensure success. As a result, teams end up compressing their familiar processes, wondering why things aren’t improving much. In fact, this need for continuous testing increases some of the traditional tensions between testers and developers.
With DevOps and Agile, we’re all trying to create and release quality software quicker and more often, but I think simple things like training are sometimes overlooked. We can become so focused on velocity that we lose sight of things that, in the short term may slow us down, but in the long run will make things better.
Be sure you have a training plan in place for your folks before going “all in” on continuous testing.
Once you have your folks on board and trained, you’ll need a process.
A typical continuous testing process consists of seven key elements.
Ultimately, the process starts with testing the quality of the feature.
Is it really what our customer wants? Has your team cleared up any confusion or misunderstandings before coding even starts?
I recently came across a study in Crosstalk, the Journal of Defense Software Engineering, which showed that 64 percent of total defect costs are due to errors in the requirements and design phases of the SDLC.
Getting clear on what it is you are trying to deliver to the customer can find bugs before a line of code is ever written! This is one reason some teams use acceptance criteria practices like Behavior Driven Development to help drive this communication and test the team’s assumptions against what their customers really want.
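To make that Given/When/Then style of acceptance criteria concrete, here is a minimal Python sketch written without any BDD framework; the loyalty-discount scenario, function names, and numbers are all made up for illustration:

```python
# A hypothetical checkout rule the team has agreed on with the customer.
def checkout_total(customer, cart_total):
    """Loyal customers (more than 10 previous orders) get a 10% discount."""
    discount = 0.10 if customer["orders"] > 10 else 0.0
    return round(cart_total * (1 - discount), 2)

def test_discount_applied_for_loyal_customer():
    # Given a customer with more than 10 previous orders
    customer = {"orders": 12}
    # When they check out a $100.00 cart
    total = checkout_total(customer, 100.00)
    # Then they receive a 10% loyalty discount
    assert total == 90.00
```

The value here isn't the code itself; it's that the team debated and agreed on the scenario in plain Given/When/Then language before a line of production code was written.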
Once the team agrees on what it is they are developing, testing approaches like TDD should drive the process and let you know if your code actually meets your business objectives.
Code that is checked into your continuous integration process needs to be probed for quality. Automated style checks, security and performance tests, and unit tests run automatically on check-in, with a required pass/fail ratio that must be met before promotion, will help ensure that broken code is not promoted to production.
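As a sketch of what such a quality gate might look like, here is a small Python function that decides whether a build may be promoted; the inputs and thresholds are purely illustrative, not a description of any particular CI product:

```python
# A minimal quality-gate check: given test results collected during a CI run,
# decide whether the build may be promoted. Thresholds are illustrative.
def quality_gate(passed, failed, coverage, min_pass_ratio=1.0, min_coverage=0.80):
    """Return True only if the build meets the promotion criteria."""
    total = passed + failed
    if total == 0:
        return False  # no tests ran at all: treat that as a failed gate
    pass_ratio = passed / total
    return pass_ratio >= min_pass_ratio and coverage >= min_coverage
```

For example, `quality_gate(98, 2, 0.85)` fails under the default all-tests-must-pass policy, while a team that tolerates a 95% pass ratio could pass the same build by setting `min_pass_ratio=0.95`.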
Once the code is deployed, production is monitored and data is collected to make sure it’s actually meeting the customer’s expectations. You can also proactively adjust to issues introduced by code changes before they impact your customers. All this feedback is collected and used to feed the process all over again. So it’s an iterative approach, with teams consistently acting and adjusting based on the data they are receiving from the feedback loop.
The goal is to deploy to production many times a day, measure impact, collect data, learn from small experiments that feed even more ideas, and the process starts all over again. This approach is a game changer. It’s preferable to waiting months or even years to deliver something to your customers only to discover it’s not what they really wanted, or that the architecture is completely wrong. You save time and money because you’re able to weed out imperfections as soon as they enter production and self-correct based on quick feedback.
Remember: the key to the process is to create quick feedback loops.
If teams are ignoring what a test is telling them, delete it. The objective of continuous testing is not to create tests – it’s to get actionable feedback as soon as possible. If a test is not providing that, keeping it around will just slow teams down by adding noise to the loop.
Testing in Production
Testing in production is important because certain situations only occur in the wild, and common ones often aren’t anticipated. The problem with complex systems is that they can’t be modeled very well. Even worse, you can’t know in advance how they’re going to behave. These systems may be very resilient, but they’re also very sloppy and have a lot of failures that can’t be avoided.
Most teams feel that the whole idea behind testing is to avoid failures in production. With such complex systems, you can’t think this way. The mindset change you need to embrace is that you have to get comfortable doing some failure discovery in your production environments.
So having a monitoring and alert system in place to find these unanticipated issues and having that tracing in production is critical.
For example, you want to know immediately if one of your services goes down or becomes unresponsive. By spotting an issue in production with the help of monitoring, you can often automatically roll back to the last known-good version of the service before your users even know there's an issue.
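Here is a toy Python sketch of that rollback trigger. In a real system the health checks would poll live endpoints and the rollback would call your deployment tooling, but the consecutive-failure logic is the same idea:

```python
def rollback_index(health_checks, unhealthy_threshold=3):
    """Given a sequence of health-check results (True = healthy), return the
    index at which a rollback should fire, or None if it never should.
    A real monitor would poll live endpoints and sleep between checks."""
    consecutive_failures = 0
    for i, healthy in enumerate(health_checks):
        if healthy:
            consecutive_failures = 0  # a healthy check resets the streak
        else:
            consecutive_failures += 1
            if consecutive_failures >= unhealthy_threshold:
                return i  # roll back to the last known-good version here
    return None
```

Requiring several consecutive failures, rather than reacting to a single blip, is a common way to avoid rolling back on transient network noise.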
The main part of the process is teams coming up with the metrics they will use to help measure the quality of their code at every stage of the SDLC, and how to react to poor quality.
You need to understand the status, progress and quality level of each change you have in your pipeline. You also need to come up with some key metrics that capture how each of the changes will impact your end user.
Some examples of metrics I’ve seen teams use are:
- Application performance – you need to ensure there is high availability of your services and application.
- Usage of the newly released or modified feature – metrics could include usage counts, requests, impact on sign-ups, and revenue.
In order for your people to put the continuous testing process in place, they’ll need the right technology. With so many releases and changes in the pipeline, what tools and technology can your team leverage to handle the situation?
Tools & Technology
Tools need to be lightweight, easy to maintain, and able to integrate with existing infrastructure. Once again, deciding which tools to use is a team decision; some tools fit certain teams better than others. Teams need to evaluate and determine which one works best for them and fits in with their own unique delivery process.
So what tools support the continuous testing practice for a fully automated delivery platform? Our software needs to be functional, performant and secure. Tooling is needed to help with all these areas.
- Tools to define your users’ stories
- Tools to implement your stories
- Tools to create builds and test runs
- Tools for automation
- Tools for infrastructure
- Tools for production and monitoring
Here are some common testing libraries and tools, and where in the continuous testing life cycle they would be used. (** This tools list is not complete; it is not an endorsement and is not ranked in any particular order.)
Developers need tools to assist them as they begin their coding efforts.
As I’ve mentioned, unit testing is a critical piece of continuous testing. Most development languages have a unit testing framework (or something similar) available; common examples include JUnit for Java, NUnit for .NET, pytest for Python, and Jest for JavaScript.
Unit testing is testing the smallest possible piece of code or discrete behavior, usually at the method level. A unit test shouldn’t have any dependencies on anything external, such as other methods or APIs; that way, if the unit test fails, it’s easy to know where it failed. To keep unit tests self-contained, there are many mocking frameworks that mimic those external services.
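Here is a minimal Python example using the standard library's unittest.mock to keep a unit test self-contained; the convert function and the rate service it depends on are hypothetical:

```python
import unittest
from unittest import mock

# A hypothetical function under test. It depends on an external rate service,
# which we replace with a mock so the unit test stays fast and isolated.
def convert(amount, currency, rate_service):
    rate = rate_service.get_rate(currency)  # the external dependency
    return round(amount * rate, 2)

class ConvertTest(unittest.TestCase):
    def test_convert_uses_rate(self):
        rate_service = mock.Mock()
        rate_service.get_rate.return_value = 0.5  # canned answer, no network
        self.assertEqual(convert(10, "EUR", rate_service), 5.0)
        rate_service.get_rate.assert_called_once_with("EUR")
```

Because the rate service is mocked, a failure in this test can only mean one thing: the logic inside convert is wrong, not that some downstream service is flaky.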
Acceptance Criteria Tools
To ensure the correct thing is being developed, teams can use acceptance criteria tools such as Cucumber, SpecFlow, and FitNesse.
Continuous Integration Tools
Software needs to be in a constant working state and be available to ship to your customers at any time. Using continuous integration tools, you can prove that your software still works as a whole with every new check-in.
API testing tools
If a test cannot be covered at the unit level, the next level to focus on would be the API layer. Luckily there are a bunch of tools, both paid and free, available for testing APIs:
- Blazemeter API Functional Testing
- API Fortress
- Citrus Framework
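As an illustration of what an API-level check verifies, here is a small Python sketch that validates a response payload against a contract the team has agreed on. The endpoint fields are hypothetical, and in a real suite the payload would come from an actual HTTP call rather than a dict:

```python
# A minimal API contract check: confirm a service's JSON response has the
# agreed fields with the agreed types. Field names are illustrative.
def check_user_contract(payload):
    """Return a list of contract violations (an empty list means valid)."""
    errors = []
    for field, expected_type in (("id", int), ("email", str), ("active", bool)):
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors
```

Contract checks like this run in milliseconds and catch breaking API changes long before a slow UI test would stumble over them.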
Test Data Management
One common pitfall some teams fall into when moving towards automation is not having a test data strategy in place. If you’re a test automation engineer, you’ve probably faced test data dependency issues in your test automation suites that have caused all kinds of flaky test behavior.
Not only can this be frustrating, but it can also make your tests highly unreliable, which in turn can make your team lose confidence in your test suites. There are approaches you can use to help, but also there are some tools out there designed to tackle this tricky problem:
- CA Test Data Manager
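One simple approach, sketched here in Python, is to have each test generate its own unique, disposable data instead of sharing fixed records, which sidesteps many of those data-dependency collisions (the record shape is illustrative):

```python
import uuid

# Each test builds its own uniquely-named record, so parallel runs and
# repeated runs never fight over the same rows.
def make_test_user(prefix="testuser"):
    """Build an isolated, disposable user record for a single test."""
    unique = uuid.uuid4().hex[:8]
    return {
        "username": f"{prefix}-{unique}",
        "email": f"{prefix}-{unique}@example.test",
    }
```

A test that creates its own user, exercises it, and (ideally) deletes it afterwards never fails because some other test mutated "the" shared account.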
Functional Automation Tools
There are a plethora of functional test automation tools available for your teams to choose from. Tools should be selected based on a team’s needs, and not what is the most popular one. Here’s a quick sampling:
- QuickTest Professional
- Automation Anywhere
- Test Architect
- Test Complete
Feature Flag Software
Releasing a new feature can sometimes be scary, especially if you’re not 100% sure how it will behave in production. Using a feature flag lets you easily turn certain features on and off for a percentage of users. This allows you to experiment with new features and control how soon you roll out those features to all your users in production.
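Here is a rough Python sketch of how percentage-based rollout is often implemented: hash the user's id so each user gets a stable yes/no answer, then ramp the percentage up over time. This is an illustration of the technique, not any particular vendor's implementation:

```python
import hashlib

def flag_enabled(feature, user_id, rollout_percent):
    """Deterministically enable `feature` for roughly rollout_percent% of users.

    Hashing (feature, user_id) gives each user a stable bucket in [0, 100),
    so the same user always gets the same answer for a given feature.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent
```

Because the bucketing is deterministic, raising `rollout_percent` from 10 to 50 only ever adds users to the feature; nobody who already has it gets it silently taken away.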
Performance Test Tools and Monitoring
Performance testing, just like functional testing, needs to shift-left and shift-right. One exciting thing about having a full SDLC feedback loop is that when you understand your software’s performance profile in production, you can then take that data back to your teams to make sure they’re testing the right things in development.
- New Relic
- App Dynamic
Infrastructure Automation and System Provisioning
Another important piece of the testing transformation is to start treating your infrastructure like code. Provisioning environments in an automated fashion is also crucial for being able to quickly scale up test environments. Tools like Chef, Puppet, Ansible, and Terraform can help here.
Device, Browser & OS Coverage
If you’re developing a web or mobile app, you’re going to need to test it against a variety of operating system and browser combinations. Creating an in-house lab to handle all the devices and combination permutations you need to test against can be really expensive and time-consuming to set up, especially if you plan to share those devices across different team members. Cloud service providers get rid of all that complexity and cost by hosting the device lab for you.
- Sauce Labs
Test Case Management
A Test Case Management Tool helps keep teams on track with how testing needs to be done. It allows teams to plan activities, and report on the status of those activities to your management. Different tools have different approaches to testing, and thus have different sets of features. You need a good system that will allow you to create a test plan, and set yourself up in a way that you can be successful.
Once you have the tools in place to support the continuous testing practice, the next step is to optimize and make it better with each iteration.
What about the next wave?
The Immediate Future of Testing is Predictive
As we strive to improve our process and gather and act on actionable metrics for improvement, I see AI/machine learning playing a larger role in the near future. It will enable us to continue moving towards a Predictive model of each test cycle and release to identify risk even faster and earlier than we currently can.
If you’re doing continuous integration and testing, you’re probably already generating a wealth of data from your test runs. But who has the time to go through it all to search for common patterns over time? Wouldn’t it be great if you could answer the classic testing question, “If I’ve made a change to this piece of code, what’s the minimal number of tests I should be able to run in order to figure out whether or not this change is good or bad?”
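As a toy illustration of that idea in Python, imagine you have recorded (say, from coverage data) which source files each test exercises; selecting the relevant tests for a change then becomes a simple lookup. Real predictive tools are far more sophisticated, but the principle looks like this:

```python
# A toy version of change-based test selection: given a map of which source
# files each test touches (e.g. gathered from coverage data), pick only the
# tests relevant to the files changed in a commit.
def select_tests(coverage_map, changed_files):
    """coverage_map: {test_name: set of source files it exercises}."""
    changed = set(changed_files)
    return sorted(t for t, files in coverage_map.items() if files & changed)
```

On a suite of thousands of tests, running only the handful that intersect a change can turn an hour-long feedback loop into minutes.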
Lots of companies are leveraging existing AI tools that do just that.
AI is a real thing and should not be dismissed by testers as another buzzword in the industry. I find it hard to doubt or bet against companies like Google who are heavily investing in this technology. For example, at a recent Google conference, CEO Sundar Pichai opened the event by stating that, “We’re moving from a mobile-first to an AI-first world.”
Some Key Takeaways (What I like to call Automation Awesomeness)
1) Whole-Team Approach
You want to improve communication with your testers and developers and create a whole-team approach. In my experience, not having the right quality culture will ruin any continuous testing efforts you try to implement.
2) Smaller is Better
With continuous integration, your developers need to make their code testable. If you want to write test automation for your code, you need to be able to separate it into individual pieces.
The secret is to build small things that can be combined into larger things. The best way to build small things is to have a good test suite around them, so that when you combine them into bigger things you have to write fewer tests for the bigger thing. It’s also easier to test the bigger thing, because you already have guarantees about how the smaller things work.
You don’t want to write code just for testing. You want to write code that is testable but doesn’t do any more than is needed. What it often comes down to is breaking these things down into smaller pieces, and testing the individual pieces.
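A tiny Python sketch of that principle: two small, individually testable functions compose into a bigger behavior that then needs very few tests of its own (all the names here are illustrative):

```python
# Two small, individually tested pieces...
def normalize(text):
    """Trim whitespace and lowercase, so emails compare consistently."""
    return text.strip().lower()

def is_valid_email(text):
    """A deliberately naive validity check, enough for the illustration."""
    return "@" in text and "." in text.split("@")[-1]

# ...compose into the bigger thing. Because the pieces carry their own
# guarantees, this function mostly just needs a happy-path test and an
# error-path test.
def clean_signup_email(raw):
    email = normalize(raw)
    if not is_valid_email(email):
        raise ValueError("invalid email")
    return email
```

If normalize and is_valid_email each have their own unit tests, the tests for clean_signup_email don't need to re-prove trimming, lowercasing, or validation rules; they only need to prove the composition.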
3) Automation is a Must
You can’t succeed in the world of Agile/DevOps without automation. Period.
4) Pick the Right Tools
There are a bunch of tools out there. Each team is unique, and there is no one tool that everyone should use. Regardless of which automation testing tools are selected, I always recommend doing a two-week proof of concept (POC) to ensure the solution actually fits in with your team’s development workflow.
5) Listen to what Your Tests are Telling You
Listen to what your continuous testing feedback loops are telling you. This is sometimes described as watching for “smells”: indicators that something in your code or process isn’t right. Being aware of indicators like this is the first step to improving your continuous testing process.
The whole point of performing tests is to get a clearer picture of where you stand in terms of being ready to release software. If your teams start ignoring and devaluing tests it will be difficult for you to move forward to continuous testing.
So that’s my whirlwind tour of the way I see the emergence and future of continuous testing.
It’s a great time to be a tester! Focus on your People, Process, and Technology and constantly be adjusting how you develop based on your Continuous Testing Feedback Loop.
Keep in mind, however, that this won’t happen overnight. It takes time for teams to get it right.
No worries. Start planning now.
Create a small feature and run it through the process, and continually strive to make the process better. Listen to the feedback from your customers and team; use it to help you build the perfect plan for your continuous testing process.