About this DevOps Toolchain Episode:
Today, we have a special treat for all DevOps and DevSecOps enthusiasts. Joining me is Javier Alejandro Re, CEO of Crowdar and founder of Lippia.io, with over 15 years of experience in technology and business.
In this video, you will learn how to build a real-world DevOps pipeline, seamlessly integrating quality and non-functional requirements. From code quality checks using tools like SonarQube to advanced security scanning and automated functional testing, Javier explains everything you need to know to improve your software quality continuously. Watch to the end to see the recommended tools for each stage of your pipeline, along with examples.
Check Out Lippia For Yourself!
We'll explore the four essential stages of a DevSecOps pipeline: code quality checks, security scanning, functional testing, and performance testing. Plus, Javier shares invaluable insights on tools like Gitleaks, Cloc, Trivy, DefectDojo, and k6, and how they can be integrated into your pipeline to detect and mitigate vulnerabilities effectively.
Stay tuned to discover the practical steps to kickstart your DevOps journey. These steps are designed to be straightforward and actionable, empowering you to maintain a high-quality codebase and ensure your microservices meet expected response times. Don't miss out on exploring additional resources like Lippia.io and their GitHub repository, and find out about a 30-day trial offer for their test manager tool.
So, if you want to enhance your DevOps pipeline and stay ahead of the game, this episode is packed with actionable insights. Let's dive in!
TestGuild DevOps Toolchain Exclusive Sponsor
Are you looking to enhance your automation testing DevOps efforts? If so, I've got something that might interest you.
I recently came across Lippia.io, a powerful automation testing platform that's designed to simplify your testing processes and boost your productivity. What's great about Lippia.io is its user-friendly interface and robust features that cater to both beginners and seasoned testers.
Right now, they're offering a free trial, giving you the perfect opportunity to explore its capabilities without any commitment. Whether you're working on web, mobile, or API testing, or trying to create a quality DevOps pipeline like the one we talk about today, Lippia has tools that can make your life easier.
So, if you're curious to see how Lippia can fit into your testing workflow, head over to testguild.me/pipeline and sign up for the free trial. Give it a spin and see how it can streamline your DevOps testing efforts.
About Javier Alejandro Re
Javier Alejandro Re is the CEO at Crowdar and founder of Lippia.io. He is dedicated to continuous growth in IT consulting, software quality, and test automation with high-quality standards. He holds a Software Engineering degree, a PMP certification, and a Master's in Business Administration to enhance his business skills. He has more than 15 years of experience in technology applied to business.
He co-created Crowdar in 2013 and, after a series of successful projects in Argentina, expanded its operations to Manchester, UK, developing the first integrated BDD Automation Test Framework for the Cloud.
Connect with Javier Alejandro Re
- Company: Crowdar
- Blog: www.lippia.io
- Twitter: www.crowdarinfo
- LinkedIn: www.rejavier
Rate and Review TestGuild DevOps Toolchain Podcast
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:00] Joe Colantonio In this video, you're going to learn how to build a real-world DevOps pipeline step by step, learning how to seamlessly integrate quality and non-functional requirements, from code quality checks using tools like SonarQube, to advanced security scanning and automated functional testing. Join us as we have expert Javier, CEO of Crowdar and founder of Lippia.io, who will explain everything you need to know to improve your software quality continuously. You want to make sure to watch all the way to the end, to see the recommended tools for each stage of your pipeline along with real-world code examples. Before we get into it, quick question: are you looking to enhance your automated testing DevOps efforts? If so, I've got something you might be interested in. I recently came across Lippia, which is a powerful automation testing platform that was designed to simplify your testing process and boost your productivity. What's great about Lippia is it has a real user-friendly interface and a bunch of awesome features that really cater to both beginners and expert testers. And right now, they're offering a free trial, giving you a perfect opportunity to explore its capabilities without any commitment. So whether you're working on web, mobile, or API testing, or trying to create a quality DevOps pipeline like the one we're going to cover in today's episode, Lippia has tools that can make your life a lot easier. If you're curious to see how Lippia can fit into your testing workflow, head on over to testguild.me/pipeline and sign up for the free trial now. Give it a spin and see how it can streamline your DevOps testing efforts. Check it out.
[00:01:35] Joe Colantonio Hey, Javier. Welcome back to The Guild.
[00:01:37] Javier Re Hi, Joe. Thank you. Thank you for welcoming me back here in this impressive podcast. I'm very, very proud to be here again.
[00:01:47] Joe Colantonio Great to have you. I'm really excited about this episode because I get asked all the time, hey, can you give me a real-world example of this or that? And especially for DevSecOps pipelines, a lot of times people don't know how they work or how to create one in the real world. I'm really excited about this. Let's take it away.
[00:02:05] Javier Re Okay. Well, initially I'll show a small presentation just to give a quick overview of what we'll see. Basically, this is a pretty simple pipeline composed of different stages. Before, and even in between, the main stages of deploying a solution, like build, package, and deploy, we'll include different stages that are important for quality, as I mentioned: a quality scan of the code, security scanning, functional testing, and performance testing. We'll focus on these four stages, which are the quality checks that we link into the pipeline. The idea is that every time someone commits a specific piece of code, we execute these checks, which assure, or at least improve, the quality of the product we are building. For this example, we'll take a simple pipeline configured for a set of APIs or microservices that supports a particular web product, and we'll walk through the different checks. The first check we execute is a code quality check. The quality process is not just about testing the functionality of an application; something we try to incorporate in our projects is including quality in all stages of software construction. That's why the first step is to check the quality of the code. In this example, this stage consists of three simple steps. First, scan the quality of the code, looking for two things: vulnerabilities and code smells. Code smells are commonly bad practices in code that make it less secure, less maintainable, or weaker in the other attributes we look for when we write good code. Then we check the unit test coverage of that code.
That means every method, or at least a large percentage of methods, has its own unit test; we then run those tests to check whether they pass or there are bugs in the code. To do that, we use a set of tools. One of them is a well-known tool, SonarQube. SonarQube is a scanner that acts on the code we're analyzing and detects different things depending on how we configure it. We can configure Sonar to stop a deployment if a certain percentage of unit test coverage is not reached, if there are vulnerabilities in the code, or if certain rules are not satisfied by the scan. Based on that, we can stop or allow the deployment, or we can just raise an alarm or a report so the person responsible for the code can review and correct it afterwards. In this case, I'll briefly show an example of the outcome of that stage in a pipeline. The pipeline looks like a chain of different steps that are executed, some of them in parallel, some one after the other in a serial process. In this case, we'll review the Sonar scan. The first step we can see here is code quality, where we run the linter, which scans for code smells, as I mentioned. Then come the unit tests and their execution. For the unit tests, we check two things: how much unit test coverage we have, and whether those unit tests pass. This step, in the example I showed in the presentation, passed successfully. But each of these steps generates an outcome. In the case of the unit test execution, we can see the results of all the unit tests built into this piece of code. The coverage is not high at this stage, since this is a sample, but we can see all the results of those unit tests, and we can also see how much coverage this piece of code has using SonarQube.
These are open source components that I've configured in a local, on-prem pipeline, but you can use these products in the cloud with the commercially available versions. In this case, we can also see all the other results of the scan executed by SonarQube in a different component, where we can see test coverage, duplications, and other things related to what I mentioned before: the maintainability, reliability, and security of the code as configured in this example. Obviously, these are rules that have to be configured depending on the language you use, but also depending on the level of risk you accept. If you make the quality gates too difficult to pass, your development team will probably have problems deploying. So you need to configure and customize them to your needs, your language, and the risk you are willing to manage.
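As a rough sketch of what this code quality stage can look like in GitLab CI (job names, images, and the SonarQube project key here are illustrative placeholders, not the exact pipeline from the demo):

```yaml
# Illustrative GitLab CI fragment for the code quality stage.
stages:
  - code-quality

lint:
  stage: code-quality
  image: node:20            # assumes a Node.js project; swap for your stack
  script:
    - npm ci
    - npm run lint          # code smell / style scan

unit-tests:
  stage: code-quality
  image: node:20
  script:
    - npm ci
    - npm test -- --coverage   # run unit tests and produce a coverage report

sonarqube-scan:
  stage: code-quality
  image: sonarsource/sonar-scanner-cli:latest
  script:
    # SONAR_HOST_URL and SONAR_TOKEN are CI/CD variables; the quality gate
    # configured in SonarQube decides whether the pipeline may continue.
    - sonar-scanner -Dsonar.projectKey=my-service
```

Whether a failing quality gate blocks the deploy or only raises a report is a SonarQube-side setting, as described above.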
[00:09:24] Javier Re Going further with the next steps: once we've passed the code quality scan, and after build and package, we move on to the security portion. In this section we run several different scans in parallel, and we inject their results into a report tool that we'll see in a second. Basically, this has different steps. The first step is a static security scan of the code, which also analyzes vulnerabilities that are not checked by Sonar. Sonar could do some of this; there is some overlap between these tools. But in this scenario, I configured Sonar to detect language-level bad practices and delegated the security side to GitLab. We are using GitLab on-prem, and we have this feature configured in our pipeline to scan for vulnerabilities and bad practices in the code. The next step is to scan for credentials. A common problem in the development process is that we sometimes need to configure credentials: a token to access a secured microservice, database credentials, credentials to access a bucket that stores objects in AWS, etc. All those credentials are sometimes managed in different portions of the code, and this tool has the ability to detect credentials spread across the code. When the codebase grows and you have many microservices or many pieces of code, you can accidentally leave credentials in configuration files or even hard-coded in the code itself. This tool detects those credentials and generates a report. You can then correct that by putting the credentials in a proper place, like a vault or encrypted files, or whatever solution you choose, to avoid leaving credentials in the code in clear text.
[00:12:19] Joe Colantonio Okay. So for the folks that are just listening to the audio, the name of the tool is Gitleaks.
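A minimal sketch of how a Gitleaks job might look in a GitLab CI pipeline (the image tag and job name are assumptions, not the exact configuration from the demo):

```yaml
secret-detection:
  stage: security
  image:
    name: zricethezav/gitleaks:latest   # official Gitleaks image
    entrypoint: [""]
  script:
    # Scan the repository for hard-coded credentials and tokens;
    # emit a JSON report that can later be pushed into a report tool.
    - gitleaks detect --source . --report-format json --report-path gitleaks-report.json
  artifacts:
    paths:
      - gitleaks-report.json
    when: always
```

The `when: always` on the artifact keeps the report available even when the job fails because leaks were found.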
[00:12:24] Javier Re All these tools are open source, and at the end of the presentation we can share the link to each of them so people can experiment. The next step we have in the pipeline is the dependency check. Basically, the dependency check looks for vulnerabilities in the libraries we use, not just in our own code; remember that we scan the static code that we wrote using GitLab. If we have a dependency with a vulnerability, and that vulnerability is reported in a central repository like OWASP's, or different repositories depending on the architecture, we can detect with this dependency-check scan which libraries have vulnerabilities. When we detect vulnerabilities in those libraries, we can replace them with a newer version where the vulnerability is corrected, or eventually replace the library altogether if it is no longer supported by its owner. This expands the scanning beyond the code I wrote to the libraries I use, which is very common in the open source space, where we use a lot of libraries to do different things.
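A hedged sketch of a dependency-check job using OWASP Dependency-Check (the image and paths are from its official distribution; the job name and stage are illustrative):

```yaml
dependency-check:
  stage: security
  image:
    name: owasp/dependency-check:latest   # official OWASP Dependency-Check image
    entrypoint: [""]
  script:
    # Scan declared dependencies against known-vulnerability databases;
    # JSON output feeds the consolidated security report later on.
    - >
      /usr/share/dependency-check/bin/dependency-check.sh
      --scan . --format JSON --out dependency-check-report.json
  artifacts:
    paths:
      - dependency-check-report.json
    when: always
```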
[00:14:05] Javier Re The next scan we include in this pipeline is Cloc. Cloc may look like a sophisticated thing, but the main thing it does is count lines of code and blank lines. Basically, Cloc gives you an overview of how big your software is. This is useful for prioritization: as we write more and more code, the software grows, and attending to the vulnerabilities detected by the scanners can become very complicated in terms of effort. By having the size of each component in lines of code, we can prioritize which pieces of code to tackle first and which to leave for later. Cloc gives you a sense of the complexity of your code: the bigger the piece of code, the bigger the complexity, and probably the higher the priority when you need to decide what to fix first. This is important because as soon as you put these scanners into the pipeline, they start to generate a lot of findings, and sometimes it's difficult to reduce the number of vulnerabilities to a level that is acceptable for the business. This is another tool that helps prioritize the effort of fixing those vulnerabilities. Then we have Trivy. Trivy is another open source tool that gives us the ability to scan containers, if we use containers in our solution, after they are built. With the previous tools, we were just analyzing code, or stages prior to the building of an image that can be used later in a container. This tool can scan the container after the build. Containers usually have an operating system, plus components added on top of it to let the code run. This scanner scans the infrastructure components, not just the code.
It covers all the operating system dependencies that we put into an image to build a container. After building the container, we scan it to detect vulnerabilities in the operating system code and all its dependencies.
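The Cloc and Trivy steps described above might be sketched in GitLab CI roughly like this (image names, the registry variable, and job names are assumptions; the Trivy job assumes the build stage pushed an image to the GitLab registry):

```yaml
cloc:
  stage: security
  image: aldanial/cloc:latest        # image name is an assumption; any cloc install works
  script:
    # Count lines of code per language; used later to weigh fixing effort.
    - cloc . --json --out=cloc-report.json
  artifacts:
    paths:
      - cloc-report.json

container-scan:
  stage: security
  image:
    name: aquasec/trivy:latest       # official Trivy image
    entrypoint: [""]
  script:
    # Scan the freshly built image: OS packages plus bundled dependencies.
    - trivy image --format json --output trivy-report.json "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  artifacts:
    paths:
      - trivy-report.json
    when: always
```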
[00:17:34] Javier Re And finally, all the scans I mentioned in the security stage generate a JSON or CSV file with their results. Remember that all the scans I mentioned are open source; we try to keep that model. Since those results, usually JSON files, are not easy to read, we added a tool called DefectDojo. DefectDojo is a report tool that consolidates security scanning results. If we look at the pipeline we saw before, in the security section we have the different scans I mentioned: dependency check, secret detection, container scanning. All these stages in the security stage generate results, and those results are consolidated in a specific step we put into the pipeline, called report, which feeds everything into DefectDojo. DefectDojo has many, many reports; I'll show just a couple of them. The main source of data that DefectDojo lets you analyze is vulnerabilities, from different angles: by vulnerability, historically, checking which vulnerabilities are critical, which are high, which are medium. Based on what I mentioned before about fixing these vulnerabilities, and using Cloc, which gives you the size of the software, you can prioritize which components of your software to fix first, based on complexity, on how much usage that component has in the application, and other aspects. In this case, we can see a dashboard with historical findings of vulnerabilities. Then we can see another report that classifies how many vulnerabilities each component has. Here I'm just looking at one microservice with all the vulnerabilities it has, and we can see an example of the size of this component.
In this case we have a pie chart with the lines of code of this component, which is 23,000 lines, mostly TypeScript, with smaller amounts in different languages or components. This gives you the size. We can also see the vulnerabilities, or findings, divided by component. In this sample, we can see each microservice with all the findings that are active. Active findings are vulnerabilities that are not yet solved, just to clarify. For example, in this case, we have the largest number of vulnerabilities in the frontend, which is a big piece of code, and the rest of the vulnerabilities are spread across the different microservice components of the application. We can also browse vulnerabilities by priority and by page in the report. The longer a vulnerability stays in the report, the bigger the risk of having an issue with it if we don't fix it in time. We can also see the libraries I mentioned with the dependency check: each of the libraries we use and how many vulnerabilities each of these components has. In addition, we can see a dashboard with metrics and statistics about these vulnerabilities: how many vulnerabilities of each severity level we have over time, across components, and so on. This is a big reporting tool with many, many reports, but it's an interesting way to manage all the security aspects of the scanning.
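Feeding each scanner's output into DefectDojo is typically done through its import API. A hedged sketch of such a consolidation step (the host, engagement ID, and report file are placeholders; `DD_API_KEY` is assumed to be a CI/CD variable holding a DefectDojo API token):

```yaml
report:
  stage: report
  image: curlimages/curl:latest
  script:
    # DefectDojo's /api/v2/import-scan/ endpoint ingests scanner output.
    # Host, engagement id, and scan_type below are illustrative values;
    # one such call is made per scanner report.
    - >
      curl --fail -X POST "https://defectdojo.example.com/api/v2/import-scan/"
      -H "Authorization: Token $DD_API_KEY"
      -F "engagement=1"
      -F "scan_type=Trivy Scan"
      -F "file=@trivy-report.json"
```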
[00:22:39] Javier Re Going back to the presentation: we've covered code quality and security checks, so the last two steps we include in the pipeline are functional testing and performance testing. For functional testing, in this particular example, we are using Lippia, a framework that we use frequently. It's built on open source components like Cucumber, Selenium, Appium, etc. In this scenario, we execute all the tests needed in this pipeline to do a deploy. We can select different scopes: for example, if we are merging a small commit into the development environment, we can run just a smoke test, a classified subset of the tests we have in the automation suite. When we need to deploy to a QA environment, staging, or production, we can run a full regression test. With the same solution, we can run different scopes of tests in an automated way. In this example, I can show you the results of those tests. In the same way that we see vulnerability results in DefectDojo, we can see the metrics of the functional automated tests executed against this application: how many tests we have in the smoke suite, how many in the regression suite, split by functionality, split by whether they are automated or not. As we automate tests, we can see how many are automated versus not yet automated at this point, and the evolution of the different statuses. We can also see the results over time: how many tests we executed, how many passed, and how many failed across the different stages. We have both an overview of the results and a detailed view in this section. That covers the functional tests.
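The smoke-versus-regression split described above is usually driven by test tags and branch rules. A hedged GitLab CI sketch (the Maven command, Cucumber tag names, and branch names are illustrative; the actual Lippia invocation may differ):

```yaml
smoke-test:
  stage: functional-test
  image: maven:3.9-eclipse-temurin-17
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"   # small merges: smoke subset only
  script:
    # Run only scenarios tagged @smoke in the Cucumber suite.
    - mvn test -Dcucumber.filter.tags="@smoke"

regression-test:
  stage: functional-test
  image: maven:3.9-eclipse-temurin-17
  rules:
    - if: $CI_COMMIT_BRANCH == "main"      # releases: full regression
  script:
    - mvn test -Dcucumber.filter.tags="@regression"
```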
Going back to the presentation, we also include a performance test at the end of the pipeline. In this case, the performance tests we include are not stress tests, where you have a controlled environment and push the application to the limit of the infrastructure or its capacity. Here, the performance tests are in charge of checking whether each microservice or component of the application produces the response time we expect. We can set minimum and maximum response time limits and check that the requests we execute fall between those values. We use k6 to write this test suite. You can write these tests with different tools, but we chose k6 for this specific example because it lets you write the tests in code rather than in a scripting UI, which is more powerful for checking performance. We can see the results in a dashboard, which is pretty simple, and it's the last step of the pipeline. We can check all the requests we made and how many of them fall within the minimum and maximum we defined for this specific microservice, plus additional information like how many users we simulated. Always remember that in this performance test we don't try to generate a bottleneck in the application, like other kinds of performance or stress tests. We just want to check that the new code is not causing a loss of performance or a degradation in the response time of the services.
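A minimal sketch of how such a k6 step might be wired into the pipeline (the script name and stage are hypothetical; the response-time bounds live inside the k6 script as thresholds):

```yaml
performance-test:
  stage: performance
  image:
    name: grafana/k6:latest        # official k6 image
    entrypoint: [""]
  script:
    # perf-check.js is a hypothetical script; response-time bounds are
    # declared inside it as k6 thresholds, e.g.
    #   thresholds: { http_req_duration: ['p(95)<500'] }
    # k6 exits non-zero when a threshold is crossed, failing the pipeline.
    - k6 run perf-check.js
```

Because a crossed threshold fails the job, a regression in response time blocks the deploy without anyone reading the dashboard.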
[00:28:28] Joe Colantonio How does it know that? Does it keep historical data to say last run it was like 10s now it's 20 or do you have to just do it manually?
[00:28:36] Javier Re No, in this scenario you can see the pipelines. This tool is an open source report, which is pretty simple. k6 has many other tools in the cloud version where you can store historical data, but using GitLab, as we do in the pipeline, you can see the different executions of the pipeline and check how the application performed in each one. This specific report doesn't keep historical data like the other ones we saw, but all these executions can be stored in GitLab, in this particular case with all the results attached. Going back to the beginning, just to make a small recap: we saw four different stages in a real pipeline that check the quality of the software solution, starting with code quality, to check good practices in terms of maintainability, unit tests, etc., then checking security vulnerabilities in the code, the dependencies, and the images we build for deployment to Kubernetes or Docker, and finally executing two main steps: functional tests in an automated way, and performance tests, all in a single pipeline. This is something you can do incrementally: start with code quality, eventually add some of the security scans we covered, and then increase the complexity of the pipeline as you feel comfortable maintaining the outcomes of these scans and quality checks.
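Put together, the recap above corresponds to a GitLab CI stage list along these lines (stage names are illustrative):

```yaml
stages:
  - code-quality       # lint, unit tests, SonarQube scan
  - build              # compile and package the image
  - security           # GitLab SAST, Gitleaks, dependency check, Cloc, Trivy
  - report             # consolidate scanner output into DefectDojo
  - functional-test    # Lippia/Cucumber smoke or regression suites
  - performance        # k6 response-time checks
```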
[00:30:53] Joe Colantonio Javier, quick question. If someone's just getting started and they're seeing this, is there one piece of actionable advice you can give them on where to start? Rather than being overwhelmed, going I need performance and security and all this, right now?
[00:31:05] Javier Re Yeah. Well, from my perspective, a good way to start is with some code quality checks, because as soon as you start doing that, you improve the way you write code. So first, add code coverage, scanning, and code smell checks, depending on the language you use, and also do a basic static analysis. Static analysis gives you a quick overview of how well or how badly you are writing your code. With those two things, I think you have the first step. Then, if you are doing automation in your functional tests, as soon as you build even a single automated test case, include it in the pipeline. A common problem we see in many projects is that teams put a lot of effort into building automation but don't give the feedback from those tests to the developers. As soon as you put those tests in the pipeline, they start to give feedback to the developers. The main purpose of all of this is to give the development team feedback on how things could break in production later on. So those are the three main steps: scanning the code for code smells and vulnerabilities, static analysis, and functional tests. After that, you can add more things, like performance checks and more complete security scanning. But I think with these three steps you have the basics.
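Following that advice, a starting pipeline can be deliberately small (tools and script names here are illustrative, assuming a Node.js project):

```yaml
# A minimal starter pipeline: quality checks plus one automated
# functional test wired in so developers get feedback from day one.
stages:
  - quality
  - test

code-quality:
  stage: quality
  image: node:20
  script:
    - npm ci
    - npm run lint                  # code smells / static analysis
    - npm test -- --coverage        # unit tests with coverage feedback

smoke-test:
  stage: test
  image: node:20
  script:
    # A single automated functional test is enough to start the feedback loop.
    - npm run test:smoke            # hypothetical script name
```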
[00:33:05] Joe Colantonio Are there any plans to share the code or repository access?
[00:33:09] Javier Re Yeah, we can share all the tools that we use, and each of the tools has different repositories with sample code. This is obviously part of an effort that takes some time to build, so don't feel overwhelmed trying to put all of this in place; on the other hand, the more checks you put in your pipelines, the more difficult they are to maintain later on. And what usually happens then is that the developers ignore all of the reports because there are so many things to take care of.
[00:33:53] Joe Colantonio All right, so wrap it up then. If people want to learn more about Lippia from this demonstration, since it seemed to really make functional testing and its management a lot easier, where can they learn more?
[00:34:04] Javier Re Well, basically, you can check Lippia.io, which is the website, but also connected to Lippia.io is github.com
[00:34:15] Joe Colantonio So it's github.com/Lippia-io. And we'll have a link in the show notes down below.
Sign up to receive email updates
Enter your name and email address below and I'll send you periodic updates about the podcast.