Pipeline as Code with Mohamed Labouardy


About This Episode:

Automation of your functional tests is important, but many testers stop there. In today's continuous testing world you need to automate many aspects of your CI/CD pipeline as well. In this episode, Mohamed Labouardy, author of the book Pipeline as Code, will share tips on how to succeed with the automation of your continuous delivery efforts. Discover tips for using Jenkins, Kubernetes, Terraform, and much, much more. Listen up!

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Mohamed Labouardy


Mohamed Labouardy is the CTO and co-founder of Crew.work, and a DevSecOps evangelist. He is the founder of Komiser.io, author of “Hands-on Serverless Applications with Golang” and “Pipeline as Code”, open-source contributor, and regular conference speaker.

Connect with Mohamed Labouardy

Want a FREE copy of Mohamed's book?

  • Check out the book [here] – get 40% off using the code podguild19
  • ***Want a FREE PDF copy of the book Pipeline as Code? Leave a comment below and Mohamed will choose the best 5.***

Full Transcript


Joe [00:01:16] Hi Mohamed! Welcome to the Guild.

Mohamed [00:01:23] Hi Joe! Thanks for having me.

Joe [00:01:24] Awesome to have you. So Mohamed, before we get into it, is there anything I missed in your bio that you want the Guild to know more about?

Mohamed [00:01:29] No, I think you have covered everything.

Joe [00:01:31] Sweet. So I really like this book. You actually start off the book saying that even if you've never practiced continuous integration and continuous delivery principles, or DevOps practices, you can learn from this book. So it really starts almost from ground zero and ramps people up all the way to having a full-blown pipeline with all these cloud integration providers. So I really love that approach. So I thought we'd start the podcast off with this approach and talk about, you know, what is CI/CD? I say this term all the time on the podcast. Not sure I've ever really defined what it actually is.

Mohamed [00:02:01] So basically CI/CD starts with the CI, which stands for continuous integration. It's the idea of having a shared centralized repository, where all the changes and features of your application will go through a pipeline before being integrated into this repository. So a classic CI pipeline will be triggered each time you push something to this repository. For instance, if you push a commit to this repository, the pipeline will run the unit tests, the quality tests, the security checks, etc., and it will build the artifacts or compile the source code of your application, and then maybe it will push this result to a remote repository. So, for instance, if you are using Docker, it might build a Docker image and store it on a private registry. If you are working with Java, for example, it might build a JAR and store it on a Maven repository, etc. So this is a simple example of continuous integration. As for continuous deployment, it's basically an extension of continuous integration. All the changes that go through continuous integration will be deployed automatically to a staging or preproduction environment. And finally, for continuous delivery, it's basically similar to continuous deployment. However, the deployment is not automatic. It requires a human intervention or a business validation. So, for instance, you will need to wait for acceptance tests or a business validation from the business department, etc., before deploying the release to production. So this is the definition of CI/CD pipelines.

Joe [00:03:39] Nice, so whenever you talk about CI/CD pipelines, what is a pipeline then? I guess you mentioned different phases. Are these phases that happen during the software development lifecycle as you're merging code into main or master or whatever?

Mohamed [00:03:52] So basically a pipeline is a set of stages. These stages can be executed sequentially or in parallel. A stage is a set of commands or steps that you would normally run locally on your machine; with CI/CD and the tools that have emerged on the market today, you will run these steps on a remote server. So, for instance, you can use something like Jenkins or a SaaS platform like (unintelligible) to run your steps. So basically a stage is just a set of steps that you would run locally, for instance, to compile the source code or to run unit tests, etc. You just put these steps in something we call a pipeline, or a Jenkinsfile, to automate the deployment and end-to-end integration of the application, so it can be integrated every time you push something to the remote repository in which your source code is stored.
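The stages Mohamed describes can be sketched as a declarative Jenkinsfile. This is a minimal hypothetical example; the stage names, the test and build commands, and the registry URL are all illustrative assumptions, not from the episode or the book:

```groovy
// Jenkinsfile - minimal declarative pipeline sketch (illustrative assumptions).
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout scm  // pull the latest changes from the repository
            }
        }
        stage('Unit Tests') {
            steps {
                sh 'npm run test'  // the same commands you would run locally
            }
        }
        stage('Build') {
            steps {
                sh 'docker build -t myapp:${GIT_COMMIT} .'  // package the app as an image
            }
        }
        stage('Push') {
            steps {
                sh 'docker push registry.example.com/myapp:${GIT_COMMIT}'  // store the artifact
            }
        }
    }
}
```

Checked into the repository next to the source code, a file like this is what makes the pipeline trigger on every push.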

Joe [00:04:46] So why would teams want this? I just assume everyone is doing this, but for teams that maybe aren't, why would someone want to actually use CI/CD or create a pipeline in this type of infrastructure?

Mohamed [00:04:58] Okay, so there are a lot of reasons why people need to embrace the CI/CD approach. One example is that it will help you detect anomalies at an earlier stage, which will help you reduce the risk of releasing something which contains a lot of bugs that might impact your business. It will also minimize the technical debt, because you will have functional tests, unit tests, etc., which will help you improve your code quality and reliability, and it will give you better visibility into your project status and health. So you'll have a better idea about the product roadmap and the features that you are building. And also it will help you have some kind of preproduction environment in which you can do all your tests, reproduce bugs, and do some A/B testing before releasing new features to the user. So this is an example of why people need to embrace CI/CD approaches.

Joe [00:05:57] You cover all this in your very first chapter. So there are a few more things I want to pull out of Chapter one before we go to the next chapter. You also talk about cloud-native apps, so I'm curious to know why you decided to talk about cloud-native apps. Are they different from normal apps? Is this a trend you see more people going towards now as well that they need to be ready for?

Mohamed [00:06:14] Yeah, I think cloud-native applications are something that has been adopted intensively in the past two years, especially with the adoption of the cloud. So you have a lot of people migrating from on-premise to the cloud, especially with tools like Kubernetes, serverless, etc. This new style of building applications also changes the way people do CI/CD, because when you are playing with the cloud, you have a lot of services that you need to integrate with. So people also need to adapt their processes and their pipelines to be able to build their cloud-native applications. This is why I'm covering a lot of chapters regarding cloud-native applications: there are a lot of people today who are using Kubernetes and serverless applications like Lambda functions, Cloud Functions, etc. So it's something which is very trendy today, and there are not a lot of resources that you can find on the Internet about this topic. That's why I have dedicated multiple chapters to how you can build a CI/CD pipeline from scratch for these kinds of applications.

Joe [00:07:29] Great. And along those lines, you also talked about serverless. That's a new kind of trend as well. What kind of things do you have to handle differently with CI/CD when you're dealing with serverless?

Mohamed [00:07:38] I think the first challenge is the complexity of pipelines, because, for instance, when you are dealing with microservices, most of the time you are dealing with one pipeline per service. However, when you are dealing with a serverless architecture, you deal with what we call nanoservices. You split your microservices into multiple functions. So now, instead of integrating one package, you need to integrate multiple functions. There is a lot of synchronization to be done and a lot of code that will be shared across these different pipelines, so you need to do a lot of refactoring. And also, if you have an API that has multiple functions behind it, you will end up with a pipeline with a lot of stages and a lot of branches, and so it will be difficult to maintain and also difficult to iterate on in the long run. These are some examples of challenges that people might face when they start building a CI/CD pipeline for serverless applications.
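One common way to keep the fan-out over many functions manageable is a parallel block inside a single pipeline. The fragment below is a hypothetical sketch of a declarative Jenkinsfile stage; the function names and the `sam deploy` commands are illustrative assumptions:

```groovy
// Fragment of a declarative Jenkinsfile: one stage fanning out
// over several serverless functions in parallel (illustrative assumptions).
stage('Deploy Functions') {
    parallel {
        stage('users-fn') {
            steps { sh 'sam deploy --stack-name users-fn' }
        }
        stage('orders-fn') {
            steps { sh 'sam deploy --stack-name orders-fn' }
        }
        stage('billing-fn') {
            steps { sh 'sam deploy --stack-name billing-fn' }
        }
    }
}
```

Even with parallel stages, the branch count grows with each new function, which is exactly the maintenance problem Mohamed points out.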

Joe [00:08:44] So at what point in the (unintelligible) lifecycle should people start thinking about CI/CD? Should it be from the beginning? I'm just thinking of a company I worked for with a greenfield application, but they didn't make the application deployable automatically. You had to do it manually, so it didn't make any sense from a CI/CD perspective to be able to actually automate the build. So at what point should I be thinking about CI/CD as I'm developing code?

Mohamed [00:09:07] Yeah. So the advice that I give everyone when they ask me about CI/CD is that they need to start small and keep it simple in the beginning. Continuous delivery and continuous deployment require a lot of attention and a lot of skill to achieve. I think the first thing people need to start with is having a remote repository. If they are not yet using something like GitHub or SVN or any other version control system, that might be a good start to structure the source code of the application. Maybe start using multiple branches, something like GitFlow, with a master branch and feature branches, and start doing pull requests and having a real review process; that might be a good start. And once they have that, one of the things people can do is just write down all the commands they run when they build new features for their applications. For instance, if you are working on a JavaScript project, most of the commands that you might be typing while working on your feature, before you push it to your (unintelligible), might be npm run test to run your unit tests, etc. So start documenting the process. Once you start documenting your process, you can generate some kind of pipeline from this documentation. You will start to see commands that are frequently executed, and those commands you can automate by placing them on a central server, something like Jenkins or whatever your CI solution may be. Once you have done that, you need to pick your CI server. There are a lot of choices out there on the market; this book covers Jenkins, but Jenkins is just one example of the CI servers that exist today, and all the concepts that I have covered in this book can be applied to other CI servers. So once you pick your CI server, you just need to define these commands in a template file.
You will put this template file in your remote repository, and you might create some kind of webhook, so every time you push something to a specific branch, or, for instance, raise a pull request, it triggers your CI server to execute the commands. The commands can be quite simple: something like checking out the source code for new changes and running, for instance, unit tests or quality tests or whatever type of tests you have. I think the purpose in the first place is not having something with more than 80 percent coverage, because that requires a lot of work and a lot of development to maintain. And this is one of the reasons why people drop out of CI/CD: it's kind of cool in the beginning, and a lot of people love it because of the automatic deployment, etc. But once you get into it, there are a lot of complexities, and you don't want it to become some kind of roadblock in your development cycle. It should be something that you enjoy iterating on every week and something that brings a lot of benefits. That's why I recommend people keep it small in the beginning. Maybe just start with some code linters, then extend your pipeline to have some kind of coverage reports, something like SonarQube, to have some statistics, etc., so you have some key points that you can work on in the long run to improve your code quality. And maybe an additional step can be compiling the source code or building a package. Once you have reached that, you will have what we call continuous integration. And then from there you can iterate and add small steps and improvements each week or each sprint until you have something which is working end to end, and from there maybe you will start moving to continuous deployment and start having some kind of staging environment in which you can deploy your changes. So to sum up, I think what people need to remember is that CI/CD is very powerful.
For people that don't have a CI/CD background, the advice I can give them is to keep it small in the beginning. There are a lot of resources on the market today, and also in this book: in the first chapters I have tried to keep it simple for people that don't have a background in DevOps or CI/CD. So I think it might be helpful for people to read the first three or four chapters of the book to get a solid foundation in CI/CD before they jump into continuous deployment.
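Mohamed's "write down the commands you run by hand" advice can be taken literally: a small script that captures those commands is already the body of a future pipeline stage. This is a hypothetical sketch for a JavaScript project; the individual commands are assumptions about such a project, not prescribed by the book:

```shell
#!/usr/bin/env sh
# ci.sh - the commands a developer runs by hand, written down as a
# first step toward a pipeline (illustrative JavaScript-project example).
set -e            # stop at the first failing step, exactly like a pipeline stage
npm ci            # install exact dependencies from the lockfile
npx eslint .      # start small: a code linter
npm run test      # the unit tests you already run locally
npm run build     # optional next step: compile/build a package
```

Pointing a CI server's webhook-triggered job at a script like this turns the documented process into a first, minimal pipeline.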

Joe [00:13:46] Great. So, you know, you talked about roadblocks. I like how you said it should be helping the team, not making them frustrated. So how do you see most companies implementing it then? Is there someone in charge of CI/CD or is it like once again, the developer is responsible for everything? Does it depend?

Mohamed [00:14:00] It depends. But I think most of the time you have some kind of DevOps team that handles the CI/CD. In my current company, and also in my previous position, what I have tried to do is maintain some kind of shared responsibility between developers and DevOps, because for me that's one of the purposes of DevOps: it's a set of practices, and also a mindset shift for the entire organization. So if you want to have a successful DevOps culture, everyone should be involved in this journey. One of the things that you can do is set up some kind of shared responsibility when it comes to setting up CI/CD pipelines, especially today with tools like Docker, infrastructure as code, etc. Everyone can get started with anything related to ops and anything related to cloud infrastructure. So I don't think developers now have any reason to say that it's not their responsibility to build the application or to deploy it. Now, with just a simple Dockerfile, you can build your image and test it locally or on a cluster like Kubernetes or Swarm. But to do so, you need some kind of knowledge sharing, because in the beginning developers don't have a lot of knowledge about what CI/CD is, how DevOps works, and all the CI servers that exist today. What I tried to do in my previous position was start some kind of workshop in which I explain a particular topic, like, for instance, Pipeline as Code, or how you can write your Dockerfile or Jenkinsfile. Once I started doing that, people started to get interested in learning more.
From that, I started to create some kind of templates, and developers started to use these templates every time there was a new project or a new feature they were working on. They just need to clone these templates and they can create their job on Jenkins, or whatever the CI server is, without asking the DevOps team for help to create a CI/CD pipeline. It was very nice in the beginning, and it also had a lot of benefits, because in the long run developers started challenging the DevOps team on the solution. They weren't just people asking for help; they were also people giving us good ideas about how we can improve the CI/CD pipeline. They suggested new tools that we can integrate to improve the quality or to make the process faster, etc. And I think doing that, and having this kind of workshop within the organization, can help everyone, whether on the development side or the ops side, to work together and have this kind of shared responsibility when it comes to maintaining CI/CD pipelines in general.
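The "simple Dockerfile" Mohamed mentions really can be this small. The sketch below assumes a hypothetical Node.js service; the base image, port, and entrypoint file are illustrative assumptions:

```dockerfile
# Dockerfile - minimal sketch for a hypothetical Node.js service
# (base image, port, and entrypoint are illustrative assumptions).
FROM node:18-alpine            # small base image
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev          # install only production dependencies
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]      # entrypoint of the hypothetical service
```

With this file, `docker build -t myapp .` followed by `docker run -p 3000:3000 myapp` lets a developer build and test the image locally before any pipeline is involved.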

Joe [00:16:55] I love the idea of knowledge sharing. And I guess you can almost give this book to your developers, because, as I mentioned, it starts from the beginning and goes all the way to more advanced topics. So far we've touched on some of the beginning chapters: what is CI/CD, pipeline as code with Jenkins, defining a Jenkins architecture. I guess you go a little more into that, but there are a few things I want to go over that I haven't heard of before. But maybe that's just because I'm also kind of a newbie. And that is Packer. You have a whole chapter on Packer. I'm just curious to know, what is Packer and how does it help people?

Mohamed [00:17:25] So Packer is basically built on the concept of immutable infrastructure. It's similar to when you are dealing with Docker images. Let's say you have an application written in JavaScript, for instance, and you build a Docker image for this application. If you need to update the dependencies of this application, what you do is build another Docker image. This is what we call immutable infrastructure: you always start from scratch. Instead of making changes to the existing image, you build a different image, a new image with all the dependencies. And it's the same concept with Packer, except Packer is for machine images and Docker is for containers. Packer allows you to build your machine images with all the dependencies, all the configuration, etc. This is what we call baking a machine image. You start with a gold image, and then you provision this image to install all the dependencies, all the tools, and all the configuration that you need, so you have something that is ready to use out of the box. I have used this concept throughout the book to build the Jenkins cluster. Basically, instead of deploying a new server and then installing Jenkins and all the plugins and all the stuff that I need, what I do is create an image. On this image I have Jenkins installed, all the plugins that I need for Jenkins, all the credentials I need, etc. That way I have a machine image that I can use any time I want, without going through all the manual steps of configuring Jenkins; I just need to launch a server with this image, and I will end up with the Jenkins dashboard with everything installed and configured.
Another advantage of using a tool like Packer is that with the same template file that you used to create your image for AWS, for instance, you can build your image for GCP, Microsoft Azure, VMware, etc. So you start with the same template file and you end up with a machine image for different cloud providers. It's a very powerful concept, and it's a concept that we see today mainly with Docker.
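Baking an image like Mohamed describes might look something like this Packer template. It's a hedged sketch, not the book's actual template: the region, instance type, AMI ID, and the name of the install script are all placeholder assumptions:

```hcl
# jenkins.pkr.hcl - sketch of baking a Jenkins machine image with Packer.
# Region, instance type, AMI ID, and script name are illustrative assumptions.
source "amazon-ebs" "jenkins" {
  region        = "us-east-1"
  instance_type = "t3.medium"
  source_ami    = "ami-0abcdef1234567890"   # the "gold" base image (placeholder ID)
  ssh_username  = "ec2-user"
  ami_name      = "jenkins-master-${formatdate("YYYYMMDD-hhmm", timestamp())}"
}

build {
  sources = ["source.amazon-ebs.jenkins"]

  provisioner "shell" {
    script = "install-jenkins.sh"   # installs Jenkins, plugins, configuration
  }
}
```

Running `packer build jenkins.pkr.hcl` produces a ready-to-launch machine image; adding another `source` block for a different provider is what gives the multi-cloud portability Mohamed mentions.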

Joe [00:19:46] So to me, it almost sounds like Maven with the POM file. We have a POM file that has all the dependencies and automatically gets you up and running, but for machine images. Nice.

Mohamed [00:19:55] Yeah.

Joe [00:19:55] Cool. So another one I've been hearing a lot about is Terraform. I was in a mastermind with someone who was having a hard time with their customers getting them to build the application. And it was, “Oh, I just use Terraform.” They spent like two weeks creating some crazy Terraform script that does the whole build for them. So I guess my question is, what is Terraform and how does it help people with CI/CD?

Mohamed [00:20:15] So basically Terraform is a tool that allows you to describe your infrastructure in template files. It's an application of what we call the infrastructure as code approach. So instead of deploying your servers manually, by going to the console or by using the CLI or the API... in the beginning that is easier, because you start with a few servers. But imagine you have a complex architecture in which you have a VPC with subnets, servers, a load balancer, databases, etc. If you need to deploy this infrastructure manually, it will take a lot of time and you will make a lot of errors, because it's really tough to deploy a complex infrastructure without missing something. It's also hard to track all the changes that you have made to your infrastructure without documenting them. That's where the infrastructure as code approach comes in. Basically, all the stuff that you do manually, you just write down in template files, and these template files will be read by a tool like Terraform or others. Terraform, for instance, will just parse these template files and convert the resources that you have defined in them into API calls to the cloud provider. That way you have all your infrastructure versioned in template files, and you can treat them the same way you treat your source code: you can, for instance, push them to your GitHub repository, make them part of your process, and track the changes. You can create pull requests, do merges, have some kind of advanced review process, etc. You can also do some testing on your infrastructure. So basically, this is the idea behind infrastructure as code, and Terraform is just a tool that allows you to achieve that. In this book, I have used Terraform to deploy the Jenkins cluster.
So instead of deploying the servers, the load balancer, or the VPC architecture manually, I have automated all this stuff in template files. That way users just need to clone the repository and run a command, and they will have the same infrastructure and the same environments that I have used in this book, locally or on their cloud provider.
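Describing a server in a template file, as Mohamed explains, can be as small as this Terraform sketch. It is a hedged example, not the book's configuration: the region, AMI ID, and instance type are placeholder assumptions:

```hcl
# main.tf - sketch of describing a Jenkins server as code with Terraform.
# Region, AMI ID, and instance type are illustrative assumptions.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "jenkins" {
  ami           = "ami-0abcdef1234567890"  # e.g. the image baked with Packer
  instance_type = "t3.medium"

  tags = {
    Name = "jenkins-master"
  }
}
```

Running `terraform init`, `terraform plan`, and `terraform apply` is the step where Terraform turns these resource blocks into the cloud-provider API calls Mohamed describes, and the file itself can be pushed, reviewed, and versioned like any source code.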

Joe [00:22:37] Nice. This isn't in the book, but you did speak at this year's Automation Guild. And one of the questions I came up with was what part of the pipeline is a tester's responsibility? I know you talked about shared responsibility earlier, but are there any stages in the pipeline you think are really tester-centric maybe?

Mohamed [00:22:52] When you talk about the tester's responsibility, are you talking about developers or are you talking about the ops…?

Joe [00:22:58] It's usually like a software tester. He's kind of like in a sprint team with developers. He is kind of like straddling both worlds almost.

Mohamed [00:23:05] I think it depends on the structure of your pipeline. If you have some kind of acceptance tests within the pipeline, that might be the responsibility of the quality assurance team or the testers: to verify the outputs and also notify the developers or the product team about the regressions or the bugs that have been spotted. So for me, it depends on the structure of the pipeline and on the stages that are defined in the pipeline.

Joe [00:23:35] So another question that came up, and you actually kind of alluded to it, is that people should start small so you don't get developers frustrated with CI/CD and have them eventually abandon it. One of the people I mentioned at the conference was doing something with security scans, and it was giving a lot of warnings, but everyone was ignoring them. It's almost like automated tests: people start ignoring them because they take up a lot of time and effort. So do you have any suggestions, in your book or from your real-world experience, for how you can get people to pay more attention to the signals or the alerts that are coming out of the CI/CD system?

Mohamed [00:24:09] So I think one of the things that you can do is have some kind of notifications. It depends on the collaboration platform that you are using; I think most people are using Slack nowadays, so you can implement some kind of Slack notification to notify you about the important stages of the pipeline. This is one of the topics I have also covered in the book. And regarding security, I have covered two ways to inject security into your CI/CD pipeline. The first one is running security checks on your dependencies. When you are working on an application, you will be using a lot of external dependencies. One way to check for security vulnerabilities is to use some kind of open-source tool, or a premium or SaaS platform, to check your dependencies, and you can configure the pipeline to fail or succeed based on the warnings, the number of errors, or the severity of the vulnerabilities that have been found. The second way to inject security, if you are using Docker, for instance, is to scan your Docker images. You can scan your Docker images to see if they contain any kind of vulnerability that has been publicly disclosed, and in the same way you can configure your pipeline to throw an error if some kind of critical security vulnerability has been found. So to come back to your question, I think the best way to keep people involved in CI/CD and all the stuff that goes through the pipeline is to have some kind of notifications. And people also need to pay attention, because when you start implementing notifications, you may end up spamming people. You need to find the sweet spot between bringing the value and the benefits of notifications and notifying the right people.
That way you can keep the developers, the product team, and the quality and security engineers involved with your CI/CD pipeline all the time. Otherwise, they will just avoid it, and you will end up with something which is not useful and doesn't bring a lot of value to the company.
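The "fail the pipeline based on severity" idea Mohamed describes can be sketched as a small gate script that a pipeline stage runs after a scan. The report format below is a hypothetical example, not any specific scanner's schema:

```python
# gate.py - fail a pipeline stage when a scan report contains
# vulnerabilities at or above a chosen severity threshold.
# The report format here is a hypothetical example.

SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def should_fail(findings, threshold="HIGH"):
    """Return True if any finding meets or exceeds the threshold severity."""
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f["severity"]] >= limit for f in findings)

if __name__ == "__main__":
    report = [  # pretend output of a dependency or image scan
        {"id": "CVE-2021-0001", "severity": "MEDIUM"},
        {"id": "CVE-2021-0002", "severity": "CRITICAL"},
    ]
    # In a real pipeline you would exit non-zero so the CI server
    # marks the stage as failed, e.g.:
    #   raise SystemExit(1 if should_fail(report) else 0)
    print("fail build?", should_fail(report))  # prints: fail build? True
```

In practice, some real scanners can enforce this directly; Trivy, for example, supports `trivy image --severity CRITICAL --exit-code 1 <image>` so the scan itself fails the stage.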

Joe [00:26:30] Awesome. So I thought we should wrap it up. I forgot the answer; I asked you this at the conference. Is your book ready? Because I have the MEAP version, which I guess has six chapters. Are there going to be other chapters, or is this the final version?

Mohamed [00:26:41] No, there are another eight chapters.

Joe [00:26:45] Okay.

Mohamed [00:26:45] I've already finished them. They are under review. So I think we will have three more chapters early next week.

Joe [00:26:51] Nice. So what are the rest of the chapters that I may be missing that you think people might be interested to know more about?

Mohamed [00:26:57] Yeah, so the first part of the book is just the foundations of CI/CD and all the practices, and also how you can set up your Jenkins cluster, because we will be using it intensively in the following chapters. The second and third parts of the book are a hands-on experience, so basically we will be building CI/CD pipelines for different cloud-native applications. We'll cover how you can build a CI/CD pipeline for a microservices application running on Swarm or on Kubernetes, how you can build CI/CD for a serverless application running on AWS, and how you can write automated tests. We'll also be covering Jenkins X, and some advanced topics like monitoring and logging, how you can do a migration of Jenkins, and how you can set up authorization with an RBAC system. So basically it will take people from the basics to the advanced topics about Jenkins. And as I said earlier, the book covers Jenkins, but the same concepts can be applied to other CI servers or other technologies.

Joe [00:28:03] So I'm calling this a must-read. Everyone listening to this, definitely check out the show notes to get the link. I actually have a special discount code for people to be able to get any book on Manning, especially if you want to get your hands on this book. I think it's essential; it's a great primer and also takes you from beginner to more advanced. I love it. So Mohamed, before we go, is there one piece of actionable advice you would give to someone to help them with their pipeline automation efforts? And what's the best way to find or contact you?

Mohamed [00:28:28] So I think the best way to start with CI/CD is just to grab your copy of the book and clone the repository on GitHub. You will find all the resources and all the source code that I'm using in the book, so you can try everything either locally or on the cloud and just follow along with the book. I think by the end of the book, you will be mastering CI/CD practices and you will be ready to build your first CI/CD pipeline from scratch.

Joe [00:28:55] Awesome. And Mohamed, the best way to find or contact you?

Mohamed [00:28:58] I think the best way would be Twitter; you can drop me a message any time. I also have a blog where I regularly publish new posts about DevOps and DevSecOps, so you can drop me a message on my blog or on Twitter.

  • Rate and Review TestGuild

    Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

One response to “Pipeline as Code with Mohamed Labouardy”

  1. Hi Mohamed, good episode thank you for your useful insight! My question is – and some people are opinionated about this – but I would like to hear your view on when to use Jenkins Scripted Pipeline and when it is better to use Jenkins Declarative Pipeline.
    Regards, S. Yuliannia PhD.

