Welcome to Episode 73 of TestTalks. In this episode, we'll discuss continuous delivery, automation, and creating awesome automation dashboards with Sahaswaranamam Subramanian.
Discover how testing and quality are essential pieces of continuous delivery.
Is your team still waiting until the end of your software development life cycle to understand the quality aspect of your application — when it’s too late to do much about it? Sahas shares how, if we want to deliver a product continuously, we need to be continuously evaluating quality at every stage. Testing throughout the cycle is a critical element of building quality in and fitting that quality into continuous delivery.
Listen to the Audio
In this episode, you'll discover:
- How testing fits into a continuous delivery process.
- Why you won't succeed with automation without the buy-in from your leadership.
- Why creating test automation dashboards has never been easier
- Tips to improve your continuous delivery efforts
- Why, in terms of shift-left, we as a team have to own quality
“In a #DevOps world the whole team has to own #quality” ~@Sahaswaranamam http://testtalks.com/73
Join the Conversation
My favorite part of doing these podcasts is participating in the conversations they provoke. Each week, I pull out one question that I like to get your thoughts on.
This week, it is this:
Question: What tools do you use to track your test results? Share your answer in the comments below.
Want to Test Talk?
If you have a question, comment, thought or concern, you can do so by clicking here. I'd love to hear from you.
How to Get Promoted on the Show and Increase your Karma
Subscribe to the show in iTunes and give us a rating and review. Make sure you put your real name and website in the text of the review itself. We will definitely mention you on this show.
We are also on Stitcher.com so if you prefer Stitcher, please subscribe there.
Read the Full Transcript
Joe: Hey, Sahas. Welcome to Test Talks.
Sahas: Hey, Joe. Thank you. Thanks for having me.
Joe: Awesome.
Sahas: How are you?
Joe: Great. It's finally great to have you on the show. I know we had some technical difficulties, but we're live now, so I'm looking forward to our discussion. Before we get into it, could you just tell us a little more about yourself?
Sahas: Yes. My name is Sahas. I'm living in the Chicago area right now. I'm working with CDK Global as an architect, specifically focusing on continuous delivery and quality disciplines. I've been in the software industry for around 11 to 12 years now and have played different roles. Right now, as I mentioned, I'm focusing more on continuous delivery and quality. Before that, I was working for a Microsoft partner, and I'm also a Microsoft Visual Studio MVP alum. Before that, I was a developer, a build engineer, and a scrum master. I've kind of played different roles at different times, matching the needs of the company. So, I'm here.
Joe: Awesome. You have a lot of experience, so I'll probably be asking questions from all different areas, but today I really want to try to focus on the continuous delivery piece. The second thing I'd like to focus on is that you've been blogging about some awesome-looking Selenium BDD result dashboards, so I'm probably gonna ask you a few questions around that, too.
Sahas: Absolutely.
Joe: Cool.
Sahas: I'm excited.
Joe: Cool. At a high level, how would you explain what continuous delivery is?
Sahas: Continuous delivery has kind of become a buzzword now. The core crux goes back to something very simple: how can we deliver what we made to our customers as early as possible and learn from their feedback? The point goes back to getting continuous feedback, getting faster feedback from our customers and the consumers who are using our products. In order to do that, we have to follow several things in-between, right? From writing good stories all the way through following certain practices while coding and shipping and all that other stuff. Continuous delivery is one of the ways to get our product into the hands of our customers as early as possible and gain good feedback on that. Once you get the feedback, the most important part is how we can put it back into our lifecycle and make the product better and better, so our customers will love to use it.
Joe: Very cool. How does testing fit into continuous delivery? Is it an important piece that you need to test before a build is promoted to the next stage, before it's delivered to users? How does that process work?
Sahas: Testing, the whole quality piece, is absolutely an important piece. I strongly believe in having a high-quality product, and I strongly believe in some of the practices associated with Agile quality. Whatever the world was doing before, it could be waterfall, we may say V-model or whatnot, now everybody's shifting towards Agile. Everybody starts talking about Agile. Almost any product company, any software company you go to around the world will say, “I'm doing Agile. I'm doing daily stand-ups.” The world has moved on. We were shipping products once every six months, maybe once a year, once every two years? Now, everybody is thinking about shipping the product in a reduced cycle time. Ship it once in two weeks, ship it once in four weeks, ship it once in a month kind of thing.
Everything changed around that. The business need changed, so now it's not only development that has to shift gears towards Agile. Everything has to shift gears towards delivering that product, which means the quality aspects also have to change. There are some very specific, prescriptive things we might talk about in detail over time. At a high level, I'm a big believer in Agile quality: how do we fit a quality mindset into the product-building lifecycle? We follow specific things; for lack of a better term, we call it a quality manifesto.
Regardless of what you do, you have to build quality in. You shouldn't be thinking about quality as an after-the-fact, which unfortunately has been the way we have worked so far. We worked first to get the requirements out, then we design, do some coding, and finally we say, “Okay. I'm ready to bring in some testers. Let's start doing testing.” Instead of trying to verify our understanding and quality at the end of the cycle, continuous delivery … if we want to deliver a product continuously, we have to continuously evaluate the quality at every stage. Testing throughout the cycle is a very critical piece of building quality and fitting quality into continuous delivery.
The second thing was what we call checking our functionality. What typically has been happening is, “Oh, yeah. I created this screen. I have this button. I have this whatever functionality. Now the QAs, our quality test automation guys, come and test it for me.” The intention at that point, what we leave to them, is, “I created a screen. Go and test the screen for me.” That's kind of the situation. That's, I would say, checking whether the screen works or not, checking the functionality. However, now what we're trying to do is deliver the product back to our customers in a shorter cycle and learn from the feedback. Of course, customers are giving the feedback, and our product owners would have created stories based on that feedback and reflected it back into the lifecycle.
We need to engage the quality evangelists, who think from a quality perspective, to test our understanding. Here is the feedback that we got from the customer, and here's what we built. Did we build what they wanted, instead of taking the tester directly to a screen and saying, “Here's my screen. Go and test”? Instead of checking the functionality, we need to check our understanding. Did we understand the problem correctly? Did we build the right solution, the right thing, the right way?
The third aspect we focus on is finding bugs. However, in a shorter cycle, in four weeks, if I'm finding a bug, how likely am I to go and fix the bug before it goes into my customer's hands? Instead of finding a bug, pair with a developer, be a part of the whole ceremony. Pair with your product owner. Try to understand what your customer really wanted, and focus on preventing bugs. Employ techniques, for example, check with the developer while he's designing, while he's coding, and try to influence the developer to think like you from a quality perspective, and hence cover more area while the developer is still on that line of code.
The most important aspect is that, conventionally, quality has been QA's responsibility. In some places, quality could be a separate organization. We might say a quality department, or it's outsourced somewhere to some country with QAs and testers and test automation experts and whatnot. We have to move away from that, and the whole team has to own quality. It's not that I as the developer write the code and then call to the other side of the window and say, “Hey, QA, go and test for me.”
We as a team have to own quality. For example, when we plan our stories, we have to think about quality aspects while we're thinking about design aspects, while we're thinking, “Okay, in order to deliver the user story, I'm going to change the stored procedure. I'm going to change this service. I'm going to create a new service. I'm going to create a new screen.” We also have to think, “What is the load on this particular service? How do I verify that particular characteristic that my product owner wants?” You've got to go back to the product owner and ask them, “You're asking me to build this new service. What if this comes up in a Super Bowl ad? What if I get 30,000 hits in three seconds? How do you expect the service to react?” It has to be more collaborative: the whole team owning quality in order to deliver what would delight our customer. That's the fourth aspect.
The fifth aspect is, for some reason, we all have been through that situation, including me and many of my friends: if you have an application that has seven screens and 70 functionalities, it should have 700 test cases. Go and automate everything. Go and automate all the button clicks and whatnot. Instead, what I personally say is you cannot really get rid of manual verification. You cannot really clone your brain, the way your brain thinks, the way the product owner would see the product, the way your customers, millions of customers, would see the product.
Rather, you should use automation for its strengths, meaning we have to automate in a more rationalized fashion. For some reason, because we had this situation of QAs, a separate quality team, etc., automation has mostly meant automating at the surface, automating at the UI level, automating GUIs. Rather, we have to think about it more in depth. As a team, why do we want to repeat something at the GUI level if it has already been automated at the unit test level? If something can be automated at the web service level, why would you automate it at the GUI level? We have to take a rationalized approach even to automating things.
Instead of closing our eyes and automating from the screen and trying to automate every single button click, we've got to recognize that every single script, every single automation that we do, is going to need a lot of love and maintenance. Over time, one of the conventional complaints that I heard over and over from many teams is, “We invested in automation. We invested in GUI automation. We created 700 test cases. After six months, everything started failing. We don't know why.” Of course it would, because you did not maintain them. Your product moved on. Your code moved on. You maintained your product code, your production code, but you did not maintain your test cases. Why wouldn't they fail? It's a concern if they're not failing. Investing our energy in a much more constructive, thoughtful way is important. That's the fifth aspect. I call it rationalized automation.
These five aspects are kind of the backbone of [inaudible 00:11:06]. I would say, wherever I go, wherever I try to implement automation, wherever I try to implement quality, we bank on these five things.
Joe: Awesome. There's a lot that you covered there. I want to try to unpack a few of the things that you brought up. Let's start with rationalized automation. I'm dealing with this myself. I work with eight sprint teams. I try to tell them, “This is development. Automation is development. You're responsible for your tests. If the application changes, if you change it, you need to bake testability into your application and know that. If you make a change here, you need to be thinking of all the automation, because it's going to affect it later on down the road and make it hard to actually run in a CI environment.”
How do you encourage that? I think I read somewhere you've worked with 400 developers which is a lot more than what I've worked with. I'm struggling with it. How do you change that mindset or the culture to think of automation as a team activity, not just for a QA resource?
Sahas: Sure. It's kind of a very interesting question. I wouldn't say I'm doing that as a person. It's got multiple aspects.
The foremost and very important aspect I believe in is the buy-in from your leadership. Fortunately, where I am, we have strong buy-in towards improving overall maturity towards continuous delivery, towards agility: our ability to enable our customers to pull the product at any time, to pull the latest code that we have done, the latest feature that we have done, at any time. To the leadership, that means a lot. If our leadership is strongly committed, that push comes from the top down. On the other hand, from the bottom up, we constantly try to search out and figure out the aspects that our developers love.
At the end of the day, in kind of a fun way, we [inaudible 00:13:02] a term called RDD. Like BDD, there's something called RDD. It's nothing but, again, completely personal, completely what we do within a small group. It's called Resume Driven Development. It's what developers love: if I could go and blog about it, if I could go and talk at a MeetUp, if I could go and talk to my friends and say, “Hey, I'm working with this cool tool. I'm working with this cool technology.” Developers love that. We would love to work on a cool technology.
From a bottom-up perspective, how would you encourage your development team to participate in this? By constantly proving the value: “Hey, here's a cool new technology. Instead of you going and using, say, JMeter for a load test or a JMeter API test, there's a cool new tool called Taurus. Under the hood, the test case might be running using JMeter or Gatling or [Sung 00:13:58] or something else. You as a developer would just write your test cases using Taurus, which uses .yml files. It's much more readable. You can do code reviews easily. It fits into your whole CI/CD pipeline much more efficiently, and it's gonna give a lot more productivity back to you.” That's gonna ignite some stuff at the bottom, at the engineer level.
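As a rough illustration of the kind of thing Sahas describes, here is a minimal sketch in Python that writes a hypothetical Taurus-style YAML scenario and hands it to the `bzt` CLI. The YAML keys follow common Taurus conventions and the endpoint and file name are made-up placeholders, so treat this as a starting point rather than a definitive setup.

```python
# Minimal sketch only: a hypothetical Taurus scenario written from Python and
# run through the `bzt` CLI. The YAML keys (execution/scenarios/requests) follow
# Taurus conventions but should be checked against the Taurus docs; the URL and
# file name are placeholders.
import subprocess
import textwrap

SCENARIO = textwrap.dedent("""\
    execution:
    - concurrency: 10          # virtual users
      hold-for: 1m             # how long to keep the load
      scenario: quick-check

    scenarios:
      quick-check:
        requests:
        - http://localhost:8080/health   # placeholder endpoint
""")


def run_taurus(config_path: str = "quick-check.yml") -> int:
    """Write the scenario file and let Taurus pick the executor under the hood."""
    with open(config_path, "w") as handle:
        handle.write(SCENARIO)
    # `bzt <config.yml>` is the basic Taurus invocation; Taurus decides whether
    # JMeter, Gatling, or another tool actually generates the load underneath.
    return subprocess.call(["bzt", config_path])


if __name__ == "__main__":
    raise SystemExit(run_taurus())
```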
Also, the third aspect that I believe in is the tooling platform. If I have a test case, if I have it running every night, and I as a QA walk in the next morning and send a report or see a report saying, “Last night we ran 700 test cases. 200 of them failed,” it doesn't have any impact on the team. It barely has any kind of impact. Rather, if we slip that in as part of a delivery pipeline, we get to a point where you check in a piece of code and we figure out what the impact of that code is. We're gonna run only those tests which relate to the changed code.
We're going to let your code propagate across the environments only when the tests pass, which is likely to have a higher impact, because if I check in a piece of code, it goes and gets deployed to, say, that environment, it runs ten cases and two of them fail, I'm likely to get almost instantaneous feedback in 30 minutes or 10 minutes versus the next day. When I see the feedback in 10 minutes or 30 minutes, I'm likely to go and react. I'm likely to go and work on it and fix it and make it better so that I can see my code running. In my first environment, maybe I'm trying to evaluate that for a critical bug where my service breaks only when I hit 10,000 concurrency.
Our tooling has to improve. Our tooling has to support this whole idea of continuous delivery. That's how quality gets injected into the delivery pipeline. Otherwise, quality will stand on the sidelines, QAs will keep crying, and these guys, the developers, will keep ignoring it.
Joe: That's a great point that you brought up. I'm just curious to know how it works in the real world. From my experience, this is something we're struggling with. A developer checks in code into the main branch. Then, at night, we have a job that runs a full BDD test suite. If those tests fail, no one looks at the results, and the code still stays there. Even if we found that a test broke, the code still lives there. Someone else checks in code, same thing. All of a sudden, we have layer on top of layer of what could possibly be bad code. We don't know what change it is that caused something to break.
I guess what we're struggling with is how we know at runtime, with continuous delivery and continuous integration, what tests to run based on impact. Is there some sort of logic that says this piece of code is going to map to these three tests? How do you know that at runtime, I guess, when someone checks something in?
Sahas: That goes back to a lot of technology-specific differences. I don't know, possibly I don't have an answer for JavaScript and Java and Scala and everything. At least for some technologies, there's something called test impact analysis that comes as part of Visual Studio and the TFS world of things, which will only draw up the impacted test cases. On the other hand, what I would also say is, we try to follow, I wouldn't say follow … it's a huge cultural shift. What you explain is what happens in 99% of places, but the 1%ers, maybe some projects in Netflix and some projects in Etsy and maybe some place in Google, may not have that struggle. They may have the ownership where, when something fails, developers jump in and fix it. Including in my case, we're all going through that transformation, and it's kind of a nice struggle to go through.
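Outside of built-in tooling like Visual Studio's test impact analysis, a crude version of the idea can be scripted. Below is a hedged Python sketch that diffs the changed files from git and runs only the test modules whose names map to those files; the naming convention, directory layout, and pytest usage are assumptions for illustration, not how Sahas's pipeline actually works.

```python
# A naive "run only the impacted tests" sketch. It assumes a simple convention:
# a change to src/foo.py maps to tests/test_foo.py. Real test impact analysis
# (e.g., in Visual Studio/TFS) is far more precise; this is only an illustration.
import subprocess
from pathlib import Path


def changed_files(base_ref: str = "origin/main") -> list[str]:
    """Ask git which files changed relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base_ref],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.strip()]


def impacted_tests(files: list[str]) -> list[str]:
    """Map changed source files to test modules via a naming convention."""
    tests = []
    for changed in files:
        path = Path(changed)
        if path.parts and path.parts[0] == "src" and path.suffix == ".py":
            candidate = Path("tests") / f"test_{path.stem}.py"
            if candidate.exists():
                tests.append(str(candidate))
    return tests


if __name__ == "__main__":
    targets = impacted_tests(changed_files())
    if targets:
        # Run only the impacted tests so the developer gets feedback in minutes.
        raise SystemExit(subprocess.call(["pytest", "-q", *targets]))
    print("No mapped tests impacted by this change.")
```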
What we try to do is eliminate the flaky test cases. Basically, the idea is, “How can I put the feedback in front of the developer as fast as possible?” If I'm checking in a piece of code, if my build has to stand in the queue, if my build goes and runs for 35 minutes, and I have a Selenium grid locally set up, or many places are using SauceLabs now, which is good, but my test run is in SauceLabs and my tests run for two hours and finally I get the result: for two lines of code that I changed, if I have to wait four hours for feedback, that's kind of the worst scenario, versus how can I get the fastest possible feedback?
We kind of tried to build something like a smoke suite and a regression suite. They are traditional things, but we try to stick to them. In the smoke suite, no test runs more than a minute, meaning no GUI test that runs in SauceLabs runs more than a minute. You set some high bar like that. I should get the fastest feedback. The moment I check in, they should start running in parallel rather than in sequence, and we should have our test cases …
I follow, or believe in, specific coding practices to design your test cases: all your tests should be independent. Even a GUI test should be independent. You shouldn't be depending on external data, meaning you shouldn't be assuming that I have a particular item in my inventory and I'm trying to change and edit that thing. If you're trying to change and edit something, you had better populate that item yourself. Then you know that item exists. You just go and edit and update. Finally, when you come back, you just go and delete it and come back.
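As a sketch of that "own your test data" rule, here is what it might look like as a pytest fixture around a hypothetical inventory API client: the test creates the item it intends to edit and deletes it afterwards, so it never assumes pre-existing data. `InventoryClient` and its methods are invented for illustration.

```python
# Sketch of an independent test: it creates the data it needs, edits it, and
# cleans up after itself, so it never depends on what already exists in the
# environment. InventoryClient and its methods are hypothetical placeholders.
import uuid
import pytest

from myapp.clients import InventoryClient  # hypothetical API client


@pytest.fixture
def inventory_item():
    client = InventoryClient(base_url="http://localhost:8080")
    # Arrange: create the item this test will edit, with a unique name so
    # parallel runs don't collide.
    item = client.create_item(name=f"widget-{uuid.uuid4().hex[:8]}", quantity=5)
    yield item
    # Teardown: delete it again so the environment stays clean.
    client.delete_item(item["id"])


@pytest.mark.smoke  # register this marker in pytest.ini; keep smoke tests under a minute
def test_edit_item_quantity(inventory_item):
    client = InventoryClient(base_url="http://localhost:8080")
    client.update_item(inventory_item["id"], quantity=7)
    assert client.get_item(inventory_item["id"])["quantity"] == 7
```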
You try to follow certain coding practices to give the fastest possible feedback to the developer. It's a cultural shift. Still, not a lot of developers care about it, but with the help of leadership, we're constantly grinding away at it, constantly talking about continuous delivery, constantly talking about the team owning quality, and constantly making developers accountable. On the other hand, I'm not just going to make them accountable, I'm going to ask them, “For you to become more accountable, what do you want? What would help you?” Then that feedback goes back into improving the tooling around it.
Joe: Great points. What I love about this, and why I started the podcast, is that sometimes when you're a testing or an automation engineer, you feel like you're the only one going through these things, but everything you're talking about is exactly what's happening in the company I'm working for. I'm sure a lot of other people are struggling with it. This is awesome stuff.
Sahas: Absolutely.
Joe: You brought up something about flakiness, and I think everyone who is listening can relate to flakiness. This is actually what caught my attention. I've been trying to create a dashboard to keep track of flakiness over time, to say, “Look, this feature file fails every three days, but then it passes every so often.” There's no way I could really track it that well. I was creating a really weird MySQL database, and I had a Python script that was just taking the data from the BDD runs and putting it into the SQL database. I was trying to write a front end in PHP. Then, you contacted me and showed me a screenshot of these awesome-looking dashboards that you have that actually track flakiness. Can you just tell us a little bit about the technology you use to make these cool test automation dashboards?
Sahas: Sure. I'm glad that it's useful. The technology behind it, first of all, I would tell you, is not meant for test automation at all. It is the purpose that drove me to use that product. I thought it would be cool to use it, and we started using it. It's called the ElasticSearch, Logstash, and Kibana stack, the ELK stack. It's completely open source and basically used for a few different purposes. One is log collection and analysis. It's also used in analytics; LinkedIn is a big player in that. They kind of use BI in combination with ElasticSearch to index their data. The whole backbone behind it is ElasticSearch indexing, and the technology behind it works really fast to give you whatever data you want. They index it, and there are some best practices to index the data in a specific way so it gives you benefits.
To kind of round it up: we run the tests, and your test will generate some kind of log in a particular structure. You've got to define a logging structure. Like, here's the division. Here's the product. Here's the application. Here's the browser. Here's the browser version that I'm trying to run my tests on. Here is the test case scenario name, and here is the result. It's kind of a simple structure. Define the structure somehow. Generate that log from your tests and spit it into a place. Say, for example, you spit it into D:\Test Run\Logs or something.
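Here is a minimal sketch of that log structure, assuming the fields Sahas lists (division, product, application, browser, test name, result): write one JSON document per result into a folder that Logstash or a similar shipper watches. The field names, paths, and values are placeholders, not taken from his setup.

```python
# Sketch: emit one JSON document per test result into a folder that a shipper
# (Logstash, NXLog, Filebeat, ...) watches. Field names mirror the structure
# described above; the paths and values here are placeholders.
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

LOG_DIR = Path("D:/TestRun/Logs")   # whatever folder your shipper monitors


def log_result(division: str, product: str, application: str,
               browser: str, browser_version: str,
               scenario: str, result: str, environment: str = "qa") -> Path:
    """Append a structured test result that Elasticsearch can index over time."""
    doc = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "division": division,
        "product": product,
        "application": application,
        "browser": browser,
        "browser_version": browser_version,
        "scenario": scenario,
        "result": result,            # e.g. "pass" / "fail"
        "environment": environment,
    }
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    path = LOG_DIR / f"result-{uuid.uuid4().hex}.json"
    path.write_text(json.dumps(doc) + "\n")
    return path


# Example usage after a test finishes:
# log_result("retail", "storefront", "checkout", "chrome", "latest",
#            "add_item_to_cart", "fail")
```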
Now, you have to have Logstash or something called NXLog monitor that particular folder for a specific file, like the JSON file you spit out. You're going to say, “Hey, Logstash, go to this [inaudible 00:23:22], this particular folder. If you see any change in the files with the extension .JSON, grab the difference, ship it to ElasticSearch.” Logstash will be the shipper which will grab your logs and send them to ElasticSearch. ElasticSearch will store your logs in a time series database. You can generate a log, have test case one running on Chrome latest, test case one running on Firefox latest, right now, and ElasticSearch will store it with a timestamp automatically.
Then, you run it again after an hour, again after an hour, again after an hour, and the beauty of ElasticSearch is you can go and query and ask, “Hey. Tell me what happened in the last three hours. Tell me what happened between 1 AM and 2 AM.” It's all built-in. You literally just write some queries and ElasticSearch will give you the results. Now, the beauty of this stack, again, if you use Kibana, is that it eliminates even that, meaning you don't have to construct the queries on your own. You would just use some of the dashboarding mechanisms that Kibana provides you. All you would do is end up creating filters, in the sense that you have tons of data and you filter it down to a particular scope.
Then, now that you have the filtered data, you would write a query saying, give me the logs of this particular product, for this particular division, for this particular application; give me the logs only of the GUI tests. Still, you can go and filter down further: give me a set of logs between 1 AM and 4 AM, but give me what happened on Chrome latest. I don't care about Internet Explorer and Firefox for now. I want to analyze only Chrome. Or [inaudible 00:25:11], I want to analyze only what happened in my production, not in [Perfare 00:25:14] QA. Kibana gives you that kind of capability.
When I saw that, and maybe you saw that on Twitter when we were talking, one of the architects here, [Jeff Sokoloff 00:25:32], who's a senior guy, a general technologist, very passionate about technology and all that, was the one who introduced this stack to me. Basically, he introduced it as a mechanism to store application logs for our logging framework. When I saw the whole thing, I kind of took the whole concept and applied it to our continuous delivery and quality. Now, along the pipeline, every build will emit logs, deployment will emit logs, testing will emit logs. We'll be able to correlate all these things using something like a build number, across the environments, and see what happened. Because of the rich filtering that it gives, be it the time filter and all the other stuff, it becomes very useful, for example, to figure out what the flakiness index of a particular application's test automation is. Which one was failing, constantly failing, in the last 30 days?
It can give us some of those answers very easily. Fairly easily.
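To make the "flakiness over the last 30 days" question concrete, here is a hedged sketch using the official Elasticsearch Python client: filter to one application, browser, and time window, then aggregate results per scenario. The index and field names match the logging sketch above and are assumptions; the call style follows the 8.x client, so adjust the query DSL to your Elasticsearch version.

```python
# Sketch of a flakiness query: per-scenario pass/fail counts for one app and
# browser over the last 30 days. Index and field names are the ones assumed in
# the logging sketch above, and the term aggregations assume those fields are
# mapped as keywords.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

query = {
    "bool": {
        "filter": [
            {"term": {"application": "checkout"}},
            {"term": {"browser": "chrome"}},
            {"range": {"@timestamp": {"gte": "now-30d/d"}}},
        ]
    }
}

aggs = {
    "by_scenario": {
        "terms": {"field": "scenario", "size": 100},
        "aggs": {"by_result": {"terms": {"field": "result"}}},
    }
}

response = es.search(index="test-results-*", query=query, aggregations=aggs, size=0)

for bucket in response["aggregations"]["by_scenario"]["buckets"]:
    counts = {b["key"]: b["doc_count"] for b in bucket["by_result"]["buckets"]}
    total = sum(counts.values()) or 1
    # A simple "flakiness index": the share of failed runs for this scenario.
    print(f'{bucket["key"]}: {counts.get("fail", 0) / total:.0%} failing')
```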
Joe: Awesome. I'll have a link to this in the show notes. People have to see these dashboards. You have a bunch of posts on your CD Insight blog where you actually show step-by-step how you set one of these up and what it looks like on the dashboard. It's awesome. I was trying to write all of these complex queries to get all of this information out of the database, and like you said, using this technology makes it so much easier. I really loved the concept you brought up about the logs, how you can say, “This test was failing at this time. What was happening in the logs at that time?” You can correlate that information in real time and see it fairly easily, without much effort, which is awesome once it's set up. I think it's really cool. Thank you for sharing that.
Sahas: Absolutely. Thank you. I'm glad it's useful.
Joe: Are there any resources or books that you would recommend to someone to help them learn more about continuous delivery?
Sahas: Yeah. That's a very good … it's a timely thing. When it comes to continuous delivery, one thing that all of us should start out with is to just go and follow Jez Humble and Martin Fowler and a bunch of those guys. Recently, I saw “Continuous Delivery LiveLessons,” a downloadable video training by Jez Humble, published by Pearson on InformIT. For anyone who wants to start off on the path and get a little more insight into continuous delivery, that's kinda the first one. Then, there are a bunch of books. Basically, look around Jez Humble and see the books that he wrote and the video training that he has done. That should get you started bigtime. I would also go and follow some of the engineering blogs that Netflix and Etsy and all of those guys are writing. They're all passionately running behind continuous delivery and shipping 32 times a day.
Joe: In your experience, is there one thing that you see over and over again that most people doing continuous delivery are doing wrong?
Sahas: I would say continuous delivery has become a lot of a buzzword. Of course, I probably joined the bandwagon later; I'm not one of the early adopters from when Jez and those guys first talked about it. In my opinion, there's no right or wrong way of doing it. The whole idea is to improve our maturity, improve our ability to deliver products. If you're shipping a poor-quality product, or if you're shipping a product once in six months, there is definitely an opportunity to make it five months and 27 days. If you can go from six months to five months and 27 days, that's an improvement. Then, you will learn along the way from how you approached it. Possibly someone would have used the words continuous delivery. Still, we're shipping every five months, but it's not bad. It's better compared to how we were shipping six months before.
The important aspect is to establish what we do here. We kind of established something called a continuous delivery maturity model that includes your practices, all the way from the product owner: how do they write the story? How do you write acceptance criteria? How do you verify acceptance criteria at the end of the story before you close it? All the way from there: some of your coding practices, some of your development practices, code quality, your quality practices, including test automation, and we try to bring in … When I say quality, it's not only functional. It is functional, it is load and performance, it is security, it is operations and everything.
We establish a broad maturity matrix, for lack of a better term, and we try to measure our teams on it and constantly keep pushing forward. Some teams, after reading the whole matrix, might say, “I'm in red on all the categories,” which is perfectly okay as long as you move a couple of yards forward every month, and one day you will become yellow.
Joe: Awesome. Great advice. It's not a big bang. It's evolution over time. You're continually trying to get better based on feedback and improvement. I think that's a good point so people don't get discouraged. Like, “This doesn't work.” Well, it's a slow road, but as long as you start chunking it up and getting things done, little by little over time you'll have a huge improvement I think.
Sahas: Sure. Absolutely. I think you're spot on. That's what I was trying to say, and that's how we approach it, too. I don't know if anyone has any kind of contact with some of those guys who did this well, maybe in places like Netflix, but if you go back and ask them, they would say, “Yeah. We've been doing it for the last five years.” Nobody did it overnight. Unless you're a startup and you have 1,000 lines of code, you're probably not going to do it overnight.
Joe: Right. Okay, Sahas. Before we go, is there one piece of actionable advice you can give someone to improve their continuous integration and continuous delivery efforts? And let us know the best way to find or contact you.
Sahas: It's not so much advice as a suggestion. The one suggestion I would leave is, when it comes to quality, we've been in that space, it's … we've been doing quality in a particular way for so long. Constantly, wherever I go, I hear, “We're automated. We're running it every night.” What I would say is, it's great what we've been doing so far, but the future is not going to encourage that way of doing it. We have to bring quality into the team. We have to take the team along with us. There are some miscalculations: when somebody talks about Agile, people start thinking there is no room for quality engineers anymore, there are no QAs, it's an everybody-should-do-everything kind of bandwagon. What I mean is, there's definitely a lot of room for quality experts, quality evangelists. Somebody has to look at it through a quality lens.
The catch here is, how would you make it more appealing to the rest of the team? How would you engage the rest of the team? How would you enforce some of these standards through some sort of automation, using something like Sonar to stop your build if the [inaudible 00:32:53] test failed? Elevating awareness and visibility using some of the dashboards that we talked about. Those are all the aspects that I have been personally focusing on on the quality side of things.
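As one hedged example of that "stop the build" automation, here is a small Python sketch that asks a SonarQube server for a project's quality gate status and exits non-zero when it is not green. The endpoint and response shape follow SonarQube's Web API as I understand it, so verify them against your server version; the URL, token, and project key are placeholders.

```python
# Sketch: fail the pipeline when the SonarQube quality gate is not green.
# The /api/qualitygates/project_status endpoint and its response shape are
# based on SonarQube's Web API as I understand it; URL, token, and project
# key below are placeholders to replace with your own.
import os
import sys
import requests

SONAR_URL = os.environ.get("SONAR_URL", "https://sonar.example.com")
PROJECT_KEY = os.environ.get("SONAR_PROJECT_KEY", "my-service")
TOKEN = os.environ.get("SONAR_TOKEN", "")


def quality_gate_status() -> str:
    resp = requests.get(
        f"{SONAR_URL}/api/qualitygates/project_status",
        params={"projectKey": PROJECT_KEY},
        auth=(TOKEN, ""),          # token as username, empty password
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["projectStatus"]["status"]   # e.g. "OK" or "ERROR"


if __name__ == "__main__":
    status = quality_gate_status()
    print(f"SonarQube quality gate: {status}")
    # Stop the build (non-zero exit) if the gate is anything other than OK.
    sys.exit(0 if status == "OK" else 1)
```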
It's all interrelated. Quality is not going to stand on the sidelines while development is going really fast. We have to learn some techniques to jump on that train and be with them, stay engaged, keep the developers along with us, and make them listen to your song while you listen to their song. It's all about that. Agile quality is very interesting. I'm very passionate about it. For those quality experts, I'm sure there are plenty of other ways, but I would say these are some of the areas where I'm really looking forward to seeing some change.
Joe: Cool. The best way somebody could find or contact you?
Sahas: I'm on LinkedIn. That's kind of the best profile. If you go to LinkedIn, you can find me: Sahaswaranamam Subramanian. I'm also on Twitter @Sahaswaranamam. It's my full first name. Those are the best ways.