About This Episode:
Automation is not just functional automation! In this episode, Parasar Saha, an enterprise automation architect, will share some ways to reduce repetitive and error-prone tasks at the enterprise level. Discover how to free up time and resources to help transform your team. Listen up to learn tips, tricks, and best practices for successful Digital Transformation.
The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!
About Parasar Saha
Enterprise Automation Architect and QE Transformation leader with experience driving large-scale transformation programs in multiple organizations across various business domains – Airlines, Banking, Pensions, and High Tech. 17 years of experience working in all aspects of Quality Engineering and Quality Management across the IT industry. Technology enthusiast and a QE Strategist, passionate about raising quality standards in the Digital Transformation journey.
Connect with Parasar Saha
Full Transcript Parasar Saha
Joe [00:01:38] Hey Parasar! Welcome to the Guild.
Parasar [00:01:42] Thank you Joe for having me on the podcast. Happy Holidays Joe.
Joe [00:01:46] Happy Holidays. We're about to enter the new year, so Parasar I'd like to actually jump into that. It's going to be one of my first questions. But before we do, is there anything I missed in your bio that you want the Guild to know more about?
Parasar [00:01:55] I think you did a great job of telling the bio and introducing me. I want to add that I have vast experience across different industries, wearing different hats in the quality world, which gives me the holistic picture from 10,000 feet down to one foot of how strategy in terms of quality transformation and quality engineering should happen in the industry. So that's definitely helped me drive that strategy from a 360-degree perspective.
Joe [00:02:27] Awesome. So this is the strategy I want to dive into. Before we do, also the first question that comes to mind is this is being recorded right before the new year 2021. So you have been in the automation field or in quality for a long time. What was different or was there anything different about 2020 compared to 2019 that you think is going to transform how we do quality even more so than what you thought you had maybe, to begin with? Because of Covid the way people are working now, how does that change how you see quality run through a whole organization?
Parasar [00:02:58] I think that's a great question to start with. 2020, what a year it has been. At the start of 2020, who could have imagined what it was going to be? One of the things which happened in Quality, and really across the IT industry, is that we shifted to working more remotely and our operations moved into a remote mode. Now, if you look at it, in 2019 before Covid happened, you could have survived running your testing operation using your on-premise infrastructure. But as soon as Covid hit, we were moved to a remote situation. Now the cloud has become a really big part of our operations these days. Imagine if you had to have cross-browser testing done – how many machines would testers have needed at this point? And how could you have done that in a remote situation? Adding to that complexity is the mobile world, which has an even bigger variation of devices and operating systems that you have to run against. So I think the cloud is becoming much more important these days in terms of testing operations. 2020 actually proved that testing operations can happen seamlessly in a remote situation with the backing of a cloud infrastructure.
Joe [00:04:21] Absolutely. And I wonder how many companies were ready for that. You know, we talk about digital transformation all the time. And now for some companies, moving over to software and mobile apps is the only way they were able to communicate with their customers. So do you see a rush towards the cloud? And do you think now going forward it's going to be more the norm? And when you say cloud, you mean like moving to cloud vendors to run our test, or is it to collaborate? What do you mean by cloud also?
Parasar [00:04:44] When you ask how we did it – excellent! Yes, it's absolutely been excellent, and people realized that there is a need, that you have to move into the cloud. With that on-premise setup, it's just becoming too difficult, or you cannot even survive in this situation. Some companies had already transitioned, but Covid actually made it necessary. So it is no longer a wish-list item; it became an absolute need to have. And Covid was also able to prove that this works. It's not just that it's there – it works. You know, I remember the days, maybe 10 years back, when for some reason we were asked to work remotely and we didn't have the infrastructure, and the network was not built to support that. On day one, when we started operating in a remote mode, we crashed. The first day the VPN is not working, that's not working – so many challenges coming up. But Covid actually proved that, yes, we are ready to adopt it and we are ready to use it in full flow. So that's where I see that yes, it proved that the cloud works. Now, the other thing you mentioned: "What do I mean by cloud?" Well, I want to expand. One thing is infrastructure, where you utilize the platforms of the cloud, whether it's cross-device or cross-browser. It can be a SaaS provider that takes care of the entire infrastructure and maintains it. But there can be other options as well, like Best (??) – I like to call Best (??) the Airbnb of test infrastructure. What happens is, wearing this hat for enterprise architecture, I always have to be conscious about the cost. That's where, when you go to a SaaS provider, it's good to have and you always need them as support in your tool stack, but you also need to build something which is a faster model that can support your non-critical things in a cost-effective way. So this allows you to scale at a manageable cost.
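To make the cross-browser idea concrete, here is a minimal sketch of how a test suite might fan out across a cloud grid instead of local machines. The grid URL is a placeholder, the capability names follow the generic W3C WebDriver style, and any real provider (SaaS or self-hosted) will have its own endpoint and options – treat this as an illustration, not a specific vendor's API.

```python
from itertools import product

# Placeholder endpoint; substitute your cloud provider's grid URL.
GRID_URL = "https://grid.example.com/wd/hub"

BROWSERS = ["chrome", "firefox"]
PLATFORMS = ["Windows 10", "macOS 12"]

def browser_matrix(browsers=BROWSERS, platforms=PLATFORMS):
    """Expand browsers x platforms into W3C-style capability dicts,
    the combinations a cloud grid can run in parallel for you."""
    return [{"browserName": b, "platformName": p}
            for b, p in product(browsers, platforms)]

def run_everywhere(test_fn):
    """Run one test function against every capability set on the grid.

    Needs the `selenium` package and a reachable grid, so the import is
    local; the matrix logic above works without either.
    """
    from selenium import webdriver  # third-party; pip install selenium
    for caps in browser_matrix():
        options = webdriver.ChromeOptions()
        for key, value in caps.items():
            options.set_capability(key, value)
        driver = webdriver.Remote(command_executor=GRID_URL, options=options)
        try:
            test_fn(driver)
        finally:
            driver.quit()
```

The point of the sketch is the shape of the work: locally you would need one machine per browser/OS combination, while against a grid the same loop just opens remote sessions on demand and pays only for what it uses.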
Joe [00:06:57] Absolutely. And I don't know when you wrote this presentation you gave, but you actually have a section called Cloudify QA. So I don't know how long ago you wrote it, but it's almost like a blueprint for what teams need to do in order to move to the cloud. And like you said, it's not just running an infrastructure. You have other things like a global test data pool, cloud-based storage or test data lakes, distributed micro tests. Can you talk a little bit more about what your concept is about – Cloudify QA? And how that applies to this current situation we find ourselves in?
Parasar [00:07:28] Yes, absolutely. I like to look at it from this perspective. We are moving to automation, which is actually moving your manual workforce into a digital workforce, and you're creating a ton of digital assets. Your digital assets need a digital home – a home that can scale up on demand and scale down on demand, and where you can pay as you go. The cloud is that home. Now, different things will need the cloud. One thing we talked about is the devices, the browsers, and so on – that's definitely one area. Another way you can make your testing more accessible for different teams is by putting it on a web-based platform. Gone are the days when you would use a desktop tool for those things. The more accessible you can make them through web-based platforms, the more people can use them. Then not only your testing team but your dev teams can see that stuff, your business team can see that stuff – the whole scrum team can access those tools and use them effectively. Now, where would you host those things? You want to host them in the cloud so you can get it on demand, it's easy to deploy things, and you don't need to break the bank to set them up. The other thing is that in today's world, data is becoming so important. AI lives on data. This data part will become more and more critical, whether you're talking about test data or the operational data that you're gathering over time. The more you can push it into a database and a data lake where you can handle the data, store the data, and analyze the data, the better. You can maintain them on-premise, but with the cloud coming in, there are so many options where you can store your data in a more resilient way.
So you really don't need to worry from day one about how much data storage you need – you can scale on demand, and your data is very safe in the cloud. So testing operations can also leverage the capabilities of the cloud, whether in terms of infrastructure, data storage, or high availability.
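As an illustration of pushing test output toward a data lake, the sketch below serializes run results as newline-delimited JSON, a format most cloud storage and analytics stacks ingest directly. The record fields, bucket name, and upload step are hypothetical; the actual upload call depends on your provider's SDK.

```python
import json
from datetime import datetime, timezone

def make_record(suite, test, status, duration_ms):
    """One flat, queryable record per test execution."""
    return {
        "suite": suite,
        "test": test,
        "status": status,
        "duration_ms": duration_ms,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def to_ndjson(results):
    """Serialize test-run records as newline-delimited JSON (NDJSON),
    one self-describing row per line, ready for a data lake."""
    return "".join(json.dumps(r, sort_keys=True) + "\n" for r in results)

# Uploading is provider-specific; with AWS, for example, it would look
# roughly like this (bucket and key are hypothetical):
#   boto3.client("s3").put_object(Bucket="qa-data-lake",
#                                 Key="runs/2021-01-01.ndjson",
#                                 Body=to_ndjson(results))
```

Because each line is an independent JSON object, the store scales on demand and downstream analytics or AI models can read the history without any bespoke parsing.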
Joe [00:09:52] So like you mentioned, we're going to have more and more data. So I guess as we look towards the future, since we are going into the new year – I know AI has been applied a lot to functional automation, but like you said, I think it applies even better to machine learning on data, especially with performance testing and things like that, because you have so many logs that you can't go through as a manual tester. So do you see this as a growing area? Do you see a skill like R, a programming language that works with data, being something testers need to know more about in an organization at an enterprise level going forward?
Parasar [00:10:23] Absolutely. Absolutely. Those are the areas that will become prime areas of operations in testing as well. Now, R will definitely be one of the programming languages. You know, I listen to your podcast, and in one of your podcasts you talked about different tools that were going to be key tools for 2021, and Python was one of them. And I absolutely believe Python will be a great programming language to learn in 2021 because Python has a great ability to handle data. So those are definitely going to be growing areas in 2021. At the same time, we also have to see how you can manage your data. When you are talking about big data, you cannot use the conventional way of comparing and validating the data anymore, because you are talking about terabytes of data and petabytes of data. Are you going to bring two sources of data together and do the comparison one by one? You can't do that. You have to use the same technologies that the big data world is using, whether it's Hadoop, whether it's Spark, whether it's Scala in-memory testing. Those things will become more critical over time, and you have to leverage the capabilities which the data engineering teams are using to do that kind of validation. So I think that will organically grow as data becomes more and more important. And their infrastructure will become very important, and the cloud will play a big role in that field as well.
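The "don't compare row by row" point can be sketched even without a cluster: the snippet below reduces each row to a digest and compares multisets of digests, the same shape of work a Spark or Hadoop job would distribute across partitions for terabyte-scale sources. This is plain Python for illustration, not an actual Spark job.

```python
import hashlib
from collections import Counter

def row_digest(row):
    """Collapse a row to a fixed-size fingerprint so comparison cost
    no longer depends on row width."""
    return hashlib.sha256("|".join(map(str, row)).encode()).hexdigest()

def diff_counts(source_rows, target_rows):
    """Return (rows missing from target, rows extra in target) by
    comparing hashed-row multisets instead of joining row by row."""
    src = Counter(row_digest(r) for r in source_rows)
    tgt = Counter(row_digest(r) for r in target_rows)
    missing = sum((src - tgt).values())
    extra = sum((tgt - src).values())
    return missing, extra
```

On a real big-data stack, the same idea becomes a distributed aggregation – hashing rows within each partition and diffing the aggregated counts – so neither dataset ever has to be collected onto one machine.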
Joe [00:11:58] Great. So as I mentioned, we first met at Applitools Future of Testing, and I took some notes down. I don't know if I wrote this down (unintelligible), but you said something really, really interesting that struck me and I've been thinking about ever since. I'm just paraphrasing. You said something like, "It's not about automation scripts anymore." Now, back in the day, I was automating scripts. You said, "It's now about automating your pipeline." And I love that, because I think that is definitely the direction; I just couldn't put it into words the way you put it. So can you talk a little bit about that? Do I have it right? And how does that fit into the 360 automation platform that I think you use to describe this type of concept?
Parasar [00:12:29] Absolutely. You know, Joe, after that conference I got a lot of interest from a lot of people about this 360. And I want to start from the bigger picture of IT and come down to that 360 story. IT has been changing. Things are becoming much more complex. Now, we realized that it's not possible to handle this complexity in a monolithic team, so they started bringing down the team size and working things out in small packages. So now you see those small teams, the scrum teams, doing the stuff. Now, in the scrum team you also have a QA person. Let's assume the person is John. John has to do all the scrum activities and also figure out what he needs to do with the automation. It's become too much for that person. Now, how can we actually help John, making him more efficient so he can accelerate at the same pace as the digital transformation is accelerating? That is where I think Enterprise QA plays a big role, and where we need to see how Enterprise QA can help John with a framework that allows him to go fast in a standardized way. As part of that framework – a lot of the time we think the framework is only about technology. It is not. It touches three pillars of your IT world: people, process, and technology. And, you know, I like to put it in a very simple word – if you simplify things, it's easier for people to understand and remember. I think the key here, when you're defining the strategy of the framework, is coupling. What I mean by coupling is bringing things closely together. Now, when you touch the process pillar, you have to couple the quality engineering team closely with what's happening in the scrum team. A lot of people adopt BDD and use JBehave, Cucumber, and SpecFlow, and they think, "Okay, I have adopted BDD in the best form." No, BDD is not for that.
(unintelligible) had BDD to make sure there's close collaboration happening between the business, dev, and QA. That is the true value of BDD. So how do you couple these three together – the business, developer, and tester? BDD is the best way to do that. Now, when you go to the dev side, you think about how you can build your testing tools in the same ecosystem as development so they can use them. You know why Cypress is becoming so popular these days? Because it coexists in the developer ecosystem. The easier you make things for them, the more they will adopt them. So that's where you couple them together, so quality engineering can be closely coupled with development, and it will become much more effective – our guy John will be much more effective. Now I want to touch on the people part of it. When you're driving quality engineering in a large organization, the key to getting the process started is to have a leader in that space who is equally expert in quality and engineering. The word is quality engineering. Now, quality means a happy face on a customer. I like to think about it like when you go to a washroom and there are those happy faces and angry faces – the happy face is your quality. If your customer is happy, that's enough. And the customer is not only your end customer. Your organization, your industry, your stakeholders – everybody is your customer. So you make them happy, and how you achieve it is engineering. How do you bring engineering in to achieve it? So that's how you touch the pillar of the people. Now, a big part of that engineering is the 360 automation, and I like to describe it this way. Our 360 automation platform has three axes. The X-axis is different types of testing – web, mobile, accessibility, non-functional, every type of testing you can think about. For an enterprise it's very important to have coverage across the board. So that's your X-axis, and you need to build that coverage with different tools.
And then comes your Y-axis. The Y-axis is your DevOps alignment. You want to make sure that you have all these tools, but also that the tools align to a standard process in which they are very interconnected. Now, when you come to the testing part of DevOps, that interconnection is continuous testing. You want to make sure your test planning is connected, your code repository is connected, the tools where you're writing things are connected, your execution platforms – whether SaaS or Best (??) or whatever it is – are well connected into the platform, and your reporting is connected so you can extract all the data and report it back in a homogeneous way for all the types of testing you're doing. That's how you build full-scale continuous testing, and it needs to connect to dev on one side and release engineering on the other side, so you can take the code the developer is writing, test it out, and push it to production. That's your Y-axis. Now, the last part of it is the Z-axis, your scalability. You have all these types of testing, you have this DevOps, but it grows all the time, so you need to handle the scalability factor. There are two things in the scalability factor: one is your infrastructure, the other is your data decisions. Now, the cloud is the way to handle the infrastructure part. But the cloud is one part; AI is the other part of handling scalability. Why is that? Our guy John used to test with only one browser, and he had to decide whether a test worked in that one browser. Now he has to do it in 15 browsers, or maybe on 15 devices, and all this stuff, so he has to personally make all those decisions. But imagine if you have AI to assist there and make the decision for you – how much easier is that scalability going to be?
Now the other piece of it is how many defects you have to handle these days. Yes, you can bring in a large team, but at some point you will feel that the cost is too much. That is where I think AI can be a big help. So with all three axes together, you build a 360 automation platform – the tools, your DevOps, and your infrastructure and AI together – that serves all the needs of your enterprise.
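To make the Y-axis idea concrete, here is a minimal sketch of a continuous-testing pipeline as code: a few named stages run in order, each reporting into one homogeneous result structure, so execution and reporting stay connected. The stage names and functions are illustrative, not any particular CI tool's API.

```python
import time

def run_pipeline(stages):
    """Run (name, callable) stages in order and collect one homogeneous
    report; stop at the first failure, the way a CI pipeline gates a
    release."""
    report = []
    for name, fn in stages:
        start = time.perf_counter()
        try:
            fn()
            status = "passed"
        except Exception as exc:
            status = f"failed: {exc}"
        report.append({
            "stage": name,
            "status": status,
            "seconds": round(time.perf_counter() - start, 3),
        })
        if status != "passed":
            break  # gate: don't release past a failing stage
    return report

# Illustrative stages; in practice these would call your real unit,
# API, and cross-browser suites.
def unit_tests():
    assert 1 + 1 == 2

def api_tests():
    assert "status" in {"status": "ok"}

pipeline = [("unit", unit_tests), ("api", api_tests)]
```

Every stage emits the same record shape, which is what lets you report all types of testing back "in a homogeneous way" and feed the same data downstream to dev on one side and release engineering on the other.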
Joe [00:19:21] Absolutely. And I guess one of the things that came up at the conference as well was visual testing. They introduced something called a Visual Grid, which I think allows you to use AI to run against all these different types of devices and browsers. Is that something your company uses? Is that one of the tools that you can see helping people with this type of issue?
Parasar [00:19:40] Yeah, the Visual Grid is something that we use from Applitools. I think Applitools brought up a great option: not just going to the cross-browser tools in the cloud, but you can also do it through Applitools. So that's definitely one great option for trying out the visual part in different browsers and seeing how the rendering is happening.
Joe [00:20:04] Awesome. Another thing that popped into my head is a term I've been hearing a lot about and that I don't think you mentioned at all: robotic process automation, RPA. We talk a lot about testers and developers getting involved in automation and things like that. But how about the business user? Do you see the business user becoming involved with automation and data as well, using tools like RPA to automate not necessarily testing types, but maybe processes that they do, as part of the 360 platform as well?
Parasar [00:20:33] The whole concept is simplifying the process. The whole concept of IT is to simplify the process so that any common person in the world can run your process. Imagine what the banking industry was doing 15, 20 years back, right? The complexity of opening a bank account was so difficult. But what IT did is handle that complexity behind the scenes, so a user goes and fills out something and gets the account created. If you can simplify the testing so the business user can go to some website and say, "Okay, I want to run the test for this feature," the business users themselves can run the test. One of the things integrated into the platform is a chatbot: the user goes in and says, "Okay, I need to run this test," they tell the chatbot which test to run, and they get feedback saying, "Okay, these are the results." At that point you break the silo of only the testing team running the tests. Quality is then no longer a functional group; it's a culture that everybody in your organization can run with.
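A chat-driven test trigger can be surprisingly small. The sketch below is a hypothetical command handler, not any real chat platform's API: it parses a "run test &lt;name&gt;" message, executes a registered test, and replies with the result – the piece you would wire up to Slack, Teams, or a web chatbot.

```python
import re

TESTS = {}

def register(name):
    """Decorator that exposes a test function to the chatbot by name."""
    def deco(fn):
        TESTS[name] = fn
        return fn
    return deco

def handle_message(message):
    """Turn a chat message into a test run and a human-readable reply."""
    match = re.match(r"run test (\S+)", message.strip(), re.IGNORECASE)
    if not match:
        return "Try: run test <name>. Known tests: " + ", ".join(sorted(TESTS))
    name = match.group(1)
    if name not in TESTS:
        return f"Unknown test '{name}'"
    try:
        TESTS[name]()
        return f"{name}: PASSED"
    except AssertionError as exc:
        return f"{name}: FAILED ({exc})"

# Illustrative registered test; a real one would drive the application.
@register("login")
def check_login():
    assert "token" in {"token": "abc"}, "no token issued"
```

Because the handler only needs a string in and a string out, the same function can sit behind any chat integration, which is exactly how a business user gets to run tests without touching the test tooling itself.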
Joe [00:21:50] That's a clear use case I didn't think of – having a chatbot run a test for you with a command. That's a cool idea. So I think you've nailed all these pieces, but I just want to make sure people understand. You have something called a transformation maturity model that you touched on, and I want to make sure people know what this model is because I think it's very helpful. The first stage, I think, was Conventional QA – a lot of people find themselves doing minimal testing. Then you have Automated QA, and a lot of people think Automated QA is the be-all and end-all, but you have two other steps in the maturity model: the third one being Cloudify QA, which we talked about, and the last one being Predictive QA. Is there anything along that maturity model that you'd like to add or expand on before we go to the next topic?
Parasar [00:22:29] Yeah, I think a lot of people think that Automated QA is the end goal. It is not, because as soon as you automate, you hit the problem of infrastructure, and the cost becomes so huge that you need to move to Cloudify QA. Then, you cannot adopt AI to a very large extent until you have a good amount of data. I work with AI teams as well – the teams who are implementing AI in organizations – and I help them design their testing strategy. One of the key things I learned working with AI teams is that in the AI world, data is the big boss. You know, it's garbage in, garbage out. If you put in the wrong data, you get it wrong. That is why you cannot proceed to the AI level unless you have digitized your operation, unless you have enough digital data that you can feed to your AI model to give you an accurate result. And that is where you will develop the confidence. A lot of these tools have AI, but how many people adopt it will be determined by how much data you have and the efficacy of the predictions.
Joe [00:23:49] Absolutely. So for this maturity model, does everyone have to go through each stage and stay at each stage for a certain amount of time, or could someone go from one to four and skip two and three? I guess you need two and three in place in order to achieve four. So is there a way you roll this out to make sure that you're not rushing it, that you're learning from feedback in your particular lifecycle, and that you're optimizing it for your company and not just copying what other people are doing?
Parasar [00:24:15] So I think the model works like this. First is Conventional QA, and then you go to Automated QA, where you cover all the different types of testing that you want to do through automation. But you will never hit the endpoint of Automated QA, because every now and then the industry will come up and challenge you with a different channel of automation – "Okay, now there's IoT," and there will be something else tomorrow. Whenever something comes up, you have to address it, so that phase will keep on continuing. But once you have a good Automated QA process, you can start building Cloudify QA, because it depends on the demand: when the cost is hitting you, when you need to scale fast, when you have too many digital assets and you need to find a solution, that is where you start. So there's an overlap, and you start moving into the cloudify phase. Once you cloudify, you start gathering the data; you form the backbone of the AI. You start capturing your data, and then you move on to Predictive QA. But you never stop your Automated QA, even when you are in Predictive QA, because you may get a challenge that a channel is new and you have not covered it. So it's an ongoing process. All of these things are ongoing, and they overlap in phases.
Joe [00:25:43] Absolutely. And what I love about your presentation is that you wrapped it up with value drivers. As technologists, we like to geek out on technology – and I'm the biggest culprit of this; I love automation and tools. But at some point we have to ask ourselves, why are we doing this? A lot of people just get sucked into automating for the sake of automation. So what are these value drivers? I think if people understand these value drivers, maybe it will lead to less automation, because we'd start focusing on the things that actually matter and need to be automated. Not sure if that makes sense, but can you expand on the value drivers that you talked about in your presentation?
Parasar [00:26:14] You know, with the value drivers, I think we can never take our eyes off why we are doing it. As I mentioned, quality engineering. One part is your customer, and you always have to keep your empathy with the customer. What does your customer need? Your customer in an organization is your business team. What, at the end of the day, are you producing for the business? You have a digital transformation going on, and everybody wants to revolutionize and accelerate it. With better quality, your digital product will have higher confidence with your customers. That higher confidence will fuel customer growth, which will bring higher revenue for your organization. The next thing is that your operational cost will go down, because if you are having fewer issues, reducing the number of fixes and bugs and so on, you are producing things at a lower cost – which means you are producing things at a higher profit margin. That's something that is going to make your finance guys happy. And overall, with this acceleration from the platform and the digital transformation, pushing things faster, you're going to establish your company as an industry leader, because you are going to give customers a faster turnaround on the features they are looking for, with higher confidence and higher quality. And that is what will establish your company as a market leader. That's the whole value of it: quality engineering can actually deliver great business value to organizations.
Joe [00:27:53] Absolutely. I also think there's another benefit – I guess it's not self-centered, but: I know a lot of groups complain that they don't have a budget for certain tools or techniques. But if they are valued on making their customers happy and the business says, "Oh, look, we're making more money because our customers are happy," you'll then have more money to spend on the tools and the technology that we all love to geek out on. That's a good point.
Parasar [00:28:15] Yeah. You know, I've learned to look at it like this: over time, every quality engineering organization has to transform from a state where the cost of quality is more than what's invested in it. That is the starting point – the cost of quality is higher. Then you get to a neutral point, where you're producing as much value as was invested. Then you go into a revenue-earning point, where you are allowing the team to accelerate and deliver more, and you're seeing a real profit margin for your organization. It's the same way you develop a cost model for a startup. When a startup starts, it's always at a loss. Then it hits the break-even point, and then there's a spike coming in.
Joe [00:29:04] Right. And I just thought of another issue, though: how do you quantify quality? Once again, I worked for a large enterprise. They only quantified, "Oh, we released a feature. This feature is going to be used by X customers. We'll make X amount of money." But we always had a hard time quantifying that, you know, we found twenty bugs before they reached production. How do we get credit for that? How do we measure the quality cost or savings that we're actually giving the organization, in order to give them confidence to invest more in quality? I don't know if that makes sense.
Parasar [00:29:31] There are several metrics in the industry to measure quality. I think the best way is not to look at every single metric but at a holistic metric. And the best way to do that is to ask your customer what their confidence level is in the features that you're rolling out. I always follow this principle: social media companies, when they're evaluated, the valuation is based on only one thing, and that is how many users they have. When WhatsApp was bought by Facebook, there was only one metric, and that was a holistic metric. Yes, you can have ten different metrics, but they all need to lead to one metric, and that is the confidence of your customer.
Joe [00:30:16] I love that. Good answer. Okay, Parasar, before we go, is there one piece of actionable advice you can give someone to help them with their automation and testing transformation efforts? And what's the best way to find or contact you?
Parasar [00:30:27] You know, one thing I would say is this: if you want to be a leading organization in quality engineering, and if you want to be a leader in quality engineering, always follow the larger industry. What IT is doing now is what is going to come into the testing scope tomorrow. If you follow that trend, you will always be ahead of the curve and you will not end up with technical debt. If you understand what is going on in the industry in terms of data and infrastructure, that's exactly what is going to come into the testing scope tomorrow, and if you have already thought about it and built coverage for it, you are always going to be ahead of the game. The best way to connect with me right now is through LinkedIn. You can definitely find me there – type my name and I'll be there, and I always follow up on any questions sent to me. I regularly post on LinkedIn as well, and I would appreciate any feedback on the posts.
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.