About This Episode:
Are you interested in learning how to optimize your testing processes? In this episode, Aviram Shotten shares how QA can drive innovation and help transform your software development. Discover QA optimization and how it applies to your software delivery lifecycle. Gain insight into your team's data using AI/ML so you know where your real challenges are. Listen up for a roadmap to help you find the quality gaps in your software and fix them.
Exclusive Sponsor
The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!
About Aviram Shotten
Aviram has been in the quality industry for over 20 years filling many roles. He is currently the Chief Solutions Engineering Officer at Qualitest.
Connect with Aviram Shotten
- Company: www.qtwiki.net
- LinkedIn: aviram-shotten-083a804
- Twitter: dshotten
Full Transcript Aviram Shotten
Joe [00:01:23] Hey Aviram! Welcome to the show.
Aviram [00:01:27] Hey! Good to be here.
Joe [00:01:28] Awesome. So before we get into it, is there anything I missed in your bio that you want the Guild to know more about?
Aviram [00:01:33] I think you did a splendid job. I do think that experience is important, and I've been here for a long, long time. I have seen things that have been rather successful, and I've seen things sink faster than a Led Zeppelin. So no, just an appreciation that, you know, I've been here for a long time.
Joe [00:01:52] Awesome. Well, that's what I like, you've been here for a long time. And, you know, there used to be a joke that development had all the innovation going on. But I've spoken to over 300 people on my podcast, and it seems like in the past few years there's been more and more innovation in testing and quality. So with all your years of experience, can you explain a little bit how you see quality today and how it might be different from where it was when we both started, you know, back in the day?
Aviram [00:02:19] Sure. Look, I think that in reality, testing used to be perceived as a necessary evil, while testing is actually an enabler to make sure that you delight your customers with amazing software. The way that testing has transformed in recent years starts with agreeing with what I just said: it's not a necessary evil, it's a key process you want to install. One that, on the one hand, will not slow you down, because you need to deliver very fast and deliver a lot, but that still maintains a level of inspection and testing to assure that whatever you go to market with is fit for purpose and your customers will love it. Once you understand that, all of a sudden the dynamics, the psychology of testing, shift from a necessary evil to let's do the best thing we can, given the constraints. And that has enabled us to be in the position today where testing is fast, testing is innovative, testing enables faster, safer launches of software, and basically testing enables innovation, because at the end of the day, innovation needs to be safe. You cannot risk your business. And that's exactly what quality assures you.
Joe [00:03:39] So I think actually, probably not many people think of testing as innovation. I've never heard that term before. Can you just talk a little bit more about that? Because I think a lot of people see it as the opposite, especially when you talk about creating software faster. The first thing that comes to mind at most large organizations is that we need to get rid of QA and streamline it, not that testing is actually innovating and helping us create better software for our customers.
Aviram [00:04:04] There are two types of innovation. First of all, there's innovation in testing. Those are the really exciting things that are happening right now, from your sponsor, Sauce Labs, bringing in the ability to test across multiple devices and multiple browsers very quickly and get feedback very fast, to AI in testing, which I'm sure we will discuss later in the conversation. This is all about innovation in testing. Then testing is also an enabler for innovation: it's testing's role in the process of innovation to represent the balance. We can't be innovative without being safe, or at least without being able to analyze and understand what level of risk we assume when going to market very fast. And that's exactly what I meant. Testing is a facilitator of innovation by addressing and presenting the risk we assume by going to market very quickly.
Joe [00:04:57] Absolutely. And I think I saw a webinar that you gave, and it talked about QA optimization, so I definitely want to dive into AI. I think AI is a piece (unintelligible). At a high level, what is QA optimization? How do you see it?
Aviram [00:05:10] So QA optimization is basically the opposite of bog-standard approaches to fixing testing gaps or quality engineering gaps. When we say QA optimization, we mean that we apply a structured process to evaluate perceptions of what the gaps are around the software delivery lifecycle and the testing lifecycle, and compare that with reality. So it might be that someone thinks that test automation is their biggest problem. But if we take the perceptions and compare them with the data that this specific project or department has, we will be able to prove or disprove those perceptions. I always come back to this great example: we were challenged that there wasn't enough test automation. When we applied the QA optimization process, we realized that there was actually a healthy amount of automation, but it could never reach a point of utilization because of a lack of environments, a lack of data, and a lack of ability to analyze the results of the automation. So basically, even if you had twice as much automation, you would still not benefit, not in terms of quality and not in terms of velocity. So QA optimization takes the perceptions, puts them against reality, and we use very advanced methods to establish reality. We use machine learning and advanced analytics to identify where the problems are. And then we show how to get to a place where we educate people about the challenges. We show what the challenges are in an almost scientific manner. It's no longer a suited-and-booted consultant; it's about the data, the implications of the data, and the roadmap to fix it. And for every customer and every project we address with QA optimization, the result is slightly different. We don't believe that there is one size that fits all; there are probably a few sizes that fit most. And you have to address not only the things that are easy to get right, but also the biggest gaps and the showstoppers first. Those are all identified and visualized by QA optimization.
Joe [00:07:27] I think this would actually help a lot of companies, because I worked for a large enterprise, and whenever we did planning for an upcoming release, it seemed like there were a lot of assumptions. Testers had some assumptions, but they couldn't really tell you why they felt that way, and the assumptions usually ended up being wrong once the software got out in the wild. So it seems like you're taking a different approach. Quality optimization is not just running more automated tests. It's actually gathering all this data that's probably available throughout your whole software development lifecycle and pulling out insights that are unique to your organization, insights you need to address in order to optimize your QA strategy. Am I understanding this correctly?
Aviram [00:08:08] You're absolutely right. And I was understating, or under-appreciating, how important it is to be very contextualized. Those assumptions we use are helpful, but sometimes we miss the mark in terms of what we actually have and how efficient we are. Let me give you an example of a very large UK enterprise we surveyed. We applied the optimization process, and basically there was a big mismatch between management's perception of how much testing needs to be conducted, the test team's understanding of how efficient they are, and what the most value-adding test activities are. And you could see it in the survey, right? There was a big gap in terms of what people thought about efficiency, etc. And when we analyzed the data, we analyzed test cases and defects, and we used NLP to do that. The syntax that was used to describe the test cases and the syntax that was used to describe the defects were almost a complete mismatch, which showed us that while they invested thousands of hours in planning and test design, they actually found all the defects elsewhere. They found them using exploratory techniques. They found them using automation. And if we all were a little bit more scientific in how we operate and strove to get better data to base our decisions on, I think we would find ourselves working a hell of a lot less. I hope that makes sense.
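Aviram doesn't say which NLP technique was used here, so the following is only a minimal sketch of the underlying idea: compare the wording of planned test cases against the wording of raised defects using TF-IDF vectors and cosine similarity. The sample texts and the scikit-learn approach are illustrative assumptions, not Qualitest's actual tooling.

```python
# Hypothetical sketch: do planned test cases and raised defects "talk about"
# the same things? Consistently low similarity hints that defects are found
# outside the planned suite (e.g., via exploratory testing), as in the story above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for exports from a test-management tool and a defect tracker.
test_cases = [
    "Verify login form rejects an invalid password",
    "Verify checkout total includes VAT for UK customers",
    "Verify password reset email is sent within five minutes",
]
defects = [
    "App crashes when rotating the device on the payment screen",
    "Search results page times out under concurrent load",
    "Checkout total ignores the discount code on mobile",
]

# Build one shared vocabulary so the two sets of vectors are comparable.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(test_cases + defects)
tc_vectors = matrix[: len(test_cases)]
defect_vectors = matrix[len(test_cases):]

# For each defect, how close is the nearest planned test case?
similarity = cosine_similarity(defect_vectors, tc_vectors)
for defect, scores in zip(defects, similarity):
    print(f"{scores.max():.2f}  {defect}")
```

In a real engagement this would run over thousands of records, and the signal is the distribution of scores over time rather than any single pair.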
Joe [00:09:39] Yeah, so how do you get better data? Because there's a lot of data being produced, and a lot of people either get overwhelmed or, like you said, look in the wrong place for it.
Aviram [00:09:49] I don't want to over-generalize, I don't want to say everyone is making the same mistakes, but I think that sometimes we give too much weight to our gut feeling, to what happened last time around, and to our subjective understanding of what happened last time around. We should be more careful with that. The basics are to get your testing business intelligence right. Make sure you have your testing dashboard installed. Make sure you have visibility into defects versus test cases and executions: where the defects are clustered, who finds the defects, and with which method. Once you have that, you're much, much smarter. I'm not even talking about applying NLP; that's for the advanced organizations that can afford it. But doing the basics does not require a lot of investment. It requires the discipline and the understanding that you want to be data-powered rather than gut-powered.
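To make those "basics" concrete, here is a minimal sketch assuming a flat defect export with illustrative column names (component, found_by, severity). It answers the questions Aviram lists: where defects cluster and which activity finds them. Any BI or dashboarding tool can do the same; pandas just keeps the example short.

```python
# Minimal testing-BI sketch: cluster defects and see which activity finds them.
# Column names are illustrative; map them to whatever your tracker exports.
import pandas as pd

defects = pd.DataFrame([
    {"component": "checkout", "found_by": "exploratory",     "severity": "high"},
    {"component": "checkout", "found_by": "automation",      "severity": "medium"},
    {"component": "login",    "found_by": "scripted manual", "severity": "low"},
    {"component": "checkout", "found_by": "exploratory",     "severity": "high"},
    {"component": "search",   "found_by": "automation",      "severity": "medium"},
])

# Where are the defects clustered?
print(defects["component"].value_counts())

# Who finds the defects, with which method, and at which severity?
print(pd.crosstab(defects["found_by"], defects["severity"]))
```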
Joe [00:10:40] So if you are analyzing the data, how do you find correlations between the different data points to know that you're using the data to create the correct strategy?
Aviram [00:10:50] We created a framework to look at all areas of testing, and we don't think that we will ever be able to drive things forward without consensus. Unless we can show people that there is a structured process to review all the interfaces of testing, and within testing that we touched on all the key areas and actually collaborated and connected with individuals, whatever process we install will not be successful. What I'm trying to say is that, yes, we developed a framework that looks at testing through the perspective of an external user, an external client of testing if you'd like, and the internal stakeholders, the managers and the engineers. On the one hand, we're using data to make it less subjective and more objective: this is what you have, and it's stated in your data. But we also do this by driving consensus and by communicating frequently throughout the strategy process about the findings and the changes we recommend. We make it somewhat democratic so people can, you know, vote on priorities. There isn't a process that is complete without the technology and the people, and vice versa. It's about getting all of these things together, so whatever we come up with will be executable and easy to understand.
Joe [00:12:25] So how do you implement this in an organization, though? It sounds like a lot of work. With your framework, are there steps that are easy to implement? Is it gradual, or is it a big bang type of implementation?
Aviram [00:12:38] So this is our biggest concern: that people will lose interest. They bring on a team of people like myself, my team, or my company, and they would like us to advise them on how they can be better at quality. And all of a sudden you come with this very fancy report, and people lose interest because it takes time to see the outcomes and it takes time to get a return on investment. So the first thing we apply is a very ROI-oriented framework. Our plan usually ends in six months' time, and more than 70 percent of it needs to be demonstrating value within 90 days. So basically we're working with a 30, 60, 90, and 180-day plan. And if it's not fixable in six months' time, then you might want to set it aside. If you need to drive a bigger, longer change in the organization, you don't know what will happen in six months' time. So we are limiting ourselves to something that will keep people engaged. If we want, for example, to drive more in-sprint automation, there has to be a process that shows value in thirty days and probably converges in three months, so people can see the value of in-sprint automation. How else can we make them want automation? If we sell this promise to someone and it will only bear fruit a year from now, I promise you too many things will change and this initiative will dissolve into thin air.
Joe [00:14:07] Absolutely. I guess a lot of what trips people up at that stage, though, is being able to measure it, like in 90 days. It's figuring out what the correct KPIs or metrics are. So any tips on what to track with KPIs and metrics when you're implementing this type of strategy?
Aviram [00:14:23] I'm a big believer in looking at the trend. A lot of things are relative: some people demonstrate a higher degree of satisfaction, while other people in a certain period will show a lower degree of satisfaction. So I would like both subjective and objective measurements to be tracked over time so you can check the trend. I would look at methodologies like the technology innovation assessment that we developed at Qualitest, but there are also other industry standards like TMMi and TPI. Wherever you start from, just make sure that when you assess again after a year, you have gotten better. The key areas to look at, of course, are defects and test case utilization. You need to come up with meaningful productivity metrics. How many test cases are executed is probably less important, because it can be anything between 100 and 10,000. What is the value of the test cases? How many test cases have identified issues? How many test cases were successfully executed in the last 30 days versus the previous 30 days? Those are things that you can analyze and turn into insights. I think there's also a big difference between measuring test automation and manual testing. Manual testing tends to be a little bit more stable, because at the end of the day, if something has changed, the manual tester will be able to get over it and the test will be successful. With automation, however, that's not the case. Automation tends to break if the software changes, and if we don't have a really synced process to capture changes, identify them, inform the test automation expert or the SDET within the sprint team, or anything in that shape or form, and make sure that the automation, when executed, is already up to date with the change, the automation will break. Measuring the success and the stability of automation over time is a key metric to prove the ROI. Test automation is a heavy-lifting investment, and people want to see the fruits of it via their peace of mind: if I had my automation running and nothing was found and it was successful, I can deploy safely. That is the best return on investment, so automation stability is something that I encourage people to look at.
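As one way to track the "automation stability over time" trend Aviram recommends, here is a small sketch that buckets automated runs into 30-day windows and reports the pass rate per window. The run data, field layout, and window size are assumptions; the point is comparing windows against each other rather than reading a single snapshot.

```python
# Sketch: automation stability (pass rate) per 30-day window, so the trend
# is visible rather than a single point-in-time number.
from collections import defaultdict
from datetime import date

# (run date, passed tests, total tests) per automated regression run -- toy data.
runs = [
    (date(2024, 1, 5), 180, 200),
    (date(2024, 1, 20), 175, 200),
    (date(2024, 2, 3), 190, 205),
    (date(2024, 2, 25), 198, 205),
]

epoch = date(2024, 1, 1)
windows = defaultdict(lambda: [0, 0])  # window index -> [passed, total]
for run_date, passed, total in runs:
    window = (run_date - epoch).days // 30
    windows[window][0] += passed
    windows[window][1] += total

for window in sorted(windows):
    passed, total = windows[window]
    print(f"30-day window {window}: {passed / total:.1%} passing")
```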
Joe [00:16:42] Absolutely. You also had a webinar on Scientifically Optimizing Your Process with AI; I'll have a link to it in the show notes. But one of your team members, I think, brought up the point that a lot of the companies they see are now looking at change with a quality lens. They may have KPIs and metrics, but they're just looking at a technology or a process, not necessarily the quality outputs. So can you talk a little bit about why a quality lens is so important when you're thinking about change with this type of framework?
Aviram [00:17:10] If I remember correctly, and if I understood the question correctly, I think that at the end of the day you need to have the kind of measurement that demonstrates the right things for your organization. Not all test business intelligence means the same thing to all people. Sometimes you are in the process of change and you would like to measure things that relate to a successful transformation. And sometimes you have just finished your backlog of automation, and now you want to look at how to integrate automation from day one, and at that moment in time your KPIs might need to be a little bit different. I also urge people to come up with the right metrics for the right stakeholders. Unfortunately, the things that are very meaningful to an SDET can be less meaningful to a decision-maker two levels above that person, literally in the same building. So contextualized measurement, coming up with the right KPIs for the right situation, the right project, and the right persona, I think will drive the most value and the most alignment. Alignment is so important when it comes to software delivery. Alignment is important in general, but especially in software delivery.
Joe [00:18:27] Absolutely. So you did mention dashboarding. Are there any other tools that can help teams come together to really make quality bubble up to everyone, not just the SDET but, like you said, a high-level CEO or CFO? To make sure everyone's on board and speaking the same language, and that what one person thinks is important is important to the whole organization.
Aviram [00:18:47] At the risk of coming across as too salesy for a company like Qualitest: for the ability to be transparent and to show exactly where testing is and where development is, we use our own tool, surprisingly called Qualiview. It comes ready-made with all the interfaces and it's all good to use, but the tool is less of the issue. The most important thing is to drive consistent use. When you launch a test KPI or test BI tool, make sure it integrates with the test cases and with the defects, which are sometimes stored or managed in two different systems, and with wherever you maintain your user stories and production defects. If you can get access to all of those in a single repository, you can drive very meaningful insights like escaped defects, stability metrics, etc. There are many great tools out there in the market. Ours is equally good.
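As a small illustration of the kind of insight that falls out once test-phase and production defects sit in one repository, here is a sketch of an escaped-defect rate per release. The labels ("system test", "production") are assumptions about how a tracker might record where a defect was found.

```python
# Sketch: escaped-defect rate per release, once test-phase and production
# defects live in the same repository. Labels below are illustrative.
import pandas as pd

defects = pd.DataFrame([
    {"release": "2024.1", "found_in": "system test"},
    {"release": "2024.1", "found_in": "production"},
    {"release": "2024.1", "found_in": "system test"},
    {"release": "2024.2", "found_in": "production"},
    {"release": "2024.2", "found_in": "system test"},
])

summary = (
    defects.assign(escaped=defects["found_in"].eq("production"))
    .groupby("release")["escaped"]
    .agg(escaped="sum", total="count")
)
summary["escape_rate"] = summary["escaped"] / summary["total"]
print(summary)
```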
Joe [00:19:54] Absolutely. I know you don't want to come off as a sales pitch, and I appreciate that. But for someone who is interested in learning more about the AI piece: a lot of times when you hear about AI, it's focused on helping people write better automated tests, not necessarily helping the organization create better quality software. So can you maybe talk about whether there is AI built into this product that can help them with these types of goals, goals that go beyond just writing better automated tests?
Aviram [00:20:19] So far, we've covered business intelligence. Qualitest has a very well-defined approach to artificial intelligence in testing. The reason is that we don't want to be everything to all people; there is no such thing. AI is very complex. It comprises so many subfields and areas of expertise. What we do with AI follows a use-case approach, and so far Qualitest has launched three use cases that leverage AI in one shape or another, and they are very focused. It's not everything, but we did a lot of studies and invested a lot of brainpower in coming up with the most meaningful tools or use cases. The three use cases we're addressing, or the three challenges we are addressing using AI, are the following. The first is failure prediction, because in 99 percent of cases our customers are sitting on a goldmine of data, and AI or machine learning has so much capability to take historical data like defects, test case executions, and software changes that are registered through logs, compile logs, Jenkins, etc., and to come up with a prioritized list of what to test first and what the probability is of each test case failing. This is so helpful if you want to drive velocity. It's not everything, it's not test automation, but this is something that we know and have proved, I think, 20 times in the last two years to be doable with very high accuracy, and it's working. It's not a fake promise. It's not everything there is to do about testing; it's just failure prediction: where will we break next? The second one is about efficiency, our ability to harness algorithms to make sure we don't have overlap between defects, test cases, and requirements. We call it dev consolidator, and it basically crunches your test cases on the one hand, your requirements on the other hand, and your defects. And it removes duplications. If you have the same defect twice, it flags that you have it twice or that they look the same. And then you can check why a test case has all of a sudden reappeared: we closed this test case three months ago, and all of a sudden we have a new test case that looks and works just like the previous one. What happened? A person working manually will struggle to do that; NLP doesn't. The same goes for test case duplication. We usually find anywhere between 22 and 45 percent duplication in test design. That's an incredible number, and the maintenance effort associated with so many test cases is incredible. The cost and the frustration are incredible, and we are able to capitalize on that. I don't want to take too much time here, but definitely, machine learning and AI hold so much promise, and we are carefully launching more and more offerings to make the lives of our teams, and also our customers and anyone else, much easier by harnessing those powers for the benefit of quality engineering professionals.
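The transcript doesn't describe how Qualitest's failure prediction is implemented, so the sketch below is only a generic illustration of the idea: train a classifier on historical execution data, then rank the current suite by predicted probability of failure. The features, the toy data, and the choice of gradient boosting are all assumptions for illustration.

```python
# Generic failure-prediction sketch: learn from historical executions, then
# rank the regression suite by how likely each test is to fail next run.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Historical executions. Hypothetical features per run of a test:
# [code churn in the covered area, failures in the last 10 runs, days since last change]
X_history = np.array([
    [12, 3, 1],
    [0, 0, 45],
    [7, 1, 3],
    [2, 0, 30],
    [15, 4, 2],
    [1, 0, 60],
])
y_history = np.array([1, 0, 1, 0, 1, 0])  # 1 = the test failed on that run

model = GradientBoostingClassifier(random_state=0).fit(X_history, y_history)

# Current suite with the same features computed for the upcoming run.
suite = {
    "TC-101 checkout totals": [10, 2, 2],
    "TC-204 login lockout": [1, 0, 40],
    "TC-310 search filters": [6, 1, 5],
}
probabilities = model.predict_proba(np.array(list(suite.values())))[:, 1]
for name, probability in sorted(zip(suite, probabilities), key=lambda pair: -pair[1]):
    print(f"{probability:.2f}  {name}")  # run the most failure-prone tests first
```

In practice the training set would come from your own execution history; the value comes from the volume and quality of that history, not from the particular model.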
Joe [00:23:33] As I mentioned, once again, the webinar. What I love about it, and I think you mentioned this on the webinar too, is your approach: you're not pitching AI as some magical tool. You or someone on your team said AI is there to help with low-level decisions, which allows engineers to focus on the high-level decisions. And those low-level decisions draw on, like you said, a goldmine of data. So it seems like that is the approach you are focusing on, which I think is really helpful for people to follow as well.
Aviram [00:23:59] Yeah, low-order tests need to be handled so that professionals can focus on higher-order tests like user experience testing. Those are the things we want professionals to handle. We don't want them doing regression all day and all night long just because there's a potential risk hiding somewhere. Handle that risk through AI, and free up a significant amount of your day to focus on the things that make your customer the happiest. That's what we're trying to do here.
Joe [00:24:30] Love it. Okay, Aviram, before we go, is there one piece of actionable advice you can give to someone to help them with their quality strategy initiatives? And what's the best way to find or contact you, or learn more about Qualitest?
Aviram [00:24:43] So first of all, I'm always available. My email is my first name dot my surname at qualitestgroup.com. Our website has just been revamped and relaunched, and I really urge you to look at qualitestgroup.com. There is a lot of information there. And look me up on LinkedIn and Twitter, and I promise to be responsive.
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.