About This Episode:
Welcome back, Guljeet Nagpaul, to the Guild! In this exciting podcast episode, we dive into test automation, exploring the differences between tools and platforms and discussing the groundbreaking advances in AI like ChatGPT. Join us as we unravel the mysteries of AI-powered codeless test automation in the cloud and delve into how AI transforms ERP software testing for giants like SAP and Oracle. Find out what's new at ACCELQ, a company that, according to the latest Forrester report, is not just playing catch-up but leading the charge as one of the top testing platforms available. Don't miss this engaging and insightful conversation with Guljeet Nagpaul!
About Guljeet Nagpaul
Guljeet is an accomplished leader with experience ranging from software engineer to leading platform and market strategy. Guljeet spent ten years at Mercury Interactive, where he was the North American head of the ALM practice. During this time, ALM captured the highest market share, and Mercury was acquired by HP. He was a key leader in growing the ALM portfolio through the SPI Dynamics and Shunra acquisitions.
Guljeet believes ACCELQ is the biggest breakthrough in the Continuous Testing space since Mercury's introduction of specialist ALM tools. He is excited to lead ACCELQ’s Product Strategy and Marketing. Guljeet holds a Master's in Management of Information Systems from Carnegie Mellon University.
Connect with Guljeet Nagpaul
- Company: www.accelq.com
- Blog: www.accelq.com/blog/
- LinkedIn: www.linkedin.com/in/guljeet
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:04] Get ready to discover the most actionable end-to-end automation advice from some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.
[00:00:25] Joe Colantonio Hey, it's Joe. Welcome to another episode of the Test Guild Automation Podcast. Today, we are speaking with Guljeet all about NextGen AI, evolution, and test automation, some new features at ACCELQ, and a bunch of cool things about testing platforms and where the future is leading us. If you don't know, Guljeet is the Chief of Products and Strategy and a founding member at ACCELQ. He is an accomplished leader with experience ranging from software engineering to leading platform and market strategy. He spent 10 years at one of my favorite companies, I always say this, Mercury, where he was the North American head of the ALM practice. During that time, ALM captured the highest market share and Mercury was acquired by HP. So you can tell he knows his stuff. Really excited to have him back; I think it's the third time he'll be joining us. So it's going to be fun catching up. You don't want to miss this episode. Check it out.
[00:01:15] Joe Colantonio This episode of the TestGuild Automation Podcast is sponsored by the Test Guild. Test Guild offers amazing partnership plans that cater to your brand awareness, lead generation, and thought leadership goals to get your products and services in front of your ideal target audience. Our satisfied clients rave about the results they've seen from partnering with us from boosted event attendance to impressive ROI. Visit our website and let's talk about how Test Guild could take your brand to the next level. Head on over to TestGuild.info and let's talk.
[00:01:15] Joe Colantonio Hey, Guljeet, welcome back to the show.
[00:01:56] Guljeet Nagpaul Hey, Joe, How are you doing? Great to be back.
[00:01:58] Joe Colantonio Awesome to have you. Last time we spoke, I mean, officially in an interview, was actually a few years ago. And I think a lot has changed, so I thought we'd dive into a few things. Before we do, I always ask: is there anything in your bio that I missed that you want the Guild to know more about?
[00:02:13] Guljeet Nagpaul No, you were very generous. The only thing I would point out is that my stints at all those other companies helped me learn about our field, and ACCELQ now kind of combines all of that experience. We're going strong after 8+ years and are a leader in the market. So, yeah, as much as I always used to shine a light on my earlier career, ACCELQ has just taken over in terms of how we're leading the market.
[00:02:45] Joe Colantonio Absolutely. And I know there are a lot of great innovations going on at ACCELQ, some of them released quite recently, and I want to dive into that. I guess the first thing, since it's been a while since we last spoke, is the rise of ChatGPT; AI seems to be accelerating. I know it's not true AI, but it seems to be really catching fire now, and a lot of testing tools have tried to integrate with it. I just want to get your views on generative AI and ChatGPT, and how you see it impacting testing and test automation tools.
[00:03:15] Guljeet Nagpaul Yeah, it's caught all of us by storm, especially from a layman's standpoint. ChatGPT really caught the scene for people who had just heard about AI off and on and never really got to experience what impact that technology can have on an end user or a layman. So more than anything else, more than the generative-model revolution it appears to bring, I'd say ChatGPT helped an end user understand what impact AI can have on our lives. Right. But if you really think about it, machine learning algorithms have been around for over a decade. We've leveraged them pretty much since the inception of ACCELQ, from 2015, 2016 onwards. And that's been the natural trend, right? We started out with algorithms that were more traditional, which would do dataset analysis and give you a combinatorial analysis, right? Then we moved to machine learning algorithms, which learn from datasets and then help you do predictions, whether it be neural network algorithms, clustering algorithms, or sequencing algorithms. They help us do prediction. So they would learn from datasets and they would help us do trend analysis and predictions. They would help us with correcting, for instance, object interactions in automation, whether it be self-healing and so on. It was a natural transformation going from those machine learning algorithms to then leveraging that learning, or deep learning, in generative models. But if you really think about it, some of the generative models which are not language models, which a layman doesn't experience, have been around for a few years now. I don't know if you've noticed, but one of the chief architects of Tesla's Autopilot came out last year and said 70% of the code he writes is written by GitHub Copilot. Right.
So that's an algorithm, the Codex algorithm, which is also a generative AI algorithm, but not a language model in the conversational sense. It has existed in our technology world for a few years, but the end user never experienced its impact, right? So generative models come in various different forms. The reason ChatGPT got so much limelight is that it's a language model that's just conversing with us, so it changes how searching and everyday behaviors work. So we all got caught up in that storm. But in reality, there will be different versions and variations of GPT models, whether VAE or GAN models, Codex-like models, or Google's LaMDA models, which would do different kinds of generation, whether it be image generation, video generation, or code generation, like the Codex model that powers GitHub Copilot. It's a natural evolution, but yes, it has definitely brought to the forefront how this is going to impact the world in the coming decade.
[00:06:21] Joe Colantonio Absolutely. So like you said, ACCELQ has been AI-powered for years and years, and I just think this brings it forward more. Do you see ChatGPT, or its adoption, impacting codeless solutions, because now people can just get code generated for them? Does someone say, oh, I don't need codeless anymore? I don't know if anyone ever says that. Or is it more like, oh, I can use codeless now, because I see how it's happening and it seems like it's going to save me a lot of time? If that made any sense.
[00:06:53] Guljeet Nagpaul Absolutely. I think the generative models are becoming so mainstream that they are going to come into both the codeless and the code world. Just like in our testing world, these two worlds still exist fairly decently, healthily, where we have this whole code world of test automation and the codeless world pretty much existing in parallel, and the generative AI and code-generation models are going to become more and more mainstream in both. Even if you look at it today, GitHub Copilot already has like ten competitors which do a pretty decent job of generating code. Both worlds are going to adopt it sooner rather than later, because all these algorithms are becoming more mainstream and more componentized, and the companies and products coming up around them don't have to reinvent the wheel by writing these models. So it is certainly going to come in. The question would be the efficiency of generation, whether in the codeless world or the code world, because at the end of the day, even with GPT models, it all comes down to how much context we can bring to that generation, and how much noise it's going to create as opposed to how much efficiency it's going to give us in long-term scalability. One thing's for sure, Joe: the basics of design can never go away. Right. And that's the strength we need to look at, whether in the codeless world or the coded world. That's where we've always shined, and that's why we really differ from the market, because we understand that the reason open source got so adopted in test automation was that it needed that sound, design-first approach. Without that, you can't scale up. You can patch code, whether you write it or you generate it, you can patch it as much as you want, but it's going to become garbage. It's going to become throwaway.
And if it becomes throwaway, it doesn't matter whether a million lines of code were generated or someone wrote them. Whoever brings that context is going to come out on top, right? Whether it be in the coded world or the codeless world. I'm trying to be agnostic, just given the spirit of your question.
[00:09:10] Joe Colantonio Absolutely. So certain tools, you think, can help you with design better then? If you start off with a solution that's in the cloud and can scale, maybe you could train the algorithm on test data specific to your solution, making it even more intelligent than just using ChatGPT, which is more generic. Those are the types of things that will help with design?
[00:09:33] Guljeet Nagpaul That's right. So, bringing in more predictability by learning application behaviors, and then helping the generation in the context of a sound design which can then scale up, as opposed to trying to generate a dump of code, which you'll see a whole bunch of players do. That's where the industry will evolve and how players can clearly differentiate themselves, right? And we are uniquely positioned in that space, because that's the basic thing which has always set us apart: that design-first approach, that don't-repeat-yourself approach. Right. Which is basically the key magic to automation. I mean, you can't build a bridge without having a sound design; it's going to fall. It's that context of how you bring in the generative AI, in any part of the lifecycle of automation. So if you dissect the automation lifecycle, you have the design, the development, the test data generation, the environment preparation, and then you have the governance, the analysis, and the predictions, so that you can really test smartly. At the end of the day, what is our end goal with all of this? I want the most lean and mean test. I want to test the least amount to get the most coverage, so that I can really produce a good quality product. And if that context is put across the lifecycle in leveraging generative AI, that's what can differentiate an approach.
[00:11:06] Joe Colantonio Absolutely. A lot of solutions I see out there focus mainly on web testing. What I like about ACCELQ is you basically have a full platform that tests all the things. I guess where you really shine, maybe, is ERP software testing, like SAP and Oracle. So you already leverage AI to help with that type of testing, is that correct?
[00:11:24] Guljeet Nagpaul We do. We've always leveraged AI, more, like I said, on the machine learning front, whether it be deep learning or natural language processing, to make codeless more accessible. If you think about codeless, we used to call it scriptless a decade ago. And the reason it was scriptless was that it would generate a recording dump which would be transparent to the test engineer. But we all saw in the past 15 years, with the onset of open source frameworks, that that was not scalable, because real-world complexity needed that human touch. ACCELQ was always at the forefront of leveraging natural language processing and machine learning algorithms to bring that human touch, but without bringing in syntax-based programming complexity through code injections, because a combination of codeless and code actually makes things worse. Most test engineers don't want to deal with that, right? So we were always kind of ahead of the curve, so to say, in bringing those two worlds together. But I think where we are headed is more toward generative models which can be used in a Copilot-like fashion, right? Where test engineers can literally interact with the application, but not just stop there: generating that codeless automation, but also being able to leverage generative AI to actually recommend tests and recommend test data sets. Because I'm sure you've heard over the past many years, test data is probably one of the things that slows down test automation the most. Right. So there are just so many areas that we are looking at. And believe it or not, everything's just around the corner in terms of how we plan to progress through this year.
[00:13:12] Joe Colantonio Very cool. I think this also points to a larger trend I've been seeing. Every year I release my trends for the New Year, and one of them was actually going from automation tools to automation platforms. I don't know if you've seen the same thing, and if so, what would you consider an automation tool versus a platform?
[00:13:29] Guljeet Nagpaul Yeah, that's a great question, Joe. It's the difference between a utility and a platform. I always like to say that sometimes tools are nothing but glorified utilities, and they may be really good at doing what they do. If you look at a very specific area in test automation, web has become very mainstream. But if you look at, for instance, database testing, ETL testing, or the way API testing started out, if you really think about it, those automation tools were mostly just utilities. They would do a request-response verification, and they would do it really well. And because those practices were just starting out, whether it be testing databases and complex ETL procedures or testing APIs, those utilities kind of made sense, and it felt like they were serving a purpose. But as our applications and our architectures have gotten so componentized, with omnichannel interactions, whether it be microservices, nonrelational databases, different types of enterprise services, or a user journey that starts on the web, goes to mobile, and then to a microservice, everything has become really integrated, and that utility-style testing just no longer scales to an end-to-end integrated behavior flow that needs to be validated. And the idea behind a tool versus a platform is that you can't just put together multiple utilities and call it a platform if they are individually testing your behaviors, whether it be mobile, API, database, or green screens, which are still, well, active in most enterprises. There are a few dimensions to differentiating a tool and a platform. One is organically handling all the technology stacks under the same hood. But most importantly, a platform is not something that just throws a script at you, whether it be codeless or coded; it doesn't matter, because efficiency lies in managing that script and reusing it, just like when we write code.
If I go back to our code world, you want your code to be as componentized, as modular, as object-oriented as possible, no matter how fancy the technologies we use. At the end of the day, that heart of object-oriented programming is still what makes our code reusable, easy to maintain, and able to scale up to the application changes we need. The same is true for automation, and that's been ignored. So that's the difference between a tool and a platform: something which can manage those assets, modularize them, version control them, and then bring that modularity and business relevance to them. That's how we've always built the platform to work. And it's a journey. I'm not going to say ACCELQ is done with that journey; we've been long on that journey, but it takes time to evolve a tool and then add breadth to it. That's the third dimension, right? You can't just leave out nonfunctional testing. I mean, you know how important accessibility testing is. Shift-left performance, which is catching those performance bugs sooner rather than later. Yes, I can stress the system and then test it out with my nonfunctional testing, but wouldn't it be better if, right in my sprint, I could find those bottlenecks so that I can improve the architecture of my application? So that's the third dimension a platform can add. But the important thing is building it all organically under the same hood, right? That's where we've been on our journey now.
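The modular, don't-repeat-yourself design Guljeet describes can be sketched in plain code. This is a minimal, hypothetical illustration, not ACCELQ's actual API: shared behaviors live in reusable components, and each test scenario is just a composition of them, so a change to one flow is fixed in one place.

```python
# A minimal sketch of "design-first" automation: reusable, parameterized
# behavior components composed into scenarios, instead of one throwaway
# script per test. All names here are hypothetical, for illustration.

class LoginAction:
    """One shared behavior, maintained in a single place (DRY)."""
    def __init__(self, user: str, password: str):
        self.user = user
        self.password = password

    def run(self, log: list) -> None:
        # A real implementation would drive the application under test;
        # here we just record what would happen.
        log.append(f"login as {self.user}")

class AddToCartAction:
    def __init__(self, item: str):
        self.item = item

    def run(self, log: list) -> None:
        log.append(f"add {self.item} to cart")

def run_scenario(actions) -> list:
    """A test scenario is just an ordered composition of components."""
    log = []
    for action in actions:
        action.run(log)
    return log

# Two scenarios can reuse the same LoginAction; if the login flow
# changes, only that one component needs updating.
checkout = run_scenario([LoginAction("alice", "pw"), AddToCartAction("book")])
```

The point of the sketch is the shape, not the details: whether the components are coded or codeless, scalability comes from the fact that each behavior is defined once and reused everywhere.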
[00:17:00] Joe Colantonio Nice. And also, I guess, is it like a go-to place for the whole team to get insights on all things testing related, where before it almost seemed siloed across these different utility automation tools? Is everything feeding into this one place as well? Because you're able to test all the things in one place, you're able to actually bubble up insights. You probably could do that before, but you'd have to do a lot of coding and a lot of connecting different platforms together to get that information.
[00:17:25] Guljeet Nagpaul Yes, very true. And that comes back to our earlier discussion, where we ask, what are we doing all this for? You're doing it to run the most lean and mean test. And the way you can have that is if the governance is tied very closely to what you're testing. So to your point, can I go in and leverage AI to really slice and dice all the assets to tell me what I should be testing for this next sprint or for that release? Traditionally, test management has always been this quote-unquote utility or platform, but if you think about it, that doesn't make sense. If my automation lives in a separate world and its governance lives in a separate world, no matter how fancy I make the report, they can't work well together. I would say it's still not an easy problem to solve, because whether we like it or not, we live in an agile world where user stories need to be equally intertwined with the test assets, then you have the defect lifecycle that you're managing, and then that whole thing needs to be part of a CI/CD workflow. So there would still be a lot of interactions. The idea, at least the way we have progressed, is to fit into this ecosystem. Even though we would want ACCELQ to be that one-stop shop, the reality is the ecosystem, how much Jira is leveraged, how much Jenkins is leveraged, so the goal is to tie closely with these ecosystem tools, but at the same time, for all things test, become that one-stop shop. It's still an ongoing evolution to fit into the ecosystem and yet have a one-stop shop. But that's been the whole principle behind how the platform has differentiated itself.
[00:19:05] Joe Colantonio That's a good point. I'm not going to pick on HP, but they tried to make everything closed off, almost, from open source, and that's why they lost, I think. It sounds like, yeah, you want one place, but you also want to be able to integrate with anything that may come up in the software development lifecycle.
[00:19:22] Guljeet Nagpaul Right on. So, a specific example there: we don't force Agile scrum masters to get out of their Jira. We've tied up with Atlassian so closely, and I'm taking a very specific example, that if I'm the Scrum master, I don't have to leave my dashboard screen. I should be able to see, for my story within my sprint, how many tests were run, what failed, and whether it's an environment issue. I don't want to go to ACCELQ, because I'm a Scrum master; I just want to live in my world. Right? So that's the key, like you said. I mean, and we've seen Microsoft evolving from the company it was.
[00:19:55] Joe Colantonio Yeah, exactly.
[00:19:55] Guljeet Nagpaul To an open ecosystem, right? So yeah, we care about that aspect in terms of community and how the community would adopt a tool at the end of the day, as opposed to trying to take over all the process areas.
[00:20:09] Joe Colantonio Absolutely. So, you said it's a journey; even ACCELQ is still going through a journey. But as you're adapting to all the new releases, all the new things that are coming out, I think you've released a pretty big version called SkyBolt. Can you talk a little bit about SkyBolt and how it may be set up to help you integrate with AI and the new things that are coming in the future?
[00:20:30] Guljeet Nagpaul Yeah, as I mentioned, the generative models have been around for a few years. We've been tracking them pretty closely, the usage of Codex models and other generative models, even long before the ChatGPT language models came along. SkyBolt was a step in that direction. In fact, it's now been almost a quarter since it was released, and it's been well adopted, even before we heard all of this about ChatGPT, so it's positioned really strongly in this market. The new introduction in SkyBolt was really the stepping stone toward leveraging all these different AI models. We came up with more of a studio environment to design and test, as opposed to a logic editor environment, right? Traditionally the industry, whether in the codeless or the coded world, has always been about, let's generate scripts, whether codeless or syntax-based. We've moved toward more of a design studio environment which emulates the underlying application behaviors and helps create process flows while still looking at, and getting to, the object level. If you think about traditional automation, it's still fighting the battle of, hey, how can I best interact with this object? Is XPath the best way, or jQuery, or should I write my own DOM-based interactions? The traditional automation world fought that battle, and we fought it too for the first few years of our inception, and evolved, right? But with this new introduction, we are able to almost become a layer above that, so that those process flows become almost transparent to the end user, more like how an end user looks at and interacts with the screen, as opposed to fighting the automation technologies underneath.
We've been able to advance our platform to that level and remove all the flakiness which comes with traditional automation, because you know how flaky traditional automation is. DOM changes are inevitable; something changes in the framework and everything just breaks. And this is even more common these days, when applications have three or four releases a year. And you mentioned ERP applications: Salesforce, Oracle, the SAPs of the world, plus Workday and ServiceNow. So for that, we also released something pretty cool called Live, to keep live release alignment and to have behavior process flows which can be used in the design studio. Other than that, SkyBolt also increased the breadth, like I was talking about with the different dimensions of the platform. We added virtualization, accessibility testing, and shift-left performance testing. We're pretty excited to see our customers adopting ACCELQ for these nonfunctional needs as well, as we expand on that front.
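The flakiness problem described above is commonly mitigated by recording several attributes per element and falling back through them when the primary locator breaks. This is a simplified, hypothetical sketch of that idea, simulated over a dict-based "DOM"; real tools apply it against live browser DOMs, and none of these names are ACCELQ's actual API.

```python
# Brittle automation pins an element to one locator (say, an XPath)
# and breaks on any DOM change. A fallback chain of locators lets the
# same test survive a redesign. Locator keys here are illustrative.

def find_element(dom: dict, locators: list):
    """Try each locator strategy in order; return the first match."""
    for key in locators:
        if key in dom:
            return dom[key]
    raise LookupError(f"no locator matched: {locators}")

# Version 1 of the page exposes the button by id; after a redesign the
# id is gone, but the accessible label still matches, so the lookup
# falls through to the second strategy instead of failing.
page_v1 = {"id=submit-btn": "<button v1>", "label=Submit order": "<button v1>"}
page_v2 = {"label=Submit order": "<button v2>"}

locators = ["id=submit-btn", "label=Submit order"]
```

Self-healing approaches go a step further and reorder or relearn the locator chain from observed application behavior, but the fallback principle is the same.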
[00:23:19] Joe Colantonio Love it. And like you said, you've been using AI pretty much since you started, so you're not playing catch-up. And I think what reflects that also, you pointed me to a Forrester report out recently that rates a bunch of tools in different areas for platforms. Based on what I looked at, I think ACCELQ is in the number one spot, or it did pretty well in each of these areas. How did you anticipate all this? Because you're not playing catch-up, it seems like everything you would look for in a platform you already have in place. I'm just curious to know how that happened. Did you design something from the beginning to be able to handle what would eventually become the norm down the road?
[00:23:55] Guljeet Nagpaul Yeah, absolutely. Like you said, we've been building toward it. Back in 2019, we were still a relatively young company. And when Forrester did its first analysis, actually, in the history of any Forrester or Gartner analysis, not just in testing but across the board, no company had ever entered straight into the leader quadrant; they always had to climb up from the challenger quadrant. That differentiation of platform versus tool has always been what set us apart. The industry, without naming tools, had still been evolving around utilities, whether it be generating and throwing a web script at you, an API script at you, or a mobile script at you, while ACCELQ was always building toward that full-stack, design-first approach, everything integrated, bringing that user experience while leveraging AI. We didn't look at AI as just making your job easy. We looked at AI as: can it bring efficiency to the scalability of automation? Where we differentiated was for the customer who's looking to automate 10,000 automation assets across a 100-application portfolio, as opposed to just providing a utility to one team that's trying to do a small test of 25 automation scenarios, right? So those have been the building blocks toward this. And like I said, that has multiple dimensions to it, right? It's like working on an offline Excel sheet versus an online Google Sheet: if you and I are on the same Google Sheet while we work on a particular cell, it will be collaborative in real time. So a cloud platform doesn't mean just dumping my script in the cloud. It means a real collaborative workspace which you and I can share, with version-controlled branching and merging, bringing codeless not just to the scripting but to all these different layers.
That's what has really differentiated a tool and a platform, and that reflects not just in Forrester but, if you look at our customer base, in the maturity of adoption across it.
[00:26:13] Joe Colantonio Love it. Great catching up, Guljeet. A lot of innovation there. Before we go, though, is there one piece of actionable advice you can give to someone to help them with their automation testing efforts, and what's the best way to contact you or learn more about ACCELQ?
[00:26:25] Guljeet Nagpaul Yeah, always good chatting, Joe. It was a long break, so I appreciate you having me back. Yeah, I mean, what I always love to share is: get your hands in there. ACCELQ offers a completely no-obligation free trial. The proof is in the pudding, right? No matter how much we can talk about it, just get in there. We've now started community initiatives where every quarter we do hackathons where you can actually participate, learn within a week, and then you get a 36-hour window to basically go at it. And we have different versions of the hackathon for whatever area you like to be in. We have full-stack hackathons which go across technology stacks, and if you're just into API testing, there are API-specific hackathons; our mobile testing hackathon has gotten very mature too. So yeah, get in there, try it out, and compare for yourself. That's the best way to experience it.
[00:27:16] Thanks again for your automation awesomeness. For links to everything we covered in this episode, head on over to testguild.com/a446. And if the show has helped you in any way, why not rate it and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
[00:27:53] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at Testguild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.
Sign up to receive email updates
Enter your name and email address below and I'll send you periodic updates about the podcast.