About This Episode:
In this Halloween special, Joe Colantonio and Paul Grossman discuss the evolution of automation testing, focusing on the integration of AI tools, project management strategies, and the importance of custom logging. Paul shares insights from his recent job experience, detailing how he inherited a project and the challenges he faced. Paul also goes over his Optimus Prime framework and uses it to explore various automation tools, the significance of dynamic waiting, and how to handle test case collisions. The discussion also highlights the role of AI in enhancing automation frameworks and the importance of version control in software development.
Exclusive Sponsor
Discover TestGuild – a vibrant community of over 40k of the world's most innovative and dedicated Automation testers. This dynamic collective is at the forefront of the industry, curating and sharing the most effective tools, cutting-edge software, profound knowledge, and unparalleled services specifically for test automation.
We believe in collaboration and value the power of collective knowledge. If you're as passionate about automation testing as we are and have a solution, tool, or service that can enhance the skills of our members or address a critical problem, we want to hear from you.
Take the first step towards transforming your and our community's future. Check out our done-for-you awareness and lead generation demand packages, and let's explore the awesome possibilities together now: https://testguild.com/mediakit
About Paul Grossman
Paul Grossman is the Dark Arts Wizard of Agentic Test Automation. He has been a Test Automation Evangelist for 25 years. He has touched every tool including Mercury WinRunner, UFT, Selenium and many paid tools. He uses his TikTok and YouTube channels for side-by-side comparisons of tools with his CandyMapper challenge.
He co-authored “Enhanced Test Automation with WebdriverIO” with Larry Goddard.
Most recently he started supporting a TypeScript Playwright framework called Optimus Prime.
Paul has been Joe Colantonio's Test Guild guest every Halloween for 10 years. At conferences he always has a live demo and the occasional magic trick.
Paul's current passion is leveraging agentic LLMs to implement missing features in automation platforms.
Connect with Paul Grossman
- LinkedIn: http://linkedin.com/in/pmgrossman
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:28] It's alive! It's aliiiive!
[00:00:36] Joe Colantonio Hey, Paul, welcome back to our Halloween special that you've been joining us for the past, I don't know, 4 or 5 years. Welcome.
[00:00:41] Paul Grossman 10 years.
[00:00:43] Joe Colantonio 10 years, it's been 10 years? Oh my gosh, it is nuts.
[00:00:47] Paul Grossman Oh my, yes. Thank you very much for inviting me back. It's always a great pleasure to be here and give you an update of what my job status currently is.
[00:00:54] Joe Colantonio What is your job status?
[00:00:55] Paul Grossman I got a new job working for a company called R and R Shipping, and they have a really cool project that I kind of took over. I'll start by telling you a cool thing about that. They had two projects going on: one was a Selenium project, the other was a Playwright project, and they were trying to get everything automated. They have about 100 test cases and my predecessor left. And they said, we need someone to take over and do this, are you up for it? And I sat there and said, yeah, that'd be kind of cool. I know Playwright — no, sorry, I know WebDriverIO and TypeScript — and I had all these cool ideas in my book. So I basically have a two-year project that's been in flight, and my predecessor is long gone. I had a few minutes to talk with him, just saw him run our test cases. With his test cases, I just wanted to verify: can I log in? That's all I needed to know. The other things I don't care about; I'll figure them out. And that's basically been my last four or five months working with them: keeping their automation scripts working and running, and adding a whole bunch of new enhancements into the Playwright framework.
[00:02:19] Joe Colantonio I'm already off script, so I'm just curious to know, when you inherit a project like that, how do you handle it? So I'm sure a lot of people start new companies and they have an existing framework and they just get thrown into it and they have no idea where to start. Do you have like a process you follow?
[00:02:36] Paul Grossman Yes, if I can, I try to get as much time as I can with my predecessor. I really had just a couple hours with this guy because it was on a Friday, and the company was very nice to get us overlapped a little bit. I could talk to him and just see: where is it located, what are you working on, stuff like that. The best thing, of course — I always recommend this — on one of my first projects I gave three weeks' notice and said, hey, bring in whoever you want to take it on so we have enough overlap time, because I had a four-year project when I first started back in 2001. Second thing, during a session like that, I want to see how stale the test cases are. Can something run and get to a pass? That's really all I look for to start off with; I can move on from there. Third thing I look for is how many hard-coded wait statements are throughout the entire framework. That's a pretty good indicator.
[00:03:34] Joe Colantonio It's still a thing, cause that's a horror.
[00:03:36] Paul Grossman Oh yeah, it is a horror, it is awful, but it tells me how much of a challenge they were having with page synchronization, and we'll talk a little bit about that during this session, some of the things I try to get into. That reminds me of something you just had last month: you had this event in my town, in Schaumburg outside Chicago, which is literally where I grew up. It was right down the street, so it was really great to see you in person. If anyone gets a chance, Joe's got these sessions. I think you're doing more of them, live sessions in different cities, if I understand that correctly.
[00:04:22] Joe Colantonio Test Guild IRL, in real life, coming to a place near you.
[00:04:28] Paul Grossman Yeah. And you gave me some table space and I got to show off a little bit of stuff, and some of the stuff you're going to see today is some of the things I was trying to show off there, and in some cases actually succeeded. The point of that was, you had a great guest over there, Jason Huggins. And if you haven't seen it, the session just before this one, you should go back and see Joe's interaction with Jason Huggins. He's talking about Vibium, which is a brand new thing that he's getting into, which is really, really awesome. You were asking about the things that I look for. I mentioned the wait statements, and then I try to understand what current challenges they have. My first thing is always: how much detail am I getting out of the reporting? And a lot of people ask, why are you doing all this console log stuff? And I'm like, because I need to know what's going on. One of the first things I did was create a custom console log. I've done this every time for the last 25 years. I created a custom version of console log, but it was a little bit different this time because, as you may know, I got out of coding for about three years; I worked with a tool that is all plain English. And just as a side note, while I was out of work for the month of April, I looked at 24 other tools that are all plain-English stuff. If anyone ever wants to ask me about any...
[00:05:58] Joe Colantonio Where's that listed, Paul? Do you have that listed on a website? All the tools and the results?
[00:06:04] Paul Grossman I am putting together CandyMapper challenges with most of these companies, as long as they said they're okay with it. I can tell you some of the companies I looked at: of course ZapTest — Alex, his tool, we worked with that over there — Wopee, and I just got a chance to look at Harness, which is kind of cool, and BugBug, which nobody's ever heard of, but they are doing some interesting things. They just need some version control going on over there. Who else have I talked to? There's one — Gina from testing on or ..... I'm not sure which company it is, but what they've put together — and Functionize; I've definitely been looking at those and playing around with them. And that was one of the things: when I got hired on, they said, by the way, we heard about this company called testRigor, and we know you know a lot about it. And I said, yeah, I do, I know where all the skeletons are hidden in the closet on that one. And I said, let me take an opportunity to go look at other tools and see exactly what's out there. And they said, go for it, go take a look at anything. So I have been looking at a whole bunch of these different companies out there and just looking for three things: one, how easy is it to work with; two, how much of the functionality that I'm looking for does it have — I mentioned version control just a moment ago, some have it, some don't — and three, what's the speed? I know you'll ask me at the end for tips and tricks. One of the things I'm seeing with AI, which is the major topic today, is that everyone's talking about, hey, you need AI, it's all AI, everything's got to be AI in there. And I do know Ben Fellows was talking about this, saying, yeah, it's interesting, but if you go overboard on it, it becomes a problem. That's exactly what I'm seeing as well. You need to understand: use AI like salt in a soup. You have a little bit of AI, like a little salt in the soup — that's fine. If you put the entire container of salt into the automation framework, it's probably going to bog it down to the point that it's like, well, that's great, we've got a hundred test cases, but it takes two days to execute, because the AI is constantly taking all this time to go and try to find stuff. And at that point, you also might end up with the problem of tokens. I'm sure you've heard of tokens. You get basically 300 free tokens for your Copilot inside of the tool that you're usually using. I'm using Copilot inside of Visual Studio — Visual Studio Code, I should say. And one of the first things I learned three months ago was that I burned through all 300 tokens in about two weeks. And I still had two weeks to go. And I have to say, I was a little addicted to the tokens; I was like, what do I have to do to get more tokens? I actually made friends with a guy at Microsoft and I said-
[00:08:56] Joe Colantonio It's like Halloween candy, you have to go and keep getting more and more.
[00:09:03] Paul Grossman Yes. Just like the candy map website — it tells you where all the candy is. It hasn't worked in seven years, but okay. Yeah, true. And you've got to figure out where to get more. And that was the biggest challenge for me. In fact, it remains the biggest challenge: trying to figure out how — I don't care if I pay for something out of pocket just to learn how to use it. I always invest in myself. That's another thing, let me tell you: invest in yourself. Don't wait for a company to pay for your training and stuff; go invest in yourself. But I couldn't figure out how and where to get that done. I had to take a side trip and go find somebody, and he was like, I'm not sure how to do it either, but I reset your tokens. I'm like, okay, thank you. Really nice guy. But I've gotten a little bit more judicious about using the tokens. Now, where did I go with this? I went a little off track. I was just trying to make the point that AI for everything isn't really a great solution; you want to use just a little bit. I have a little demo of some stuff.
[00:10:09] Joe Colantonio Before you get into it, Paul, I heard this year for Halloween you're going as Optimus Prime, that's your segue.
[00:10:18] Paul Grossman Yes, I am. I do name all of my automation frameworks. So this year I came up with the name Optimus Prime for the automation framework, because we work in the trucking industry and there's a whole bunch of different trucks. And if I can get this to do what I want, I'm gonna show this to you. Let's go do a share. All right, there is my code, and I will go share my screen. I always love to show off this stuff here. This is my Optimus Prime. I'm going to go and run one of my executions over here and see if I get the Optimus logo — that was one of the first things I did with the AI. I'm going to run a lot of these test cases, but the first thing I wanted to show off is that I wanted something that basically says how many test cases am I running, how many test users am I running, and also has the Optimus Prime logo up there. Essentially, it's going through my reporter over here. You can see it's actually interacting with the elements on the screen as it goes through, and it's reporting out what it's doing now. Everything it's putting out here, it's actually saying what test it's running and a lot of information about what it's doing. It's basically saying, hey, I'm putting text in this field, but the field is not yet visible. There's a lot of extra stuff that I like to put in there, and this is basically custom clicks, custom fills, custom assertions that I put in there. And I'm always putting in as much information as I can, and saying exactly which function was actually executing as it did this. It basically puts out and says, hey, it successfully filled this. Now, one of the things that we do here is I'm doing a console log over here, but this is updating in green. I also have text written in blue. That's not a part of Playwright; it would just put it out in a standard gray on black. What I wanted to do was ask it to create a custom console log that will colorize the error text in red. And there were many things that I put in there, but I'm essentially using the console log here. I've got three different types of LLMs I can work with. I generally like Claude Sonnet: it's fast and it's smart. Gemini Pro is slower, but it does bigger projects, and GPT is also pretty good at doing big stuff.
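To make that idea concrete, here is a minimal TypeScript sketch of a colorized custom logger wrapped around a fill action. The function names, ANSI color codes, and log format are illustrative assumptions, not the actual Optimus Prime code:

```typescript
// Minimal sketch: a colorized custom logger plus a custom fill wrapper that
// reports which function ran, what it waited for, and whether it succeeded.
import { Page } from '@playwright/test';

const COLORS = {
  reset: '\x1b[0m',
  red: '\x1b[31m',   // errors
  green: '\x1b[32m', // passes
  blue: '\x1b[34m',  // informational steps
} as const;

type Level = 'info' | 'pass' | 'error';

export function customLog(level: Level, message: string): void {
  const color = level === 'error' ? COLORS.red : level === 'pass' ? COLORS.green : COLORS.blue;
  const timestamp = new Date().toISOString();
  // Prefix each line with the level and a timestamp so the report shows when each step ran.
  console.log(`${color}[${level.toUpperCase()}] ${timestamp} ${message}${COLORS.reset}`);
}

export async function customFill(page: Page, selector: string, value: string): Promise<void> {
  customLog('info', `customFill: waiting for ${selector} to become visible`);
  await page.locator(selector).waitFor({ state: 'visible' });
  await page.locator(selector).fill(value);
  customLog('pass', `customFill: filled ${selector} with "${value}"`);
}
```

The same pattern extends to custom click and assertion wrappers, so every step in the output names the function that ran and whether it passed.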
[00:13:10] Joe Colantonio Paul, why not use auto?
[00:13:12] Paul Grossman That's a brand new feature in there. The problem is I don't know which one it actually uses when I switch to auto.
[00:13:22] Joe Colantonio Does it recommend? Does it try to find the best one for the use case or do you know?
[00:13:29] Paul Grossman That's the theory behind it: it'll take whatever I typed in there and grab the model that's most useful. But I prefer to know exactly which one, because I've had experience working with all three — actually, all four. I like Claude the best. GPT and Gemini are good for when I have something really big — I need to grab this function, extract it, and put it into the fixtures — that's more of a big project, and GPT and Gemini are better at doing that. But they're slower. It's sitting over here, actually chugging through all of my things. And another thing you'll see, which I found in my experience the last four months doing agentic — let's call it vibe coding; I think there was a better term for that, augmented coding is what Ben called it, and I like that term better — is that it does tend to apologize when it's having issues or errors. I don't see that it's actually put anything in there yet, but it's trying to figure it out and put it together. I'm going to cancel this. I just want to show that there are different ways you can do this. Mostly a lot of people use ask mode. And at your conference, you actually had a quote on one of your screens that I recommended: when you do something like this, when you're adding a new feature, use ask mode first, and then whatever it says, review it and see, does this look good, is it okay? And if it's okay, then switch over to agent mode and tell it, great, go implement this stuff.
[00:14:54] Joe Colantonio I didn't know that was an option. Why don't I see it? Oh, there it is. Ask. Cool.
[00:15:01] Paul Grossman Also, another thing: I mentioned those tokens. You can see where your tokens are. This big blue line here is telling me that we have about 50% of the tokens left.
[00:15:11] Joe Colantonio How'd you get there, Paul? I'm trying to follow along on my machine.
[00:15:17] Paul Grossman Not a problem. There's a little Copilot head in the lower right-hand corner out there. You just hover over it and it'll come up and tell you what your premium requests are, and that's telling you how many of your 300 tokens are left. And you can see this is going to reset either today or tomorrow, right on Halloween. If it gets all the way out to the end of the line — like I said, the first month I used the entire thing, and then what happened was interesting: it automatically just switched over to GPT-4.0. Now, to be fair, I can describe these guys. These guys here are like a combination of Sherlock Holmes and Albert Einstein; they are quick and fast, and they really know stuff better than I do. Then you've got ChatGPT, which is pretty much like Albert Einstein, except he's a little drunk and needs to take a little sleep on the bar for a while. That's what you get when you're down to zero tokens, I suppose. By the way, on your side, I bet you see more than just three when you're looking at that. Do you get about five or seven of those? Just kind of curious if you see that.
[00:16:29] Joe Colantonio Oh yeah, I see 4.5 Sonnet, 4 Sonnet, GPT-5, Cheetah? I don't even know what Cheetah is. Code Supernova?
[00:16:37] Paul Grossman Those are all the new LLMs. And I'm going to tell you honestly, I don't know why I don't see those on my side. I can tell you how you can see them. I'm logged in under my company account up here.
[00:16:53] Joe Colantonio I do have the $200 plan, I don't know if that matters.
[00:16:58] Paul Grossman That might be it, but it seems to me if I log out and log in under another account, this resets and then it'll show me all of the different agents that I've got available.
[00:17:08] Joe Colantonio And I use auto because I'm lazy, but I like how you actually know each model and do the thinking about which one's best for what you need to do.
[00:17:19] Paul Grossman I started off with Claude, and then I used Gemini and GPT. And like I said, sometimes these guys fall asleep doing stuff, and I have to say, hey, wake up, and it'll go, oh yeah, I'm sorry, I forgot I was doing something for you. One of the other downsides I found with this — you'll notice it in my output. Let me go back to my output over here. Let's see, here's some output.
[00:17:43] Joe Colantonio I changed it to red.
[00:17:46] Paul Grossman Yeah, it'll put it in — I've already rewritten it; I just wanted to do a demo of what I did over here. I do have a lot more stuff in there, but it's also putting in emojis, things like an emoji to say this failed, and a check mark. The problem I had is that when I initially put that in there, I said, hey, listen, I've got a custom function that will automatically put these emojis in here, and the AI went, oh, you want emojis everywhere you use that statement. I'm like, no, no, no, it's already doing it, you don't need to add more. That's one downside I don't like: it's kind of polluted my code with emojis, they're all over the place out there. That's my custom log, my custom logger. I'll even go and take a look at it.
[00:18:36] Joe Colantonio One thing, Paul, so you've been doing this for years, you mentioned you create a custom logger everywhere. How helpful was AI to do this?
[00:18:45] Paul Grossman Oh God, it made it so fast. I guess the great thing is that I can sit there and say, I want it to do this, I want it to do that, can you change it to this? And it will go through that and add the features. And I can now just imagine, what else do I need this to do, and the AI will just kind of create it and implement it in there. I would say, though, you and I years ago worked with QTP and UFT. One of the things I had to be careful about there was sending information out to the console log — the good thing was the console log was infinite, it never got to the end of anything; the downside was the more you sent to it, the more it slowed down the automation framework, and because it was working in Basic, it was already kind of slow to start off with, so I had to be careful about how much I actually sent to it. Playwright is so fast with that information, it's not something I have to be all that concerned about.
[00:19:46] Joe Colantonio Breakpoints.
[00:19:46] Paul Grossman Yes, custom breakpoints. So this came out of another issue. So here's all my test cases over here. My issue is that some of these are using the exact same data record, and sometimes they fight with each other to decide which one they want to run. I'm going to go and execute this and let's see, you might see something kind of cool. I'm running six tests with six workers, by the way, I can run up to 9. I was getting up to about 12 before my system kind of went, forget this.
[00:20:24] Joe Colantonio How'd you do that? Is that a config file?
[00:20:27] Paul Grossman Oh yes, that's in the Playwright config. In fact, I can ask it — I can go and say, show me the Playwright config. So if I'm looking for — oh boy, I started everything over there. We'll get back to that in a second. This, you can see, is actually running all the browsers at the same time and chugging away, but it's actually moving them all over to different areas. We'll get to that in a second. So basically I said, hey, I don't want these all to run at the same time. I want the ability to lock the system, because I have a problem where multiple test cases are running and they're bumping into each other — I know there's a term that comes to mind and everybody's thinking it: that's a collision issue. I want some of my test cases to run in serial, and the AI said, well, what if I just put up this little block over here and write a file that says this test case is locked, currently we're running this test case, this other one is waiting for it to complete, and it'll just sit there and spin. And I went, that's kind of cool. I didn't have to write anything for it. So I can take some of my test cases and tell them to run in serial, even though the suite is running the test cases in parallel.
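The worker count itself is a standard Playwright config setting (for example `workers: 6` in playwright.config.ts). The file-based lock Paul describes might look roughly like this TypeScript sketch — the lock file name, polling interval, and timeout are illustrative assumptions, not the project's real implementation:

```typescript
// Minimal sketch of a file-based lock so tests that share a data record run one
// at a time, even while the rest of the suite keeps running in parallel.
import * as fs from 'fs';
import * as path from 'path';

const LOCK_FILE = path.join(__dirname, '.shared-record.lock');

export async function acquireLock(testName: string, timeoutMs = 120_000): Promise<void> {
  const start = Date.now();
  while (Date.now() - start < timeoutMs) {
    try {
      // The 'wx' flag fails if the file already exists, so only one worker wins the lock.
      fs.writeFileSync(LOCK_FILE, testName, { flag: 'wx' });
      return;
    } catch {
      // Another test holds the lock: report who, then spin and retry.
      const holder = fs.existsSync(LOCK_FILE) ? fs.readFileSync(LOCK_FILE, 'utf8') : 'unknown';
      console.log(`${testName} is waiting for the lock held by ${holder}`);
      await new Promise((resolve) => setTimeout(resolve, 500));
    }
  }
  throw new Error(`${testName} timed out waiting for the shared-record lock`);
}

export function releaseLock(): void {
  if (fs.existsSync(LOCK_FILE)) fs.unlinkSync(LOCK_FILE);
}
```

A test that touches the shared data record would call acquireLock at the start and releaseLock in a finally block (or an afterEach hook), so only those tests serialize while the rest of the suite keeps its parallel workers.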
[00:21:44] Joe Colantonio AI recommended you to do it this way or you told it?
[00:21:49] Paul Grossman No, I came up with that idea; I just asked it how to implement it. And one of the points of that is that you can run the whole suite over here, or you can run in debug mode, but debug by default runs everything in serial, and it's hard to debug something that's giving you a collision issue if you're running in serial, because it doesn't happen — they're not running in parallel. That's one of the things I injected in there: having these test cases sit around and wait for the previous one to complete even though the suite is running in parallel. That was just one thing I asked it to do, and I was like, oh, that's a cool feature. Now, all of this is actually doing one of the trickiest things. I was talking to someone over at your conference, a guy I worked with before, and he said the reason Playwright is beating Selenium is that the ability to find elements is so much easier, and the smart wait for pages to be ready is so much better. And I said, that's cool, but that's not the reason I went to it. I always have to write a custom function that will wait for the page to be ready. And one of the things I found — let me get back over here; I think one of these guys might still be chugging away — one of the things these test cases had to do was pick up on a toast message. You ever have an issue where you're looking for a toast message? How long are they out there — about three to five seconds? And my problem was that my custom wait behind the scenes would sit there and identify that toast message: oh, a toast message came out, so the page has changed, I'm going to wait a little bit longer — until the toast message essentially went away. And then the next step would be to verify a toast message appeared, and I'd missed the whole thing. So the first thing I'm usually looking for is whether the browser actually closed and just exited out, and then I look to see if this multi-alert, which is the name I'm using to identify a toast message, is there — and if it is, it immediately exits out and says, forget it, I'm out. The next step is probably looking for that toast message. After that, essentially the one trick that I've got — and again, this is in my book — is I count up how many elements are on the page, then it does my one and only hard-coded wait, which is 333 milliseconds, and then it counts again, and it counts again. If it gets the same number three times, it says, you know what, it's been three quarters of a second and the page hasn't updated, it's probably ready to go. If the numbers change, it just resets and starts counting again. It's a dynamic wait for the page to be ready.
[00:24:48] Joe Colantonio Why 333? To avoid 666?
[00:24:53] Paul Grossman Actually, it's because I want to check it three times per second. Perceptually, when you're showing a demo, if you see a page sit there for one second, it seems slow; if it's up there for three quarters of a second, perceptually people say, wow, this is fast. So if it decides the page is built after three quarters of a second, hey, it's fast. If it's been sitting there for a while, you see something spinning: it's dynamically waiting, to the point that you don't have to put hard-coded waits in there. And all I do is take this wait-for-page-load and stick it into my click, so every time I do a click, I wait for the page load — every time I call that custom click. Last thing I'll mention on this guy: at the very end, I also take a look at how many elements are actually on the page. If there are fewer elements than, let's say, 70, it means I've got a blank page — something went wrong — and it basically says cycle through again, go hit refresh on the browser, and see if you get more elements, which eliminates even more maintenance and flaky tests as you go. There's the object count, looking for basically 21: over there, if it's less than 21, the page was blank, something went horribly wrong, so just go reload it. For the rest of this, I'm going to switch over to a pre-recorded video, because that's a little bit easier to jump through and get to, so give me one moment. So essentially, we were already showing off that we've got this lock over here. There are a few other things we can do. Over here, I'm looking for spinners, and I've got this code that goes and says, look for something that says circular, and if you see circular, that means there's a spinner. It's essentially highlighting it, saying, hey, why is this thing waiting? Because there's a spinner on the page, and I want to visually see it. Obviously, I don't do that when I'm running headless — it doesn't matter — but it does sit there and give a visual indication that things are going well. What else do I have in here? Yeah, there's another spinner. This one's interesting. When I did the recording here, I just kept getting browsers on top of it, but one of the biggest ways I think my automation framework is really benefiting my company is finding when deletions are taking too long. The deletion response should take about three seconds and come back and say, hey, everything's good. But what I'm finding is that some deletions are taking up to 30, 60 seconds. That's more about addressing a good metric, an SLA — service level agreement: you should have a response within five seconds, and if not, it should be reporting out, saying, hey, this took about 7, 10 seconds. Also, some of my test cases actually expect that there may be a two-minute wait, and I can customize them and say, okay, you guys wait around two minutes — I know the timeout is 30 seconds, but on these cases you want to wait around two minutes, because we know there's a wait time on that. And I think that's really helping my ROI: we're saying, okay, these are some things that we're identifying as we go along here. Let me see what my next one is. There was an example right there of a blank page. It just didn't build anything. It would sit there, detect it, and say, hey, there's only seven objects on here — let's recycle that.
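As a rough sketch of that stable-element-count wait — the 333 ms poll, the three matching counts, and the blank-page threshold follow Paul's description, while the function name and loop bounds are assumptions:

```typescript
// Minimal sketch of a dynamic wait: poll the total element count every 333 ms
// and treat the same count three times in a row as "the page has settled".
// If the settled page has too few elements, assume it came up blank and reload.
import { Page } from '@playwright/test';

export async function waitForPageLoad(
  page: Page,
  minElements = 21, // below this, the page is considered blank
  maxCycles = 60,   // safety cap so the loop cannot spin forever
): Promise<void> {
  let previousCount = -1;
  let repeats = 0;

  for (let cycle = 0; cycle < maxCycles; cycle++) {
    const count = await page.locator('*').count(); // everything currently in the DOM

    if (count === previousCount) {
      repeats++;
      // Two repeats = the same count three times in a row (two 333 ms waits,
      // roughly the three quarters of a second mentioned above).
      if (repeats >= 2) {
        if (count < minElements) {
          // Blank page: reload and start the count over.
          await page.reload();
          previousCount = -1;
          repeats = 0;
          continue;
        }
        return; // page looks stable and populated
      }
    } else {
      repeats = 0; // the page changed, start over
    }

    previousCount = count;
    await page.waitForTimeout(333); // the one and only hard-coded wait
  }
}
```

Calling this from inside a custom click wrapper is what produces the "every click waits for the page" behavior described next.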
And then this next one coming up is another feature over here. I was mentioning that wait-for-page-sync; inside of that there's one extra thing, which is that it's looking for the word error to appear on the screen. If it ever appears, it goes and highlights it and says, hey, there's a problem. By the way, this is kind of a toast message, so the check for the word error comes out first and says, I'm going to go and highlight this. And essentially it's highlighting it: I was checking for the word error, and it said, I found the word error, highlighted in green. But it's also giving you this little orange-red thing saying, hey, this is a problem, we might want to get hold of this and fix it. This is one of the major things I've got with AI: again, for all of these, I've been vibe coding quite a bit, just trying to get this functionality into my framework and show that I can deliver a good ROI for the company and also do other things. This one over here, I'm highlighting it: we're expecting to see these "name cannot contain" errors. In this example, I'm essentially going through and checking to see if there's an invalid character being added in here, and do we get a message that says the first name cannot contain special characters — and this is one of them. Again, I asked the AI, can you please write me a highlight function that will be up there for a quarter of a second — or 333 milliseconds — just long enough that I can visually see it but not slow down the entire automation framework. That's a lot of the cool stuff I've been doing. As I said, I've got wrappers — I always do wrappers for everything. I never use the built-in click or fill or anything like that directly; those are only listed once, in my custom click and my custom fill, and all sorts of extra stuff is built into those wrappers as I chug along.
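A minimal sketch of what that brief highlight and the custom click wrapper could look like, assuming the waitForPageLoad helper sketched above; the outline styling, helper names, and the error-text check are illustrative, not the framework's actual code:

```typescript
// Minimal sketch: flash an outline around an element for ~333 ms, flag any visible
// "error" text, and wrap click so every click also waits for the page to settle.
import { Locator, Page } from '@playwright/test';
import { waitForPageLoad } from './waitForPageLoad'; // the dynamic wait sketched earlier (hypothetical path)

export async function highlight(target: Locator, durationMs = 333): Promise<void> {
  // Temporarily outline the element in the browser, then restore its original style.
  await target.evaluate((el, ms) => {
    const element = el as HTMLElement;
    const original = element.style.outline;
    element.style.outline = '3px solid orange';
    setTimeout(() => { element.style.outline = original; }, ms);
  }, durationMs);
  await target.page().waitForTimeout(durationMs); // visible just long enough to see
}

export async function flagErrorText(page: Page): Promise<void> {
  const matches = page.getByText(/error/i);
  if ((await matches.count()) > 0) {
    // Briefly highlight the first match so a failing toast or banner is easy to spot.
    await highlight(matches.first());
  }
}

export async function customClick(page: Page, selector: string): Promise<void> {
  const target = page.locator(selector);
  await highlight(target);     // show which element is about to be clicked
  await target.click();
  await waitForPageLoad(page); // every click waits for the page to settle
  await flagErrorText(page);   // surface any on-screen "error" text right away
}
```

The wrapper is the single place the built-in click appears, which is the point about never calling the implicit actions directly.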
[00:30:55] Joe Colantonio What about AutoIt? I don't even know what you call it. What do you call it, Paul?
[00:31:01] Paul Grossman Yes, all right. So at the very beginning of the video over here, I showed you that I was essentially running all these browsers, but the browsers are jumping all over the screen, so it gives a much better demo instead of them all lying on top of each other. How do I do that? The problem is that Playwright — and I don't know of any other tool that can do it — does not have the ability to move browsers to a different location. But for all my career, 25 years, I've been using this other tool out here, which is AutoIt, and that's AutoIt script over here. And I basically said, can you write a little script that will look for a browser — and the great thing about AutoIt is that it used to be used by network administrators who wanted to add a whole bunch of brand new employees but didn't want to go through the whole manual process; they could automate it with AutoIt. Two things have happened in the last, I guess, six months to two years. One, this code is all written by AI. I just went and asked over here and said, hey, can you write something in AutoIt script to move browsers left and right, as long as you see some sort of text on it, like "sign in to your account" — that's how it identifies that there's a brand new browser — and make sure that it loops and keeps sitting in the background. And it is actually sitting there. This is what it is: this is the window monitor that's sitting here waiting for some browser titled "sign in to your account" to come up. And when this is running, it sits there and randomly moves browsers all around.
[00:32:48] Joe Colantonio It's like a poltergeist. Poltergeists on your machine.
[00:32:52] Paul Grossman Yes, it's a poltergeist. And the thing about AutoIt is that the last update it had was in 2020, and I figured it was dead — a wonderful tool. And just last month they had an announcement saying there's been a whole new update for AutoIt script. So the project itself is still alive; the developers behind it, I guess, just took a four- or five-year break. And it's back, and I can understand why, because I don't have to sit there and try to write all this stuff myself and figure it out — you can just have the AI do it. I did this thing in about five minutes and it got it right pretty much on the first shot; I had to do one or two extra things to get it to do exactly what I wanted. But there are so many other things you could do with this, because one thing I'm thinking about is the console log. What if the console log didn't go out here? What if I opened up a pipeline to a window — multiple windows open, one for each of my test cases — and then directed those messages to those? That way I don't have to use any of the internal stuff in my browser or my IDE over here to output that information.
[00:34:11] Joe Colantonio How hard is it to deploy if you need to run this on another machine?
[00:34:17] Paul Grossman Not at all, not hard at all. So to test it, you click on Go and it'll run directly inside this environment. But if you go and select Build over here, it will create an executable out of this code, which looks very similar to Basic. And now I've got an executable that's running — this one that I showed you over here. This is an EXE. I take this EXE and throw it into my project over here, and now it's available to anyone who checks out the project; they can run that. I could probably go to the point where the framework goes and launches it first and says, okay, put it up there. If we're running headed, that's cool. If it's headless, who cares? Nobody cares if it's moving from place to place. Last thing I wanted to mention as we get to the end over here — and I heard this also from Ben — it's like, what are some of the cool things you can do in automation with agentic coding? And the coolest thing is, let's say I have a brand new project. You can ask the agentic AI to create a new Selenium project — I'll be switching the framework. And if I do that and I'm in agent mode, I've got this version control; it doesn't matter what I do, I can always undo it.
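Paul's idea of having the framework launch the compiled helper itself could be wired into Playwright's global setup; here's a minimal TypeScript sketch under that assumption (the window-mover.exe name, the tools folder, and the HEADED environment variable are all hypothetical):

```typescript
// global-setup.ts — minimal sketch: when tests run headed, start the compiled
// AutoIt window-mover in the background so visible browsers get spread around.
import { spawn } from 'child_process';
import * as path from 'path';
import type { FullConfig } from '@playwright/test';

export default async function globalSetup(_config: FullConfig): Promise<void> {
  // Skip entirely for headless runs; nobody sees the windows anyway.
  if (process.env.HEADED !== 'true') return;

  const exePath = path.join(__dirname, 'tools', 'window-mover.exe');

  // Detach the helper so it keeps looping in the background without blocking the tests.
  const mover = spawn(exePath, [], { detached: true, stdio: 'ignore' });
  mover.unref();
}
```

playwright.config.ts would then point its globalSetup option at this file; stopping the helper again is left to the EXE itself or a matching global teardown.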
[00:35:43] Joe Colantonio That's a tip: always have version control.
[00:35:47] Paul Grossman Always use version control. Oh, in fact, yes, I'm sorry, there was something I wanted to mention. I've got my version control over here. This is what I use — I like a GUI tool for my version control, so this is GitHub Desktop. And one of the things I try to do is that when I've got something working really well, I will actually document how many test cases passed and how long it took to get that done. That tells me this was a good running spot. And then later on, you can see, hey, here's another 51 passing; hey, another 61 passing, and it's doing it in 22 minutes. And as I keep going, you can see 71 passed in 15 minutes. I know those points are good, running versions of the framework. It's asking to open an empty folder to continue — I'll hit allow, and it just did it. It just went over and...
[00:36:53] Joe Colantonio You're doing this in an existing project, and it knows enough to start a new one? Very cool.
[00:37:00] Paul Grossman I probably should have — normally I would just do a brand new one, just go over to File and say New — but this is just for time. It does this, and then of course the other thing it'll do — I'll cancel all of this over here — is go to your console and run all the executions you need automatically to set everything up. And if I go back far enough, I'm probably saying...
[00:37:26] Joe Colantonio Whoa, whoa, whoa. Is this pouring this over to Selenium then from Playwright?
[00:37:32] Paul Grossman It could.
[00:37:32] Joe Colantonio It's not doing that, it could, that's crazy.
[00:37:34] Paul Grossman It could, it could do that. You could just say, hey, take this and convert the language from JavaScript to Java or to Python. In fact, one of my new plans is to have new versions of my book out there, but do them in Cypress, Selenium, whatever the next new thing is, because not everybody's working in WebDriverIO. It's a wonderful tool and it's even getting better. Every tool is getting better because of this.
[00:38:01] Joe Colantonio All right, Paul, before we go, is there one thing you'd like someone to take away from right now that they can implement right away to help them with their automation testing efforts? And what's the best way to find or contact you?
[00:38:12] Paul Grossman Absolutely. The best thing you can do is use AI to document your code. That's the one thing — I didn't know what this guy was doing, and I basically said, can you go through this code and document it like I'm a 12-year-old? Use it to add your headers, your footers, and all sorts of stuff like that. Ways that you can find me — oh, I'm all over the place. Look on any social media site, look for the Dark Arts Wizard, and you'll find me there. I am on X, I am on LinkedIn, I'm on YouTube — you can go to YouTube/PaulGrossmanthedarkartswizard and see all sorts of videos of everything that I do and show off. You can email me at thedarkartswizard@gmail.com — don't forget to add the "the"; if you just say darkartswizard, it's going to somebody else, and I don't know who that guy is. Where else can you find me? My testing website, candymapper.com, is backed by GoDaddy — you can go practice your skills as an automation engineer against a GoDaddy-based website. And if you go to candymapper.net, you'll find a version that is hosted by Wix. You can even see if your test case runs on one, then switch the entire underlying environment and see if it runs on the second one. They're essentially identical.
[00:39:35] Joe Colantonio Awesome. And you can get all the links to these tricks and treats down below. Thanks again for your automation awesomeness. For links to everything we covered in this episode, head on over to testguild.com/a564. And if the show has helped you in any way, why not rate it and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
[00:40:16] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com where you become part of our elite circle driving innovation, software testing, and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.
[00:40:59] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.