Speaker Pitches (Vote Now) Automation Guild 2024 with Joe Colantonio

By Test Guild

About This Episode:

In this Test Guild Automation Podcast episode, we’re pulling back the curtains to unveil a community-driven spectacle, setting the stage for the 8th Annual Online Automation Guild event, slated for February 5-9, 2024.

Each year, we work with our Guild members to carve out a space where test automation technology, innovation, and community spirit converge. We listen intently, gathering insights and struggles, to tailor an experience that's not just an event but a gathering for learning, growth, and connection.

This episode is a call to all Guild members!

We’ve sifted through the Guild survey results for the topics you want at the next event, and now it's time to vote!

With over 100 Speaker Pitch videos submitted, each echoing the pulse of our community’s needs, we’re entrusting you, our valued members, with the power to shape our speaker lineup.

So, are you ready to step into a realm where your voice isn’t just heard but celebrated? Dive into this episode, explore the possibilities, and cast your vote to etch your imprint on the Automation Guild 2024.

🗳️ Vote Now: testguild.com/vote

Exclusive Sponsor

Discover TestGuild – a vibrant community of over 34,000 of the world's most innovative and dedicated automation testers. This dynamic collective is at the forefront of the industry, curating and sharing the most effective tools, cutting-edge software, profound knowledge, and unparalleled services specifically for test automation.

We believe in collaboration and value the power of collective knowledge. If you're as passionate about automation testing as we are and have a solution, tool, or service that can enhance the skills of our members or address a critical problem, we want to hear from you.

Take the first step toward transforming your future and our community's. Check out our done-for-you awareness and lead generation packages, and let's explore the awesome possibilities together.

About Joe Colantonio


Hi. I’m Joe Colantonio, founder of TestGuild – a dedicated independent resource of actionable & real-world technical advice (blog, video tutorials, podcasts, and online conferences) to help improve your DevOps automation, performance, and security testing efforts.

Connect with Joe Colantonio

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show, and I read each and every one of them.

[00:00:04] Joe Colantonio Well, hey, welcome back to another exciting episode of The Test Guild Automation Podcast. I'm your host, Joe. And today, I have something special lined up for you. We're on the brink of the 8th Annual Online Automation Guild event, happening from February 5th to the 9th, 2024. And as always, we are committed to tailoring this event to meet your specific needs and challenges. So every year I reach out to previous attendees and ask them what they're currently struggling with and the topics they're eager to explore. It's a tradition here at The Guild that ensures our sessions are not just informative but also laser-focused on addressing the real, pressing issues you all are facing. It's all about community-driven topics designed to empower the Guild. So I've taken your valuable feedback and put out a call for speakers to create sessions around the topics you told me you want. And now we're at the next exciting phase: voting. Yes, you heard it right. We need your help to select the sessions for this year's event. Remember, this is a community-driven initiative, and every vote counts in shaping an event that's tailored just for you. And this year, we've added a twist: our potential speakers have submitted speaker pitch videos, giving you a sneak peek into what's in store and why you should choose their session. We have over 100 incredible session pitches, and we'll listen to a few of them in this episode. We need you to vote on all your favorites. To do that, all you need to do is head on over to TestGuild.com/vote (I'll also have that link in the comments and in the show notes) and make your voice heard. Your participation is crucial to helping us put together the most impactful speaker lineup possible, ensuring the 8th Annual Online Automation Guild is not just an event but a transformative experience, like it has been the past 7 years we've hosted it. So are you ready to dive in? Let's explore some possible speaker sessions together. And remember, your vote shapes the future of the Automation Guild. Let's make this year's event the best one yet. Vote now and have your voice heard: go to TestGuild.com/vote. I look forward to seeing your favorite sessions and your feedback as well.

[00:02:17] Daniel Reale Hi, my name is Dan Reale. I'm a lead engineer supporting multiple teams at Dun & Bradstreet. Currently, I specialize in web application testing, mainly for SaaS products. During a meeting for a major upcoming project, a senior developer asked me, "What is smoke testing?" Right then and there, I knew I had some explaining to do. And this is not to knock that developer by any means; he truly just did not understand what smoke testing was and why it was so important. I'd love to share that story in a little more detail and provide some insights into how a simple question and conversation led to more quality advocates on our team and more respect for QA and what QA does. I'd like to talk about what smoke testing is and where it originated. I'd also like to present a list of do's and don'ts for smoke testing based on my experience across different industries and products. We'll take a look at the steps needed to implement simple yet effective smoke tests, and you can leave with a playbook for your organization, whether you're not utilizing smoke testing today or, even if you are, with some things you might add to your smoke testing. I hope to see you all at The Guild.
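
To make Dan's playbook concrete, here's a minimal, hedged sketch of what a small smoke suite might look like, using Playwright Test in TypeScript. The URL, selectors, and endpoints are hypothetical placeholders, not taken from his talk:

```typescript
import { test, expect } from '@playwright/test';

// Smoke tests: a handful of fast, broad checks that confirm a build is
// stable enough to be worth deeper testing. Keep them few and quick.
const BASE_URL = 'https://app.example.com'; // hypothetical app

test('app is reachable and renders its shell', async ({ page }) => {
  await page.goto(BASE_URL);
  await expect(page.getByRole('navigation')).toBeVisible();
});

test('a user can reach the login page', async ({ page }) => {
  await page.goto(`${BASE_URL}/login`);
  await expect(page.getByRole('button', { name: 'Sign in' })).toBeVisible();
});

test('the API health endpoint responds', async ({ request }) => {
  const res = await request.get(`${BASE_URL}/health`);
  expect(res.ok()).toBeTruthy();
});
```

The point is breadth and speed: a few broad checks that gate deeper testing, not an exhaustive regression pass.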

[00:03:38] Karime Salomón Hi, everyone. My name is Karime Salomón. I am from Bolivia. And one of the reasons I think The Guild is going to love my session, or is going to want to see it, is the topic I proposed. The name is Load Testing Expedition: A Beginner's Quest for Practical Application. The main idea is to show people some real-world cases that I had the chance, or the challenge, to live through with some of my teams. Nowadays, after the pandemic, we've realized how important load testing is, because everything is online, everything is on the web, on the Internet, and in applications. I think that now, more than ever, load testing is really important, and if it's not important for everybody yet, it's going to be. So the main idea of this presentation is to talk not only about load testing fundamentals and not only about tools; of course, I'm going to mention both and the relationship between them. But the main point is to talk about how, if you understand the fundamentals, you can apply them so that you get better results and better analysis, and with your load tests not only find bugs but also get real value from your testing into your product. So, as I said before, it's not about concepts and it's not about tools; it's about applying real cases: which monitoring you need to have in order to define your scenarios, what you need to do before defining your load test, what kind of analysis you need to do after running it, when you can actually mark a result as accepted or not, and what kinds of bugs or problems you're going to identify and report. So that's the main idea. I hope everything is clear. And of course, if you have any questions, you can just reach out to me. Thank you very much. Bye bye.
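
As a hedged illustration of turning those fundamentals into practice, here's a tiny load test script for k6, written in TypeScript. The endpoint, user counts, and thresholds are hypothetical; the idea is that the pass/fail criteria Karime mentions can be encoded up front as thresholds:

```typescript
import http from 'k6/http';
import { check, sleep } from 'k6';

// Load scenario: 20 virtual users for 2 minutes against a hypothetical
// endpoint. Thresholds encode the acceptance criteria, so "accepted or
// not" becomes a decision the test itself reports.
export const options = {
  vus: 20,
  duration: '2m',
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests under 500 ms
    http_req_failed: ['rate<0.01'],   // error rate under 1%
  },
};

export default function () {
  const res = http.get('https://api.example.com/products'); // hypothetical
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```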

[00:05:47] Hey, Joe, thank you for the opportunity to present at the Test Guild conference, in case I'm selected. I want to quickly summarize what the topic of my presentation is going to be. I want to make a comparison between the Cypress and Playwright frameworks, but not just a comparison of the technical capabilities of both frameworks. I want to compare the context of framework application: how the differences between Playwright and Cypress can drive a company's decision to choose either the Playwright framework or the Cypress framework, depending on each framework's pros and cons, advantages and disadvantages. The second, bonus topic I want to present is the concept of user-facing locators. I remember you listed the hot topics, and one of them was how to handle dynamic locators and how to make tests more stable. The user-facing locators concept is related to that. This concept was introduced by Playwright first, and I want to show how to use it, and also how to use a similar concept in Cypress, in case someone uses Cypress and wants to follow the same pattern and approach: how to use user-visible elements to make locators more stable. So that's pretty much it. This will be the topic of my presentation, and I hope you guys are going to like it. Thank you.
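
Since the pitch centers on user-facing locators, here's a short sketch of what they look like in Playwright, with a comment showing a comparable style in Cypress via the community Testing Library plugin. The page URL and element names are hypothetical:

```typescript
import { test, expect } from '@playwright/test';

// User-facing locators target what a user actually perceives (role,
// accessible name, label, text) instead of brittle CSS/XPath paths.
test('submits the sign-up form', async ({ page }) => {
  await page.goto('https://example.com/signup'); // hypothetical page

  await page.getByLabel('Email').fill('user@example.com');
  await page.getByRole('button', { name: 'Sign up' }).click();
  await expect(page.getByText('Welcome')).toBeVisible();
});

// A similar style in Cypress, using the community Testing Library
// plugin (@testing-library/cypress) for role/label-based queries:
//
//   cy.visit('/signup');
//   cy.findByLabelText('Email').type('user@example.com');
//   cy.findByRole('button', { name: /sign up/i }).click();
//   cy.findByText(/welcome/i).should('be.visible');
```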

[00:07:15] Anaïs van Asselt Hi there. My name is Anaïs van Asselt. My talk is about solving a universal pain point in bug management, and especially the frustration of allocating bugs to the right teams. That is why I'm excited to introduce La Cucaracha, an AI-driven bug management solution born from a hackathon. Through the GPT API, it automatically matches bug reports to the right teams and further streamlines the bug management process through integration with Slack, Jira, and Amplitude. Together, we will explore this solution and how GitHub Copilot can help us during its development. Learn actionable techniques for efficient bug management and get inspired to boost productivity through AI. See you at the event.
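
La Cucaracha's internals aren't shown in the pitch, so as a loose, generic sketch only, here's how matching a bug report to a team with an LLM might look in TypeScript using the openai package. The team names and routing fallback are hypothetical:

```typescript
import OpenAI from 'openai';

// Generic sketch of the technique: ask an LLM to route a bug report to
// one of a fixed set of teams. Team names are hypothetical placeholders.
const TEAMS = ['checkout', 'search', 'payments', 'platform'] as const;

async function routeBugReport(report: string): Promise<string> {
  const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content: `Classify the bug report into exactly one team: ${TEAMS.join(', ')}. Reply with the team name only.`,
      },
      { role: 'user', content: report },
    ],
  });
  const team = completion.choices[0].message.content?.trim() ?? 'platform';
  // A real integration would now create or route a Jira ticket and post
  // to the team's Slack channel, as the talk describes.
  return TEAMS.includes(team as (typeof TEAMS)[number]) ? team : 'platform';
}

// Example (hypothetical report):
//   await routeBugReport('Checkout button throws a 500 error on click');
```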

[00:08:02] Agustin Pequeno Hello, everyone. My name is Agustin Pequeno. I am currently working as a QA director at Merkle Danmark, and I wanted to give this talk about a cool library that I have been developing for the last year and a half called Ruby Radar. The idea behind Ruby Radar is to kick off your automation with an array of open-source tools. So from the get-go, you have a free solution that you can use for your own projects or for customer-based projects, for playing around a little with automation, and for learning all about it. And it's Ruby-based because I pretty much love Ruby. I think it enhances my automation skills, and it makes coding more fun and more accessible. I really think my talk will appeal to everybody who wants to get into automation, to everybody who is already into automation and wants to try something different, and maybe to that niche of Ruby automation testers out there who want to find alternative solutions. So that's pretty much my pitch. Thank you so much for your time. Have a lovely day. Bye bye.

[00:09:16] Ashwini Lalit Hi all. I'm going to talk about the art of BDD scenario writing. When we started with behavior-driven development at NimbleWork, we initially read about it and discovered that it has so much to do with Given-When-Then scenarios. Eventually, we started writing these scenarios collaboratively, with the developer, tester, and product owner working together. It has been a long journey, and it evolved over a period of time. We learned from a lot of hardships, then put down our own best practices, and there's been no looking back. What dawned on us is that the practice of BDD can assist in developing software that aligns with business goals, and it's a great enabler of agility in today's rapidly changing technical landscape. So the goal of my session is to elevate BDD practice and help teams shift from siloed automation initiatives to creating living documentation of application behavior that aligns with business objectives. My name is Ashwini Lalit. I head quality engineering at NimbleWork, and I enjoy training teams on BDD, continuous improvement, continuous testing, and bringing in the right balance between upstream and downstream quality.
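
To ground the idea, here's a hedged example of the kind of collaboratively written scenario the session is about: a Gherkin scenario (shown as a comment) with matching cucumber-js step definitions in TypeScript. The domain and wording are invented for illustration, not from NimbleWork:

```typescript
// Feature file (Gherkin), written together by the product owner,
// developer, and tester in business language rather than UI steps:
//
//   Scenario: A lapsed member is offered a renewal discount
//     Given a member whose subscription expired 10 days ago
//     When they sign in
//     Then they are offered a 20% renewal discount
//
// Matching cucumber-js step definitions:
import { Given, When, Then } from '@cucumber/cucumber';
import assert from 'node:assert';

let member: { expiredDaysAgo: number };
let offer: { discountPercent: number } | undefined;

Given('a member whose subscription expired {int} days ago', (days: number) => {
  member = { expiredDaysAgo: days };
});

When('they sign in', () => {
  // Hypothetical domain logic; a real suite would drive the app here.
  offer = member.expiredDaysAgo <= 30 ? { discountPercent: 20 } : undefined;
});

Then('they are offered a {int}% renewal discount', (pct: number) => {
  assert.strictEqual(offer?.discountPercent, pct);
});
```

The scenario doubles as living documentation: it states the business rule in plain language, and the automation keeps it honest.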

[00:10:48] John Chesshir Hello. My name is John Chesshir, and I am a senior software engineer in test, or SSET, for Aspire Software, a ServiceTitan company that specializes in producing software that fits the end-to-end needs of landscaping, snow plowing, and janitorial companies. As the senior SSET, I have owned the process of building a test automation architecture from the ground up for our flagship product, Aspire Landscape. For those of you who regularly automate regression tests, I'll bet you've probably come across the following scenario multiple times throughout your career. You're creating automated tests for a portion of a legacy system, and you write a new test that asserts what the software under test, or SUT, should do. But you find that the test fails. So you go to the software engineer, or SE, show him the code, and find to your delight that he confirms it's a bug. So you write up a bug report and put it on the backlog, but then your bubble gets burst when you find out that although it is a legitimate bug, it's not a high priority to get fixed. Sound familiar? What do you do with your brand new, legitimate, but failing test? Do you comment out the assert that failed? Do you leave the new test in an unmerged branch until the bug is fixed? Or do you go ahead and merge in the failing test? All of these strategies have serious problems. I think there's a better approach: consider changing your new failing test into what I like to call a progression test. We all know what regression tests are: tests that tell you when the SUT's functionality has regressed. But what about a test that points out when changes to the SUT have brought about progress? Here's how it works. You have test code that asserts what the SUT should do, and you also know what the SUT is doing now. Since this bad functionality is deemed acceptable for the time being, you write a little more code to expect that bad functionality and pass the test as long as that assert checks out. You also provide a link to your bug report, and then you tuck your legitimate asserts into another method that will never get called until the SUT changes. Weeks or maybe months later, an SE will find some reason to change the way the SUT works, which will cause your assertion on the bad functionality to fail. As soon as it does, your progression test code will immediately run those legitimate asserts that you had tucked away to see if the behavior has changed for the better. Whether those assertions pass or fail, the test as a whole will fail, thus immediately bringing your progression test to the SE's attention right while she is thinking about the changes she just made. The failure notification will give her information indicating whether her change was inadequate to solve the problem you uncovered, or whether she just fixed the bug, perhaps in the process of fixing something else. The failure message will also point her to your bug report, which she can pick up and add to the list of things she accomplished that day. I hope you will vote for me to speak at the Automation Guild Online Conference 2024 so I can tell you more about the benefits and potential pitfalls of converting your failing or commented-out regression tests into live, passing progression tests. Once again, I'm John Chesshir with Aspire Software. God bless.
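
The mechanics John describes translate fairly directly into code. Here's a minimal, hedged sketch of a progression test in TypeScript with Jest; the calculateTotal function and BUG-1234 are hypothetical placeholders, not from his talk:

```typescript
import { test, expect } from '@jest/globals';

// Hypothetical SUT: calculateTotal currently applies tax twice
// (tracked as hypothetical bug BUG-1234); the correct total is 110.
function calculateTotal(items: { price: number }[], taxRate: number): number {
  const subtotal = items.reduce((sum, i) => sum + i.price, 0);
  return subtotal * (1 + taxRate) * (1 + taxRate); // the known bug
}

// The legitimate asserts, tucked away so they run only once the
// known-bad behavior changes.
function assertCorrectBehavior(total: number): void {
  expect(total).toBeCloseTo(110); // tax applied exactly once
}

test('progression: cart total applies tax once (see BUG-1234)', () => {
  const total = calculateTotal([{ price: 100 }], 0.1);
  try {
    // Pin today's known-bad behavior so the suite stays green for now.
    expect(total).toBeCloseTo(121);
  } catch {
    // The bad behavior changed: run the real asserts so the failure
    // tells the engineer whether BUG-1234 is now fixed...
    assertCorrectBehavior(total);
    // ...and fail the test either way, so the pinned expectation and
    // the bug report get revisited while the change is fresh.
    throw new Error('Behavior changed! BUG-1234 may be fixed; update this test.');
  }
});
```

Either branch of the catch fails the test, matching the pitch: whether the tucked-away assertions pass or fail, the engineer gets pulled in while the change is still on her mind.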

[00:14:51] Hi, I'm Swapnil Kotwal. I'm a lead engineer at SailPoint Technologies. Basically, I work on building the automation framework and many other things; my day-to-day job is mostly dealing with the CI/CD pipeline. Our automation team is also diversified into security, performance, and regular functional automation. I'm currently leading the performance team, and I'm planning to talk more about observability and how it is different from the traditional approaches. Observability, as a term, is not very new in the SaaS world, but I'm going to explain OpenTelemetry: what OpenTelemetry is and how it is different from other observability tools like Datadog, Prometheus, Grafana, and the like.
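
For context on the distinction Swapnil draws: OpenTelemetry is a vendor-neutral API and SDK for emitting traces, metrics, and logs, while tools like Datadog and Grafana are backends that can consume that data. Here's a minimal tracing sketch in TypeScript, with hypothetical service and span names:

```typescript
import { trace, SpanStatusCode } from '@opentelemetry/api';

// The code depends only on the vendor-neutral OpenTelemetry API; which
// backend receives the spans (Datadog, Grafana Tempo, etc.) is decided
// by SDK/exporter configuration elsewhere, not here.
const tracer = trace.getTracer('checkout-service'); // hypothetical name

async function processOrder(orderId: string): Promise<void> {
  await tracer.startActiveSpan('processOrder', async (span) => {
    try {
      span.setAttribute('order.id', orderId);
      // ... business logic would run here ...
      span.setStatus({ code: SpanStatusCode.OK });
    } catch (err) {
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}

// Example: await processOrder('order-123');
```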

[00:15:55] Robert Lukenbill All right, we're recording. I'm Robert Lukenbill from Scorpion, LLC. I'm actually in Kansas City; our company is out of Valencia, California. The quick pitch for why you'd want to hear what Scorpion is doing on quality assurance is really quick: we went from a very large manual QA operation down to just 3 automation engineers who ran all of the quality assurance for our department. The change happened in January of 2023. In 2022, we had around 15 or 16 manual QAs. And the decision we made in management to make the paradigm shift for our company was simply that we were spending too many man-hours on quality assurance and not receiving enough return on our investment. We were still rolling out a lot of bugs and, unfortunately, putting out a lot of expense maintaining the application. So we made this paradigm shift in January, and we changed the entire methodology of how we did QA. Using Cypress and that integration allowed us to open up so many more windows, save over 160 person-hours a month, and lower our bug count significantly with automation. And I'd love to share with you and your companies how we accomplished this.

[00:17:23] Rahul Parwal Hi, I'm Rahul Parwal, and at this year's Automation Guild conference, I will talk about skyrocketing your test coverage using model-based testing. Model-based testing is a very interesting and fascinating test design approach that uses the power of models, visual intelligence, and mathematical graph computations to derive and generate test cases in a moment. It supports generating tests for different coverage levels, from small sanity checks to edge cases to even exhaustive testing with complete flow and path coverage. How do you do that? My model-based testing session is about exactly that: how you use model-based testing to generate tests within a second, and how you use those new tests to update your automation with the right impact analysis and the right direction on where you have to execute and make major adaptations in your tests. This talk will cover all these factors, and there are many more practical examples that I'll be covering. Stay tuned. Thank you. Bye.
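
As a rough, hedged sketch of the core idea (not Rahul's tooling), here's a tiny model of a login flow as a transition graph in TypeScript, with a generator that walks the graph to produce test sequences; deeper walks approach the exhaustive path coverage he mentions:

```typescript
// Describe the app as a graph of states and transitions, then generate
// test sequences from the model instead of hand-writing each case.
// The login model here is hypothetical.
type Transition = { from: string; action: string; to: string };

const model: Transition[] = [
  { from: 'loggedOut', action: 'open login page', to: 'loginForm' },
  { from: 'loginForm', action: 'submit valid creds', to: 'dashboard' },
  { from: 'loginForm', action: 'submit bad creds', to: 'loginError' },
  { from: 'loginError', action: 'retry', to: 'loginForm' },
  { from: 'dashboard', action: 'log out', to: 'loggedOut' },
];

// Generate all action paths from a start state up to a given depth;
// each path is a candidate test case (deeper = closer to exhaustive).
function generatePaths(state: string, depth: number): string[][] {
  if (depth === 0) return [[]];
  const out: string[][] = [];
  for (const t of model.filter((t) => t.from === state)) {
    for (const rest of generatePaths(t.to, depth - 1)) {
      out.push([t.action, ...rest]);
    }
  }
  return out.length ? out : [[]]; // dead ends terminate the path early
}

// e.g. generatePaths('loggedOut', 3) yields sequences like:
//   open login page -> submit valid creds -> log out
//   open login page -> submit bad creds -> retry
generatePaths('loggedOut', 3).forEach((p) => console.log(p.join(' -> ')));
```

Changing the model regenerates the affected paths, which is the impact-analysis angle: the model tells you which sequences a change touches.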

[00:18:43] Benjamin Bischof Hello, everyone. My name is Benjamin Bischof, and I think you will really enjoy this session because I will show you that the Karate framework is much more than API testing. I will demonstrate how you can use it for your UI testing needs and why this could be a smart move in terms of onboarding and alignment of your tests. So it would be nice to see you all in this session.

[00:19:13] Joe Colantonio Thank you, all potential speakers, for your automation awesomeness. For links to everything of value we covered in this episode, head on over to testguild.com/a468. And while you're there, make sure to have your voice heard by clicking on the vote button and voting on all your favorite sessions. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
A person is speaking into a microphone on the "TestGuild News Show" with topics including weekly DevOps, automation, performance, and security testing. "Breaking News" is highlighted at the bottom.

Testing Castlevania, Playwright to Selenium Migration and More TGNS136

Posted on 09/23/2024

About This Episode: What game can teach testers to find edge cases and ...

Boris Arapovic TestGuild Automation Feature

Why Security Testing is an important skill for a QEs with Boris Arapovic

Posted on 09/22/2024

About This Episode: In this episode, we discuss what QE should know about ...

Paul Kanaris TestGuild DevOps Toolchain

WebLoad vs LoadRunner vs jMeter (Who Wins?) with Paul Kanaris

Posted on 09/18/2024

About this DevOps Toolchain Episode: Today, we're thrilled to have performance testing expert ...