Browser Conference, OpenSource LLM Testing, Up-skill Test AI, and more TGNS125

By Test Guild

About This Episode:

What free, must-attend, vendor-agnostic Browser Automation Conference is happening this week?

Have you explored the practicality of the open-source LLM evaluation framework that can significantly enhance your testing capabilities?

Do you want to know how to Upskill your Testing Team with AI and Navigate the Future of Quality Assurance?

Find out in this episode of the Test Guild News Show for the week of June 16th. So, grab your favorite cup of coffee or tea, and let's do this.

Exclusive Sponsor

This episode of the TestGuild News Show is sponsored by the folks at Applitools. Applitools is a next-generation test automation platform powered by Visual AI. Increase quality, accelerate delivery, and reduce cost with the world’s most intelligent test automation platform. Seeing is believing, so create your free account now!

Applitools Free Account https://rcl.ink/xroZw

Links to News Mentioned in this Episode

Time News Title Link
0:24 The Browser Conference https://testguild.me/browsercon
1:22 Running Test in Java https://testguild.me/prbxc0
2:09 ortoni-report https://testguild.me/q0mcld
2:54 Turbocharge Playwright https://testguild.me/kubeweb
3:15 BlinqIO 5 million https://testguild.me/s96rj4
3:49 AI Upskill Your Testing Team https://testguild.me/blinqweb
4:05 Generative AI Not Replacing You https://testguild.me/6g9d15
5:07 Panaya AI-Codeless I11 https://testguild.me/wla6pr
5:56 DeepEval https://testguild.me/11xbsh
7:01 Windows Recall rollout https://testguild.me/bmljit
8:05 Rapid7’s AI Engine https://testguild.me/606pmh

News

[00:00:00] Joe Colantonio A free, must-attend, vendor-agnostic browser automation conference is happening this week. Have you seen the open-source LLM evaluation framework that can help you test? And do you want to know how to upskill your testing team with AI and navigate the future of quality assurance? Find out in this episode of The Test Guild News Show for the week of June 16th. Grab your favorite cup of coffee or tea and let's do this!

[00:00:24] Joe Colantonio First up is all about the Browser Conference. The Browser Conference is set to take place this week, June 20th, and it will provide a neutral platform for discussing browser automation. This free online event will feature speakers from companies such as Google, Sauce Labs, and Selenium, focusing on topics related to testing, scraping, and AI. Some awesome topics being covered are AI-driven testing, large-scale scraping challenges, and the next-generation WebDriver BiDi protocol. The conference also has sessions around working with libraries such as Puppeteer, Playwright, and Selenium, and a fantastic speaker lineup as well. Make sure to register now, even if you can't make it, because replays will be available for all registered attendees. Just go to testguild.me/browsercon and sign up now, or use the special code down below, and hope to see you there!

[00:01:22] Joe Colantonio Are you running your tests in a Java Maven project? If so, I just found a new blog post by Andy Knight, The Automation Panda, that outlines all the steps needed to effectively run tests in a Java Maven project. The post emphasizes following the standard directory layout, placing test code and main code separately, and it covers adding the Maven Surefire plugin for unit tests and the Maven Failsafe plugin for integration tests, ensuring tests run smoothly and comprehensively. The post also advises maintaining a clean build environment and adhering to Maven conventions to avoid complexity and maintainability issues. To learn more about leveraging Maven's built-in lifecycle phases, keeping the POM file well organized, and how that can significantly improve your test automation, definitely check out the post in the links down below.
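As a rough sketch of the setup described (the plugin versions shown are illustrative assumptions; check Maven Central for current releases), the two plugins are wired into the POM like this, with Surefire picking up `*Test.java` unit tests in the `test` phase and Failsafe picking up `*IT.java` integration tests in the `integration-test`/`verify` phases:

```xml
<build>
  <plugins>
    <!-- Surefire: runs unit tests (*Test.java) during the `test` phase -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>3.2.5</version> <!-- illustrative version -->
    </plugin>
    <!-- Failsafe: runs integration tests (*IT.java) during `integration-test`,
         then fails the build in `verify` so teardown can still run first -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <version>3.2.5</version> <!-- illustrative version -->
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With this in place, `mvn test` runs only the unit tests, while `mvn verify` runs both suites through the full build lifecycle.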

[00:02:09] Joe Colantonio Are you looking for an HTML report generator designed for Playwright tests? Well, I just found this new solution on my LinkedIn feed from Koushik, who has announced the release of OrtoniReport v1, a new HTML report generator designed for Playwright tests. This tool aims to enhance reporting capabilities by providing a structured and visually appealing way to organize test results. Key features include grouping of test results, flexible configuration options, automatic directory creation for screenshot storage, and color-coded test statuses, and users can install it via npm and configure it within their Playwright setup. For more details, definitely check out the npm page in the links down below.
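Registering a third-party reporter follows Playwright's standard custom-reporter pattern; the sketch below assumes that pattern, and the `title` option is purely a hypothetical placeholder, so check the ortoni-report npm page for the actual option names:

```typescript
// playwright.config.ts
// After `npm install -D ortoni-report`, register it like any Playwright reporter.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['list'],                                       // keep console output
    ['ortoni-report', { title: 'Nightly UI run' }], // option name is illustrative
  ],
});
```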

[00:02:54] Joe Colantonio And speaking of Playwright, if you also want to know how to turbocharge your Playwright executions, you'll want to register for our upcoming webinar all about how to do so using a solution I just learned more about called Testkube, which is the first test orchestration platform for cloud-native applications. And once again, there'll be a link for you to register for this webinar down below.

[00:03:15] Joe Colantonio Next up is a Follow the Money segment. BlinqIO, an autonomous generative AI software testing platform, announced that it has secured $5 million in funding, which will support the company's expansion in the U.S. and address the global shortage of test programmers. BlinqIO's AI-driven platform automates test script creation and maintenance without human intervention. It was founded by Tal and Guy, who you probably know since they've been in the testing space for a while. BlinqIO really aims to help innovate in the software testing space with its technology, which supports over 50 languages and operates 24/7.

[00:03:49] Joe Colantonio And to actually see BlinqIO in action, make sure to check out another upcoming webinar we have with BlinqIO, which covers how to upskill your testing team with AI and navigate the future of quality assurance. You can register for that, once again, in the links down below.

[00:04:05] Joe Colantonio So speaking of gen AI, is it going to replace your engineering teams? Probably not. But why? Let's find out. A Stack Overflow blog post written by Charity Majors, CTO of honeycomb.io, outlines the limitations of generative AI in building engineering teams. While generative AI can assist with writing code, it cannot replace the complex human roles required for effective team building and collaboration. The article also emphasizes that while AI-generated code can be a useful tool, it lacks the contextual understanding and problem-solving skills inherent to human engineers, and as such, relying solely on generative AI for team development is insufficient. Software testers should recognize that while generative AI can aid in automating some tasks, it's not a substitute for human expertise in understanding project requirements, team dynamics, and strategic decision making. And while it won't replace you, it can definitely assist with certain things, so you should check it out.

[00:05:07] Joe Colantonio For example, here's another tool that just launched, an AI codeless test automation solution from Panaya. Panaya has unveiled its latest AI-driven codeless test automation solution, designed for ERP, CRM, and enterprise cloud applications. This solution leverages change intelligence to enable comprehensive cross-application testing. The platform integrates requirement management, test management, low-code and no-code test automation, and change impact analysis, providing a streamlined and intuitive experience for users of all skill levels. The announcement explains that by automating test script generation and validation with its generative AI solution, Panaya aims to streamline testing processes, particularly for SAP and Salesforce environments, which I know a lot of people struggle to automate using open-source types of solutions.

[00:05:56] Joe Colantonio All right, we talk a lot about generative AI, but how do you, well, test LLMs? I came across Marie's LinkedIn post that highlights some of the challenges of testing large language models. It goes over how you can ensure reliability in customer-facing applications and why that's essential, and it points to Confident AI's DeepEval solution. What's cool about this solution? It is a simple-to-use, open-source LLM evaluation framework. It's similar to pytest but specifically designed for unit testing LLM outputs, and it includes the latest research to evaluate LLM outputs based on metrics such as G-Eval, hallucination, and answer relevancy. So whether your application is implemented via RAG, fine-tuning, LangChain, or anything else, DeepEval has you covered. Definitely check out DeepEval for yourself to help streamline LLM testing by baking automated, comprehensive evaluations into your CI/CD pipelines. Hopefully, this approach ensures that your code and prompt changes do not compromise the reliability of generated answers.
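To make the "pytest for LLM outputs" idea concrete, here is a minimal self-contained sketch of the pattern. All names here are hypothetical, and the toy keyword-overlap metric stands in for the LLM-judged scoring that frameworks like DeepEval actually use:

```python
# Sketch of the "unit test for LLM outputs" pattern: score an output against
# a metric, then assert the score clears a threshold so the check can run in CI.
# The metric below is a toy keyword-overlap stand-in, not a real LLM judge.

def answer_relevancy(question: str, answer: str) -> float:
    """Fraction of the question's keywords that appear in the answer (toy metric)."""
    q_words = {w.lower().strip("?.,!") for w in question.split()}
    a_words = {w.lower().strip("?.,!") for w in answer.split()}
    keywords = {w for w in q_words if len(w) > 4}  # skip short, stopword-ish words
    if not keywords:
        return 1.0
    return len(keywords & a_words) / len(keywords)

def assert_llm_output(question: str, answer: str, threshold: float = 0.7) -> None:
    """Fail the test, pytest-style, when the output scores below the threshold."""
    score = answer_relevancy(question, answer)
    assert score >= threshold, f"relevancy {score:.2f} below threshold {threshold}"

# A relevant answer passes; an off-topic answer would raise AssertionError.
assert_llm_output(
    "What metrics does DeepEval support?",
    "DeepEval can support metrics like G-Eval, hallucination, and answer relevancy.",
)
```

In a real pipeline, the assertion call runs inside a pytest test function, so a regression in prompts or model behavior fails the build just like any other broken unit test.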

[00:07:01] Joe Colantonio And in security news, Microsoft has delayed the release of Recall, a new feature in Windows 11 that allows users to search their computer for previously viewed content. Initially set for broad availability in June, Recall will now be available only to Windows Insider Program participants, and Microsoft made this decision to ensure the feature meets high standards for quality and security following feedback from the Windows Insider community. The Recall feature, which captures screenshots of users' screens every few seconds, raised concerns among security professionals and privacy advocates. Critics highlighted potential security risks, prompting Microsoft to make Recall an opt-in feature, encrypt its database, and require enrollment for activation. This delay follows a ProPublica report on Microsoft's handling of critical vulnerabilities, which added pressure on the company to improve its security practices. So as a software tester, you should definitely always be concerned about security; raise it within your teams before features like these go public.

[00:08:05] Joe Colantonio And last up, we have Rapid7, who actually makes one of my favorite security tools, Metasploit. They've announced that they've introduced generative AI capabilities to their AI Engine, enhancing their managed detection and response services. This update aims to improve threat detection, alert triage, and incident response. The AI Engine now uses large language models to analyze vast amounts of threat data, helping security analysts quickly identify and respond to malicious activities.

[00:08:34] Joe Colantonio All right, for links to everything of value we covered in this news episode, head on over to the links in the comments down below. So that's it for this episode of The Test Guild News Show. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack pipeline automation awesomeness. As always, test everything and keep the good. Cheers.
