Automation Testing

Exploring the Value of AI in Test Case Creation (Pros and Cons)

By Test Guild

As testers, we're used to balancing test coverage with tight project deadlines.

Right?

In fact, a recent TestGuild community survey found that 39% of respondents struggle with test management, while many others find it challenging to balance test quality and speed.

The Bottleneck in Creating Test Cases

Truth be told, creating test cases can really eat up a lot of time!

With more than 25 years in test automation under my belt, I've seen firsthand how much time we can pour into this task.

For example:

  • Examining the product's specifications to pinpoint testable features.
  • Crafting test scenarios that cover all the important situations.
  • Keeping those test cases up to date as the application evolves.
  • Making sure all necessary tests are conducted thoroughly, without duplication.
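
To make that last point concrete, here's a minimal sketch (my own illustration, not taken from any particular tool) of flagging test cases whose steps duplicate an earlier case once cosmetic differences are normalized away:

```python
def normalize(steps):
    """Lowercase and collapse whitespace so cosmetic differences don't hide duplicates."""
    return tuple(" ".join(s.lower().split()) for s in steps)

def find_duplicates(test_cases):
    """Return (duplicate, original) name pairs for cases with identical normalized steps."""
    seen = {}
    duplicates = []
    for name, steps in test_cases:
        key = normalize(steps)
        if key in seen:
            duplicates.append((name, seen[key]))
        else:
            seen[key] = name
    return duplicates

cases = [
    ("TC-1 login ok", ["Open login page", "Enter valid creds", "Click  Login"]),
    ("TC-2 login happy path", ["open login page", "enter valid creds", "click login"]),
    ("TC-3 bad password", ["Open login page", "Enter bad password", "Click Login"]),
]
print(find_duplicates(cases))  # → [('TC-2 login happy path', 'TC-1 login ok')]
```

Nothing fancy, but even a check this simple catches the copy-paste duplicates that creep into large suites.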

For many in our Test Guild community (40%, to be exact) the challenge is real.

They tell me they spend half of their time on test development and implementation tasks alone!

Also, nearly 45% of you find keeping up with new technologies a major challenge, highlighting the need for more streamlined methods and strategies.

Enter AI…

AI's Emerging Role in Test Case Generation

I also know many of you are sick of hearing all the buzz about AI testing. But a significant 33% of those surveyed expressed interest in learning more about AI technologies.

And I don't think it's all marketing hype or snake oil.

In my exploration of AI-driven testing tools, I've come across some real, practical applications that truly bring value to testers.

For example, I recently explored a solution called DevAssure that tackles several of these issues.

One of them: generating test cases automatically from your product documentation and design materials.

Here's something that caught my attention about this AI-driven approach.

Linking Design with Testing Scenarios

One feature that really stood out to me was the option to import Figma mockups directly and review them alongside product requirement documents.

This establishes a link between what designers envision and the requirements that testers must verify.

When teams use Figma (a popular tool among development teams), the AI assesses the design components and recognizes possible user interactions and validation criteria, removing the need to manually convert visual designs into practical test scenarios.
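
To illustrate the idea (this is a hypothetical sketch of my own, not DevAssure's actual API), you can think of it as mapping each design component type to a set of candidate checks:

```python
# Illustrative rules only: the component types and checks are assumptions,
# standing in for what an AI derives from parsing a real design file.
RULES = {
    "text_input": ["accepts valid input", "rejects empty value", "handles max length"],
    "button":     ["is clickable", "is disabled while the form is invalid"],
    "dropdown":   ["lists all options", "keeps selection after a validation error"],
}

def scenarios_for(components):
    """Expand (name, kind) component pairs into candidate test scenarios."""
    cases = []
    for name, kind in components:
        for check in RULES.get(kind, ["renders correctly"]):
            cases.append(f"{name}: {check}")
    return cases

mockup = [("Email", "text_input"), ("Sign in", "button")]
for case in scenarios_for(mockup):
    print(case)  # e.g. "Email: rejects empty value"
```

An AI-driven tool goes far beyond a static lookup table like this, but the workflow is the same: design components in, validation scenarios out.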

Try DevAssure Now

Finding Edge Cases and Coverage Gaps

During my evaluation, I was really impressed by how the AI could spot situations we might overlook when creating test cases.

The system carefully checks your product requirement document for missing details and even asks questions to make sure your tests cover everything they should.

This method helps address a core element of human QA knowledge: recognizing user actions and edge cases.

All the while freeing up additional time for exploratory testing.
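
Here's a stripped-down sketch of the coverage-gap idea (my own illustration, assuming requirements are tagged with IDs like REQ-101): compare the requirement IDs mentioned in the PRD against the IDs your tests claim to cover.

```python
import re

def requirement_ids(prd_text):
    """Collect requirement IDs (assumed REQ-<number> convention) from the PRD."""
    return set(re.findall(r"REQ-\d+", prd_text))

def covered_ids(test_cases):
    """Collect every requirement ID tagged on at least one test."""
    return {req for tc in test_cases for req in tc["covers"]}

prd = "REQ-101 login, REQ-102 password reset, REQ-103 lockout after 5 failures"
tests = [
    {"name": "test_login", "covers": ["REQ-101"]},
    {"name": "test_reset", "covers": ["REQ-102"]},
]
gaps = requirement_ids(prd) - covered_ids(tests)
print(sorted(gaps))  # → ['REQ-103'] (the lockout edge case nobody tested)
```

A real tool reasons over natural-language requirements rather than ID tags, but the principle is the same: make the untested part of the spec visible.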

Connecting the Gap Between Development and Quality Assurance

During automation coding sessions, its VS Code extension stands out: it checks for gaps and test coverage problems in the code as developers write it.

For the 78% of community members responsible for maintaining the framework's integrity and functionality, this tool provides real-time alerts.

It helps catch issues early.

This reduces the chances of problems being discovered late in the development process.

I think this tool can help foster collaboration between developers and testers, which is often cited as a hurdle in our community, by bringing testing issues to the forefront within the developer's own workflow.

Are AI-Generated Tests Worth It?

When I try out a testing tool, I prioritize results over fancy features every time.

Here are my observations from testing it with a real project:

  • On a recent feature implementation task, I managed to reduce the time needed to create test cases by around 40%. This let me dedicate more time to the testing sessions where I usually discover the most critical issues.
  • It only took me around 30 minutes to get up and running; getting the hang of all the features took about a day, which seems quite reasonable for such a sophisticated tool.
  • Integration with existing systems is a factor to keep in mind. Like other solutions in this field, it may need adjustments to fit seamlessly into certain CI/CD pipelines.
  • In some cases, the AI struggled to grasp business rules specific to my field, highlighting the importance of using these tools to support, rather than substitute for, human knowledge and skills.

Discovering the Optimal Balance

I know each team and their testing context is different.

But a team struggling with test coverage at scale, or with the handoff of responsibilities between quality assurance and development, might find it beneficial to consider AI support for test creation.

However, it's crucial to maintain realistic expectations:

  • These automation tools can't substitute for the knowledge and analytical skills that seasoned testers possess.
  • They perform effectively only under human supervision.

The true value lies in offloading the routine tasks of testing to create space for more strategic work.

And it's not all unicorns and rainbows when it comes to AI in testing.

Here are some pros and cons.

Pros and Cons of AI-Generated Test Cases

Pros:

  1. Speed: AI can generate test cases in a fraction of the time it would take manually.
  2. Enhanced Coverage: AI can analyze vast datasets and user behavior to identify edge cases that human testers might overlook, significantly increasing test coverage.
  3. Adaptability: Generative AI models, like those integrated with tools such as DevAssure, can adapt to application changes, reducing maintenance effort for test scripts.
  4. Predictive Insights: AI can prioritize tests, optimize test suites, and even predict defects, helping teams focus on high-risk areas.
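
The predictive-insights point can be made concrete with a toy risk-based prioritization sketch. The scoring fields and weights below are my own illustrative assumptions, not a published algorithm:

```python
def risk_score(test, fail_weight=0.7, churn_weight=0.3):
    """Blend recent failure rate with code churn into a single risk score."""
    return fail_weight * test["recent_fail_rate"] + churn_weight * test["churn"]

def prioritize(tests):
    """Order tests so the riskiest run first."""
    return sorted(tests, key=risk_score, reverse=True)

suite = [
    {"name": "test_checkout", "recent_fail_rate": 0.30, "churn": 0.9},
    {"name": "test_profile",  "recent_fail_rate": 0.05, "churn": 0.1},
    {"name": "test_search",   "recent_fail_rate": 0.10, "churn": 0.8},
]
print([t["name"] for t in prioritize(suite)])
# → ['test_checkout', 'test_search', 'test_profile']
```

Real tools feed far richer signals into models like this, but the payoff is the same: when time is short, the highest-risk tests run first.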

Cons:

  1. Non-Deterministic Behavior: AI can sometimes produce probabilistic or inconsistent results, which might not align with deterministic validation logic.
  2. Quality Concerns: AI-generated test cases may lack context or miss critical business logic, requiring human oversight to ensure relevance and accuracy.
  3. Maintenance Challenges: Over-reliance on AI-generated scripts can lead to issues with maintainability, especially if the underlying AI model isn’t well-tuned or if the generated code is overly complex.
  4. Initial Setup and Costs: Implementing AI-driven solutions often requires upfront investment in tools, training, and integration into existing workflows.
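
One practical guard against the non-determinism concern is to run a generated check several times and only trust a verdict that never flips. Here's a minimal sketch (the flaky check below is a stand-in for an unstable AI-generated assertion, not a real tool's behavior):

```python
import random

def stable_verdict(check, runs=20):
    """Run a check repeatedly; report 'flaky' if the verdict ever flips."""
    results = {check() for _ in range(runs)}
    if len(results) > 1:
        return "flaky"
    return "pass" if results.pop() else "fail"

def deterministic_check():
    return 2 + 2 == 4

def flaky_check():
    # Stand-in for an AI-generated test with unstable behavior.
    return random.random() > 0.3

print(stable_verdict(deterministic_check))  # → pass
print(stable_verdict(flaky_check))          # almost always "flaky"
```

It's crude, but quarantining unstable verdicts like this keeps probabilistic AI output from polluting a deterministic pass/fail pipeline.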

Once again, repeat after me: AI is a powerful assistant, but it’s not a replacement for skilled testers.

The key is to use it as a complement to human expertise.

Try Using AI to Generate Test Cases Yourself (The Ultimate Test)

I always believe it's best for you to try things yourself and make informed decisions.

Why not test DevAssure with your own proof of concept?

They offer a 45-day free trial, giving you plenty of time to explore.

Use the code TESTGUILD45 to get started!

Try DevAssure Now

FYI: This article is based on real testing scenarios and community feedback. While I've evaluated the mentioned tool, your experience may vary depending on your specific testing environment and requirements.

About Joe Colantonio

Joe Colantonio is the founder of TestGuild, an industry-leading platform for automation testing and software testing tools. With over 25 years of hands-on experience, he has worked with top enterprise companies, helped develop early test automation tools and frameworks, and runs the largest online automation testing conference, Automation Guild.

Joe is also the author of Automation Awesomeness: 260 Actionable Affirmations To Improve Your QA & Automation Testing Skills and the host of the TestGuild podcast, which he has released weekly since 2014, making it the longest-running podcast dedicated to automation testing. Over the years, he has interviewed top thought leaders in DevOps, AI-driven test automation, and software quality, shaping the conversation in the industry.

With a reach of over 400,000 across his YouTube channel, LinkedIn, email list, and other social channels, Joe’s insights impact thousands of testers and engineers worldwide.

He has worked with some of the top companies in software testing and automation, including Tricentis, Keysight, Applitools, and BrowserStack, as sponsors and partners, helping them connect with the right audience in the automation testing space.

Follow him on LinkedIn or check out more at TestGuild.com.
