Software Testing

4 Keys to Succeeding with Software Quality Management

By Test Guild

Welcome to Episode 99 of TestTalks. Do you know what the four keys to effective software quality management are?

Software quality is even more critical in the customer-focused development environment many of us are finding ourselves in. But do you struggle to create software that your customers love? If so, this episode is for you.


It’s hard to execute the task of creating software that “wows” your customers if your team doesn’t have the fundamentals in place that will allow them to succeed with their testing efforts. In this episode Neeraj shares his principles of success, which will hopefully enable you to succeed with your software quality efforts.

Listen to the Audio


In this episode, you'll discover:

  • The four keys to succeeding with software quality management
  • Why thinking like a customer is so important for QA organizations
  • Techniques to really simulate customer behavior
  • Tips to improve your software quality management efforts
  • What an enabling environment is and why it’s so important
  • Much, much more!

[tweet_box design="box_2"]I've seen teams with 40,000+ #tests and they're proud of having all those tests. My question is: how many defects are you finding?[/tweet_box]

Join the Conversation

My favorite part of doing these podcasts is participating in the conversations they provoke. Each week, I pull out one question that I like to get your thoughts on.

This week, it is this:

Question: How do you create an enabling environment for your engineers? Share your answer in the comments below.

Want to Test Talk?

If you have a question, comment, thought or concern, you can do so by clicking here. I'd love to hear from you.

How to Get Promoted on the Show and Increase Your Karma

Subscribe to the show in iTunes and give us a rating and review. Make sure you put your real name and website in the text of the review itself. We will definitely mention you on this show.

We are also on Stitcher.com so if you prefer Stitcher, please subscribe there.

Read the Full Transcript

 

Joe: Hey, Neeraj, welcome to Test Talks.

 

Neeraj: Thank you.

 

Joe: It's great to have you on the show. Today I'd like to talk about your principles of effective software quality management. Before we get into it, can you tell us a little bit more about yourself?

 

Neeraj: Sure, Joe. First of all, thank you for inviting me for this interview. I'm excited to share what I have learned in the industry. I've been in the IT industry for 20-plus years now. I was fortunate to start my career with IT consulting companies. Working for those consulting companies really allowed me to work with different clients in different domains, spread across the globe, each with new problems and challenges. That really helped me build a perspective on the overall SDLC. For the last 10 years I've been focused on software quality assurance, both for different clients and leading a large QA organization myself, where I set up QA from scratch. That's my experience in the QA world.

 

Joe: Awesome. I love speaking with people who have 20-plus years of experience and have worked across multiple organizations. I think it's going to be a great talk. I'd like to start with the four principles, the key areas you focus on within your principles of effective software quality management. I thought we'd go through each one and then take it from there. I believe the first one is customer experience. That could mean many things to many people. How do you define customer experience?

 

Neeraj: I'm glad you mentioned that. It means different things to different folks. I want to focus our conversation on software quality assurance. My perspective is that a QA organization is, or should be, the customer's best buddy. The reason is that when an organization is building software, obviously they have an idea or concept they want to take to market. As they go through the process of turning that idea into a solution, the nitty-gritty details often get missed or diverted. What I really want, and what I [inaudible 00:09:34] to my team, is for the QA organization to keep the customer's perspective in mind. What is the customer really looking for? They shouldn't only be looking at the customer requirements. They should be putting themselves in the customer's shoes and thinking about how the customer will consume the solution we are providing. What are the different scenarios in which they will use this solution?

 

That's what I mean by customer experience. It's very important for a QA organization to wear that customer hat, think about all the possible scenarios, and then build those scenarios into their test design and test execution so they can eliminate as many defects as possible and create a better user experience for their customers.

 

Joe: Awesome. I love your answer, and I definitely agree that more and more teams need to start focusing on the customer first and then take it from there, before they even write a line of code. I'm curious to know how you actually get there, though, in your experience. In my experience, I've worked with a large team, 7 to 8 scrum teams, sprint teams all over the world. We started using something called behavior driven development, where we start from what the feature is and try to use the words and language of our customer: as a radiologist, when I log in I should see my exams, before we even write a line of code. How do you go about actually driving this customer-focused experience throughout your organization? Do you use anything like behavior driven development, or is there another technique that you highly recommend for this approach?

 

Neeraj: Behavior driven development, I highly support that. I won't say it's the latest or greatest; I think it's been around for a long time now, and organizations have started using it very extensively. I definitely support it because it's very important to keep the customer perspective. The other thing I would always recommend for a quality assurance organization: if their customers or consumers are the general public, they can obviously wear that hat very easily and try to simulate the scenarios customers will go through by creating user profiles, visual diagrams, and so on. There are a lot of techniques to really simulate customer behavior. When the users are a specific group with specific needs, I think that's where it becomes challenging for the QA team to understand what the customer really wants and what kind of behavior they are expecting. In those kinds of scenarios, what I have encouraged my clients and my team to do in the past is that the QA team should not only participate in the requirement sessions. They should be an integral part of the operational meetings.

 

When I say operational meetings, it's typically the sales team and business operations; they have a very regular cadence of meetings to understand what kinds of operational issues we have and what kinds of enhancement requests we are getting from our customers. QA being up front in those discussions, and contributing to them, helps them understand the customer perspective. In addition to what you just mentioned, behavior driven development is a great tool and technique, but on a day-to-day basis the operational meetings and exposure to real customers are what really help the QA team build that perspective.
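Joe's radiologist example translates naturally into a feature-first check. Here's a minimal sketch of what that might look like in Python, with the Gherkin-style wording kept as comments; RadiologyPortal and its data are hypothetical stand-ins, not part of any tool mentioned in the episode.

```python
# Feature: Worklist
#   Scenario: Radiologist sees their exams after login
#     Given I am a registered radiologist
#     When I log in
#     Then I should see my assigned exams

class RadiologyPortal:
    """Hypothetical test double standing in for the system under test."""
    def __init__(self):
        self._exams = {"dr_lee": ["CT-1042", "MR-2210"]}
        self._user = None

    def log_in(self, username: str) -> None:
        self._user = username

    def visible_exams(self) -> list[str]:
        return self._exams.get(self._user, [])


def test_radiologist_sees_exams_after_login():
    portal = RadiologyPortal()          # Given a registered radiologist
    portal.log_in("dr_lee")             # When they log in
    assert portal.visible_exams() == ["CT-1042", "MR-2210"]  # Then they see their exams
```

The point is that the test is written in the customer's language first, and the implementation details stay behind the test double.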

 

Joe: Awesome. I definitely agree. It almost makes me think of a concept that I think a lot of companies are moving toward as well, something like shift right. We deliver the software to our customers, and back in the day that's where it would end. Now it sounds like we need to stay in the loop: after we deliver it we also need to stay involved in quality, getting the customer's feedback and iterating based on what we released to them.

 

Neeraj: Absolutely. You said it right.

 

Joe: Awesome. Along the same lines, I guess, are there some key areas of customer experience we should look at? How would you define good quality? Is good quality always just making the customer happy, or are there some other key indicators a team can use to make sure they're on track to give the customer a great experience?

 

Neeraj: To be frank, nowadays it's easier to measure customer experience than it's ever been because of social networks and social media. If you are building an app, you can easily tell from the rating of your app whether your user experience is good. The other major measures, whether you are on social media or not, are customer retention and referrals; those are key measurements of your quality and user experience, and those metrics are directly proportional. If you're getting more referrals for your product, it means your user experience is better, and vice versa, and the same is true of your customer retention percentage. If you can retain many customers on your application or your product, I think that's an important metric for quality from the customer perspective too.

 

Again, I don't know how much of it is true, but I was reading an article recently saying that nowadays, if folks are using a new app or application and it doesn't respond within a couple of seconds, they move on to the next one and start using that. That's a key metric for customer experience: if they come to your site or your app and you are not able to provide that experience, the chances are they will never come back. Those kinds of metrics are key to measuring how we are doing in terms of retaining users and improving the user experience.
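As a rough illustration of the retention and referral signals Neeraj calls out, here's a small sketch; the monthly numbers are invented, and the formulas are one simple way to express these rates, not the only one.

```python
def retention_rate(active_start: int, active_end: int, new_signups: int) -> float:
    """Share of the starting customers still active at the end of the period."""
    return 100 * (active_end - new_signups) / active_start

def referral_rate(new_signups: int, referred_signups: int) -> float:
    """Share of new customers who arrived through a referral."""
    return 100 * referred_signups / new_signups

print(f"Retention: {retention_rate(active_start=5000, active_end=5200, new_signups=700):.1f}%")
print(f"Referrals: {referral_rate(new_signups=700, referred_signups=210):.1f}%")
```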

 

Joe: Once again, a great point. I think nowadays we really do live in a customer-centric development world where, like you said, social media matters more than people realize. If you get a bad review on Yelp, people aren't going to go to your restaurant. If someone downloads your app and it doesn't install within 2 to 5 seconds, or it doesn't work as expected, you're never going to get those customers back, and they will review it and rate it. That's a great metric that people can actually get from real customers in the field.

 

Neeraj: I agree.

 

Joe: I think we focused a lot on the customer side. I believe the next piece of your principles of effective software quality management is enabling environments. What is an enabling environment?

 

Neeraj: This is one of the key areas when we talk about the principles of effective quality management. I think most of the time, and I'm not saying this is done intentionally, because of competing priorities and cost challenges, many organizations and many leaders are not able to build an environment where they can best leverage their resources. Ultimately, any application is built by people; there are other resources, like hardware and tools, but it's important for a quality manager and leader to build an enabling environment. It's not one formula, and one size does not fit all. Things need to be done in a way where people on the team feel empowered, feel they can do the right thing, and feel a sense of accomplishment in what they do. Let me go into detail.

 

One of the things a leader should be doing to enable their people to do a better job on quality is having them involved actively. When I say actively involved, what I have seen is that many QA teams or QA team members will attend operational meetings or requirement sessions, but their involvement is very passive because their mindset is, my turn has not yet come; my turn is when development delivers the product to me, I test it, I find the defects, and that's where I'll be more active. The leader has to change that mindset. They need to make sure that when QA resources join these early meetings, they are actively contributing. To do that, what has really worked for me, and I'm sure other leaders use different techniques, is making sure that when these team members join those conversations, they have a document, call it a requirement [ambiguity 00:18:52] or test conditions document, something they need to produce coming out of those meetings and start delivering. That way, when they are sitting in those meetings, they're asking questions and documenting their understanding, so the requirement quality improves, and by understanding those requirements and their challenges, QA is better prepared.

 

The reason I call it enabling is that those same conditions, those same questions, those same techniques can be applied in designing their test cases and in test execution, so you are enabling your team to be more knowledgeable about the requirements and to do better on quality. Another thing I mentioned here, and talked to you about before, I believe, is that we can continue to do whatever is best [inaudible 00:19:42] industry, but the industry itself keeps changing. In IT, as you and I have seen, things change astronomically. The people on the team have to work together and have to be innovative. What does that mean? Innovation does not mean they have to come up with a scientific formula every 6 months. Innovation means they need to find ways to be more productive and cost-effective, maybe through some automation. They need to constantly think about how we can reduce cost and speed up time to market, so that we can build our product or do the testing faster and deploy to market sooner. As leaders we need to encourage that innovation and offer some ideas, so the team can innovate.

 

That brings me to empowerment, Joe, which is another important thing. You need to build future leaders. You need to enable them by giving them the right tools and helping them make the right decisions. Again, I've spoken a lot on this, but it is very close to my heart: every QA leader needs to really think about how they can empower their people and make the best use of them in the QA world.

 

Joe: Once again, I think this is a key piece, because when I think of an enabling environment I think of culture. If you have a toxic culture, it doesn't matter what initiatives you're trying to push through; they're never going to get done correctly. For example, I've worked with a team where the QA resources were saying, “We need this information from this other team,” but they were told, “No. You need to go to your scrum master, and your scrum master needs to work it out with my scrum master before you can talk to my team members.” That's not an enabling environment that empowers people to take initiative and talk to people across other teams. How do you get around that? How do you build this culture of quality, not only enabling people but also… this is the hardest piece. Any tips around the culture side of it?

 

Neeraj: First of all, you gave a great example, Joe, where folks are not able to get the information because of organizational boundaries. You're absolutely right, so a couple of things. I'm going to give an example of where things worked really well for me in the past. First, let's talk about the boundary part. There are a lot of scenarios where the QA organization will draw its own boundary, development will draw its own boundary, and the business analyst team will draw its own boundaries. What is more important from a leadership perspective is setting metrics or measurements that are focused on achieving the goal rather than on individual performance, because when individual performance gets affected or highlighted by this kind of work, people start reinforcing those boundaries. Setting goals where the organization shares the same goal, whether it's dev, BA, or QA, is what has really helped me. But let's go down a level in terms of how it really worked.

 

When I talked about active QA involvement early in the requirements, what really helped was sharing that documentation and that understanding with the BAs, the business analyst team [inaudible 00:23:20], because they are the ones documenting those requirements, and they want to make sure they have solid requirements. QA is not only helping themselves; they are helping their BA partners solidify the requirements. The other thing that really worked, when we get into the defect and test execution phase and defects are going back and forth over whether something is a defect or not, is forming a triage team with representation from development, QA, and the BAs. Their metric, the triage metric, is the measurement: how soon can they allocate a defect to the right resource for quick resolution? When you start using those metrics for a team rather than for individuals, and that team includes a BA, dev, and QA, the whole mindset about how things should work changes. The environment changes. Again, this is my experience: setting those goals for a team rather than for individual resources really works to build that kind of environment.
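One way to picture the team-level triage measurement Neeraj describes is a small script that reports how quickly new defects get routed to an owner. This sketch assumes a made-up export from a defect tracker; the field names and records are illustrative.

```python
from datetime import datetime
from statistics import mean

# Hypothetical defect-tracker export: when each defect was logged and assigned.
defects = [
    {"id": "D-201", "logged": "2016-05-02 09:15", "assigned": "2016-05-02 10:05"},
    {"id": "D-202", "logged": "2016-05-02 11:40", "assigned": "2016-05-02 11:55"},
    {"id": "D-203", "logged": "2016-05-03 08:30", "assigned": "2016-05-03 13:30"},
]

FMT = "%Y-%m-%d %H:%M"

def minutes_to_assign(defect: dict) -> float:
    logged = datetime.strptime(defect["logged"], FMT)
    assigned = datetime.strptime(defect["assigned"], FMT)
    return (assigned - logged).total_seconds() / 60

# One number for the whole triage team (BA + dev + QA), not per individual.
print(f"Mean time to assignment: {mean(minutes_to_assign(d) for d in defects):.0f} minutes")
```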

 

Joe: Awesome. I think this is, like I said, one of the key areas. You also have a few other areas of your principles, and one of them is repeatable and reusable. I think this is also a key initiative to really keep quality going throughout the organization. What do you mean by repeatable and reusable?

 

Neeraj: This is another good aspect, and there's a great example. Very recently, if you have followed the news, the SpaceX team reused their fuel tank; [inaudible 00:25:08] millions of dollars every time you drop it in the ocean. This applies everywhere, and in the QA world it's a similar thing. If we are building test cases or automation from scratch every time, that's not productive; it's counterproductive. When QA teams or QA leaders sit down and review the organization, they need to find processes and tools that can be used repeatedly. Take an example: if they are building automation in an organization with 10 different lines of business, and those 10 lines of business are using 10 different automation tools, the chances are that when they want to put together a product that communicates with each of those lines of business, the automation scripts won't work together, because it will take extra steps to get them to communicate, and integration will be tough.

 

From the repeatability and reusability perspective, the leader, or whoever leads QA automation, needs to think about whether the tool they're choosing is consistent across the organization. Tool selection becomes important. When I'm selecting a tool and building a framework, is the framework scalable given my organization's size, so that if I need to keep growing my automation and maintaining the scripts, I can do that with this tool and this framework? That's what I call repeatable and reusable. If you build a framework you can keep building further automation on, that's definitely the right thing to do. Similarly, take a very simple example for everybody: if your test team or QA team has to log in to the application every time to start testing, do we really need to do that manually?

 

Whether 10 testers or 200 testers need to log in to the application to start testing, those kinds of things should be automated right away. That's where it becomes very important to start identifying areas that can be automated. If something is repeatable, it's absolutely a candidate for automation. If there are processes that can be eliminated, we should look into that as well. And if there are functions that can be reused, the same applies even to test cases; I'm not saying this is only about automation. Even for manual test cases, if you have a test case for one product and you're using similar test cases for another product, you don't need to rewrite them. You need to build your test case framework in such a manner that you can leverage what is already there. Those are a couple of things a QA team should be thinking about and doing every 6 months: identifying what I can reuse and what I can repeat.
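The repeated-login example lends itself to a shared fixture. Here's a minimal sketch using pytest and requests, assuming a hypothetical login endpoint and credentials; the point is that every test reuses one automated login instead of each tester repeating it by hand.

```python
import pytest
import requests


@pytest.fixture(scope="session")
def logged_in_session():
    """Log in once per test run and hand every test the same authenticated session."""
    session = requests.Session()
    response = session.post(
        "https://example.test/api/login",                 # hypothetical endpoint
        json={"user": "qa_user", "password": "secret"},   # hypothetical credentials
    )
    response.raise_for_status()
    return session


def test_dashboard_loads(logged_in_session):
    # Each test starts from an already-authenticated session rather than
    # repeating the manual login steps.
    response = logged_in_session.get("https://example.test/api/dashboard")
    assert response.status_code == 200
```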

 

Joe: Awesome. I definitely agree with that. Also, like you said before, if they're reviewing every 6 months, there could be pruning. A lot of times teams seem to add on things. Also, it could be the time where you look at something and say, “This may have made sense 6 months ago, but where we are now it doesn't. Let's delete it because it's creating more noise.”

 

Neeraj: You're absolutely right. I'll give you a great example: in the QA world, teams create a lot of test data scripts because they want to create customers and set up different scenarios. If they are not reviewing those test data scripts every 6 months, or even more frequently, which is better, those scripts keep creating data, and it becomes a challenge to maintain that data; the scripts themselves might no longer be valid. Your point is absolutely right: periodic review and improvement of the processes we've put together, whether repeatable or reusable, is key to success.

 

Joe: Awesome. I know we mentioned functional automation, but there's a lot of other automation that teams are working with now, for example to set up environments for them. I think this might also feed into the enabling environment section you have: if teams have an automated script that can build their test environments for them, and they don't have to worry about that, it almost empowers them to focus on the things they do best and automate the things that are automatable and repeatable. It doesn't necessarily need to be a functional test. Anywhere you can use automation you should, and that will in turn give you a better environment to work in.

 

Neeraj: Great point. Along with that goes not only building the environment, but verifying that the test environment is built correctly, using something like a smoke script. That kind of automation is key, and it saves a lot of time for teams. Another example: many organizations today, Joe, work in an onsite-offshore model. What happens is somebody builds an environment, say, in the United States, and they go home. Then the offshore team is supposed to pick up that environment, but if nobody has done the smoke testing, and done it in an automated fashion, the offshore team loses the entire [day 00:30:32] because there were no scripts to verify whether the environment is up and running. Another thing I'll add, Joe, when you talk about building an environment, is continuous integration. Many teams today want to move toward continuous integration, and the QA team can be a great contributor there. That falls under repeatable and reusable. QA teams typically build a lot of regression test cases, and obviously they want to automate those to ensure that the things that were working before still work.

 

I think if QA can start sharing those same regression scripts with their development counterparts and get those scripts running in the continuous integration environment, defects can be found very early and eliminated early as well. That's another way they can work together and help each other. It's not only QA doing the testing; they are helping the development team build a product with the fewest defects. I'm glad you mentioned it; I wanted to bring in the continuous integration aspect as well.
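The environment smoke check Neeraj mentions can be a very small script wired into a CI job or a scheduled task, so the offshore team inherits a verified environment. This is a sketch against hypothetical health endpoints, not a specific tool named in the episode.

```python
import sys
import requests

# Hypothetical endpoints that must respond before testing can start.
HEALTH_ENDPOINTS = [
    "https://qa-env.example.test/health",
    "https://qa-env.example.test/api/version",
    "https://qa-env.example.test/login",
]


def smoke_check() -> bool:
    """Return True only if every critical endpoint answers with HTTP 200."""
    ok = True
    for url in HEALTH_ENDPOINTS:
        try:
            response = requests.get(url, timeout=10)
            healthy = response.status_code == 200
        except requests.RequestException:
            healthy = False
        print(f"{url}: {'OK' if healthy else 'FAILED'}")
        ok = ok and healthy
    return ok


if __name__ == "__main__":
    # A non-zero exit code lets a CI job or cron task flag the broken environment.
    sys.exit(0 if smoke_check() else 1)
```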

 

Joe: Awesome. Along the same lines of continuous integration and automation, I think this leads really well into your last point, which is time to market. When we talk about CI automation, is that what you mean by time to market?

 

Neeraj: That's definitely one of the aspects, because with it we can shift quality left. That means we can find defects early and eliminate them early, so we are reducing time to market. Another important aspect, Joe, from a time-to-market perspective, and I touched on this topic, is how prepared and how engaged your QA team is from day one in terms of having the right kind of scripts. When I say right, I mean what I keep calling test less and test smart. That means you should write fewer test cases but find more defects. That's called test case effectiveness. If the QA team is writing test cases and scenarios that all hit one particular area and don't have broader coverage, the chances are you are writing and executing test cases but not covering your product. Continuous integration is one aspect; test case effectiveness is definitely a second aspect of how well and how early you can test.

 

The other aspect is your cycle time. Cycle time depends on a couple of things. One is obviously whether your test scenarios are automated or not; that affects your cycle time. At the same time, when you are using the right kind of tools to log your defects, where you can provide a good description, a screenshot, and what you were actually doing when the defect occurred, that helps the development team debug and resolve the defects early. That's why I keep saying that working together and finding the right kind of tools are very, very important. Otherwise the QA team will spend time explaining how and when they found the defect, and the development team will have to spend time reproducing it. So defect management and using the right tools are a third aspect that will speed up your time to market, because they obviously affect your cycle time.

 

Now, another aspect from a time-to-market perspective, and we already touched on automation, is how good your regression is, because there will be scenarios where you really want to deploy early: a priority-one defect, or an enhancement that has been waiting a long time and the customer cannot wait any longer, so you want to move things into production early. At the same time, you cannot compromise on quality. That means you have to have a solid regression test suite that makes you comfortable and confident that you've covered your business-critical functions and your security-sensitive functions, and that you're not compromising quality in those areas. That way you can run the entire regression test suite in an automated fashion and you should be able to deploy early. Then, obviously, there's continuous integration, which we already talked about. Those are a couple of things that will enable a team to really speed up time to market by reducing QA time and QA cycles.
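One common way to get the "deploy early without compromising quality" behavior he describes is to tag the business-critical and security-sensitive regression tests so that subset can run on its own before an urgent release. A minimal sketch with pytest markers, using trivial placeholder checks standing in for real regression scenarios:

```python
import pytest

# In a real project these markers would be registered in pytest.ini:
#   [pytest]
#   markers =
#       critical: business-critical regression scenarios
#       security: security-sensitive regression scenarios


@pytest.mark.critical
def test_order_total_includes_tax():
    subtotal, tax_rate = 100.00, 0.07
    assert round(subtotal * (1 + tax_rate), 2) == 107.00


@pytest.mark.critical
@pytest.mark.security
def test_password_is_never_stored_in_plain_text():
    stored_value = "pbkdf2$..."            # placeholder for what the app persists
    assert "hunter2" not in stored_value   # the raw password must not appear


def test_profile_badge_color():
    # Lower-risk coverage that can wait for the full nightly regression run.
    assert "gold".upper() == "GOLD"
```

With the markers registered, `pytest -m critical` runs only the tagged subset before the urgent deploy, while the full suite still runs on its normal schedule.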

 

Joe: Awesome. I want to home in on a point you made about fewer test cases. I think this is an overlooked aspect, and it really is critical, because I've seen teams that create more and more tests without stopping to ask, number one, “Are we repeating anything?” and number two, “Do we still need this?” That actually kills your time to market. You're trying to run your whole test suite, and you have all these extra tests that aren't really adding any value, but for some reason you're impressed by the number of tests you're running without thinking, like you said, about how effective those tests actually are. Are they actually covering risk? How do we get there? How do we prune down our test cases to make sure we have the coverage we need, but no more than we need?

 

Neeraj: Absolutely. At one organization a couple of years back, when I walked in they had 40,000-plus test cases, and they were proud of having those 40,000 test cases. My question was: how many defects are you finding with these test cases? How much time are you really spending to run them? Joe, I agree, it's overlooked, but an organization cannot be effective until it starts looking at test case effectiveness. This also goes back to another point, Joe, a very recent trend in the market. Many organizations are paying vendors by the number of test cases they write for their product, the way the pricing model in the Y2K days was lines of code. I've started seeing folks price by the number of test cases. If somebody is using that kind of pricing model, it's important that the number of scenarios not be the only consideration. It should be tied to valid, unique defects. It's not that the same case finds a similar defect 10 times and you say, “Yeah, I found 10 defects.” They have to be unique, they have to be valid defects, and that should be embedded into your pricing model.
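Neeraj's test case effectiveness idea, counting only unique, valid defects against the test cases executed, can be expressed as a tiny calculation. The data below is made up for illustration.

```python
def test_case_effectiveness(executed_tests: int, defects: list[dict]) -> float:
    """Unique valid defects found per 100 executed test cases."""
    unique_valid = {d["id"] for d in defects if d["valid"] and not d["duplicate_of"]}
    return 100 * len(unique_valid) / executed_tests


defects_logged = [
    {"id": "D-101", "valid": True,  "duplicate_of": None},
    {"id": "D-102", "valid": True,  "duplicate_of": "D-101"},  # same root cause, not counted
    {"id": "D-103", "valid": False, "duplicate_of": None},     # rejected as "works as designed"
    {"id": "D-104", "valid": True,  "duplicate_of": None},
]

print(test_case_effectiveness(executed_tests=400, defects=defects_logged))
# 0.5 unique valid defects per 100 tests: a prompt to prune, not to write more tests.
```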

 

Joe: I'll probably edit this out, but I was just dealing with some consultants, some contractors. They were actually quoting us automated tests: per automated test, they had a number associated with it. That's probably a bad metric, because of course they're going to try to create as many automated tests as they can. We actually want to pay them for their brains, to say, “Look, here's one test, but it's covering these aspects,” rather than here's an automated test, here's an automated test.

 

Neeraj: Absolutely, Joe. More importantly, and I think we touched on this, even if you automate today, what is the longevity of those test cases and how maintainable are they? Because the same vendor, and I'm not saying their intentions are wrong, will come back after some time and say, “The maintenance is going to cost you another 20% or 30% on top.” They have to consider that as well.

 

Joe: I lost track of time. I don't have my normal system set up. I try to keep this under 30 minutes or a little bit over. I think we still have maybe 5 to 10 minutes. Is that okay?

 

Neeraj: Absolutely, Joe.

 

Joe: I want to go back to enabling environments. I guess one of the key questions I have is this: you have a concept where you think QA professionals need to continue honing their skills, and environments should encourage them to learn new techniques and best practices. This is a long way of asking: are there any books or resources you recommend to a QA professional or a tester to help them stay up to speed with the best practices and techniques they need in today's Agile, fast-paced world?

 

Neeraj: I think there are many conferences nowadays as far as QA is concerned. There is [inaudible 00:39:31], I believe, and there are many STAR testing-specific conferences. Even if not everybody can attend, the QA team should be following what topics the speakers are talking about. That's important for understanding what's happening in the world of quality assurance. The second thing I highly encourage, not only for QA professionals but for everybody in IT, is to keep an eye on the digital world. Things are changing left and right, and I'll give you an example of why I keep saying QA professionals need to continue honing their skills. Take the QA team again. If they are doing device testing or browser testing, especially device testing, Joe, what I have seen is that many QA organizations keep buying devices because the QA team says, “I need to test with an iPad. I need to test with an iPhone 6. I need to test with Android. I need to test with this.” Then obviously the organization thinks it's important, because customers are using those devices and they need to have them.

 

After 6 months, as you know, new devices come out, and then you probably start investing in those devices. The QA team especially needs to know they are not limited to physical devices. There are tools available in the marketplace, and a few of them are open source, where you can either emulate those devices, if you want to be very cost-effective, or access real devices through the tool, so that you're not adding or buying devices every 6 months to do your testing, and similarly for browsers. That's where it's very important for the QA team to keep an eye on what's happening in the industry and what kinds of tools are available. There is so much media right now; you can read about all those tools and encourage your team to really review them.
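As one example of the emulation tools he alludes to, Chrome's built-in mobile emulation can be driven through Selenium so a script renders pages with a phone's viewport and user agent without buying the device. A sketch, assuming chromedriver is available and using a placeholder URL; the device name must exist in the installed Chrome's device list.

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("mobileEmulation", {"deviceName": "Pixel 5"})

driver = webdriver.Chrome(options=options)   # assumes chromedriver is installed
try:
    driver.get("https://example.test")
    # The page now renders with the emulated device's viewport and user agent,
    # without a physical device in the lab.
    print(driver.execute_script("return navigator.userAgent"))
finally:
    driver.quit()
```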

 

One thing that really worked for me, Joe, in my previous job, or previous [inaudible 00:41:44], I believe, is that we had a quarterly forum where we started putting out a newsletter. We had a cadence set up where one or two people would bring a new topic from the industry, specific to the QA world, not just in general, and present what's happening in the world. They would also put forward a recommendation for how that new topic, trend, or tool could be applied in our organization and what the impact would be if we adopted that concept, which again creates that enabling environment. You're not asking everybody to spend 2 or 3 hours every day. You're asking one or two people each quarter to pick a topic, review it, and share it with the entire team, and then you put some reward and recognition around that, which builds the culture and environment where people keep watching what's happening in the world.

 

Joe: I think that's great advice. I think that's a great strategy to get everyone involved and thinking about quality, and to keep their skills fresh on what's new in the industry. That's a great point. I think I covered all the areas I wanted to cover in this episode. I have one last question, but before I ask it, is there anything you really wanted me to ask before we wrap up?

 

Neeraj: No, I think you covered almost all the bases on the QA side. I'm enjoying our discussion, so I'm actually good.

 

Joe: Before we go, is there one piece of actionable advice you can give someone to improve their software quality management efforts? And let us know the best way to find or contact you.

 

Neeraj: I think, again, the best advice, as I said, is to be customer-centric and customer-focused; your customer's perspective is key for any QA organization. If I had to say it in one sentence: they need to think about how to be their customer's best buddy and how they can help the customer, rather than just their own organization. That, I would say, is key. If somebody wants to contact me, there is a personal email address I can share with you, Joe. It's a Gmail address; I can send it to you by email, and they can definitely reach out to me.

 

