Have you ever had the feeling that you’re running too many tests? Or redundant tests that cause you to spend too much time maintaining an ever-growing suite of what seem like wasted test cases? Does your test suite take a long time to run when all you want to do is make sure a small change to a specific area of your application didn’t affect anything else? And does it leave you wondering what combination of test cases would give you the verification coverage you need?
These are issues I struggle with as well, which is why last year at HP Discover in London I was interested to learn about model-based testing using a tool called Agile Designer. Ever since seeing a demo of the product given by Huw Price, I’ve been hoping to have him on my podcast. I’m happy to announce that today, Huw Price shares his thoughts on model-based testing and test data management.
About Huw Price
Huw Price is a VP at CA focusing on quality and acceleration across the entire software lifecycle, specializing in test data and accelerated automation frameworks. Huw has provided high-level architecture design support to multinational banks, major utility vendors and health care providers. A strong believer in balancing pragmatism with a visionary approach, he has been able to rapidly bring new products to market while maintaining strong quality control.
Huw writes for numerous publications including CIO UK, Professional Tester, The European Software Tester and Test Experience, and is an adviser to King’s College London (KCL). He has led many discussions with academic groups, including KCL, and presents frequently at conferences and exhibitions including HP, QSIT’s STeP-IN conference in India, the SQS international conferences, Oracle User Groups, the UKCMG and StarEast. He also has experience training large groups and offering strategic advice to large corporations like Deutsche Bank and Capgemini. Huw does not shy away from lively discussion about best practices and the limitations of current testing techniques.
Quotes & Insights from this Test Talk
- We convert what is a deceptively simple model into, in effect, a set of test cases. The number of potential paths could be in the billions and trillions, and we can get it down into the tens and hundreds by using this mathematics. That's really the starting point of where we began with Agile Designer and agile modeling.
- The problem is we don't have time to model because we're all too busy doing our job, but then of course your job gets too busy because you're fixing bugs because you haven't modeled. It's the classic catch-22, so it's a tipping point. You have to design test cases. The tester is modeling; they are actually building a model which should, in theory, reflect the requirements, and the requirements are a model. It comes down to who's actually going to do that modeling. In my sprint team (the Agile Designer team is a very efficient bunch of 22-year-old developers, and we run about 3-4 week sprints) what we'll do is say, “Look, in this sprint we need to have done the model.” I don't care who's done it. It could be the tester, it could be me (think of me as the user), or it could be the developer, but at some stage or other we're going to have to create a pack of regression tests to test this.
- I think the reality is that things will change because someone will miss something, and then everyone in the world has got to discover that themselves, whereas if we have this community, you could almost create a little forum around some of these common tests in the banking industry, things like credit card checking and all that stuff. Take the Heartbleed bug that went through: we actually modeled the test cases that were being used, and we found the bug in about five minutes. Now, if we'd had a community of people looking at the model, that would never have happened, absolutely not, because the industry would have come together and said, “Whoops, we're missing this. We're absolutely missing this.” I think it would just bring a degree of stability, and rigor, and reuse, and a community which seems to be lacking in the testing world really, apart from when we all get together at trade shows and have a good time.
- The thing there is to think about synthetic data, because production data is actually very thin. If you turn on code coverage tools, a production run typically only covers about 20% of your code, whereas you should be at 60%-80% code coverage. The easiest way to get there is to synthesize data, to create data with parameterization around it. We've got some very powerful synthetic data generation tools which will put data directly into a database, into a file, or into an MQ message, but also, and this comes back to automation, quite often it's better to create an automation framework which puts data in through the front end of the system as well.
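Huw's point about collapsing billions of potential paths into tens of test cases is, in essence, combinatorial (all-pairs) test design. As a rough illustration only, here is a minimal greedy pairwise generator in Python; the parameters and their values are invented for the example and are not taken from Agile Designer:

```python
from itertools import combinations, product

# Hypothetical model: three parameters of a login form under test.
parameters = {
    "browser": ["chrome", "firefox", "safari"],
    "user_type": ["admin", "member", "guest"],
    "locale": ["en", "de", "fr"],
}

def pairwise_tests(params):
    """Greedy all-pairs reduction: repeatedly pick the candidate row that
    covers the most not-yet-seen value pairs until every pair is covered."""
    names = list(params)
    # Every pair of values (across two different parameters) that must
    # appear together in at least one test.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((i, va, j, vb))
    candidates = list(product(*params.values()))
    tests = []
    while uncovered:
        def gain(row):
            return sum(1 for (i, va, j, vb) in uncovered
                       if row[i] == va and row[j] == vb)
        best = max(candidates, key=gain)
        if gain(best) == 0:  # defensive; cannot happen with full candidates
            break
        tests.append(best)
        uncovered = {p for p in uncovered
                     if not (best[p[0]] == p[1] and best[p[2]] == p[3])}
    return tests

tests = pairwise_tests(parameters)
print(f"exhaustive: {3 * 3 * 3} tests, pairwise: {len(tests)} tests")
```

Exhaustive testing of three 3-valued parameters needs 27 cases; a pairwise set that still exercises every interaction between any two parameters needs far fewer, which is the same kind of reduction Huw describes, just at toy scale.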
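To make the synthetic-data idea concrete, here is a small sketch of parameterized data generation in Python. The field names, value pools, and boundary choices are assumptions made up for illustration; they are not the API of any CA or Grid-Tools product:

```python
import random

random.seed(42)  # seeded so runs are reproducible

def synthesize_customers(n, countries=("UK", "DE", "US"), max_orders=5):
    """Generate n synthetic customer records, deliberately weighted toward
    the edge cases thin production data often misses (boundary ages,
    customers with zero orders)."""
    rows = []
    for i in range(n):
        rows.append({
            "customer_id": f"CUST-{i:05d}",
            "age": random.choice([18, 25, 40, 65, 99]),  # boundary-heavy
            "country": random.choice(countries),
            "orders": random.randint(0, max_orders),     # includes zero
        })
    return rows

customers = synthesize_customers(100)
print(customers[0])
```

The same generated rows could then be loaded into a database or file, or, as Huw suggests, fed through the application's front end by an automation framework so the system creates the data itself.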
Resources
Connect with Huw Price
- Twitter: @datainventor
- Company: Grid-Tools
May I Ask You For a Favor?
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.
Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!