
Managing Test Teams in the 21st Century [PODCAST]

By Test Guild

Welcome to Episode 92 of TestTalks. In this episode, we'll discuss Managing Test Teams in the 21st Century with Perze Ababa, a Software Testing Manager for Johnson & Johnson's IT Application Services, a consumer R&D group.


Managing test teams has always been difficult, but it seems even harder now in an Agile/DevOps environment. How should we handle rapid change and achieve quicker delivery of our applications to our customers? Discover a better way to handle common test team issues with Perze Ababa.

Perze shares with us his experience in running a testing organization in the 21st century, as well as the tools and techniques such as visual validation testing that can help anyone improve their testing efforts.

Listen to the Audio

In this episode, you'll discover:

  • How to promote best practices among different teams doing different types of testing
  • Why visual validation testing is used every day in Perze's test teams
  • Whether a tester's main job is to be a quality gate
  • Tips to improve your test management efforts
  • What the context-driven school of software testing is, and how it helps test teams test more efficiently

"You're one team; you're all responsible to deliver something that will help your company do better." ~ Perze Ababa

Join the Conversation

My favorite part of doing these podcasts is participating in the conversations they provoke. Each week, I pull out one question that I like to get your thoughts on.

This week, it is this:

Question: What skills do you think every tester should be working on? Share your answer in the comments below.

Want to Test Talk?

If you have a question, comment, thought, or concern, you can let us know by clicking here. I'd love to hear from you.

How to Get Promoted on the Show and Increase Your Karma

Subscribe to the show in iTunes and give us a rating and review. Make sure you put your real name and website in the text of the review itself. We will definitely mention you on this show.

We are also on Stitcher, so if you prefer Stitcher, please subscribe there.

Read the Full Transcript


Joe: Hi, Perze. Welcome to TestTalks.


Perze: Hi, Joe. Thank you for having me.


Joe: It's great to have you on the show today. Before we get into it, could you just tell us a little bit more about yourself?


Perze: All right. My name is Perze Ababa. I'm currently the test engineering manager for Johnson & Johnson's consumer platform. What we really do as a team is provide testing services, as well as test tooling and support, for the current web platform to which all of the J&J consumer sites will eventually be migrated.


Joe: Awesome. Right off the bat I just want to ask you some questions on the test tooling support. Do you support multiple teams across Johnson & Johnson? If so, how do you select what tools they're going to use and how do you promote best practices among different teams doing different types of testing?


Perze: It's a pure web application that we're dealing with, so from a tooling perspective it's definitely good to understand who we're supporting. We have three distinct teams that we're supporting at the moment: two scrum teams, and one team that's primarily focused on upgrades as well as [inaudible 00:01:15] type of work. Depending on what we need to test, that's also where we jump into the type of tools that we use.


Obviously, since it's the web, that kind of begs for a WebDriver-type automation framework. On top of that, we just started working with Applitools, and we're using it extensively to look at the visual component. Part of the scope of our work is really upgrades, and just looking at functionality, performance, usability, and all these other things only shows you a certain part of the story. The value proposition of having a visual testing tool that can give you a better way to compare multiple sites, or multiple versions of a site across multiple environments, is very helpful to us at the moment.


That's really just the automated scripting support. For the team members we have involved in the scrum teams, there are other types of automation we can rely on. For example, since the platform needs to be able to support multilingual data and content creation, having a test data service that can actually create an article in all the languages we support is something that's very helpful. I guess the overall answer is that, depending on the need, we have a particular tool we've been using; some of the tools we've developed, and some of the tools we've bought.


Joe: Awesome. I guess there are two main points I want to pull out of that and explore a little bit more. The first one is Applitools; actually, Moshe introduced me to you, and Moshe's awesome, so he'd kill me if I didn't explore that a little bit more. The second piece I'm going to ask you about after that is how you handle localization testing, because that's something I think a lot of companies struggle with, so I'm curious to know how you handle it.


The first question is, for the visual validation piece of testing, can we explore a little bit more what exactly you've been using it for? Have you been doing mostly proofs of concept, or is Applitools something you're using every day now? If so, can we explore a use case of how it's currently being used in your framework?


Perze: This is actually something that we use every day now. We've primarily been using it on the majority of our site upgrades. We introduce a new version or new functionality, commit it to the code, it gets deployed into an environment, and then we start testing. If there are any introduced risks that are pretty obvious, we start digging for things we could explore a little bit further.


The quick and dirty way that we really use Applitools is: if you have a finite set of URLs you can gather, we take screenshots of all of those in a previous state. The previous state can either be an accepted set of data that you already have on your site, or whatever is in production. Then we push all of that data into the environment that contains the new piece of code, gather all of those URLs, take our screenshots, and compare them against the baseline. You can actually see: did something go horribly wrong on any of these pages, or are these acceptable changes?
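The workflow described here, capture a finite set of URLs in a known-good state, deploy the new code, recapture, and diff against the baseline, can be sketched roughly in Python. Everything below is a hypothetical stand-in: `PAGES` simulates screenshot bytes that would really come from a WebDriver capture, and the hash diff stands in for Applitools' much smarter visual comparison.

```python
import hashlib

# Hypothetical captured "screenshots" for two environments; in a real run
# these bytes would come from a WebDriver screenshot of each URL.
PAGES = {
    ("/home", "baseline"): b"header|hero|footer",
    ("/home", "candidate"): b"header|hero-v2|footer",
    ("/about", "baseline"): b"header|copy|footer",
    ("/about", "candidate"): b"header|copy|footer",
}

def capture(url, env):
    """Stand-in for taking a screenshot of `url` in environment `env`."""
    return PAGES[(url, env)]

def compare_environments(urls):
    """Screenshot each URL in the baseline and candidate environments and
    return the URLs whose rendering changed."""
    changed = []
    for url in urls:
        before = hashlib.sha256(capture(url, "baseline")).hexdigest()
        after = hashlib.sha256(capture(url, "candidate")).hexdigest()
        if before != after:
            changed.append(url)
    return changed
```

In a real pipeline, `capture` would drive a browser against each environment, and the returned list of changed URLs would feed the reporting dashboard for a tester to review.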


The good thing we've seen with Applitools is that if you have a static baseline, something that you know will not change, something that you can rely on, and you use it to compare any new information against, then you have a pretty good idea of what changed with what you did, and whether that change is acceptable. The beauty I've seen with Applitools is that you're not limited to a pixel-by-pixel comparison. They have an exact comparison feature; you can switch to focus just on whether the content changes or the layout changes; or there's this magical strict comparison feature where they've managed to codify what the human eye detects.


There are definitely some challenges between these levels of comparison, because it depends on how strict you are; if you go with exact, you definitely get more false positives. But from the get-go we've really seen a lot more value than waste in using this tool. We're just getting started; I think we've used it for the past three months or so. We've gotten to the point where, with the automation framework we have at the moment, you just feed it the base domain URL of production, or the baseline, then feed it another target URL, and we have a crawler that will dig through all of that information and do a one-to-one comparison. The reporting tool we have pretty much highlights the differences, and if we want to dig deeper we can go through the Applitools dashboard.


Joe: Awesome. Now, this isn't an episode on visual validation testing, but it is a topic I've been hearing more and more about. Since you've been using it for three months, and I know you're probably still learning it, are there any things that caught you off guard that someone starting a brand-new Applitools implementation should know about? And what's your workflow? When there is a mismatch, who looks at it and approves it? Do you have a process in place for who handles that, to say, "Yes, this is acceptable," or "This is definitely a bug, let's open a bug report for it"?


Perze: Yeah, for us right now, in the workflow that we have, it's a tester that actually looks into the results. When it comes to automation and results, one of the challenges you need to deal with is that when something passes, you need to make sure it's passing for the right reasons. When something fails, you also need to know that it's failing for the right reasons.


After determining that it's failing correctly, the question is how soon we can dig deeper into that failure and find a resolution. With regards to the workflow we have for our visual validation tests right now [inaudible 00:08:54], we work in conjunction with [inaudible 00:08:56] labs, and we have enough VMs allotted that we can run a lot of our tests in parallel; some of our tests take more or less 5 to 10 minutes to validate 50 to 100 URLs at a time. It's the results that actually take us longer to go through, because what we end up doing is having a tester pore over a reporting dashboard and look through the failures.


One of the things that Applitools doesn't really give you yet: let's say you have a failure that happens at a global scale. Say there's a particular div in the third column of your site, and you purposefully changed it to justify right instead of aligning left. If you have, let's say, a thousand URLs with that change, then all thousand URLs will fail. One of the features I'm waiting for, which I believe Applitools is working on, is a way for us to easily group these tests into a single common thread. That's something we're definitely looking for.
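The grouping described here can be approximated on the reporting side today. A minimal sketch, assuming each failure can be tagged with a signature of which page region changed (the signature format is invented for illustration):

```python
from collections import defaultdict

def group_failures(failures):
    """failures: (url, diff_signature) pairs, where the signature identifies
    which region of the page changed. Grouping by signature collapses a
    single global change (e.g. one realigned div) into one logical failure
    instead of a thousand per-URL failures."""
    groups = defaultdict(list)
    for url, signature in failures:
        groups[signature].append(url)
    return dict(groups)
```

With this, a realigned div that breaks a thousand URLs shows up as one group to review rather than a thousand separate line items.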


I guess to go back to your original question, what do you need to watch out for when you're first looking at Applitools: I think it applies to testing as a whole, because when it comes to testing you're deriving information out of the system you're testing, and based on that information the people that matter make their decisions on whether the risks included in that information, or at least the known risks, are acceptable enough to release, or to go to the next step in your pipeline.


The one message that I really want to get across to people who want to use these tools is that this is just one piece, one part of the story; there are other areas where you need to focus. Don't make this the only thing you do; there are other things you definitely need to do. Introducing better test design and better test strategy will definitely result in better test execution, and out of that you can derive better reports for your stakeholders to understand whether there are new problems, or whether it's good enough to move forward.


Joe: How do you handle that now, though? For example, a lot of these things are automated now in a continuous integration environment, and people have rules like: if 20% of things fail, the build doesn't go in. How do you measure real risk in that case? How do you know the 20% that failed isn't a high-risk thing, and that even if only 20% or 10% failed, you still shouldn't put that build in, because it was a critical feature or a critical piece of software that's broken?


Perze: Numbers are definitely misleading, because that remaining 20% might be the one that makes or breaks your business. It's very challenging to get to a one-to-one correspondence: if x percent passed, how is that related to something that's good enough? What we really do at this point is collate the test results, and me, as the test manager, I work with our head of engineering, our head of product, and our head of product management, and we go through the results, see which ones we know would be affected, and then as a group we collectively make a decision: "Okay, we'll push this feature out, but we have our caveats; we know these are going to be problematic if you do A, B, and C."
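The point that numbers mislead can be made concrete by gating the release on severity rather than on counts. A minimal sketch (the field names and severity levels are assumptions for illustration, not the actual process described above):

```python
def release_decision(results):
    """results: dicts with 'name', 'passed', and 'severity'. The raw pass
    percentage is ignored: any failed critical item blocks the release,
    while non-critical failures ship as documented caveats."""
    blockers = [r["name"] for r in results
                if not r["passed"] and r["severity"] == "critical"]
    caveats = [r["name"] for r in results
               if not r["passed"] and r["severity"] != "critical"]
    return {"ship": not blockers, "blockers": blockers, "caveats": caveats}
```

Under this rule, a high overall pass rate can still block the build if the one failure that matters is critical, which is exactly the trap a flat 20% threshold misses.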


Part of our definition of "done" is not just testing but also documentation. Being part of a heavily regulated industry, our documentation really needs to be very solid, because if someone finds cause to sue one of our brands, we definitely need to be able to provide all the necessary information about the testing we did on the piece that led up to that challenge.


Joe: That's another great point. Bigger companies a lot of times have to deal with extra regulations, and you've been testing, it looks like, for almost 16 years. The software development lifecycle, the way we develop software, has changed in those 16 years. So if Johnson & Johnson uses more agile-type practices, how do you handle fast, agile delivery and still have the quality in place to know you're not missing anything? You're still moving fast, yet you're still making sure there's quality involved from the beginning to the end of your development lifecycle.


Perze: Wow, that's a very loaded question. I'm going to add a big caveat: I don't think I can say I represent Johnson & Johnson in its entirety, but I can speak from the experience I have with the team I'm working with, which is just, I guess, a dot in the plane of teams within Johnson & Johnson. There are definitely some challenges you need to hurdle over. I'm used to working with probably more agile teams at previous companies. Before this I was with Viacom, before that NBC Universal, and before that the New York Times. The one thing I remember from the New York Times was that, even before agile became the marketing buzzword it is now, the team I worked with was pretty agile, in the sense that when we had a project to work on, I sat almost next to the developer and the project manager, and it was a lot easier to just push things out.


In bigger companies like J&J, for example, there are some gates you essentially have to go through. You know, we just want to make sure that from an enterprise perspective we're not breaking anything we're not supposed to break. I'm not sure if I just went around your question, or if you want to clarify something else.


Joe: Awesome, yeah, I'm always looking for insight. Like I said, I work for a big company, so there are a lot of regulations. They want us to be agile, they want us to be lean, they want us to move fast, yet we still almost have to do waterfall-type processes, because if the FDA ever came in and said, "Where's the proof that you tested widget A?" we'd need to be able to produce it. So it's not like Google or a simple web app that you can just release. I think you and I both work on enterprise software, so there's this extra layer, and I wonder how many people using agile have dealt with it before and how they handle it. That's the type of insight I was looking for.


Perze: Right. Definitely, when you say we've done agile, it's probably better if I said, "We've done a form of agile, but it's really based on the culture of the team and our understanding of how we produce software." If you simplify all of it, we work as a team to create software that brings value back to the company, and however we do it, as long as we align with the rules and regulations of the context we're in, we should be okay. There are going to be some challenges when it comes to waiting for things; if you want to have something provisioned and the corporate policy takes a lot longer than what you're used to, there has to be a way for you to inform your boss of that risk, so that person can then help accelerate certain things. If not, just do something else; there's always something in the backlog you can work on.


Joe: Maybe people won't agree with this, but I don't think there's a real, pure version of agile. It's based on your company and your company's culture, like you said. So it's not necessarily that one company's doing agile right and another's doing it wrong; it's probably just a flavor of agile principles, and I don't know if there's a right or wrong way per se.


Perze: I completely agree with that. If you look at the agile manifesto, it really just talks about- [inaudible 00:18:40] look at the principles that are laid out, it talks about guidelines. It's not something that's set in stone. A lot of the challenge I have with the certifications out there is that they seem to have found a solution for the what and the how of doing agile, but the one thing I think is very key, and that sits on top of defining the what and the how, is the who, because that really defines the culture you have within the team: the principles you adhere to, how you communicate, how you react to problems, how the managers try to unblock certain problems. It's a continuous learning process that we have to go through as a team.


If you're a tester who thinks, "Look, I'm the quality gate, nothing goes through, because I'm going to stop the build since I can always find a bug," that ends up being more harmful than helpful. Thinking instead, you know what, you're one team, you're all responsible to deliver something that will help your company do better; I think that really simplifies and breaks down a lot of the barriers that some traditional testing teams have.


Joe: I completely agree. I just did a few webinars around this concept: back in the day, if you were a QA person, a tester, you were the gate, and you blessed whether or not something would be released. Now it's more of a group decision. You're giving the information that the people in charge need in order to make the decision, but ultimately one person isn't the [inaudible 00:20:31] quality; it's everyone on the team's responsibility.


Also, I've noticed that in almost all of your online profiles you mention you're a firm believer in the context-driven school of testing. What does that mean to you? I've interviewed a few testers who mention this context-driven school of testing, and I'm just curious to know what it means to you, and how it flavors your day-to-day work as a tester, or as a manager of testers.


Perze: For me personally, it just tells me that what I know now can be better; there's a lot more to learn. There are seven basic principles of the context-driven school of software testing, and I could go through the whole list, but I'm going to pick and choose the ones that have been most valuable to me.


The second principle says that there are good practices in context, but there are no best practices. I think there's a pretty big misconception when it comes to taking an idea that succeeded somewhere else and trying to introduce it to a place with a different culture. Having that in mind gives me the idea that, you know what, whatever I know about the product I'm working with, the product I'm testing, there will still be a better way to test it, as long as I continue to improve my knowledge of it and improve the relationships I have with the people working with me on that project.


That leads to the third principle, which says that people working together are the most important part of any project's context. We can introduce as many processes as we want to force people to be agile, like literally standing up or having [retros 00:22:38], but if we don't respect the value of what's been done, and don't bring the learnings from it back into the team to make the team improve, it ends up being of very little value. I do believe that good software testing is a challenging intellectual process. A lot of the conversations in the past 10 years or so have said that the tester as a career is about to die, primarily because of automation.


If you talk to people such as yourself, or others who have been very successful at making automation part of the solution for building better software, there's a piece that really requires a lot of thinking and a lot of understanding, which is the piece that cannot be automated. Those are, I guess, a few of the principles that have made a pretty big impact on me. And to be honest, some of the people I think you've interviewed, like Matt Heusser for example, he's one of the few guys who really helped me realize the value I have as a tester. He pretty much got me to where I am now because of the way he mentored me, asking me why I think this is this or that is that. It made me question some of the beliefs I had about testing, and made me go beyond the idea that testing is just doing the same thing over and over again; it's so much more than that.


Joe: I'd like to go back to something you mentioned earlier in the interview, and that's localization, testing other languages. Here's another thing I think a lot of companies struggle with, or have different approaches to. Is that something visual validation can help with? I've heard you can use it for localization. What approach do you use for testing other languages within your applications?


Perze: When it comes to localization, based on my experience, I think we can look at it on a purely functional basis. You have a piece of content that was translated, so we have to assume the translation is correct and perfectly colloquial, that we're not insulting people with our translation. What we usually do now is use a translation service that gives us the ability to scrape that data, convert it into the CSS selectors and locators, assign it to a specific locator, and then do the validation. It's a heavily template-based site, so we know what to expect, and the fact that it's a platform for a lot of these other sites means we know the naming conventions are definitely going to be the same.


Let's say we want to compare a product page, for example: we have a header, and a bulleted list describing what the product can do, so there's a way for us to validate correctness. We have a master source, or you could say a baseline, and we compare against it according to the language. The tricky part we've worked out so far is getting to the point where we have a baseline, abstract the localized data out of it, then combine them back together and run it as a data-driven test to perform the comparisons.
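The data-driven comparison outlined here, a baseline of expected copy per locator joined against what the page actually renders, might look something like this sketch. The locators and translations are invented for illustration; a real run would scrape `rendered` from the site with WebDriver and pull `TRANSLATIONS` from the translation service.

```python
# Hypothetical expected translations, keyed by CSS locator, as a
# translation service might supply them for a templated product page.
TRANSLATIONS = {
    "en": {"h1.title": "Gentle Baby Shampoo",
           "p.description": "Tear-free formula"},
    "fr": {"h1.title": "Shampooing doux pour bébé",
           "p.description": "Formule sans larmes"},
}

def validate_locale(rendered, lang):
    """rendered: locator -> text scraped from the page under test.
    Returns the locators whose text does not match the expected
    translation for `lang` (an empty list means the page passes)."""
    expected = TRANSLATIONS[lang]
    return [loc for loc, text in expected.items() if rendered.get(loc) != text]
```

Because the site is heavily templated, the same locator set works across every language, so one validation function can be driven by data for each locale.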


With regards to visual baselines, we employ the same exact approach, because we can capture all the data of the site that's already in that language, upgrade the site, and perform the comparison. At this point, if there's a difference in the characters for some reason, Applitools will definitely catch it; but the correctness of the actual translation is something that's already been verified for us even before that copy got to the sites.


Joe: You've been a director of test engineering, and you're a software test manager. I interview a lot of people, and I get asked questions all the time on my blog: "What skills should I be working on?" Are there any skills you look for when you're trying to hire people that you would love more people to have, that you find hard to find, that you think every tester should have?


Perze: Most of the people who have worked for me, or with me, and been very successful are the ones who have that curiosity and are really fast learners. A lot of times they're not afraid to experiment; they're not afraid to fail. One thing that's really common among these folks is that they learn for the better. You can put on your resume, or show, that you have the skill to use something, but considering how fast-paced our world is now when it comes to the turnover of technology, and of the knowledge of using that technology, I definitely prefer testers who are very independent in their learning and can think on their feet.


Joe: That's such a great answer, and that's what I try to pull out when I'm interviewing someone, because I don't need someone who knows everything and every technology. I just need them to be able to Google something on their own and not wait for you to tell them, "Look, you need to find this out," or wait for training or a college course; that's never going to happen. I definitely agree with that.


Perze: Yeah, it's definitely a challenge, because you can see the really adverse effects of people who don't have the initiative to learn. These are the folks who will always be waiting for a requirement to be explained to them before they can actually test something. As a manager, I hate being a micromanager; I cannot stand micromanaging other people. I want people to bring their own ideas, or even challenge me with better ideas. I try to keep up to speed with the technologies we use for testing, but I also stay in the trenches with the testers, staying up to date with what we're actually doing and understanding what our challenges are.


I think that's also one of the challenges when you jump into management: technically you're really no longer testing, you now have a different job, so to speak. If you don't keep up to date with what everyone is doing, you will definitely lose that edge.


Joe: Great advice, and I agree with you 100%. It's one of my pet peeves, actually: people will come to me, "Joe, what happens if I do this?" I'm like, "Well, have you tried it?" They're like, "No." I try to say, "This is what happens: you could've done that, but instead you've been waiting around for eight hours. I don't get it." I guess people are different, but it still drives me nuts.


Perze: Yeah, there are definitely people who have different ways of learning. There are some people where you just give them a very high-level idea and they can run with it, and there are some people you have to work with patiently, being more of a mentor than anything. Of course, you expect them not to stay that way; you want to see a sign of maturity in the work as they become more independent. As a manager, that's something I always look for.


I've really learned a lot, especially after I had my first kid. As that kid started growing, there were a lot of learning experiences: "Oh, I actually need to explain this, because the biases I have, or the understanding I have of a given concept, that person might not know or understand it that way." There's a need for you to align on knowledge, but it's really up to the person to do whatever he or she can with that knowledge in hand.


Joe: Before we go, is there one piece of actual advice you can give someone to improve their software testing efforts? And let us know the best way to find or contact you.


Perze: I'm going to answer that starting from the end. You can contact me via Twitter, you can reach out to me via Skype, but please do reach out through Twitter; my Twitter handle is Perze, that's P-E-R-Z-E, my first name. With regards to the piece of advice: don't ever stop learning. That's the one thing I would encourage for anyone in the software testing field. Now that we have a very accessible and connected world, you can actually find a lot of authors, people who are pretty much experts in our field, whom you can easily reach out to on Twitter or Facebook. I've always seen that people like Matt Heusser, James Bach, and Michael Bolton are folks who talk back. You don't hit an expert wall when you ask them a question; they work with you and help you with your interests if you're interested in improving as a software tester.


The other piece of advice I can give is that, as a tester, you shouldn't keep knowledge to yourself. If there's something you can share with the rest of the community, participate and share. If you don't have a blog yet, please start one. I know it takes a couple of tries to be able to write (I'm struggling with it myself), but as long as you don't keep this knowledge to yourself and share it, that's another opportunity to learn.


I guess lastly, there are a lot of tester meetups slowly starting to pop up now, and I would definitely encourage you, or anyone, to participate. I particularly belong to the NYC Testers meetup; I'm one of the founders of that meetup, and we usually have something scheduled on a monthly basis, so check us out if you're within the NYC metropolitan area. If you're in other areas, there are definitely other testing meetups that give you a sense of community: as a tester, you're not the only one struggling through the challenges of sometimes having the only dissenting opinion on a software development team. I guess those three things are my advice.


