Keeping Healthy and Cost-effective Kubernetes with Amir Banet

By Test Guild

About this DevOps Toolchain Episode:

On this episode of DevOps Toolchain, host Joe Colantonio interviews Amir Banet, an expert in Kubernetes and software development and the CEO of PerfectScale. PerfectScale helps organizations optimize their Kubernetes clusters and reduce costs in a data-driven manner. Amir shares how their company started and how they are bridging the gaps in the DevOps space regarding monitoring and decision-making. Find out how PerfectScale disrupts monitoring and observability by providing a more macro view and predictive decision-making. Discover how to eliminate waste and unnecessary resource provisioning and ensure peak performance at the lowest possible cost with data-driven intelligence that continuously optimizes each layer of your K8s stack.

TestGuild DevOps Toolchain Exclusive Sponsor

SmartBear believes it’s time to give developers and testers the bigger picture. Every team could use a shorter path to great software, so whatever stage of the testing process you’re at, they have a tool to give you the visibility and quality you need. Make sure each release is better than the last – go to smartbear.com to get a free trial instantly with any of their awesome test tools.

About Amir Banet


With more than 20 years of experience in software development and product management, Amir has led teams in creating solutions to streamline the work of Developers, DevOps, QA, and IT professionals at companies like Hewlett Packard Enterprises, SeaLights, Samanage, and SolarWinds, and is now leading PerfectScale to become the industry standard with regard to K8s optimization and governance.

Connect with Amir Banet


Rate and Review TestGuild DevOps Toolchain Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:01] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability for some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast and my goal is to help you create DevOps toolchain awesomeness.

[00:00:19] Hey, it's Joe, and welcome to another episode of the Test Guild DevOps Toolchain Podcast. Today, we'll be talking with Amir Banet all about keeping healthy and cost-effective Kubernetes. Amir has more than 20 years of experience in software development and product management. He's led teams in creating solutions to streamline the work of developers, DevOps, QA, and IT professionals at companies like Hewlett Packard Enterprises, SeaLights, SolarWinds, and a bunch more. And he's now leading PerfectScale to become the industry standard in regard to K8s optimization and governance. Really excited to have him on the show. I've known him for a while, and you don't want to miss it. Stick around.

[00:00:56] This episode is brought to you by SmartBear. As businesses demand more and more from software, the jobs of development teams get hotter and hotter. They're expected to deliver fast and flawlessly, but too often they're missing the vital context to connect each step of the process. That's how SmartBear helps. Wherever you are in the testing lifecycle, they have a tool to give you a new level of visibility and automation, so you can unleash your development team's full awesomeness. They offer free trials for all their tools, no credit card required, and even back it up with their responsive, award-winning support team, shortening your path to great software. Learn more at SmartBear.com today.

[00:01:39] Joe Colantonio Hey, Amir. Welcome back to The Guild.

[00:01:43] Amir Banet Hey, Joe. Thank you for having me.

[00:01:46] Joe Colantonio Great to have you. So I guess before we get into it, I usually botch bios. So is there anything in your bio that I missed that you want The Guild to know more about?

[00:01:53] Amir Banet No, I think you covered the vast majority of it, and that's fine.

[00:01:57] Joe Colantonio Awesome. Just looking at your bio, I know you've covered pretty much every stage of the DevOps pipeline, or software development and quality testing. Why focus in on Kubernetes now, when I think you could probably have your choice of different gigs to get involved with?

[00:02:14] Amir Banet Yeah. So first, I love learning new areas, the experience of going into a new area and mastering it. It's always something refreshing that I recommend anyone do from time to time. And when we first met, I think I was the product manager of QTP and UFT; this is how we became familiar with each other. Since then, yes, I've been through a few other, let's say, startups or companies, touching many different aspects, always in IT, always providing services and solutions for QA, Dev, DevOps, and anything in between. And regarding your question, the reason I joined and actually started this company now is, first, the opportunity. We see here a really great opportunity to disrupt a market that hasn't been addressed properly in our eyes. And second, it's my partner. I joined forces with someone who has vast experience in Kubernetes and knows it at the highest level possible. He was exactly the persona that we're addressing, and he felt the problems that our solution now helps solve in his daily routine. And because he had such vast knowledge about the problems, the persona, and what is really needed to solve them in a way that meets the needs of the DevOps or the SREs, we decided to do a test run: we started by interviewing more than 30 different companies to understand whether these problems, which we'll talk about later on, are something common. And when we understood that they are, and that there are no real solutions able to address them, we had kind of a eureka moment: yeah, we have here something that we can really turn into a successful company.

[00:04:05] Joe Colantonio Awesome. I know you sent me some things ahead of time to prepare, and I know there are four gaps you identified with the current monitoring and observability tools. Are those the ones that made you think about the problems people may be facing? I want to find out what those problems are. Maybe walk through these gaps: what were the problems that made you think, okay, we need a solution to fix the gaps we've been finding?

[00:04:27] Amir Banet Right. So first, I would like to say a few words about what we are doing; I think it will help give a better understanding later on of the gaps that we're addressing. PerfectScale is all about day-2 operations in Kubernetes done right. Okay. And as part of day 2, it means how you're maintaining and keeping your clusters and your applications in Kubernetes both healthy and cost-effective. There are hundreds if not thousands of different parameters for which either developers, DevOps, or SREs need to manually decide the right values. Now, if you're not choosing the right values, it automatically affects either the performance your end users are getting or the cost that you need to pay for these applications. And this is exactly where we wanted to come in and provide data-driven decision-making for those personas, so they will know exactly what accurate values they need to use for each one of these parameters. And when we were asked about our differentiator compared to solutions in the monitoring space, the observability space, etc., the main gaps that we identified are as follows. First, they are providing you data; they are not really providing you the answers or the insights that you need in order to make data-driven decisions in a heartbeat. And why is that? The main reason is that they were designed to meet the needs of pet owners and not cattle owners. Are you familiar with this analogy, pets versus cattle?

[00:05:58] Joe Colantonio No, I may have heard it once before, but maybe you could explain it for people who haven't.

[00:06:02] Amir Banet Yeah, sure. So if you have a pet, a dog or a cat, you want to know as much information about it as possible. You want to know whether it is well-fed, what its energy level is, whether it is healthy. And this is exactly what you are getting from the monitoring tools: you're getting all the information you can think of on a specific pod, on a specific node or server, etc. But this is not really the level of information that is needed for the DevOps manager or the platform engineer or the SRE. They prefer to have a more holistic view of what is going on, and if they need to find the needle in the haystack, to be able to do that quite fast. Again, the macro is much more important than the micro. And this is exactly what our solution provides: the macro view, and if you want, the ability to easily dive into the sick cow and know how to treat it. So this was the first gap. The other three gaps are also important. The second is about prioritization. Again, with the monitoring tools you are kind of bombarded with multiple data points, multiple alerts. How can you really prioritize what is critical for the business and what can wait? The third gap is about whether you are able to have good alignment and collaboration between the different stakeholders, because in the Kubernetes realm we have many more chefs in the kitchen than ever before. You have additional personas; I mentioned a few of them, like the DevOps, the SREs, the platform engineers. It can also be the FinOps guys. And you need one system to align all of them on the single source of truth and whether the action items are actually taking place, to have governance of that. And that, again, is not something that the monitoring and observability tools are good at. And lastly, maybe most important and very relevant only to the Kubernetes space, because this is an ephemeral system that is constantly changing:
you need to have this decision-making done as soon as possible, even in a preventive manner to some extent, because you don't want out-of-memory errors, you don't want high throttling that will affect the latency the user is experiencing. So you need to have here something kind of predictive that helps you make decisions in a heartbeat, and that even acts autonomously on your behalf. All of these four gaps were the reasons; these were, first, the pains that my co-founder felt, and the reason why we decided to create a solution that disrupts the market by doing things differently.

[00:08:28] Joe Colantonio Awesome. I want to dive into each one of those. Before I do, though, I'm new to Kubernetes, so just a dumb question right off the bat. You talk about parameters that people struggle with setting manually. And I know you can run it on AWS, you can run it on Google Kubernetes Engine. Are all those parameters different per environment that you run on? Does that mean people struggle with, okay, I need to tune for AWS, but it's not the same if I now have, like, a backup that runs on Google Kubernetes Engine?

[00:08:56] Amir Banet So some of them are, but the vast majority aren't. That's one of the beauties of Kubernetes: it is agnostic to where you're running it for the vast majority of the parameters. The same parameters apply. Let me give you a few examples. For example, what are the memory and CPU requests and limits? These you need to define no matter which environment you are running in; it can even be your own data centers. What is the threshold to create more replicas of the same pod? And which nodes do you need to select? Actually, the node is something a little bit more relevant for each environment separately, because the Amazon world has different nodes than the Google one, and now you have solutions that actually take the decision-making about which node you need out of the equation. So this is also something where you see the industry evolving in order to reduce the complexity associated with the decision-making I mentioned before.
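For readers new to these parameters, the replica threshold Amir mentions is what stock Kubernetes handles with the Horizontal Pod Autoscaler. A minimal sketch of the HPA's core formula (simplified; the real controller also applies tolerances and stabilization windows):

```python
import math

def desired_replicas(current: int, current_metric: float, target_metric: float) -> int:
    # Core HPA formula: desired = ceil(current * currentMetricValue / targetMetricValue)
    return math.ceil(current * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target: scale up to 6
print(desired_replicas(4, 90.0, 60.0))  # 6
# Load drops to 20% against the same target: scale down to 2
print(desired_replicas(4, 20.0, 60.0))  # 2
```

The threshold itself (here, the 60% target utilization) is exactly the kind of value Amir says teams currently guesstimate per workload.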

[00:09:53] Joe Colantonio Very nice. So I know you have four gaps; I just want to see how you fix them. So, data-driven decision-making: I would assume that people are getting overwhelmed because there are so many things you can monitor but not necessarily know about. Okay, maybe at a high level: here's a problem, let me know it's a problem easily, but then I can drill down. And you mentioned a bunch of different roles, and the different roles can then look at the different metrics, I guess, that they have to deal with. Was that the main thing?

[00:10:20] Amir Banet First is, again, gathering all the information and providing it in a very accessible manner. This is, I think, one of the secret sauces of what we are doing: we help make the data more accessible for you, in a way that puts the focus on the insights and less on the data, on the answers and less on all the data that compiles into them. Another thing is that we provide the evidence for why we recommend what we do, because we are not just another monitoring or observability solution. We are not that at all. Actually, we never replace the monitoring solution in the organization, because we are a layer above it; we provide the insights that come from that information. And I can give you a few examples of the evidence that we give in order to support our recommendations. We talked before about the memory and CPU requests and limits. Here it's things like the usage patterns that actually happen in your cluster, and how your cluster behaved both under pressure and in idle mode. So we are looking at this seasonality. We are sampling every few seconds what is going on in your clusters and understanding the utilization of each one of your pods, and this is also part of the decision-making that later on affects our recommendations. Besides the utilization, we look at many other patterns, even things like whether the pod was initialized during the analysis period, yes or no; how many replicas of the pod were running; and whether all the replicas behave the same under the same kind of load. These are just a few examples of the metrics that we look at as part of our recommendation decision-making.
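PerfectScale's actual algorithm isn't public, but the kind of percentile-plus-buffer analysis of sampled usage Amir describes can be sketched roughly like this (the function, the percentile choice, and the headroom factor are illustrative assumptions, not the product's logic):

```python
import math

def recommend_request(usage_samples: list[float], percentile: float = 95.0,
                      headroom: float = 0.15) -> float:
    # Illustrative rightsizing: take a high percentile of observed usage
    # over the analysis window, then add a resilience buffer on top.
    ordered = sorted(usage_samples)
    idx = max(0, math.ceil(percentile / 100 * len(ordered)) - 1)  # nearest-rank
    return ordered[idx] * (1 + headroom)

# CPU usage in millicores, sampled every few seconds across the window;
# observed peaks sit near 500m even if the request was guesstimated far higher.
samples = [120, 130, 110, 500, 140, 125, 135, 115, 480, 128]
print(round(recommend_request(samples)))  # 575 (p95 of ~500m plus a 15% buffer)
```

A real engine would, as Amir notes, also weigh seasonality, pod initialization, replica-to-replica variance, and more; this only shows the basic shape of usage-driven recommendation.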

[00:12:02] Joe Colantonio When someone logs in, your insights would bubble up and say, hey, your usage pattern has changed over the previous two months or something. And so it automatically tells you that there's kind of an issue, and then you can drill down to find out what may be causing it. Am I thinking of this correctly?

[00:12:17] Amir Banet The usage pattern doesn't necessarily say that you have an issue, okay? A usage pattern is something healthy. And this is, again, one of the advantages of Kubernetes: it allows you, no matter what your load is, to adjust the scale accordingly. But in order for it to be optimized, you need to figure out the right values to set for each parameter. We use the usage patterns to fine-tune exactly the values of each parameter that you currently guesstimate.

[00:12:52] Joe Colantonio And so the fine-tuning, I assume, is done automatically, and then it adjusts based on what it learns from the change?

[00:13:00] Amir Banet Great question. So currently it is not automatic; the autonomous fashion will be released quite soon. This is what we are working on. You would be surprised, but when we started the company, our first belief was that we wanted to go to autonomous mode from day one. And then we had a discussion with our design partners, and they said, hey, hold on a second, before I trust you to do this autonomously, you need to understand that by taking full control and doing it autonomously, you are basically taking control of the most important thing for the business, which is whether their application will run successfully and how much it will cost. So they told us, okay, wait, we need first to trust you and your recommendations, and only then might we consider going the autonomous way. So we changed our approach. We put a lot of emphasis on the UI, on showing why our recommendations make sense. Our solution learns fast: we are collecting data from more than a couple of hundred clusters as we speak, from multiple companies, and we are always adjusting our recommendations, learning from the data that we collect and improving our recommendation engine. And now we have reached a point where our customers are telling us, okay, you proved your value, you are accurate, we now want this to be done completely autonomously. What we currently provide, and this has been available for the last half year or so, is one-click remediation. You see the recommendation, you can understand why it happened, and you are also able to adjust how much buffer, what we call the resilience level, you want to set for each one of your workloads, for each one of your applications. And based on your configuration plus our recommendation, you are able, with one click, to implement these suggestions.

[00:14:57] Joe Colantonio Does it give you information on the suggestions, like, we implemented the suggestions and here are the results now? So people know that, okay, this actually worked, and maybe I need to do more adjustments based on more data I've learned about the environment.

[00:15:10] Amir Banet So the recommendation is for this point in time, based on an analysis window that you select, whether it's a month, a week, or whatever. You get the recommendations, you implement them, and from there on, to see the impact on the system, we have both KPIs and reporting that tell you what the impact of your changes was from a cost perspective, a waste perspective, and also risk. And I didn't talk about the risk; I think this is a very important aspect to mention as well. Unlike many other players that are only focused on cost optimization, our focus, at the same level if not even more, is on resilience, durability, performance issues, and risk. We put a spotlight on which issues you have, how to fix them, and what the priority and severity of each one of the risks is, in order for you to improve your performance and provide better service for your own customers. So we kind of guarantee that your SLA breaches and SLO breaches will drastically drop using our solution.

[00:16:15] Joe Colantonio That goes into the second point you mentioned, prioritization. That's what it does: a focus on not necessarily just cost savings. Obviously, cost savings are important, but you may have a resilience issue that's holding you back or maybe killing your SLOs and SLAs, it sounds like.

[00:16:28] Amir Banet Yeah, I can tell you that if cost used to be a nice-to-have kind of thing, these days it's actually becoming more critical than ever before. But resilience and durability and, again, performance, this is a must. All organizations need to guarantee that they are providing the quality of service that is expected of them. It's definitely something that is critical, and one of our biggest value propositions.

[00:16:56] Joe Colantonio All right, I don't want to gloss over costs. I read an article, I forget who wrote it, where as they moved to the cloud, they noticed the charges went up astronomically, and they decided to bring everything back in-house again, like back in the old days when we had raised floors with our own bare-metal machines.

[00:17:12] Amir Banet It was Amazon.

[00:17:14] Joe Colantonio It was Amazon. All right. So how do you avoid those extremes then? Like, how do you save costs so that when companies do this, they do it right, and it's not that extreme where, oh, we have to just pull out now and do our own data center again?

[00:17:25] Amir Banet A few points on that. First, not all clouds are the same, okay? So you need to not chase the hype, but choose the solution or the technology that fits your application needs or your business needs. In the Amazon case, they chose Lambda serverless, and for the unique use cases they had in mind for what the application should provide, Lambda was the wrong choice, so they moved back to a monolith. But it's not really a monolith; the monolith runs on Kubernetes. It's a completely different kind of architecture from what they used in the beginning. And yes, you need to do the analysis in advance. Don't chase the hype, and make sure that you are doing this digital transformation to become cloud-native only if it really suits your needs.

[00:18:16] Joe Colantonio If they had used your solution, would they have noticed, based on the insights, that there was an issue? Or, because it wasn't the right fit, would it have been able to highlight, no matter what solution they were using, that they'd have an issue down the road?

[00:18:28] Amir Banet So our solution is focused on Kubernetes and not on serverless, so it wasn't that relevant for them. But you would be able to easily see the costs going up, you would be able to see the waste, you would be able to see the risk using our solution. Theoretically, it would have given them the understanding that they were heading in the wrong direction from very early stages.

[00:18:51] Joe Colantonio Nice. I guess the third thing you talked about was good alignment between different stakeholders. Could you talk a little bit more about how your solution works for the different players within a team that would use it?

[00:19:01] Amir Banet I can give you an example from one of our clients. I will leave their name out of the equation right now, but this company has what is called a developer platform. Okay. It can be built on an open-source solution or on a homegrown solution; Backstage is one example of a platform like that. And in their case, they wanted some parts of our UI to be delivered directly into the developer platform. And why is that? Because their developers are kind of cozy in the environment they're in; they don't want context switching. So in their case, we are building them an integration that takes the recommendation of what to do for each one of the workloads and puts it directly in the developer platform. Here you have a kind of automatic integration between the DevOps, who are using our UI, and the developers, who need only pieces of our UI in order to make a decision just on the workload that they're responsible for. This is one example. Another example can be between the FinOps guys, the guys that are more responsible for financial observability and control, and the DevOps guys. We heard multiple times that the FinOps come to the DevOps or to the SREs with questions regarding either a spike in cost, or projections about what's going to happen next quarter, because they need to prepare their budget accordingly. And the DevOps are kind of left clueless, especially if they are asked for a drill-down into a specific business unit or a specific team or a specific product, because that's not the way the existing cost solutions work. They give you data more on the node level, on the entire-cluster level; that's the granularity you are getting, and it doesn't fit the needs of the FinOps, who are focused on a business unit, focused on a product.
And this is again a nice use case that our solution is able to address, because you are able to group according to the logic that you have in mind, in whatever scope: it can be cost, it can be waste, it can even be resiliency issues, how many you have, and to give this kind of over-time analysis to the relevant person.
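The per-team cost roll-up Amir contrasts with node-level billing amounts to aggregating workload spend over ownership labels. A sketch (the records, names, and amounts are invented for illustration):

```python
from collections import defaultdict

# Hypothetical per-workload cost records, tagged with the kind of
# ownership labels a team might put on its Kubernetes workloads.
workload_costs = [
    {"workload": "checkout-api", "team": "payments",  "monthly_cost": 1200.0},
    {"workload": "fraud-scorer", "team": "payments",  "monthly_cost": 800.0},
    {"workload": "search-index", "team": "discovery", "monthly_cost": 2500.0},
]

def cost_by(records: list[dict], key: str) -> dict:
    # Roll pod-level spend up to the business dimension FinOps asks about.
    totals: dict = defaultdict(float)
    for record in records:
        totals[record[key]] += record["monthly_cost"]
    return dict(totals)

print(cost_by(workload_costs, "team"))  # {'payments': 2000.0, 'discovery': 2500.0}
```

The same aggregation works for any grouping logic (product, business unit) and any scope (cost, waste, count of resiliency issues), which is the flexibility Amir describes.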

[00:21:16] Joe Colantonio Is there a different dashboard? Like, if I were in FinOps, I would log in and see what I care about, and if I were on performance, I would see what I care about?

[00:21:22] Amir Banet So currently, because our solution is pretty new, we don't have what is called role-based access control to, let's call it, slice and dice the solution into multiple kinds of ownership. So you are getting a full view of all the clusters that you have in the organization. In the very near future, we will be able to slice and dice it and change it per the role that you enter with.

[00:21:48] Joe Colantonio Great. So you mentioned how right now the process is manual, so people can start trusting and believing what your solution is telling them.

[00:21:56] Amir Banet It's semi-manual, because again, we are feeding them. We're giving them everything they need to do; they just need to open their mouths.

[00:22:02] Joe Colantonio Exactly. So how does PerfectScale ensure maybe the accuracy and the relevancy of its analysis?

[00:22:09] Amir Banet Yeah, great question. So first, this is where our machine learning algorithm takes real pride in learning more and more; the more data we get, the more accurate the decisions are. It also comes from who the people we hired to create those algorithms are: we have here the brightest minds, with a lot of experience in Kubernetes, who brought that experience into the algorithm. And lastly, it is based on which metrics we take into account in order to come up with those recommendations. We really don't leave any stone unturned as part of this decision-making. We look at all the factors that matter, and I mentioned a few already, like the usage factor; whether or not a pod was initialized; looking at all the replicas of a pod as part of the decision-making; and many other things, like whether it was a spot node or not. I cannot tell you all the different metrics, because this is part of the secret sauce. All I can say is that once you start using our solution and see the recommendations and the supporting evidence for why we made them, you easily turn into a kind of fan and start acknowledging that the recommendations make sense.

[00:23:24] Joe Colantonio That almost makes me think of one of the cost savings. A lot of times, people think they've provisioned just enough resources for the cluster, and it turns out they may have been overly aggressive, which may add to the cost. So does it also help you uncover over-provisioning of resources?

[00:23:39] Amir Banet Of course. That's exactly our side of the equation, as you correctly mentioned. Most developers, when they define the resources, prefer to be on the safe side. Why? First, because it's not my money, so why do I care how much I put in? Second, they want the best interests of their end customers, to give them a good quality of service. So yeah, in most cases, what they are doing is what is called over-provisioning: giving too many resources unnecessarily. And definitely, one of the things that our product does is, A, pinpointing where you have the biggest waste of resources, which is part of the prioritization I mentioned before that is lacking in other tools, and B, telling you exactly how much you should reduce those resources without jeopardizing or risking your performance. It gives you the right balance between those two contradicting forces: on the one hand, I want to give the best performance, but on the other hand, I want it in the most cost-effective manner possible. And we do give the user the ability to control the level of headroom or buffer that he wants to set for each specific workload. The more critical it is for the business, the more headroom, theoretically, the user will select.
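As a back-of-the-envelope sketch of the pinpoint-and-prioritize idea, comparing what each workload requests against what it actually uses and ranking by the money at stake (the workloads and the per-core price are made-up numbers):

```python
PRICE_PER_CORE_MONTH = 25.0  # assumed blended $/CPU-core/month, illustration only

workloads = [
    # (name, CPU cores requested, peak CPU cores actually used)
    ("batch-etl",     8.0, 1.5),
    ("web-frontend",  4.0, 3.2),
    ("ml-inference", 16.0, 6.0),
]

def wasted_spend(requested: float, used: float) -> float:
    # Over-provisioned capacity times unit price = money burned per month.
    return max(0.0, requested - used) * PRICE_PER_CORE_MONTH

# Biggest waste first, so scarce DevOps/SRE time goes where it pays off most.
ranked = sorted(workloads, key=lambda w: wasted_spend(w[1], w[2]), reverse=True)
for name, requested, used in ranked:
    print(f"{name}: ${wasted_spend(requested, used):.2f}/month wasted")
```

A real tool would rank on memory too, and would weigh the risk side (headroom, throttling, OOM exposure) against the savings, as Amir stresses; this only shows the waste half of the equation.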

[00:24:55] Joe Colantonio I know one of the things that holds some companies back from getting the solutions they need is when a tool doesn't integrate with their other solutions, so it becomes a nightmare. So does it integrate with things like Jira or GitHub? And maybe, what's on the roadmap for PerfectScale to make it seamlessly plug-and-play for any type of enterprise that may need the solution?

[00:25:17] Amir Banet Right, great question. So we are proud of the fact that we streamline into whatever workflow you have in your organization, which means that based on your desired workflow, our recommendations will flow into becoming the actual remediation; we will accommodate you. Today we support out-of-the-box integrations with Jira and with Slack, and we will very soon support integration with Monday.com. So we are able to create a ticket directly in Jira or directly in Monday. But the real advancement will be once we support PRs: you will be able either to create a PR, and then on the next deployment it will do the adjustment, or it will automatically do the adjustment and create the PR just as an FYI, so we will know and have a record of the changes the system has made. This is what we are currently working on, and it should be available in a matter of weeks.

[00:26:15] Joe Colantonio I know another reason companies sometimes resist getting solutions is that they don't see results right away. So talk a little bit about that. I think you have a free trial; with the free trial, would they be able to see the value of the solution right off the bat? I think it's a 30-day free trial. Could you talk a little bit more about that?

[00:26:29] Amir Banet Yeah. So first, as you correctly mentioned, we provide a 30-day free trial, no commitment needed and no limitations. You can use us on as many clusters as you want. And this is very helpful, because if you want to do a health check, just to get to know the health of your clusters, or if you want to do the optimization for free, do whatever you want. We don't care; we actually encourage you to take advantage of our free solution. After the 30 days, it can still be free, because we provide a very generous community offering: if your total cluster compute cost is less than $10,000 per month, you get our solution for free. Only if it is above 10K do we charge, between 1% and 2%, based on the package that you want. So again, you're able to take full advantage of our solution for 30 days for free, and maybe continue for free if you are still at low scale in your Kubernetes adoption.
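Under one reading of that pricing (a percentage charged on the full spend once past the threshold; the episode doesn't spell out whether only the excess is billed), the math sketches as:

```python
def monthly_charge(compute_cost: float, rate: float = 0.01) -> float:
    # Community tier: free below $10K/month cluster compute spend;
    # above that, 1%-2% of spend depending on the package (per the episode).
    if compute_cost < 10_000:
        return 0.0
    return compute_cost * rate

print(monthly_charge(8_000))         # 0.0 (community tier)
print(monthly_charge(50_000))        # 500.0 at the 1% rate
print(monthly_charge(50_000, 0.02))  # 1000.0 at the 2% rate
```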

[00:27:27] Joe Colantonio So, Amir, one of the buzzwords going around is digital transformation, and it seems to be accelerating more and more. But I see a lot of enterprises struggling with it; it's going slower than they thought it would. Do you have any insights into how PerfectScale can help with this, or why people are seeing slowdowns or bottlenecks when they try to make this digital transformation shift?

[00:27:48] Amir Banet Yeah, there are a few reasons for that. I would like to mention one that is also relevant to the value that our product provides, and this is the fact that the personas we are addressing, which are again the DevOps managers or practitioners, the SRE teams, the platform engineering teams, are the most, let's say, pricey or critical elements in achieving this digital transformation. Not because they are the ones getting the highest salaries; they might, but that's not necessarily the case. It's because they are, in some cases, the bottleneck. And why? Because their skill set is so rare; having someone who has a lot of knowledge in Kubernetes is very rare. So because they are a bottleneck, we want to free their time to focus on things that are more critical to the business than just finding the right configuration or troubleshooting what went wrong. Instead, we give them a tool for their daily work that helps them save hours of their time, again, to focus on things that are more important for the business, like digital transformation.

[00:28:53] Joe Colantonio That's a great point. I think a lot of companies overlook having the right solutions in place. Even though a solution may cost money, it actually saves you more money, because you're going to free up resources, like you said, to focus on the critical risk areas you pointed out. That's a great point. Sure. Okay, Amir, before we go, is there one piece of actionable advice you can give to someone to help them with their DevOps or Kubernetes efforts? And what's the best way to find you, contact you, or learn more about PerfectScale?

[00:29:18] Amir Banet Yeah, great. Everything that you asked is kind of connected and can be combined together. For the novice users of Kubernetes: don't be alarmed by the amount of knowledge required or the amount of guesstimating you need to do. You have here a solution that can help you and guide you on getting these day-two operations done easily. So take advantage of our solution, learn as you go by understanding where our recommendations came from, and make an impact from day one as you enter a new organization or are still studying this field. To take advantage of the solution, all you need to do is go to www.PerfectScale.io, where you can read the blogs that contain a lot of technical information you might find interesting, and also start the free trial or book a demo, whatever you want.

[00:30:11] And for links to everything of value we covered in this DevOps Toolchain show, head on over to TestGuild.com/p114, and while you are there, make sure to click on the SmartBear link and learn all about SmartBear's awesome solutions that give you the visibility you need to deliver great software. That's SmartBear.com. That's it for this episode of the DevOps Toolchain show. I'm Joe. My mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers.

[00:31:15] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the FAM at TestGuild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.
