About this DevOps Toolchain Episode:
Today, listen in and discover how Google transformed its developer experience to deliver features at lightning speed. Hugo Santos, the founder and CEO of Namespace Labs, previously helped build Google's microservices platform that powers essential tools like Search, Photos, and Google Assistant.
He shares insights from his extensive experience at Google, offering a glimpse into innovative strategies and technologies that continue to shape developer workflows.
From discussing the intricacies of Google’s internal platforms to his new venture at Namespace Labs, this episode promises a deep dive into improving developer efficiency and collaboration.
Tune in to discover how these advancements may influence your own DevOps practices!
TestGuild DevOps Toolchain Exclusive Sponsor
SmartBear Insight Hub: Get real-time data on real-user experiences – really.
Latency is the silent killer of apps. It’s frustrating for the user, and under the radar for you. Plus, it’s easily overlooked by standard error monitoring alone.
Insight Hub gives you the frontend to backend visibility you need to detect and report your app’s performance in real time. Rapidly identify lags, get the context to fix them, and deliver great customer experiences.
Try out Insight Hub free for 14 days now: https://testguild.me/insighthub. No credit card required.
About Hugo Santos
Hugo Santos is the Founder and CEO of Namespace Labs. With over 20 years in the tech industry, Hugo offers unparalleled insights into product development, software engineering, and infrastructure.
During his nearly nine years at Google, Hugo was instrumental in creating the Boq microservices platform, a cornerstone for services like Search, Play, Photos, and Assistant. He also co-led the Assistant Platform, the backbone of Google Assistant and Google's home hardware segment.
Before his career at Google, Hugo co-founded Blaast, an innovative edge computing company in Helsinki. Blaast was later acquired by Facebook.
At Namespace Labs, Hugo is building the developer stack for fast-moving companies: revolutionizing development workflows, and providing services that achieve 2x-10x faster builds and tests. Their ephemeral compute platform supports developer infrastructure companies, ensuring rapid and secure compute for their customers.
Connect with Hugo Santos
- Company: www.namespace.so
- LinkedIn: www.linkedin.com/in/hugomgsantos
Rate and Review TestGuild DevOps Toolchain Podcast
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:00] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability, from some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast, and my goal is to help you create DevOps toolchain awesomeness.
[00:00:18] Hey, if you're like me, you've probably wondered, how did Google improve developer experience to deliver features faster? Well, you're in for a treat, because we have Hugo Santos joining us. He's the founder and CEO of Namespace Labs and brings over 20 years of expertise in tech innovation. At Google, he helped build the microservices platform that powers Search, Photos, and Assistant, and co-led the Assistant platform driving Google's smart home ecosystem. Now at Namespace Labs, he's transforming developer workflows with groundbreaking tools and platforms for faster, more secure builds and tests. He really knows his stuff. You don't want to miss this episode. Check it out.
[00:00:57] Hey, before we get into this episode, I want to quickly talk about the silent killer of most DevOps efforts. That is poor user experience. If your app is slow, it's worse than your typical bug. It's frustrating. And in my experience, and that of many others I've talked to on this podcast, frustrated users don't last long. But since slow performance is subtle, it's hard for standard error monitoring tools to catch. That's why I really dig SmartBear's Insight Hub. It's an all-in-one observability solution that offers front-end performance monitoring and distributed tracing. Your developers can easily detect, fix, and prevent performance bottlenecks before they affect your users. Sounds cool, right? Don't rely on frustrated user feedback anymore. But, as I always say, try it for yourself. Go to smartbear.com or use our special link down below and try it for free. No credit card required.
[00:01:54] Hey, Hugo, welcome to The Guild.
[00:01:55] Hugo Santos Hey, Joe. Thank you so much for having me.
[00:02:00] Joe Colantonio Awesome. So you have a great background, I guess a little bit like how did you get into tech? How did you end up at Google? What was that journey like?
[00:02:07] Hugo Santos Yeah, it's an interesting question. I'm very much an entrepreneur at heart, and I knew that without knowing it from an early age. I actually started building software by myself. This was a long time ago, when you could still get magazines that had printouts with code. That's how I got started. I learned C and C++ on my own, reading through KDE's source code back in the day. Then I had a little bit of a love affair with BeOS (a Mac before that, but then BeOS). Not everyone will know that operating system, but I spent some time there, built a bunch of software, and sold my first piece of software when I was 15: an image management app for BeOS. Then I went through the regular path. Went to college, spent a lot of time on open source, got my first job. I wasn't too happy about that. I felt like, okay, I want to innovate. I want to build new things. So it was clear that I needed to start something. I managed to find some crazy dudes in Finland that had similar goals, moved to Finland, and started my first company. We were doing edge computing before edge computing was really a thing. This was the advent of the smartphone, and we were streaming smartphone applications from very close to the user, from telcos. A lot of fun, terrible business, lots of learnings from that. And then after going through that path, I had some connections with Google, because I knew a few people that worked there. And honestly, after my first startup I needed a break, and Google came knocking and asked, would you like to join? And I thought, okay, let me go check this out. It ended up having a major impact on my life. I grew up a lot. I came in as a bit of a youngster with a lot of energy, a know-it-all, and I got humbled by a fantastic set of engineers that I got to work with, and I really saw how world-class technology gets built. I still have very fond memories of that time.
[00:04:15] Joe Colantonio What year was that when you were at Google? I'm just curious.
[00:04:18] Hugo Santos Yeah, I joined in 2013 and I left in 2021. A funny thing that I still remember with fondness: when I joined, Google was already a large company, but on the internal Google Plus (this was when Google Plus was still a thing, competing with Facebook; actually, that's part of the origin story here), I saw many posts saying, hey folks, we need to make sure that this feeling of being a startup, we need to retain that for as long as possible. And for me, just coming from a startup, seeing that at a 50,000-person company was really surprising. But actually, a lot of the properties of a startup were still present back then. It's really incredible how far they took it.
[00:05:05] Joe Colantonio That's what I was going to say. I mean, it's still cool, but you were there when it was cool, like 2013 is where everyone wanted to be at Google and you were there. I mean, I'm not saying it's not cool anymore, but it seems like it was a really cool gig at the time for sure.
[00:05:18] Hugo Santos Oh, I was humbled by it. If you had asked a software engineer back then, where would you like to work, I think there would be a few places, and Google would be one of them.
[00:05:29] Joe Colantonio 100%. You had a lot of unique experiences there as well. One of them was creating a microservices platform. What's the name of it? Boq?
[00:05:36] Hugo Santos Yeah, you know what's interesting? To this day it's still used to build a lot of applications at Google, but we never really talked about it externally. It's called Boq. I was also fortunate that I joined the team really at the start of Boq. The backstory is that this was at a time when the competition with Facebook was fierce. We were building out Google Plus, Facebook was growing like crazy, and it was so important to move very quickly. Boq appeared out of necessity, really to expedite Google's ability to ship new products. I joined at that inflection point, and then I helped build Boq into the platform that it became, where it tackled a lot of different areas. Originally it was very centered around how you build services, and when we were done, it was really about the full delivery story, all the way from how you build services to how you deploy them to production and how you manage them as well. I played a role in that story, but I also want to highlight my brilliant colleagues who drove the development of Boq, because it was really a massive infrastructure project.
[00:07:01] Joe Colantonio How did you get involved then? Were you one of the folks that came up with the idea for it? Or did you pitch it to someone and say, we have this idea for this microservices platform. Think it's really going to help. Like, was it almost like you were like a mini startup within Google pitching this idea and then creating it?
[00:07:16] Hugo Santos I really like that question. A lot of these projects surface organically. Even when Boq was started (I was not there day one), the starting point was: we were building all of these disparate services, RPC services, front-end services, and they all had the same properties. They all needed request throttling, they all needed context propagation. There were some properties they all needed. That was step one: building a common layer to build services. That predates me. And this particular area of the company was very Java-centric, both for back-end development and for front-end development. That's the genesis of Boq. What my team and I then helped drive went beyond this service layer, where services behave in a common way, all the way to: hey, we need a common specification of how to deploy these services. What are the release processes? What are the monitoring metrics that get exported by default, so that you can generate dashboards to monitor your services? With Boq, you would write application code. You handle a request, you issue multiple requests to your back-ends, you run some business logic. Primarily, that's what you're doing as an application developer. Everything else was taken care of: compiling this into a binary that gets distributed to production, to a fleet of machines, how it gets monitored, what the release stages are, and how you test it as well. Testing ended up being one of the instrumental pieces of Boq: it had built-in end-to-end testing, which prior to Boq was extremely hard to achieve. It was really full-service, end-to-end, and that ended up being organic. It surfaced in a place where it was needed. There was never a mandate of, go and figure out the infrastructure that is required here.
This was an infrastructure team that was embedded with the application team, working hand-in-hand to solve real-world problems. Along the way we realized that, well, actually, these problems that we're solving are not unique to this organization. We knew other teams that had the same problems, and organically, working with those other teams, we started expanding the scope of Boq. Eventually it was elevated to be more of a company-level infrastructure product that ended up being deployed throughout the whole company.
[00:10:01] Joe Colantonio Nice, really cool. What was the problem or challenge that it solved for developers?
[00:10:06] Hugo Santos Yeah. So a couple of different problems. One of the problems was starting a new application. For example, Photos. Photos was a completely new experience, and starting a new application at Google was extremely difficult. You had to, from day one, be able to handle a lot of users, a lot of traffic. You needed to make sure that your processes met the bar from a production perspective. You wanted to make sure that you had as comprehensive testing as possible. Getting started was extremely difficult. Boq solved that by removing a lot of the complexity of: okay, how do I deploy these services to production in a way that my SRE team will be happy with? How do I write these end-to-end tests? What are the metrics that I should be instrumenting, then aggregating and writing dashboards for? What should I be looking at? What should I alert on? All of those problems were removed. Someone else figured out solutions that tackled 95% of the use cases horizontally (we used that word quite often, horizontally, meaning for multiple verticals). That was one big accelerator. The other part that became more obvious over time was that because all of these new services get built in a similar way, teams can collaborate much more effectively. When my service makes a request to your service, I can actually look at your metrics, I can actually look at your tests to understand what's going on, because they're built in a similar way as mine are. This really had a compound effect in helping collaboration, not just within a particular application area, say Google Plus with the Photos team, but also across application areas, because there were touch points there as well. So when the YouTube team would use some infrastructure built by someone else, being able to rely on a common set of technologies really simplified a lot of those processes.
[00:12:24] Joe Colantonio How did the other teams, like YouTube, know about this platform? Like, oh, they're using this for Photos, maybe we can use it. Did you have to evangelize it as well, or did the word just spread?
[00:12:34] Hugo Santos Yeah, there was a bit of both. In a company as large as Google, you see things that happen in the industry: for example, an engineer is at a company and really loves a tool or a piece of infrastructure, then moves to another company and brings that infrastructure with them, because they saw the transformation it had on their team. A bit of that happened. It was a property of Google, at least when I was there, that it was fairly common for engineers to move teams from time to time. There was a natural dissemination: I used to be on a team that relied on Boq to build services, and now it feels like I took a step back, like I'm three years behind in how we build things. Can we move over? Now, obviously, changing how you're building software is extremely painful. It takes time, it takes deliberate effort. For everything that was greenfield, it was an obvious conversation. For everything that was already established, the adoption cycles took longer, and that was the case with YouTube, for example. They would have known about Boq earlier, but they only adopted it later, when it made sense in their development cycle.
[00:13:54] Joe Colantonio Absolutely. And microservices are now a very well-known architecture, but it sounds like this was probably one of the early precursors to it as well. You were dealing with newer technology that maybe hadn't been proven yet to all the teams.
[00:14:09] Hugo Santos I must preface this: I find the term microservice to be divisive in the industry, or maybe it's our definition of microservice, because indeed, internally, that's what we called it. We said that Boq is a microservices platform. The property of microservices that Boq had was service composition. It was very service-centric, so service-oriented architecture, and then composition of different services. But it wouldn't go as far as saying, for example (since we're talking about photos), that to handle a photo you need 10 services, and each service can only own its own data and is fully encapsulated. It wasn't dogmatic, it was very pragmatic. If a team wanted to share data across microservices, they could, but there was a very service-oriented architecture that Boq facilitated. Everything that was on a service boundary just worked much better. For example, metrics: you would have automated instrumentation based on the RPC methods in your services. That would motivate teams to describe an architecture around services, because they would get these features out of the box. If you wouldn't wrap your business logic in a service, you would have to go figure out metrics yourself. You can do that, but it's additional work and complexity. One of the instrumental things, which even today I haven't seen replicated in the industry, is Boq's testing framework. I had my contributions, but it was work primarily driven by a fantastic team, and they built something unique. Every single service boundary was testable and could be replaced by a fake, or you could replay traffic at that service layer. You could build fairly comprehensive end-to-end tests that would be very representative. Boq didn't like mocks too much, so it was often the real implementations.
And that really meant that we could build very robust systems with the confidence that they would work. Testing was very representative. And all of this operated at the service layer. The microservices label was really primarily about having a service-oriented architecture, more so than anything else.
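The fake-at-a-service-boundary idea Hugo describes can be sketched in Go. This is purely illustrative: the `BlobStore` interface and `fakeBlobStore` type are hypothetical names invented for this sketch, not Boq's actual API, which is internal to Google.

```go
package main

import "fmt"

// BlobStore is a hypothetical service boundary. In a Boq-style setup,
// the test environment can wire in either the real implementation or a
// fake at exactly this boundary, without the caller changing.
type BlobStore interface {
	Put(key string, data []byte) error
	Get(key string) ([]byte, error)
}

// fakeBlobStore is an in-memory stand-in used when the real backend is
// not part of the system under test.
type fakeBlobStore struct{ m map[string][]byte }

func (f *fakeBlobStore) Put(k string, d []byte) error {
	f.m[k] = d
	return nil
}

func (f *fakeBlobStore) Get(k string) ([]byte, error) {
	d, ok := f.m[k]
	if !ok {
		return nil, fmt.Errorf("not found: %s", k)
	}
	return d, nil
}

func main() {
	// A test would receive the fake through the same interface the
	// production code uses for the real blob store.
	var s BlobStore = &fakeBlobStore{m: map[string][]byte{}}
	s.Put("photo-1", []byte("jpeg bytes"))
	d, _ := s.Get("photo-1")
	fmt.Println(string(d)) // prints: jpeg bytes
}
```

Because the boundary is an interface, swapping the fake for the real implementation (or, as Hugo mentions, a traffic-replay shim) is a one-line change in test setup.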
[00:16:51] Joe Colantonio Gotcha. Yeah, so testing is a huge piece. And I would assume that would help the developer experience, because traditionally, I don't know how strongly developers feel about testing, how much they like testing. So consuming this platform, it already had the ability to test end-to-end. And also, I would think performance testing, because it sounds like you're doing YouTube searches, Google Assistant, all these things that require fast, snappy responses with a lot of load on them. Did it have performance built into it as well? Was that another benefit to the developer experience?
[00:17:21] Hugo Santos Let's start there. On the load testing side, understanding performance, it did have some components, but at least while I was there, they were not as far along as we would have liked. We wanted them to be exactly the way that perhaps you're thinking about them: that you would get performance data from test runs and understand it. You could run a load test at the service boundary, so there were systems built into Boq to do so, but they weren't as plug-and-play as everything else. When it came to end-to-end testing, the difference from the traditional setup was dramatic. Teams prior to Boq would do end-to-end testing, but it would be a never-ending story of fighting flakes in particular, and deciding: which systems from other teams do I incorporate into my tests? Which systems do I fake? Because of the interoperability, sometimes you wouldn't even be able to easily integrate the system of another team that you depend on, because the way they built and developed that system was completely different. With Boq, you could say: well, my system, which for example handles photos, needs to interface with a system that does blob storage, like object storage or equivalent. I could declare that dependency, and then as I started an end-to-end test, it would bring up a real blob store that I would interact with in my test. It wouldn't be faked. It would be the real implementation. And that was enabled by this common interoperability layer: everything developed in Boq would be naturally usable in an end-to-end test. But there would also be connectors at the edges that could connect back with the legacy and the external world, let's put it like that. That made writing end-to-end tests much simpler.
And because they were fully descriptive of all of their dependencies, they could also run in a much more controlled environment. They didn't have to hit production systems or staging systems. They could run those dependencies as part of the test environment, in a hermetic way. That really dramatically improved the developer experience, because there were a lot fewer flakes that you had to work with, that you had to solve and understand. And plugging things in was really a one-liner: I also depend on blob storage, and it would just magically appear in your test environment. That ease of use really transformed how teams wrote these end-to-end tests.
[00:20:15] Joe Colantonio Very cool. What was driving the end-to-end test though? Was it technology, was it APIs, or was it like Selenium, was it some other thing?
[00:20:23] Hugo Santos The end-to-end tests were also very much driven by service boundaries. You would write a test, and it issues requests to the system under test. As you can see, there's a pattern here: it's very service-oriented. Your test driver is actually emitting requests to its back-end. It's almost operating as a custom client, which you control. You can say, I want to issue this request and then this other request. That's how it operated on the back-end. On the front end, it was also very service-oriented. The testing setup was a little bit different there, but similar principles: you're making a request to a subset of the system, a service that is part of your front-end.
[00:21:12] Joe Colantonio Gotcha! Now, you mentioned flakiness a few times, so I assume one of the developer experience wins was that when a test failed, developers would pay more attention to it. Was that one of the things as well? Instead of, oh, it's just a flaky test, it's, oh, this is usually very reliable, maybe I actually have a real issue here.
[00:21:29] Hugo Santos Because of the way the tests were set up, they allowed us to run the whole test, what we would call the system under test, in a controlled environment. We would create an environment and start all of those servers: not only your server that has your code, but also the servers you're interacting with to issue requests. And that environment was very controlled, because it was not handling any other requests. It was not subject to different load. It was not subject to, okay, I'm issuing a request across regions and the latency might be different. It was a very controlled environment, and that by itself dramatically reduced the flakiness of tests. Then we also had folks who were invested in deflaking. Because you run the test in a way that is deterministic, because the environment is fully defined, and again, you're not dependent on any external system, you can run it multiple times to figure out: okay, is this a real failure? Is this a probabilistic failure? Was this failure introduced when this change was made? And that whole journey was really helpful for developers.
[00:22:44] Joe Colantonio Nice. I know from working in a big company, you're having to worry about dependencies and environments. Do I have the right dependencies? It sounds like it solved that as well. It was so self-contained, so it solved all those issues.
[00:22:54] Hugo Santos Yeah, it had, I mean, there were pros and cons with Boq. I must say.
[00:22:59] Joe Colantonio With a concept.
[00:23:03] Hugo Santos Life is all about trade-offs, and there were definitely trade-offs you made when you adopted Boq. It guided you towards a particular model that might not be your personally preferred model. We had teams that liked to build very complex, high-performance monoliths, and it made a lot of sense for their business needs. Moving over to Boq required them to rethink that architecture, and that was a tension point at times. The benefits they would get weren't always obvious at the beginning, given that they had to change the way they wrote code. I think that was one of the major things with Boq: if your code base was not already structured in a particular way, it required an investment. Your engineering team would have to do some refactoring, would have to move things around, to be able to obtain all of those benefits. Another problem, and it's interesting because it was a design principle, but over time it became, at least in my opinion, a bit of a liability: it was very monolithic. Even though Boq told you to build services in a very composable way, Boq worked really well end-to-end. If you wanted to do testing in a different way, for example, it would be extremely difficult to interoperate with the system. One of the reasons that choice was made was because it was simpler to implement. But another reason was that we felt it was a tool to drive cultural change: let's build on the many years of accumulated experience of building services, and help teams go down the happy path, as we sometimes called it, from the get-go. But as we expanded to more parts of the company, it was clear that inflexibility was at times constraining.
[00:25:15] Joe Colantonio All right, so you started off at a startup, then you went to Google, where you were for, it looks like, almost 10 years. Sounds like a really cool gig. What made you then go on a new adventure and start Namespace Labs?
[00:25:29] Hugo Santos Yeah, it's a great question. It was extremely difficult to decide to leave Google, because it's such a fantastic place. I think I'm at heart an entrepreneur. I like to think things through. I like to connect with the customer. I like to understand their pain points. I like to be able to talk with them about how we can help them. I love technology. And the reality is that to innovate, you're unconstrained outside of Google. Inside of Google, the stakes are extremely high, and any larger change requires a lot of people to be aligned behind what you want to do, because, as you can imagine, it's a well-functioning machine, and if you want to steer it a little bit, there's a lot at stake. I personally really appreciate that agency, and I know that many of the folks who worked with me at Google also appreciated that agency of just going in and building new technology that could help teams, and in particular interacting with our customers directly. That was the primary thing that precipitated starting Namespace.
[00:26:42] Joe Colantonio So I guess with all your learnings about developer experience and developer workflows, what is Namespace Labs then? What did you build where you're like, this is really going to solve an issue and help a lot of people?
[00:26:52] Hugo Santos Yeah, so Namespace, at its heart, the mission is to bring the same kind of managed developer experience that we had at Google to many teams out there, to many companies in the industry. Because we see this over and over: different teams face the same types of challenges, and they find their own way to solve those problems, but very often in a way that is not unique to them. And they're always struggling for time, because if you talk with a startup, a mid-market company, or even a larger company, they're very focused on their products and the value they want to bring to their customers. Infrastructure is something they do because they have to, not because it's the thing that is bringing value to their customers. We're trying to bridge that gap. Generally, we're leaning on our experience to help engineering teams out there go faster. What Namespace is doing today is something very specific. We actually stumbled upon it as we were building out our own technology. The performance of builds and tests is underserved in the market. The hyperscalers are fantastic. They do great at supporting production workloads. But when you use that infrastructure to run your tests, run your builds, run various orchestrations, it just doesn't quite hit the mark that you get, for example, when you're using your own laptop. I talk with so many folks who have an Apple laptop, whether it's an M2 or M4, doesn't matter, and they're building their application and it's so fast. Then they ship that to CI, and all of a sudden it's so slow. Why is that? Well, part of it is because those workloads are running on traditional, production-oriented, hyperscaler server hardware and architecture. So we took a different path. We said, okay, what if we targeted building the best performance possible for builds and tests?
What if builds and tests wouldn't start from scratch all the time, but would be incremental, very much as they are on your local workstation? That's what we set ourselves to do. And it's been working. That's what we hear from our customers: they get a lot of value from it, from the tremendous difference in performance.
[00:29:22] Joe Colantonio Oh, this is a big pain point I know from working for large enterprises. Tests work locally, you put them in CI/CD, they fail, and you don't know why. So can you talk a little bit more about the build and test approach then? Is it an ephemeral environment that spins up, has all the resources and data you need, runs fast, and tears down, so you don't have to worry about maintaining all these physical environments and connections? I'm just assuming, I guess. Talk a little bit about what the build and test approach is.
[00:29:47] Hugo Santos Well, you did a great job at describing what it is at its heart. It's a specialized compute system. We're a specialized cloud provider that can create ephemeral environments really quickly, cold-booted. Without any previous state, we create environments that run a set of containers that you define, in under two seconds. Then you can use those to run any workloads. One difference is that we deploy those workloads on hardware that is best in class for builds and tests. If you think of the best workstation you could get in the market, the one that gets you the best build performance, the best test performance, that's the hardware you're using when you deploy to Namespace. But there's a second part to the magic, which is that when you build something, when you test something, there are intermediate outputs that are persisted and that are useful the next time you run a build, the next time you run a test, so that it's incremental. We have this feature called cache volumes, where on the second run or the nth run (we have folks doing thousands of runs in the span of just a minute), you have access to this previous data, very much as you would on your local workstation. That means those runs will be fully incremental. It's not uncommon that we see the transition, even in a small repository: a build and test that runs end-to-end in three and a half minutes as a baseline moves over to Namespace and drops to two minutes, just from much better hardware and our scheduling, then moves over to our incremental caching solution, and it's 10 to 20 seconds. Going from three and a half minutes to 20 seconds just changes how a company operates and how often they ship changes. That's the vertical we're very focused on tackling, and we do it with full heart. One of the things we do that is unconventional is we deploy our own hardware.
We couldn't find the right hardware out there in the hyperscalers in AWS, Google, Azure. And we wanted to build something of extreme high quality for our customers, extreme high performance. We had to go and deploy hardware ourselves. We actually have hardware in multiple locations that we orchestrate end-to-end to serve these workloads.
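The cache-volume idea described above can be sketched with off-the-shelf tools: an ephemeral container is cold-booted for each run, but a persisted directory is mounted at a well-known path and handed to the toolchain as its cache. Everything below (directory names, image, paths) is illustrative, not Namespace's actual API; it is a minimal sketch of the concept using Docker and Go's `GOCACHE` environment variable.

```shell
# Minimal sketch of the cache-volume concept, assuming Docker and a Go
# project in the current directory. Each run gets a fresh container
# (a cold boot), but the same host directory is mounted as the build
# cache, so the nth run starts with the previous run's intermediate
# objects. Names and paths here are hypothetical.
CACHE_DIR="$HOME/.ci-cache/go-build"   # persisted across runs
mkdir -p "$CACHE_DIR"

docker run --rm \
  -v "$PWD":/src -w /src \
  -v "$CACHE_DIR":/cache \
  -e GOCACHE=/cache \
  golang:1.22 \
  go test ./...                        # incremental after the first run
```

The first invocation populates the cache; later invocations rebuild and retest only what changed, which is the mechanism that turns a multi-minute baseline into a seconds-long incremental run.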
[00:32:30] Joe Colantonio Can you give some quick examples of what's stored in that cache? Is it like a test needs data to run the next round of tests that the first round created? What's the cache for, I guess?
[00:32:41] Hugo Santos Yeah, so if you're in the Go ecosystem, the Go compiler will generate objects for every single one of your libraries. Every library that you have will have an intermediate object representation. If that doesn't exist, it has to be rebuilt. So what Namespace does is configure the Go toolchain. Actually, there was something that was really important for us, and it was also a hard lesson connecting back to Boq: Boq required you to go and change your application. We wanted something that required as few changes as possible. We configure your existing toolchain, for example, the Go toolchain, and we tell it: when you're creating these intermediate objects, rather than placing them in a particular directory, place them in this other directory, which is where our cache volume lives. Then we rely on the fact that the Go toolchain is correct when it reuses these results, because correctness is very important in the build and test world. So the next time around, you'll have all of these intermediate objects already present. When you're doing go test, which requires a build first, most of your build outputs are already there. And Go also caches test outputs. If the test binary didn't change, it will remember that the test already passed, and that's also persisted. So it's very much the same experience you would get on your own workstation. You write some code, you run the tests, you close the laptop, you go, you come back, open the laptop, do another change, and it's fully incremental because all of your previous outputs are still there. And that's what Namespace does as well.
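The test-result caching mentioned here, where Go remembers that a test already passed when its binary didn't change, boils down to content-keyed caching: hash the inputs, and reuse the stored result when that hash has been seen before. The toy shell sketch below is not Go's or Namespace's actual implementation, only the shape of the mechanism; `run_tests` and the cache layout are made up for illustration.

```shell
#!/bin/sh
# Toy content-keyed result cache: rerun the expensive step only when the
# input's hash is new; otherwise reuse the persisted result, the way the
# Go toolchain reports "(cached)" for unchanged test binaries.
CACHE=./toy-cache
mkdir -p "$CACHE"

run_tests() {                                    # $1 = input file
  key=$(sha256sum "$1" | cut -d' ' -f1)          # cache key = hash of inputs
  if [ -f "$CACHE/$key" ]; then
    echo "cached: PASS ($1 unchanged)"           # incremental path
  else
    echo "running tests for $1"                  # expensive path
    touch "$CACHE/$key"                          # persist the result
  fi
}

echo 'func Add(a, b int) int { return a + b }' > lib.go
run_tests lib.go   # first run: executes the tests
run_tests lib.go   # second run: served from the cache
```

The first call takes the expensive path and records the result; the second call, with identical inputs, hits the cache. Editing `lib.go` changes the key, so the next run executes again.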
[00:34:30] Joe Colantonio Now, you started your company in 2021, and I think AI has been around forever, but ChatGPT really became popular in 2022. I guess my question is, where do you see AI evolving with the developer experience in the next one to five years, maybe? And have you planned for that, to bake it into your system?
[00:34:50] Hugo Santos Yeah, it's a great question. It's obviously top of mind. Internally at Namespace, we believe AI empowers humans, and we see engineers building better and more robust systems because they have the support of AI. So that's how we're thinking about the problem. Right now, none of our features lean on large language models, but there are a few things we can already see that would benefit from a semantic understanding across multiple runs. For example, you're trying to understand why a particular error is occurring, and you have to sift through the many services that you're running to understand, okay, what was the actual root cause? That's the kind of thing a large language model can help you surface much more crisply: actually, it seems like there was also an issue here, there's this log line. So we're looking at AI as empowering developers. Then the other thing we're seeing is that with code generation, there starts to be a lot more code out there. There's a question of what validation looks like when you have so much more code being generated, because we cannot, or at least today we don't, assume that code is correct, right? So we still need to build it. And folks know that if you use Copilot, perhaps in January 2025 it's slightly better, but a lot of the outputs of these AI tools still need to be verified by an engineer, by a human, by a developer. We're also thinking about how we can help with automation around some of these verifications, automation that can scale with the increase in code generation in the industry that is driven by AI. But for us, it's still nascent times. We're very value-oriented. We're developers, and developers don't like gimmicks or BS. We like to work with folks and bring very concrete value. So we have ideas, but we haven't yet nailed that value. It's something we expect to get to as we go into 2025.
[00:37:17] Joe Colantonio Okay, Hugo, before we go, is there one piece of actionable advice you can give to someone to help them with their developer experience DevOps efforts? And what's the best way to find or contact you or learn more about Namespace?
[00:37:29] Hugo Santos To learn about Namespace, head over to Namespace.so. And advice, which I see folks doing less, but which I think is still important in today's age: become comfortable reading code. Code is the ultimate documentation; it's the ultimate interoperability layer. If I think through the best engineers I've ever worked with, they were always very comfortable sifting through their own code and other people's code to understand what's going on. So that's something I tell people I work with nowadays: just read through the code, feel comfortable reading through the code. Maybe use a copilot to help you do that, but just feeling comfortable going through your code base, your teammates' code bases, other people's code bases, I think that's a superpower that is underinvested in.
[00:38:23] All right, before we wrap it up, remember, frustrated users quit apps. Don't rely on bad app store reviews. Use SmartBear's Insight Hub to catch, fix, and prevent performance bottlenecks and crashes from affecting your users. Go to SmartBear.com or use the link down below, and try it free for 14 days, no credit card required.
[00:38:45] For links to everything of value we covered in this DevOps Toolchain Show, head on over to Testguild.com/p1799. So that's it for this episode of the DevOps Toolchain Show. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers!
[00:39:07] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com where you become part of our elite circle driving innovation, software testing, and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.
[00:39:51] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.