Ship It! – Episode #93

Hybrid infrastructure load balancing

with Wanny Morellato & Deepak Mohandas from Kong


Wanny Morellato & Deepak Mohandas from Kong join Justin & Autumn to discuss building, testing & running a load balancer that can run anywhere.


Sponsors

FireHydrant – The alerting and on-call tool designed for humans, not systems. Signals puts teams at the center, giving you ultimate control over rules, policies, and schedules. No need to configure your services or do wonky workarounds. Signals filters out the noise, alerting you only on what matters. Manage coverage requests and on-call notifications effortlessly within Slack. But here's the game-changer… Signals natively integrates with FireHydrant's full incident management suite, so as soon as you're alerted you can seamlessly kick off and manage your entire incident inside a single platform. Learn more or switch today at firehydrant.com/signals

Changelog News – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today.

Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Notes & Links


Chapters

1 00:08 This is Ship It!
2 00:40 The opener
3 16:14 Sponsor: FireHydrant
4 18:45 What is Kong?
5 20:58 Importance of hybrid infrastructure
6 23:16 Kong deployment models
7 26:30 Managing infrastructure
8 27:34 What handoffs look like
9 30:04 Continuous deployment & rollouts
10 33:27 Handling changes in dependencies
11 37:27 Kong x Ingress
12 39:18 Handling customer feedback & changes
13 42:22 Kong's part in the ecosystem
14 44:48 Gaining customer trust
15 47:51 Balancing automation
16 51:41 Failure modes
17 55:01 Where to reach out
18 56:22 Sponsor: Changelog News
19 57:57 The closer
20 1:05:32 Outro

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Hello, and welcome back to the Ship It show. I am your host, Justin Garrison, and as always, here’s my co-host, Autumn Nash. How’s it going?

Hey, how are you?

Doing just fine. This show is all about what happens when code turns into software, or what happens after you hit git commit. Anything that happens after you wrote that code, and now it's someone else's problem. And in today's show we're going to talk to Wanny and Deepak from Kong HQ, which is a very fancy load balancer gateway for all sorts of network traffic… But what was really interesting to me was the fact that they run hybrid infrastructure themselves to manage Kong. They run it in the cloud, they also run things on prem, and then they have to provide it as a product that other people, as customers, can run in a cloud environment and on prem. And so they have to test it and validate it. A lot of that was interesting when we were talking early on, and so that's the interview for today's show. But as always, we want to start with a couple of links that Autumn and I found interesting, or cool, or maybe even horrible and disturbing, I don't know… One of these weeks we're gonna have something that's bad. But for now, it's still gonna be good stuff, right?

Let’s hope it stays on the good.

Yeah, we keep it positive mostly. This isn't the news, but this is just something that we both were reading and thought that you all in the audience might also find interesting. So mine is going to be from cep.dev, and it's Jack's Home on the Web. I don't know who Jack is or what they've done, but the title of the article is "Almost every infrastructure decision I endorse or regret after four years running infrastructure at a startup." It's a very, very long sentence of a title, but really cool to see just someone going through four years of startup infrastructure and everything that they thought was good or bad about their decisions looking back. That hindsight is such a great tool for other people. And I've found this really fascinating, just because I've never run infrastructure for a startup, and now seeing how other people are doing it, now that I work at a startup, it's a different experience than working at a large enterprise. But a lot of the decisions, I guess, were surprising to me. Things like choosing Kubernetes, where Jack was saying he thought that was a good idea. This was something that he would still endorse for other startups to use, and for someone else going forward… Because I always heard from the enterprise side "You probably don't want Kubernetes early on. It's gonna be too complex, you don't need it." And there's definitely truth to that as well. But at least in this case, he was happy that he went with EKS.

And some of the other things that he regretted were things like paying for AWS premium support. It was too expensive for them at that stage, and something they didn't really use very often. Same thing with something like Datadog. I've used Datadog in enterprises and thought it was fantastic as a tool, but I never had to pay the bill. So that's probably why I was saying "I don't really care how much it costs, because it provides value." And for some smaller startups, people trying to find their business or make money, it's probably not a great idea to throw all of your money at service providers up front. So really good article. There's probably 20 or 30 things in this list of endorse, regret, no feelings about it… So have a look, because I've found it really cool.

And I thought it was interesting because in the four years that he has been kind of collecting this information, there has been a huge change in infrastructure and infrastructure as code, you know… So if you compare it to other four-year periods, I don't know if they'd be as exciting, because I feel like in the last four years people got really serious about infrastructure as code… And I just think in general software engineers - maybe not everybody, but people spoke more about actually writing code, and getting code to run. But now I think people have talked so much about infrastructure and making things – you know, how you're getting that code out there, and just building it, and blue/green deployments and that type of deal… So so much has changed in the world of infrastructure that the way the products he's used have grown, or been better, or have been disappointing is really interesting, because there's been so much competition, and growth… And products have done really well, and some haven't done really well, or some have been what they expected, or… Even just - I think something that he had said about Kubernetes… Like, there was a tool that he was using, and then Kubernetes ended up getting that feature, but at the beginning he didn't think it would ever get that feature, you know.

So I think it’s been such a growth period, and people have learned so many lessons that I think that even if maybe you’re not in the market for building infrastructure at a startup, just – it’s almost like a post mortem. I love reading post mortems, because the way you can learn from other people’s struggles… I think that’s another reason why I love reading COEs, and just any kind of learning from other people’s struggles. It’s almost a post mortem of infrastructure, and like the goods and bads and learning from it… So I thought it was just a great article for learning opportunities.

Pointing out the fact that this is looking back four years, so basically starting in 2020. And 2020 was the year of COVID. So what you were doing in 2020 and what's changed since 2020 is very different than if this was an article written, say, in 2019, looking back to 2015. It probably would have been too early.

[06:00] That’s what I mean, that’s a very special four-year time period… Because I feel like at COVID, and then the COVID cloud infrastructure as code - so much has happened in that last four years, where we started and where we ended up, and how people were like using and deploying things… I feel like other four-year periods would not be as exciting to talk about.

Yeah. Or all of them would just say “We regret Jenkins.”

Exactly. [laughter] So I just think that –

It was like a constant for ten years. [laughs]

I think it’s a really special timeframe, and he did a great job of kind of summarizing them all. And it’s just enough information for you to learn, and maybe you can go down the rabbit hole that we all go through, where you’re like “Oh, I’ve never heard of this”, and then you kind of get into it… But it’s just enough information to be concise and be like “Oh, I didn’t know this existed”, or “That’s really cool, and I didn’t know about that feature.” And then you move to the next thing. It’s a long article, but it’s not hard to read. He did a very good job of kind of summarizing it.

Yeah, the way he just broke it down… Like, one of the things he points out in here was using Homebrew for company scripts. I thought that was brilliant, because we've had lots of company scripts in places I've worked, and it's always been in some Git repo. And you have to "Oh, go git clone the scripts", and then pull it down, and then source it in your path, or something like that. But in this case, "Well, we'll just package it in our own private repository." He doesn't even go into that. But as soon as I read it, I was like "Oh, I know how you would do this", and that's really cool, because it automatically goes in your path. You already probably have brew installed in a lot of – I mean, not on Windows, but Linux and Macs have brew installed, and you can just download those scripts however you want, and they go in your path automatically. And brew update is something that a lot of people are already doing. So really cool to see how they're doing that, and the fact that in this case, in the article, he would endorse other companies using Homebrew for scripts, too.

Not just that, but there’s only so much you can learn at school; there’s so many so much you can learn in different ways, and a lot of being a software engineer and working in technical fields are just really on the job. And I feel like articles like this and post mortems are just like such a way of learning more. It’s just one of those underrated ways of like absorbing knowledge, you know?

[08:05] Yeah, for sure. And my link in the outro today is something I learned on the job this past week. And so exactly - there’s things that you learn in formal education, and there’s things that you just learned because someone else was doing it, and you’re like “Oh, this is how it works. Okay, cool. Let’s figure out how we can use it.”

And just like what you were saying - now you work at a startup, so you could take that Homebrew thing that you learned and use it at your new job, you know?

Yeah. How about you? You’ve got a link for this week?

So my link is about Reddit. So I’ve been a longtime Reddit user. I don’t know about you, do you use Reddit?

Not very much. I don’t know why. I was a big Digg user back in the day. And then Digg kind of exploded, and I never went over to Reddit as much. I use it occasionally, but it’s not the place I hang out.

So I love reading about other people's views of like a book that I'm reading, or like a chapter, and I'm like "Can anybody believe that this just happened?" And then I watch a bunch of other nerds have opinions, and have a moment about it… And it's awesome, because you can't talk to your friends about all this stuff, because they'd probably be annoyed that you obsess over this much detail… So I love Reddit, and it was kind of crazy to me that they were going IPO after such a long time of not IPO-ing… And I was wondering how that would affect things – it's a big social media platform, but it's kind of not like… You know, Facebook got big, and new, and then kind of people stopped using it. MySpace got big before Facebook. But they get big, and new, and everyone loves it, and then people hate it forever. And Reddit's kind of like one of the big ones, but it's – I wouldn't say like underground, but it's not as popular as other ones. Some people either really –

Well, just the structure of Reddit was so different than the other things, because it was the first sort of interest-based social media, right?

It wasn’t your actual social network. It wasn’t people that you knew in person. It was like “Hey–”

Which is why I love it.

Yeah, if you want to go deep on, I don’t know, hair trimmers or something like that, there is a Reddit subforum that you can go and –

You can go deep on like smoking meat recipes, how to clean things, plants… You know when you go down a rabbit hole, when you're a nerd about technology, or about something you really like… That is where it happens, on Reddit. People give you every answer, every plot twist, or how they thought an ending could happen, and it's awesome, because you're all geeking out on something that you really, really like. And if you talked to half the people you know in that much detail about something you really liked, they'd be like "Shut up." So that's why I love it. Because it's the one place that you don't have to tell your grandma and all your friends about every interest you have, because you can just put it on Reddit instead of Facebook or Instagram.

So what’s the article you had about Reddit?

Reddit signs an AI content licensing deal ahead of IPO. So I was kind of wondering - Reddit's been around for a long time, and they're just deciding to IPO. And I was wondering how this would change the context of how Reddit operated. And it's interesting, because I wonder if they would have made this deal if they weren't IPO-ing… Because it's a $60 million deal with what they're saying is one of the biggest AI companies, but they haven't actually said what company it is. And now for Reddit's IPO valuation, they were advised to seek $5 billion. So that's obviously a very different valuation than they may have given Reddit not too long ago. But also, how does this make people feel that use Reddit? I felt like Reddit was almost like a safer place away from – you know, after the whole Cambridge Analytica thing with Facebook… And I think Facebook got really dark for a while… It's interesting to think "How will this change Reddit?" And what kind of data are you getting differently from Reddit, that you'd get from other social media platforms? Because like we said, it's a very different social media platform, right?

[11:54] Well, and it’s always had a lot of the most human-curated content. It seems like Reddit is still one of the last safe havens for humans to explain their theories in depth on research and various things. I’ve seen so many cool Google Sheets, LinkedIn comments, they’re like “Hey, what’s the best taco truck in LA?” And I’m like “This is fascinating to me, let’s go.” Someone’s like “Here. I tried 50 of them. Here’s all my links and ratings.” I’m like “That’s amazing.” And I know Reddit went through like API changes, and there was a lot of backlash with some of the communities in subreddits… But I’m really curious too how this affects the infrastructure and the software. Because as soon as you’re like “Is this just open for a single company to crawl their information? Is this going to be something that they’re going to be doing the AI infrastructure and crawling on that?” And so that stuff is really cool, because I mean, for a lot of people I know Reddit is the thing you put on the end of your Google search. You’re like “What’s the best toothbrush? Reddit.” And you’re just gonna find this subreddit about electric toothbrushes, and there’s gonna be some person that has like “Here’s every Amazon listing.” It’s like, that’s how people use it now. But if this becomes more ingrained in the product, or if it takes all of that human curation and knowledge, what does that do to the application? What does that do to the infrastructure? That seems really fascinating to me, obviously, because I’m a nerd in that regard.

I feel like Reddit subthreads are like - you know when you and a bunch of your nerdiest friends get together, in the privacy of your own nerddom, and you have your moment… But now, it's like - obviously, with any free social media, you know that you are the product. We all know, to a certain extent. But it's almost like pulling back the sheets on the fact that this is where you go to kind of be more of yourself on social media… How does this data change the way that people are affected by algorithms? It's interesting; does it make them more accurate? Is it going to start pulling – I remember years and years ago, there was that Target thing… I wouldn't say it was AI. Well, I guess it is kind of AI, but they were guessing that people were pregnant before they even knew they were pregnant.

I remember that. Very cool.

Yeah. And I remember that there was this dad, and he was so offended that they thought his teenager was pregnant, and she really was pregnant… And it was like how they were going off of people's purchases. So can you imagine, with so much detail of human curation and interests, how does this change algorithms, and then how is that used to profit off of us? In a way, it's kind of cool, because maybe it helps us to find more of our tribe out there, or people. But then also, how will that be monetized? So it's both interesting.

As a complete aside to what you’re just saying - because you were talking about finding your own safe space for your nerddom… You’re speaking at the Southern California Linux Expo in March, and I’m going to be there. I help run Kubernetes community day and scale, and I’m excited – it’s gonna be the first time we actually meet in-person, but if you want to come to a nerddom…

In real life…!

Yeah, in real life.

It’s gonna be so fun.

And so for anyone listening, if you like infrastructure and software, check out socallinuxexpo.com. SCALE starts March 14th, I believe, and it goes through the weekend. It's actually a really family-friendly event. Especially for like Saturday, there's a game night…

I think I’m gonna bring my kids next year.

Yeah. My son gave a talk last year, and the game night’s really fun… They have a lot of just more family-friendly – it’s a community event, and so it’s not necessarily like large sponsorships, or single technology or company… So yeah, I didn’t even realize, like, I haven’t even mentioned it on the show once, and it’s coming up by the time you hear this show.

Hear me out - next year we get our kids to co-write a talk and give it together, and then we’ll co-write a talk and give it together. We’ll do the kids track, and we’ll do like the adults track.

Yeah, that would be cool. So yeah, for anyone that’s listening, if you’re in or around Pasadena in March, and you want to come to a safe space to learn a lot from other nerds, check out the Southern California Linux Expo. And with that, let’s go ahead and jump into the interview with Wanny and Deepak, and learn all about how they manage their hybrid infrastructure and make a product that other people run in their infrastructure as well.

Alright, thank you so much Wanny Morellato and Deepak Mohandas. I hope I said those right. You’re both from Kong, and I’m so excited to have you here to talk about how infrastructure runs for Kong, why it’s important, and specifically around running hybrid applications or hybrid infrastructure. Wanny, why don’t you tell us what you do over at Kong?

Yeah, thank you. I’m Wanny, VP of Engineering here at Kong. Kong is an open source cloud connectivity company. Kong products are designed to sit in front of your API and microservices. They manage the traffic, the end authentication, rate limiting, logging, monitoring, and a lot more. Kong gateway specific, it acts as the entrypoint of your infrastructure. It manages and secures your API. Kong Mesh allows you to manage the communication East-West between your different microservices within your distributed application architecture. And both together, they really enable you to better control and get better visibility and scalability across all your different microservices in your infrastructure.

So fancy load balancers, right? I mean, that’s like the – it was like sparkling load balancers. It’s kind of where we’re putting these things. If we want traffic coming from outside to come in, we can go through here and do that rate limiting stuff. And if we want traffic inside of your network, we just put load balancers everywhere. Client side, server side, wherever a load balancer might load-balance, we’re going to load-balance it, right?

Exactly. That is what we do.

And Deepak, how about you?

So my name is Deepak. I work on the SRE team that powers Kong's cloud platform. We call it Connect. So that's the platform that powers the control planes of all these gateways. My team is responsible for building the entire platform, and then building these IaC and GitOps tools for other teams to deploy their software automatically, or at least fully self-serviced, rather than us doing all the stuff. We build automations for them to do it.

And you said SRE, but that sounds like platform engineering. That's just like – titles are hard.

That’s the direction we are pushing. We want to be a platform and build the automations and a self-service model, and teams get full flexibility to own and operate their own softwares on top of us.

And so why is a hybrid infrastructure, something that runs in a cloud environment and on-prem - why is that important for your infrastructure and for customers?

Yeah, so that is crucially important, because you really want to balance the high performance, low latency that you expect between your microservices and your APIs. You don't want to add, as we were saying before, load balancing all over the place, and add even under a millisecond of latency on your API calls. You want those routing paths to be as efficient as they can possibly be. And that means that your gateways, your meshes, your sidecars need to run very close to where the workload is. And sometimes that is in the cloud, sometimes that is on prem, sometimes that is on the edge. And so Kong data planes need to be able to run everywhere.

At the same time, you don't want your management control plane to run everywhere, too. You want that to be centralized into one place that is highly available, that you don't need to worry about. And that is what we really focused on building with Connect. We have built the data plane to be flexible and smart enough to be able to run everywhere, but then we took the complicated part of the infrastructure - the control plane collecting the analytics, collecting the telemetry, the backup, and all of that - and centralized it as a SaaS offering, so that you don't need to worry about that. It just works and runs for you. And you just care about defining where your data planes need to run.

And we’ve found that this is super-important, because it really simplifies the life of your platform SREs. They don’t need to run and deploy very complicated infrastructure. They just need to sprinkle Kong data plane where they need to be. And the complicated part, [unintelligible 00:22:55.11] the backup, all the big data analytics, data crunching, anomaly detection stuff that needs to happen, it can happen outside your infrastructure. And so this hybrid mode of configuration is what we see picking up a lot with our customers, and also internally with ourselves.

And the hybrid piece of that - because someone that wants to run Kong, they can run it themselves in their own data center, and then the collection of data and analytics can go up to you. So they're like "I need to deploy this. I can get a VM, I can have a Raspberry Pi, I can have whatever I want locally, that just runs this load balancer gateway thing." But then the data gets sent. Or I can run that in my AWS account, and I can say "Here, everything is isolated in my AWS account, but I still can send out that data." What is your typical deployment model for that load balancing? I mean, I'm used to NGINX, and I can just go run NGINX anywhere I want, and they don't collect the data, they just – I scrape it, or I do whatever I want. Why is it more important, when you're running a load balancer, to have that data somewhere else?

[23:59] You may start simple, with a simple NGINX and a very static configuration. And that brings you from zero to one. But then when you try to go from one to [unintelligible 00:24:07.12] it's getting much more sophisticated - a number of policies and governance and rules that you want your data plane to enforce for you. And so soon that traditional CI/CD flow, where you need to update your NGINX config and restart NGINX to pick up the new rule, starts becoming slow, because your CI/CD may need to deploy and update the config of hundreds or thousands of different NGINX instances across your different deployments… And at the same time, sometimes you really want much more central governance. You don't want to just review what your different environments in GitHub look like, you just want to know exactly what happened a millisecond ago. And so from that point of view, you want to decouple a little bit, too. You want to have a data plane that is optimized to run in all the different types of infrastructure that you need. With Kong, we're building – Deepak was saying about that, we built a Kong operator that is fully optimized for Kubernetes. So if you have the flexibility to run Kubernetes on your infra, either through one of the cloud providers' managed EKS or whatnot, or if you run it on-prem yourself, we have an optimized deployment for that. You install the Kong operator, and then we take it from there. If you run it on OpenShift, we have a different solution for that; if you run it on bare metal or VMs, you can just run Kong there. But all of them, what they do is [unintelligible 00:25:43.04] they actually will look for the control plane, and for instructions on what to do next. So you can actually centrally manage your policies and your rules and your services and your URL paths and all that you have to do centrally in one place, and then all these data planes become very stateless. They become very [unintelligible 00:26:05.01] You can treat them much more like cattle; when they get sick, you shoot them, a new one will come up, will connect, and just get the configuration, what it has to do, and be up and running. And this really makes it much simpler to think about all these load balancers around your service. You don't need to count them anymore, they just do the right things.
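To make that stateless data plane idea concrete, here is a minimal sketch of the pattern being described - not Kong's actual cluster protocol, just the general shape: a node boots knowing only the control plane address, pulls declarative config, and re-applies it whenever the version changes. The endpoint, config shape, and apply step are all hypothetical.

```python
import time

import requests  # assumes the 'requests' library is installed

CONTROL_PLANE = "https://control-plane.example.com/config"  # hypothetical endpoint
POLL_SECONDS = 10


def apply_config(config: dict) -> None:
    # Placeholder: a real data plane would reload routes, plugins,
    # upstreams, etc. from this declarative config.
    print(f"applying config version {config.get('version')} "
          f"with {len(config.get('routes', []))} routes")


def run_data_plane() -> None:
    current_version = None
    while True:
        try:
            resp = requests.get(CONTROL_PLANE, timeout=5)
            resp.raise_for_status()
            config = resp.json()
            if config.get("version") != current_version:
                apply_config(config)
                current_version = config.get("version")
        except requests.RequestException:
            # Control plane unreachable: keep serving with the
            # last known-good config instead of failing the node.
            pass
        time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    run_data_plane()
```

Because the node keeps no state beyond the last config it fetched, replacing it is cheap - the "cattle" behavior described above.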

Let’s move away from the product and sales pitch here into your infrastructure. How is your infrastructure run in a hybrid way? What are you using to manage it and to roll it out? Is this like a “Hey, we have TerraForm for the AWS stuff and Bash scripts for on prem?” Or how does that work?

Yeah, so we do kind of a little bit the same as what our customers do. So we run a lot of the things in AWS. We deploy EKS, and we run the Kong operator for Kubernetes. We actually have another environment that is still in AWS, but there we run our more traditional Kong hybrid mode deployment, because that fits best that type of environment. We have other environments that run more traditional VM setups in that scenario and whatnot, and there we just run Kong on a VM. And for that case we use Terraform, and we install Terraform [unintelligible 00:27:21.20] that. In the cases where we have Kubernetes, we use Terraform actually to get Kubernetes up and running, and then we actually install the Kong operator into Kubernetes.

How does that handoff work? Because that's a typical pain point for some people, where it's like "Hey, we have Terraform", and at some point you need to break the – we're gonna use Terraform for the Kubernetes resources, which isn't always the smoothest thing, or we're gonna switch over to GitOps, or we're going to do a Helm file, or we're gonna do something else once the Kubernetes API is available. How do you bootstrap that into an environment to make sure that "We need to run this, too"?

[27:59] So what we try to build with Terraform is supposed to be provisioning the foundational layer. In our case it's EKS, the most foundational piece that you get. And then the foundational layer installs, next, a full CD system. In our case it's ArgoCD. Argo gives us the beauty of a UI - a lot of engineers want to have a UI to understand what's happening in the cluster - and it has a lot of functionality. From our side, what we gave them is a GitOps approach where they can define Argo applications as code. The Argo application is nothing but you defining your own deployments, and your own deployment strategies, and how you want to deploy. Because we span different geolocations - we have services running in the US, Europe and Australia.

Now, we want to give the teams the flexibility - you decide how you want to deploy them. [unintelligible 00:28:47.15] we start from there, but we don't push to all the geolocations. We have a rolling fashion. So we give that GitOps approach where you define your deployment as code for Argo, and then we have a GitOps approach; we take the definition, apply the definition, and then let Argo take it from there. And we have a very complex system where deployments start from our lowest-risk region, because we have secondary regions… Or let's say Australia - we're pushing on US times, so Australia is the safest time, because of low traffic. So we start from there. And then once we deploy to that region, we're gonna do an integration test, an end to end test, to make sure everything looks good. And then we go to the next region. And this is a combination - it's Argo, as well as GitHub Actions. So you deploy to one region, and then we have some hooks that trigger a GitHub Actions workflow and perform another action. So it's kind of a chain workflow, I would say. And then we have different stages. We stop and verify that region is completely deployed and it's safe, by running these integration tests. If it is having some regression, we break there, page the user or the engineering team, so they know "Okay, should they roll back, or should they fix forward?" It's up to them.
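As a rough illustration of that chain workflow - this is a sketch of the gating logic described, not their actual pipeline: deploy to the lowest-risk region first, run the integration suite, and either continue to the next region or stop and page the owning team. The region names, scripts, and paging hook are made up.

```python
import subprocess
import sys

# Hypothetical rollout order: lowest-risk region first.
REGIONS = ["au", "eu", "us"]


def deploy(region: str, version: str) -> None:
    # Stand-in for "sync the Argo application for this region".
    subprocess.run(["./deploy.sh", region, version], check=True)


def integration_tests_pass(region: str) -> bool:
    # Stand-in for the end-to-end suite that exercises the service's routes.
    result = subprocess.run(["./run-integration-tests.sh", region])
    return result.returncode == 0


def page_owning_team(region: str, version: str) -> None:
    # Stand-in for the alerting hook; the owning team then decides:
    # roll back or fix forward.
    print(f"PAGE: regression in {region} while rolling out {version}")


def rollout(version: str) -> None:
    for region in REGIONS:
        deploy(region, version)
        if not integration_tests_pass(region):
            page_owning_team(region, version)
            sys.exit(1)  # stop the chain; later regions keep the old version
        print(f"{region} verified, continuing")


if __name__ == "__main__":
    rollout(sys.argv[1])
```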

Well, and you’re describing the production side of things, right? This is more or less one environment, but Wanny, you were talking about you have multiple environments with VMs too, that aren’t necessarily the same. So I am a developer, I want to commit this new feature, and I’m pretty sure I wrote it right, and I have a couple tests, but I need to go through all the stages first of environments. So it’s like, do I just git push and like automation does all the magic? How does that actually go from I checked in the code, to now I’m doing a production rollout? There’s a gap in there. Something happens, right?

No, so by default it’s like every engineer pushes to the main branch, which is the core of their service, what they have; we constantly deploy, so there is no like timing or batch deploys. We continuously deploy. So we constantly test in dev, and the same pipeline exists there. Whenever they push, we build an image, which is a container image, we deploy to their environment of choice, which is the dev environment, and there is this integration test [unintelligible 00:31:00.04] Now, for production, they say “Okay, I want to production.” They put it up, so there’s a separate workflow that picks the verified one in there, and then takes over, and then do the production. [unintelligible 00:31:13.05] continuous deployment system that we have.

How long does that process typically take? I mean, even with continuous deployment, if you’re going through multiple environments and multiple regions - I mean, that’s a multi-day rollout, to be safe, right? You’re not just throwing out like “Hey, I can get this out in 10 minutes.” Like “No, actually you have to do 18 different steps to get there.”

Yeah, I think the slowest I’ve seen is like 40 minutes, because of every region, they have integration test suite, and then that kind of tests almost all routes what the service does. Basically, it stress-tests all the abstractions of that service and makes sure there is no regression there. So there’s a lot of things that they test in every region [unintelligible 00:31:56.10] Even in dev, like I said, there’s two different geolocations, because we have some global entities, and we have some regional entities. So some services have a presence in both, so they have different logic and abstractions based on where they’re run. So when they push that service, it behaves a little bit different, so they have to test end to make sure. So the slowest I would say is – like, 40 minutes is the worst I’ve seen.

[32:19] And maybe something that we probably didn't cover is that we actually stress a lot on the automation of this. So when Deepak was describing us rolling the deployment across the different stages, we don't really have a manual check, somebody that goes [unintelligible 00:32:33.02] Actually, those automation checks, those validations that prove that the deployment was successful, are part of the deployment itself. So as the Argo rollout policy happens, we actually look at the metrics that tell us that that service has passed those checks, it has passed those smoke tests, and so it gets promoted… And this automation side detects a deployment going bad, and reverts back to the previous [unintelligible 00:33:01.11] And that is what really allowed us to not take a couple of days to do the whole dance, but to be very quick, and in a couple of minutes roll out every region. So even when we serialize the different environments, we just go at five minutes at a time, and we end up taking at most half an hour or so, without a manual process in the middle that will force things to take a couple of days.
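Here is a minimal sketch of that kind of metric-driven gate, assuming Prometheus-style metrics; it is not Argo's actual analysis mechanism, just the idea of promoting or reverting a rollout based on what the numbers say. The query, threshold, and endpoint are hypothetical.

```python
import requests  # assumes the 'requests' library is installed

PROMETHEUS = "http://prometheus.example.internal:9090"  # hypothetical address
# Hypothetical query: 5xx ratio for the canary over the last 5 minutes.
ERROR_RATIO_QUERY = (
    'sum(rate(http_requests_total{deployment="canary",code=~"5.."}[5m]))'
    ' / sum(rate(http_requests_total{deployment="canary"}[5m]))'
)
MAX_ERROR_RATIO = 0.01  # promote only if under 1% errors


def canary_error_ratio() -> float:
    resp = requests.get(
        f"{PROMETHEUS}/api/v1/query",
        params={"query": ERROR_RATIO_QUERY},
        timeout=5,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    # An instant query returns a vector; value is [timestamp, "number"].
    return float(result[0]["value"][1]) if result else 0.0


def decide() -> str:
    ratio = canary_error_ratio()
    # Promote if the canary looks healthy, otherwise revert to the
    # previous version - no human in the loop for the happy path.
    return "promote" if ratio <= MAX_ERROR_RATIO else "rollback"


if __name__ == "__main__":
    print(decide())
```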

And how does that apply when – you said you’re using EKS, so you have to upgrade EKS, the base Kubernetes at least three times a year just to stay current. How does that impact your testing and your rollouts and your other things? Because those are sometimes breaking changes, or need fundamental infrastructure changes. How does that impact the rest of what the developers are trying to do with pushing out code?

I think the key there is defining the right abstraction. So we are pretty diligent about all the features that we use and don't use. For example, all our dependencies - Kubernetes is one of them, but databases are the same. So we try not to be picking up the [unintelligible 00:34:05.12] just for fun; we try to be very diligent about the list of the dependencies that we know. So then when we actually come to deploy one of these dependencies - you can take EKS, for example - we actually have the list of what we use, and so we can actually go and validate for that.

And then we use the same process. We actually go from the earlier environment to the latter environment in that cadence… And sometimes those actually take much longer. Especially when we know that there is a breaking change, where we work with the product team, where they actually have something - the application needs to change to move from one version of the secret operator to another, for example - that is where manual reviews actually happen, so that we stage that in a slower fashion between the environments, to give the team time to update, to pick up those dependencies, and stuff like that. But I think the key, at least from my point of view, is really tracking the features and functionality that you use in your dependencies. So you don't get surprised when you update, but you can actually review what changed and what not, and you can actually plan that upgrade to be successful.

How does that apply for customers? Because you’re shipping something that someone else is going to run in their environment too, and you’re running a very specific version of Kubernetes on EKS, and someone else has a GKE version, or an old on-prem self-deployed version, and those aren’t going to line up with what versions of – like, that matrix of what does Kong support, and what Kubernetes, and what VMs and what operating systems we support is not something you can simply just say like “Oh, I’ll just look at my dependencies.” Like, that is too complex to know ahead of time. How do you deal with that sort of shipping code to a customer?

Let’s start probably from this simple use case, for example - you run Kong on a VM. The list of instructions and dependencies that you have when you ran on the VM - they’re very clear. And so when you do your yum install kong, or apt get install kong, that is actually what guides you through.

[36:05] When you actually go to Kubernetes, that is where for example if you look – we used to have [unintelligible 00:36:09.11] a lot of the challenges that you're describing. And that is one of the reasons why we actually started thinking about building a Kubernetes operator, that we call KGO. That's really to give you a little bit of a better abstraction, and not have to rely on something that changes often, and on that explosion of the test matrix. Instead, we take ownership and get the Kong operator to have a minimal footprint of dependencies, and just build on top of that. And so that makes you much more resilient to version changes and stuff like that.

Still, there are sometimes cases where if you migrate from this version of Kubernetes to that one, then you need to actually use the next version of the Kubernetes operator, and stuff like that. And that still requires human communication. We didn't find a silver bullet to fix that. That is still complicated sometimes, and it still requires review… But yeah, I think the key there is Kong has done a pretty good job of keeping the list of the subdependencies to a minimum. [unintelligible 00:37:17.15] we always pick something that is pretty stable. And so that kind of lets you ease your mind most of the time.

And Kong is an ingress and load balancer inside of Kubernetes - and Kubernetes just went through a whole change of Ingress and Gateway, like APIs changing. And that was a big change for a lot of people, where Ingress was never stable, but you were building on Ingress, and it's not – you know, it was a beta API, and customers were using it across the board, and now Gateway is there, and people have or have not moved to it. And they have a lot of dependencies and kind of understanding around how that works, and why they were using it for an NGINX ingress, or something like that. That is a very different problem, where you have a dependency on a beta API, that people have been using for a long time, that no longer is going to be upgraded. That's a big change. Like, that's a big software – like, you have to test both sides of it, because you're gonna have customers on both sides… Like, that's not stable at all.

Maybe let me open a parenthesis. We have a lot of customers that actually never even wanted to uptake those APIs. And they'd be running Kong in Kubernetes not as an ingress, but as another service, and using Kong to route the internal services. I see some customers that are just not willing to run on the bleeding edge, and they will just not uptake those APIs. And those folks have been running Kong as another service, and using Kong to route the services inside, outside the "Kubernetes ingress abstraction". I see that a lot. But you know, we see also a lot of people that like the Kubernetes integration, [unintelligible 00:38:53.24] but with that comes a little bit of, you know, you are on the cutting edge, and you need to have the expertise and the resources to deal with those breaking changes. Deepak has been through some of our internal upgrades; maybe you can talk to that and the challenges that we faced ourselves as the stuff was breaking, and we were figuring that out.

Yeah, Deepak, I feel like there has to be some very interesting stories, as Kubernetes has changed drastically… But even not just Kubernetes. Customers’ requirements and expectations have changed. How has that impacted the infrastructure that you’re running? When you expanded to three regions, you made that decision, and you had to add those extra hooks in that workflow to make sure that you can deploy and make sure they happen in order, and all that stuff depends on where your customers are, and how they’re trying to use this as a product.

We give the same feedback. Even for us, when we run our own infrastructure - like, we are an open source friendly company; we have a lot of open source software that we use. And like you said, the upgrades are very critical, so we do a quarterly check to see if there are versions that we need to upgrade for these specific ones… And there are breaking changes, right? So we start with the change logs; there are teams that still have these kinds of [unintelligible 00:40:10.06] we were doing an upgrade two weeks back, to 1.2.6, and then we found there are teams that had very legacy versions, like alpha one of [unintelligible 00:40:19.08] So we do that check. We do that check to see what APIs the teams leverage that are pretty old and deprecated, and then we tend to work with them. So then we go to them and say "Hey, we've found out this is a problem", and we try to work with them. From my side, we just send PRs to them and say "Hey, we've found this one. It's very stable now [unintelligible 00:40:43.06] you might have missed it", so we're helping you there to get it up.

And then there are cases where we have to upgrade some critical ones, and those are the trickiest ones. That's where we kind of trust the dev environment; we do end to end tests to make sure – like, in some cases it's a breaking one, and you have to migrate off. So that requires an old and a new one, so you have to have dual versions of those entities… Let the application shift over to the new ones… Like, for example the external secrets operator. There was the old version and the new version, so there were breaking changes that they had to make in the application [unintelligible 00:41:17.07] So we work with them, and we help them do a zero-downtime migration. That's another way we deal with them.

I think the operator was the main thing… And the other one, like you said, because with Kube you have to do it every three months or four months. Now, there are certain things where we go and help them in the service reviews, like help them to configure [unintelligible 00:41:38.18] budget. So when we recycle the nodes, the data plane nodes on the Kube side, we guarantee we're not going to cause any sort of issues… Because these are customer-serving services, so anything that we cause here is going to have a cascade effect to our customers. So we try to educate them, we try to evangelize the benefits to them, and we try to use our learnings and self-apply, which is something we also do with customers… Because in the end, these abstractions are the same. Customers also have the same thing. So we dogfood certain things when we do Kong upgrades ourselves and say "Hey, we've found this one." Now, sure, there are customers who are doing the exact same thing, and we give early feedback and stuff like that to them.

With all the people racing towards hybrid cloud, and infrastructure as code, and automation, and more and more - like, HashiCorp and Terraform are running towards more automation, and everyone's trying to build tools to make hybrid cloud more doable and easier to run - how do you feel like you guys set yourselves apart from maybe the stuff that is already out there, or that's coming out?

Very good question. I see it less as setting ourselves apart and more as playing well into the ecosystem. I think when you're really trying to deploy something that is not just a Hello World, but is a money-making application, it is coordinating between different vendors and different solutions; it is not just one button that fits all. And so it is about "How do you get all these different components to work well with each other?" And I think that is where – you know, Deepak was saying open source comes to really help this discussion. It's very difficult; if everything's closed source, you don't know what anybody else is doing, and you cannot forecast it. Being able to see the different components in open source allows you to think through the interactions that can happen, and kind of plan that out, and have them play nicely together.

[43:38] One thing that we see internally that is working very well, for example, is we do GitOps across the board. So all our different configurations are in Git, and so what we started doing for upgrading some of our dependencies - we're kind of following the Dependabot model; tracking our dependencies and upgrading with that. And we internally script to also upgrade from one version to another, [unintelligible 00:44:00.17] or stuff like that. Just last week we noticed that folks were forgetting to put the timeout, for example, in their GitHub Actions. So what we did is we wrote a "Dependabot" automation that went through everybody's repo, figured out if you're not following the best conventions setting up your job, and we sent everybody [unintelligible 00:44:21.29] already with the patch to upgrade that type of GitHub Actions workflow. And that I think is, at least in my experience, what we see internally. It's a very easy way to remove the complexity and the interaction between teams by just – you know, you have this magic PR that shows up, and you just review it, "Oh, this looks good to me. I'm gonna merge it, and everything [unintelligible 00:44:44.01]
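For a sense of what such an internal "Dependabot-style" sweep might look like - a sketch, not Kong's tooling - this walks an organization's repositories and flags GitHub Actions jobs that are missing a timeout-minutes setting; a real bot would go on to push a patch branch and open the PR. The org name and token variable are assumptions.

```python
import base64
import os

import requests  # assumes the 'requests' library
import yaml      # assumes PyYAML

API = "https://api.github.com"
ORG = "acme-corp"  # hypothetical organization
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}


def org_repos():
    # Page through every repository in the org.
    page = 1
    while True:
        resp = requests.get(f"{API}/orgs/{ORG}/repos",
                            params={"per_page": 100, "page": page},
                            headers=HEADERS)
        resp.raise_for_status()
        repos = resp.json()
        if not repos:
            return
        yield from repos
        page += 1


def workflow_files(repo_name):
    # List files under .github/workflows, if the directory exists.
    resp = requests.get(f"{API}/repos/{ORG}/{repo_name}/contents/.github/workflows",
                        headers=HEADERS)
    if resp.status_code == 404:
        return []
    resp.raise_for_status()
    return [f for f in resp.json() if f["name"].endswith((".yml", ".yaml"))]


def jobs_missing_timeout(repo_name, path):
    # Fetch and parse one workflow file, return job names without a timeout.
    resp = requests.get(f"{API}/repos/{ORG}/{repo_name}/contents/{path}",
                        headers=HEADERS)
    resp.raise_for_status()
    workflow = yaml.safe_load(base64.b64decode(resp.json()["content"]))
    jobs = (workflow or {}).get("jobs", {}) or {}
    return [name for name, job in jobs.items() if "timeout-minutes" not in job]


if __name__ == "__main__":
    for repo in org_repos():
        for wf in workflow_files(repo["name"]):
            missing = jobs_missing_timeout(repo["name"], wf["path"])
            if missing:
                # A real bot would open a PR with the fix here.
                print(f"{repo['name']}/{wf['path']}: jobs missing timeout-minutes: {missing}")
```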

How do you build the trust for them to take that PR though? Because also just - I mean, giving the control away to automate things by other people… You know what I mean? Like, when you've been running the Bash scripts, and you've been running your infrastructure a certain way… Especially if you've been running on AWS, and now you're learning how to run hybrid cloud. Or you've been running with Terraform or Ansible, and you've done all this automation, and now you're kind of trusting people… I feel like it's hard, even in open source, where there's a governing body of some sort - like, you've got maintainers - to get people to trust the next upgrade, or the next change, or the next, you know…

Yeah, a hundred percent. And I think that is probably the key, at least in my opinion. We are not asking the engineering team to give up control, and say “You just don’t see this part of the infra anymore. You just get this new abstraction.” No. We actually send them the PR; they still see what is going in that PR. There is the link to the documentation of what the change is doing. So that allows them to actually understand the change, approve it, merge it… And that over time builds trust, because they see that this type of bot is helping them.

At the beginning, I remember when I started using Dependabot, I was like "Oh, this PR", what is it doing, and… You know, you start writing tests, you start trusting it more, you start spending less time actually being worried about things breaking, because you have built some tests that will tell you if they break during your CI process, and stuff like that… But you still get the visibility into what it is doing.

In my previous work, we were working in places where they'd totally change the abstraction. They'd give you just a black box that you drop your thing inside, and you didn't know anything more… And I felt a lot of pushback from my team in that model, because they wanted to know. Even if they wanted to delegate some tasks to somebody else, they still want to know what is going on. And I find that this inversion, where instead of hiding things from developers you're automating things for developers, really helped to build that trust and get consistency in automation, and get people to keep up with the dependency updates kind of work.

I think there’s always a worry of like if you don’t have the context, if it breaks, you can’t fix it. I don’t know if you can say, but what is the, I guess average amount of people that will accept your PRs? Is it like a high percentage, or…?

Yeah. Nowadays, 90% of the PRs get accepted. There is always the snowflake, to be honest. I'm not here saying that we are in the perfect world where all microservices look the same. There is always like "Okay, but we cannot accept that, because we rely on this other stuff, and we need to pay back this tech debt first." But that also helps you to track it. Once you have the PR, we have a couple of scripts in GitHub, so I can actually centrally track all the services that have already accepted, and the ones that have not. And then for the ones that have not, we ask the reason why, and we build a plan to build that path to convergence.
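And a hedged sketch of that central tracking - again illustrative, not their actual scripts: list the pull requests a bot account has opened across an org and bucket them by whether they were merged. The bot login and org are hypothetical.

```python
import os

import requests  # assumes the 'requests' library

API = "https://api.github.com"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
# Hypothetical: the bot account that opens the automation PRs.
QUERY = "org:acme-corp author:acme-automation-bot is:pr"


def automation_prs():
    # Page through the issue-search results for the bot's pull requests.
    page = 1
    while True:
        resp = requests.get(f"{API}/search/issues",
                            params={"q": QUERY, "per_page": 100, "page": page},
                            headers=HEADERS)
        resp.raise_for_status()
        items = resp.json()["items"]
        if not items:
            return
        yield from items
        page += 1


if __name__ == "__main__":
    merged, pending = [], []
    for pr in automation_prs():
        # Search results expose merge state under pull_request.merged_at.
        if pr.get("pull_request", {}).get("merged_at"):
            merged.append(pr["html_url"])
        else:
            pending.append(pr["html_url"])
    print(f"accepted: {len(merged)}, not yet: {len(pending)}")
    for url in pending:
        print("follow up:", url)
```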

That’s interesting.

[47:50] I feel like there’s a fine line here for - when you mentioned the Dependabot model as being a good thing, it’s not my general experience. Because it’s just like “Oh, it’s just sending me more work to do”, at some level. Like, “You know what? I don’t have time to focus on that thing, because I have something else to do.” And the more automation, the more work you’re putting on someone. And in a lot of ways, best practices is very expensive… As we were talking about already - timeouts, and pod disruption budgets, and all these things are like, you don’t have to worry about them originally, because you’re like “I’m just getting started.” And then you mature a little more, and you’re like “Oh, I’m gonna get used to doing these things, and I’m gonna get a little bit better at them.” And I’m teaching my kids how to do things, and they’re not doing that right the first time, but that’s okay, because they’re still trying, and they’re still putting an effort. And then later, I’m gonna correct them a little bit. But at some point here, especially with infrastructure - and infrastructure has such a large impact on an organization, downtime… Companies can go out of business with major outages. This is a very critical piece that you can’t just automate everything, because you’re going to affect someone up the stream that you don’t know what they’re using or why. Or even a customer, of like “Oh, I didn’t know you had that API. That’s crazy. This is a surprise to me. This is amazing.” But that automation fine line of how much is too much, how much are we actually going to put on developers and just say like “Just accept these.” Because 90% of people do. “9 out of 10 developers accept our PRs, so you should, too.” It’s like the sales pitch of like buying a toothbrush, or something. Like “Hey, you should just do it, because everyone else did.” And at some point, you get numb to the automation of like “I’m not learning anything from this. It’s not actually helping me get maturity out of it. It’s just noise to be able to say, “Yeah, sure. Whatever.” I’m not the one that fixes it either, because at the SRE side of it I don’t know if developers are getting paged for that. Or if the SRE team fixes and debugs everything, because they watch Git a lot closer to what a developer might be doing at like a global scale. How does that problem show itself inside of a company that is building something for other people that are also trying to automate on top of you, and as a product that has a hosted service that needs to be up and available? Because I’m sure your availability needs to be higher than your customers’.

Yeah, I think you need to go back to the concept of designing for failure. To be honest, designing for failure is not cheap, and it's not simple, and it takes time. At the same time, when you work on the infrastructure level, or on Kong as a load balancer, that is a must have. There is no way around that. It takes time to train the people, for people to get familiar, to design [unintelligible 00:50:23.18] from failure; from the way you design, the way you test, the way you do your performance tests, stress tests, the way your CI/CD pipeline is set up…

Yes, we have a lot of new ideas, and new projects that start, and don’t do this from day one, because they’re proving up a concept, they’re proving up a market, or an idea, or whatnot… But when you start shipping this as production-grade software, then that level of maturity - it is, I will say, non-negotiable. So you need to embrace this responsibility, I will say. That is probably my message for people that build this type of software. That is what your users expect. This stuff has to work; it has to be able to cope with a certain amount of failures. You need to be able to gracefully degrade when this happens, and you need to think about this. And this - yes, this is extra work, but it’s kind of like unavoidable when you build this type of software. If you build a load balancer that doesn’t have good uptime, or a good failure mode, it may be the best one you ever built - people will run it for a couple of hours, it will fail on them, they’re gonna revert back, and move on to the next one.

And pulling out what you just said about this type of software, right? Because that failure mode is very different depending on what you're building. I took down my personal website last night, and no one said a thing, right? I was messing with it, and I broke it, and like "It's fine. It'll come back up, and it's mine." No, I'm not making money off it. Even the people that might come - maybe they'll come back tomorrow. I don't care, there's nothing here; there's no responsibility here. But running something like Disney Plus - our frontend was just a few NGINX boxes, and the failure modes of that were like, auto-scaling groups are fine. Like, we didn't need Kubernetes at the edge, we didn't need a lot of those – like "Hey, a few NGINX boxes handle the load, a load balancer in front of that, and we're good."

[52:19] The failure modes are very different for different people and what they are doing. If someone can't load Moana, that's okay. They're going to hit Play again, and we'll probably pick them up next time, on the new box that's going to come up. And so even if failure is not an option as you're building this for customers, there is a vast spread of what failure means for different people, and how much tolerance they have, like "Can that be down for 10% of my hits? Can that be down for five minutes?" And that all is very negotiable when you're talking to a customer.
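Those tolerance questions ("down for 10% of my hits? down for five minutes?") are really error-budget arithmetic. A quick worked example, with made-up targets and traffic numbers:

```python
# How much downtime does a given availability target actually allow?
# The targets and traffic numbers here are made up for illustration.
MINUTES_PER_30_DAYS = 30 * 24 * 60

for target in (0.99, 0.999, 0.9999):
    allowed_minutes = (1 - target) * MINUTES_PER_30_DAYS
    print(f"{target:.2%} availability -> about {allowed_minutes:.1f} minutes "
          f"of downtime per 30 days")

# The same budget expressed as failed requests instead of time:
requests_per_month = 100_000_000
target = 0.999
failed_budget = int((1 - target) * requests_per_month)
print(f"at {target:.1%}, roughly {failed_budget:,} of {requests_per_month:,} "
      f"requests can fail in the month")
```

Framing tolerance this way makes the negotiation with a customer concrete: 99.9% sounds strict until you see it still allows about 43 minutes of downtime a month.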

It’s also amazing, because it’s different industries. Retail people have gotten to the point of they’re so used to having instant gratification. I think if you take two seconds more to load, people will go to a different website and spend money there. It really depends on the industry you’re talking about, because it’s amazing the amount of like demand that people want when it comes to buying something.

Yeah, that’s interesting with Kong. We have all these kinds of different industries, because we power the APIs. And it could be healthcare, it could be entertainment. And like you said, there’s different failure scenarios. Some can take that hit, some are very sensitive, in finance, banking, for example… You’re doing like a transaction which is like the sub-millisecond. Sometimes you have a one-time failure, okay. If you have consistent failure, that is a very bad user experience.

Yeah. You need to understand the space you are in, what you are okay with, and what you are not okay with, so that you can make the right decision. And you can decide - going back to what I was saying before - if you want to dedicate a sprint to updating your dependencies, or if you want to do it during your daily work. It depends on what you're doing. And I think we need to trust and empower the engineers that are responsible for that service to make that decision. I don't think there is some top-down guidance that applies for everybody. Just give them the tools, give them the knowledge, give them the automation, and they will have the context to make the best decision. And that feedback loop to keep making things better and improving and learning always exists. But then the responsibility [unintelligible 00:54:23.09]

Yeah, from an infra point of view, the common pattern, at least I’ve observed, is people tend to copy-paste a previous stale config when they create the next one. Like, it could be a lack of service onboarding or something, but the common pattern is like “Oh, there is something already working. I’m building a new microservice. I’m just going to take all these infra abstractions. I can [unintelligible 00:54:43.13] something, copy-paste here, make some changes, get it up.” So you take that baggage and it goes over.

I am absolutely convinced that every company has one Jenkinsfile, and everything was forked from the one that worked, right? Someone got the Jenkinsfile to work, everyone forked it from there, and that's just how it started. Well, Deepak and Wanny, this has been great, and the best thing I've learned so far is that maturity can go up, but it depends on how much maturity you need. You don't have to be the utmost, most mature "We do everything perfectly." That just takes a lot of time, and it's really expensive. You have to figure out for yourself where you're going to spend that time and that budget: "I can't do this today, and that's okay, because I don't need that level of maturity." Maybe you don't need pod disruption budgets; that's fine for your industry or whatever you're doing. So that's really cool.

So I want to thank you both again for coming on the show and talking to us about your infrastructure, and the process for rolling out code… That’s been great. Where can people find either you online, or more about Kong?

Yeah, so if you just jump on Konghq.com, you'll find our public site. If you want to @ us on social media, I'm @mrwanny. Deepak, if you want to leave yours…

I’m on LinkedIn. I’m fully active on LinkedIn, so just find me there.

Thanks so much for coming on the show.

Thank you!

It’s nice meeting you both.

Have a great day, guys.

Thanks a lot.

Bye.

Break: [56:09]

Okay, thank you so much, Wanny and Deepak. That was awesome, just to be able to hear how you're managing your infrastructure and your product, because that's something not a lot of people have insight into; either they aren't doing it now, or they don't have to do it… So if you're learning from any of these episodes, feel free to reach out to them online. Both of them, I think, have their LinkedIn profiles in the show notes, and I'm sure they'd love to hear from you. And if you have other questions, hopefully they're okay with you reaching out and talking to them.

If you have questions for us on this show, please email us at shipit [at] Changelog.com. We would love to hear what you are interested in. If there's someone you would like to hear on the show, or a topic you want to hear covered in a future episode, please email us and let us know, because we're always looking for more topics. For now, we're just running with whatever interests us, and whoever's reached out. So the more we hear from you, the better.

What’s your dream topic, Justin?

Ooh, dream topic… That’s a hard one. I love hardware and datacenters, and I actually think I wanna talk to someone –

You need to reach higher. Higher. Like, dream. What would you nerd out about?

I mean, I guess at that point it's probably some special guests to have on the show. I want to talk to - I don't know, some of the first creators… Tim Berners-Lee, or the ARPANET creators, or… That stuff's super-cool to me. I love the history of it all…

We should do a history episode.

We should do like "How did this come about?" That would be really fun.

That’d be cool. Or “How was this made?”

Oh, man, that show, "How It's Made."

I love that show.

Now I’m gonna go down a rabbit hole…

I love that show. You have a 14-hour plane ride.

I have a 14-hour plane ride there and back. I’m going on vacation. But I bring a month’s worth of movies.

You’ve gotta have choices.

Yeah, right? Whatever I feel like at the time. I don’t know why…

I always go with way more choices than I actually use.

Yeah. I’m gonna bring like eight new things, and I’m gonna watch something I’ve already seen like 20 times. So for the end of today’s show, we want to talk about something that we learned this week. I dubbed this session “This week I learned”, or TWIL. I’m just making up acronyms now. Everything gets an acronym.

[01:00:08.05] Look, I need you to make your Justin acronyms for every end of show thing.

Most of them have something, yeah. We have some way to pronounce it wrong.

[laughs] What was the first one? Because that made my whole life.

That show never shipped. It was WTA.

We’re bringing that back.

WTA. No, we'll get that episode in soon, or at least that outro, at some point. But yeah, we had a "What The Acronym" acronym. So this week is TWIL, This Week I Learned. And for me, this week I learned all about conventional commits. If you go to conventionalcommits.org, it's a way to write your commit messages so they're kind of human-friendly. I have always stuck with kind of an old-school way of writing commit messages… I mean, there are always the fun ones like "Will it work this time? Please, Jenkins, don't fail", all that sort of stuff, and plenty of swear words… But conventional commits are all about making it a little more scannable, so humans can see what your commit is actually about. So you have prefixes like fix, or feat, or docs, or refactor as a sort of heading on it. And then you can also put an exclamation point in the title of your commit message to flag "Oh, this is a breaking change", so you can see when you broke something. And then you still have some of that sentence structure about what you changed… But it's a much more scannable way of doing it. They do this at my new job at Sidero Labs, and also in Bluefin, a Linux distro project I'm part of. I was committing to the main repos of both this past week, and in both cases I realized they were doing this and didn't even know what it was called. Thankfully, the uBlue (Universal Blue) and Bluefin contributor guides said "Oh, use conventional commits." I'm like "Oh, I did not know this was actually a formalized thing." So there is a spec for it, and it's not a hard rule of "You have to do it this way", but more "Hey, we would like you to adhere to this as much as possible, so that it's easy for us to maintain long-term." I found it fascinating, and I wanted to share it in case anyone else didn't know about it and wants to start using it.
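For anyone who wants a concrete picture, here's a minimal sketch of what conventional commit messages can look like; the scopes and descriptions are made up for illustration:

    fix: prevent racing of duplicate requests
    docs: correct spelling in the README
    refactor(parser): simplify tokenizer internals

A breaking change can look like a single commit whose title carries the exclamation point and whose body carries a footer:

    feat(api)!: send an email to the customer when a product is shipped

    BREAKING CHANGE: customers must now have a verified email address on file.

The type prefix (fix, feat, docs, refactor, and so on) plus an optional scope in parentheses is what makes the history scannable, and the ! after the type, or a BREAKING CHANGE footer, flags the commits that need extra attention.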

That’s really cool.

What have you got this week?

So I don't know if it's new, but I was reading about threat modeling, and I really like OWASP for just learning about stuff. They actually have a whole GitHub repository called the Threat Model Cookbook. And there's just so much to learn… I really geek out about security and learning new things, and I really like reading post-mortems, of course… But it's really cool that not only is it something contributed to by multiple people, but there are all these different examples of threat models, how you can publish them, and how you can learn… They can be in the form of code, graphics, or text, and there are all these different tools, methodologies, and technologies… It's a really cool place to start if you're getting into threat modeling and want to learn more about it. Because I think the more we educate people about security and make it digestible - something people can learn about and get information on by themselves - the more it becomes something everybody gets passionate about. And it's easier to get people on board when you want to make serious changes.

So I thought that was really cool, that people put in the work to build this repo and put in all this information. And it's a cool place because people can contribute more information and their own ways of doing it. And there are all these different mediums of threat modeling. Not only can people go and learn and contribute, but I just hope it helps people get more passionate about security in general.
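As a rough illustration of what a threat model "in code form" can look like, here's a minimal sketch using OWASP's pytm library; whether this exact style appears in the Threat Model Cookbook is an assumption, and the system and element names are invented:

    # Minimal "threat model as code" sketch with OWASP pytm (hypothetical example).
    from pytm import TM, Actor, Boundary, Dataflow, Datastore, Server

    tm = TM("Storefront checkout")                    # the model itself
    tm.description = "Hypothetical example: a customer submits an order."

    internet = Boundary("Internet")                   # trust boundaries
    dmz = Boundary("DMZ")

    user = Actor("Customer")                          # elements in the system
    user.inBoundary = internet
    web = Server("Checkout API")
    web.inBoundary = dmz
    db = Datastore("Orders DB")

    order = Dataflow(user, web, "Submit order")       # data flows between elements
    order.protocol = "HTTPS"
    store = Dataflow(web, db, "Persist order")

    tm.process()                                      # output chosen via CLI flags, e.g. --dfd

Running something like python3 model.py --dfd should emit a data flow diagram description you can render with Graphviz, and pytm can also report which of its built-in threats apply to the elements you declared.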

And speaking of acronyms, since we were just talking about those - OWASP stands for the Open Worldwide Application Security Project, and you can find it at owasp.org. It's a neat foundation that puts out these sorts of guidelines for people asking "Hey, how does security work at a larger scale than my one application or my piece of infrastructure?" We need to be able to do this more broadly, across individual companies, open source projects, and everything else.

Honestly, they're just such a valuable resource… There's so much you can learn from their main website alone, and so many projects they've been involved in… I took security classes where they almost verbatim used OWASP text. They were like "We're not even going to do a textbook this week. Just go read their website." That's how good it is. There's so much valuable information, and when it's stuff that we need - I think security is a need, not a want - the fact that people make it freely accessible to learn about is just really cool. It's a nonprofit organization where people contribute their time and effort to push security forward and make it better, and I just think it's a really cool organization, such a valuable resource, and they have really neat ways of teaching people.

Well, that’s awesome. So thank you everyone for listening to the show, and again, if you want to have a guest on the show, or a topic that you want us to cover, please reach out via email at shipit [at] Changelog.com and we will see you all or at least talk to you all next week.

Thanks, Justin.
