Ship It! – Episode #18

Bare metal meets Kubernetes

with David Flanagan & Marques Johansson from Equinix Metal


In this episode, Gerhard talks to David and Marques from Equinix Metal about the importance of bare metal for steady workloads. Terraform, Kubernetes and Tinkerbell come up, as does Crossplane - this conversation is a partial follow-up to episode 15.

David Flanagan, a.k.a. Rawkode, needs no introduction. Some of you may remember Marques Johansson from The new changelog.com setup for 2019. Marques was behind the Linode Terraforming that we used at the time, and our infrastructure was simpler because of it!

This is not just a great conversation about bare metal and Kubernetes, there is also a Rawkode Live follow-up: Live Debugging Changelog’s Production Kubernetes 🙌🏻


Sponsors

Render – The Zero DevOps cloud that empowers you to ship faster than your competitors. Render is built for modern applications and offers everything you need out-of-the-box. Learn more at render.com/changelog or email changelog@render.com for a personal introduction and to ask questions about the Render platform.

Sentry – Working code means happy customers. That’s exactly why teams choose Sentry. From error tracking to performance monitoring, Sentry helps teams see what actually matters, resolve problems quicker, and learn continuously about their applications - from the frontend to the backend. Use the code THECHANGELOG and get the team plan free for three months.

Equinix Metal – If you want the choice and control of hardware…with low overhead…and the developer experience of the cloud – you need to check out Equinix Metal. Deploy in minutes across 18 global locations, from Silicon Valley to Sydney. Visit metal.equinix.com/justaddmetal and receive $100 credit to play.

Grafana Cloud – Our dashboard of choice. Grafana is the open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.

Notes & Links


Transcript



Before the episode I mentioned my history with Packet, which neither of you are aware of. I almost joined Packet in the summer of 2019, and do you know what happened?

Okay. That’s the answer which I was expecting… [laughs] There are two people that know what happened: Zach and Dizzy. Zach said, “Emails got lost in the shuffle.” That’s exactly what happened. I didn’t know what he meant at the time, because I knew nothing about Equinix Metal, so I couldn’t imagine just how busy Zach and Dizzy were at the time. This was, again, the summer of 2019, and in January 2020 it was announced that Equinix was acquiring Packet. So we almost ended up working together… And it’s not the reaction which I was expecting. You’re just looking at me like – I’m not sure if it’s disbelief…

It is disbelief, because I can’t believe what you’re telling me, that you almost worked at a company, and the reason that you’re not is because an email just was missed?

Emails got lost in the shuffle, yes. So I didn’t follow up as much as I should have, maybe… Actually, I think you’re right, David. Maybe on paper I wasn’t as good as I thought I was. So when Zach got my email, he replied, because Bruce at the time was the head of engineering, but Bruce had just left. And Dizzy - Dave Smith - I think he’d just joined. So there was a big shuffle, and in the background I’m sure the Equinix stuff was happening, so he was too busy. And I was like, “Hey, can we talk? This is what I’m thinking.” “Oh yes, sure. Let me connect you to Dizzy.” And he said he’d get back to me, but he never did. Which was okay, because so many things were happening then, even for me, so I wasn’t really insisting on it… But it never happened, and it could have.

[04:16] You should just email tomorrow and be like “Hey, can we pick this back up?” And then just come and work with us. I think that’d be great.

Okay, I’ll think about that. Thank you for that. That’s one idea, for sure. But the thing which I wanted us to talk about is what attracted you to Packet in the first place? I’ll go last. Marques, would you like to go first? What attracted you to Equinix Metal?

Sure. It’s interesting, your setup there, because I hadn’t realized that you have a strong engineering background before we got to know each other the last time… And I’m wondering if the role that you would have been looking for would have been in engineering, or would have been, say, on our team, in the dev rel team. And that question, or that answer (whatever it is) of yours, that’s what I was looking for. So I kind of moved from this pure engineering role to this hybrid engineering/marketing/dev rel role, and that’s what attracted me. I had other opportunities on the table that were more engineering-focused, and I really wanted to be able to have the freedom that goes along with not having the same sort of engineering, “You know, we need this sprint over in two weeks, with these PRs merged.”

I liked what I did previously at Linode, where I was pulling together an ecosystem of tools, and I guess I wanted to relive that experience a bit with the learning of Kubernetes behind me. And there was a strong use and need for that kind of tooling at Equinix Metal.

That makes sense. What about you, David? What attracted you to Equinix Metal?

It was all one huge misunderstanding, and I’m surprised that I’m still here.

So it’s basically the opposite of me, right? [laughs]

I thought I was joining the Metallica fan club, and now I’m writing code and doing dev rel for a bare metal cloud company. No, I think I’ve found an interesting career, and I think it’s because I’ve always worked directly with bare metal for the last 20 years - you know, there was no cloud back in 2001 when I got my first role… I worked with bare metal, I had to drive on-site to fix the bare metal, I did a cloud migration ten years later, but I always ended up back at bare metal.

So when the opportunity came around to work for a cloud company that was allowing me to use an API called “Get a physical server in a rack, with networking, with GPUs, with CPUs, with RAM, and no other noisy neighbors, no virtual machines” - I mean, it just seemed like magic. And the team at Equinix Metal is just phenomenal, all the people that are there. So it was a combination of my background of appreciating and preferring working with the metal, but also just the team that Mark Coleman and Tom [unintelligible 00:06:46.07] were putting together.

I think that makes a lot of sense, because you’re right, that’s one of the things which attracted me to Packet at the time - you could get those really amazing machines, really amazing hosts which you couldn’t get anywhere else, via an API call. It was as simple as that; being able to make an API call and get a bare metal machine was new. We could get compute via API calls, as popularized by EC2 and AWS, but not bare metal machines. I think they came later to AWS. And even now, I’m not sure how they work; I think it’s more complicated than if you just went to Equinix Metal.

The other thing is the focus on networking. I could appreciate the focus that Packet at the time was putting on actual hardware networking, layer 2, layer 3 stuff - that is very, very rare, by the way. And the Equinix acquisition makes sense. Equinix - isn’t it all about networking, data centers? That’s how I know Equinix.

So what I’m wondering is, now that Packet is with Equinix, how is it different? Were you there before, or did you know Packet before? What changed since Equinix, do you know?

[07:57] I came along after the announcement was out there. What hadn’t changed yet was the name, Packet; it hadn’t yet changed to Equinix Metal, just as a division, as a product of Equinix. What I’ve noticed is that going from an org of about 200 people to an org of about 10,000 people, there’s a lot more going on, and sometimes there’s overlapping products and overlapping teams… So finding the right people out there to help contribute to what you’re doing, and getting input from other teams and other customers - I’ve noticed that that’s really played out… A lot of the products that we’ve been delivering in the last year and a half - when I say “we”, I mean like the broader Equinix Metal - have been delivered because that’s what the Equinix customers are looking for, and it’s what the Equinix Metal customers, who are also Equinix customers, are looking for. They have services in racks, and now they wanna be able to bridge those services together.

Okay. Do you remember much about Packet, David, before Equinix Metal?

I had used it a fair number of times. I thought it was a really cool service. It had some limitations around facility availability. I think Packet before the Equinix acquisition was only available in 6-7 facilities… And when you look at it, that’s a great acquisition, because Equinix literally are the backbone of the internet, to a certain degree. They’ve got over 60 sites around the world. In fact, I think the number is larger than that. And you know, direct fiber lines into AWS, and Azure, and Google Cloud, and all of these other providers. And it’s been able to take what Packet did really well, which is the ability to just stick an API in front of a bare metal machine, and expand that to those extra facilities all around the world, and just make it so that you can build low-latency, ridiculous services anywhere, using any [unintelligible 00:09:46.08] for instance… Something you can’t do on other cloud providers, but possible through Equinix. I think it’s just awesome. It was such a great acquisition. It was a really exciting time to be there.

So what you’ve told me reminded me of what bare metal servers used to be like before Packet… And I don’t think people realize just how big Equinix Metal now is. So, before - did anyone use ServerBeach or ServerCentral?

IBM, they acquired some – SoftLayer. That’s it, that’s the company; do you remember SoftLayer? Do you remember Rackspace, when you used to get servers from those companies? ServerBeach even precedes them… But there was SoftLayer, and there was also OVH in Europe; they were a very big bare metal hosting provider… Online.net, which I use a fair amount even today… And there are a few others. Ah, Leaseweb - that’s another big one. So that’s what getting a bare metal machine used to be like before Packet.

Packet came along and you thought “Well, this is neat. It’s small, it’s interesting, it’s a crazy good idea, very simply executed…” And I think since becoming Equinix Metal, Packet grew a lot. And I don’t think people can appreciate just how big Equinix Metal actually is.

So you mentioned 60 locations… What about the instances? What about the networks? Did anything change regarding the services that it offers?

Well, you’re asking what’s changed from the Packet days to the Equinix Metal days, and I think there are a number of things worth highlighting - we’re moving our hardware from our older, what we call legacy facilities that Packet owned, into these massive IBXes that Equinix has all around the world. By doing so, we are able to take advantage of their network capacity across global sites, and really, the capacity is not even the amazing thing; it’s the [unintelligible 00:11:41.04] table. You know, Equinix having so many PoPs around the world, when you make a request, what you wanna see is something efficient, that’s going to get you the minimum amount of hops to the destination that you need… And Equinix has the infrastructure. They have that [unintelligible 00:11:53.29] table and they have the ability to make sure that your request is the best that it can be.

[12:00] So by leveraging their backbone, using their network, moving our hardware into their facilities, we’re getting access to all of that. And Equinix have direct partnerships with all of the major cloud providers - every workload on the internet is probably on AWS, GCP, Azure, maybe some Equinix Metal, maybe some DigitalOcean - and Equinix has all those connections. So by running your workloads on Equinix [unintelligible 00:12:22.20] Equinix Metal, when you’ve got to speak to other services and other clouds, you really are getting the most efficient route for that traffic… And I think that’s a really important aspect of it.

And of course, there’s the hardware component of it as well, something that Marques kind of touched on - there’s a higher price on the actual server itself taking up the physical space within the business exchange… So we have had to let go of some of those smaller instances… But you know, that’s a trade-off we’re making just now, and hopefully something we can address in the future.

I think that’s a great point, and I think this is most likely the best outcome, or at least one of the best outcomes, because you still keep the simplicity of provisioning these instances, of defining your network, but you also benefit from the scale of Equinix. And that is a great combination. So I don’t think people realize just how much of the internet Equinix actually runs - the switches, the routers, the physical cables… There’s so much of it worldwide.

And why would people, right?

Exactly.

Until I joined Equinix, I had no idea who Equinix was, and then I’m in the door three months and just overwhelmed by how much Equinix really is there across the internet. Really, really cool.

Exactly.

You kind of don’t wanna know that, right? …when there’s some sort of status page outage, something wrong in an Equinix facility, it is a big deal, so it’s good to not hear about those things, to have them not actually happen.

Yeah. One thing which hasn’t changed, by the way - or at least I think hasn’t changed - is that you still forgot to build Kubernetes. Do you remember that post from Zach? That was a great one. “Sorry, we forgot to build the Kubernetes platform”, or whatever the title was, but that was a great one. I’ll link it in the show notes. So you still don’t have a managed Kubernetes service… Why is that, David? Why do you think that is?

Yeah, that’s a great question. I think Equinix Metal is probably one of the few clouds – in fact, probably the one cloud that doesn’t have a managed Kubernetes these days. I think what we’re seeing is that Kubernetes is now becoming this ubiquitous API for deploying applications, and all cloud providers should make that easier for developers… But not Equinix Metal.

And there are some really good reasons for that. One is that Equinix Metal have no control over the hardware when it’s provisioned to you. So you come along, you use the API or any of the providers and say “Here, I want some bare metal.” Other than us stamping a network configuration onto it, that machine is yours. We can’t get onto it, we can’t modify it, we can’t change it… So actually providing you a managed Kubernetes experience is something that’s really difficult to offer, and we wanna give people the flexibility and the power – that’s why people come to bare metal as well, I think - you want something that you’re not getting from virtualized hardware, and it comes down to just either pure CPU performance, networking performance, access to GPUs…

People want flexibility and the power of bare metal, which is why you come to [unintelligible 00:15:15.24] All of that is really important. You don’t want Equinix Metal’s provisioning steps or anything that we are doing to get in the way of that. The machine is, for all intents and purposes, your machine.

I think you want the purity, right? You want the purity of hardware. So keep that experience as pure as possible, without adding any of your daemons, or any of your agents, or whatever you wanna call them… So keep it as pure and pristine as possible from what you would get if you were to physically provision it in a rack… But then make it easy for people to add whatever they want in the best possible way, including Kubernetes. So what is the best way of providing Kubernetes on top of bare metal? That’s where you come in; specifically you, David. Or at least that’s my understanding.

[16:01] That’s where I come in specifically?

Yeah. Like, you with the Rawkode Academy, right? What is the best way that you can get the neatest Kubernetes on top of the bare metal infrastructure, as well as many other things? Because let’s be honest, Kubernetes is an amazing technology, but it’s just that. It’s just one way of orchestrating containers. And a few other things - nice API, the ubiquity across all the cloud providers… But it’s just software. And maybe five years from now there’ll be something else even greater than Kubernetes. As difficult as that is to imagine, I’m convinced that’s going to be the case.

So rather than pinning yourself to a specific technology, you’re keeping the two separate, but still allowing users to mix it nicely, so they get the best of both worlds, without basically having the abstractions leak into one another, right? Because that’s what tends to happen - CNIs, CSIs…

I think that’s a really interesting point that you mentioned about “Will we be running our workloads on Kubernetes in five years?” And I wanna kind of come back to that in a different tack. What is Kubernetes? It’s a distributed system for running distributed systems. It is a distributed system made of multiple components. We actually have - and this is where bare metal ties it together as well - you can build your own Kubernetes cluster however you want, through all these different interfaces that are available.

Of course, the Kubernetes project only makes certain things flexible right now, which is the CSI, the CRI and the CNI. So you have free rein to pick whatever plugins you want there. But I see that evolving over the next five years. I don’t think we’ll be running Kubernetes in five years as what Kubernetes looks like today, but more bespoke implementations, particularly the scheduler and the kube-proxy. I think these are components that people are having a lot of hurdles with, especially at scale, and especially for HPC. Nobody is running the standard Kube scheduler for high-performance workloads, especially on bare metal. You have to use your own custom implementations to make that work.

So I think we’ll still always have the Kubernetes API in five years, I just think that the underlying components of that Kubernetes cluster won’t look like a Kubernetes cluster today. And with regards to how you get Kubernetes on bare metal - we’re seeing convergence on Kubeadm. I think being able just to run Kubeadm on a machine is the way to go. We’re seeing other tools, like kcs [unintelligible 00:18:15.23] all offer that same release sample initialization onboarding component. Kubeadm has gone into that for Kubernetes without removing some of the constraints that we have in those other configurations… But yeah, it’s a really interesting space, and I’m really excited for what’s gonna happen in the next couple of years.

Break: [18:35]

So Kubernetes on bare metal sounds great… I’m not a Kubernetes expert myself, but I have been running it in production for a couple of years. I know most of the components fairly well. I know where to look when there are problems, I know how to fix many things - not all things - but still, Kubernetes on bare metal sounds daunting to me. What are you doing, David, to make it simpler? Because I know that this is a space that you are passionate about and working towards.

People that want access to the [unintelligible 00:20:08.10] are typically power users, and they will have their own custom configurations for Kubernetes - that’s one of the reasons we don’t offer a managed service. But we still wanna be able to make bare metal Kubernetes a little bit easier for people that are just interested in the performance. And I think Marques would be a great person to discuss some of those options that we have available.

As David was saying, Kubeadm - that seems to be the most popular way to deploy Kubernetes… And what you can do is layer things on top of that experience and express more opinions. We offer a bunch of Terraform modules that are essentially proof-of-concept integrations, where you can run Terraform, define a few variables, give us the token for your account, terraform apply, and then - depending on the size of the nodes you’re provisioning, and depending on which integration - within a few minutes you’ll have a cluster.

These take advantage of Kubeadm underneath, and we have others that take advantage of k3s, we have others that take advantage of Anthos, and the list goes on. There are OpenShift integrations… These are all Terraform modules, and there’s a – Pulumi is another example, where Pulumi takes advantage of Terraform providers… I believe David has been working on some Pulumi integrations that provision a Kubernetes cluster… So there are lots of ways to get a cluster easily on metal, but the experience is generally going to wanna be tailored to what you’re doing. So we don’t have this managed one-size-fits-all solution; what we tend to find is that our customers are more varied, and have more precise needs.
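As a rough idea of what one of those Pulumi-style integrations could look like - a minimal sketch, assuming the Pulumi Equinix Metal package of that era; project ID, plan, metro and OS values are placeholders, and the field names follow the Terraform provider it wraps, so check the current provider docs:

```typescript
// A minimal sketch of one of those Pulumi-style integrations: provision a single
// bare metal node and bootstrap Kubernetes (k3s here) via user data, so no SSH
// step is needed. The package name, resource fields, plan, metro and OS slugs
// are assumptions based on the Equinix Metal provider of that era -- verify
// against the current provider docs before using.
import * as metal from "@pulumi/equinix-metal";

const projectId = "YOUR-PROJECT-UUID"; // placeholder

const node = new metal.Device("k8s-node", {
    projectId: projectId,
    hostname: "k8s-node-1",
    plan: "c3.small.x86",            // assumed instance type name
    metro: "am",                     // assumed metro code (Amsterdam)
    operatingSystem: "ubuntu_20_04",
    billingCycle: "hourly",
    // Runs on first boot; installs a single-node k3s cluster.
    userData: `#!/bin/bash
curl -sfL https://get.k3s.io | sh -`,
});

// Public IPv4 of the node, so kubectl or DNS can be pointed at it.
export const nodeIp = node.accessPublicIpv4;
```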

One of the patterns that we’re trying to promote is the Cluster API way of deploying, because Cluster API is an opinionated way to deploy Kubernetes. It is a Kubernetes resource, it takes some set inputs, and a few minutes later you have a Kubernetes cluster that is managed from another Kubernetes cluster… As David was pointing out with the CNI and the CCM - Kubernetes has been taking on this responsibility of managing infrastructure, and another piece of that infrastructure is Kubernetes clusters themselves. So it’s turtles all the way down.

I think where Cluster API fits in - I give a lot of credit to that project - is that if you were to provision a Kubernetes cluster on bare metal with Equinix Metal through [unintelligible 00:22:28.05] through Terraform, through whatever means (even Ansible), you’re still solely responsible for operating that control plane. No one else is taking care of that. You have to nurture it, you have to feed it, you have to tuck it into bed and put it to sleep. You really need to take care of the Kubernetes control plane; it’s a very temperamental bit of software.

But Cluster API actually brings in that reconciliation from Kubernetes, to monitor and help nurture that control plane for you. It does remediation of control plane nodes, it can do rolling updates of cluster nodes - it can spin up new ones and cut out old ones when things are unhealthy; you can take a node out of the pool, you can add it back in… There are some options right now called cluster resource sets, which allow you to automate deployment [unintelligible 00:23:07.04]

So Cluster API is literally single-handedly trying to make this experience easier for people that don’t necessarily know how to operate Kubernetes… Meaning you can use a managed service, like GKE or EKS, to run Cluster API, but provision a bare metal cluster on Equinix Metal… Get all that performance and flexibility, but trust the Cluster API remediation to make sure your cluster is hopefully always healthy.
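For a rough illustration of the declarative objects being described - the API versions and the Equinix Metal infrastructure kind (PacketCluster) are recalled from the Cluster API provider of that time and should be verified; plain YAML applied with kubectl or clusterctl is the usual route, and Pulumi is used here only to keep the examples in one language:

```typescript
// Rough illustration of the objects a Cluster API management cluster reconciles.
// API versions, the Equinix Metal infrastructure kind (PacketCluster) and field
// names are recalled from memory and should be verified against the Cluster API
// and provider docs.
import * as k8s from "@pulumi/kubernetes";

// The workload cluster, declared as a resource *inside the management cluster*.
const cluster = new k8s.apiextensions.CustomResource("changelog-cluster", {
    apiVersion: "cluster.x-k8s.io/v1beta1",
    kind: "Cluster",
    metadata: { name: "changelog", namespace: "default" },
    spec: {
        clusterNetwork: { pods: { cidrBlocks: ["192.168.0.0/16"] } },
        // Provider-specific object that knows how to talk to the Equinix Metal API.
        infrastructureRef: {
            apiVersion: "infrastructure.cluster.x-k8s.io/v1beta1",
            kind: "PacketCluster",
            name: "changelog",
        },
        // Kubeadm-based control plane that Cluster API remediates for you.
        controlPlaneRef: {
            apiVersion: "controlplane.cluster.x-k8s.io/v1beta1",
            kind: "KubeadmControlPlane",
            name: "changelog-control-plane",
        },
    },
});
```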

So that’s really interesting… If I’m hearing it correctly, you can have a managed Kubernetes cluster to manage other clusters. Is that right? Is that what you’re saying?

Exactly what we’re saying.

[23:43] Okay, that’s very interesting. So how would you be able to visualize all the clusters that you’re running? Because as we know, two leads to four, and four leads to eight, and so on and so forth; that’s the way it just goes. So how can you keep that under control if you have one Kubernetes cluster, or even multiple Kubernetes clusters, which manage other clusters? How do you do that? It’s an interesting problem.

It’s an interesting problem that, in a sense, isn’t our problem. There’s a lot of tools out there, a lot of organizations that are trying to figure out that space. I mentioned Anthos as one, so if you have your GKE clusters and you want to run something on bare metal alongside that, you can use those integrations, so that you can manage your cluster that resides on Equinix Metal servers from within the GKE control panel. On cloud.google.com you’re seeing our Kubernetes nodes. Rancher is another one of those tools where you can manage multiple clusters, and we have Rancher integrations…

We mentioned Kubeadm and k3s - they’re yet another installer. You can take advantage of Docker Machine drivers to deploy their nodes. There’s a lot of different solutions out there in the cloud-native ecosystem.

The one that I probably like the most is just using Flux or Argo, because those both have UIs. The Flux UI is quite early right now, and the Argo one is much more sophisticated… But because Cluster API is just declarative manifests, all your cluster definitions live in a Git repository and are applied in a GitOps fashion. And then you can just take advantage of the Argo UI to see all of your clusters. Those provide labels, whatever you need, and they’re just there. And the same with the Flux UI. And I think we’ll see more tooling appear in this space as well.

Because the Kubernetes Cluster API project is using the [unintelligible 00:25:24.12] on all of those objects within the control plane cluster – no, the management cluster they call it… The Argo UI can also show you when you’ve got nodes that are unhealthy through a nice visual indicator.

You can use tools like Rancher, like Marques said, or you can use Argo. Once you’re in the Kubernetes API, you’ve got this unlimited flexibility, which is both a good thing and a curse. There’ll be dragons.
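To make the GitOps piece described above concrete, a single Argo CD Application can point at the Git repository holding the cluster definitions and keep the management cluster in sync - a minimal sketch, with the repo URL, path and namespaces as placeholders:

```typescript
// A single Argo CD Application watching a Git repository of Cluster API manifests,
// so the management cluster stays in sync with Git and every workload cluster
// shows up in the Argo UI. The repo URL, path and namespaces are placeholders.
import * as k8s from "@pulumi/kubernetes";

const clustersApp = new k8s.apiextensions.CustomResource("workload-clusters", {
    apiVersion: "argoproj.io/v1alpha1",
    kind: "Application",
    metadata: { name: "workload-clusters", namespace: "argocd" },
    spec: {
        project: "default",
        source: {
            repoURL: "https://github.com/example/cluster-definitions", // placeholder
            targetRevision: "main",
            path: "clusters/", // directory of Cluster API manifests
        },
        destination: {
            server: "https://kubernetes.default.svc", // the management cluster itself
            namespace: "default",
        },
        // GitOps: Git is the source of truth, drift is pruned and self-healed.
        syncPolicy: { automated: { prune: true, selfHeal: true } },
    },
});
```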

I really like that idea. I can see how that would work. So I used ArgoCD first… I think it was a few months back, with – I think it was episode 3 or 4. I can’t remember. The one with Lars, where we – it was like the follow-up to “Why Kubernetes?” I think it was episode 5. And we looked at what it would look like for Lars’ Noted app, which is a Phoenix Elixir app, to run on Kubernetes from scratch. And in that context we used ArgoCD, and it was really nice. We’re still not using it for Changelog.com, but we will, very soon, I’m sure of that.

I’m wondering if you have an example, David, that you can share, of what it would look like - a management Kubernetes cluster, which is managed by ArgoCD, which in turn manages other Kubernetes clusters. Do you have such an example?

Not specifically…

Not yet? Tomorrow. Yes, okay.

If you go to my YouTube channel, the Rawkode Academy, there are videos of me deploying Kubernetes clusters on Equinix Metal with the Cluster API, in a declarative, GitOps fashion. I don’t specifically load up the Argo UI, but for you, I will spend some time this week and we will make this happen.

Yes, please. I think that’ll be amazing to see in the nodes, to see what it looks like. I’m a visual person, among many other things, but I just understand things better visually. Sound is great and audio is great, but being able to see it in one picture - I think it just lets you imagine things differently. Or at least that’s what it’s like for me. So I think it would help to be able to see what that looks like. Because until I’ve seen the ArgoCD and how well the UI works – I mean, there are some screenshots, and sure, that would work, but what does it mean for this specific use case? I just couldn’t visualize that. So having this I think would be very, very useful. I didn’t even know that Flux is working on a UI, by the way… Flux CD.

Yes, they have an alpha UI available right now. You can install it to your clusters and it works. [unintelligible 00:27:44.02] I subscribed to the project and I’m keeping an eye on it, because I really do like the simplicity of the Flux approach to GitOps. But the Argo UI is hard to pass up, because it’s just – as you said, for that visual representation of what is happening within a cluster or multiple clusters… It’s spot on. So hopefully, Flux can catch up with that, too.

[28:03] Yeah, that’s right. I haven’t tried Flux CD, and one of the main reasons why I said “No, I think I’ll go with Argo” is because of that UI, I’ll be honest with you. Visually, it just makes so much sense. But now that Flux CD has a UI - interesting. Interesting. I think I need to speak to someone about that. But thank you, that was a great tip, David. Thank you very much.

So what is this Rawkode Academy? I think we might have mentioned it once… We definitely mentioned it just a few minutes ago. What is it?

Yeah, I really need to work on my marketing skills. I should be saying it every 60 seconds.

No, no… Not on this show. [laughs]

Episode brought to you by…

Exactly… No. Go on…

So yeah - in 2019 I spoke at 42 conferences. I loved being out there, meeting people –

Sorry, sorry - did you say 42, four, two?

42, yes. 42 conferences.

Oh, my goodness me.

Because I love going out and meeting people and talking about problems, and technology, and how technology can help them… And I lost that with Covid. So when I joined Equinix Metal, I needed to kind of find a new outlet for sharing knowledge with other people, and I started a YouTube channel. So I’ve been streaming now for about 13 months, and the Rawkode Academy is what I’ve got to show for it. It’s a livestream-focused technology, cloud-native and Kubernetes learning experience, all broken down into 90-minute livestreams.

Fortunately, it’s not me doing most of the knowledge sharing. I’m smarter than that. I get really good maintainers and founders from cloud-native open source projects to come on. They show me their project, I ask them all the questions, we break it, we fix it, and we wash, rinse and repeat as often as possible.

There’s usually 2-3 episodes every week, looking at all these amazing cloud-native projects that we have in the landscape. And the landscape is so vast. I’ll probably never run out of projects to demo.

That sounds like a great idea to me, and I especially like how – because you can’t go to conferences anymore, since Covid, you did this. That’s a great reason to do it. Okay… Obviously, it’s not that. It’s all the interactions and all the stuff that you can’t share, or are limited in sharing, so you had to find another outlet for that, and this is it.

Yeah. You’ve gotta try and work with people and help people. This is really a difficult time; technology is constantly evolving, and it can feel really difficult to keep up. And I think we should just encourage more people to share their stories through articles, through podcasts, through livestreams, because we just need all the help we can get. This stuff is hard. It’s really hard.

I’m glad that it’s not just me thinking that. As fun as it is, it’s damn hard… And sometimes, the fun comes from the fact that it’s hard. It’s a challenge, and we like a challenge, and this is it.

So Marques, which is your favorite Rawkode Academy video or livestream that you watched?

I’m not sure which channel… I know David’s face is prominently featured. “Klustered” is by far my favorite format that he has. And my favorite episode is Thomas Stromberg and Kris Nova going head to head, trying to wreck each other’s clusters. That’s a great watch, I recommend it.

That is the biggest, best serendipity I ever had when [unintelligible 00:31:01.09] Klustered. I’ll tell you what Klustered is if you don’t mind, and then…

…I’ll tell you about that episode. So people are saying operating Kubernetes is hard, right? Nobody thinks that stuff is easy. We’ve got the [unintelligible 00:31:11.16] all these certifications from the Linux Foundation, that people want to go and get, and tell people that they know how to do this stuff. But the learning resources don’t really go deep enough, and Klustered wanted to solve that.

I had this ridiculous idea of getting some of my Kubernetes friends to purposely go and smash, bash and crash some Kubernetes clusters. And I thought “I’ll just go into livestream and see if I can fix it, and have some people join me and help me along the way.” And we’re now over 20 episodes in, over 50 broken clusters, most of them fixed, fortunately… And it just provides a really interesting way to see how the control plane works, how to debug it, how to fix it, what to do when things go wrong. Again, we don’t have these resources available online. You really learn the hard way, and that can be challenging.

[31:58] What’s really special about that episode that Marques mentioned is that Kris Nova is a kernel hacker, and one of the earliest Kubernetes contributors there is. Thomas Stromberg worked at Google for 15 years, being involved in forensic analysis of exploits, break-ins etc. on physical hardware. So by sheer luck, putting them together, we got this episode where Kris uses LD_PRELOAD, kernel modules, eBPF to cover up all of the tracks, all of the breaks on this machine. No normal person would have ever been able to fix this cluster. But Thomas came on, and with that forensic analysis knowledge from Google, he used something called [unintelligible 00:32:32.16], which is apparently a tool that can give you a snapshot of all the changes on a file system within X amount of days or hours, and said “I’m just gonna leave that there in case I need it”, and then went on to debug the cluster. Having this wonderful pot of gold at the side, with all of the answers [unintelligible 00:32:45.28]

So he tried to do it the hard way, by doing the work to see what’s wrong, and debugging… All the answers were over there, just waiting for him. And it was just a phenomenal episode. So many tips, tricks, and things to learn from it.

I’m definitely going to watch that. That sounds like an interesting one. Thank you very much, Marques. I think I would like to have one more episode to watch, so David, which is your favorite Klustered or Rawkode Academy episode, which is not the one that Marques mentioned?

Damn, that’s a hard question. So there is a really early episode, and I think I like it most because of the technology perspective. It was with the team from MayaData, who were working on a CSI driver called Mayastor, written in Rust, using the [unintelligible 00:33:28.12] These are all really cutting-edge technologies. And the demo was fine, but the real awesome part of it was just their CTO talking about the storage space and what storage is going to look like over the next couple of years. And I think it just stuck with me this entire time; it’s just one of those great episodes. Getting knowledge from someone with so much experience, that otherwise we would not have access to. So I really loved that episode as well.

Okay. Do you remember by any chance which one it was?

It’s “An introduction to OpenEBS.”

Who was the CTO for [unintelligible 00:34:02.04] at the time, do you remember?

The CTO was Jeffrey Molanus.

Alright. Okay. So not the person I have in mind. Episode 14, “Cloud-native chaos engineering”, the one with Uma and Karthik; Uma was definitely working on Mayastor before co-founding Chaos Native.

Yeah, that whole team was on the OpenEBS project, working on Mayastor beforehand. The [unintelligible 00:34:21.19] a spin-off from the test suite that they wrote for Mayastor and OpenEBS, which I think is really, really cool.

That’s right. It is really cool. Okay. Can you say the name again? I forgot it.

Jeffrey Molanus.

Jeffrey Molanus, that’s him. Okay.

When you’re running through all of these episodes - the format has shifted from the beginning to what he’s currently producing. The earlier episodes had individuals fixing multiple clusters, and one of the earlier ones had my manager, [unintelligible 00:34:49.21] just go on there and fix cluster after cluster after cluster… And these clusters didn’t have one or two problems, they had layers of destruction. So I’m always impressed just to know that my manager, the guy who I tell when I have to take a day off, is able to fix all these clusters in a phenomenal way. What makes that possible, I think, is just having that solutions architect background, and working with Kubernetes and with clusters in that way.

That’s interesting.

Yeah, I think the breaks on the clusters have evolved as well with the format. We started off with just two people on the stream, trying to fix a bunch of clusters. The breaks were, you know, someone stopped the scheduler, or someone broke the config. Now, 20 episodes later, we have [unintelligible 00:35:36.04] where you’ve got Container Solutions, and Red Hat, and Talos, and DigitalOcean, all breaking these clusters and handing them over to the other team and going “Good luck.” And it’s become so fun and joyous and competitive at the same time, and the breaks are getting ridiculous. People are now modifying the Go code for the kubelet, recompiling it, publishing an image, and then shipping it to the cluster. The creativity in the way that people approach this now is just evolving so quickly. It’s just so much fun to watch.

[36:05] So what I’m thinking is you should rename Rawkode Academy to “Break my Kubernetes” or “Fix my Kubernetes.” You know, like “Pimp my ride.” “Pimp my Kubernetes”, something like that, I don’t know… But this is a great idea, because I think there’s a lot of good stuff coming out of this which is unexpected, and it’s almost like a thing of its own, where – “This sounds great.” Imagine how small the problem is that we’re experiencing right now in Changelog.com… And that’s, by the way, how this interview started… David asking about doing some Kubernetes debugging. Well, guess what - we have a problem in our [unintelligible 00:36:40.22] cluster, which I would love us to be able to debug… And I think a follow-up episode is in order, because there’s nothing broken; but still, it just goes to show the complexity that goes into these things and you wouldn’t even know. It’s almost like every problem is unique.

You know that expression about distributed systems - how when they’re happy, they’re all happy the same way; but when they’re broken, they’re broken in individual ways, in unique ways. And I think a Kubernetes cluster is exactly like that. Every single one is different. Which makes me wonder - what hope do we have? If all our Kubernetes clusters are broken in unique ways, in weird and wonderful ways, what hope do we have for running them efficiently? What do you think, David? Would you agree with that? It surely can’t be that dire.

Unfortunately, I think you may be correct. Kubernetes is a distributed system with infinite flexibility to swap out components like the container runtime. What I’ve seen over the many episodes is that the symptoms you see from one break to another can be completely different, and the break actually turns out to be the same. So you really have no idea, when you’re looking at the symptoms from the cluster, what is actually going on… And I think that’s why we’re seeing this really strong push for observability these days. It’s the hottest topic, we’re getting more and more talks about it at KubeCon, and it’s because people have realized that we need better; we need to monitor these systems better.

Yeah, that makes a lot of sense. Okay. So we talked a lot about Kubernetes, and this is interesting, because this was meant to be about bare metal infrastructure, real networking, API, stuff like that. But it just goes to show that it’s everywhere… And I’m wondering if we are getting Kubernetes everywhere, or Kubernetes just really fits so many situations and so many places, and it just makes things easier, better? Easier to reason about, I don’t know… Because what do you do if you have a thousand servers? How do you manage the workloads on them? I don’t know anything better than Kubernetes. I mean, I’m sure there are things better than it, but I think many people realize, as broken as it is, or as complicated as it is - what’s better than it? I don’t know… What do you think?

We didn’t get here by accident. We started with our System V configurations in our scattered user local [unintelligible 00:38:47.29] config files, and we moved towards containers because it helped to keep all of the system components common, and reduced the variability of those containers. So we needed a way to manage all of our containers, and Kubernetes became the common solution for that. I think the real big gain in Kubernetes that we didn’t have in all those previous things - we had too much variability, we had too much interaction between components. “Why isn’t it running correctly? Oh, somebody’s running Apache on the same port.” That’s probably unreasonable. But those are the kinds of problems that you had. And perhaps it’s still possible to do that in Kubernetes now, but it’s all stated in a common way. And having this stateful declaration of all of your resources in one place makes debugging a bit easier. It makes it easier to reason about what’s running at any given time, and what’s being exposed at any given time.

Kubernetes is better than where we were before, but it’s also not, in a way, because we still have all that same underlying architecture, that same underlying OS configuration that can get in our way.

[39:54] And [unintelligible 00:39:52.24] If we go back to applications ten years ago, we were writing monolithic applications that we scaled horizontally by just snapshotting the image and throwing it out. But those monolithic applications became exceedingly hard for large development teams to be able to cooperate and deliver and maintain any sort of velocity that kept a competitive edge. And as wise as we are as technologists, we picked microservices as a way to combat that, and push that complexity from the developers down to the operations stack… And Kubernetes is what we’re stuck with now because of that, because we now need to be able to horizontally scale a wide variety of microservices written in different languages, deployed on containers. That’s the trade-off we’ve made as developers to be able to move quicker, deploy faster, and keep our customer happy as quickly as possible, [unintelligible 00:40:40.04] feedback loops. And that operational complexity is just the outcome of it.

Even though we do have a monolith at Changelog.com, we’re still using Kubernetes, because it handles a lot of complexity that we would otherwise need to handle in other places, and it just hides it. So for example, managing DNS is now a declarative thing that happens in Kubernetes. Not all the records, and that’s another problem; external-dns is not as mature as some of us, including myself, would like. For example, it doesn’t manage IPv6 records, and multiple IPv4 addresses don’t work very well. So there are a couple of limitations to external-dns as a thing that you run in Kubernetes. But the way it composes is really nice.

So you have these baseline components, that’s what I call them… But one component that works really well, which runs right alongside them, is cert-manager. So we manage certificates using cert-manager; it works fairly well, it manages all our certificates. We have about ten domains; eight, nine, ten - somewhere around there. And not only that, but then within Kubernetes we run something which keeps the certificate that cert-manager manages synchronized with Fastly, which is our CDN. And all that complexity lives in a single Kubernetes cluster, including running the Changelog app… Everything is declarative, so even if you have a monolith, you may consider Kubernetes, because of all the other things that Kubernetes could manage for you, not just the app itself; there are all the other concerns - CI/CD. Guess what - Argo CD, Flux CD, Jenkins X… There are so many CI/CD systems that you could pick, and it works fairly well, I think.
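As a sketch of the declarative certificate piece described here - a cert-manager Certificate covering several domains; the issuer, namespace and domain list are placeholders, and the Fastly synchronization is a separate custom controller that is not shown:

```typescript
// Sketch of the declarative certificate piece: one cert-manager Certificate that
// covers several domains, is renewed automatically, and lands in a Secret which a
// separate custom controller (not shown) could push to Fastly. Issuer, namespace
// and the domain list are placeholders.
import * as k8s from "@pulumi/kubernetes";

const siteCert = new k8s.apiextensions.CustomResource("changelog-cert", {
    apiVersion: "cert-manager.io/v1",
    kind: "Certificate",
    metadata: { name: "changelog", namespace: "prod" },
    spec: {
        secretName: "changelog-tls", // where the signed certificate is stored
        issuerRef: { name: "letsencrypt", kind: "ClusterIssuer" },
        dnsNames: [
            "changelog.com",      // placeholder list; the real setup
            "ship-it.example",    // covers roughly ten domains
        ],
    },
});
```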

I’m so glad you said that, because I wrote an article in May, and the title of the article was “You may not require Kubernetes, but you need Kubernetes.” And I think it’s because we do get service discovery, we do get DNS, we get reconciliation and we get remediation… All of these things are just built into the control plane. And then there’s the ecosystem. We have controllers of controllers of controllers; the ability of cert-manager, as a controller, to provision TLS certificates, and then another controller to synchronize them to your Fastly CDN… [unintelligible 00:43:39.16] controllers and custom resource definitions in a declarative fashion. So it’s really cool that you’ve got a monolith and you chose to run that in Kubernetes, because you’re taking advantage of this ecosystem, this community and all of this software that is built to make certain applications easier. It applies to most applications.

[44:00] And this is where Marques comes in… So I know that David doesn’t know this, and I know that very few (if any) listeners know this… But me and Marques - we started talking while Marques was at Linode. And at the time, we wanted to manage our Linode infrastructure for Changelog.com more efficiently using Terraform. And Marques was managing a few Terraform modules at the time, and I think he also started working on what would soon become Linode Kubernetes Engine. So it was like the beginnings of that.

Marques since went to Crossplane, by the way. That was a very interesting period, and I was thinking “Oh, hang on… Maybe this Crossplane is worth a look.” I didn’t have the time until recently… I will continue with that. And now Marques is with Equinix Metal. So if you think about it, this is where I stand by what I say, in that Ship It is about the people that make it happen. So we’re having these conversations because Marques, you’ve always been in this technology space. So my question to you, Marques, is - we use Terraform… We stopped using it, by the way. Everything is now running in Kubernetes. I’m thinking of using Crossplane, and I’m wondering, Marques, what else should we be using for the Changelog setup that you have known over the years; you’ve been fairly familiar with it since 2018, I think… So what do you think comes next, based on what David just mentioned?

So you’ve been moving your infrastructure from – a term I first heard from you, I think, which was [unintelligible 00:45:26.13] You’ve been moving from that to some sort of stateful configuration where you can treat your entire deployment as [unintelligible 00:45:34.23] And I think that’s probably come up a few times. You’ve probably hit some walls and just taken advantage of that kill switch and just rebuild… And Terraform was that answer, I think, for a lot of people, and it’s still where a lot of people are. It allows you to just have however many components you need, have each one expressed as a few lines of HCL configuration, destroy the entire environment, reapply the entire environment.

One of the hurdles of that situation though is when things don’t apply cleanly, or you need somebody to actually push that button, and that’s where Crossplane comes in. Crossplane takes advantage of the Kubernetes reconciliation loop to bring these infrastructure components back to life, provision them the first time, sync things up. One thing is in a failed state, another is in a successful state… That failed state is eventually going to turn green. Your deployment is going to succeed, whereas in Terraform you’re generally not gonna have that experience. You might have to destroy the entire environment and bring it back up, and you’re gonna have to probably push that button to reapply it.

So what do we have on the Equinix Metal side that allows you to use Crossplane? We do have a provider, and that provider allows you to deploy devices, [unintelligible 00:46:50.05] and IP addresses. There are many more infrastructure components that we can introduce, but we started with the ones that are most relevant.

There are some other integrations with Crossplane that are useful to consider here, because when you are provisioning something – if we take the Terraform model, you’re provisioning infrastructure and then a lot of folks will rely on SSH-ing into that infrastructure to get it configured the way that they want it to be configured. We don’t have an SSH provider in this Crossplane ecosystem, at least not a fully fleshed out one… So we have to take advantage of user data. And what user data allows you to do is when you’re provisioning a device define all of the scripts that need to run on that machine on first boot, and that takes out all of the variability of SSH, of “Am I going to connect to this machine? Am I going to run the same script multiple times?” You’re going to define with your user data what to run at boot-up. You will not require external access to that machine, because the cloud provider’s API is going to make sure that that script is executed.

[47:55] In our environment, where we have layer two configurations, you cannot SSH into the machine to perform the actions that you want without going through a gateway node, or without going through a serial terminal. So the way that you execute code on those machines or execute scripts on those machines is through user data.

One of the formats that’s popular for configuring your user data is – well, cloud-init is the tool, cloud-config is the format. It’s something like Salt, or Puppet, or Chef, where you have this declarative language to describe all the packages that you need installed, whether or not the system should be updated, describe files that need to be created, services that need to be running… And a common way to approach user data is to just provide a cloud-config file that declares all of that.
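A minimal, illustrative cloud-config passed as user data could look like the following; the packages, file and command are placeholders, and the device fields again assume the Pulumi Equinix Metal package mentioned earlier:

```typescript
// An illustrative cloud-config document, embedded as the userData of a device so
// cloud-init applies it on first boot -- packages, the file and the command are
// placeholders, and the device fields assume the Pulumi Equinix Metal package.
import * as metal from "@pulumi/equinix-metal";

const cloudConfig = `#cloud-config
package_update: true
packages:
  - curl
  - jq
write_files:
  - path: /etc/motd
    content: |
      Provisioned declaratively via user data -- no SSH required.
runcmd:
  - echo "first boot complete" > /var/log/first-boot.log
`;

const worker = new metal.Device("worker", {
    projectId: "YOUR-PROJECT-UUID", // placeholder
    hostname: "worker-1",
    plan: "c3.small.x86",           // assumed plan name
    metro: "am",
    operatingSystem: "ubuntu_20_04",
    billingCycle: "hourly",
    userData: cloudConfig,          // executed by cloud-init at boot
});
```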

So one of the Crossplane providers that I worked on introduces [unintelligible 00:48:44.02] to Crossplane, which you can use in conjunction with the Equinix Metal provider, and you can stagger your deployment, in a way, to say “When this resource is fully configured, take some component of that, tie that into this cloud-config script, and then when that script is ready, use that to deploy this Equinix Metal device.” So you can get these complex compositions, taking advantage of Crossplane’s compositions, and I think that that’s where you’re gonna wanna go with this complex deployment that you have with Changelog.

I’m thinking more along the lines of having very good hardware, knowing exactly what hardware we are getting. That’ll be one thing from the Equinix Metal side. Do you know that we actually run a single-node Kubernetes, because it’s more reliable than multi-node Kubernetes? We’ve had so many issues with a three-node Kubernetes cluster… Since we’ve switched to a single node, everything just works. People wouldn’t think that. It may be the fact that it is a monolithic app, it may be the fact that it is using local storage… Sorry, not local storage. Block storage. And then you can only mount – that’s the CSI limitation, you can only mount that persistent volume to a single (obviously) app instance at a time… That’s something we would like to change. And we just use PostgreSQL.

The amount of issues that we’ve had with three nodes was just embarrassing. You shouldn’t need to have that. And this is, you know, a certified Kubernetes installation, always kept up to date, nothing specific… Volumes not unmounting… All sorts of weird kube-proxy issues. I know, David, you mentioned that component… I’m not so sure about it, based on the amount of problems that we’ve found with it…

Yeah, I think that kube-proxy is one of those first components that’s gonna be swapped out. I think we’re already seeing that from Cilium. I don’t know if you use Cilium as a CNI, but they have a kube-proxy replacement. It uses eBPF to route all the traffic… That’s what I’d go for by default now.

Really? That’s interesting.

Yeah, I remove the Kube-proxy whenever possible.
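For reference, a hedged sketch of what installing Cilium with its kube-proxy replacement enabled could look like; the Helm value names follow the Cilium chart as commonly documented and should be verified for your Cilium version, and the API server host and port are placeholders for your own control plane:

```typescript
// Sketch of installing Cilium with kube-proxy replacement enabled, via Helm.
// Verify the value names against the Cilium docs for your chart version; the
// API server host/port must point at your control plane once kube-proxy is gone.
import * as k8s from "@pulumi/kubernetes";

const cilium = new k8s.helm.v3.Release("cilium", {
    chart: "cilium",
    namespace: "kube-system",
    repositoryOpts: { repo: "https://helm.cilium.io" },
    values: {
        kubeProxyReplacement: "strict", // eBPF takes over service routing
        k8sServiceHost: "10.0.0.10",    // placeholder: API server address
        k8sServicePort: 6443,
    },
});
```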

I’m pretty sure we use Calico, and I wanted to go to Cilium because of that. I need to hit Liz up. I really wanna talk to her about a few things, including this…

Well, I do have experience of doing an online CNI replacement in Kubernetes… [unintelligible 00:50:51.29] we could have a bit of fun with that.

Oh, that’s a good one. Okay… So yes, I’ve just confirmed we’re using Calico. Which version of Calico, you ask… I can hear you asking that. It’s version 3.19.1. So I’m not sure if that’s the latest one, but anyways. So let me describe the sorts of issues that we’re seeing. The tail of HTTP requests is really long. What that means is that between the 95th percentile and the 99th percentile, some HTTP requests to the app, as far as Ingress NGINX is concerned, can take 30 seconds, 60 seconds… And they’re random. So we have a very long tail.

Most requests complete really quickly, but some requests are really slow. There’s nothing on the database side, there’s nothing on the app side, there’s plenty of CPU, plenty of memory… Everything is plenty resource-wise, but what we’re seeing is that some requests which go via Kube-proxy are sometimes slow, inexplicably. So yeah, isn’t that an interesting one?

[51:54] Yeah. I think we can have a lot of fun digging into that and seeing if we can work that one out, for sure.

So that is the follow-up which I have in mind, by the way, and the livestream I think would go really nicely with that. That’s what I’m thinking.

Yeah, I’d love to do that. I think that’d be cool. Let’s do it.

So I just have to set up another one in parallel… And this is to a comment that Marques made earlier - we always set up a new setup for the next year, so that first of all we do a blue/green, so if something goes wrong, we can always go back… We can experiment, so we can try just to improve things in a way that would be difficult to do it in place… And we can also compare. So how does the new setup compare to the old setup? How much faster or how many more errors we have with the new one compared to the old one? And there’s a period of time, typically a week or two weeks, where we shift traffic across, the production traffic, make sure everything holds up with real production traffic, and if we see any errors, we can always go back, because everything is still there for the old setup.

We do this so that we don’t have to do upgrades in place, because we know how all that works… Not very well, by the way. Sometimes you can just run into weird issues and you wonder why you’re the only one having this issue… And who can help you? Well, maybe an expert. And even then, it’s a “maybe”, it’s not a definite. The point being, this stuff is hard, so that’s why we just do another setup, and then we challenge a couple of assumptions… And it worked well so far over the years. We’ve simplified a lot of things that we wouldn’t otherwise, and I think this is going to be the best one yet, 2022. That’s what I’m thinking.

So David, where do you think Equinix Metal would fit in Changelog? Or do you think that Equinix Metal is even a good choice for Changelog.com, considering it’s just a monolith, and it doesn’t need that much power CPU-wise or memory-wise? It’s mostly traffic, but the CDN handles most of it… So the app and the infrastructure see maybe 10% of the traffic, I think.

Yeah, I think where Equinix Metal would come in is if you wanted to take it a bit further and build your own CDN. That is a really great use case, one that takes advantage of the Equinix network, as well as the performance of the metal devices themselves. What I would encourage people to do is to augment their virtualized setups with metal for CPU-intensive tasks, or stream processing, ETL pipelines etc. Even continuous integration - if your development team can get their CI/CD pipeline from five minutes down to one minute by switching [unintelligible 00:54:11.03] that’s probably time well invested, because you’re gonna be shipping faster.

I really like that you mentioned that, because the one thing which I’ve noticed is that whenever you have VMs, virtualized infrastructure, you tend to suffer from noisy neighbors. Weird issues that only happen on VMs. People don’t realize that this stuff is real, and the bigger your setup is, the more costly it is on time… And you keep chasing bugs that are not real. They just happen because of how things are set up, and that’s when bare metal will help, in that you just basically get what you pay for, like, for real.

I don’t think people have ever really dug into what a [unintelligible 00:54:48.06] across the different tenants on the cloud… And all these things add up. Even the [unintelligible 00:54:57.11] There’s contention across all of this, because of the cloud provider’s interest in maximizing the costs and the profits from each of those physical devices. So yes, it’s cheap; you can get a single vCPU, you can get half a gig of RAM and you can go run some workloads on it, but the contention will always be a challenge… And when that becomes a problem and starts to cause you more problems than it’s worth, you can start to look at augmenting and bringing in some metal for hybrid architectures.

That’s a good point.

I’m concerned about your one-node cluster now…

Okay…

Your one node is going up against other nodes. I assume this is all in your VM-managed cluster?

Yes. It’s LKE. We get a single worker node. The control plane - we just don’t have access to it. We use whatever that one node provides; that’s where the workloads actually run… So we have the app, we have Ingress NGINX, basically a couple of pods in total… Actually, let me tell you the exact numbers. It’s 31 pods in total: 12 deployments, 38 replica sets, 2 stateful sets, 6 daemon sets. It’s not a big Kubernetes cluster.
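
For anyone curious how numbers like these are gathered, a few read-only kubectl commands are enough (assuming access to the cluster):

```shell
# Count all pods across namespaces
kubectl get pods --all-namespaces --no-headers | wc -l

# Count each workload kind separately
for kind in deployments replicasets statefulsets daemonsets; do
  printf '%s: ' "$kind"
  kubectl get "$kind" --all-namespaces --no-headers | wc -l
done
```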

[56:04] Now, we back everything up every hour. We can restore everything from backup, and we test this regularly - every three to six months. The last time I ran it, we could restore everything within 27 minutes, and everything’s backed up.
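
The episode doesn't name the backup tool; assuming something Velero-like, an hourly schedule and a restore test would look roughly like this:

```shell
# Hourly backups of the whole cluster (tool choice is an assumption, not
# something stated in the episode)
velero schedule create hourly-backup --schedule "@every 1h"

# List backups, then rehearse a restore from one of them
velero backup get
velero restore create --from-backup hourly-backup-<TIMESTAMP>
```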

Also, everything goes through the CDN. So if the backend - “the origin”, as it’s called in Fastly - is not available, it will serve stale content. Not the dynamic stuff, obviously… But to our users we will still be up, just a bit stale (pun intended). That’s exactly what they will see. So it’s unlikely that we won’t be able to serve content if our origin goes down, even for a few hours.
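
As a hedged illustration (not Changelog's actual configuration), serve-stale behaviour on a CDN like Fastly is typically requested by the origin through standard cache directives:

```shell
# Hypothetical origin response header - the stale-if-error directive tells the
# CDN it may keep serving a cached copy (here, for up to a day) if the origin
# starts erroring:
#   Surrogate-Control: max-age=60, stale-while-revalidate=60, stale-if-error=86400
# Inspect the caching headers any site returns with:
curl -sI https://example.com/ | grep -iE 'cache-control|surrogate'
```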

So you’ve just got 31 pods, and the scale here - you could probably have 31 large VMs running on bare metal, and each of those 31 VMs running its own 31 pods, and then some…

Yeah, it is interesting to imagine how it would fit, and maybe what more could fit. David pointed out - having a CDN that’s taking advantage of more nodes and more networking availability… Do you have any thoughts on what you might do with more CPU and storage?

I don’t think we would need more CPU and storage. I honestly don’t. The app itself is a single instance, because it’s a Phoenix app running on the Erlang VM, so it scales really nicely across the CPUs available. A single machine can serve all the traffic many times over. Like, a hundred times over. It’s extremely efficient, so we don’t have to worry about that side of things. This is why WhatsApp was able to scale the way they did - the Erlang VM, the bare metal infrastructure… It’s the same model in our case. And the CDN picks up most of the slack.

So let’s imagine we were to run a CDN ourselves - I think most of the cost would be bandwidth. And then, would we use the Erlang VM? Maybe… I don’t know. Maybe we’d use something else, like Varnish (I don’t know), to just cache the content and serve it like that. But do you know who has a good article on this? Kurt from Fly.io - building a CDN in about five hours. And I know that Fly.io runs on Equinix Metal as well, which is something I’m going to take a closer look at. I like that relationship, and I can see many things coming together.
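
A minimal sketch of that Varnish idea - not the Fly.io article's code, and with a hypothetical origin host - could look like this:

```shell
# Put Varnish in front of a single origin so cached pages keep being served
# even when the origin is slow or down. Origin host/port are placeholders.
cat > default.vcl <<'EOF'
vcl 4.1;
backend default {
  .host = "origin.example.com";
  .port = "80";
}
EOF

docker run --rm -p 8080:80 \
  -v "$PWD/default.vcl:/etc/varnish/default.vcl:ro" \
  varnish:stable
```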

But these are all great ideas, and I’m wondering - if someone has been listening to this, what is the one key takeaway they should leave with, Marques? What do you think?

You’ve mentioned Fly… So that’s one of the strengths of Equinix Metal - we have a lot of partners available. There are a lot of services already running here… And if they’re not running on Equinix Metal, they’re running on Equinix. So it’s kind of the combined strengths of different organizations.

Earlier I was talking about the Crossplane composition, and maybe that’s what your solution looks like. I want to make sure that I add a - well, it’s not a preface at this point; a suffix… That’s not necessarily the direction you should go. You did mention that you’re doing blue/green deployments, which is excellent to hear… So try it. Try Crossplane, try the Equinix Metal integration with it… You’re going to run into some resources that haven’t been implemented, you’re gonna run into some providers that haven’t been implemented…

So for the Changelog delivery - the whole content system - and for Equinix Metal in general, I think the takeaway is this: if you’re doing something that only requires a handful of pods, and you don’t need a global presence, a lot of CPU, a lot of memory or disk, a VM might be the right place for you; a managed Kubernetes service might be the right place for you. But when you have something large and bespoke - monolithic, or not monolithic; it could be a bunch of microservices that need to be globally distributed - bare metal is something to investigate. And do the same sort of blue/green: try it on bare metal, try it on some managed service, and see how they stack up. I think you’re gonna find the performance metrics heavily in favor of our setup.

I think this is meant to be controversial, this last part, so I’ll make it even more so… I disagree with some of the things you’ve said… The direction is sound, but what I would say is that if you do use bare metal, you tend to have fewer problems just because you’re using bare metal… Especially around latency, especially around performance, especially around things mysteriously failing.

[01:00:11.15] I’ve seen fewer of those failures on bare metal, and more on VMs. And even more on specific cloud providers which I’m not going to name - I’ve used more than 20 over the years. I just like my infrastructure, I like my hardware, I like my networks. And – oh, CCNA. I’ve just remembered that. That was an interesting one. Finding out more about BGP, and RIP, and all the other routing protocols… That was an interesting one. The point being, Equinix Metal, and Equinix, just made me think of that.

The point being, when you use bare metal, your CI tends to run better. You tend to see fewer flakes. Your app is just more responsive. It’s weird and unexpected, but that’s exactly how it behaves once you reach a certain scale. So - worth trying, for sure. It may not work out, but it’s worth trying… Because I think there are many hidden benefits.

The layers of complexity are definitely different. On the Equinix Metal side you have control over the physical host. There’s no virtualization layer, there’s no virtual networking happening on the individual host hardware. All of our virtualization is performed on the network hardware, the same as it would be if you were colocated in a physical space. When you’re dealing with VM providers, there’s virtualization up and down the stack, and there’s sharing going on up and down the stack… So yeah, definitely - if you want to reduce your problem set, try the bare metal approach.

And the way that we deploy that bare metal infrastructure is open source, in a sense. The underlying infrastructure provisioning toolchain that was used at Packet became the Tinkerbell project (Tinkerbell.org), and you can use it to experiment with bare metal in your own home lab - even just with Raspberry Pis - or you can deploy it in your colocated environment.

What’s interesting about Tinkerbell is that it takes some of the benefits we’re seeing in the cloud-native community, in projects like Kubernetes, and brings that same kind of workflow scheduling to bare metal.
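
As a very rough sketch of what a Tinkerbell workflow template looks like - action images and field names vary between releases, so treat this purely as an illustration, not a working template:

```shell
# Rough shape of a Tinkerbell template, recalled from the project's examples;
# the image reference, disk and URL are placeholders.
cat > ubuntu-template.yaml <<'EOF'
version: "0.1"
name: ubuntu_provisioning
global_timeout: 1800
tasks:
  - name: os-installation
    worker: "{{.device_1}}"
    actions:
      - name: stream-image-to-disk
        image: quay.io/tinkerbell-actions/image2disk:v1.0.0
        timeout: 600
        environment:
          DEST_DISK: /dev/sda
          IMG_URL: https://example.com/ubuntu.raw.gz
          COMPRESSED: "true"
EOF
```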

We’ve left all the good stuff to the end, haven’t we, Marques? That’s exactly what happened. But I want to hear from David, because I think he has the best one yet… So David, if someone was to take away one thing from this conversation - the most important thing - what do you think that would be?

Well, I think it’s the – you’ve covered all the good answers. However, I want to bring a different perspective to what you both said there. Bare metal brings infinite flexibility, unrivaled performance, the ability to switch architectures, use ARM devices… I think that’s a really great selling point. We’ve covered the network. All these things are great. And you’re right that running things in VMs gives you kind of opaque problems. Run them on the metal and those go away, because you have full visibility of everything in the stack, which is great. However, you then have to operate bare metal, and I think that’s still a really challenging thing. Teams these days don’t have dedicated ops teams anymore, like they did when we had our own data centers. We went all DevOps, and Agile, and Terraform; we just don’t have that experience.

So you’ve gotta be careful when you’re adopting bare metal. Don’t walk into it lightly, thinking “With a little bit of Linux, this will be okay.” There’s a lot to learn there. Like you said, even BGP - how many people can tell you what BGP is in 2021? Not that many anymore. And why would they? They’ve had convenience at their doorstep for so long.

So bare metal will remove a whole class of problems, like you’ve said, but it brings in different challenges that you need to navigate. Approach wisely.

Isn’t that exactly where things like the Crossplane provider for Equinix Metal, Tinkerbell and Kubernetes come in? Yes, all those things are still present, and you can have access to them, but maybe those higher-level abstractions give you everything you need to define all the things… Not to mention Equinix Metal itself - you have an API for bare metal. Who has that? I mean, more companies do now, and I’m not trying to sell Equinix Metal, but look at the simplicity, how easy it is… You don’t have to click through menus to select the operating system, the data center, a bunch of other things, networking… You just get that - not even via an API; you get it via the Kubernetes API, and I think that’s the really amazing thing. It’s this combination of the low-level and the high-level, maybe removing a lot of the stuff you’d otherwise have in the middle. So that’s the value prop which I would like to try out for Changelog.com. How well does it work in practice? That’s what I’m thinking.
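
As a hedged example of declaring a server through the Kubernetes API - the apiVersion and field names are recalled from the community Crossplane provider's examples and may differ between provider versions:

```shell
# Hypothetical: a bare metal device declared as a Kubernetes resource via the
# Crossplane provider for Equinix Metal. Field names may not match your
# provider version exactly.
kubectl apply -f - <<'EOF'
apiVersion: server.metal.equinix.com/v1alpha2
kind: Device
metadata:
  name: example-device
spec:
  forProvider:
    hostname: crossplane-example
    plan: c3.small.x86
    metro: sv
    operatingSystem: ubuntu_20_04
    billingCycle: hourly
EOF
```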

Yeah. Go spin up an M3, S3 or C3 box, just for ten minutes, run [unintelligible 01:04:39.20] See the cores, see the memory…

It’s all yours.

Yeah, all yours, there to do whatever you need it to do. And then shut it down; it costs you 50 cents. But that’s just not something you’re gonna get elsewhere.
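
That ten-minute experiment, sketched with the metal CLI (flag names from memory of metal-cli; check `metal device create --help`), with a placeholder project ID and device ID:

```shell
# Spin up a box, poke around, tear it down.
metal device create \
  --project-id "$METAL_PROJECT_ID" \
  --hostname ten-minute-test \
  --plan c3.small.x86 \
  --metro sv \
  --operating-system ubuntu_20_04

# ...run your benchmark, look at the cores and memory...

metal device delete --id <DEVICE_ID>
```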

What I’m thinking is “Watch a few Rawkode Academy videos, figure out how to get Kubernetes on bare metal, and then figure out how to recreate our setup.” Which, by the way, was meant to be the promise of Kubernetes - that we declare everything and program against the Kubernetes API… So how do we get Kubernetes on bare metal? Well, David has the answer, in one of his videos, I hope.

Well, Marques and I are constantly working on this experience. I won’t put words in your mouth, Marques, but I personally am not a fan of tools like Terraform and, to a certain degree, Ansible, because they require manual triggers - someone has to initiate the action. What we need is something that can be deployed in a more autonomous fashion, with reconciliation. We can do that through user data and custom data, which is a really cool aspect of the Equinix Metal API. Marques and I have spent a great amount of time on this over the last couple of months, and we will continue to explore how to make it easier through Crossplane and other cluster operators as well. So stay tuned… Good stuff is definitely coming.
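
As one hedged example of that autonomous, no-manual-trigger approach - k3s is only an illustration here, not something prescribed in the episode - a user-data script handed to the machine at create time can bootstrap it on first boot:

```shell
#!/bin/bash
# Hypothetical user-data script, passed at device create time so the machine
# bootstraps itself on first boot with no manual trigger.
set -euo pipefail

# Install a single-node Kubernetes (k3s) as the example workload
curl -sfL https://get.k3s.io | sh -

# From here, a controller running in the cluster (Crossplane, a GitOps
# operator, etc.) can take over reconciliation.
```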

This was a pleasure. I will try the good stuff out, by the way, and hopefully contribute a little bit to it. At the very least, I’ll tell you what doesn’t work. I can guarantee that’s what I’m gonna do - I’ll tell you what doesn’t work… [laughs] For us, for Changelog.com.

Thank you, Marques; thank you, David. This has been a pleasure. Looking forward to next time - and to the follow-up video, David. I have not forgotten.

Thank you for having us.

Thanks for having us, Gerhard.

Changelog

Our transcripts are open source on GitHub. Improvements are welcome. 💚
