Changelog & Friends – Episode #8

Bringing the cloud on prem

with Bryan Cantrill & Steve Tuck


Adam was out when Bryan made his podcast debut here on The Changelog, so we had to get him back on the show along with his co-founder and CEO Steve Tuck to discuss Silicon Valley (the TV show), all things Oxide, homelab possibilities, bringing the power of the cloud on prem, and more.



Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at

Fly.io – The home of — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at and check out the speedrun in their docs.

Typesense – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!

Notes & Links



1 00:00 Let's talk!
2 00:38 Is this Oxide & Friends?
3 01:12 Last time we talked...
4 02:39 Silicon Valley
5 11:12 Oxide sold a box
6 13:54 Awesome design
7 16:55 Oxide's heist team
8 17:39 Is this your rack?
9 18:20 Homelab Oxide edition
10 19:33 Oxide's enthusiast audience
11 23:34 Would you consider "The Homecloud"
12 24:29 Throw us a bone
13 25:58 Give me a simplified Oxide rack
14 28:02 Know what you're saying no to
15 29:53 Far into the future
16 31:03 The 2050 roadmap
17 38:59 What's wrong with the switch?
18 41:15 How different is an Oxide rack?
19 43:34 Powered and provisioned same day
20 47:43 Take the Homelab out of the DC
21 49:38 The first boot
22 51:59 Customer demos
23 54:53 Back to the bootup
24 59:36 How did you know?
25 1:13:11 On-site with a customer
26 1:13:50 On prem vs cloud
27 1:18:50 Where's the secret?
28 1:22:12 We all pod? Nah...
29 1:35:21 Bye friends!
30 1:36:20 Wrapping up




Play the audio to listen along while you enjoy the transcript. 🎧

Okay, so this is our talk show. We accidentally – I think we ganked your guys’s name. I didn’t realize – I think you guys inspired us. This is called Changelog & Friends. This is just our talk show, Bryan. Okay, you guys know that. You were on our interview show, we thought we came up with the name, then we went back to your website, and it’s like “Wait a second, they already have a podcast called this…” So we ganked it. We switched the end to an ampersand so we made it our own…

We use an ampersand…

Oh, crap.

Not on your website you don’t, so…

Well, we’re not called Oxide, so there you go.

That’s true.

Yeah. Yeah, that’s the – yeah, it’s all good.

Anyways… Point being is this will be a lot looser than even our last conversation, Bryan; that’s my point.

It’s gonna feel less interviewy.

Yeah, our last conversation was super-rigid because we were having arguments over which Silicon Valley character I –

If you can believe it, that was rigid for us… [laughter]

Fair. My kids gave me grief for that. It’s like “I can’t believe – dad, you know which Silicon Valley character you are. You’re Gwart.” I’m like “Go to your room.”

Really? Dang… That’s a burn.

It is a burn.

Well, a fun backstory on that… Adam wasn’t there for the show. Adam is actually the Silicon Valley aficionado amongst us.

It’s true.

It was me and Gerhard. And I was gonna just not bring it up. That was my plan. And I should have –

That was a good plan.

And Gerhard brought it out. And then Bryan - also big fan. So you started launching in, and I was sitting there “Oh, no… Me and Gerhard don’t even know the show very well. Bryan’s an expert.”

No, you just were like “Let’s go fishing.” I’m “Great. Let’s go deep sea fishing.” You’re “Why are we on a boat? We’re leaving the bay.” I’m “Oh, we’re going deep sea fishing right now.” You’re “I did not…” Yeah, you guys were not ready to go.

It was awful. It was terrible. In fact, I think we cut a few minutes, because you just chided us. You’re “Come on, guys… We can’t look that bad.”

I was “Well, if you’re gonna ask the question, be ready to roll.”

That’s right.

Yeah, it was fun.

I didn’t ask the question. I was just a victim of having Gerhard –

Right. That is true. Gerhard asked the question.

I should have never invited Gerhard.

Yeah. This is blowback for Jerod… [laughter]

Yes, that’s what we learned.

Well, Adam is here, Bryan is here… Do you want to get out of the way? I mean, I’m sure Adam will bring it up…

It’s gonna be the whole show, Jerod.

The whole show.

The whole show is Silicon Valley.

Steve, are you a fan?

I mean, no… Bryan’s been hazing me for the better part of the last year and a half, because – I got through season four. I had not gotten through season five and six. And so he would fire references, and he’s just like “I can’t work like this.” I can’t… I couldn’t work that way. Like, “Get through the end of the series.”

So I finally powered through the end of season five and season six in the last six months… So that’s the part that I have in state. Seasons one through four - it was a while ago.

I’ve taken the other tactic… I just refuse to watch it now, just so that Adam can’t…

It’s so good. You think you’re hurting Adam, but you’re not hurting Adam. Jerod is hurting Jerod by doing that.

That’s right.

I’m hurting myself?

It is so extraordinary. And it’s extraordinary for all the reasons that great satire is extraordinary, in terms of it’s very much a reflection of this satire that we’re living called Silicon Valley. It’s just very, very well done.

Well, and the number of people that will say “I can’t watch it, because it hits too close to home” tells you it’s perfect satire.

That’s what most people say. And Bryan is the first one who didn’t say that. He’s like “Oh…!” and he just launched in.

You know, Steve and I actually in a previous life reported to the chair. At one point they get rid of the CEO and everyone’s reporting to the chair, and that is the episode… I know a lot of people that can’t watch it because of that episode…

Because it happened…

Oh… There are plenty of companies where it’s like “The CEO is so bad, we’re gonna get them out of here. We actually don’t know who the CEO is… But by the way, it’s none of you turkeys.”


“Actually, this chair is now in charge.”

“Can you really fire us? You can’t really fire us. You’re just the CTO.”

[laughs] Exactly. And just a bunch of the dynamics in there that are very – and I think Dick Costolo is the one to really… That’s the reason I think – you know, Dick Costolo was kind of a fan of the series, and after season one kind of volunteered to help write a bit…

And he’s talked a bit about this, but I just feel like as you get into these later seasons, and you get things that are so dead on… I mean, Jerod, I love where they’ve merged two companies, Sliceline and Optimoji, and there’s a civil war in Pied Piper, because the two companies had different dog policies.

[laughs] Okay, you’re selling it…

One of the company’s dog policies was like “You absolutely bring your dog to work.” The other company’s dog policy was like “No, no dogs come to work.” And it’s like, this civil war spills into Pied Piper…

…because Richard casually allows one of them to bring a dog to work, and the next thing you know –

He’s trying to get them to like him. Nobody likes him, basically… Well, they like him, but they don’t respect him, so they don’t listen to him… He’s like “Well, I’ll get you your favorite coffee. You want dogs in here? I’ll get you some dogs in here…” Whatever it takes to get them to like him.

And I think it’s so interesting, because – like, with so many things about the series. It’s like, it’s funny, and it’s light, but it’s hitting on something really, really deep, where you have… And this is an absolute problem in Silicon Valley, where management is like “Yeah, I just wanna make you happy.” And it’s like “Well, that’s actually not the way you lead.” As Steve and I well know, leadership is not always going to leave everybody happy… And if you try to leave everyone happy all the time, what you end up doing is actually just creating a mess.


You leave no one happy, yeah.

[06:12] Yeah, for sure.

It’s good stuff.

It is good stuff. It is good stuff. I mean, the other one that hits home for, I think, a bunch of folks, especially those in enterprise sales, is everything around the box. And then, the like collection of the sales team, where it’s like – see, now I’m gonna fall down, because this was before season five, but… I can’t remember his name, but it’s like, he’s regional vice-president Northwest, and they’re all checking in with all the different regional sales reps…

That’s right. I shadow him.

I’m Keith. I’m shadowing Bob.

That’s right. [laughter] Jan the Man here again…

They call me Jan the Man from inside sales…

And it’s a woman. So it’s like an oxymoron. It’s like “Well, okay…”

Wait, Jerod, you’ve not watched any of it?

So I have watched season one, and I liked it, but I just didn’t love it. So I watched it, and I just kind of dropped off. And then Adam’s like basically been trying to reel me back in. I kind of took the antagonistic stance, you know… But you guys are selling it. You’re selling it.

Adam has not used this shame tactic that Bryan was.

He does, but it’s not as effective, I think. We’re too close. I just don’t care as much about –

Yeah, he doesn’t care.

Well, let’s not give me too much credit, because it took years of ongoing shame to get Steve like finally over the line.

Oh, okay.

But there is a great – the scene that Steve’s alluding to is something, again, that’s very close to our own lived experience, where they build out sales effectively before they have a product.

And so Richard walks in, and there is this sales team that is already built out, but they don’t have a product yet. And the sales team – and again, we’ve seen this… You see this over and over again in startups, where they build out sales and marketing before having product-market fit. The sales and marketing folks are slick. They seem like they’re charismatic, they’ve got this kind of customer Rolodex, they’ve got all these… Next thing you know, you have all these RFPs, and MSAs, and you’ve got all these trials going on, and it feels very promising, it feels like a pipeline.

POCs… Yeah.

POCs… Exactly. It feels like a pipeline. But in fact, it’s not. It’s all a fiction, because there’s no product. And as customers discover that there’s no product, then the sales folks are like “Well, the problem is the product.” And like, they’re not wrong, but they’re also not right. It’s like, actually, the problem is that we poured a lot of our scarce resources into building sales and marketing before we had resounding product-market fit.

What was the line when he’s asking about why they can’t sell, and he’s like “Well… I mean, they’re amazing salespeople when the product can sell itself.”

Wow. Good job, Steve. That’s like a line, man.

Because he’s like “I thought these guys were the best.” It’s like, yeah, they’re the best, because the product sells itself. And Silicon Valley does this over and over again, where someone says something and you’re like “What the *bleep*?” And the camera holds an extra beat on the other person in the room being like “Are you listening to yourself say that?” The reaction shots are great… And I feel like that’s one of those where it’s like – there’s a lot of wisdom in that. And it very much informed the way we built Oxide, by the way.

Here’s the oxymoron here, too; the unexpected thing. It’s like – I’m gonna spoil something. So obviously, if you’re listening to this, we’re spoiling things. So if you haven’t watched beyond season one, like Jerod, then –

You’re spoiling it for me…

It’s everything spoiled.

Cover your ears.

I probably shouldn’t watch it now… Yeah, spoiled.

Okay, so the Box goes on to be the biggest moneymaker for Hooli, in like season five. Of all the inventions, of the great inventions that Gavin Belson did, in the words of Sherry, I believe her name is… She’s like “It would have been better if you didn’t invent any of those things, because they were all money losers.” But here’s the Box. He’s like “Is this for the whole year?” He’s like “No, this is for the first quarter, in terms of sales.” Like, it was the best success.

What’s in the box? What’s the box? I don’t even know what the box is. What’s in the box?

What is the box, Bryan?

Watch and you’ll know.

Is it kind of like Brad Pitt at the end of Se7en? “What’s in the box?”

“What’s in the box?!” And this is kind of funny, because they try to disparage the hardware angle a bit, but of course, that’s what we’re building at Oxide.

Yes. I love hardware, by the way.

[10:18] So they’ve got the great scene in the data center, where the box is gonna go, and like “The box goes here.”

Data center operator, you know, that lives in a cave… Yeah.

I mean, I can imagine you guys watching the show, literally taking notes, you know?

Oh, for sure. Yeah.

Because it’s so on the nose with what you guys are doing.

It is very on the nose. So it’s the Box Gavin Belson Signature Edition.

That’s right.

And they crowdsourced the logo inside of Hooli for–

Oh, nice.

And I won’t give that away, but it’s really genius…

Don’t give that away. That’s too good. Leave some things for him to chew on later, yes…

Exactly. But that’s certainly resonated with what we’re doing at Oxide, where we’re doing this exactly – we are doing the Box 3 Steve Tuck Edition.

Is that right?

We don’t have a Russ Hanneman on the cap table though…

We don’t…

[unintelligible 00:11:04.24]

“This guy…”

“This guy…”

“Hey, this guy…”

You guys sold the box, didn’t you? Segue… Didn’t you guys sell the box?

Yeah, we’re shipping.

Well, you shipped.

We shipped. We have shipped.

I saw the tweet… Was that you? Now that I see you in person… Was that you wrapping this thing?

That is not me wrapping this.

Okay… You look similar to whoever that is wrapping it.

I’m not sure if he’s going to be insulted or I’m gonna be insulted by that… I don’t know, I think that looks similar to Robert Keith. I don’t have any – in particular… So that’s our engineer, Robert Keith.

Okay… Now that I zoom in, I’m disagreeing. I didn’t open up big. It’s small on Twitter.

Callback to another Silicon Valley scene… When Robert Keith joined the company, among the very, very few people at Oxide were a Robert and a Keith… And I’m like “Look, I don’t know how to tell you this, but we can’t call you by either your first or last name, because we are–” And he’s like “It’s fine. I’ve been known by RFK before, thank God…”

The two-name problem, which in Silicon Valley was because of a second Jerod.

Exactly. So RFK, our engineer, he… One of the questions that we got on Twitter - so he’s wrapping the rack… Well, one, there was like a very weird strain of like “I can’t believe the amount of plastic that you are using to wrap the rack.” And it’s like “Do you not know how anything works or is built? Trust me, the plastic that this thing is being wrapped in is the least of the resources being consumed.” I mean, it is a computer, and it is a good one. It’s one that we’ve designed to be efficient.

You had static, versus anti-static Twitter…

Oh, gosh.

So there was a big group that was just like “Oh, my God… You’re wrapping it in something that’s going to create static electricity.”

And it’s actually anti-static. But my favorite was medical Twitter, that was just spiraling over his right foot about to plant, where his ankle’s gonna snap…

Oh, no. Gosh… Really?

And just worried about what happened to his ankle…

You actually talked to RFK about this, because I’m like, you know, RFK, one of the burning questions we got on the internet is like “Is this guy about to eat ****? Because it looks like it.” And he does actually use – zoom in on the right foot…

That’s true.

You’re like “It does look like you’re about to trip.” And he’s like “I can’t remember, but I can tell you, I definitely do not recall sprawling all over the floor.” I think RFK is a coordinated guy. He did not trip… But yeah, that was us – so that’s our engineers on-site with our contract manufacturer in Minnesota, putting the final touches on that rack as it goes into the truck and ships out to customer number one. Pretty exciting stuff.

Yeah. Very exciting. I mean, to literally ship something real… Not just software; not that that’s a bad thing… But physical thing, that’s very hard to take back and obviously change… You’re gonna get it on-site, they’re not gonna wanna let it go… It’s a beautiful thing, too. I mean, you guys have phenomenal industrial design, as well as just design generally. I love the color.

Who’s doing your design work, and how can we get some access to this talent? [laughs]

[13:58] The team?

Yeah, I mean, seriously… Your guys’ design is so good.


Early on, we were very fortunate to get connected with a firm that was helping us with some elements of design, some other stuff… And one of the folks that was there, that was kind of front and center for that had left, and was looking to see what they wanted to do next. And thankfully, it was one of those things where… You know, in a startup at kind of an early stage a full-time designer is – you’re not starting there when you’ve got limited resources and a small team… And it was just an absolute no-brainer with this particular person that we had worked with, getting them in early…

I think, like a bunch of people at Oxide, this particular person spans way beyond design. And thinking about design not for design’s sake… Because that’s the other side of this - we all lived as data center operators; you always have seen the products where you see design for design’s sake show up, and you’re always asking the question “I don’t use that. Could you strip that off? How much would the remaining product cost?” And there’s some good company examples of that in the past, where they would put these really expensive bezels, or LED displays on the front that don’t really serve a purpose… And all you as an operator see is added cost for no benefit. And I think the team’s done a great job of focusing in on design for usability.

Like, let’s colorize this thing, because that’s where the operator needs to touch. Or “This is how you indicate health or quality of a particular part of the system”, rather than “How do we make this thing just look good for looking good’s sake?”

And Ben Leonard is the designer that Steve is referencing. And some of my favorite conversations is getting Ben together with the mechanical engineers to figure out how to make the rack look great, while making it highly manufacturable, while designing for manufacturing, and all these other constraints. And Steve and I are very much students of history… I mean, I think that if you haven’t read “Steve Jobs and the NeXT Big Thing”, it’s an absolutely terrific book about the history of NeXT. And the really interesting chapter of Steve Jobs’ life is at NeXT… Because they made a lot of mistakes. And one of the mistakes they made is he’s after this particular black for the NeXT Cube, and just spends untoward amounts of money… I mean, it’s the wrong decision, and we do not want to have a matte black that we are finicky about, that we’re not designing for manufacturing. So how do we make this thing beautiful without sacrificing its manufacturability? And that requires you to get some mechanical engineers and a designer in the same room… And there’s some back and forth, because it’s like, “How about this?” “No, no. That’s too expensive. Can’t do that. Can’t do that.” But what we landed on I think is really gorgeous.

I don’t know if you’ve seen the side of the rack, but there’s a punch-through with the Oxide logo, with that green, that just absolutely pops… And it’s good-looking, which is really important to us. I mean, it’s important to us to build something that we – when we set out, we wanted to build something that we would all be proud of; that we’d pull together this kind of team spanning these different domains and disciplines. And Oxide as a result, because we pulled in so many different kinds of folks, from so many different domains, Oxide feels like a heist movie. It feels like a heist movie… I love heist movies, and it’s got – you know, we’ve got the safecracker, and we’ve got the helicopter pilot, and we’ve got the specialists, but then those specialists all pool together, to pull off one last job. [laughter]

That’s right.

Well, hopefully not one last job in your guys’ case…

No, no, no.


The first job.

Yeah. There’ll be other follow-up movies, maybe. With different products.

This depiction on the homepage of the rack - is this pretty accurate to what a typical rack you’d sell would look like?

That is very accurate, yeah. Actually, that is based on the CAD renderings. So that’s pulling straight from Mechanical CAD.

Is a lot of that storage? Is that what a lot of that is? Like the vertical greens across the top and bottom?

That’s what the green is, that’s right.

Okay. Did I read it right, you’ve got 32 terabytes of NVMe? You only do NVMe storage in this thing?

Gosh, these things are expensive. Holy moly. That’s good though, right? I mean, for what you’re doing in a data center, you want the fastest possible. That is such an expensive buy… I mean, that’s not my money. That’s somebody else’s money… Right?

I think for the lifetime of the company there’s been this real homelab interest in Oxide…

Yes. You gave me a homelab Oxide edition.

We’ve had plenty of requests for that, for sure.

I want that, for real.

For the enthusiasts… Because remember the last time, Bryan - Gerhard wanted to buy one. You’re like “You’re not buying one.”

I think I let Gerhard down a little bit. Gerhard is like “When can I buy one?” I’m like “You’re not gonna buy one.”

Yeah. He’s like “I’ll save up.” I’m like, “I don’t know if you wanna do that, but…”

You’re right.

There is a lot of opportunity though. I mean, obviously, you have to focus on the market you’re gonna focus on, which totally makes sense… But you’re using ZFS, which a lot of homelabbers love… Btrfs is another one, but I think for the most part OpenZFS has won the homelabber’s heart. So you’re at least there, and you have beautiful hardware…

Yeah, and we’ve got – I mean, ZFS is certainly an important building block… We’ve built our own software, from the lowest levels to the highest levels. So we’ve got our own service processor, we’ve got our own hypervisor, we’ve got our own control plane software, we’ve got our own console… And all of that is open source.


So that’s the other kind of big angle that we can tack into… And this is what we tell the homelabbers; it’s like, “Good news. It’s all downloadable.”

Right. And we’re like “Nah… We wanna buy something.” [laughs]

“I want the hardware!”

Sorry… I know. I know.

It’s kind of weird that you guys have this nerd cachet, and you have this enthusiast audience. So many people interested, watching, love it, wanna buy stuff… Your Gerhards… I’m sure Adam would buy some stuff…

I would, totally. Yeah.

Yeah. But does that translate into any value for you all?

For sure.

How so?

Yeah, I was just gonna say, it’s like that contingent - many of those folks are in companies who spend a lot of money on infrastructure, on premises… Which, again, is kind of this like forgotten corner of the technology world. It’s like, “Oh, does anybody do on-prem compute anymore?” And it turns out - like, just listen to an AWS keynote from two years ago, and Andy’s on stage talking about 95% of infrastructure sits outside of the public cloud.

So you have this kind of overlooked area that is much larger than the public cloud, but has none of the access to the same benefits that we are all intimately familiar with, which is like - why would you consume infrastructure any other way than at the end of an API, that is a set of Elastic services? And yet, if you want to own parts of your infrastructure, for the right reasons, or you’ve got regulatory compliance reasons, or latency, or security… For any of these types of things which are good reasons to run portions of your infrastructure on premises, you’re doing the same thing that folks were doing 20 years ago, 30 years ago. You’re taking a metal rack and then you’re figuring out what server vendor to put in there, who by the way is outsourcing firmware and a bunch of other stuff in that set of boxes… And then you’re figuring out what do you do for storage, and what do you do for your networking… And then you have to do the software part. Are you going with VMware? Are you going with Red Hat? And you have to basically build that whole thing together over months, just to deliver what AWS has at the swipe of a credit card, which is a set of Elastic services for developers. And it’s a tragedy, because you shouldn’t have to –

You know, Bryan and I were at a cloud computing company and just realizing how tough it was for those that were not running a cloud computing company to actually get this kind of clean water to their end users, to developers.

So to your comment Adam, it is definitely expensive for homelabbers, but the interesting thing that you find is when we’re talking to enterprise customers, and they’re comparing it to their current stack of putting all that into a rack, it actually becomes really, really attractive, even from an economics perspective.


[22:00] And I think that that kind of appeal to that enthusiast demographic is super-important to us, because so many of those enthusiasts that are homelabbers at home - they’re the ones that are going back to work and making an IT decision. So we love having that – and I think that that’s always been really important to technology in general, is that playful tinkering that’s happening, where people are kind of following their natural curiosity… It’s a really important way that technology has developed.

So even though we’re never gonna sell to Gerhard and the homelabbers, we love the support, the engagement, the discussion, the enthusiasm. It’s not our market, but it’s a really important element of who we are… And plenty of folks have come to Oxide out of that enthusiast demographic. One of our engineers came to us because they were starting to do things in Hubris, which is our open source operating system; we talked about it the last time, Jerod…

…our Rust-based operating system that any homelabber can experiment with, by the way. I think that’s where I was trying to steer Gerhard into.

Yeah, you were.

I’m like, “Dude, what you wanna buy is like a 20-dollar eval board”, whoever those went off to.


“This is what you wanna buy. You wanna buy an STM32H753 eval board. You can download Hubris, and then you’ve got – you’ve got an Oxide computer. You have it for 20 bucks.”


And he’s like, “No, no, no. I want a real computer.” I’m like, “Alright…”

[laughs] He was serious.

The thing that’s amazing is those things are real computers, and so it is actually a great way for people to get to know some of the lowest-level software that we’ve done. And because all that’s open, people are able to get insight into this level of software that historically has been completely closed and proprietary.

Do you think if you’ll conquer this enterprise world you’ll consider homelab? Like, the home cloud, so to speak?

Uh-oh… [laughter] Adam… Adam, Adam, Adam…

There’s room for the home cloud, that’s what I’m saying. It’s not about “Oh, will you please do this, because I want it?” It’s more like –

He knows…

There’s a market, I believe, in the future, for home cloud.

Alright, so the first step is at least pretty straightforward, which is there are a bunch of use cases… This is still in the enterprise, but there’s a bunch of use cases that are sitting in like retail stores, bank branches, manufacturing sites, park attractions, where there’s a lot of need for compute and storage and networking, and really needing a cohesive, integrated solution. So I think that has to be step one for us, as we think beyond the core data center use cases…

And then yeah, there’s the pony rack. There’s been a lot of calls for how small this thing could get. But…

You know what you guys could do in the meantime, and maybe just forever? …is to throw us a bone. Is to have a Drobo kind of a thing. Just like – it’s an Oxide storage thing that can sit on my desk… I’m a YouTuber, it can be in the background, it can glow green, or whatever… And I think we’ll all shut up and just go on with our life if you guys provide something that we can buy off of the website, you know?

I think we got to the ask…


“Yeah, can you just give me an Oxide-branded machine?” So I think part of the challenge for that homelabber demographic is that we have taken a rack-scale approach. This is true rack scale design. So in particular, as you really want to – actually, I’ll tell you, the biggest technical hurdle to getting a true Oxide rack, even a scaled down one, is you’ve gotta have your own power shelf and power rectifiers. We’ve got our power shelf, we do our AC to DC conversion in a single shelf on the rack, and then we run DC up and down the rack.

Oh, okay.

So a mini DC – you need a DC bus bar, and then we’ve also got an integrated switch, which is actually the single biggest challenge we would have, is scaling down that switch to something that can reasonably fit in a homelab.

And also, Adam, I feel like I’m doing the discourtesy of taking the request a little too seriously, because… It’s like, it’s just not gonna work in the homelab.

What?! I’m very serious.

I know, I know, I know…

[25:55] Okay, well, let’s back up one step then. So rather than take what you have, that large rack - which is just phenomenal. I mean, 2048 CPU cores. I mean, I don’t need that in my home. So don’t give me that scaled down. Give me a version of how you think for homelab cloud. Assume that I want you to consume 4 to 8U on my rack, and you’re a simplified system that gives me great power, great networking, maybe great CPU obviously, and then storage, just in one single box that has superfast throughput between all the different services I run. You know, maybe I’m running a Proxmox, maybe I’m running something else… I don’t know, that you’ve all built, something that’s Proxmox-like… But give me not a version of what you have, scaled down, but a version that thinks like you think, for home cloud.

Yeah. And I think, again, the challenge there is that we have taken, just from a technical perspective – ultimately, the reason that Oxide exists is because the machines that we run in the data center are actually closer to the homelab than they are to the hyperscalers. That’s actually the problem. It’s like, haven’t you homelabbers had enough, really? Because what we run in the DC are these 1U/2U boxes that actually are personal computers. And the approach that we’ve taken is to blow all that up and to take a rack-scale approach. So that scales down to a point, but when you get to something like the switch, it’s like actually the integration of the switch with our control plane software. So we’ve got our own switch, we’ve got our own switch operating system… Actually, that switch is actually not one switch, it’s two switches, because you’ve got a high-speed switch, and you’ve got a management switch. Getting that into a form-factor – I mean, it’s not impossible, in kind of like the arbitrary future…

Are you scared, Bryan? Are you scared to do this?


You’re making all these excuses. I’m just teasing you…

I appreciate you trying all the tactics here… [laughter]

I really appreciate it. He started with like “Imagine if… Let’s just go clean sheet. How would you do it…?” Not “How you’re going to do it?”, “How would you do it?” And then it’s like, “Now to the shame…” You know. And that was the origin story to the Oxide Mini.

That’s right. I went McFly on you. To be super-serious, I love the focus on where you’re at though… Like, I’m a Ubiquiti lover; I love the simplicity of what Ubiquiti has done for home networks, and enterprise networks even… They’ve just made it really easy to, I guess, get into networking, when you would have normally been maybe intimidated by some of the things that running a network requires. And so I think they’ve proven there’s a beautiful hardware possibility, molded with great thinking and great software. And then distributing that, and having a fanatic customer base. They have a fanatic customer base.

So given that in the marketplace, if you can collapse some of those things that you already have done, maybe there’s another player in the market that’s called Oxide… [laughter] It’s a compelling argument.

Maybe. Maybe. Maybe.

Yeah, there you go… Just say yes, guys. You don’t have to do it. [laughs]

“We’re gonna do it. Next quarter.”

Actually, it’s funny, because I do feel that it would be easy… Jerod, to your point, like “Can you guys just agree to it, so we can move on? I don’t know, let’s go back and talk about the series that I haven’t watched, or something…”


But you know, we’ve always tried to be really direct about what we’re doing and what we’re not doing… And I’ve got a complicated relationship with Steve Jobs; there’s plenty to not like about the guy, but I do love his WWDC 1997 keynote, “Focus is about saying no.” And especially as a startup, especially as a new company, you’ve gotta know what you’re saying no to. And what’s actually important - and this is the hand-on-heart honest answer - in order for us to be able to ever serve those smaller, edge use cases - still probably in the enterprise, but it would get us much closer to the homelab - we need to survive and thrive as a company, and that means we’ve gotta focus on this core market that we’re going after, which is this enterprise DC market.

For sure.

Good answer.

[29:52] Yeah, I’ll put my hat down there, because I for sure agree with extreme focus, so I’ll give you that… However, I will also say I began with “If you conquer…”

Okay, well, if we can look forward into the future - yes, absolutely. For sure.

There you go. There you go.

I do feel that we – I mean, our aspirations are really to be the kind of company that young engineers can come up in, that customers love to buy from, that people are enthusiastic about… And it’s like, we’re veterans, and we are trying to pull from the best of our collective pasts and careers, and where companies really get this right, and then they lose their way… Which is what we’ve certainly seen a lot of. And we are trying to pull from the best of that, and build something that can be really generational and special. So yes, in that future, absolutely.



Oxide, Homelab Edition.

He finally landed on the correct answer.

2050 OxCon is gonna be just really the big announcement there, as we finally serve the homelab.

You can go back and play this audio at that announcement, and be like “Wow…!”

And we will. We’ll do that. Exactly. “Wow, they knew even then.”

The vision of these guys.

“27 years in the future they would serve homelab.”

So how do we get there? [laughter] What’s the state of – no, no, no, I’m not going there.

This just turned into a board meeting.

Exactly. It’s like, “Okay, so you’ve already committed to doing this in 2050. I’m just pulling in the date at this point…”

Yeah, give us a roadmap.

High-level. Just high-level, what are the milestones?

That’s not what I’m saying, but okay. No, seriously, how do we get there? What is the state of on-prem? You guys are building amazing hardware for this market that you said Steve is sort of like – I forget your words, but basically just not paid attention to. It’s been an afterthought, basically.

Under the radar.

Under the radar. Thank you, Jerod.

It has. And the worst is it’s been ignored by the companies that are serving that market. And that is largely because - you know, the last ten years, all focus has been on “How do we collectively move to this public cloud computing model…?”

And forget everything else.

I mean, if you were gonna give it the most charitable treatment, it’s like “Well, no, that should be the first focus. It’s not in spite of everything else, it’s just like, that’s where you should start.” And that’s actually not entirely untrue, and that has been the focus for most companies over the last decade. And we were certainly in the midst of it, running a public cloud computing company. I think now the question is “Okay, well, we’ve moved most of the good use cases to the rental model of the public cloud.” Because a lot of people think about cloud computing as this rental service model; this kind of hotel model for living, rather than what it actually does, which is providing abstractions over a bunch of complicated infrastructure under the surface, and making it accessible via APIs.

So I think now companies are rightfully asking, “How do we get that same service model everywhere the business needs to run?” and there’s no good answers right now. Over the last 20-30 years the industry has split hardware and software; you’ve got hardware providers over on the left, and software providers over on the right. And if you wanna bring those two together, it’s each individual company’s job to go do that. Any company that is building cloud-like infrastructure on-prem has to do all the assembly, and the integration, and the troubleshooting, and God forbid something goes wrong, it’s like finger-pointing left, right and center. You know, “Oh, what version of software are you running?” Instead of delivering a complete kind of solution.

Now the long forgotten masses on-prem are trying to figure out what’s next… Because you can’t – you know, just like I’m in a hotel room right now, and it’s very nice. I didn’t have to buy any of this stuff, and if I want, I can order food to the room… And it is pretty cheap, considering I didn’t know I was gonna be in this city five weeks ago. But if I were living here five weeks from now, I would be looking at a huge bill, I would have people that can come and go in my room without telling me… You know, there’s aspects of hotel living that don’t really hold up when you know you’re gonna be in a city, in a location for 12 months, 24 months, 36 months.

[34:02] So I think at the core of this for us was how do we extend it so that cloud computing is sort of that ubiquitous foundation. And now companies in the future are able to either rent it from a provider like AWS, Google, Microsoft, for the right use cases, and then own it where they wanna own it. But it doesn’t take an army of 500 people to kind of assemble it, and build it, integrate it, and support it. There’s really this kind of productized hyperscaler-like infrastructure that everyone should have access to. That’s where we started.

Now, Bryan, I think I can speak for you that we had a good sense that this was gonna take – this was gonna require taking on a lot, because it’s not only a de novo server design, but then we decided early on that we thought we had to do our own switch, that has its own kind of backstory there…

The paths had diverged so long ago, is the problem. The problem is that the extant hardware makers are PC companies - Dell, HPE, Supermicro - and they don’t actually understand cloud computing. And those folks at those companies that understood cloud computing… Steve grew up at Dell. Steve was at Dell for ten years. And Steve saw this burgeoning new use case in California for Dell servers - a company called Facebook. And inside of Dell, they’re like “This is a website. We don’t see why this is that important. We should be selling to the Chevrons of the world…”

Insurance, and manufacturing, finance… Yeah.

And part of the reason that Steve went to a cloud computing company in 2009 is because he couldn’t really get Dell to understand the importance of cloud computing. And you see this over and over and over again. Go look at the backgrounds of people doing cloud at Google, at AWS, at Meta, and you’ll see the Dell and the HPE in their own past, and you know that they left because those companies didn’t get it. And as those companies didn’t get it, the two worlds got further and further apart. And so those designs haven’t moved from 20 years ago.

So in order for us to be able to go deliver that hyperscale-class infrastructure, hardware and software together, we’ve gotta go back to where the trails diverged, and we’ve gotta go down the right path for on-prem. The problem is they diverged so long ago that we have to take on a huge, huge problem. And the minimum viable product for this company is enormous. As Steve was alluding to, it included the networking switch, it included getting rid of the Baseboard Management Controller (BMC), doing our own service processor, doing our own software all the way up and down the stack. So VMware does not run on this box. ESX does not run on this box.

AMI does not run on this box.

AMI does not run on this box. We have done – we don’t have a bias. We’ve done our own hypervisor, we’ve done our own control plane, and that’s an enormous, enormous lift.

And by the way, when you look at kind of professionalized cloud computing infrastructure providers, this is pretty consistent. Amazon, and Google, and Facebook - these companies, their infrastructure looks nothing like what’s accessible to the Fortune 500 companies that are out there building on-prem. And you’ve kind of seen a similar pattern in the automotive industry, where we’ve been in like a couple decades of outsourcing.

There’s a really good podcast where Jim Farley is talking about how Ford outsourced everything in software. And so when they wanted to make a change to like the seat controller mechanism, they had to go to Bosch, and be like “Hey, do you mind updating the software that controls this aspect of the car?” And there were like 500 different examples of this. And this was done to lower costs, to bring the cost of each car manufacture down to like $500. And the realization that he is having, having watched what Tesla has done, and what some of the Chinese manufacturers had done, is like “This is not only costing us more, we are moving slower. We are not competitive.” And they kind of had this revelation that they had to bring everything back and start thinking holistically at Ford about what a modern vehicle looks like.

[38:10] And I think as we were kind of peeling back the layers, we had a sense of it, while we were at Joyent… And because of all the issues that we would run into, that were at that hardware/software interface… But when you start peeling back, it’s like, “Man, there’s some decades-long cruft that’s gonna be pretty challenging to rip out and do anew.”

The saving grace was that at every single one of those layers there were groups of technologists that had come to this same conclusion, of like “No, this layer has gotta get blown up and rethought.” And the reason we are where we are is because those technologists came to Oxide, and said “Wait a minute - oh, you’re rethinking the switch? Thank God someone’s rethinking the switch! I’ve thought a lot about this problem.”

“That’s what I wanna go do.”

What’s so wrong with the switch?

Oh, no… [laughter] Oh, Adam… Adam…

Here we go.

[laughs] We don’t have time.

Here we go again. Oh, my God… And it’s not the switch, but the switch operating system. And you’ve got the –

The switch is in charge of a lot of different things, obviously. It’s like moving the packets, it’s connecting the devices, it’s connecting all the IP stuff. It’s super-important, obviously, in the network. It’s the network. It’s the backbone of it.

It is. But right now, the switch has no real integration with the compute nodes that it’s talking to. There’s a bunch of functionality that you actually wanna go deliver to the end user. You wanna give them that virtual private cloud. You wanna give them sophisticated firewalling. There’s a bunch of sophisticated stuff you wanna go do. In order to do that, you actually need to have hardware and software cross-stitched across the compute sled and the switch. And when those things are delivered by two different companies that have no real sense of collaboration and are constantly pointing fingers at one another, it’s really hard for that end user to go create that infrastructure for on-prem. So yeah, very much, the switch had to go –

It’s not a problem with the switch, it is very much that the switch just doesn’t know what happens when data leaves.

Right. It’s like silos.

And if you’re actually thinking about a pool of resources that are all – again, back to cloud computing, you’re not trying to design specific hardware components and software components; you’re trying to give developers instant access to arbitrary amounts of compute, storage, and networking via an API. And you give quality of service to that, and you can’t do that when you have that kind of brain stem, that switch that is unaware of what’s happening on compute sleds, and unaware of what’s happening up in the software stack. It’s the classic – anytime there’s a bump in the night, everyone blames what? The network.

Like, “Oh, it’s gotta be something in the network.” And poor network engineers are left kind of trying to defend themselves saying “No, everything I see in the switch, in the routers looks good. It can’t be in the network.” This is where, time and time again, we realized that you need to build these things together, and be able to deliver that kind of end-to-end visibility.

How different is what you guys are doing? So if I’m a CTO and I have two proposals on my desk, and I have to decide a direction we’re gonna go with a new data center we’re building out, or whatever… And I can go with Oxide racks, or I can go with whatever’s currently there; stack a bunch of Dells and some switches together, and do what I’ve been doing for the last decade… What kind of switching costs am I looking at, what kind of lock-in is there? Do I have huge risk to pick you guys, or is it like, everything you’re doing is so low-level that at a point where I’m gonna care about it as a company who’s rolling out some services it’s all good? How different is it?

[41:54] So in terms of – I mean, we would propose in terms of value, and density, and economics, and services, it’s very different. In terms of switching costs, I think one of the big benefits, and why the timing was right for Oxide now versus Oxide, say, 5-10 years ago, is that where companies have oriented and really invested a lot of resources is sort of developer-friendly tooling for cloud computing. So by that measure, the switching costs are extraordinarily low, because you’re now able to leverage the same kind of Terraform frameworks… Just, the models and workflows that you’ve become accustomed to are stitching into Oxide. Because you can think about it as kind of another cloud that you now kind of own and operate on-prem. And it’s leveraging all that investment you’ve done over the last five years, getting to more cloud-first type models, and workloads, and development practices, but being able to leverage those on-prem.

And then in terms of thinking from a data center operator perspective, where this solution meets the rest of the data center is obviously at the network hand-off. And so we speak BGP to the network. We come with gifts to the network operators and engineers, which gives them a whole kind of new world of visibility, so that they can not only be in defense mode, but actually be proactive, and be able to anticipate where there’s congestion, and be able to help give users better experiences.

And then we’ve invested a lot to make sure that that hand-off point where we’re talking BGP to someone’s network is clean and pretty straightforward, pretty low-friction.

And in terms of that operator experience, one of the things that we’ve definitely optimized for, because we’ve actually built this thing as a product - you can actually get it wheeled in, de-crated, powered up, and you can start provisioning on it that day.

Actually, even now, when I – I guess, Steve, you won, because… Steve would say “We are gonna get you up and running within a day.” And I’m like, “Look, Steve, I normally –”

And by the way, just for context… This happened to us as we were building out data centers all over the country, and eventually the world, when Samsung acquired Joyent. And the lag time from when those boxes all land to when you’ve got added capacity, which, by the way, is dead – like, you can just watch the dollars burning on the clock, when you’ve got boxes you’ve paid for, and you do not have customers that are being served by them… So that time is really important when you’re thinking about the economics of the business. And for us, I think we had it – you know, we were operating pretty efficiently… But that’s still measured in weeks. And a bunch of the companies that we went and we talked to in 2019 were telling us that they measure it in months. It’s an average of like 100 days from when boxes land to when they’ve done installation, and integration, and burn-in tests, and software deployment, and validation, and network settings, and they’ve handed this off to developers… 100 days. And our goal - or at least my goal; Bryan’s goal was higher than this, but… It was that we’d be able to do this in one day. So you roll it in, you give power, you apply networking, and you have productive end users in the same day.
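The idle-capital point Steve is making can be sketched with back-of-the-envelope math (all figures here are assumed for illustration, not Oxide’s actual pricing):

```python
def idle_capital_cost(purchase_price: float, depreciation_years: float, idle_days: float) -> float:
    """Straight-line depreciation burned while paid-for hardware sits unprovisioned."""
    daily_burn = purchase_price / (depreciation_years * 365)
    return daily_burn * idle_days

# Assumed, illustrative numbers: a $500k rack depreciated over 5 years.
# 100 idle days burns roughly $27k of capital; a single day, under $300.
hundred_days = idle_capital_cost(500_000, 5, 100)
one_day = idle_capital_cost(500_000, 5, 1)
```

Which is why compressing that install window from ~100 days to one matters so much to the economics of the business.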

It’s not a day. I keep saying, it’s not a day; it’s like hours.


And Steve’s like –

No, you said hour. You would say one hour.

Come on, Steve, it’s hours.

And it’s not what we’re aspiring to, it’s what we’ve done. So I’m like, “Steve, can you give us –” It’s just like “Look, can we just say a day?” I mean, if it takes hours, it’ll be done in a day. I’m like, “They’ll definitely be done in a day”, but it’s actually – and this is where you get to the real pay-off of having rethought all of this, having designed it holistically… Just like that iPhone unboxing experience is really quick and smooth, that Oxide unboxing experience, de-crating experience, and the reason that it is possible is because this whole thing - we have all the hardware and all the software, and so when we actually do our initial install of the software, we effectively go through our own recovery path.

[45:58] Assume you’ve got nothing on the rack, and we go from literally nothing on the rack to you can provision within hours. I mean, I think it’s standing at like 90 minutes right now… And actually, what we are ultimately bound by is the UART speed inside of the sled when we’re transferring the most primordial image, so that it can bootstrap itself up and boot up the network. In order to be able to boot up the network you need to have enough of an image that you can actually go boot… And we are ultimately bound by that UART speed.
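Why a UART bound matters: the transfer time for that primordial image is simple arithmetic. The baud rate and image size below are assumptions for illustration, not Oxide’s actual numbers:

```python
def uart_transfer_seconds(image_bytes: int, baud: int, bits_per_byte: int = 10) -> float:
    """Time to push an image over a UART; 8N1 framing spends ~10 baud intervals per byte."""
    return image_bytes * bits_per_byte / baud

# Assumed, illustrative numbers: a 100 MiB bootstrap image over a 3 Mbaud link
# takes on the order of six minutes -- a fixed floor that software can't remove.
minutes = uart_transfer_seconds(100 * 1024**2, 3_000_000) / 60
```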

If we had a – I do love that the install experience around this is just eye-popping… And the folks that have been working on this are not necessarily – I mean, we’ve got some folks who have suffered through the pain of Dell, Supermicro and HPE, but a lot that are actually coming just from the cloud side of things, and they’re like “I don’t know, I wanna make this as great as it can be.” They don’t know – it’s like, “Do you know how far ahead you are of the state of the art?”

So when you initially install the rack and you plug into these technician ports and do this original – because you have to have some initial configuration. You have to have some initial – before you can actually just hit API endpoints, and hit that web console, there’s gotta be bootstrapping… And the actual software that does that is just gorgeous. We think it’s gonna be a wholly different experience.

Jerod, to go back to your question, if you’re that CIO - if you look at what this product offers your internal customers, it’s much more comparable to the cloud than it is to the on-prem stack of garbage that you’re currently suffering with.


Sorry, homelab…

No, no, no, that’s not homelab.

“Sorry, homelab…”

I’m cool with that.

No, the problem is – actually, we are running homelabs in our DCs.

We are. Everyone is.

Those are bold words.

It’s time to get the homelab out of the DC. I think that’s a good pitch…

We’re trying to get the homelab out of the DC. That’s exactly what it is.

To the earlier conversation - the homelabbers that go into these enterprise environments are the rabble-rousers. They’re the ones that are shaking their fists, like “Why can’t we get better?”


And it’s interesting, because our motion is not top-down. These folks are some of the most load-bearing folks in these organizations, that are helping create the products that these companies are selling to their customers… And they’re saying “How come we can’t do better internally, so that I/we can focus on building better products for our customers, instead of being our own private cloud corporation?”

We had one company that we were talking to in the finance space that was like “We have a 500-person engineering operations team, and we have to put them out of business, because our customers don’t – and not get rid of them; we need to reapply those folks to be able to work on the things that our customers are waiting for, and want.” But it is folks that are not gonna necessarily sign the PO, but they’re the ones that are making the noise to get to the folks that do sign the POs. And it’s been great to have that kind of community support. And it is that clarifying time when companies have moved certain things to the public cloud, and realized how much less operational overhead there is, to help sharpen, like “Wait a minute… How come we can’t have that same operational efficiency internally?” Back to like the CIO and the CTO; it’s like, “Wait, we can vastly improve being able to focus our talented folks on our business, and then give those developers a much better developer experience.” Which - I think that’s kind of the all-important bit. The amount of importance placed on shipping new features, shipping new products. Focusing on what their actual business is has been super-important.

Can you walk us through exactly what it’s like to boot for the first time this Oxide rack? Assuming the sad data center person’s walked us to where it will go, and says “This is where your Oxide server rack will go.”

“Watch the box.”

[49:54] Assuming that’s already taken place. We’re there – let’s say Jerod and I are there. We’re the administrators, the operators, whatever you wanna call us… We’ve gotta provision this thing. You say it takes a few hours… We slide it in, maybe it takes a small forklift, or several people, or maybe it’s got wheels, I have no idea…

This thing is not short, so…

Let’s just say it’s there. It’s there. We’re not worried about door spaces, how wide we’ve gotta be, nothing like that.

Okay, got it.

We’re at the rack. It’s not plugged in…

Is RFK with us? Do we have RFK here, or are we on our own?

RFK has unwrapped it, because he comes to unwrap it.

And we’re ready to plug it in. To the network, to power etc. and then boot it for the first time. Are we attaching our Ethernet cable to a port on this thing, or a console port?

What is the exact interface, the real details?

Yeah, the real details. So if you look at the rack - and I think you may be able to see it on the website, but there are technician ports at the front of the switch. So that is where you are gonna plug in – your laptop cable, effectively, is gonna plug into the switch…

An Ethernet port.

An Ethernet port.

You’ve got a configuration file that will specify the bare minimum that we’re gonna need to be able to connect to your broader network. That’s gonna be uploaded over that technician port, and then you are gonna SSH in over that port, and you’ve got an install screen that’s gonna walk you through the actual installation of that rack.
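For a sense of what that bare-minimum config might cover, here’s a hypothetical sketch - the file format, field names, and values are illustrative only, not Oxide’s actual schema:

```toml
# Illustrative bootstrap config, uploaded over the technician port.
[external_network]
uplink_port = "qsfp0"            # physical port facing the customer network
uplink_addr = "203.0.113.10/24"  # address on the customer's network
gateway     = "203.0.113.1"

[bgp]                            # the rack speaks BGP at the hand-off
asn       = 65001
peer_addr = "203.0.113.1"

[services]
ntp_servers = ["ntp.example.com"]
dns_servers = ["198.51.100.53"]
```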

We’ve gotta get a video out there of this, so people can kind of see it… And this is also where it’s just like – we’ve got a very demo-based culture, and so every Friday we’ve got what we call Demo Friday, where anyone can just demo anything to the company. That’s been really, really important for us, because it allows people who are doing things that are maybe pretty small on the stack to kind of get that appreciation of their peers.

We had a demo on Friday of one of our engineers making this thing that is already gorgeous, even better. Steve, I don’t know if you’ve had a chance to watch John’s demo, but it’s just absolutely eye-popping. But we’ve gotta put a video of it out there, so people can actually see it…

It was demo-ed yesterday.

Oh, nice.

And we didn’t even have the latest… Back to where we started, we were with a customer, and John was like – well, as you started to go in on that early setup, he’s like “Well, you should take a look at my laptop really quickly…” And again, it is fun to watch customers get delighted by these low-level kind of small implementation details… Because - back to the fact that they’ve sort of been ignored, having folks around them that really, really care about what is most painful or frustrating about their daily jobs, and seeing a little bit of care and thoughtfulness go into these parts of the stack is really fun. And Wicket is kind of a part of this sort of setup service on the rack that gives you a visual of how many sleds you have, what is up, what is not up…

And this is like – we’re not over the web right now. We can’t be, right? So this is all over SSH. This is a terminal app… So this is where – actually, it’s one of those strange bounties of Rust. This is based on Rust tui, which is a terminal user interface builder… And you can really easily build robust, eye-poppingly beautiful terminal-based apps.

Is that right?

Yeah. This is a terminal-based experience…

I love it.

And to Steve’s point, it’s one of these things where we are going into these little details that matter a lot to people who have been suffering. One of the things that is really important to us at Oxide - for the virtual machine… So you provision a virtual machine. How do you get into that virtual machine if the guest itself has borked networking, or is screwed up, or has even screwed up the image in some way… It’s like, you need a great serial console. The irony of the cloud is that the serial console is actually more important than ever… And the serial console is something that even the biggest public cloud providers don’t take very seriously. And we have taken the serial console really, really seriously.

One of the things that kind of fell out of our implementation is you can have many people watching a single serial console and participating in a single serial console.

[54:08] Is that right?

So yeah, you can share, effectively… And I think this is gonna be one of these things that our customers are gonna absolutely love, because it is – when you’re dealing with one of these low-level issues that’s annoying… It’s like, “Oh, I’ve screwed up cloud init in some way”, and it’s hitting the wrong thing, or what have you, and no one else can log into it, because that’s the problem - the ability to share out a serial console where everyone can log into the same serial console and begin to get this thing debugged… Which is a problem that everybody has. In the public cloud, this is a problem that we have, and I think it’s gonna be one of those little touches that we think people are gonna really love… Because it’s meaningful. It’s not little. It’s actually really, really significant, and it’s gonna have a material effect on the way people are able to do their jobs.
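The fan-out being described here - many people watching one guest’s serial console - boils down to broadcasting console output to every attached session. A toy sketch of the idea, not Oxide’s actual implementation:

```python
class SharedSerialConsole:
    """Toy model: every chunk of output from the guest's serial port is
    fanned out to all attached viewers, so several people can watch and
    debug the same boot together."""
    def __init__(self) -> None:
        self.viewers: dict[str, list[bytes]] = {}

    def attach(self, name: str) -> None:
        self.viewers[name] = []

    def write_from_guest(self, data: bytes) -> None:
        # Broadcast: each viewer receives an identical copy of the output.
        for buf in self.viewers.values():
            buf.append(data)

console = SharedSerialConsole()
console.attach("alice")
console.attach("bob")
console.write_from_guest(b"cloud-init: failed to render template\n")
# alice and bob now see the same line and can debug it together.
```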

So back to the boot-up…

Adam, or Jerod, I don’t think we took you all the way…

Not deep enough. I wanna go – take me to the tui. So I’m in the tui, and I’ve uploaded – or I’m already in the tui, so I’ve uploaded this config… I’m in this thing… What do I see as initial operator? You said this is your own OS, so it’s like –

Yup. So you are seeing the – it is telling you “I’m gonna give you a root of trust image, a service processor image, and an OS image, and I’ve done this for each of these sleds. This is now in progress for each of these sleds.” One of the challenges is always how do you deliver a beautiful interface that’s also transparent, and gives people the details that they need when things go wrong… So we very much have designed that with this in mind, so you’re seeing its progress, but you can also get as much information as you want about what’s actually happening, and where are we actually in terms of what’s actually going on in the system.
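A sketch of the per-sled progress being reported there - three images per sled, with the interface summarizing where each sled is. Phase names and formatting are illustrative, not the actual installer’s output:

```python
# The three images described above, delivered to every sled in order.
PHASES = ["root-of-trust image", "service-processor image", "host OS image"]

def install_progress(done: dict[str, int]) -> list[str]:
    """done maps sled name -> phases completed; returns one status line per sled."""
    report = []
    for sled, n in sorted(done.items()):
        state = "complete" if n == len(PHASES) else f"writing {PHASES[n]}"
        report.append(f"{sled}: {n}/{len(PHASES)} phases, {state}")
    return report

lines = install_progress({"sled00": 3, "sled01": 1})
# sled00 is done; sled01 is mid-way, writing its service-processor image.
```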

Again, one of the big advantages of us being transparent, open source - we want you to know if this thing goes wrong, where it went wrong, and what happened. You’ve got all these details, but what’s actually happening? And truthfully, that takes 20 minutes. You can do all that in parallel. That kind of all comes up. And then your configuration, provided that you’ve been able to actually connect via BGP, and you’ve got external connectivity, which - you’ve gotta deal with your own internal network to do that, and we’ve got the ability to get an NTP server, and so on - you’re up. And you’re gonna go hit a web console, and you’re gonna go provision.

That web console then is going to walk you through a workflow to go get set up with your IDP.

What’s an IDP?

Identity provider.

Yeah, your identity provider for auth. So again, in enterprise environments you’ve got usually a SAML-based auth environment, whether it’s Keycloak, or some larger, more unwieldy Microsoft products… And we were not gonna go try to replicate all of that. These are established authentication and identity validation mechanisms… And so integrating into that so that you have kind of a pretty clean workflow for being able to get that stitched together.

Now you’re the administrator, so what you are doing is setting up a silo, and that is kind of a boundary for – because one of the other important aspects of this is being able to operate in multitenancy. I know multitenancy gets thrown around a lot, but the necessity of both delivering quality-of-service guarantees to customers and having complete isolation is one of the very complicated and hard elements of running a cloud, and something that has been very difficult for extant systems providers to get right who are selling on-prem. Even for some of the kind of hyper-converged folks that entered the market in the last 5-10 years, this notion of multitenancy is a pretty tricky one to get right. But in the Oxide system you’re basically setting up a silo or a number of silos, depending on the customers that you’re serving.

So you, Adam, have like two different departments, and you would have those departments in their own kind of boundary. And then it’s as simple as inviting them in, and those users can then come in, just like they’re hitting EC2 or AWS; they can set up their credentials and create a project, and they’re off and running. They can go deploy instances directly, they can do it via the API or the CLI…

[58:22] Or the web console.

Yup, a web console… And off they go.

Okay. Is there an install of an Ubuntu at that point, or their flavor of Enterprise Linux, whatever they decide to –

Yeah, they can upload images that they wanna run… You can kind of promote those images to be available to everyone in the silo, or just someone in the project… So you have the ability to kind of select who you want to have access to what, as say the project lead. The purpose of this is to enable those end users, whether it’s SREs, developers etc. to be able to operate fully self-service. It’s like, get out of the shadow IT, where folks feel like they need to go swipe a credit card, because that’s how they can move quickly, and start giving them that same agency on-prem that they have in the public cloud.

And then from an operator’s perspective, back to you as the administrator, your job is to keep them running. Make sure that they have ample quota, and that they are accessing the resources that they need. But you should not have to be in the way of them deploying and running software, much like the cloud.
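The silo → project → instance hierarchy described here can be sketched in a few lines. To be clear, this is a toy model for illustration - the names and methods are made up, not Oxide's actual API, which is exposed over HTTP, the CLI, and the web console:

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    """A project inside a silo; instances live here."""
    name: str
    instances: list = field(default_factory=list)

@dataclass
class Silo:
    """A tenancy boundary: users and projects in one silo
    cannot see or touch another silo's resources."""
    name: str
    users: set = field(default_factory=set)
    projects: dict = field(default_factory=dict)

    def invite(self, user: str) -> None:
        self.users.add(user)

    def create_project(self, user: str, name: str) -> Project:
        # Self-service, but only for members of this silo
        if user not in self.users:
            raise PermissionError(f"{user} is not a member of silo {self.name!r}")
        project = Project(name)
        self.projects[name] = project
        return project

# Operator sets up one silo per department, then invites users in;
# from there the users are fully self-service.
eng = Silo("engineering")
eng.invite("adam")
web = eng.create_project("adam", "web-frontend")
web.instances.append("vm-01")   # stands in for launching an instance
```

A user outside the silo gets a `PermissionError` on `create_project`, which is the isolation property the silo boundary exists to enforce.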

Very cool. Well, we started the show with saying that you’ve just delivered your first rack, so congratulations again… How did you know you were ready to deliver? How did you know this was hardened to the point where you can deliver on that promise? What did it take to get there? How bloody are your knuckles, how upset are people on the inside, to some degree, to get there? How did you know, what did you do to know that this was mature enough to do that?

Yeah, so I think you always have a problem when you’re co-designing hardware and software. You’ve got the things that you can kind of revisit, and then the things you can’t revisit. And you kind of said this at the top - when you ship that hardware, that hardware leaves.

Yeah. It’s out of your control.

So the hardware has to be absolutely right. And you really need to drive that to be correct… And there are huge numbers of challenges there in terms of getting – the hardware is hard, and I think actually more directly the details really matter. And a very small detail can be the difference between hardware that works and a warm brick. So getting those details right takes a long time, and there’s a lot of iteration involved.

We actually have been pretty transparent about our whole journey. So we’ve got our “and Friends”, Oxide and Friends, where we –

The OG “and Friends”.

Yeah, exactly. Well, I think we can all be “and Friends.”

We’re all friends here.

We’re all friends here.

I was telling Jerod, I’m like “This is amazing, they have this podcast called Oxide and Friends. How novel.” [laughter]

Yeah, exactly.

We’ve loved getting the team on there in their own voice… So we’ve been able to shed light on some things that really have not had a light upon them. So getting the double-E team talking about bring-up, [unintelligible 01:01:15.29] bring-up lab. And getting through regulatory compliance. So when you have hardware, you can’t just ship hardware. You’ve gotta actually have – the FCC has to certify that you have not made something that’s gonna interfere with all the electronic equipment around it… And that’s compliance.

And by the way, the FCC has fixated on the state of the art, which are these 1U/2U systems… So it turns out when you’re building a rack-level system and you walk in to go get compliance, they are measuring you against these much smaller systems. And if you push back on that, and you’re like “Well, wait a minute… There’s the density of two racks running inside this one rack. This is the product”, they kind of shrug. They’re like “I don’t know, take it up with the FCC.”

And you just find, time and time again, there are few in the industry that are thinking at the rack level. In fact, the only demographic that has to think at the rack level are these end customers.

Yeah. And that’s not where you wanna think about it at.

Because that’s where it’s already baked. That’s the cake, you know?


As we went through this, it’s like, you can see why this is hard. Compliance was hard. And we’ve got a great Oxide and Friends talking about all our adventures in compliance. Which, by the way, people never talk about… Because what happens in compliance stays in compliance, historically. For any company, going into compliance is tough, because you’re gonna find things where it’s like, we are emitting – we’ve got this emission… At this particular frequency we have this emission that we need to go understand and patch up.

So there’s a lot of work there. But once that’s done, you’ve gotta have the software ready to go, and in particular the most important software that has to be ready to go is the ability to actually update the software. So there are two elements of software that have to be perfect when you ship. One is the actual root of trust, and the ability to actually indicate that this is Oxide firmware. To actually sign that firmware and put it on the root of trust, and lock down the root of trust such that it can’t be impersonated. That has to be done correctly, and that’s actually super-complicated, because that requires the generation of a secret. Namely, the private key that we generate, that is ultimately used to sign that firmware - that’s a secret. And how does Oxide keep that secret?

And I’m convinced that many other companies our size are like “Just lock it in the CEO’s drawer and don’t ever talk about it again.” But it’s like, that’s not really good enough. Because you could impersonate Oxide firmware in perpetuity with this secret, you actually need to go solve a really thorny problem, which is how you generate this securely, and how you store it securely. And that’s a whole thing.
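To make the stakes concrete, here is a minimal sketch of why that key is company-ending if it leaks. One loud assumption: real firmware signing uses an asymmetric key pair (only a public key lives in the root of trust), whereas this stdlib-only toy substitutes a symmetric HMAC so it stays self-contained:

```python
import hashlib
import hmac
import secrets

# Stand-in for the private key produced in the key ceremony.
# ASSUMPTION: real systems use an asymmetric key (e.g. Ed25519);
# a shared HMAC key is only a pedagogical substitute.
SIGNING_KEY = secrets.token_bytes(32)

def sign_firmware(image: bytes) -> bytes:
    """Vendor-side: attach a signature to a firmware image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def root_of_trust_verify(image: bytes, signature: bytes) -> bool:
    """Device-side: refuse to boot anything not signed with the vendor key."""
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

firmware = b"example firmware image"
sig = sign_firmware(firmware)
assert root_of_trust_verify(firmware, sig)              # genuine image boots
assert not root_of_trust_verify(b"tampered image", sig) # impostor is rejected
```

Anyone holding `SIGNING_KEY` can mint "genuine" firmware forever, which is exactly why generation and storage of the real key gets ceremony-level treatment.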

So there’s something called a ceremony, and this is a technical term in security spaces. Steve, this is something you and I learned a lot about. I did not appreciate the complexity. You’ve gotta have that exactly correct, and that’s a whole thing. You’ve gotta have the ability to update the software. That’s gotta be correct. The software’s gotta be able to bootstrap itself. And then you’ve gotta know the software that constitutes that minimum viable product. And there’s a whole lot –

And by the way, software update is enormously difficult. This is very difficult. Amazon is a good example of a company that does it really well. And Tesla is a company that is struggling because they don’t do it well. And VW. It has these very, very long shadows if you cannot do a good job of versioning and updating software. And it sounds trivial… I mean, it was THE feature that we had to make sure we had gotten right before shipping. And everything beyond that - well, obviously, there’s a huge amount of software that ships in this system. It’s more software than hardware… Which is a bit counterintuitive, because we’ve got a big hardware rack on the website, and it’s easy for folks to think about it as a hardware product, which it certainly is… There’s a whole bunch of software on there, but update is the fulcrum. I mean, that is the thing that allows for all of the rest of the software to continuously be improved. To go fix things that are wrong.

And we were, again, very fortunate that we were able to attract folks that had been working on this problem for their career, very passionate about this problem, that were front and center on working on that. But as Bryan points out, that’s one of a couple of really, really critical things that we had to get through the gates before we knew we were on the path of shipping.

[01:06:01.06] And then I think, Adam, important - how do you know… We’ve got a great luxury at Oxide, namely the product that we’re making is one that we ourselves wanna use. So we’ve got an Oxide rack that runs our software, that we are constantly updating and running on ourselves.

We are the first customer.

We are the first customer, and this is always essential. You know when you buy a product from someone and it feels like “Are these guys using their own product? Because this thing kind of sucks.” And if the engineers were forced to use their own product, I think it’d be a lot better. And we are a big believer - this is something that was instilled in me early in my career at Sun, where a real turning point at Sun… And you talked to Matt Ahrens about ZFS. One of the early moments for ZFS was us storing our own home directories on ZFS. And I’m very proud to be in that first batch of - whatever it was, eight people that had all of their data on ZFS. Because we had to go all-in first. And that machine, Zion, was a machine that we all volunteered to be on.

Part of the reason that we’ve deployed on ZFS at Oxide is because I’ve been on ZFS for whatever it’s been, 20 years. And when you’ve walked that trail with your own infrastructure, you have a level of competence in it, because you’ve been using it yourself. So we are using our product ourselves, and there’s so many things that have come out of our own use of it, where we have obviously discovered all sorts of issues that needed to be improved, and so on, but it’s also given us the confidence to know that what we’re building is actually – and to pull this whole thing together required a hardware rack that was to the point that it could really be used… We needed a lot to be in place to be able to even use our product ourselves. And boy, the first demo day that one of our engineers actually – and you can kind of see him working himself up to it, and Steve, I knew you had been DMing Lukeman to see if we could actually demo the whole rack together… And that moment where all of a sudden we had all Oxide software running on all Oxide hardware, and being able to demo that for the whole company is so catalytic, and was so energizing… To realize every single one of us at this company has been demoed today, and how great is that.

When was that demo? How far back was that demo?

That demo – because again, this stuff has to go through compliance first. Compliance was in January. So it was in early April when we were able to actually pull everything together, and then started iterating really quickly. And fortunately, the software had been developed in parallel…

Of this year, 2023?

Yeah. But if you go back to some of the other milestones of strongly believing that this bird was gonna fly - if you go back to the first bring-up of the first board… And we did a de novo design on the board. You kind of find there’s these reference architectures for server boards that everyone in the industry uses. And if you break the mold, which is, again, based on this sort of – it’s a PC mold from the ‘80s. If you break that mold, you’re kind of in the wilderness. And you find that this stuff is very poorly documented, for the very reason that we have reference architectures that everyone runs off of. So that first bring-up, on that first board, was in 2021. It was September 2021, right?

Yeah… October, because we had been getting it up through October/November. And then –

And then another big, big, big one was – because again, remember, early ‘80s is when the PC industry outsourced BIOS and firmware. And companies like American Megatrends came up, because it was IBM and the clones. And all the clones – everyone was consolidating around this outsourced model of “Let’s have one company or a set of companies write the firmware for all these machines, so we don’t have to.”

[01:09:57.23] And the outgrowth of that is you’ve got this massive, proprietary, opaque blob of software on enterprise machines that is not very well qualified. Definitely not understood. So ripping all of that out and writing a de novo set of firmware in Rust, and getting that to boot on an x86 board was actually maybe the riskiest thing we did. And when that booted up, that was another “Holy *bleep* We might make it.”

“This bird can fly, man…”

“It works!”

And that was a while ago. That was a long time ago.

Then on the software side, we’ve been working on the control plane, the hypervisor, and all that - all that had to happen long before we had hardware. So there was another early demo from Sean Klein on our team. I still remember that demo - demoing all of the software not on Oxide hardware. So this is on commodity hardware. Because that was another moment where everyone’s like “Holy *bleep*, we’re gonna pull this thing off.” And that was a year-and-a-half, two years ago; that’s a long time ago.

So on the one hand, it all came together on the rack in April, but this has been going on for a long, long, long time. Because it takes a long time to do all this stuff.

One other demo that was amazing - and you could just tell the two engineers, James and Greg, that were doing this demo, they were so giddy they could barely… But they did a good job of playing it off like it was just another casual demo. So they had a Minecraft server running, and they were chatting up about their Minecraft activities, and who’s doing what…

Running in the Oxide rack, to be clear.

Yeah, running in the Oxide rack. And one important aspect of any kind of cloud infrastructure is the ability for you to move workloads uninterrupted. So you need to be able to tolerate live migrating things around. So we’re watching this demo, and they’re small-talking, and just like giddy to give the final reveal… And at the end of this Minecraft banter, they had been demonstrating our live migration. They’d been migrating stuff all over the place, with no blips in gameplay.

Oh, nice.

And again, it was kind of another – because there’s a bunch of aspects of this that you need to go kind of stress-test. And it was yet again another one where the whole company on demo day is sitting there just like gobsmacked that this capability was running as well as it was under the hood.

And live migration is, again, one of those little things that if you don’t do, if you don’t build into the first product, then you have these islands of compute that you can’t do anything about. And it’s very, very important that we’re able to migrate things around, so we can reconsolidate the rack, so we can service it, so we can pull sleds, so we can add sleds… It’s like, you need to have this capability, but it’s gotta be built into the very lowest DNA of the product.
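The standard way hypervisors pull off what that demo showed is iterative "pre-copy" migration: keep the guest running while copying memory, re-copy whatever it dirties, and pause only for the last few pages. Here is a toy sketch of that loop, for illustration only - this is not Oxide's implementation:

```python
import random

def live_migrate(guest_memory: dict, dirty_rate: float = 0.1,
                 max_rounds: int = 10, stop_threshold: int = 4) -> dict:
    """Iterative pre-copy: returns the destination host's copy of memory."""
    dest = {}
    dirty = set(guest_memory)              # round 1: every page is "dirty"
    rounds = 0
    while len(dirty) > stop_threshold and rounds < max_rounds:
        for page in dirty:                 # copy the current dirty set
            dest[page] = guest_memory[page]
        # The guest keeps running (no downtime), re-dirtying some pages.
        dirty = {p for p in guest_memory if random.random() < dirty_rate}
        rounds += 1
    # Brief stop-and-copy of the few remaining pages, then resume on target.
    for page in dirty:
        dest[page] = guest_memory[page]
    return dest

mem = {f"page{i}": f"data{i}" for i in range(64)}
assert live_migrate(mem) == mem            # destination ends up identical
```

The final pause is bounded by `stop_threshold`, which is what keeps the blip short enough that a Minecraft session never notices.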

And then we bring it all the way to today, and we are gonna be finding things that are at the edge of Oxide and the customer environment, that - you know, some of which are smooth, and some of which have sharp edges… And the next six weeks, and six months, and six quarters we’re gonna be continuing to smooth that out and continually improve that, so that the product is even easier, and getting better as we go.

Yeah. So Steve, you mentioned that you’re in a hotel room… I’m not sure if you mentioned before we hit Record that you’re actually on-site with a new customer, and getting messages now… Like, this exciting start of the day, messages are coming in, so surely, you’re gonna learn a lot today, probably, and ongoing.

Yes, I may have been going to my DMs occasionally during this, to see how things were doing…


It’s all good.

You played it very smoothly. Thankfully, nothing must be that much on fire… Because we have had one guest who had to just run out in the middle of the show before. And I wouldn’t have blamed you if you had to, but happily, you haven’t had to.

I may have muted once or twice, but yeah… No, it’s exciting.

[01:13:50.14] You know, when you look at on-premises versus not on cloud, is that synonymous? And the reason I bring that up is like – the question really is “Who is an Oxide rack for? What type of customer?” And the second question, I suppose, is this shift of 37signals to move off the cloud. Should they have bought an Oxide rack? Given the prolific move from – okay, cloud is – you talked about rental earlier, Steve, and how it obviously doesn’t make sense to live in a hotel forever… Is that the same song, basically? Should 37signals be a customer, or are they a customer type for you all? Who should buy these things?

Yeah, I think, probably… But I would wanna have a conversation with DHH first, and make sure to understand what their explicit use case is… And this first product of ours is not intended to be applicable to every single use case on premises. It’s focused first on general-purpose compute. So we are definitely gonna have hardware acceleration in the product in future iterations, but there’s a large swathe of workloads that are well-suited for this, and it’s a lot of the on-premises workloads today.

By the way, I own a home, and I’m staying in a hotel room. So it’s also, like – you know, they’re the right kind of accommodations for the right use case, but the general customer set that we’re talking to, and that we’re engaged with, and that we’re serving right now are large organizations, typically… So you’ve got kind of Fortune 1000, regulated industries; you’ve got a lot of large institutions that are going to have a lot of need for rental public cloud computing, and are also gonna have a lot of on-premises IT infrastructure that they need to support for the next couple decades, as far as the eye can see.

And you even ask some of these folks, like, the most ambitious public cloud adopters, “How much of your workloads do you expect to have in a public cloud only model in 5 years?” and it’s hard to find anyone that will even say north of 50%.

Is that right?

So you have this just massive, massive – and these are measured in hundreds and hundreds of millions of dollars, in both places. And again, still having to pick from these kind of 1980s architectures that Bryan mentioned, and deal with having to then find software. And is that software provider that I’m using today getting acquired by maybe a megacorp who’s gonna raise prices? So the large kind of institutions and large enterprises are the demographic that we are focused on the most right now, because those are the ones that have reached out and said “Hey, we have spent a lot of time and energy on our public cloud strategy over the last five years, and now we’re kind of turning that raygun on premises and figuring out how we modernize and how we improve that.”

There’s another group that is really interesting, and we’ve spent a bunch of time with, and that is the large cloud SaaS companies. Companies that were born in the public cloud… They themselves are now spending as much as large enterprises in the public cloud, and I think the thing that I don’t like about the whole 37signals discourse is this cloud repatriation. It’s like, “It’s time to leave the cloud. It’s time to go back to on-premises.” And I think that’s totally the wrong conversation. What’s really interesting is that when you talk to these large cloud SaaS companies, they’re not saying like “Oh, we’ve gotta get out of the cloud. It’s a racket. We can do all this for less. We can do it better than the cloud.” Yeah, good luck. You think you can do it better than AWS does it? No.

It’s conversations that are around “How do we grow and go get access to more of our customers’ data?” In this financially-regulated industry we’ve got 10% of this four-letter bank’s data. How do we serve that bank and help them use our products for 100% of their data? Well, in order to do that, we’ve gotta extend our platform closer to where that customer is for a bunch of their data… And we can’t do that by cobbling together a kit car of five different enterprise providers and building a 500-person engineering team… And that’s where we’ve had some really rich conversations with these folks where they’re excited that they’ve got a vertically-integrated appliance that they can land their cloud SaaS platform on top of, and go deliver that into a colo, an exchange; places where a lot more of this customer data lives, or these customer use cases live.

[01:18:12.02] And so we’re really excited about that use case, because that now allows Oxide in a way to help kind of extend enterprise software beyond just public cloud use case, to a bunch of these other markets. And yes, they will be customers of Oxide, we will be partners, because we’re gonna be – you know, there’s kind of a nice, virtuous cycle here, where it can be kind of a helpful distribution channel, but also help these companies to improve latency, grow revenue… And those use cases are much more interesting than like “Oh, is the pendulum swinging back out of the public cloud, and back to on-premises?” It’s like, that’s kind of the wrong way to think about it.

I’ve got two really quick questions, and then we’ll let you all go. Sound good?

My first one is… Where are you guys storing the secret?

Oh, yeah. That’s a good one. We actually do wanna do an Oxide and Friends… Clearly, we’re not gonna tell you exactly where the secret is stored, but I think we do wanna go into some – because I think the technical details are really interesting. I think it’s important that we talk about the dot matrix printer that gave its life for the secret.

Oh, man, I like the sound of that.

All gave some, some gave all for Oxide…


And that dot matrix printer sacrificed itself for the greater good.

Met a Dremel it wished it had not.

It lived a short, but important life.

Was this like a scene out of Office Space, where they take it out back, and…

It goes beyond that though. You can’t do that.

Yeah, because we thought “Oh, this is going to be like PC Load Letter and we’re gonna take it into the field, and we’re all gonna –”

Yeah, yeah, yeah…

No, no, no. No, this is gonna be taken apart surgically, and destroyed surgically. So in particular, the Dremel goes through the microcontroller, because this dot matrix printer - why did this dot matrix printer have to die? Because it printed out the secret. It saw the secret.

It saw the secret, yeah.

And like, you can see the secret, but then they have to kill you. The dot matrix printer has died, and the secret is stored, attended by armed guards. Fortunately, society has some apparatus for storing such things, so…

Which landfill is this thing in?

Yeah, exactly. That’s right. We’ve got Russ Hanneman out there looking through the –

That’s right, for the thumbdrive.

For the thumbdrive. It’s a good, good question.

Ultimately, that ends in a safe deposit box at an unspecified institution.

That’s what I’ve figured, you know…

In an unspecified country.

Ultimately, it has to. But I think the apparatus there is really interesting, and it’s something that we actually wanna get into in the future. It was really fast-hitting, just like all of the precautions that you take, and that are really important, because the secret is super, super-important. The secret is company-ending, and you have seen this from – there are vendors that have lost control of their signing keys…

Oh, my goodness…

Yes. MSI.

This has happened to MSI? Wow…

Yeah, it’s happened recently to MSI. They lost control of their signing key, and it’s like, you’re done. It’s game over. You can never know what you’re actually running on.

You can’t trust what you’re running on.

Right, right.

And it is really, really important. So we’ve treated that with great care and great rigor… And for any customer - because another really interesting aspect of this is documenting this process really thoroughly - we obviously can’t tell you what the secret is, but we can be very transparent about all of the steps that we took to go secure it. So there’s a very crisp audit trail; we know exactly who was there, how it was done, all the steps and procedures that we took, when it was done, and so on. So it’s pretty neat.

[01:21:54.20] That’s cool. You guys should publish that ceremony. Not the details, but just the general flow, and how to really keep a secret, kind of a thing. That’d be a cool blog post, or GitHub repo, or something.

Oxide and Friends.

There you go.

Well, your hub has gotta be the podcast, right? And everything else is the spokes. So I agree with that.

That’s right.

Yeah, put it on the podcast first.

While we’re on the podcast conversation - you know, I think podcasting’s moment has kind of passed, in terms of like, there was a time where it was like everybody had to have a podcast. And I feel like people have kind of moved on; that’s the general consensus… But brands have wanted to have podcasts, and have podcasts… It seems like it’s a great thing for a brand. So many of them make podcasts that nobody wants to listen to. And you guys have a podcast that everybody wants to listen to; you’re also a brand, so to speak… A company… And I’m just curious, is there like a strategy around this? Is it just like you guys like to talk on microphones, or…?


Is there like a content strategy going on here, or is it just like “Yeah, we like to talk on the microphone”?

We should tell you about On the Metal first, because that was the first version of the podcast. And the strategy, such as it was, behind that - because it was also selfish, in that we wanted to talk to people that had been there as computers were built over the last couple of decades, and found that there was not a lot of recorded history of it… I mean, obviously, thousands of books written, but there wasn’t a lot of audio kind of telling the stories of computing in the ’70s, and ’80s, and ’90s, and 2000s, and even more recently. And we were seeking, and kind of were fortunate to run into or know folks that were at that hardware/software interface in the earliest days of Honeywell, and Intel… And getting them on record telling those stories. I think we had a pretty good instinct that this was gonna be content folks would want to listen to, but that historical-themed, kind of “How we got here”, like “Why we are in the state we are in” was really compelling.

And I think strategically, the thing that was clearest in our mind was that there are other technologists out there that would like to join us, and there are gonna be folks that we never met; they’re gonna be out of our network. And the podcast was a way of putting content in front of them that we knew was compelling, and we think that they would find compelling, too. So the initial thrust, such as it was, was that this is a way to help build the team. It felt like it was a bit of a bet, but not much of one, because it just felt like this was pretty obvious. I don’t think we were expecting just how quickly it would bear fruit… So we got that first episode out of On the Metal with Jeff Rothschild, who is an extraordinary technologist. Founder of Veritas, very early Facebook; first VP of Engineering at Facebook.

Early Intel.

Yeah, he was early at Intel back in the day… And Jeff’s extraordinary, and he was so generous with his time, and really – just a terrific conversation with Jeff. That podcast drops, and six hours later I’ve got someone coming in on LinkedIn, saying “I just listened to the podcast. I am leaving Facebook. We’ve gotta talk.” And that was Arjen Roodselaar, who is one of our founding engineers. Arjen was the first one that was totally out of network for us… But Arjen is such an important part of who Oxide is. And we share values with Arjen, because he was attracted by the podcast that we put out there. And he’s like “The folks that make this - I wanna talk to these people.” And early on, those stories we knew would be attractive to those kinds of technologists… Because the thing that we knew, that I think investors didn’t necessarily know, is that the world - technologists, customers - knew that it was time for this company. And that if we could put the bat signal out there saying “Hey, here’s what we’re doing. Come join us”, we knew that technologists and customers would raise their hand.

[01:25:57.20] So the strategy, such as it was, behind the podcast was that it’s a way of getting that bat signal out there. By the way, it’s doing it in a vector that we just love. We love podcasts, we love listening to them… We think it’s a really important vector. So yeah, On the Metal was huge for us.

But we didn’t talk about Oxide at all, except for a couple of advertisements… Because we just recorded a couple of tongue-in-cheek ads, and listeners, after they had listened to the 10th On the Metal, 12th, and got the same ads, they started just protesting, like “Please, God, change the ads.” We actually had one listener submit an ad for us…

Oh, my gosh.

And he just said “Just use this. We’ll start creating ads for you.”

But we didn’t talk about Oxide at all. And I think the morph into Oxide and Friends was not specifically just to talk about Oxide more; there’s plenty of topics on there that have absolutely nothing to do with us - the space, and some of the problems in computing, and cloud computing… But to provide a forum where we could go deep into areas that no one talks about. No one talks about bring-up, because bring-up is ugly, especially on first boards, first systems. And no one talks about compliance, because again –

…there’s a lot of words. It’s ugly, and folks are scared to expose that to their customers. They’re scared to expose that to the market. And what we’ve found is that that transparency has actually endeared us to this demographic of customers, because they love that they get to see it all. They kind of get to see where it came from, why it was built, who built it, why they built it… And with that level of transparency - you know, even 5-10 years ago in my career, you were always like “Ah, do we really wanna share this? Do we really want this out there?” You’re thinking of all the downside, right? And once you start sharing stuff and you see that positive feedback loop, it emboldens you to wanna share more and more and more, and I can say we are definitely not at risk of sharing too little.

No. Not at all. I mean, it’s all contextual, that’s the problem. People get so scared about – I mean, obviously, the printed secret with the dot matrix, that’s one you keep very close. But your ideas - some of them are worth keeping close to the vest, but not like secret forever. And they’re all contextual. What you are doing is maybe drastically different than what most are doing, and they’re not in the same space. So they can’t just like “Transplant this great idea I heard on this podcast from Steve and Bryan, and bam, my company is successful.” It’s just not like that.

So many people are just not building in the public… And not like literally sharing every possible secret thing ever. There’s some things that you do keep, that just should remain private. But most of it, just put it out there, because you’ll probably attract the better people you wanna work with anyways.

You’ve just made a really important point, which is like, someone that you might worry about wanting to take an idea and go do it, you find that some of those people actually join the cause.

They wanna join you.

Or they become customers instead of wanting to go build for themselves.

Yeah. They’re like “Hey, I don’t wanna take on all that risk you all did. Everything you all did - that’s amazing. I just wanna work with you all, not instead of you all.”

A hundred percent.

That’s right. And I think also, we knew our customers, because we’d been our customers. The customers in this space, for on-prem computing, have been gaslit by their vendors. And their vendors are not just not transparent, they’re deliberately opaque. And when you are responsible for running that infrastructure and the system is misbehaving, and you feel that everybody is lying to you or otherwise obfuscating what you know to be the truth, namely the system is not working… Like, we knew that a real differentiator for us would be that transparency. And we’ve gone to an extreme that I think is terrific in providing this bright light into these things that have not had a light upon them. And that’s not just opening up all the software - though we’ve done all that, too - but it’s getting all these engineers to talk about the actual real experience of getting this stuff done and brought up.

[01:29:54.24] And actually, I think it just dropped this morning - there’s a GOTO Chicago talk that I gave on the rise of social audio. So Jerod, you were saying that kind of the time for podcasting maybe has passed… I think we are in a golden age for social audio. I think social audio is really, really important. I think it captures something different than we get through these other media. Oxide and Friends was actually born on Twitter Spaces.

Yeah, I remember you telling me about Twitter Spaces the last time we talked. You were big on it. And I don’t even think you were making it a podcast back then. It was just Spaces only, wasn’t it?

We started recording really early… And fortunately, we didn’t record the first one…

That’s a bummer, actually…

Well, what we learned is actually someone did record it.

Oh, they did?

Always be recording.

Always be recording. I absolutely agree with you, Adam. Always be recording.

Always be selling… Just transplanted to “be recording”.

Always be recording is an Alex Blumbergism, and –

Oh, is it really?

That’s Alex Blumberg. That’s Ira Glass, This American Life. Always be recording.

I didn’t know that. I thought I invented that. Geez. This whole time.


Just have a good idea like somebody else. Okay, fine.

There you go. And it is really important, because it’s a different medium. So I think social audio – so this GOTO Chicago talk I gave on the rise of social audio and why it’s important for engineers. So what I would like to see - I think that people focus too much on “Oh, I need to create this well-edited, well-produced podcast.” Obviously, I love the Changelog. That’s great. It’s a lot of work, too.

Social audio, throwing a Discord out there, recording it and throwing it out via an RSS feed is not a lot of work, actually… And getting engineers in (I think) any company, getting technologists… Getting people that are solving real problems together to talk about the struggles they had together solving these problems in detail - recording that and getting that out there is enormously valuable. And I actually think that one of our problems - not to go overly large on you here, but one of our problems societally is that we have done too good a job of insulating one another from the details of what we’re building. And as a result, when people look at the phone, it just feels magical. When they look at the cloud, it feels magical, because we’ve been insulated from the actual details and from the humanity that’s involved in building these things.

So I think it’s actually really important that we talk about these details so we can let people know that “By the way, yes, there are still people that are still building computers. And yes, it’s interesting, and it’s hard, and it may speak to you.” Maybe you’re interested in these details intellectually, maybe you’re interested in these details at a deeper level, where it’s a deeper calling. And I think one of the disservices that we have done to young people especially is to imply that everything’s been done and everything’s solved. And it’s definitely not. We’re all out here, solving real problems, but we need to be transparent about that, so people can get engaged and see that. Sorry, that’s a much bigger answer, I think, than you were probably anticipating, Jerod…

No, man. I like that answer a lot.

All answers are good answers.

We are big, big social audio proponents. Not on Twitter Spaces anymore, thank you; no, thank you on that. I wanna get off Mr. Musk’s wild ride. But we are on a Discord that we then record, and that’s been a really – actually, that’s been really important, because it gives you a chat vector. So people can type comments, and then you’ve got people speaking on stage… Which is really, really helpful, because it allows people to participate… There are lots of people that want to participate in the conversation, but don’t actually wanna raise their hand and speak. And on Twitter Spaces the only way to participate in the conversation was to actually take the mic and speak.

It’s really nice on Discord to have people be able to point to links, or contribute to the conversation in a way that doesn’t require them to do that. And then if they wanna get on the stage, they can get up on stage, too. So it gives you that flexibility. Huge proponents of social audio, and - yeah, again, this GOTO Chicago talk just came out today.

Is this your next company you’re trying to build, Bryan, or is this just like a –

[01:34:10.21] It’s such a – I think it’s like open source, actually. Open source is not a business model. Open source is a technique, a tactic, something you do as part of building a different kind of business. And it’s the right way to build a different kind of business. Open source is not a business model for us. Open source is something that we do as part of who Oxide is.

For me, social audio is not a business, social audio is a part of – we do what we do at Oxide as part of who we are. What Steve and I are in our nucleotide base pairs - we are this computer company. The next business is this one, because we believe that we’re building a generational company. Well, we’ve got to, Adam, in order to be able to release to the homelab in our 2050 keynote.

Yeah, we have a lot of work to do.

I know. You’ve got to commit, 2050.

That’s right. You can’t have another business. 2050, coming to a homelab near you.

What the heck will I be in 2050? – Doing? I don’t think I’ll be playing with it, so you’ve gotta do it faster. Can we do it like 2040, 2030? I can do 2040, maybe. But 2030?

We’ll split the difference. 2040. 2040, but that’s the last and final offer.

Yeah, let’s do it for 2030.

We’ll take it. Alright, guys, thanks so much for hanging out with us. This was fun.

Oh, this has been a lot of fun. I love what you’re doing here.

Yeah, this has been fun. I like this Changelog & Friends thing.

Yeah, it’s good.

Thank you.

We’ll have to get you on Oxide and Friends, and do a cross-over episode.

Happily. We’d love to.

What you need to do is send us a rack, so we can test it out and do fun things… [laughter] And then we can truly speak contextually.

Come full circle.

Or you know what - here’s one better: invite us to your next customer install, as media. We’ll come there, and help you document some of this stuff. We’ll do some fun stuff.

That’d be fun.

Why don’t you come to our first customer install at Oxide?

When’s that? Is that in the past?

Up to Emeryville, we’ve got a live running kit. We’ve got the whole history of boards kind of laid out…

Oh, that’d be fun.

It’d be great to have you up.

Let’s do that. Alright, guys…

Alright, friends. Thank you so much. Bye, friends.

Bye y’all.

Jerod, you’ve gotta bone up on Silicon Valley, man.

Silicon Valley.

Ah, I’ve got a lot of work to do… [laughter]

Yeah, exactly. Get on it. Come on, Donald.

Alright, see you guys.

See ya.

“Come on, Donald.” That’s awesome.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
