Ship It! – Episode #29

Find the infrastructure advantage

with Zac Smith, Managing Director of Equinix Metal


Zac Smith, managing director of Equinix Metal, is sharing how Equinix Metal runs the best hardware and networking in the industry, why pairing magical software with the right hardware is the future, and what Open19 means for sustainability in the data centre. Think modular components that slot in (including CPUs), liquid cooling that converts heat into energy, and a few other solutions that minimise the impact on the environment.

But first, Zac tells us about the transition from Packet to Equinix Metal, his reasons for doing what he does, as well as the things that he is really passionate about, such as the most efficient data centres in the world and building for the love of it.

This is a great follow-up to episode 18 because it goes deeper into the reasons that make Gerhard excited about the work that Equinix Metal is doing. This conversation with Zac puts it all into perspective.

By the way, did you know that Equinix stands for Equality in the Internet Exchange?

Featuring

Sponsors

Fly.io – Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Incident.io – Create, manage, and resolve incidents directly in Slack. Use the /incident command to create and manage incidents. This command lets you share updates, assign roles, set important links and more – all without ever leaving the incident channel. Each incident gets its own Slack channel plus a high-res dashboard at incident.io with the entire timeline from report to resolution. Learn more and sign up for free at incident.io — no credit card required.

Raygun – Never miss another mission-critical issue — Raygun Alerting is now available for Crash Reporting and Real User Monitoring, to make sure you are quickly notified of the errors, crashes, and front-end performance issues that matter most to you and your business. Set thresholds for your alerts based on an increase in error count, a spike in load time, or new issues introduced in the latest deployment. Start your free 14-day trial at Raygun.com.

Equinix Metal – If you want the choice and control of hardware…with low overhead…and the developer experience of the cloud – you need to check out Equinix Metal. Deploy in minutes across 18 global locations, from Silicon Valley to Sydney. Visit metal.equinix.com/justaddmetal and receive $100 credit to play.

Notes & Links


Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Well, hi, Zac. I’ve been looking forward to this for a really long time… Summer of 2019, specifically. Welcome, and thank you for making this happen.

Well, Gerhard, it only took us a year and a half, but we’re ready now.

Yeah. The last year didn’t count. It was a crazy one, right?

Exactly.

I have many questions, but I’ll start with this one… Were you at KubeCon, by any chance?

This past – in October was it?

Yeah, the North America one.

No, I wasn’t there. We have a great team there, and we were doing our cloud-native cookbook. I’m not sure if you’ve got a copy.

I didn’t, no.

Yeah, we decided to organize an open source cookbook that we all did during the pandemic, which was - you know, we were all stuck at home, doing something, and so we got, you know, I’m gonna call it cloud-native luminaries to give us their favorite recipe, and we made a physical cookbook. It’s available on GitHub, so if you wanna add your recipe…

Okay…

So that was the big giveaway. I couldn’t go, I was unfortunately busy with something else… But we did have a pretty good team there; it was a great turnout. Really nice to see people come together.

Right. Okay… So that cookbook sounds great. The fact that you weren’t there - it’s okay; I missed things, but I didn’t miss you, so I’m feeling better not being there in person… That would have been a disappointment.

Do you typically go to KubeCons, by the way? Do you have time for KubeCons?

I mean, I used to in the past. Now it’s a little bit different. I’ve gone from being – let’s call it a CEO of a startup, a company called Packet, which I ran for many years… To now being a busy executive at a Fortune 500 company, which - you know, I have a little bit different set of responsibilities, and part of that is with customers in the field, but a lot of that is also internal and part of our strategy… And you know, I’m gonna call it corporate functionality.

[04:02] So I haven’t been to any conferences over the past few years, but I used to go regularly. I was on the road 2-3 weeks a month, including conferences. To me, conferences have always been – especially things like KubeCon, and earlier I remember being at DockerCon in Barcelona in 2015… And so the best part about these conferences to me is the hallway track; I just love seeing and meeting and hearing from people – you just get that pulse on what’s going on when you can go around that hallway track and see what people are talking about. So to me, that was always my favorite part of going to conferences.

What was my other favorite one…? CoreOS Summit, Monitorama, that was a good one… KubeCon, yes… You know, I didn’t ever go to like the Gartner IT Summits; those things weren’t my gig.

Okay. So you’re right, that’s one of the things which I missed the most about being there in person… So even though I did attend this KubeCon, it was a virtual attendance… But I know what you mean; your responsibilities change, and things are a bit different. You’re trying to be there as much as you can in spirit, if not in person… But you’re there, because I’ve seen some pictures you were retweeting from KubeCon. That’s why I was thinking maybe you were there.

Oh, we had all of our spies. I think about maybe 15 people from the Equinix team went to KubeCon… So it was good. And you know, my favorite conference ever - I don’t know if you have a favorite… My favorite was always the ARM Tech Conference.

Interesting.

The reason why I love the ARM Tech Conference - because it was 100% hallway track. So they would do one kick-off meeting in the beginning, some kind of keynote thing, and then they make you all go away, and they would take over Clare College in Cambridge, and they would take over the professors’ rooms, and you would each have a minder from ARM, and then they would just set up speed dating between all of the attendees, and you would do half an hour or 20-minute meetings, and then you’d all switch. And then you’d switch again. So it was basically just hallway track. It was so cool.

That’s amazing. I think that sounds a little bit like Priyanka’s happy hour at KubeCon… But yeah, I really enjoy that format. I know what you mean.

Okay, so I have been a fan of Equinix Metal for a really long time… And actually, it’s been so long that it was called Packet. So it’s been many, many years. And I’ve already shared my perspective why in episode 18, with Marques Johansson and David Flanagan, Rawkode… So there’s nothing else to add from my side. But I’m wondering, how was the transition for you from Packet to Equinix Metal, besides you not being able to go to conferences, which we already covered? [laughs]

Without having to change my T-shirts?

Yes, that as well. [laughs]

Yeah, that’s a great question. There’s a lot of emotion built into that for me. As a founder, you spend years kind of thinking of something, dreaming/working on it, putting your soul into it, and then in my case - we were acquired by a great company, Equinix, and your role changes. It’s no longer this thing, especially as a founder/CEO, which I was kind of the leader of that, along with my colleagues, and obviously the whole team… But you know, there was a lot of personality built into Packet. Packet was very much a reflection of the things and values that the founders cared about. So that is different when you then go into a much more established business, and you have to figure out – it’s certainly a totally different challenge in how to meld value systems, culture, a brand, obviously… You know, your customer make-up and how you engage and whatnot, and just the pulse of how you run your business.

So for me, that was one of the bigger shifts, was just going from – I mean, Packet was 150 people at its biggest, and we were very much focused and built around speed… How do we find product fit, how do we service our customers, how do we listen…? Because we weren’t market-leading anything; we were just trying to prosecute a vision and a mission around making hardware automated for developers. That was as simple as it got. And where that would take us, we weren’t even exactly sure.

[08:11] And Equinix is a much different business; we’re well over 10,000 people, we have 23 years in business, 10,000 customers… It is big, and it has a robust and strong culture of its own. So that was a big shift, just moving from kind of – I’m gonna call it the upstart, forward-thinking, future-driven startup, to a market-leading Fortune 500 business, and then figuring out my role within that.

And then of course there was – I’m gonna call it personal/emotional ties. I’ve mentioned this to a few other people recently, but this is the second business that I have sold. The first business of mine - I joined a gentleman by the name of Raj Dutt in the early 2000’s, a company he had started called Voxel, which we then sold to a public firm called Internap, back in 2011. And I was much younger at the time, and I had never done that before, and frankly, I didn’t deal with it very well. You know, you’re taking something that was so personal to you, and then suddenly we sold the business, and then I still was taking it very personally. And I wasn’t ready to deal with it.

Raj went off and founded a company called Grafana. I started Packet… And what I did is during this transaction, when we knew we were gonna sell the business, the first thing I did - I talked to my brother Jacob and I said “Man, we’re gonna have to get ourselves a therapist.” Because a lot of this is just dealing with the emotions of a founder… Because I knew we would change our name, and things that were special to us would not be important or the right things for Equinix, and things like that… And sometimes you could take that very personally.

So I think my experience, the first kind of go-around helped me to, in one way, be a little prepared for it, and in the other way just know that I was gonna go through it. So last year, when we changed from Packet and rebranded the business as Equinix Metal, it was still a journey, and I kind of take a little bit of pride that, you know, I’m gonna call it thousands of people throughout the industry still call it Packet, and won’t be able to replace the words in their mouth… So I was like, “Okay, fine…”, because our brand was meant for something that is important to people.

That’s the other thing… So one, your role changes dramatically from what you’re doing and where you’re at, and two, you’ve gotta deal with some stuff as the founder, around maybe the mission that you’re on, or the reflection of that for yourself, and help channel that energy in a positive way. So those were, I would say, the two biggest things.

Of course, there have been other things, which are both opportunities and not related to our product and our capabilities, and our scale, and all kinds of other things… But those are the ones that are most personal to me.

Okay. So there’s a follow-up question, but first I have to ask another question, which is linked to what you said. It was very comprehensive. Thank you very much. The precursor is “Why do you do what you do?”

Like the big Why, or the little Why?

The big Why.

Yeah, I mean - my wife asks me that pretty often… So a few things. I can’t give you one answer, but I would say that I love creating things, for sure. I love being involved in that, I love leading it, I love tackling unsolved problems… Just building. So I am a native builder, and you can kind of tell with my personality type, I’m pretty action-oriented, I’m curious, I wanna kind of unravel and understand, and then I wanna do something about it. If I identify a challenge or a problem – my wife hates it, because every time she complains to me about something, I try and fix it. And she’s like “I’m not trying to have you fix it!” I’m like, “I know, but–”

“Just listen to me, damn it!” [laughs]

Yeah, she’s like, “No, I just wanna –”

I know how it goes, yeah. I know. I can relate to that.

So that would be, I think, why I’m an entrepreneur, and why I have that spirit to create things… And it involves – I invested in some companies, I like to help other founders… You know, I always am interested in that creation aspect, and that really kind of – I’m gonna call it “satiates” a need within my own mind, which I’m always just a very curious person.

[12:05] And then the other one, which is like “Why this?” After Internap, I kind of vowed not to play in the world of internet infrastructure. I was like, “I’m gonna go get myself a real job, something that isn’t 24/7, with all the challenges of our plumbing world of the internet…” And of course, two years later I started Packet, and I said “Nah, I wanna work on internet infrastructure, and build a better internet.” [laughter] And why did I do that? First and foremost, I really believe in the foundational capabilities that we were able to provide, and I think still provide… And why do I believe that? Because I think technology really does best, and our innovation around technology does best with diversity. And that’s not just diversity in people, and diversity in thought, but diversity in businesses that can take advantage of technology. So I kind of saw that there was a real need to provide access to fundamental technology to an incredibly diverse set of users and projects and companies; that if we didn’t work on making it easier to consume hardware no matter what it was, where it was, or what you put on top of it, that part of that – you know, that messy part of innovation, where the magic of software and hardware come together would go away… Or at least would become much more bespoke, and I’m gonna call “unavailable” to most people. So that was kind of one of the Why’s.

And then the other big Why is that I really firmly believe in – basically, you could kind of say, with all the challenges we have - and I think this week there’s a big climate change summit going on in Europe - you can kind of say that some of our biggest challenges today can be looked at as something where we could go back and reduce, and maybe even think up a world where we use a lot less technology… Technology being computers, but technology even being like cars, or something.

Or we could think of a forward way, where we figure out how to use technology in better ways, more sustainable manners, and use that as kind of a lean-in to the technology side, versus lean out… And I’ve always been of the latter, how to lean in. That was one of my reasons – I believed that from just a pure resources perspective we have to create the right software and the right hardware together. It’s not about making it 10% more efficient, it’s about making it 10,000 times more efficient…

Oh, yes…

…it’s the only way that happens… And I’m sure we’ll get to the chips and the things at some point, but–

We will.

…to me, that was really one of the imperatives. And the other imperative which is why I’m so excited to be where I am here at Equinix is to change the business model that we fundamentally have around the distribution of technology. When you look at a computer, like a server, 70% of the carbon impact of the computer happens in making the computer, and only 20%-30% of it is in using the computer for its whole lifetime. And then of course, there’s some residual effect of actually recycling - if people actually even do that - of the computer.

I really firmly believe in moving to a more circular economy, and when we think about the big movements we had over the last 10 or 20 years, I grew up in the movement from IT to cloud… Which was really not a technology shift as much, in my opinion, as a consumption model shift. It was aligning outcomes of the provider; instead of “I want to sell you this server”, it says “I want to help you use this thing”, which is close to “I want you to have the outcomes that you want.” And without addressing, frankly, the OEM and the silicon business models, which currently are in the business of – if they don’t sell you another chip, they don’t make any money. Now, that’s just not a sustainable business model for the future of our world.

That’s right.

[15:46] So the same thing is happening with OEMs, who are early – like, they’re starting to make that shift with as-a-service. Just a little bit. But in reality, still - if they don’t build another server and sell you another server, they don’t get paid. And to me, these are massive, multi-hundred-million-dollar businesses. And frankly, especially with the silicon, the main control point for intellectual property - which doesn’t have a way to monetize that intellectual property in a sustainable manner. So to me, that’s the other reason - I started Packet with the idea that we could impact and change the way that technology was distributed and operated… Because we’re gonna go through these kind of fundamental shifts in what the things are, and who’s using them, and what the expectations are. Well, if there’s gonna be a new operating model, now would be the time to introduce a sustainable business model to the whole thing as well. So that’s the other reason why – that’s the other big Why.

My follow-up question to this was “How did Equinix Metal change your Why, coming from Packet to Equinix Metal?” But I think we can draw the conclusions, we can draw the parallels, because some of the things that you were alluding to are scale, first of all, and the complexity of the whole supply chain when it comes to infrastructure.

I think that people can kind of imagine how Equinix Metal makes it easier… Not to mention the interconnectivity, all the data centers. So all that - basically, you have a lot more leverage to use in delivering that Why. Anything to add?

I mean, there are some positives and negatives to it, right? Back in Packet days, we were – I’m gonna call it an “arms dealer” for transformation, as it pertained to this level of the stack… Which was just like – I always thought, like - hey, there’s real estate, e.g. data center as a service, which is a pretty scaled model. It’s efficient for capital, there are many providers, of which Equinix I would say is the leading provider, definitely by market share… But it’s a scaled business – if you wanna access one of the world’s best data centers, you don’t have to build one of the world’s best data centers.

And then there was this thing called IaaS, which was everything from the computer and the network, through databases and load balancing. It was a whole thing. It had a ton of verticalized software opinion built into that. So what Packet was doing was trying to democratize that lowest layer. I called it hardware as a service. How do we enable the consumption of hardware and make it able to touch software? And as an independent, we had some advantages, which were hard to realize at our tiny scale… But in theory, we had advantages with which we could help do this anywhere. That was an interesting concept; we had put out edge deployments with tower companies, we had done private deployments with some large enterprises… We were really trying to figure out “Hey, where would this be needed?”

Now, obviously, as Equinix, we were a little bit more focused on our own large-scale real estate portfolio… But with that come some incredible advantages. No longer do we have to think “Wow, how could we change this?” I’m like, “Well, we have 240 buildings in 65 markets around the world. We could start there.” And we can also help our existing 10,000 customers, who were really struggling, frankly, with the breadth of our platform, and also just figuring out how they are going to react and make a difference related to their ESG initiatives around climate change. And what they’re doing is they’re saying “Can you help?” And what I’ve found is that’s been a really, really collaborative engagement. And what I’ve been so pleased about working with Equinix versus just (I’m gonna call it) another large company is that Equinix has the word “ecosystem” everywhere. And in fact, Equinix stands for Equality in the Internet Exchange. That was the name of the business.

Hm, interesting. I didn’t know that, actually.

Yeah. So it was created as a neutral place for the internet to grow. So what I’ve been really pleased about is the support I’ve gotten throughout the leadership, from our CEO Charles, throughout the rest of the organization and whatnot… In doing a lot of this innovation in the open. So I get to chair the Open19 project within the Linux Foundation, which is where we work on sustainable liquid cooling, power, and distribution models for computers… And also in the CNCF for some of our provisioning capabilities and whatnot… So that’s been a real joy, as well as looking at the scale and breadth of what we can do to impact and really start a flywheel, moving way faster than what we could have done on our own.

So why do I keep getting drawn back to Equinix Metal? And you’re right, it’s hard for me to say that, but I have to. I just wanna say Packet, but… So what keeps me getting drawn into Equinix Metal - the reason why I keep coming back is because you have the best hardware, hands-down. So I have been running bare metal servers over my entire career, with LeaseWeb, OVH, Scaleway (they used to be called Online; now they merged), Rackspace, SoftLayer… Remember them? Remember SoftLayer?

Of course.

And in my opinion – so having all these data points, I think that Equinix Metal does it best. What’s the secret?

That’s a good one. I don’t know. First and foremost, I randomly landed in this industry and figured out that I love it – I love this industry, by the way. I love the community around it – the plumbers, the builders. It’s just one of these unique places of the internet that is so apprentice-driven and collegial, which is, I think, really special in terms of people who build the lower layers of the stack. It’s not something that – I go to cocktail parties or whatever, and people are like, “What do you do?” and I was like “I work at Equinix.” They’re like, “The gym?” [laughter] So most people don’t know what we do… But underneath the hood, I think everybody really works well together. So that’s kind of just like a fun part… You have to have passion, and I have a strong amount of passion for this part of the innovation stack. So I think that’s it.

But when we started Packet, we had this super, super-clear vision… And I think I’ve already repeated it once here, but it was “How can we automate hardware, no matter what it is, no matter where it is, and no matter what software you put on top of it?” That was the thing. And what we knew is we knew our place in the world. That if we could enable a highly programmatic way to interact with hardware, no matter what it is - and that’s a deceptively simple statement; it is actually extremely hard.

[24:07] And so I was like, “This is what we’re gonna be the best in the world at. We’re gonna figure out how to enable hardware no matter what it is”, to this massive world of software, call it 40 million developers in the world, who wanna use all the stuff, right? And they need to make their amazing software work with this amazing piece of hardware… Which, by the way, what is “this piece of hardware”? I think it’s one of the most misunderstood things that goes out there. People are like, “Oh, computers are a commodity…” You know, except if you’re trying to do something special that changes the world, like make your car go left and right, or talk at the walls all day long, or carry around a supercomputer in your pocket called a cell phone all the time, that has all the widgets, right? Like, that’s where hardware and software come together and create this really magical thing.

So I think our focus on that just pure mission, which was - we knew that we had enough there to prosecute, and we could spend the vast majority of our careers trying to make hardware accessible for software, knowing the pace of hardware is changing so rapidly. We’re in the golden age of hardware right now - the kind of competitiveness between the silicon manufacturers, the business model changes, the hyperscalers, the demand and volume driven by mobile, interacting with the ending of Moore’s Law in its own regard, has just created this huge space for innovation, I think, in software. You also kind of have this natural thing which I’m happy to play a little bit of a part in, where now multi-arch in the data center is a reality…

Oh, yes…

When 5-6 years ago that just wasn’t, right? It was x86 or you’re done. And now you’ve got serious languages that have been made multi-arch, and have the build capacity and the CI pipelines and the related ecosystem to make that continue and build upon itself. That’s happened faster than I expected, where the software has met the hardware… And the hardware is also changing so rapidly that there’s just so much to do.

So I envision that over time we’ll create a much better what I’m gonna call HCL (hardware compatibility list) for the internet, that effectively can be an idempotent view of every single piece of hardware ever, and that would allow all the software to be able to choose and understand how to work with it. We’re pretty far off from that… But I think we can get somewhere there.
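To make the “HCL for the internet” idea slightly more concrete, here is a minimal sketch of what one idempotent hardware record might look like. Nothing like this exists today, and every type and field name below is hypothetical, invented purely to illustrate the concept:

```go
package hcl

// HardwareProfile is a hypothetical, idempotent description of one server
// model: the same inputs always produce the same record, so software can
// reason about hardware it has never physically touched.
type HardwareProfile struct {
	ID           string   `json:"id"`           // stable hash of the fields below
	Vendor       string   `json:"vendor"`       // e.g. "Supermicro"
	Architecture string   `json:"architecture"` // "x86_64", "arm64", "riscv64", ...
	Cores        int      `json:"cores"`
	BaseClockMHz int      `json:"base_clock_mhz"`
	MemoryGiB    int      `json:"memory_gib"`
	NICs         []string `json:"nics"`         // link speeds, e.g. "2x25GbE"
	BootMethods  []string `json:"boot_methods"` // "UEFI", "iPXE", ...
}

// SupportsArch lets software ask the only question it usually cares about:
// "can my build run here?"
func (p HardwareProfile) SupportsArch(arch string) bool {
	return p.Architecture == arch
}
```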

But I think that’s the answer I’m gonna give you - just being super-crazy, laser-focused on what we do… And I’ve spent a lot of time in my first few years at Packet – I’m not gonna say fending off revolutions, but a lot of people… The clearest one on my mind was I almost had most of my management team walk out, because they said we had to launch load balancers, otherwise nobody could use our thing. I said, “No… I think software will figure it out. Let’s just provide really easy, smaller hardware instances, and they’ll figure everything else out”, and they’re like “No, it’s too hard. We’ve gotta do load balancing.” And then look a couple years later… You’ve got Ingress controllers, and service mesh, and all these kinds of different – BGP control in Golang… It’s cool. I mean, it’s not for everybody, but software solved the problem.

So a lot of that was just staying super-focused on what we did, and I think some of those other providers that you mentioned, of which I’m huge fans, and know the founders of most of those businesses, that moved our industry in their own way… But they became (I’m gonna call it) all-purpose platforms in a lot of ways. And that’s probably right, in some regards… A lot of the industry has moved to that direction, especially with hyperscale clouds, having these just robust software catalogs and ecosystems… We’ve been fortunate enough to have venture backers at Packet who really saw our vision for what it was, which is staying fundamental in the primitives business… And frankly, here at Equinix, which really knows that it is a builder of physical infrastructure that can move at software speed. That’s our job. Our job is not to do all the things; our job is to enable an ecosystem so that they can do all the things. So that has allowed us to continue to focus in on just like “Let’s be the best at this, in the whole world.” That’s it.

[28:12] I can see the importance of that, and I can see many decisions which were controversial, such as “Let’s not build a Kubernetes.” Like, what?! No, everybody’s building a Kubernetes. What are you on about?!

Oops… I forgot to build a Kubernetes service…

That’s exactly the title, yes… That was one of the great blog posts which I had the pleasure of reading… And it shows in the small things, as well as the big things. But for me, one of the reasons, again, why I loved Packet, and now I love Equinix Metal, is that I could provision an instance type, the c3.small, with the highest CPU clock speed ever. You can’t get a faster CPU clock speed anywhere. It turbos to 5 gigahertz. Now, that creates other problems… But my Erlang benchmarks run fastest on Packet. It’s unreal; like 20%, 30% faster… And you can’t reproduce that anywhere else. You can get a dedicated instance in AWS and it will not be faster. And that was surprising… And that was like four years ago, or three years ago.
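For readers who want to try a c3.small themselves, here is a rough sketch of creating one through the Equinix Metal API from Go. The endpoint path and request fields (plan, metro, operating_system) are written from memory and may have changed, so treat them as assumptions and check the current API docs before relying on this:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Request body for a new device; field names assumed from the
	// Equinix Metal v1 API -- verify against the current docs.
	body, _ := json.Marshal(map[string]string{
		"hostname":         "erlang-bench-01",
		"plan":             "c3.small.x86",
		"metro":            "am",
		"operating_system": "ubuntu_20_04",
	})

	projectID := os.Getenv("METAL_PROJECT_ID")
	url := "https://api.equinix.com/metal/v1/projects/" + projectID + "/devices"

	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Auth-Token", os.Getenv("METAL_AUTH_TOKEN"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("provision request status:", resp.Status)
}
```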

[laughs]

So not much has changed since then. But there is this problem… You wrote a little bit about this, the liquid cooling imperative; that’s another great blog post. By the way, do you know that one of my favorite downtimes is to read your blog posts?

Uh-oh… [laughter]

No, they’re really good. They’re short, they’re well thought through, and you convey a lot of information in a very good way - compressed information… They’re great.

Well, we have a term for that here at Equinix…

What is it?

Well, we started it at Packet and it’s due to my twin brother Jacob, which is “Craft, not crap.” So we don’t ship any crap content. Only crafted content. So… Craft, not crap.

It shows. It shows. So the hot chips, coming back to the 5 gigahertz one - there is the cooling problem. Can you give us the TL;DR on that? Because you thought quite a bit – again, I don’t want you to reproduce a whole blog post…

Sure.

But as a summary, as a TL;DR, why is that important? Because there’s another big initiative that is linked to this, the Open 19, and I see a link there… And I can see you being the innovators behind this. But tell us more about that.

Yeah, so the TL;DR is that chips are getting hotter. Why are they getting hotter? Mainly, we’re getting denser; the nanometers are getting smaller on the fab processes. That’s how you kind of stuff more transistors in. In order to then do that, you need to push way more power through these things, and we’ve created innovative ways, like what Lisa and team have done at AMD with chiplets, and having lower yield requirements, and putting multiple chips on a single die. But in the end, we’re just running into a physics barrier here. You add to it by adding more layers onto it. So suddenly, you’ve got multiple layers in the FinFET, or whatever they call it. Even with memory and NVMe. So everything is having denser transistors, with more power going through them, and you have this kind of movement where, as you kind of run out of nanometers, your only way to make things go faster and more efficient is to push more power through them.

So that’s one: the general-purpose, large-scale silicon trends that we’re dealing with. And the second thing is we have way more sophisticated purpose-built technology at this point, like GPUs, or accelerators. We have things that are very, very specific at doing one thing very well, and you then keep them busy, so you just produce a lot more heat. There’s an electricity problem that we have there, and certainly, as we shift to a more renewable energy footprint – instead of just buying credits and offsets, actually generating things like green hydrogen, so you can offset demand and use it – exposing that to software matters. There was a great panel with the Intel team last week or the week before about how to expose reliable metrics to the world of software, on “Well, that would not be a good time for you to reindex all your data stores. Maybe you should do it at noon, in our Texas data center, instead of at 2 AM in our Frankfurt data center, where we don’t have any renewable energy.” We don’t have a way to even express that in our industry in a standardized way, let alone to do something about it. We desperately need that…
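There is no standardized signal like this today, which is exactly the gap being described; the sketch below is purely hypothetical and just shows the shape a carbon-aware placement decision could take if every site exposed a live carbon-intensity number:

```go
package main

import "fmt"

// SiteCarbon is a hypothetical per-data-centre signal: grams of CO2e per kWh
// of electricity being consumed right now. No such standardized API exists
// today -- that is the gap being described.
type SiteCarbon struct {
	Site      string
	GramsCO2e float64 // per kWh, at this moment
}

// pickSite returns the site where a deferrable batch job (say, a reindex)
// would emit the least carbon right now.
func pickSite(sites []SiteCarbon) SiteCarbon {
	best := sites[0]
	for _, s := range sites[1:] {
		if s.GramsCO2e < best.GramsCO2e {
			best = s
		}
	}
	return best
}

func main() {
	// Illustrative numbers only.
	sites := []SiteCarbon{
		{Site: "Dallas (noon, solar-heavy grid)", GramsCO2e: 120},
		{Site: "Frankfurt (2 AM, no renewables on the grid)", GramsCO2e: 480},
	}
	fmt.Println("run the reindex in:", pickSite(sites).Site)
}
```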

[32:20] But anyways, getting back to it - accelerators and purpose-built technology are getting hotter… So you have this electricity thing, more juice into the rack, and denser, effectively… And then you have the other problem, which is cooling. We’re kind of getting to the upper barriers of two things. Number one, we’re getting to the upper barriers of how we can air-cool this stuff. A lot of the time – and you can see it in simulations – about 20%-30% of the energy in a data center is just fans. If you’ve ever walked into a data center, they’re very loud. They’re loud because there’s all these little tiny, 20-millimeter fans running at the back of every server, just sucking the air through, just to create airflow on individual computers, to pull it over those chips and those heat sinks.

So in big data centers you’ve got 20%-30% of the energy just using fans to pull air around… And then we’re getting to this density level where you just can’t cool it; there’s not enough airflow to be able to do that… And especially in a mixed data center. A hyperscale data center is where you can build around one specific thing, you can kind of purpose-build some of the stuff around it, you can (as I like to say) build your data center around your computers… You can’t do that at some place like Equinix, where every enterprise and service provider has different things. I also kind of believe that we’re gonna have a future of compute that’s more heterogeneous, versus homogenous… So we’re gonna have a few of a lot of things, versus a lot of one thing. So I kind of think that we have to solve this in a more scalpel-driven manner.

So moving to liquid – I’m not gonna go into all the things, but just think of it like your car radiator or air conditioning. Pulling a liquid that turns into a gas over the hot part - the chip, the plate, whatever - does a few things. Number one, it can be way more efficient. You can stop all those fans, you can stop pushing air around, which doesn’t go to the right place at the right time, and start to put the right cooling in the right place.

The other thing you can do is create a much, much higher differential between the intake and the output. What that allows is – you’ve probably heard of things like heat pumps. You can actually turn that back into energy. So you’ve got a natural thing – a giant turbine called “thousands of computers creating heat.” That sounds kind of like a power plant to me, right? Right now we literally just exhaust that; we’re just trying to get rid of it. But if you can create a differential and actually capture (I’m gonna call it) hot enough liquid, you can actually turn that back into energy, or sell it to the grid for municipal purposes, or whatnot. You can use that energy if you can capture it.

And then the most important part of that process is that actually today most of our data centers, and most of the data centers in the world, use evaporative cooling, and that takes millions of gallons of water per day to evaporate this heat. And that is simply not sustainable. So we need to move into a closed system, where we can keep the water and the liquid, and not evaporate it all away.

So there’s these momentous challenges and opportunities… I think, like what I touched on earlier, some of the business model changes are gonna be necessary for that… But, for example, at Equinix we have a goal of reaching carbon-neutral by 2030, using science-based targets… We have to explore all of these options with not only ourselves, but our ecosystem partners - the silicon partners, the OEMs, our customers etc.

[35:44] I think one of the biggest challenges we have right now is that in an enterprise data center, with this diversity of technology that’s going on, everything from Dell servers, to NVIDIA DGXes, to boxes that you brought in from your – you know, “This is a ten-year-old server I’ve got… Let me bring it into the colo.” Still useful, and actually that’s probably one of the best things you could do, is continue to use that server, so we don’t have to make a new one.

Reuse, yeah.

Reuse is the best thing we could possibly do… And luckily, software is getting sophisticated enough to deal with that. At least until we get a more robust recycling program built in with the silicon manufacturers, where we can recapture that and put it back into use.

Well, one of the problems with this diversity is that there’s no current standard for how racks get put together… So if you’ve ever built a PC - I grew up building PCs…

Oh, yes.

You used to have the ATX case.

Recently. Even servers. I still have it upstairs. 2011 - that’s the last server I built, with Supermicro. It’s still up there, in the loft.

There you go. [laughs]

Not liquid-cooled, unfortunately…

In the PC world we had a standard called ATX. So you had an ATX case - ATX mini, whatever. And the cool thing about that was if you got an ATX case and somebody else made an ATX motherboard, and on the back of it you had an ATX cut-out on the pins, you could kind of make anything, and you didn’t have to reinvent the logistics around the computer, like sheet metal, and power supply, and fan, and all those other things. Well, we don’t have that in the rack today. Every single rack is bespoke-designed; as people plug in these servers - well, where are the power cables? Where are the fans? Where are all the power supplies? None of it is standardized, and it’s extremely hard to build these things. Just imagine putting in liquid cooling. Now we’re putting water everywhere. This is mechanically complicated, and potentially – not dangerous per se, because you’re usually gonna use some non-conductive liquid… But whatever. It’s complicated. We’ve already got hundreds of cables going into the back of these racks, and now we have like water tubes going in and out? Like, oh my God, right?

So that presents a huge logistical thing where we need to create a standard for the rack. And not like “Everybody build this computer”, but “Everybody build to this standard mechanical form factor, so we can all connect.” Almost like Nespresso capsules; they work in the machine. Or like how many things in construction, like outlets - all look the same, right? Well, it’s because you create a standard so that the whole industry can work. Maybe Nespresso is a bad example… Whatever.

But the other thing is actually related to – you know, in terms of creating this ability to go into racks easier, is so that we can actually design a system where we can take the thing out of the rack. Today it’s so expensive and complicated to put things in the market. We never think about how to move them from it. So it’s all like a one-way street. And then we just try and get rid of it. And then what we do is we throw away all the stuff - we throw away the sheet metal, we throw away the cables, and the copper, we throw away the power supplies, we throw away all this infrastructure around it, just because we want a newer CPU.

That’s crazy.

That’s crazy. So we’ve gotta do something about that, from a sustainability perspective, but also so we could do things like put the right technology in the right place, at the right time. For example, imagine if we put your coolest c3.small processors into Ashburn. They’re great, and whatever, but it turns out we need some of them in Atlanta. Right now it’s so expensive and so heavy of a lift to move anything from anywhere. We’re requiring specialized people, and logistics – where are the boxes? We don’t have any standard boxes in our industry. Well, better go get brand new boxes… You know, things like that - there’s a massive amount of waste. Well, what if we could standardize, and we could pull out a server sled and, for $10, via a standardized FedEx box, put it into Atlanta? Well, holy cow. I can just imagine the defrag that we could do on our data centers for our customers.

Oh, yes.

But that’s not a possibility right now, without creating some sort of a standard in-rack ATX case, so that way things can go and move, yet innovation can still happen within it.

Is this where Open 19 comes in? Is this it? Basically, Open 19 is what you’ve just described - this standardization?

[40:01] I would say that’s the vision, which is instead of kind of dictating the technology, it’s around creating an open standard for the mechanical form factor, and that’s it. And that’s really important, because having innovation occur in both proprietary and open manners is very important for hardware supply chains… If you’ve ever made hardware, it’s really expensive to go and do. Spooling motherboards isn’t cheap; inventing chips isn’t free. And so we need kind of a robust set of options for the intellectual property model of the technology that goes inside… But if we could then start standardizing as an industry, especially as our challenges around power and cooling and heat capture become front of mind for most companies, and are imperative for all of us, that’s gonna provide a really amazing outlet for all of us to work together - OEMs, customer supply chain, data center operators etc.

We’ve chosen to invest in Open 19. It has a special kind of blind mate connector design. So the idea is that you shouldn’t ever have to go to the back of the rack. You basically have a sheet metal kind of cage, and on the back you have blind mate power, blind mate data, and soon blind mate liquid cooling loops… So if you have a server that can mate with those, it automatically engages. But you never have to go to the back of the rack and do all this complicated stuff. So my vision would be that that FedEx driver can literally come in, walk it in, slot it, and walk away, and it would work. That would be the dream. We’re not there yet, but if we can get there - wow. That would change how we use technology.

Yeah, I would get two of those, please. Can you send this FedEx guy up in my loft and slot two of those in? I would definitely want two of those as well.

And especially if we could do it with – I mean, you just think about places like Equinix, or whatever… We could do it with reusable packaging. Like, “Okay, it’s a brick. It’s of this size, we’ve got a package… Like, plop-plop, here’s your thing. We’ll come back with a brick if you wanna move it. We’ll come back with a reusable box.” And I think that that in and of itself is a huge reducer of waste, but it could enable this movement of technology to the right place, at the right time.

Yeah. This sounds an awful lot like containers for software. That’s exactly it - this is the standard…

Stuff it into here…

Create the standard, and then everything is going to slot in. Like, spin up the container, and that’s it. Well, okay.

That’s a good phrase, and we can use our physical infrastructure at software speed. Then we need to create – Kubernetes is to containers as something is to the physical hardware mover…

I was too busy creating the hardware equivalent of Kubernetes… That’s why I didn’t create Kubernetes. [laughter] Okay…

There you go… You’ve uncovered the secret.

So you’ve mentioned multi-architecture becoming a thing, a big thing in the data center… And I have seen at least four developer workstations with the new Apple M1 chips… I think they’re amazing. I don’t have one, but I’m looking forward to it. I know that Intel has always been great for single-core clock speeds. That’s why I was mentioning the 5 gigahertz… But if you need lots of cores, AMD - I think especially with the Rome architecture - really had a home run this year. I was following it, and it was just amazing. My dev workstation - it’s an AMD Epyc Rome. Rome is the second generation of AMD Epyc. You know, but I’m not sure whether all listeners know that.

Don’t worry, here comes Milan. It’s coming out soon.

Right. So how do you see this, between Intel, AMD, ARM - the whole chip play… I know that you provide ARM servers, but I haven’t seen them publicly… But what does this multi-arch look like from an Equinix Metal perspective? And from your perspective, from chips… Because you love chips even more than I do. And I mean the CPUs, I don’t mean the chips that you eat.

[laughs] So let’s see… The best way I can answer is we’ve always – I loved investing in the ARM ecosystem, because it really pulled us as a company, back in (I think it was) 2016, when we launched the Cavium ThunderX, which was the first 64-bit server-capable ARM processor that you could buy. There were a few before that, but they didn’t really come to fruition enough to be – kind of general-purpose.

And the reason why we did that - a lot of people questioned me about it at the time, especially at our company. We’re a very open and transparent business, which I was always very proud of, and people hopefully felt that they could say what they needed to say, or whatever they thought… And some people said “Why are we doing this ARM stuff? There’s just no money in it.” And I said “It’s because we need to force ourselves –” kind of like how cloud-native… We found a lot of cloud-native developers developed on Packet or on Metal because it forced them to not be reliant on cloud provider services… Because we didn’t provide any. You couldn’t get stuck on our load balancer, because there wasn’t one.

[laughs] Not our problem…

But that’s what I wanted to do with moving to ARM - making sure that we could be really agnostic around what the technology was. And I always pushed people internally and said – whether it’s Intel, or ARM, or some other thing that somebody invents, which I’m sure they will, we wanna be really good at turning it on and off in a repeatable, secure way for our customers, and then helping the world of software to touch it.

And so ARM was a really great opportunity to push that envelope, because nothing worked. Like, everything you thought would work – like, “Oh, we’ll boot.” “Well, not really.” UEFI – oops, that’s a little different. Oh, iPXE… All kinds of things throughout the boot chain process and whatnot had to be worked on until you could do it… And I remember sitting with Syed, who was the CEO of Cavium, and I was like, “We’re gonna need to provision and delete these things like thousands of times a day, until it is boring. It’s just not boring right now. And then we’ll get like Debian, and CentOS, and Ubuntu, and some other things working on it every day, boring, with all the build things, and all the repos, and all the things that needed to get rearchitected in multi-arch for that.”
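For context on what that "boring" multi-arch state looks like from the software side today: a toolchain like Go's can target an ARM server without touching the code, e.g. building with GOOS=linux GOARCH=arm64 go build. The snippet below simply reports the architecture it was compiled for:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOARCH is the architecture this binary was compiled for,
	// e.g. "amd64" on x86 servers or "arm64" on an Ampere/Graviton/M1-class machine.
	fmt.Printf("running on %s/%s with %d CPUs\n",
		runtime.GOOS, runtime.GOARCH, runtime.NumCPU())
}
```

The same source cross-compiles unchanged to any architecture the toolchain supports, which is roughly the "all the build things" state described above.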

And I remember one of the first ones we did is I called up – what’s his name? He used to be a client of mine way back in like the early 2000’s, but he was the maintainer of the build infrastructure for Golang. And I remember calling him up through a friend and basically being like “Yo–” He worked at Google. “Can I give you access to our works-on-ARM ecosystem so you could start doing builds of Golang natively on ARM?” He’s like, “Well, you could always compile it yourself.” I’m like, “Yeah, but that’s a lot of work for everybody to do every time they wanted to try–”

Oh, yes…

So we just kind of slowly built that up… And that was a really cool way for us to make sure that we’re being agnostic on architecture. Now, of course, later Intel was challenged by AMD with their chiplet architecture, and Lisa’s kind of forward-thinking vision… Mark Papermaster and whatnot creating a purpose-built (I’m gonna call it) technology or chip architecture for the cloud era… Just provided a huge amount of competition and an alternative in the marketplace… But now you’ve got this – you know, we’ll see, but like NVIDIA is buying ARM, or at least attempting to… I’m not sure what’s the state of the regulatory approvals or whatnot…

[48:11] But now you have these three really good, really competitive players… Now Pat’s back at Intel, and he is moving hard, from what I can tell from the outside… And it’s great to see three giant, pretty consolidated chip companies, all fighting it out. This is good. This is really great. And in the meantime, you have people like Amazon creating Graviton and pushing the limits there and showing what’s possible, or Apple doing M1… And now even developers – I mean, I was on a podcast with a developer friend of mine recently, and he was talking about how much he loved developing on ARM. I was like, “You wouldn’t hear that three years ago…”

Oh, yes.

He’s like, “But it’s so much faster to do it natively.” I was like, “Whoa…! ARM laptops, here we go…!” [laughs] It’s like, Ubuntu on your desktop, right? It’s gonna happen one day.

So I think you’ve just got this nice, healthy, competitive silicon environment, and you’ve got a bunch of different technology tracks that people are going off of… And frankly, the software world has come along because of (I think) the two critical ones. Apple having moved to ARM for its own chips - that’s gonna help a lot of developers experience native ARM architecture. Obviously, the mobile world has carried that through… And then the second one is with cloud providers like Amazon even adopting their own ARM technology. I think that really will just cement a multi-arch world, which will prepare us for whether it’s OpenPOWER, or RISC… You know, things that are truly open ISAs. And that’s the difference there - whether it’s ARM, which is still a licensed instruction set, or x86, similar, and then maybe we’ll see – Intel will also start licensing it… But you know, RISC, with RISC-V, and OpenPOWER are truly open, and they have no intellectual property ties to them beyond their licensing regime, but it’s an open source license… Which is really neat. Because in my opinion, that’s where the next phase of super-bespoke chips comes out of, where you can use an architecture really liberally… And I think we haven’t seen that yet. I think RISC-V is on the radar, but it’s not here… So whether it’s SiFive or whatnot coming to market, but some way where we could see companies of all sizes, maybe even pretty early companies, developing and having their own chip that just did their workload. It’d be pretty cool. More Apple M1’s, but for different companies.

So you’re the second person that I know of who speaks very passionately about RISC. Dan Mangum, from Upbound Crossplane, he’s the first one… And I know that he’s really passionate about RISC-V. Besides the open source model, is there something more to it? Is it the potential of what RISC-V could become, and the chips that could be built with that instruction set? Is that what gets you really excited? Is there anything else beyond that? Because right now, it’s very nebulous; it could be an amazing thing. But if you were to use it – like, you can use ARM today. Can you use RISC-V today? I don’t think you can. There’s no implementation of RISC-V as far as I know.

There’s a great podcast that I listened to, maybe it was last year, from NPR, about RISC-V. It was great. People are using RISC-V just within their own proprietary silicon… For example, some of the big machine learning products and whatnot use a ton of RISC-V. And I think where it comes down to is - although the licensing model will be good, and certainly (I’m gonna call it) a liberating tool that will kind of create competitive and licensed models - I think it’s really just gonna be the overarching assembly… Like, RISC-V is a pretty new language, or a pretty new ISA. This is an architecture built recently. That’s kind of cool.

So modern, is what you mean by that.

Yeah.

[51:58] And I think that’s powerful. I’m not smart enough to even understand what that means, I just kind of have to believe that there’s some pretty big advancements we’ve all made in 20 years in terms of how we can build architectures… So I think that’s gonna be the fun part - to see what comes out of that, and where people can take it as it gets more mature and there’s a line at the chip factories for that, from the silicon fabs… Like, okay; well, that would be cool. What if you could produce a chip that just did this one thing that your software needed? And that’s where you get into the “Oops, I did it 10,000 times faster and more efficient” thing, versus anything else.

I see, I see.

And maybe the barrier to that just goes way down… Kind of how ARM did it for certain parts of the market, but maybe for the next phase.

So I’m going to mention now the third article, the third blog post that you wrote, “Five predictions for hardware in 2021.” I really enjoyed that. I would ask you how they played out, but let’s leave that for another time, if ever… I’m more curious about your two predictions in hardware for 2022. Do you have any?

Oh, that’s a good one. I haven’t thought about it yet, man…

Well, you have to… Because I’m looking forward to that blog post, and you have to start writing it… [laughs]

Well, don’t hold me to 2022, but the two most interesting things I think of related to hardware right now… First and foremost, we’re gonna have to solve the sustainability problem. This is just not gonna work. So whether it’s because people come out with licensed CPUs, like “Sign up for your subscription to technology from whoever” versus “Buy this thing”, and also the related kind of (I’m gonna call it) surround sound stuff around the cooling, the power, whatever. We’re gonna have sustainability. Silicon is at the heart of that. Hardware needs to become a sustainable, circular economy; it is not currently today. So that’s probably not gonna be done in 2022, but –

At least the beginning of that, yeah.

I think we’re gonna make progress on it… We already see it happening throughout our industry - regulatory impacts, customers… All of our biggest customers bring up sustainability as their number one issue now. It didn’t use to be there.

That’s a good one.

Even 18 months ago it wasn’t even on the radar. Now - right at the top. So… Okay, that’s great, because now we see business drivers… I think people are gonna pay for this too, which is really important… Because you don’t just get sustainability for free. We don’t get to just do “Oh, we did green power for you. It was a good marketing thing.” No, no, no. We invested tons of money to make meaningful impact to change our world; that is going to cost money. We are going to invest together. So I think that’s an important – that’s number one.

I think actually number two is that at some point, if we can solve this distribution of technology - right thing, at the right place, at the right time - so that you could pull up on your iPhone and see the tracking of your cool computer to the right market, and then it just gets turned on… And if we could snap our fingers and – let’s say you figured out just the right technology that you needed to use for your platform, and then you clicked a button or hit an API call, and somebody like Equinix got it into 50 to 60 markets around the world in like a couple weeks… That would be rad. [laughs] And I think we would see disruption in content delivery, and CDNs, and edge computing, and all kinds of things that we would do, and networks, and all the things it could run on - I’m gonna call it hardware and software moving at their own pace.

So pending we solve this distribution thing, I think the big – and this is, again, probably… Now, you’d have to ask me what the 2025 predictions are. That’s way more my style. But 2022, I’m not sure.

2020-something - the other thing I think is gonna be security. Right now, people just try and get the hardware or the thing in the right place, at the right time, and they’re lucky to have it. That is not going to be our long-term challenge. We’ll solve that. Then we need to solve a way different approach to security, and that has to start at the hardware level. So I think our enablement of hardware-level security has barely begun. Most people don’t think about it on the software enablement side. They think about “Oh, I’m gonna encrypt my stuff, I’m gonna get my TLS going, I’m going to do all those things…” But really, even things like basic time protocols, basic boot processes… Is this machine the thing I think it is? Who touched it in the supply chain?

[56:14] Oh, yes.

You know, I always say “Why hack the app, when you can just hack the one-dollar chip at the factory?”

Oh, yes.

So I think we’ve gotta start thinking about a zero-trust approach to hardware, and that will allow us to increasingly move these very important parts of our life into hardware we never touch or operate ourselves. We have to trust a third party? I don’t know… You shouldn’t trust a third party. But right now we don’t have a mechanism for third-party hardware to be zero-trust… And I think that’s the next big wave.
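To make the "is this machine the thing I think it is?" question slightly more concrete: one small ingredient of a zero-trust approach is checking that a firmware image or boot measurement carries a signature from a key you trust. The sketch below is only an illustration using Ed25519 from Go's standard library; real platforms rely on TPMs, measured boot, and vendor-specific attestation formats rather than anything this simple:

```go
package main

import (
	"crypto/ed25519"
	"fmt"
)

func main() {
	// In practice the public key would be burned into a root of trust and
	// the "firmware" would be a measurement from the boot chain; these are
	// stand-ins for illustration only.
	pub, priv, err := ed25519.GenerateKey(nil)
	if err != nil {
		panic(err)
	}

	firmware := []byte("bmc-firmware-v2.7.1")
	sig := ed25519.Sign(priv, firmware)

	if ed25519.Verify(pub, firmware, sig) {
		fmt.Println("firmware verified: measurement matches a trusted signer")
	} else {
		fmt.Println("verification failed: do not trust this machine")
	}

	// A tampered image (the "one-dollar chip at the factory" scenario)
	// fails verification.
	tampered := []byte("bmc-firmware-v2.7.1-with-implant")
	fmt.Println("tampered image verifies:", ed25519.Verify(pub, tampered, sig))
}
```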

So I know, like supply chain security…

202x.

202x, yeah. I can see that one, even in software, where we have been doing it for long enough… When it comes to containers, when it comes to various CI/CD systems, when it comes to different platforms even - how software moves between those different platforms, and you shouldn’t trust any of them - how do you ensure things are secure? How do you ensure things remain signed? I can see this being a big thing coming.
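To make that last point a little more concrete - this is a minimal sketch, not something from the conversation or from Equinix’s tooling, of what “things remain signed” can look like: a build step signs the digest of an artifact, and whatever consumes it later refuses to proceed unless the signature verifies against a trusted public key. It only uses Go’s standard library, and all the names are hypothetical.

```go
package main

// Hypothetical illustration of supply-chain signing: a builder signs the
// SHA-256 digest of an artifact, and a consumer verifies that signature
// against the builder's public key before trusting the artifact.

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// The builder's key pair. In practice the private key would live in a
	// signing service or HSM; only the public key is distributed.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	artifact := []byte("container image layer, firmware blob, etc.")
	digest := sha256.Sum256(artifact)

	// Build/publish step: sign the digest and ship the signature with the artifact.
	sig := ed25519.Sign(priv, digest[:])

	// Deploy/boot step: recompute the digest and verify before using the artifact.
	if ed25519.Verify(pub, digest[:], sig) {
		fmt.Println("signature OK - artifact is what the builder signed")
	} else {
		fmt.Println("signature mismatch - refuse to run this")
	}
}
```

Real supply-chain tooling (sigstore/cosign, in-toto, TPM-based measured boot) layers much more on top - key transparency, attestation of how the artifact was built, hardware roots of trust - but the core check has this same shape.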

Coming back to what you mentioned earlier about sustainable hardware, and how we cannot throw away hardware. We have to replace the parts which are broken, or where there’s an obvious advantage to upgrading them, like the CPU, without upgrading everything else, and make it so simple that the FedEx guy or gal can come into a data center and just plug it in.

FedEx robot. FedEx robot.

Yeah, that as well. It may happen. So it can happen, and maybe should happen, because this is the whole idea - more sustainable hardware, more sustainable economies of scale, because they have to be big for them to work… And you’re right, it is top of mind for many people, especially this week.

So I can see a very nice link - and I’m sure that you can see it as well - between what you’ve just mentioned and Equinix Metal. So how does this map to the Equinix Metal priorities for 2022? I know that you promised priorities in a few weeks in your last blog post, on November 4th…

You’re trying to get a teaser, you can’t do that… [laughs]

Yes… You promised a few. I just want one. So can you give us one?

Well, I think we’ll make meaningful progress on the distribution capabilities. I always like to tell people that Equinix Metal is not a bare-metal cloud. We’re a hardware distribution platform, an operator of fundamental infrastructure… So we’ll enable more places where you can do that. We’ve been really fortunate to be able to invest heavily and put Equinix Metal in 18 markets around the world. I think we’ll expand that and go to more. I think though that what we’ll do is we’ll move this – my prediction is that we’ll move some of these things which are kind of loosey-goosey right now… Like, we’re going to do field trials of our pluggable liquid cooling. We’ve already been doing it for about a year in one of our data centers. We’re gonna move there with customers in the coming months, using some prototypes that we’ve been building…

Interesting.

So we’ll move out there, and we’re gonna do that with some partners… OEMs, supply chain partners etc. So I think that’ll be really important, because as Equinix, we’re fortunate that we’re always building data centers. I can’t remember from our last earnings call how many are under construction right now, but it’s a lot… So we have this opportunity to really optimize and change what we’re putting into the ground around some new hardware delivery model…

So my hope is we’ll make progress on that, and hopefully with customers and in the open, so that everybody can learn, and we can try and (let’s say) exit 2022 with a super-clear path to disruptive sustainability from a power and cooling perspective.

I love that. That’s something I can get behind… Oh, yes. Yes, please.

[01:00:00.10] I haven’t found somebody who can’t get behind that. Everybody is like “That makes a lot of sense, and I wanna be part of it.” So I think making sure we do that in an open way is gonna be really important.

And the second thing is I think we’re gonna see the OEMs - Dell, HP, Cisco, Lenovo etc., even NetApp, and F5, and Pure, and the people who make purpose-built technology in hardware - I think we’re gonna see just massive business model shifts. The cat’s out of the bag. People want aligned business models, as a service… And that’s gonna be a really, really big turn for these aircraft carrier style companies; they’re big businesses that are really used to shipping you the technology and you doing everything, and now they’re gonna turn around and run it for you somehow… We’re gonna feel the ripple effects of that. But I’m so excited about it, because that’s the first leading indicator of how we can make the business models more aligned.

And people sometimes – you know, they originally inferred that Equinix Metal was kind of in conflict with cloud providers… I don’t think so. We’ve recently enabled things like Amazon EKS, and Anthos… Because I see cloud providers as software companies that, when at the right scale, run aggregated infrastructure for you. But when not at the right scale - call it pretty much anything beyond multiple megawatts - they don’t really wanna run the technology for you. They just wanna sell you the software and services. And I think that’s a pretty aligned model with the hyperscalers, one that we can help support.

And with OEMs, as they move into this as-a-service model, I think we can be super-helpful with Equinix Metal to help them be the best in the world at that. It’s one of the main reasons why, since 2014, we’ve been making it so that we can automate hardware, no matter what it is, and where it is, and what runs on it. We might wanna add one other thing - or who owns it… Because it doesn’t really matter, right? Your server, my server, Dell’s server… It’s just a server. And can we make it consumable and usable? That just requires an adjustment. That’s a startup guy talking. It just requires a business model change. [laughter]

But that’s simple, right? We’ll figure it out…

Let’s figure that out… [laughter] Pull request on version 1.2 of the business model.

Exactly. Or send me your pull request and I’ll consider it. I’ll merge it. Who knows, maybe…

I’ll consider it…

Okay. So as we are about to wrap this up, I’m wondering – like, from a listener’s perspective, if there was one thing that I would take away from this conversation, what would you like it to be?

Well, I would like more people, and especially software-minded people, to be interested and open to (I’m gonna call it) the disruptive innovation that can happen when you pair magical software with the right hardware. I think it’s not only super-cool, I think it’s an imperative for us long-term to be good at that. Not everybody, but I think that that’s an open place, and I’d love people to come away excited about the opportunities of making a difference with technology, about doing so in a sustainable way… And not just because it’s good for the granola country planet, it’s like… Because it’s both good for you, and doing good, but – what is it? Doing well by doing good…

Another one of my blog posts from a year or two ago is about creating a bigger tent… An ecosystem-driven way, where we can create more value by solving these problems together, instead of (I’m gonna call it) a siloed way, where we take the value. It’s like the carbon industry right now - instead of pulling in raw resources and extracting them for ourselves, kind of like drilling for oil, we can create new technologies like renewable solar, or even carbon capture, which is what Stripe is doing… That’s a way that you can do well, but also create a bigger opportunity tent. I think that’s the other powerful part that I’d love to impart to software-minded users - that we can really work together between software and hardware to solve some of the biggest challenges in the world, but we cannot do it on our own. Together, that’s a pretty powerful combination, and I’d love to be part of that ecosystem.

That’s a really good one… And I know that Tinkerbell OSS is a great example of what you’ve just said. So if you’re wondering, like, “This sounds a bit handwavy…” - well, no, because there are actual projects that you can go and check out, and they look really good… Which shows the investment and commitment to those technologies. The otel-cli is another one… And there are a couple of other examples in Equinix Labs, which is a great way to see some of the ideas which float around, and I’m sure new ones will appear next year.

Yeah. Tinkering with hardware and software together? Come on by.

Tinkering, I love that. Like, where did Tinkerbell come from? Tinkering. There you have it. Let’s tinker with hardware. I love it.

Zac, thank you very much for indulging my curiosity. I had a great conversation about hardware, and you gave me some crazy ideas for 2022, and I would love to have you back at Ship It. Thank you very much.

I appreciate you having me here. Thank you.

Changelog

Our transcripts are open source on GitHub. Improvements are welcome. 💚
