Zac Smith, managing director of Equinix Metal, is sharing how Equinix Metal runs the best hardware and networking in the industry, why pairing magical software with the right hardware is the future, and what Open19 means for sustainability in the data centre. Think modular components that slot in (including CPUs), liquid cooling that converts heat into energy, and a few other solutions that minimise the impact on the environment.
But first, Zac tells us about the transition from Packet to Equinix Metal, his reasons for doing what he does, as well as the things that he is really passionate about, such as the most efficient data centres in the world and building for the love of it.
Zac Smith: Yeah, so the TL;DR is that chips are getting hotter. Why are they getting hotter? Mainly, we’re getting denser; the nanometers are getting smaller on the fab processes. That’s how you kind of stuff more transistors in. In order to do that, you need to push way more power through these things, and we’ve created innovative ways, like what Lisa and team have done at AMD with chiplets, having lower yield requirements and putting multiple chips in a single package. But in the end, we’re just running into a physics barrier here. You add to it by adding more layers, so suddenly you’ve got multiple layers in the FinFET, or whatever they call it. Even with memory and NVMe. So everything is getting denser transistors, with more power going through them, and as you kind of run out of nanometers to shrink, your only way to make things go faster and more efficient is to push more power through them.
So that’s one of the general-purpose, large-scale silicon trends that we’re dealing with. And the second thing is we have way more sophisticated purpose-built technology at this point, like GPUs, or accelerators. We have things that are very, very specific at doing one thing very well, and you keep them busy, so you generate a lot more heat. So there’s an electricity problem there, and certainly, as we shift to a more renewable energy footprint (instead of just buying credits and offsets, actually generating things like green hydrogen, so you can offset demand and use it), exposing – there was a great panel with the Intel team last week or the week before about how to expose to the world of software reliable metrics on “Well, that would not be a good time for you to reindex all your data stores. Maybe you should do it at noon, in our Texas data center, instead of at 2 AM in our Frankfurt data center, where we don’t have any renewable energy.” We don’t even have a standardized way to express that in our industry, let alone a way to do something about it. We desperately need that…
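To make that scheduling example concrete, here’s a minimal sketch of what carbon-aware placement could look like from the software side, assuming a per-site carbon-intensity forecast of the kind Zac is describing. The SiteWindow type, the pick_greenest_window helper and the numbers are hypothetical illustrations, not a real Equinix or Intel API.

```python
# Hypothetical sketch of carbon-aware scheduling; names and numbers are
# illustrative only, not a real Equinix or Intel API.
from dataclasses import dataclass


@dataclass
class SiteWindow:
    site: str                 # data center location
    hour_utc: int             # start of a one-hour window (UTC)
    grams_co2_per_kwh: float  # forecast grid carbon intensity in that window


def pick_greenest_window(windows: list[SiteWindow]) -> SiteWindow:
    """Pick the site/hour with the lowest forecast carbon intensity."""
    return min(windows, key=lambda w: w.grams_co2_per_kwh)


if __name__ == "__main__":
    forecast = [
        SiteWindow("Dallas (TX)", 18, 210.0),    # midday in Texas, solar and wind online
        SiteWindow("Frankfurt (DE)", 1, 480.0),  # ~2 AM local, little renewable supply
    ]
    best = pick_greenest_window(forecast)
    print(f"Run the reindex in {best.site} at {best.hour_utc}:00 UTC "
          f"({best.grams_co2_per_kwh} gCO2/kWh)")
```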
[32:20] But anyways, getting back to it - accelerators and purpose-built technology are getting hotter… So you have this electricity thing, more juice into the rack, and denser, effectively… And then you have the other problem, which is cooling. We’re kind of getting to the upper barriers of two things. Number one, we’re getting to the upper barriers of how we can air-cool this stuff. A lot of the time – and you can see simulations of this – about 20%-30% of the energy in a data center is just fans. If you’ve ever walked into a data center, they’re very loud. They’re loud because there are all these little tiny, 20-millimeter fans running at the back of every server, just sucking the air through, just to create airflow on individual computers, to pull it over those chips and those heat sinks.
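As a back-of-the-envelope illustration of that 20%-30% figure – the facility size and fan share below are assumed numbers, not measurements from any real data center:

```python
# Rough arithmetic behind the "20-30% of the energy is fans" point.
# The 1 MW facility draw and 25% fan share are assumptions for illustration only.
facility_power_kw = 1_000   # total data center draw (assumed)
fan_fraction = 0.25         # middle of the 20-30% range mentioned above
hours_per_year = 24 * 365

fan_energy_kwh = facility_power_kw * fan_fraction * hours_per_year
print(f"Energy spent just on fans: {fan_energy_kwh:,.0f} kWh per year")  # ~2,190,000 kWh
```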
So in big data centers you’ve got 20%-30% of the energy just going to fans, pulling air around… And then we’re getting to this density level where you just can’t cool it; there isn’t enough airflow to be able to do that… Especially in a mixed data center. A hyperscale data center is where you can build around one specific thing; you can kind of purpose-build some of the stuff around it, you can (as I like to say) build your data center around your computers… You can’t do that at some place like Equinix, where every enterprise and service provider has different things. I also kind of believe that we’re gonna have a future of compute that’s more heterogeneous, versus homogeneous… So we’re gonna have a few of a lot of things, versus a lot of one thing. So I kind of think that we have to solve this in a more scalpel-driven manner.
So moving to liquid – I’m not gonna go into all the details, but just think of it like your car radiator or air conditioning: pulling a liquid that turns into a gas over the hot part, the chip, the plate, whatever. Being able to do that does a few things. Number one, it can be way more efficient. You can stop all those fans, you can stop pushing air around that doesn’t go to the right place at the right time, and start to put the right cooling in the right place.
The other thing you can do is create a much, much higher differential between the intake and the output. What that allows is – you’ve probably heard of things like heat pumps – you can actually turn that back into energy. So you’ve got this natural thing, a giant turbine called “thousands of computers creating heat.” That sounds kind of like a power plant to me, right? Right now we literally just exhaust that; we’re just trying to get rid of it. But if you can create a differential and actually capture (I’m gonna call it) hot enough liquid, you can turn that back into energy, or sell it to the grid for municipal purposes, or whatnot. You can use that energy if you can capture it.
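Here’s a rough sketch of why that differential matters, using assumed flow rates and temperatures rather than figures from any real facility: the two loops below carry the same amount of heat, but only the hotter return water is worth much to a heat pump or a district heating network.

```python
# Q = m_dot * c_p * deltaT: thermal power carried by a water loop.
# All flow rates and temperatures are assumed, for illustration only.
SPECIFIC_HEAT_WATER = 4.186  # kJ/(kg*K)


def loop_heat_kw(flow_kg_per_s: float, t_in_c: float, t_out_c: float) -> float:
    """Heat carried away by the coolant loop, in kW."""
    return flow_kg_per_s * SPECIFIC_HEAT_WATER * (t_out_c - t_in_c)


# Air-cooled-style loop: big flow, small differential, lukewarm return water.
print(loop_heat_kw(10.0, 18, 28))  # ~419 kW, returned at ~28 °C - hard to reuse
# Direct liquid cooling: same heat, smaller flow, much hotter return water.
print(loop_heat_kw(5.0, 45, 65))   # ~419 kW, returned at ~65 °C - usable heat
```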
And then the most important part of that process: today, most of our data centers, and most of the data centers in the world, use evaporative cooling, and that takes millions of gallons of water per day to evaporate this heat away. That is simply not sustainable. So we need to move to a closed system, where we can keep the water and the liquid instead of evaporating it.
So there are these momentous challenges and opportunities… I think, like I touched on earlier, some business model changes are gonna be necessary for that… For example, at Equinix we have a goal of being carbon-neutral by 2030, using science-based targets… We have to explore all of these options not only ourselves, but with our ecosystem partners - the silicon partners, the OEMs, our customers and so on.
[35:44] I think one of the biggest challenges we have right now is the diversity of technology in an enterprise data center – everything from Dell servers, to NVIDIA DGXes, to boxes that you brought in yourself… You know, “This is a ten-year-old server I’ve got… Let me bring it into the colo.” Still useful, and actually that’s probably one of the best things you could do: continue to use that server, so we don’t have to make a new one.