Jonathan Norris · changelog.com/posts

WebAssembly runtimes will replace container-based runtimes by 2030

There's a ton of energy around making this happen

This is a fancified excerpt of Jonathan Norris’ unpopular opinion on Go Time #275. Jonathan is Co-Founder & CTO @ DevCycle, where they’re using WebAssembly in very interesting ways. To get the full experience, you should listen while you read.



The advantages of WebAssembly, with its:

  1. tight security model
  2. very fast boot-up time
  3. scalability at the edge
  4. much smaller footprints
  5. portability across environments

will really drive a shift away from container-based runtimes for things like Kubernetes and edge workloads by 2030. There’s a ton of energy around making this happen within the WebAssembly community.


Kris Brandow: What do you think is the largest barrier to getting there now?

That’s a good question. I would say:

  1. language support
  2. profiling
  3. tooling

And as we’ve talked about a lot today, getting to a point where you can optimize and profile WebAssembly much more easily is a big thing. And the standardization…

So there are a lot of really exciting changes coming to WebAssembly. I think we’ve talked about a couple of them already, around multi-threading support and native garbage collection support.

One of the big changes that’s coming is called the component model, which is a way to standardize the communication across multiple WebAssembly components so they can talk to each other and really make your code a lot more componentized (and in smaller chunks).

So that’s a big effort that the community is working on to drive towards replacing larger containers in these Kubernetes and edge workloads.

So yeah, I think those are the big things. If the WebAssembly community can get those big changes that are coming (the component model, multi-threading, garbage collection support, and many other things) down, then I think we’ll be on that path, and we’ll see some big companies start up around this space in the coming years.

Brad Van Vugt: I think it’s funny, because we’ve talked about this a lot, and I think my unpopular opinion would be the opposite of yours. Because I don’t know – maybe on timeframe, sure, possibly – but I think the lift required is so large. Do you think that something like AssemblyScript is crucial for that, as sort of this core, native entry point?

I think a more approachable, higher-level language is important as an entry point. I think that’s one of the challenges with WebAssembly right now: the best environments are lower-level ones, using things like Rust or C++.

There’s actually a good amount of momentum around running JavaScript or TypeScript in WebAssembly: by bundling SpiderMonkey (Firefox’s JavaScript engine) into your WebAssembly runtime, they’ve been able to get that working in a couple of megabytes. So you basically have the full SpiderMonkey runtime running within WebAssembly, executing your JavaScript or compiled TypeScript code in that…

For a lot of these Wasm cloud/edge companies… that’s one of the big entry points that they’re talking about.

But yeah, I would say getting a higher-level language that executes really efficiently in Wasm is probably one of the biggest barriers to that.

Kris Brandow: There’s a lot of pressure from the other side, of VMs and hypervisors becoming super-fast, like with Firecracker, and all of that. Do you see maybe a merging of those technologies, so you can get the security benefits of virtual machines with the speed and all the other benefits of Wasm?

Don’t get me wrong, those VMs have gotten very good over many years, and we’ve been relying on them for a lot of our high-scale systems. But yeah, I think there’s just an order of magnitude difference between the size of containers and the size of Wasm modules.

You can optimize the size of your containers to be pretty small, like tens of megabytes… But WebAssembly is, at its core, designed to be more portable than that.

You’re talking about tens of kilobytes, instead of tens of megabytes. And the boot-up times can be measured in microseconds, instead of milliseconds, or tens of milliseconds, or even seconds (!) for containers.
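
To make that concrete, here’s a minimal sketch (mine, not from the episode) of a tiny Go program compiled to a standalone WASI module with TinyGo; the exact size depends on the toolchain and build flags, but the resulting module is typically a small fraction of even a slim container image:

```go
// main.go – a minimal WASI guest module.
//
// Build with TinyGo targeting WASI (assumes TinyGo is installed):
//   tinygo build -o hello.wasm -target=wasi main.go
// Run it with any standalone WASI runtime, for example:
//   wasmtime hello.wasm
package main

import "fmt"

func main() {
	// The module has no ambient OS access; it can only use whatever
	// the host runtime chooses to grant it through WASI.
	fmt.Println("hello from Wasm")
}
```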

So there’s just an order of magnitude change by using WebAssembly. I think it’s gonna be really hard for a lot of containerized systems to match.

You can think about a big platform running at the edge (at scale) where – for our use case – we have a lot of SDKs that hit our edge APIs. And we have certain customers, say big mobile apps… They may send out a push notification and get hundreds of thousands of people, or even millions of people, who all open their app at exactly the same time.

When that sports score or that big news event lands on their phone, they’re opening their app at exactly the same time, and we see massive deluges of traffic (literally, a hundred times our steady-state traffic) hit our edge endpoints at those points in time. And because we’re using these edge platforms, they’re able to spin up thousands of Wasm edge runtimes in milliseconds to serve that traffic. Doing that with VMs is possible, but there’s a lot more latency in that toolchain.

So that’s why I think not only the really tight security model, but also the boot-up times and the small size of the Wasm modules can really power that. And for certain use cases it makes a lot of sense.

I’m not gonna say it’s gonna replace every use case; it’s clearly not. But for certain high-performance latency-sensitive use cases like trying to deliver feature flags globally to mobile apps, or web apps around the world (that is our use case)… it’s definitely very applicable to this problem.

Jon Calhoun: I feel like the current setup with Docker containers (or whatever else) is a little bit slower, but it works for probably 90% of use cases – maybe not, I’m just throwing that out as a random number – but it works for some big chunk of use cases. And with the WebAssembly version that you’re saying would replace it – essentially the speed benefits and all those things – there’s going to be a huge chunk of people who wouldn’t actually care as much about that, necessarily. So I’m assuming for that to happen, it would have to become just as easy to use the Wasm replacement for Docker. At least in my mind, that’s the only way I would see that working: if it became just as easy. And I don’t know, do you think it’s just as easy now?

Oh, it’s definitely not just as easy yet. I think there’s definitely a lot of developer tooling work to go to make it easy. We’ve been using Cloudflare Workers, and there are lots of other edge runtime providers that make it super-easy to deploy to their runtimes.

But I think the real benefits come from the security benefits.

So a WebAssembly module is way tighter in controlling what it has access to through the WASI interface than a VM is, right? And so for very security-conscious companies, I could see it having a lot of value there for certain mission-critical modules of their application.
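
As a rough illustration of that capability model (and not necessarily how DevCycle runs things), here’s a minimal sketch of a Go host embedding a WASI module with the wazero runtime. The guest.wasm path and the ./sandbox directory are hypothetical; the point is that the guest only gets the stdout, arguments, and filesystem the host explicitly grants:

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

func main() {
	ctx := context.Background()

	// Hypothetical guest module compiled against WASI (e.g. with TinyGo or Rust).
	wasmBytes, err := os.ReadFile("guest.wasm")
	if err != nil {
		log.Fatal(err)
	}

	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)

	// Wire up the WASI imports the guest expects.
	wasi_snapshot_preview1.MustInstantiate(ctx, r)

	// Grant only these capabilities: stdout plus a single directory.
	// No network, no ambient host filesystem, no environment variables.
	cfg := wazero.NewModuleConfig().
		WithStdout(os.Stdout).
		WithFS(os.DirFS("./sandbox")).
		WithArgs("guest")

	// Instantiation runs the guest's start function under exactly
	// the capabilities configured above.
	if _, err := r.InstantiateWithConfig(ctx, wasmBytes, cfg); err != nil {
		log.Fatal(err)
	}
}
```

Anything not wired in here (sockets, the host’s real filesystem, extra environment variables) simply doesn’t exist from the guest’s point of view, which is a much tighter default than a general-purpose VM or container.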

And then there’s a lot of cost benefits.

One of the reasons why it’s a lot cheaper to run your edge workloads in Cloudflare Workers (or Fastly, or Netlify, or any of those edge runtimes) versus something like AWS Lambda is that the boot-up and shutdown times are way shorter and the binaries they have to manage are way smaller.

Those edge runtimes can start up your code in milliseconds, if not faster, whereas Lambdas and other things like that, which are more containerized at the edge, take a lot longer to spin up and have a lot higher memory footprints, things like that… And so the cost differences there can be huge.

We saw huge cost savings ourselves by moving to these edge runtimes to run these workloads at scale. Not only do we build SDKs, but we also run really high-scale APIs at the edge.

There’s huge cost advantages to having really small, portable, fast runtimes that I can execute all around the world.

