Retool – Retool is a low-code platform built specifically for developers that makes it fast and easy to build internal tools. Instead of building internal tools from scratch, the world’s best teams, from startups to Fortune 500s, are using Retool to power their internal apps. Learn more and try it for free at retool.com/changelog
Micro – Micro is reimagining the cloud for the next generation of developers. It’s a developer friendly platform to explore, search, and use simpler APIs for everyday consumption all in one place. They’re in early development building out the first set of APIs, and they’re looking for feedback from developers. Sign up and get $5 in free credits.
Sentry – Build better software, faster with Sentry’s application monitoring platform. Diagnose, fix, and optimize the performance of your code. Cut your time on error resolution from hours to minutes. Use the code PARTYTIME and get the team plan free for three months.
Play the audio to listen along while you enjoy the transcript. 🎧
Hello, JS Party people, and welcome to another wonderful episode of your favorite party about the web on the web. We are livestreaming right now. I have our one and only Nick Nisi joining me today.
Hey-hey. Mr. Burns - we may be calling him Mr. Burns through this episode because our special guest today is Mr. Nick Fitzgerald, who is a staff engineer at Fastly. Hey, Nick.
Hey! How’s it going?
[04:06] So one Christmas break I kind of got annoyed and fed up with this and I decided to rewrite it in Rust and compile that to WebAssembly… And I ended up making it a bunch faster; I forget what the exact numbers were. It was quite a while ago now. But that was kind of like my intro to WebAssembly and how I got involved there.
Then as it turned out, Mozilla was spinning up a team to work on WebAssembly stuff, different people who would work on WebAssembly – like, the engine directly that’s inside SpiderMonkey… So I joined that, and then I stayed at Mozilla for a while, and then ended up moving to Fastly with a bunch of the rest of my team. So that’s kind of how I got here.
I remember when that first came out with the sourcemaps. I somehow hadn’t made the connection that it was you… But it was 10 or 11 times faster than the original implementation…
So we have this tool that I developed called Wizer, which takes snapshots of WebAssembly, and allows you to just basically initialize a program, take a snapshot at that point in time, and then the result of that snapshot is actually itself a WebAssembly module. And when you instantiate that WebAssembly module, everything is already initialized. So you don’t need to do any of that startup again.
So we’re kind of like making an office in a box here, where you just open the suitcase and the office is already in there. Everything is ready to go, and you don’t have to do any of that initial setup time.
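The snapshot idea can be sketched in a few lines of Rust. To be clear, this is not Wizer’s actual API or internals; it’s just a toy illustration of the principle of paying the initialization cost once and then stamping out ready-to-run copies. All names here are made up for the example.

```rust
// Toy sketch of the Wizer-style snapshot idea (hypothetical names,
// not Wizer's real API): do the expensive initialization exactly once,
// capture the resulting state, and clone it cheaply per request.

#[derive(Clone)]
struct Instance {
    // Stand-in for a WASM module's linear memory after initialization.
    memory: Vec<u8>,
}

fn expensive_initialization() -> Instance {
    // Imagine parsing configs, building lookup tables, and so on.
    let memory = (0..1024).map(|i| (i % 256) as u8).collect();
    Instance { memory }
}

fn main() {
    // Run initialization once, ahead of time: this is the "snapshot".
    let snapshot = expensive_initialization();

    // Every new "instantiation" just copies the pre-initialized state;
    // none of the startup work is repeated.
    let a = snapshot.clone();
    let b = snapshot.clone();
    assert_eq!(a.memory, b.memory);
    println!("two instances ready, {} bytes each", a.memory.len());
}
```

In the real thing the snapshot is itself a WebAssembly module, so “clone the state” becomes “instantiate the already-initialized module” — but the cost model is the same shape as in this sketch.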
That’s super-interesting. Can we actually step back for a second? Because I think you’re way deep in the weeds on this in a way that I think not everybody has the context… So you mentioned a couple of things there that I’d love to dig into. First, can you just sort of explain what is a V8 isolate? Because that was the comparison you were drawing.
If you were developing a serverless platform and using V8 for that, you wouldn’t want two customers’ code to run in the same isolate, because you really don’t want them to be able to poke at each other’s stuff. That’s a huge security vulnerability. So an isolate is kind of like the unit of – it’s kind of like a process in an OS, or a different window in a browser, a different tab.
Cool. So then you were talking a little bit about the different phases involved with running code in an isolate, right? There’s like instantiation, parsing, all these different things. So can you maybe break down a little bit what was the motivation for this? Because I think we jumped right into also what does it do, but maybe we could talk a little bit about the why’s behind that.
That was something that was new to me. This is not really my realm by any means, but the idea of a serverless environment for WASM - what are the practical uses of that?
[12:04] Yeah, so we just talked about isolates, right? A WASM instance is kind of similarly sandboxed. There’s a few different kinds of state that a WASM instance has, but the one that everyone knows about is the linear memory. You just have basically this big array of bytes, and that’s your sandbox to play in as a WASM instance. So it gives you similar guarantees, but it’s a lot simpler, because that’s it, there’s just this array. We’re not talking about objects in a GC heap, or anything like that. So because it’s so much simpler, we can start it up a lot faster; creating a new one takes, depending on the module, a handful of microseconds rather than milliseconds, so orders of magnitude faster… And WebAssembly has this nice property where it can only do stuff that it imports. So by default, WebAssembly can’t really do anything; at most, it can kind of spin the CPU and cause some heat, and maybe you have to interrupt it and say “Stop doing that.” But if it wants to talk to the network, or write to disk, or anything like that, you need to kind of give it functions that allow it to do that.
So it’s kind of like a capability-based security, if you’re familiar with that, which is basically like - you don’t have the capability to write to the network or communicate on the network unless I give that to you. So you get these really nice security and sandbox properties… And so that’s kind of why we’re interested in that and serverless, because we can create more instances faster than alternative approaches, and we can pack more of them together in one machine.
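The capability idea is easy to mimic in plain Rust: the “guest” code below has no ambient access to anything, and can only affect the outside world through the functions the host explicitly hands it. This is a simplified sketch with invented names, not how a real WASM host wires up imports, but it shows the same security property.

```rust
// Capability-style imports, sketched (hypothetical names): the guest
// gets no ambient network/disk access, only whatever the host grants.

struct Capabilities<'a> {
    // The one and only power this guest receives: writing a log line.
    write_log: &'a mut dyn FnMut(&str),
}

// "Guest" code: pure computation plus whatever the host granted.
// It literally cannot touch the network or disk; those capabilities
// were never passed in.
fn guest_main(caps: &mut Capabilities) -> u64 {
    let answer = 6 * 7;
    (caps.write_log)(&format!("computed {answer}"));
    answer
}

fn main() {
    let mut log: Vec<String> = Vec::new();
    {
        // The host decides exactly what the guest may do.
        let mut write_log = |msg: &str| log.push(msg.to_string());
        let mut caps = Capabilities { write_log: &mut write_log };
        let result = guest_main(&mut caps);
        assert_eq!(result, 42);
    }
    println!("guest log: {:?}", log);
}
```

In a real WASM embedding the same shape appears as the import object: the host instantiates the module with exactly the functions it is willing to expose, and nothing more.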
So it’ll look familiar, but it might feel a little bit different based on the restrictions of targeting WebAssembly.
People have shown all over the place that the faster you can have your pages load, the more customers will click Checkout, and generally the happier they are, or whatever. So kind of having the speed of light, the fastest that you could potentially go, be on the order of like 15 milliseconds, or even if we got down to like 6 milliseconds - that’s not great. People fight hard to get better times than that, because it’s worth it. So that was kind of a no-go for us. So what we wanted to figure out was how could we have basically instantaneous startup. That’s kind of where my whole snapshotting work comes in.
And does it achieve that then? It’s essentially unmeasurably fast?
That’s wicked fast.
Okay. And you mentioned that there are some trade-offs in terms of throughput if you end up then executing a fair amount. Have you kind of measured those curves over time? How long running of a function does this need to be before it starts to swap over to being less efficient?
That’s super-cool. Does Fastly support that today?
It’s an open source tool. This isn’t something that we’re kind of hoarding the magic and doling it out as we please… You can download the tool, it’s on github.com/bytecodealliance/wizer. It’s the WASM Initializer, Wizer. And then someone suggested that we call the modules that come out of these snapshots “wizened” modules, because now they already know everything that they need to start up.
How did you spell that?
Okay. We will include a link in our show notes for all who are interested in that.
Yeah. And if people are really interested in the snapshot side of things, I gave a talk at this year’s WebAssembly Summit specifically about Wizer and how it works… So I can share a link for that after the show.
Yeah, that would be super-cool. So where do you see this going? I think we’re right now at the cusp with tools like this, and we had an episode a few weeks back when we were talking with the team behind Web Containers, they’re basically running…
StackBlitz, yes, and web containers… Where they were talking about running Node.js and other server-side environments in the browser, and things like that. So we’re kind of reaching this place where WebAssembly is letting us open all these new possibilities. What do you see as the next frontier here?
[24:18] And the way that that happens is through something called in-line caches, which are kind of like “Is it this type? Then do this. Is it that type? Then do this.” And each of those “do this”-es is a little in-line cache stub.
Traditionally, the way that in-line caches have been done in kind of a JIT environment is - say we’re reading a field of an object. Every object has a shape or a hidden class, which is basically saying “What are the other properties that I have, and what is my prototype chain?”, that kind of thing. Normally, if you don’t have any idea what the shape is, you have to kind of look up in a hash table to see “Okay, where does this field exist?” and then “Let me get that value.” That’s kind of an expensive operation for something that happens so often. But if you have an in-line cache, you can say “Is this object this shape that I’ve seen before?” This function happens to always be called with objects that have the same shape. And then you can just say “If it is, then I know already, I’ve kind of baked in that the field that I wanna read is at offset 8”, or something like that. And that’s just way faster. It’s like a check, and then an offset read.
So normally, the in-line cache would kind of bake in the pointer to that shape, and it would also bake in that offset… And those would kind of be generated in the machine code just-in-time. But what we can do is actually make the pointer and the offset into parameters, and make this in-line cache a little function that takes those things. So now this doesn’t actually depend on anything at runtime, because where the shape is in memory - that’s something that’s at runtime. But we’ve kind of pulled all this stuff that happens at runtime out and we have something that we can use ahead of time.
So if you’re baking in pointers and stuff, there’s kind of an infinite number of in-line caches that you could generate, but there’s only so many types of in-line caches, where if you pull all these dynamic things that rely on what’s happening at runtime out and you make them parameters, then you’re left with just N different kinds of in-line caches… And we can actually compile all of those ahead-of-time and then kind of like wire them up during execution, but without any kind of just-in-time compilation.
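The parameterized-stub trick can be sketched in Rust. This is a deliberately simplified model (the names `ShapeId`, `get_field_stub`, etc. are invented for the example, and real engines deal with machine code, not safe Rust), but it shows the key move: one generic, ahead-of-time-compiled stub that takes the shape and offset as parameters, instead of infinitely many JIT-generated stubs that bake them in.

```rust
// One ahead-of-time-compiled inline-cache stub, parameterized over the
// values a JIT would normally bake into generated machine code.

#[derive(PartialEq, Eq, Clone, Copy)]
struct ShapeId(u32);

struct Object {
    shape: ShapeId,
    slots: Vec<i64>, // property storage, addressed by offset
}

// "Is this the shape I saw before? If so, read the field at a known
// offset." The fast path is just a comparison plus an indexed read.
fn get_field_stub(obj: &Object, cached_shape: ShapeId, offset: usize) -> Option<i64> {
    if obj.shape == cached_shape {
        Some(obj.slots[offset])
    } else {
        None // cache miss: fall back to the slow hash-table lookup
    }
}

fn main() {
    let obj = Object { shape: ShapeId(7), slots: vec![10, 20, 30] };

    // "Wiring up" at runtime is just choosing the parameters; no
    // just-in-time code generation is needed.
    assert_eq!(get_field_stub(&obj, ShapeId(7), 1), Some(20));
    assert_eq!(get_field_stub(&obj, ShapeId(9), 1), None);
    println!("cache hit read: {:?}", get_field_stub(&obj, ShapeId(7), 1));
}
```

Because the stub no longer embeds runtime pointers, there are only N kinds of stubs to compile, which is what makes compiling them all ahead of time feasible.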
And as you talked about profiling, it made me wonder - you’re already doing precompilation, you’re already putting these things in an environment where they’re gonna run against the most realistic data there is, actual production data… How expensive would it be to put profiling gathering there and over time recompile these same workers that you’re deploying based on profiling data of their live application?
Yeah. That’s kind of like the long, long, long-term. We have a lot of stuff to build out before we can start thinking about that stuff… But yeah, you can do stuff like – you don’t need to profile every single execution; you can sample… So it’s exciting, but we have a lot of work to do before we can start doing that kind of thing.
Yeah, that is super-cool.
So it’s one thing to be able to stuff WebAssembly modules together, but we talked about how simple WebAssembly modules are, and that means that there’s not really a good way to communicate advanced structures. MVP/base WebAssembly has 32-bit floats, 64-bit floats, 32-bit integers, 64-bit integers. So that’s not a lot of ways to communicate with each other.
And that’s it.
[32:01] But if you have a rope, what you can do is you can just say – you know, it’s kind of a tree, so you have a node that’s just “I am the concatenation of this one string and this other string”, and creating that node is order one. So it’s very cool, but it’s very complicated. But interface types kind of will eventually allow you to define your own ways to kind of lower the platonic ideal of a string down into a rope, or something like that. Kind of like arbitrary computation for translating these types on either side.
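A minimal rope can be sketched in a dozen lines of Rust; this is just an illustration of the O(1)-concatenation property being described, not a production rope (real ones also rebalance, track lengths for indexing, and so on).

```rust
// A minimal rope: concatenation allocates one tree node in O(1)
// instead of copying both strings; flattening happens only on demand.

use std::rc::Rc;

enum Rope {
    Leaf(String),
    Concat(Rc<Rope>, Rc<Rope>),
}

impl Rope {
    // O(1): just one node saying "I am the concatenation of these two".
    fn concat(a: Rc<Rope>, b: Rc<Rope>) -> Rc<Rope> {
        Rc::new(Rope::Concat(a, b))
    }

    // O(n): walk the tree only when a flat string is actually needed.
    fn flatten(&self, out: &mut String) {
        match self {
            Rope::Leaf(s) => out.push_str(s),
            Rope::Concat(a, b) => {
                a.flatten(out);
                b.flatten(out);
            }
        }
    }
}

fn main() {
    let hello = Rc::new(Rope::Leaf("hello, ".to_string()));
    let world = Rc::new(Rope::Leaf("world".to_string()));
    let rope = Rope::concat(hello, world);

    let mut s = String::new();
    rope.flatten(&mut s);
    assert_eq!(s, "hello, world");
    println!("{s}");
}
```

The point in the interface-types discussion is exactly this gap: a “string” crossing a module boundary might be a flat buffer on one side and a tree like this on the other, so the lifting/lowering step has to be able to translate between them.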
So that’s kind of like the furthest vision. But right now we are defining just what’s called a canonical ABI, which fixes the representation; you have to use a string buffer, or something like that. There’s one representation for each type.
So with just a canonical ABI it is kind of just like an IDL, but it’s open to that next step once we ship the first phase… So this is gonna allow all of these modules to talk to each other. And each of these modules - what’s really key about interface types is that they’re kind of [unintelligible 00:33:04.14] So if you think about npm modules, when you use an npm module, it gets all the same permissions and capabilities that your application has. And this is a problem; we’ve seen these supply chain attacks, where some generic markdown library or something - I don’t think it’s actually happened with a markdown library, but… You know, I just do something very innocent, and then actually I’m reading your SSH keys from disk, and I’m sending them off to some server, or I’m mining Bitcoins, or whatever… And so it’s not great. We talked a bit earlier in the podcast about capabilities and how a WebAssembly module can’t do anything unless you explicitly give it something to do. So Interface Types kind of preserves that ability between different WebAssembly modules. It says “Just because I can read to the disk and I’m talking with you and I’m using your markdown library doesn’t mean you can talk to the disk. All you can do is take this markdown [unintelligible 00:34:06.00]
Yeah, exactly. So it kind of limits the blast radius of where things can go wrong when you have a supply chain attack like that. They can’t escape their sandbox even if they’re talking to you… Because the only way you can communicate is with this type grammar, and you don’t automatically get any access to resources unless I explicitly give them to you.
So I can’t have Node modules obviously, or anything like that, and I don’t have any of the browser environments, like Fetch or things like that, that are more supplied by the environment. It would just be the core language itself.
Right. So there’s no DOM nodes, for example. And there’s no require('fs') like you would have on Node. But the replacement for that is the ecosystem that I was just talking about, of these kind of shared-nothing modules that communicate with Interface Types. We hope to build a whole ecosystem that is doing this stuff, so if you want file access, you’d be able to import something that would give that to you, potentially limiting what you can access only to a certain directory. So you can access this scratch directory, but you can’t access my .ssh in my home directory.
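That “scratch directory but not my .ssh” idea can be sketched as a directory-scoped capability. This is a simplified illustration with invented names, loosely in the spirit of WASI’s preopened directories, not anyone’s real sandboxing code: the module holds one directory capability, and path lookups are resolved relative to it with escapes rejected.

```rust
// A directory-scoped capability, sketched (hypothetical type): path
// lookups resolve inside one root, and "../" style escapes are refused.

use std::path::{Component, Path, PathBuf};

struct DirCapability {
    root: PathBuf,
}

impl DirCapability {
    // Resolve a guest-supplied relative path. Absolute paths and `..`
    // components are rejected, so the result stays inside `root`.
    fn resolve(&self, guest_path: &str) -> Option<PathBuf> {
        let mut out = self.root.clone();
        for comp in Path::new(guest_path).components() {
            match comp {
                Component::Normal(part) => out.push(part),
                _ => return None, // `..`, `/`, `.` prefixes: not allowed
            }
        }
        Some(out)
    }
}

fn main() {
    let cap = DirCapability { root: PathBuf::from("/tmp/scratch") };

    assert!(cap.resolve("notes/today.txt").is_some());
    assert!(cap.resolve("../home/me/.ssh/id_rsa").is_none());
    assert!(cap.resolve("/etc/passwd").is_none());
    println!("scoped lookups behave as expected");
}
```

A real implementation has more to worry about (symlinks in particular), but the shape is the same: the only file access a module has is through a capability the host chose to grant.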
When we talk about that communication, does interface types define an ownership model of some sort? Or are we copying memory as we go between these? If not, how do you deal with borders between garbage-collected languages and not garbage-collected languages, and things like that?
Yeah. So there is a copy implied between each side, and that’s basically there to make sure that you’re not sharing the memory, because that’s kind of the vector into heap corruption and getting rid of the sandbox properties that we care so much about. But what’s nice is with the eventual full interface types that kind of allow programmatic lifting into an interface type and then lowering into a concrete type on the other side, that will be only one copy, and it will be kind of like directly into and from the representations that each module [unintelligible 00:38:45.29] It’s basically as good as you can get, given that you do have to have one copy.
Got it. So thinking then about the implications for application architecture, as we talk about these things, we’re gonna want to have modules that essentially are self-contained relative to data, where a module is gonna own a set of data and you wanna keep the communication between them relatively minimal in terms of data size, ideally.
Yeah, ideally. I think it depends on the component. Copying a string is pretty fast. Memcopy is quite fast. But it also depends how nested is the loop in which you’re calling it. So I don’t know, there are architectural things that you can do. You can kind of like make one module own the data, and then hand out identifiers saying “This is essentially a pointer to this data, and whenever you wanna ask something about that data, give that back to me.” You could almost imagine it as like an object, and that’s like the little self, and then you call each method to get little bits of data, but you don’t ever get the whole thing.
So we have to kind of parse that and get the full mappings, so we know this line corresponds to that line, and this file corresponds to that file. But whenever the debugger, for example, stops in a location, it doesn’t need the full mappings. It doesn’t need everything. It just needs to know “Right now I’m paused at this location. What’s the real source location for where I’m currently paused at?” And that’s a tiny amount of data compared to the huge map. So you just kind of expose an API that allows you to keep the full dataset in the original component, and then just make little queries where you get the little bits of data out on the other side.
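The “owner keeps the big dataset, callers hold handles and make point queries” pattern being described can be sketched in Rust. Everything here is invented for illustration (`MapHandle`, `SourceMapOwner`, and the toy line-to-line mapping); real source maps are much richer, but the boundary design is the point.

```rust
// One module owns the full dataset; others hold opaque handles and ask
// small questions, so only tiny results cross the module boundary.

use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct MapHandle(u32);

struct SourceMapOwner {
    // handle -> (generated line -> (original file, original line))
    maps: HashMap<MapHandle, HashMap<u32, (String, u32)>>,
    next: u32,
}

impl SourceMapOwner {
    fn new() -> Self {
        Self { maps: HashMap::new(), next: 0 }
    }

    // Parse/store the big dataset once; hand back an opaque handle.
    fn load(&mut self, mappings: HashMap<u32, (String, u32)>) -> MapHandle {
        let h = MapHandle(self.next);
        self.next += 1;
        self.maps.insert(h, mappings);
        h
    }

    // A point query: "where am I paused right now?" The full map never
    // leaves the owning module.
    fn original_location(&self, h: MapHandle, generated_line: u32) -> Option<&(String, u32)> {
        self.maps.get(&h)?.get(&generated_line)
    }
}

fn main() {
    let mut owner = SourceMapOwner::new();

    let mut mappings = HashMap::new();
    mappings.insert(42, ("app.ts".to_string(), 7));
    let handle = owner.load(mappings);

    assert_eq!(
        owner.original_location(handle, 42),
        Some(&("app.ts".to_string(), 7))
    );
    assert!(owner.original_location(handle, 99).is_none());
    println!("line 42 maps to {:?}", owner.original_location(handle, 42));
}
```

Across a real interface-types boundary, `MapHandle` would be the identifier passed between modules, and each query result is the only data that gets copied.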
Yeah, that’s really interesting. How much overhead is there in terms of calling between modules? Is this like roughly equivalent to a function call even within a module, or is it a higher cost?
It’s a little bit higher cost than function calls within a module, but not too much. Basically, maybe we’re getting a little bit too bogged down into details, but there’s a register for the VM context that kind of keeps track of what is my current WASM instance and what are the bounds of its memories, and things like that… And that stays in a register. When you call across instances to a new module, you have to kind of swap out that register with the new instance’s register. So if you’re doing a micro-benchmark, you’ll see it show up, but if you’re doing any sort of actual work anywhere else, it’s gonna be lost in the noise.
Yeah. And that means that it’s extremely viable to treat these things as essentially objects, in a lot of ways. You can say “This module owns this data”, and you can call methods that are essentially accessors on it when you need the data…
…and really minimize the amount of copying you do. That’s super-cool. So as we move towards this world, what do you think the implications are for how we develop applications, and are there particular domains of applications that are likely to benefit or be driven to adopting this sooner?
Most popular platforms aren’t like this. So porting existing applications - depending on how large the application is and how many things it’s using and stuff, it could be hard… Similar to porting a desktop application to the web can be pretty hard, especially the larger it is. But that tells me that we’ll see more new applications being developed, kind of greenfield applications. And then where are we deploying this stuff first? Well, we at Fastly are doing it kind of in serverless environments, where in general you already have smaller micro-applications. I think that’s relatively easy to bring over to this new paradigm.
[43:45] Another domain where we’ve seen a lot of excitement for WebAssembly, and I think will work well for this kind of ecosystem, is games that want to have plugins or mods, where - say you wanna change X, Y or Z about the game, give us a WebAssembly module and that’s kind of what you’ll write it in; then it’s sandboxed from the rest of the code and you can’t break out. You can only use the game APIs that we give you. Basically, any kind of plugin architecture, maybe for a digital audio workstation, something like – I don’t know, what are popular digital audio workstations? I guess Ableton, and Reaper, and these sort of things. They’re taking these audio signals, midi or whatever, and then that goes into one plugin that provides a filter, and then there’s another one that’s a compressor, or another one that adds a chorus effect… And each of these could be their own little WebAssembly module communicating with interface types to kind of apply their transformation on that signal along the way, and you know that it’s not gonna break out of the sandbox again and it’s not gonna mess with your desktop, or whatever. It’s just gonna work on the audio, like it said it would. So that’s another area where this will be a really good fit.
Awesome. Nick Nisi, did you have any more things you wanted to dig into?
Cool. And then another question is can you think of any triggers or things that developers should be on the lookout for using this as a potential solution to a problem that they have? Is there something that would identify this as a solution?
Yeah. I would say whenever you’re looking to have your users be able to run custom code, and you don’t trust them, but you still wanna have them be able to plug into your architecture and customize things, that’s basically what this is designed for.
So we develop wasmtime, a WASM engine, and we focus a lot of work on making it easy to embed into other applications… But there’s a bunch of different choices out there. If you find one that works better than wasmtime… Yeah.
I feel like the web’s security model was by necessity pushed to a place where things had to be sandboxed, they had to be secure, because suddenly you’ve got all of this untrusted code that’s gonna be running, and now WebAssembly is basically allowing us to say “Hey, that’s a good idea for any type of code we might wanna run. Let’s pull that in.”
Yeah. And we can – rather than have one sandbox for the whole tab, or something, we can have sandboxes for each different component, which is really nice. In general, trust things less. If you don’t have to trust it, then don’t. Even if you do trust it, don’t trust it.
I feel like that’s a good show title. “Trust things less.” Awesome.
I’m not paranoid, I swear. [laughter]
I feel like if you’re running code that you didn’t write, paranoia is a very healthy attitude.
Awesome. Well, I think that is – we’ve covered a lot of ground. I’m still sitting here in shock, observing all of it…
Me too… [laughs]
Nick, do you have any other things you wanna leave us with or let us know about before we wrap up?
Well, I really liked your intro music, and I was wondering if one of you produced that, or who did the music.
Yes, all of the JS Party and generally all of the Changelog family of podcasts - their music is produced by Breakmaster Cylinder.
He (I think) or they have some great stuff.
Yeah, I have to look them up.
You’ll get another taste, because we’re gonna close with an outro, I’m sure…
Awesome. Well, if there’s nothing else, then thank you so much for joining us today, Nick. I think this is a really interesting topic, and I’m super-excited to see where it continues to go.
Thanks so much for having me. This was a blast.
Alright. And if you’re listening to this not live, if you’re listening to this on your podcast and you wanna join in, you wanna be a part of the party live when we do it, we do record live and publish to YouTube at the same time we do it; every week, Thursdays, 10 o’clock Pacific, 12 Central, 1 Eastern. Check out changelog.com/live. You can join us in Slack in real time, and you are what makes this a party. So for all you listeners, we’ll catch you next time. This is Kball, signing out!
Our transcripts are open source on GitHub. Improvements are welcome. 💚