Kevin Ball and Suz Hinton talk with Jay Phelps about WebAssembly; what it is, how to use it, and how some are using it already.
Hired – Salary and benefits upfront? Yes please. Our listeners get a double hiring bonus of $600! Or, refer a friend and get a check for $1,337 when they accept a job. On Hired companies send you offers with salary, benefits, and even equity upfront. You are in full control of the process. Learn more at hired.com/jsparty.
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.
Linode – Our cloud server of choice. Deploy a fast, efficient, native SSD cloud server for only $5/month. Get 4 months free using the code changelog2018. Start your server - head to linode.com/changelog
- WebAssembly Demystified
- Can I use… WASM
- Making WebAssembly even faster: Firefox’s new streaming and tiering compiler – Mozilla Hacks – the Web developer blog
- GopherJS vs WebAssembly for Go - DEV Community 👩💻👨💻
- WebAssembly cut Figma’s load time by 3x – Figma Design
- Screamin’ Speed with WebAssembly – Hacker Noon
- raphamorim/wasm-and-rust: WebAssembly and Rust: A Web Love Story
- Hello wasm-pack! – Mozilla Hacks – the Web developer blog
- all: WebAssembly (“wasm”) support · Issue #18892 · golang/go
- Guide for C/C++ developers - WebAssembly
- Compiling a New C/C++ Module to WebAssembly - WebAssembly | MDN
- rust-native-wasm-loader - npm
- cpp-loader - npm
- WebAssembly Studio
- DenisKolodin/yew: Rust framework for building client web apps
- Initial stab at porting asm/stack.ts to Rust by alexcrichton · Pull Request #752 · glimmerjs/glimmer-vm
- Draco 3D Graphics Compression
Click here to listen along while you enjoy the transcript. 🎧
Hey! Dude, that song… Now I know why it’s called JS Party, I wanted to dance!
I dance every time. If you could see me, I’m here rocking out in front of my microphone. Awesome, thanks for joining us. Also on the line we have Suz Hinton. Hi, Suz!
Hey, it’s good to be back. Thanks for having me. I’m really excited about this topic today as well.
Yeah, I think it will be good. So we have a lot of interesting topics to discuss related to WebAssembly, but I think there’s a key thing we have to figure out first, which is how do we pronounce the abbreviation? Is it WASM [wozm] or is it WASM [waezm]?
That’s a great question… I think it’s more of a regional thing, because I say WASM [wozm] but the majority of the people in the community group and in the working group call it WASM [waezm]. I honestly have no idea, I don’t think there’s a correct pronunciation. I think it’s just regional, and then also for me personally, WASM [waezm] just feels weird to say… Like, waeeeesm…
Jerod argues that it should be WASM [wozm] because then it rhymes with AWSM… So that we can have our title for today, “WASM is AWSM.”
Exactly. But at the same time, WASM [waezm], you can be like “Wuzzuuuum!!!” You know, like the Bud Light commercials… [laughter]
And one of our panelists, Chris Hiller, who’s not on today, he wrote a poem for WebAssembly. It’s called “Instructions,” by Christopher Hiller: “Wasm. Has ’em.” [laughter]
You guys love doing rhymes.
Sure, yeah. The phrase that they like to use is that it’s an efficient low-level bytecode for the web… But we kind of have to distill that down and talk a little bit more about what that means. On the efficiency side of things, it kind of means efficient in almost every single way… Not just efficient as in like performant while the application is actually running, which it is, but it’s also trying to be efficient in the actual size of the files, and also efficient in sending out those files over the internet and then getting them compiled to the person’s native machine code.
With WebAssembly, the goal is to do something totally different, to do what’s called “Streaming compilation.” That means that while the browser is downloading those bytes from the internet, it can actually compile it right then and there. It parses and compiles the WebAssembly bytecode while it’s being downloaded. It doesn’t have to wait for the file to complete. That’s huge. This is a fairly new feature out of WebAssembly in some browsers, particularly Firefox and Chrome… But in those instances, for example, Firefox is able to compile the WebAssembly faster than it actually is downloaded over the internet, on average.
There’s certain cases where the internet is super fast and your computer is super slow where that’s not gonna be true, but particularly on things like mobile devices, that can be huge. Now the compilation, that parsing and compile step that runs in your browser is no longer the bottleneck; it’s back to being the internet.
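The streaming idea shows up directly in the browser API: `WebAssembly.instantiateStreaming` takes a `fetch` response and compiles the module as bytes arrive. Here’s a minimal sketch — the browser-side call is shown in a comment (with a hypothetical module URL), and to keep the example self-contained and runnable we hand-assemble a tiny module exporting an `add` function and compile it from an in-memory buffer instead:

```javascript
// In a browser, streaming compilation looks like this (URL is hypothetical):
//   const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"));
//
// Self-contained version: a hand-written binary module with one export, "add".
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: func 0 has type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // 5
});
```

Note how the sections Jay describes below (types, functions, exports, code bodies) are literally separate, length-prefixed chunks in the binary — which is what makes compiling them as they stream in feasible.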
It’s somewhat true. For WebAssembly, it can compile it because of the way the things are broken up into sections. It’s gonna depend on the actual virtual machine implementation about whether it’s able to compile in between different segments. I don’t wanna get too low-level, but the files themselves are broken up into these segments, and different things are organized inside the files in a way that you might not normally – like, if you don’t know anything about binaries and native compiled code, it might be weird to actually distill down what the files contain, because they split things. The code bodies of all the functions are put in different sections than the names for the functions, and for imports and for all your strings, all the data segments, things like that are gonna be in totally different sections.
So depending on the actual virtual machine, it may be able to compile just the individual sections separately. It may have to wait for that section to finish downloading before it can compile it, but – I don’t wanna speak for them, but my understanding of Mozilla’s virtual machine implementation is that it’s able to compile them as they’re coming in, even within the same segments, the same sections.
That’s really cool.
Now, they keep the web in mind, and certainly some trade-offs–
Interesting… Something you said there got me thinking… Essentially, this is like redesigning Java bytecode or something, so you could have a global virtual machine, but instead of it being owned by a single company that someone like Oracle could acquire and do nasty things to, it’s developed in the open.
Yeah, exactly. There’s certainly some things that are analogous to the JVM bytecode. There’s been a couple attempts at doing similar things to this, like creating a generic bytecode that’s super low-level… The JVM is kind of a bad example simply because it is very – if you actually look at the JVM bytecodes, they’re very specific to Java. It’s very clear they had Java in mind when they were designing the bytecodes… Whereas with WebAssembly it’s pretty generic, it’s pretty low-level; it’s about as low-level as you can get while still abstracting the underlying machine.
So I think that’s important - it’s very, very low-level. It’s not something you typically will write by hand, unless you’re looking for ultra-performance or you’re working on tooling.
Interesting. So if we kind of explore that direction then - it sounds like what you’re saying is we’re calling it WebAssembly, but really the web is just base camp. The summit of Everest is some place that is essentially a universal virtual machine.
Yeah, that’s exactly right. That’s one of the most exciting things about WebAssembly to me… It’s not tied to the future of browsers and all that stuff per se. There already are places and people who are using WebAssembly in completely unrelated web cases. The Ethereum Virtual Machine, for example, is being rewritten to use WebAssembly as their instruction set. And there’s other use cases, as well.
[00:11:55.26] There’s even a person who’s on the community group who is trying to create an actual micro-kernel, the core functionality of an operating system, that runs WebAssembly natively, without needing to use system calls for – like, if you know a lot about OS stuff, they have different rings to the security privileges of an operating system… And with WebAssembly, the way it’s sandboxed, you actually don’t need those rings if all applications were written and compiled to WebAssembly. So you could do some very interesting optimizations.
There really is not a lot of precedent for this. The long-term viability of it is an open question, but it’s kind of a really cool and exciting thing… Because I would love to see WebAssembly be that gap, that bridging of native applications to web applications and making it so that eventually the browser just gets absorbed into the operating system and there’s no distinction.
Chrome OS is the perfect example of what does the future maybe look like, where the browser and the OS are one and the same. I think WebAssembly really helps bridge that gap. I could see – this is a huge bet type of thing, but I could see companies like Apple and Google, with their Android and iOS, deciding to eventually support WebAssembly as a first-class application format, instead of their proprietary solutions. Because they both have to do sandboxing, and they both came up with their own proprietary way of doing that.
Interesting. So stepping back a little bit to browsers, partially because a lot of our audience is kind of web-developer-focused, Firefox has really been pushing the edge on this, and I think their super high-performance compiler is showing essentially they can load and compile this stuff as fast as it comes over the wire… It’s really highlighting the potential of WebAssembly. Is there any info out there about some of the other major browsers doing that level of optimization? I don’t see as much news, but you’re in the know… Are Chrome, and Edge, and Safari - are those teams working on “How do we get this thing compiling as fast as it comes over the wire?”
They absolutely are. They don’t talk about it as much as Mozilla does. The last I checked, Chrome – I don’t remember if it’s in the actual release or if it was on the Canary that I was using… But Chrome did support streaming compilation, but the performance of it was not as – what’s the tactful word…? It was not as good as Firefox at the time. But it’s an incremental thing. I really do subscribe to the “Make it work, make it right, make it fast” type of mentality, and they do as well; I mean, I don’t know if they officially subscribe to that, but they at least certainly act like they do.
That’s one of the interesting things about WebAssembly currently, and one of the reasons I attribute to why WebAssembly has not taken off. The community group and the working group definitely subscribe to the “Make it work, make it right, make it fast” type of thing, so everything’s very minimal as far as the abilities and stuff like that.
In WebAssembly, currently you can’t pass around a DOM node or manipulate DOM or anything like that, because there’s no concept of it. There’s basically a giant linear memory that is a bunch of numbers… Just like you would deal with native machine code - there are no abstractions currently about structs and objects and garbage collection, and stuff like that.
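That “giant linear memory that is a bunch of numbers” is observable straight from JavaScript: a module’s memory is just an `ArrayBuffer` you read through typed-array views. A small sketch of what “passing a string” to a module really amounts to — writing bytes at an offset and handing over numbers (the offset layout here is an arbitrary choice for illustration, not any ABI):

```javascript
// One page (64 KiB) of wasm linear memory. No objects, no DOM nodes --
// just bytes, interpreted through whatever view you lay over them.
const memory = new WebAssembly.Memory({ initial: 1 });

const u8 = new Uint8Array(memory.buffer);
const i32 = new Int32Array(memory.buffer);

// "Passing a string" means: write its bytes at an agreed-upon offset,
// then pass that offset and length across the boundary as plain numbers.
const text = new TextEncoder().encode("hi");
u8.set(text, 8);      // string bytes at byte offset 8
i32[0] = 8;           // pointer (offset) stored at byte offset 0
i32[1] = text.length; // length stored at byte offset 4

console.log(i32[0], i32[1], u8[8], u8[9]); // 8 2 104 105
```

Everything richer than numbers — strings, structs, DOM references — has to be encoded into this flat buffer by convention between the two sides.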
C# has done a ton of experimentation and they’ve got the Blazor stuff that compiles to WebAssembly that works pretty well. Go, another garbage-collected language, just released their ability to compile to WebAssembly, but it doesn’t perform as great, and the binaries are kind of bloated; that’s a common theme that you’re gonna see when a new language decides to target WebAssembly for the first time - things are gonna be slow at first, and your binaries are gonna be bloated. That’s a pretty normal thing, because “Make it work, make it right, make it fast.” Do it in that order.
Suz, you’re often doing some pretty interesting edge cases of stuff - web USB, funny gaming stuff, things like that. What’s your take on WebAssembly and where things are?
I feel like that’s such an accurate description… I feel like I’m always the edge case with everything that I’m trying to do. Yeah, I’m really interested in WebAssembly for obviously these load times - amazing with the streaming compilation… But once you’ve actually downloaded the WASM package or the modules and you’re running it, I’m interested in what WebAssembly is actually really good at with regards to what it can run.
I’ve been reading things about it and I’m seeing that it’s good at things like crunching numbers, which obviously makes me think of things like gaming, but I also wanna see if we can hack it to just be able to port tools that normally run just on your desktop with C, and in even a lower-level language than that. I’m wondering, can we port things to the browser, similar to what we used to use Emscripten for… That’s what I wanna know.
And they did a little bit of tweaking, a little bit of algorithmic improvements, just basically taking advantage of knowledge of the fact that it’s gonna be compiled to a much more native target… Right now it’s 10.9x faster. I know I appreciate that when my Babel builds, and the DevTools, and all that stuff… But that’s just an example.
[00:28:24.29] Aren’t those types of optimizations what your compiler to WebAssembly would do? Like, removing the tool chain out of the browser and into the compiler.
That’s exactly right. However, there is a limitation that the compiler in this case doesn’t know the underlying machine code, so it can’t utilize – it can only compile to WebAssembly, and WebAssembly doesn’t have all the tricks in the book for it… Because modern, real native CPUs today have a lot of exotic instructions and things that you can do for special cases to increase performance, and you don’t get direct access to those things; there is no way to hint to the virtual machine “Hey, use this specific instruction (or whatever) if it’s available.”
That kind of underlying tooling stuff is very interesting to me. One of the things that I was thinking about as I was doing research for this episode is, like, if you look at what’s going on in – and I’m a web front-end guy, so I’m thinking of that world… But if you look at what React is doing with their new architecture, they’re behind the hood, slicing and dicing work and doing all sorts of magic compilation/computation things that your application never has to worry about, and then running them out. That could be happening in WebAssembly.
[00:31:51.19] Yeah, exactly, unfortunately… If you’re doing things like games, you can get great performance right now, today, in games, just because you don’t need to cross that bridge super often. But when you’re touching DOM and stuff like that, what you have to do – you don’t have to do, but to really get Glimmer, Ember-style performance, you have to batch up your changes and send those changes in like a change list across the wire, across that bridge, and then apply those changes at a single time, just to minimize the cost of that bridge. But I think that performance is definitely going to increase significantly over the coming years.
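The batching Jay describes — accumulate mutations on the wasm side, cross the bridge once, apply them all in JavaScript — can be sketched like this. All names here (`applyChanges`, the op codes, the fake node shape) are hypothetical, not Glimmer’s or any framework’s actual API:

```javascript
// Instead of one JS/wasm bridge crossing per DOM mutation, the wasm side
// builds a change list and JS applies the whole batch in a single pass.
function applyChanges(node, changes) {
  for (const change of changes) {
    switch (change.op) {
      case "setText":
        node.textContent = change.value; // on a real DOM node, same property
        break;
      case "setAttr":
        node.attrs[change.key] = change.value; // stand-in for setAttribute
        break;
    }
  }
  return node;
}

// One crossing delivers the entire batch of mutations:
const fakeNode = { textContent: "", attrs: {} };
applyChanges(fakeNode, [
  { op: "setText", value: "hello" },
  { op: "setAttr", key: "class", value: "greeting" },
]);
console.log(fakeNode.textContent, fakeNode.attrs.class); // hello greeting
```

The win is purely in crossing the boundary once instead of N times; the per-mutation work is the same either way.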
Suz, did you have other examples in mind that you were curious about?
I really just want to have a compiler compiled into WebAssembly. That’s what I wanna see. You look at those websites like repl.it and other websites that allow you to essentially be editing things in the actual browser… This is the missing puzzle piece for me on a personal project, where I would love to be able to move something like avr-g++ to be able to be run in the browser. And the bridge isn’t too bad with passing over something as small as a C++ file to compile using that…
So I guess I’m interested in that [unintelligible 00:33:09.05] in that how hard is it to port existing tools that might be written in C++ or Rust? I know that there’s some work you need to do, or you even need to kind of write these pseudo-interfaces in order to get that cleanly coming over. What are the current challenges in people being able to do probably more unique projects like that?
So Emscripten, which you mentioned earlier, is a project that one of the guys who’s at Mozilla created originally - I think it was designed for a predecessor of WebAssembly called asm.js; I’m not gonna talk too much about that, but it was essentially a predecessor, an attempt to do something similar to WebAssembly before this… And they were able to reuse a lot of that architecture, and now Emscripten’s primary goal is WebAssembly.
Now, ultimately it’s gonna depend – the issues you’ll run into are most likely things like platform-specific APIs… And even then though, some of those platform-specific APIs have been shimmed out and will just naturally work. But especially if you’re on the graphics side of things, you may have to do some ifdefs, where you’re saying “If it’s a Mac platform, use this. Otherwise, if it’s WebAssembly, call this header from Emscripten.” Emscripten provides headers for accessing HTML5 and all that, so it takes care of that for you, you just have to call that.
For example, if you’re touching the file system, all of that stuff gets emulated in Emscripten automatically for you. If you’re using anything from the standard library, it will just be automatic and you don’t have to do anything special.
You might find other edges… A multi-threaded environment is not currently – it was working for a little bit, and now it’s not working because of the Spectre and Meltdown exploits. The browsers had to disable shared array buffers, and a shared array buffer is required to be able to do the multi-threading… So it currently does not work. That’s one deal-breaker for a lot of people who have C++ stuff - they might be surprised where they have threads.
Yeah. I don’t know the latest from the browser vendors other than – the last thing I knew was that they were doing research on the best ways to be able to unlock shared array buffer without exposing those exploits again. They’ve mentioned it in one of the community group meetings a couple weeks back, and they seemed confident… They talked about it in a way of like “WHEN we re-enable it”, not “IF we re-enable it”, so I’m assuming that they have confidence in it, but I unfortunately don’t know specific timelines.
If you haven’t dealt with actual very low-level races before - not you, but everyone just in general - they can be very unintuitive and very difficult… So I empathize with the browser vendors pushing back on it. I think they’ve done a good job of acknowledging that it’s an inevitability, but realizing that there are bigger fish to fry right now, and we want to focus on the host bindings, we want to focus on the garbage collection… And if we focus on the multi-threading, that’s just gonna take away time and push back the GC and the host bindings stuff.
And that’s part of “Make it fast”, right?
First they’re working on “Make it work, make it right”, and then, finally, they’ll work on “Make it fast.”
So let’s count this as – we’re rolling in, and it’s your weird idea, but I think there’s actually something really key and interesting there… Two pieces, actually. One is in terms of distribution. So folks who are coding in these other languages who want something that’s more native-level performance, but wanna be able to tap into the greatest distribution network in the world, which is the internet and the browser…
But then the other one that I think is interesting is something we’ve touched on a few times, which is learnability. Tools like JsFiddle and CodePen and things like that, that essentially give you a browser environment for development and for sharing code, have dramatically accelerated the ability of people to learn web development technologies. If we can get compilation and runtime and all of that working in the browser - what does that do for the learnability of all these previously kind of systems languages where you had to do a lot of local set up?
Yeah, that’s what I want. I want us to stop having to call out to a cloud service to compile the code that you’re writing in the browser IDE. I want someone to plug an Arduino in, using WebUSB to be able to upload the code, but it needs to be compiled first… And if that’s all happening completely offline and they can open a browser that’s even just running stuff in like local storage, that to me is where we’ve finally hit the point where you’ve got easily distributable educational resources like that. I think you totally hit the nail on the head there… And that’s why at least I am so excited about it.
I totally agree.
So a lot of the tool chain right now is built – they’re using an LLVM back-end to output WebAssembly…
Well, LLVM is pretty mature, and most compilers focus on bootstrapping, right? Like, “How do I compile myself with myself?” Has anybody tried bootstrapping LLVM with WebAssembly?
Did it create a speed increase, or…?
[laughs] No, no… That would be hilarious though if that was the case.
That would break my brain.
Yeah, that would break my brain, too. Theoretically, that could be possible…
Oh yeah, absolutely.
That’s a whole new level of those standardized unit tests that they’re now running.
I’m sorry, was that a question?
That’s how you cheat and get threads. [laughs]
Right, exactly. [laughter]
…calling back and forth between all of these different engines.
Oh, you could do it today, absolutely.
There’s gotta be a trade-off there.
It’s kind of amazing. So we touched a little bit on tool chains, and that was one of the things that was kind of interesting looking around… Somebody’s done a doc of like all the different languages that have compile to WebAssembly support, and there’s like 20 or 30 different environments that support this at this point.
Sure, yeah. Is that the Awesome Wasm website, or is that a different one?
It might have been… Let me look. I actually wrote a blog post about WASM. I got so excited getting ready for this episode, that – well, it was funny… I was working on this episode, and then I was like “This is really cool!” so I went on Quora and I was starting to answer questions, and that got me more excited, and then I was like “I’ve gotta write a blog post about this!” So I went and wrote a post about how WebAssembly is accelerating the future of web development, and what it’s potentially gonna enable.
Let’s see… Yeah, it was the Awesome Wasm Langs, that’s what it was. That’s the list of environments. It’s got your esoteric languages, like Brainfu*k has a compiler… [laughter] Prolog, and things like that… But there’s also C, C#, C++, the whole .NET environment, Python, Haskell, Java, Go… All of these things are now capable of compiling to WebAssembly.
[00:48:17.00] They’re just not production-ready though. A lot of those are not production-ready, unfortunately.
Which ones would you say are?
Rust and C++ are by far the biggest ones that are production-ready. I would have full confidence in using those in production. Any of the dynamic ones, any ones that require garbage collection and all that stuff - you may be able to use it in production, but you’re definitely gonna be an early adopter.
Go, for example, had to do a lot of clever tricks to work around the – they essentially spilled their entire call stack into linear memory, so that they can do garbage collection on it. That has a pretty big cost… And it’s a temporary trade-off, until WebAssembly gets that GC support, or the ability to introspect the call stack within – this is all getting pretty low-level, but WebAssembly is a stack machine and you can’t currently introspect that stack. So to do any kind of garbage collection within your language, you basically have to duplicate or move your pointers and stuff that you would normally have on the stack into linear memory. Essentially, you can think of it as – even things you would normally do on the stack, you have to do on the heap. If you know a little bit about stack vs. heap, that can give you an example of – it’s gonna be expensive to be spilling these things into that linear memory.
Interesting. You have to share that link with me later, because I’d love to read that.
[laughs] I love that.
It’s true that, especially from the native world – think about it this way, the native world, for the most part, has not really had to care about file size; it’s within reason, right? You don’t want a 2-gigabyte executable, but the difference between one byte and a meg is really pointless in the native world for the most part… So they haven’t really focused on those types of optimizations historically, so the tool chain stuff that compiles to WebAssembly, the early on stuff - and it’s already gotten way better, but early, early on it was pretty bad. It would not be unusual to compile to WebAssembly and get 20 meg files.
[00:51:59.01] They’ve gotten super-improvements on that, and now with Rust and C++ with the right flags and stuff you can get that down to just a couple of k for a simple Hello, world. But there’s a lot of trade-offs. One of the biggest things, ironically, for Rust and C++ is you need a way to allocate memory on the heap, so like your malloc and your free. And malloc and free are actually fairly large implementations for most of them. There’s the ones that are built into the operating systems, but then there’s community-based ones. Rust uses a totally different one by default than OS ones; they have different trade-offs.
One of the trade-offs they did was having more code to – you’d have bigger binary size, smaller heap usage overall, like fragmentation and stuff like that… So one of the choices people are gonna have to make in Rust and C++, the people who are working on WebAssembly implementations, are both working on smaller allocators. That will make the opposite trade-off - you can say “I want a smaller bundle at the expense of having slower allocations.” It’s all physics, it’s give and take. You can’t just magically create things to be fast.
So it’ll depend on the project. A perfect example of it is if you’re just trying to compile to WebAssembly a small library, a tiny little library, you may wanna use the smaller allocator that trades performance for file size. If you’re writing your entire app in a language that compiles to WebAssembly, you probably want the better allocator, because that size is essentially amortized; if the allocator is 2 kb, in the grand scheme of your app 2 kb means nothing, so it’d be worth that performance difference. And we’re not talking slow versus fast; we’re just talking micro-benchmarks, especially when it comes to very small allocations.
So here’s an interesting question along these lines… So what we’re describing here, if we were going to the compiled language world, is basically everything is statically linked. You’re embedding all these libraries that you’re gonna use in your binary that you’re gonna ship. Is there anything on the horizon that looks like essentially dynamic systems libraries, so that the browsers and whatever VMs would have a standard malloc implementation that you could dynamically link to and not have to ship that over the wire?
As far as the browser providing something, it’s gonna provide the host bindings eventually, so you’ll be able to create DOM nodes, and print out the log, and all that… So that’s technically gonna be dynamically linked.
As far as providing standard library stuff like malloc and free, or sbrk (the lower-end ones of those), there’s been discussions… It’s tough, because it’s such an opinionated thing, right? If you know about malloc, you might not even realize that there are lots of implementations of malloc, and that’s an opinionated thing. So instead of that, one of the focuses has been also making the caching story easier, so that that becomes less of an issue.
[00:56:03.26] You can imagine there being lots of CDN links for standard library stuff… You know, like, you pull in the C standard library, and that just gets cached and reused cross-origin between all the different WebAssembly applications, and stuff like that. So not just the file gets cached, but the compilation itself gets cached. So the fact that it’s a different file isn’t really that important. That’s not doable today, but that’s definitely something that’s been considered and talked about, and a goal is to make those types of use cases easier, just so that code can get reused, especially when – you can have a lot of excess file size that you just don’t… It’s kind of a pain – it kind of goes back to the jQuery days, where every website had jQuery, and it’s like “Well, wouldn’t it be nice if everyone just shared the same jQuery?”
Yup, and we got that with CDNs, and then we introduced module bundlers and now we’ve thrown it away…
Whether the browser will expose those things directly itself, like provide its own opinionated malloc and free - if I had to guess, I would say they won’t… It’s just too contentious, I just can’t see that happening.
Suz, anything else on your mind for WebAssembly? You said coming in you had lots of questions…
Yeah, I’m thinking about this from the perspective of the web developer who works 9-to-5 at a company, on a product that gets released on the web… Given that this is so early on and given that there’s already a few performance-based benefits for starting to use WebAssembly, are there any low-hanging fruit that they can sort of focus on for now, while they’re waiting for it to be ready, where they can sort of arm themselves with enough knowledge to start using this stuff to improve the product that they’re already working on? How is it relevant to the day-to-day web developer right now?
Well, aside from the fact that you’re probably already using WebAssembly – actually, I would argue that almost every single person listening to this is probably using WebAssembly without knowing it… Going back to that SourceMap as an example of one of the many projects that have been ported to WebAssembly, that we all use, just transparently… Personally, even though I’m super-obsessed with WebAssembly and super-excited about it, I don’t try and advocate people force themselves, or I don’t super-advocate “Try and find somewhere just to use WebAssembly for the sake of…” So certainly the average CRUD app, where all you do is just (most applications are this way) read data from an API, and then update those forms and stuff like that - those things are not gonna super-benefit from WebAssembly at this point. I would not recommend going down that route.
But if you do deal with - as you guys were saying - weird things… I would say the more weird it is, the more likely it’s probably a good fit for WebAssembly at this point. Anything dealing with algorithmic anything; if you’re dealing with algorithms, it’s probably a great fit for WebAssembly. Graphics in general, as well. In the future, WebAssembly is gonna get SIMD (single instruction, multiple data), which is really useful for doing vector-based stuff.
I want people to be more aware and excited about WebAssembly so that the browsers focus more on it, as well, so that the revolution that I think is coming gets sped up. In a perfect world - I’m envisioning five years from now - you don’t really need to know anything about WebAssembly; it’s just an implementation detail of the language you’re using… Just like machine code. Like, how many people compile for iOS and know anything about the ARM instruction set? Probably a tiny, tiny fraction of people doing that. So they don’t need to know how their code gets compiled and runs on the native thing; it just works.
We’re not quite there yet with WebAssembly, because it’s so early, but that’s the goal - you will just be able to transparently take advantage of it. You’ll be able to use your Reason, your Elm, or some brand new language that hasn’t even existed yet, and transparently compile to WebAssembly and everything just works.
Our transcripts are open source on GitHub. Improvements are welcome. 💚