Go Time – Episode #275

Go + Wasm

with Jonathan Norris, Adam Wootton & Brad Van Vugt


The DevCycle team joins Jon & Kris for a deep conversation on WebAssembly (Wasm) and Go! After a high-level discussion of what Wasm is all about, we learn how they’re using it in production in cool and interesting ways. We finish up with a spicy unpop segment featuring buzzwords like “ChatGPT”, “LLM”, “NFT” and “AGI”.

Featuring

Sponsors

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Changelog++ – You love our content and you want to take it to the next level by showing your support. We’ll take you closer to the metal with extended episodes, make the ads disappear, and increment your audio quality with higher bitrate mp3s. Let’s do this!

Notes & Links


Chapters

1 00:00 It's Go Time! 00:58
2 00:58 Welcoming our guests 01:22
3 02:20 Intro to Wasm 07:34
4 09:54 The challenge of Wasm 03:07
5 13:01 Use cases at DevCycle 02:20
6 15:21 Choosing Wasm 03:56
7 19:16 Writing AssemblyScript 08:32
8 27:48 Measurement tools 04:35
9 32:23 Trailblazing Wasm 07:59
10 40:22 Concurrency models 04:45
11 45:07 Sponsor: Changelog++ 00:56
12 46:03 It's time for Unpopular Opinions! 00:34
13 46:37 Jonathan's unpop 09:36
14 56:13 Adam's unpop 04:20
15 1:00:33 Kris' unpop 08:17
16 1:08:49 LLMs as fancy compilers 01:29
17 1:10:18 AGI & public transport 02:04
18 1:12:23 Gotta Go! 00:14
19 1:12:44 Next time on Go Time 01:17

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Hello everyone, and welcome to Go Time. Today I’m joined by three guests, and Kris. Our first guest is Jonathan Norris, who is the co-founder and CTO of DevCycle. He’s built multiple developer-facing products, and he has experience in designing and building large, scalable systems. Jonathan, how are you doing?

Hi. I’m doing great. I’m glad to be here, and really excited to talk about everything we’re working on, and a little bit of WebAssembly, and Go.

Happy to have you. Our next guest is Adam Wootton. He’s the chief architect at DevCycle, and he’s responsible for infrastructure, performance and system scalability. Adam, how are you doing?

I’m good, excited for the chat today.

Happy to have you. And third, we have Brad Van Vugt, who has joined us in the past to talk about Battlesnake, which is a company he founded. Battlesnake has been acquired by DevCycle; I believe it was earlier this year. And he’s now the head of strategy and growth at DevCycle. Brad, how are you?

Good. Good. How are you doing, Jon?

Good. It feels like it’s been a while since you’ve been on.

Yeah, it’s been a little bit.

I don’t think it actually has been that long, but it just feels like it.

Always happy to show up.

And then Kris is also joining us. Kris, how are you?

I’m doing well. Glad to be back on some episodes. I was gone for a bit there, so…

Yeah, I don’t think I’ve hosted with you in a while.

Yeah, it’s been probably a year or so.

Somehow it feels like I was just randomly on episodes with the same two people all the time, and I don’t know how that happened, but… Okay, so today we’re talking about WebAssembly, and basically the general idea is I want to sort of start by talking about what WebAssembly is, why people should care about it, and then we can talk a little bit more about your experience, Jonathan, Adam and Brad, using it at DevCycle, and how you’ve used it to build different things, and where you find value in it. So I suppose at a high level, what is WebAssembly, and why should people care about it?

Yeah, I can start off there. WebAssembly is basically a memory-safe, sandboxed execution environment that was originally built for bringing sort of lower-level native code to the web… And so obviously, it’s branded as WebAssembly. But really, WebAssembly has evolved into much more of a cross-platform way of executing code at near-native speed across multiple different environments - everything from the web, to the edge with serverless environments, to in your servers, and across multiple different language types… And it’s really designed to create small binaries that can be started up really quickly, and executed at really sort of near-native speed, with a really tight security sandbox around them… That’s kind of the gist of what WebAssembly is.

Awesome. So I guess, where do you guys want to start with this? Do you want to start by talking about where people might start using WebAssembly? Or do you want to start talking about what historically has been done instead of something like WebAssembly?

Yeah, we can get into either of those. So let’s probably start with some of the common use cases for WebAssembly. I think some of the most common ones have really been bringing functionality to a web browser that maybe didn’t exist natively. You think of something like Figma… So Figma, for those who don’t know, is a design tool that allows you to basically do your full design workflow in a browser-based environment, and that is primarily powered through a bunch of WebAssembly libraries behind the scenes. If you want to do deep data analysis, or run things you might run in Python, or R, and sort of data analysis toolchains - there’s lots of that type of stuff you can do with WebAssembly within the browser.

And then also games… A bunch of people have seen on Hacker News - you can run Doom in your browser, and things like that; old-school games. All of those are basically just people compiling C or C++ codebases into WebAssembly, loading it into your browser runtime, and letting you run games in your browser. But really, I think where the energy behind WebAssembly is coming from is creating that portable, cross-platform binary that can be executed in the browser, but in a lot of cases now on the server side, or at the edge. So edge computing is really starting to pick up around WebAssembly, because you can start up a WebAssembly runtime in nanoseconds, maybe milliseconds in most cases, depending on how big your binary is, and really scale your edge computing really, really fast, up and down. And so there’s lots of momentum behind using WebAssembly at the edge. We’re using it at the edge ourselves, and we’re also using it for SDKs across a bunch of different server-side languages.

So I’m assuming the fact that you can run this at the edge and do these other things is the reason why – like, when you talk about Figma and some of these tools, theoretically I assume they could be built with JavaScript. I don’t know how performant they’d be, but theoretically, you should be able to build pretty much any software with JavaScript. So would that be the reasoning there? …it’s not just the fact that JavaScript could have done it, it’s the fact that you could also deploy this on the edge and do all that extra stuff. Sorry, I feel like I botched that a bit in explaining it…

[06:04] So I guess what I’m trying to get at is: what do you think the main motivation is that caused people to switch from trying to rewrite all these libraries in JavaScript, to WebAssembly? And I think what you were saying is all the edge computing and stuff is the main factor there. Is that accurate?

Well, I think there’s also a performance aspect to it as well. As you were mentioning, Figma could have been written in JavaScript - but really, it actually couldn’t have been, because you just would never be able to write something as complicated as Figma, with as many sorts of low-level computations that have to go on, in just plain JavaScript. It just wouldn’t be fast enough. WebAssembly gives you an environment to execute your code that’s sort of closer to the actual hardware. So it’s easier to write performant applications, and for things like Figma, you’re dealing with huge amounts of data being moved around, and shuffled, and transformed, and that sort of thing… And those kinds of operations - you just can’t write them efficiently in JavaScript. You need some sort of compiled layer to do that. So I think that’s another big benefit of it.

So the amazing thing about JavaScript is that JavaScript can actually be really fast, because V8 is this magic piece of software. If you haven’t looked at the internals of how V8 works and how it optimizes JavaScript code - it really is a pretty magic piece of software, that I’m sure hundreds of millions of developer hours have gone into making this amazing, amazing thing. And so V8 can get your JavaScript really, really fast; like, very close to native. But because it’s a JIT compiler, it takes time to get there. So the first execution of your JavaScript code will be kind of slow. And then as the JIT compiler in V8 and SpiderMonkey and all these other engines kicks in and compiles your JavaScript code into lower and lower levels, your code gets faster and faster. But with WebAssembly, you can basically compile down to that sort of near-native speed, running the same code in Go, or C#, or Rust, or C++, or something like that, and get a similar performance level right out of the box by compiling your target to WebAssembly and executing that in the browser environment.

Okay. So this is Go Time, obviously, so it stands to reason that a lot of our listeners want to use Go with WebAssembly, if they’re listening. So I guess to get started, how would somebody get started using WebAssembly with Go?

Yeah, so there’s two ways. You can compile your existing Go code to WebAssembly… There’s a couple projects out there around that, and you can actually create portable code out of your Go code. So if you actually want to, say, take this Go library you have and execute it at the edge, or execute it in a web environment, or even in a mobile environment - because WebAssembly can be run on mobile - you can take that Go code and package it into a WebAssembly binary; it’s a pretty simple thing to do. It’s a compiler target; you can google it.

The one piece of advice I have there is that the standard compilation creates pretty large binaries; like, multi-megabyte binaries, which don’t sound that big, but actually, in the WebAssembly world you want to get down to single-digit-kilobyte binaries, so that they can be downloaded really quickly and loaded into the runtime really quickly. So the main advice I have for anyone looking to do that is to use TinyGo as the compilation layer; it reduces the binary size of your output a lot. It obviously restricts what you can use within Go, but that’s my advice for anyone looking to go down that path.
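To make that concrete, here is a minimal sketch of what the two builds might look like - a tiny module plus the two build invocations. The exported function is illustrative, and TinyGo’s export directive has varied across versions, so treat this as a sketch rather than a definitive recipe:

```go
// main.go - a tiny module for comparing binary sizes (a sketch, not
// anyone's production code; the export directive may differ by
// TinyGo version).
package main

// With TinyGo, //export makes this callable by name from the host
// runtime that embeds the module.
//
//export add
func add(a, b int32) int32 {
	return a + b
}

// A main function is still required as the module's entrypoint.
func main() {}

// Standard toolchain (typically multi-megabyte output):
//   GOOS=js GOARCH=wasm go build -o main.wasm .
// TinyGo (typically a small fraction of that):
//   tinygo build -o main.wasm -target=wasm .
```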

And then if you’re looking to take WebAssembly code that you’ve maybe written in another language and execute it within Go, the Wasmtime runtimes are what we highly recommend. They’re the main runtimes supported by the Bytecode Alliance, which is the open source alliance behind WebAssembly, and they’re well supported and well battle-tested at this point. Those are the runtimes that we’re trying to use ourselves in all of our SDKs, across all of our different languages.
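As a rough sketch of that embedding direction, loading and calling a module from Go with the wasmtime-go bindings might look like the following. It assumes a module that exports a plain add function and needs no host imports, and the major-version suffix in the import path changes over time:

```go
package main

import (
	"fmt"

	"github.com/bytecodealliance/wasmtime-go/v14" // major version varies by release
)

func main() {
	// An engine compiles modules; a store holds all runtime state for
	// one sandboxed instance.
	engine := wasmtime.NewEngine()
	store := wasmtime.NewStore(engine)

	// Compile a .wasm file from disk - e.g. the TinyGo output above,
	// assuming it exports "add" and requires no imports.
	module, err := wasmtime.NewModuleFromFile(engine, "main.wasm")
	if err != nil {
		panic(err)
	}

	instance, err := wasmtime.NewInstance(store, module, nil)
	if err != nil {
		panic(err)
	}

	// Look up the export by name and call it like a regular function.
	add := instance.GetFunc(store, "add")
	result, err := add.Call(store, 2, 3)
	if err != nil {
		panic(err)
	}
	fmt.Println(result) // 5
}
```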

[09:53] So when it comes time to actually write the code for this, is it going to – I’m trying to think of an example. There’s a couple libraries out there that allow you to write Go code that basically generates React, or something like that. And with a lot of those, you end up having to write very specific code. It doesn’t really feel like Go code; it feels like you’re writing Go merged with React, or something like that. So when you’re doing WebAssembly, do you have to drastically change how you think about Go code, or does it really feel like you’re writing Go code?

Yeah, I think that’s where we can get into the challenges of WebAssembly. So you really have to define your interface. And I think that’s the biggest challenge of working with WebAssembly right now - you really have to understand the interface between your native code and your WebAssembly code to build those features well. And Adam can probably jump into this a bit, but that’s where we faced the biggest challenges. If you want to get performant WebAssembly code - it’s really great at churning through CPU cycles; it’s not so great at managing memory and transferring data in and out between your native code and the WebAssembly code. So that’s where you really have to be careful and think about how much data you’re passing into your WebAssembly runtime, how much data you’re fetching out of it, and how you optimize that path.

Yeah, just to jump in here - one issue that we really ran into was that it’s not really possible to exchange complex data structures in memory between some native layer, like Go, and a WebAssembly layer. And really, you basically have to think of it as “How can I pass my data in a way that can be serialized and deserialized as efficiently as possible?” Because you can’t do something like take an instance of a class and then just use it directly on your Wasm side. But what you can do is figure out how to turn that class into some representation in memory that you know how to read on the WebAssembly side.

So there isn’t a direct one-to-one of “This object becomes this object in WebAssembly.” Instead, it’s kind of “Turn this into–” For example, in our case we ended up using protobuf as a serializer. But there’s other examples of how to do that; I think Google has FlatBuffers, that type of library… We originally were using JSON, but it was a lot slower, obviously, than protobuf. So there’s different strategies that people use to efficiently shuffle data back and forth when they’re executing their Wasm modules.
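A minimal sketch of that serialize-copy-call pattern, in Go with wasmtime-go: the “allocate” and “evaluate” exports and the UserData shape are hypothetical stand-ins, not DevCycle’s actual interface, and JSON stands in for protobuf to keep it short:

```go
package wasmbridge

import (
	"encoding/json"

	"github.com/bytecodealliance/wasmtime-go/v14"
)

// UserData is a stand-in for the rich host-side object.
type UserData struct {
	UserID string `json:"user_id"`
}

// evaluateVariable serializes the host object, copies the bytes into
// the module's linear memory, and calls an exported function with a
// (pointer, length) pair.
func evaluateVariable(store *wasmtime.Store, instance *wasmtime.Instance, user UserData) (int32, error) {
	// 1. Serialize the rich object to bytes (protobuf in practice).
	payload, err := json.Marshal(user)
	if err != nil {
		return 0, err
	}

	// 2. Ask the module to allocate space in its own linear memory
	// ("allocate" is a hypothetical export).
	ret, err := instance.GetFunc(store, "allocate").Call(store, int32(len(payload)))
	if err != nil {
		return 0, err
	}
	ptr := ret.(int32)

	// 3. Copy the bytes straight into the module's linear memory.
	mem := instance.GetExport(store, "memory").Memory()
	copy(mem.UnsafeData(store)[ptr:], payload)

	// 4. The module deserializes on its side and returns a plain number.
	out, err := instance.GetFunc(store, "evaluate").Call(store, ptr, int32(len(payload)))
	if err != nil {
		return 0, err
	}
	return out.(int32), nil
}
```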

And it’s really optimized if you’re just dealing with binary data… There’s a lot of examples of using WebAssembly for video encoding, or audio filtering - I wouldn’t be surprised if the tool we’re using today, Riverside, is using WebAssembly modules behind the scenes to do a lot of the audio filtering, and video encoding, and those types of things at the browser level. There you’re just dealing with structured binary data that is easily passed between the two systems - between the JavaScript side, or the Go side, and the WebAssembly side.

So in those use cases - there’s a lot of image processing WebAssembly binaries out there that will let you do image analysis, or image filtering, and things like that within your browser, and do it much faster than you could within JavaScript, by just looking at the binary data and passing that buffer between your different environments.

So when Brad reached out to me talking about WebAssembly, he said that you guys had some pretty interesting use cases for WebAssembly at DevCycle. So can you share a little bit about what you guys are building that required the use of WebAssembly? Or maybe not required, but benefited from it.

Yeah, happy to do so. So at DevCycle we’re building a feature management tool - a feature flagging tool that needs to work across many SDKs, and many different environments. And you can imagine, for a feature flagging tool, we want to be able to support as many as possible of the different customers that come to us and want to use our software. And to do that with a small team either requires creating lots of custom SDKs for all the different languages under the sun, or figuring out a way to use the same codebase across multiple environments. And that’s really where we came to WebAssembly. We’d been playing around with it for a couple years with our previous products, and really took the dive with DevCycle, to say “Okay, we’re going to create one common WebAssembly codebase that has all of our core business logic - all of our logic about how we decide which users should be bucketed into which feature flag, and decide the rollouts, and all the important business decisions. We need this piece of code to be super battle-hardened, to have every test we can think of run against it frequently, and to be as solid as possible.”

[14:24] And so by using WebAssembly, it allowed us to create a really battle-hardened library that we can then share across everywhere we need it. So we’re using that WebAssembly library in our workers that are running our scalable APIs that run within Cloudflare, that edge worker environment; and then we’re using that same WebAssembly code in all of our server-side SDKs, so everything from Go, to Java, to Node.js, to C#, and I think Python and a couple others are coming soon. So all the major sort of server-side languages that we need to support as a feature flagging platform, we can now be really confident that we can build those SDKs quickly, and we can ensure that they all work as we expect, because we’re using basically the same core code across all those SDKs, and across our APIs. So it has brought a ton of business value for us, and really reduced the amount of code that we’ve had to write, basically.

I’m curious, when the decision was being made - it was before my time at DevCycle, but when the decision was being made to go with Wasm, what was the community like? What was the maturity of Wasm in general? We hadn’t quite seen – Figma is a really good example of a company that’s using Wasm at incredible scale, and to great benefit… But WebAssembly is still pretty new, and it’s still kind of figuring itself out, and it’s still kind of figuring out its place in the ecosystem… So what was the community like when DevCycle was deciding to opt into it so hard?

Adam, do you wanna jump in there?

Sure, yeah. I mean, I think one thing that’s been kind of interesting is that we’ve sort of felt like we’ve been a bit on the bleeding edge of using this stuff the entire time that we’ve been trying to employ it in our SDKs. And I would say that when we first started using it - I mean, there’s a pretty decently-sized community around it, but a lot of the specific tooling that we were using was, I would say, fairly fresh. And so it’s been kind of interesting, because we’ve been able to watch it grow with our usage of it. One of the main tools that we use is AssemblyScript. That’s a language that basically looks sort of like TypeScript - the typed version of JavaScript - but it compiles to basically a WebAssembly module. And that language was pretty new when we started using it. I think it had only been out for maybe less than a year, and as we’ve been using it, we’ve seen it vastly improve. We’ve seen the community grow quite a lot, and we’ve been pretty active in their Discord. They have a really good community in there - people that answer your questions if you have issues with it… But yeah, it’s been really interesting to see the excitement and the momentum growing behind this ecosystem.

And I would say the ecosystem for people using WebAssembly in the browser has gotten pretty mature. I think browser support for WebAssembly has been in place for a long time now - I don’t know exactly the number of years off the top of my head, but it’s many years. And WebAssembly, actually - I think I was talking to some of the folks behind WebAssembly last week at KubeCon in Amsterdam, and I think it was started around 10 or 12 years ago. It’s been around a lot longer than most people think, but it was really originally about bringing your lower-level codebases - your C++ codebases, your Go, your Rust codebases - into a browser environment and being able to execute them there. But where the community is going is expanding beyond that, bringing the power of this very secure, small runtime to other places - your server-side use cases, your edge runtimes, things like that. And that’s really where the community has started to grow.

[17:59] There’s a bunch of new startups that I got to talk to last week who are really trying to basically replace Docker containers, in a lot of ways, with WebAssembly runtimes, for things like Kubernetes and at the edge. So it’s a really exciting space, and the community is growing and accelerating really quickly here.

I’ve found it very interesting how – as you said, the original goal of this was “Let’s bring some–” Unreal Engine, I think, was one of the things that they compiled via Emscripten, and had it running in the browser… It was like “This is awesome.” That was even before we had WebAssembly. And the whole goal of that was “Okay, let’s move it into the browser.” And I don’t think anybody really thought of it at the time, that “Well, the browser is actually this incredibly sandboxed environment, and we have not really thought about that, because a lot of backend engineers don’t pay a whole lot of attention to frontend.” So it’s just “Oh yeah, it’s the browser. It’s just gonna work.” But it’s incredibly robust when it comes to sandboxing. And WebAssembly had to do that as well.

So that’s one of the things I’ve always found incredible about WebAssembly, is it’s “Oh yeah, we’ve been trying to do all of the sandboxing on the server side, and it’s just been – we’ve tried.” It winds up being pretty messy outside of virtual machines. So I’ve found that whole arc and that whole story kind of incredible in and of itself.

You mentioned using AssemblyScript. I’m just curious, what’s been the experience actually writing code, writing WebAssembly in AssemblyScript? Because I’ve heard it can be a bit rough - it’s a bit like TypeScript, but there’s a lot of parts of it that aren’t. And I know that with WebAssembly you also basically get nothing. It’s just “Here’s a box, you can run some code in it, but don’t expect it to be an actual computer.”

Yeah, I mean, I would say from our experience there’s definitely a lot of pitfalls with using it. And I think it’s maybe a bit deceptive when you first start learning it, how similar it can look to TypeScript. Because under the hood, it really doesn’t work like TypeScript at all, obviously. And so you really have to kind of – especially when you’re sensitive to performance, or things like that, you really have to start looking at the actual output that it’s giving you, and trying to figure out what your code is ultimately doing inside of the Wasm environment… Because you can very easily write extremely inefficient code that would not seem inefficient if you were just running it in a JavaScript environment. I think Jonathan mentioned earlier that the V8 JIT compiler is really good at this - once it executes some code path that might not be efficient the first time, it really quickly optimizes it, so that the next time it runs, it can be a lot faster. None of that is the case in the AssemblyScript world. So if you write code that might not have seemed slow in TypeScript, it will end up being slow in AssemblyScript. And we also found - the further we got into it, the more we had to understand the underlying memory structure of how it allocates memory for the various different kinds of standard objects that you can use… We had to kind of figure out how the strings work, how class instantiations work, where does the garbage collector kick in… There’s a lot of factors involved, which we got into more and more as we were trying to optimize performance. So it was really easy to get something going that worked and passed all of our unit tests, but once we started to profile and benchmark it, that’s when we started to realize “Oh, a lot of the assumptions we’ve made about how we can write this are actually not true, and we need to start rethinking some of it to make it performant.”

Yeah. For context, we were coming from a TypeScript codebase that we basically wanted to make work in WebAssembly. So it was kind of a natural choice. We took that TypeScript codebase and had to go in and change all the higher-level types to lower-level primitive types. And then we had to remove things you can’t do in AssemblyScript, like closures and things like that. So we had to refactor a bunch of the code back down to lower-level primitives - going back in time a little bit, to simpler times. And we were able to convert our fairly large codebase to AssemblyScript in under a week, and get it passing all of our tests, and functional… So I think it made a big impression on us how quickly we were able to get something going… But yeah, as we got into the project we’ve been working on lately, which has been optimizing our Go SDK down to the nanosecond level, there’s definitely a lot of roadblocks that we ran into along the way on the AssemblyScript side of that.

[22:20] Yeah, I was hoping we could dig into that a little bit further. I think DevCycle’s use case is sort of unique in this space. And Jonathan, you kind of scoped this a bit - we’re talking about ultra-low, nanosecond-level latency, we’re talking about edge computing, we’re talking about running things in the browser as close to the user as possible… And so I think the DevCycle use case is far beyond just the cross-compilation, shared-binaries sort of benefits of WebAssembly; really what we’re talking about is super-micro optimizations and very large-scale performance, if you look at how the DevCycle infrastructure operates. So let’s get into the gritty technical details: what was it like going from WebAssembly code that ran, to making it run really, really fast, specifically in a Go runtime?

Yeah, I’ll provide some context on the optimization problem, and Adam can dive into some of the details. Basically, we were challenged by a new customer who runs a global CDN network, and that global CDN is built upon a Go codebase. And we’re like “Okay, this is a real challenge for us. We have to take this challenge to optimize our WebAssembly code as much as possible, so we have as little impact as possible on the performance of their CDN code running Go, and evaluating, say, tens to hundreds of flags at any given time.” Yeah, so that was the challenge put in front of us, and we kind of naively said “Okay, let’s take it on, and let’s dig into every single little detail within the WebAssembly code, and start looking at everything we can do to optimize the performance of it.” And yeah, there’s definitely some interesting things we found along the way. I don’t know, Adam, if you want to start with where we started from… Because we didn’t start from a good place, but we got to somewhere pretty good by the end.

Yeah. I mean, as I kind of alluded to earlier, our goal originally with this was just to get something working at feature parity with our original TypeScript code. So we weren’t really super-concerned with actually how performant it was. And it was only once we started digging into the performance metrics that we realized how far we needed to go from there. So our initial performance metrics - basically, we were measuring in terms of evaluations of a user against a variable. So the DevCycle SDK - you’re basically asking it for “What is the value of this variable, given this user data?” That’s sort of the idea of a feature flagging platform; it’s like, “Given this user, what value of this variable do they get?” And in our case, each one of those evaluations in the very first version of this code was over a couple of milliseconds, which is really, really slow. We’re talking about a CDN here that’s – they’re targeting 10 to 15 milliseconds in total for their entire request handler, which might evaluate 10 or 100 DevCycle variables, where each one of them would be taking two to three milliseconds in the original code. So obviously that was an untenable performance level. So a lot of the initial work that we did was essentially just to reduce the amount of data that was being passed over that module boundary. So we talked about this earlier, how it’s not possible to just directly share rich objects between your host and your Wasm code. So we had all these requirements to pass a structured set of user data, and a structured set of configuration data that was received from the server, and then tell Wasm to basically give us the answer, “What is this variable value supposed to be?”

[25:43] So a lot of the initial work was kind of “Okay, how can we cut down to the bare minimum of data that we actually need to pass across that boundary?” And it turned out we were passing a lot of data that was not necessary for that particular operation. So once we cut that down, that got us much more into the ballpark - it went from literally two milliseconds to 70 microseconds, which was a huge improvement to start with. But 70 microseconds was still way too slow. Because - to set a ground truth here - we benchmarked some of our competitors’ SDKs, and their execution times were down in the thousand to 10,000 nanoseconds per op range, whereas ours was up at 70,000 nanoseconds - 70 microseconds.

So yeah, a lot of what we ended up doing was kind of avoiding allocating new memory, and sort of sharing the memory between the host and the Wasm module more efficiently. So it turned out that, especially in AssemblyScript, asking it to allocate new memory can be really slow, because AssemblyScript tries to give you this sort of easy to use garbage collector, where it’ll just make sure that things are being cleaned up properly. But it’s a fairly naive algorithm that they use, and so it ends up adding a lot of time every time you need to allocate new memory for things.

So one of the major optimizations we did was basically to say, “Okay, let’s sort of create a buffer in memory that is the scratch space to write any data that we need to pass across the boundary to”, rather than what we were doing before, which was allocating that entire buffer every single time, and basically saying, “Here’s your new buffer. I’m going to write the data into this buffer. Now you should read from this buffer, and this is your data that you need to deal with.” So instead we said “Here is this static buffer of a fixed length, and we’re just going to write into some portion of it, and then tell you where to read from.” And so it’s already allocated, it’s already fast. All you need to do is just read the bytes to get your information. So yeah, that was the majority of the work that we were doing, was kind of things around that. I can keep going, but… Do you want to ask any questions?
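A sketch of the difference, continuing the hypothetical wasmtime-go helper from the earlier example: allocate one fixed scratch region at startup, then reuse it for every call instead of allocating per call. The exports and sizes are illustrative assumptions, not DevCycle’s real interface:

```go
package wasmbridge

import "github.com/bytecodealliance/wasmtime-go/v14"

// scratchSize is the fixed scratch-space size, chosen up front to fit
// the largest payload you expect to pass across the boundary.
const scratchSize = 64 * 1024

type wasmClient struct {
	store      *wasmtime.Store
	instance   *wasmtime.Instance
	scratchPtr int32 // offset of the preallocated region in linear memory
}

func newWasmClient(store *wasmtime.Store, instance *wasmtime.Instance) (*wasmClient, error) {
	// One allocation, once, at startup ("allocate" is hypothetical,
	// as in the earlier sketch).
	ret, err := instance.GetFunc(store, "allocate").Call(store, int32(scratchSize))
	if err != nil {
		return nil, err
	}
	return &wasmClient{store: store, instance: instance, scratchPtr: ret.(int32)}, nil
}

// call reuses the same region every time: write the payload, then pass
// (pointer, length) so the module knows where to read.
func (c *wasmClient) call(payload []byte) (int32, error) {
	mem := c.instance.GetExport(c.store, "memory").Memory()
	copy(mem.UnsafeData(c.store)[c.scratchPtr:], payload)

	out, err := c.instance.GetFunc(c.store, "evaluate").Call(c.store, c.scratchPtr, int32(len(payload)))
	if err != nil {
		return 0, err
	}
	return out.(int32), nil
}
```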

Well, I’m curious from the Go perspective; I think the numbers are obviously impressive and interesting. How are you measuring these things? Because you’re measuring it from a Go runtime point of view, you’re not measuring it as a WebAssembly project? What tools are you using out of the gate to even arrive at these measurements?

Yeah, so we actually had a lot of different tools we ended up using, for different reasons. So these numbers that I’m sort of rattling off here were all based on just running a Go bench test, where it was essentially just evaluating variables as fast as possible, in a single thread, and just saying, “Okay, how quick is this operation?” and at the end it would just spit out time per op. So that was sort of where we were getting our initial numbers from.
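The shape of that kind of benchmark is standard Go tooling; here is a sketch with a stub standing in for the Wasm-backed SDK call:

```go
package devcycle_test

import "testing"

// stubClient stands in for the real SDK client; Variable is a
// placeholder for the Wasm-backed evaluation call being measured.
type stubClient struct{}

func (stubClient) Variable(userID, key string, def bool) bool { return def }

// BenchmarkVariableEvaluation hammers evaluation in a single goroutine
// and lets the framework report time per op.
func BenchmarkVariableEvaluation(b *testing.B) {
	client := stubClient{}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		client.Variable("user-1", "my-flag", false)
	}
}

// Run with:
//   go test -bench=BenchmarkVariableEvaluation -benchmem
// Adding -cpuprofile=cpu.out feeds `go tool pprof`, though, as
// discussed below, everything past the Wasm boundary shows up as one
// opaque block on the Go side.
```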

We realized, as we got further into it, that there were more factors involved, especially when you start to deal with multi-threading, which is obviously important for a Go web server; you can’t have just this single-threaded thing sitting there being the bottleneck for all of your stuff. So that changed the nature of how we needed to measure things.

But outside of just measuring the time per op, we were also digging into how long the actual calls inside of Wasm were taking, and what was slow inside of those calls. And that’s not something that was easy to measure on the Go side, because we were using things like pprof, basically spitting out a CPU profile and going to look at it… And as far as Go is concerned, everything that happens past the WebAssembly module boundary is just a black box. So we just had this giant box on our pprof output that was like 20-30 microseconds. And it’s like “Well, what’s going on in there? How are we supposed to find that out?” So it turned out that Node.js has really good native support for WebAssembly, which includes actually outputting CPU profiling information for the actual WebAssembly calls that are being made, down to the actual executions that are happening.

[29:40] So we were able to basically take the same module we were testing in Go, stick it into our Node SDK, and start running some benchmarking there, collecting CPU profiles from that. The Chrome browser has basically a debugging tool that you can just attach directly to a Node process. So we were just capturing CPU profiles from the same sort of tests that we were doing on the Go side. And once we were doing that, we could start to see “Okay, here’s the particular call inside of the WebAssembly code that’s slow”, or “Here’s where we need to focus our time to do some optimization work.” I think we were able to save another 20 or 30 microseconds of time just from doing that, basically - just looking for “Where are we inefficiently allocating stuff? Where are we maybe duplicating some calls that we didn’t need to be duplicating? Where could we transform the configuration we’re getting from the server before we need to use it, so that it’s in a more efficient form for the code to iterate over?” There’s a lot of revelations that came from that.

So that was an interesting investigation as well… And then we were also getting into a bit of memory size issues, where it’s sort of like “How much memory is it allocating? Are there memory leaks?” That sort of thing. And for that, we found some tooling that actually plugs into the AssemblyScript compiler, and can give you a really detailed output of what is on the heap right now, and what’s been allocated. And we were able to do the standard comparing of two heap dumps, to say “Oh, there’s maybe a leak happening here”, or “This is allocating way too much memory”, and we could start to dig into some of that as well. So it was actually a pretty good experience overall, to be able to use all those tools to dig into some of this stuff.

So you said you were using Node to sort of dig into the WebAssembly side, and to actually trace what was taking a while; do you see that as something that eventually WebAssembly itself will have tooling around that, so that you don’t have to sort of jump from language to language to test that? Or is that something that would have been hard to do without something calling the WebAssembly directly?

Yeah, I think it’s just that the V8 runtime for WebAssembly is the most mature runtime, so they have those development tools built up. And I’m sure that the Wasmtime team at the Bytecode Alliance is definitely working on similar types of outputs for all the different runtimes that they support… But yeah, the common recommendation is if you really want to get that low-level optimization, plugging it into some type of a browser engine that has Wasm support is the best way to get that low-level profiling information. And obviously Node, given that it just runs on V8, is probably the easiest way from a server-side use case to get that data.

When you guys were actually doing all these optimizations, did you find that a lot of other companies were doing similar things? Because you said that when you started, it was pretty – WebAssembly wasn’t adopted by a lot of people. So were you kind of in the dark, figuring this out on your own? Or were there other people you could talk with and exchange ideas with?

I guess we haven’t really come across anybody that’s trying to do what we’re doing, which is using a WebAssembly module to just create a reusable block of code in SDKs, and also on the server, and also everywhere else. I’m sure there are people that are doing it; I’m not aware of them, but I’m sure they’re out there. If you’re listening, give us a ring. But yeah - I mentioned this earlier, but the AssemblyScript community was really helpful to us. They have a Discord we joined, the creators of the language are in there, pretty much every day, they’ve jumped in on some of our questions… So it’s definitely pointed us in the right direction, and we pretty much wouldn’t have been able to get where we are without their help… So we’re really appreciative of that. But yeah, there was kind of a lot of fumbling around in the dark, just like trying things, trying to figure out what was going on… But I would say - yeah, we’re in a much better place now than we were when we started. I think we understand how this stuff works a lot more than when we first went down this path.

Yeah. I would also say that most teams who are trying to do high-performance WebAssembly codebases are likely starting with a lower-level language, like C++, or Rust, or something like that, as their target for compiling to WebAssembly. So if you’re starting a brand new, fresh project, and not coming with an existing TypeScript codebase like we were, I would definitely recommend starting with one of those two languages… I would say there’s a much larger community around Rust and C++ compiling to WebAssembly than there is around trying to compile TypeScript code to WebAssembly.

[34:12] So that would be my advice. And you can also go down the route of using Go. From people I’ve talked to, the Go-to-WebAssembly conversion doesn’t seem as efficient as Rust and C++. It’s probably at a similar efficiency level to where the AssemblyScript code is… So you’re not going to get as purely optimized code as you would with C++ or Rust, but I’m sure there’s lots of Go folks out there who are probably listening to this and shaking their fists, being like “No, I’m working on that optimization!” And so I’m sure it’s gonna get better over time.

But yeah, definitely start with a lower-level language if you’re trying to do something very latency-sensitive. And I think over time we’re definitely looking to move to something that starts off at a lower level and can be optimized more directly than our AssemblyScript code is. So I think we will migrate to something like that over the coming months or years.

Just to make sure I understood correctly… I believe you said that WebAssembly has a garbage collector, and that you guys were not using it, because it was just slow with what you were doing, and you needed to have that scratch space. Is that what contributed to the memory leaks, and that sort of stuff?

Yeah, there’s a couple things there. So WebAssembly itself, at the moment, doesn’t have a garbage collector, as far as I know… Although I think there’s a proposal that they’re working on, to sort of standardize how a garbage collector should work. But at the moment, it’s basically up to whatever you’re using to output to WebAssembly to figure it out. So in this case, the garbage collector was actually implemented by the AssemblyScript compiler. And essentially, their implementation turned out to be the majority of the slowness once we had trimmed away everything else we could do. We sort of got down to like “Okay, now we’re basically just trying to reduce the number of memory allocations”, because every call to allocate new memory was sort of showing up on our profiles, being like “Okay, that’s pretty slow.”

There’s also some interesting work we were doing there to kind of tweak the – there’s a couple of variables you can tweak in the garbage collector algorithm to try and improve how often it runs, how long it interrupts for, and try to smooth out the lines a little bit… Because we were also measuring p50 execution time compared to p99 execution time, and we were seeing huge discrepancies there… Which turned out to be a garbage collection issue, where basically anytime the garbage collector decided to interrupt, it would triple or quadruple the execution time of that specific call, and we needed to kind of bring those numbers closer together. So we were playing around with the garbage collection numbers to try and figure out if we could tighten that band a little bit.

In the case of the memory leak thing… So the one instance of that happening that was actually kind of interesting was sort of what caused us to go down that route of using that tool I mentioned earlier, which analyzes – it’s a plugin for AssemblyScript that analyzes the heap, and tells you what’s been allocated… And we were trying to basically use that output to figure out what’s leaking.

But what’s really interesting is when we used that, we weren’t seeing any new allocations, or any major growth happening in the actual addressable heap memory space… But what we were seeing was the overall – so the way that WebAssembly works in Wasmtime, and probably everywhere else, is that there’s a certain fixed amount of linear memory that it gets allocated for execution. And when it starts to run out of that linear memory, it will grow the amount of memory that it’s allowed to use. I think that might actually be implemented by AssemblyScript, but it basically doubles the amount of memory.

So what we were seeing was this linear memory size was growing, and it kept getting bigger and bigger the longer we ran it… But meanwhile, the heap was like “Well, I don’t have any new stuff going on.” And so we had to figure out “Where is this memory growth coming from? Why does it think it needs to grow its linear memory, when its actual heap size is still really small?” And it turned out that it was a bug in one of the libraries we were using.
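A crude check of the kind described - sampling the linear memory size around a burst of calls and watching for unbounded growth - might look like this with the wasmtime-go bindings; the call loop is elided, and the helper name is, as before, a hypothetical:

```go
package wasmbridge

import (
	"log"

	"github.com/bytecodealliance/wasmtime-go/v14"
)

// checkMemoryGrowth samples the module's linear memory size before and
// after a burst of calls; steady growth while the heap stays flat is
// the symptom described above.
func checkMemoryGrowth(store *wasmtime.Store, instance *wasmtime.Instance, calls int) {
	mem := instance.GetExport(store, "memory").Memory()
	before := mem.DataSize(store) // linear memory size in bytes

	for i := 0; i < calls; i++ {
		// ... invoke the module as usual ...
	}

	if after := mem.DataSize(store); after > before {
		log.Printf("linear memory grew from %d to %d bytes over %d calls", before, after, calls)
	}
}
```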

[38:02] And this kind of gets back to the fact that we’re a bit on the bleeding edge here, where we’re using libraries that are really, really new, some of which are not super production-hardened… And in this case, this library was using this concept in AssemblyScript called an unmanaged class, which basically skips all of the garbage collection stuff, and expects that you’re going to keep track of its references. But the library wasn’t actually keeping track of the references to those classes. So it wasn’t showing up in the heap because the garbage collector had no idea that it existed; the library also forgot about it, so it was basically just filling up a whole bunch of memory space with instances of this unmanaged class, which was never getting cleaned up. And so that was kind of an interesting investigation, because we basically ended up zeroing in on this library, being like “Okay, so because they’re allocating this unmanaged class, it’s not going to show up in any of our tooling.” And so we ended up sort of collaborating with the author to get that fixed, and the problem went away.

Yeah. And for some background there on garbage collection in WebAssembly, it’s currently the responsibility of the runtime within WebAssembly to manage its own garbage. So for example, AssemblyScript has its own garbage collector, as you said, and if you’re using Go, for example, compiling to WebAssembly, it would sort of bundle in a garbage collector there. But there is a really exciting proposal that we’re engaged in, that would actually expose the garbage collection to the host runtime. So for example, if you’re running Wasmtime in the future within Go, the Go native garbage collector could manage all the pointers and references for all the allocations made within the WebAssembly. Or for example if you’re running in the web, or in Node, the V8 garbage collector could manage all those references for you, which would be way more efficient than trying to bundle in a garbage collector into WebAssembly, like we have to do right now. So that’s one of the many proposals that we’re keeping a close eye on, and are probably going to be early adopters of.

I assume that means if somebody is coming from C or C++ over, and converting to WebAssembly, or compiling to it, at that point it’s gonna be the same as normal C++, where they’re managing all that on their own. So is that one of the reasons why you see C++ being a little bit more bleeding edge as far as WebAssembly performance?

Yeah, yeah, for sure. Yeah. If you’re managing your own memory, then you can definitely not pay the garbage collection tax, and get a much more performant WebAssembly codebase, for sure. But then you have to manage it though…

Sort of along a similar line, I wanted to ask about the concurrency model difference. Because a big reason for adopting Go on the server side in general is its concurrency and multi-threading support, and goroutine environments, and that sort of thing. But obviously, there’s trade-offs when you’re going into a WebAssembly-built core. How did you approach that, and what do you think needs to happen for that to be performant going forward?

Yeah, the current state of WebAssembly is that the – and my knowledge might be out of date here, but there’s also a proposal for multi-threading support that hasn’t quite landed yet… And for that reason, AssemblyScript doesn’t support any form of multi-threading either. So what that means is basically for us to safely call our WebAssembly module, we essentially had to put a mutex around any call to it, to make sure that we weren’t sort of corrupting the memory state by having multiple goroutines accessing it at the same time. The problem with that is that it obviously creates this sort of single bottleneck for any web server that’s trying to serve thousands of requests concurrently, and it’s dealing with this little WebAssembly module that every time it’s asking for a variable value, the WebAssembly module’s like “Sorry, hold on. I’m doing something else, for somebody else.”

So we ended up solving this in the SDK by basically creating multiple instances of the WebAssembly module, and then just kind of shuffling between them. So whenever proper multi-threading support lands, we’re definitely gonna integrate that into the SDK. But in lieu of that, we sort of ended up on this solution where we have - I guess it’s called like an object pool, where you can basically borrow an instance of the WebAssembly module, and then do some work with it, and then return it to the pool. And so the SDK lets you configure the number of those objects that you have; by default, it’s basically the number of the [unintelligible 00:42:09.27] And you essentially – every time the SDK is asked to get a variable value, it’s just borrowing one of these WebAssembly modules from this pool, doing its thing, and then returning it back to the pool.

[42:21] So that was sort of our workaround for that, and it basically unlocked better concurrent performance. There were a couple challenges there as well with the WebAssembly sort of needing to be kept up to date with the latest configuration from the server, and we had to kind of make sure that across that whole pool every instance of the WebAssembly had the latest configuration that had just come in. So we had to sort of set up this system of taking some of these objects out of rotation, so that we could kind of update their configuration behind the scenes, and then return them basically back to active duty, once the configuration was up to date. So there was a little bit of interesting kind of juggling going on there, but the solution seems to work fairly well.
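A minimal sketch of that pool, reusing the hypothetical wasmClient from the earlier sketches: a buffered channel hands out instances, so each instance is only ever used by one goroutine at a time and no mutex is needed around the Wasm calls themselves:

```go
package wasmbridge

// instancePool hands out Wasm instances over a buffered channel;
// borrow blocks when all instances are busy.
type instancePool struct {
	instances chan *wasmClient
}

func newInstancePool(size int, build func() (*wasmClient, error)) (*instancePool, error) {
	p := &instancePool{instances: make(chan *wasmClient, size)}
	for i := 0; i < size; i++ {
		c, err := build()
		if err != nil {
			return nil, err
		}
		p.instances <- c
	}
	return p, nil
}

// borrow takes an instance out of rotation; exactly one goroutine
// holds it until giveBack returns it to the pool.
func (p *instancePool) borrow() *wasmClient { return <-p.instances }

func (p *instancePool) giveBack(c *wasmClient) { p.instances <- c }

// Hot-path usage:
//   inst := pool.borrow()
//   value, err := inst.call(payload)
//   pool.giveBack(inst)
```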

Is that something you had to actually set up and manage in every SDK individually? So if you’re writing a Go SDK, you’d have to set that up there, and if you’re writing a Rust SDK, you’d have to set that up there?

Yeah, basically it would end up being language-dependent. I guess the downside of this is that the original goal of WebAssembly was to make it really easy to write all of our SDKs across all of our different platforms - but as we got further and further down the performance path, we started to realize that there’s more and more platform-specific code we had to write to get it to perform the way we wanted it to. So another example of that is the protobuf serialization that I mentioned earlier. To pass data back and forth across the WebAssembly boundary in a performant way, we switched from JSON to protobuf. And in order to do the protobuf serialization, we basically needed to implement a protobuf serializer and deserializer, or use an off-the-shelf library, in every SDK that was going to use protobuf.

So we did keep around some of the old interfaces… The Wasm module can still use JSON, and it doesn’t need to have multi-threading support, depending on the platform. Obviously, in Python everything is single-threaded anyway, so in Python you don’t really need to worry about building multi-threading support. And also, on some platforms the performance is already kind of good enough. Whereas Go, we’re thinking, has higher performance requirements, basically.

I think that’s the important point… For 99% of use cases, the single-threaded performance of the Wasm code has gotten so good - like, how far we’ve optimized it - that a mutex lock around it doesn’t affect performance in a very parallelized environment at all. But for this specific use case, where we’re dealing with an extremely high-load CDN server that’s running this Go SDK, with such a high request count, in that specific scenario we definitely needed this optimization, and it helped bring down our p50 and p99 times by a lot once we implemented it. But for 99% of use cases, the single-threaded performance of the WebAssembly code is so good now that you don’t really notice the difference for most SDKs.

Break: [45:08]

Jingle: [46:04]

Okay, it’s time for our unpopular opinions. It does not have to be tech-related; just something you think is going to be an unpopular opinion. We will then set it up as a Twitter poll, and let our audience decide if they think it’s actually unpopular or not. So would anybody like to start first?

Yeah, I’ll go for it. My unpopular opinion is that WebAssembly runtimes will replace container-based runtimes by 2030. The advantages of WebAssembly - its tight security model, its very fast boot-up time, its scalability at the edge with much smaller footprints, and its portability across environments - will really drive a shift away from container-based runtimes for things like Kubernetes and edge workloads by 2030. There’s a ton of energy around making this happen within the WebAssembly community.

What do you think is the largest barrier to getting there now?

That’s a good question… Yeah, I would say likely language support, profiling and tooling. And as we’ve talked about today a lot, getting to a point where you can optimize and profile the WebAssembly a lot easier I think is a big thing. And the standardization… So there’s a lot of really exciting changes to WebAssembly that are coming along. I think we’ve talked about a couple of them already, around multi-threading support, and native garbage collection support.

One of the big changes coming to WebAssembly is called the component model, which is a way to standardize communication across multiple WebAssembly components, so they can talk to each other and really make your code a lot more componentized, in smaller chunks. And so that’s a big effort that the community is working on, to drive towards replacing containers - larger containers - in these Kubernetes and edge workloads.

So yeah, I think those are the big things; if the WebAssembly community can get sort of those big changes that are coming - the component model, multi-threading, garbage collection support and many other things down, then I think we’ll be on that path, and we’ll see some big companies start up around this space in the coming years.

I think it’s funny, because Jonathan – we’ve talked about this a lot, and I think my unpopular opinion would be the opposite of that… Because I don’t know – maybe on the timeframe, sure, maybe possibly, but I think the lift required is so large. Do you think that something like AssemblyScript is crucial for that, as sort of this core, sort of native entrypoint?

I think a more approachable, higher-level language is important as an entrypoint. I think that’s one of the challenges with WebAssembly right now - the best environments are lower-level ones, using things like Rust, or C++. There’s actually a good amount of momentum around running JavaScript or TypeScript in WebAssembly by bundling SpiderMonkey, which is Firefox’s JavaScript engine, into your WebAssembly runtime; they’ve been able to get that working in a couple megabytes. So you basically have the full SpiderMonkey runtime running within WebAssembly, running your JavaScript or compiled TypeScript code in that… And that’s one of the big entrypoints that a lot of these Wasm cloud or Wasm edge companies are talking about. But yeah, I would say getting a higher-level language that executes really efficiently in Wasm is probably one of the biggest barriers to that.

From the other side of things I’m wondering as well, do you see – I guess I should say, there’s a lot of pressure from the other side, I would say as well, of VMs and hypervisors becoming super-fast, like with Firecracker, and all of that… Do you see maybe a merging of those technologies, so you can get the security benefits of virtual machines, and the speed and all the other benefits of Wasm?

[50:09] Yeah. Don’t get me wrong, those VMs have gotten very good over many years, and we’ve been relying on them for a lot of our high-scale systems… But yeah, I think there’s just an order of magnitude difference between the size of containers – like, yeah, you can optimize the size of your containers to be pretty small, like tens of megabytes in size… But WebAssembly is, at its core, designed to be more portable than that - you’re talking about tens of kilobytes, instead of tens of megabytes. And the boot-up times can be measured in microseconds, instead of milliseconds, or tens of milliseconds, or even seconds for containers. So there’s just an order of magnitude change there by using WebAssembly, that I think is gonna be really hard for a lot of containerized systems to match.

You can think about a big platform running at the edge, at scale, where – say for us, for our use case, we have a lot of SDKs that hit our edge APIs. And we have certain customers, say our big mobile apps… And they may send out a push notification and get hundreds of thousands of people, or even millions of people, who all open their app at exactly the same time, when that sports score or that big news event lands on their phone; they’re opening their app at exactly the same time, and we see massive deluges of traffic - literally a hundred times our steady-state traffic - hit our edge endpoints at those points in time. And because we’re using these edge platforms, they’re able to spin up thousands of Wasm and edge runtimes in milliseconds to serve that traffic. Doing that with VMs is possible, but there’s a lot more latency in that toolchain.

So that’s why I think the power of not only the really tight security model, but the boot-up times and the small size of the Wasm modules can really power that. And for certain use cases it makes a lot of sense. I’m not gonna say it’s gonna replace every use case; it’s clearly not. But for certain high-performance, latency-sensitive use cases - like trying to deliver feature flags globally to mobile apps or web apps around the world - that is our use case, and it’s definitely very applicable to this problem.

So that would definitely mean that in that case – the way I would put it is, I feel like the current setup with Docker containers or whatever else is a little bit slower, but they work for probably 90% of use cases; maybe not – I’m just throwing that out as a random number, but they work for some big chunk of use cases. And the WebAssembly version that you’re saying would replace it - essentially, the speed benefits and all those things - there’s going to be a huge chunk of people who wouldn’t actually care as much about that, necessarily. So I’m assuming for that to happen, it would have to become just as easy to use the Wasm replacement for Docker. At least in my mind, that’s the only way I would see that working - if it became just as easy. And I don’t know, do you think it’s just as easy now?

Oh, it’s definitely not just as easy yet. I think there’s a lot of developer tooling work to go to make it easy. We’ve been using Cloudflare Workers, and there are lots of other edge runtimes that make it super-easy to deploy; they make that part pretty easy. But I think the real benefits come from the security side. A WebAssembly module is way tighter in controlling what it has access to, through the WASI interface, than a VM is, right? And so for very security-conscious companies, I could see it having a lot of value for certain mission-critical modules of their application.
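
As a sketch of what that capability model looks like from a Go host’s perspective (using wazero again; guest.wasm and the mount paths are hypothetical, not DevCycle’s setup): the guest gets exactly what the host grants through WASI, and nothing else.

```go
package main

import (
	"context"
	"os"

	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

func main() {
	ctx := context.Background()

	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)

	// Expose the WASI host functions the guest is allowed to call.
	wasi_snapshot_preview1.MustInstantiate(ctx, r)

	wasmBytes, err := os.ReadFile("guest.wasm") // hypothetical module
	if err != nil {
		panic(err)
	}

	// Grant only stdout and one read-only directory; the guest gets no
	// network, no ambient filesystem, and no environment variables.
	config := wazero.NewModuleConfig().
		WithStdout(os.Stdout).
		WithFSConfig(wazero.NewFSConfig().
			WithReadOnlyDirMount("./data", "/data"))

	// Instantiating a WASI "command" module runs its _start function.
	mod, err := r.InstantiateWithConfig(ctx, wasmBytes, config)
	if err != nil {
		panic(err)
	}
	defer mod.Close(ctx)
}
```

Anything not wired up this way simply doesn’t exist from the guest’s point of view, which is what makes the sandbox attractive for security-conscious deployments.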

And then there’s a lot of cost benefits. One of the reasons why it’s a lot cheaper to run your edge workloads in Cloudflare Workers, or Fastly, or Netlify - any of those edge runtimes - versus something like AWS Lambda, is because the boot-up and shutdown times and the sizes of the binaries they have to manage are way smaller. Those edge runtimes can start up your code in milliseconds, if not faster, whereas Lambdas and other things like that are more containerized at the edge; they take a lot longer to spin up, and they have a lot higher memory footprints, things like that. And so the cost differences there can be huge.

[54:14] We saw huge cost savings ourselves by moving these workloads to these edge runtimes at scale. Not only do we build SDKs, but we run really high-scale APIs at the edge, and there are huge cost advantages to having really small, portable, fast runtimes that we can execute all around the world.

It makes sense. Alright - Adam, Brad, or Kris, do either of you, or any of you, have an unpopular opinion you’d like to share? Sounds like no. Everybody’s scared to bring an opinion.

I mean, I can come up with an opinion…

Yeah, I could unleash a bunch, but I don’t know… [laughter] I don’t know if there’s value in going down that road.

I mean, that’s what the segment is for. Say something spicy.

I mean, we’ve had some that - actually, I’m curious; I need to go check if they put it on Twitter yet. The one we had not long ago was when Mat and I were talking with - I think it was Matthew Boyle, who wrote Domain-Driven Design with Golang… His unpopular opinion was that you should be able to bring your laptop into the movie theater, and use it while you’re watching the movie. And I’m still pretty convinced that one’s going to be one of our more unpopular opinions.

I have lots of stories of bringing my laptop into a movie theater because I was on call, and then every time I’d go into a movie, I’d get an on-call call. That happened to me three times in a row in the early days of our first company.

So you might agree with him just so you don’t have to leave the movie theater.

I definitely agree with him, yeah.

I agree that I should be allowed to, but I don’t know that anyone else should be allowed to, necessarily. [laughter]

Well, luckily, you don’t get called as much during movies, now that we’ve switched to edge workers.

It is true. It is true.

Alright, if nobody else wants to share any, I can play the outro and then we can end the episode. Is that good, everybody?

I mean, I could come up with an unpopular opinion… Let me see.

It’s up to you, Kris…

Adam, I feel like you have lots of unpopular opinions, too.

Yeah… Okay. I mean, one of them is I think Kubernetes is way overused in the tech industry. I think there are a lot of people using it that don’t need Kubernetes, and could do something a lot simpler to just get their servers deployed. We’re using Kubernetes, and in retrospect I think we shouldn’t be. So that might be an unpopular opinion… Although I’ve seen some chatter online of other people starting to realize that as well.

I feel like that’s a split. Either you’re someone that’s been burned by this, so it’s a very popular opinion, or you’re still in love with Kubernetes, so it’s a very unpopular opinion.

It’s kind of like using third-party libraries or frameworks in a language like Go. There are some people that are absolutely, adamantly against it, and then there are other people who are like “Yeah, it’s worked pretty well for me, so I’m fine with it.”

I was at KubeCon last week in Amsterdam, and yeah, I would say there’s a lot of energy around large enterprise companies that have hundreds of microservices running in Kubernetes… And it makes a lot of sense for those; there’s a lot of tooling built around it. But if you’re a smaller company - maybe a lot of your workloads are at the edge, or in SDKs, like us - and you only have a handful of services you need to run, it’s probably overkill, and it’s more likely to cause downtime than to improve your team’s productivity… And I think that’s what we’ve experienced so far.

Yeah, everything is great until the first time you have to go diving into the kube-system namespace to figure out which of the internal pods is having a problem, or what’s gone wrong with the cluster.

[57:49] I mean, I could definitely agree with the sentiment that it’s probably better to teach people that there are other approaches to deploying, and to make sure they’re aware of them, so they don’t feel like Kubernetes is where we all end up anyway. Because I’ve seen enough small projects that are just getting started where, essentially, they have Kubernetes running before they have ten users or something, and you’re like “I don’t know that that was required.”

Yeah, I think there’s not too much knowledge about the simpler ways to get your code running on a server. You don’t always need to build an entire orchestration system to get something running.

Yeah. Bring back Heroku…

I loved Heroku. Heroku was awesome. I used it at so many hackathons…

I felt like Heroku was awesome until you had to actually scale up and your bills just skyrocketed.

Oh, yeah. But again, it’s something to just get going. Like, I have some code and I just need it to be running; I just need some endpoint that people can hit. It was awesome for that, which is why I used it at hackathons; it was the best way to just get your code deployed.

I feel like when Heroku was popular, no one was saying it was cool and good, and now that it’s kind of on the way out, everyone has all this nostalgia for it. “I remember the good old Heroku days…” Nobody backed it when it was new.

Thanks for making me feel old, Brad… Good one. [laughter]

I have mixed feelings there, because if it was a hackathon project, I used it all the time. But as soon as it became a paid product - something that I needed to scale up a little bit - I immediately was like “I need to move my stuff somewhere else that’s cheaper.” Or in the case of a lot of startup incubators and things like that, you would end up getting credits for Heroku… And I don’t remember what they were, but they were insanely high amounts - $100,000 in Heroku credits or something. So in those cases you just didn’t care, because it’s gonna be really hard to burn $100,000 in the one-year time limit you have for it… You’re just like “I just don’t care right now.” But as far as an actual paid business model goes, it’d be hard for me to be like “Yeah, I really miss paying $40 a month for some app that served 10 people.”

Or 1,500 bucks a month for a database that was –

Actually worked?

…redundant.

That’s why I love usage-based billing for all these edge runtimes and things like that. Just pay for what you actually use. That’s the way to go, I think.

Yeah, wasn’t our bill like $10 one time, for an entire month of traffic?

Oh, for certain things, yeah. For certain services. Obviously, not our main.

Not for the main platform, yeah. But for tens of millions of requests, it was 10 bucks, or something like that.

Yeah. You can get a lot of free requests out of some of these services these days.

Alright, Kris, did you have something you wanted to share?

Yes, I have a slightly - I guess it’s slightly spicy… But I think that all of this AI stuff that we’re doing right now - all of this ChatGPT, and Copilot, and Midjourney - is like last year’s NFTs, and 3D printers from a while ago, or Segways from forever ago. It’s not like it’s going to go away completely, but it’s going to move into much more niche markets, and it’s not going to do the things that everybody is screaming it’s going to do. 3D printers are a great example. 3D printers are wonderful; they’re amazing. People use them all over the place. But a decade ago, people were like “Every single college kid’s gonna have one of these in their dorm room, and they’re gonna be 3D printing every single thing they need.” And it was just like “No.”

Last year we were all dealing with NFTs, and it was like “NFTs are gonna revolutionize how every single thing works”, and it’s like “No.” So I think AI is on that same hype train at the moment. I personally will be very happy when it moves on, and we can get to the place where we’re using AI in very useful places. But everybody that’s like “We’re so much closer to artificial general intelligence, and all this other stuff, because we have these large language models” - it’s just like “No.” It’s gonna die down. I would say six months, but that feels a little aggressive.

[01:01:59.04] Are you implying that VC Twitter doesn’t know what they’re talking about?

Yes. [laughter] I said it was spicy!

It’s pretty spicy, yeah. I would agree with you though that at a higher level the term AI is a bit misleading. I don’t think these LLMs are fully, artificially intelligent yet, but they’re extremely valuable. The AI hype cycle now and the NFT grift that happened a year ago are very different, though. There’s actual real-world value that you can get out of ChatGPT. Internally - I think Adam could probably talk for another hour about how we’ve been using ChatGPT to help us accelerate a lot of stuff. But as a developer, for me personally - I’ve never been the best Bash programmer, or SQL programmer, but now I can type into ChatGPT “Hey, write me a SQL query that does this. Here’s the table schema; figure it out”, and it just goes and does it, and it’s accurate most of the time. And if it gets it wrong, you just paste the error message into ChatGPT, and it rewrites the SQL query based off the error and fixes itself. I’ve found it has truly accelerated the pace of my development over the last couple of months.

Yeah… I think the NFT point’s a good one too, because the idea of having this kind of distributed ledger that you can use to mark the progress of items over time, and track the chain of custody of things, is super-interesting, but also super-niche, and for a lot of us in software engineering it’s not a huge thing we care about right now. But I think there are markets, like wine, or art collection, or things like that, where it’d be super-interesting. With ChatGPT, one of the applicable places is things like software engineering, where when it’s wrong, we are well aware that it’s wrong.

So I think there’s a lot of things right now where even the words that people are using kind of irk me a bit, which is where this whole unpopular opinion comes from. People are like “Oh, the large language models are hallucinating.” And I’m like “It’s not hallucinating, it’s just producing statistically likely results that are wrong.” The prediction engine got it wrong. Just like when autocorrect gets it wrong; it got it wrong. It didn’t do anything different than it was doing before, but people are just interpreting it differently. So in places where we know what the correct answer is, or at least know what the correct answer should look like, and we can tell “Okay, that’s not right”, I think it’s fine to use. But in areas where we don’t know that, it can be very dangerous.

I think it’s similar to self-driving. All the hype around self-driving cars… We haven’t gotten that close to a full self-driving car yet - I live in Toronto; one that can drive in the winter, when there are snowbanks and stuff like that. We’re nowhere near that. But we have gotten to the point where every car you buy now - even a cheap Toyota - has really good lane keeping and lane centering that works amazingly on the highway, when it’s nice out. And that’s kind of how I think about it. We may not get to pure self-driving cars for another 20-50 years, but there are incremental improvements that we’re seeing today, and I think with AI we’re seeing similar improvements.

Well, see, you’ve just agreed with me. [laughter]

I kind of did, yeah.

I think the problem is it depends on how you view it… If you’re expecting AI to become perfectly, 100% accurate all the time, and to do all the stuff perfectly - I think anybody who’s thinking that is going to be disappointed, because that’s going to be really, really hard to do. Same with self-driving cars. But I think there is a use case for something that gets 90% of it right, and there are certain cases where it’s very good. Self-driving cars are an example - if I had a car that literally just could drive the highway, nothing fancy in the city, nothing else, just long highway stretches, that would be miles and miles of improvement compared to where I’m currently at.

[01:06:02.29] We’re pretty close to that.

Yeah. So it doesn’t have to be perfect. It can skip all these other edge cases, as long as it does a couple of core ones exceptionally well. And I think we can work towards those, but the problem is people get that idea in their head of “We’re building towards the perfect drive-everywhere self-driving technology.” Or if it’s AI, they want AI that’s 100% accurate and doesn’t do anything wrong, ever… And I think people who are expecting that are going to be disappointed.

But I will say, the one thing I’ve liked about ChatGPT and everything is I feel like it’s helped a lot of non-technical people see how much more they could use computers to solve menial tasks in their lives. As developers, we all knew this - like “Oh, yeah, that’s easy to do.” And there are even things I see people doing with ChatGPT where I’m like “You could have done that before ChatGPT pretty easily.” But they just didn’t know how to do it, so now they’re able to leverage that, which is nice. I’m even thinking of Clippy inside of Microsoft Docs… If that was released now, I feel like it would never go away. Versus when it did release…

It basically is being released now… [laughs] Microsoft is essentially trying to reintegrate something like Clippy into the Office suite - they’re adding a chat interface.

I mean, I’d love it if it was actually Clippy there.

I’m sure somebody will make an add-on that’ll just change the icon… But yeah, I think it’s kind of the last-10% problem that people always talk about. And the difference between self-driving cars and AI is that with self-driving cars, you had the example earlier of the perfect lane keeping on the highway. That’s really useful, and it’s useful today. But the problem is that for driving fully on city streets, you’re still going to need to pay attention. And so anything past the perfect lane following on the highway isn’t that useful until it’s perfect… Because even if it can mostly drive in the city, that doesn’t really help anybody - you still need to have your hands on the wheel, you still need to be paying attention. So it doesn’t relieve or help the driver in any way. It’s just kind of a neat toy.

Whereas with AI, even 90% of the way there, you can already do a lot of things with it that are really, really useful. And I think at this point it’s more just a UX problem. I love just scrolling through Twitter and seeing all of the really interesting and novel ideas that people come up with for how to use this stuff. For most people, it wouldn’t even cross their mind to try that. But that’s not necessarily a problem, because in five years most of our software will probably use something like this under the hood to add a lot of the functionality that people are using ChatGPT for now; it’ll just have a way better interface. You won’t have to explain to it what it’s supposed to do; it’ll just be able to do it. So I think that part of it is really exciting - seeing how it can be weaved into existing software in a really seamless way.

[01:08:49.04] Yeah. I’ve had this argument for a while that these large language models are sort of like fancy compilers, and that if you don’t know what they’re doing, they look a lot like magic… And they are - I mean, compilers also are, to some degree, magic. The things you put in, and then the stuff you get out - you’re like “I don’t know how that happened.” But when you think about the simple pieces that are used to make it, it’s like “Oh, I could comprehend this. This isn’t some sort of thing that’s beyond comprehension.” But when you start to scale it up, it can get very confusing, because humans are bad at big numbers. It’s like that classic thing of “The difference between a billion dollars and a million dollars is about a billion dollars.” When you say that, people are like “That doesn’t sound right.” But if you said “The difference between $1,000 and one dollar is about $1,000”, people would be like “Well, yeah, of course.”

So I think when you get to those large numbers of things - whether it’s a large number of compilation instructions in a compiler, or a large number of weights in a large language model - it starts to get interesting in what it can do, and how you can apply it. But once again, I feel like those are still movements into the niche areas, the “Oh, these are cool, interesting things we can do with this technology” kind of areas.

Maybe an even spicier take would be that for both self-driving and AGI, we will never actually achieve those things, but it’s still good to keep going, because all of the byproducts are very useful for human life, and for augmenting human life.

I think a lot of people say we can’t achieve self-driving without AGI… So if we can’t achieve AGI, we can’t achieve self-driving either. [laughter]

I mean, there’s a whole bunch of reasons why we might never achieve self-driving – I don’t know, maybe we won’t have cars in the future. Maybe we’ll get really, really good at public transportation, and never need to drive.

Oh, don’t get me started about public transportation… I could go on another hour-long rant about that. [laughter]

Toronto just started building its third subway line downtown, after like 50 years… [laughs]

I live in rural America, so the idea of public transportation is non-existent here. I think the town I live in literally has two taxis, and they’re not always running. You have to call and schedule stuff with them; it’s not like you flag down a taxi. Lyft and Uber don’t exist where I am.

You know, there are glimpses of possibility… I watched this really cool video about Switzerland’s train system, where it’s just like - oh yeah, there’s this random ski lift and a coffee shop, at what I think was a hiking trail, and it’s like “Oh, we get train service every 30 minutes, all day long.” And I’m like “Oh, it’s possible to do rural trains that function? Okay…” But for Americans especially, I think it’s very hard for people that live in rural or suburban places to imagine high-quality public transit.

I think sometimes it’s almost like they need to experience the benefits to really see them. There was - I forget which cities… I think it was somewhere in Tennessee and some other city - they were talking about putting in a train that would get you from one to the other in a couple of hours… And I heard people on a podcast talking about it, like “What’s the point of that?” And I’m like “Yeah, it doesn’t seem very useful until all of a sudden you can just zip over to another city, and you find yourself doing it way more often than you would have otherwise, because it’s so easily available.” Whereas if you have to hop in a car and drive eight hours, or you have to fly, with everything that’s involved in flying, you’re not going to do those things all the time. But a train is super-easy most of the time.

Yeah, it blew my mind when I was visiting London and I realized that I could take a train to Paris in two hours. I was like “Wait, on this trip I was going to visit London, but I can also just go to Paris - just take a train to get there.” That’s crazy.

Alright, I think that’s it. I’m gonna play us out, and then I will stop the recording. Thank you guys for joining. Jonathan, Adam, Brad, thank you guys for joining. And Kris, thank you for helping me host.

Of course.

Thanks for having us.

Thanks for having us.

