JS Party – Episode #261

Qwik has just the right amount of magic

with Miško Hevery



A deep dive into Qwik, how it makes your apps fast by default, and the carefully calibrated amount of “magic” that makes it uniquely powerful.



Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Changelog++ – You love our content and you want to take it to the next level by showing your support. We’ll take you closer to the metal with extended episodes, make the ads disappear, and increment your audio quality with higher bitrate mp3s. Let’s do this!

Notes & Links



 1. 00:00 It's party time, y'all (00:55)
 2. 00:55 Welcoming Misko back to the pod (00:52)
 3. 01:47 A quick recap on Qwik (02:22)
 4. 04:09 App framework vs content framework (02:36)
 5. 06:45 The problem with React (03:16)
 6. 10:01 Qwik Listeners (03:14)
 7. 13:15 What makes Qwik City unique (04:10)
 8. 17:40 Sponsor: Changelog++ (00:54)
 9. 18:35 Qwik is black magic (07:06)
10. 25:40 $ function magic (01:01)
11. 26:41 Optimizer magic (01:52)
12. 28:33 Loader magic (01:20)
13. 29:53 Qwik City vs Remix (02:45)
14. 32:38 Not too much magic tho (01:44)
15. 34:22 Serializing framework state vs app state (05:42)
16. 40:04 Implications of serialization (02:40)
17. 42:44 Challenges when adopting Qwik (03:51)
18. 46:35 Qwik's solution to memory leaks (01:28)
19. 48:03 How to get started (00:54)
20. 48:57 The real value of Qwik (02:43)
21. 51:41 Wrapping Up (00:41)
22. 52:31 Outro (01:00)





Hello, JS Party people. Welcome back to JS Party, your celebration of JavaScript and the web. I’m Kball, I’m your host today. I am joined by a very special guest, Miško Hevery. Miško, welcome to the show.

Thanks for having me again.

Yes, I should say welcome back. So we spoke about your exciting new project that we’re going to talk a lot about today; was it six months ago, or something like that? And we talked about Qwik… And that was really introducing Qwik to JS Party and the JS Party audience. And at that time we had so many things we wanted to dive into that we said, “Okay, we’ve got to do another episode, we’ve got to dig back in.” So I’m excited to do that.

I don’t want to do another intro to Qwik episode. So if folks listening missed that episode and want to find out what Qwik is at the high level, go back and listen to JS Party 237. There’ll be a link. But I guess before we jump into the real nitty-gritty, it might be good to do a high-level review of what Qwik is. So do you want to give us just the bullet-point level - what Qwik is, how it fits into the frontend ecosystem, and what makes it different?

Yeah. So I think the best way to think about it is Qwik is like React, or any other web framework, not just React. And then React has this thing called Next.js, and so Qwik has this thing called Qwik City. So Qwik City is the metaframework, Qwik is the actual framework for rendering the UI. And together, they solve the same exact problem as the existing metaframeworks, whether it’s Next.js, or Remix, or SvelteKit, or Nuxt, and so on. So that’s the category it falls into.

So now you might ask yourself “Well, there are so many choices out there… Why would I want to look at Qwik?” So Qwik is kind of unique in that it is very SSR-first, meaning we think about server-side rendering and delivering to the browser just pure HTML, and then downloading just the necessary JavaScript to perform the operation you want. And I cannot stress enough just how surgical we are about delivering just the necessary JavaScript. Yes, there are other systems that can delay the download of JavaScript or delay the hydration or something like that, but they all do it in kind of big chunks, and in real-world applications there’s a very limited amount of delay that can actually happen. Qwik is extremely surgical, where if you say “Push this button to add an item to a shopping cart”, you will only download the handler associated with the button, and then only download the component associated with the shopping cart, and then refresh the component without downloading anything else that’s on the page. So it’s extremely surgical in that sense.
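To make the “surgical download” idea concrete, here is a minimal, self-contained sketch of the mechanism Miško describes. The attribute format and names are illustrative, not Qwik’s exact serialization: the server-rendered HTML carries a reference like `./chunk-cart.js#addToCart`, and a tiny dispatcher loads only that chunk when the event fires. Real files and `import()` are simulated with an in-memory registry here.

```typescript
type Handler = () => string;

// Simulated code chunks, keyed by URL. In real Qwik these would be
// separate JavaScript files produced by the optimizer.
const chunkRegistry: Record<string, Record<string, Handler>> = {
  "./chunk-cart.js": { addToCart: () => "added to cart" },
  "./chunk-menu.js": { toggleMenu: () => "menu toggled" },
};

// Record which chunks were loaded, so we can observe that untouched UI
// (the menu, product details, reviews) costs zero JavaScript.
const loadedChunks: string[] = [];

function loadChunk(url: string): Record<string, Handler> {
  loadedChunks.push(url); // stand-in for `await import(url)`
  return chunkRegistry[url];
}

// The dispatcher: parse "chunkUrl#symbol", load that one chunk, run it.
function dispatch(listenerAttr: string): string {
  const [chunkUrl, symbol] = listenerAttr.split("#");
  return loadChunk(chunkUrl)[symbol]();
}

// Simulated click on "Add to cart": only the cart chunk is loaded.
const result = dispatch("./chunk-cart.js#addToCart");
console.log(result, loadedChunks);
```

The point of the sketch is the last line: after the click, `loadedChunks` contains only the cart chunk; the menu chunk was never touched.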

And the reason we do all of this is because we want to have an amazing experience for the end user. So end users on a mobile device, on a slow network or something like that - they come to the website and want to interact with it. And so if we force all of the JavaScript to download ahead of time, then it can take many, many seconds before the application is ready for the user input, and the nice thing about Qwik is the application is ready immediately. So it produces instant apps. That’s what the differentiator is for Qwik.

This is a trend that I think is picking up a lot in the last year or so. We just were speaking with Fred K. Schott from Astro, and they have sort of a similar approach of HTML-first, do everything on the server, though part of how they do it is they focus very deeply on content first, and they make the assumption that most of what you’re shipping is going to be static, and then you can ship these kind of islands of interactivity. From what I understand, Qwik still feels like an application framework, rather than a content framework. Is that fair?

Yeah, that’s very fair. So if you think about Astro - first of all, I love Fred, I love Astro and what they’re doing, and I think they’re totally heading in the right direction… But I kind of want to paint the picture in terms of the differences. So in Astro you really have two different things. You have the content, and then you have the behavior. And they’re written in different languages, they’re written in different locations, and mentally, you kind of have to keep track of “What am I doing? Am I doing content, or am I doing the behavior?” So you have to switch back and forth. And I think this mental switching is not the thing that we want as developers. As developers, we just want to build an app and not think about it.

So the big difference with Qwik is that in Qwik you don’t have to do the mental gymnastics of “So does this run on the server? Does it run on the client? Does it pre-render? Where do I put this stuff?” And then “Oh, I’m inside of the interactivity, so I’m now in the React world”, or whatever the framework you chose. Or “Now I’m in the content side, so I’m in the MDX world.” And so you have to do all these mental gymnastics, and there’s a cost to it.

And also, if you talk to Fred, he’ll tell you, Astro isn’t the solution for everything. There are certain sites that are really good for Astro, and certain use cases that are not; whereas Qwik wants to be a more general-purpose thing and say, “Look, if you can build it using any existing technology, whether it’s React, Angular, etc., then you can also build it in Qwik.” So the use case is the same, but you don’t have to think about what is static, what is dynamic, what has to be lazy-loaded, etc. Qwik will figure all this stuff out for you, and you just focus on the application.

[00:06:15.18] Out of the box, Qwik will break up your application into pieces, lazy-load the pieces, install a service worker which will prefetch all the stuff (so even if the internet connection is dropped, you will have correct behavior), figure out how to do server-side rendering, how to serialize the data, how to send it to the client, and the back and forth. And so all of those things that you need in order to get the application running in the ideal world are just available to you out of the box, without any effort on the developer side. That’s the value that we’re providing here.

Right. So you’re kind of creating that same, unified, all-in-one DX, without content-switching, that has made React so popular…


…but instead of shipping 40 kilobytes of React over and having to boot up the entire runtime ahead of time, you’re doing all this sort of magic to make it feel like it’s just little snippets of progressive enhancement.

And what I kind of want to point out here is that the problem isn’t that React is 40 kilobytes. That’s not the issue. 40 kilobytes is small enough that it’s really not a problem. The problem is that if you build any significantly large application in React, the application itself will be hundreds of kilobytes. So the issue isn’t that React is 40 kilobytes, the issue is that the applications built in React are oftentimes hundreds of kilobytes. And the way React and other frameworks are structured, they have something called hydration, and hydration requires that all of the components be present when hydration runs. And that’s where the problem is. The moment you navigate to a page, the more complicated your page becomes, the more stuff you see on the page for the user, the more JavaScript has to be present, and the more work it has to execute. Whereas if you’re in the Qwik world, you can make the page as big as you want, and only the necessary bits are downloaded.

So the first thing is Qwik doesn’t have hydration, and that in itself removes huge swaths of JavaScript that never has to be downloaded on the client. And then Qwik has this really good lazy-loading story, so that when you – imagine an Amazon website, an Amazon page. If you go to the Amazon page, no JavaScript gets downloaded, and if you click a button that says “Add to the shopping cart”, then we only download the handler for that shopping cart, and we only download the shopping cart itself, because it has to re-render. And nothing else on the UI has to get downloaded. So we don’t download the menus, we don’t download the product details, we don’t download the comments section, or the reviews section, unless the user starts interacting with it. That’s the value-add here: you can navigate to a page, no JavaScript; once the user interacts, we get the correct thing.

And I really want to stress that – the number one question we get is “Well, isn’t that slow, if you download when the user interacts?” And the answer is no, because there’s a service worker that starts prefetching all of the code available, to make sure that when you click, you don’t have to wait. So even if the connection drops, or you’re in a tunnel and you don’t have data, you can start interacting with the page just fine. And the service worker has a full view of the application, and so the service worker knows that “Oh, all these components that you have here - they have no interactivity, so don’t even bother prefetching them, because there’s no code path that the user could possibly take that will update this thing.” Whereas “Oh, this menu - it has interactivity, so go prefetch it, but don’t do it as eagerly as, say, the shopping cart, because we know statistically that’s a more likely scenario. So make sure you get that code first, and then get the menu code afterwards.”

And so there’s all these tricks that can be done, where first of all, we don’t download huge amounts of code that is not necessary, and then when we do download the code, we make sure we do it in the correct order. And all of that just kind of happens without you having to do anything as a developer.

Something you mentioned there that I’m really curious about - you said, “Okay, we can make predictions about which of this code is likely to be needed first, and optimize based on that.” Is that done statically? Is there sort of feedback from usage data? How does that work?

[00:10:16.00] Yeah, no, it’s done dynamically, actually. So when you interact with a page – you know, at the end of the day, what does interaction mean? It means that there is, like, a click listener, or a mouse-over listener, or a hover, or whatever. So there are listeners inside of the HTML. And so we know what these listeners point to, and we can also query the DOM and say, “What are all the listeners?” and we can see all the possible things. So given a particular state of the application, it’s easy for the system to look at the HTML and say, “What are all the possible things that the user can do?” And so that gives you a list of items that you can go and fetch, a list of chunks that you can start prefetching.
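The prefetch computation Miško describes can be sketched in a few lines. The element shape and the listener string format below are illustrative stand-ins for the real DOM query and Qwik’s actual serialization; the idea is just that the serialized listeners in the current HTML already enumerate every chunk the user could possibly trigger.

```typescript
// A rendered element carries its serialized listeners, e.g.
// "click=./chunk-cart.js#addToCart" (illustrative format).
interface RenderedElement {
  listeners: string[];
}

// Walk the rendered page and collect the set of chunks that some
// listener points at; everything else never needs to be prefetched.
function collectPrefetchableChunks(elements: RenderedElement[]): string[] {
  const chunks = new Set<string>();
  for (const el of elements) {
    for (const listener of el.listeners) {
      const url = listener.split("=")[1].split("#")[0];
      chunks.add(url);
    }
  }
  return [...chunks];
}

// A page with a cart button, an interactive menu, and a static heading
// (no listeners, so no code path can ever need it).
const page: RenderedElement[] = [
  { listeners: ["click=./chunk-cart.js#addToCart"] },
  { listeners: ["click=./chunk-menu.js#toggleMenu", "mouseover=./chunk-menu.js#preview"] },
  { listeners: [] },
];
const toPrefetch = collectPrefetchableChunks(page);
console.log(toPrefetch);
```

Note the static heading contributes nothing to `toPrefetch` - which is exactly the “don’t even bother prefetching them” case from the quote above.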

And then when the user actually clicks on one of these things, then we fire an event saying “Oh, user clicked on chunk 12345.” And so now we know this, and it’s relatively easy to then ship that information to the backend, and the backend can collect the statistics, and then you can basically know “Oh, most of the users that you have are interacting with the Add to the Shopping Cart button, or the View Details button. Very few people interact with the menu, and nobody ever interacts with the Logout button.” So given that information, we can then feed that information to the bundler, so that the bundler says, “Ah, if you click the shopping cart, you are also very likely to go update this other thing, so make sure you put it in the same bundle together.” And we also know that almost nobody clicks on the Logout button, so do put it in a bundle, like in a separate one, but then tell the service worker to load it at the end.

So you use that information in two ways. One is you use it to kind of figure out what are the ideal chunk sizes and what the little chunks should contain, but you also use it to kind of prioritize in which order these chunks should be loaded. And I need to clarify that as of right now, this isn’t available to you out of the box, so you have to do a little bit of work… But we are planning to have such a feature as well.
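As a rough illustration of the second use - prioritizing load order from usage statistics - here is a tiny sketch. The counts and symbol names are entirely made up; the point is only that observed interaction data gives the service worker a better-than-guessed fetch queue.

```typescript
// Hypothetical click counts collected from real users, per handler symbol.
const clickCounts: Record<string, number> = {
  addToCart: 9000,
  viewDetails: 4200,
  toggleMenu: 120,
  logout: 3,
};

// Fetch the most-clicked symbols first; "logout" lands at the very end
// of the prefetch queue, exactly as described above.
function prefetchOrder(counts: Record<string, number>): string[] {
  return Object.keys(counts).sort((a, b) => counts[b] - counts[a]);
}

const order = prefetchOrder(clickCounts);
console.log(order);
```

The same counts could also feed the bundler’s grouping decision (put frequently co-clicked symbols in one chunk), which is the other half of what Miško describes.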

That is super-interesting. And I can imagine that analytics - that has use for a variety of purposes, right? That has business case uses as well.

So do you expose it via an API that folks can use?

Yeah, as of right now we just fire a custom event that you have to kind of grab. The hard part isn’t really doing all these things. So we have a bundling system that can do these bundles, we have - you know, how to prefetch in the correct order… All this stuff is up and running. What isn’t ready is that when you collect this stuff, you have to send it to the server somewhere, and that server has to have a provisioned database to kind of keep track of it, or something like that. So that is still onto you as a developer to kind of integrate this into your website. But once you collect this information, you can feed it to the bundler, and the bundler then knows how to bundle it together for it.

That’s super-interesting. Well, and it gives you the potential of very quick, easy-to-build first-party analytics as well, so that you don’t have to worry about “Oh, am I integrating some [unintelligible 00:13:02.17] I know you have PartyTown to speed up third party analytics scripts and things like that, but you don’t even necessarily have to worry about that. You can bundle in your first-party analytics.

Correct. Correct.

That’s super-cool. I want to dive deep on the concept of resumability, which is something we talked about a lot in our first episode. But before I do, you mentioned a little bit about Qwik City, and we didn’t cover it that much when we talked before. I want to understand, is Qwik City a pretty much straight, standard equivalent to other metaframeworks like Next or SvelteKit, or are there unique things about Qwik City, similar to how there are unique things about Qwik?

[00:13:40.25] Yeah, there are a couple of unique things in there. So first of all, what you get out of the box with Qwik City is you get a router. And that’s kind of the standard thing that you can imagine; it’s a file-based router, or a directory-based router. So that’s pretty straightforward. But the other thing you need to get is you need a way of loading the data, and then doing behavior, or actions when the user interacts, and you want to update some data in the backend. So if you think about it, you need to have a way of transferring data from the server to the client, and then you need a way of transferring data from the client to the server. So we call those loaders and actions. Other meta frameworks kind of have it, too. Remix sort of has it. But I think we were able to go a step further than everybody else.

And the reason for that is you cannot refer to a – like, let’s say you have a server action or a server loader; you can’t refer to it directly, because if you refer to it directly, then the bundler will include it. And that’s a problem, because you can’t include server-side code in the client, right? Even if we could somehow download it, the issue is going to be like “Well, the server-side code has NPM dependencies, and import dependencies”, and it becomes basically this huge amount of code. So first of all, you don’t want to ship all that stuff. But even if you could ship it, you don’t want to accidentally execute it, because it will just blow up on a client… So the way most systems get around this is they basically say, “This is where you put the code for the server, and this is where you put the code for the client. And oh, by the way, we’ve got to make sure that if type information wants to be passed from the server to the client, it has to be passed in a type-only way. You can’t refer to the function.” The type-only thing – types get erased by TypeScript, and so they kind of disappear. But if you refer to a symbol, that doesn’t get erased, and that confuses the bundler.

So most systems have this, but they have this separation of like “This is client code, and this is server code, and the two shall never meet.” And as a result, you can’t just refer to functions directly. But Qwik has this amazing ability to take code and break it up into pieces. And so for us, you can actually refer directly to the server function, and then when the bundler gets done with it, the bundler is smart enough to be like “Yeah, yeah, but this is a server-only code, and I know not to bundle it in there”, and it can kind of exclude it.

And the nice thing about that is that the DX is way nicer. You just have a single file; in this single file, you say, “This is a component. This is a loader.” Inside of the loader, I directly talk to my MongoDB or whatever Node.js-only import I do in there. In the component I do my stuff, and then when our bundler, which is the optimizer, runs through it, the optimizer is like “Oh, I see a $ sign. I’m going to lazy-load this thing.” And then it sees “Oh, right. But that’s referred to from the component, so I’m just going to exclude it separately.” Because the optimizer has to create separate bundles for the client and separate bundles for the server. And so because of that, it knows “Oh, this code can only be on the server, and this code can only be on the client.” And so the right stuff just happens automagically. And that results in a much nicer developer experience, and it’s not a developer experience that can be easily copied, because other systems don’t have the ability to break the codebase up like that. That’s a Qwik specialty. And basically, wherever you see a $ sign, you know that there’s some breaking up happening underneath the hood, and that allows us not just to do lazy-loading, but to also do what I’ve just described, basically, where you can have direct relationships between the client and the server, but then the right stuff happens at runtime.

I love that you’re talking about this sort of automagic, and how you’re breaking things apart, and how that does it… And when we spoke before, you mentioned that this process of how Qwik automatically breaks up the application is the blackest magic of how Qwik works. So can you peel back the hood for us, and take us through that black magic? I mean, I personally believe software is magic. That’s what we – our job, we’re magicians, that’s what we do. So take us through the spells you cast here to make Qwik Qwik.

Before I do that, I just want to point out that a lot of people consider magic bad… And the way I look at it is magic has a cost, in terms of understanding. And the thing you want to make sure is that the benefit of the magic way outweighs the cost it imposes. And I think a lot of people have been burned in the past; the magic is so complicated that you’re like “I have no idea what’s going on in here. This is not worth it. I hate magic.” And I just really want to point out that we are well aware of this problem, and we think that our magic is very easy to explain, and as a result, it’s easy to understand, and then people don’t get surprised. So we think the benefits we get out of it way outweigh the cost of the mental model that’s required to understand what’s going on, and not be surprised. I just want to put that out there, because I think a lot of people, when they hear the word “magic”, kind of freak out and say “This is bad.” So I think it depends on the situation.

Alright, so let’s jump into what kind of magic we have. So we have a compile step, and this compile step is called the optimizer. And for this optimizer, I feel very strongly that the amount of magic you put into the optimizer should be the absolute bare minimum necessary to get the job done, and nothing else. If you look at systems that are purely runtime, they’re much easier to understand, because there isn’t a lot of magic going on. And as systems become more and more compiler-dependent, they become complicated in terms of understanding, and this is where people might jump the gun and be like “Oh, this is too much.”

So what exactly does the optimizer do? Okay, the problem we need to solve is that writing lazy code, or rather lazy-loading code is complicated. Well, let me back up a second… We have two problems, and that is that if we want to minimize the amount of JavaScript we ship to the client, we need to have a way of somehow breaking up the codebase. And if you think about it, regular systems don’t have an easy way to break these things up, because what they do is they say, “Well, here’s a root component, this is the entrypoint to my application.” So this is your root component. And once you have the root component, the component has references to child components, and those have references to child components, and so on and so forth.

So when you grab the root component of the application, you pretty much have grabbed the whole application. And so most systems have some kind of lazy-loading, either in the form of a router, or an explicit lazy function inside of React that creates a suspense… But there’s a lot of ceremony associated with it; it’s not just something you can just do. And there’s ceremony both in terms of the developer, all the stuff they have to do, and also in terms of runtime, because the way this works in most systems is that you execute until you hit the suspense boundary, and then you kind of give up, and then you wait until the suspense resolves, and then you re-execute from the beginning, hoping that you’re gonna get further, and then you find another suspense boundary, and you kind of give up, and then you wait until it resolves, and then you repeat the process.

[00:22:15.09] So it’s very expensive, both in terms of what the developer has to do, because the developer has to wrap the component inside of a dynamic import, take the component, put it in a separate file, put a reference to it, wrap the whole thing inside of a closure, put it inside of a lazy, and then the lazy gets fed into the suspense… Like, a lot of ceremony that has to happen in order to get this thing going.

And so most systems, most bundlers, have a really hard time breaking your application into chunks. Typically, if you don’t put any dynamic imports in your source code, then the answer is you’ll get exactly one bundle. And for every dynamic import you put in your source code, you get a small chunk that is cleaved off from the system. So the thing that Qwik needs to solve is we need to have this be automatic. And not just for components, but also for things like listeners, callbacks, useClientEffect$, tasks, and so on. So basically, we want to take your application and have an easy way, without any sort of ceremony on the developer side, to break everything up. Because the ceremony goes against the DX. So we want to have a nice DX, and so we just want you to write your code as you would normally write it, and then we do the breaking up. The thing is, we don’t know where to break it up, so we need some kind of a marker. And so in our case, the marker is a function call that ends in a dollar sign. So anywhere there is a function name that ends with a dollar sign, that’s a message to both the developer and to the optimizer that magic happens here.

And the magic that happens is pretty straightforward. It is take that first argument of the function, which usually is a closure, move it into a separate file, and leave behind a dynamic import. That’s all that it does. And it’s both – it’s a hint to the optimizer, to the compiler, but it’s also a hint to the developer, saying “Look, certain assumptions you might have about what’s in here cannot necessarily apply.”

So for example, by moving this function to a separate file, you can’t be closing over variables that are not importable. Because when you move it over, you can’t see those variables. So there are certain constraints that you have to follow, and you need to learn as a developer what this magical dollar sign means. But at the same time, it’s relatively easy to explain, because there isn’t anything complicated going on; we’re literally just taking that closure and moving it to a separate file. We’re giving it a name, we’re generating the file name - you don’t have to think about any of that stuff; the file name is autogenerated, the symbol is autogenerated… You just have to make sure that you don’t close over certain variables that are not going to be visible from the other file, and for that we have linters to help you along… So it’s pretty straightforward.

But what you get out of this piece of magic is lots and lots of entrypoints. And that’s the secret. Once you have lots and lots of entrypoints, then your bundler can do magic; the bundler can decide to put these entrypoints together, or separate, or whatever the bundler decides is a good idea. You can feed runtime information into it, and the bundler has more information… But unless you start with a world where you have lots and lots of entrypoints, bundlers can’t do anything. And that’s the challenge that existing frameworks have: they don’t have an easy way of breaking the codebase up.

Right. So this is reminding me of how everybody was excited about tree shaking, and then it turns out the majority of the tree is always included, and you can shake a few things.

So what you’re doing is you’re basically inserting these like cleavage lines, where suddenly the bundler has so much more power, because it has many more choices available to it.

[00:25:58.20] Correct. So it’s all about making those choices. And it’s all about making those choices in the way that isn’t expensive for the developer. Like, you don’t want to put that cost, that burden on the developer. And the simplest thing we could come up with is basically a function call that ends in a dollar sign. That’s the magic. That’s the thing that says “Lazy-load this thing.” Now, just because it’s lazy-loaded doesn’t mean it actually will cause lazy-loading in the runtime; it just means it’s a potential place where lazy-loading can happen. And that has a lot of implications, mainly because it means that every time you see a dollar sign, you understand that that closure that follows is going to be invoked asynchronously.

Right. I was gonna say, this pushes you to an asynchronous-first model, and the beauty of that is if your code is asynchronous, it can actually run synchronously or asynchronously. It doesn’t care, it can just go.
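That asynchronous-first point can be shown in a few lines: if the runtime always awaits a handler, the same call site works whether the code is already in memory (synchronous) or still has to come over the network (asynchronous). This is a sketch of the general idea, not Qwik’s actual invocation path.

```typescript
type LazyHandler = (() => number) | (() => Promise<number>);

// The runtime never assumes the handler is loaded; it always awaits.
// Awaiting a plain (non-Promise) value is a no-op, so sync code pays
// essentially nothing for going through the async path.
async function invoke(handler: LazyHandler): Promise<number> {
  return await handler();
}

async function main(): Promise<void> {
  const syncResult = await invoke(() => 7); // code already in memory
  const asyncResult = await invoke(async () => 8); // code "fetched" first
  console.log(syncResult, asyncResult);
}

main();
```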

That’s right, you got it. So that’s the magic that we saw with the optimizer. And also, what I want to point out is that there’s a collaboration going on between the optimizer and the runtime. The thing is, you can’t just leave behind a dynamic import; that breaks the semantics of what the code originally said. So it’s not like existing frameworks can easily add this feature in, because it breaks the semantics. And so in the Qwik world, the optimizer breaks the semantics in a way which the runtime knows how to deal with. So there’s an agreement going on over there. Like, “I know what I’m doing is not 100% legal here. But as a runtime, you will understand this.” And so we have this agreement going on, and therefore we can do things that others cannot.
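One way to picture that optimizer/runtime agreement: the optimizer replaces a closure with a serializable reference (Qwik calls these QRLs) naming a chunk and a symbol, and the runtime knows a QRL has to be resolved before it can be invoked. The sketch below simulates the generated files with an in-memory registry; real Qwik resolves a QRL via a dynamic import, and the names here are illustrative.

```typescript
// A QRL: a serializable pointer to "which chunk, which symbol".
interface Qrl {
  chunk: string;
  symbol: string;
}

// Stand-in for the separate files the optimizer would emit.
const extractedClosures: Record<string, () => number> = {};

// What the optimizer does, conceptually: park the closure under a
// generated symbol in its own "file" and leave a QRL behind.
function extract(symbol: string, closure: () => number): Qrl {
  extractedClosures[symbol] = closure;
  return { chunk: "./q-generated.js", symbol };
}

// The runtime's half of the agreement: it never assumes the closure is
// present; it resolves the QRL first (a dynamic import in real Qwik).
function resolveQrl(qrl: Qrl): () => number {
  return extractedClosures[qrl.symbol];
}

const qrl = extract("s_onClick", () => 42);
const handler = resolveQrl(qrl);
const answer = handler();
console.log(qrl, answer);
```

The “not 100% legal” part is exactly the `extract` step: a plain transformation tool couldn’t do it, because it changes a direct call into an indirect, resolvable reference; only a runtime that expects QRLs can make that safe.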

Right. It’s similar to what Svelte does, in the sense of you’re sort of extending the language a little bit, changing semantics a little bit to support DX, but because you control both sides of the process, it’s fine.

That’s right. That’s the magical piece. We control both sides of the process. And if you look at other frameworks, you realize they don’t care about bundling; it’s not their problem, it’s somebody else’s problem. But the implication is that if it’s somebody else’s problem, that somebody else can only do transformations that are semantically equivalent. And that’s the problem, because lazy-loading is not a semantically equivalent transformation; it changes a synchronous thing into an asynchronous thing, and that’s not allowed. Whereas in the Qwik world, the runtime understands that there’s this asynchronicity being introduced over here, and therefore it can deal with it.

So you’re doing this in this example for how you’re loading code and components…


Do you expose it in a way that people could use it, for example, for data loading, and things like that?

Yeah, absolutely.

Because that’s another area where oftentimes people are thinking about things in a synchronous way because it’s easier, but it kills your performance. Like, thinking about data loading as an asynchronous problem is so much more powerful.

Yeah, so this is where loaders and actions come in. This is kind of the extra magic that Qwik City does, that allows you to expose data - whether or not a user is logged in, the session characteristics, the list of contacts, or whatever you want - and then the runtime can consume it. So in many ways, a loader and an action can do what tRPC does for you, or GraphQL does for you. It’s not that we want to replace tRPC or GraphQL; it’s that 95% of the time this is just simpler, and you just do that. And so out of the box, you get this powerful solution that for most people and most cases just works fine, without any sort of extra integration or getting other things in there. If you want to do GraphQL, you certainly can, but that’s really for the more complicated cases.
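A plain-TypeScript sketch of the loader idea: the loader body runs only on the server (so it can use server-only imports like a DB driver), its result is serialized into the HTML, and the client resumes from that state instead of re-fetching. The function names and state format below are illustrative, not Qwik City’s real implementation.

```typescript
interface User {
  name: string;
  loggedIn: boolean;
}

// Server side: pretend this talked to MongoDB or a session store.
// None of this code (or its server-only imports) ever ships to the client.
function userLoader(): User {
  return { name: "kball", loggedIn: true };
}

// SSR: run the loader and embed its result in the HTML payload.
function renderPage(): string {
  const state = JSON.stringify({ userLoader: userLoader() });
  return `<script type="qwik/json">${state}</script>`;
}

// Client side: read the serialized state back out of the HTML instead of
// re-running (or even downloading) the loader.
function resumeState(html: string): { userLoader: User } {
  const json = html.replace(/<\/?script[^>]*>/g, "");
  return JSON.parse(json) as { userLoader: User };
}

const state = resumeState(renderPage());
console.log(state.userLoader);
```

The piece Qwik’s optimizer adds on top of this sketch is the part that can’t be simulated here: the component can refer to the loader function directly in the same file, and the bundler still keeps the loader out of the client bundle.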

Well, and once again, you have visibility into this…

[00:29:56.27] So it’s reminding me a little bit of what Remix was doing, where they forced you to define for any particular route what is the set of data that needs to be loaded, and then they can aggregate that and run things in parallel.

Yeah, so you can think about it - Qwik City is in many ways kind of like Remix, but we add a whole bunch of things on top of it, and specifically, we can do this because we have these magical functions that end in a dollar sign. And that gives us all kinds of possibilities that just aren’t possible if you’re going to use React underneath… And not to pick on React - the same thing is true for any other framework, whether you use React, Angular, Svelte, Solid, etc., because breaking up the code is not a fundamental low-level primitive of the framework. They cannot do all of these magical things, and so they have to do things like “Oh, this is server-only code. Clearly, I have to put it in a separate file that ends in .server.ts, or something like that. And this is client code, so I can put it in a separate file, containing [unintelligible 00:30:58.24] And I can’t directly refer to this thing over there, because if I do, it will get pulled in. And so I have to create a name that’s a string, and then the string gets passed between the two things, and as long as the string is the same, the two sides know how to talk to each other.” And all this is just extra ceremony that could all be avoided if I could just directly refer to you. But I can’t, because that would mess up the bundler, so we come up with all these other workarounds. But we have this optimizer that knows how to break things up, and the optimizer understands the intent of these things, understands what the runtime is trying to do, and then the optimizer can just do the breaking up for you without you even trying.

Now, you mentioned this is a primitive, and it got me thinking, do you expose this in any way for like plugin authors, or something like that?

Oh, it’s totally exposed to you; it needs to be exposed, so that you can do composability. So we have, for example, useClientEffect$, or useStyles$; if you want to make your own use methods, you can compose other use methods. And in that case, you might have to take a closure or some kind of callback that you want to lazy-load. So this is something that is totally exposed, and as an end user or a library developer, you can take advantage of it.

Notice what I keep saying: the only thing we care about is that it’s a function call ending in a dollar sign. As long as you make a function call that has a name ending in a dollar sign, this magic will be applied. So it is not specific to us; it is totally exposed, and anybody can do this.

I was just pondering - like, extending the level of transformation that you’re making, or shall we say flavoring the types of transformations… So thinking, again, of the data example, if you know that loaders are referencing databases with particular characteristics or something like that, you might want to transform them in slightly different ways, or give hints to the bundler to say, “Hey, these things should actually be run together, because we’ll be able to do something.”

Yeah. So this is where I think we get to the dangerous territory of you creating too much magic, and so we tried very hard to make sure that any magic we do is well understood, has well-defined properties, and we don’t deviate from it.

My philosophy is that if it can be done at runtime, you should always do it at runtime. Because there are two kinds of costs to a compiler. First of all, it’s magic; weird stuff happens that you need to understand. But the second problem is, because it runs at compile time, there are always these weird edge cases where you think it’s doing one thing, but no - because it’s run statically, ahead of time, it didn’t have the correct information, so it has to do the generic thing, not the specific thing that you think you’re doing.

[00:33:47.08] And so there are costs to compilers. I have a great amount of respect for compilers. And if there is a problem that cannot be solved in any other way, then compilers are great. But throwing a compiler at something just because it’s cool - that’s dangerous. And so we just want to make sure that we are very well defined in terms of what the optimizer does; it’s strictly defined in a way that’s easy to explain and for other people to grok, and we don’t want to deviate from that, because we think then you get into this black magic that’s dangerous.

Okay, so one more thing that I want to talk about: one of the pieces that goes into this ability to pull things out and compile them is being able to serialize framework state. And you’ve talked about that a lot. Actually, can we revisit a little bit what the distinction is between serializing framework state versus application state?

Yeah, so many frameworks know how to serialize application state. For example, if you look at Next.js, your application state gets serialized into a special script tag that I think has a type of __next_state, if my memory serves me right. And it’s basically just a JSON of the state of the application, so that when the application wakes up, it doesn’t have to go fetch the data from the server; it has the data available immediately, and it can do whatever it wants to do.

The problem is that there is additional state in the system, which is the framework state. Normally, you understand your application state, because you as a developer wrote it, but it’s kind of hazy what exactly the framework state is. The framework state is what the framework needs. So what is an example of that? Component boundaries are an example of framework state. The locations of all the listeners are an example. And because we are doing things with reactivity, the reactivity graph is an example of the framework state as well.

So let’s take an example. Imagine an Amazon page, and now you click on a button that says, “Add to the shopping cart.” How is the framework supposed to know that it has to go wake up the shopping cart and re-render it? And even if it knows that, how is it supposed to know where in the world the shopping cart is in the DOM? Where is the boundary? Where exactly is it? And then a shopping cart might have child components, and the framework wants to be like “Oh yeah, I want to re-render the shopping cart, but not its children. Those are not necessary.” So it needs to know all of this information, and all of that information is lost on the server. The way Qwik is unique is that it serializes all of this information.

And the question you might ask is, “Well, if all that information is lost on the server, how do the frameworks of today recover it?” And the answer is they have this thing called hydration. Hydration basically means: just run the whole application from beginning to end, and as you’re running the application, as you’re running the templates, the framework learns about the application - it learns where the component boundaries are, where the listeners are, what the components are. And in the frameworks that are fine-grained reactive, like Solid.js and Svelte, the framework also learns about the relationships: “Oh, this variable is bound to this DOM element. If I change this variable, I have to go update this DOM element.” All of that information is recovered during hydration.

But what it means is that hydration requires the application to be present. So if you want to have a strongly lazy-loaded world, you’ve just kind of ruined it for yourself, right? Like, “Oh yeah, I broke up the application into a million pieces. I can do all this lazy-loading. Guess what?! At the beginning, we have to download everything and execute everything.” That completely ruins your day. So you need some other strategy for recovering this information. And the unique thing about Qwik is that Qwik serializes all this information into the HTML in such a way that the framework can recover it later.

[00:37:57.21] So again, using the example of the Amazon shopping page: if you click on a button that says “Add to the shopping cart”, the framework knows that there’s a listener there, and therefore it knows that it has to execute something; without executing anything in your application space, the framework knows that there’s a listener there. And it knows how to load the listener and execute it. Then, as the listener is running, it’s probably mutating some state of the system. The framework knows how to recover that state, and it also knows, “Oh, you modified a count property in this shopping cart object. Now I know this count property is actually bound to this DOM element over there, so I know to go and update that thing.” And so it can do all this stuff without the rest of the application being present.
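The shopping-cart scenario can be sketched as follows: the server serializes “there is a click listener here, and its code lives at this QRL” into an attribute, and one tiny generic dispatcher on the client resolves it only when the event actually fires. The attribute name and chunk names are illustrative, not Qwik’s real serialization format.

```typescript
const cart = { count: 0 };

// Stands in for lazily-loadable chunks; real Qwik would resolve a QRL
// via dynamic import of the generated chunk file.
const handlers = new Map<string, () => void>();
handlers.set("cart-chunk#addToCart", () => { cart.count++; });

// What the server emitted into the HTML: the listener's location,
// serialized as an attribute on the button.
const buttonAttrs: Record<string, string> = {
  "on:click": "cart-chunk#addToCart",
};

// The only code the client runs eagerly: one generic dispatcher.
function dispatch(attrs: Record<string, string>, event: string): void {
  const qrl = attrs[`on:${event}`];
  if (!qrl) return;        // nothing was serialized for this event
  handlers.get(qrl)?.();   // resolved and executed only when triggered
}
```

Nothing in the application itself executes until the click happens; the dispatcher plus the serialized attribute carry all the framework state needed to resume.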

So if you look at systems like Solid.js or Svelte, which are fine-grained reactive, or Vue (also fine-grained reactive), they have a particular challenge, which is “How do I get this information back?” Their answer, in every single case, is to just rerun the application.

So the interesting thing about fine-grained reactivity is that once you have the reactivity graph, you can be extremely surgical about what you need to run and update. But in order to have the reactivity graph, you have to execute the whole world. And that’s where you’re kind of like, “Ah, I was so close, and then I lost it.”

And so the innovation of Qwik is that we know how to serialize all that stuff, so that we can reason about the application the same way that Vue and Solid and Svelte reason about the application, but we can do it without executing the application at the beginning. And that’s where the magic comes from. This is what opens up many things, because by not executing the application at the beginning, our resumability is instant, and it means that we can be very surgical: “Oh yeah, we only need to download the click listener and the component representing the shopping cart. We don’t need to download anything else on this page.”

Does that ability to serialize have any implications for, for example, being able to – I’m just imagining like sharing game state, or something like that. If I build a game in Qwik, and I don’t want to have a server behind it… But I let somebody play around, and maybe I give them a way to export both application and framework state into local storage, or into a URL even - is that something that helps there? Or you really just need the application state for that?

I have a demo where I kind of showcase this, where I open up a to-do list, and I interact with it, add new to-do lists, hide some items, and then I tell the system to serialize itself back into HTML, and I grab the inner HTML of the application. And then I open up a completely different browser and just paste it into the tab, and the application just continues running with the correct state, and even the hidden stuff is available when you unhide it. It just runs. That’s kind of the example we’re talking about.

The implication there is that when you’re talking about state, it has to be serializable, right? So we have a strong guarantee: if you’re going to have state, we need to be able to serialize it, and so we’ll eagerly throw errors at you, saying “Hey, you’re trying to store something that later on we won’t be able to serialize, so don’t do that.” Because you’re storing it now, but the serialization might happen later. And if we only threw the error later, when the serialization actually happens, you’d be like, “Where did this even come from?” So you want to do it eagerly.

So there are constraints that you have to learn as a developer. They’re not particularly strong ones, because if you think about it, Next.js is already serializing the state of the application for you. So the same constraints already exist in other frameworks, and nobody’s screaming that this is a horrible thing. They just learn it and they just know, “Oh yeah, I can’t put certain things into the state.”

[00:42:03.12] But having said that, we can serialize surprisingly many things. Obviously, we can serialize everything JSON can serialize. But we can also serialize promises, dates, and we’re even talking about serializing functions, provided that they are pure. So it’s a pretty rich set of things.
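As a sketch of what “richer than JSON” means, here is a minimal serializer that round-trips a `Date`, plus the eager fail-at-store-time check described above. Qwik’s actual serializer handles far more cases (promises, URLs, and so on); the `__type` tag and function names here are assumptions for illustration only.

```typescript
function serialize(state: unknown): string {
  return JSON.stringify(state, function (this: Record<string, unknown>, key: string, value: unknown) {
    // Date.prototype.toJSON runs *before* the replacer sees the value,
    // so inspect the raw value on the holder object instead.
    const raw = this[key];
    return raw instanceof Date ? { __type: "Date", iso: raw.toISOString() } : value;
  });
}

function deserialize(text: string): unknown {
  return JSON.parse(text, (_key: string, value: unknown) =>
    value && typeof value === "object" && (value as { __type?: string }).__type === "Date"
      ? new Date((value as { iso: string }).iso)
      : value
  );
}

// The "eager error" idea: reject unserializable values when they are
// stored, not later when serialization actually happens.
function assertSerializable(value: unknown, path = "state"): void {
  if (typeof value === "function") throw new Error(`${path} is not serializable`);
  if (value && typeof value === "object" && !(value instanceof Date)) {
    for (const [k, v] of Object.entries(value)) assertSerializable(v, `${path}.${k}`);
  }
}
```

The error path names the offending property, which is what makes the failure debuggable at store time rather than mysterious at serialize time.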

So yes, there are constraints. I don’t think they’re a big deal; specifically, when I go to Discord and talk to people who are trying to build applications with Qwik, it rarely comes up. It’s not a thing that people worry about. There are challenges with any new technology that people need to wrap their heads around, but this is not one that comes up almost ever.

Well, that leads to kind of an interesting question - what are the big challenges that people run into when they start trying to adopt Qwik? What feels hard to folks still?

Yeah, so people very much think in kind of classical ways. A typical example: suppose I want to track the position of the mouse. The way you would do it in a classical framework, you’d say, “Okay, I’m going to create a useEffect, and inside of the useEffect I’m going to call document.addEventListener for mousemove.” And you can do that in Qwik; that totally works. But if you think about it, that’s not what you want, because now you’re eagerly executing code on application startup. Instead, what you want to do in Qwik is run the registration of the event listener on the server. And to most people, that’s like, “What are you talking about? You can’t do that. That’s on the server. There is no DOM. We’re doing SSR; that can’t possibly be done.” But the framework knows how to serialize these things, and actually, it can be done on the server.

And so people end up writing code that is idiomatic for other frameworks, and it just happens to work inside of Qwik, but it’s not performant, because they end up eagerly registering all these listeners and pulling in all the code that’s unnecessary. So a lot of the issues are like, well, you’ve got to think like Qwik, which means “Can we offload many of these things to the server side?”
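The mouse-tracking example can be sketched without the real framework: instead of eagerly calling `document.addEventListener` at startup, the server emits “listen for mousemove at the document level, handler lives at this QRL” as markup. The attribute and chunk names below are illustrative; in actual Qwik this is roughly the territory of `useOnDocument('mousemove', $(handler))`.

```typescript
// "Server side": no DOM needed, because registering a listener is just
// emitting an attribute into the HTML stream.
function renderOnServer(): string {
  return `<div on-document:mousemove="pos-chunk#track"></div>`;
}

// Stands in for a lazily-loadable chunk; the client downloads it only
// if the mouse actually moves.
const lazyChunks = new Map<string, (x: number, y: number) => void>();
const pos = { x: 0, y: 0 };
lazyChunks.set("pos-chunk#track", (x, y) => { pos.x = x; pos.y = y; });

// "Client side": a generic dispatcher reads the serialized attribute.
function onMouseMove(html: string, x: number, y: number): void {
  const m = html.match(/on-document:mousemove="([^"]+)"/);
  if (!m) return;                // no listener serialized: nothing loads
  lazyChunks.get(m[1])?.(x, y);  // chunk resolved only on first movement
}
```

If the mouse never moves, the tracking code is never downloaded, which is the payoff of registering the listener on the server rather than in a startup effect.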

The other thing that people are kind of surprised by is that all the other frameworks have these expectations: “This is the server code, and this is the client code, and the two should never meet.” So you have to separate them into separate files, etc. And because people are already pre-trained with this mental model, when it comes to Qwik they’re kind of confused… Like, “Wait, where do I put the server code? Where do I put the client code?” Well, just put it in the same place, and the right thing will happen. And then they’re like, “Really? That seems strange. I didn’t expect that.”

So some of this is just behavior that they’re preconditioned with from other technologies, that they have to unlearn, so to speak, to understand “Oh, the mental model is different. It is not my application running on a server, and then a separate thing running on the client. The application starts on the server, and then it gets moved over.” And so a lot of times people will try to do things like, say, running code in the init phase of the component, and they’re surprised that it’s not running on the client. Because, well, the component [unintelligible 00:45:09.24] on a server, not in the client, so the constructor no longer runs on the client. And because they’re used to the hydration world, where all of the components reconstruct themselves on the client, they’re kind of like, “Well, why isn’t this working?” It’s like, “Well, because resumability.” We are instantiated on the server, and we continue running on the client, without reinstantiating everything on the client again.

And so there is a little bit of learning that has to happen… But once you kind of get that mental model, it kind of totally makes sense. Like, “Yeah, of course. We started on the server, then we moved to the client, but the server one was the one that instantiated the component. The server one is the one that registered the listener for the mouse move, and so the client only downloads the mouse move code if you actually move the mouse. If you don’t move the mouse, then no code gets downloaded.”

[00:45:59.28] Alright, that’s blowing my mind a little bit.

Yeah, it’s crazy, right?

Particularly the mouse example.

So the corollary of that is that registering a listener is just adding an attribute to the DOM, which the framework does for you. And the further corollary is that you don’t ever have to deregister the listener, because deregistration is just removing the attribute. When the component gets destroyed, all its attributes are destroyed with it. That’s just the natural behavior of the DOM: removing a node takes everything with it. And the moment the attributes are not there, the listeners don’t work, and so the memory can get released.
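Sketched out, deregistration-as-attribute-removal looks like this: because the dispatcher consults the element’s attributes on every event, deleting the attribute (or the whole element) is the entire cleanup story, with no `removeEventListener` bookkeeping. The `FakeElement` shape and names are illustrative stand-ins for real DOM nodes.

```typescript
// A minimal stand-in for a DOM element: just a bag of attributes.
type FakeElement = { attrs: Map<string, string> };

const registry = new Map<string, () => void>();
let clicks = 0;
registry.set("chunk#onClick", () => { clicks++; });

// The dispatcher looks the handler up by attribute on every event,
// so there is no long-lived closure reference to leak.
function dispatchClick(el: FakeElement): void {
  const qrl = el.attrs.get("on:click");
  if (qrl) registry.get(qrl)?.();
}

const el: FakeElement = { attrs: new Map([["on:click", "chunk#onClick"]]) };
```

Removing the attribute is indistinguishable from never having registered the listener, which is why the forgotten-cleanup category of leaks disappears.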

That’s fascinating. So do you then find that writing things in Qwik, you tend to have fewer memory leak problems? Because that has been one of the wonderful challenges that has come with the SPA world, is we have these mega JavaScript apps that just - you keep opening a tab for days, and the memory usage just keeps climbing…

Yeah, I think many memory usage issues are around registering event listeners and then not properly cleaning them up. And so that whole category of issues kind of disappears. That is not to say that you cannot have a memory leak in Qwik. You certainly can; it’s the nature of any language, not just JavaScript, that you can have a memory leak. But we have taken out one huge category of issues.

Now, obviously, if you keep calling addEventListener on your own, and not following the Qwik way of doing things - yeah, you will have memory issues. But that’s why we have the equivalent APIs. You have to kind of reprogram your behavior a little bit, to be like, “Oh yeah, I want to have a listener.” Don’t do it the classical way; use the Qwik way, because then all these other benefits just come out of the box. I can register the listener on the server, versus on the client. Because if you think about it, registering the listener is just adding an attribute to a DOM element. And that can certainly be done on a server, even if you don’t have a DOM. Even if you’re doing SSR, even if you’re streaming the HTML, you can certainly insert these extra attributes into the markup, so that they respond to a mouse move, or whatever.

Okay. So if somebody is listening to this and they’re like “Wow, I’ve gotta check this out, I’ve gotta learn”, what are the best places to go to start with Qwik?

Well, qwik.builder.io. That’s the homepage for our project. From there, you can find the Discord. The Discord community is very lively; a lot of passionate people are in there, and they’re helping each other. So those are the two best places. Qwik.builder.io also has good tutorials, and we have a REPL that runs in the browser, so you can try these things out without going through the trouble of installing anything… If you outgrow the REPL, you can go to StackBlitz. If you type qwik.new in your browser, that URL will take you to a StackBlitz where you can have a much richer experience and build your apps. And then if you are ready to do the full thing, you can always type “npm create qwik@latest”.

Awesome. Well, this has been super-fun, to get to go a little bit deeper. Are there any things that we didn’t cover today, but that you think people should know about Qwik?

I think we covered most of it. I really want to stress that Qwik was intentionally made to look like other popular technologies. And so part of the trouble we’re having is that people look at it and go, “Oh, it’s just like this other thing I know”, and they immediately dismiss it, not understanding that the value of Qwik is not that there’s a different developer experience. The value of Qwik is that there’s a different user experience. And this user experience becomes evident not when you build a Hello World app, but when you build large-scale applications. That’s when it really starts shining. Because every Hello World app can get 100 out of 100 on the PageSpeed score; every Hello World app can be fast and instantaneous, etc. Yeah, yeah, yeah. Sure. It’s when you have a large app that things break down.

[00:49:58.13] And so one of the things we like to talk about when it comes to Qwik - this kind of just happened recently - is that we compared different movie-app examples, and we wrote a blog post. And then a lot of frameworks came out and said, “Oh, this is unfair, because this particular person who wrote the demo app didn’t do these optimizations. They didn’t do lazy-loading. They didn’t do this part. They didn’t do that part. No, no, no.” And I think what it shows is that most frameworks have the easy path and the performant path. And those are two separate things. The problem is that when you’re building things, you’re always under time pressure, so you always take the easy path; you never take the performant path until you’re forced to do so, and the vast majority of people never are.

And so one thing that I think is unique about Qwik is that the easy path and the performant path are one and the same inside of Qwik. But unfortunately, that’s not something you can discover by looking at a simple example and playing with it for five minutes. That’s something that you only realize after you deeply build a complex app in the system, and you realize, “Hey, I have this complicated application, but still, a minimal amount of JavaScript is getting shipped. Still, SSR just works beautifully, because even if I turn off JavaScript, most of the pages still work.” You get all these secondary benefits that don’t become obvious until you have a huge application.

And so this is kind of what’s been difficult to explain, because you have to go through a lot before you get there, before you realize you’re stuck, because you took the easy paths [unintelligible 00:51:36.03] performant paths.

Awesome. Alright, well, thank you again for joining me today, Miško. This has been fun. I definitely am intrigued by Qwik, and I feel like we’re seeing a trend now where people are starting to realize how the easy path of React, of Vue, of Next, of whatever, is resulting in these massive, bloated applications, that are slow, and are costing us time, energy and money.

That’s right.

And so I’m excited to see where Qwik goes in helping us solve this by default, where the easy path is the performant path. Alright, so that’s it for today’s episode. Thank you, and this is KBall signing out.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
