JS Party – Episode #290

Modernizing packages to ESM

featuring Mark Erikson


All Episodes

Mark Erikson (web dev professor/historian, OSS Maintainer & engineer at Replay) joins us to talk about the shift from CommonJS to ESM. We discuss the history of module patterns in JS and the grueling effort to push the world’s biggest developer ecosystem forward. Get ready to go to school kids, this one’s deep!



Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Fly.io – The home of Changelog.com. Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Typesense – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!

Changelog News – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today.

Notes & Links

📝 Edit Notes


1 00:00 It's party time, y'all
2 00:56 Welcoming back Mark
3 03:28 A history lesson
4 10:36 UMDs were hot ...
5 11:49 History continued
6 16:59 Enter TypeScript
7 21:28 Publish as a Service
8 22:51 Common pain points
9 31:30 Sponsor: Changelog News
10 32:42 Recapping the post
11 44:22 Running multiple tests
12 51:51 We need standards
13 59:07 Parting thoughts
14 1:03:03 Closing time
15 1:04:06 Next up on the pod (Changelog++!)


📝 Edit Transcript


Play the audio to listen along while you enjoy the transcript. 🎧

Hello, internet! So excited to be back with you all this week. We have a very special guest with us today. His name is Mark Erikson. Hello, welcome, Mark.

Hello. Glad to be here.

Yeah. It’s just Mark and I today, and I think if we had any other person here, we wouldn’t even have enough time to probably even finish the intro, because - I mean, I know Mark; my brother Mark is a verbose man. In fact, we’re gonna be talking about a blog post he wrote today, that’s why we’re here… And I don’t even – I’m gonna admit, I haven’t even finished the blog post. There’s three paragraphs that I haven’t read yet. So my verbose friend Mark, welcome. So excited to have you on the show.

So we’re going to be talking today about some of the challenges that Mark has been facing as a maintainer modernizing his packages, his Node packages to use ESM. And so Mark, before – I mean, I can give you this glowing introduction, which is like you’re like the internet teacher, you are the, I don’t know, world’s most patient and verbose human being, you know a lot about JavaScript, you’ve done a really great job of being a steward for some of the most widely adopted and maintained packages in our community… All the things, all the things. But why don’t you go ahead and introduce yourself?

Yeah, so my standard introduction blurb - my day job is working for a company called Replay.io, where we’re building a true time-traveling debugger for JavaScript applications. I’ve been there for about a year and a half. I am loving working on this project; it’s an incredibly useful tool. We’ve got a fantastic team. And as good as it is right now, and as much as it can make debugging easier right now, a year and a half from now we’re going to have some fantastic new features that we just haven’t even had time to build yet. But I’m really excited about where things are going from here.

Other than that, as you said, I answer questions anyplace there’s a textbox on the internet. I collect useful and interesting links, I write extremely long blog posts, and I maintain the Redux libraries… But most people know me as that guy with the Simpsons avatar.

Yeah. Yeah, most people do know you for that, because that’s like the helpful avatar that pops up to answer random questions on a Twitter thread… You know, it’s like the most pleasant “Well, actually…” [laughter] It’s like, I welcome your well actuallys.

But yeah, Mark, thank you so much for joining us. So again, we’re here to talk about modules, and the process of moving this massive ecosystem of ours into the world of ESM. And for those of you who may be wondering why is this a thing - yes, it became a standard many years ago. It was actually technically part of the 2015 spec, although people have been working on modules for beyond a decade before that, and it officially became part of the spec… So why is this a thing? Well, we’re gonna go through a little bit of history first before we get into the crux of what Mark’s famous blog post is all about… But the frontend community hijacked the Node ecosystem, right? Because we were using Bower, and CDNs, and we were not in the Node space. And then React came along and was like “Oh, you’re gonna need a compiler, and you’re gonna need a Node runtime to run and build your app.” And so React came onto the scene as a frontend package in the Node ecosystem, and then the rest was history. So can you walk us through that evolution a little bit, Mark?

Sure. I love history lessons. I am generally a big fan of trying to understand when and why were tools created, or why were certain technical decisions made, in the context of that time in place. I think it’s a lot easier to understand why things are the way they are now if you understand the decisions that were made, how we got here that way.

So the first issue is that JavaScript as a language never had a built-in way to define packages or reusable modules in the same way that other languages like Java or Python did. So Java, from the beginning, you declared packages with a package keyword, you organized your files and folders based on a certain structure, and the compiler, and all the tools then automatically understood, “Here’s how you’re defining your code, there’s an import statement”, and then all the tooling worked based on that. And JavaScript never had anything like that. And that’s both because JavaScript was very hastily thrown together, the infamous 10-day development period at the beginning, but also because the intended use case early on was just like a tiny little bit of JavaScript in like a click handler or a mouse-over. Or maybe you’ve got a script tag in your HTML page, but it’s, I don’t know, 10, 15, 20 lines maybe… And so the original intent was just very small bits of interactivity. And starting in maybe 2004-2006, around the time that Ajax started becoming a thing, people began writing real serious applications in JavaScript. And now we’re talking thousands, tens of thousands of lines of code…

[06:35] Millions. Millions now.

Millions. And you need a way to actually organize that code, and to provide encapsulation and isolation, and how different files refer to each other. So one of the first attempts to do something with this was actually just reusing a sort of discovered JavaScript construct: the immediately-invoked function expression, or IIFE. And people figured out that if I write my code for a “module” inside an IIFE, it provides the encapsulation, and you can pass some arguments in, and it can return something, and it sort of approximates what a package module might feel like in other languages. And from there, you kind of had a couple of different community-invented specs that came out.

On the browser side, some people invented something called AMD, or asynchronous module definition. It was supposed to be browser-friendly. So you would first load a library, like RequireJS, that understood the structure of these AMD modules, and you would point it at your top-level module file, and it would download it and look at it, and it specifies that it depends on modules A and B, and so it goes and downloads those, and B depends on C, and so it downloads that… And it’s this whole big waterfall of requests. Eventually, it’s downloaded all of them, and then it unwinds and loads each file, and everything initializes. And that was meant to be browser-based, with the idea of downloading each file separately.
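The AMD shape can be sketched like this. In the browser, a loader such as RequireJS supplies `define`; here a tiny synchronous stub stands in so the example is self-contained (real AMD downloads and resolves dependencies asynchronously, and the module names are hypothetical):

```javascript
// Minimal stand-in for an AMD loader's registry and define() function.
const registry = {};
function define(name, deps, factory) {
  // Look up each declared dependency, then run the factory with them.
  registry[name] = factory(...deps.map((d) => registry[d]));
}

// A leaf module with no dependencies:
define("mathUtils", [], () => ({ double: (n) => n * 2 }));

// A module declaring that it depends on "mathUtils":
define("app", ["mathUtils"], (mathUtils) => ({
  run: () => mathUtils.double(21),
}));

console.log(registry.app.run()); // 42
```

The real loader does the same dependency wiring, except each `define` lives in its own file that gets fetched over the network first – hence the waterfall of requests.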

On the other hand, on the Node side, they invented the CommonJS module format, which was specifically designed to be synchronous, and read everything off disk at import time. So every time you called the require function, it actually does a very defined search in the local file system to try to find a file that matches the path that you gave it, which in a lot of cases isn’t even a complete path; it has implicit assumptions about looking for index.js, or looking for package.json, and trying to find the main field, and those sorts of things. So you had two different community-defined specs, neither of which was standardized, standardized, and both meant for different use cases.
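That implicit search can be sketched as the list of candidates Node tries for a call like `require("./lib")` – illustrative only; the real resolution algorithm lives in Node core and has more steps (extensions for `.json` and `.node`, `node_modules` walking, and so on):

```javascript
// Rough sketch of the lookup order Node's CommonJS resolver tries for a
// relative require("./lib"). Each candidate is checked on disk in turn.
function candidatePaths(request) {
  return [
    request,                    // 1. the exact file: ./lib
    request + ".js",            // 2. ./lib.js
    request + ".json",          // 3. ./lib.json
    request + ".node",          // 4. ./lib.node (native addon)
    request + "/package.json",  // 5. a directory: read its "main" field
    request + "/index.js",      // 6. fall back to the directory's index.js
  ];
}

console.log(candidatePaths("./lib"));
```

All of that probing happens synchronously on the local file system, which is fine on a server and a non-starter for a browser downloading files over the network – the core reason AMD and CommonJS diverged.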

I’ll interrupt you right there… I think this for me is like the beginning of the rift, right? Which is interesting looking at it in retrospect now. You have people using JavaScript now in two different places, right? Folks who are using it exclusively in the browser, and folks that are also using it in a server context. Remember Node was kind of like a revolutionary thing back in the day. I don’t think even Brendan Eich could have predicted that one day JavaScript would be used on the server. One day JavaScript would be used to write scripts that are running in washing machines. It’s like, “What?!” So yeah… Actually, Whirlpool has a phenomenal JavaScript engineering team, for what it’s worth…

[09:44] So JavaScript being in these different runtimes means people are solving for this problem that really should just be part of the language, should just be a standard feature, but it isn’t there. It’s this huge gaping gap in the world’s most popular programming language, for God’s sake. And so here they are now, independently solving this problem in the best way that they know how, making the best decisions under the constraints, and all that jazz… But they’re solving the same problem in different ways. And I think it’s interesting to hear you talk about this rift, because this rift – bringing it back together, bringing us all back together to do it one way in these different contexts is like the real like source of pain here, because there’s some really big decisions, and we’ll get into that later in the show. But anyways, so back to you, Mark.

Yup. Some people attempted to kind of paper over some of that problem, and invented another module format called UMD, universal module definition, which is this horribly, utterly hacky and disgusting-looking wrapper that does some careful checks, and it means that the same file can be used simultaneously as either an AMD module in the browser, a CommonJS module under Node, or a plain script tag that attaches variables to the global window.
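A classic UMD wrapper looks roughly like this – a hedged sketch, since real-world wrappers vary in the exact feature checks, and the library body here is a hypothetical one-function module:

```javascript
// UMD: the same file works as an AMD module, a CommonJS module, or a plain
// <script> tag that attaches a global, depending on which environment
// features the wrapper detects at load time.
(function (root, factory) {
  if (typeof define === "function" && define.amd) {
    define([], factory);            // AMD loader (e.g. RequireJS) present
  } else if (typeof module === "object" && module.exports) {
    module.exports = factory();     // CommonJS (Node)
  } else {
    root.myLib = factory();         // plain browser script: global variable
  }
})(typeof self !== "undefined" ? self : globalThis, function () {
  // The actual library body, returned from a factory function:
  return { greet: function () { return "hello"; } };
});
```

The checks are deliberately feature-detection based, which is why the wrapper reads as "horribly hacky": it has to guess its environment at runtime rather than being told.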

UMDs were like the hot – I think I can say the word s**t on this podcast. They were hot s**t when they first came out. UMDs were like “Oh, my God!!” It was revolutionary. It was like game-changing. I remember the whole community being so excited about UMDs. It was like “Oh, they’ve found a way.” Because at that point there was already a need, there was already code that was being shared between the browser and server contexts. We didn’t have React; this predated tools like React, but there was already a need in the community for that alignment… So I remember that solved a really big problem back in the day.

So where we ended up is that there were several years of investigation and research into what an official module spec might look like. And that was finalized, as you said, as part of the ES 2015 language spec, and we ended up with what are now known as ES modules. And the biggest issue here is that the ES spec defines the syntax for ES modules; the import and export keywords, and how those are supposed to behave, and how when you export a variable, it’s a live binding, so if something else imports it, and then the first module reassigns to it, it actually gets the new value. But there were some parts of the behavior that they didn’t specify, and that’s roughly speaking how the host runtime environment - in other words either a browser or Node - should actually handle loading the files off disk, and how they should handle interoping with other module formats.
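The live-binding behavior Mark mentions can be sketched as two hypothetical files (shown together here, so this is an illustration rather than a single runnable script):

```javascript
// counter.mjs
export let count = 0;
export function increment() { count += 1; }

// main.mjs
import { count, increment } from "./counter.mjs";
console.log(count); // 0
increment();
console.log(count); // 1 – the import is a live binding, so the reassignment
                    // inside counter.mjs is visible here; a CommonJS require
                    // of a primitive would have copied the value instead
```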

And so what ended up happening is that browsers mostly figured out how they were going to implement the downloading, and the parsing, and the execution of ES modules, but Node had a much harder time figuring out what they were going to do, because they had to worry – like, Node was already all-in on CommonJS. Node was built around this concept of CommonJS. And so now the question is – like, if we look at any random .js file, and we’ve found it, and we’re trying to load it, how do we know ahead of time if that is supposed to be CommonJS or ESM? We really can’t know it until we’ve actually tried to parse it and execute it, and then find out that “Oops, we guessed wrong.” Or what happens if you’ve got an ESM file that tries to call require, or tries to import a file that’s CommonJS, or vice versa? Like, what are the semantics, and how does that interop?

[13:51] And so the Node folks ended up spending years debating all this and trying to figure out how it was going to work, and it was a very painful and involved process. A lot of people with good intentions spent a lot of time arguing about how this stuff should work. Eventually, Node made some technical decisions, and implemented them, and moved on. And so in theory, Node has pretty good support for both CommonJS and ESM right now, but where this has led to is we have an ecosystem of many different tools with different expectations around how different modules – oh, the other complicating factor… And this is what you were saying a minute ago - people have been wanting to use ES module syntax even since before the spec was finalized. And so bundlers like WebPack, and Vite, and Parcel, and ESBuild have had support for parsing and loading multiple modules of different kinds in one codebase for years. So they’ve kind of had to invent their own semantics for what happens when you go back and forth, and what happens if a CommonJS file imports an ESM file, and all this stuff. And that doesn’t necessarily match how Node decided that they were going to do things. And then you get TypeScript in the picture.

Oh my God.. Yeah, hold that train for just a second. We can’t even get to the TypeScript discussion yet, hold on. So just to kind of recap here… This is for me this interesting cluster of the maintainer community eagerly giving developers what they want and need, before there’s even a decision as to how these things need to resolve under the hood. And then don’t forget, it’s not like the bundlers got together and had a standard for how they were going to make these decisions. Each bundler had their own logic and algorithm tree that they used for module resolution. And meanwhile, you have the Node TSC, the Technical Steering Committee, many folks who’ve been on this show in the past, who’ve spent just years trying to hash this out, and they kind of finally landed on something that shipped stable… I think was it Node 13? Or sorry, I think it was 13, 14, something like that.

At least 16, if not 18.

13 was experimental, and then maybe – yeah, and then 16 was maybe one that was stable. So it finally shipped, but the community has been used to this frictionless experience already through the thankful, hard work of the folks who’ve been doing all the bundling. And so now, how do we shift this to work out of the box and just be turnkey is the real question? Because we can have these magic polyfilling machines everywhere; that’s not scalable, or sustainable. WebPack should not be a prerequisite to use Node. Or whatever tool. So this is where we are now, and we’ve had that problem growing… And alongside it, you’ve had the rise of TypeScript, around the same time, as kind of – we started to figure out how we wanted to handle this, and how to handle ESM in Node, and then we have like the hockey stick rise of TypeScript, which adds another layer of complexity into the already complex matrix. So yeah…

Yeah. And there’s multiple additional factors from there. I actually put out a tweet back in April, when I was neck-deep in the middle of all this stuff, where I listed “Here are things I have to keep in mind when I publish a library in 2023.” And quoting myself, “Build artifact formats, ESM, CommonJS, UMD, matrixed with dev, versus prod, versus various Node env flags. Am I prebundling my JavaScript library when I publish it, or am I publishing individual JS files per source? How do you define package.exports? What about WebPack 4? What about TypeScript’s moduleResolution option? What about different user environments? What about different bundlers? Node in ESM versus CJS mode? Do I need to prebundle my TypeScript type defs? What about edge runtimes? What about React having the new “use client” keyword, or needing to deal with server components differently? Oh, and what about all the libraries that I depend on?” It’s a mess.

[18:20] With something like “Only wear green socks”, part of that list too… Because it just sounds like such a kooky list that you probably should have like a special piece of clothing on you when you’re publishing a new package.

Pretty much, yeah.

So yeah, so enter TypeScript. So let’s talk about how TypeScript complicates this landscape, Mark, before we dig into some of the specific pain points that were outlined in your epic blog post.

Well, so there’s a couple more things that even tie in along with the TypeScript aspect. So we said earlier that the frontend ecosystem kind of jumped on the Node train, and that includes publishing packages to npm. And publishing a package to npm wasn’t that bad in the beginning if you make the assumption that everyone who uses this is also just running it under Node.

And that’s a fair assumption, for what it’s worth. A reasonable assumption.

But now that we’re starting to worry about – and by now I’m referring backwards to 2011 and 2012… You start worrying about “I need to publish this code so that it can run in a browser.” Well, okay, we’re going to publish this code to npm in CommonJS format, because that’s what most tools are going to understand, but we also need to make sure that we backwards-compile our JavaScript syntax, because everyone has to worry about running their code in IE 11. And IE 11 only understands ES5 syntax. So even at that point, if you wanted to write and author your library code using upcoming JavaScript syntax - what eventually became ES2015 - you had to backwards-transpile your own code at build time to ES5 and CommonJS, so that it was the lowest common denominator, and build tools could load the modules, and the syntax itself would execute in IE 11. And that’s basically where we’ve been at for seven plus years. Even now, most libraries are compiling to CommonJS, and compiling the syntax to ES5, so that it works everywhere. And it’s just within the last couple years where we’ve really started seeing more libraries not just including an ES module file in their published package, but trying to make it the primary file, and actually saying, “Okay, we’re going to ship a more modern syntax, whether it’s ES2017” or just literally like “Here’s the syntax I wrote minus the TypeScript types.” And that’s where the other thing comes in, is that with everyone using TypeScript, or many people using TypeScript to write their code, that’s another layer of a build step. Because TypeScript code won’t run in any runtime environment, so at a bare minimum, even if you’re not going to convert the syntax, you have to at least strip out the TypeScript types when you publish it.

Yeah, that sounds like a good time. It sounds like there needs to be a service, publish as a service. Like, “I wrote this in whatever language, whatever runtime. Here, make it work for everyone else.” There should be like a Jetsons machine; you put whatever code in, and it spits out this huge matrix of formats that you can distribute. It’s crazy. Like a podcast distribution, or something. Like, wherever you get your podcasts, right? Like, “Wherever you run your code, here, take this. It should work.”

[22:04] But anyways, okay. So getting into this TypeScript craziness, can you kind of – so most people are not maintainers, right? So most people are on the consumption side. I know for a fact that people on the consumption side have experienced a lot of friction around this, especially in the early days when Node was experimenting with ESM. There was just a lot of import errors… Back in the day, I would almost have that error memorized. When you try to use import in a Node context… It’s like, there’s some reference error…

“Error module not found”, something-something.

Right. Yeah, google that. There’s so many hits for that, right? But can you share some insights into what are some of the common pain points that you see people hitting because of these issues? …before we flip over to the maintainer hell that you are in. How are everyday people, everyday developers feeling this pain point?

So developers certainly run into this stuff downstream from the libraries. And I even ran into this - we might talk about in a minute - when I was attempting to make my first updates to Redux Toolkit to modernize some of its packaging. The Redux Toolkit depends on the Redux Thunk library, which I maintain, and the Immer library, which I do not. And my attempts to modernize the package worked somewhat, but then like Jest, which is yet another tool that does its own module parsing, doesn’t have great ESM support. And it was getting confused, because it was trying to load both those packages in an ESM context, and instead of getting – like, they both mixed default exports and named exports, and instead of getting the actual values that I wanted, I was getting back an object with a key of default inside of it, which is not the thing that the code expected. So there’s that, there’s the error module not found thing that you were talking about… A lot of app developers have seen problems where some library authors, like Sindre Sorhus, a prolific author of Node-related libraries, have decided that they’re just going to go ESM only for everything. And he even – he published beta versions of all his packages, he even put up a gist saying “Here’s my reasons why everything I do is ESM only from here on out.” And I can absolutely understand and respect the technical and personal reasons behind that decision, but it’s also meant that in a lot of cases people upgraded dependencies, either intentionally or unintentionally, and all of a sudden the latest version of Chalk or Node Fetch or something like that broke, because the rest of their toolchain isn’t properly configured to load these libraries the way that the author is now publishing them. And so a lot of people have had to revert back to the previous major version of these libraries just so it’s not ESM only.
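The `{ default: … }` problem Mark describes can be simulated in plain JavaScript – this is an illustrative sketch of the interop wrapper some tools apply, not Jest’s actual loader code, and the function names are hypothetical:

```javascript
// A CommonJS package that mixes a default export with named exports,
// the way Immer and Redux Thunk historically did:
const moduleExports = {
  default: function produce() { return "next state"; },
  enableMapSet: function () {},
};

// What a loader with mismatched interop semantics can hand back when it
// loads that CommonJS file in an ESM context: the whole exports object
// gets nested one level deeper under a `default` key.
const interopResult = { default: moduleExports };

// The consuming code expects `produce` to BE the default export...
const produce = interopResult.default;
console.log(typeof produce);          // "object", not "function"
// ...but the function is actually one level further down:
console.log(typeof produce.default);  // "function"
```

Calling `produce()` at that point throws "produce is not a function", which is exactly the class of confusing downstream error app developers hit without ever having touched the packaging themselves.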

Yeah, great summary. I mean, the Node ecosystem is like this Lego Land. And that’s the beauty of it and also one of the pain points, I think especially for smaller teams, or new developers… You’re having to maintain the matrix of interoperable packages and their peer dependencies and whatnot, and whatnot… As well as what runtime version you have. Like, what version of Node are you even running, and does this version of Node support top-level awaits, for example? So yeah, in Sinnorus’ – I just said Sinnorus. I always mispronounce their name.

Sindre Sorhus, I think…

[25:51] Dyno man, okay? His whole thing with “Oh, I’m only gonna publish ESM”, which, I respect their decision to do that… But the implication of that is that you might not be running the right version of Node; your customers might not be running the latest version of a browser that would support said language syntax. There’s some serious ripple effects, and so then are you kind of transpiling your dependencies? How do you even manage that process easily, as a team? So there’s pretty big downsides.

But the flip side of this is that unfortunately, Node dependencies have been stuck in 2014, because no one is publishing “modern JavaScript” as their final output. So you have bytes and bytes of JavaScript that could be removed from the web, and we could be better optimizing all sorts of things… And there’s an initiative that I tried to start many years ago, and just got really busy, but kind of trying to say “Hey, can we have a standard around how we publish our dependencies? …because we should be able to publish modern JavaScript, and not hold the web back.” So that’s a whole thing to – right now, the web is very much held back by all of the third party JavaScript that’s in 2014 code. Minus Dyno man. So yeah, any thoughts on that?

Yeah. And like I said earlier, packages have generally had to publish the lowest common denominator in terms of module formats and syntax, and it has definitely added to the weight of webpages. And so being able to ship modern – for example, optional chaining is great. I love optional chaining syntax. Have you ever seen what it gets transpiled to? Like, that little question dot ends up as like 80 or 100 characters of something-something bang double equals void zero, bla, bla, bla, bla. And if we can just ship modern JavaScript, that’s way fewer bytes that have to go out to the browser.
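The expansion Mark mentions looks roughly like this – the ES5 output below is illustrative of what Babel-style transpilers emit for `?.`, not the exact output of any particular tool or version:

```javascript
const user = { address: { city: "Dayton" } };

// Optional chaining as written – a handful of characters:
const city = user?.address?.city;
console.log(city); // "Dayton"

// Roughly what a transpiler targeting ES5 emits for those two `?.`s:
var _user, _user$address;
var cityES5 =
  (_user = user) === null || _user === void 0
    ? void 0
    : (_user$address = _user.address) === null || _user$address === void 0
    ? void 0
    : _user$address.city;
console.log(cityES5); // "Dayton"
```

Same result, many times the bytes – multiplied across every optional chain in every dependency on a page, which is the web-weight cost of compiling everything down to ES5.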

Yeah, I mean, it’s a win/win for everyone. It’s a win/win for users, it’s a win/win for developers, it’s a win/win for the Earth, because that’s literally like less resources and like less bytes across the wire, less internet trash… All kinds of things.

But to get back to the TypeScript problem here, and some of the issues that you described as consumer pain points… I think when you publish these blog posts, you’re airing out the next piece of dirty laundry to go live in the JavaScript community, which is these pain points. So I know you published this blog post on August 8th, it’s had a lot of circulation, and actually, one of our Changelog++ listeners - Nick, if you’re listening - actually requested this episode. They were like “Hey, I’d love to hear a discussion on this topic.” And I was like “Well, why don’t we just invite Mark back onto the show? I love my main man Mark, you know?” So you kind of going viral with this post means that yeah, people are feeling this pain as well as you. So can you share some thoughts on that?

Yeah. So I guess first off, when I write my blog posts - yes, I’m writing them with the idea that someone’s going to read them, but usually, I have some idea, like “This is a topic that people would actually be interested in or not.” The one about how React renders has been by far my most widely read post, because that’s a thing that people care about and often don’t really understand. But this one - trust me, I was not writing this with an expectation that it was going to get lots of views… It was just, “I’ve gone through all these pain points. This is mostly me attempting to document them just as an FYI for folks.” And it hasn’t necessarily gotten tons of views, but it actually has gotten a surprisingly large amount of people saying “This is a good article, thank you for writing it” or “I’m a library maintainer, and yeah, I’m experiencing all these pain points.” So it definitely has struck a bit of a nerve.

[30:15] Yeah. And having to publish for this crazy matrix of considerations means that you have to sometimes question yourself, like “Is this – it can’t really be this bad. Maybe it’s just me, right?” And I think by you airing your grievances and saying “These are all the things that I had to do, and these are the issues that I still have, and these are the problems that I hit for this big, widely adopted package”, it really lets people come out of the woodworks too, to say “Oh, great, it’s not just me.”

Okay, you have to understand that I have this very, I don’t know, Jekyll and Hyde view of myself. There’s plenty of times when I come into a conversation and like “I’m the Redux maintainer. I know what I’m doing. You should totally listen to what I have to say.” But there’s lots of other times where I genuinely feel like I don’t know much about this topic. And that may or may not be an accurate view of myself, but that’s the feeling that I have. And there have been a couple of people who read this post and legitimately told me, almost verbatim, like “Wow, if Mark Erikson doesn’t know what he’s doing, how can anyone else do this?” Which is hilarious, because I barely know what I’m doing here…

Break: [31:31]

Let’s get into some of the specifics here. So we’re gonna flip on to talking about what are these serious pain points that Mark has experienced, along with other developers. Where should we start? Should we start with the –

We’ll sort of go through the post sort of in order… I can just sort of even recap things off my own head.

Sounds great.

At the start of this year I was the primary maintainer for five different Redux-related packages: the original Redux core, React Redux, Reselect, Redux Thunk, and Redux Toolkit. And all those have been around for years. We’d even published major versions of like React Redux in the last couple of years… But most of them had publishing and build setups that went back many, many years. And on top of that, Redux Toolkit - we published version 1.0 in late 2019, we’re up to version 1.9… And I’d made a few updates to the publishing setup, switching to something like ESBuild for that project a couple years ago, but most of them we were just building with Rollup and Babel, and shipping in a whole big mixture of CommonJS, ESM, and UMD files. And none of those projects specified the relatively new package.exports field to define how different build and runtime tools should determine which module file they’re supposed to load. And we’d gotten some reports from people saying “If I try to import Redux Toolkit in a full ESM Node environment, then it errors, the module not found error.”

[34:20] Some people have said that we can’t import the right things with certain TypeScript settings… And then along with that, the other points I mentioned, where we were still compiling all the code to ES5 to support IE 11, and we generally wanted to ship modern JavaScript, and better support full ES module compatibility, whatever the heck that actually means.

I’d been squirreling away hundreds of bookmarks about this topic for years, knowing it was a thing that was eventually going to happen, and scared me… So I finally started to look at it at the start of this year, and I did a bunch of research, and I thought I knew what I was doing, and I was very wrong… And I tried to update Redux Toolkit to add package exports, and I tried putting the type module field into our package.json, thinking that was a thing that I needed to do. And like I said, stuff broke. In this case, primarily Jest not being able to load dependencies properly.

And you put type module in your package.json, right? Because there’s maybe a few other places you could have potentially even done something similar to that. There’s tsconfig, there’s package.json, there’s…

I was under the impression that putting the field type module in your project’s package.json was a requirement for “ES module compatibility.” I didn’t even fully 100% understand what that meant, but I thought that it was a thing that I had to do. And as I found out later, the core issue there is that – so Node has to figure out for any given file “Is this CommonJS or ESM?” And originally, everyone’s just shipping files with a .js extension, and importing them, requiring them, whatever… So if both your CommonJS and your ESM files have a .js extension, how can it know without parsing them? And so what the Node folks decided was there’s two different ways you can do it. One is you can actually use different file extensions. If it’s .mjs, it’s an ESM file. If it’s .cjs, it’s a CommonJS file. But where does that leave any .js files? So what they decided was that if you put a type module field in your package.json, that is telling Node that anytime you see a plain .js extension, assume it is a module, or you can use I think type CommonJS to assume that it’s a CommonJS file. So I didn’t really understand the implications…

What if we don’t have anything there? If you don’t have anything there, it just defaults to CommonJS.

Yes, still defaulting to assuming CommonJS for all .js extensions.

Yeah, which makes sense for backwards compatibility, to think about all the thousands and thousands of Node projects that are CommonJS, and they want to be able to still bump their Node versions. So of course, they’re going to try to do this in a way that’s not going to break everybody’s app automatically. That would be like the death of Node, if you can’t upgrade… And just to be clear, the .mjs - I stopped following that discussion, because I was like “I can’t think about this right now.” It’s gone back and forth… Where have we landed on .mjs as a thing now? Because I haven’t really seen wide-scale use of that… But I’m wondering if you have, or if you’re more familiar.

[37:55] I honestly don’t know much more of the discussion other than realizing that a) it was a thing that I could actually do, b) it was actually going to be simpler and easier to name my own output files with a .mjs extension than it would be to put a type module in the package file. And frankly, I still think that .mjs and .cjs look stupid, but type module caused enough complications for me that I decided “Okay, fine, changing the file extension of the output files that I build seems to be less trouble overall than having type module in the package.”

Yeah. And where does TypeScript fit into this, too? Is it MTS? Which, by the way, sounds like a Northeastern metro system, or something like that, MTS…

That’s the same kind of problem.

Or CTS… It sounds like a network television channel.

TypeScript tries to follow what – TypeScript now has several different rule sets for how it determines both where are your JavaScript files, and where are your type definition files in a project. And that’s now controlled by a TypeScript config option called moduleResolution. The original setting is node, now renamed to node10, which is the old-school option. There is a new node16 option which tries to match Node’s current behavior, and then there’s also a new bundler option that’s kind of like “Do whatever webpack and other similar tools do.” And that also implies – like, one of the things that Andrew Branch from the TypeScript team pointed out is that you can have mismatches between your runtime code and your TypeScript types. Because of how the module formats work, if you only publish one set of types, and for example by default it has a .d.ts extension, TypeScript’s gonna say “Oh, those are types that are sort of meant for a CommonJS file, right?” But the runtime behavior and the exports of an ESM file could be different; having only the types that represent the CommonJS output might not be accurate for what actually happens at runtime. So the “correct” answer here per Andrew Branch is that you really need to publish two copies of your TypeScript types for your library. One generated with TypeScript having CommonJS settings, and one generated with TypeScript having ES module settings. And those two should have a file extension that matches your JS file’s extension. So if you’re publishing a .mjs JS file, you should also publish a .d.mts type definitions file.
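As a rough sketch, that dual-publish layout tends to look something like this in a package.json exports map. The dist/ file names here are hypothetical, but the shape follows the convention Mark describes: matching type definition extensions per format, with the types condition listed before default so resolvers see it first.

```json
{
  "name": "some-library",
  "main": "./dist/index.cjs",
  "exports": {
    ".": {
      "import": {
        "types": "./dist/index.d.mts",
        "default": "./dist/index.mjs"
      },
      "require": {
        "types": "./dist/index.d.ts",
        "default": "./dist/index.cjs"
      }
    }
  }
}
```

With this shape, a require() caller gets the .cjs build plus .d.ts types, while an import caller gets the .mjs build plus .d.mts types, so the types always describe the module format that actually loads.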

Wow. Yeah, this sounds like a living hell to me. I don’t know about you all listeners, but yeah, this does not sound fun. And is there like a GitHub comment or a blog post or something that we can cite to include in our show notes for that recommendation that Andrew had?

So actually, if you look at the bottom of my blog post, I tried to link a few different things. One of those items… Andrew Branch is working on a very large new set of documentation for the TypeScript docs, that will talk about “Here’s how TypeScript understands modules and the state of the world.” And the current work in progress for that is in a gist, and it’s pretty long; it’s several thousand words, at least.

Well, I guess my question is why does it have to be any different than how JavaScript understands it? …in the sense that does there need to be a distinction other than just how we handle types, and that should also just be very straightforward for the most part? I’m just trying to understand why the need to have a whole complex second system?

[42:02] I’m going to do a bad job of explaining it, but the one-line summary is that it’s because TypeScript is a types-only overlay on top of whatever happens at runtime. In fact, here’s even one of the goofier things that I still haven’t fully wrapped my head around. So one of the other aspects of using ESM, especially under Node, is I believe in some cases you really need to actually specify .js as part of an import statement. So import, curly braces, whatever, from, dot slash some other file dot js, close quotes. And I’m still not even 100% sure when that’s necessary. But okay, what happens if I am authoring a file in TypeScript, and all my TypeScript files have a .ts extension? That’s just at compile time. At runtime, all your files have a .js extension, so you have to write import dot slash something dot js in a TS file, in some cases. And I still don’t even know when that’s necessary, but that scares me.
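A tiny illustration of the quirk Mark is describing, with made-up file names. Under TypeScript’s node16/nodenext settings with ESM output, relative imports in a .ts source file have to spell out the extension the file will have at runtime, not the one it has on disk:

```ts
// math.ts — this source file compiles to math.js at build time
export function double(n: number): number {
  return n * 2;
}

// main.ts — the import specifier says .js even though the file
// next to it on disk is math.ts, because TypeScript resolves the
// specifier against the *compiled* output, and won't rewrite it:
import { double } from "./math.js";

console.log(double(21));
```

This is exactly the part TypeScript refuses to rewrite for you, which is what the next exchange is about.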

Well, you could compile away that, right? You could have a prebuilt step that replaces that, or whatever the hell else.

Well, that’s the problem… The TypeScript folks are trying to stand firm on “Yeah, we had a couple of features way in the past that required runtime changes, but from here on out we only do types-level stuff. We don’t rewrite your source code.” So there’s multiple issues where people have begged, “Let us write .ts in our imports, and then just like rewrite that to .js for us at output time”, and the TypeScript folks were like “Nope, nope. That’s runtime changes. Not going to do it.”

Yeah. I mean, I guess it just makes grepping your code that much, like, one more thing you have to think about when you’re just trying to grep for all matches of a file name, or whatever. Or file path. But yeah, I mean, I don’t know… I mean, we’re living this lovely module hell, so that is 2023… But anyway, so moving on your blog post. So we were – where did we leave off?

Yeah, so my first attempt at trying to modernize Redux Toolkit’s package didn’t work, and I concluded that I’m going to have to spend a whole bunch of time setting up example projects using a half-dozen build tools and environments and combinations, so that I can verify that any future attempts to update the package actually work right in each of those environments… So I wrote a little tiny Redux Toolkit sample app and a Playwright test, and then I built it with Create React App 4, which uses webpack 4, Create React App 5, which uses webpack 5, which supports the exports field… Vite, and Next, and then a couple of different – like, Node in CJS mode and Node in ESM mode folders. And that at least helped.

The other really big thing I’ve found was that same Andrew Branch guy has written a tool called “Are the types wrong?” And you give it a package name, and it will download the package and say “Here’s how TypeScript is going to interpret the way you’ve defined everything.” And it actually points out a number of common mistakes, like “Do you have a mismatch between your JS files and your TS types?” Or “Oops. Something you listed, just like – we can’t even find that at all.” And so using that as both a local command line tool for checking things, and a CI tool for verifying that this PR doesn’t break anything has been incredibly valuable. And in fact, I was using that even trying to work on stuff last night.
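For reference, the tool lives on npm as @arethetypeswrong/cli; a typical local pre-publish check looks something like this (flag spelling per the versions I’ve seen; double-check the tool’s own docs):

```shell
# Pack the local project into a tarball and analyze how TypeScript
# will resolve its published types under each moduleResolution mode:
npx --yes @arethetypeswrong/cli --pack .
```

There’s also a web version at arethetypeswrong.github.io where you can check an already-published package by name.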

Wow. I think you’ve kind of made yourself your own little pre-package dressing room kind of space, where you’re just like “Alright, now let me fake publish” with [unintelligible 00:46:07.17] or something.

[46:10] Exactly that, yeah.

Yeah. “And then I’m gonna make sure that everything still works as expected.” And I think for me, I really appreciate, and I hope the community does as well, of course, your diligence that you’re going through to make sure that there’s no edge cases for people, depending on what combo they have… Like runtime bundler, etc. But really, this is a moving target, let’s be real, right?

Very much so, yeah.

So it’s not a magic bullet. You’re like one publish away in any given thing, and any given thing changes and you’re back to square one, potentially. And so…

And there’s even another example that popped up literally last night, or this morning… So the React team has been working on React Server Components for the last couple years… And I think it’s a genuinely interesting and very useful technology, but there’s been issues both around the marketing rollout, as well as the way that the React team and Vercel have been implementing the first real usable version of it… And there’s documentation about how to use it inside of Next as an application developer, but there’s no real documentation about how libraries are supposed to interact with a React Server Components environment. And so this has popped up in a few different ways. So Next 13.4 came out back in May, and they flipped their defaults so that if you just keep hitting Enter when you create a new project from the command line, it defaults to a Server Components setup. And the documentation shows using Server Components by default… And so people are following the defaults and trying to use Server Components without really thinking about the pros and cons. And a lot of people still want to use Redux, and so they’re trying to add Redux and thinking they can just throw the React Redux provider in one of their Server Components. But it turns out Server Components have a lot of technical restrictions. You can’t call createContext from within a Server Component. You can’t call or use any of the React hooks in a Server Component. And so people were trying to do what they thought was the obvious thing of “I know how to set up Redux. I’m just going to add it to this parent component”, and it would break.
And then they would file issues against either the React Redux or Redux Toolkit repos and say “Why doesn’t this work right?” And then we had to spend hours looking into it before we asked, “Oh, are you using Next and Server Components?”, and they would say yes, and then we’d figure “Oh, those don’t actually work together, at least not the way you think they do.” You can render it inside what’s now called a client component, but you have to have that separation from the server component part of the page.
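The separation Mark describes usually ends up looking roughly like the following in a Next App Router project. This is a sketch of the commonly recommended wrapper pattern, with illustrative file names: the "use client" directive marks the client boundary, so the context and hooks that Server Components can’t use are confined behind it.

```tsx
// providers.tsx — "use client" marks everything below as a Client
// Component, so createContext/hooks (via react-redux) are allowed here.
"use client";

import type { ReactNode } from "react";
import { Provider } from "react-redux";
import { store } from "./store"; // hypothetical store module

export function Providers({ children }: { children: ReactNode }) {
  return <Provider store={store}>{children}</Provider>;
}

// layout.tsx — a Server Component may *render* the client wrapper,
// but must not call createContext or any React hooks itself.
import type { ReactNode } from "react";
import { Providers } from "./providers";

export default function RootLayout({ children }: { children: ReactNode }) {
  return <Providers>{children}</Providers>;
}
```

Dropping the Provider directly into a Server Component, without that "use client" boundary, is exactly the setup that was breaking for people.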

That doesn’t even feel like a mark problem though. This feels like maybe the rollout of this feature on the React side could have come with some more training wheels, and guidelines, and all the above, you know…

Yeah. And that’s actually the exact point I’ve both stated in the blog post and tried to pass on as hopefully helpful feedback to the React team. But it’s been an effect on us as the Redux maintainers…

No pun intended…

Yeah. And Lenz, and the other maintainers of Apollo, and the folks behind React Query, and so on. In particular, there was one discussion thread where one of the Next canary builds, one of the just daily type of builds, briefly broke Apollo Client, because it was starting to check for “Are you importing any client-side code at all? And if so, let’s throw an error.” And they undid that change, but that whole discussion thread led to a pretty long debate between me and Lenz (for Redux and Apollo, respectively) and Sebastian Markbåge and a couple other Vercel folks.

[50:17] And there was a whole back and forth where Seb was saying “You can add another exports condition to your package, and create a whole other build artifact just for use in React Server Components, that makes sure it doesn’t use any client code whatsoever, so it’s safe.” And Lenz and I were pointing out that “Wait, that’s a whole lot more work for us; it’s yet another build artifact we have to figure out how to generate.” In the case of Apollo, they’re not even using package exports, so they can’t do that until the next major version… And it felt very, very frustrating to be told “You have to do more work to satisfy this one extra runtime environment”, on top of all the work I’d already put in over the last several months.

So there was even yet another development in that just last night. Sebastian had suggested that “Okay, you can sort of fool our static analyzer if instead of doing named imports of hooks, like useState, you do import star as React, and then our static analyzer won’t even notice that. So we sort of nudge-nudge, wink-wink suggest that you do that for now…” And someone apparently reported to Apollo last night that – you know, Apollo has been using this for a little while now, and that apparently broke in the latest version of Next. And I don’t know if it’s a bug on their side or what, but… Emphasis on the moving target, and so many things we have to worry about.

Yeah. To me it’s just obvious at this point, and this is maybe – I know we have a couple more things to get through on the blog post, but there’s a big need here for some standards, and for everyone to kind of be working and publishing against the same specs. That way, as long as everyone’s following the specs, and everyone’s following the rules, then there’s confidence that handshakes should just work. Because I mean, really, this should just work, right? I expect a lot of this churn to leak out into the community. It already is, but unless we fix it, it’s only going to get worse, you know?

I’ve been begging for years for someone who actually knows what they’re doing, aka not me, to publish the authoritative, comprehensive guide on the right way to publish a package, and all the output options, and all the build tool configuration settings, and all the file formats… Like, tell me exactly what buttons to hit, so that I can follow it. Or even better, give me tools that will do that.

Well, I can tell you that right now that’s not gonna – I mean, I can’t say that with 100% confidence, but I say this with a high degree of confidence, that’s not going to come out of npm anytime soon, just given where… It’s just like they have a skeleton team running the registry right now. What I’m confused about - if Microsoft wasn’t interested in actually investing in this ecosystem, why –

Why did they purchase these companies? Yeah…

Yeah. And a number of other big tech companies were also interested in purchasing at the time… So it should have just been in hands where the registry and the project, this important part of the ecosystem would actually get the TLC and the love that it deserves. That was like the only silver lining for me when that acquisition happened, was like “Okay, well, at least there’s a big company here to take care of this now.” And problems have only been really getting worse. So it’s not going to come out of npm, and so the question is I think this has to be community-driven, as always…

[54:01] I mean, to name a specific name, Jason Miller, who works at Google, has done a lot of fantastic work over the last few years, trying to encourage folks to ship modern JavaScript. And he’s written some good articles that are helpful starting points… I have felt like he would be the right person to write that kind of definitive module publishing guide, but he’s obviously very busy with whatever stuff it is he’s actually working on.

Yeah. He’s now at Shopify. Has been for at least a year or two, I think… But yes.

Oh, moved over… That shows you how much attention I’ve been paying. I’m sorry…

That’s okay. That’s fine. And just in case anyone was curious, Jason Miller is the creator of Preact as well. And an awesome person who’s been on the show a couple of times, at least. So yeah, getting back into – because we could… See, I told you, we’re not even – we’ve been talking for almost an hour and we’re not even through all the things… But I’ll let you pick up wherever you want to pick up, Mark.

I mean, that’s the major summary. If you look at my blog post, it amounts to – I took a first stab at things, they didn’t work out, I spent a couple months trying to build example CI setups to double-check myself and tell me when things are breaking… I made a second round of attempts, that mostly worked, except for “Are the types wrong” telling me you’ve still got that JS vs. TS file extension mismatch in some cases… So right now we’ve got Redux Toolkit 2.0 beta, and Redux core 5.0 beta published. We’ve got alphas for Reselect and Redux Thunk. And actually - today is Thursday; Tuesday night I finally managed to push through the first alpha for React Redux version 9, that has the same packaging changes… And I determined months ago that we’re going to have to ship major versions of all of our libraries simultaneously, which… Oh, boy. Self-imposed responsibilities.

But we’ve at least got alphas or betas of all five packages, with the same general packaging contents applied to each of them. Last night I was working on trying to resolve the last outstanding “Are the types wrong” warnings by actually attempting to generate those duplicated TypeScript type definition files… And “Are the types wrong” thought it looked good locally, and then I pushed the PR, and literally everything broke. So…

What was the gap there? And are you going to submit a patch in the tool to try to –

No, actually, I literally just need to look at it again and figure out what’s going on. One of our CI checks is that we double-check our types against like eight versions of TypeScript simultaneously, just to see if we broke anything. And all those failed, and I glanced at the output very briefly, and I don’t remember what the actual problem was. I’m not sure if it was having trouble finding the types, or if something in the modified type definition files that I was bundling was wrong… Because that was actually a change; Redux Toolkit had shipped with just running TSC, and generating one .d.ts file per TypeScript source file… And what I changed last night was actually using this ts [unintelligible 00:57:30.07] tool to prebundle the TypeScript types, so that it would be easier to ship the duplicate copies. And something about that broke, and I’m not sure if it is something about the bundling of the types step done by the tool, or whether it’s something about the way that I’m pointing to the files… But I literally – like, that was my cue to call it quits for the night. So I will go back and investigate further, and hopefully I’ll actually figure out what’s going on, and then be able to apply that to the other libraries, too.

And write another blog post, right? [laughs]

[58:06] Well, I’ll update this one.

Update this one. Okay, got it. We’ll have to look at the version history; now you’re gonna have to start publishing a version history for this post. But yeah, I mean – so for those listening, if this sounds dense and complicated and confusing, it’s because it is… And I would highly recommend for all of you to read Mark’s post. I’m gonna finish reading it. I learned a lot, and also just… I think you did a really good job of putting really good references out to different resources in there as well, including one that I really enjoyed, which was the history – it’s a gist on modules, history and future, so it’s like a full timeline of all the things with links for when we started working on ESM, and just how that evolution has gone, from 2008-2009, all the way to present day. And so just lots of great resources, I highly recommend checking it out. So Mark, what are some parting thoughts? Obviously, you had some lessons learned that you’ve shared. Do you want to maybe share that on air a little bit?

Yeah. So like I said, we’ve got the betas and the alphas out right now; the current package definitions for each of those I think is close to being correct for us… It’s certainly possible that other libraries with different needs for how they need to package and ship things would need somewhat different setups, but I think I seem to have found a combination that is reasonably correct for us, minus the type mismatch issue.

It is really hard to keep up with the nuances of all the different tools, and I wish there were a resource that listed how each bundler and each runtime environment handle things in some kind of a way… There’s a couple resources I’ve found, but not quite the thing that I have in mind.

Similarly, I have found a couple of guides on how to try to publish a package, and they are useful. Again, I think I have a picture in my head for what I sort of imagined the ideal resource to look like, and no one has written that yet… And no, I’m not going to, because I don’t have time. Having better tooling would drastically help.

What about standards? No mention of standards.

Standards would be exceptionally helpful, too. Having standards, having guides, having tools that correctly output the right combinations would be extremely helpful, as would having some kind of a SaaS where you can say “Here’s my library, here’s the code for an example app that uses my library. Please just automatically generate projects for like ten different frameworks and build tools, and build them all, and tell me which ones succeed and break.” Because that’s basically what I had to build for myself, roughly.

[01:01:02.07] And then Server Components are a great idea. I think they’re going to be a very valuable part of the React ecosystem, but the technical rollout has been rocky and confusing. And then - yeah, we’re going to be stuck with somewhere in between CJS and ESM for a number of years. I even saw a pair of blog posts just recently where the Deno folks were arguing that CJS is dead and holding us back, and the Bun maintainer said “Actually, CJS is, number one, still widely used, two, still useful, three, even loads faster in a lot of cases.”

Yeah, it’s also – let’s not just… I mean, we have to understand the sheer scale of the internet, and the sheer scale of Node, and how many things are written in Node that will never, ever get updated… This is just the internet. When you’re developing web standards, the goal is don’t break the web. That’s like the number one principle for new proposals. So yeah, I think this split is here to stay forever, so I don’t think this is ever going to be – I think hopefully there’ll be less and less people writing CJS is my guess over time, but ultimately, it’s never going to ever go away. If you’re a maintainer, or you want to write a new bundler, or whatever it is, you’re gonna have to – unless you’re only targeting greenfield, new projects, or projects that are strictly never going to use CJS dependencies… Which is like – I mean, come on, let’s be real… Even your dependencies might hold you back there, to some degree.

COBOL and Fortran are still being used…

Exactly. Yeah. Great examples. So yeah, I mean, this was a really great discussion on some of the pain points of shifting tides, really; this is what this is, it’s a big, wide horizontal shift… Because this isn’t something that’s a vertical; it affects everything kind of uniformly. It’s a baseline shift. And so thank you so much for being such a great resource and a lighthouse for our community, Mark. We really appreciate you. And thank you for all the great resources and links. There’ll be lots of links in our show notes, lots of links in Mark’s, so check them out. And so if folks want to follow you and catch up with you, Mark, where can they find you on the interwebs?

@acemarke on Twitter, blog.isquaredsoftware.com, @AceMarke on Reddit, Mark Erikson on GitHub.

Awesome. Well, thank you, and so with that said, have an amazing rest of – I was gonna say have an amazing rest of your week. I’m like “Wait, hold on… People listen to this Sunday night, Friday morning, Thursday, Monday, Tuesday…” Whatever week you’re in, have a great rest of your day, whoever is listening… Alright everybody, ciao-ciao.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
