Adam adds a twist to our YepNope format this week. Instead of 2v2, it’s 1v1v1 with Mikeal reppin’ team Yep, Divya on team Nope, and Feross sitting in the middle on team It Depends. You don’t want to miss this excellent debate/discussion all about JS tooling complexity.
Many packages
New frameworks built all the time
Config hell. Webpack
Keen – Keen makes customer-facing metrics simple. It’s the platform that gives you powerful in-product analytics fast with minimal development time. Go to keen.io/jsparty and get your first 30-days of Keen for free.
Linode – Our cloud server of choice. Deploy a fast, efficient, native SSD cloud server for only $5/month. Get 4 months free using the code changelog2019. Start your server - head to linode.com/changelog
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.
Back by popular demand is this cool format, this debate topic, so to speak. We put a Twitter poll out there asking “Do you like our new Yep/Nope segment?” and an overwhelming (or a somewhat underwhelming?) 65% responded with Yep. So we took the bait, and we’re doing it again.
Today’s show will be a debate on modern JS tooling and whether or not it is too complicated. Basically, the question is “Is modern JS tooling too complicated?” We have two teams – wait, wait, wait. Three teams now, because we had some changes… We’ve got team Yep, being represented by Divya, team Nope represented by Mikeal, and team It Depends, which is the moderate, represented by Feross. What’s up everyone?
He’s like Switzerland. [laughter]
You get to sit in the middle. It’s so easy. You’re not really picking a side. The rules for this are pretty simple - the first segment we’ll have each person go through four minutes of their position in the argument from their side, and then when we come back to segment two, we’ll do a shorter format, so we can be more conversational… But the thing to keep in mind, listeners, is that the panelists may not be representing their beliefs; they’re just instead representing the side they’ve been assigned… So it’s a good argument that way. Let’s get into it. First up, team Yep. Divya, what have you got?
I love it. Go.
You’ve got a minute and a half left.
Okay, I guess I’ll just keep going.
Do you wanna keep going or do you wanna pass it on?
I have one more point to make.
Nice. 20 seconds left, if you wanna use it. If not, we can move.
No. I will open the floor.
Nice, alright. Well, let’s go then to Mikeal, representing team Nope. Because team Feross, which – these aren’t really teams, they’re just people, individuals now; we had teams originally, and that’s how it was, but it’s just individuals… So Feross is representing “It Depends”, the moderate position, which I guess might be the better one; we’ll see. Mike, what have you got for team Nope?
Okay, I need to start with some context. When you think about programming and just technology in general, you’re talking about an ever-expanding field. There’s more code tomorrow than yesterday. The entire field is growing at a pretty exponential rate, and the future is much bigger than the past, so we should expect this to grow into the future.
When you think about programming languages or frameworks that “die”, they often don’t actually die. They may lose a couple of users, but for the most part what they actually do is they stagnate. They have the same amount of usage or the same amount of users that they always did, but the entire field has gotten much bigger.
What that essentially means is that unless you are in a part of the programming ecosystem that is growing, you have a problem; you are effectively sort of dying. If you aren’t capturing at least as much growth as the entire field is growing, that can be problematic. It means that in the future you will just have less options than other developers. So I wanna come back - in that context I wanna come back to this lovely haiku, actually. It’s perfect.
[00:08:10.02] Many packages - this is said like it’s a problem. Like, what an amazing problem to have… Ask a Haskell programmer - they love the fact that when they want to use a package, it does not exist and they have to write it from scratch every single time. So we’ve effectively graduated on to second-order problems because we have been successful. New frameworks built all the time, new things being built all the time is a sign of success. It’s also a sign of health. If you don’t have new things built all the time replacing the old things, then that’s a huge problem.
Yes, that is painful to go through as a developer, to always be learning a new thing, but that is literally the job of working in the technology sphere. If you’re not learning a new thing, you eventually will be off in a corner, still writing COBOL… Which is fine, COBOL is cool, but it may not be the most interesting thing in the world.
And as far as some of the configuration hell stuff goes, I think that a lot of what we complain about with these frameworks is not that there is a framework, it’s that the way that these things have been developed is with vertical integration patterns, rather than horizontal integration patterns. So we build these frameworks that have these plugin stacks where everything sort of linearly depends on the next thing… Rather than building an ecosystem out of smaller components that are more leverageable independently, and interact with each other more independently.
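As a toy illustration of that horizontal style — small, standalone utilities that the application author composes directly, rather than plugins chained into one framework's stack — something like this hypothetical snippet (all names made up):

```javascript
// Toy "horizontal" ecosystem: tiny standalone utilities that know nothing
// about each other, composed by the application author rather than by a
// framework's plugin chain. All names here are made up for illustration.
const pipe = (...fns) => (input) => fns.reduce((value, fn) => fn(value), input);

const trim = (s) => s.trim();
const lower = (s) => s.toLowerCase();
const dashify = (s) => s.replace(/\s+/g, '-');

// The consumer decides how the pieces fit together:
const slugify = pipe(trim, lower, dashify);

console.log(slugify('  Hello JS Party  ')); // hello-js-party
```

Each piece is independently useful and independently replaceable, which is the property the vertical plugin-stack model loses.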
So if you look at the earlier days of Node, that was how the whole ecosystem worked. Then eventually people started building these frameworks, and then you started to see a lot of packages that were literally just taking some package from the Node ecosystem and wrapping it in the plugin wrapper of some framework. That is a problematic pattern to be building on, and I think that we are definitely at the height of this cycle for some of these bigger frameworks, and a lot of that needs to sort of implode, so that that can then be used… But we’re still going to be left with an npm with a million-plus packages, and sorting through all those packages, because that’s what it’s like to work in a healthy ecosystem. How am I doing on time?
22 seconds left.
I think I’ll hand it over to Feross, where he can take all sides and win by default. [laughter]
Feross, you have - I don’t wanna say the easiest position here, but you can play in the middle. You’ve got It Depends, so how do you wanna represent It Depends?
So I basically get to cherry-pick the best arguments from Divya and Mikeal, and restate them in my own words…
This is not fun for anybody. [laughter]
I want to hear this haiku again. Divya, before Feross goes, can you say that once again?
Oh yeah, of course. “Many packages, new frameworks built all the time. Config hell. Webpack.” I feel really bad, because I essentially threw Webpack under the bus here, and I use it a lot and it’s great, and their documentation is wonderful, and Sean Larkin is wonderful, but…
They do have a huge configuration file. [laughs] It’s almost unbelievable to manage.
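For anyone who hasn’t stared one down: even a deliberately minimal Webpack config already juggles entry points, output paths, and per-file-type loader chains. This sketch uses common community loaders as illustrative examples; real-world configs grow far beyond it:

```javascript
// webpack.config.js — a minimal setup that already involves several concepts:
// entry points, output resolution, and loader chains per file type.
const path = require('path');

module.exports = {
  mode: 'development',
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
  module: {
    rules: [
      // Each file type tends to accumulate its own loader chain over time.
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
      { test: /\.css$/, use: ['style-loader', 'css-loader'] },
    ],
  },
};
```

Add plugins, code splitting, environment-specific overrides, and source-map options, and the “config hell” line of the haiku starts to feel earned.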
Alright, Feross. It Depends.
I guess I wanna just start off by saying that, in general, I’m very sympathetic to this argument that modern JS tooling is too complicated, and I’ve gone on my fair share of rants about it… Especially when dealing with some tool that I feel is more complicated than it needs to be. Whenever that happens, I do tend to feel like we’ve created a lot of problems for ourselves that we didn’t need to create.
[00:12:01.23] A lot of times I feel like when nerds are being nerds, they can invent unnecessary problems for themselves. An example of this that I encountered a lot a few years ago was people would send a pull request to an open source project that I was in charge of, and they would be like “I converted everything to the newest syntax for you. Here you go. Oh, and also, I added 15 Babel plugins, so that we can compile it back to ES5.” And they change every single line in the project.
You hated this so much you wrote standard.
But something like adding ES classes to your package, converting the old way to using new ES classes - doing that now maybe makes sense, actually. I’m starting to do that actually to all my packages. But doing that five years ago, back when you just had to take on all this complexity of a build toolchain, doesn’t necessarily make sense to me. I’d rather just wait it out; wait a couple years till it’s in more environments, and convert then.
So that’s one thing. I think a lot of the problems is us doing it to ourselves. That’s what I’d like to push back on. And I guess I’ll also say that JS is kind of a lot like Perl in some ways. Perl’s motto is that there’s more than one way to do it. Python has sort of the opposite motto - there’s only one way to do it. In JS there’s always different competing approaches for doing things, and so that is also a source of this complicated tooling, because we sort of have a lot of options… And that’s not necessarily bad, like Mikeal was saying. The best can win, and we can have this competition of ideas.
You can definitely find lots of examples where the tooling is just the right amount of complicated. There’s this difference between essential complexity and incidental complexity. Essential complexity is like “This problem is actually hard, the solution therefore must be hard. There’s no way around it.” And there’s incidental complexity, which is like “We just solved it in a bad way, and we created all this extra garbage that basically people have to deal with forever.”
We are doing a lot of hard things, like trying to make a website that loads instantly, and has 60 frames per second, and is accessible, and looks great, and handles all the error states, no bugs, beautiful animations… That’s an example of actually a really hard problem, so I think that complexity is really unavoidable; that’s essential complexity, a lot of the time. How am I doing on time?
You’ve got five seconds.
Okay, great. I’ll rest my case.
Ding-ding-ding. Alright, so we have three takes in here. We began this debate thinking we’d have two teams, but we ended up with three - so we’ve got team Yep, team Nope, and team It Depends. When we come back, we’re gonna dive a little bit into more of some back-and-forth, a little bit shorter segments, so we can kind of conversate around the complexity, and maybe switch sides even. We’ll see.
I’m gonna start by appealing to authority…
I’m gonna pull a Feross…
Back to Hacker News?
No, this is actually an accredited source, i.e. Yehuda Katz’s blog.
Okay, alright… Bring it on, Yehuda…
That’s not just an opinion, that’s a fact… [laughter]
Exactly. It’s not an opinion, it’s a fact. He created a framework called Ember.js, and therefore whatever he has to say is valid. And he sits on TC39, so I guess that makes it valid. Anyway, in a blog post that he wrote, that was – I can’t find what it’s called; I’ll figure out where it’s from exactly, but the point he was making - and I’m gonna quote:
Yeah, performance is really important, but is it worth putting in that extra time and that extra tooling and dependencies in order to optimize for a problem you don’t have? Maybe not.
So in a sense, within the ecosystem there’s this push towards “Yes. New. Doing things better”, which is what Mikeal was mentioning, which is great, but it’s also “Do we need to do this all the time?” If we have a solution that works, do we need to constantly iterate on it at the speed that we’re currently iterating, in order for us to be more effective, or to build better applications? I’d argue that’s not the case. A lot of the time we introduce this complexity when we don’t need it.
[00:20:10.16] For instance, React – and I hate to throw specific frameworks under the bus… This is a specific part of it - they introduced Fiber, which is their new reconciliation algorithm… And to this day, I have no idea why I would use it. Maybe because the applications I’ve built have never been at the scale that would require it… But I still can’t fully grok why I would use it, and in what use case. I’ve never actually put it in an application of any form, because for me that’s a solution to a problem I do not have… But I know of use cases where people are like “This is great. I’m gonna start using it”, even though they don’t necessarily need it. And I hear this argument a lot.
Same for TypeScript. I’m not someone who uses TypeScript, and I understand the arguments for it. I will not start using TypeScript because I’m like “This is a problem I currently do not have”, and I do not want to add the added complexity just to be like “Oh, it supports TypeScript”, because that is just not necessary.
Yeah. That’s a sign of maturity, I think… To be able to be like “I’ve seen this before. I know it’s gonna happen. We’re all gonna jump on this thing, it’s gonna be super-exciting, and then in a year from now we’re all gonna be jumping on the next thing… And I’m just gonna opt out of this.”
Yeah. And it makes it really painful too, because I’ve been on teams where you’re constantly evolving your tooling, so it just causes - bringing back the term I’ve talked about earlier - this fatigue, because everyone is just frustrated all the time. They’re like “I have to constantly learn something new, and my knowledge from two years ago is no longer valid now”, which is incredibly frustrating. I can say that truly about frameworks. Like React - I knew React two years ago, and I cannot understand the React today with that knowledge.
We don’t have good information, so we kind of have to just let a lot of stuff happen, and have a lot of churn happen. The issue that we get into though is that the platform is not static; the platform is a moving target. And as the platform improves, we need to be able to shed a lot of this tooling. And the issue with vertical integration patterns is that all of the value is locked up inside of one giant framework. So when the platform catches up, you can’t just ditch a bunch of that.
I remember when React was launched, the whole thing was about DOM diffing. The value of it is this virtual DOM thing. Then we made the DOM fast, and who gives a shit now. But we’re still using React because of – I don’t know, there’s like other features that people rely on in it, so we’re just using the whole thing…
The component model has been useful for getting people to sort of all write their components in the same way.
[00:24:06.24] Yeah. And then now we have Web Components and they can’t adopt it, because they’re on their own pattern, so we can’t take this feature upgrade from the platform. I think there’s a ton of other examples of this where the platform starts to catch up, and then the frameworks can’t.
If you wanna look for a model that is much better, look at what happened with CSS frameworks for the longest time. There was a new sort of bootstrappy thing every week for a couple of years, and there’s all these different grid frameworks, and Flexbox frameworks, and all these things, and they’re all just CSS that you can add into a page. And because it’s just that simple “Add that CSS into a page”, when CSS Grid happened, we just stopped including those… CSS Grid is just better than all of those frameworks and components. When the platform caught up, we were actually able to remove complexity, even though we still have this big ecosystem; and now we’re building a new, better ecosystem on top of Grid. And that’s an argument for change, for more things happening, for more choices at the end of the day, and more complexity for you to deal with and sort through… But what you end up with is a toolchain and an application that fits your needs a lot better and is actually easier to reason about.
What about this concept of maturity? I don’t think that the web platform is immature. It’s been around for a while, it’s got a lot of users, a lot of developers… But the concept of complexity and progress - it’s not so much that it’s unstable, because it is stable, but there’s progress happening, so that means that tooling will always change.
I think it’s about to completely shift again, actually. You just had modules land in the browser. We haven’t really taken that on yet, so… We’re due for another big shift. So I wouldn’t say at all that it’s stable. The platform is changing faster than it’s ever changed.
So would you agree with this then - as our tooling advances, so does the complexity around our tooling?
Well, I wouldn’t call the platform tooling. The platform is what we build the tooling on and what we rely upon… And to some extent, if the tooling is masking over deficiencies in the language, you can basically say those things are gonna need to change in the future; you sort of know that those are gonna need to change in the future.
You can look at a lot of the patterns that Node developed internally, because they didn’t exist yet, and now we’ve had to move past them once the platform caught up, and that’s been really painful.
Right. Buffer is a great example of that.
Yup. Buffer, the standard callback API, Streams… Jesus.
Whenever you’re inventing your own error-handling mechanism, you are covering up a deficiency in the platform, plain and simple. But sometimes you have to. You just have no choice. I don’t think that Facebook set out going “You know what we should really do - rewrite the DOM as a diffing mechanism in JS.” They had a problem that they needed to solve because the DOM was too slow, and that was how they solved it. It’s just that because of the way that they decided to present the solution to that problem, it was very hard to remove that when the platform had caught up.
One thing we should mention is that it’s important to make sure that the tools you’re using solve problems that you actually have. I think that’s a huge source of unintentional complexity, or what I call incidental complexity earlier. If you adopt a tool because everyone else is adopting it, and that tool was meant for a company that’s a thousand times your size, you’re gonna have extra complexity; that’s gonna be solving problems you don’t have yet… And you might argue that maybe it’s good to be using a tool that can scale when you’re ready to handle that much traffic, but let’s be honest, your app is probably not gonna get that popular.
[00:27:56.14] If your app gets that popular, I guarantee you’ll have very different problems. That’s the thing - any app of a particular scale is going to have problems unique to that app. This is the issue with cargo cult culture in tech in general - if you’re not Google, you don’t have Google’s problems; you probably don’t need Kubernetes. Unless you’re running a cloud provider, you don’t need Kubernetes.
Yes, I love this. I love that you brought this up.
Yeah, and unless you’re Facebook, you probably don’t need all of React.
One of the things I’m super-impressed by - there was a post a few years ago on the High Scalability blog, which by the way, a lot of people who love to add complexity read this blog, because they’re like “Oh, what are the biggest players doing? Oh, we need to adopt that as well.” [laughter] But anyway, there’s this great post on there about Stack Overflow; I think it was 2014. Maybe their architecture has changed a little bit since then. But in 2014, when they wrote this post, they were dealing with 560 million pageviews a month, and they were the 54th most popular website in the world. They also ran the entire Stack Exchange network, which at the time was over 100 different sites, all being powered by guess how many servers? 25 servers. Literally, 25 servers that they just directly SSH into to manage. Now, no Kubernetes, no auto-scaling, no magical fairy dust cloud functions…
It’s called caching. Caching fixes most of your problems, actually… [laughs]
Yeah, and this is a site that actually is quite cacheable… So maybe your problem is not exactly as easy as Stack Overflow’s problem. Stack Overflow still has writeable stuff, dynamic websites, so it’s not completely static… But yeah, the point is that they decided for them that they wanted to go with boring, well-understood technology, and that served them incredibly well, and I kind of admire the simplicity of it. The fact they managed to go that big and still have a system which they can fully understand… It’s 25 servers. They’re running basic things like a SQL server, and that’s a well-understood technology.
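To make the caching point concrete: the “boring technology” that keeps a read-heavy site like that fast can be approximated in a few lines. A hypothetical in-memory cache with a time-to-live (all names here are made up for illustration):

```javascript
// A tiny in-memory cache with time-to-live — the simplest form of the
// "boring technology" that keeps read-heavy sites fast.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map(); // key -> { value, expires }
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry || entry.expires < Date.now()) {
      this.store.delete(key);
      return undefined; // miss: caller falls through to the database
    }
    return entry.value; // hit: no database work at all
  }
  set(key, value) {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}

const cache = new TtlCache(60000); // cache rendered pages for a minute
cache.set('/questions/1', '<html>…</html>');
console.log(cache.get('/questions/1') !== undefined); // true (hit)
console.log(cache.get('/questions/2')); // undefined (miss)
```

Real systems layer this idea (CDN, reverse proxy, in-process), but the principle is the same: serve repeated reads without redoing the work.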
I think that people don’t think about the idea of technical risk enough, and what is the downside of adopting a tool in a few years when everybody who was using it has moved on, and now you’re stuck using this tool that no one’s maintaining, and that you don’t even understand how it works, because you adopted it hastily, and now you’re the one who has to fix the bugs in it.
But that’s a good differentiator though, because that creates a very clear separation between the kind of like “I wanna use this boring thing because it’s a thing that I know” or “I wanna use this boring thing because your new crazy thing might not work out.” Because if you’re talking about certain upgrades and certain shifts, you have some certainty that it’s actually going to be around.
I usually don’t adopt new language features when they’re not even in the stable version of Node.js, but there were a bunch of applications where I took async generators and was running them under a flag, because it was so much better than using Streams… And I knew that this was gonna stick around. In the future we will be doing more things with async generators rather than with Streams, because that is an older API and we’re moving past it in the language. There’s some certainty there, and that’s a level of certainty that you wouldn’t have in adopting something like, say, TypeScript, where it’s not actually on a path to be adopted in the language and everywhere. It is like its own sort of side community, and you don’t know what the future of that is. And if you look at the future generally of compile-to languages, it’s not great. Like, what happened to CoffeeScript…?
There’s this thing I like to say - technical bets are multiplicative. Basically, every time you make a decision to use a new piece of technology, you have to decide “What is the likelihood that this thing is gonna have a problem that’s going to destroy my project, or be a huge source of work to rewrite?” You wanna know that adopting a new technology is not a pure good; there’s a trade-off, and that trade-off is “What happens when it turns out it was a bad idea and I (obviously) thought that it was a good idea at the time? What happens if the community disappears, or it’s replaced by another model and we have to rewrite everything?”
[00:32:07.23] You can do a certain number of technical bets, but you don’t wanna just – every time you have a decision about whether to use a risky technology or a safe technology, you don’t wanna always choose the risky technology; that’s just a recipe for disaster. You wanna be very careful about the risk you take on.
They had landed under a flag in Node, so they were past the point where they were gonna be changed to that degree, for…
For async generators, yeah.
Yeah, sure. So my point is just that even things that seem like they’re sure bets that they’re on the standards track, you can still kind of get owned if you’re unlucky… So I would say that your decision to do that was probably pretty good; you probably had like a 95% chance that it would work out, but you took on a little bit of risk that you decided was worth it, because you were getting quite a bit of benefit from it, right?
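For listeners who haven’t used them, the pattern Mikeal is betting on looks roughly like this: an async generator yields values over time, and the consumer reads them with `for await...of` instead of wiring up stream event handlers. A minimal sketch (the function and values are invented for the demo):

```javascript
// An async generator yields values over time; consumers read it with
// for await...of instead of wiring up 'data'/'end'/'error' stream events.
async function* countTo(n) {
  for (let i = 1; i <= n; i++) {
    // In real code each value might come from I/O; here we just pause briefly.
    await new Promise((resolve) => setTimeout(resolve, 1));
    yield i;
  }
}

async function main() {
  const seen = [];
  for await (const value of countTo(3)) {
    seen.push(value);
  }
  console.log(seen); // [ 1, 2, 3 ]
}

main();
```

Error handling falls out of ordinary `try/catch` around the loop, which is a big part of why this reads so much more simply than the Streams API it’s standing in for.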
I feel like Mikeal had an opinion… Yeah, you should go. You were like in the midst of finishing.
I think when you start out doing development, using something really high-level, like you were just talking about, is what you tend to do. You take an example, you poke at it, and you make it do the thing that you wanna do, and you sort of learn from there and you work your way down the stack.
I think where you start to run into problems is as you become a developer, as you become more familiar with your tools, all of that understanding of how those tools work ends up sitting in your head and becoming the context that you program in… And you have to, at some point, limit the amount of complexity that you’re gonna keep in your head in order to get anything done.
When we talk about complexity, we’re not just talking about the surface complexity of an API, but we’re also not really talking about the entire implementation complexity either, because almost nobody keeps the entire implementation in their head when they do this stuff.
I’m somebody who severely limits my tooling. I’ve moved away even from graphical editors, and back to Vim, and back to doing all of my development on a remote server, just so that I can severely limit the amount of tools in-between me and my code, and running it and reading it.
But that said, it’s really important to have a diverse and broad and really high growth ecosystem. If you don’t have all of those things, then you’re sitting in a corner of just the technology sphere in general that might die off. We were also talking about risk earlier, and the risk that something may or may not be adopted… In ecosystems that do not have this growth problem, you literally run the risk of this whole thing that you’re working with dying off and not that many people using it in the future…
What you’re saying is that complexity is a given, so get over it or find a way around it, for lack of a better term. Is that right, Divya? Maybe you said it more softly than I did. I’m a bit more abrupt about it.
[00:40:11.19] I think the point Mikeal was making, and I kind of agree with that, is that the ecosystem is incredibly lush with tools and libraries, so you can choose whichever you want. You can choose an incredibly pared-down version. If you want to use React Light, there’s Preact. If you wanna use a more declarative framework, you can use Vue. There’s all these options you can use, at your disposal… But I think there’s also that part, which is “My application, or the thing that I’m working on, is complex, because I choose to add all these extra things to pre-optimize my codebase, because my application is obviously gonna be successful and scale.”
That’s kind of my issue with it - in a way, we shouldn’t curb the growth of the community, because I think the fact that there’s so many things means that people are actively contributing and actively working on things and thinking about problems, which I think is a great thing… But it’s like, “How do we introduce that nuance to show developers, both seasoned and new, that certain tooling is not necessarily needed for every single use case?” Because a lot of the arguments I’ve heard for certain libraries have been “You have to use this, because your code will be better by it”, which I think is incredibly subjective… Because I’m like “Sure, maybe. But will it, actually? And is it introducing more load and more weight to my codebase to solve one thing, that I might not even have a problem for?” So that’s where I was coming from.
Before Feross jumps in, I wanna mention this topic of “You are not Google, Amazon, LinkedIn etc.”, choosing the right tooling for the job… We actually had this conversation on the Changelog about two years ago now. As a matter of fact, August 4th, 2017, with Oz Nova - at the time his last name was Onay, Oz Onay. He’s actually an instructor at Bradfield School of Computer Science - president of it, actually, and one of the instructors. So if you wanna hear more about that, we’ll put that in the show notes… But in episode #260 of the Changelog we cover that, and that was actually based on a very thorough and very popular blog post from Oz. Feross?
No, you go for it, Mikeal.
We got 9 minutes left in the show. Maybe we can talk about the future, Mike… You mentioned Web Components and this very large potential change. So if we are on the fence about whether or not tooling is overly complicated, how can we simplify? Mikeal, you mentioned when you write your own code and you start a project, you sort of simplify things… So what are other ways that developers out there can sort of resist the complication, or lack thereof if there isn’t any?
Implode? I mean, it’ll keep working…
I don’t know, I actually don’t think that a lot of it will keep working, to be honest.
And the registry will go downnn…!
Can you be more specific?
[00:44:03.07] I think that looking at pika package is sort of enlightening… Because by literally drawing a line and just saying “We’re only using these new features that are available in the platform”, they’re able to provide an experience that’s just really, really good. Way nicer than what you can get with npm plus a bundler, for instance.
Can you go into that a little bit, what makes it nicer?
So they only use the new module syntax, and as a result do not actually need a bundler and a loader, because they can be directly loaded from the browser. So their job as a package manager is just fundamentally different.
In practice though, when you ship your site, don’t you still bundle, because the performance hit from downloading 100 separate modules, with 100 separate HTTP requests, is still too much?
Yes, that’s the thing though - right now you have two options. You either load a hundred files, or you use a bundler. But if all of your dependencies were using these new standards, you would actually have quite a few options in between. You could use much more sophisticated loaders that did some bundling for you dynamically, that loaded a few packages together but not all of them; you can start to rely upon HTTP/2 and just say “Oh yeah, we are gonna give you a three-meg file and we’re gonna do it all at once”, so it would be the same as a bundle, for instance. Your options open up a lot wider once you say “We’re just not going to support all of the old syntax”, essentially.
The reason why I bring this up is just that it’s something to look at and think about, because it opens up a lot of possibilities that we don’t have with the npm-plus-bundler scenario… But adopting them would require us to basically drop almost all of the current npm registry and reimplement a lot of things. A lot of those would not be substantial code changes, but there would be quite a few of them.
I’m still writing modules that have a require statement in them, so obviously I have not transitioned to that yet, seeing as this tooling does not exist… But you can see something coming up on the horizon that’s gonna change things pretty fundamentally.
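For reference, “the platform’s own loader” is real today: `import()` resolves modules without any bundler. A contrived but self-contained sketch, building a tiny ES module on the fly as a `data:` URL so nothing external is needed (this works in modern browsers and recent Node versions):

```javascript
// The platform now has its own module loader. To demonstrate it without a
// build step, we create a tiny ES module on the fly as a data: URL —
// contrived, but self-contained.
const source = 'export const answer = 42;';
const url = 'data:text/javascript,' + encodeURIComponent(source);

// import() is the platform's loader — no bundler, no npm, no config:
import(url).then((mod) => {
  console.log(mod.answer); // 42
});
```

In a real app you’d point `import()` (or a static `import` in a `<script type="module">`) at actual files; the point is that resolution and loading are now the platform’s job, not a build tool’s.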
It doesn’t seem like it’ll be too hard to switch your app to using this bundler when the time comes, if you wanted to… I guess the question I have is –
None of the dependencies in your entire dependency tree can be using the old syntax. That’s a substantial change.
But in theory, if I’m sitting there using Browserify, or Webpack, or something like that, and over time more and more of the modules that I depend upon are shipping an ES module version over time, my Browserify or Webpack tooling is just gonna keep working just fine. I might not be getting these benefits that you talk about from pika package, but one day when most of the things I depend upon are using this ES module syntax, then I can go ahead and swap out Browserify or Webpack for this new stuff… But in the meantime I can continue to ship a working app to my users, and my users will be happy that I’m not spending all my time debugging bundler problems, which isn’t helping them with their problems in life…
I just don’t think that that’s how ecosystem upgrades work though. We’ve gone through a few sort of minor upgrades to the platform like this already, and we’ve had upgrades to Node.js as well… And when you look at the ecosystem, 1) we have not been able to drop anything old - basically anything - because somewhere in your giant 800 to 8,000 module dependency tree is something that relies on that, that nobody is touching, that’s such a transitive dependency and so deep in the dep tree that you can’t update everything to get at it.
So things like that just don’t actually go away once you have these giant dep trees that continue to grow, so we have to support that stuff indefinitely, which means that if there is a new feature that in order to use we have to drop old support, we just don’t have access to it until we make a hard shift.
The other thing too is that when you’re building a new ecosystem or you’re trying to adopt a new ecosystem feature, there are some pretty big advantages to breaking compatibility. If you just say “We actually don’t work with anything from before”, you incentivize a new group of developers to be the first people to write all of those new things again.
So are there actually packages that are written using ES module syntax that don’t work with old-fashioned bundlers?
They work with bundlers, but again, move out of the – so think about just not using a bundler; using something that looks very different from the way that current bundlers work.
Sure, but isn’t that a decision that the user at the end makes? I’m still confused… Are there gonna be packages that are on npm that I can’t use unless I switch to using a different bundling system?
Yeah, because the bundling system does not have a way to compile down the old syntax. There are also issues that you get into that you can’t resolve. You can’t have recursive dependencies, for instance. That’s a serious problem. If you have a large enough dep tree, with different versions of things, you usually end up with a recursive dependency somewhere.
I’m still confused… Because it seems like basically what you’re saying is that there’s a new bundler that is out there called pika, that if I use it, it actually restricts what modules I can use…
It’s not a bundler.
Well, whatever you call it. It’s a tool that helps you ship your JS to your users, whatever you call it. What do you wanna call it?
It’s basically a package manager. I would call it that. I’m trying to look at what they describe themselves…
But it seems to me like basically it’s requiring packages to follow a stricter set of rules; basically, you can’t use all these other things.
But then if I’m using a tool which is more lax – in other words, it never dropped support for old stuff – then wouldn’t I just be fine? I can continue using all my old stuff, and also I can use these new things, because they’re just using a subset of the language. They’re only using ES modules, so - great, I’ll just use them. I’ll just consume them the same way.
It seems like all I get from switching to pika is that I can use fewer modules. Unless I really like the other benefits that you talked about. But as far as which modules I can select, pika gives me a subset of what I can use if I just stick with my current tooling.
I see what you’re saying. You’re saying that if you don’t take this upgrade, then you can continue to use all of that value in the old ecosystem.
Yeah, until pika is so useful – like, I really want the features of pika, and enough of the ecosystem is updated that now I can sort of do this shift to pika a couple years after everybody else, and now I get all the benefits, and I had to do none of the suffering of trying to be like “Ugh, I can’t use this package! Ugh, I can’t use this package!” You know what I’m saying?
Sort of, yeah…
That’s what happens when you get modern, right? Once you start moving forward, you have to leave something behind. It’s a law of physics.
So the question is “When do you wanna leave stuff behind?” Do you wanna just sort of take the leap right now, or do you wanna defer it until more of the ecosystem has moved forward?
I don’t know, this may just be where I’m at in my head with the code that I’ve been writing lately, but I’ve been working in really restricted environments, where you can’t take on a ton of dependencies, and I’ve effectively had to write all my dependencies again from scratch, because there just aren’t enough packages that work like that. The average thing that does something tiny in Node pulls in like 100 dependencies. We’re incentivized to do that because it is so easy to depend upon all that stuff. It’s not a bad thing from the point of view of Node.js, but when I need that to run in the browser really fast in a tiny bundle size, it’s problematic. When I need it to run in the Cloudflare worker and I have a limit on the amount of code I can put in it, it’s really problematic… And I don’t think we’re gonna have less of these constrained environments in the future, so…
We’ve got three minutes left on the time right here. Divya, I haven’t heard from you in a while… What do you have to say?
I was just listening in on this conversation… It’s interesting, because I haven’t used pika, so I have no reason, similar to what Feross was saying, for switching just yet. And if anything, I would wait until there’s a reason for me to switch, like there’s an actual problem that I’m trying to solve… Which I don’t have.
[00:52:02.19] Because I know that pika apparently has – I’ve heard a lot about its optimizations for tree shaking, and fewer module dependencies, and all of that, but I’ve never noticed that need in my applications for me to switch over. And I would use that argument for most tooling out there.
I’m actually excited to try pika. I don’t wanna come across as like a hater, or anything. I just think that, like I was saying, you have a limited number of technical bets that you can make. If I’m already at my maximum limit – like, this thing I’m working on is probably not gonna work, it’s already so hard for me to do it, do I wanna add on the additional risk of like “Oh, now I’m using a bundler that is really bleeding edge”? Do I wanna be the one who’s filing the bug reports, or do I want the people who came before me to have already figured out all the obvious bugs? It depends on if I have the bandwidth for that or not; and if I don’t, then I wanna stick with more trusted, reliable tools.
I think you’re gonna always scrutinize the tooling you use, though… So I think your pushback on pika is wise, because you wanna understand why you should use it, and what problems it really solves, and whether or not it actually creates more for you.
Yeah, pika right now is not what I would recommend people use, actually. When you look at pika and understand what it can do in such a simple package by shedding a lot of the features of the past, and by wholly kind of adopting the new browser standards for modules, you realize that there’s a very large opportunity in the future for us to shed a lot of that, and for us to build much simpler, more reliable tooling.
That makes sense.
Yeah, yeah. So I think it’s done more to just sort of expand what I think that the future is gonna look like around this, than it is currently a solution to this problem.
Yeah. And there’s something that’s really aesthetically nice about that idea of like “We’re just gonna get rid of all the legacy crap that’s annoying.”
You already did that Yep/Nope. [laughter]
“A future without Webpack”, written by Fred Schott, I believe the creator behind pika, on dev.to. We’ll link it up in the show notes and put that on Changelog News as well, because I hadn’t seen this yet, and that’s something we should be spreading the news about.
This was a fun debate, I really enjoyed the format. I think even having to throw the curveball at ourselves with the It Depends section - Feross, I think you represented it really well; Divya, you represented Yep very well, and Mikeal, Nope… And I think in the middle there we sort of all huddled around and said “Bummer, it’s so complex. Let’s find ways forward”, and talking about where we’re going actually in the future.
Listeners, if you want to say hello to us, you can do so on Twitter. We’re at @jspartyfm. You can head back to the show notes, there’s a link there that says “Discuss in Changelog News.” We love to hear feedback, we love to hear from you our listeners, so we encourage you to do that, but… Mikeal, Divya, Feross - thank you so much. It was fun.
Yeah, this was great!
Happy to be part of it.
Our transcripts are open source on GitHub. Improvements are welcome. 💚