JS Party – Episode #107

Modular software architecture

with Ahmad Nassri, CTO at npm


Jerod and Divya welcome npm CTO Ahmad Nassri to discuss modular architecture. What it is, why it matters, and how you can achieve it. Ahmad has been thinking deeply about this topic lately and we have a very fruitful discussion that should have takeaways for developers of all experience levels.


Sponsors

Rollbar – We move fast and fix things because of Rollbar. Resolve errors in minutes. Deploy with confidence. Learn more at rollbar.com/changelog.

Linode – Our cloud server of choice. Deploy a fast, efficient, native SSD cloud server for only $5/month. Get 4 months free using the code changelog2019. Start your server - head to linode.com/changelog.

The Brave Browser – Browse the web up to 8x faster than Chrome and Safari, block ads and trackers by default, and reward your favorite creators with the built-in Basic Attention Token. Download Brave for free and give tipping a try right here on changelog.com.

Notes & Links


Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

You know what time it is, friends… It is JS Party time! I’m Jerod, I’m excited to be here. I’m joined by a special guest, I also have a very special panelist that everybody loves… Divya is here. Divya, what’s up?

Hey, hey!

Divya, I hear you’ve been working on an introductory tag noise that you can use at the top of the show…

Yeah, I still haven’t perfected it. I think it’s a work in progress.

Okay. So “Hey, hey!” is just a placeholder?

Yes, it’s a temporary placeholder.

Alright, you work on that and get back to us. The special guest we have - and we’re super-excited, of course, to have the CTO of npm here, Ahmad Nassri. Ahmad, thanks so much for joining us.

Thank you for having me. I’m excited!

It’s kind of a funny story, because you and I met four years ago, almost to the day, on the Changelog, and you had such an interesting back-story. We didn’t use to do back-stories on the Changelog, but I heard yours… And I think that probably took maybe a third of the show. We were there to talk about Kong, and Mashape, and APIs, and we ended up talking about how you came to be where you are… Actually, I think that episode inspired an entire segment for a year or so. We were doing origin stories with everybody… And it turns out not everybody has as good of an origin story as you do, so we ended up saying “Well…” Sometimes it was hit or miss, but we hit such a home run with your story that we thought we’d ask everybody that question. Eventually, we moved away from it… But awesome origin story for you, and I will just submit to everybody: if you’re interested in hearing about his background - go back and listen to the Changelog episode 185, which we’ll link to. Very, very fascinating stuff. But now you’re at npm, so catch me up. It’s been a few years, you’re at npm, you’re CTO there… What have you been up to?

Oh, wow. Lots to catch up on. I guess the journey for me since we last chatted - not to revisit all the history there, but I kind of did this thing where I went from startup to enterprise, and then back to startup again, and back to enterprise again, and [unintelligible 00:03:38.17] The reason I was doing that is I wanted to get exposure to the “other side.” When you’re in the developer tooling space, or in the software development and open source space, I kind of get self-conscious about how deep into our own echo chamber we are, or how much on the bleeding edge we are - sometimes we forget about people who are perhaps stuck in systems that can’t be modernized, or technologies that are still catching up, or doing the day-to-day grueling work of trying to break down the monoliths, or trying to operationalize an old system or an IT infrastructure.

[04:17] So I did this thing where I kind of went full 180 to the other side, and I went and worked at a telco for a couple of years, leading a team of – I think we had about 450 people at the time, just trying to do digital transformation and modernization of telecom technologies, especially when it comes to e-commerce operations and online interactions with the customers. That was kind of fascinating - knowing how the sausage is made, type of thing… As we all carry smartphones and use the internet, seeing how the ISP systems and the telecom operations actually work was fascinating and interesting.

It was an interesting journey to go back into the enterprise space and see the challenges of the enterprise developer, and the level of velocity that teams like that operate at, versus focusing on the open source space, on the modern technology spaces, and the cloud-enabled infrastructure technologies.

To me, that was a very good educational space that I went through. I achieved a lot of things there, and then – you know, I still had the itch to go back and do the bleeding edge, the modern thing. Enter npm. I basically built my career around JavaScript and Node and npm in general, and the toolsets that the npm team and the ecosystem created have really facilitated my career and a lot of the projects I built and created.

When I started chatting with the npm team about what their needs are and what they wanted to do, it was a very interesting opportunity that I couldn’t say no to, and actually being part of making the difference in developers’ lives, and helping people get the same value that I’ve gotten out of the ecosystem and the community that npm fostered and created.

How long have you been back?

I’ve been at npm since May of this year, and it’s been a very interesting journey. We’ve been working hard on a lot of areas and things that we needed to catch up on to serve the community better… But the thing that I’m focused on in my role is helping the team itself and the company itself be structured and operational in a way that can better serve the open source community and our paying customers that rely on us every day for the delivery of their JavaScript packages. So it hasn’t been that long, but it still feels like it’s been a long, long time, and I’m just looking forward to what’s next.

Tough question, but if you could distill down those couple of years in the enterprise/telecom space, what was – did you have major revelations or takeaways, or things that you despised? What would be the biggest summary of your time and experience there?

Well, let’s just say I never thought I’d get grey hair, and I left with a lot… [laughter] Like I said earlier, I think there’s a big disconnect between – I’m gonna say “we”, the collective “we” in the open source community, and conferences and events like where I’m now in Montreal, at Node+JS Interactive…

When we come together and talk about technology, talk about tooling, talk about practices and patterns and standards, that is not the world that most enterprise development is in. And as much as the enterprise developer or people who happen to work in enterprises and are software developers are as interested in those topics and are trying to be engaged and be active in it, the boundaries and the limitations in the environment and the circumstances that they’re in prevent them from doing that.

You and I talked 4-5 years ago about microservices and APIs and RESTful services, and guess what - the majority of enterprises are still nowhere close to that; they’re trying, but they’re nowhere close. [07:51] Meanwhile, the industry is now talking about serverless, and functions-as-a-service, and modernization, and all this kind of stuff. So there’s a gap and there’s a divide that’s only getting bigger and bigger, and that’s a thing that I’m always keeping in my mind, especially in my role now at npm.

The JavaScript community, as rich and vibrant as it is - we talk about all these modern tools and frameworks, and libraries, and methodologies, and guess what - there are people still running jQuery, still running Dojo version 1, still building UIs with Sencha UI, because that’s the enterprise adoption lifecycle. They’ve adopted something, there’s a sunk cost there, and these people are potentially suffering because of those things… But they’re still working, they’re still operational. It’s a testament to the open source technologies that we created all these years ago - they’re still operational, they’re still working. But now there’s a growing gap between the topics that concern that developer who’s building things in Sencha UI or old jQuery UI and is stuck with it, because that’s the enterprise system.

The topics that they’re concerned with are not the same topics that concern somebody who’s building modern React headless applications and deploying them with Electron every day - that gap is becoming bigger and bigger. I see that gap every day, especially with my role at npm, where the concerns of one side are not necessarily addressed by the solutions for the other side.

Considering that, and the fact that npm is also starting to do a lot more enterprise work, how do you bridge that gap? Because generally, it tends to be if you’re a developer-focused tool - which npm very much is - you wanna focus on the developer experience, and often developers don’t pay for that. But you also wanna target enterprise users, who will actually bring in the money, and their use cases are very different. How do you bridge that gap then?

Yeah, I mean – I wouldn’t say npm is targeting more enterprise-focused things… I think we’re uniquely positioned in that middle ground, where we know very well the experience of the open source ecosystem and the developer there, and we understand it very well… And coming up with the solutions that the open source community relies on and needs, and then making that translation to the enterprise developer, or the small to medium-sized company that’s still stuck on some older technologies - there’s value to be given to those developers and those teams as well. And I think that’s a very good place to be, because then you can see both sides of the equation. And while they’re learning and adapting and helping the open source community, you’re providing direct value to the “enterprise developer”, or the old-school systems, or some old IT infrastructure that they’re still catching up on.

When it comes down to the economics of it, that’s what people wanna pay for - they wanna pay to catch up and get out of the hole that they may be in, or get over the technical debt, or the hump that they might be stuck in. So that’s valuable impact that you can measure in dollars, and there lies business opportunities and ecosystem opportunities to serve those communities.

Yeah, that’s a very optimistic look at it, because I often find - and you see this in various tech companies - that as a startup, the focus is on developer experience, making sure that that is very well done, and then the moment it comes to “We need to now make money”, there’s almost a 180-degree shift away from developer experience into this completely other enterprise focus… And oftentimes developer experience tends to lag behind, because they’re like “We have enterprise use cases which are very unique”, and oftentimes developers don’t have that scale or that ability to – they’re not dealing with the same problems. Sure, we can always adapt, but oftentimes it’s almost like two different perspectives.

That tends to happen, where you see a startup that’s very great for developer experience. The moment they focus on enterprise, it’s like a movement away from that. And I know you mentioned a little bit about making sure that you can adapt solutions, but to me it almost feels – maybe this is a very negative view, but it almost feels inevitable… The moment you start talking about enterprise, there tends to be that move away from developer experience… And I just wanna know, from your perspective, how do you make sure that your – because sometimes there tends to be like… Developers can almost see that; they notice, they’re like “Oh, npm is focusing on enterprise now. How will that impact us?” It tends to also be a communication thing, and how do you do that?

[12:08] Well, I think it’s a two-parter. Part number one is it depends on the attitude or the intent - trying to solve enterprise problems or trying to sell products to software teams, as opposed to creating solutions that open source developers can use. And if your intent there is to help them modernize and get into the modern world, then your incentive will not be to just create solutions that keep them where they are… Which I think the pattern you’re pointing to is - yes, there’s a lot of technology companies who focus on solutions for large-scale or enterprise or whatever, and then they inevitably fall into this trap of doing what their customer asks for, or building the thing that fills the gap, so that the customer pays for it… But they’re not acting as the advocates, they’re not acting as the “Here’s the best practices” voice.

So if you’re trying to go into that space and acting as both the advocate and the solution provider, then you can help them get out of the technical debt that they may be in, or the legacy systems that they might be on, and help shepherd them towards a future where they may not need your tool or may not need your products… Or even better yet, they will use your product more effectively and use your technology more effectively. So that’s the one part.

The other part is, you know – it’s not a one-way, directional learning experience, like “Everything open source is doing, the enterprise needs to catch up on.” There’s actually the other path as well. There’s a lot of enterprise use cases and things at scale - whether it’s in collaboration and software development modes and practices, whether it’s in technology and system design, whether it’s in scale and operations of technology - that the open source community can also learn from. So again, being in the middle of that, you can take those lessons and give them back to the open source community.

As long as you position yourself as a shepherd for two-way communication and value exchange, I think you would be successful in the developer tooling space. That’s the approach I would take in terms of solving a business problem, but also trying to be an advocate for better practices, and bringing in the scale and operational constraints that enterprises have, and teaching the small team of 5-6 open source developers the value of those practices or the value of those scaling operations… Because then that makes its way into the open source technology as well.

One of our ways we do things around here is we just get interesting people on our shows and then we talk about what they like to talk about… So first of all, shout-out to Amal Hussein and thanks to her for suggesting this episode; a friend of ours and yours as well. We got hooked up with you, and I said “What do you wanna talk about? What’s been on your mind?”, because whatever you’re interested in, I’m interested in… And the thing that you’ve been talking about and thinking about a lot lately is modular software architecture, and patterns or ways that you can achieve, or reproduce, or migrate to modular software architecture. So let’s tee that up, this topic from your perspective. First, let’s just start very basic, for those who aren’t familiar with the idea of modular software. Can you define it for us, tell us what modular means in your perspective?

[15:56] I think this is a nuanced approach, but there’s a number of different ways people interpret modularity and modular software in general… Especially in the JavaScript world, when you use the word “module” or “modular”, people will either think of a package, or a package resolution methodology, as in with ESM, or otherwise. What I’m talking about when I talk about modularity - I’m referring to the age-old philosophy that started with the Unix philosophy, all the way back in 1978 or something like that, which talks about how you write code and how you write software, and some principles around that.

If I recall correctly, of the 4-5 principles there, number one is that to make modular software, you make each program do one thing really well; everything has one job, and one job only. I think number two was an output/input exchange - the output of every program should become the input to another. If you’ve ever used Unix or Linux and piped operations between command-line programs, you’re very familiar with that kind of approach. Again, this is from 1978, so the very early days of computing.
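That pipe idea translates directly to JavaScript. Here’s a minimal sketch - the function names are illustrative, not from any particular library - where each function does exactly one job, and a small pipe() helper wires outputs to inputs the way | does in a Unix shell:

```javascript
// Each function does one thing only.
const trim = (s) => s.trim();
const lower = (s) => s.toLowerCase();
const words = (s) => s.split(/\s+/);

// pipe(): the output of each function becomes the input to the next,
// like `cat file | tr A-Z a-z | wc -w` in a shell.
const pipe = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

const tokenize = pipe(trim, lower, words);

tokenize("  Hello World  "); // ["hello", "world"]
```

Because each step has one job, any of them can be swapped out or thrown away without touching the others - which is exactly the throw-away-and-rebuild property the philosophy is after.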

But the thing that I find most valuable, especially in the context of software as a social practice that we all do, is that the tools and the way you build your tools and products and all these principles should be tailored to make sure that you [unintelligible 00:17:17.25] the programming task to other maintainers… And the idea that everything should be easily maintained and repurposed by developers other than the people who created it. We would not be successful in the software industry if the person who wrote the code the first time was the only person who’d ever be able to maintain it.

That’s why we have documentation, and practices, and guidelines for how to actually make software repurposable and shareable by others, and that’s why we have patterns like forking, and cloning, and sharing code… Because the whole point of all of this is that at the end of the day software is about people, and with some of these practices around modularity, you wanna make it easy for others to come in and repurpose or refactor or use your software without having to go through tomes of manuals and understand all of the original author’s purpose and knowledge.

I think one other one that we all suffer from every day is – you know, one of the principles of the Unix philosophy was that everything should be designed in a way that you can just throw it away and rebuild it… And as you know, in a monolithic world view that’s not such an easy thing to do. But as you focus on building smaller and smaller units of code, and build them in a modular fashion - that is, everything does one thing very well, every part of the program becomes an input to another, everything can be rebuilt and thrown away - and most importantly, it’s built in a way that others can just come in, understand it, make any changes or fixes and move on without having to spend years and sync up with [unintelligible 00:18:48.13] and everything.

To me, those are the key philosophy areas where – again, the Unix philosophy did this very well back in 1978. But in today’s world we haven’t really matured that enough. We talk a lot - especially in the JavaScript world - about packages and sharing, and libraries and code, but we still have these big monolithic libraries, we still have these big, complex frameworks. And although we’ve done very well on things like sharing and making code repurposable by people other than the original maintainers, I think we’re still lacking some maturity around “How does our software become portable? How do our libraries become interchangeable?” So for me, when I talk about modularity, I talk about these kinds of topics in the general sense, and then I start getting more specific about solving these problems. And just for context, people seem to like modular code; there’s no debate about that. I don’t think anybody goes into their day-to-day job and talks about building the next monolith.

I think there’s a valid debate in terms of a monolithic approach to deployment and infrastructure and maintenance, but that’s separate from writing code, and separate from how you design your systems.

[20:09] From a numbers perspective - and this is something everybody sees every day when they go to npmjs.com - we are now at 1,159,000 packages, and these are just the open source ones. And I’m always curious about that number. I’ve always been curious about it from before I even joined npm - why are there so many packages; why does the JavaScript community create such a prolific amount of code and software to share? The answer I came up with, just based on my own personal observations, is that we in the JavaScript community have had a good run of satisfying some of those human requirements: making things so easy to throw away and repurpose, making things so easy for a newcomer to jump in and get on board, making things simple and clean, and building one thing that does one thing very well, and not being concerned with big, complex challenges across different domains. That’s why we have so many packages, that’s why we have such a big JavaScript community. That’s what made my career, and that’s what made a lot of other people’s careers, and it’s wonderful.

I think the challenge though - and coming back to our enterprise examples over here… The challenge is we’ve solved that in the open source world, but we haven’t solved it in a way that informs a method of building software. All of this so far has been about libraries and code packages, and patterns around that. But I haven’t seen it being adopted very widely in the way we build software at companies or at work.

So the approaches of modularization, and whether you wanna go down the path of packaging, or microservices, or any of those topics, or even the serverless world today - there’s a real pattern here to adopt, and I think - again, taking the JavaScript example, we’re in a world where JavaScript runs everywhere. Myles Borins from Google gave a talk yesterday at Node+JS Interactive where he was talking about universal JavaScript. Universal JavaScript is just a new term we’re talking about, where the whole premise is you write once and run anywhere.

We’re in a world now where JavaScript is running in the browser, in your server in Node.js, you can write JavaScript on edge workers, on companies like Cloudflare, you can put it in your databases, even in productivity software like Google Spreadsheets - you can run some app scripts in there, and you can do that in Excel nowadays, I think… If you went to NodeConf EU this year, they gave out smart watches that were just running JavaScript… So that’s great, JavaScript is successful. But what about the portability of the code and libraries that are being created? What about the developer experience associated with them? Wouldn’t it be great – and I think that’s the promise of JavaScript, that you can write the same software that can run in your browser, and on your smartwatch, and in your Excel spreadsheet. But the reality is there’s a lot of work involved in getting that to happen, and we’re kind of offloading a lot of that work to the developer who’s responsible for doing this. But we haven’t come up with the patterns yet for how to approach those things. I think this is where, to me, the Unix philosophy from so many years ago touches on the key ingredients required to get there. I don’t necessarily have answers in this space, but I love asking the questions, so we can have a dialog and a debate in these conversations.
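As a sketch of what that portability promise looks like in practice - using a hypothetical, dependency-free utility - the same file can run unchanged in Node.js, a browser, an edge worker, or a spreadsheet scripting runtime, precisely because it touches no platform-specific APIs:

```javascript
// slugify.js - a hypothetical utility with zero platform dependencies:
// no fs, no DOM, no fetch. That is what makes it portable across runtimes.
// In a real package you would expose it via ESM (`export`) or CommonJS.
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics to "-"
    .replace(/^-+|-+$/g, "");    // trim leading/trailing dashes
}

slugify("Hello, World!"); // "hello-world"
```

The moment a module like this reaches for a platform API - the filesystem, the DOM - the write-once-run-anywhere promise breaks, and the porting work Ahmad describes lands on the developer.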

The one pattern I have noticed in terms of modernizing the way we adopt these Unix philosophies just so happens to be around package management. It’s not because I work at npm and that’s my day-to-day responsibility, but it’s true. You’ve seen the success of things like React, where people are now building design systems and iterating on them at such a large scale, and involving not just developers, but now designers, and UX designers in this kind of workflow. That’s becoming more and more attainable, and nowadays you have tools that are meant for designers that are generating the code, and generating it in a way that’s a package that is shared and distributed in a community within your company or your clients’ environments right off the bat.

[23:57] You don’t even need to write the code anymore, you can just have a designer drag and drop some things. I think the company is called Framer. I know other folks in the industry are looking at this as well; I think InVision and others are playing an interesting part in all of this… But this idea of modularization is beyond just the software and the code. It extends to UX designers, it extends to product design, it extends to every aspect of technology. I think, again, we in the JavaScript community have kind of solved or addressed that problem in a very efficient way with package management and packages in general. It would be great to start seeing that pattern being adopted more widely, and more – I don’t wanna call it standard, but perhaps as best practices around these ideas and patterns in the day-to-day work of people.

I know, I’ve done this before - again, when I mentioned the enterprise space - when we have teams as large as 450 people, it’s not gonna be about just publishing a new version and expecting it to work. There’s a lot of workflow involved, there’s a lot of operations involved, there’s a lot of maintenance and upkeep and analytics involved. A design system with one component that has a button in it might have 15 different versions, but the adoption of it is all over the map, and we end up spending a lot of our time, as community moderators and architects of the design systems in these companies, just chasing that down and trying to get the adoption going.

The way the software is built is really relying on those patterns, or at least it should become more and more embedded in the way that software is being designed, whether it’s a monolithic design or a microservices design.

The other interesting area of this - I’m using design patterns as an example because it’s an easy one to point at - but now we’re in the world of serverless; now we’re in the world of literally function-as-a-service. While you can deploy a big monolithic application as a serverless application and do that, you probably shouldn’t… But now, more and more, we as software developers, especially in a server-side context, are thinking of smaller units of code that have to be built and orchestrated and talk to each other through an events system to create the result and the output of our product. So again, those modular best practices keep coming back time and time again in all the areas of the software industry and all the different things that we’re doing.

I have a lot of questions that I get through the npm community, oddly enough, from people who are using npm in embedded systems, and they’re asking about best practices: “Well, how do we do package management and download big React libraries, or Lodash libraries, and run them on these systems? …because there’s not enough memory, there’s not enough processing power.” And the answer is that perhaps those libraries and those tools were not built to support those embedded system challenges, but modularization allows you to have a more nuanced approach, like “I want this part of this library, I want this part of this framework, and I can then put them together, create a modular pattern where every piece is responsible for its own logic and the output of one can feed the input of the other, and create a workflow chain for how my system is gonna be designed and work. And hey, if something doesn’t work, maybe I can just throw it away, bring in another library, or another part of that module, and it will still function the same way. I don’t have to refactor my entire codebase.” That’s the future I wanna see.
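In today’s npm that cherry-picking is already partly possible - lodash, for example, publishes each method as its own file, so `require('lodash/pick')` pulls in one helper instead of the whole library. And for a constrained device, a hand-rolled equivalent can be a few lines; the sketch below is illustrative, not lodash’s actual implementation:

```javascript
// A tiny stand-in for lodash's pick(): copy only the named keys.
// Small, dependency-free, and easy to throw away and rewrite -
// which is what an embedded target with tight memory wants.
function pick(obj, keys) {
  const out = {};
  for (const key of keys) {
    if (key in obj) out[key] = obj[key];
  }
  return out;
}

pick({ a: 1, b: 2, c: 3 }, ["a", "c"]); // { a: 1, c: 3 }
```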

Do you think there’s a useful distinction between module complexity in terms of the internals of a module? So if I give you two functions and they each do one thing well - they both take a string as an input, and one of them downcases that string, and the other one returns the sentiment of that string - one of those functions is orders of magnitude more complex. I’m not arguing against modularity, I’m just wondering – I know lots of times there’s this flattening of “It shouldn’t matter what’s going on on that side of the API response…” But it seems like in practice it always does matter; I think it seems like it’s useful to have a distinction from a practitioner’s perspective of what’s going on on the other side of that module. I’m curious your thoughts on that.
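Jerod’s two hypothetical functions might look like this - identical shape (one string in, a value out), wildly different internal complexity. The sentiment scorer here is a toy word-list stand-in, not a real analyzer:

```javascript
// Trivial internals.
const downcase = (s) => s.toLowerCase();

// Toy sentiment: +1 per positive word, -1 per negative word.
// A real sentiment module would hide orders of magnitude more
// machinery behind this same one-string signature.
const POSITIVE = new Set(["good", "great", "love"]);
const NEGATIVE = new Set(["bad", "awful", "hate"]);
const sentiment = (s) =>
  s.toLowerCase().split(/\s+/).reduce(
    (score, w) =>
      score + (POSITIVE.has(w) ? 1 : 0) - (NEGATIVE.has(w) ? 1 : 0),
    0
  );
```

From the caller’s seat the two are equally “modular”; the question is whether the hidden complexity behind sentiment() should still factor into the decision to depend on it.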

The lens I would look at that through is: if I’m gonna be adopting a module, regardless of what it’s gonna be doing, I do wanna see what the internals look like, I do wanna see the approach they’re taking and the processing architecture they’re relying on, because that might cost me money. In today’s world, where we’re running things like serverless and cloud-based infrastructures, it’s the computational processing that I’m paying the cost for. It’s no longer that I’m renting a server by the hour, whether my software is efficient or not; that’s no longer the case. You’re literally paying for the CPU tick and the CPU cycle.

If I have two modules or two libraries that are attempting the same outcome, but approaching it from two different perspectives, maybe one will cost me more than the other. And at scale, that matters. If you think of financial systems and financial transactions, where a hypothetical credit card company has to process credit card transactions, every microsecond matters. And not only do they pay the cost of that, but the customer pays the cost of that, too. So from a performance perspective, from a system design and architecture perspective, I think that matters. From a pure outcome perspective, it may not, and I think there’s a good example of that.

If you’ve ever used Linux – specifically, its package management systems… This is perhaps a pattern a lot of Linux-as-a-desktop-operating-system folks have gone through, where when you wanna install a dependency in your system, say Java, you’re asked “Well, which version of Java do you want? Do you want the Oracle Java, or the OpenJDK Java?” As a user who’s not writing code in Java, it’s the same to me. I can say “I don’t care, whatever. Just pick one”, and it works.

So there’s this idea that - I think the field in the Debian package management world is called Provides. So in that ecosystem, when creating packages and libraries and tools, you can declare that “This provides a mail service, this provides the Java JVM, this provides a SQLite-compatible engine.” And for the end user, that doesn’t matter, because the end user can pick and choose the one they desire, but the end result is the same, the operation is the same, because the APIs of those packages, tools, and libraries are the same; the internals might be different, but the APIs are the same. That’s why you can have any number of different mail servers that you can install in your Linux environments and Linux servers. The internals might be different, the operations might be different, but the APIs that they expose are exactly the same.
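There’s no formal Provides field in npm, but the same interchangeability shows up whenever two JavaScript modules expose the same API shape. A minimal sketch with hypothetical mail transports:

```javascript
// Two hypothetical "mail providers" with an identical send() API -
// the JavaScript analogue of Debian packages declaring Provides.
const consoleMailer = {
  send(to, body) { return `console: mailed ${to}`; },
};
const queueMailer = {
  send(to, body) { return `queue: mailed ${to}`; },
};

// The application codes against the shared API shape only,
// so either implementation can be swapped in unchanged.
function notify(mailer, user) {
  return mailer.send(user, "Your build finished.");
}
```

Swap queueMailer in for consoleMailer and notify() never knows the difference - the choice of implementation becomes, as Ahmad says, a question of performance and cost rather than compatibility.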

So it becomes a choice of performance, it becomes a choice of cost, it becomes a choice of impact on your development methods and approach, and I think that will vary. There’s no right or wrong answer there.

I think that’s keen. I think total cost of ownership is something that everybody should consider when looking to outsource a piece of their application, or pulling a dependency, or refer to a module that they aren’t in control of… And I think probably we don’t think about it as holistically, and that can tend to get us into trouble, so I think that’s a good answer.

[31:55] The total cost of ownership is something I’m always chasing and trying to put a formula around. I don’t think it’s that simple… But I would love to see a formula around the total cost of ownership of software maintenance and software delivery. But yeah, it’s exactly what you said - every choice you make, every time you adopt a package or a module, every time you write a package or a module, even if it’s internal, even if it’s not open source, there’s a total cost associated with it.

As we were saying earlier, you may not be in the same company for long; you may be moving to a different team, you might have different interests a year or so from now, so… Going back to the Unix philosophy there, it’s like “Well, what happens with the developer who’s gonna come after you and has to inherit this codebase and inherit the choices you’ve made?” How easy have you made it for that developer to understand the context, to make it portable, so that they can perhaps throw it away and replace it with something they believe to be better? …and giving them enough context and enough of that decoupling, so that they can be free to do so at will, rather than being - not to use a negative word, but being prisoners of the choices of the past.

Right. Worth noting - there is a cost to decoupling, there’s a cost to making something modular. So that’s worth thinking about… Although through time and experience I can attest to the fact that it’s almost always worth it. There are times when it isn’t worth it, and that’s subjective, and like you said, it’s hard to quantify these things and come up with an equation for TCO.

Well, there’s so many factors, but I think just us developers thinking about decisions in terms of total cost of ownership and return on investment - these business ideas, bringing them to our software… I would just say - I’ll add one more thing and I’ll pass this to Divya - a huge win for open source is that it’s a lot easier to calculate total cost of ownership when you can inspect the internals of your dependencies. You can say “Well, here’s two modules that provide the exact same functionality”, and I don’t have to guess at their cost, because I can see the approaches, I can see the software inside of those things, and I can say “Well, this one’s well-factored, it’s a pretty simple, straightforward thing, it’s well-maintained”, so I know the long-term cost of that one is likely to be lower than this other one, because I can see their internals… Whereas with proprietary software - you hit an API, a sentiment analysis API provided by a service provider, and you’re basically going off of the reputation of the service provider, because you can’t see how they went about solving that, unless it’s also open source.

I would argue that that’s actually similar, because for example with packages – so I’m all for using packages and modularizing your code, but there’s a part of me that’s pushing back on the idea of making a package serve every piece of your code, for example, which I think you mentioned… Just the idea of modularizing to the extent of everything being a package… Because there tends to be increased complexity with that. Like, sure, your code is very easy to parse, because every module is in charge of a specific thing… If you need specific Lodash methods, each one is doing one thing. And that’s great, but it often adds a lot of complexity to the code, because then you’re relying on someone else’s code to run the thing.

The issue that happens there is – sure, it’s open source, and you can see for yourself the number of users, the maintainability, and so on… I think oftentimes when I relied on a package - I will use Hammer as an example. Hammer.JS I really loved, because it allowed for gesture-based interactions with a web app. It was really well-used 4-5 years ago, and then they stopped maintaining it. Just randomly. That’s really frustrating, and that tends to happen with packages… Because I’m all for using an npm package and having someone else deal with that problem. It comes back to bite me when that package is no longer maintained, and there’s a lot of dependencies that it relies on that are no longer compatible with dependencies that I have. So what we’ve been talking about, the cost of ownership - it increases drastically because of that, because I have to maintain and be very mindful of all the packages I’m using, making sure they’re all up to date, and swapping them in and out… which oftentimes is not very easy.

[36:09] If I’m using Redux, for example - and let’s assume in some post-apocalyptic scenario no one is maintaining Redux anymore and I have to move to something else, then pulling that out becomes a huge cognitive burden, because now it’s like everything relies on Redux, the architecture is very specific… So almost at the beginning, when I made that decision, it seemed very easy, but now when I have to maintain and almost look at long-term impact, it’s a lot more work. So I think that’s something to keep in mind, which is why I’m pushing back on this… Modularity should not always equate to putting everything in a package…

For sure.

Yeah, I try to avoid the usage of the word package as much as I can… I don’t know if that’s slipped in or not. [laughter] I mean, it is my day job, so it’s hard not to let it slip in.

I think it was implied… It was assumed.

Yeah. But you are absolutely right - making things modular is one thing, and packages and package management is a whole other thing. You can build modular software and just put everything in a folder, its own folder, and there you go, you’ve got modularity. But the design constraints in how you write that code, and the boundaries you create between them is really where modularity comes in. And those decisions, as a software developer building big software, you would have to take into consideration.
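That point - modularity is about the boundary you expose, not the packaging - can be sketched in plain JavaScript with no package manager involved. A minimal, hypothetical example: the “module” (which could just as well live in its own folder) exposes a narrow public API and keeps its internals unreachable:

```javascript
// A self-contained "module" -- could be its own folder with an index file.
// Only the returned object is the public API; everything else is internal.
function createCart() {
  const items = []; // internal state, not reachable from outside

  function subtotal() { // internal helper, also hidden
    return items.reduce((sum, item) => sum + item.price, 0);
  }

  return {
    add(name, price) { items.push({ name, price }); },
    total() { return subtotal(); },
  };
}

const cart = createCart();
cart.add("book", 12);
cart.add("pen", 3);
console.log(cart.total()); // 15
```

The design constraint is the boundary: callers can only use `add` and `total`, so the internals can be rewritten, or the whole thing replaced, without touching the rest of the codebase.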

Now, that said, it just so happens that packages and package management in general do solve a second-tier problem once you’ve achieved modularity, which is code sharing. Creating a dependency graph of what is using what library and what module, and to what degree am I gonna update or not update, or keep up with things… And yeah, there is an even bigger cost there, of keeping up and operationalizing all of that stuff. But one thing I would say is luckily that’s what robots are here for, and we’ve seen patterns where with tooling and CI environments and automation we can alleviate a lot of that load and make it so that humans don’t need to be doing that stuff and making decisions around that. To a certain degree, you can automate a lot of these things away, making the complexity, and therefore the total cost of ownership, much lower.

A good example of this - I don’t recall who tweeted it, but there was a tweet a while ago where somebody in their GitHub had a dependency bot come in, notice that there was a vulnerability in the dependency that they were using, so it opened up an issue… And another bot came in and made a PR to fix it. Then the CI environment [unintelligible 00:38:37.22] verify the PR, and then another bot came in to merge it, and then a fourth bot came in and celebrated the merge with a gif and posted it to the thread. [laughter] So the level of automation there is just very meta and very complex… But great. Humans were not needed here, which means that the cost of ownership is actually nil, in theory.

Assuming that everything went well.

Exactly, exactly. I think assuming that everything went well.

And also, bots are great when it’s a mindless thing, like updating a version. But the moment when it comes to deciding which package to use, I think that’s pretty subjective. Because there have been times where I’ve been on teams where we would go with a package that, for example, isn’t as popular, but is very robust, either from a performance perspective - the size of the bundle was small, or whatever that may be.

Yeah, and I think you’re pointing rightfully so at the examples of the open source world and the complexity there and the cost of maintainership… But I’ve seen those same examples in closed source code, in enterprises, across teams and across hundreds of developers. Those same problems exist internally, even if there is no context of a package or the package management. There is a repo and a team that worked on it at some point, and then that team moved on to other things, or team members changed and moved on to other teams, and now that maintainership is lost.

[40:03] Then another team may be relying on that, or an application may be depending on that, and now there’s an issue or a bug, or needs to update, and those challenges become even more complex, in my view, in the closed source space/enterprise space, in the things that are not publicly-published open source packages, because they’re even less visible… At least, thankfully, in the open source space things are visible. You have the choice of taking something, forking it, making changes and going forward with it.

I’ve had scenarios where there were repos that certain people didn’t have access to, and entire teams were blocked, because the original team was no longer there, or the original maintainer was no longer there. That’s an even bigger problem to untangle.

The same pattern applies, and this goes back to my earlier example about – you know, I see npm as uniquely-positioned in between, because we can see both sides of the world, and the lessons you can take from that, you can apply to the other, and vice-versa. And I think there’s a value exchange there to be had between how the open source community does things and how teams at enterprises and with closed source software does things.

What I was focusing on is more of the modular way of writing code, but again, that leads naturally to things like package management, code sharing, dependency allocation, and all those kinds of things.

There’s a lot of boundaries at which this conversation changes its focus a little bit. You can think about modularity in the small, like “How do I factor my own personal code, and how do I write it in such a way that my functions follow the Unix philosophy?” And then you can start to think about it as a team, and like “How does this team work together in such a way that I can pass my functions to you and you can use them?” and vice-versa, and you don’t have to worry about the internals of mine. And maybe I have a monolith over here; you don’t have a clue, because you have an API call and it works, and so that’s you being modular, but it’s me being monolithic. So there’s this weird dichotomy there.

And then you have what we’ve been talking about – that’s why it does look weird when you switch to packages now, which are really just kind of formalized modules in the JavaScript space… Well, that doesn’t necessarily have to be somebody else’s code, as we were talking about; that could be your own internal packages, and that’s just logistics. That’s just distribution of your own modular code. But then you go to somebody else’s code; now you’re pulling in somebody else’s package, and the jump in risk, and in complexity, and trust, and all these things - there’s a massive chasm between those two things… And most of the time what most of us are reaching for is for somebody else’s code. So that’s why maybe we even just start talking about packages all of a sudden, because - well, you’re with npm, but also, we think in terms of grabbing somebody’s package, and “Hey, it does what I want? Cool, let’s use it!”

Yeah, I’m gonna make a meta joke here… Maybe this conversation could be modular as well, and modularized. [laughter] Talking about all these different things… Yeah, I think in the API space, in the infrastructure/design space we talk a lot about monolithic and serverless and microservices and all that, but there’s no real definition of “What is a microservice? What is a monolithic system? Does the collection of microservices equal monolithic?” You can draw a box around anything and say “Well, there you go. This is a microservice.”

I used to ask people, “How small does it have to be?”

Are nanoservices next? I know that’s actually a buzzword that some people use… But it’s kind of ridiculous.

Well, now we’re doing nano front-ends, or micro front-ends, or something… [laughter]

That’s right.

But does the collection of micro front-ends equal a monolithic website? However you draw the box, again, around a package and the distribution mechanism… Because a package at the end of the day is a distribution mechanism and a sharing mechanism. You can still achieve the modularity patterns and the best practices that you wanna put in place, so that other developers and team members can benefit from the software that you’re writing, whether you’re shipping it as a monolithic enterprise product or you’re shipping it as a package. And I think that’s the lesson we should all take away - we kind of cross those boundaries of the conversation quite a lot because of where we are in the open source and JavaScript space in particular, but there’s no reason you can’t ship a monolithic package and have it be modular on the inside, and make it easy to maintain.

[44:02] Well, maybe let’s talk practical in terms of achieving modularity. Maybe you would like to write modular software, maybe you have a big ball of mess on your hands… I think you hit on it earlier - I don’t think very many of us are like “Nah, modular is stupid. I don’t wanna write it that way.” But that being said, we all end up with these big spaghetti codebases anyways. So it’s difficult to do right, or do well, or do at all. It’s a lot easier to just keep adding imperative things to my one big main function; just keep adding functionality right in there. At a certain point it becomes unwieldy. But up until that point, it was the smoothest way to get to where I needed to go. So advice from you, Ahmad, and even Divya, on either how to move to modular software, or how – how can you make sure you’re writing modular software? What are some best practices, or even just advice in that vein?

I remember a quote – I don’t recall who was the first person who said this, but I love it, because it’s psychotic and fun… Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.

[laughs]

And if you live by that standard, and you wanna do something – you know, take your codebase, make it modular, make it maintainable… Maybe not out of fear, but out of empathy to the developers and to the teams that are gonna be inheriting that code and working with it… I think that’s the right place to start. Because I know for a fact - I’ve worked on a lot of software, a lot of code over the years, and I’m not maintaining it anymore. It’s somebody else’s problem somewhere else. And I sometimes think back to that, I’m like “Did I make it simple, did I make it easy enough to be maintained?” To your example, did I write everything in one big file and assumed all the methods are gonna be called and understood, or did I break it up and try to put some context around it?

To me, things like documentation play a very big role in our industry. We tend to joke about it, we tend to say “Developers don’t like to write documentation”, or “Documentation is not the end result or the end goal of good software”, but it really starts and ends with documentation… Whether you’re documenting the entire ecosystem of your monolithic enterprise architecture, or you’re documenting the one module, the small piece of software that you’re sharing with other team members.

Just having that empathy of thinking of the other when you’re writing code is really where modularity comes a full circle back to me in my mind… Because I’m not always gonna be maintaining this code. That’s a given, that’s definitely gonna happen. So what happens to the person who’s gonna come after me? …hopefully they’re not a violent psychopath who knows where I live.

[laughs] There is a way you can look at that exact same equation if you’re a little more narcissistic or selfish, which is that yes, eventually somebody else will be maintaining that code, but in the near-term future that’s gonna be you, and near-term future you does not have the context that present you has. So you might be that violent psychopath that is looking back at the past self… So if you are a little bit more like “It has to be about me”, well, you’re gonna have to maintain this for a while, and you’re gonna be hurting yourself in the long run… And then in the long-long-long run, eventually, assuming your software has value and is still continuing to execute years down the road, it will be somebody else’s problem. Divya, do you have thoughts on this?

No, I actually agree with your sentiments on that. Generally, whenever I write anything and I try to be as modular or I try to think about it, it tends to be “I’m gonna be maintaining it.” Because there are times when I write things for open source and I’m like “Oh, this would be cool for me to publish on npm”, because it’s a thing that I figured out and I’m sure other people would benefit… And then I realize that other people are actually using it now and I have to maintain it… [laughs] It’s a rude awakening, because oftentimes I think most developers - this is just an assumption that I have - like to share the things that they build… And that’s great and all, but the moment someone else depends on it, that’s when you really have a huge responsibility on your shoulders… Because that’s something that not only you have to maintain, but potentially someone else down the road, if you were to give up ownership of that, has to maintain. So it’s always on my mind whenever I create something that I publish out in the world. And just to create good documentation…

[48:10] I’m someone who likes good documentation, because like Jerod was saying, I tend to come back to my code a couple of months down the road, and sometimes I don’t even remember how to run the thing that I wrote; it might not even be working when I do all the builds and I run it eventually. Everything might break. So that’s something that I always try to keep in mind, and I write notes to myself. I think there have been some codebases where I actually have comments, where it’s like “Note to self. Do this…”

And those are priceless when you come back…

I know…

You’re like “Without this, I’d be so lost. But with this, it’s just enough. I can remember…” It brings everything back to you.

Definitely. Yeah, so it’s just like trying to give yourself that little ounce of context… Because it also helps someone else when they are approaching your code, and then they look at it and they’re like “I have no idea why this function exists”, and you might want to – sometimes I just create comments above the function itself, and just mention that “This function is here for this purpose” or “This is the input/out. This is basically what it does.”
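That “this is the input/output, this is basically what it does” comment style has a conventional form in JavaScript: a JSDoc block above the function. A small sketch with a made-up utility, purely for illustration:

```javascript
/**
 * Returns the median of a non-empty list of numbers.
 * Note to self: this exists because Math has no built-in median.
 *
 * @param {number[]} values - unsorted numbers; must be non-empty
 * @returns {number} the middle value, or the average of the two
 *   middle values when the list has an even length
 */
function median(values) {
  const sorted = [...values].sort((a, b) => a - b); // copy, don't mutate input
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

console.log(median([3, 1, 2]));    // 2
console.log(median([4, 1, 3, 2])); // 2.5
```

The comment carries exactly the context Divya describes: why the function exists, what goes in, and what comes out - so the next reader (or future you) doesn’t have to reverse-engineer it.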

Then tests are also a really great way to see how things run… which I personally use when I’m using other people’s tools, because I don’t know how they work. Sometimes I use RunKit, which is great, because in npm, if you use RunKit, you can kind of figure out how a library works very quickly, without having to download it… But there are times when I’m already deep in the weeds and I wanna know what one function does, or the internals of how a library works… And then when I look at the tests that someone has written, it actually shows clearly what specific things do, so I don’t have to go super in-depth into reading the entire function to understand that. I think that helps with modularity.

I think sometimes if you do it test-driven as well - it’s a really great approach, because when you write the test, it’s very clear as to what you’re trying to achieve, and then when you write that code, it does exactly what you think it should do. And then that’s when you stop; you’re like “Okay, it does exactly one thing, and now I need to do this other thing, so let me move on to writing something else that maybe takes that output as input.” So it’s very imperative, so to speak… And modular.

I’m so glad you mentioned testing, because of my controversial opinion about this, which is “You should always have 100% test coverage.”

100%. Not for any of the technical reasons, but purely for the human reasons - because of the maintainer who’s gonna come after you. Because if you wanna have empathy for that person - maybe for yourself even, because you’re gonna come back and say “What on Earth was past me thinking?” And having the testing approach of “The examples are in the tests.” The code will tell you what it’s doing, and the tests operate as the narrative, saying “Well, this should be doing that, at this time, given this context.”

Approaching the goal of 100% test coverage is protecting for that future, whether it’s for yourself or the other, and just having that empathy to the person after you, who’s gonna come and not have to reach that edge case or reach that scenario where some code is not tested, but it just works, or maybe it’s too simple to test… But still, maybe the context is not clear enough.

So to me, that’s why I look at 100% test coverage as a mechanism to enable those kinds of best practices, not so much to achieve the bragging rights of saying “My code is 100% test-covered.” It’s just a mechanism for that empathy to the developer after you, or to your future self, to tell yourself why this was done this way; you can tell a narrative through testing. You can revisit that story in your head.

I do the same thing - I write comments in my code and tell the story through the comments as well, of “This is why I’m doing this here, and there.” But that only takes you so far. The other side of it is like “Well, here’s how the code should be used”, and that’s where the tests come in and help you with that.

[51:46] So between documentation, 100% coverage, and even using automation… Because automation is also another mechanism for storytelling. If a contributor comes in - again, whether internally within the team, or from the open source community - and wants to make a change or suggests a pull request to you, the automation will tell them a story… Because the automation would run the unit test, would run the security test, would run some integration tests perhaps… Telling that story is valuable and useful to that maintainer, and again, for your future self… Because I’ve certainly come back to things and asked myself “What on Earth was I thinking?” And there’s no way to go back in time and remember, other than the code telling you, and the documentation and the automation and testing telling you that.

I’m an advocate for testing, but I don’t think in my entire career I’ve ever reached 100% test coverage. Maybe just at the very beginning, like I’ve written one function and one test, and I’m like “Boom!” Or enough tests to cover that.

That’s cheating.

Well, if they say the best code is no code, then it follows that the best tests are no tests, so… Just chew on that.

[laughs] That’s definitely an anti-pattern, for sure.

[laughs]

It has to be.

Anything else we didn’t touch on on this episode?

Perhaps one thing I would point out is, you know, we might be influenced by the JavaScript world a lot - this is JS Party after all - but there are always lessons and patterns to learn and to adapt from other communities and other ecosystems, and I think that’s one of the fascinating things for me, to always go back and look at other ecosystems. I mentioned the Debian package management world as an example, because as a user of it, I’ve used it for years, and now that I’m in the package management world, it’s a good thing to reference and to think about.

So I’d be interested in hearing from the ecosystem and the community as well about what problems we’re trying to solve in the JavaScript world that have already been solved in other ecosystems around modularity, around all these topics that we discussed, and to what degree does it make sense to adapt or adopt some of them?

There’s always this feeling that there’s something just outside of your purview, but it’s right there, but just because you’re not looking at it, you cannot be aware of it… And I’m always curious to see from the audience and the community - if you know these things or you have some answers, please, share them. I’m on Twitter. @AhmadNassri on Twitter.com.

There you go. Talk to Ahmad on Twitter or elsewhere if you have thoughts on these things. Ahmad, thanks so much for joining us, it’s been lots of fun. Divya, thanks for hanging out with me, this has been a great conversation. That’s our show this week. We will talk to you next time.


Our transcripts are open source on GitHub. Improvements are welcome. đź’š
