Changelog Interviews – Episode #403

Laws for hackers to live by

with Dave Kerr


Dave Kerr joins Jerod to discuss the various laws, theories, principles, and patterns that we developers find useful in our work and life. We unpack Hanlon’s Razor, Gall’s Law, Murphy’s Law, Kernighan’s Law, and too many others to list here.


Sponsors

DigitalOcean – DigitalOcean’s developer cloud makes it simple to launch in the cloud and scale up as you grow. They have an intuitive control panel, predictable pricing, team accounts, worldwide availability with a 99.99% uptime SLA, and 24/7/365 world-class support to back that up. Get your $100 credit at do.co/changelog.

Algolia – Make every search lightning fast and deliver the results your customers want every time. Algolia’s search-as-a-service and full suite of APIs allow teams to easily develop super fast Search and Discovery experiences. Best of all, Algolia obsesses over developer experience. Learn more and get started.

Go Time – Your weekly podcast with diverse discussions from around the Go community.

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.


Transcript


So Dave, I’ve found your repo on GitHub, and it immediately caught my interest, because it’s one of those lists, and there’s all these awesome lists… This is a list not of links to other places, but this is a list of hacker laws. You describe it as laws, theories, principles and patterns that developers will find useful… And I thought “This is very cool, let’s talk through some of these laws.” But first of all, tell us why you created this repo and where the idea came from.

Thanks. Yeah, I was kind of inspired by the Awesome lists as well, to be honest; I use them all the time, especially when I’m exploring a new technology… And to a certain extent, the idea of Hacker Laws as a repo came from that, partly through my work as a consultant. I’m an IT consultant, so I work with lots of different engineering teams, and I work with lots of different organizations, and I would occasionally find myself saying things like “This is kind of an example of Conway’s Law here, where what’s happening is that the systems that are being built are reflecting the organization’s structure, rather than actually adhering to a sensible designed architecture”, which sounds like the sort of smart ass thing a consultant would say…

[laughs] Yeah.

The more I thought about it, the more I would jot down certain things I would hear, like the 80/20 rule, which I’d always read when I first started learning about programming was that you spend 20% of your time writing the first 80% of your code or your project, and then you spend the last 80% of your time doing the last 20%… And then realizing that this has a name, this is called the Pareto principle, and it has a whole bunch of real-world examples which it’s based on.

[04:10] So I started jotting these down, just on like an empty markdown file to start with… And then once I had a few pooled together, I put it on a GitHub repo, and every time I came up with an idea, I added it as an issue, to kind of remind myself to come back to it. And then a couple of my colleagues made suggestions, like “Hey, what about this? What about that?” and then it kind of grew from there.

And then I guess I was – just through sheer luck… I tend to try and publicize when I’ve added a new law on Reddit or Hacker News. A couple of times those posts have generated lots of discussion, so that’s kind of brought a lot of traffic over to the repo, which then brought more ideas for laws, and spirited discussions. So it kind of just grew from there, but it’s fairly organic.

Yeah, today there’s 15,000 stars, you’ve got 55 contributors, and it looks like – I counted 13 languages, so it’s also been translated into other languages… So this is a very – somewhat typical success story on GitHub. You put a thing out there, you work on it over time, and over time here come the contributors, there’s interesting conversation… Of course, because these laws are often referenced and thought about by hackers and developers, whenever you see a list of them you’re like “Oh, this is awesome. Here they all are.”

And what I’ve found interesting as I went through this list is that a lot of my interpretations or my memory of the particular things are slightly off from what they actually are… Or can’t be described by me in a way that shows that I’ve internalized it. Sometimes you just memorize a phrase, and you just kind of broad-brush apply it. I actually wrote a post recently about why so many developers get DRY wrong, because we did a show with the Pragmatic Programmers last year, where they were rejiggering their book for the 20th anniversary, and one of the things they said is they had to rewrite the DRY section, because so many people misunderstood what they meant by “Don’t Repeat Yourself”. That was a case where a lot of us can memorize the acronym, and just misunderstand the actual point of what they were trying to say… And in that case it’s a distinct point, but it makes a big difference what they meant by that.

Yes, I think that’s completely it. And it’s also sometimes the intersection or overlap of these things, like – I can’t remember where it was that I saw a whole bunch of engineering principles printed out on the wall, and one of them was something like “KISS is greater than DRY.” It’s like, “DRY is great, but still, keep it simple.” So sometimes it’s okay to repeat yourself, like if you’re writing a unit test, or whatever, and it does make the code more readable… And I think that’s something that’s kind of interesting about the laws - although some of them are called laws, one thing that I’ve tried to do is make it clear that I don’t necessarily advocate for any of them being correct or not… But a lot of them only have limited applicability in the real world… And some of them are just kind of humorous, or a bit out there, about organizations.

Right. And just to put a point on what the DRY misunderstanding is for those who haven’t heard this - “Don’t Repeat Yourself” means that “every piece of knowledge should have a single (I’m reading it out), unambiguous, authoritative representation within a system.” And the slight misunderstanding of that is “Don’t repeat yourself” - so, I just wrote some code, and I don’t wanna repeat that code.

Now, if that code is the writing down of knowledge, then in a lot of cases that applies. But we often take it to mean “Don’t type the same thing twice.” But it’s not really about that. It’s about having a single place for each piece of knowledge in the system, and that distinction does make a big difference, because we tend to prematurely dry up our code in a place where it doesn’t actually make sense. You’re not actually repeating any knowledge here, you’re just repeating procedures. So - slight distinction, but big difference in practice.
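To make that distinction concrete, here’s a minimal sketch - the rules and names are invented for illustration, not taken from any real codebase:

```python
# Two rules that *look* identical today, but encode different knowledge.
def validate_username(name: str) -> bool:
    # Business rule: usernames are 3-20 characters.
    return 3 <= len(name) <= 20

def validate_project_name(name: str) -> bool:
    # Separate business rule that merely coincides with the one above.
    return 3 <= len(name) <= 20

# "Drying" these into one shared helper removes repeated *procedure*, but
# fuses two independent pieces of knowledge; the day project names are
# allowed 50 characters, the shared helper has to be split back apart.

# This, by contrast, really is one piece of knowledge, so it gets one home:
SALES_TAX_RATE = 0.07  # the single, authoritative representation

def price_with_tax(price: float) -> float:
    return round(price * (1 + SALES_TAX_RATE), 2)
```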

[08:17] And I think DRY is a really good example of that, because I see it even in code editors nowadays, when you’ve got things like static analysis tools that will say repeated lines… And unit testing I think is a really good example, where - to me, a well-written unit test, I can look at it in isolation and understand how it’s setting up its expectations, what it’s executing and what it’s kind of asserting…

Right.

But if you were to make all of that unit test scaffolding DRY, you’d end up with a whole bunch of helper functions, and stuff like this… And sometimes that’s useful, and sometimes it does make it more readable… But actually, the kind of authoritative source of truth is probably the function that’s under test itself, and the unit tests are really there as a scaffolding [unintelligible 00:08:59.28] framework. So it’s a really interesting one, DRY.
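As a small illustration of that unit-test point (the function and tests here are hypothetical), a test that repeats a few literals can still be the more readable one, because the whole story is visible in one place:

```python
import unittest

def apply_discount(total: float, code: str) -> float:
    """Toy function under test."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

class DiscountTest(unittest.TestCase):
    # Each test is self-contained: setup, action and assertion are all
    # visible right here, even though the literals repeat across tests.
    def test_save10_takes_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, "SAVE10"), 90.0)

    def test_unknown_code_changes_nothing(self):
        self.assertEqual(apply_discount(100.0, "BOGUS"), 100.0)

# An aggressively DRYed version would push those literals and assertions
# into shared helpers, which saves a few lines but means no single test
# can be read in isolation anymore.

if __name__ == "__main__":
    unittest.main()
```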

Yeah, so we thought for this conversation – DRY, one of them… Of course, there’s – I won’t say there’s hundreds of laws and principles, but I think there’s a few dozen… We obviously don’t have time to talk through them all, and I find some more interesting than others… I’m sure, Dave, you find some more interesting than the others… We thought we would just kind of ping-pong back and forth, talk through some of these laws and principles, the ones that maybe come to mind often for us, and that we think are generally applicable and interesting for folks. Of course, everyone will have their own take on which ones are good, better, best… But if I had to ask you, Dave, what’s the one that you think about the most, or that you apply the most in your day-to-day work of consulting or programming, what would you respond with?

I think in the world of programming probably one of the ones that I tend to think about a lot nowadays is Kernighan’s Law. Kernighan’s Law basically says that debugging code is twice as hard as writing it. So therefore, by definition, if you write your code in as smart a way as possible, you are not smart enough to debug it. And I don’t know if this is just because I’m getting older, or if it’s because I work on a lot of open source projects, or I have a lot of context-switching… But increasingly, I really felt that message, which is that actually “Think about your future self when you’re coming back to this, or to the contributor who wants to jump into your project, who’s maybe new to the language or the platform, or whatever…” It’s not about being smart or clever, unless you’re doing something ultra-specialized, like chipset optimizations, or something… Much of the time it’s about creating a sensible abstraction of the system that you’re working with, and nothing makes you less smart or clever than that really cool trick that you put in there, or all these abstractions, when you’re trying to unpick it a year and a half later, and work out what’s going on.

So that one – when I saw that there was a name for this, it made me laugh, and I thought “Yeah, that’s funny. Debugging is twice as hard.” And I have looked at my own code through the debugger and just gone “What– what’s happening here? How can this be happening?”

Yeah, absolutely. So this “law” comes with an assertion, which is that debugging is twice as hard… And maybe it’s an understatement. Maybe it’s 3x, maybe it’s four times as hard, but I think we definitely spend more time debugging than writing when it comes time to do that. And it kind of goes back to read versus write. You write it once, you read it many times. Sometimes you rewrite a little bit. But we spend more of our time reading the code than we do writing the code; just like we oftentimes spend more time debugging the code than we do writing that initial implementation…

And I love the way that he uses the word “clever” there, because it really does make you feel clever when you come up with a solution to a problem that requires a side-step, or a special use of the language, that you know, and maybe not everybody knows; and maybe you’ll forget it later and you won’t even know that trick later… Or you just learned some sort of esoteric aspect of your favorite programming language to use that to solve your problem. It feels really good, and a lot of times that’s the stuff that we programmers love, like “Uh, I came up with this clever solution.”

[12:25] But it turns out that the actual smarter solution - not as clever, but the smarter way of doing it because of this knowledge of “I’m going to be reading this later, or I’m gonna be debugging this later” is “Is there a more straightforward way of accomplishing this? Can I remove the cleverness?” and that requires a humility to say “Yeah, I’m smart enough to do this clever trick, but actually I’m smart enough to know that I should not do this clever trick if I can avoid it.” And if you can avoid it, your code is much more useful.

Or at least leave a couple of comments in there, if you’re gonna do something cool, like “Yeah, I’m not gonna multiply this by two, I’m gonna bit shift instead…”

Exactly.

…or whatever it might be. Although interestingly enough, this law was one of the ones which generated the most heated debates I think on Reddit when they published this… Because a lot of people said, and I think quite fairly, that – well, to their minds, clever code is code which is simple, which is elegant. The clever code is the code where someone has avoided unnecessary abstractions, or whatever… I think that that’s a fair counterpoint.

Yeah, maybe “tricky” might be the way to think about it. When you’re pulling your rabbit out of the hat - that kind of code is the code that can become problematic.
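A tiny, made-up example of that “clever versus tricky” distinction: both functions double the even numbers in a list, but one leans on bit tricks and a dense one-liner, while the other reads the way you’d explain it.

```python
def double_evens_tricky(xs):
    # "Clever": bit operations and a dense comprehension. Fun to write,
    # less fun to single-step through in a debugger a year later.
    return [x << 1 for x in xs if not x & 1]

def double_evens_clear(xs):
    # Says what it means; the code matches the mental model.
    result = []
    for x in xs:
        if x % 2 == 0:
            result.append(x * 2)
    return result

assert double_evens_tricky([1, 2, 3, 4]) == double_evens_clear([1, 2, 3, 4]) == [4, 8]
```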

Yeah, absolutely. And then in terms of, I suppose, the world of consulting, there’s lots of laws that have to do with organizations, and I’m sure we’ll talk about them, but one that I think really sticks out - and I do talk about this with clients and with engineers regularly, is actually Goodhart’s Law. It’s a statistical law, but in its simplest form it essentially says “When a measure becomes a target, it ceases to be a good measure.”

And the reason I find this one really important is when you’re doing consultancy work, you’re often maybe involved in changing things, changing how organizations work, or building new things… So of course, people want to measure “Are we doing things well? Is what we’re doing making people more productive or less productive?” And that’s great, and that’s natural, and that’s good; we want to measure and make sure that the changes we’re making are overall having a positive impact… But in our desire to do that, we can sometimes kind of go too far and actually cause problems. And I hear this a lot from people when they say things like “How do we measure engineer productivity?” and then my kind of answer is “Well, basically you can’t.”

You can try and use metrics like lines of code per day, or you can try and use metrics like average time to close a pull request, or whatever… But the problem is as soon as anyone knows you’re measuring that, they’re gonna also know that to a certain extent you’re using this as some kind of target, and the smartest people there are just gonna game the system.

So if you start measuring how many bugs are attributed to an individual developer, then developers will stop working on complex code. Or if you’re gonna start saying that productivity is equal to the number of lines of code changed, you’re just trivializing the fact that you can spend two days debugging a system and make a 2-3 line change, which has an enormous impact. And you find this all the time in organizations with things like KPIs, where if you put them in place, you’ve gotta be very careful, because if they do become targets, it’s then easy for people to try and game these targets or feel threatened by those targets… You know, “Are these being used to rank me, or monitor me?”

[16:03] And actually, I sometimes use this to a certain extent in defense of engineers: “Well, it’s very difficult to measure the productivity of a craft.” And this comes down to something which is in many ways still a common misconception about software engineering. Software engineering is not like an assembly line, where you measure the productivity of certain systems and their efficiency. It is much more like a craft. It is an intellectual activity. And those kinds of activities are very hard to measure, and you wouldn’t necessarily say “How can we measure the productivity of every ideation meeting we have?” You wouldn’t even consider that, because you understand that this is a more abstract, intellectual exercise.

I had a similar situation back when I did contract development with people asking for estimates around building out an application… And I would always tell them that the closest thing – these are non-technical people who are trying to get a business started, or trying to build an aspect of their business… And I would say the closest thing you have to understanding the software development process is like building a house… But that metaphor fails in so many ways that if you think that it’s like building a house, where we can lay out the design of the house and we can lay out how many stories and how tall the walls are, and all the details of the house, and then you can go out and get a materials list, and then you can go out and get subcontractors, and you can pretty closely come up with a budget for the design of a house, especially if it’s a cookie-cutter house… But even a custom home; you know, here’s the plan, here’s what these parts cost. You can take off a room, save this much money etc. That’s the closest most people get to understanding custom software, and it falls apart almost immediately, because we don’t have the design of the house in custom software.

Absolutely.

All that we know is we don’t have that at all, and so estimating what’s our price for this project at the outset is a fool’s errand. It’s actually, I believe, impossible. So I’d have to explain that to people, and that’s a tough pill to swallow when you’re trying to say “Can I afford to build this software?” But it’s just the facts of how it works.

Yeah, for sure. That is a tough pill. It’s very difficult to say to a client “I can’t tell you how much this is gonna cost, because I can’t tell you how complex it is, because I won’t know until I know more about the domain. And then when I know more about the domain, I’ll be able to say there are certain parts that are more complex than we expected, and you can choose to have them, with the associated cost, or take them away.”

So that’s why the word “architect” is a really strange word - because most architects don’t really do architecture. Architecture is about designing something where you know the end state. I always think of software architecture or systems architecture or enterprise architecture - it’s a bit more like SimCity. You set up industrial zones where you know you’re gonna have to have access to lots of electricity, or you set up super-highways close to the airport. You’re kind of like planning for growth, you’ve got a certain idea of where you want to put things to keep things in a certain order, and how you’re gonna move around resources… But you’re also kind of planning that things will grow organically as well, within certain areas. You’re just trying to do your level best to kind of gauge it right.

So of these laws, the one that I actually verbally say out loud the most, I believe, is YAGNI. You Ain’t Gonna Need It. I say YAGNI to myself, or to others, even sometimes outside of the world of software; I’ll just say the acronym to a friend and they’ll be like “What?!” I’ll be like “Never mind.” You Ain’t Gonna Need It. I think that it’s so true, in so many contexts… It’s so easy to get into the mix and start planning.

We just talked about how you don’t really know – some of the system is emergent; kind of like city planning, like SimCity. And we build things that we don’t need all the time. And again, it kind of goes back to the idea of the cleverness or the “I know what I’m going to need later” mythos. It just lets us down so often, and so many times we’re just not gonna need that thing that we’re building.

Yeah. It’s funny you said YAGNI. When you were saying “There’s one that I say a lot”, I was thinking “It’s gotta be YAGNI.” Because it’s the same for me.

[laughs] Saw it coming?

And it must be the same for any other engineer who’s ever had to work on a feature, where they’re just thinking “Is anyone really gonna use this, really?” Or who’s looked at their own code and thought “Why have I spent all this time abstracting this away, so that I can use a different kind of file system when I know I’m never gonna use it again? I’m never gonna use a different storage mechanism for this whole thing here that I’ve built an abstraction there for. I didn’t need that.”

Oh, gosh… Let me throw my friend Nick Nisi under the bus, who is a good friend and a good engineer and a JS Party panelist. We’re working on some software around JS Party’s game show; we have a Jeopardy-style game show called JS Danger, and we’ve built a web app so you can actually have a gameboard… And in that web app you have the contestants, and they have their faces. So it’s “These are the three people who are the contestants”, and we’ve put their avatars in there… And I built the first version of things; I was building out the JSON structure of how we’re gonna load this data as we can reuse this gameboard… And I’d just go out and I’d figure “Well, we’ll just load a URL and make an image source.” So I just go out to their Twitter profiles, and I right-click, and – I can’t remember if I download the file, or I just grab the URL and throw that into the JSON blob.

Then I pass it off to Nick to continue working on this, and he decides that instead of just a string, which holds a URL to an image, he’s gonna have like a handler function, which does something else… And that way, we can just put their Twitter name in, whoever it is, and it will go determine whatever their actual current photo is, and all this kind of stuff. And then – I mean, totally YAGNI, by the way. We’re gonna use this gameboard like once a month, once every few months… And we know the contestants beforehand, and it takes about 30 seconds to go grab those URLs. But a dynamic lookup was nice, even though YAGNI, until something changed in Twitter’s API, and the CORS rules, or something… Anyways, he couldn’t deterministically figure out what the URLs were anymore, so then he had to write a proxy server in order to resolve the actual URLs of the avatar images, and get a token, and all this kind of stuff. So sorry, Nick, but I threw you under the bus there. We’ve all done it. You were just over-engineering a thing that was totally YAGNI… And he had fun doing it.
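For what it’s worth, the YAGNI version of that contestant data really is about this small - the structure and URLs below are made up for illustration, not the actual JS Danger code:

```python
# YAGNI version: the avatar is just a string in the game data, refreshed
# by hand in the thirty seconds before each show.
contestants = [
    {"name": "Nick", "avatar_url": "https://example.com/avatars/nick.png"},
    {"name": "Amal", "avatar_url": "https://example.com/avatars/amal.png"},
]

# The "flexible" version resolves the avatar at load time from a handle.
# Now the gameboard depends on a third-party API, its tokens, its rate
# limits and its CORS rules -- for a value that changes roughly never.
import urllib.request

def resolve_avatar(handle: str) -> str:
    # Hypothetical lookup endpoint; the real thing eventually needed a
    # proxy server once the upstream API changed.
    url = f"https://avatar-lookup.example.com/{handle}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read().decode().strip()
```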

[24:11] There should be an extension though, which is like “YAGNI, but I wanna code it.”

Yeah, exactly.

Sometimes I realize that that’s what’s going on with me… My dotfiles on GitHub – I get a new computer once every five years. I don’t know why I go through the effort of trying to automate all the setup of it… And it’s totally YAGNI, but I also just kind of want to do that.

Right.

And yeah, sometimes I find myself doing that, thinking “Am I writing this because it’s actually useful, or do I just think it’s cool to have that handler function that can do this? But look, guys, we can also do X, Y, Z.” Yeah, we can, but no one needs us to.

Right… Which – I mean, if it’s for pure joy, and it’s on your own dime or your own time, I’m totally cool with it. I feel like the lazy part of me is probably the one that says YAGNI the most, because I have these two battling things… I have the desire to build cool stuff, and to think ahead and be smart, but I also have this desire not to do extra work. So that - what I’ll call lazy programmer is the one that usually says YAGNI, because the one that gets going is like “Oh yeah, and then I’m gonna do this and that”, and then I start thinking “Do I actually wanna build out all these things?” No. No, I don’t. So I usually say YAGNI.

And again, I think this is where it’s interesting seeing how the laws play off each other a little bit… Because we’ve already spoken about the Pareto principle, but that applies here as well, and it kind of is YAGNI: 20% of the features are gonna be used 80% of the time. The vast majority of the features you’re developing for an application or solution or whatever - it’s like a hockey-stick curve, or whatever; a small number of core features, the 20%, are gonna be used extensively, and then there’s gonna be a whole bunch of stuff that’s not used at all. Or maybe just not used to the extent that justifies the effort involved in building it.

And then there are those exciting moments when you’re kind of whiteboarding, starting to put something together in the editor and you’re thinking “Yeah, this is cool, and I could extract this into some kind of interface or plugin mechanism”, and it’s those moments when you have to stop and think “Yeah, but am I actually gonna need to do this? And if I do do this, is this gonna be one of those projects where I registered a domain name…”

Exactly.

“…and kind of never got any further?”

Right. But one argument towards YAGNI, even if you aren’t gonna need it again - I did a show with Saul Pwanson recently, and he wrote VisiData, which is a very complex tool for visualizing data inside the terminal… And he said “If this was just for me, I don’t need all of this. But I wanna have something nice, so what I do is I open source it, and now it’s worth all my effort. It’s worth all the stuff. Even if there’s things that I’m not gonna need again, because all these hundreds and thousands of people can benefit.”

So one thing that my YAGNI brain often misses out on is the opportunity of providing an abstraction that other people are going to use. I don’t think in reusable libraries that I can open source as independent little things very often. Often I’ll look back at code and be like “Holy cow, this is a library right here. This could be an open source project.” But some people think that way. They think “Well, maybe I’m not gonna need this function again, but other people might need it, so I’m gonna actually take the time and build an abstraction, put some documentation together and release this as open source. And now even though YAGNI for me, somebody out there is benefitting.” I think that’s a nice counterpoint to that principle.

Yeah, absolutely. And that’s that kind of open source mindset of “If I share it, then it could also grow on its own as well. It could get better.”

I suppose that a counterpoint again to that as well - a counterpoint to the counterpoint - would be you can design it for open source and you can design it for other people to contribute to it, with a plugin mechanism or whatever… That kind of reminds me of another law which I think is really important in software design, which is Gall’s Law, which basically says that a complex system that works is invariably found to have evolved from a simple system that worked.

[28:12] Basically, complex systems are not created as complex systems. They start off as simple systems, and they evolve over time. A big example that often gets used is the internet, which started off as a way for academic institutions to share data, and then it’s become what it is today. But you could look at things like Kubernetes as an example; it probably started off life – well, of course it started off life much simpler than it is now…

Right.

…all of these extra features and abstractions for storage systems, and different container interfacing, and so on - they kind of got added on over time as needed. But initially, they weren’t there as abstractions. They evolved. But who knows, because then if you just let different people contribute in different ways, you also run the risk that you lose coherency, and different people have different ideas about how things should be pluggable or extendable, and you end up with a project that’s no longer internally consistent. So I guess you also need to make sure that when it is evolving, at least it’s evolving with a set of principles or patterns or whatever, so that it still makes sense to people.

Yeah, so if you do set out to build a highly complex system, what is the takeaway there? It’s that you have to break it down into a series of not-complex systems; you need to somehow get to a point where your starting place is not complex. Because if you design a highly complex system, according to Gall’s Law, that’s going to fail. But if you know that the domain which you’re tackling is highly complex and there’s no way of actually getting around that, you need to break it down, and you need to be able to build either some sort of simple representation of that system to start from, or subsystems which can be simpler, in order to build a more complex system that can evolve from them.

Yeah. And I think that’s exactly it. If someone had to create, for example, Kubernetes right now from scratch, and they were basically given the APIs and said “This is the specification of what we want and how we want it to work, but we’re not gonna tell you anything about the internals, and we’re not gonna tell you anything about what’s been happening in software development for the last 30 years”, it would be an enormous challenge for them, because as you said, it’s made up from smaller systems that have then been proven to work. Like, under the hood there’s [unintelligible 00:30:36.10] but distributed state management is really, really complex; it has all sorts of challenges, and there’s some of the laws in the repo about that as well… But they didn’t invent that from scratch; they used an existing, proven system. I think [unintelligible 00:30:56.13] is based on the Raft protocol, but I’m not sure. But anyway, they took an already proven mechanism for consensus-based representation of state in a distributed system, and then plugged that in. Then they took existing systems like volumes, file systems in Linux, whatever it might be, and composed it together from there. It wasn’t like every part of this system was created from scratch.

That actually plays into what I’ll call an interpretation of Hanlon’s Razor. Hanlon’s Razor is “Never attribute to malice that which is adequately explained by stupidity.” I’ve also heard that as incompetence versus stupidity, if we’re gonna start to mince words there.

Yeah…

[31:46] Which I think is a great thing to fall back on. I think it’s a gracious way, writ large, to approach people in life - to think that probably this was not ill will, but probably this was incompetence or stupidity, whatever the situation happens to be. Now, it’s not always that (it could be ill will), but if you start with that assumption, that people are generally not against you, but happen to be incompetent, or make mistakes, or be stupid, then you go from there, and it’s a much better way to live with one another… But I think the interpretation of that, or a slight change of that, which applies to this complexity situation, is that a lot of times we attribute stupidity to the programmer that came before us, and in a malicious way… Like, this person either didn’t know what they were doing, or they left this mess on purpose, or whatever we have; there’s always the previous-programmer scapegoat - the one that you’re blaming whatever situations on…

And I think the way that you can slightly change that in a positive way is that don’t attribute to stupidity that which can be explained by lack of information, lack of context… Because the person that made that decision which no longer makes sense, or is confounding, a lot of times they weren’t stupid, they weren’t malicious, they just didn’t understand the system yet; complex systems evolve over time, and they evolve as more information comes into the game. And so a lot of those decisions actually were the best decision at the time, it just didn’t scale.

Yeah. And also, a lot of decisions just have to get made, sometimes quickly, and sometimes without as much time as you would like to take them. And I think part of this is, I suppose, an emotional maturity thing. I think you learn it as well when you’ve been around long enough to have been sitting around a table with someone just absolutely shredding something to bits and saying “What was this person thinking? I was looking at this thing… Total amateur hour. What are you doing?”, and you’re sitting there thinking “How long before I have to tell them this was me?”

[laughs]

And I thought it was the right thing at the time… And I get it. I understand that it was not smart. But at the time I didn’t know what I know now, or I didn’t know it’d be used in this way. So I think that is a good one. I think it’s just also an important one as well. I think technology can sometimes be a little bit of a harsh world for this; we just need to be kind and inclusive towards each other.

You can see amazing things in the world of open source, in terms of – say, for example, the time people spend contributing to projects for no other reason than they just think that they’re cool, they love them and want to support them, and giving their time… And that’s wonderful. And you can also see people just kind of rip stuff apart to try and show how clever they are. And we will grow and we will learn, and we tend to learn and grow the best from people who are inclusive, and when we make mistakes, look at those and say “Hey…” Instead of tearing it to pieces to show how clever you are, say “Maybe we can look through this together, and I’ve got a few suggestions”, and kind of guide that person through that.

Yeah. It’s easy to forget that there’s a human on the other side of that text area, because it’s all text-based communications, and because of all the cruft and all the stuff of life that you’re bringing to your laptop today, and I’m halfway around the world and I’ve got all my own stuff that I’m bringing, and we’re just typing into a thing and hitting submit or send… And we see an avatar. Maybe it’s a picture of your face; mine’s just a weird, green blob representation of me when I was 18 years old… So we tend not to give people the benefit of the doubt on the internet. That’s one of the reasons why I’ve always loved podcasts, and one of the reasons why honestly we get way less blowback on things that we say that are stupid on our podcast, or misrepresentation, or whatever it happens to be - because there’s an empathy with voice that’s lacking without it… And people just give podcasters that benefit of the doubt, because they can tell this is just a person talking. They can hear their voice, there’s inflection… You can hear doubt, even if you’re saying the words, whereas if you just typed the exact same words out, that’s removed…

[36:04] And a lot of the malice and the way that we treat each other online, I think, is because we’re just so abstracted away from the human on the other side of that text area… And if we just were more aware of that and thinking about that, and thinking “How is this gonna affect this person’s day? What I say about their open source project”, or whatever it happens to be - well, I think that we would all be a little bit better off.

Yeah, absolutely.

But it’s hard. It’s hard to remember that in the moment.

It is hard. But I think that’s a really important thing. And as work becomes more distributed, and teams become more distributed, those kinds of things are more likely to happen. I’ve found, again, through consultancy work, having to work with different teams and so on - one thing I’ll often suggest on engineering teams that I’m working on, particularly if we’ve got a mixture of people - maybe contractors, different organizations who are kind of [unintelligible 00:36:58.00] together - is when you do your pull requests, look over the code, take your notes, but then go and sit down next to the person and talk them over together.

That was something I learned after really just seeing the occasional incident where someone would write something - maybe they were even trying to be funny, and the sarcasm they were using didn’t come across in text, or they were just having a bad day and didn’t do exactly what you say, which is thinking “There’s another person at the end of this, who’s maybe also having a bad day.” So instead of asking someone to [unintelligible 00:37:30.20], go and sit with them and have a chat about it, and make it more of a two-way dialogue - then you get that more empathic conversation happening, and you’ll probably both get a lot more out of it, as well.

Alright, Dave, hit us with another law…

Okay. I think I’m gonna butcher the pronunciation… Hofstadter’s Law. Apologies to anyone who can pronounce the word properly. It always takes longer than you expect, even when you take into account Hofstadter’s Law. I think this one is great; it always makes me smile.

It makes my colleagues smile when I say it to them. Basically, the law is that it’s always gonna take longer than expected, even though you know that it’s gonna take longer than expected.

[40:04] You just can’t avoid it. Self-awareness does not matter…

Somehow still it’s gonna take longer… And I love it, because it applies to software development, but it pretty much applies to everything else in life as well. I don’t know if that’s because we’re naturally optimistic creatures or something like this, but things just do take longer. There’s always that little bit of complexity that you start to unravel, and you looked at it and think “Oh, this is gonna be something that’s gonna – I’m gonna lose an hour working on this.” And then four hours later you’re like “Oh, I’ve actually [unintelligible 00:40:36.14] backwards.” And then the day after that you’re thinking “I’ve invested so much time now I’ve just gotta at least get this fixed somehow.” So that one always makes me smile.

The sunk cost fallacy, yeah. Absolutely. My boss, when I was doing my early days consulting - he would ask for estimates; you know, because you’ve gotta come up with something. And his rule of thumb as a manager of developers was take the developer’s estimate and then just triple it. And then you might be close. And I always thought that was ridiculous as a young man. I was like “Seriously? Triple it?” He’s like “Yeah.” So if they say six hours, triple that, and there’s your estimate. And it turns out you still undershoot sometimes when you triple that thing. And if you don’t, then you just get pleasantly surprised.

It’s funny you say that. I do that all the time. People will say “How long do you think it’s gonna take to do XYZ?” and perhaps one of the more junior people in the room would say “We can probably do this in two weeks”, and then internally I’m thinking “Okay, so this probably means it’s about two months then”, because I’ve been there, I know what it’s like… And even my estimate of two months is probably way off. And sometimes you see that shocked look on people’s face, particularly perhaps more business-minded people, and they’re like “Really?” and you’re like “Yeah. And I’m really sorry to say this… And I know it’s a tough one to explain, but it is just gonna take longer than we expected. Even though it sounds simple, there’s gonna be stuff that bites us. So either we just accept that and plan for it, or we go for optimism and we’ll probably end up late.”

Yeah, I think if somebody says “This is gonna take two weeks”, I think at that point you have basically unbound risk, because it means they have absolutely no idea.

Yeah.

If they say two hours, they may be off – even by an order of magnitude, I guess it would be 20 hours; that’d be quite a bit. But they go to a day. But if they say two weeks - I can’t think two weeks down the road on a software project, and I’m not sure anybody else can accurately, on a recurring basis… Maybe you’re right here or there. When I was still doing consulting and doing development hours, basically my smallest unit of time was half a day. I would say “This is a half a day, this is a day”, and the longest unit of time was three days. Anything bigger than three days - “Sorry, you have to actually rescope this and break it down into smaller pieces, because I cannot estimate more than this much time with any sort of accuracy.”

Yeah. And that’s just being brutally honest with yourself about the complexities of software development. I think that’s about right. And that’s why in Agile there’s this whole idea of breaking down large stories into smaller stories…

Absolutely.

And until you really break it down to the task level, where you’re saying “How many chunks of my days it’s gonna take me”, it’s kind of just a big question mark… And of course, that means you need tons of details to break it down to that level of granularity, which is why you can then get this complex – sometimes people saying “How long will it take to build a system that does X?” I’m like, “Well, you know, 18 months.”

Right.

But what is X? Or it could be 18 days if you just need something quick and dirty that kicks off a Lambda function and writes into a Google Sheets document. But what is X? And they kind of look at you as if to say “Pff… Why am I getting this kind of attitude?” and it’s like, it’s just so hard to know. And neither of us actually understand what X means. Even if we spend two days writing a 70-page document trying to define what X is, we still haven’t defined it.

[44:11] Right. Well, you get a few days down the road and X has changed, because you have more information, and so now it’s a moving target. One of my favorite/least favorite things - eye-roll or giggle moments, depending on how I feel that day - is when somebody announces a new product or service on Hacker News. Invariably, one comment - at least one - will say “I could build this over the weekend.” Invariably.

[laughs]

“What’s the big deal here? I could build this in a weekend.” And I just have to think “You have not been writing software very long, have you?” Because yeah, you could build a shoddy subset of the main functionality, that only fits the happy path and your particular use case in a weekend… And that’s probably what this thing started as. A lot of products start off as a weekend hack, or just a proof of concept, and I got it working – it’s the 80/20 rule, sort of… You spend 80% of the time on the last 20% of the work; or you’re 90% done and you only have 90% to go, kind of a thing. But we tend to definitely overestimate our skills and underestimate the complexity of these systems, which leads us to Tesler’s Law, the Law of Conservation of Complexity. This law states that there’s a certain amount of complexity in the system which cannot be reduced. So we talk about breaking it down and making it simple, and the cold, hard fact is that sometimes there’s just no further down it can go. The complexity is inherent in the thing that you’re trying to solve. This is one that you said has resonated with you quite a bit.

Yeah, when I read this one, it really did strike me as quite profound. I always loved this idea, a bit like in mathematics, that you can sort of reduce and simplify and make things more elegant and eliminate complexity… But what I like about Tesler’s Law is it does kind of just state the cold, hard truth that there comes a point where you ain’t gonna make that system any more simple. So the shoddy, build-it-over-a-weekend-or-two project - even that, if you look at it as a system overall, there’s a ton of complexity there that maybe the coder hasn’t dealt with. But the complexity still exists for the systems administrator who has to kind of wake up and find a way to restart the system late at night. Or for the end user who has to have a workaround, or whatever else it might be. So of course, you can eliminate unnecessary complexity if you’ve just done something in a way which is needlessly complex, but there will just always be certain things that you can’t get rid of, like–

Timezones.

Yeah, timezones are always gonna be hard. [laughter]

That’s one that hits closer to home for you and I, doesn’t it? We had some scheduling problems because of timezones, and the complexity of that system, right?

The show that nearly never happened… And in fact, one of my earliest software development projects - I was working on chipsets and device drivers, and we had to do some stuff around timezones, and it was extremely painful…

Oh, man…

But yeah, going back to the conservation of complexity, there are some things you just can’t avoid, like say timezones… Or like financial transactions in software, or transactions that you really have to be absolutely certain have happened. It’s pretty much impossible to always be certain, 100% of the time, that something you hope to have done - say, for example, a funds transfer - has actually happened. So whether you deal with that complexity by having an end of day batch process that does some checks and balances, or a dead letter queue for failed messages, or something like a modern system like Kafka [unintelligible 00:47:57.25] or whatever else it might be - there’s just no getting away from the fact that if you’re trying to send a message from A to B, it’s an inherently complicated thing to do in the world of computing. You can’t just magically wave a wand and have a solution that makes that complexity go away.

[48:16] Absolutely. I was thinking about timezones again, and a funny joke people made around recent advancements towards Mars is that the problem with going to Mars is that we have to add a new timezone. And the complexity that comes in with the timezones – the thing about timezones is they’re geopolitical. They wrap around cities, they change because of politics… They’re really complex. And the reason I bring that up is because sometimes the complexity comes not even necessarily from the domain, but from the fact that your software exists over a time span… And it has to apply in the current time span, not timezone. But the world changes. The complexity might be that the ground is swept out from underneath your software over time, so you may have handled the complexity that was in front of you, but you didn’t actually account for the complexity that was coming your way. That’s incredibly hard to do.
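As one tiny, concrete taste of that inherent complexity - a sketch using Python’s standard zoneinfo module: “a day later” means two different things on the night the clocks change, depending on whether you do the arithmetic on the wall clock or on elapsed time.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

eastern = ZoneInfo("America/New_York")
start = datetime(2021, 3, 13, 9, 0, tzinfo=eastern)  # the day before US DST starts

# "Same wall-clock time tomorrow": naive arithmetic on the local fields.
wall_clock = start + timedelta(days=1)

# "Exactly 24 hours later": do the arithmetic in UTC, then convert back.
elapsed = (start.astimezone(timezone.utc) + timedelta(hours=24)).astimezone(eastern)

print(wall_clock.strftime("%Y-%m-%d %H:%M %Z"))  # 2021-03-14 09:00 EDT (only 23 real hours later)
print(elapsed.strftime("%Y-%m-%d %H:%M %Z"))     # 2021-03-14 10:00 EDT (24 real hours later)
```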

Yeah. And it’s kind of coming and going here, isn’t it? Because if you look at things like all the panic that there was about Y2K - well, the panic came from a classic YAGNI, which was people probably were rightly saying “We don’t need to use more than two digits to represent the year. If anyone’s still running this software 30 years from now, the world’s already in a lot of trouble anyway, so don’t worry about dealing with the millennium.”

Yeah… Was it Bill Gates that – somebody said “Who’s ever gonna need more than 48 kilobytes of memory?” I can’t remember what the amount was or who said it, but it was that exact kind of naivety of “Who’s gonna need four digits to represent the year?”

And this is the challenge, isn’t it - we just don’t know. Do we just use a timezone library, or are we flexible about timezones? And like you said, it can be a geopolitical issue. Timezones can change.

Daylight savings…

Even time itself can be changed with daylight savings, and stuff like this… And trying to engineer that into systems, with the flexibility to incorporate that change, could be hugely time-consuming. And then you’ve gotta do that balance between “Do we need it? How badly is it gonna bite us?”

Well, that plays nicely into one of my other favorite laws, which is Murphy’s Law, which doesn’t apply strictly to software systems, but certainly applies to software systems, which is that anything that can go wrong, will go wrong. So I am a [unintelligible 00:50:42.01] pessimist. I like to think that I’m a realist, which makes me maybe an optimistic pessimist, but I tend to think about what’s gonna go wrong. And I think that makes for a pretty good software developer, even though your code can sometimes get more complicated than it needs to be, because you’re accounting for things… But I’ve just lived long enough to know that “You know what - Murphy knew what he was talking about.” If something can go wrong, it’s probably gonna, and you’d better be ready for it.

Oh yeah, absolutely. And it could be when you’re looking at a pull request and thinking “This certain section of code - when I think about it, I’m not convinced that it’s thread-safe. But it’s never really gonna happen that we’ve gotta context-switch at this time”, or whatever.

Right.

But no, it will happen. And when you’re up late at night, trying to fix this issue, that’s definitely one that you’ll be remembering. And I think kind of having that healthy, skeptical view to things breaking is really important. And I suppose it plays nicely into a law that I added recently, which is I suppose a bit more academic, but I realized is super-important, which is the fallacies of distributed computing, which is – I’m gonna have to look for my notes to get some of the best examples… But essentially, when we’re programming, we often kind of just assume that we can do a remote procedure call, and we know that under the hood there’s some stuff going over the wire… But the fallacies of distributed computing are that the network is reliable, and that latency is zero, and the network is secure, topology doesn’t change, there is one administrator… And this is a bit like Murphy’s Law, because basically Murphy’s Law is like “Well, we know that these are fallacies when we’re doing anything that’s distributed.”

Right.

[52:31] Things can go wrong, things can change, people can change servers around or remove something from a rack [unintelligible 00:52:39.26] IP address. We can get weird latency issues… And if the software is running for long enough, we’ll definitely find those issues one way or another.
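A minimal sketch of what taking the first two fallacies seriously looks like in code - every remote call gets a timeout and a bounded, backed-off retry, instead of assuming the network is reliable and latency is zero (the URL here is a placeholder):

```python
import time
import urllib.request

def fetch_with_retries(url: str, attempts: int = 4, timeout: float = 2.0) -> bytes:
    """Assume the call *will* fail sometimes; bound how long we wait for it."""
    delay = 0.5
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:
            # Covers connection errors and timeouts alike.
            if attempt == attempts:
                raise          # give up loudly instead of hiding the failure
            time.sleep(delay)  # back off before the next try
            delay *= 2         # exponential backoff
    raise RuntimeError("unreachable")

# body = fetch_with_retries("https://example.com/health")
```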

Mm-hm. Or sometimes you’ll hit them, but you’ll never actually be able to understand, because of the infrequency of it. So here’s a small example - we have a bot in our Changelog Slack that posts when we publish a new episode; it’s integrated into our system. We publish a new episode maybe 5-6 times a week, but it just posts in there “Hey, new episode of Brain Science. Here’s the link to the Slack community.” And about once every 4-5 months it posts it twice, like boom-boom.

Now, in the scheme of things, this is a good problem to have, because it’s not a big deal, everybody in our Slack channel is like “Yeah, funny. Jerod can’t code.” And I’m probably never gonna actually get to the bottom of that, because it happens so infrequently, and it’s so small stakes… How would I even go about debugging such a thing? And I don’t care enough to do so.
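That double post is classic at-least-once delivery: somewhere, a retry or a second worker fires the same notification twice. A common guard - sketched here with hypothetical names, not the actual Changelog code - is an idempotency check keyed on the episode before sending:

```python
# In a real system this set would live in a database or cache with a
# unique constraint, so two workers can't both "win".
_already_announced: set[str] = set()

def announce_episode(episode_slug: str, post_to_slack) -> bool:
    """Post each episode at most once, even if we get called twice."""
    if episode_slug in _already_announced:
        return False  # duplicate delivery: quietly drop it
    _already_announced.add(episode_slug)
    post_to_slack(f"New episode: {episode_slug} 🎧")
    return True

announce_episode("brain-science-42", print)  # posts
announce_episode("brain-science-42", print)  # no-op on the duplicate
```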

Of course, I could probably find out what’s going on there, but yeah… Computers are hard, especially in terms of distributed computing. Networked computers are extremely complex.

Yeah, they are such a pain. Life would just be so much easier if people did not network things together.

Yeah. We’d just play Solitaire by ourselves.

Yeah. Why did anyone ever come up with networked computing? Let’s just build bigger mainframes and run everything on one big mainframe. It’d definitely make life easier.

Well, we’re getting close to the end of our time here… Any big ones that you had on your list we haven’t talked about? Of course, we’re not gonna be comprehensive. We’ll link up all of these laws we’ve talked about, and we’ll also of course link Dave’s Hacker Laws repo, so you can go read them for yourself… But maybe one or two more real quick, and then we’ll call it a day.

Maybe one that’s not in the repo yet… I’m still considering this one, because just like any larger project, when you get a number of contributors, people come up with ideas… And it’s difficult to know where to draw the line between “Is this something that you can reasonably say is a roughly well-known principle, or just something funny someone’s come up with, which might become a principle someday?” But I did hear about something, and it was when you talked about Murphy’s Law that it [unintelligible 00:55:07.16] called Schrodinger’s Backup, which I felt was great. Schrodinger’s Backup basically says the state of any backup is not known until you restore it.

Oh, I like that one.

Because if you’ve ever done any kind of disaster recovery type stuff, that is exactly the case - until you tested that backup, until you’ve actually restored it, you don’t really know. And you can dry-run things, and you can test things out, but there’s always that uncertainty there.

That’s great, I do like that one.

It ties nicely into Murphy’s Law as well.

Absolutely. It reminds me of how I think about backups, which I say sometimes - nobody actually wants backups. Everybody wants restores. The backup is just a liability, actually. It could be a data breach scenario, it could be wildly wrong, it could be outdated, it could overwrite things that are valid…

There’s all sorts of things that can go wrong with backups… And if we could just skip backups altogether, we would. But what we really want is the restore. So a backup is kind of a means to an end. A restore is what we’re after. So make sure you can restore that backup or it’s completely worthless; that’s Schrodinger’s Backup right there.
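In the spirit of Schrodinger’s Backup, here’s a minimal sketch of “the only backup that counts is a tested restore”: make the archive, then actually restore it into a scratch directory and compare it against the source (paths are placeholders, and the comparison is deliberately simple):

```python
import filecmp
import tarfile
import tempfile
from pathlib import Path

def back_up(source: Path, archive: Path) -> None:
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)

def restore_is_good(source: Path, archive: Path) -> bool:
    """The state of a backup is unknown until you restore it -- so restore it."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(scratch)
        restored = Path(scratch) / source.name
        # Top-level, shallow comparison; a real check would recurse and hash.
        diff = filecmp.dircmp(source, restored)
        return not (diff.left_only or diff.right_only or diff.diff_files)

# archive = Path("backup.tar.gz")
# back_up(Path("data"), archive)
# assert restore_is_good(Path("data"), archive), "backup failed its restore test"
```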

[56:17] Yeah, and if you search around the internet, you’ll probably find a Reddit thread with some horrifying stories of people who’ve had terrible experiences.

Another one which I think is gonna come in soon is Box’s Law, which comes from statistics, which basically says that all models are wrong, but some are useful. And there was a bit of discussion about whether this is valid for software development or not. The discussion kind of came to the conclusion that it’s actually very similar to Joel Spolsky’s Law of Leaky Abstractions, where he says that all non-trivial abstractions are, to a certain extent, leaky. And I think these two are essentially saying the same thing, which is that whenever we’re doing any kind of software development, we’re modeling a system of some kind - we create some kind of abstraction that represents something like a network, or a train timetable, or whatever it might be.

And of course, it’s only an abstraction. There are gonna be mistakes or simplifications; there have to be, because to reproduce it in its entirety would be too time-consuming and complex. But it doesn’t mean that it’s not useful. And I guess to a certain extent that’s where some of the whole idea of the craft of software engineering comes in [unintelligible 00:57:24.09] How do you draw the line of abstraction? Where do you draw the line? Where do you stop? Where do you say “We need more detail”? It’s a process that I guess we’re kind of always learning, and hopefully growing on that one as well, from our experiences.

Yeah, it seems like we’re still in the phase of like “Is it an art? Is it a science?” We can’t yet call it a science, because there aren’t hard and fast rules - “there’s these rules, and there’s idioms, and there’s best practices” - it’s not like civil engineering, where we can just plug in all the numbers and do the math and say with 99.9% certainty “Yes, this bridge can hold that weight.” That’s a science, and we aren’t there yet, because it’s so emergent, and we’re still figuring things out as we go… But I feel like we’re on our path to that… Hopefully. Maybe. Someday.

Yeah. But so many parts of that are changing. Even today a colleague was saying – you know, we’ve got our systems running at 20% CPU utilization, 20% RAM… Could we halve the number of systems? And we were saying “Yeah, in theory you could.” You’d have to look at potential congestion at certain times, like peak loads, and things like this…

Right…

But at some point, as you start to kind of constrain resources, you’re just gonna see weird stuff happen. Other things that you did not expect to be a problem will suddenly be a problem. Suddenly you’ll start getting disk issues, or you’ll get some kind of network issues or something… Because these systems, by their nature, are so complex; there’s so much going on that – you know, we have the abstractions like CPU, network, disk, RAM, whatever, but the physical processes that underlie all of this, and the hardware that underlies it, are highly complex. And complex systems – I mean, this is, I suppose, chaos theory… But complex systems are systems which have wildly unpredictable results, even with quite similar inputs. You know, you run the software on day one, it runs as you expect; you run it on day two, and you get something wildly unpredictable. And that was because of the timezone, or whatever else.

Yeah, exactly. Well, we’ve just touched the surface of these different laws and principles. I will submit to the listeners out there to check out these laws if you haven’t heard of the ones we’ve discussed; there are many others… And I think even just having - maybe not intimate knowledge of all these things, but maybe call it practical or working knowledge will make you a more well-rounded developer or software person. Whatever your role happens to be, these are things that others who’ve come before you have found to be generally true, of course maybe specifically false in specific instances, but useful nonetheless.

Dave, this was a lot of fun. I really appreciate you joining the show and talking to me about these Hacker Laws.

Thanks, Jerod. I really enjoyed it. It’s been great having a conversation. I’ll also use this opportunity to thank the translators for the project. There are a number of people who have just been tirelessly working at translating laws as they come in, which I’m just blown away by. I think that’s fantastic to see.

And also, to shout-out a colleague of mine that started a podcast called The Venture, which is all about venture builders in Asia. It’s quite cool. They’ve got some interesting people talking on that, so that might be one to check out if you’re interested in building new ventures.

Absolutely. Hook me up with the link to that and we’ll put it in the show notes. Links to all the laws discussed, all the things; you know we put them in the notes, right there, for easy clicking… So that’s our show. Thanks so much for listening, and we’ll talk to you next time.

Thanks, Jerod.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
