Ship It! – Episode #71

Modern Software Engineering

delivered continuously with Dave Farley


Dave Farley, co-author of Continuous Delivery, is back to talk about his latest book, Modern Software Engineering, a Top 3 Software Engineering best seller on Amazon UK this September. Shipping good software starts with you giving yourself permission to do a good job. It continues with a healthy curiosity, admitting that you don’t know, and running many experiments, safely, without blowing everything up. And then there is scope creep…

Featuring

Sponsors

Honeycomb – Guess less, know more. When production is running slow, it’s hard to know where problems originate: is it your application code, users, or the underlying systems? With Honeycomb you get a fast, unified, and clear understanding of the one thing driving your business: production. Join the swarm and try Honeycomb free today at honeycomb.io/changelog

FireHydrant – The reliability platform for every developer. Incidents impact everyone, not just SREs. FireHydrant gives teams the tools to maintain service catalogs, respond to incidents, communicate through status pages, and learn with retrospectives. Small teams up to 10 people can get started for free with all FireHydrant features included. No credit card required to sign up. Learn more at firehydrant.com/

DEX: Sort the Madness – Join our friends at Sentry for their upcoming developer experience conference called DEX: Sort the Madness. This event will be in-person in San Francisco AND virtual on September 28. This is a free conference by developers for developers where you’ll sort through the madness and look at ways to improve workflow productivity. Learn more and register

Notes & Links


Gerhard & Dave

Chapters

00:00 Welcome
01:00 Sponsor: Honeycomb
02:26 Intro
07:27 The Modern Software Engineering book
13:00 What principles will survive?
18:18 It's nobody else's job to give you permission to do a good job
25:55 Sponsor: FireHydrant
27:23 Find the balance of speed
30:19 Gerhard makes a confession
34:59 You won't get this right. Not at first.
42:28 Real-world fitness functions
48:51 Everything worth doing is hard
50:56 Plans vs proposals
59:51 Sponsor: Sentry DEX
1:02:26 Video: The real reason Cyberpunk 2077's software failed
1:07:38 David's favorite T-shirt
1:12:15 What's your favorite video?
1:15:14 These principles are universal
1:17:54 David's takeaway
1:21:16 Wrap up
1:21:33 Outro

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

The last time that we spoke was episode five. It was summer of 2021, about a year ago. And now you’re back. I’m so happy to have you back, Dave. Welcome to Ship It!

Thank you. It’s a pleasure. I’m looking forward to our chat.

I cannot believe it’s been a year. When we talked, I was convinced we would talk again in six months, because it was such an enjoyable episode. So my first takeaway – and I’m going to do this differently – is that we have to do this more often, because it’s so much fun.

[laughs] Yeah, well, it’s always fun to talk to you, so maybe, if we can fit it in.

Yeah, we just need to get better at planning. That’s it.

Yeah, always… [laughs]

Yeah. So what happened between last summer and this summer? I know that lots and lots of things did… For you, something is happening every week, right? You have a new episode coming out every week on YouTube. But there’s also a bunch of other things. So – highlights. Your highlights of the last year?

I think we’ve kind of hit our stride a little bit with the YouTube channel. I’m very proud of the YouTube channel. We started it by accident when the pandemic started and I wasn’t traveling the world. But now, I’m being opinionated on YouTube, largely about software… But I get the most glorious feedback from people all of the time, saying “I tried what you said about test-driven development, and it worked. It had this remarkable impact.” Or… Yesterday, I had somebody that contacted me saying “Our deployment pipeline caught a catastrophic bug on its way to production yesterday. So thank you.”

That’s absolutely delightful. So the pleasure that I get from giving my opinion on how to do software well, and people listening to that opinion, and applying it to their own work and finding sometimes that it works is enormous. So I get a huge amount of job satisfaction. To be honest, sometimes it feels a little bit like a job, because we’ve been releasing a video every week, really. We release videos every Wednesday evening at 7 PM UK time, and so far we haven’t missed one since March 2020. And there are times where I’m chomping at the bit, because I’ve got a huge list of things/topics that I want to cover, and there are times when I’m thinking, “Damn, what am I going to cover next week?” or whatever. And sometimes I’ve got a bit of a buffer, I’ve got a few episodes in the can, and sometimes I haven’t, depending on life and how that treats me. But on the whole, it’s been a great pleasure.

Some of my high points… I’m quite proud of the video that we did last week, and another one a few weeks ago. So for the first time I did a video about Team Topologies – the way that you can use teams as a tool to structure development at larger scales – and the fantastic book by Matthew Skelton and Manuel Pais that describes that model. So I talked a little bit about that. And the current video, the one that was released last week – we’ll be releasing another one later today – is about platform teams, and platforms, and some of my approach to platform design… How to design systems that are loosely coupled, with services independent of other parts of the system, and some of the strategies for that. I was quite pleased with that, because it’s something that’s been in my kit bag for a long time as a software developer, and I’d wanted to talk about it for a long time.

[06:18] The popularity of the episodes on YouTube inevitably goes up or down, largely depending on whether you manage to hit the YouTube algorithm in the right place or not… And that’s kind of an entertaining game. My family, who work with me on the production and the marketing and so on for the YouTube channel – we all play the game of watching the video and tracking where it’s going, and so on. So it’s been a lot of fun.

The other side, since we spoke last – and this sounds like I’m advertising; yeah, I’m advertising, but really I’m just telling you what’s going on – is that we’ve done a lot with the self-training courses that teach different aspects of software development, and we’re starting to see the hockey-stick effect of that taking off. So we’re selling quite a lot of those courses now. I’m very proud of those too – they’re very good – and that side seems to be self-sustaining. At some level I still need to work and get paid for these things, and so this is all paying for itself now, which is nice.

Yeah. There’s one other thing, and I remember it because we talked about it off the recording when we recorded last year, and we said, “Okay, we have to talk again in about six months”, because at the time you were writing the Modern Software Engineering book, and I was really looking forward to it coming out. Of course, it did come out, in January of this year. I could hardly wait to read it. I got it in January, I read it, I left a review… And I really enjoyed it. So that was one other thing that came out, and it was one of my favorite things that you produced this year.

Yeah, yeah. I was thinking in terms of the YouTube channel and stuff, but you’re right, that’s been a big event for me; that book was two or three years in the writing. I tend to write fairly slowly, for books anyway, because I’m usually doing other things, and so I write in the gaps between other things… But I’m very proud of that book. I was a little nervous about the release when we spoke last, because I wasn’t quite sure how it would land. It’s a different kind of book, and I think it might fool some people.

Somebody in a review recently said that it was a very philosophical book about software development, and I think it kind of is. I think that’s correct. It’s not a “put tab A into slot B” kind of book. I think of it as a somewhat thoughtful book about software development and what it takes to do it well. And one of the things that happens as one gets older and more experienced is that one gets a broader context on ideas, and a little bit of ability to do the meta thing – to watch yourself and see why the things that work work, and why the things that don’t work don’t… And I wanted to try and synthesize some of that. I came to the conclusion that there were some fairly deep principles common to all of the good software development that I’d seen, and I wanted to try and capture those. And that’s what the Modern Software Engineering book was really targeted at – what are the durable ideas that go beyond programming languages, or frameworks, or even design patterns to some degree? What are those principles that are table stakes – that, at least in my opinion, it’s impossible to do a good job without? I wanted to synthesize those into a description, and I’ve got my version of that. The Modern Software Engineering book is my version of that.

[10:07] One of the fascinating things – and it’s understandable; if I’m right, these ideas are durable, they will have been around for a long time, and they will be around for a long time – is that I keep seeing other sources that reinforce my opinion on them. Recently I saw a fantastic presentation recorded by Michael Feathers ten years ago. I hadn’t seen it when I wrote the book, but he’s talking about one of the ideas that I describe in the book, which is that testability drives quality in design. So striving for testability in your software improves the quality of its design. He doesn’t quite say it that way around – he says it differently to me – but we’re talking about the same idea. And that pleases me. I don’t think it’s a book filled with brand new ideas, but I think it pulls together a lot of ideas and organizes them in a way that makes their relationships easier to understand and to consume.

One of the downsides – one of the criticisms that I’ve seen in feedback on the book – is that if you read it in a narrow way, you might see it as repetitive. I don’t see it that way, but I can understand the criticism, because it loops around; these things are interlinked, and you need to talk about the ways in which they relate to one another. But I’m very proud of it. And I am proud that it’s landed extremely well, as far as I can tell. It’s regularly a bestseller on Amazon in one or two, sometimes three, of the categories that it’s in. And it’s regularly in the top few thousand books on Amazon UK at least, which is usually a sign of it going pretty well.

Yeah. I’m sure a lot of it has to do with my excellent review, which I left in January…

I’m sure it is. [laughs] I’m fairly convinced it is.

People have found it helpful so far, so… [laughter]

One of them might have been me. [laughs]

Exactly. And the rest of your team, so I think we know which people found it helpful… [laughter] But I even included a picture of my notes from the book, and even like a little drawing, and I remember putting like a loop there, the OODA loop, or whatever you want to call it… For me, to be honest, the title - I was expecting something else.

But I think the title is trying to hide the longevity of the ideas. Because it says modern. Well, they will always be modern, whether it’s 10 years in the future, or 20 years. I’m pretty sure this is a classic.

Well, thank you.

Yeah. It’s very difficult to capture ideas that are so compressed, because that’s what principles are – a lot of experience goes into writing them. And to have a series of them that work well together is really, really hard. To write something relevant 10 years into the future – very difficult. Any framework, any programming language – even Java; 10 years from now, I don’t think it will be as relevant as it is today. It has stood the test of time really well, but there are new languages coming, new approaches coming; we have functions as a service, we have whole new paradigm shifts in the industry. So what are the principles that will survive all those changes? Because change is constant, and it’s big, and it accelerates. So for me, it’s a classic. It’s something that I will keep in my library, and I intend to re-read it every couple of years. And if you think of it that way, it’s a book there is pleasure in owning; few books are like that for me. One of them is a calculus book… just as an example.

Yeah… Well, that’s lovely. Thank you very much. And genuinely, thank you for the review. I think it’s one of the more popular reviews on Amazon. Lots and lots of people think it’s helpful. It’s usually top of the list of reviews, I think.

[14:21] Pictures make a big difference. If someone took the time to handwrite notes from the book, took a picture and posted it, it means they really enjoyed it.

Yeah, yeah. So thank you for that.

You’re welcome.

But I agree with you. I’ve been working in professional software development for close on 40 years, and – I counted up for the book – I’ve written software professionally in something like 20-odd languages; I’ve used probably hundreds of different frameworks, and thousands of libraries of different kinds over that time. And as you say, all of them are to some degree ephemeral. There aren’t many people who write software professionally in assembler anymore, for example. I did. But nevertheless, I think some of these ideas have had an impact on all of those things. These days I largely work as a consultant, advising companies building software of all different kinds, with all different kinds of technologies, and these ideas apply to all of them.

These are ideas where, if you work in a way that’s focused on doing more of these sorts of things – being more iterative, focusing on feedback, being more experimental – and on building code that’s more modular, cohesive, has better separation of concerns, is loosely coupled in the right places, and has good abstractions, then your software is better. Whatever the nature of the software, whatever it’s for, whatever technology you’re using, it’s just better than the alternative if it exhibits those properties. That’s something important that, implicitly, at some level, you know if you’re good and experienced… But I’ve not seen it synthesized in a way that can help people who haven’t yet got to the point in their careers where they’re seeing it. And I think that’s what an engineering discipline for software ought to be able to do: it ought to improve our chances of success.
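
To make the testability point from a few paragraphs back concrete, here is a tiny, invented Python sketch – not from the book. Making the logic testable forces a seam between the time-reading concern and the greeting logic, and the design ends up more modular and loosely coupled as a side effect:

```python
from datetime import datetime

class Clock:
    """Production implementation: reads the real system clock."""
    def now_hour(self) -> int:
        return datetime.now().hour

def greeting(clock: Clock) -> str:
    # The clock is injected rather than hard-wired, so this logic is
    # trivially testable -- and, not coincidentally, loosely coupled.
    return "Good morning" if clock.now_hour() < 12 else "Good afternoon"

class FixedClock(Clock):
    """Test double: a clock frozen at a chosen hour."""
    def __init__(self, hour: int):
        self.hour = hour
    def now_hour(self) -> int:
        return self.hour

assert greeting(FixedClock(9)) == "Good morning"
assert greeting(FixedClock(15)) == "Good afternoon"
```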

I think one of the bridges that we burned historically in software development was imagining that software development is some kind of cookie-cutter process, and it’s not. It’s an intensely creative process, and we need to optimize for that. So the book is divided into – really – three sections. It talks about a series of ideas that are focused on optimizing for learning, because that’s deeply part of what we do as a discipline. And it talks about a set of ideas for managing the complexity of the systems that we’re building, so that we’re able to maintain and sustain our ability to change our systems over time.

And the third part is about how we pull all of those together and apply them – techniques and ideas like continuous delivery, optimizing for testability, speed, those sorts of things, that we can use as tools to drive our ability to learn faster and to manage complexity more effectively. But if we focus on optimizing for learning and optimizing for managing complexity, I think we inevitably increase our chances of success. We can never guarantee success, but we’re a damn sight more likely to do a better job if we do those things than if we don’t. And that’s really what interested me and what I was trying to get to. In my personal experience it works, and now I’m very pleased that it seems to be working for other people as well, thinking in those kinds of terms.

[18:17] Yeah. Those are some really powerful ideas. I’ve learned them the hard way. My career is maybe half of yours, maybe, maybe less… But it’s interesting how we all seem to converge on the same ideas, independently, that work. And then finding like-minded people that you realize, “Actually, it’s not just me. Many others had this problem, and this is a potential solution.” And guess what - it may work, it’s likely to work… You have to try it out to see what exactly needs changing about this approach. But the fundamentals - they will be the same; what changes is the implementation based on your context. And that’s what most people get stuck on. They think that - again, going back to the cookie cutter process… You can’t take specific steps that you apply, and it will just work. You’ll need to adapt, you’ll need to change it, you’ll need to understand why you’re doing certain things. And when you do that, you’re more likely to succeed. And the keyword is “more likely” to succeed. It still depends on a lot of other things that happen. And most of it is other people not understanding, or other people fighting against it; they want a different way, they want maybe a top-down approach. They want “No, I told you to do this, you have to do this. And you have to do it by tomorrow.” And people say, “Well, that’s not how it works. Like, we can’t. It’s impossible.”

So what would you say to people that are in those situations that they are starting to understand these principles, they’re starting to apply them, but it just goes wrong in a thousand different ways, and it’s not because of these principles?

I think there are a number of aspects to that… One of the things that I’ve said on my YouTube channel a couple of times, and I think I wrote it in the book: I don’t think it’s anybody else’s job to give us permission to do a good job. That’s one of those things that we take responsibility for. In our own work, it’s our responsibility to do what a good job is. Now, of course, there’s always the possibility that we work somewhere dysfunctional, for people who don’t understand the problem. But the first thing, the thing that’s most clearly in our control, is how we think about and approach our jobs.

So the starting point is: don’t shortcut, don’t pare down your estimates, don’t say to your boss who’s saying “Feature, feature, feature”, “Well, we could deliver it next week if we didn’t do any testing, and we didn’t do a good job of design.” That’s not doing a good job; that’s not going to give you or your boss success in the long term. And if I could wave a magic wand – if I could make one idea stick across our industry – it’s that software development is a long-term game, not a short-term game. I can gain time by doing a crap job for a week or two. And if I’m leaving, and somebody’s telling me that I’ve got to hit a deadline in a week, I could cut all sorts of corners, write software that doesn’t work very well and is almost unmaintainable, and then I could leave and suffer no consequences. But that’s a very, very short-term view of the world.

[21:44] The reality of software development is that we’re usually in a job for at least a year or two; we’re going to be around for a little while. We are going to suffer the consequences of our actions. And even if it’s not us, somebody else is going to suffer the consequences of our actions, because software lives for a long time, and is worked on for a long time. I can’t remember the numbers now, but they are hugely in favor of the amount of time that software spends in maintenance over the time it spends in development. Therefore, we need to be building software that is a nice, easy place to work. And if we optimize for that – if we optimize to make sure that our software is readable, concise, understandable, testable – then that’s going to allow us to go into it later, when we’ve forgotten what it was, and change it sensibly. If we have ideas about where the lines are in our software that separate one set of responsibilities from another, so we can work on one part without compromising another, that’s going to make it easy to maintain those sorts of things.

So if we do that, then that maintains our ability to change. It means that if I wrote the crap this week and I’ve got to fix it next week, that was no longer a win; it no longer saved me time. And there’s data that backs that up. If you look at the State of DevOps Report, the DORA metrics, and the Accelerate book, that sort of information, it says that teams that score highly on the metrics – high performers on that sociological analysis of software development – spend 44% more of their time on new features than low performers. And what are the measures they’re measuring? The speed with which we deliver a change into production, and the quality of the changes. So throughput and stability are the metrics the measures are based on. Stability is a measure of quality. And if you want to go fast, you’ve got to do high-quality work, because that’s what’s sustainable.
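
For the shape of those two dimensions, here is a rough, hypothetical sketch of computing throughput and stability from deployment records. The field names are invented, and the real DORA research is survey-based and far more careful than this:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean

@dataclass
class Deployment:
    committed_at: datetime               # when the change was committed
    deployed_at: datetime                # when it reached production
    failed: bool = False                 # did it cause a production incident?
    restored_at: datetime | None = None  # when service was restored, if it failed

def throughput(deploys: list[Deployment], window_days: int) -> dict:
    """Throughput: how often, and how quickly, changes reach production."""
    lead_times = [d.deployed_at - d.committed_at for d in deploys]
    return {
        "deploys_per_day": len(deploys) / window_days,
        "mean_lead_time": sum(lead_times, timedelta()) / len(lead_times),
    }

def stability(deploys: list[Deployment]) -> dict:
    """Stability: how often changes fail, and how quickly we recover."""
    failures = [d for d in deploys if d.failed]
    return {
        "change_failure_rate": len(failures) / len(deploys),
        "mean_time_to_restore_s": mean(
            (d.restored_at - d.deployed_at).total_seconds() for d in failures
        ) if failures else 0.0,
    }
```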

So cutting corners is not only short-term, naive from the point of view of you as the developer, because you’re going to suffer later when you’ve gotta fix the crap that you wrote last week, but it’s also naive from the point of view of the organizations that employ us, because now you’re building worse software, slower. And our objective is obviously the opposite of that. We want to build better software, faster.

So optimizing for the short-term is dumb, and so the first step that we can take is to make our own choices, and take responsibility for the quality of the work that you do. It’s not your boss’s job to give you permission to write tests against your code. That’s your job. And I think often those sorts of things are - forgive me being a grumpy, old man, but I think sometimes they’re used as a bit of an excuse. “I don’t really want to write tests, so I’ll blame my boss for not allowing me to write tests” kind of thing. But whichever way around, that’s what it takes to write high-quality software. So do the stuff that it takes to write high-quality software.

And the other thing the data says: you’re going to have a nicer time. You’re going to enjoy it more, you’re going to build more software, and you’re going to have a better time while you’re doing it. And that’s in the interest of your organization. So the second part is important, too. If you do work in one of those dysfunctional organizations that’s pressuring you in the wrong way, some of the sources I’ve just pointed you at – the Accelerate book, the State of DevOps Report, and so on – are information you can use to try to start changing their minds, to point out that they’re being irrational. If they want to deliver software faster, then they should be actively insisting that you do a higher-quality job.

So if I was to do a summary of what you just told us, I would say that, first of all, go slow, take your time, and do it right.

And while it may appear slow, you’re actually just going smooth; you’re optimizing for smoothness, long-term. You don’t want to go fast, then slow, then fast, then slow… Or go fast for quite some time and then start going slow, and wonder, “Why? Why am I going this slow?” Well, there are many reasons, and they’re rooted in what you were doing in the past… So optimize for that nice, smooth delivery, and figure out what your team’s pace is.

Everything has a natural pace, and if you try to go against it, if you’re trying to go too slow, people will get demoralized, because you’re just like dragging your heels; you’re just wasting time, basically. If you try to go too fast, people get frustrated, because they can’t do proper work. So find that balance, show up every day, and then the rest will just happen, basically… You know, just let it unfold.

Yes. And the key idea, the point – one of the things that really convinced me to be a big fan of the DORA metrics and the Accelerate Book and all of those sorts of things came fairly early on; it was either their first or second release of the State of DevOps Report, I think… But there was a statement in one of those reports, 2014, 2015 around that kind of time, that said “There’s no trade-off between speed and quality.” And I kind of knew that implicitly, but I didn’t really know it. I didn’t really grok it at that point. And I certainly didn’t have any data to back up my assumption, my belief that doing high-quality work mattered in terms of being able to be productive, as well as everything else.

I wanted to do high-quality work, because I was a software developer, and I like to write code that’s elegant, and well-tested, and all of those kinds of things… But before then I was less forceful in my arguments for quality than I am now. Now I think there’s no argument for doing low-quality work. The quick wins of cutting corners on quality are so short-term that they’re irrelevant. The lines cross – the point where you’re going faster by doing high-quality work – within a small number of weeks; a month or two at the most. So if you’ve got a deadline that’s more than a month out, you must do high-quality work if you want to hit that deadline. That’s the way to optimize for it, not cutting corners on quality. And still I see teams and organizations that are almost structured to cut corners on quality, which is crazy.

[30:18] Now, I do have to make an admission… It wasn’t that long ago, a few months back, that we cut corners in my team, and we shipped code that wasn’t really tested… But the only reason we did that was to learn. We took the long-term hit so that in the short term we could learn more about what works and what the final design should look like… We literally stitched it together so that it kind of worked, and we could learn what the right solution looks like.

What happened afterwards is that we realized some of the tests were very difficult to write, because of the systems we integrate with. So you need to write some integration tests, but focus most of your time on unit tests – the higher-speed tests that take less than a second to run, rather than minutes.
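
As a hypothetical illustration of that balance – the `quote` function, the `integration` marker, and the pricing client below are all invented, not from Gerhard’s codebase – the idea is many sub-second unit tests in the inner loop, with the few slow integration tests split out and run separately:

```python
import pytest

def quote(items: int, unit_price: float, discount: float = 0.0) -> float:
    """Toy function under test: total price after discount."""
    return items * unit_price * (1.0 - discount)

def test_quote_applies_discount():
    # Unit test: pure logic, no I/O, runs in well under a second.
    assert quote(items=3, unit_price=10.0, discount=0.1) == 27.0

@pytest.mark.integration  # marker registered in pytest.ini; excluded from the fast loop
def test_quote_against_real_pricing_service():
    # Integration test: would exercise the slow external system; run via
    # `pytest -m integration` in CI rather than on every save.
    from pricing_client import PricingService  # hypothetical external dependency
    assert quote(items=1, unit_price=PricingService().unit_price("SKU-1")) > 0.0
```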

So what we realized is that we were able to test the idea with users. We said “This is alpha software. We just want to know, like, does this look right? And can you tell us what this is missing?” And then we took the time to do it right. What do you think about that approach? Do you think it’s still a no?

No, I don’t think that’s a no. I think that’s absolutely fine. But you do it under controlled circumstances. That’s the difference. In the original Extreme Programming book, released in the late ‘90s by Kent Beck, there was an idea called spikes. And a spike is a different kind of investigation: what you’re really interested in is not producing functions or features for the user’s benefit; you’re producing something that you’re going to learn from. In that circumstance, you don’t necessarily have to be doing production-quality work; you just want to get to the answer as quickly as possible. I would couch that in slightly different terms, in the way that I describe things… It’s all about working experimentally, and there are different kinds of experiments. This is an experiment where we want to try something out as a conscious stepping stone to making some choices about the direction of our product, or our team, or whatever else.

So I think it’s perfectly acceptable at that point to control the variables in a way that it’s not going to damage anything; you don’t want to be releasing shoddy code into production for everybody, but maybe for a small group of alpha users, or something like that. Absolutely sensible and acceptable to learn.

This kind of gets back to what we were talking about before: none of this is simple. If there were a recipe – a sequence of steps that we could follow that would always work out and give us the answer every time – we could write code to do that, and we’d be out of a job. It’s not that simple. It takes human ingenuity, human decision-making… And let’s be clear, it takes smart people to build good software. And let’s not be ashamed of that; part of the joy of the job is solving complicated problems. Engineering is about trying to use all of the tools, whatever they might be – intellectual, physical, whatever – to bring them all together and do the best job that we can. And that includes applying all of our experience, and skills, and talents to do that.

[33:54] So I see nothing wrong at all with that. But I always try to be cautious in the way that I talk about these things. You and I were both cautious earlier on when we said it doesn’t guarantee success, but it increases the probability of success. That’s the best that we can ever do. There is no guarantee of success. We could do the perfect job of software development, we could be flying as a team, and be building something that nobody wants to use.

Exactly. That’s exactly right.

That’s not a success. So we’ve got to learn, we’ve got to figure out – and part of what we’ve got to learn is are we building the right products? Are we building them in the right way? Do they resonate with people? And so on. And none of that is simple. None of that is the sort of thing that we can just kind of put a measuring stick on and say yes or no. So we’ve got to carry out these experiments. Sometimes they’re subjective, sometimes they’re quantitative, but that’s how we learn - trying stuff out, seeing what works, see what doesn’t, and maintaining our ability to make progress in this sea of the unknown, to some extent.

I think it’s about being humble, admitting that we’ll mostly be wrong… I mean, we will be more wrong than right; being able to accept that – “Hey, I was wrong” – having a team that is kind about mistakes, and optimizing for getting it out there as soon as possible, and as often as possible… Because let’s be honest, you will not get it right, maybe even on the 10th try. You keep trying, and eventually things start making sense. But to do that, you cannot design the perfect plan or the perfect software that will just work once you get it out there. A perfect plan and a perfect delivery mechanism – that doesn’t exist. Nowhere.

And I think you talk about this often, where the perfect plan – it doesn’t matter how long you take to think it through, to plan it, to manage it. That’s not what this looks like. This game is played differently, and the long-term approach is key. If you’re optimizing for months or weeks, forget about it. It’s years, maybe even decades, in some cases.

Absolutely. And there are a few things that you said in there that I liked – what you said about teams being kind to one another. I think that’s important. And being tolerant, allowing ourselves the freedom to get things wrong… I think very deeply about that in my own work, at whatever level… Because as you say, I start out from the assumption that nearly everything I do is going to be wrong in some way. So how am I going to cope with that, and how am I going to allow myself the freedom to change my mind? Technically, if we’re designing software, that gets back to the stuff about managing complexity – all of the things about managing complexity are about giving us the freedom to make a mistake and correct it later on without throwing away everything that we’ve ever done. I think that’s really important.

I think of myself as a software developer as being defensive in these terms. I’m going to start out trying to design my systems and code my systems in ways that allow me to change my mind about some of the things as I move forward. And this is true about everything. This is true about the goals for the software. If we go and ask our users, they don’t know what they want. If we ask our product owners, they don’t know what’s going to work for users. If you ask the developers, they don’t know what’s going to work for users either. We’ve got to try stuff out and find what lands and what doesn’t.

I’m occasionally guilty of using soundbites, and one of the soundbites I’ve used in this space is “A perfect plan is a perfectly stupid idea.” Because a perfect plan has one solution; you’re precisely targeting this one point in time and space that you’re trying to hit. And beyond a few milliseconds out, there’s almost no chance that you’re going to be able to perfectly hit that target.

[38:03] One of the ideas in the “Optimizing for learning” section of my book is iteration. Well, two ideas – iteration and feedback. If we work iteratively, we make progress in small steps, and each step gives us an opportunity to reflect on the progress that we’ve made. We’re going to have a target; our plans are in the form of a flag on a hill that we’d like to reach. Those are loose, slightly imprecise, inexact in terms of what they want to do… But it’s the moonshot. “Wouldn’t it be wonderful if we could achieve that goal?” And then we start iterating. And as long as we have some kind of fitness function – a way of measuring “Are we closer to or further from that target?” – then even if we just did a random walk, iterating without any intelligence at all, we could try something out, ask “Does that move us closer to our goal, or further away?”, discard the things that move us further away, and keep the things that move us closer. If we just did that, we’d hit the goal. Even if we move the goal – even if partway through we think “That’s the wrong goal; we’re going to shift it over here” – we’d change our fitness function, and we’d still hit the goal. That’s the power of iteration. With an iterative approach there are many ways of winning. So that’s one way in which it improves our chances of success: we can find multiple routes to our goal.
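
A toy sketch of that random-walk point (ours, not Dave’s): propose a step with no intelligence at all, keep it only if the fitness function says it moves us closer – and even when the goal moves partway through, the loop still converges:

```python
import random

def fitness(position: float, goal: float) -> float:
    """Lower is better: how far we are from where we want to be."""
    return abs(goal - position)

def iterate_toward(goal: float, position: float = 0.0, steps: int = 10_000):
    for step in range(steps):
        if step == steps // 2:
            goal *= 2                      # the goal moves partway through
        candidate = position + random.uniform(-1.0, 1.0)  # no intelligence at all
        if fitness(candidate, goal) < fitness(position, goal):
            position = candidate           # keep steps that move us closer...
        # ...and silently discard the rest
    return position, goal

pos, goal = iterate_toward(goal=100.0)
print(f"ended at {pos:.2f}; the (moved) goal was {goal:.2f}")  # ends near 200.0
```

In practice the “steps” are releases and the fitness function is whatever feedback tells you whether users are better off, but the convergence logic is the same.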

So I think planning is useful. As one of the World War II generals – I’ve forgotten who it was – famously said, “Planning is wonderful. Plans are stupid.” I like planning, I like thinking about what the future is like… You know, I once worked on a team where the project manager locked themselves away in a room for two weeks writing Gantt charts… And guess what, the Gantt chart didn’t meet reality at any point. Even when she came out of the room with the Gantt chart, we had already moved on from there. [laughs] It doesn’t work. It’s more complicated than that… And that’s one of those naive things.

So see it as a complex, adaptive system: the whole environment – the software, the people building it, and the customers using it – is changing all of the time, and if you make one change in one place, it changes what other people perceive of it. That’s just where we live. That’s the nature of the place that we inhabit as human beings and as software professionals. So suck it up; we’ve got to find ways of working there. And that takes iteration, feedback, working experimentally – those sorts of ideas – to be able to navigate that kind of space. And some of our guesses will be wrong. That’s fine. Nobody knows the right answer. There’s a common refrain of software development teams: “Oh, we would have done a good job if only the requirements had been correct…” They’re never going to be correct. That’s just an illusion. Nobody knows the answer. Even if you have the user sitting in your room, that’s only their guess.

One of the things I liked that Steve Jobs used to say is “How will the users know what they want until I tell them?” That sounds incredibly arrogant, and he was – I think he was an incredibly arrogant man. But nevertheless, there’s a truth there. If you’re doing something innovative, you ought to be ahead of the users; you weren’t going to get an iPad or an iPhone by asking users what they wanted. You’ve got to think ahead. But at the same time, you want a vision that your users are going to love, so you’ve got to be listening to your users.

[42:07] So it’s not simple – nobody knows the truth. Steve Jobs got it wrong lots of times as well, and built crazy things that nobody liked… That’s just life. And so working in ways that allow us the freedom to make those mistakes and correct for them is, to my mind, the only sane strategy.

Can you think of examples of real-world fitness functions that teams can apply to determine if, first of all, they’re going in the right direction, and are they closer or further away from that goal? What would that look like in the real world?

I think there are a variety of those kinds of things, but they’re always quite contextual. One of my friends has recently written a very good book about SRE. He works at Siemens Healthcare, and they adopted SRE at Siemens and got some really good results. Reading his book slightly changed my thinking about SRE… One of the ideas there – in my terminology, it’s about working experimentally – is that when you’re building something, you define your service-level indicators: “This is the measure that’s going to tell me whether this is working well or not.” Then you set your service-level objectives: what scores on that measurement you would count as success. And that makes sense. Nearly always, when you talk about SRE, people think of that in terms of technical measures: throughput, CPU utilization, those sorts of things. And that’s fine; that’s good. Those are one form of those measures, and for certain classes of changes they’re extremely useful. But there’s another group that are equally valid, and that this model works equally well for, I think, which is the more business-focused things. And I think one of the reasons we tend not to think in those terms is that they’re so much more contextual. They’re going to be specific – maybe not unique, but specific – to each individual feature. If you’re building a feature that’s intended to recruit more users, then maybe your service-level indicator is new registrations, and your service-level objective is that you want registrations up by 80%.

This is amazing, because you obviously didn’t know about this, but my team did exactly that.

[laughs]

The SLI was the number of active users. We had this new feature, and we made the assumption that it would generate more users of the service. And the SLO was 100 – 100 weekly. The measure of success was 100 weekly active users. Obviously the starting point was zero, so how far could we get on that scale? And that’s an example of “building this new thing will generate this many active users, in this timespan – and if not, why not? What is missing? Is the idea fundamentally wrong, or is the way we implemented it wrong?” That’s exactly the context in which we were trying to learn.

So we took a few weeks to build it as quickly as possible, to figure out how many weekly active users we could get in one month. And we gave ourselves two months total – actually a whole quarter, but part of that was the proposal; we’ll come back to that. But the idea was: the SLI was weekly active users, and the SLO was 100. That was it.
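
In code, the experiment Gerhard describes is tiny, which is part of its appeal. A minimal sketch with invented event data – the SLI is weekly active users of the new feature, the SLO is 100:

```python
from datetime import datetime, timedelta

WAU_SLO = 100  # the success threshold the team committed to

def weekly_active_users(events: list[dict], week_start: datetime) -> int:
    """SLI: distinct users who touched the new feature in the given week."""
    # e.g. events = [{"user_id": "u1", "timestamp": datetime(2022, 8, 1, 9, 30)}, ...]
    week_end = week_start + timedelta(days=7)
    return len({e["user_id"] for e in events
                if week_start <= e["timestamp"] < week_end})

def verdict(events: list[dict], week_start: datetime) -> str:
    wau = weekly_active_users(events, week_start)
    if wau >= WAU_SLO:
        return f"SLO met ({wau} >= {WAU_SLO}): the assumption held; invest further"
    return f"SLO missed ({wau} < {WAU_SLO}): wrong idea, or wrong implementation?"
```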

Exactly. So there are loads of different ways in which that’s valuable as an approach. Of course, the actual measures are going to depend on the nature of the feature. Maybe it’s not about recruiting users; maybe it’s about making more money, or getting more throughput, or people recommending that their friends come and play games, or maybe just improving the share price of the company – I don’t know. But like any experiment, we should think carefully about the impact that we’re trying to achieve… And Gojko Adzic talks about this in his wonderful book Impact Mapping. He talks about focusing on impact, which is another way of thinking about some of this stuff.

[46:23] But if we do that kind of thinking, we’re going to come up with things that are sometimes easy to measure, and sometimes almost impossible to measure. Take the one I just mentioned: it’s going to improve our share price. How do we know that it was this that improved our share price, rather than anything else? How do we control our experiment? How do we control the variables in our experiment in a way that lets us determine its impact? None of this is simple. It’s incredibly difficult. But just thinking about it makes it clearer what it is we’re trying to achieve, and it helps us along the road. And if we can come up with measures that are simple and easy, that’s great.

Netflix used something that they call a Canary Index, which is an incredibly similar idea. Basically, they set up what they’re going to measure for each change, they say what the objective is, and in their deployment automation, if the canary doesn’t hit its service-level objectives, they pull it from production as part of their release process. This is all just about working a little bit smarter – being a little bit more thoughtful about how the changes that we make land with our users. And ultimately, that’s what we are for. Our job is to build software that’s useful to people. So figuring out how we measure that, as part of the development of each new change, is a very good, disciplined way of thinking and of working a bit more experimentally. It doesn’t have to be heavyweight; it can be simple. But just as a starting point, ask: “How would you know whether it was a success or a failure? What would it tell you?” That’s going to change your perspective on the features that you’re building and how you build them, for the better.
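
The gating idea itself is simple to sketch, even though Netflix’s real tooling is far more sophisticated. A rough, hypothetical outline – every function here is a placeholder, not a real API:

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    name: str
    slos: dict = field(default_factory=dict)  # per-change objectives

def canary_meets_slos(metrics: dict, slos: dict) -> bool:
    """Compare observed canary metrics against the change's objectives."""
    return all(metrics.get(name, 0) >= target for name, target in slos.items())

def release(change: Change):
    deploy_to_canary(change)           # placeholder: ship to a small slice of users
    metrics = observe_canary(change)   # placeholder: gather SLIs for a while
    if canary_meets_slos(metrics, change.slos):
        promote_everywhere(change)     # roll the change out fully
    else:
        roll_back(change)              # pull it from production automatically

# Stub implementations so the sketch runs; real ones would drive the
# deployment tooling and query a metrics store.
def deploy_to_canary(change): pass
def observe_canary(change): return {"new_registrations": 72}
def promote_everywhere(change): print(f"{change.name}: promoted")
def roll_back(change): print(f"{change.name}: rolled back")

release(Change("signup-revamp", slos={"new_registrations": 50}))  # -> promoted
```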

Yeah, so those SLIs and SLOs are important even outside of SRE. And if you start applying them to other things, especially to the business – well, guess what? The business will be happy, SREs will be happy, and developers will be happy too, because the measure is not lines of code, not PRs merged, not test coverage, all those things. I mean, they are helpful to some people and in some contexts, but the stuff that really matters is this: the impact on the business. And figuring that out is really hard, which is why most people won’t even try; it’s just too hard. But it’s worth it. By the way, everything worth doing is hard.

Yes, indeed. But let’s just think about what that means for a minute. We’re going to build a feature, and we’re not going to bother figuring out whether the feature is useful or not. What does that mean? How is that a good idea? We’re just going to randomly throw features at the wall and cross our fingers that some of them stick. That’s not likely to be very successful. That’s random development, in a way. If we want to target our development and steer it in the direction of doing things that are useful, rather than things that are not, we need to be a bit smarter than that.

And let’s be fair, a large part of the history of our industry is people just building features and throwing them at the wall. There’s great data, and there has been for years, about the proportion of features in commercial software that are ever used by users. If I remember rightly, Microsoft released some data many, many years ago saying that something like 60% of the features that they built were never used by anybody.

Oh, yes.

[50:03] So that’s 60% waste, right there. And by our industry’s standards, Microsoft are pretty good at writing software compared to most organizations… So they’re not a failure case, really… That’s just how the industry worked. We can do better than that by being more thoughtful. It’s hard – incredibly hard sometimes – to figure out, first, what the service-level objectives or service-level indicators are, and second, how to control the variables so that we understand the measurements well enough to know that this change had that impact, rather than some other change. But that’s what it takes to work a bit more experimentally. And when we do that, we get better results.

That is a great answer to the fitness function question. Thank you very much. Amazing one.

It was a pleasure.

The other follow-up question which I have – and they’re somewhat related – is this: we talked about planning, we talked about Gantt charts that no longer match reality by the time you’ve created them… So what are your thoughts on plans and Gantt charts versus proposals? Proposals that teams make for a new feature, or some new initiative – how important do you think they are? What do you think they should contain, if you think they’re a good idea at all… Let’s just start there.

The way that I think about that question is that this is in the territory of #NoEstimates – that school of thinking about development. And I confess that, emotionally, I’m a little bit on the fence; I’m on the side of no estimates. I think in reality that’s closer to the truth, but there are some practicalities… If you’re an organization that’s delivering software services to other people, you’re not going to be able to win the contract unless you can come up with some idea of how much it’s going to cost… And if we hire a builder to work on our house, or a mechanic to work on our car, they’re going to give us a rough idea of what it’s likely to cost, and then they’ll come back and say, “Oh, sorry, we found this thing, and it’s going to cost more” – usually. I think this is a complex area.

So the reality of the situation as I see it – one form of the reality – is that there is no way to estimate accurately. One of the books that was influential in my history was Rapid Development, by Steve McConnell. He talked about all kinds of different ideas, and one of the things he pointed out is the trumpet-shaped curve of estimation: at the point at which you start a project, estimates are typically out by a factor of four, and in traditional software development the only time you know your estimate is accurate is when the project’s finished. That resonated with me; I liked that idea. And almost nobody on the planet is going to give you the contract if your estimate is four times bigger than somebody else’s. So there’s this nasty cultural, sociological, capitalist drive to underestimate. When your boss comes to you and says, “How long is this going to take?”, you want to please your boss, so you say, “Oh, it’ll probably be a couple of weeks.” And then you find out later that you’re wrong. So that’s always the risk… You can never be accurate.

[53:35] I think that, practically, sensible organizations are moving away from the big budgets, the long-term estimates, and those sorts of things that used to hold sway… And the leading thinkers either don’t do the estimation at all – they think about predictability differently; I’ll come back to that in a minute – or what they tend to do instead is just invest in making progress. If you’ve got a business idea, you go to the people that look after the money and say, “I’ve got this business idea”, and they say, “Okay, we’ll give you this bit of money that will allow you to test the business idea. Come back and talk to us again in a week or two when you know more.” If you think about that, it’s a little bit like a venture-capital approach to funding: you have a little bit of seed money to test the sanity of the idea, then a little more money to invest in trying to exploit the idea, and then a little more later on. And then hopefully you’re starting to make money, and it all starts to work out.

There are many organizations that are starting to apply those kinds of planning and budgeting approaches internally for projects. And I think that’s very sensible, because if we think about this problem of estimation, it’s an explosion on a time-series graph. As time goes forward, our worst-case and our best-case estimates start to diverge. So the longer the time horizon, the worse our estimates are going to be – the worse the variance, the error bars in our calculation. So estimating over shorter periods of time is useful, if you’ve got to do it.

My favorite story about estimation is from one of my projects… I worked at a company called LMAX, where we built one of the world’s highest-performance financial exchanges. And one day the team that I was working with had just finished an estimation session, and it was lunchtime on a Friday, or something like that. So, this being England, we went to the pub – actually, no, it was after work; we were socializing. And our product owner came in. She said, “Hi, how’s everybody?”, came and sat down with us, and as we were chatting she asked, “What have you guys been doing?” We said, “Oh, we just finished an estimation session.” She said, “Oh, do you still do that?” We looked at her: “What do you mean, do we still do that? We’re doing it for you.” She said, “Oh, I haven’t been looking at those for months.” [laughter] “What do you mean, you haven’t been looking at those?” She said, “Well, I did a statistical breakdown, and if we just count the number of stories that we’ve got and project that forward, that’s more accurate than your estimates as a predictive tool… So I just rely on that.” [laughs]

And so we stopped doing estimation at that point, and worked forward from the story counts. I said I’d come back to the idea of predictability… I think one of the things that working in an agile way, in many small steps, gives you is the ability to optimize for either predictability or efficiency. But if you want to be predictable, the way you get predictable is to build in enough error margin – which means you’re not very efficient.

That’s right.

So agile development is wonderful at hitting a date. If you ask most organizations, “What do you want? Do you want to hit a particular date with a particular feature set? Or would you prefer to work as efficiently as possible, and so deliver more features by that date, whatever the features might be?”, I think nearly all of them would want the second, if you could have that conversation honestly. For cultural reasons they might say they want the predictability, because that’s what they think is more important… But in reality, what you’d want is to work more efficiently. So agile can optimize for that.

Continuous delivery in particular, which is my favorite way of organizing software development – I define it in part as working so that our software is always releasable. So we can always release, we can always hit a date; we just can’t say what’s in the release. And we can keep working until we’ve got all of the stuff that you want – but that’s one of those long-term guesses that’s almost certainly wrong, so why do you care…? In order to work that way, we can’t fix both the date and the scope. The stupid, irrational thing is when people try to fix the time and the scope; that’s just not sane, and that’s where you have to pad the estimates one way or another, and therefore you are, by definition, working slower than you could be.

[58:11] Scope, or time.

Scope or time.

Time or scope. Exactly.

Yeah. Those are the variables that we have. And they can be useful. I often draw a graph… I like tracking actuals. If you track actuals – your actual rate of production – you usually end up with a wobbly graph. Then you draw lines that touch the topmost points and the bottommost points; based on past performance, those are your best-case and worst-case error bands… And from those you can read off the range of time you can hit for a particular scope, or the range of scope you can hit for a particular time.
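
A back-of-the-envelope sketch of that: project best-case and worst-case completion from observed weekly throughput, the way the LMAX product owner counted stories and projected forward. All numbers here are invented:

```python
import math

# Invented actuals: stories actually finished in each of the last 7 weeks.
completed_per_week = [4, 7, 3, 6, 5, 8, 4]
remaining_stories = 60

best = max(completed_per_week)    # line touching the topmost points
worst = min(completed_per_week)   # line touching the bottommost points
avg = sum(completed_per_week) / len(completed_per_week)

print(f"best case:  {math.ceil(remaining_stories / best)} weeks")   # 8
print(f"projected:  {math.ceil(remaining_stories / avg)} weeks")    # 12
print(f"worst case: {math.ceil(remaining_stories / worst)} weeks")  # 20
```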

I’m suspicious of scope as a target, because if we’re being experimental, scope is one of the things we don’t really know. I tend to fall on the side of wanting to be able to manipulate the scope of what we’re building, for a variety of reasons – maybe partly to hit time schedules, but more likely to be able to do what our users want. Because that’s going to change, and our understanding of it is going to change. If we’re not changing our ideas about what’s in scope and what’s out, we’re probably not doing a very good job of understanding the problem, it seems to me.

So it sounds to me like you could choose scope, but you don’t want to choose scope, because then time becomes an unknown; you don’t know how long it’s going to take. And I think this is really, really important, because guess what? Your most popular video to date, “The Real Reason Cyberpunk 2077’s Software Failed”, is a story of choosing scope over time. It’s from December 2020 – that’s when you published the video, so it’s an older one. It has 500,000 views and just over 4,000 comments. 4,000 comments! It’s unbelievable, the amount of feedback this video received. The game was announced eight years before it was actually launched, and when it was launched, it was a disaster. Can you tell us a bit more about that, Dave? Because I think it’s all linked to what we’ve just talked about.

Yeah. As you say, it was a disaster. It was pre-announced, and initially they fixed just the scope. They had this incredibly ambitious vision for the game, pushing the boundaries of what the technology – which was advancing at the time – could do… And I think they probably imagined that, having fixed the scope, they were going to deliver it within a year or two, that kind of thing. And then it was taking longer than that, and longer still… I think they got a little bit nervous, and then they started putting time pressure on, and doing what in the game industry is called crunch, which ends up with development teams working 60+ hours a week to try to hit the deadlines and the scope targets. It’s that irrational model of fixing all the parameters, so you tend to trade off quality at that point, because the team is under pressure…

[01:04:33.01] And they released it, and the game was pretty good on completely high-end, bleeding-edge hardware, and pretty disastrous on the generation before – particularly on the consoles, PlayStation and Xbox, the versions that were by far the most popular platforms in the marketplace at the point at which the game was released.

There are videos on YouTube of players walking through cars, and floating in mid-air, and buildings intersecting with other parts. It’s just broken. It’s just unplayable. Or was, at launch. And the team did a lot of work to fix this. But my video was about this as a failure of software engineering.

In those 4,000-odd comments some people kind of naively thought that I was just saying that these were bad programmers. But software engineering is what it takes to produce software - it’s all of the things. So primarily the video, from my point of view - based only on public sources of information, which I link to, showing where I got my main interpretation from - was about a failure of planning and execution.

So on the planning side, they started off with good intent, trying to fix scope, but ended up losing their nerve and fixing both time and scope. And the development team reacted badly by not evaluating their system. My impression - I’m almost certain, based on the information that was publicly available - is that they weren’t doing automated testing, they weren’t doing continuous integration… They were making all sorts of common mistakes that lead to worse outcomes and slower progress. And they certainly weren’t doing regular testing on the lower-spec consoles that dominated the marketplace at the time of the game’s release. So it ended up being a failure.

I take it – I’m not a player of this game. I sometimes get game players saying “It’s a good game.” I wasn’t commenting on whether it’s a good game or not. I was just talking about the software engineering. But I gather that the team has done a reasonably good job of getting it better and playable now, fixing it after the fact, after it was in production. But it was kind of headline news for a while, and it is the video on my channel that kind of launched my channel. That video was released at the end of our first year, and at the time, my son and I were – it was coming to the end of the year and we were placing bets on whether we were going to get 2,000 subscribers by the end of the year or not… Which is pretty good going for the first year of a small channel. A month later we got over 20,000 subscribers, and now we’re 125,000, or something.

When we last spoke, I commented on that. You had 54,000 subscribers when we recorded our last episode, episode five, and I was wondering how many you would have next time we recorded. So we have the answer: 125,000; more than doubled. And it just goes to show how much people appreciate what you share.

[01:08:01.01] And I think we are coming full circle, back to the beginning of the episode, where - first of all, this started when I reached out, in episode five, because of this Continuous Delivery channel. I was so excited about it. I was like “There’s so much great content there.” This conversation is less than 1% of what is available on your YouTube channel, the Continuous Delivery one. There are tens of hours of content to this point - maybe even hundreds of hours; I don’t know, because I didn’t count - a lot of content which goes into detail about some of the aspects that we’ve only touched upon, and some we haven’t touched upon at all. But I have a very important question to ask right now, which is: in which episode can we see your favorite T-shirt?

[laughs]

Because that’s like one of my favorite aspects of those videos. I can see you’re wearing different T-shirts.

Yeah. The T-shirts are an accident, too. So I do have a penchant for silly T-shirts, and I had two T-shirts that I liked a lot, that were kind of in-jokes. I’m kind of a nerdy person, so I like science fiction, and those sorts of things… And I had a couple of T-shirts. One of them was a crew T-shirt from the Nostromo, which is the spaceship in Alien… And the other one is a T-shirt that just says “Surf Arrakis.” Arrakis is the desert planet from Dune. So I thought those were funny… And I wore one of those for one of the episodes, one of the early episodes… And I’ve got lots of comments saying “Oh, what a good joke! Funny T-shirt” and so on. And now, one of the commonest questions that I get in the comments in my videos is “I like your T-shirt. Where did you get your T-shirts?”

I’ve got two favorites. I think one of them - from what you were saying earlier, one of them is yours. So there’s one that’s kind of a scrambled collection of words and numbers, but they’re kind of readable, in a weirdly interesting way. Human beings can decipher them. And they just say “Intelligence is the ability to change. Albert Einstein.” And I liked that one. That’s good. But the other one that I like a great deal is - it’s a picture of Wile E. Coyote from Looney Tunes, with a stick of dynamite that says, “Trust me, I’m an engineer.” [laughs]

That’s a good one. I grew up with those cartoons, and they were so good. Bugs Bunny as well… All those. Amazing cartoons.

Yeah. So now it’s become a thing on my channel… So every episode, I wear a T-shirt. And I try – I don’t always succeed, but I try to have a reason for each T-shirt. Some of them are quite subtle. Some of them are kind of in-jokes, that are sort of related to what I’m doing. And I don’t always succeed, but I try to do that.

One of the things that we did recently, again, mostly for a joke, was that we reached out to the company where I was buying most of my T-shirts from, and said “We’ve got this YouTube channel, and we keep getting asked where we get the T-shirts from. Do you fancy doing something?” So we did a special offer where people would get money off every T-shirt that they bought, and subscribers of our channel ended up buying something like 600 T-shirts from this company.

Which was cool. We just did it for a laugh, so we might be doing some more of that. But yes, we were very pleased with it. So if you’re interested in the T-shirts, they usually come from a place called Qwertee.com - and go to my channel if you’re interested in the special offer; there are some links. But it’s just for fun, and largely an in-joke.

I quite like this one… This one doesn’t work quite so well, because it’s got green in it, and green screen for the videos…

Well, I think we can fix that. If we do the Ship It episodes more often, you can use the T-shirts that you like but that don’t quite work with green screens - there’s no green screen here, so we can do that. We’ve fixed the problem. We’re engineers. [laughter]

[01:12:12.05] Problem solved.

Right. So from all the videos that you recorded in 2022, which is your favorite one? Your favorite one to produce - one that you enjoyed recording and talking about?

I’ve got one coming up, actually, that I enjoyed a great deal… Again, this is – I’m slightly nervous of this one, because I don’t know how it will land with users… But I’m a very, very nerdy person, and one of my hobbies is reading and learning about physics. So I spoke at a couple of conferences recently, and both of the conferences asked the question “What do I think about quantum computing?” So I’ve done a recording about quantum computing, in which I get to explain some ideas and some of my understanding of quantum physics. So that was a lot of fun. I’m not quite sure how useful it will be to people. I hope it will be useful… But it will tell you how quantum computers work, I think, and what quantum computer programs look like, and what it takes to write them. So I quite like that. That’s fresh in my mind.

There have been a few that have resonated… I’ve been doing some longer-form episodes. Once a month we release a chat a little bit like this, but not quite the same, with influential people from the industry, that we call The Engineering Room on the channel. I think there’s eight of those so far, with different people. I talked to Martin Fowler, Simon Brown, and so on. We’ve got some interesting people coming up. But I had a great conversation not very long ago with Randy Shoup, Chief Architect at eBay, as he was then, talking about eBay’s adoption of continuous delivery, which was really interesting. They’ve been doing some really interesting, nice things.

Longer-term - so some of the less popular videos that I liked a lot, that didn’t get watched as much as I hoped that they would… I’m not quite sure whether they fit into the last year or they might be the year before, but I did a couple of videos; one of them about engineering at Tesla, and one of them about engineering at SpaceX. Because they’re both continuous delivery companies. They both operate continuous delivery. They do trunk based development for spaceships at SpaceX. And I think there’s stuff to learn from that kind of engineering, and the challenges of building world class electric cars, and the biggest space rockets ever, and using the kinds of techniques that we discuss in terms of continuous delivery and automated testing, TDD, all that kind of stuff… And using that for factories and spaceships is just fantastic. So I thought those were kind of interesting.

But there’s a lot of videos I think that I’m pleased with and proud of… And I’m looking at my monitor at the moment, which has got sticky notes stuck all around the edges for upcoming ideas of things that I want to do, but I haven’t got around to yet.

So I remember us talking about the SpaceX videos and the Tesla ones last time, because there was like –

Oh, I’m sorry about that.

No, no, it just blew my mind. It’s interesting that a year later you say the same thing, because for me, I’ve been trying to - well, first of all, connect with someone from within Tesla or SpaceX to talk about these things. To be honest, I’m working my way towards Elon Musk, but it’s going to take a while for me to get to interview him… But I play the long-term game. A few years doesn’t make a difference. Even 10. It’s okay. I’ve quite a few left, or so I hope…

Anyways, it’s interesting how these principles that we talked about - that you capture in your book, that you talk about in your videos, that many identify with, and that, to be honest, most of us cannot explain - are really universal. I mean, they apply to everything, not just software engineering. And that’s where the fascinating thing comes in. It’s just engineering; it’s just good engineering, and some would say a sensible approach to anything, really.

[01:16:15.08] But we talked about time, and this is a really important one, so I just want to come back to it, because we are preparing to wrap up this episode… I wish we could go for twice as long, or even three times as long; we have plenty of things to talk about. That’s reality. We record this a couple of months before people will listen to it, and we started talking about this episode a couple of months back. So these things happen on a fairly long-term scale - we’re talking months here; four months, I think, from us starting to talk to actually recording it; maybe even half a year, to be honest. Time is something that you cannot really choose; it just happens. I mean, you can pick a point in time… But I think that’s one of my favorite takeaways here: these things will happen - it will be spring, and then it will be Christmas, and then whatever else is going to happen - and you need to pick a time which you think is good for you. And then we have Black Friday, which happens whether we want it or not… And a lot of software releases tend to coincide with these important dates.

So you can’t choose quality. We’ve already settled that one. Scope - it’s better if you discover it, to be honest; like, figuring out what you’re trying to build. But pick a time. Pick a time that’s good for you to get it out there, based on everything else that’s happening; and even that may be influenced… And just get it out there. Get it out there in the world, when the time comes. Ship it, literally, and figure out “Is this right or is this wrong?” And don’t just wait for that one moment in time. Do it more often, as we will try to do these episodes more often, because they’re great fun… But what is your favorite takeaway, Dave, from this conversation?

For me, the theme of the conversation really is - I definitely sound like I’m selling my book now, but it’s the theme of the engineering book, which is… I’ve admitted already to being a popular science fan, interested in physics as a hobby - which might be a weird hobby, I don’t know, but it’s mine… And one of the things I think, philosophically, is that what we’re really talking about when we talk about engineering is the practical application of scientific-style ideas.

I think science is humanity’s best problem-solving technique, and so we should be applying that kind of thinking to software development. And I don’t mean in simplistic terms, like “We should be writing down our experiments in the same way that scientists do”, or anything like that. But I think there are some kind of more fundamental philosophical ideas… Like we’ve talked about, starting off assuming that we’re probably going to make mistakes, and then trying to figure out how we can detect the mistakes as quickly as we can, and fix them when we detect them.

The reason you were talking about time is that time matters deeply in the release of software. And what we’ve found is that we can make our lives an awful lot easier if we shorten the time horizons of changes.

If you think about one of the differences between releasing once every six hours and once every six months - it’s just the amount of stuff that we put into production. Inevitably, whatever it is that we’re doing, if we’re releasing six hours’ worth of work rather than six months’ worth of work, that’s going to be lower risk, because there’s less stuff in the change. There’s a smaller delta between what’s in production now and what’s in production after the release over a six-hour time horizon than over a six-month one. So each individual release is going to be safer. And part of the reason why Jez and I wrote the Continuous Delivery book was to try and highlight that point: we get an awful lot of benefit when releases are a non-event, when we don’t have to worry about them.
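One back-of-the-envelope way to see why the smaller delta is safer - a toy model, not something claimed in the conversation, assuming each change carries a small, independent chance of breaking production - is that the failure probability of a release compounds with the number of changes in it:

```python
# A toy model, not from the conversation: assume each individual change has a
# small, independent probability of breaking production (p is hypothetical).
p = 0.01

def release_failure_probability(changes: int) -> float:
    """Probability that a release containing `changes` independent changes
    causes at least one failure, under the independence assumption above."""
    return 1 - (1 - p) ** changes

print(release_failure_probability(3))    # ~0.03: a six-hour delta stays low-risk
print(release_failure_probability(700))  # ~0.999: a six-month delta almost
                                         # certainly contains something broken
```

Real failures are of course not independent, but the direction of the effect is the point: risk compounds with batch size.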

I remembered something recently… That we once released software into production on Christmas Eve, as we were leaving the office to go home for the Christmas break.

You were the bad boys - sitting on the bus, with the background exploding… [laughs]

Yeah, yeah. But it wasn’t a risk, because it was all automated, all tested… It would all be fine. We were confident. And so I think that’s where you want to end up.

I think starting out assuming that we’re probably wrong, and working defensively, is probably the fundamental thing, the superpower; the thing that really sits underneath everything else that we’re talking about. And that’s very like the idea of the skeptical mind. Modern science works that way - you attempt to falsify things rather than prove them. And that’s the same thing here: asking “Where are we wrong?”, rather than trying to prove that my idea is right, is fundamentally the principle that we’re going to organize around.

David, it’s been a pleasure. Thank you very much for today. I’m very much looking forward to next time. And what I’ll try to do is shorten the gap before having you on again.

So that’s what I’m looking forward to. Thank you, and enjoy the rest of your summer.

Thank you. And you.
