Today we have a special treat. A conversation with Brian Kernighan! Brian’s been in the software game since the beginning of Unix. Yes, he was there at Bell Labs when it all began. And he is still at it today, writing books and teaching the next generation at Princeton.
This is an epic and wide-ranging conversation. You’ll hear about the birth of Unix, Ken Thompson’s unique skillset, why Brian thinks C has stood the test of time, his thoughts on modern languages like Go and Rust, what’s changed in 50 years of software, what makes platforms like Unix and the web so powerful, his take as a professor on the trend of programmers skipping the university track, and so much more.
Seriously, this is a must-listen.
Featuring
Sponsors
Square – Develop on the platform that sellers trust. There is a massive opportunity for developers to support Square sellers by building apps for today’s business needs. Learn more at changelog.com/square to dive into the docs, APIs, SDKs and to create your Square Developer account — tell them Changelog sent you.
InfluxData – The time series platform for building and operating time series applications — InfluxDB empowers developers to build IoT, analytics, and monitoring software. It’s purpose-built to handle massive volumes and countless sources of time-stamped data produced by sensors, applications, and infrastructure. Learn more at influxdata.com/changelog
FireHydrant – The reliability platform for every developer. Incidents impact everyone, not just SREs. FireHydrant gives teams the tools to maintain service catalogs, respond to incidents, communicate through status pages, and learn with retrospectives. Try FireHydrant free for 14 days at firehydrant.io
MongoDB – An integrated suite of cloud database and services — They have a FREE forever tier, so you can prove to yourself and to your team that they have everything you need. Check it out today at mongodb.com/changelog
Notes & Links
- Brian’s Wikipedia
- PDP-7 on Wikipedia
- Understanding the Digital World: What You Need to Know about Computers, the Internet, Privacy, and Security
- The C Programming Language
- UNIX: A History and a Memoir
- The Go Programming Language
- The Mythical Man-Month
- Dick Hamming’s You and Your Research
- Brian on Lex Fridman’s podcast
Transcript
Play the audio to listen along while you enjoy the transcript. 🎧
Well, Brian, first of all, we really appreciate you joining us on the Changelog. Welcome!
Thank you. It’s a pleasure to be here.
It’s a pleasure to have you. You’ve been in the industry a very long time. In fact, you wrote “UNIX: A History and a Memoir”, because you were there. You were there when Unix began. Take us back – you don’t have to tell the whole story, of course; people can read the book. You’ve put the work in to write it all down. But take us back to that time period and just paint a picture of what it was like when Unix was born.
Right. Well, I guess the proper way to describe it is “Present at the creation, but not responsible for it.” I was a grad student at Princeton in the mid-to-late ’60s, and I got a summer internship one year at MIT in Project MAC, which was basically building Multics, a very large information utility, so called… And then the following year, 1967, I got an internship at Bell Labs in Murray Hill. People there were still working on Multics; I had nothing to do with that. But I spent two summers of internship at Bell Labs, I had a great time, and so I went there permanently in 1969, very early 1969, and found myself in the same group as Ken Thompson and Dennis Ritchie.
[04:22] Ken at that point – I guess the right way to say it is they were suffering withdrawal symptoms, because they had been working on Multics, which was a very big, very interesting computing system, and they had really, really good computing facilities… But then very early in 1969 Bell Labs withdrew from the Multics project, because it was clear it wasn’t gonna produce what it was supposed to produce, at least on a timescale that the Labs cared about. And so this left Ken, Dennis and a variety of other people fundamentally with a taste for a nice computing environment, but no way to satisfy it, really… So they spent a fair amount of time basically doing what you might call paper designs – sketching things on blackboards, designs of what an operating system might be like, what they could do… And as part of that, very early in that year, Ken Thompson found what is traditionally described as a little-used PDP-7, a machine that was pretty obsolete already in 1969; nobody was really using it. And he used that as a vehicle for experimenting with file systems. At some point there’s this famous story that he had three weeks, when his wife took their son off to California to visit the relatives, and in three weeks he put together what was basically the first Unix system. A proto-version of Unix, because beyond the file system you’d need an exec call, a shell, an assembler, things like that.
So three weeks and we have the first version of Unix. And that was in 1969, so you could argue then that 2019 would be the 50th anniversary of Unix. The Unix history book that I wrote, that you mentioned - basically, I put that together as an attempt to capture some of what I remember and what lots of other people remembered of what was really kind of a golden era in computing, and certainly the early days of Unix, very interesting; the evolution of it over the next 20-30 years is very interesting, at least to me…
Sure.
So that was the excuse for trying to write a book.
I think it’s great, because preserving this knowledge is really so important. And obviously, having someone there at the born-on date – maybe not so much a contributor to actually making Unix a thing in those first three weeks, as you mentioned, with Ken Thompson – having that memory is super-important. I’ve listened to other interviews you’ve done, with Lex Fridman and others, describing some of this history. I think it’s really important to draw those lines from past-world computing to today’s world computing. I think of this PDP-7 – maybe it was obsolete, but you could still make Unix on it… And I draw the comparison to, say, a Raspberry Pi today… You know, the difference in terms of size and power is just profound, I think, for anyone listening to go back and think “Wow, this is how it began. This is what came out of it. This is the foundation that’s been laid.” Because Unix is the foundation we all build upon still to this day. There was the lack of freedom around Unix, the turn to universities, the turn to this open source Linux system, and this whole movement that we’re still sort of in.
[08:03] I’m personally gonna go back and read this book. I haven’t yet, but I plan to, because that knowledge is so important to preserve, one, but then two, to reflect on; to know what the future might be because of what the past was.
Right. It’s interesting - I don’t know whether you can extrapolate, but the question of what has changed over the last 50 years or so is just astonishing when you think about it. Pretty much everything has changed. When I started programming, let’s call it in 1965 or something like that, I used Fortran, and we used punched cards. Remember punched cards? You guys have never seen punched cards.
Sorry…
I’ve seen a picture of a punched card…
Yeah… And that summer I spent at MIT in 1966 was a revelation, because they had time-sharing. That is, there was a central computer, and you accessed it from various kinds of remote facilities, some of which even used the phone system. So in effect, it was kind of like cloud computing.
The early cloud.
Yeah, exactly. But when I went back and finished my degree at Princeton, it was still Fortran and punched cards in 1969, and computers were expensive; literally millions of dollars. The computer I used there was an IBM 7094, and it cost multiple millions of dollars; it lived in a huge, air-conditioned room, and it was exceptionally slow, probably a million times slower than computers today. It had a tiny memory – what, 64k… no, 32k 36-bit words, something like that. Really small, but physically huge. But Moore’s Law came to the rescue, right? Things sort of get better – let’s call it every 18 months, give or take, things get twice as good. So in 15 years that’s a factor of a thousand, in 30 years it’s a factor of a million, in 45 years a billion… We’re nearing 60 years from when Gordon Moore made his comments about doubling, back in 1965. So - smaller, cheaper, faster, better, exponential… And that’s what makes a lot of this stuff go. It’s why you and I can have a conversation, even though we’re scattered all over the place, and we all have these powerful computers, and everybody is carrying an exceptionally powerful computer with them all the time… And it’s connected to everybody else’s powerful computer. So an enormous amount… [laughs] Very different.
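Brian’s arithmetic is just repeated doubling, and it’s easy to check. A quick sketch in C, for the curious – one doubling every 18 months, as he describes:

```c
#include <math.h>
#include <stdio.h>

/* Moore's Law, back of the envelope: one doubling every 18 months. */
int main(void) {
    for (int years = 15; years <= 60; years += 15) {
        double doublings = years / 1.5;
        printf("%2d years: ~2^%.0f = %.0e x improvement\n",
               years, doublings, pow(2.0, doublings));
    }
    return 0;  /* 15y ~1e3, 30y ~1e6, 45y ~1e9, 60y ~1e12 */
}
```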
Is there anything that’s the same? Is there anything that remains? You have maybe some fundamentals of coding, or practices… Anything just has a throughline from then to today?
I think there are a lot of things that remain the same. One thing is probably just that our aspirations always exceed our abilities.
Right.
At some point we need more memory, or more speed than we have. It’s hard to scale up, and so we still run out of resources, or need that exponential growth to keep going, to keep up with what people want.
The other thing that really is the same is that people are the same. We’re complete screw-ups in a variety of ways - we make mistakes, we’re slow, we’re unpredictable in lots of different ways… So programmers - even very talented programmers, working hard - put bugs in programs, and programs are often hard to change, they have clunky interfaces… And then, you know, most people are good, but there’s always a collection of bad guys, and they’re still very much with us at this point… But interestingly, something that’s different - the bad guys are far away, but can still reach out to touch us through things like the internet. So a lot’s the same, as well as a lot that’s different.
What’s interesting about the birth of Unix is that you all were collocated in a building, or a few buildings, or a room at times… Whereas things that would be invented today, that may become the underpinnings for the future technologists - unlikely to be in that same room. Now, maybe it will be, but there’s so much innovation that’s happening, remote collaboration, that it’s very possible that people who are inventing things today are halfway around the world from each other.
[12:04] You know, that’s such an interesting point… I honestly don’t know; you are absolutely right that Unix was done by a handful of people in a very small space. Typically, they were sitting in the same room… And that meant that the bandwidth for communication among the people was extremely high. And I think that that is hard to replace over remote connections like the one we’re using. I think that people working together in the same space is often very productive.
On the other hand, if you look at these sort of cube farms - or not even cube farms; just tabletop farms that lots of big companies provide, I think that’s counter-productive, because you’ve got too many people in the same space and it’s very distracting. And I’ve experienced that to some extent, and I hate it; I find it just very hard to work in that kind of environment where you’re literally three feet away from the next person. So there’s some kind of trade-off there, and I don’t – perhaps the pandemic and all this experimentation with remote work and then hybrid modes and so on, maybe that will lead to something which gives you some of the combinations of the good stuff without so much of the bad stuff. We shall see.
Yeah, there’s some sort of sparking creativity, and there’s something in the room sometimes… I think even teams that go remote or are remote, they still have their summits, they still have their times where it’s like “We need to get six people in a room and just hash things out.” And when you’re in that room, it feels different. And then you’re like, “Okay, this three-day period we got more done than we got in the previous three weeks, because we did do that.” That being said, the ability to bring together different minds who are geographically distributed around the world and not have them have to sacrifice their life or their lifestyles in order to collaborate is also really powerful.
This kind of reminds me, Jerod, of when we talked to Jessica about mob programming. I don’t know if you’ve heard of this before, Brian, but it’s this idea that you can mob-program: you get together four or five people - it could be a designer, it could be a developer - sharing the same terminal, all focused on the same problem set, but in most cases they’re remote. I think in many ways that is trying to recreate what you all had back at Bell Labs. And obviously, in that day you had no choice; that machine was not mobile, so you had to be collocated with it… And in many cases, while it was a massive machine, it was underpowered in comparison to today’s machines… You know, you even mentioned that the PDP-7, which is what Unix was born on, was kind of obsolete. I kind of wonder if the reason Unix can run on commodity hardware is that it was designed to work on a machine that had limits, so it always had to live within constraints. A piece of commodity hardware would be limited too, but eventually cheaper… I’m wondering if there might be a similarity there.
I think there’s a real element of truth in that. The original PDP-7 was extremely limited; it had, if I remember correctly, 8k 18-bit words, so you couldn’t get carried away there… And the first PDP-11/20 that we got was not a heck of a lot bigger. I’ve forgotten, but call it 24k bytes, or something like that; so it was really very small. And if you don’t have much memory and the machine is intrinsically not very fast, then it enforces a sort of discipline, and it also, I think, encourages you strongly to think about what’s the simplest mechanism that will do the job for as many different things as possible. So: find simple but general mechanisms. And certainly, an awful lot of what went on in early Unix was exactly that. I mean, think about the file system. The idea of the hierarchical file system came from Multics, and probably others as well. But the Unix implementation of it was extremely straightforward and simple, very clean. There were half a dozen system calls at most to access it and do anything you wanted, and all files were the same, and then there’s this kind of freebie idea that went along with it, that devices were files in the file system as well.
[16:17] So that was an example of a very clean, simple idea, a hierarchy, and then - gee, a generalization. We could put more than disk files into this mechanism and it would work in the same way. So I think a lot of that was encouraged because there were not a lot of resources.
Contrast that with today, where for most people the memory on their computers is infinite, and their bandwidth is infinite, and the computers are so fast it doesn’t matter, so you can trade off… And for most purposes it’s a fine trade-off - just waste more of the computer to make the people more productive in some way; but there are times when you can’t do that.
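That “devices are files” freebie is still easy to see in C today: the same handful of system calls works on a disk file and on a device. A minimal sketch (the paths here are just modern stand-ins, not the originals):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* The same open/read/close calls work whether the path names an
 * ordinary disk file or a device exposed through the file system. */
static void peek(const char *path) {
    char buf[8];
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); return; }
    ssize_t n = read(fd, buf, sizeof buf);
    printf("%s: read %zd bytes\n", path, n);
    close(fd);
}

int main(void) {
    peek("/etc/hosts");   /* a disk file */
    peek("/dev/urandom"); /* a device, but this code can't tell */
    return 0;
}
```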
I wanna ask a question about Ken Thompson… There were a lot of people in that room - you’ve named a few, yourself as well - so he’s not singularly to credit for these things. But it seems like he was an amazing software developer. And you’ve taught probably thousands of software developers down through your time teaching, not to mention through your books; I’ll mention your C programming language book - it was not my first book in college, it was my second book… I actually started out programming with a C++ course. Didn’t do so hot. And then I took C, and so I got your book for that one, and I actually thought C was easier than C++, because your book made it very easy for me to understand, so thank you for that.
You’ve taught a lot of software people over the years, you’ve seen a lot of software developers yourself, you co-authored AWK, you have your own bona fides when it comes to writing code… Was Ken Thompson, do you think, a unique, amazing coder? Was he just in the right place at the right time? Is he a stand-out, a once-in-a-generation kind of software developer? Or are there a lot of people you’ve seen who were just as good as he was, but he happened to have that nugget - he happened to be at the right place, at the right time, with the right idea and the right people?
I think he’s a singularity. I have never seen anybody else who’s in the same league as him.
Wow.
I’ve certainly met a lot of programmers who are very good, and some of my students, sure, the people I worked with at Bell Labs, very good… But Ken is in a different universe entirely, as far as I can tell… And it’s a combination of a bunch of things. I mean, just being able to write code very quickly, that works, very well done code… But also this insight into solving the right problem, in the right way, and just doing that repeatedly over all kinds of different domains. I’ve never seen anybody remotely like that in any setting at all.
One night, he and Joe Condon and I – we had gotten a new typesetter at Bell Labs. It was basically a device controlled by a very small computer inside – a Computer Automation Naked Mini, if you wish to know. Just a generic, kind of mediocre 16-bit computer… And the typesetter came with really awful software, so you couldn’t figure out what was going on. Of course, you didn’t get source code, you just got something that ran…
Right.
And so Ken and Joe and I were puzzling over what to do with this thing, and late afternoon I said “I’m going home for dinner, I’ll be back in a while.” I came back at 7 or 8 o’clock at night and Ken had written a disassembler for this thing so that he could see what the assembly language was, so that he could then start to write – well, of course, now you write the assembler, and then you… You know, that kind of thing where in a couple of hours he had built a fundamental tool that was then our first toehold into understanding the machine.
Wow…
[19:57] You know, writing a disassembler is not rocket science, but on the other hand, to put it together that quickly and accurately on the basis of very little information… Now, this was before the internet, when you couldn’t just go and google for the opcode set of this machine. You had to find manuals… It was all that kind of thing, and he just kept doing that over such a wide domain of things. We think of Unix, but he did all this work on a chess machine, where he had the first master-level chess computer - that was his software… And he wrote a lot of the CAD tools that made it go as well… He built a thing that was like the Sony Walkman, with an mp3-like encoding, before anybody else did, because he talked to the people down the hall who knew how to do speech coding… Just on and on and on.
You’ve said before that programming is not just a science, but also an art… Which leads me to believe that, for some reason, Ken was blessed with the art side of the science. So you can know how to program, and you can know how to program well, with fewer bugs - but to be able to apply that thinking to a problem set in the ways you described… Without trying to explain his, for lack of a better term, genius - what do you think helped him have that mindset? How did he begin to solve a problem, do you think?
I actually don’t know. I suspect part of it is that he had just been interested in all kinds of things… And you know, I didn’t meet him until we were both at the Labs – he arrived a couple of years before I did, and then we were in the same group for many years… But his background, I think, was originally electrical engineering. He was much more of a hardware person than a software person, in fact, originally. And perhaps that gave him a different perspective on how things worked, or at least a broader perspective. I don’t know about, let’s say, his mathematical background, but for example – you mentioned this art and science – he built a regular expression recognizer, which is one of these things that dynamically adapts to what’s going on, so that it can process things in linear time – which, if you did it dumbly, would be exponential in either space or time. Basically, a lazy evaluation mechanism for regular expression matching… And it just goes on. How do you get there? I don’t know. I’m not that person… [laughs]
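The trick behind that linear-time behavior is to advance every live state of the matching machine in parallel instead of backtracking. Here’s a toy sketch of the idea in C – emphatically not Ken’s implementation; it handles only literals, `.`, and single-character `*`, with no bounds checks:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAXTOK 64

/* Add pattern position t to the state set, following epsilon moves:
 * a starred token may match zero characters, so its successor is
 * reachable too.  set[ntok] marks "matched the whole pattern". */
static void addstate(bool *set, const bool *star, int ntok, int t) {
    for (;;) {
        if (t >= ntok) { set[ntok] = true; return; }
        if (set[t]) return;
        set[t] = true;
        if (!star[t]) return;
        t++;
    }
}

/* Match the whole text against a pattern of literals, '.', and
 * single-character '*'.  All live states advance in parallel, so the
 * work per input character is bounded by the pattern length:
 * linear time, no backtracking. */
bool match(const char *pat, const char *text) {
    char tok[MAXTOK];
    bool star[MAXTOK];
    int ntok = 0;
    for (int i = 0; pat[i]; ntok++) {   /* tokenize: "a*b" -> a*, b */
        tok[ntok] = pat[i++];
        star[ntok] = (pat[i] == '*');
        if (star[ntok]) i++;
    }
    bool cur[MAXTOK + 1] = {false}, nxt[MAXTOK + 1];
    addstate(cur, star, ntok, 0);
    for (; *text; text++) {
        memset(nxt, 0, sizeof nxt);
        for (int t = 0; t < ntok; t++)
            if (cur[t] && (tok[t] == *text || tok[t] == '.'))
                addstate(nxt, star, ntok, star[t] ? t : t + 1);
        memcpy(cur, nxt, sizeof cur);
    }
    return cur[ntok];
}

int main(void) {
    printf("%d %d %d\n", match("a*b", "aaab"),  /* 1 */
                         match("a*b", "b"),     /* 1 */
                         match("a*b", "abc"));  /* 0 */
    return 0;
}
```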
So if we go from Ken Thompson to Brendan Eich now… With Ken Thompson there’s this famous three-week stint… You know, it probably wasn’t Mountain Dews in the room, but I just imagine him at a terminal, pizza delivered, not leaving the room… Who knows what actually happened, but there’s your stereotype… Brendan Eich famously designed JavaScript in ten days, and that was a circumstance where I think it was a pressure cooker; they had ten days to do it… Whereas Ken’s pressure cooker was “My wife and son are out of town for a few weeks and I can do this…” Which is interesting - these two platforms, so to speak; one a programming language, the other an operating system… Both, at least the core of them, designed or implemented (or both) in such a short amount of time. Do you think it’s just a coincidence? Do you think there’s something to this? What are your thoughts on that?
It’s an interesting parallel… I don’t know Brendan Eich at all; I’ve never met him, or anything like that. I think JavaScript – I mean, you can dump on it, but it has had an enormous effect on the world, and I think it was an excellent piece of work. And ten days? Sure. More credit to him for being able to pull that off. I think – is there anything to be learned by saying “Here are two examples, therefore it could be done more broadly. Everything we do can be done in a couple of weeks”? Probably not. But in some respects, the core pieces of these things are relatively simple.
[24:00] Suppose I was going to create a Lisp interpreter. I could – not me personally, but lots of people could probably put together a Lisp interpreter in a day or two, because it’s fundamentally simple at the core, and you can get it off the ground very quickly. And then you can spend a lot more time making it more efficient, or more expressive, or whatever. But fundamentally, it’s pretty straightforward. I think the same kind of thing would be true of an interpreter like JavaScript.
My personal experience is not to be compared with Brendan Eich at all, but for example AWK, which is a programming language that Al Aho and Peter Weinberger and I did - we thought about the design for a few weeks, and then Peter Weinberger went off and built the first implementation over a weekend. So that’s an interpreter at very loosely the same kind of level as JavaScript; not the same…
The reason that Peter was able to do that over a weekend was twofold. One, he’s a very, very smart, experienced programmer. The other thing is he had good tools to build it with. So he was able to use Yacc, the compiler-compiler, to define the grammar and hang semantics on it… He was able to use Lex, the lexical-analyzer generator, to do the lexical processing… And then the rest of it - well, you just build the tree, and then you build the thing that walks the tree. If you’ve done that before, the next one is easier.
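That “build the tree, then walk the tree” shape is the skeleton of almost every interpreter. A minimal sketch in C, with the tree assembled by hand where a Yacc-generated parser would normally do the work:

```c
#include <stdio.h>

/* A tiny expression tree, plus the evaluator that walks it. */
typedef struct Node {
    char op;            /* '+', '*', or 'n' for a number leaf */
    double num;         /* the value, when op == 'n' */
    struct Node *l, *r; /* children, when op is an operator */
} Node;

static double eval(const Node *t) {
    switch (t->op) {
    case 'n': return t->num;
    case '+': return eval(t->l) + eval(t->r);
    case '*': return eval(t->l) * eval(t->r);
    }
    return 0; /* unreachable for well-formed trees */
}

int main(void) {
    /* (2 + 3) * 4, built by hand in place of a parser */
    Node two = {'n', 2, 0, 0}, three = {'n', 3, 0, 0}, four = {'n', 4, 0, 0};
    Node sum  = {'+', 0, &two, &three};
    Node prod = {'*', 0, &sum, &four};
    printf("%g\n", eval(&prod)); /* prints 20 */
    return 0;
}
```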
So I don’t know, for example, whether Brendan Eich had done some other kind of language work before that gave him a leg up…
I believe so.
I wouldn’t be surprised. So experience like that does help you to build something quickly, but also to see which parts matter, which parts are the ones to get you off the ground, and which you can just kind of ignore for a while. So my guess is that lots of things can be built – call it the MVP. It’s a badly overused acronym, but…
Sure.
…what’s the minimum thing that would actually prove your point, be useful, and tell you whether you wanna go further or not. My experience is small tools, small languages, things like that, but you can see other things where people get off the ground quickly.
What about Wordle? I’ve never played Wordle, but gee - I look at it and think “Wow! That’s kind of neat. What an idea…” And how hard could it be to build that? Probably not terribly. I don’t wanna denigrate the guy who did it, because it’s a really nice piece of work.
Yeah, I mean, the game mechanics are I think where the genius is in Wordle… And I think, to your point, it has been recreated over and over again. It has inspired clones, and ports, and Wordle in this language, and Wordle in that language, and Wordle with this word set… I think there’s even like a POSIX version of Wordle… Because the problem set is well-scoped and well-defined, and it’s not that complicated. But there are some interesting aspects of it which make it a fun thing to build, so I think it’s a good point there.
It’s the constraints, really… isn’t it? The constraints seem to be the thing. You hear this again and again in history - someone created a logo for a brand, and they did it in five minutes. Well, it didn’t actually take them five minutes; it took them maybe 15, or 10, or 5 years of experience to then be so efficient - and, as you said, Brian, with the right tooling at the moment of creation. So it’s constraints, plus experience, plus tooling that enable the creation to be so condensed.
Yeah, I think so. You have to have all of the preparation - your own expertise and experience, and an environment or infrastructure that supports what it is you want to do. I mean, look at Napster. Napster is at this point over 20 years old… But a very, very neat idea, and not too hard to get off the ground, given that you’ve got the internet, you’ve got tools like Visual Basic for building Windows interfaces, you’ve got a fairly homogenous environment for people to play with it… So again, not to denigrate Shawn Fanning for what he did there, but given all of that stuff, and given that he’d probably thought about it very hard for a long period of time, putting the thing together is not that bad… Say I, never having done it, of course, but that’s a different story… [laughter]
[28:26] It reminds me of this story of this old Bible preacher… He preaches a message, and then afterwards one of the hearers goes up and asks him, like “Thank you so much. How long did it take to put that thing together? How long did you work on this?” And he said “I worked on it my whole life.” So that’s kind of what it is. Your life of preparation and experience actually puts you in a place and a time, with a skillset and a perspective that makes things that are amazing, even though in and of themselves they may just be like this small-scoped, constrained thing, but it’s that combination of it that really brings it all together.
Precisely, yeah.
So, C… Let’s talk about the C programming language. I mentioned how your book has taught probably multiple generations at this point how to code in C. You co-authored it with Dennis Ritchie, the creator of C… And C has been around in huge numbers for many years, and continues to be today a very viable and powerful programming language - people are probably picking it up right now and writing something new in C as we speak, years and years after its inception. What do you think it is about C that accounts for the longevity of its success?
I think it probably hit a sweet spot among a bunch of competing or important areas. It’s efficient - and it was really important that it be efficient at the time Dennis did it originally, in the very early 1970s… Because, as we mentioned earlier, machines were not very powerful and didn’t have much memory. So efficiency; expressiveness… It really let people say fairly clearly and easily what they wanted to say, in a form that was a good match to what was going on in the hardware underneath. You could see a mapping between what you wanted to say and what the computer would actually do underfoot. It was complete, in the sense that you didn’t need anything else; you could write useful stuff with nothing beyond it. And I think it was completely comprehensible to programmers; you could pick it up and learn how to use it fairly quickly and fairly well. I don’t think any other language has done that quite so well. I mean, obviously, every language has things that it does very well, things that it’s perfectly adequate for, and other places where people complain… And C is like that. It has lots of flaws, and a lot of those were a historical necessity because of limited resources… But I think that’s outweighed by the combination of efficiency, expressiveness and suitability for the job.
The other thing about C, and the reason it’s still there, I would say - or at least one of the reasons - is that it has benefitted over and over again from waves of hardware evolution. It started with minicomputers, like the PDP-11; it was there for the workstation market, like Sun Microsystems and lots of others… In fact, the existence of C and Unix enabled that workstation marketplace in the late ’70s and early ’80s. It was there for the IBM PC and all the machines that followed… So that’s a third wave. And we see embedded systems at this point - little tiny computers for which C remains suitable, and probably best, because you need that efficiency in speed and memory use, and often no runtime support.
Right.
So all of those things keep giving C another burst of life, and will probably keep it going for a while.
Yeah, it seems like the advent of mobile and IoT has really added to the longevity of those kinds of languages… Whereas we used to go higher and higher up the stack - more abstractions, memory management, scripting languages etc. - because those constraints were lifted in many situations, all of a sudden there was a reset back to highly constrained devices when mobile took off… And of course, mobile phones now are very powerful compared to what they were ten years ago. But your refrigerator probably doesn’t have a very powerful chip in it, or your dishwasher, or these other things people are writing code for.
Are there any problems today, a specific domain, like a text editor, or something where somebody said “I’m gonna write a brand new thing”, and Brian, you would say “You should pick up C and write it in C”? Or would you never advise C today?
[35:58] I think, probably, unless you know clearly, right upfront, that you’re going to be resource-constrained and the improvement of hardware isn’t gonna rescue you in the next couple of years, I would not start with C. I really wouldn’t. And then it depends what your application is.
For example, some random kid at school wants to know what first programming language to learn. Python, probably, because you can do all kinds of neat things with it. It’s very expressive, it’s adequately efficient for most purposes, and it has an enormous library of stuff that’s all really easy to use. So for that generic question of what the first programming language should be - it’s probably not C. It’d be nice if people did learn it, but I think Python for many purposes would be a better choice.
And of course, the reason that Python works so well in many cases is that very often what you think of as a Python function or module is in fact just a bunch of C code through a foreign function interface.
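To make that concrete, here’s a minimal sketch of a CPython extension in C. The module and function names (fastmath, square) are invented for illustration; from the Python side it’s called like any other function:

```c
#include <Python.h>

/* A C function exposed to Python: callers see an ordinary Python
 * function, but the work happens in compiled C code. */
static PyObject *fast_square(PyObject *self, PyObject *args) {
    long n;
    if (!PyArg_ParseTuple(args, "l", &n))  /* parse one Python int */
        return NULL;
    return PyLong_FromLong(n * n);
}

static PyMethodDef Methods[] = {
    {"square", fast_square, METH_VARARGS, "Square an integer in C."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef module = {
    PyModuleDef_HEAD_INIT, "fastmath", NULL, -1, Methods
};

/* Build as a shared library; then in Python:
 *   import fastmath; fastmath.square(7)  -> 49 */
PyMODINIT_FUNC PyInit_fastmath(void) {
    return PyModule_Create(&module);
}
```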
True. Now, there’s been a concerted effort of late to replace many of our core infrastructure projects that are written in C - our routers, our web servers, our proxies, you name it - with memory-safe languages like Rust. What do you think of that effort? Are you for it? Do you think it will succeed? Is there just too much C code there that it’ll always exist and be executable on our servers? What do you think about that?
You know, one of the problems with C is the trade-off. You need a sharp tool, so that you can do things like write operating systems, where you really do have to access memory in an unconstrained way. And unfortunately, that translates into the programs that ordinary mortals like me write, where you access the wrong memory in an unconstrained way, and things go bad in a variety of ways. C has lots of that problem, so replacing critical pieces of software with something where memory corruption, or access out of range, or all these other kinds of things are in effect legislated out of existence - that sounds like a great idea. Is Rust the right language for that? I don’t know. I have never gotten into Rust. My one foray into it foundered on the fact that the language and its documentation were changing at high speed, and differently, and so I couldn’t get something to work.
Well, you’re an early adopter. You came in early.
Well, “adopted” unfortunately is the wrong word. It was an early abandoned ship… [laughter]
An early abandoner.
I mean, Rust clearly has many positive properties, but I just don’t have anything to say about it. The basic idea, I think, is perfectly sound. The problem is, if you go through and try to improve the infrastructure - any program - what you’re doing is changing things; and what you want is for all the external properties to remain the same while the internal properties get better… And it’s hard to do that. So the question is whether the improvements that you’re making will actually improve it, or will you just change behavior in invisible ways; will you head off bugs, or will you create new bugs?
So the straight answer is I don’t know, and I don’t think our ability to test and verify programs is at the state where you can be really sure. Even just making simple changes is hard work, to make sure that they’re correct.
I was reading something the other day about how the Linux kernel – they’re proposing to update the version of C there from whatever it is - it’s probably C89, or something - to a much more modern version of C. I suspect that’s going to be hard work, because you’ve got 20 million lines of C code there, and… Can you do that without breaking something? Hard to say.
[39:57] That’s a lot of lines of code. Given that, then, let’s hypothesize a bit, or maybe share some ideas on not throwing the baby out with the bathwater. So the baby in this case is C, and there are lots of lines of code out there - that is the baby, right? And the bathwater is the insecurity and the memory-safety concerns for software that lives network-connected - web servers and things like that; routers and whatnot. That’s where the attack surface lives.
If the baby is C, and there were no alternative called Rust, or some future, more modern language that diminishes these concerns - how would you propose we not throw the baby out with the bathwater, and modernize C in a way that becomes memory-safe? What can we bolt on? Is there a possibility of just augmenting C to address the memory-safety concern? I don’t know enough about the language to go deep with you on that, so I’m just curious if there’s a way to keep C, but memory-safe it.
There’s a thread, in at least academic settings and probably others as well, which says “Let’s take C, but then do something that makes it safe.” So there are languages like Safe C, and there are people who make subsets of C: “Here’s the safe subset”, or the verifiable subset, or the trusted subset, or whatever. This has been an active area of research for decades at this point. I don’t think any of them have had a measurable effect on practice. The only one that I had any real experience with, and not very deep, is with automotive software.
A lot of the software that runs in your cars is written in C, for good and sufficient reasons. I worked for a while with Gerard Holzmann, who at the time was a colleague at Bell Labs; he was later at JPL… And he was interested in how you make reliable software for basically space missions, like the Mars rovers, and that sort of thing. The automotive industry uses C, and they have a standard, MISRA - the Motor Industry Software Reliability Association, or something like that. It’s a standard for how to write C so that it will be safer. Some parts of that standard are machine-enforceable - “You shall not do this. You may not do that”, and we can check it mechanically - and some of them are more like statements of good intentions, which are not checkable. And people try to stick to that standard in the field, but it’s imperfect, so your cars still potentially have software problems. I suspect the same is going to be true across the board - you can improve the situation with C code through some combination of tools and checkers, some combination of limitations…
For example, one of the standards for spacecraft is that you do not do dynamic memory allocation. All memory is allocated at the beginning.
That sounds like fun.
So you don’t have these multiple frees of the same block, or all these other things that go down in flames. So some combination of good behavior, legislated good behavior, checks, careful testing, and so on - all of these will improve the situation, but I am a little dubious that it will completely solve the problem.
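That allocate-everything-at-startup discipline is simple to sketch in C - one fixed arena, no malloc or free once the system is running. The sizes and names here are illustrative, not from any flight standard:

```c
#include <stddef.h>

/* One fixed arena, sized at build time and handed out at startup --
 * no malloc or free at runtime, so no double-frees or leaks to chase. */
#define POOL_SIZE 4096
static _Alignas(max_align_t) unsigned char pool[POOL_SIZE];
static size_t used;

/* Hand out the next chunk, or NULL when the arena is exhausted --
 * a failure you hit (and fix) at startup, not mid-mission. */
void *pool_alloc(size_t n) {
    n = (n + 15) & ~(size_t)15;   /* round up, keep generous alignment */
    if (n > POOL_SIZE - used)
        return NULL;
    void *p = &pool[used];
    used += n;
    return p;
}
```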
And then if you come along with a language like Rust - which I don’t know enough about; I believe it certainly solves some of those problems with memory allocation - it probably has other problems as well. In my experience, it came with an enormous collection of library stuff; how do I know that works? And that’s gonna be true with all languages, no matter what. There are always gonna be ways in which you can screw up.
Sure.
So are we just hosed then, or is there hope anywhere? How do we secure ourselves? [laughter]
I think we’re hosed.
Okay. [laughs]
[43:57] Since you mentioned cars, there are two people in particular who are pretty bullish on Rust. Obviously, you’ve mentioned that you don’t know Rust deeply enough to know the concerns, or lack thereof, if there aren’t any… But in particular, Elon Musk is known to be bullish on Rust. And then a counterpoint is Jack Dorsey, who famously created Twitter and Square… A forward-thinker on Web3, which we’ll probably talk about to some degree, in terms of decentralization of the computer, and obviously, cryptocurrency. Those two tend to be “thought leaders”, or influencers, or mega serial entrepreneurs that have widespread, almost cult-like followings, and therefore they’re –
Well, and investors, right? They can actually put their money into advancements, yeah.
But those two in particular are known to be bullish on Rust… So I just thought I’d throw it out there, since you mentioned cars, and that sort of standard. I don’t know if Tesla – I know they use Python; I know that it compiles down to C++ just based on a simple Google search. I’m not sure that’s fact-checked or not, but they use Python in a lot of ways, and that compiles down to C++. I’m sure they do others, but those are two folks that are in those spaces that tend to be bullish on Rust.
I will defer to their expertise in this, I don’t know. It’s clear, one of their core competencies is making money, another is actually getting things done. Full credit to both for that. But after that, I don’t know; software still depends a lot on detail.
Yeah. So a language that you know more about, which is more modern than C, is Go. In fact, you wrote a book on Go… So while Rust was a quick abandonment, for documentation and other reasons, Go seems to have caught your interest and kept it for a little while - at least long enough to write a book. Do you wanna tell us about Go? What impressed you about it, why you liked it, or like it still? Your thoughts on Go.
Yeah, so my experience with Go is kind of weird, in a way. I often spent summers at Google in New York, and one summer I was sitting adjacent to Peter Weinberger, an old friend from the AWK days, of course… And one of the other people out there was Alan Donovan, because they were all working on Go in New York. In effect, I was an intern for Alan that summer, and I wrote some Go… And we got to talking about the state of the art of Go books, and his contention - and I think he was absolutely right - was that there were no really good books on Go at that point. So I said the obvious thing, “Well, then you ought to write your own Go book.”
Yeah.
So we did it together, but truth be told, approximately 90% to 95% of it is him. He’s an astonishingly good programmer, and he is also a very, very good writer, and he knew Go inside out. So whatever is good in the book is Alan’s work.
I am not much of a Go programmer. I could sort of cope at the time. And I haven’t done a lot with it recently. The place where I’ve found it particularly good was that I used it for basically crawling kinds of things, where you would start up a process to go and look for something somewhere else, and then you’d wanna have a bunch of those running concurrently, and then just grab the results as they came back. So think of it as a crawler; it’s the simplest example of that sort of thing. And expressing that in Go was just so much easier than expressing it in Python threads. And it seemed to run faster, at least in the specific case that I was playing with… So that’s a part of the language that I liked.
It was culturally compatible with C. It sort of looked like a modern version of C, although there were some weirdnesses that took a while for me to get used to… So in that sense, it all seemed pretty good. And of course, two of the three creators, Rob Pike and Ken Thompson, were good friends. Both their good judgment and their implementation skills are pretty remarkable… So it all sort of hung together in that way. But I just don’t write enough Go to have an informed opinion about whether you should use it, or Rust, or whether those would solve the problems of mankind for you or not.
[48:12] Well, one of the things about Go which it shares with C is the simplicity. I think Go has something like 25 keywords, maybe less; I’m not sure the exact number. But not very many keywords. You can learn probably the entire breadth of the language in maybe an afternoon, at least at a surface level; you can grok it pretty quickly. It doesn’t mean you’re gonna be an expert at it, by any means, but it has the simplicity going for it. It also can be a constraint when you’re trying to build dynamic, complex things. Go has famously lacked generics support for its entire time it’s been alive as a thing, which is over ten years now… Until now. With Go 1.18 they are finally landing this new generics feature. It’s highly controversial; some people think – and these are gophers. Some gophers say “We don’t need generics, we don’t want generics. We have codegen, that’s good enough.” Others say this is gonna bring an entire new group of people into Go, it’s gonna make it much more expressive and useful… I’m curious your thoughts on that big feature, which has not created a Go 2.0. It’s still backwards-compatible, but it is complex. It took a couple of years to get in. Lots of iteration on the design of the feature, and now it’s landing. Do you think this is good for Go as a language? Do you think it’s perhaps departing from its simplicity? What are your thoughts?
The same disclaimer, I guess - I’m not writing enough Go to have a really informed opinion. I think in some settings generics are actually helpful, because that way you write the code once and then instantiate it with different types, and you don’t have to think about it as much. And certainly, I’ve used languages with generics - C++ a bit, Java more - and they’re very helpful for certain kinds of things, no question at all. Does that then follow in Go? I’m not sure; as I say, I don’t know enough. I think the answer is probably yes, but the reasoning is based more on the people and the process by which Go changes than on the technical content.
The Go evolution process is exceptionally careful and cautious. Go remains backward-compatible right back to the beginning. If you wrote a Go program ten years ago, it’ll still work, no problem. And that is something that – well, we mentioned Rust, which seemed to be changing very rapidly, at least for a while. And then Python - certainly, I’ve been bitten by Python changes going from 2 to 3 etc. So if they make a change of substance, like the addition of generics, it’s been exceptionally carefully considered by people who actually know what they’re doing.
Now, the fact that people are still debating it - people can differ. Difference of opinion is what makes horse racing. So I don’t know enough about it to have anything more than “Well, I’ll put my faith in people who have actually studied it hard and decided in the end that it’s a worthwhile thing to do.”
Those generics - do you know enough, Jerod, to know if generics means that you have to do it a certain way? Is it by force, or is it just the availability of it that’s the controversial aspect to generics being added?
Yeah, it’s a new surface area. So you can just completely ignore it if you want to. I think a lot of the concern at this point is not the feature as implemented, it is how the community at large will use and potentially abuse the feature because of the excitement and the ramifications of that… Maybe not in the standard library, but in packages that people use, and popular things; it’s like, it might make it to where people abuse generics because they’re so excited that it exists… Which I think is also the consensus around goroutines, was that because Go made that whole deal so easy and nice to use, people were using it everywhere, and it ended up making Go programs more complicated and hard to maintain because of that.
[52:02] So I think at this point those are most of the reservations. The design of the feature, and the performance implications, were also concerns… Like, “Is this gonna slow Go down quite a bit?” Because one of the things Go is famous for is being extremely fast, even to compile… So “Is this gonna slow down compile times?” was the question. I think now it’s more “Hey, are people going to abuse this to the point where all Go example code and libraries have generics flowing around everywhere? We don’t want that to be the case.” I think it’s more of a cultural thing at this phase. And time will tell, I guess, on that front.
Yeah.
Well, we’re talking about big changes to things… I wanted to loop back around to Unix a little bit, even though we’re far afield from it, because I have a question about the web. Now, you pre-date the World Wide Web… First of all, I wanna compare the web with Unix, but before we do that - when the web became a thing back in the ’90s, Tim Berners-Lee, the www, that whole deal - where did you stand on it? What did you think? Were you an early adopter, or an early abandoner of the World Wide Web? Did you think there were other things that were better? Gopher? I don’t know. What were your thoughts when it first came around? Was it gonna be a passing fancy? Because a lot of people panned it; they were like “This thing is not gonna take off”, and it clearly has.
It’s sort of embarrassing to admit, I guess, but my first encounter with the web was I was visiting Cornell; I gave a talk in the Computer Science Department there and I was visiting somebody who I think was actually a physicist… And he showed me this weird system that they had where you could type numbers and it would give you access to various physics literature. You could get a copy of a paper, or something like that… And this would have been probably roughly the fall of 1992 or something like that. And I looked at it and I said, basically, “So what?” I’m not sure I phrased it that way for him, but it was like…
[laughs]
And you know, in hindsight, if I had been smarter, I would own you guys. I would own everything.
[laughs]
So don’t take my advice of what the future of anything is going to be. I blew that one completely, sadly…
[laughs] Well, it happens. I think the iPod was famously panned by the – who was it? …the creator of Slashdot. When the iPod was first announced, he had a now famous –
Steve Ballmer also put it down, but then they also had their competing product, eventually.
Well, Steve Ballmer was laughing at the iPhone when it was announced.
You’re right.
The iPod – he said something like “One gig of storage, smaller than a Nomad. Lame”, or something, was his quote… And the iPod, of course, was the beginning of Apple’s big run in innovation. So it happens to the best of us, clearly…
I recall Bill Gates being on Letterman… And Bill Gates was trying to describe what the web would be. This was in the ‘90s, in this initial phase. And David Letterman was like – but Bill Gates also tends to be a punching bag to journalists, or pundits, or folks in David Letterman’s –
Comedians? [laughs]
Yeah, exactly. Comedians, sure. I guess that’s – yeah, he’s probably more a comedian than he is a pundit, although both sometimes. He was like “What is this @ symbol?” and just sort of like making fun of Bill Gates… And Bill Gates is trying to describe – if you watch it now, you’re like “He was describing the future.” And David Letterman was totally laughing at him.
I think we often don’t get a chance to talk to someone, Brian, that has pre-dated the web… And I love that aspect of you that you’re like “I don’t wanna tell the future”, but just knowing the moments when past meets future, and your response to that moment is priceless to me, so I appreciate that.
So when did you finally come around? Because here we are, it’s 2022, we’re all using the web right now as we record this… In an amazing web application, in our browsers… Surely, you may have panned it or thought it wasn’t gonna be big, but at a certain point there was adoption and you probably hopped onboard.
[56:08] No, actually – my first web experience, I was still at Bell Labs, and it was very early days. I’ve forgotten the date, but let’s call it ’95. Netscape had just appeared, in the guise of Mosaic; so maybe that’s more like ’93, or something like that… And one of my colleagues there, Eric Grosse, had the idea that we could take AT&T’s 800 number directory – remember, AT&T provided phone service for much of the country at that point, and it had this 800 number directory. So if you wanted to know the 800 number for United Airlines, you could look it up in this thing. And it was a paper book, which was published every six months or so; like an old-fashioned phone directory. And Eric said, “Gee, maybe we could put the 800 number directory on the internet, on the web”, or something like that.
So he and I and a couple of other folks basically cobbled together something; it was just straight HTML, with links and so on, so that you could go to this website and it would give you the 800 number directory for AT&T. I’ve forgotten the number, but it was probably millions (modest millions) of records.
So that was AT&T’s actual first web service. And of course, nobody in the company knew what to make of it. So we had a lot of flap getting it out. I mean, we had a prototype running in an hour or two, as you could imagine, because it was trivial…
Right.
…and then it took us months to make it visible on the outside. And the only thing that pushed AT&T over the edge was a rumor that MCI - another company no longer with us…
Yeah, I remember MCI.
…was going to release a web service of their own, and AT&T wanted to have the credit for having the first web app from a communications company, so we got approval to put it out, but… [laughs] It was kind of silly. But no, I thought the web was a great thing right from the beginning.
What was the stack for that? Was there a database? Was it just simply HTML? What was some of the hierarchy?
The original 800 number directory was just flat text. It was basically “Here’s a number, here’s a name”, and a scattering of other things related to it. So it was literally flat text; just one big file. I still have it.
Did you at least use AWK to generate it, or anything?
I don’t remember what we used. You could probably do it with a text editor, because it wasn’t that huge. I still have it floating around somewhere to see what it was. The other thing that was interesting about it - it was just riddled with errors. It was indescribably messy. I mean, how many ways can you spell Cincinnati? And the answer is 13…
13… [laughs] Wow.
So part of the job, what we offered AT&T, was “Hey, we could clean this data up”, and nobody seemed to be very interested in that, either. It was like totally different universes - the old-line, let’s call it telephone-service kind of thing, and these new people doing things with this new technology, the web. So it’s not the Letterman effect that Adam was describing, but it was the same sort of “Gee, this is brand new, and it probably isn’t gonna do any good, so let’s not do much about it.”
Yeah. When presented with the future, it’s often so novel that you can’t understand what the future is going to be, so you just shrug it off.
Right.
Yup.
Or for a while we try to shove the present into it. You know, it takes a while… That’s why we talk about cloud-natives, or web-natives. People that grew up with the web. They think about it in a different way than those of us who pre-dated it… And come to it and say “How can I apply my current perspectives into this new thing?” Which, you know, generally produces some value. But then there’s like the next generation, or maybe a change in your own mind to say “No, I’m gonna think about it truly natively, as a starting point, versus as a thing I’m coming to.” And that’s usually where the creativity and the innovation takes off, because you just think about it in a different way… And it’s hard to shove the present into the future. You’ve gotta kind of build the future.
So when I think about platforms, Unix (and its derivatives) and the web, for my money, are like two of the greatest platforms ever created in terms of just opportunity and captured value, like people actually building things that change lives etc. And I think there’s some common things between the two. Of course, one is built upon the other, and it seems like Linux on the desktop never became a thing, but the web and web servers and server-side code really made Linux – or didn’t make Linux become a thing, but… Linux is entrenched because it was a great operating system for the web to run the server side.
So they’re related and one builds upon the other, but in terms of Unix, whether it’s in the philosophy or even in the implementation, and the World Wide Web and its design and its philosophy, do you see parallels? Are there commonalities that we can look at and say “These make for great platforms?”
Yeah, that’s a really interesting question. I think you’re right - I see some parallels that might even be instructive. I mean, fundamentally, it’s the core simplicity of the thing. These are not complicated; they are simple. As we talked about earlier, the essence of Unix is a handful of ideas that work really well together: the hierarchical file system, the programmable shell, redirection, not too many system calls… And interestingly, text is kind of the universal medium for the exchange of information.
Now you look at the web. There’s (if you wanna call them) system-calls-y kinds of things. There’s HTTP, HTML, the URL. And that’s it. There is nothing else. It’s got the internet as an infrastructure… Oh, and everything that goes across the web is text. So that commonality there I think is quite real. And you know, Berners-Lee created that stuff kind of out of nothing, but building on what was already there. So he had a very clean, simple idea of what to do.
HTML is basically a dialect of SGML, which derives from GML, and on and on into the past, but he simplified it and cleaned it up in a way that made it very useful for this kind of application. So I think there’s actually quite a bit of parallel there.
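To make that point concrete, here’s a minimal sketch - ours, not anything from the conversation - showing that a whole HTTP exchange really is just lines of text over a socket; `example.com` is a stand-in for any web host:

```python
import socket

# An HTTP/1.0 exchange is nothing but lines of text over a TCP connection.
host = "example.com"  # stand-in host; any plain-HTTP server would do
sock = socket.create_connection((host, 80))
sock.sendall(f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii"))

# The reply - status line, headers, HTML body - comes back as text too.
response = b""
while chunk := sock.recv(4096):
    response += chunk
sock.close()
print(response.decode("utf-8", errors="replace")[:400])
```

Run it and you can read the protocol with your own eyes - request, headers, and markup, with no binary framing anywhere.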
Yeah.
This is why I think the book you wrote, the memoir on Unix, is so important. It’s why I’m gonna put it next on my list to read, because sometimes when you loop back there’s such fruit there… In another interview, with Lex Fridman, you mentioned how in the early Unix days there was so much low-hanging fruit; that’s why so much was happening. And I feel like with the web, even though we’re deep into it, it’s still the beginning in so many ways. So to look back at Unix and what it’s become, through Linux and others, and the underpinnings it provides for all of this - even the web itself is built on top of it - I think it would make sense for somebody that’s looking to the future… Because there’s so much you can draw about the future from the past, despite what we just said about how we can sometimes carry our experience into the future and have it be baggage. But I’m putting it on my list; I’m excited to read it, because I’m obviously a fan of Unix, but to see how it might paint a picture of the future is pretty interesting. And obviously, preserving the knowledge - going back into the past and looking at what made our foundations what they are - I think is pretty interesting.
There’s a really interesting idea there, which I see from time to time… People have gotten used, over the last 20 years or more, to graphical interfaces - the idea that you look at something on a screen and you click a button with a mouse, or that sort of thing… And underneath that there’s an awful lot you can do with a command line interface; and I think in various fields and in various areas people rediscover the idea of a command line that you can use to abbreviate common things, to automate processes, to do things without having to poke the buttons all the time… And I see that in any number of areas, where, you know, “Gee, I could write this little program based on text to process text that comes from the internet, or whatever, and do my job more efficiently, or more effectively.” So I think there’s probably low-hanging fruit, for example, in that sort of thing - pick your area.
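As a sketch of the kind of little text-processing program Brian describes - ours, not his - this fetches a page and tallies its most common words; the URL is a placeholder for whatever text resource your own job involves:

```python
import collections
import re
import urllib.request

# Fetch a page and tally its most frequent words - the sort of small,
# text-in/text-out chore a command line makes routine.
URL = "https://example.com"  # hypothetical; point it at any text resource

with urllib.request.urlopen(URL) as resp:
    text = resp.read().decode("utf-8", errors="replace")

words = re.findall(r"[a-z']+", text.lower())
for word, count in collections.Counter(words).most_common(10):
    print(f"{count:5d} {word}")
```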
That does assume though that the computer continues to be a paramount point of, I suppose, creation. I think we’re in this unique space, and I don’t know much about the future, because I haven’t been there… But there’s a lot of creators that don’t even touch a computer.
Right.
In quotes, “creators”. And they tend to be visual creators, and things like that, but the only machine they use is their smartphone, or maybe an iPad - less of, say, a Linux machine, or a MacBook, or something like that. I know that you use a MacBook Air to program; I’m not sure if that’s still true or not, but you’re on a Mac these days, so it’s got Unix underpinnings in there.
[01:08:07.11] I’m just curious how that plays out, because if the computer shrinks in terms of its usage, do we still have access to the command line? Can we still appreciate those original principles that drive things forward? I think programmers and people in the software space gravitate towards a computer, because that’s where we have the most power, but you see more and more people moving to these other devices, and the command line tends to take a backseat on those operating systems.
Yeah, the iPad is a nice example of that, in a way. I mean, I have an iPad; I turn it on maybe once every six months or something like that, because it’s an utterly useless device to me: it only lets me do what Apple thought I wanted to do. And I don’t know how to make it do most things, because it requires funny, artificial “wiggle your fingers while rubbing your elbow” gestures to make something happen, where with a command line interface - and this is the old fogey speaking - I could type two or three characters and I’d be done. But I don’t use it primarily because it’s not programmable. It is a totally useless device to me. And a phone, I think, falls into that same category. I use a phone occasionally; I haven’t turned mine on for several days, but you know… I can’t program it either, so it’s not as interesting or fun.
Yeah. So I think this ties together a couple of threads that we’ve been hitting on. One is that the next generation of creators are growing up with that phone and that iPad. That’s what they know, and that’s what they grew up with. So they are mobile-natives, so to speak, and they’re not super-exposed to the possibilities outside of that pane of glass. Now, one thing I’ve noticed is that your most recent work and what you teach now - this book that you have, Understanding the Digital World - it’s not a programmer book; it’s a book for a broader audience, a different audience. And I’m curious - as you’ve gotten older and more experienced, it seems like your focus has shifted, or you’ve changed your audience to a certain degree… Who you’re targeting to teach and to instruct and to influence is not necessarily programmers like us, and I’m wondering where that happened… If there was a conscious moment where you were like “I’ve gotta teach regular people things, too.” Or if it’s because the next generation may not be programming; they may be on an iPad, but you can influence them to say “Hey, did you know there’s a whole world of possibility that you’re not experiencing because you don’t have a PC or a MacBook with a command line?”
Yeah, the book that you’ve described - I wrote it actually for a course that I’ve been teaching at Princeton for the last 20-odd years, off and on. It’s a course for people that are very non-technical; there’s nobody in it who’s probably ever gonna do computer science…
Yeah.
But they’re growing up in a world where computers and communications are obviously pervasive, and that’s changing the world extremely rapidly. It’s accelerating. And I think it’s important that anybody who considers themselves an educated person ought to know some of that stuff - how computers and communications work, how they affect people, and what they can do about things like privacy and security and defending themselves in various ways. And you can see the effects everywhere. I mean, think of social media - mostly bad, occasionally good. The advertising industry, cyber-whatever… All of these things.
So the kids in my class - the non-technical ones - they’re gonna be in positions of power and authority and influence to a degree which I’d say the kids in computer science are much less likely to reach, actually… So wouldn’t it be nice if these folks knew something, so that when they’re running the world 20-30 years from now they don’t make silly mistakes about technology, or at least are better able to assess what’s going on and make better decisions? So that’s the hope of the book.
[01:12:06.17] I got into the whole thing kind of by accident. I spent a sabbatical, sort of - you know, one semester. I was still working at Bell Labs, but in 1996 I spent the fall semester at Harvard teaching CS 50, which was this big introductory course for pretty much everybody who wanted to learn anything about computing. I did it as a visitor for one semester… And what I discovered in that class is that there were lots of kids who were very capable at programming - the ones who’d started programming when they were basically five years old - but there were lots and lots of other people who had no insight whatsoever into computing, and would probably never need computers, but had to learn something about it; in some ways it satisfied a requirement. And it was hard to have one course that would satisfy that broad a population… So when, some years later, I wound up at Princeton, I thought “Why don’t I try to teach a course for the non-technical end of that broad spectrum - the kids who were history majors, or English majors?”, that sort of thing. That’s the genesis of the course.
It’s been a lot of fun… I mean, it really is fun to try and explain the kinds of things that you and I would think are interesting and fun and important and all that stuff to people who come from very different backgrounds and may not appreciate why we think it’s important, or fun, or relevant.
How do you impart that? How do you bridge that gap in your classes? Any techniques?
I think the way you do is there’s sort of a framework of stuff that I think they should understand. You know, what is computer hardware, how do computers actually do their thing, what is software, what does that mean, what’s happening when you put an app on your phone… And then the communication stuff - the internet, and the web, and all those.
So there’s that technical substrate underneath it, and then there’s the “Okay, but how does this show up in the real world? What are the things the real world is doing to you that depend on the way that technology works?” Advertising and what is called surveillance capitalism, I guess, is a fine example of that.
I get the kids to start up their computers in class, open up their browsers, and count the cookies. And the standard response is basically “Oh my God, I can’t count them”, because they’ve got thousands of cookies. I explain to them what’s going on - how they’re being tracked, how Facebook knows more about them than they know about themselves. And after a while they start to maybe remove some of those cookies, or disable the ones they don’t need, that sort of thing.
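A rough, scripted version of that classroom exercise - a sketch, not Brian’s actual demo - counts only the cookies a single fresh visit sets, which already makes the point even though a browser accumulates far more over years of surfing:

```python
from http.cookiejar import CookieJar
from urllib.request import HTTPCookieProcessor, build_opener

# Visit one page with a fresh cookie jar and count what the server sets.
# A browser accumulates cookies across sites for years - hence the thousands.
jar = CookieJar()
opener = build_opener(HTTPCookieProcessor(jar))
opener.open("https://example.com")  # hypothetical target; try a news site

print(f"{len(jar)} cookie(s) from one visit:")
for cookie in jar:
    print(f"  {cookie.domain}  {cookie.name}")
```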
We talk about cryptography; I explain why government attempts to put backdoors into cryptography are a desperately bad idea, and you don’t need to know the mathematics of it to realize “You know, this is not gonna end well if you allow that sort of thing.” Then there’s getting them to understand the idea behind Moore’s Law and whatever might replace it - the fact that things will continue to get smaller, cheaper, faster, better in unpredictable ways, and that will continue to have an effect on their lives. The internet of things is an example of that today: all these little devices watching you, and talking to each other, and telling the world about you in ways you may not want. So it’s a combination of actual technical content, pitched at a level that I hope is accessible to them, and then how that relates to the world they live in.
As we’re talking about this - you mentioned your iPad and how unused it is, once every six months… And I’m assuming your phone is a smartphone; I’m not sure if it’s an iPhone or not - but your phone, how you maybe check it every couple of days… The conclusion I draw from this is almost a world of obedience, versus the possibility to go rogue. And let me explain that… On an iOS device, let’s say - and I’m not trying to say that Apple is being malicious with this - the functionality of the device is definitely limiting. You can’t program it on the device itself; you have to leave the device and enter your computer world, with the command line, with the Unix underpinnings and the Linux backgrounds and the packages and this whole world of possibilities, to make the other thing work.
[01:16:24.29] And so if you only stay - Jerod, to your point about the mobile-native aspect - in that mobile-native world, it’s almost a world of obedience. Like, “Obey us and use the device as we see fit.” And Brian, you mentioned the cookies - well, try to find your cookies or count them on an iOS device without developer tools. You can’t, so you don’t know what’s happening beneath the surface; you accept it, and you just sort of obey and use.
But in the world of a full-fledged computer, where you can actually make it work and program it, you have way more control. On your network you can run a Pi-hole that blocks requests to ad and tracking domains, or blocks certain URLs, disabling those ad-tracking abilities. So you have more control over your digital presence on that device. That’s just interesting - I just thought about it as we were having this conversation. What do you think about that, Brian? What do you think about that parallel between obedience and - I don’t know if I would call it rogue, but maybe freedom?
Right. No, I think that’s spot-on, in many ways… Because certainly, the devices you get, and especially the iPhone - I don’t have an iPhone - it’s kind of a walled garden. The idea is that you get in there and it’s a very, very nice environment, and it does all kinds of things very smoothly, but you can’t get out. And in particular, you can’t write code for your iPhone; even if you’re a programmer, you have to stand on your head to get code to run on more than your own personal phone. And even there, it’s probably restricted. I haven’t done that for years.
So yeah, you’re supposed to live within the confines of whatever that system is, and then they can do, in effect, whatever they want to you. And most people are not aware of it; they’re not aware that there might be something outside, and so the notion of freedom is kind of unclear if you don’t know that you’re actually kind of locked up, in a way, inside this nice walled garden.
Yeah. So you mentioned how you’re teaching all these students who aren’t CS students - maybe they’re going into law, or business, or medicine, or politics, for example… These will be the leaders of the next age, and they’re not software people. What’s ironic is that the most influential, powerful people of our day are software people. It’s your Mark Zuckerbergs and your Jack Dorseys, and this whole group of Silicon Valley entrepreneurs and software folk who - whether by pure chance and luck, or by the motions of capitalism and the web, a free and permissionless platform - have kind of sucked all the air out of the room, to a large extent. So they have this power, which I don’t think any of them necessarily asked for, but it kind of came upon them. Maybe they desired it; who knows - I’m not gonna psychoanalyze these guys… But it’s just an interesting fact of history that that’s where we stand.
And now we have a next generation of programmers, many of whom you will never teach - not because they’re not in your Understanding the Digital World class, but because they’re opting out of universities altogether. Many programmers today are going the bootcamp route, or going completely self-taught, online; they don’t have a four-year degree and they’re never gonna get one, because it’s too expensive, or they wanna move fast and break things, or whatever it happens to be. I’m curious about your take on this trend away from computer science degrees and universities, and towards online learning, coding bootcamps, and this kind of short-circuit into the workforce.
[01:20:06.28] Yeah, again, it’s one of these things where I don’t think I know enough about it to have a really informed opinion, but those trends are absolutely there. I don’t know - bootcamps were very much on my radar 3-4 years ago. I don’t see as much of that now, and that may be because my radar is aimed somewhere else. So maybe they’re just as active.
There are certainly enormous opportunities for online learning, although empirically, for people who start online, things often tail away very quickly. Enrollments in many online courses decay exponentially after the first couple of lectures. But that isn’t to say it’s not a viable way to do things.
I think there are several things you get from going to, let’s say, a four-year college, or something like that. If you do that with a focus on computer science, you learn more than just how to write code. You actually learn a bunch of other things that might be germane, like “Gee, what’s a better way to do something?” You learn something about algorithms… You may have a better understanding of what’s underneath the various pieces of your stack. You may realize, “You know, Python is sort of an interpreter, and it runs fast because underneath it it’s…” So there are a lot of things you might not see if you haven’t encountered them in courses - places where you’re sort of forced to try to understand them. So it’s not to say that one is better than the other, but there’s maybe a different level of experience.
And of course, another thing that you get - one could say cynically - at a university is networking. You meet a bunch of other people who might be in some way or other friends, acquaintances, business associates, significant others for your life going forward… And one of the advantages of university is that you meet people who are not the same as you.
Right.
One of the things that I like about that course that I teach is that I meet these people who are history majors. I’m not a history major; I find it interesting, so it’s really very valuable and important to deal with people whose view of the world and how they do things and what they find interesting and what turns them on and all that is just different. And I think that broadening experience is something that you probably would miss if you went straight into a bootcamp and straight into doing a startup with 5 or 6 other people who are exactly like you. But you know, different strokes for different folks. I’m not saying that’s the right way to do it either.
Sure.
It’s interesting to think about a world where this changes, because it reminds me of the process of making - let’s say tea, for example. With tea, you fully immerse something in something for a duration of time. An espresso - obviously, they’re two completely different drinks, but both processes create consumable liquids; there’s a similarity at least. With tea you may steep it 4, 5, even 10 minutes - some tea takes a good 10 minutes - and it may take a higher or a lower temperature. An espresso takes a high temperature, but the compression and the time are condensed, and still, out the other end you get this liquid - both consumable, both with caffeine, both with similar attributes to the consumer… And that’s what it reminds me of. You can get to the same place by different paths.
So back to your “different strokes for different folks” - I do agree with that… I just wonder how we preserve wisdom like yours and others’, which lives in that kind of environment. Like, are you ever not gonna teach at Princeton? Would you ever eject yourself from that environment - “Go to Brian Kernighan’s website and subscribe directly to him, and he will teach you directly” - would you ever eject and go into a basically direct-to-consumer model? You’re in a packaged-goods scenario, where you package Brian up and put him into a class…
[01:24:02.07] You’re a packaged good, Brian…
[laughs]
I mean, if we think about the analogies - you’re not a direct-to-consumer teacher. How do we preserve this non-direct-to-consumer teaching, where there’s still wisdom and a reason for this idea of steeping? I know I went way into the weeds explaining that, but how do we preserve it? How do we keep the need for it in this future? I know you don’t hypothesize about the future much, but I’m sure you’ve got some ideas there.
Yeah, it’s clear that - to take the steeping analogy - a four-year university is one of many ways to achieve an education… And not just technical. Any kind of education.
Sure.
And I think probably what you want is something that makes whatever pathway is gonna work for somebody readily available - that they don’t get shut out of it for financial reasons, or discriminatory reasons of any sort, and all of these things. What you’d like is this idea of equal opportunity, where people go through whatever process makes the most sense.
You’ve seen from time to time discussions about, for example, whether it’s better to go to college and get a degree in something, or better to go into a trade, like plumbing. You can get out of high school, go through an apprenticeship, learn a trade like plumbing, become very good at it, and actually make a fair amount of money - and for many people that might be a viable alternative, perhaps even more satisfying than going off to, let’s say, a four-year college. A two-year college is some kind of intermediate position in that…
I think the main thing is to make it so that anybody can understand what the options are, and find a path through the options that makes the most sense for them. I suspect there are a lot of artificial barriers, and the country would be better off if those barriers could be reduced. And I don’t know what they are for different people. Going to a private school like Princeton, an expensive place - that costs money. So there’s a financial barrier there. Some places solve that with student grants; at other places you have to take out loans, and then you’re stuck with debt for some period of time afterwards.
So I don’t have a good solution. I think it is important to make it possible for people to go in whatever direction seems to work best for them, and to have a clear idea what the trade-offs involved are.
Well, I mentioned our tech oligarchs… People talk about late-stage capitalism. Maybe this is like late-stage World Wide Web, because there has been a consolidation of power and value capture. I think there’s still a lot of opportunity on the web. That being said, there is a group of people, far and wide, who are trying to rethink, reinvent, change the web. They’ve dubbed it Web 3. There’s a lot of particulars on this topic that we don’t necessarily have time for, or the interest… But this idea of decentralization - do you think there’s a nugget there that could fix some of our problems? Do you think it’s a red herring, or a grift? What are your thoughts on the decentralized web, and kind of rethinking the web somewhat fundamentally?
Yeah, Web 3 strikes me as being just another buzzword. I have to look it up to know what people mean by it… I think the idea of decentralization - in some ways I wonder whether that’s going back to the way it was in the good old days, when the web was decentralized…
Yes.
…when we didn’t have these concentrations of power, the tech oligarchs that we have now - well, let’s say the folks from Google, or Facebook, or whatever… I would say decentralization in that form would probably be quite a good thing. Whether it requires a technical mechanism like blockchain - that just strikes me as adding trendy things together to get something that’s purportedly even more trendy. I don’t see that at all, period - leaving aside the environmental impact of the computing needed to make blockchain stuff work.
So color me pretty skeptical, but you know, maybe I could be convinced. Maybe it’s another one of these things where the future was before me and I didn’t see it.
[01:28:09.04] So we wanna be respectful of your time; we’re getting to the end here, and we’re gonna let you go… I have a few quick-hitter listener questions, if you don’t mind. I grabbed three listener questions that people submitted, that I think you should be able to answer pretty quickly. This one comes from Saul. He asks, “Do you still enjoy programming?” And as a follow-up, “What’s a tool that has been created in the last ten years that you like?”
Well, yeah, I do enjoy programming; it continues to be fun. I think the problem I have is that most of the programs I now write tend to be quite small. They are often AWK one-liners to do something, or maybe a Python program to clean up data in one format and convert it into some other format. Not doing anything that I would call big, or anything like that.
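For flavor, here’s a minimal sketch of the kind of small clean-up-and-convert program he means - ours, under the assumption of CSV in and JSON lines out, not his actual code:

```python
import csv
import json
import sys

# Read CSV on stdin, emit one JSON object per line on stdout - a sketch of
# the small format-conversion chores described above.
reader = csv.DictReader(sys.stdin)
for row in reader:
    # Trim stray whitespace, the most common mess in hand-maintained data.
    cleaned = {key.strip(): value.strip() if isinstance(value, str) else value
               for key, value in row.items() if key}
    print(json.dumps(cleaned))
```

Invoked as, say, `python clean.py < cities.csv > cities.jsonl` (hypothetical filenames) - the Unix-y shape of the thing, text in and text out, is the point.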
Are there tools from the last ten years that I use, in that respect? I would say, on average, no. And that’s probably just because the stuff you learn when you’re young sticks with you better. I have a lot of things in my head, and at my fingertips that let me get things done. Kids in my class look at me and say “My God, that’s just dinosaur-like…” [laughter] So I am not an early follower, I’m a late adopter, in many respects.
Early abandoner.
Early abandoner, exactly. I like that. Coin the phrase. Alright, Chris Shaver asks “Are there other languages besides C that you admire? Why?”
Yeah, that’s a neat question. I suspect, like most programmers, there are sort of half a dozen languages that I’m comfortable in, but it’s enough that when I switch from one to another I have to get myself back up to speed. It’s like switching between driving on the left and driving on the right.
And then there’s another half dozen that I have touched, but don’t remember enough about, and then there’s another half dozen or a dozen where I think “Gee, wouldn’t it be nice if I knew those?” but I never will.
Leaving aside C, certainly the two languages that I use most are AWK and Python, just because they get the job done. I have from time to time written Java programs, and… Java has its merits, but sometimes it’s just like walking through glue to get stuff done.
Good analogy.
There’s a lot of syntax in Java. I have tried functional languages and… Yeah, there’s some mental barrier; I can’t get over it. So I can write some kinds of things in minutes in C or any imperative language, and if I’m stuck in a language like Haskell, it’ll take weeks. It’s hopeless. Sorry, folks.
Fair enough. Last one for you… Will Furnace asks “What are your favorite tech books that are not written by you?” And as a follow-up to that, “What makes for a particularly good tech book?”
The one that I come back to from time to time - and I’ll pick just one - is The Mythical Man-Month, by Fred Brooks. It’s a very old book at this point; it dates from the mid-’70s, or something like that… And there’s an awful lot of it that is very dated, in a sense. For example, all the programmers are male, and all the clerical people are female. That’s just wrong. But there’s a lot of insight in it as well, about what’s involved in trying to get software that works, especially if it’s a large-scale thing. So I find that interesting, and it’s well written. It’s gracefully written.
So that’s one of the books that I go back to from time to time. The other one is not so much a book, but it’s on my radar right at the moment, because I’ve recommended it to a couple of kids in the last 24 hours… Dick Hamming, who was a colleague at Bell Labs for a while - he was in the office next to mine in my first intern summer there - gave a talk in, I think, about 1986, called “You and Your Research”, and it is basically about how to make the most of your career. It’s about an hour-long talk, and it was transcribed. Get the transcription of the first outing, and then watch him on video in the later outings, I guess.
[01:32:19.21] I’ve found it a very insightful way to make the most of what you’ve got - to have a good life, basically, in technology, though not exclusively in your technical stuff. So it’s not a book. He wrote a book too, which I also have, a lot of which isn’t the sort of thing that’s in the talk… But the talk is kind of a distillation of a very effective career, by a guy who really optimized his own way through life. So those would be two that I think are particularly interesting.
Is there a particular chapter that stands out to you? I know when I reference books that stand out to me - like this book stands out to you - there’s often a chapter I reflect on, or go back to. Is there one you can point someone to, if they’re like, “I just wanna check out one chapter, maybe two”? Or maybe it’s a section. Is there a particular chapter that grabs you, that you go back to often?
In other people’s books?
The Mythical Man-Month in particular, the one that you mentioned.
No - it’s not a long book, so I would probably skim the whole thing. And you’ll find things that are just so dated, like how to organize the software and the documentation for a big project. This was before stuff was stored on disk, roughly speaking - it’s that dated… But no, I wouldn’t focus on any particular chapter. I haven’t looked at it in a couple of years at this point, so it’s not in my head the same way.
Gotcha. Okay. Anything left unsaid, Brian? I know that you do interviews here and there, but not too frequently. Is there a question where you’re like, “Man, I really wish they would ask me that” - anything in particular you love to talk about, but people don’t often ask, and you leave just bummed out? Don’t leave this show that way; if there’s something, say it now. If not, then we’ll call it a show.
We don’t want you bummed out, Brian.
No, I think you guys have covered it pretty well. There’s a lot of interesting stuff, and we’ve touched on a big part of it… So no, I think that will do, for the moment.
We appreciate your journey to get here, we appreciate you sharing your wisdom through books and through your teaching, and we just appreciate you showing up today, so thank you so much, Brian.
My pleasure.
Yeah, it’s been an honor.
Our transcripts are open source on GitHub. Improvements are welcome. 💚