Changelog Interviews – Episode #581

It's not always DNS

with Paul Vixie (contributor to DNS protocol design)

All Episodes

This week we’re talking about DNS with Paul Vixie — Paul is well known for his contributions to DNS and agrees with Adam on having a “love/hate relationship with DNS.” We discuss the limitations of current DNS technologies, the revisions needed to support future internet scale, and the challenges in making them. Paul shares insights on the future of the internet and how he’d reinvent DNS if given the opportunity. We even discuss the cultural idiom “It’s always DNS,” and the shift to using DNS resolvers like OpenDNS, Google’s 8.8.8.8 and Cloudflare’s 1.1.1.1. Buckle up, this is a good one.

Featuring

Sponsors

Sentry – Launch week! New features and products all week long (so get comfy)! Tune in to Sentry’s YouTube and Discord daily at 9am PT to hear the latest scoop. Too busy? No problem - enter your email address to receive all the announcements (and win swag along the way). Use the code CHANGELOG when you sign up to get $100 OFF the team plan.

CIQ / Rocky Linux – CIQ is Rocky Linux’s founding support partner. They support the free, stable, and secure Linux distro called Rocky Linux.

imgproxy – imgproxy is open source and optimizes images for the web on the fly. It makes websites and apps blazing fast while saving storage and SaaS costs. It uses the world’s fastest image processing library under the hood — libvips. It is screaming fast and has a tiny memory footprint.

Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Notes & Links

📝 Edit Notes

Chapters

1 00:00 This week on The Changelog
2 01:27 Sponsor: Sentry
3 05:11 Welcome Paul Vixie
4 08:15 Internet's growing pains
5 15:30 Do we need a new internet?
6 21:56 Reinventing DNS
7 27:51 Who pays to progress the internet?
8 30:41 Sponsor: CIQ / Rocky Linux
9 34:31 Upgrading feasibility
10 40:33 Pushing hardware changes
11 50:37 Cool uses and abuses of DNS
12 59:36 Sponsor: imgproxy
13 1:03:04 It's always DNS
14 1:12:48 Paul's resolvers of choice
15 1:14:14 How to run your own resolver
16 1:17:47 Origin of DNS
17 1:20:26 Who should run a personal DNS?
18 1:23:22 Benefits of running your own DNS
19 1:26:14 What keeps Paul going?
20 1:29:41 It's been fun!
21 1:30:08 Join ~> changelog.com/community

Transcript

📝 Edit Transcript

Changelog

Play the audio to listen along while you enjoy the transcript. 🎧

We had Allan Jude on the show to talk FreeBSD, and he said to us “Hey, if you’re into DNS, you should follow Paul Vixie.” And I followed that to Paul Vixie on LinkedIn, and said “Hey, Paul, we would love to talk to you.” I’m into DNS. Adam, are you into DNS at all?

Oh, yeah. Yeah, I love DNS. Love/hate DNS.

And thankfully, Paul was gracious enough to give us some of his time… So he’s here now with us. Welcome, Paul. Welcome to the show.

I’m here, and happy about it.

Do you like DNS, Paul?

Yeah, what’s your feelings?

You know, love/hate, like you were just saying, but what I want to point out is none of the technologies of the internet were designed for the scale we’re now seeing. So no wonder we ran out of address space, or why routing table ingestion is such a problem, and why fragmentation doesn’t work, and all the rest of that… Because it was essentially a laboratory toy. It was built by a bunch of government contractors to communicate with each other… And it did that perfectly. But then it got into a fight with the commercial protocol suite of OSI. And OSI was very much a telephone company creation, and they were gonna bill us by [unintelligible 00:06:51.06] And a lot of people said “We don’t want that future. We want a network that the world itself built for itself.” And it turned out that the internet protocols were just far enough along [unintelligible 00:07:05.15] DNS being an example, but TCP also, IP also. Mind you, we didn’t have the hardware to support encryption; we just didn’t have it, and we didn’t even have a placeholder for it. So in this periodic “Let’s change everything. Here’s my rototiller. Let me go in there and turn it back to raw dirt, and change everything”, we probably should have done that with DNS. I may be the person who did it the most, but it needs it again, and it’s now too big. There’s no way to have a flag day. So we are stuck with a bunch of things in DNS that should have been revised to work at scale.

[00:07:48.12] And then we inevitably had some opportunistic revisions that were backpatched in by somebody who had a business case that required them, and they got it done in a way that now we’re all living with that. And so some [unintelligible 00:07:57.09] because it’s too old, and some because it’s too new. But either way, it’s chaotic. If you read Eric Raymond’s book, “The Cathedral & the Bazaar”, this is the bazaar.

What happens if we don’t make these remedies? What happens to the internet that we know and love, or the connectivity we have with LAN, WAN etc?

Yeah, so I’m not sure that any end user will experience the painfulness. But I think every deployer, every internet operator, every innovator, every implementer of protocols has always sort of felt the pain of “Gee, I need to operate infrastructure, it needs to be able to support the following needs”, and “I need some software to do this part, I need some software to do that part.” You might even write some of your own software for some of it. And then you start looking at the specification and you find that it’s incomplete. So you go looking at the other implementations and you find that they’re incoherent and inconsistent with each other. And what we actually have is a best-effort system at local scale.

So there’s always something that is sort of a threat to stability, or a threat to our profitability, or our ability to go home for the weekend and visit our families… But yeah, those people like to live that way, so we’re not going to treat this as an emergency. But let me give you a very specific example. It has to do with IP packet size. DNS was originally a UDP protocol. It has grown beyond that, and some of that’s controversial, and we can talk about that later if you want… But as a UDP protocol, it meant there was no endpoint state. In other words, the kernel of the initiator, where the questions were coming from, was not trying to remember anything like a TCP session, or port numbers, or any of that. It was just saying, “Okay, we sent the packet. It’s gone, and I have no remaining burden.” And then the response comes, and gets delivered to you, and again, the kernel had no state for the query. And that was very necessary when the fastest computer on the new ARPANET [unintelligible 00:10:12.06] which was about 450,000 instructions per second. You did not wanna be putting extra burden anywhere. It had to be as simple as it could be; aiming at a certain austere beauty. But the trouble with UDP is if you want to send something, send a response let’s say, that is bigger than whatever your network can contain - most of those are on Ethernet. Ethernet is still 1500 bytes as the maximum transmission unit. And so if we want to send more than that, we can’t. We have to make a choice. And every choice we could make will be a bad one. One choice is to truncate it, and say “Well, this is what will fit. And here’s a little indicator telling you, the person who asked this question, that the answer is incomplete.” And then you, when you receive the answer, you could try and be intelligent and say “Well, I see it’s incomplete, but let’s see what’s there. It might be enough that I can get work done anyway.” That’s not what happens.
They say “Ah, it’s incomplete, so I’m going to start over again with TCP.” TCP doesn’t have message length limitations, but it requires kernel state. In fact, it requires three roundtrips and a minimum of seven packets to exchange one question or one answer. And that’s crazy talk. That is an awful lot of network state and overhead just to ask a question and get an answer.
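The truncate-then-retry-over-TCP dance Paul describes is visible in the DNS header itself: RFC 1035 reserves a single “TC” (truncated) flag bit for exactly this signal. A minimal sketch in Python — the header values below are made up for illustration:

```python
import struct

# DNS header layout (RFC 1035): ID, flags, then QD/AN/NS/AR counts --
# six 16-bit big-endian fields, 12 bytes total.
TC_BIT = 0x0200  # the "truncated" flag inside the 16-bit flags word

def needs_tcp_retry(header: bytes) -> bool:
    """Return True when a UDP response carries the TC bit, i.e. the
    answer was cut to fit and the client should retry over TCP."""
    _id, flags, *_counts = struct.unpack("!6H", header[:12])
    return bool(flags & TC_BIT)

# A truncated response header (flags 0x8200 = response bit + TC bit set):
truncated = struct.pack("!6H", 0x1234, 0x8200, 1, 0, 0, 0)
# A complete response header (flags 0x8180 = response bit, no TC):
complete = struct.pack("!6H", 0x1234, 0x8180, 1, 1, 0, 0)

print(needs_tcp_retry(truncated))  # True  -> client falls back to TCP
print(needs_tcp_retry(complete))   # False -> the UDP answer was whole
```

As Paul notes, real stubs don’t try to salvage a truncated answer; they see TC set and start over with TCP, with all the handshake overhead that implies.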

[00:11:43.00] So what you really want to do is be able to answer all questions in the size of a 1985-era Ethernet packet, at 1,500 bytes. And that’s a terrible trade-off. So what we would prefer to do is send a larger answer. And the IP protocol permits us; it permits you to say “Here is a datagram, an IP datagram, and it won’t fit in one Ethernet packet. So it’s gonna get chopped up into several different segments, basically, or fragments. And you’ll be able to reassemble them on the far end, because we gave you just enough of a hand for that to be done.” But that was not necessary for every operator, for every day of the life of the internet, from 1970 until now - and pretty much anything that wasn’t ubiquitously necessary was poorly supported. And fragmentation would be a great example of something that just doesn’t work. And a number of people have said, “Gee, we need fragmentation, so let’s figure out how to make it work.” If you want to find a quicker path to getting to sleep tonight, go read the Path MTU Discovery spec.
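The fragmentation mechanism he’s describing — chop an oversized IPv4 payload into MTU-sized pieces, with offsets expressed in 8-byte units so the far end can reassemble — can be sketched arithmetically. This is a simplification that ignores the real header’s flags, options, and identification field:

```python
def fragment(datagram_len: int, mtu: int = 1500, ip_header: int = 20):
    """Split an IPv4 payload of datagram_len bytes into (offset, length)
    fragments. Fragment offsets are carried in 8-byte units, so every
    fragment except the last must hold a multiple of 8 payload bytes."""
    max_payload = (mtu - ip_header) // 8 * 8  # 1480 for a 1500-byte MTU
    frags, offset = [], 0
    while offset < datagram_len:
        length = min(max_payload, datagram_len - offset)
        frags.append((offset // 8, length))
        offset += length
    return frags

# A 4000-byte payload over Ethernet needs three fragments:
print(fragment(4000))  # [(0, 1480), (185, 1480), (370, 1040)]
```

Lose any one of those three packets and the whole datagram is unrecoverable — part of why, as Paul says, fragmentation was never well supported in practice.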

Okay…

But it doesn’t work. And later, with IP version six, which has been trying to take over for - God, a little over 20 years now - it took a different approach, and said “Hey, we’re not going to fragment packets inside the network. If fragmentation is necessary, it has to be done at the sender.” And the sender, of course, since it can’t discover the size of the packet that would get through, doesn’t know what size to use. And that in turn means that no receiver is ever going to have the ability to reassemble fragmented IP version six datagrams, because they were never sent. They were never tested, they were never deployed. And so we’re just kind of stuck. We’re way out on a limb. We can’t send messages that are large enough to contain even the current set of answers with DNS security signatures, let alone what will come in the post quantum world. And we’re just – we all know this. We’re in the river, near the waterfall, we know we’re headed to the waterfall, but we all have important things that we have to work on right now, or we don’t even make it to the waterfall. So we don’t. And as we get closer, you’re gonna see newspaper headlines about yet another y2k-style debacle for the whole industry to worry about together.

When do you think that would occur?

Well, fortunately, quantum computing is always 10 years out. If that changes, and we end up needing to have post quantum crypto, so that it remains impossible to factor large numbers and it remains impossible to store and decrypt information, then we’re gonna have a problem. Because we’ll be trying to move toward a type of crypto that simply won’t fit UDP. And we will get fragmentation as a backstop; it’s just going to happen to TCP, with everything. So that means there’s a tipping point. If we get close enough to real quantum computing, to where we need post quantum crypto, we’re just going to be using TCP for all DNS, except for the part that doesn’t get the memo, and therefore does not switch, and therefore doesn’t work, or doesn’t adopt post quantum crypto at all. And I don’t mean to sound like this will be an emergency for the world. y2k was not an emergency for the world. It was just kind of a big deal, and in the newspapers. But there’s always going to be something like this. It’s only in the case of the internet that we do it to ourselves.
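To make the size pressure concrete, here is a sketch comparing roughly-published signature sizes for classical and post-quantum algorithms against the 1232-byte EDNS buffer size widely recommended since DNS Flag Day 2020. The byte counts are approximate, and a real response has to fit the whole message — names, records, and possibly several signatures — not just one signature, so the margin is even tighter than this shows:

```python
# Approximate signature sizes in bytes, for comparison only:
SIG_SIZES = {
    "ECDSA-P256":   64,   # classical DNSSEC today
    "RSA-2048":    256,   # classical DNSSEC today
    "Falcon-512":  666,   # post-quantum, roughly
    "ML-DSA-44":  2420,   # post-quantum (Dilithium2), roughly
}

UDP_BUDGET = 1232  # EDNS buffer size recommended since DNS Flag Day 2020

for alg, size in SIG_SIZES.items():
    fits = "fits UDP" if size < UDP_BUDGET else "forces TCP (or fragmentation)"
    print(f"{alg:11s} {size:5d} B -> {fits}")
```

The lattice-based signatures alone overflow the safe UDP budget — which is the tipping point toward all-TCP DNS that Paul describes.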

Do we need a new internet? Because - I have to do it, Jerod… Silicon Valley - have you watched that TV show, Paul, by any chance?

Thank you, Paul. Thanks.

No? Never? Are you not a fan? Would you never? Do you just not watch TV? Is there a reason why you haven’t watched Silicon Valley as a TV show?

I don’t watch a lot of TV.

[00:15:48.23] Okay, cool. Well, let me just give you a picture. It very much satirizes a lot of the last 10 years of Silicon Valley. That’s why it’s called Silicon Valley. And in this TV show, Richard Hendricks creates an algorithm that compresses something so well, it does like a 4.8 on the Weissman score. Like, compression we’ve never seen before. Maybe five something, I don’t know. It was massive. It was a breakthrough. And so he had this idea to create a new internet. And so the whole show was essentially about stumbling into this compression, this algorithm, creating this platform, and then the platform really was eventually a new internet. Do you think we need a new internet? Is that what TCP will offer? But TCP requires a handshake, right? So it requires more than UDP, which is just sending packets blindly, in a way, right? How do we create a new internet if you think we need a new one?

Well, the cheeky answer to your question…

Give me the cheeky version. Yeah, I like this.

Because I don’t know what technology we will be using 50 years from now to do global commerce, and email, and messaging, and everything else… But I do know that it will be called the internet.

We’re not gonna rename it.

So some shorter-term examples of how that might work you can see in the web - now, the web community, they’re large, they’re new, they’re young, passionate… But in many cases, they don’t understand that the internet is still here, and that we still have an internet, but we have a web that’s kind of built on top of it. And so they look at some of the limitations here and they kind of wave their hands at them and say “We don’t need that anymore. It’s clearly outmoded.” And this audience is certainly familiar with HTTP, and therefore HTTPS; many in the audience will know that something called HTTP/2 was created 10 years ago to deal with some of the limitations of HTTP/1. In particular, they wanted to be able to have multiple objects in flight, and have them be interleaved with each other, rather than a whole bunch of connections in parallel, which creates other resource exhaustion problems. And this was cool. This was very cool stuff, but it still lived on top of TCP. And the trouble that you’ll have living on top of TCP is that it’s reliable; it’s a reliable stream protocol. Every octet that is transmitted will be received in the order that it was transmitted. So if there’s any congestion-related loss or any other kind of loss, then that part that was lost will have to be retransmitted. And until it is retransmitted, you won’t as a receiver be able to receive anything that was sent after the part that has to be retransmitted. It will just sort of sit there, in various queues throughout the pipeline, until everything can be delivered to you in sequence, without loss. And that means if you’ve got HTTP/2 and you are interleaving multiple objects, and you lose one TCP segment of one of those objects, the transmission of all the interleaved objects will also be delayed, while the stream reassembles itself and gets back into synchronization.

So now we have an alternative to that, that was not understood as a problem when HTTP/2 was being developed. Now it’s broadly understood. And so now they’re developing HTTP/3, also called QUIC. And what is amazing about this is that it lives on UDP.

That means it still interleaves, but there’s no head-of-line blocking. If you lose some packet of some object - yeah, that’ll have to be retransmitted, but other things can still go on while that’s happening.
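The head-of-line-blocking difference can be shown with a toy simulation: five segments belonging to two interleaved streams, where segment 2 is lost and only retransmitted at t=10. Under TCP’s total ordering, everything behind the loss waits; under QUIC-style per-stream ordering, only the losing stream does. The model is deliberately minimal — no ACKs, windows, or timers:

```python
# Each tuple: (sequence number, stream id, arrival time). Segment 2 of
# stream "A" was lost and its retransmission only arrives at t=10.
segments = [(0, "A", 0), (1, "B", 1), (2, "A", 10), (3, "B", 3), (4, "A", 4)]

def delivery_times(per_stream_ordering: bool):
    """When per_stream_ordering is False (TCP), a segment is delivered
    only once every earlier sequence number has been delivered; when
    True (QUIC-style), it only waits on earlier segments of its OWN stream."""
    out = {}
    for seq, stream, t in segments:  # segments are listed in seq order
        earlier = [s for s in segments
                   if s[0] < seq and ((not per_stream_ordering) or s[1] == stream)]
        out[seq] = max([t] + [out[s[0]] for s in earlier])
    return out

print(delivery_times(False))  # TCP:  {0: 0, 1: 1, 2: 10, 3: 10, 4: 10}
print(delivery_times(True))   # QUIC: {0: 0, 1: 1, 2: 10, 3: 3, 4: 10}
```

One lost segment stalls stream “B” to t=10 under TCP, while the per-stream model delivers B’s second segment at t=3 as if nothing happened.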

[00:19:48.16] Now, if you wanted to [unintelligible 00:19:48.20] squint just right, so that you could have your wish about how to interpret this, you could say that each of those is a reinvention of the internet. Because it is a fundamental change to the thing that we do mostly, which is to look at pictures of cats. And inevitably, over the last month or so there’s now a proposal that says “Yeah, QUIC is not the be all and end all, as it turns out, because a lot of networks block UDP.” Hey, welcome to my world… So they’ve proposed making QUIC live inside of TCP… Kind of missing the original point of why HTTP/3 –

I was gonna say, that was the big idea, was not to do that, right?

You know, if you have – and I was a 20-something once upon a time, and I came in and I wanted to reinvent everything, and I didn’t necessarily know the history of why things were the way they were, and what would be the hard part of reinventing it… And so I expect that every new generation of 20-somethings for all of our humanity’s future will always do that. And so that’s what they’re doing. They’re learning, bit by bit, piece by piece, why the problems that they waved their hands at were more difficult than they realized. And that we weren’t just idiots and we didn’t sort of choose what we chose without knowing what the alternatives were.

Right.

That’s okay. I’ve raised a generation of teenagers, so I’m used to this.

[laughs] Yeah, unfortunately it’s kind of like when you have a young child, and the stovetop is hot. And you tell them and it’s hot, and it’s gonna burn them… And some kids will just be like “Okay, therefore I’m not going to touch it.” Very few, though. Most of us have to actually touch it for ourselves and learn the hard way, that what that person said was correct, because they have to experience it. And so we do reinvent things, and… I don’t know any other way, I guess. I guess maybe reading the history books… Maybe if we had better history. I think these conversations are hopefully helpful to those who are paying attention. But if you are going to reinvent DNS with all your experience, and hard-earned wisdom, maybe just give like a 30-second, for those who haven’t been exposed… So just a 30-second of how DNS works, broadly speaking. And then you can dig into some of the details of what you would do differently if you could actually just start fresh today, and get global adoption; like, everyone’s going to do it, so you don’t have to worry about that part, which is actually, like you described, the impossible part, right?

Yeah. It turns out rebuilding the airplane in flight is hard; there are only certain changes you can make. But if I didn’t have that constraint, what would I do? Well, first, as DNS is a request-response protocol, it is eventually consistent. So the authority data, which is edited and published by whoever owns a certain zone - like, I own redbarn.org, for example. So everything that ends in .redbarn.org comes from me as an editor and publisher. And some data changes; if I renumber something, I can publish the new address in my zone file, in the authority data… But not everybody is going to notice that right away. If there are copies of the old address out there, they will have to time out. Because without a cache, if every question from every end user always had to go all the way to the authority to get answered, we could not have scaled nine orders of magnitude during the lifetime of the DNS; that cache is absolutely crucial. But there is one source of truth, and if there’s stale data out there, it will eventually work its way out. And that’s kind of generally the way the system works. Now, the specific things that are causing us problems are mostly in the representation. In other words, the binary format of these messages turns out not to be as extensible as we would like it to be. And so whenever we’re talking about some way in which DNS needs to serve a new purpose, there’s always this question: “Well, where are we going to put that? How are we going to express that? Will it fit? Will it be ambiguous, or can we find a way that old clients will not be confused by new answers?” and so forth. So that’s where we spend probably 80% of our time as the DNS technical community.
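The eventual-consistency behavior he describes — one source of truth at the authority, plus caches whose stale copies age out on a TTL — can be sketched in a few lines. The zone contents, addresses, and TTL below are hypothetical:

```python
import time

class TTLCache:
    """Minimal sketch of a resolver cache: answers are served locally
    until their TTL expires, then refetched from the authority --
    hence eventual, not immediate, consistency."""
    def __init__(self, authority_lookup):
        self.lookup = authority_lookup   # goes to the source of truth
        self.store = {}                  # name -> (answer, expiry time)

    def resolve(self, name, now=None):
        now = time.monotonic() if now is None else now
        if name in self.store and self.store[name][1] > now:
            return self.store[name][0]            # possibly stale, but fast
        answer, ttl = self.lookup(name)           # ask the authority
        self.store[name] = (answer, now + ttl)
        return answer

# Hypothetical zone data: the authority renumbers mid-run.
zone = {"www.redbarn.org": ("10.0.0.1", 60)}
cache = TTLCache(lambda name: zone[name])
print(cache.resolve("www.redbarn.org", now=0))   # 10.0.0.1 (from authority)
zone["www.redbarn.org"] = ("10.0.0.2", 60)       # new address is published
print(cache.resolve("www.redbarn.org", now=30))  # 10.0.0.1 -- stale copy still live
print(cache.resolve("www.redbarn.org", now=61))  # 10.0.0.2 -- TTL expired, refetched
```

The stale answer at t=30 is the trade-off that bought DNS its nine orders of magnitude of scaling.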

[00:24:17.10] And to understand one trivial example, look at internationalized domain names. We started out with just a bunch of United States of America contractors connected to this network, and they all spoke English. And we used ASCII - well, I guess some of the IBM people were using EBCDIC, so they had converters; it was a well-understood problem. And so it all made sense. We weren’t going to have company names that had umlauts, or other special characters, and they certainly weren’t going to be in Kanji. It was all going to be just US ASCII. But to be a global internet for all of humanity means you have to outgrow that. It’s just not reasonable to ask a bunch of people who have had no other reason to learn English to learn English, so that they can type in domain names and represent their own family names [unintelligible 00:25:10.11] school names in English. That is seen as maybe not an oversight, but certainly something that had to be corrected. Well, it turns out that that ASCII assumption was built way, way down deep into DNS. And the way it mostly appears is case insensitivity. An uppercase A and a lowercase a mean the same thing. It’s carried differently, so you can see what it really is in the authority data. For example, 3Com.com - when they created their domain name, that C really had to be an uppercase C, because that was their [unintelligible 00:25:48.24] the number three, uppercase C, little o, little m. So that all works. But it is the same name as 3com without any capital letters, or 3COM with all capital letters, or whatever. And that business [unintelligible 00:26:06.17] where 26 of your symbols are semantically equal to 26 others of your available symbols meant that there was just no way to add something like any of the international character sets. You just couldn’t do it. So we had to do this whole – it took like a decade to do this whole thing, to do name prep. We had to take that data and essentially convert it into base-64.
Then the far end was seeing base-64 names when it was expecting ASCII - which, technically, it was. But they would then often just display it, so you could see all this base-64 gibberish in the browser bar, because every client in the world had to understand that if it’s an internationalized name, it has to look like this, and here’s how you want to pack it. So something like that has come to DNS every five years during my decades with it.
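The name-prep effort described here was eventually standardized as IDNA; its ASCII-compatible encoding is Punycode (the “xn--” prefix) rather than literal base-64, but the effect is just as described — gibberish that old, ASCII-only, case-insensitive DNS software can carry unchanged. Python’s built-in idna codec shows the round trip:

```python
# Python's built-in "idna" codec implements the RFC 3490-era
# name prep + ASCII-compatible encoding, label by label.
wire = "bücher.example".encode("idna")
print(wire)                 # b'xn--bcher-kva.example'
print(wire.decode("idna"))  # bücher.example
```

Early browsers showed that `xn--` form in the address bar until their display logic caught up, which is exactly the gibberish Paul mentions.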

So to answer your question, I would start with an extensible encoding. I don’t know if I’d use JSON, but it would look like JSON, it would be like JSON, or maybe a binary version of JSON. It would be something that was not designed to be fast [unintelligible 00:27:25.02] to encode and decode, and a little more flexible in terms of future representations, keeping down the number of assumptions and constraints that are designed into the encoding itself. And if you did just that, that would probably take half the pressure off of the future of DNS.
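A sketch of what such an extensible encoding might look like: a self-describing resource record in JSON, where a field an old client doesn’t recognize is simply ignored rather than breaking the parse. Every field name here is hypothetical, for illustration only — it is not a proposal from the episode:

```python
import json

# A hypothetical, self-describing resource record. "sig_alg" stands in
# for a later addition that old readers never anticipated.
record = {
    "name": "www.redbarn.org",
    "type": "A",
    "ttl": 3600,
    "data": "10.0.0.1",
    "sig_alg": "ml-dsa-44",   # newer field; old clients just skip it
}
wire = json.dumps(record).encode()

# An "old" client that only knows four fields still parses cleanly:
decoded = json.loads(wire)
known_fields = {"name", "type", "ttl", "data"}
print({k: v for k, v in decoded.items() if k in known_fields})
```

Contrast that with DNS’s fixed binary layout, where every new field needs a place carved out of the existing wire format.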

[00:27:51.24] Who pays for this? You said it took a decade, this work you did… Don’t expose your employers necessarily, but somebody’s paying for the progress of the internet. Who pays for this progress?

So a lot of companies see it in their best interests. They need a better internet to provide more value to their customers, and so they send people to the Internet Engineering Task Force meetings, where these protocols are debated, and [unintelligible 00:28:15.09] They will often fund an employee to work on what is now called open source software, although when [unintelligible 00:28:24.21] that name hadn’t been created yet. But if you’re in the tech business, you have to be an innovator here. You have to, well, have a seat at the table, and you do that by contributing. There’s no way you could get an internet without openness. And you can’t have openness without a big tent.

Aside from that, one of the nonprofit companies that I started and ran for a while was called the Internet Systems Consortium. And they still exist. I left them in 2013, but I’m on very good terms with them. And they’re still very much in the thick of all of this. And we used to simply accept contracts, either from companies or from government agencies; [unintelligible 00:29:13.14] the Advanced Research Projects Agency of the DOD paid us to add DNS security to the BIND software. And it’s a normal-looking software development contract: we want these features, we want them by these dates, and this is a milestone of when we will pay you how much. And after we signed that, everybody knew what to expect. And that was often – that’s the way a lot of this stuff got done back in the ‘90s and the 2000s. Because there were things that had to be done where there wasn’t a critical mass of commercial interests who all saw “Yes, that’s vital.” Or if they did, they said “It’s vital, but I don’t want to work on it in conjunction with my competitors.” But they would all be willing to go to, let’s say, the W3C, or the Apache Foundation, or us at the Internet Systems Consortium and say “Look, you’re Switzerland in this situation. As long as you do what the IETF has decided will be done, then we’re totally ready to pay for it.” So this is all the norm now, but it was pretty controversial in 1990. But it’s the norm now.

Break: [00:30:36.22]

Is it infeasible to design and spec and provide a reference implementation of a DNS 2, and then just let people opt into it, similar to the way you’d upgrade from h1 to h2? …and do the typical campaign of – I’m thinking about Let’s Encrypt. They had a big effort to encrypt all internet traffic, and it took them a few years… They’re definitely not at 100%, but they’ve hit critical mass I think at this point, they’ve had lots of success… It seems like with the right core entities involved, and a good spec and implementation, that’s the kind of thing that you could get done, don’t you think?

I would like to say so. [unintelligible 00:35:26.26] ever taken on something that was going to be clearly possible… [unintelligible 00:35:31.23] And at the time I left nonprofit service and went back into the commercial world, in 2013, I was trying to do exactly what you say. I was trying to figure out “Well, if there were a replacement to the DNS protocol that you could opportunistically adopt…” So you as an information publisher, you as - let’s say - a smartphone maker, creating a lot of DNS requests, you’re going to fetch a lot of information… Just try to enumerate all the different entity types in the entire DNS economy, and say “Well, how could they speak both for some transition period”, which by this time is 30 years, “but anybody who was an adopter would immediately get some improvement [unintelligible 00:36:21.16] performance, less resource use”, whatever it is that would be the incentive to adopt. And in a way, that’s how HTTP/2 and now QUIC (HTTP/3) are doing it. And so I was really tinkering hard with this, at the time that I just kind of said “That’s it. I’ve done 18 years of nonprofit service. I need to go think about my retirement”, and I went back into the commercial world.

But I’ll tell you, the biggest impediment to this is that everybody wants to be the creator of that. So you’ve heard of crypto, and Bitcoin, and digital currencies… All of them followed each other along this model of “We don’t like the fiat currencies. We want to be able to have money via a matter of private contracts, without governments being able to either inject money into the economy, or control interest rates”, or whatever. And the problem is nobody [unintelligible 00:37:21.09] came in later and said “It’s a really good idea. I’m going to adopt [unintelligible 00:37:26.02] most popular.” What they did is they came in and they said “Whoever owns this is going to be a trillionaire”, and they created [unintelligible 00:37:32.20] Why would you use an existing one instead of submitting your own?

So some of those very same people have come along and said “Yeah, we could use the blockchain to encode domain names, so that takedown was impossible. So that no matter whose trademark you were infringing, or whose intellectual property you were infringing, there’d be nowhere to target a lawsuit, as you [unintelligible 00:38:00.03] There’d simply be nowhere to go to request takedown. And wouldn’t that be better than having government?” I’m not sure it would be, but I’m pretty sure that that type of anarchy might do more harm than good. [unintelligible 00:38:18.25] opportunity.

But nobody said “Hey, let’s all get together, let’s work together. Let’s create something like that, so that we’ll have critical mass, and then we’ll all be able to live in a world that has those features, those capabilities.” No. Everybody said “Whoever cracks this nut is going to be a trillionaire”, and so they all launched in parallel, and they all came to a stupid dead end.

Yeah. I mean, I’m following that logic. However, I think HTTP/2 and 3 is a better corollary to DNS, isn’t it? I mean, with cryptos there is the cash incentive of being really rich, but it seems like with DNS, the incentive is if your company makes money off the internet, the internet works better. People get their names resolved faster, more securely, etc, etc. with less load, and everybody wins. Like, why do you want to be the inventor of that necessarily? Couldn’t we all just play nice…?

“Why can’t we all just get along?” That’s a very compelling vision that you’re painting there…

[laughs]

[unintelligible 00:39:26.20] Jerod.

But the fact is, depending on when you were born, you see certain parts of the current system as necessary or terrible, and getting everybody to just agree on sort of what use cases should it support, and what should it look like. If you get into a group of web people, they’re going to say “Yeah, the web can just do this. We’re just going to do this over HTTP.” Or “We’re going to add extra headers, meta headers to the HTTP, sort of above the body, to say “And this is where all the DNS security information is.” There are several existing standards. That stuff doesn’t work well if you’re not otherwise in need of the web. If it’s just you trying to access the file server at your office, you might not have been planning on using web protocols. But the web people think that you should. And the people who don’t agree are going to say “Well, no.” And so I don’t imagine that we’re going to have another unified vision.

And let’s just say hypothetically DNS 2 is a possibility. It doesn’t matter who’s writing it, who’s specing it [unintelligible 00:40:38.09] in your retirement, or whatever you’re trying to do as to not be involved in the long-term of that, if you want to. But at what point does this not become just software, but hardware, like NICs? How much of the hardware layer has to change to support the new software layer? Is it just simply all of Cisco routers, all of UniFi, Ubiquiti routers…? How does that trickle down into hardware manufacturing, and just the hard stuff, really? Hardware is the hard thing to change; that’s why it’s called hardware, not software. How does that work whenever you want to do something like this, that you have hardware that also has to follow and support?

I think that that problem would be avoided. Earlier I talked about how in the early days of the web we didn’t have crypto hardware fast enough to support HTTPS, and so we just sort of didn’t do it… Until you needed it, and then you bought some hardware to assist you with that. And nowadays, I’m thinking about a number of different NIC vendors who make PCIe x16 cards that you can stick into your server, that will do a lot of the offloading for you: checksum computation, segmentation, reassembly, and so forth.

[00:42:05.27] Now, if you need to operate at 100 gigabits a second, or coming now soon 400 gigabits per second, if your CPU has got other things to do than shoulder every octet through the bus, then you can get a very smart NIC, and driver support for it, and - you know, as we all know, Moore’s Law will continue giving us its annual gift, and the time will come when you don’t need that hardware anymore, and you’re just doing that with your CPU, because you have so many cores, so much cache, and so forth.

So I think any protocol, in order to succeed at all, would have to be the kind of thing – it’d be very difficult in the short term. But in the long run, it’ll just be the way everything works. And so I don’t see that hardware support is going to be called for in any of this. What will be called for is some hard decisions. I mentioned earlier that fragmentation kind of doesn’t work, and it got worse on IPv6. It worked a little bit in IPv4. It doesn’t work at all now. And packet size [unintelligible 00:43:14.10] on our WiFi is Ethernet cells. [unintelligible 00:43:18.20] And that is 1500 octets. And I knew one of the people whose name was on the Ethernet patent. He was my mentor at a minicomputer company back in the late ‘80s. His name is David Boggs. And I had an opportunity to listen to him talk about the old days, about being at Xerox and inventing Ethernet, and [unintelligible 00:43:45.20] And so I’m in a position to know secondhand that the intent was that the packet size would continue to grow, so that as we got faster at networking, we would also get larger packets. And he was a genius in a lot of ways, in this as in everything, but his idea about this was that every time the clock rate gets you 10x - in other words, you go from 10 megabit to 100, to 1000 - a gigabit - or from a gigabit to 10 gigs, and so forth - every time you get 10x of clock, you should probably give about a third of that to packet size, so that the number of packets in a given unit of time doesn’t also go 10x. Your packet count and your packet size each grow by about the square root of 10 [unintelligible 00:44:39.00] And had we been doing that all this time, a lot of things would be simple that are currently very hard. We certainly wouldn’t hear that fragmentation didn’t work if the packets we could send had gotten larger over time. But they didn’t. And the reason they didn’t is that the Ethernet market relies on backward compatibility.
When somebody adds 10 gigabit networking to their office network, they don’t make everybody switch at once. They just say “New ports will be 10 gigs, but the old ones will still be one gig, and we’re going to run a network bridge, a layer two bridge to connect the old one to the new one.” And that won’t work if the packet sizes on the new ones are so big they can’t be bridged backward to the one that made the market exist in the first place.
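The scaling rule Boggs had in mind is easy to sketch numerically. This is a hedged back-of-the-envelope calculation, not anything from an actual standard - the only real numbers here are the 1500-octet packet at 10 Mbit/s Ethernet; everything else is what the rule *would* have produced:

```python
import math

BASE_RATE = 10e6   # original 10 Mbit/s Ethernet
BASE_MTU = 1500    # octets

def boggs_mtu(rate_bps):
    """Hypothetical packet size if each 10x of clock gave sqrt(10) (~3.16x)
    to packet size, so packet count per unit time also only grows sqrt(10)."""
    steps = math.log10(rate_bps / BASE_RATE)
    return round(BASE_MTU * 10 ** (steps / 2))

for rate in (10e6, 100e6, 1e9, 10e9, 100e9, 400e9):
    print(f"{rate/1e6:>8.0f} Mbit/s -> {boggs_mtu(rate):>7} octet packets")
```

Under this rule a 100 Gbit link would carry 150,000-octet packets, and fragmentation for a few kilobytes of DNS data would simply never come up.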

So Ethernet is effectively trapped at 1500 octets for all time to come. Yes, a lot of us have turned on what we call jumbograms, so 9100 bytes, which turns out to be a very convenient size [unintelligible 00:45:44.00] So if you’re running jumbograms, your NFS is going to be faster, your [unintelligible 00:45:53.15] is going to be faster… Everything you do is gonna be faster. You just can’t use that when talking to people outside your own house, or your own campus. Because there’s no way to discover whether your ISP can carry packets that big, or whether the far end will.

[00:46:11.03] So because we don’t have that, because we didn’t do what Boggs evidently thought was the intelligent, obvious thing that everybody should do - give one third of your [unintelligible 00:46:20.29] to packet size - anything we do with DNS in the future is going to have to take that into account. And that in turn means we’ll be making the assumption “Well, I guess we could probably send about 1,400 bytes, plus room for all the headers and stuff that gets added”, and now we’ve got to find a way to connect several adjacent packets together, so that we can do essentially application-level fragmentation. Or else we’ve got to deal with the handshake overhead. So I predict, knowing the IETF culture as I do, if we start now, then within no more than four years we will come to an agreement on that single issue.

No less than four years, huh…?

Well, I mean, that is a tough one. I mean, unintended consequences, right? It’s very convenient to be able to incrementally adopt, or incrementally upgrade a network. I mean, I understand why it got stuck there. Because some networks are so large, it’s just financially infeasible to ever upgrade if you do it all at once. Like, it has to be done incrementally. And so what are you gonna do?

Forklift upgrades are very hard to argue for.

Yeah. It sucks that we’re stuck, though. We’re just stuck right there, for –

And it’s all because of the wiring, basically. Like, how hard is it to really change Ethernet in a building?

Well, not just the wiring, but all the devices in between. They have to all support the larger frames.

Well, even the wiring alone - just going from Cat5 or its predecessors to Cat6, to carry more load even; like, you can transmit 10 gigabit over Cat5, but it’s not gonna be reliable over any length. You’ll have interference, you’ll have packet loss, stuff like that.

Well, so I think you’re confusing two issues. If there is a Cat5 link somewhere in your network, connecting one switch to another, and you are using Cat7 at every endpoint, and you have endpoints trying to communicate with each other and they all see a 10-gig network, but there is this Cat5 in the middle somewhere, that Cat5 is probably running at one gig, and so it will usually be a bottleneck. And that’s where the wiring will hurt you. But it turns out that’s a solvable problem, because you can simply map out your network, [unintelligible 00:48:38.26] the core of your network upgraded first before you start adding endpoints [unintelligible 00:48:43.26] the packets will look the same. And it’s just a matter of the bridge. Yeah, I understand you’ve got slightly different encoding on a Cat5 cable versus Cat7 using all four pairs, and things like that. But that doesn’t matter. That’s the active bit of electronics that will just be built to do the right thing. The problem that we’re having is there is no signaling by which an endpoint can say “I’d like to send a request to a file server, and I would like to let that file server know that it could send me a 64k response”, which by the way would not be all that large. I mean, look at how much faster 400 gig is than 10 meg was. A 64k packet size is not absurd. There’s just no way to tell it “By the way, that’s what I’d like you to do”, because you don’t know. You have no idea if the network between you and a file server, or between you and somebody else on the internet, could tolerate a larger than 1500 or 1400 [unintelligible 00:49:50.28] octet packet. And so that has to be envisioned.

[00:49:57.26] We will need new ICMP message types on the internet, we’ll probably need various new Ethernet-level packets, similar to what you do with bridge discovery… We’re going to need interoperability testing, we’re going to need to make sure it falls back reliably, so [unintelligible 00:50:15.08] and the appetite for that doesn’t exist. There isn’t a consortium of companies who collectively believes that they will be able to deliver more value if they embark on something that will be as big an undertaking as the Apollo mission. It’ll take just as long.

Well, let’s talk about the good side of DNS, because this has very much been about its limitations, which is where you operate and where there’s lots of future-thinking things… But people do some pretty cool stuff over DNS; they use it, they abuse it… We had Haroon Meer on the show from Thinkst, who makes – what do they call them, honey pots? It’s a security service; they call them Canaries, and they install them into your network, and they’re honey pots. And they phone home, and they let you know if people are trying to – I’m doing a terrible sales pitch for Haroon. Sorry, Haroon. If your system has been compromised, basically. But the point is that their entire fleet of Canaries - all communication that it does is over DNS. And they do that because it’s convenient, and easy not to have to deal with NAT traversal, and other such things; you can just DNS your way out, and DNS your way back in… And that’s surely not what it was designed for, but it’s just a cool use of the protocol. And I’m wondering if there’s other things people do… I’m sure you’ve been exposed to all kinds of stuff that people are doing, using and abusing DNS, Paul. Your thoughts on that topic?

I have. And because of my sort of affinity for the software and the protocol back in the day, I was befriended by the actual [unintelligible 00:51:58.08] and I dare say that beer has been drunk over the topic of “What did you intend?” And so I can tell you that the scope they had at [unintelligible 00:52:17.15] anywhere, wherever they were, was “Yeah, we need something to replace the old hosts.txt file. The internet’s gonna be big some day.” And the scope was really just that - “No, we need to be able to do dynamically what we’re currently doing by having a file that everybody pulls down by FTP once a week.” And he made sure that his system would do that. Otherwise, it would not have fulfilled whatever development contract they had. However, he significantly overshot. He had a vision for a much more generalized system, one that carried many more data types than just “Here’s the IP address of the server.” So he made it very extensible. And it is because he overshot the mark that we are using the DNS in so many cool ways today.

And so a couple of examples… One is my own work. I created the first distributed reputation system, and the first anti-spam company. [unintelligible 00:53:20.13] something called the RBL, that was us. There will never be email sent or received in the future [unintelligible 00:53:30.28] And no, I didn’t patent it. I didn’t think of that. But…

“I didn’t think of that…” [laughs]

…what we did was my co-architect’s idea. And he wasn’t thinking about how he was going to change the world, he just wanted to get this out of his routers and into his servers, so that [unintelligible 00:53:52.11] would be better. And that turned out to be a really attractive model for a lot of people. It had to be that as an email receiver, a server, an SMTP server heard a connection from somewhere, and that connection, should it be accepted, would then allow the sender to initiate various email transactions. “Here’s where it’s from, here’s where it’s going, here’s the body, etc.”

[00:54:21.21] So it just had to be that the SMTP receiver would make a DNS lookup where the name that it looked up was the IP address of the sending server, written backward from the usual kind of way. And then that was under my domain, rbl.maps.vix.com. MAPS was the name of this company; it was SPAM spelled backward, but it was also the Mail Abuse Prevention System. We were very clever.

[laughs]

And we got rapid adoption. We got way beyond how many queries per second we could tolerate on our current infrastructure in a matter of months, because there was nothing else like this, and commercialization and privatization meant that all of a sudden the internet was going to include everybody, not just people with a government contract. So right place, right time, right technology… But this is not what the DNS was made to do. But it did what you’ve said - it traversed [unintelligible 00:55:18.23] it was completely transparent; you didn’t have to do anything at the far end in order to be able to [unintelligible 00:55:27.03] these lookups. So by using DNS to convey reputation data, we could just say “Hey, the address that you asked me about - that has sent a lot of spam lately. We have it in hand. We have proof of this.” And that meant you could just reject it. And spammers took a while trying to figure out how they were going to work around this. And they did, [unintelligible 00:55:47.16] But that was maybe the first wide-scale use of DNS for something that had nothing to do with [unintelligible 00:55:56.10]
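The RBL-style lookup described above reverses the sender’s IPv4 octets and appends the reputation zone. Here is a minimal sketch of just the name construction - the rbl.maps.vix.com zone is the historical one from the conversation and is long gone, and 192.0.2.99 is a documentation address, so no live query is attempted:

```python
def rbl_query_name(ip: str, zone: str = "rbl.maps.vix.com") -> str:
    """Build a DNSBL query name: the sending server's IPv4 octets reversed
    (the same convention in-addr.arpa uses), under the reputation zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

# An SMTP receiver checking the connecting address 192.0.2.99 would look up
# an A record for this name; any answer means "listed", so reject the mail.
print(rbl_query_name("192.0.2.99"))  # 99.2.0.192.rbl.maps.vix.com
```

Modern DNS blocklists still use this exact pattern, which is why every caching resolver on the path can share the answer for free.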

The second one that I saw a couple of years later was license key lookups. And I think this was Symantec; I don’t know for sure who it was. But what they wanted to do was be able to give away antivirus software with every PC that was sold, and then have it be that you get 60 days for free, and then after that you’d have to pay money… And so they would have it be that every one of these PCs would create a random-looking license key, and when you paid, you were essentially paying to allow that license key to operate. And what they would do is use that license key as part of their DNS lookups for their antivirus signatures. And it worked perfectly. And it let them build a global antivirus empire without having to sort of have every PC reach out to the mothership in the way that we all see today.

The third way - and this is maybe the best - was Dan Kaminsky. Now, Dan has since passed, and I miss him a lot, but he used to have a tradition where he would go to DEFCON in Las Vegas, and they would just put him on the schedule. He didn’t even have to file a proposal. They’d put him on there, and it would be something involving DNS. And then we’d all go there and see what it was, without knowing any more than that.

[00:57:26.12] One year it was DNS tunneling, where he was using the query direction as a way to transmit data, and the response direction as a way to receive it. And so if you had a DNS tunnel endpoint on your laptop - which is how he demoed this - and it was talking to some DNS tunnel gateway somewhere that would turn your DNS tunnel data back into normal packets, then you could, for example, use a hotel room WiFi without paying for it. Because they had to allow DNS to work; otherwise, they couldn’t make their paywall work. Same thing for coffee shops, and everywhere else. So he just used the fact that DNS was open by default, and this demo was Skype. And he held a full motion video conversation on the big screen with somebody somewhere using DNS tunneling. And all of us were just mind-blown. We didn’t think that it would ever be fast enough to do that. But it was just huge. So yes, DNS turns out to have a lot of room to grow.
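The query-direction half of that tunneling trick can be sketched in a few lines: arbitrary bytes are encoded into DNS labels under a zone whose authoritative server you control (here the hypothetical tunnel.example.com). This illustrates only the encoding; a real tunnel also needs the decoding gateway and a response path, typically stuffed into TXT answers:

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 octets

def tunnel_query_name(payload: bytes, domain: str = "tunnel.example.com") -> str:
    """Encode arbitrary bytes as base32 DNS labels under a zone you control.
    Every resolver on the path will dutifully forward the 'question'."""
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join(labels) + "." + domain

name = tunnel_query_name(b"hello, DNS tunnel")
print(name)  # the gateway recovers the payload by reversing the encoding
```

Because captive portals must let DNS resolve to show their paywall at all, these queries ride through for free - exactly the loophole the demo exploited.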

Resilient. I would describe it as being resilient, right? Because in a lot of scenarios you can do compelling things.

Or flexible.

Flexible. Sure, okay.

You know, most humans are pretty lazy… We will invent what we have to invent, and we will use what we can, especially programmers. Programmers are the laziest [unintelligible 00:58:55.19] And the fact that DNS does almost anything so well means that it’s almost top of mind for a system developer at this point. There’s even a T-shirt somewhere that says “Oh, hell. Forget all that. Let’s just put it in DNS.” Because that’s what we do.

[laughs]

Because you have this global, coherent, sort of eventually consistent, reliable, semi-reliable database, that’ll turn out to be just good enough for almost anything you want to do.

Break: [00:59:30.04]

There’s also a refrain that you’ve probably heard at some point, Paul: “It’s always DNS.” Have you heard that one? It’s something along those lines.

I have.

This is the culprit behind many lost hours of debugging, only to find out it was DNS the entire time. And I wonder what your thoughts are on that cultural epithet – not epithet, but this idiom that we have, and why it’s the case that it’s gotten that reputation.

Well, I think the reputation has been earned. The statement is not inaccurate, it’s merely misleading.

Hm… How so?

So any company who comes into the internet and says “Yeah, we want to deliver value” - they’ll look around for opportunities. Well, what’s not working well today? Sometimes their solution will just be “Let’s relax a constraint”, and then they will be the company you go to. And a lot of people have come in with online services, for example, that used to be enterprise services. For example, if we think about Dropbox, or any of the file service companies - we all used to just pile on hard drives, and plug them into a lot of servers, and so forth. But it turns out, for a lot of what you need storage for, you don’t care where it is, and you don’t mind that you have to go across a wide area network to get to it, and you’re happy that they’re backing it up instead of you, and so forth. So there’s a lot of value to be created that takes the form of [unintelligible 01:04:45.00] or just simple disruption. And that’s not a bad thing. In fact, had we gone the other way, had TCP/IP not won the war, had we been on the OSI protocol suites as developed by the phone companies, none of that would be possible. We’d only be able to do the things that they wanted us to do, whereas the internet is designed to kind of let you try almost anything. It’s so-called permissionless innovation, as we’ve been [unintelligible 01:05:12.24]

So one of the things that got done with DNS was done by OpenDNS. And that was to say “You know, people hate their ISP DNS service”, or “They hate something about DNS, and so we’re going to create a global anycast DNS service, OpenDNS, so that anybody in the world can instantly stop using their own enterprise DNS, or their ISP DNS, or any other DNS - just use us, and we will be more reliable. We won’t data-mine their queries to figure out where they’re going, and send them ads…” And they actually did that for a while. “We won’t block things - we’re not a nanny state, we’re not gonna say ‘No, you can’t reach this, because it might be harmful in some way’”, although there’s always somebody out there being harmed in a lawsuit [unintelligible 01:06:10.21] And there are some costs there. But they just wanted to centralize something that used to be distributed. And it worked really well. But you know, they were growing a for-profit company, and they needed to figure out “Okay, we’re here, we have a lot of users. How do we monetize this thing?” And so they did end up – they did this strange thing: they intercepted queries for www.google.com, and instead of you getting back the real address, which would be the Google web server, they gave back their address, of their website. And it did not falsely indicate that it was Google. It said “This is the OpenDNS search engine.” And then you would type something into the search bar, just the way we would anyway, and they didn’t have a search engine; they couldn’t answer it. All they would do is then forward that question on to Google, and then [unintelligible 01:07:08.03] the response back toward you. But it gave them an opportunity to associate the keywords that denoted your interests with your IP address. And then they sold that data to advertisers, so that when you then later reached some web server, that web server could ask the question “Hey, this IP address. Tell me what they’re interested in.”

Now, you might be able to imagine that Google wasn’t super-happy about this, and they even went so far as to say “Hey, stop.” But the story is that people at OpenDNS said “You know, there’s no law that protects you in this way. We’re not breaking any law [unintelligible 01:07:51.25] getting back the wrong answer. And we’re certainly not costing Google any money, because you’re receiving every bit of query data that you would otherwise have received. So Google is still going to be able to make its old business plan work.”

[01:08:05.04] Somebody at Google probably said “Yeah, but we didn’t want you to get free access to the thing that we monetize. So we don’t want you to be an intermediary here.” But OpenDNS was resolute; they were not going to stop. And that, in my opinion, is why we have 8.8.8.8 today. The only way Google could prevent OpenDNS from continuing to intermediate itself between Google and its search customers was for Google to build a bigger, more popular system. Once they did it, it was inevitable that we’d get 9.9, and 1.1… You know, if you think about it, the IP version 4 address space has 256 possible values in that first octet, so there are maybe 250 more companies who are going to get out there and try to get 11.11, and 12.12, and all the rest… Because if you can put yourself in the middle of DNS queries, then you can learn a lot. And then you can take that learning, and even if you’re totally privacy-respecting - which, according to their stated privacy policy, Google is, and I have no reason to doubt it - you can still learn a lot that is not privacy-violating. And so why wouldn’t everybody and his brother try to create a system that would cause millions of people to send them their most vital information, which is what they’re working on and what they’re interested in?

Okay, so let’s fast-forward… You’re asking “Why is it always DNS?”

Yeah, yeah.

Okay, so it used to be that if you were a CDN, like Akamai, a content delivery network, you could simply operate the nameserver for, say, Microsoft.com - you’d get a query, you’d answer it. Somebody would ask “Where’s www.microsoft.com?”, and you’d look at the source of the query and say “Gee, we’ve got 35 copies of that content around the world. The one closest to you is that one.” And it would give you an IP address as part of the answer to your DNS question that was the mirror closest to where you were coming from. This went away once OpenDNS and Google and everybody else started doing this, because the place the DNS question was coming to you from was not the end user. And it wasn’t their ISP, it wasn’t their house, it wasn’t anything that would help you predict where the web fetch was going to be coming from. And so a collection of companies who had monetized things to the point where they no longer work proposed additional complexity, with vast privacy risks, and then deployed it, and it is the standard for the internet today.

So EDNS, which was my thing - so I’m blamed for this sometimes - EDNS Client Subnet, ECS, is just a way to amend your query from your 8.8 server, or your OpenDNS server, your 1.1 or 9.9 server… It amends your query by saying “And furthermore, the question I’m sending you is due to an end user who was on the following network. So if you’re planning on doing the CDN thing, craft an answer for them based on that address. That’s the address to use. Don’t use my address, because I’m not where the web fetch is going to come from.” And boy, there were a lot of bugs. And there are still a lot of bugs.
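For the curious, ECS is specified in RFC 7871, and its wire format is simple enough to build by hand. Below is a sketch for the IPv4 case; 198.51.100.0/24 is just a documentation network standing in for “the network the end user was on”:

```python
import socket
import struct

def ecs_option(client_net: str, source_prefix: int) -> bytes:
    """Build the EDNS Client Subnet option (RFC 7871) for an IPv4 network:
    family 1, the source prefix length, scope 0, and the address bytes
    truncated to the prefix, so no full client address leaks upstream."""
    addr = socket.inet_aton(client_net)
    nbytes = (source_prefix + 7) // 8
    payload = struct.pack("!HBB", 1, source_prefix, 0) + addr[:nbytes]
    # Option code 8 is ECS; a resolver attaches this to its upstream query
    # so an authoritative CDN nameserver can pick the nearest mirror.
    return struct.pack("!HH", 8, len(payload)) + payload

print(ecs_option("198.51.100.0", 24).hex())  # 0008000700011800c63364
```

Note the deliberate truncation to a prefix rather than a full address - that is the privacy compromise the standard settled on, and mishandling it is the source of many of the bugs mentioned above.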

[01:11:45.16] And so by sort of getting in there and saying “This is my leverage point. This is how I’m going to innovate. This is how I’m going to shim myself into the internet ecosystem, so that I can add value and get paid for it”, DNS works less and less well. And so when somebody shows me that and says “Oh Paul, you’re such an idiot. You created this terrible thing. It’s always DNS” - well, it’s not my DNS. The world has taken DNS for a ride, and there’s no guardrail where you’re driving it, and it doesn’t want to work the way that you want it to work. And I’m not surprised that you’re having the trouble you’re having. Sorry, long answer to a short question.

[laughs]

I like that. I mean, you make a great point, obviously, about where you resolve your DNS - whether you choose OpenDNS, which was very popular back in the day, right? I had no idea about that back-story between OpenDNS and Google… But that’s true; wherever you point your DNS, you give a lot of power to them. Speaking of, Paul, where do you point your DNS? What resolver do you use?

Well, there are two me’s. I’ll answer differently for each. So - now, I have a day job, and they provided a laptop, and it does whatever it does; it goes through the corporate DNS environment… So it’s logged, and filtered, and everything’s done however it is that the IT security team wants it to be. On my own laptop, I have a VM that does nothing but run a DNS server. So I carry my DNS server with me wherever I go. In my house, and back when I used to be a startup guy, we simply ran our own DNS servers on-prem, so that we could log, so that we could filter, so that we could get all of the benefits of that informational leverage locally.

I have a friend, Tom Byrnes, who has created something called the Personal DNS Firewall; it’s a company called ThreatSTOP, threatstop.com. And this is part of [unintelligible 01:13:48.00] as well. And I really would like more people to say “You know, I heard Vixie say that I shouldn’t let anybody see my DNS traffic, and that there’s a free thing I can install on my laptop that’ll just do it all locally.” Yes, you did hear that. Yes, you should do that.

Good answer. Running it local.

I suppose, on that note though, how do you not call out, say, a different resolver that’s popular, like 1.1.1.1 from Cloudflare, etc.? How do you run your own DNS resolver, I suppose? How do I know how to do that?

So I assume that you’ve got some laptop running some operating system, and that it’s got an IP address, and one of its IP addresses is 127.0.0.1.

And all you do is grab some open source DNS server that has been compiled and packaged [unintelligible 01:14:50.13] have, install it there, and tell it to listen on the loopback address. When you configure – you know, if you have a Linux machine [unintelligible 01:14:59.17] If you have a Windows machine, it would be somewhere in the registry. But one way or another, just tell your system that 127.0.0.1 is the name server, and then run a name server there. It’s just that hard. I mean, it’s just that easy.
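For the Linux case, one concrete sketch of what that looks like: install an open source recursive server such as Unbound and point the system at loopback. The paths and options below are typical Unbound defaults, not something from the conversation, so treat them as an illustrative starting point rather than a complete setup:

```
# /etc/unbound/unbound.conf - minimal local recursive resolver
server:
    interface: 127.0.0.1              # listen only on loopback
    access-control: 127.0.0.0/8 allow
    # no forward-zone block: Unbound recurses from the root servers itself

# then point the system resolver at it, e.g. in /etc/resolv.conf:
#   nameserver 127.0.0.1
```

With no forward-zone configured, your queries go to the authoritative servers directly instead of through any intermediary resolver, which is the whole point being made here.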

Well, I’m still learning about DNS. I do not claim to be a DNS – I’m a novice, really. I do run - and you may know this very well - Pi-hole. It’s popular out there for a lot of homelabbers who –

Pi-hole is huge.

Any of your listeners who are interested in running their own DNS server and they don’t like what I said, they should use a Pi-hole. It’s great.

So I use Pi-hole. Actually, I have two, and I have it load-balanced. But inside of Pi-hole settings it has upstream DNS servers, which I have set to Cloudflare. Isn’t that the same, where you have upstream DNS servers?

That’s the default config. But one of the appendices shows how to configure it to talk directly to the root nameservers, and discover content without going through an intermediary resolver.

Okay. That’s something I haven’t learned to do yet. So all this time I’ve been so proud to be using Pi-hole. I’ve said it at least 1,000 times in this podcast, right Jerod?

[01:16:17.19] At least…

And I didn’t know that there was an alternative way to configure it so that I can just resolve direct.

One of the things you can also do with a Pi-hole is to have your own filter list. Over and above whatever you subscribe to, you can just say “Yeah, here are the various advertising servers that I want answered as nonexistent, so that my web browser won’t go fetch them.” That’s what a lot of people use it for. But it is absolutely possible to make a Pi-hole ignore intermediate name servers… But I also want to speak in defense of your ISP’s name server, right? One of the reasons that ISP name servers got a bad rep, and thus created the opportunity for OpenDNS, Google, Quad9 and Cloudflare and so on, is that they kept doing the wrong thing. They abused their position in your data path to data-mine you, and target you with ads, and all the rest of that stuff. They don’t so much do that anymore.

I know, as an example, that Comcast has adopted a completely hands-off attitude toward their customers’ DNS traffic, and I know their team. They are really good. If you’re a Comcast customer, you don’t need to use Cloudflare in order to keep your information safe. And it’s worth looking online and finding out what is known about whatever ISP you have; if it’s not Comcast, they may still be pretty good, because now the world is watching them in a way that they weren’t 15 years ago.

Mm-hm. Beyond, I suppose, the praise you’ve given for Pi-hole here, what do you think about the open source program itself? Not just saying “Hey, more people should use it.” I feel like Pi-hole cracked a nut in a way that was just never thought of before. Rather than solving DNS resolving at the laptop level, like you had said before - which is one device - you do it at the network level, which means that the entire network benefits from the fact that Pi-hole is on the network, and you control it. What are your thoughts on that, considering what you know about DNS, given privacy, etc.?

I love your question, because I know that you feel it in your heart that it’s something you really want to know. So I’m gonna just fill in a little bit more of the backstory. It didn’t used to be ISPs. We just had networks. The internet was a network of networks, and [unintelligible 01:18:37.07] and whatever it was you were going to do, you did, and you provided whatever services your clients needed. By the time I took over maintaining the BIND software in the late 1980s, it had a 100% market share, as DNS wasn’t sexy in the way it is now. There was no money to be made [unintelligible 01:18:59.26] people. And it ran everywhere. It was on every single network. That’s just how the world started. This thing where now people [unintelligible 01:19:09.27] came in after. They don’t know that that was the origin story. They think that Cloudflare has been here forever, and Google and OpenDNS… No. They came in the early to mid-2000s. There was a rich history of decades of everybody running their own DNS server. And yes, that gave you a network effect. You’d say “Hi, I’m part of this campus. I need some concrete [unintelligible 01:19:38.03] down along Highway 101 somewhere, and I have a connection to CSNET, and… Yeah, we have our own nameservers.” And everybody in the company who wanted to use the TCP/IP protocols was using those nameservers. And so we got to share one answer that we fetched from the outside world among every internal user who wanted it. That’s how this all started. So for you to ask “Could that possibly work?” is a little odd for me. Yes, that can possibly work.

[01:20:14.05] Okay. I did come after that era. Okay.

The only reason we’re not doing that is that it didn’t make enough money for enough people. Otherwise, you would have been born into a world where that’s the norm.

So this thing you mentioned from your friend, Personal DNS Firewall, and then Pi-hole itself - I think Pi-hole requires a bit more for someone to adopt it… Do you personally advocate for, would you suggest as your prescription for folks out there who care about their privacy, running their own DNS server, whether it’s Personal DNS Firewall, or Pi-hole, or something like it?

It is, but I want to admit to some of the pain points that we’ve encountered. So if you’re an average American in an apartment or in a single-family dwelling, and you’ve got whatever connection you’ve got, from [unintelligible 01:21:04.09] whatever you’ve got - you’ve got this cable modem or home gateway box, some client that connects to the outside, and you’ve just got a Wi-Fi access point. Maybe it’s [unintelligible 01:21:15.24] whatever. That’s your situation. You want to run a Pi-hole? Well, you’re gonna have to, number one, get yourself a Raspberry Pi, install the image, fiddle around with it a little bit, make sure it works… And then number two, you’ve got to get into that gateway box - it’s probably answering on the web port at 192.168.1.1, and you probably have a password which is written on the side of the unit [unintelligible 01:21:45.13] forget it. Then you’ve gotta get in there and configure the DHCP service inside that box, so that when anybody signs on to your home network, and they get an address from you, and they get told what the gateway is, they’re also told to use your Pi-hole as the DNS service. Or they’re going to use you, the gateway box, in which case you need to reconfigure that gateway box so that instead of going to the ISP name server, it goes to the Pi-hole. That is all hard. Internet [unintelligible 01:22:20.07] easy. So everything I’ve just said is somebody swimming upstream. And I don’t want to make it sound like it’s going to be super-simple. But once you understand what those issues are, and you’re willing to account for them and cope, then Pi-hole is one answer.

Another answer is - you know, you don’t need the Pi-hole image per se. Take a Raspberry Pi and whatever version of Linux it came with, and install Unbound, or indeed any other open source name server out there that has a package for that version of Linux - you can just turn it on. The defaults are pretty reasonable. It won’t do the ad-blocking that Pi-hole is known for, but it will absolutely give you a local listener that everybody inside your single-family dwelling, or your company, or your apartment or whatever can keep using. There’s nothing magic about this.
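A minimal Unbound configuration along the lines Paul sketches - a plain recursive resolver listening for the local network, no ad-blocking - could look like this. The 192.168.1.0/24 subnet is an assumption about a typical home LAN:

```conf
# /etc/unbound/unbound.conf - minimal local recursive resolver (sketch)
server:
    # listen on all local interfaces so LAN clients can reach it
    interface: 0.0.0.0
    # only answer queries from localhost and the (assumed) home subnet
    access-control: 127.0.0.0/8 allow
    access-control: 192.168.1.0/24 allow
```

With just this, Unbound does full recursion itself - walking from the root servers down - rather than forwarding to an ISP or public resolver.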

And the benefit to them is obviously that they’re no longer freely giving their lookups to the ISP, or to the resolver that they’ve chosen - which is “Hey, I’m Google. Come use 8.8.8.8, because it’s easy, it’s fast”, or however they bless it. And even Cloudflare - we’re fans of them, but they have a Family Edition which I think is pretty interesting. You can point at a different resolver IP address to get a family-friendly lookup zone, where if your family is looking at things that are inappropriate, or just sort of fringe to families, it’s protecting the young ones on those networks. That seems like a good benefit, and I understand why they’re doing it. But they are getting the ability to sniff it, right? They’re getting all of your lookups, and that’s not good, I suppose.
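For reference, the Cloudflare family offering Adam mentions exposes its filtering on separate resolver addresses; pointing a machine at them can be a one-line resolver config entry, sketched here for systemd-resolved:

```ini
# /etc/systemd/resolved.conf (sketch - assumes a Linux host using systemd-resolved)
[Resolve]
# 1.1.1.2 / 1.0.0.2 block malware; 1.1.1.3 / 1.0.0.3 also block adult content
DNS=1.1.1.3 1.0.0.3
```

The trade-off is exactly the one discussed here: the filtering happens because that operator sees every lookup.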

[01:24:11.04] They are, but I want to say that, again, I’ve read the privacy policies online for OpenDNS and Google. I have no reason to think that they aren’t implementing exactly what they say, and what they say is they don’t sniff. So I think there is a valid enterprise value proposition for these companies who just want to say “Look, I’m in the business of providing internet-related services. I will be more successful with that if DNS doesn’t hurt so bad. So I want to offer this service to make sure people have access to at least one reliable, high-quality DNS service.” I don’t think that’s a lie. I just don’t think that you should have to go that far and trust that far.

If you’re using a DNS server whose operators are in a different legal regime than you, it may be that the privacy law there is not the same as the privacy law where you are. And maybe it won’t be them, it’ll be somebody between you and them who wants to data-mine your queries, and optimize your ads, and all the rest of that… And so now the IETF has said “Well, because people are talking to distant name servers by default, they have to encrypt all of it.” Well, if you’re going to encrypt all of it, then you have to go figure out “How do I get the encryption key, so that I know how to encrypt the data I’m sending to that service?” You go talk to other things on the internet, and it turns your otherwise tiny little island - the network of which the internet is a network - into a cell in the body of something very large, and you’re depending on everybody else to do the right thing. So I don’t have a pitch that says “Gee, if you use my stuff, [01:25:58.14] My pitch is “If you use open source, and control it yourself, and don’t go off net unless you need to, that will maximize your autonomy and your [unintelligible 01:26:11.03] experience.”
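The encrypt-everything direction Paul describes is what DNS over TLS (RFC 7858) implements. A self-hosted resolver can combine both ideas - keep control locally, and encrypt only the traffic that does leave the network. A sketch, assuming a reasonably recent Unbound and Cloudflare as the (assumed) upstream:

```conf
# unbound.conf fragment: encrypt queries that leave the local resolver (sketch)
server:
    # CA bundle used to verify the upstream's TLS certificate
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt
forward-zone:
    name: "."
    forward-tls-upstream: yes
    # DNS over TLS on port 853, authenticated as cloudflare-dns.com
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
```

Note this trades away the full local recursion Paul advocates - forwarding everything upstream is precisely the "going off net" he suggests avoiding unless you need to.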

Paul, as we close out here, I’m curious… You’ve been in the industry a long time, you have a lot of experience… Before we started recording you said your family has a ranch, you have some things that you’re doing outside of the technical world… What keeps you going, what keeps you in the industry? Why haven’t you hung up the… Hung up the shoes? What’s the saying? Why aren’t you retired yet? In the nicest way possible. I’m not trying to push you out. I’m just curious, what gets you going in the morning to come back to work every day?

Well, it all started in 1980, when my high school guidance counselor explained to me that I would be in the 11th grade again next year, because I hadn’t turned in any homework, and I’m a terrible student and so on. And I remember thinking to myself “I think I know a better way. Because I know how to program computers, and I’ll bet there’s somebody somewhere who will pay me more than the minimum wage to do it.”

[01:27:11.20] So that kind of put me in the right place, right time, San Francisco area, 1980s… The internet was just about to commercialize and privatize, Unix was just about to become a household word… And so 30 years later, when I was receiving my award from the Internet Hall of Fame, I said “I’ve spent the first half of that 30-odd years making communication possible. And then, because we succeeded so well, I have spent the second half trying to make it safer.” And when I exited my fifth, and what I hope will be my final startup, in November of 2021, I didn’t really want to do it. My heart was in continuing on, finding another investor, because we were in the black, we just weren’t growing fast enough. But investors [unintelligible 01:28:05.17] so we sold. I thought “Okay, that’s it. I have been such a bomb thrower for the last 30 years that I am going to be unemployable unless I start another company, which I am just not [unintelligible 01:28:21.10] to do.” But I was wrong. I was wrong in two ways.

First, I was unemployed for the first time in 41 years, so I learned something horrible about myself. You should try this, see what you learn. Which is, I learned that if I don’t have a reason to get out of bed in the morning, I don’t. And that was intolerable.

But then I had the problem that I didn’t have a team over and above my family. I wasn’t on a team. And I didn’t have customers to protect. And so the cloud company called me up and said “We don’t care that you’re a little bit of a bomb thrower. We think you’ll fit right in.” And I was so glad that I had a team to go join, and customers to protect. That is my particular psyche. I need those things.

Well said.

So no end in sight, then. Because why? Because you need it.

I don’t want to die at my desk, but someday I will definitely get too old to do this, and I guess I’m gonna count on my coworkers to tell me that that’s that.

Okay. It’s a good strategy.

Someone’s gonna.

You’ve got some honest coworkers around you, you know…

No shortage.

Well, it’s been fun digging into the villain and the hero called DNS…

I’ve never had a chance to sit down with someone like you, to go as deep as we have with you, and I really appreciate you taking the time with us to entertain our questions, and… Yeah, it’s just been - it’s been awesome. Thank you so much.

It’s just been a lot of fun. Thank you, guys, very much for [unintelligible 01:30:00.20]

Thank you as well. Loved it.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
