This week we're talking about DNS with Paul Vixie - Paul is well known for his contributions to DNS and agrees with Adam on having a "love/hate relationship with DNS." We discuss the limitations of current DNS technologies, the need for revisions to support future internet scale, and the challenges in doing that. Paul shares insights on the future of the internet and how he'd reinvent DNS if given the opportunity. We even discuss the cultural idiom "It's always DNS," and the shift to using DNS resolvers like OpenDNS, Google's 8.8.8.8 and Cloudflare's 1.1.1.1. Buckle up, this is a good one.
Paul Vixie: I think that that problem would be avoided. Earlier I talked about how in the early days of the web we didn't have crypto that was fast enough to support - the crypto hardware fast enough to support HTTPS, and so we just sort of didn't do it… until you needed it, and then you bought some hardware to assist you with that. And nowadays, I'm thinking about a number of different NIC vendors who make PCIe x16 cards that you can stick into your server, that will do a lot of the offloading for you. Checksum computation, segmentation, reassembly, and so forth.
[00:42:05.27] Now, if you need to operate at 100 gigabits a second, or soon 400 gigabits per second, if your CPU has got other things to do than shoulder every octet through the bus, then you can get a very smart NIC, and driver support for it, and - you know, as we all know, Moore's Law will continue giving us its annual gift, and the time will come when you don't need that hardware anymore, and you're just doing that with your CPU, because you have so many cores, so much cache, and so forth.
So I think any protocol, in order to succeed at all, would have to be the kind of thing - it'd be very difficult in the short term. But in the long run, it'll just be the way everything works. And so I don't see that hardware support is going to be called for in any of this. What will be called for is some hard decisions. I mentioned earlier that fragmentation kind of doesn't work, and it got worse on IPv6. It worked a little bit in IPv4. It doesn't work at all now. And packet size [unintelligible 00:43:14.10] on our WiFi is Ethernet cells. [unintelligible 00:43:18.20] And that is 1500 octets.

And I knew one of the people whose name was on the Ethernet patent. He was my mentor at a minicomputer company back in the late '80s. His name is David Boggs. And I had an opportunity to listen to him talk about the old days, talk about being at Xerox and inventing Ethernet, and [unintelligible 00:43:45.20] And so I'm in a position to know secondhand that the intent was that the packet size would continue to grow, so that as we got faster at networking, we would also get larger packets.

And he was a genius in a lot of ways, in this and every other way, but his idea about this was: every time the clock rate gets you 10x - in other words, you go from 10 megabit to 100, to 1000, to a gigabit, or I guess a gigabit to 10 gigs, and so forth - every time you get 10x of clock, you should probably give about a third of that to packet size, so that the number of packets in a given unit of time doesn't also go 10x. Your packet count and your packet size each go about a square root of 10 [unintelligible 00:44:39.00] And had we been doing that all this time, a lot of things would be simple that are currently very hard. We certainly wouldn't hear that fragmentation didn't work if the packets we could send had gotten larger over time. But they didn't. And the reason they didn't is that the Ethernet market relies on backward compatibility.
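Boggs's rule of thumb, as Vixie recounts it, is easy to put into numbers: each 10x in link speed splits roughly evenly (in log terms) between more packets and bigger packets, so each grows by about sqrt(10) ≈ 3.16 - "about a third" of the 10x. A quick sketch of the counterfactual MTU this would have produced (the function name and framing here are mine, not anything from the conversation):

```python
import math

def boggs_mtu(base_mtu, speed_steps):
    """Boggs's rule as described in the conversation: for every 10x
    of link clock, packet size and packet count each grow by about
    sqrt(10), so their product tracks the full 10x in throughput."""
    return base_mtu * math.sqrt(10) ** speed_steps

# Ethernet started at 10 Mbit/s with a 1500-octet MTU.
# Each 10x step: 100 Mbit (1), 1 Gbit (2), 10 Gbit (3), 100 Gbit (4).
for steps, speed in enumerate(["10 Mb", "100 Mb", "1 Gb", "10 Gb", "100 Gb"]):
    print(f"{speed}: ~{round(boggs_mtu(1500, steps))} octets")
```

Under this rule a 100-gigabit link would carry roughly 150,000-octet packets - large enough that the "fragmentation doesn't work" problem Vixie describes would rarely arise.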
When somebody adds 10 gigabit networking to their office network, they don't make everybody switch at once. They just say "New ports will be 10 gigs, but the old ones will still be one gig, and we're going to run a network bridge, a layer two bridge to connect the old one to the new one." And that won't work if the packet sizes on the new ones are so big they can't be bridged backward to the one that made the market exist in the first place.
So Ethernet is effectively trapped at 1500 octets for all time to come. Yes, a lot of us have turned on what we call jumbograms, so 9100 bytes, which turns out to be a very convenient size [unintelligible 00:45:44.00] So if you're running jumbograms, your NFS is going to be faster, your [unintelligible 00:45:53.15] is going to be faster… Everything you do is gonna be faster. You just can't use that when talking to people outside your own house, or your own campus. Because there's no way to discover whether your ISP can carry packets that big, or will at the far end.
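To make the payload stakes concrete, here is some back-of-the-envelope arithmetic. The constants are the textbook minimum IPv4/IPv6/UDP header sizes; 9000 bytes is a common jumbo-frame setting used here for illustration (the speaker cites 9100), and real paths with options or tunnels leave even less room:

```python
# Per-datagram payload budgets at classic vs. jumbo MTUs.
MTU_ETHERNET = 1500   # classic Ethernet, fixed for backward compatibility
MTU_JUMBO = 9000      # a common jumbo-frame setting (LAN only)
IPV4_HDR, IPV6_HDR, UDP_HDR = 20, 40, 8   # minimum header sizes, no options

def udp_payload(mtu, ip_hdr):
    """Octets left for application data in one unfragmented UDP datagram."""
    return mtu - ip_hdr - UDP_HDR

print(udp_payload(MTU_ETHERNET, IPV4_HDR))  # 1472 octets over UDP/IPv4
print(udp_payload(MTU_ETHERNET, IPV6_HDR))  # 1452 octets over UDP/IPv6
print(udp_payload(MTU_JUMBO, IPV4_HDR))     # 8972 octets inside a jumbo LAN
```

The gap between 1472 and 8972 octets is exactly the win you get on campus and lose at the ISP boundary, because nothing tells you whether the far side can carry the bigger frame.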
[00:46:11.03] So because we don't have that, because we didn't do what Boggs inevitably thought was the intelligent obvious thing that everybody should do, give one third of your [unintelligible 00:46:20.29] to packet size, anything we do with DNS in the future is going to have to take that into account. And that in turn means we'll be making the assumption "Well, I guess we could probably send about 1,400 bytes plus room for all the headers and stuff that was added", and now we've got to find a way to connect several adjacent packets together, so that we can do essentially application-level fragmentation. Or else we've got to deal with the handshake overhead. So I predict, knowing the IETF culture as I do, that if we start now, then within no more than four years we will come to an agreement on that single issue.
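As an illustration of what "connecting several adjacent packets together" might look like, here is a toy sketch of application-level fragmentation over a fixed ~1400-byte budget. The chunk header format (sequence number and total count as two uint16s) is invented for this example and is not any real DNS wire format:

```python
import struct

CHUNK = 1400  # conservative per-datagram payload guess from the discussion

def fragment(msg: bytes) -> list[bytes]:
    """Split one application message into self-describing chunks.
    Hypothetical header: (sequence number, total chunks) as two uint16s;
    a real protocol would also need a message id, checksums, etc."""
    total = (len(msg) + CHUNK - 1) // CHUNK or 1
    return [struct.pack("!HH", i, total) + msg[i * CHUNK:(i + 1) * CHUNK]
            for i in range(total)]

def reassemble(chunks: list[bytes]) -> bytes:
    """Rebuild the message from chunks arriving in any order."""
    parts, total = {}, None
    for c in chunks:
        seq, total = struct.unpack("!HH", c[:4])
        parts[seq] = c[4:]
    if total is None or len(parts) != total:
        raise ValueError("missing fragments")
    return b"".join(parts[i] for i in range(total))

msg = b"x" * 5000          # e.g. a large DNSSEC-style response
frags = fragment(msg)      # 5000 bytes -> 4 datagrams
assert reassemble(frags) == msg
```

The alternative Vixie names - "deal with the handshake overhead" - is simply falling back to a connection-oriented transport such as TCP, which trades this bookkeeping for round trips.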