It's not always DNS
This week we're talking about DNS with Paul Vixie. Paul is well known for his contributions to DNS and agrees with Adam on having a "love/hate relationship with DNS." We discuss the limitations of current DNS technologies, the need for revisions to support future internet scale, and the challenges in doing that. Paul shares insights on the future of the internet and how he'd reinvent DNS if given the opportunity. We even discuss the cultural idiom "It's always DNS," and the shift to using DNS resolvers like OpenDNS, Google's 8.8.8.8, and Cloudflare's 1.1.1.1. Buckle up, this is a good one.
Paul Vixie: I think that that problem would be avoided. Earlier I talked about how in the early days of the web we didn't have crypto that was fast enough to support - crypto hardware fast enough to support HTTPS - and so we just sort of didn't do it… Until you needed it, and then you bought some hardware to assist you with that. And nowadays, I'm thinking about a number of different NIC vendors who make PCIe x16 cards that you can stick into your server, that will do a lot of the offloading for you: checksum computation, segmentation, reassembly, and so forth.
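Those offload features are easy to inspect on a Linux host. Here's a minimal sketch, assuming ethtool is installed; "eth0" is a placeholder interface name, not something from the episode:

```python
# Sketch: list a NIC's hardware offload features on Linux via ethtool.
# Assumes a Linux host with ethtool installed; "eth0" is a placeholder.
import subprocess

def offload_features(interface: str = "eth0") -> dict:
    """Parse `ethtool -k`, which reports offloads such as
    tx-checksumming, tcp-segmentation-offload (TSO), and
    generic-receive-offload (GRO)."""
    out = subprocess.run(
        ["ethtool", "-k", interface],
        capture_output=True, text=True, check=True,
    ).stdout
    features = {}
    for line in out.splitlines()[1:]:        # first line is a banner
        if ":" in line:
            name, _, state = line.partition(":")
            features[name.strip()] = state.strip()
    return features

if __name__ == "__main__":
    for name, state in offload_features().items():
        if "offload" in name or "checksum" in name:
            print(f"{name}: {state}")
```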
[00:42:05.27] Now, if you need to operate at 100 gigabits a second, or soon 400 gigabits per second, if your CPU has got other things to do than shoulder every octet through the bus, then you can get a very smart NIC, and driver support for it, and - you know, as we all know, Moore's Law will continue giving us its annual gift, and the time will come when you don't need that hardware anymore, and you're just doing that with your CPU, because you have so many cores, so much cache, and so forth.
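To make those line rates concrete, some back-of-the-envelope arithmetic (ours, not the episode's) shows why the CPU needs help at 100 gigabits with today's 1500-octet packets:

```python
# Back-of-the-envelope packet rates, assuming every packet is a full
# 1500-octet Ethernet frame and ignoring framing overhead.
MTU_BITS = 1500 * 8

for gbps in (1, 10, 100, 400):
    pps = gbps * 1e9 / MTU_BITS          # packets per second
    ns_per_packet = 1e9 / pps            # CPU time budget per packet
    print(f"{gbps:>3} Gb/s: {pps / 1e6:6.2f}M packets/s, "
          f"~{ns_per_packet:7.1f} ns per packet")
```

At 100 Gb/s that's roughly 8.3 million packets per second, or about 120 nanoseconds of CPU budget per packet; at 400 Gb/s, about 30 nanoseconds.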
So I think any protocol, in order to succeed at all, would have to be the kind of thing - it'd be very difficult in the short term, but in the long run it'll just be the way everything works. And so I don't see that hardware support is going to be called for in any of this. What will be called for is some hard decisions. I mentioned earlier that fragmentation kind of doesn't work, and it got worse in IPv6. It worked a little bit in IPv4. It doesn't work at all now. And packet size [unintelligible 00:43:14.10] on our WiFi is Ethernet cells. [unintelligible 00:43:18.20] And that is 1500 octets.

And I knew one of the people whose name was on the Ethernet patent. He was my mentor at a minicomputer company back in the late '80s. His name is David Boggs. And I had an opportunity to listen to him talk about the old bits, talk about being at Xerox and inventing Ethernet, and [unintelligible 00:43:45.20] And so I'm in a position to know secondhand that the intent was that the packet size would continue to grow, so that as we got faster at networking, we would also get larger packets. And he was a genius in a lot of ways, in this and every other, but his idea about this was: every time the clock rate gets you 10x - in other words, you go from 10 megabit to 100, to 1,000 (that's a gigabit), or from a gigabit to 10 gigs, and so forth - every time you get 10x of clock, you should probably give about a third of that to packet size, so that the number of packets in a given unit time doesn't also go 10x. Your packet count and your packet size each go up by about the square root of 10 [unintelligible 00:44:39.00]

And had we been doing that all this time, a lot of things would be simple that are currently very hard. We certainly wouldn't hear that fragmentation didn't work if the packets we could send had gotten larger over time. But they didn't. And the reason they didn't is that the Ethernet market relies on backward compatibility. When somebody adds 10-gigabit networking to their office network, they don't make everybody switch at once. They just say "New ports will be 10 gigs, but the old ones will still be one gig, and we're going to run a network bridge, a layer two bridge, to connect the old one to the new one." And that won't work if the packet sizes on the new ones are so big they can't be bridged backward to the one that made the market exist in the first place.
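Boggs' rule of thumb is easy to work out numerically. The sketch below (our arithmetic, not from the episode) applies it starting from classic 10 Mb/s Ethernet and its 1500-octet frames: each 10x step in link speed multiplies packet size and packet rate by √10 ≈ 3.16 each.

```python
# Boggs' rule of thumb: at each 10x step in link speed, give roughly
# sqrt(10) of the gain to packet size and sqrt(10) to packet rate, so
# neither one grows 10x on its own.
import math

STEP = math.sqrt(10)                     # ~3.162x per 10x of clock

speed_mbps, packet_octets = 10, 1500.0   # classic Ethernet start point
while speed_mbps <= 400_000:             # up to 400 Gb/s
    pps = speed_mbps * 1e6 / (packet_octets * 8)
    print(f"{speed_mbps:>9,} Mb/s -> {packet_octets:>9,.0f}-octet "
          f"packets, {pps:>10,.0f} packets/s")
    speed_mbps *= 10
    packet_octets *= STEP
```

Under this rule, 100 Gb/s Ethernet would carry 150,000-octet packets at the same ~83,000 packets per second that 10 Mb/s Ethernet handled at 1500 octets.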
So Ethernet is effectively trapped at 1500 octets for all time to come. Yes, a lot of us have turned on what we call jumbograms, so 9100 bytes, which turns out to be a very convenient size [unintelligible 00:45:44.00] So if you're running jumbograms, your NFS is going to be faster, your [unintelligible 00:45:53.15] is going to be faster… Everything you do is gonna be faster. You just can't use that when talking to people outside your own house, or your own campus. Because there's no way to discover whether your ISP can carry packets that big, or will at the far end.
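There is a mechanism meant to probe what a path will carry - Path MTU Discovery - but it depends on ICMP feedback that is often filtered, which is part of why jumbo frames stop at the campus edge. A Linux-only sketch, where the fallback constant values come from `<linux/in.h>` in case a given Python build doesn't expose the named constants:

```python
# Sketch of Path MTU Discovery on Linux: set the don't-fragment (DF)
# flag on a connected UDP socket, then ask the kernel what MTU it has
# learned for that route. Linux-only.
import socket

IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)
IP_MTU = getattr(socket, "IP_MTU", 14)

def path_mtu(host: str, port: int = 53) -> int:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # PMTUDISC_DO: always set DF, so oversized packets trigger an
        # ICMP "fragmentation needed" error instead of being fragmented.
        s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
        s.connect((host, port))
        return s.getsockopt(socket.IPPROTO_IP, IP_MTU)
    finally:
        s.close()

if __name__ == "__main__":
    # Typically prints 1500 on an Ethernet uplink, even if the local
    # LAN itself is running jumbograms.
    print(path_mtu("8.8.8.8"))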
[00:46:11.03] So because we don't have that, because we didn't do what Boggs inevitably thought was the intelligent, obvious thing that everybody should do - give one third of your [unintelligible 00:46:20.29] to packet size - anything we do with DNS in the future is going to have to take that into account. And that in turn means we'll be making the assumption "Well, I guess we could probably send about 1,400 bytes, plus room for all the headers and stuff that was added", and now we've gotta find a way to connect several adjacent packets together, so that we can do essentially application-level fragmentation. Or else we've gotta deal with the handshake overhead. So I predict, knowing the IETF culture as I do, that if we start now, then within no more than four years we will come to an agreement on that single issue.
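This is roughly where DNS sits today: EDNS(0) lets a client advertise a UDP payload size, and operators converged on conservative values (1,232 bytes for "DNS Flag Day 2020") precisely to stay under typical path MTUs, falling back to TCP's handshake when an answer doesn't fit. A sketch using the third-party dnspython library - our choice of tool, not something named in the episode:

```python
# Sketch: advertise a conservative EDNS(0) UDP payload size so replies
# fit in a single unfragmented packet, retrying over TCP on truncation.
# Uses the third-party dnspython library (pip install dnspython).
import dns.flags
import dns.message
import dns.query

RESOLVER = "8.8.8.8"   # any recursive resolver works here

query = dns.message.make_query(
    "example.com", "A",
    use_edns=0,        # enable EDNS(0)
    payload=1232,      # advertised UDP payload size, in bytes
)
response = dns.query.udp(query, RESOLVER, timeout=3.0)

# A truncated (TC) reply means the answer didn't fit in 1232 bytes;
# the fallback is TCP -- exactly the handshake overhead Vixie mentions.
if response.flags & dns.flags.TC:
    response = dns.query.tcp(query, RESOLVER, timeout=3.0)

print(response.answer)
```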