Programming with LLMs
For the past year, David Crawshaw has intentionally sought ways to use LLMs while programming, in order to learn about them. He now regularly uses LLMs while working and considers them a net positive for his productivity. David wrote down his experience, which we found both practical and insightful. Hopefully you will too!
Matched from the episode's transcript
David Crawshaw: [00:20:18.21] I think that is exactly the right way to frame the question for a business, and I don't know the answer to a lot of those questions. I can talk to some of the more technical costs involved. What the benefits would be to the company is extremely open-ended to me. I don't actually - I can't imagine a way to measure that. Based on talking to customers of Tailscale who deploy it, thinking about the [unintelligible 00:20:43.07] And so to go back to something you said earlier, about how you use it and you don't pay for it… I think that's great, because Tailscale has no intention of making money off individual users. That's not a major source of revenue for the company. The company's major source of revenue is corporate deployments. And there's a blog post by my co-founder, Avery, about how the free plan stays free on our website, which sort of explains this… That individual users help bring Tailscale to companies who use it for business purposes, and they fund the company's existence.
So looking at those business deployments, you do see Tailscale gets rolled out initially at companies for some tiny subset of the things that it could be used for. And it often takes quite a while to roll out for more. And even if the company has a really good roadmap and a really good understanding of all the ways they could use it, it can take a very long time to solve all of their problems with it. And that's assuming they have a really good understanding of all the things it can do. And the point you're making, Adam, that people often don't even realize all the great things you can do with it is true. And I'm sure a tool that helped people explore what they could do would have some effect on revenue.
In terms of the technical side of it and the challenges - there are several challenges. In the very broad sense, the biggest challenge with LLMs is just the enormous amount of what you might call traditional, non-model engineering that has to happen in front of them to make them work well. It's surprisingly involved. I can talk to some things I've been working on over the last year to give you a sense of that… Beyond that, the second sort of big technical challenge is that one of Tailscale's core design principles is all of the networking is end-to-end encrypted. And the main thing an LLM needs to give you insight is a source of data. And the major source of data would be what is happening on your network, what talks to what, how does it all work? And that means that any model telling you how you could change your networking layout, or giving you insight into what you could do, would need access to data that we as a company don't have, and don't want. And so we're back to - it would have to be a product you run locally, and have complete control over… Which is absolutely - my favorite sorts of products are that. I like open source software that I can see the source code for, compile myself, run locally. That's how I like all things to be. But trying to get there with LLMs in the state they are today is actually, I think, pretty tricky. I don't think I've seen an actually shipped product that does that really well for people. There's one, there's a developer tool that I hear a lot of good talk about, that I don't - I'm just trying to live-search for it for you. Nope, that's the wrong one. That's Magic Shell History, which also sounds really cool. I should use that one.
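To make the "run it locally" idea concrete, here is a minimal sketch in Go of what such a local-only setup could look like: the tool gathers its own data on the user's machine and sends it to a model server listening on localhost, so nothing leaves the user's control. The Ollama-style endpoint, the model name, and the prompt are illustrative assumptions; this is not a product David describes or anything Tailscale ships.

```go
// Minimal sketch: query a locally running model server so that network
// metadata never leaves the machine. Assumes an Ollama-style server is
// listening on localhost:11434 with a model already pulled; the endpoint,
// model name, and prompt are illustrative assumptions.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// Locally gathered data, e.g. a summary of which nodes talk to which.
	// In a real tool this would be produced on the user's own machine.
	netSummary := "node laptop-1 connects to server-db on port 5432"

	reqBody, err := json.Marshal(generateRequest{
		Model:  "llama3", // assumed local model name
		Prompt: "Suggest ACL improvements for this network:\n" + netSummary,
		Stream: false,
	})
	if err != nil {
		log.Fatal(err)
	}

	// The request goes only to localhost, so the data stays under the
	// user's control, matching the constraint described above.
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(reqBody))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Response)
}
```

The design point is the boundary, not the model: as long as both the data collection and the inference happen on hardware the user controls, the end-to-end encryption principle is preserved.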