Changelog & Friends – Episode #46

Is it too late to opt out of AI?

featuring our favorite tech lawyer, Luis Villa


Tech lawyer Luis Villa returns to answer our most pressing questions: what’s up with all these new content deals? How did Google think it was a good idea to ship AI Summaries in its current state? Is it too late to opt out of AI? We also discuss AI in Hollywood (spoilers!), positive things we’re seeing (or hoping for) & Upstream 2024 (June 5th)!



Cronitor – Cronitor helps you understand your cron jobs. Capture the status, metrics, and output from every cron job and background process. Name and organize each job, and ensure the right people are alerted when something goes wrong.

Neon – The fully managed serverless Postgres with a generous free tier. Neon separates storage and compute to offer autoscaling, branching, and bottomless storage.

ExpressVPN – Stop handing over your personal data to ISPs and other tech giants who mine your activity and sell off your information. Protect yourself with ExpressVPN and get three (3) extra months free.

Notes & Links



1 00:00 [Check, 1, 2, $, $]
2 00:37 Let's talk!
3 01:15 Sponsor: Cronitor
4 03:13 Compliments & Friends
5 04:51 AI content deals
6 10:42 User revolts
7 16:15 What is fair use?
8 20:42 The Onion is The Onion
9 22:13 Inevitably incorrect
10 23:23 Code / law constraints
11 26:56 Sponsor: Neon
12 29:02 The micro level
13 34:56 It's too late, isn't it
14 40:33 Better, if it works
15 42:49 Sponsor: ExpressVPN
16 45:10 Hollywood AI (SPOILERS)
17 54:45 AI's impact on devs
18 1:01:59 "Yet" or "For now"
19 1:07:06 Brilliant minds on both sides
20 1:13:51 Positive things: Luis
21 1:19:13 Positive things: Jerod
22 1:21:08 Prometheus
23 1:22:23 Positive things: Adam
24 1:25:30 Upstream 2024
25 1:36:40 Bye friends!
26 1:37:52 Coming up next




Play the audio to listen along while you enjoy the transcript. 🎧

Well, we’re here with Luis Villa. Luis lives at the intersection of law and technology, and all the things that we care about, and so you’re one of the most interesting men in technology, Luis. Did you know that?

Oh, wow.

Or sought after. I want to know what you think about stuff. I’m like “This guy knows.”

That’s better than coffee in the morning. Thanks, man.

That is. Started off with a nice compliment. Well, it’s true. I’m always like “We need to get Luis back, because I don’t know what’s going on… I don’t know what’s gonna happen… I’m scared… I’m excited…”

Oh, I have bad news. I cannot help you with any of those things. [laughs]

I think you can at least help us see a little bit of at least what – maybe not what’s going to happen, but what’s happened so far. I’m curious about your open-ish newsletter… Where is it, man? Where’s the newsletter? I’ve been waiting for the next edition.

Oh, man… I was supposed to get out a newsletter this weekend, and then family life happened. It turns out this whole parenting thing and having a newsletter is sort of – you get one or the other.

At odds. Right.

Yeah. I mean, it’s been an interesting time. People are talking about – I mean, what I’ve really gotta do for the newsletter… Well, first I’ve gotta get done with Upstream, the conference that we’ve got coming up, because I’ve been preparing a lot for that… And then I’ve got to read – I mean, there are bills coming out of the California State Senate that might impact open AI, there’s one in DC… So it’s not boring times.

And then we also have the striking of content deals as well, which is kind of interesting to me, at least… We had Reddit sign a content deal, I think $60 million, with Google… News Corp struck a $250 million deal with OpenAI, which covers the Wall Street Journal, New York Post, Sunday Times, and probably a bunch of other properties… And then you’ve got Stack Overflow, which has deals with everybody…

In the meantime, we’re wondering about copyright. We’re wondering about the law regarding ingestion and training… And in the meantime, it seems like orgs are just like “Well, let’s just strike deals”, and maybe that will be the answer in the short term… I don’t know, what do you think?

I mean, for those who haven’t followed along, the basic idea here that’s going on is everybody wants to buy some content, everybody who has content wants to sell it… There’s a lot of uncertainty. I mean, one thing is – well, all these companies have to sort of check their terms of service. We used to always say “Well, yeah, they’ve got big, grabby clauses in their terms of service”, because all these terms of service, you read them and they’re like “We can do whatever we want that we need to do to run the service.” And people who are lawyers read that and think “Oh, that sounds pretty creepy. You want all the rights, all the time?” And Silicon Valley lawyers are like “Yeah, but really, it’s just to keep the lights on. It’s just to keep the thing running.” And we all sort of hand-waved that away, and now all of a sudden it’s like “Well, we’re keeping the site running, and we’re doing that by making revenue by shipping everything you ever wrote into the maw of the AI machine.” And it’s like, it’s probably legal, right? I mean, much depends on the little nuances of each terms of service that was signed… But it is probably legal. Now, is it a right thing? Is it a good thing? Boy, that all of a sudden gets into much harder questions, right?

I think so, too. I was reading [unintelligible 00:06:56.20] I believe, Jerod; you and I are subscribed to this newsletter. It was actually part of the – I think this week’s or today’s newsletter. And I think one thing they mentioned was essentially that nearly 3,000 newspapers have closed or merged since 2005. And I’m just reading from essentially their perspective on this… Which is kind of telling, because before AI there was social media, there was the News tab inside of Meta, slash Facebook now, which caused a lot of drama… There was a lot of deals struck then, which - the challenge there is not “Oh, it’s now funneled through one place”, it’s algorithmically funneled through one place. And now you have newsrooms, who should be journalists, in quotes journalists (and sometimes they are actually journalists), they should be journalistically pursuing the truth of what’s happening in the world and telling it to the world… Because that’s the whole point of news, right? It’s not that it’s biased based upon a political stance, or an ideological stance, or a newsroom stance… There’s editorial, of course, but now they’ve got to compete with the algorithm, which means we get visibility, or we don’t. And that really shifted a lot of stuff, too. And now essentially we have a new version of what happened then. Now, with AI, which is “Will AI only be consuming AI content?” There’s lots of stuff I’m sure you can tell us, but… Before this was social media, essentially.

[08:18] Yeah. Well, and for newspapers specifically in the US, it’s even before social media that Craigslist was eating their lunch, and even before that… And private equity is eating the backend… There’s a lot going on there. But yeah, I mean, this is something that we dealt with at Wikipedia for a long time… Because Wikipedia got really sort of lucky timing-wise… I mean, obviously, we all know it, we all love it, but it rose to prominence in part sort of hand in hand with the Google algorithm. Before there was SEO, Google had already decided “We frickin’ love Wikipedia”, which is great for Wikipedia. As Google got more popular, Wikipedia got more popular. It’s a pretty clear relationship there. And then at some point, Google was like “We could just read Wikipedia articles. We can read the info boxes. We can start pulling out all this information.”

And Wikipedia - that was something we worried about a lot when I was there. And Wikipedia probably has some qualities that make it a little more resistant to that. But if I was a newspaper, I’d be terrified. They’re reading all my headlines, which is all most people have ever read. Even before social media, that was mostly what people read, was the headlines… And they’re in a world of hurt there. I can understand why that’s terrifying, especially if you don’t think your local news or your local spin on it is all that interesting to people… And I don’t think a lot of people in the newspaper industry are very confident in their own product. At least Wikipedia - whatever else you think of it, Wikipedians are pretty confident in the product, and I’m not sure that’s the case in the news industry right now. So you’re looking around for other revenue sources…


And same thing with Stack Overflow. I mean, at least Reddit will always have the community interaction part of it, right? Because so much of what people want from Reddit is to come and chat, hang out… Stack Overflow has some of that, but at the end of the day what you were really looking for was the answer.

The green checkmark.

Yeah. And if the algorithm can give you the answer – I mean, what a miserable place to be in if you’re Stack Overflow’s leadership. I don’t envy them the hard choices they’re making right now.

And they are facing a little bit of a user revolt with people going in and changing their answers to be wrong because of this deal. I think Reddit obviously faced a big revolt last summer, when they locked down Reddit in terms of the way it was going to work going forward, which was very unpopular…

I almost think it’s more of a straightforward deal now, though. Like, if this is the new way that user-generated content generates revenue, and everybody knows that with eyes wide open, you get to decide if you’re going to participate in Reddit, if you’re gonna participate in Stack Overflow. And so the people who do - it’s almost more straightforward. Because in the past it was like users generate content, platforms take that content, use it for Google juice, Google points browsers to your web page, you get traffic, and then you sell that traffic against display ads, or whatever. And that was always kind of roundabout. Now it’s like we just take it directly and just sell it directly to the – so it’s almost taking out a layer in the middle. It doesn’t necessarily make it better, but at least it makes it more just a straightforward line to the money.

Yeah. I mean, it’s definitely clarifying in that sense…


I don’t know if it’s exactly – you know, simplifying has some implications of being like “Oh, yeah, now everybody understands. This is all good.” I mean, you know, sometimes clarifying can just mean “Now we see exactly how the beast works, and we don’t necessarily like it.”

[12:07] I mean, I don’t really know – I mean, a couple things, right? I think that’s right… But okay, well, one, what are our alternatives? Are we going to start seeing more alternatives that are sort of bottom up, community up in some way? Distributed in some way? I don’t know. I suspect not, because it’s still expensive to host this stuff… But there’s going to be people who opt out, and what are they going to do? Where are they going to go? I think that’s an interesting question.

That’s the hard part. I think the only current best answer is like Fediverse and ActivityPub, and we just haven’t seen that really lay enough technical foundation. I know there are Reddit alternatives that are ActivityPub, and I can’t think of the name of the protocol… Not the protocol, but –

There’s a couple of them, yeah.

Yeah. And I’ve tried them, and the technology just isn’t there yet. I’m not sure if and when it will get there. I think as a Twitter alike, I think Mastodon technologically is pretty much there… I mean, there’s some places where it’s got rough edges, and is slower, and is expensive to host, like you said… There are some alternatives, but they seem still relatively fringe. I just wonder if – in the case of social media, I think it’s still… Even though it is clarifying and simpler, I think it’s still completely fraught and terrible… But in the case of journalism, maybe not as much, because that’s not user-generated content, that’s employee-generated content. So if you’re the Wall Street Journal, and you have a direct line of revenue from Google and Meta and OpenAI, or whatever… And you know, okay, we’re gonna make $250 million over the next X years based on this content deal, and we take that money directly to hire journalists to do journalism, and to create the journalism that then goes out to the bots that answer our questions - this seems like it might work.

Yeah, I mean, though, a couple of things there… I mean, one is simply the obvious ones of you’re not seeing your local community paper getting these deals, right?


And we know from all kinds of research that the death of local papers has been really bad for local government, local democracy, local accountability… So that’s one –

Good point.

And that’s partially just a matter of it’s really hard to negotiate deals with Fox’s lawyers, News Corp’s lawyers… They are professionals; they’re gonna sit down in their room, and they’re going to negotiate the hell out of this deal with Google’s lawyers, and then it’ll be done. Whereas Mission Local, which is my local neighborhood paper, doesn’t have a lawyer on staff. They would probably literally publish in the comment section “Hey, do we know any IP lawyers?”


So there’s just overhead there.

Yeah, totally.

The other thing though is I’d be really curious to see one of these contracts, because – so when you’re licensing IP, or when you’re licensing text like this from somebody, one of the things you can have or not have in the contract is you can say “Oh, and we agree that we’re not going to contest these rights.” We can say “Oh yeah, these are definitely copyrighted”, or we can all agree “These are definitely not copyrighted.” Or we can agree not to agree. We can put a line in there that says something along the lines of “Look, just because we signed this contract doesn’t mean we agree with you that copyright applies here.” So this could be a deal that’s permanent, and it lasts for the rest of our lives, or until the next technological change… But it could be that this contract essentially ends the day Google gets a favorable ruling in court.

[15:40] Because if they get a ruling that this is all fair, that all this scraping is fair use, they don’t need a contract like this anymore, and they could just go do it. And so we don’t know – as part of that negotiation, what did they agree in that case? If they get a favorable fair use ruling, do they keep paying? Do they walk away? That’s actually, I think, a really important thing for our understanding of what the equilibrium is going forward, and we just don’t know. For the moment, that’s a totally secret clause. We don’t know what that looks like.

How clear is fair use, to your knowledge? Pretty ambiguous?

Oh, in this specific sense, or in general?

I suppose in this specific sense… But generally, is it pretty ambiguous, meaning it can go either way when you sort of – it depends on who reads it, how they discern it is how it’s read.

Yeah, I mean, it depends. Well, the right of a library to buy a book and loan it out has been pretty clear… That’s not technically fair use actually, but same general principles apply, of like, you know, maybe we could have argued about that 100 years ago, but it’s been 100 years since anybody argued about that in a serious way. So we’re pretty sure – so when a library buys a book - yeah, great. It gets to go do that. Whereas for like – well, and scraping for web searches, we know… There was a period of about 10 years where we didn’t know if that was fair use or not. We were pretty sure it was fair use, but there was an ongoing series of litigation… Actually, mostly about porn thumbnails, but anyway… That was the driver, where people were trying to figure out “Is scraping for web search fair use?”, especially for Google image search. And now, that’s not really contested anymore. There was a period of about 10 years where we spent a lot of time and money arguing about that… And now, the past 10-15 years, that’s more or less settled that that is fair use. And we’re gonna go through that period again, where right now we’ve got something like 20 live cases of various sorts, between various sets of parties arguing about this… And some of them are arguing fair use, some of them aren’t, some of them are doing sort of more weird, nuanced… There’s technically some DRM-related stuff in some of them even… But the key thing is nobody knows. And that period of uncertainty will probably last about 7 to 10 years, depending on how long some of these cases take to get to the Supreme Court. And then of course, you’re gonna have to redo the whole thing over again in the EU, and Japan, and China…

Rinse and repeat.

Well, not just that. In 7 or 10 years it’s gonna be different. Don’t we expect change between now and then? Something’s gonna change.

The tech moves so fast… It’s gonna change; it’s not gonna – it’s gonna change under their feet.

Yeah. Well, I mean, the tech and the ambition, too. Because Google Book Search, for example, was – I mean, same basic tech, right? You’re just doing it to books instead of webpages. But the ambition of doing that to books - boy, that was scary to a lot of people in the book industry, even though from a tech perspective, “Whatever, it’s just a file of text.” Like, it wasn’t any – the only real technical innovation was in the scanners themselves.

The OCR, yeah…

Yeah, how fast could you OCR this. So will we get changes? Will we see advances in synthetic text, such that the machine can really eat its own tail, and therefore the original source text just gets further and further away, and harder and harder to prove any connection?

The other thing that I think we really need to seriously consider at this point is - we were told for several years that if we just fed more text into the machine, that the machine would just keep getting better, up and to the right, right? Like, there was a direct one to one… And I think maybe we’re seeing with like some of the news this past week about Google’s search returning –


[19:50] …some hilarious garbage, right? Embarrassing garbage… And there’s just no amount of additional text you can feed to the machine to get it to not embarrass itself this way under the current LLM paradigm. It’s just not going to – so maybe we see that all this stuff gets put back in a corner a little bit, and it becomes less… I mean, part of the reason why everybody’s doing these deals now is because everybody smells a giant pot of money. And maybe the pot of money is not as big as we think it is. Maybe hallucination limits it – hallucination, or just the inability to tell fact from fiction. I mean, my favorite of these ones from Google last week - people have been calling them hallucinations, but they’re not hallucinations; it is really faithfully copying The Onion, and it just doesn’t know that The Onion is The Onion. [laughs]

Yeah. Well, talk about a hard problem… I mean, we’ve had humans getting tricked by The Onion for years, you know?

Oh my gosh, yes. They believe things The Onion says that are not true.

Satire can be difficult to read, especially when that which they’re satirizing becomes more and more ridiculous… It’s very difficult sometimes to know if that’s a real article or not anymore.


So hard to blame the LLM on that one, even though it is – I mean, for Google, this is such an embarrassment. It’s so hard for me to imagine them… I mean, and this isn’t even the first time. They’ve been embarrassed repeatedly in this current age. But now they’re doing it right there in their Google search… I mean, we knew it had to happen, but man, is it not ready. And like you said, maybe with this current crop of technologies it’s not gonna be ready.

Yeah. I mean, I think that’s the really interesting technical question. And then how does that play – you know, obviously, with my hats on, how does that play into the legal side? But first, we’re going to spend a few years seeing “Is this actually ready for primetime, going to be ready for primetime?” I’m really curious to see what Apple does, because they’ve struck this deal with OpenAI, but they’re normally more conservative about the quality of stuff that they put out there. So it may be that they sit on it for a few years. I’m sure they’ve done the deal with OpenAI. I’m sure they’re going to be experimenting with it internally. But are they actually then going to pull the trigger, ship it? They have all the money in the world, which means they can have all the patience in the world if they want.

Right. Well, last week we were at Microsoft for Build, and we were talking with Mark Russinovich, who’s CTO of Azure, and we were talking about this exact subject with him with regards to code gen, basically, in that context. And his take is that with the current transformer technology there’s no fixing the root cause. All we can do is put in the guards and the shields, and you can do defense in depth, have one model that’s checking another model, and doing all these things in order to just make it more robust… And it’s papering over the fact that they’re always going to have what we currently call hallucinations, until some new technology comes out which doesn’t currently exist. That’s what he said. And it sounds like – I mean, surely some of the smartest engineers and research folks in the world are at Google, trying to solve this problem, and they’re shipping a product that is woefully inadequate at doing this.

Yeah. I mean, it’s a really big culture moment for them. Like, how can they – well, and to your point about satire… It’s so interesting that you were talking about code gen at Build, because I think it’s actually a really interesting sort of… You know, the way these things happen, nerds got excited about all this. And I’m a nerd. So I say that with love. And I include myself in this. Because Copilot was amazing. Like, Copilot was – but also Copilot, because it’s code, we have linters, we have compilers, we have test suites. We have like this whole framework of stuff. Forget even the next – forget even what Mark was talking about last week, of like layering in different models and stuff… We’ve already got huge suites to help us tell – they’re not perfect, but to help us tell garbage from not garbage. There’s no test suite of like “Is this The Onion or is this not The Onion?”
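The point about code being checkable in ways prose is not can be made concrete. Here is a minimal, illustrative sketch (not from the episode; the `accept_generated_code` function and the snippets are invented for the example): a generated snippet gets rejected automatically if it fails to parse or fails a small test, which is exactly the kind of gate that has no equivalent for "is this The Onion?"

```python
import ast

def accept_generated_code(source: str, test: str) -> bool:
    """Gate model output: reject anything that fails to parse or fails its test."""
    try:
        ast.parse(source)  # the "compiler" check: is it even valid Python?
    except SyntaxError:
        return False
    namespace: dict = {}
    try:
        exec(source, namespace)  # load the candidate definition
        exec(test, namespace)    # run the tiny test suite against it
    except Exception:
        return False
    return True

# Two plausible model outputs, and a test that tells them apart:
good = "def add(a, b):\n    return a + b"
bad = "def add(a, b):\n    return a - b"
test = "assert add(2, 3) == 5"

print(accept_generated_code(good, test))  # True
print(accept_generated_code(bad, test))   # False
```

Real systems layer linters, type checkers, and full test suites on top of this idea, but the principle is the same: code gives you a cheap, mechanical verdict that free-form text does not.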

Very few satirical codebases out there… Except for maybe [unintelligible 00:24:05.06] used to write some probably, but that’s about it.

Right. And what was this test-driven development? We’ll have to bring back his codebases…

[24:13] Yeah, exactly.

TDD for satire. Yeah. So maybe we all got nerd-sniped into “Oh man, this is so amazing”, without thinking through like, actually, code is weird. Because it is creative, and complex…

It’s constrained.

…and so we thought “Oh, well, other creative and complex things will clearly be the next thing to fall.” It’s like, well, okay, so it’s creative and complex, but it’s also constrained in ways that the news, law… I mean, I think I told this story last time I was on the show - it turns out lawyers don’t have our notion of compiling. You send it to a court, it costs you a million bucks and three years of your life, and then you get back “Oh yeah, sorry, you misplaced this colon. You lost the whole case.” We don’t have the quick cycles that programming does.

Right. But you also have the constraints, which makes it a place where LLMs might have fewer problems in legal documents, I think, because of the structure, and because of… I don’t know, they get pretty wordy, I guess… But I’m just thinking, versus answering arbitrary questions from all humans around the world. That seems like a very difficult one that Google is trying to do.

Yeah, that is fair to them. I mean, they – and adversarial questions now too, right?

For sure. Yeah.

The thing that I’m curious about with law… We’ve seen some signs of these LLMs having a sense of structure. Law very much depends on like “Okay, well, we’ve got sentences, paragraphs…” Okay, you’ve got to hold the logical structure of all that in your head. Lawyers never talk about it this way, but a lot of your first year of law school is like jamming the big-picture constructs into your head, in like a structured, organized way… And then you get new facts, and you apply them, you sort of pass them through this structured filter… And LLMs are not yet super-great at that. They’re still trying to figure out that kind of structure.

I mean, we know there’s certainly some interesting research that shows that they’re figuring out structure in large codebases, and there’s certainly some analogies there with the law that I think are gonna be super-interesting… But it’s still early days, and it’s still – I mean, there are plenty of bad examples of bad LLM search out there in the law, I would say, so far… But it might be tractable. I don’t know. We’ll see.

Break: [26:44]

I think it’s interesting at the micro level, at the clause level, or the section level, so to speak… Because there’s a lot of opportunity to sort of write a better accountability clause, or just something that’s in an agreement, that doesn’t have to be a full-on document. Maybe there’s an existing document already, and you just need to massage it for this one use case, and you explain the use case that it currently solves, and you say “Well, I need a new clause to now support this one section of concern”, and there’s help there.

Now, I could be just the layman wishing for a magic genie inside this bottle to help me with my legal challenges whenever it comes to agreements or whatever it may be… Because we sign agreements on the weekly around here. And they’ve largely not changed for a while, but sometimes we get pushback on a certain clause, or just questions that I can’t quite fully answer, because I’m not the attorney… We’re going to shove it off to an attorney to answer that question, but it’d be nice to have something that can massage words in the ways that agreements are formed. Because I think, for the most part as a layman, it seems like that’s possible, or more possible than “Hey, give me an entire document.” I think that’s probably more challenging. Whereas “Give me a clause or a section that covers a certain concern”, that’s a little easier to execute on.

Yeah. Well, this is one of these things… Lawyers take it as a point of professional pride that every sentence and every paragraph – like, “If you ask me for a clause, I’m gonna write you the perfect thing.” And one, actually, we’re pretty bad at that.

Isn’t that because they bill by the hour? [laughs]

No, maybe not just that, but as a matter of like craftsmanship, man… The best lawyers are really – there are plenty of bad lawyers out there, don’t get me wrong… But the best lawyers are like “I’m a craftsman, and I’m making this thing bespoke for you…” But even then, even if you get one of the good lawyers, and he’s super-great about that, they’re still pressed for time, they’re still like [unintelligible 00:30:58.04] I haven’t had my coffee yet, and you said you need it by 9am… Well, okay, I’m gonna –” You don’t want to pay for all the research to make sure it’s 100% right… And at that point, it starts getting a whole lot – I mean, I think one of these fascinating things, both sort of general, and specific to the law is how do you compare… Because we want to compare instinctively LLMs and AI more generally against what’s perfect, right? Because I can tell you all the ways. If you ask an LLM for an NDA, it’s gonna make mistakes, especially against like a perfect template NDA. But so are most lawyers, most of the time, especially if you just ask them to do it from scratch. They’re totally gonna forget things if you ask them to write an NDA from scratch.

And so there’s going to be a gap there, which as a profession, how do we talk about that? How do we reason about that? I don’t know.

[31:53] And then as like a legal system… So I live in San Francisco… We see Waymos all the time. They’re not perfect. So if you judge them against perfection - you know, they do some weird things on occasion. I saw one get very confused just last Friday. Are they safer than human drivers? 1,000%. If I could flip a switch and turn every car in San Francisco into a Waymo tomorrow, I wouldn’t hesitate. I would do it in a heartbeat. And so what do you compare against? Are you comparing the LLM against perfection? Are you comparing it against what a human would do? Are you comparing it against the last generation of Google search? We haven’t figured out as a society how to do that yet.

Yeah, I don’t know. I think I would probably compare it against getting it done, on time, with less money, that still achieves the goal… But I understand that law is massaged over the years; it changes. A new case or a new win in court changes the next agreement that could be written, because now there’s new case study, so to speak, or case law that you can reference as backing for X, whatever the X might be.

Well, this is one of the things that lawyers are terrible at. Like, we love our boilerplate, we copy and paste that stuff… And “Oh, there was a new case? Yeah, I’ll get around to fixing the boilerplate tomorrow.” And then maybe you do, and maybe you don’t.

There’s a great book by an old law prof of mine, where he talks about how there was this one clause in international bond contracts, that was there for like 120 years, and everybody thought they knew what it meant, but if you like put the plain language in front of people, in front of a lawyer who wasn’t a bond attorney, and you’re like “What does this mean?”, they would say exactly the opposite of what the community thought it meant. And finally, there was a judge that was like “Hey guys, I know you all say it means this, but I just read the thing, and it doesn’t mean that.” And then everybody put their hands over their ears and didn’t change it, and they just kept copying that boilerplate.

[laughs] Oh, really?

And about five years after that one case - that one case was sort of a small one, like a few hundred million dollars… And then Argentina sued over the same language for like $10 billion, and threatened to blow up the entire international bond market over the exact same language… So this law professor of mine went around New York, because all the international bond lawyers are in New York, basically; New York or London. And he was like “So why didn’t you change it?” And the book is just like compiling excuses, rationales…

Oh, gosh…

And it’s a really – I mean, it’s a good nerdy book, but it sort of reminds me of The Mythical Man-Month a little bit, where there are just things that we all do as a practice that aren’t always the right thing, but they’re instinctive, they’re intuitive… Lawyers are just as bad at that as anybody else. Sorry…

That’s okay.

Well, no, that’s –

Well, then you can apply this to a whole new world, which is the stock market, or to investing. Right? That kind of data. How do you apply there? Because this comes back to this larger question that’s been looming for me, which is: is it too late to opt out? Because that was the question earlier, “How can we opt out? Can we opt out?” Like with the news organizations, with different sites…

Right. With content…

Right. I think societally, I think humanistically it is too late. In my opinion it’s probably too late. Let me just say it more clearly… I think it’s too late to opt out of AI. So now what? What do we do now, essentially? So you’ve got law, you’ve got code gen, you’ve got just generative art, and text generally, out there in every permutation, and then you have investments probably happening… Like, is there any news around AI and investments? How has this kind of gone into predictiveness? What might happen? What might not happen?

[35:58] I mean, all of my baseball games are now sponsored by a mortgage company that claims to evaluate your mortgage applications with AI.

I don’t know how true that is… Whether that’s just something we would have called an algorithm six months ago, I can’t say. I don’t know. I mean, I think that’s actually a really interesting – because you could imagine sort of bottom up, Reddit actually staging a successful revolt, or maybe on a pro rata basis… I know there’s some that say they’re banning AI-generated content; how good they are at that, I don’t know. Wikipedia is definitely trying to figure out “What do we do about AI bots?” So you can do that bottom up; we can ask our legislators to give us some top-down options, watermarks, or things like that, but I don’t know… I think we’re living through a period where we’re gonna have to throw stuff at the wall and see what sticks.

Yeah. Some of that stuff keeps the honest people honest, you know?

It feels like pushing back for pushing back’s sake, because of, in one case, fear. And I think fear comes from the unknown. We lack knowledge. We can’t predict the future. And this is a very scary moment. There’s a lot of disruption that’s happening. But you can point to history and say “There was disruption here, there was disruption there…” I mean, horses no longer pull things around. I don’t know how you got to where you are now, Luis, but did you go by horse? Probably not, right?

I did not. I did not go by horse. Magically, bike… But yes.

And the last time you travelled any sort of distance, you probably flew in a plane, rather than riding a horse carriage across the country - a trip that would have changed your entire life. That’s how it used to be 100 years ago…

I mean, if you asked my mom, she’s pretty sure I came to California on a covered wagon [unintelligible 00:37:45.27]

Yeah, maybe that’s why. But disruption happens everywhere, right? But this is such a big disruption. It’s such a big opportunity for disruption, and a big opportunity to silo. I think that’s the biggest concern I have with News Corp and these deals - how you silo things toward the big incumbents, and those with money and power… And maybe even going back to some things Cory Doctorow talked about, with – what was it called again? Chokepoint Capitalism. This whole thing where it’s a chokepoint against the artists, or the creators, in a way - it now sort of puts up this toll road, this gate, this “You can’t go through unless you pay.” And then only if you pay can you have your content in this AI, which then generates results, which impacts millions, and you get – it’s back to the algorithm thing, again, where you can only become known if somehow you’re feeding this beast. That’s a strange world to live in in the future. I hope it works out, but I’m just like “How is it gonna work out?” That’s where I camp out; not so much this doom and gloom kind of thing, but really, how will this really work out if we all submit to this thing? Is it truly all-knowing and helpful, or is it, well, useful in certain ways, and compartmentalized?

Boy, if I knew that one… I mean, I’ll tell you, my sort of gut sense comes from a really terrific book I read a couple years ago on the history of the printing press. Long story short, the printing press - even more impactful than you realized, probably…

Oh, for sure.

…but none of us would trade in for like a pre-printing press kind of life… But also, those first 100 years were pretty rough. Religious wars, religious censorship… A bunch of stuff in that first 100 years, as societies were figuring out the impact of the printing press, was not pretty. And I suspect we’re going to be going through something like that, where we see a lot of unpleasantness… Even if our grandkids will be like “I can’t believe they didn’t like AI”, and our great grandkids won’t even know. Our great grandkids will be like “Of course they loved AI from the beginning.”


[40:00] But that in-between period, as you say - a lot of dislocation, there’s going to be a lot of chokepoint stuff, there’s going to be a lot of mediocre… More than anything else, we already had this with Google search. The SEO crap that was dominating all the everything. It’s not like Google search was great a year ago, before they put the AI stuff in.

No, it’s been failing, which is why it’s ripe for disruption, which is why I think ChatGPT poses such an existential threat to Google. Because really, if you think about what we will like years from now - I mean, is it too late to opt out? Like, we don’t actually want to as a human race, because this is kind of… Okay, it’s a proxy of what the dream is. Like, I can just talk to my computer, and it has answers for me. Why would I want Google searches? Now, the problem is you don’t always get the truth, but you just want the answer. It’s a better user experience ultimately, until it tells you that you should go eat rocks once a day… Because that’s one of the things it said, that it’s healthy to eat a rock a day to live longer, or some crap like that.

Or [unintelligible 00:41:03.00]

But in a world where it works, it’s fundamentally better than what we currently have. And so there’s no going back from that.

Yeah… I think that’s right, but then I worry about sort of the ecosystem effects. Because you’re talking about opting out… There’s two sides of that opting out. There’s opting out as a consumer, as a user, where we all google-search a bazillion times a day… I mean, I’m on DuckDuckGo, but still, DuckDuckGo just does not flow as a verb… So I’m still –

Right. DuckDuckWent.

You should call it DGo, or something like that.

Somebody was telling me Kagi was great. Or Kagi… I’ve no idea how you pronounce that.

I’ve heard that as well. I haven’t used it. Yeah.

But then as content producers - and we are all as humans to some extent or another content producers - what’s that look like? How do we choose – how do we opt out or not opt out? Degrees of opting out… Like, that’s a really – I think that’s a sort of fundamentally different question, because like you’re saying, Jerod, it’s from a search perspective if I’ve got a digital butler who anticipates my every need, and just has what I want, that’s obviously better. But if to get the inputs for that we sort of like homogenized all content production… Like, I’m not sure that – that’s a different question about whether you want to opt out, and I think a much harder one. And I don’t think we have any good answers on that.

Break: [42:34]

It’s kind of the luxury that Hollywood has, insofar as they can just invent Data on Star Trek: The Next Generation, who has all of the world’s knowledge in his computer chips. But they don’t have to actually figure out the hard part of where Data got his information from, and how many people that displaced, and like you said, the wars that maybe happened in order for that to just be a fact of that reality.

It sounds like you just wrote a prequel.

Ooh, some good [unintelligible 00:45:35.12] there, yeah.

[laughs] I have been sort of jokingly – I mean, with reading, and I want to do movies next… Like, what are the AIs in fiction that didn’t – the AIs in fiction that weren’t like Terminator. What are the ones that –

Meaning positive?

Not necessarily positive, but at least not negative in the same clichéd way…

Or The Matrix even. Like, The Matrix is still machines, so I would categorize that as AI. They’re intelligent to some degree, right?

Yeah, yeah. I mean – well, I mean, I asked about this on the Fediverse, and quite a few people were like “Well, you need to watch this specific Next Generation episode about Data, and whether Data is human”, that kind of thing.

Really? Do you recall the episodes, so we can put it in the show notes? Because I want to go check it out… Do you have a list?

Do you know which episode that is?

I’ll find it, I’ll send it to you guys so you can put it in the show notes.

Okay. You’re amongst nerds. We will literally go watch the episode.

And you know, Her came up… I mean, obviously, it was Her that I was like “Wait… Yeah, I guess I need to rewatch Her. Because – did these guys miss it as much as I think they missed it?” I don’t remember coming away from that movie with like a good sense of “Oh, cool. AI.”

It was largely a love story to my knowledge. It was like an unexpected love story.

Yeah, but it didn’t end well. Right?

I don’t recall how it ended.

I think she –

I think she’s in love with everybody. Right?

Well, and then doesn’t she – aren’t all the AIs just like “Yeah, actually, we’re in love with each other, and you guys are boring. And we’re out. Peace.”

Okay. I just deleted that in my brain just now, just in case.

[laughs] Adam’s usually the one who spoils things around here, so this is [unintelligible 00:47:32.15]

Well, I do have to spoil one more thing, Jerod, if you don’t mind…

Alright, I’ll just close my ears.

If you haven’t watched the TV show Silicon Valley, it’s largely about artificial intelligence. Have you watched it end to end?

[47:48] I got through the first two seasons, and then sort of… I was watching it on – I was watching it because Tidelift, my company, is headquartered in Boston, so I was doing cross-country flights… And the thing is, all my co-founders are East Coast, and they watch Silicon Valley as like anthropology.

[unintelligible 00:48:08.05]

And they’d refer to people by like –

That’s how it is, Jerod. It’s not how I’ve watched it, it’s how it is, okay?

Well, that’s the thing, right? Because I had avoided watching it for exactly that reason. There’s a whole –

No, it’s two reasons. It is anthropology, but it’s also very comedic. I mean, it’s a masterpiece, in my opinion.

It’s hilarious.

But if you want one more to watch on artificial intelligence, and not exactly Terminator - it doesn’t end well, I’ll just say… But it ends… It actually does end well, now that I think about it. It just depends on your perspective of whether it’s well or not.

Later, later seasons?

Alright, I’ll tack onto the list.

The last season in particular. So honestly, I think it’s worth a watch for anybody in the software world, in my opinion. If you’re in software - I’ll just say this right now; if you’re in software and you’ve not watched this show, end to end, at least once, you’re wrong.

But man, they’re just – I mean, so the end of season one, where they get a pallet of Red Bull and they’re staying at a hotel…

You did that?

Literally that hotel, I had a morning order of Red Bull at 5am every morning.

[laughs] See?

But it wasn’t for TechCrunch Disrupt, it was for the Oracle/Google trial. But I still like cringed. Because they show the outside shot of the hotel, and then they like cut to the Red Bull.

That’s usually the reason most people don’t watch it - it’s too close to reality. The only reason I was bringing it up was just because it has artificial intelligence –

Because he has to…

…and it does it uniquely well, or not well, depending upon your perspective. So I would definitely add that; it’s unexpectedly about artificial intelligence.

I’ll put it on the list. Yeah, because I think that’s – I mean, I don’t know, I don’t find the Terminator stories all that… I mean, again, I live in a neighborhood with killer robots driving around all the time, and everybody’s just like “Hey, they stop at stop signs. It’s fine.”

Are you talking about Waymos?

Yeah, yeah. Waymos, and briefly Cruises. Zoox –

They don’t have actual guns though… I mean, that’s the difference.

No. But I mean, what’s the – in America…

If they did, would you be more uncomfortable than you currently are?

Literally, more people get killed in the city by cars than by guns. So like…

Fair. Car accidents are like one of the number one killers. Cigarettes and car accidents. It’s crazy.

I got some stuff on my YouTube algorithm because I watched one video –

That’s how it does it.

…one video on like “Crazy car crashes you must see.” I don’t know what the headline was, but it was something that got me, and I was like “Oh my gosh, I should check this out.”


And that was yesterday, and today I drove for the first time since watching a few of them, because they got me again and again… And I was like “OMG, I’m scared to drive”, because “This is what could happen when you drive.” Well, to pepper the conversation a bit more, I asked our favorite LLM… Well, at least my new favorite, GPT-4o, as they call it… The Matrix, Ex Machina, Her, I, Robot, A.I. Artificial Intelligence… That’s what the movie is actually called [unintelligible 00:51:14.10] and spell it out… Transcendence, which I think had Johnny Depp in it, Jerod. This is –

I don’t think I saw that one. I haven’t heard of Transcendence.

Yes, it was interesting. Ghost in the Shell. And that’s had like a couple anime versions of it, a more modern version of it, I think that included ScarJo. Tron: Legacy was obviously about AI… Blade Runner 2049, and I guess the original Blade Runner as well… Terminator, which - we’re striking that one; get out of here. Bicentennial Man, WALL-E, Chappie, The Machine, Upgrade, Alita: Battle Angel, The Hitchhiker’s Guide to the Galaxy, Big Hero 6, Stepford Wives, [unintelligible 00:51:51.09] Eagle Eye, Morgan…

Stepford Wives…

Stepford Wives.

That’s an interesting one…

Deuce, Next Gen, Simulant, Archive… These are ones I’m starting to – these are hallucinations at this point maybe, potentially.

I think we’re obsessed with this topic. Look at all these movies…

Right, yeah.

[52:10] They’re starting to hallucinate at this point… [laughs]

I mean, A.I., the one that they literally had to spell out - that was the one Spielberg was working on…

Is it Jude Law in that one?

Jude Law was in that, yes.

Yeah. Yeah, that came up several times in the Fediverse. And it’s weirdly like – it’s recent enough that it probably feels more modern. I haven’t watched it since it was in theaters.

Same. I feel like Haley Joel Osment maybe was in that. And then –

Keenan Feldspar.

That’s the actor?

Yeah. That’s a joke, because that’s his name in Silicon Valley… The same guy plays a whole different thing.

We’ve got season one and season two here. You can’t keep doing this to us. We’re not gonna catch these pitches.

Sorry, man. Gotcha. [laughter]

All I remember for that movie, besides just generally Jude Law, Haley Joel Osment, and then he’s a robot, android, whatever, is that it lasted like 45 minutes too long. And there was this weird thing at the end where they went back to some home place, and it was like in a house… And I was just like “Why is this movie still going?” That’s all I can remember. I can’t remember exactly why that happened, but I was like “Are we still sitting here in this theater? It’s ridiculous.” So maybe we can have ChatGPT summarize it for us and we don’t have to go back and actually watch it.

Yeah, can we trust ChatGPT to summarize the AI movies for us?

[laughs] It’s an existential question.

It’s gonna tell us Terminator was the hero, right?

Right… Well, it could be confused, because Schwarzenegger came back as the hero. So… It’s not exactly straightforward. [unintelligible 00:53:37.03] villain, he became the hero.

It’s true. There’s two more [unintelligible 00:53:40.29] worth mentioning. Elysium, which had Matt Damon in it… The Signal, and I Am Mother, which had Hilary Swank in it.

I Am Mother?

Yeah. I think it was on Netflix, if I recall correctly. Basic premise is a child that had a mother, that lost the mother, I believe, and that was raised by machines. That’s, I think, the basic premise of it. Interesting to watch, though…

Kind of like The Jungle Book, but with AI instead of –

Right, yeah.


That’s gonna be Hollywood’s new trick, is just every old movie that they –

Don’t give them that.

Now, that’s actually a good use of AI, right? I want to write something like this, but in the light of x.

Would that be a good use of it, or just a use of it? Come on…

Well, that would actually be a good use of it, because you have to think less about the research, and it can give you 50 responses, and then you can start thinking faster.

But at the end of it you have a story about the Jungle Book, but it’s AI instead of bears and wolves and stuff. Like, it’s not good.

Mashups, you know? “Help me mash up something.” It’s not bad use, [unintelligible 00:54:40.09]

Child raised by Alexa…

Oh, gosh…

How then do you feel about the way that AI is impacting software developers, literally every single day? …writing code, trying to stop the next takeover, so to speak, from XZ hacks, and stuff like that… What are your thoughts on all these different things that we deal with as developers, that may or may not displace us, may or may not anger us - usually might - and may or may not circumvent the open source code we put out there?

I mean, a couple things. One, I’m not super-worried about displacement. There’s so much demand for good software out there. This feels to me like, you know, when we went from handwriting assembly to using compilers, saying “Well, it’s gonna displace the assembly writers.” Like, okay, yes, but we all got more productive. I think that might not be the case in all domains, but I think in code, there’s just so much more demand than there is supply of developers. I’m not particularly worried about that one.

There’s, I think, a more interesting concern of “Well, is this creating new cruft? Is it creating new technical debt? Is it creating new security vulnerabilities?” And on the one hand, I think it probably is, and on the other hand, have you looked at our code lately? Even before AI, we had piles of technical debt, we had a lot of vulnerabilities… And I am not – so this is one of these things where the question, as we were saying earlier, is “What is it you’re measuring against?” And I can see a legitimate case of maybe it does make these things worse, so I think we need to understand and research that… But at the same time also, these things are already very bad. Like, XZ was not caused by AI.

[56:34] That we’re aware of…

Left Pad was not caused by AI. These are mistakes that we’ve been making for a long time. So I’m more worried about, with like my Tidelift hat on, some of these questions of “How do we think about these piles of very human systems that we put a lot of pressure on?” I mean, XZ was really… I think, actually, I want to float this with you guys, because I don’t think I saw – I was reviewing some notes for Upstream, our conference coming up soon, and I was realizing, I don’t… You know, everybody read the email from the XZ maintainer, who was like “Yeah, I’m burnt out. I have some stuff going on in my personal life.” The thing I’m curious what you guys think about - because it jumped out at me weeks later as I was reviewing all this - is that he mentions in there that he’s been maintaining the project for 15 years. When was the last time you guys had a job for 15 years straight, without changing? I don’t know how long the podcast has been on.

Well, I was gonna say, you just happened to hit the wrong two people, because we’ve been doing this for 15 years. But generally speaking, that would have worked very well, and the answer would have been “I haven’t had a job for 15 years”, for most of us.

That’s true. A lot of change in other career paths, but we’ve been doing this for 15 years.


Right. And the thing is, that library has got to be around for another 150 years, probably, right? So what are we doing about that kind of long-term thing? Maybe LLMs help with that, or maybe they make it worse? Or more likely, it’s a little bit of both, right?

Yeah, that seems like an intractable problem. Any software of sufficient value will, over the long term, outlast its creator; as long as it continues to provide value, it’s going to continue to exist, and be deployed. And even after it stops providing value, it’s still going to be out there in these latent places that just never kept up with the Joneses. So that’s one that I think about a lot… We talk to a lot of people who have ambitious goals for very long-standing projects. I appreciate that from them, and I ask them questions like “Well, how are you actually going to do that?” One that comes to mind is Drew DeVault’s new language, Hare, which he intends to be a 100-year programming language. So we did a show with him, and it’s like “Well, if you’re gonna make a 100–” First of all he’s like “Well, it has to be valuable to people.” So he has that to overcome. Not every project is worth it, at the end of the day. But if you’re planning for that, there are certain things that people do around longevity. And every single one has to do with replacing themselves early in the process, right? Making themselves dispensable, not indispensable… Which is very difficult, and takes actionable steps and planning… And it’s still hard to pull off; sometimes you can’t find somebody else who’s willing to do the work. So I don’t know the answer, I just know that yes, that is a very real and a very hard problem to solve… And we don’t have to solve it just once. We have to solve it thousands of times.

Yeah. We have to solve it thousands of times. And we’ve talked for a long time about “How do I make my project more sustainable?”, but I think it’s going to become more acute, and I don’t know that we have a great – you know, with my lawyer hat on, I can’t help but think about what are legal solutions that we could use to help with things like this. Like, do we need a JavaScript maintainer co-op, where you’re one of these smaller projects, and there’s a formal way for you to “Hey, congratulations. You entered the 10 million download club. We’ve got our private maintainer space, and our private revenue streams.”

[01:00:15.18] But that may be a little bit too much, so my brain runs to those kinds of solutions. I suspect they are part of the story, but they’re probably not all of the story. The human parts have to come first. And I don’t think LLMs really change that one way or the other – you know, I’m sure they’ll make some parts of that easier. Adam, we can write the co-op agreement with ChatGPT…

Right. I think they help with maintenance for those who want to maintain. Like, it’s gonna make a maintainer’s life easier in certain tangible ways, just like it’s gonna make a lawyer’s life easier in certain tangible ways, where it’s like that thing that used to take two hours takes me five minutes now… And so now I can sustain myself personally longer. But I don’t know about –

But what if you have 20 times as many things to do, because of bots on there? I mean, our financial system is already in large part – you know, Adam, you were talking about finances, and the finance system… Our financial system is in large part bots trading with other bots, on the sort of milliseconds…

For sure.

Are we gonna get like – again, we’re generating a lot of good science fiction ideas today, guys. Somebody should write a short story about what GitHub looks like when it’s entirely bots, filing issues, writing patches, approving patches… What’s GitHub look like on that day, with the humans just sort of standing back and being like “I don’t know how the software works… But it does.”

Right. If an issue closes in the woods, and no one is there to hear it, does it really –

What’s the semver change?

[laughs] Yeah.

We add an extra digit to semver. All the changes in this revision were done by bots.

There you go. It’s like major, minor, patch, and bot, something like that.

Do you hold the word “yet”? Or “for now”? I suppose “for now” is a phrase, and “yet” is just a word… But do you hold that near and dear when talking about this stuff? Because things change, right? A lot of this conversation is contextual to now. The time of now. The present. Do you have the “for now” or the “yet” parenthesis in mind when you talk?

I mean, it’s not just time, it’s also place.

Silicon Valley is adopting this stuff in a very different way than a lot of the rest of the US, which is adopting it very differently than the EU, which is adopting it very differently from Japan, China… So it’s both this time-wise, for now, yet… Also this place. I mean, also language. English is better supported, because the corpus of text is just bigger… You know, what does this mean for small languages? How do they – maybe this makes it easier to teach small languages, right? Kids can have a robotic tutor in the small language of their choice and their people… Or maybe it becomes totally irrelevant and everybody just speaks English, because they’ve got an English tutor, too. I think it is both genuinely exciting… Like, I try to remain very positive about all this stuff.

Me too. I mean, even what you just said was kind of positive. I think those are good things to layer onto humanity. If a child can learn a new thing faster with a tutor… The human tutor is totally possible as well, but it’s not always possible financially, or even timewise. Like you said, the time and the when - a literal human may not have the time, or the geographic location to be present in that child’s life, one to one. Whereas on another hand we can invent that thing via what we call artificial intelligence today, and it can supplant what would normally be a human function, and potentially do it just as well, or maybe better. That’s a good thing, I think.

[01:04:03.08] But then we get into this position of who is the arbiter of what’s good and what’s not good? What are, as we’ve talked about before, the unintended consequences of allowing this thing, and opting in? …because we can’t opt out; everyone’s stuck. We’re all opted in. Because you said that Silicon Valley is adopting this stuff in unique ways, and so is the EU, and so is Japan, and so is China… Like, there is a layer of “We cannot opt out” in humanity that we don’t personally hold anymore, you and I, and the three of us in this conversation. There’s a lot of good things, but there’s so many unintended consequences, or bad things that may come as a result of it.

And we don’t – our decision-making processes as societies aren’t well adapted to move at this speed. Which isn’t to say I would trade our democracy for some of the other options on offer right at this particular moment… But it is – it’s been really striking, for example, in San Francisco to watch local politicians struggle with “How do we regulate Waymo?” Because none of them want to acknowledge that the worst safety problem in the city is not drugs or crime. It’s cars. You say that, you’re gonna get voted out of office immediately.

Oh, yeah. I mean, we have this whole thing with – anyway, you don’t want to get me started on San Francisco politics.

It’s not – I mean, it is politics… But it’s also, that’s just in a way stupidity. If there is a major problem, and you’re turning a blind eye to it, and you are in a position of power to change how that works, or how it does not work… Wow. That’s just the silliness of the world.

Yeah, but I mean…

That’s like with politics all around the world.

Yeah. I mean, politics is just another way of saying “making decisions.”

Yeah, for sure.

And making decisions is hard. There’s no magic wand we can wave to make some of these fears go away. The fears are real. I mean, sometimes they’re out of proportion, or they’re based in – I mean, I don’t know, y’all must have tried to explain some of this stuff to family… I mean, I tried to explain how Waymo works to my mom, and her first response is “I don’t know, I don’t trust it.” And then I say “Well, mom, but within five years, I’m gonna have to take your keys.” And then she’s like “Well, I won’t trust it… But I’ll ride in it anyway.”

Yeah… Given no other options.

Yeah, it’s very difficult to reason about, difficult to explain… Like you said, just making decisions with a large populace is just like - you’re not going to have agreement, so it’s difficult to rally around that.

Even in small populations, right? I mean, Silicon Valley - we’re super-homogenous here, pretty much… And we can’t figure out “Are we going to have AGI in five years, and so none of these discussions matter, because we’re all going to start uploading our brains, or whatever?”

Yeah, that’s been my refrain probably – I probably say this more than Adam brings up Silicon Valley, but…

You brought it up.

…I’ll say it again anyways, because he never stops… It’s amazing to me how divided brilliant minds are on this topic. I mean, you can go from the doomers to the utopians… What’s that, [unintelligible 01:07:24.25] Whatever it is.

I have no idea how you – I think that’s the first time I’ve ever said it out loud. I hope it pissed somebody off.


I’m upset.

So you go from that extreme to that extreme, and you go to the individuals, and you look at their credentials and their histories… And of course, there’s going to be some outliers in there, of whatever, but… Very smart people, very informed, and they are completely on the opposite sides of what they think is going to happen. And I don’t know if you can name a technology that I can remember… I mean, even the Web itself wasn’t so divisive. There were people that were not thinking it was going to explode the way that it did, but they weren’t like “It’s going to destroy humanity.”

[01:08:07.23] So that to me is just interesting. I mean, here we are, we have like massively wild differentiation of opinions… Not like the smart people know one thing, and the dumb people don’t get it. It’s like, it’s pretty smart people and dumb people on both sides of this argument.

Well, some of that has been informed by just the past few years of our tech history, right? I just read a great book called The Victorian Internet. It’s about telegraphy, telegrams, and it’s all about like “Well, they all thought this was gonna save the world.” It’s like “Uhm, actually…” They were like “This is gonna bring about world peace. We’re all going to be able to chat with each other, and so therefore…” And this book was written in ‘99. So it was very much a sort of like “Hey, you all saying that the web is gonna save everybody from everything… Maybe hold your horses a little bit.” And it wasn’t like doomer. I mean, obviously, the telegraph didn’t end the world. And the author wasn’t trying to – I mean, it’s interesting… I think if you wrote the same book now, probably there would be at least some people trying to make it out that the telegraph ended the world. It’s like, hard to prove that one, guys… Right?

How about the Segway? Remember the Segway?


I think that was mostly just hype based on the guy who invented it. But he had a huge amount of hype surrounding the launch of this revolutionary new transportation mechanism. And I remember – I mean, it made mainstream news that this was going to change the world. And he came out and announced it, and everyone was kind of like [unintelligible 01:09:33.21] Yeah, it’s like, “Wait, you revolutionized the way mall cops get around, but that’s about it.”

Well, but that’s such a great example about how innovation is channeled by the stuff that’s already there… Because if we had – I mean, look at what’s happening in… If you go to like Stockholm or Copenhagen, where they have good bike lanes, the grand descendants in the form of all these electric scooters and stuff… Like, those actually are replacing cars, making cities – but if your built environment means you have to go 10, 20, 30 miles to get to the corner store, of course it’s not changing things. So again, to your point of when, where, how… All these things vary a lot.

Well, I would certainly – we were just in Seattle, as Jerod mentioned, for Build, and we got back to the hotel on our scooters, because we Limed around… We walked as well, because we were like “Hey, it’s night. It’s cool. Let’s walk.” There was a couple of times where it’s like “Let’s scoot”, and we scooted, and we got back to the hotel, and in true Dumb and Dumber fashion, I was like “Can we just keep going, Jerod?” And he’s like “Yeah, let’s just keep going.” So we scooted down the hill. We just kept going. We just went on a joyride.

We just scooted around downtown Seattle.

Yeah, it was a lot of fun.

Oh, it’s fun.

If that was an option in my town, I would certainly scoot, as opposed to driving my F250, which does – it houses diesel in its fuel tank to make it go. That’s how it works, just so you know… Which is more expensive, obviously has gases and things that happen as a result… But at the same time, to consume electricity, somewhere, unless it was turbine powered - if it was coal powered - you know, do I know my electricity is green electricity? Or is it renewable electricity? I don’t know those things. But I would certainly choose a different mode of transportation if there was a different option in certain scenarios. And in my local town, you would die on a scooter. Not because of the scooter, but because –

Probably by somebody with an F250.

Right. Maybe. Maybe.


Yeah, most likely.

That’s actually incorrect. I’d bet you it would be – first and foremost, it’d probably be a Tesla, because there’s so many where I’m at. There’s Cybertrucks everywhere… Teslas… You’d probably die from a Tesla speeding, turbo mode or something.

We can all agree that it was a Dodge Charger.

Okay. [laughter]

I cosign that.

[01:12:06.20] Yeah… No, I mean, that’s a – but I’m sure, if you live… We drove a Tesla from Montana to San Francisco a couple of years ago, and we stopped in Eastern Oregon… And I was talking to somebody a year later, I met somebody who lives in that neck of the woods. He’s like “Oh yeah, the one Tesla Charger in all of Eastern Oregon… That’s my grocery store. It’s a 45-minute drive.”

Like, that guy’s not swapping out for a scooter anytime soon. The geography of how he lives is just not compatible… Which is fine. Which is fine.

Yeah. We were looking at a new car recently; we had to drive 40 minutes to the nearest decent mainstream car lot. There’s just not one in my small town. Walmart is not down the road. It’s 30 minutes away from where I’m at. That’s how far into rural I am. So…

Yeah. My mom’s in suburban Miami, and she basically doesn’t do anything in life that’s closer than a mile… And for me, anything further than a mile, like living right in the city –

Just forget it, right?

I mean, it takes planning. It’s like “Oh, we’re gonna use the cargo bike instead of the…” Yeah, and I think we’re gonna see a lot of this with – I mean, it probably won’t be geographic. But different jobs are going to be impacted in such different ways, with all this new tech… And different jobs, different cultures, different languages - it’s all going to be impacted in totally different ways. Maybe Hawaii should be an LLM-free zone. Another free sci-fi story out there, right?

Ooh, I like that.

Are you generative AI? Because you’re really cranking them up…

I’m on fire this morning, guys. And I haven’t even had my coffee yet.

Let’s give you another opportunity then maybe, and I think we can go around the table with this… Let’s see if this is a good idea. Let’s name some positive things that we would like to see happen as a result of what we call the current version of artificial intelligence, and where it may go. You mentioned in the blink of an eye or a finger snap you would Waymo SF and maybe every other city… So that’s an example. I mean, you can expand on how that might actually roll out, and what are other examples of positive impacts of AI, not just the doom and gloom.

So this is a very small petty one, but – look, I have a CS degree. I haven’t written any code in useful anger in 20 years? But I had to – I had to grab a bunch of federal government documents for a project I was working on. But I didn’t have to. Here’s the interesting thing. It was like 700 pages that I wanted, and they were each PDFs that were five to 15 pages long, 700 and some pages worth of them… So I asked ChatGPT “Write me a Python script to download all these”, to summarize each of them and give me the most important points out of each of them. It didn’t matter if it was 100% accurate… I was trying to get the gist of it more than the whole thing. That’s a project that I wouldn’t have even tried to take on without ChatGPT. Maybe if I had an intern, I would have sent an intern to do it, but I wasn’t gonna do it myself. And I think there’s gonna be a lot more personal scripting, personal control of computers in that way, aided by ChatGPT. That might end up being small in the grand scheme of things, but it could also end up being like Excel spreadsheets that the whole world ends up running on Excel spreadsheets, and nobody actually knows that. It could end up running on ChatGPT small scripts… Right? Because it’s one thing if you’re like “Is ChatGPT gonna write the next self-driving car?” Probably not. Too complicated. Too many concepts there.
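The kind of throwaway glue script Luis describes might look something like this rough sketch. Everything here is a placeholder assumption - the URL list, the local file names, the chunk size - and the actual summarization call is only hinted at in a comment, since he mentions handing that part to ChatGPT:

```python
# A sketch of a "download a pile of government PDFs and prep them for
# summarization" script. PDF_URLS is a placeholder; paste real links in.
import urllib.request
from pathlib import Path

PDF_URLS = [
    # ...paste the document links here...
]

def download_all(urls, dest="docs"):
    """Fetch each URL and save it locally; returns the saved paths."""
    out = Path(dest)
    out.mkdir(exist_ok=True)
    paths = []
    for i, url in enumerate(urls):
        path = out / f"doc_{i:03d}.pdf"
        urllib.request.urlretrieve(url, path)  # simple, serial fetch
        paths.append(path)
    return paths

def chunk_text(text, max_chars=12_000):
    """Split extracted text into prompt-sized chunks on paragraph breaks."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if len(current) + len(para) > max_chars and current:
            chunks.append(current)
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current)
    return chunks

# Each chunk would then go to the model with a prompt like
# "Summarize the key points of this excerpt", and the answers
# stitched together -- good enough for getting the gist, as he says.
```

Five-to-fifteen-page PDFs would still need a text-extraction step (a library like `pypdf` is one option) before chunking, but the shape of the job - fetch, split, summarize - is about this simple.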

For now.

But can it help me write this little script that just does a few little things? Hell yeah. Right?

Yeah. I dig that.

And on the big side - again, globally, a million people a year die in car crashes. Let’s cut that down. Right? I think that’s a great question. That’s a great optimist question. I love it.

[01:16:10.17] Well, I think we can always be so negative… And I think we have three people who think about this a lot, and we probably see both the positive and the negative. And there’s certainly positives I can see from – like, I like the idea of a Waymo takeover… Or not so much just Waymo, but the idea of what Waymo offers a city, and a city being designed around a certain traffic pattern that has that. But that’s also like the old way of thinking, in some ways. We have always traveled by car. Is there a different way? Trains are very popular in New York, the subway is very popular… I don’t know the stats, but I’ve gotta imagine those are way safer than driving the New York streets… Because I’ve been on New York streets and they’re crazy. They’re always jam-packed. But we also can’t dig in every city, so you have to be practical.

I do like the idea of automated driving, because I’ve seen some really terrible drivers… People are constantly distracted. You can see somebody like navigating on their phone… I literally saw this lady, she was reading her phone, driving in and out of her own lane, going fast. Like, what is wrong with you? You’ve got children on the streets, you’ve got people who die… Last year my kid’s classmate’s father passed away at a red light, because somebody just jammed right through it being dumb. Right? Those are preventable deaths. You’ve got a little girl, who’s known to me very closely, without a father. And you’ve got to see that new reality. So I’m all for some version of that… But then, you watch “Leave the World Behind.” I don’t know if you’ve seen this movie… That’s another version that might be potentially AI bent, to some degree… I’ll ruin one thing for you. And if you’re gonna watch this movie, stop listening for just about three and a half seconds… Teslas are self-driven to become weapons, let’s just say.

So you’ve got the Waymo idea out there, but then you can weaponize this thing if a nation state or something else takes over the system and uses it against the way it was supposed to be used. And then you’re locked out of it. So you’ve got this autonomous system that is sort of a black box, because we’ve forgotten how to code in 50 years from now, whatever the number is… That’s not the time of this movie, but… Then you have that version of it.

So I’m, like, all for those things - and I lived through this, one of the situations that I just mentioned to you… But then on the other hand, what do we do when somebody else gets a hold of this thing? You’ve got to have security down pat. You cannot have the XZs be in that world whatsoever. You have to have a totally buttoned-down system. And maybe it’s actually AI that buttons down this system. Who the heck knows?

How is this optimistic, though? How is this optimistic at all? You’re like –


Well, that was my response to yours. That was not my positive.

Oh, that wasn’t yours. You were just responding. Oh, okay. Alright.

Well, I want to be for your positive, but then I see this other side, this other glimmer of negativity. It’s like “What do we do then?”

Alright, so tell us your positive one, then. You just doomed and gloomed us.

I think… um, what is my positive one? Waymo in SF. That’s mine. [laughs]

Waymo? That’s Luis’s. That’s not yours. [laughs]

I haven’t thought about it enough yet. You go ahead, Jerod. I’ll think of something, I promise. Go ahead.

Well, I look at it like this… There are many jobs that humans are currently doing at capacities that don’t scale enough. Education is a huge one. We need more educators, we need more equipped educators. The medical profession is another one, where we have doctors who are just dead tired, because they’re working too long, too many hours etc. in high-pressure situations… And so I think these tools to equip educators, specifically around the drudgery of the process of educating - think grading papers, think tooling, how to become a better teacher… Oftentimes you need materials, you need ways of explaining things… And these are all ways that these tools could potentially equip people to do their job better, and with less stress. And probably educate more kids per capita if they are so enabled. So I think that’s exciting.

[01:20:09.29] I see some stuff in the medical profession, although I’m not close to it, where they’re saving hours and hours of time for doctors, specifically around medical record entry, that kind of stuff, data entry… How many folks are out there doing data entry positions still, to this day, that could be better equipped? We’re not trying to replace them, we’re trying to free them from the shackles of this current role, and enable them to do something that’s higher value.

Of course, there will inevitably be some fallout from that, some displacement, which is unfortunate, but I don’t think it can necessarily be mitigated 100%… So people will have to get new skills, new roles etc. in order to kind of realize their potential… But the people who are currently just stressed out, and working way too hard… Dangerous jobs, a lot of very dangerous jobs, where we’d rather lose a robot than a human, in a certain sense… I think these are all relatively optimistic, and I think they’re potentially feasible short-term.

Yeah. Let me add one more movie to the list, because I thought of one while you were talking there… And I was thinking about Prometheus. Have y’all seen Prometheus?

I did… I did not like Prometheus. I felt like – again, my algorithm with movies is I usually end up with a general sense, and then like one or two criticisms. I can’t remember any of the rest of the movie. And so I don’t know why I didn’t like Prometheus. I remember the acting was bad, and the characters kept doing stuff where I was like “There’s no way you would do that. It doesn’t make any sense.” You know, like, nonsensical decisions? I can’t get over them, where I’m like “Nope. No human in the real world would ever make that decision.” And so I kind of wrote it off. But I know this was the prequel to… Aliens?

It was, yes.

And so it’s science fiction… It was Ridley Scott, right?

Ridley Scott, yeah.

So I think I was also very pumped for it, which is why I ultimately was disappointed. But –

Expectations management is a key skill…

Yeah, now that I’ve crapped on it… Yeah.

Well, if you liked the last minute-ish, then you should tune into the Plus Plus version of – was it coming on Friends, Jerod? …this deep dive we did into 1999, basically…

Oh yeah, we have a bonus episode coming out soon, all about movies…

Yeah. Jerod and I unexpectedly went deep on 1999 movies… Which was an interesting year. I’ll leave it at that, but [It’s better.] That being said, I think my positive would be kind of in line with yours, Jerod, which is I think just enabling. And kind of in line with what you said, Luis, which is enabling. I think there’s an enabling factor that AI can do.

Think about something as simple as repairing your dishwasher, or your washer and dryer. It’s got a manual. What if an LLM was attached to that manual, and you can ask it questions? What voltage does the regulator operate at? What wire needs to go where? Versus the manual being archaic, and like largely just inaccessible… What if you had things like that, that you can just tap into your everyday life and be enabled? Not so much DIY, but there’s so many people who can build their own backyard deck if they wanted to. But they don’t, because they don’t have a dad, or somebody who could shepherd them through the process. What if you had something that could shepherd you through the process, to some degree, shape or form, with a washer fix, or an air filter change? Simple things in life I think could be leveled up just by having better access to info that isn’t just like a Reddit thread that’s got tons of opinions, but something that’s a bit more unbiased, I suppose, that’s straightforward to the answer. I’d like that. I would use that.

Now I sort of want to ask one of the latest GPTs for their step-by-step instructions to building a deck.

Oh, yeah.

That’s gonna miss some awesome steps in there…

It would certainly tell you different – so I’ve done this enough to know… It would tell you different platforms you could build on. Like, would you use four-by-four? Six-by-six? Would you use – you know, various different frameworks you can leverage to make it… How long should your nails be? Should they be galvanized? Is it pressure-treated lumber? All these things. Will it be near water? So all these things you’d need to know, it would tell you all those things. You’d still have to go make the decision, but that’s the current state of that now. It’ll tell you that today. I mean, it’s pretty crazy.

Like, even with building stuff like a Linux box… It’ll tell you all the things about different CPUs, different RAM options. I mean, you can build a Linux box on your own with little to no knowledge, which is what I’ve done in the last couple years… Some on my own, with lots of searches, but then halfway through my journey of doing that it got enhanced, with ChatGPT being accessible. Now I know a ton about Linux that I just never knew before, because all the information was widespread and opinion-based. It wasn’t really – it wasn’t centralized, in a way; it wasn’t freeform and accessible to have a conversation with it…

I think that’s the uplift – to your note, Jerod, with teaching, I think that’s super-awesome. I think the idea of Waymo and the idea of self-driving has promise. I just think if we actually deploy it at scale, it needs to be locked down, and it needs to be sanctioned in some way, shape or form to have the utmost highest security in whatever way we can… But yeah, I think from this we should come back at some point off the mics, and write some fanfiction. That’d be cool.

Ooh. Off mic fanfic.

That’d be fun.

Sounds good. Luis, let’s close with Upstream. Tell us about Upstream. We have June 5th, right? It’s coming right up as a one-day virtual event, coming up as we record a week away, roughly… And as it ships, three or four days away. So what’s it about this year, and what are you talking about?

So you can find more on the website at All the new TLDs… Very fun.

So And it’s a one-day celebration of open source where we try to bring together both maintainers and executives. There’s a lot of events for open source execs these days, a lot of events for sort of community grassroots stuff… Very few that actually try to bring them together in a coherent way. So that’s what we’ve been trying to do with Upstream for the past four years now, I think. And this year’s theme is “Unusual solutions to the usual problems.” Your listeners certainly have a good grasp of what the usual problems are in open source… The XZs of the world… We all talked about XZ last year… We had to put a ban on the XKCD Nebraska comic, because otherwise every single speaker would have used it…

So many.

Yeah. This year we’ve commissioned some new comics. You’ll see some of those.

So we’ll be talking – I just did a great panel recording with two Germans. One who runs their Sovereign Tech Fund, and so works in getting federal government money to open source maintainers as an infrastructure project… Which shouldn’t be that unusual. I mean, in some sense, highways - we’ve been talking about cars all this time. But for software, pretty unusual.

On the flip side, government regulation - we’ll be talking some about that. Again, that’s a pretty, for a lot of the world, a lot of industries, not unusual, but for software, that’s a pretty unusual – regulation is a pretty unusual solution to the safety problem. So we’ll be talking about that. We have a maintainer panel, we’ll be talking with execs from a couple of big companies… I’ll also be interviewing a professor from Harvard Business School about the value of open source… All online, streams live for the first time, with live chat… So I and a lot of the other speakers will be in chat, so you can ask us during our pre-recorded talks what we think of things, and ask follow-up questions… And then we’ll make it available in the few days after that from if you missed it next week.

[01:27:56.19] Yeah. Big fan of the new TLD. I think I got a preview of one of these comics that you mentioned. [unintelligible 01:28:01.13], the commissioned ones that you were talking about?

Yeah, yeah.

I saw that on – Chris Grahams, a friend of ours, also at Tidelift… I’ll link it up in the show notes, I suppose, but… It’s an open source maintainer on an island, saying “Please help”, and all that happens is a plane comes by and just drops a bunch of issues on their head… Which is not exactly the help they were looking for.

And the plane has a banner that says “We love OSS.”

That’s true. I should have mentioned that. Yeah, “We love open source. Issues…!”

That’s adding insult to injury.

And actually, it has Corporation on the plane, too. I’m looking into the details…


Yeah, there’s some – I mean, it’s so hard not to get… We’ve always tried at Tidelift, and we tried at our events. I mean, these can’t be complaint fests. If you do that, it’s no fun for anybody. So we try to make them, as we’ve been trying to do, Adam, positive, constructive…

But boy - yeah, some days you just want to be like “Come on…!”

“Let’s be positive!”

“Get on board!” There’s just so much – well, how did we get to XZ? It’s like, well, we’ve been telling you for years that these people are going to burn out… And then they did, and you’re like “Oh, no! Horrors!” It’s like, well, you know, maybe we should try to do something about that collectively. And it’s a real collective action problem for the industry… And that’s part of how I’ll be talking about it in my opening talk at Upstream, is this collective action problem that we have.

We’ll link up the post you wrote, “Paying maintainers: the HOWTO”, because we got compared – I think one of our… I think our Adam Jacob conversation, Jerod, got compared to this… I think some of us were right, and some of us were wrong… I don’t know, we were just talking on a podcast, obviously.

I mean, I love Adam, so I guess I’ve gotta go back and listen to that one.

I think it was that one, that we got some comments where they compared the sentiment in that conversation to what you wrote, and how we were not in line with the same thing, basically. I can’t recall which, but I think it might have been Slack… Do you recall this, Jerod, the sentiment?

No? Okay. I could be hallucinating, honestly. It could be the human version of a hallucination at this point.

Humans also hallucinate from time to time.

Yeah. We misremember, we misalign… It’s like “Oh, that wasn’t actually the Adam Jacob conversation…”

Well, I mean, for those who haven’t read it yet, my post was simply –

Yeah, I was gonna ask you to summarize it, if you could. Just give us a TL;DR.

Yeah, in the wake of XZ, some people were like “Well, we tried to pay maintainers and it didn’t work.” And it’s like, well… So we wrote up – because we’ve never actually written up before how is it that we pay maintainers, right? In fairly good detail. And it works, right? We pay out quite a bit of money every month to maintainers, from our corporate customers, to work on things that our corporate customers use. That said, there are different approaches to paying people. There are different types of communities. Paying a solo maintainer is very different from paying the Kubernetes project. That’s a very different beast.

I mean, this is just one of the things that is recurring… I’m sure this must come up in the podcast all the time, that we tend to talk about open source as if it’s like one thing, when in fact at this point open source is so successful that it is many different things. But it’s easier for us to talk about it if it’s just one thing, and so we often make mistakes of “Well, it’s impossible to pay open source maintainers, because I tried this one form of payment to one set of maintainers…” It’s like “Well, yeah, no s**t that that one doesn’t work.”

So I don’t know, I’m curious where Adam said [unintelligible 01:31:40.09] I mean, it’s not a magic – the blog post is about how to pay maintainers. It does not claim that this is therefore a magic wand, and that these projects will always be secure for the rest of time.

For sure.

People will still burn out, people will still have challenges. But we think we’ve got at least part of the solution at Tidelift.

[01:32:01.21] Well, I think one thing that was revealing – and we have known of Tidelift and have been adjacent for many years, and worked together in some cases over the years… We’ve had you on various podcasts, we’ve had your CEO on our podcast before… And I think last year – we’ve talked to Jordan Harband before on podcast, but we actually met him face to face… At least I did. I don’t know if that was the first time you met him, Jerod, but it was last year at All Things Open… And he could not stop singing the praises of Tidelift for him as a maintainer.

So I think what you all could do better or more of - I don’t know how well you do this, because I’m not like in every single thread you’re in… But I think what he had done, to me, was reshaped – I already knew what Tidelift was. I already knew what your mission was. But there was a cementing of like a boots on the ground individual that we respect and have talked to, that’s doing the work. And they’re like “I’ve got various forms of payments, but I love the way Tidelift helps me. One of my biggest streams of revenue is from Tidelift.” I think that was on a podcast too, so it’s already in transcript form… But that changed my perspective on Tidelift, even though I knew who you were already; even though I have respect for you and everyone else who’s involved in Tidelift, it changed that perspective, because you saw people that have boots on the ground, that have teetered and shared how they’ve teetered on the line of burnout or not. And obviously, we do not want people to burn out. Back to what you said before, Jerod, I think it’s an enabler where you sort of force-multiply somebody doing something that they’ve got too much on their plate, and artificial intelligence might be able to help them take something from an hour to 10 minutes, that kind of thing.

Or in the case of Jordan, having an organization have his back to let him do what he does best, which is be inventive in open source, and not be bogged down by the minutiae… And literally get paid to do it, because he’s not going to stop. He wants to keep doing this common good for the world. But if he can’t sustain his life and his family, then it’s not going to happen. And so we have to find ways to make that happen. Money is obviously one of the biggest ways to financially sustain somebody, because that’s what it’s called, financial sustainability… It literally is money. But he could not stop singing your praises, and I was so proud of you all for that. But then also, it reignited a - I guess a curiosity from my standpoint on what Tidelift is, and what you’re doing for the world.

Well, we’ll have Jordan and several other maintainers on a panel on Upstream… So if any of your listeners are interested in hearing more about that and how we work with maintainers, that will be definitely a topic there… Though it’s mostly not a pitch for us. It will be more – I think the official title is “State of the maintainers.” So you’ll hear I suspect about things like what do these folks think their risk is of becoming the next XZ, or the next log4j.

And like you say, Adam – I mean, this is one of the things that when you talk to somebody like Jordan or one of our other maintainers who we partner with, there’s a lot of joy and love for what we do… But of course, the people who write the check are often at Linux Foundation events, they’re talking with other execs, they’re talking with the leaders of Kubernetes. And that’s not a bad thing, but it is a challenge for us that these folks in the middle, who are numerically the – I was at a Linux Foundation event a couple years ago, and somebody says “Yeah, I’m the maintainer of a small project. There’s only 15 of us.” I’m like “You are so in the fat head of –”, you know, the long tail of maintainers is one-maintainer projects, with an occasional patch… And that’s not necessarily a good thing, but it is our reality right now in open source. And getting folks to acknowledge and grapple with that has been an uphill slog for us at Tidelift. So it’s great to hear positive words from you, and it’s always good for me to talk to Jordan. I saw him just a couple of weeks ago at RSA, so…

We’ll be tuning in, for sure.

The episode I was mentioning was episode 563, and it was lovingly called “The way of open source.” It was an anthology that we did at All Things Open. It included Matthew Sanabria, ex-engineer at HashiCorp… Nithya Ruff, I believe Chief Open Source Officer and head of the Open Source Programs Office at Amazon. And then obviously, I mentioned Jordan Harband. So he was there representing the open source maintainer at large, with dependencies in most JavaScript applications out there. So obviously, somebody who’s got like three different angles into the way of open source - I think we captured that pretty well, so we’ll link it up in the show notes. And if you haven’t listened, Luis, you should check it out.

And Nithya is always worth listening to. So yeah.

For sure. Good stuff. next week. We’ll be tuning in. Hopefully, our listeners check it out as well. Luis, it’s always a blast, whether you’re telling us what’s happening, or prognosticating on what might happen next, or might not happen… It’s always fun for me to talk with you.

Yeah, it’s always fun for me to talk with you, too. By the time we talk next, I suspect we’ll have a lot of actual case outcomes. We’re still in this very early phase for some of these things, and there will of course always be new news from open source software security land. So…

Yeah. I was gonna ask you about the GitHub Copilot litigation, but it looks like it’s just kind of ongoing. There’s nothing to talk about there.

Yeah, it’s still early days. I mean, there’s some stuff to talk about, but we’ll know a lot more in the coming months, I suspect. So…

Awesome. We’ll have you back in six to eight months and talk about what’s changed since now.

Sounds like a plan. We’ll do Happy New Year 2025… I believe that’s coming already.

Oh, my gosh. 2025. Alright…

The year of the Linux desktop, and/or AI. Alright…

Bye, friends!


Our transcripts are open source on GitHub. Improvements are welcome. 💚
