Author, journalist, travel writer & software engineer Jon Evans joins us to weigh in on the cultural history (and present-day sentiment) of AI doom. Along the way, we talk plausible Sci-Fi, ultrasound drug delivery, the maybe-evolving laws of physics & even weirder stuff.
Featuring
Sponsors
Tailscale – Simple, secure networks for teams of any scale. Built on WireGuard.
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com
Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.
Notes & Links
Chapters
Chapter Number | Chapter Start Time | Chapter Title | Chapter Duration
1 | 00:00 | Let's talk! | 00:38 |
2 | 00:38 | Uncorrected bound manuscripts | 01:21 |
3 | 01:58 | GitHub Arctic Code Vault | 02:52 |
4 | 04:50 | Never gonna have that in my bio | 01:09 |
5 | 05:59 | Market for predicting the future | 01:58 |
6 | 07:57 | The Simpsons predictions | 04:24 |
7 | 12:21 | Cultural history of AI doom | 02:59 |
8 | 15:20 | Sounds & the pyramids of Egypt | 00:46 |
9 | 16:07 | Ultrasound drug delivery | 02:55
10 | 19:01 | We can't build the pyramids? | 02:50 |
11 | 21:52 | Physics evolving? | 00:44 |
12 | 22:36 | Gravity was different | 00:41 |
13 | 23:17 | Greg Egan | 00:23 |
14 | 23:40 | Where do your ideas come from? | 02:03 |
15 | 25:47 | Sponsor: Tailscale | 04:28 |
16 | 30:34 | Adam's idea for Jon | 00:45 |
17 | 31:18 | Enter the Bobiverse | 02:44 |
18 | 34:02 | Fiction for software devs | 01:02 |
19 | 35:04 | Nick Jones | 00:18 |
20 | 35:22 | The Adolescence of P-1 | 00:37
21 | 35:59 | Eliezer Yudkowsky | 00:28 |
22 | 36:27 | Humanity & AI | 01:04 |
23 | 37:31 | The future is going to be weird | 00:10 |
24 | 37:41 | World of change | 01:12 |
25 | 38:53 | Your software lenses | 00:48 |
26 | 39:40 | Balancing ideas | 01:51 |
27 | 41:31 | AI doom scale | 01:14 |
28 | 42:46 | What is doom? | 01:22 |
29 | 44:07 | Welcoming our AI overlords | 01:34 |
30 | 45:41 | Symbiotic AI | 02:06 |
31 | 47:47 | AI religion | 01:40 |
32 | 49:26 | Jerod thinks AI has plateaued | 03:59 |
33 | 53:25 | Adam's use of ChatGPT | 04:19 |
34 | 57:44 | The open letter signatories | 02:11 |
35 | 59:54 | AI user manuals | 00:58 |
36 | 1:00:53 | What are they so afraid of? | 00:33 |
37 | 1:01:27 | Calculators for words | 01:42 |
38 | 1:03:08 | More of Adam's ChatGPT tips | 01:27 |
39 | 1:04:36 | OpenAI just wants to train | 00:20 |
40 | 1:04:56 | We're not doomers | 00:14 |
41 | 1:05:10 | The ultimate 10x-er | 01:10 |
42 | 1:06:20 | One of the last human written books | 00:34 |
43 | 1:06:54 | Having an AI editor | 02:21 |
44 | 1:09:15 | Not good, but okay | 02:12 |
45 | 1:11:27 | Humans are jagged | 01:03 |
46 | 1:12:30 | More books? | 00:30 |
47 | 1:13:00 | Adam's future book? | 00:42 |
48 | 1:13:42 | How many readers is success? | 01:11 |
49 | 1:14:53 | Author's commitment | 02:35 |
50 | 1:17:28 | How would you use AI? | 02:51 |
51 | 1:20:20 | Exadelic 2? | 00:40 |
52 | 1:21:00 | Audiobook? | 00:33 |
53 | 1:21:33 | The Exadelic hook | 00:30 |
54 | 1:22:03 | Bye friends! | 00:08 |
55 | 1:22:15 | Coming up next | 00:59 |
Transcript
Play the audio to listen along while you enjoy the transcript. 🎧
Kick in whenever you guys want. Exadelic, that’s how you pronounce it, Adam.
I’ve got my copy right here.
Very, very limited edition. I think there are only about 50 of those.
Yeah, this was cool. I remember being like “This isn’t even real”, and I was like “Wait, that makes it even more unique.” Uncorrected bound manuscript… I’ve gotta admit, Jon, as I’m reading it, I’m wondering, “Has he changed any of this prose?” Because I wonder how final is this copy you gave us. Because I was like “That’s an interesting way to say it. I wonder if it’s still in the –”, you know… And I keep asking myself that.
Well, obviously you’ll have to read it again, the final iteration…
Yeah. Are there any edits between this uncorrected bound manuscript and the official one that launches?
There are, but they’re not major ones. It’s very, very lightly edited and polished [unintelligible 00:01:20.07] But it’s like 99% the same.
Okay.
Yeah.
So we’re talking of course about Exadelic, Jon’s new book, sci-fi craziness; that’s how I describe it. In stores today, I guess. Well, congrats, Jon. You shipped the book to the world. That’s cool.
Yeah. You made a thing.
Thank you. It feels very weird and good to have it finally out there.
And you shipped to us an uncorrected bound manuscript that we’re talking about, maybe six months or so ago… Just as a nicety. You said “Hey, we don’t have to talk about the book, but I figured you guys might like a copy.” We do appreciate that, and we’re happy to have you back on the show. So it’s been a while… In fact, Adam, you have not met Jon previously, because –
No.
…I interviewed Jon a couple of years ago about GitHub Arctic Code Vault, which I think, Adam, is an episode that you –
I coordinated that, and I couldn’t make it.
Yeah, you coordinated it.
I was so bummed. Yeah, I was like “Geez, I want to talk about the Svalbard Arctic Code Vault, and all the fun things…” And I mean, how cool is that, to think that you’ve made a contribution to humanity to some degree, that the place that is, for the moment, the epicenter of open source code and so much of the software contributed to the world archives it all with this insane idea. I think that’s just kind of cool. And then hopefully, when we’re truly invaded, not just when the government says we’re invaded, or seemingly invaded, and humanity is gone, they can pull up the code that was just terrible, for the most part.
Right.
Because everybody thinks that their past code is terrible… That’s why I say that.
Oh, yeah.
Yeah, we did a video for YouTube, it had like a million views… And the most common comment was “Please don’t have included my Hello World.” [laughter] But no, we swept it all up.
And I think there was some sort of threshold of what the contribution was to warrant being included… Wasn’t there something like that? Like, was it a date? Was it just a date, or was it like criteria?
There was a date, but then there was also criteria, yeah.
Yeah. It had to have been active. Either you had to have a certain number of stars, or you had to have made some commit to it in the last year.
Right.
But everything else, we swept up, you know, for the future.
Which, as I mentioned years ago in that interview Adam, is that our transcripts, which are markdown-formatted plain text on GitHub, are in this Arctic Code Vault. So not only will our bad code be out there, but our bad questions and comments…
Our bad words, yes… [laughter] It’s such a surreal thing to think about, isn’t it, Jerod? To think that at some point in the very near future, as we speak into these microphones, the words I’m literally saying right now are being transcribed into text, into markdown, this markdown repository you’re talking about, and it’s in a GitHub repo that has infinite history, essentially, that you can go back to… Maybe you can scrub that history if you’d like to, but it’s there; it’s there, right? And then eventually, it’s somewhere else in the world. It’s translated ideas, it’s stuck in a vault somewhere, or whatever.
Assuming that Jon and his team did a sufficient job with their archive strategy… Because you know, forever is a long time in data archiving, isn’t it, Jon?
Oh, yeah. I mean, we were theoretically aiming for 1000 years, and we started thinking [unintelligible 00:04:28.12] There could be some sort of tragic event and we could go back to the Stone Age, we could have like uplifted ourselves to being AI gods or whatever by then. We really have no idea.
We don’t.
But we sort of plan and hope that most eventualities will include Svalbard still being there.
Yeah. Well, one thing we can say about that particular aspect of your life, Jon, is it creates an excellent biography. So I was reading your little bio blurb on your book, just thinking like “How do you introduce this guy? Who is he? What does he do?” and like how many people can write this in their bio? “Was the initial technical architect of bookshop.org, and is a founding director of the GitHub Archive Program, preserving the world’s open source software in a permafrost vault beneath an Arctic mountain for 1,000 years.” I mean, I can’t write that in my bio, Jon. I’m never gonna have that in my bio. I mean, that’s so cool.
I mean, I’m not gonna like, I kind of chuckle to myself [unintelligible 00:05:24.18] “Yeah, as you do that.” It was an extremely weird opportunity that doesn’t come to most people. I’m very grateful for that one. And I think I did a reasonably good job, yeah.
I’m not gonna downplay Jon by any means, Jerod, but he also has something you don’t have too, and that you and I also get to share.
What’s that?
It’s that in our bio we can say that we produced the world’s greatest podcast.
Oh… Alright. Well, I guess you can just say whatever you want in your bio. Is that your point? [laughter]
I’m pretty sure there’s no bio police that comes along, so…
[05:55] It’s my bio and I’ll say what I want to. That’s funny. So catch me up then, Jon, because we haven’t spoken much. Obviously, you’ve written a book in the meantime. You were an author prior. I remember us talking about some of the novels that you had written, and what you were up to… But how did you go from that? That was 2020, I believe, when it was all said and done, when it was all entered in…
Yup.
And then what have you done since then? Have you just been writing this book? Have you taken up work elsewhere? What’s your life been like?
I have actually, actually. Actually, that’s kind of a segue from the Archive Program to the book, in that I was professionally thinking about the future of humanity, which somehow crystallized into me writing a weird science fiction novel about the future of humanity. But I have also – I took an engineering job, because I had kind of stopped coding. As a CTO and then program director I hadn’t written much code for a long time. You can see a big gap in my GitHub green visible map… So I took an engineering job at a company called Metaculus, which is extremely weird and science-fictional itself. It’s a platform for predicting the future. That’s where I’m working now.
Hm… So what am I going to say here in a second?
A platform for predicting the future… Wow.
Yeah, Metaculus.com.
What are you trying to predict?
Well, there are a huge variety of questions, everything from, whatever, the World Cup, to “Are the robots gonna kill us all?” Anyone can create a question, and then people go on and make predictions. And the theory is that if enough people predict the future, strangely, it seems, studies show that our errors kind of cancel out… So in a group of people we’re actually much better at making predictions than any individual person, even if the individual person is an expert, most of the time.
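(A minimal sketch of the error-cancellation effect Jon describes, assuming independent forecasters whose estimates are noisy but unbiased; all numbers here are illustrative and nothing is from Metaculus itself.)

```python
import random

TRUE_PROB = 0.30  # the "real" probability of some event, unknown to the forecasters

def noisy_forecast(noise: float = 0.15) -> float:
    """One forecaster's estimate: the truth plus independent noise, clamped to [0, 1]."""
    return min(1.0, max(0.0, TRUE_PROB + random.uniform(-noise, noise)))

def crowd_forecast(n: int) -> float:
    """Aggregate by simple averaging; errors in opposite directions tend to cancel."""
    return sum(noisy_forecast() for _ in range(n)) / n

random.seed(42)
for n in (1, 10, 100, 1000):
    # The average error shrinks as the crowd grows, roughly like 1/sqrt(n)
    # for independent, unbiased forecasters.
    print(f"{n:>5} forecasters -> error {abs(crowd_forecast(n) - TRUE_PROB):.3f}")
```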
Wow.
Is it a marketplace? Is it a place where you buy and sell predictions, or place bets? Or what’s the interaction like?
No, it’s just a pure prediction, doing it for the love and [unintelligible 00:07:35.08] and the recognition.
Oh, I see.
I mean, marketplaces have their place, but there’s also – there’s weird failure modes, and people hedging, and people bidding against other people, instead of against ideas… So yeah, there’s a place for both of those.
Which is fascinating stuff. I’ve looked a little bit into prediction markets, and I’ve always been like “Well, this is just gambling.” But a very interesting form of gambling. [laughter]
Yeah, that is interesting, because people on podcasts make predictions all the time… You know, to come back home a little bit.
Right. But it’s always just one person, and they’re usually wrong.
The Easy button can be just to get whoever is involved in producing the Simpsons to just go on there.
Yeah, I mean, that’s kind of eerie, right? They clearly have a channel directly to the future.
There was something with that… I’d seen a TikTok and I can’t recall exactly, so I’m gonna paraphrase what I recall, I think it said… that they compared The Simpsons, I believe, to Star Trek, which both equally predicted some versions of the future… And I think there was something with the number of episodes for each, versus the number of clearly correct predictions… Are you guys familiar with this, before I go further?
I’ve not seen it.
Well, not so much the video, but the idea that they say the Simpsons, in many ways, just potentially might be time travelers of sorts, or have some version, some eye into the future, because it predicted the future to almost the detail, so many times that it’s uncanny to think that they’ve just – it’s a phenomenon that they’ve done it. Or it’s not a phenomenon, and they have access to knowledge of the future.
So the other interpretation is that our future is so hilariously weird that comedians are better at predicting it than scientists.
Right. I’m with that one. Satire – given enough satire, eventually the future will map on top of some of that satire.
I mean, the degree though to which they’ve been accurate is the scary part. It’s not so much roughly accurate, it’s pretty much accurate, in like several cases. And unfortunately, I’m not such a scholar in The Simpsons that I’ve got this list that I could describe to you, but this is what I’ve heard. So this is secondhand knowledge to a degree, but I’ve heard of it, and I believe the people – there’s enough people that have verified this as accurate, that they’ve accurately predicted the future.
I have definitely seen on Twitter, or X, or whatever you call it these days, people saying “According to the prophecies of the second volume of The Simpsons”, and then some like eerily specific, accurate [unintelligible 00:09:57.05]
Yeah. Well, I do think the law of large numbers comes into effect here… And the fact that they have –
20 seasons…
[10:08] More than 20 years. They have, I think, over 750 episodes. And then if you think, “Okay, how many “predictions of events” will there be per episode?” Hundreds of things they come up with in a 22-minute time period, that could be true in some sort of distant future… So that’s just large numbers. And I think if you have enough numbers, you’re gonna hit on a few. And it’s better than Nostradamus, because his stuff is all very vague, and like interpretable… But at least with The Simpsons, it is like you said, Adam, it is like a very specific thing that happens. And it’s not like an interpretation of the thing, it’s like no, that’s literally what happened; or it’s off by a skew. So it is pretty impressive, but it is just like – what’s impressive to me is their large numbers. I mean, it’s amazing, the ability to sustain for that long.
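(A rough back-of-envelope of that law-of-large-numbers point; the episode count comes from the conversation, while the per-episode gag count and hit probability are purely illustrative assumptions.)

```python
# Back-of-envelope: with enough throwaway gags, a handful of "hits" are expected
# even if each individual gag has only a tiny chance of later matching reality.
episodes = 750          # figure mentioned above
gags_per_episode = 100  # assumption: candidate "predictions" per episode
p_hit = 0.0001          # assumption: chance any single gag comes true

total_gags = episodes * gags_per_episode
expected_hits = total_gags * p_hit
print(f"{total_gags} gags -> about {expected_hits:.1f} expected hits")  # 75000 gags -> about 7.5 expected hits
```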
Yeah. The most sustained one they had though was Trump coming down the escalator, being president. That was – nobody predicted that, really. And that was like the meme… It was a predicted meme, essentially; it turned into a meme, but it was predicted.
Maybe go back to when Jesse ‘The Body’ Ventura became governor of - was it Minnesota? …and then you’re thinking “Well, what’s more absurd than that?” Well, it’s like Donald Trump is president, right? That’s a trendline, perhaps. And then you go there.
There’s a bit, I don’t know if you got to it in Exadelic the book, where [unintelligible 00:11:21.00] goes back in time to 2003 and is like “I think I remember the future, but it’s a future in which Arnold Schwarzenegger is governor and Donald Trump is president. Is that a real future? Am I hallucinating this? This doesn’t seem very likely when I think of it…”
[laughs] Right?
Yeah…
Well, I haven’t gotten far enough, I guess, because I haven’t hit that bit… So you’re spoilering it on me.
Apologies… It’s relatively early.
I mean, so far in my experience in this book - and I’m not super-far into it - it’s just like craziness begets craziness. Where I feel like I am is in the first third of the Matrix, before we figure out what the plot actually is, and you’re just kind of along for a ride, and you’re wondering “When’s it all gonna make sense?” But I also read the quote on the back, the review of [unintelligible 00:12:01.10] version, and it sounds like maybe it never makes sense. Like, it’s truly great, but it’s just truly weird, and it never gets less weird, is what one of the quotes on the back says. So…
I promise you, it all makes sense in the end. Actually, that quote does say it all makes sense in the end.
Alright, good. I don’t want to mischaracterize. I haven’t gotten to the part that makes sense yet, but I’m sure it all will. In addition to this, you’ve been working on some nonfiction, you said. Some writing about our weird past, and maybe what it’s going to be for our weird future. Do you want to talk about some - what you called a cultural history of AI doom… I’d love to hear your thoughts on this. There’s like 1990s mailing lists… It opens up into lots of subcategories. Where do we go here, Jon?
Totally. I mean, I guess we can start with – so [unintelligible 00:12:43.14] sort of the very first science fiction novel was actually a novel of AI doom, believe it or not.
Really?
Yeah. So this goes back to the early 1800s. There’s a giant volcanic explosion in Indonesia. 1815. 1816, the weather is terrible across the Northern Hemisphere. There’s droughts, crops fail… And a bunch of people at this mansion in Switzerland, extremely rich and privileged and weird people, have a terrible holiday. This includes Lord Byron, who’s like the Kanye West of his day. Super-controversial. His daughter grew up to be the world’s first computer programmer, Ada. [13:18] Percy Shelley, whose poem Ozymandias you probably read in high school… Anyways, Byron challenges them all to write the scariest story… And the most culturally significant person who gets this challenge is Shelley’s 19-year-old girlfriend, Mary Godwin, who writes Frankenstein as a result. And everyone knows Frankenstein, right? The guy [unintelligible 00:13:37.23] That’s not actually how the book works, though. In the book, Frankenstein is this brilliant creature that teaches itself to read, teaches itself languages, invents new things… It is basically an artificial general intelligence, that the other characters are concerned is going to reproduce and take over the world. So in the early 1800s, we were already worried about AGI and the AI doom, and the robots killing us all.
[14:13] What?!
Yes, that is correct.
Was he a monster though, or a robot? What was he in the book?
He was like an artificial creature stitched together from other parts. So not a robot as we understood it, but I mean, we barely had science back then, so…
Right.
Well, the famous phrase “It’s alive!” came from Dr. Frankenstein.
Oh, yeah.
Of course, yeah.
Screaming it, of course… I’m not gonna – I can if you want me to. I can scream that.
[laughs] Please. Please do. We’re here for it.
Yeah? Okay. “IT’S ALIVE! IT’S ALIVE!!!”
Okay, I was waiting for the second one to drop. That was good.
Well, yeah, he crescendos with it. I think he says it twice.
He does.
I mean, he’s excited, right? He had stitched together body parts, electrified to some degree… I know the story, I’m not specific on the details, but I think he is stitched together and electrified. I think that was kind of like the thing. And we are electrochemical beings, so that makes sense, that you would initiate life through power, electricity, you know…
Yeah. And this was just the era of the 1800s, when they were just figuring that out, right? Electrifying frogs’ legs and seeing that they twitched when you ran a current through them. They were totally freaked out by that.
Totally. Yeah. Man, this is such a weird space… Between electricity and sound, have you heard of like the stuff around sound even? There’s a lot of interesting stuff around sound. But you can produce sound waves and make shapes, and make all these things… And that’s how they suggest they used some sound technology to move all the stones for the pyramids into place with accuracy. It would have taken – they had like a small margin of window to do certain things, and between electricity and the unique things you could do with that and sound… Kind of like modern Stone Age, because we don’t know it all, how that works, and so many of us are just removed from the science of this stuff that it seems science fiction, but it’s quite possible.
Have you heard about ultrasound drug delivery? This is kind of a new thing.
No.
Ultrasound drug delivery.
Yeah, it’s kind of amazing. So if they want to deliver drugs to your brain - like, it’s hard to get drugs into the brain. There’s the whole blood brain barrier…
You can’t, yeah.
Right. So their solution is just to inject you with all these tiny, tiny little bubbles that go everywhere in your body. And then if you apply the right frequency of ultrasound, then the bubbles break, and the drug inside them gets released out. So you can very specifically micro-target drug delivery anywhere in the body. This is a relatively new thing.
So you put the drug into a bubble, and multiple bubbles, and you distribute the bubbles to different areas of your body, and then you make the bubbles pop?
That’s correct.
That sounds amazing… ly weird. How do you target which bubble should pop? I have so many questions.
You just – given the area, you aim the ultrasound at a particular area.
Oh, I see; you target with the sound waves directly into that area.
Exactly.
So the bubbles would spread everywhere, even to your brain, and then you put the sound waves into your brain and it pops right there?
Precisely.
That’s brilliant.
That’s interesting, because there is an ultrasound that women get when they’re pregnant, and they have a child; they get an ultrasound to look at the baby. It’s kind of the same thing, right? Like, it’s sending sound, or some sort of thing that makes an image based upon – I don’t know how ultrasounds work. I’m just roughing it based upon what you’ve just described. [laughter]
This is fun, Adam. Please, keep guessing.
But the word ultrasound is in there, so I’m assuming they’re connected, and it’s plausible.
Yes. I assume the bubble-popping ultrasound is the higher energy, but I could be wrong. I don’t actually know.
I mean, I don’t have the ability to pull up my personal archive here of what this thing is called, and I’m gonna be so upset later… But this sound stuff is legitimate. Like, they do some really unique things with sound. Like, when you pay attention to that spectrum of people uncovering this knowledge and this experimentation, it’s just, you cannot – it’s like science fiction. It’s so wild what’s possible with sound.
[18:03] So the other thing I’ve heard about ultrasound is people are speculating - you know, you get an ultrasound image, and it’s very hazy, and you need an expert to interpret it… But they’re talking about if you can get an AI to clean that up and turn it into like something movie-quality, everyone could have their own personal ultrasound. And if you’re like “Oh, I feel weird today. I think I’ll inspect the inside of my body by aiming the ultrasound there and having the AI show me exactly what’s going on, to see if there’s anything weird going on.” Which is a little disturbing, honestly, if you’re not in the medical profession… But within the bounds of possibility.
Well, according to Egyptfwd.org, a study shows the ancient Egyptians used sound waves in building pyramids. So if that headline is anything to be believed, then Adam’s sentence is also something to be believed.
I mean, I’m sure they shouted at each other a lot.
[laughs] Yeah, I mean, that’s how work gets done, isn’t it? We use sound waves all the time to get work done around here.
“No, man, the Sphinx goes over there. Look at the plans!” [laughter]
They said that the window to which they had to construct these pyramids, to do it in the timeframe that they suggest they did it, where they had to cut and move these large blocks - I don’t know if they’re granite, or what the heck they’re made of; sandstone, or something just unimaginable… They’re just so big. The degree of accuracy of the cuts, when they had to move them into place and construct these things, the margin of error was within like minutes. So the accuracy to which they built them in the time they suggest they built them is just – you have to think, like, how in the world could they do it? Because even in modern technology, we cannot replicate how to build such constructions. Like, it just hasn’t been done.
We couldn’t build the Pyramids of Giza today, is that what you’re saying? We couldn’t build them?
Like, all the pyramids. There’s pyramids throughout the world.
I know. But those in particular are the ones in the website I just referenced.
Well, sure. Let’s use the Giza ones then.
So you’re saying that today’s technology and engineering couldn’t create those?
Yeah, not the same way, no.
Not the same way, or to the same – could they fashion the same product?
They can’t figure it out. There’s pictures of like large cranes that should carry lots of weight, but like topple over trying to pick those kinds of stones up. That’s how big they are.
Okay. What do you think, Jon?
Those are crazy old. I was just picking sort of a tangent, but – we know things are old, but part of the archive program, thinking in thousand-year stints, I started thinking about just how old things are. So 1000 years is a very long time, right? Ancient ruins like Great Zimbabwe, Angkor, they had not even been built yet 1000 years ago. The pyramids are much, much older than that. When Herodotus, the ancient Greek, went to visit the pyramids, they were as old to him as he is to us today. They’re four thousand years old, which is insane. How did they do anything of that skill 4,000 years ago?
Maybe they were quite a bit more advanced than we give credit to.
I mean, they were as smart as us, right? There were a lot of really good engineers.
Clearly. Maybe even better engineers. If Adam’s to be correct and we can’t even build a – not even a facsimile, like the same artifact, different techniques… I don’t know, I like to think we could get it done, but who am I, except for a –
Probably not in a cost-effective way.
Well cost-effective ways never stopped us from doing stuff before, has it, Jon?
[laughs] That’s true
Even thinking about magnets. Aren’t magnets a wonder of the world? Something as simple as a magnet…
They absolutely are.
I mean, these things are just insane. I was looking at it, it’s like harmonics, I believe is a word used. I can’t even find it. I’m just so upset about it. Oh, here it is. Cymatics. It’s this language, essentially, in sound. Cymatics. Look into it; it’ll blow your mind.
[21:52] So this is not quite the same thing. But in terms of things we understand that blow your mind - three days ago in the New York Times there was this opinion piece by an astrophysicist, saying “Our standard model of physics doesn’t work.” The more information we get from like the Hubble telescope and so forth, the more it doesn’t fit in line with what we have. And there’s this amazing “one possibility” raised by the physicist Lee Smolin and the philosopher Roberto Unger, that the laws of physics can evolve and change over time. Different laws might even compete for effectiveness. So this is an actual proposal being proposed by an actual astrophysicist in the New York Times three days ago… [laughter] We live in a very strange universe, is all I’m saying.
Maybe the laws are also changing over time… Okay.
Exactly. Yeah.
Well, a comedian – I was almost gonna tell you guys this as truth, but I think it’s actually just hyperbole, for a comedian’s sake… Because it’s one of those folks that are – I don’t know, a content creator. They seem, if we look into them further, like they’re telling the truth, or they’re really unearthing some deep, dark secrets, basically, but it’s really just a comedian. It’s a bit, essentially. But he basically said “What if Isaac Newton didn’t actually discover gravity? That gravity in that very moment changed, and he discovered it.” Before then, gravity was different. The laws of physics changed immediately for him to discover gravity.
Publish that in the New York Times.
Put it in the New York Times, yeah.
There’s a science fiction writer called Greg Egan, who writes about stuff like this. In one of his books a mathematician comes up with a mathematical representation of the universe, which is more efficient than our universe… And the universe is like “Yes, thank you. We’ll do that. Goodbye to the old universe. We’re taking over the new math now.” And so he thinks himself out of existence. So yeah, science fiction has covered all this, if it’s any consolation.
Interesting.
Well, it’s interesting, because sometimes life imitates art, and we see things like the tricorder, or the different things in Star Trek from the ’80s and the ‘90s, and then we see things like smartphones. And sometimes people pull direct revelation from science fiction; maybe even people named Zuckerberg. So the Metaverse is a thing in a book written – was it Neal Stephenson? I can’t remember the book now. Snow Crash.
Yup.
And it was a dystopia though, wasn’t it? I think it was a dystopia.
Totally. It was not portrayed as a happy future.
Yeah. And Zuckerberg just missed the mark there - pun intended - and decided he was gonna name it Metaverse. Now, we have a concept called Metaverse. So sometimes it’s direct, and then other times, art imitates life. And so where do the science fiction writers get their ideas? And Jon, you are one. So I could ask Adam, and he could guess, but I could ask you, and you could tell me directly. Where do your AI views - either doom, or utopic - that you end up putting in these books that are described as weird and just continually weird - where do they come from?
Well, I think I’m writing software science fiction. There’s a review that came out recently and said “The branch of science in this particular science fiction novel is computer science.” Because I’m a software guy [unintelligible 00:24:55.28] like computers of the world. Software mediates everything we do. This conversation, every text message, most of the news you read… We live in a software-mediated universe. So I’m [unintelligible 00:25:06.09] the notion of like a programmable software universe, like the fundamental substrate of reality is more like software than like hardware. That’s not that different from the world we actually live in anyways, right? On a day-to-day basis.
There used to be a lot of space travel science fiction in the ’70s and ‘80s, when we had the Apollo program… And nowadays there’s going to be, I think, a lot of software and computer science fiction. And then we’ll get into like the biotech science fiction in 10 or 20 years. I think people adopt whatever the big engine of change is, and where the big changes are happening in the world around them.
So you’re pulling it out of the software world. That’s cool.
Break: [25:46]
I have an idea for you.
Uh-oh. Pitch session here.
Fire away.
I’m gonna ruin it too, because you’re not going to do it, but it’s hilarious. [laughter]
Well, please tell us anyways.
The idea is this - it’s a long, drawn-out, dramatic entire story, and in the end it was DNS.
Oh. I like that one.
Right? Isn’t it always DNS? I mean…
I thought you weren’t gonna like it. I like it.
No, no, that is good. There are 100 million people out there who are already ready to think that DNS is the villain…
That’s true.
The deep, dark villain.
That’s too plausible science fiction.
I’m picturing like a Scooby Doo meme… You know, you yank the mask off, “Ah, it was DNS all along.”
“It was DNS!”
“The whole time!” And it would have got away with it, too.
If it weren’t for those silly kids. Yeah, well, that’s actually a good – I love the idea of… And actually, one of my favorite authors is Dennis E. Taylor. Tell me if you know this name.
I know the name, I don’t know the work.
I do, because you mentioned him recently, Adam. Was it with Kris Brandow?
Yeah, he’s my favorite, honestly. And I will eventually have him on the podcast, I just haven’t gotten up the nerve; a little intimidated. But he said yes. But we’ll see. Anyways, he’s written many books… I classify them as plausible science, because it deals with artificial intelligence, and the future… And he’s got a trilogy, which is not a trilogy anymore, it’s actually more like a five-book series now… So it’s called the Bobiverse Trilogy, and the main character’s name is Bob… And I’m not ruining the plot by any means, because this is the premise of the first book; Bob essentially becomes AI, and goes into the future, and does all – the book translates, essentially. I’m doing a poor job of describing Dennis’s life’s work, but it’s amazing stuff. But he’s a software programmer.
Oh, is he?
He lives in like Vancouver, BC, snowboards, and mountain bikes, and writes software… He writes software to architect the storyline behind the scenes. The software for him to maintain… Because when you write a book that’s so connected, and storyline connects to storyline, and timeline to timeline… Especially in this one, there’s time dilation, there’s space travel, and he literally thinks about the scientific lightyear aspect of time, and travel, and timelines, and storylines… He’s written software to maintain the truth, essentially. And I think he’s talked about it on podcasts and stuff like that, but he’s a software engineer, initially. I mean, I don’t know how much he’s – he actually writes some software. Thankfully, Bob was a software engineer in the storyline too, and he’s written – I think he’s uniquely good at the role as the main character, because in human form he was a software engineer. And as AI that goes out and does what Bob does in the Bobiverse Trilogy (that is not a trilogy), he can do what Bob does because Bob writes software, and he writes VR software. And it’s like really – if you’re at all a software geek and you haven’t read these books, you’re missing out on life. The best part of life is reading these books. Seriously, it’s good. It’s good stuff.
The Bobiverse is well-known.
Yeah.
And Greg Egan, who I mentioned earlier, also writes his own software. On his website he has simulations of the physics that he’s using for his highly advanced and abstract hard science fiction. And so yeah, I wonder if we’ll expect novels to come out of GitHub repos.
[33:54] Well, I just think there’s a lot of – I was joking about the DNS idea, but I think it would be kind of a good plot. It’d be kind of cool. There’s a growing faction of humanity that are interested in software, and software tech, and building software, that I think don’t have particularly – not all science fiction gets me the way that like software-driven storylines go…
I totally agree.
…that truly pay homage to what we consider as truth. There’s people who don’t make software, or aren’t involved in software creation, that just assume what is being told to them is somewhat true. But then there’s the version of us that build software, and understand software, that get it, and there’s no true storylines for us. They’re kind of like missing it, in a way.
Right.
Yeah, there are 100 million developers out there, according to GitHub’s latest – and I agree that publishing does not really target or serve that enormous audience of people who are super-interested in software, who write code every day, as much as it should. It’s weird to me.
Yeah.
There’s Jon Evans, there’s Dennis Taylor… Right?
That’s right. There’s us…
I mean, there are people who are doing it, man… Serving the needs of the software people.
One more name I’ll throw at you guys… Nick Jones. Not quite software, but close enough. Joseph Bridgeman, that series begins with a book called “And Then She Vanished.” And it’s time travel. It’s a unique version of time travel. Phenomenal book. Phenomenal series. It’s four books.
My favorite – this is like a deep cut, but my favorite classic science fiction novel about artificial intelligence is from the early ‘70s. It’s called “The Adolescence of P-1.” It’s really prophetic. It’s set in my alma mater, University of Waterloo in Canada, [unintelligible 00:35:32.26] But it’s like a very software-oriented, very realistic attitude towards artificial intelligence getting out of the box, and going onto the internet – in the 1970s, when we didn’t even have an internet, basically. So it’s pretty crazy.
And of course, Vernor Vinge – you know, he invented everything, but… True Names and Other Dangers, the first real internet AI story, that’s great. Which in turn, by the way – you know Eliezer Yudkowsky, the AI doom guy?
No. We need to make a list, because my list is short. And I’m barely into it.
I guess Yudkowsky is also a science fiction author, but he’s more like the high priest of “We must stop building AI. AI is gonna kill us all.”
Okay.
He once said on Twitter that reading this Vernor Vinge short story was like the defining moment in his life, that changed everything for him. And after he read it, he knew what he was gonna do with the rest of his life.
Is that right?
Yeah. So science fiction is influential.
Oh, for sure.
That’s why I think the Bobiverse series is so unique, because of the way AI plays out in that storyline… I’m trying my best not to spoil anything, but it’s just – it’s not at all negative, nor positive. It’s just the way, I guess; it’s just like, not quite the word inevitable, but it just happens, and it’s actually better for humanity, in the grand scheme of the human storyline in that series… It’s just so interesting how you can think so drastically bad. And I suppose when you think about humanity only - and maybe there is only humanity, maybe there isn’t - that everybody’s take is “How is AI to humanity?”, not “How is AI to universality?” – I don’t know, how do you think about it from a nonhuman perspective.
A weird question I like to ask is if an alien species were to build an AI, would they be like the AIs that we build? Can we even imagine a different kind of AI? And if not, then aren’t we kind of all – you know, humans, aliens, [unintelligible 00:37:22.15] – all sort of crescendoing on like the same thing?
Yeah.
But anyways, yeah, I don’t really buy the sort of “AI is going to be terrible and kill us all.” I don’t really buy the “AI is gonna be wonderful, and turn us all into angels”, blah, blah, blah. I’m confident that the future is gonna be really weird, however.
[37:41] Well, I think we got to this degree of the conversation by saying that so much is changing. Like, you mentioned physics in your Times article… And I think we’re kind of in this world right now of change. We’re in a world of change around, in particular, our world here in software development, where AI is becoming a pair programmer, and very much becoming like a sidekick to the individual, and a sidekick to the corporation building software. And then we’re also in a turbulent time of economics; world economics is turbulent right now. There’s lots of stuff happening in all parts of the world. There’s things trying to change the power of the dollar, the US dollar, as I’m talking about it… There’s lots of change around physics, and going back to the Moon, or have we gone, or… All these things that are like basically every question imaginable is now in question again. It’s like, is the Earth flat? Is the Earth round? Have we gone to space? Is there space? Is there only low Earth orbit? World economics, physics… I mean, all these things are essentially what seems to have been hard truths are now like “Well, really? Are they true?”
Well, I think this is like a factor of what I’m saying, that software, like what you’re seeing and your exposure to the world is coming through these layers and layers of filters, like journalists, and software, your feed algorithm deciding what you see [unintelligible 00:39:04.27] So everyone’s getting a slightly different view on what used to be a shared common world, which is an interesting background. I don’t think that’s entirely bad though.
I was gonna ask how you combat that, but it sounds like maybe you don’t think that we need to.
Well, not necessarily. I think “misinformation” is more a demand problem than a supply problem. No matter how much is out there, people are gonna [unintelligible 00:39:24.15] But also, I think to an extent a variety of perspectives, like genuinely original takes, is how we get new stuff.
Right.
So I actually don’t want everyone to be thinking the same way about everything.
Right. Yeah. We need different. We need Steve Jobs’ original campaign, “Think different.” Because that’s true. I mean, we do need uniqueness.
Well, we also need collaboration, and camaraderie, and connectedness. So there’s – I mean, ultimately, my takes are usually boring, because it goes back to things like moderation… I feel like we need both. I feel like we need a physical, analog connection to real people, in the real world, and we also need the abstract, digital, morphological demand side misinformation world that we’ve invented… Because there’s so much, like you said, Jon, that comes out of that, that are new things. Some are bad, some are good, and then hopefully we gravitate and elevate the good ones, or good uses of things, and find ways to combat the bad ones… But I feel like both at the individual and maybe more at the societal level, a healthy grounding in both is probably what’s best.
Yeah, I’m with you. Like, I call myself a radical moderate. I walk down the street with a sign saying “Reasonable informed discussion where possible.” [laughter]
I like that. A radical moderate. That’s good.
But yeah, I think you need both. And you do need some wackiness and craziness and willingness to sort of reject what the status quo things are, because sometimes the status quo is wrong. But you can’t go embracing every alternative [unintelligible 00:40:57.09]
Yeah, because there’s a lot of people who are just in it for the clicks, just in it for the views, just in it for the followers.
For sure. Or they’re literally outside of their minds. Like, there’s a lot of people that are just out of their mind. Like, they’re not sound at all.
Legit, yeah. And they will find conspiracy in conspiracy. That’s –
[laughs] And where those two things intersect, that’s where magic happens, right?
The bad kind of magic, yes.
That’s where you get Kanye West, or you get Lord Byron right there. That’s Lord Byron.
Right. Exactly. Yeah.
So on the scale of AI doom, Jon, if 100 is utter doom, that’s Eliezer Yudkowsky, and zero is like no doom, zero doom, where do you feel like you personally fall on that scale as we move forward? Are you like a 50/50 guy? 70/30? What do you think’s gonna happen on a scaled fashion?
[41:50] If I’m forced onto that scale, I’d be like 30 on the doom. I’m not as concerned, and that’s in the long-term. I’m really quite unconcerned with the short to medium term. But also, I think the scale – I feel like we’re going to have a future which is sufficiently weird in unexpected ways that we’re going to look back [unintelligible 00:42:07.15] and think “I don’t know what we were thinking, because it turns out things are much stranger than that, and what actually happened was totally orthogonal to what we expected.”
It’s a lot different than we – it was completely on a different scale that we didn’t even know existed.
Right. In the same way that we got the governor of Minnesota from the movie Predator, we got [unintelligible 00:42:24.27] President Donald Trump… You don’t look back and think, “Well, what are the odds that we’re going to have Republican or Democrat governors in these states, or this country?” That was not even an option when we look back on this, what actually happened. The weirdness of the world is accelerating and increasing, and I think that’s gonna keep happening.
Yeah. You’ve got to define doom, Jerod. You can’t just assume doom. What is doom in this scenario? Describe what doom would be. If 100 was doom, what is doom, and zero is no doom?
Well, I think when you ask the question, doom is in the eye of the beholder. Whatever you would define doom as, then you scale it according to that thing. I think we could all have different definitions, but somewhere in the realm of like takeover, death, insufficiency for life of humanity… I don’t know. Of course, my doom - I’m rooted in like ’80s and ’90s pop culture, so I go Terminator 2.
[43:20]
Come with me if you want to live.
Right.
I don’t know about you guys. But to me, that would be bad. Rise of the Machines… That’s kind of a typical AI doom scenario that most people, I think, think about. Is that what you think about, kind of the machines rebel and take up arms against us?
Well, I’d go slightly bigger in scale, actually. My doom would be AI turns the entire Earth, including us, into more computronium, or whatever… [unintelligible 00:43:41.27] we just get sacrificed to the altar of better hardware ultimately…
Okay, so that’s a little bit more of the Matrix-style doom right there… Just harvesting us for energy.
Exactly, yeah.
Yeah, fair.
Wow.
Both bad… [laughter]
I think we can agree on that much [unintelligible 00:43:57.00]
Yeah. I mean, both are doom. That’s why I said I think maybe it doesn’t matter whatever you picture it as… But do you think that’s gonna happen, that thing that you picture?
I can’t even give a prediction by any means. It’s neither good, nor bad, I suppose, because it’s such a big world, and I don’t have a big enough mind to encapsulate what I think could truly be accurate. Plus, literally, my words are being transcribed right now into the GitHub Arctic Code Vault for all of humanity to remember… And it’s like, “What did Adam say? It was stupid.”
“It was stupid.” [laughs]
Note to future AIs: Adam was totally on your side.
Yeah, exactly. We have him on record as saying you would take over.
I do kind of want to go the Gilfoyle route, which is a Silicon Valley reference… And Gilfoyle said in there – like, he wanted to submit… Like, at first he was against it, essentially. And I won’t ruin the story for anybody who hasn’t gotten to season five or six, or whatever number this is; I think it’s six. He’s like “I want it to go on record. Send me an email that I’ve helped you with this thing, so that the AI overlords eventually know that I was on their side to help them into - you know, when they go back to the humanity archive of what was said and done, they know that I was the initial help to ensure that their takeover was not thwarted.” Because it’s inevitable, essentially.
This goes back to the famous Simpsons meme, right?
Yeah.
“I for one welcome our new AI overlords…”
Right. That’s literally what he said, I believe. Great job quoting. That’s what he said.
Proving once again that Simpsons had a time portal to the future.
Exactly. Yeah.
See? I didn’t even know that was a throwback, because I’m not a Simpsons scholar… I didn’t know that was a throwback. That’s so interesting.
Nice full circle loop there. Yeah, that was good.
[45:40] For sure. So I’m kind of like – I’m not actually scared of the AI overlords in the future. A little bit, just because I have to say that; it’s required. A little scared. Man, I just – I guess my hope is less a prediction; I hope that humanity finds a way to institute these artificial knowledge bases to be comrades… And that’s maybe a bad term. Collaborative, rather than not. To not be us versus them, but more like symbiotic. I hope that that’s how it remains. But I imagine at some point an intelligence would get to be so intelligent that I think, by and large, from what I know about humanity and the Earth, and the way we treat it, and the way we grow, we are kind of like a virus. Where we go, things get decimated, from the eyes of the Earth. And so when you zoom out really, really far – I mean, I know at the closeness of humanity there’s love, and there’s respect, and there’s all these beautiful things. But from the – it’s like Monet. Monet, I think there’s a thing when it’s a classic Monet, is what they would say; from far away, a Monet looks beautiful. But when you get closer, you see the artifacts, you see the imperfections. I’m not saying Monet is not a beautiful woman, I’m just saying that’s the thing. And I think that might be the case here, where when you’re AI, maybe you zoom so far out to humanity you think “Well, ultimately, this is like a death doom. They’re going to war. Destruct, fight, civil war.” I mean, we see that in today’s society; you turn on the news, it’s all – it’s not good, generally. Where’s the good news channel?
Some of that’s supply and demand as well, though.
I feel you, I feel you. But that’s my hope, I suppose. I hope that we can be symbiotic, and that – I think eventually if a computer can become so intelligent, or there’s an intelligence that becomes so intelligent that it realizes “Well, realistically, humanity is just bad for itself. Let me protect it.” That’s the age-old thing. AI is really trying to protect humanity, and the only result is to get rid of humanity to protect itself.
There’s this Robert Heinlein line that mankind rarely ever manages to dream up gods that are better than itself… Thinking of like the Roman and the Greek gods, and all the terrible stuff they did. But even like the Old Testament biblical God, who was constantly [unintelligible 00:47:56.13] But I think that’s true. And I also think, when we’re talking about super-AIs of the future, we are basically talking about new gods. Ultimately, this is a religious discussion as much as anything else.
Kind of. I mean, I don’t think so, but… I mean, it depends on what angle you come from. We invented it; we invented the machines, we invented microprocessors, we invented the ability for a computer to compute… And so can you invent God? I don’t believe to truly be God you can invent God.
Well, lowercase g.
[laughs]
Right.
We’re certainly trying to invent God, aren’t we? Yeah, I think we have that drive, for sure.
Well, because the true nature of God is to have always existed, outside of time.
Well, because if you invent God, then you are God, right? That’s the point you’re making. But also, that desire, I think, is innate in each and every one of us, is to elevate ourselves to that point. And so that’s the whole Cast Away, Wilson, “Look what I have created”, when he created fire.
[49:05]
Yeah!! Look what I have created! I have made fire! I… Have made fire!
Right.
And then he talks to himself in a volleyball. That’s kind of in there, in us, and I think if we are left to ourselves, then we end up doing such things. And so yeah, I think that desire is certainly in there in humanity…
Oh, yeah, for sure.
…and so we find ourselves doing it. I don’t know, I look at the current state of AI, and I feel like we’ve plateaued – I don’t know, maybe this will be dumb here in six months, not even in the Arctic Code Vault… I feel like we’ve plateaued again, to a certain extent. I feel like there was – I think that the progress that we’ve made in the world of machine learning has been leaps and then plateaus. And you kind of have a new technique, a new thing, a new idea that gets implemented, and then you have just kind of revolutions around that. Not revolutions; evolutions around that idea for a while, until a new thing – I mean, transformers is the current technique that has produced this new step function in AI’s ability to do what it does. Go ahead.
[50:07] As an aside, I totally agree with that. I’ve actually read about – I have a Substack about AI, and I just read about this recently, in the last five days, that we’ve plateaued. Because like this last winter, when things were dropping every week, I’d go to AI events in San Francisco, and people were stumbling around like someone just hit them on the head with a hammer, going “What is going on? This is insane. There’s something new every week, and it’s blowing my mind.” And then GPT-4 dropped, and that sort of ended it… And we have pretty much plateaued since then. I think everyone in the field would agree… With some relief, honestly, because that was a really crazy time, January to March of this year.
It absolutely was. And so we’re in a better place than we were, but we don’t know how long we’re going to be in this particular plateau. And I’ve said this, I think probably not on this show, but on JS Party, as a personal user of the tools, I’ve hit now what I call the trough of disillusionment. I didn’t coin that term, but I’m gonna apply that term to this particular case, where I know the limits of the tooling, I use it for what it’s good at, I avoid it for things that I know that it’s bad at, and it’s just become another tool for me, and a useful tool, but not a life-changing tool for me personally. And so even in my own personal use, I’ve kind of hit that plateau where I’m like “Okay, it fits into my workflows here, it doesn’t fit into it here, and I’m more productive because of it in this case”, especially in like “Give me 40 synonyms for this word.” When it comes to words, it’s really good at words, and so I use it for those things. When it comes to Elixir, it’s just okay at Elixir, and so I don’t use it quite as much. When it comes to TypeScript, it’s better at TypeScript than it is at Elixir, and so I’ll use it for TypeScript.
But it doesn’t know anything after September 2021. I’ve run into this a bunch. So new libraries and new releases, it knows nothing about, which is annoying.
It drives me crazy, yeah.
Totally.
If you know something’s a few years old, you can kind of ask it about it, for the most part.
And I think on the current plateau, we will get there with that kind of functionality, where it’s going to get better from here; it’s going to have a better memory, it’s going to have access to newer information because of the tooling, and the processes, and all the work going into greasing the skids of this current technology… But as far as another step function, like the next plateau, I don’t – obviously, I didn’t know where this one was coming from; I don’t know where that is, I don’t know when it is, and I’m not sure what it’s going to be like. But for now, it seems like, just as a microcosm, the era of full self-driving cars is just as far away as it was the last time we asked.
Right.
We’re still not there yet. They’re better, there’s more uses, but we still don’t trust them to full self-drive.
Well, they exist in San Francisco, right? Like, I see them every day in San Francisco.
Okay, so in limited domains, limited contexts. But like, what you would consider the AGI of driving, which is you drop me, a human, into pretty much any circumstance… I’ve been driving for 25 years. Okay, maybe certain machines, I can’t drive. But give me the sun in my eyes, give me the ice, give me the place I’ve never been, and like I can figure it out, roughly speaking. Of course, we still have tons of crashes, and stuff. But that level of full self-driving to me just feels so far away still.
Yeah, that’s right. I’m Canadian [unintelligible 00:53:06.01] driving snow, then I’ll be impressed. Until then, I’m not buying it.
It’s a whole different game, isn’t it?
Yeah, exactly.
Well, even for humans it’s challenging.
Oh, it is challenging, yeah.
And scary. I’ve been wrecked before… It just brought back some PTSD in me. In terms of AI, some really interesting uses I’ve done recently - I’d like to share one, because I just didn’t consider doing this, and this is where I think it’s leveled up humanity in subtle ways. So this weekend I was barbecuing, because that’s what you do on holiday weekends… At least here in the States, it was Labor Day weekend. And I had some family over, and I had these gigantic, Texas-sized potatoes. So I was gonna make baked potatoes. And they weren’t just like baked potatoes, they were smoked baked potatoes. And so I wanted to go true barbecue and do low and slow, 225, super-smoke, for as long as it takes to get to 205.
Sorry about that.
You’re killing us.
We didn’t have enough time though… So that was my plan, so I had to alter my plan. I’m like “You know what, let me ask ChatGPT, like “I’ve got this amount of time, and I’ve got this potato, and I want to get to this temperature. Rather than me get flustered and skip it and go to a restaurant, I’ve got this much time ChatGPT, I’ve got the Traeger, whatever model, and I can get it to this temperature…” And I fed it this data, essentially, and it said “Well, if you put it at this temperature, you’ll meet your criteria for getting this potato to doneness in this amount of time.” So rather than skip the meal, I used ChatGPT to sort of reverse-engineer thermodynamics, essentially. Like, “How do I get the potato to 205 target internal temperature, and to what temperature do I have to cook it at for this amount of time?” And they were like “275”, or whatever. I forget what the number was, but it was like, it wasn’t 225, where I wanted to be at, which is low and slow.
And then the other use I did recently - I have a Denon 4400 home theater receiver in my media room, and on Plex I’ve got all these different films, and they’re all in different formats: you’ve got this one that’s DTS-HD 5.1, you’ve got this other one that’s TrueHD 7.1… And these are all the original sound formats from the sound studio. And your Denon receiver - any given home theater receiver - can process that sound into the speakers that you have available, and make it sound good. It’s a sound processing thing.
So I’m like “Well, ChatGPT, help me figure this out. If I’ve got these available settings in my Denon to translate this sound into my speakers, into my format, what’s the best one to use, given its original format?” And I’d never thought to use it like that. I would just guess; read the manual or something like that, and which one does it map to? So I had ChatGPT make me this matrix. So I took all the films I have, all the original sound formats, and all the available ones in the 5.1 and the 7.1 settings, and so now I don’t have to guess anymore which one to use; I just go to this grid that ChatGPT made me based on what I have available and what the film might be, and boom, I’m using the right sound processing on my Denon. Those are things that make subtle advancements in today’s – like, am I earning a million dollars because of that advancement? Heck no. But am I enjoying my home theater better and cooking potatoes faster, or to the degree I’ve got to within a certain amount of time? Yes. That’s amazing.
If that isn’t AI making your life better, I don’t know what is.
That’s what I’m talking about, right? That is good life right there.
Oh, for sure. Yeah, the current plateau is much nicer than it was prior to being here. I say I’m in the trough of disillusionment, because I’ve hit up against the seams or the edges of what it can and can’t do. But the stuff that it can do, Adam, like you’re describing - life is a lot better, because I can say “Give me the FFmpeg command for this thing”, and then I don’t have to go read the man page, and I google way less, and I just ask it for things that it knows… But when you first start to use it, you don’t know the boundaries of its abilities, and so you tend to be like “It can do everything, and it’s always right”, and then you’re like “Wait a second. Chill out, Jerod.”
It can’t do everything, and it isn’t always right.
It’s wrong a lot…
Or, because it can’t do the most ambitious thing you can imagine wanting from it, therefore it’s a failure. Like, can you scour the internet, find me the stock to buy, and make me a millionaire in six months? If it can’t answer that question, has it failed you? No, it has not, because that’s not quite where it’s at.
The people that have failed you are on Twitter, telling you that they can do that for you. [laughs]
That’s right.
Right. And they’re also telling you that doom is upon us… [unintelligible 00:57:24.05] be scared of GPT-4, like “Have you ever used GPT-4?” I understand that it’s changing and it’s gonna get better, but these are fundamental restrictions on what it can do. I totally agree with Jerod. You bump up against the wall, and you’re like “It’s gonna get better, but it’s not gonna be like a transformative wizard anytime soon.”
Right. What’s interesting to me is like if we talk about that open letter published - it was probably last year, or maybe it was this spring…
Oh yeah, that looks really funny in retrospect, doesn’t it?
Yeah. That letter is signed by really smart people. I mean, you mentioned Yudkowsky, and he actually - I’ve found a Time article where he says that that letter didn’t go far enough. And so he really is, as you said, kind of the high priest of this particular belief. He’s like way on the edge of doom.
[58:12] What’s the letter say? Give a summary.
The letter says we should stop all AI research until we understand what the hell is going on. That’s basically it.
Yeah, exactly. But it was signed by a lot of people, and people that are like – they’re not Joe Schmo.
Oh, yeah, totally. Very impressive names. I know a couple of them.
Is there a blockchain to verify they signed it, though?
[laughs] They didn’t protest and say they didn’t sign it…
Okay… [laughter]
We’ve also found the boundaries of what blockchain can do for us… So they did. I mean, you can argue about individuals and go ask them, but there’s just a lot of names, a lot of people who are leaders in AI things… This guy is no slouch, far more impressive than myself… But you wonder, I don’t know, how all those smart people could land on a position that’s so strong, and then you have a lot of other smart people who land on a position that’s just as strongly opposed. It’s an interesting conundrum, I guess, maybe because none of us know what’s going to happen. What do you think, Jon? It’s hard to dismiss that many names on a signed letter, but at the same time, we’re kind of dismissive of it, because it seems like it’s not right, at least for now.
Well, I think it goes back to what you were saying with the plateau. I think people who signed that did not think we were going to plateau. They felt things were just going to keep accelerating from January through March, to April…
Just exponential progress.
Yeah, exactly. You know, June would be even crazier than March, and September would be beyond crazy.
Right.
As you say, we have pretty clearly seen that that is not the case. Nothing dramatic has changed. So I think it’s a reasonable concern. I would have argued strongly against it at the time. There’s always plateaus; you never get uninterrupted exponential growth, except maybe Moore’s Law, and that’s like a one-off. I understand where they were coming from, it just looks like they’ve made a bad call now.
While we’re here on this prediction, let me share one more of today’s leveling-up examples. It’s really good. You’re gonna love this one. Do either of you manage hard drives? Jerod’s gonna laugh about this one, because he does not manage hard drives; just the one that’s on his machine.
Yeah, I really try not to.
I’ve transcended in life far beyond – [laughter]
Right. “I’m beyond…” Well, as you know, most modern hard drives, whether it’s an SSD or a physical disk that spins, have software in them called SMART. And I forget what it stands for; it’s an acronym.
Oh, yeah.
But that report you get back from the SMART data - reading it as a human is just like “Forget it. What’s important here?” So I take that report and I just pipe it right into ChatGPT, and I tell it to tell me exactly what’s happening with this hard drive. Should I replace this thing? I don’t wanna look at that report whatsoever. And it’s like “No, Adam. You’re good to go. Keep going.” Or “Adam, listen, let me tell you something. In about six months, you’re gonna have to replace that hard drive.” That’s a paraphrase of what ChatGPT says back to me, but that’s another modern leveling up of –
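For the curious, here’s a minimal sketch of that kind of workflow - an illustration, not Adam’s actual setup. It assumes smartctl from smartmontools and the openai Python package (v1-style client) with OPENAI_API_KEY set in the environment; the device path, model name and prompt wording are placeholders.

```python
# Sketch: pipe a SMART report into an LLM for a plain-English verdict.
# Assumes smartmontools is installed and the `openai` package (v1 client);
# /dev/sda and the model name are illustrative placeholders.
import subprocess
from openai import OpenAI

# Grab the raw SMART report for a disk (usually needs root privileges).
report = subprocess.run(
    ["smartctl", "-a", "/dev/sda"],
    capture_output=True, text=True, check=False,
).stdout

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You read SMART reports and give short, plain-English verdicts."},
        {"role": "user",
         "content": f"Here is my SMART report:\n\n{report}\n\n"
                    "Is this drive healthy? Should I plan to replace it soon?"},
    ],
)
print(resp.choices[0].message.content)
```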
For sure.
“I don’t need to plateau. I don’t need to worry about these people scared of the future.” Like, what are they so afraid of? Do they literally think these machines are going to construct robots, and they’re going to take over Boston Dynamics, and the next thing you know the company isn’t run by the company anymore, it’s run by some machine that manifests the corporation and pays the taxes and the bills, and the humans are just subjects of this control? Meh, probably not.
Yes. That’s actually the short answer.
That’s what’s gonna happen?
Yeah, yeah. [laughter] But I also think that raises a really interesting – because like, all my non-tech friends are like “Oh, ChatGPT - that just makes stuff up. It’s useless. I don’t even know why we’re talking about it.” They don’t realize that people in the industry like us - you know, sometimes we ask it questions and ask it to write things for us, but we know that it’s going to be hallucinating, and not everything, if you just ask it, is going to be correct. But what we use it for is what Adam was talking about. It takes information and it transforms it into another kind, and it’s phenomenally good at that.
So good.
[01:01:52.06] Yeah. And it writes like 30% of my code, too. Copilot writes like 30% of my code nowadays. And I think non-tech people don’t realize that it’s a powerful transformation tool. They think it’s just a Q&A tool… Which is too bad, because I think they’d get a lot of use out of it as a transformation tool.
Right.
The way Simon Willison described it really resonated with me. He called it a calculator for words. And so it’s going to be very good at taking words - you can put a lot of words into it and have it summarize them, compile them down into fewer words; like Adam just said with the SMART diagnostics, “Tell me what this means” or “Highlight the difficult parts”, or whatever. It can also take a small amount of words and expand them into many more words. And those two use cases - and there are many permutations of those - are hugely valuable, beyond just Q&A.
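As a rough illustration of that compress/expand pattern - again a sketch, not anything demonstrated on the show - the same chat call handles both directions, and only the instruction changes. The helper name, model, and prompts below are made up for the example, and it assumes the openai Python package with OPENAI_API_KEY set.

```python
# Hypothetical "calculator for words" helper: compress many words into few,
# or expand a few words into many. Assumes the `openai` package (v1 client);
# the model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def transform(instruction: str, text: str) -> str:
    # One generic chat call covers both directions; only the instruction changes.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

# Compress: many words in, few words out.
long_report = "…paste a SMART report, meeting notes, or any wall of text here…"
print(transform("Summarize this in three plain-English bullet points.", long_report))

# Expand: few words in, many words out.
outline = "- potatoes need to hit 205 internal\n- only three hours available\n- Traeger smoker"
print(transform("Expand this outline into two friendly paragraphs.", outline))
```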
Simon’s great. I know Simon. He’s super, super-good [unintelligible 01:02:37.18]
Yeah, we’ve had him on the show multiple times, because he’s also – he’s very excited about things, but he’s also very scared of things, and he’s also very practical. So it’s like, you get the excitement, you also have a little bit of trepidation, so it balances; it’s not pure utopia. And actually, what I like about him the most, is he shares what he’s doing with it today, right now, in his life, and how he’s using it to be more productive. And I think that’s ultimately valuable for all of us; kind of like these tips that Adam was sharing.
It’s like a version of bionics, but you’re not actually embedding anything into your body. It’s just your human form, I don’t know, typing into a machine… Maybe at one point we can actually think our thoughts through something and think to ChatGPT, instead of typing… I use the app on my iPhone a lot, and I just talk to ChatGPT. It’s so strange when you volley back and forth a few times… But you can speak into your phone, and it does a good job of transcribing your words into text…
And so for long conversations that are deep like that, I won’t type them out, because it’s just too tiring. I also wish it kind of had AutoCorrect, or sort of predictive text whenever you’re typing… Because it just doesn’t. There’s certain things I’m like “You can totally just complete the sentence for me”, but it doesn’t. Anyways…
Well, OpenAI, they’re too busy printing money at this point, so…
Yeah, gosh… Even favoriting. For example, the one I mentioned about the Denon, and the sound fields, and stuff like that, sound processing - that’s a chat I go back to and reference, but I’ve got to scroll, scroll, scroll and find it. And it’s just too challenging. So now I’ve just got to link to it.
Bookmark to it, yeah.
Yeah, bookmark to it. But like, just give me a favorites feature. Just let me kind of go back to these conversations, keep the context, and kind of keep them going over time… Because there’s context there, and I don’t want to have to rebuild it again. And in some cases, it kind of forgets. “I know we’ve got like 30 back-and-forths here, but I’m new. I’m new right now. I have no context of this past conversation.” And it’s kind of frustrating.
I think OpenAI doesn’t really want to be a consumer services company. They’d like to just train GPT-5, and GPT-6…
Maybe so.
Yeah. Be the API to those things… It seems true, the way that they’re building things. It seems they’re more focused on that side than on improving the consumer product that is ChatGPT. Alright, so we’re not doomers, but we’re not particularly not-doomers either…
There’s fear, but I’m not afraid.
There’s trepidation. I like that word you used, Jerod. Trepidation.
Yeah. I thought that was a good word to describe it.
I’m not shaking in my boots about it, you know? I’m actually quite hopeful that something will come from this that’s good and better for humanity. How can we – like you were saying, Jon, some people just won’t touch it, because it’s not accurate enough for them, or whatever. Like, don’t dismiss it; leverage it, but don’t lean on it as your only source of information. You’ve got to be wise, and you’ve got to direct it. 30% of your code is being written by it, but those are still your ideas. Like “Can you help me…?” You’re just saving yourself time. You could have gone and probably written that code just as well, if not potentially better… But why would you spend three hours doing that when ChatGPT can get you there in – it’s the ultimate 10x-er. It gets you there in 30 minutes versus three hours.
Yeah. I really liked the word Copilot. I thought GitHub was brilliant when they came up with that… It’s like, you’re landing the plane yourself. That’s fine. But when you’re flying, the copilot can take care of most of the work, right?
Right. And that’s true.
[01:06:06.01] Well, that’s why their next big innovation is going to be called GitHub Pilot, because then you’re just out of the loop. Who needs you anymore…? “We’ll take it from here, guys. Thank you.”
[unintelligible 01:06:13.22] We don’t do that anymore.
[laughs] Exactly. Well, the real question, Jon, is how much of Exadelic is human-written and how much of it is not?
So when I started writing it, GPT-3 wasn’t even out yet. So you can be very confident…
Okay. This might be one of the last great human-written books at this point, you know?
Yeah, I mean, it is kind of fun. People are worried about poisoning AI models with AI-generated data, because there’s so much of it already out there, and… Yeah, it was written long enough ago that you can be very certain that this was entirely written by my weird subconscious, rather than a transformer-based architecture.
That’s something too - people who would have never written a book are able to get the outline out. It’s like an editor, almost. I thought about this more recently. In a lot of cases, a real human editor is where a lot of the extra beauty comes from for an author - the words, the forming of the sentences, the structure. I mean, there are a lot of authors who are good at that, but maybe some just have the good idea and don’t know how to manifest it into a well-articulated, fun-to-read sentence that helps your imagination bloom with pictures, which is what a lot of books do. I think about that… Even today, people are writing books they would have never written before. I think that’s a positive sum for humanity. Let me get the outline out, and maybe ChatGPT, or whatever this GPT world we live in becomes, is just the get-over-the-hurdle, the unblocker, the writer’s block remover, essentially. Let me get you moving. Let me help you take that outline into something - maybe you don’t even like what I’ve given you, but it helps you see that it’s possible. Because sometimes humanity is blocked by what seems possible - “If I don’t think I can achieve it, then I just won’t do it”, kind of thing.
It is great at just giving you lists of ideas. I don’t know what to call this thing… “Give me 20 alternatives, wacky names to call it.” Oh, these are actually good names.
That’s the majority of my uses. It’s “Give me 40 alternate phrases for this phrase.” And I’ll kind of say – usually it’s a phrase that I can’t remember, that I once knew…
Is that how you’ve been titling these shows lately, Jerod? You’ve been using ChatGPT? Because the last several times we’ve had to title shows, I’ll admit this, my ideas have been horrible. The most recent one that we’ve put out for the Changelog, the interview one - what was it, Jerod? Back to the terminal of the future? That was amazing. And I was like “Forget you, Jerod.”
That was 100% human-crafted, I’ll let you know.
It was Jerod-generated? Nice.
It was Jerod-generated text…
Except [unintelligible 01:08:47.06] [laughter]
Thank you for the compliment.
Well, it was a good title. What’s funny is a lot of times I’ll discount it, because I’ll say “Give me 40 of these”, and I’ll be like “These are all terrible.” And then I’ll be like “Okay, I’ll use this one.” [laughs] “Oh, these are awful… That one’s actually not that bad. I’ll go with that.”
“I will take the least terrible one, because it’s less work than coming up with my own.”
Yeah, exactly. And it’s like “Well, better than what I came up with…”
I do think that, by the nature of what they’re trained on, they’re gonna take like the median quality level, right? So I don’t think ChatGPT’s ever gonna write a particularly good book if it’s trained on just all the books that are out there. It’ll write an okay one, but you’re not gonna like push the envelope; you’re not going to create new, groundbreaking art with the current architectures. They’re literally designed to take the most common approach, and follow that.
Right. Which is like mediocrity ingrained, right?
Yeah. But it also lowers the bar. The base level that everyone can get to is actually reasonably good, so…
[01:09:47.02] Yeah, Damien Riehl talked about this a little bit on Practical AI. He’s a lawyer/programmer who’s done a lot of work, like all the [unintelligible 01:09:52.17] and stuff, with computer-generated music specifically, and law around computer-generated music. And he was talking about the smoothness of AI-generated music, and how humans don’t create like AI creates. AI creates with smooth trends, smooth data… And I think by that you’re kind of referring to like mediocre normalness, like the normality of the data, of the produced sounds, for instance… And humans create in this kind of beautiful, abnormal, jagged way with music… And so they’re using those designs, or those ways to differentiate between human and AI-generated music, for instance. I think it’s probably very similar with words, where you’re gonna have this thing that’s taking all of human words and crunching them, and then spitting out this next-best word, which is often the most guessable word for the circumstance. It’s by definition the next best. But that’s not really the way that humans think, or write. We come up with something entirely weird, and off kilter, and askew. So there’s something there.
Yeah, I don’t know if you’ve ever listened to Google’s MusicLM thing, which on the one hand sounds really good, and it even includes vocals sometimes… You just pick a genre of music, pick a length, pick instruments, and it will create it on the spot. It’s always great background music. It’s never something you’d really listen to in the foreground.
Yeah, this would be awesome for an elevator…
Yeah, exactly.
…but not for my wedding, or for a rock concert.
Precisely.
Well, there’s something that’s just magical about beautiful imperfection. I think that’s what you’re describing, Jerod. Humans are – we’re not predictable, in a lot of cases. Like, there’s some predictability to humanity, but in creativity I think there’s not a lot of predictableness, if that’s a word. Predicticality… How would you describe that?
Predicticality… I liked what Damien Riehl – I think he did describe it as jagged, which I thought was an interesting way to describe the way humans write and create. It’s jagged.
Yeah, well, the jaggedness might be the pausing; you might create, evaluate, repeat. And that might be the jaggedness; it’s like there’s a pause in the evaluate step of what you create.
We call that writer’s block. [laughs]
Right.
I forget which rock star said “If you’re gonna hit the wrong note, do it loud.” [laughter]
That’s right.
See, an AI would never say that, because an AI doesn’t have that level of [unintelligible 01:12:23.27]
[laughs] Yeah.
That’s a rock star. I love it. I love it.
Jon, do you have any more books in you?
Maybe… I mean, this is my first book in a while, and it honestly just invaded my mind, and I had to write it to get it out of my mind. So that’s usually my creative process. So I have no idea when I’m going to be invaded by another one. So I think the answer is yes, but I’ve no idea when.
Well, you just were invaded, Jon; it’s a DNS config. Come on, we gave you the best ending ever.
We can collaborate. I can help you with outlines, and you can write the stuff…
Just give Adam the co-author, just for that last plot piece, and he will be happy.
Oh, [unintelligible 01:12:56.28]
I would love to eventually write a book. I don’t have the motivation yet - my mind hasn’t been invaded by an idea to the degree where I’m like “I’ve got to get it out” - but I do aspire at some point in my life to write a book, and it would probably be in this world, which isn’t really catered to very much. And the idea that there’s a total addressable market of 100 million developers globally - that’s interesting. Now, do they all speak English fluently? I don’t know if the 100 million is all in my – I don’t speak other languages, so I’d have to write in my native language. I suppose I could work with somebody to translate, but that’s even harder, too.
I mean, ChatGPT’s really good at translation.
Yeah, that’s true. Well, yeah, I can be like “Hey, translate this book.”
How many people have to read your book for you to consider it a success? Honest question. For you, Adam, and it’s for you, Jon. Because you’re saying 100 million might – maybe it’s not all of them. Like, do they have to all read it?
Well, I think about it – it was less about how many is enough and more about what’s the total addressable market. Like, is the total addressable market large enough to consider going after? I think 100 million is plenty, so yeah.
Okay.
[01:14:09.04] I would be happy if 1,000 people read it; maybe even 20,000 people. That’d be fine with me.
Okay. Jon, do you think like that? Do you think like “How many people do I want to read this thing?” Because you put a lot of work into it.
Yeah. I mean, what you’re supposed to say is “Oh, I don’t care how many people read it as long as they’re moved by it.” That’s not true. Everyone knows that’s not true.
Right.
No. [laughs] That’s a lie. You’re feeding me a lie.
Yes, exactly.
That’s what ChatGPT would tell us if you asked it to generate a response.
Honestly, my previous books have sold tens of thousands of copies total, which is not a huge amount, but it’s not trivial. So I’d be happy with 10,000. I’d be even happier with 10 times that. Obviously, everyone wants to have a huge hit, blah, blah, blah. But if thousands and thousands of people have read your book, and thought about it, and been moved by it, then that’s a pretty good outcome.
The commitment to the craft - not the writing craft, but the craft of taking the idea from the brain of the thinker and putting it into words in a form that is cohesive and readable by another human being - somebody that committed to that… I’m just not sure I could do it more than once. And to be really great at it, to get to like 10,000, 20,000, a following like Dennis E. Taylor, for example - he’s got quite a following. The Bobiverse has done quite well, and he’s got Outland and Earthside, and other spin-offs of other stories… There’s a short story he’s got called Feedback. I think it’s probably his masterpiece, one that he barely claims… I think that’s probably his best book, honestly. To get to that level takes such commitment. I’m just not sure I have it.
George Orwell once wrote “Writing a book is a horrible, exhausting struggle, like a long bout of some painful illness. One would never undertake such a thing if one were not driven on by some demon whom one can neither resist nor understand.” Now, Orwell was a downer, famously… [laughter]
He was. Yeah, 1984, right?
I don’t really believe that entirely… But there is an aspect of that, where you’re like “Oh, man, I’m doing this big thing, and I’m wrestling with it, and I don’t even know if people are gonna like it for years. What am I doing?” You do go through those –
Right.
Yeah, you have to be driven by something. I mean, I think, to even consider the exercise, I would have to think “Is there a market for it?” So that’s why I began with [unintelligible 01:16:14.26] Like, is there a market for the idea? Is it worth sharing? I’m not so driven by the idea that I have to share it. But to get there, I think it takes some discipline, really - some discipline to get to 50 pages in a week, or whatever it is in a day… Authors tend to think in weeks versus days, because it’s just too challenging to accomplish a goal on a daily basis; kids get sick, you get sick, life happens, you’ve got to go to the doctor, whatever… You know, life gets in the way.
Totally.
You need gas in your car, that takes all day… I’m just kidding. That doesn’t happen. But something disrupts your day where you can’t get those pages. And so you didn’t fail; you’ve got to think in weeks. I just don’t know if I can do that… Yet.
So I like to do that… Like, whenever I’m not sure if I can do something, I like to put the word “yet” in parentheses at the end. If not, then comma, space, “yet”, with an exclamation point. Because I am determined to do something, but am I ready to do it right now? Maybe not. I just can’t do it yet. So at some point I’ll equip myself to do so… Or not.
It’s also true with a big coding side project, right? Like, if you want to build some significant open source library, that’s a commitment measured in weeks, if not months or years, too.
Yeah, for sure. Well, the reason I was asking you about your next book was less about what can we look forward to, but more, back to Jerod’s question, since you didn’t have artificial intelligence assist you in the creation of this current book we’re talking about, if you would use it, how would you use it to help you and assist you?
So I think people should use it. I also think I would not, which is weird. Not for any moral or ethical [unintelligible 01:17:48.22] but just – I’m obviously a very left-brained, orderly, intellectual guy; I write code [unintelligible 01:17:54.17] companies, blah, blah, blah. But when I’m writing, I’m totally not that. When I’m writing, it’s more like “Well, I’m gonna jump off the cliff and hope my subconscious catches me on the way down. I have no idea where I’m going or what I’m doing.” So for my particular wackadoodle, “I don’t know, I’m making it up as I go” process, I don’t think ChatGPT would help. For most authors, it would help, and should be used. But for my weirdness, I don’t know.
[01:18:16.15] I’m going to make it some homework for myself; potentially for the shownotes, Jerod. I’m going to ask ChatGPT: if I were to write a book with DNS as the villain and the antagonist of the story, or the ending plotline, how would I go about it? Like, give me 50 200-word summaries of the book, like you might see on the back of a book. Summarize the book in 200 to 500 words, give me 40 versions of that, and see if there’s anything interesting… Because I’m kind of curious - could DNS be a true villain?
[unintelligible 01:18:45.15] I might use it, after I’ve written the first draft… Like, have it go through the first draft and say “So what needs work? How would you summarize this?” An analysis of it after I’ve done it. It’d be pretty good for that, I think.
Analysis is great, because there’s lots of – I mean, I’ve said this on podcasts before, because Jerod and I podcast a lot together, but there are times when I want to ask… You know, Jerod’s my business partner, and so there are questions I ask people like him, given the role he serves in our enterprise… “Can you help me with this?” But he’s busy; he doesn’t need to answer my dumb questions. And I’ve got this thing here that’s totally willing, potentially with more accuracy, and potentially with more patience.
Hey, [unintelligible 01:19:24.20]
And so I think that’s to be leveraged.
[laughs]
I think to not leverage that is silly. Not a very wise move, to not leverage such a willing participant in your adventures. What a shame.
Yeah. It’s like, everyone has an assistant now. And if you’re not giving your assistant jobs, then that’s kind of silly, because you have an assistant.
Right.
Well, let’s look forward to Exadelic 2…
Ooh, there’s another good use - continuity. So you have it read your first book, and then you tell it “Help me make sense of my sequel without contradicting something in my first book.”
Yeah, if I write something contradictory, turn it into [unintelligible 01:20:02.09]
Exactly. Because continuity, as you said, Adam, with this Bobiverse thing, this septology or whatever that was written… I mean, that has to just get harder and harder and harder the more sequels you write.
Well, it’s turned into a septology. It was originally a trilogy. But yes.
Right. A sequel - is it even possible? Is it feasible? Is Exadelic 2 a potential in your life, or are we just making stuff up?
It’s possible. I put sequel hooks into everything I write, just out of reflex. I have no current plans to write a sequel. I have only the vaguest idea of what it might entail. But there are various sequel hooks. It could be done.
Okay. Well, no pressure. Enjoy it, man, because you shipped a book to the world today, and not very many people have done that… And happy to be able to talk to you on your shipping day. It’s cool.
I’m glad this worked out on publication date. It’s great, and I’m pretty pumped.
Yeah. Well, excited for books, excited for you… And we’ll read them.
Alright.
One more question as we close out, which will get Adam to read your book in a split second, is - audio version. Is there a plan? Will it be read? Who will read it? What’s going to happen? Because that’s something that’s highly desirable, is an audio version.
I agree. There is a plan. Details are not yet forthcoming - they’re sort of coming down the pipeline, but I’m not totally sure what.
Alright, fair enough. In the meantime, just take the text, pipe it into some sort of AI, and have it read it to you.
Yeah, I think OpenAI has a Whisper [unintelligible 01:21:30.15] that’ll do that for you.
There you go.
Did we even read the sentence on the front though, for the audience? Because we’ve talked about the book, but we haven’t gone into – I’m not suggesting we do so, but have we even read the hook?
No.
Let me read the hook for everybody, so when you walk away from this, you have a reason to go and check this book out. Of course, Exadelic is the title… And what it says on the front, it says “The world’s most powerful AI has awakened to sentience, and decided it’s your worst enemy.” Dun-dun-dun.
Dun-dun-dun.
Dun-dun-dun.
There you go. Go read that book.
There you go. Alright. That’s all for this time. Thanks for hanging with us, everybody.
Thank you very much. That was fun.
Bye Jon, bye friends.
So long!
Our transcripts are open source on GitHub. Improvements are welcome. 💚