Shawn “swyx” Wang is back to talk with us about the state of DevRel in light of ZIRP (the Zero Interest Rate Phenomenon), the data that backs up the rise and fall of job openings, whether or not DevRel is dead or dying, speculation about the near-term arrival of AGI, AI Engineering as the last job standing, Cognition’s innovation with Devin as well as their missteps during Devin’s launch, and what’s to come in the next round of AI innovation.
Featuring
Sponsors
Sentry – Code breaks, fix it faster. Don’t just observe. Take action. Sentry is the only app monitoring platform built for developers that gets to the root cause for every issue. 90,000+ growing teams use Sentry to find problems fast. Use the code CHANGELOG
when you sign up to get $100 OFF the team plan.
1Password – Build securely with 1Password - 1Password simplifies how you securely use, manage, and integrate developer credentials. Manage SSH keys and sign Git commits. Access secrets stored in 1Password. Automate administrative tasks. Integrate with third-party tools. Also, check out our INFRASTRUCTURE.md file for more details on how we do secrets with 1Password.
Paragon – Ship native integrations to production in days with more than 130 pre-built connectors, or configure your own custom integrations. Built for product and engineering. Learn more at useparagon.com/changelog
Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.
Notes & Links
- DevRel’s Death as Zero Interest Rate Phenomenon
- Measuring Developer Relations
- pandas
- Jasper AI
- Writer.com
- harvey.ai
- Cognition.ai
- Anthropic
- Changelog Interviews #594: Microsoft is all-in on AI: Part 2 (with Mark Russinovich, Eric Boyd & Neha Batra)
- The Rise of the AI Engineer
- AI News
- von Neumann probes (Self-replicating spacecraft)
Chapters
| Chapter Number | Chapter Start Time | Chapter Title | Chapter Duration |
|---|---|---|---|
| 1 | 00:00 | Let's talk! | 00:38 |
| 2 | 00:38 | Sponsor: Sentry | 03:39 |
| 3 | 04:17 | ZIRP and Friends | 00:45 |
| 4 | 05:02 | Where in the world? | 02:21 |
| 5 | 07:23 | Zero interest-rate phenomenon (ZIRP) | 04:35 |
| 6 | 11:58 | Good vs bad DevRel | 06:05 |
| 7 | 18:03 | Sponsor: 1Password | 02:37 |
| 8 | 20:40 | What exactly is DevRel? | 10:06 |
| 9 | 30:46 | Just publish hits | 08:03 |
| 10 | 38:49 | DevRel folded into Product? | 05:41 |
| 11 | 44:30 | Just tell/show people the story | 03:54 |
| 12 | 48:24 | Data Science ~> AI Engineering? | 01:51 |
| 13 | 50:15 | Attributes of an AI Engineer? | 03:06 |
| 14 | 53:21 | Success besides Midjourney? | 05:32 |
| 15 | 58:53 | Sponsor: Paragon | 03:39 |
| 16 | 1:02:32 | Share more about Cognition/Devin | 09:56 |
| 17 | 1:12:28 | Is AI alive? | 05:05 |
| 18 | 1:17:33 | PULL THE PLUG?!! | 04:00 |
| 19 | 1:21:33 | If West World is even close... | 08:05 |
| 20 | 1:29:39 | We're done. | 00:35 |
| 21 | 1:30:13 | Outro time | 01:31 |
| 22 | 1:31:44 | ++ Teaser | 01:38 |
Transcript
Play the audio to listen along while you enjoy the transcript. 🎧
Alright, we’re here with our old friend, multi-time recurring guest; too many times to count. I don’t know, swyx - 3, 4, 7, 11 times on the pod? I’m not sure. But you’re back. It’s been a little while. Good to have you, swyx. Welcome back.
Thanks. Good to be back. I’ve been always a loyal listener, and it’s just an honor to be invited on every single time. It never gets old.
Well, we love your enthusiasm and your availability. I can hop on with you on Monday and say “Hey, do you want to come on the pod tomorrow?” and you’re like “Let’s rock and roll.” [laughter] That helps.
That means I keep myself relatively free… I currently do not have a real job. That’s what it means. I did move a meeting, but that’s just because it’s easy for me to move stuff, because I’m my own boss now effectively.
Remind me where you’re at in the world again.
Yes. So I am now no longer having a real job. I run – I always call it two and a half companies. It’s the Latent Space podcast and newsletter, so the media empire, which we can talk about later… The AI Engineer Conference, which just finished two weeks ago… And I also have my own venture-backed company, Smol AI, which I’m working on with a couple of engineers. So yeah, it’s hard to describe, because I don’t work at a regular employer; I kind of split my time between three business ventures. But that’s just how my attention is spent. I think each of them independently has a different time horizon of success, and hopefully they all have a common theme of me being a prime mover in the engineering field.
How do you make money?
So the thing that actually currently makes money is the conference that I run. We have now successfully run a 2,000-person conference for the first time ever… And same deal as most conferences - we sell tickets and sell sponsorships. You pre-commit to a whole bunch of expenses up front, and then you freak out for three months, hoping that you sell enough tickets to cover your costs. And then we do, and we make some money back on top of it, and that’s the profit.
Well, happy to hear that you’re running in the black there. A lot of folks run conferences in the red, or very near the red. So that’s awesome.
Yeah. We had a lot of help, because Andrej Karpathy basically tweeted about us, and we immediately sold out like an hour later. So I think for me it’s a long-term game of like – I want to build like the KubeCon equivalent, the definitive industry conference for AI engineering, which is the thing that I’ve decided to sort of place all my chips on. So we don’t have to talk about that at this time, but I think it’s relevant to Dev Rel, in the sense that when I was a developer advocate I spoke at conferences, and a couple of times we actually even organized company conferences. That’s pretty much the peak of what you do as a Dev Rel. And any Dev Rel who believes their own BS enough should actually go out on their own and do this as an independent business venture, because it is worth much more to dozens and maybe hundreds of companies than it is to a single company.
Yeah, that’s interesting. Well, I remember last time you were on the pod we had just experienced the ChatGPT moment, I think. It was probably a month or so after that… And you said “This is it. I’m going all in on this. I’m learning –”, which you do, which is one of the reasons why we invite you on, because you learn in public, and we learn from you and with you… And you had made multiple transitions, kind of frontend stuff, Dev Rel, you were thinking backend for a while… I know you were at Temporal, doing workflows and backends, and then it’s like “Alright, here’s where I’m going to really dive in.” And that was a while ago. So I definitely want to catch up with you on that stuff.
[00:08:00.12] But the reason why you caught my eye this time around was a post about Dev Rel, and about the zero interest rate phenomenon… It seems like we’re learning now, post-ZIRP, that a lot of things we were living with and thinking were normal perhaps were not so normal; they were kind of bubbly, or frothy, or maybe symptoms or side effects of all of this cheap money which was in our industry… And that quickly left our industry when the macroeconomic situation changed.
You have a post which we covered in news just the other day, yesterday as we record, but a few days back as we ship, “Dev Rel’s death as zero interest rate phenomenon”, where you ask and answer the same question, “Is Dev Rel dead?” And I’ve heard a few whispers of this, like “Okay, is Dev Rel dead, or dying? Or what’s going on with Dev Rel as a thing?” And so that’s the opening of the can. Swyx, take it where you want. First of all, you say no, but maybe why is it not dead? Start there.
I think to claim something as dead means there’s no longer demand for a role like this, and it’s objectively not true. I have friends who are desperately trying to hire Dev Rel, with full knowledge of all its faults, because they are close friends of mine, and I have complained to them about the failings of Dev Rel. So full knowledge that a lot of Dev Rel is completely ineffective, a lot of people are really bad at Dev Rel, and they still have the job… Full knowledge of all that, they still need it. So you cannot say the role is dead if people just really demand it still. But just like with any technology, the moment people start asking “Is it dead?”, it’s not quite dead, but it’s less cool.
Yeah, it’s not a good sign, right?
Redux isn’t dead, but people have been asking “Is it dead?” Yeah, you know, it declines… And I try to quantify it. I think my attempt is the first time anyone’s actually tried to answer “It’s not dead, but how much of it has died?” And my number is 30%. Over the ZIRP period it increased 200%, and then it declined 30% from the peak.
And where do you get that? How do you quantify that?
Google Trends. That most objective of data sources.
A good proxy for perhaps being truthful, right?
In terms of search, like what’s the search trend?
Search trends, yeah. But also - and that seems to coincidentally line up with the Common Room industry survey of Dev Rels, where about 26% of them said they’ve been involved in layoffs. And anecdotally, we’ve seen a bunch of Dev Rel layoffs, including my old company, Netlify, including PlanetScale, including a bunch of other companies out there. Auth0 as well. And these are layoffs without replacement. So straight up, we no longer have Dev Rel at companies that used to heavily invest in Dev Rel. And that is one form of Dev Rel being dead. Actually, I’ll put this on record, because I couldn’t find an authoritative source, so I’ll be the authoritative source. Microsoft, 2018 to 2019, hired something like 200 Dev Rels. They call them cloud developer advocates. Some of the top names in our industry - they spent a lot of money gathering all of them. And like two years later, half of them are gone. It’s not polite to talk about power politics within Microsoft, but there was definitely a big power struggle in there, trying to build a Dev Rel org and not succeeding. I think that was a really early precursor. Because that wasn’t ZIRP, right? 2018-2019 wasn’t ZIRP. But it was a pretty early precursor of people having very inflated expectations of what Dev Rel could do for a business, throwing a whole bunch of money at it, and then realizing that money alone and the number of warm bodies alone doesn’t actually solve it. Like, you actually have to have taste, and a clear message, and a working system that scales healthily and effectively, and that takes time to build.
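For anyone who wants to sanity-check that estimate themselves, here is a minimal sketch using the unofficial pytrends library - an assumption on our part, since no specific tool is named in the episode - and the exact percentages will depend on the search term and timeframe you pick.

```python
# A minimal sketch of reproducing the Google Trends estimate, assuming the
# unofficial pytrends library (pip install pytrends). Numbers will vary with
# the exact search term and timeframe; "developer relations" is our guess.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["developer relations"], timeframe="2015-01-01 2024-06-30")
interest = pytrends.interest_over_time()  # DataFrame of relative interest, 0-100

series = interest["developer relations"]
baseline = series.iloc[:12].mean()   # pre-ZIRP baseline (first year of the window)
peak = series.max()                  # ZIRP-era peak
recent = series.iloc[-3:].mean()     # average of the last three months

print(f"rise from baseline to peak: {100 * (peak - baseline) / baseline:.0f}%")
print(f"decline from peak:          {100 * (peak - recent) / peak:.0f}%")
```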
[00:11:56.26] Yeah. There’s a lot of facets to this, and one of which you mentioned maybe offhand that I think about a lot… I think Adam thinks about it a lot, because we talked to a lot of Dev Rels; of course, Dev Rels would love to be on our shows, and all this kind of stuff. And there’s a fine line between a Dev Rel and a shill… And I think there’s a big difference between good Dev Rel and bad Dev Rel, at least from where we’re seated. We can just like see through certain things… And then other people were like “Yeah, we’d love to have you on.” And it’s like there’s a big difference between the two. It’s clear. And I wonder – first of all, do you seem to agree with that? That’s generally a true sentiment?
Sure. Yeah. Absolutely.
So secondly, when it comes time to die, but then also have life –
Gosh, Jerod…
…like, there’s still value in the position; it seems like - and maybe this is just like stating the obvious, or shallow, but it’s like the good ones are gonna stick around, and the bad ones are the ones… No offense to any individuals, but the ones who aren’t good at what they do, they wouldn’t provide much value in the first place, right?
Yeah. There’s some amount of that, but also inherent in the job - it’s a high burnout job regardless of the macro, in the sense that it’s mostly like a mid-career job. There’s very, very few chief Dev Rel officers. There’s some CDXOs out there, but there’s no career path to like VP Dev Rel in most companies. It’s a job that you sign on –
A stepping stone.
It’s a stepping stone, or you’ve just decided to opt out of the rat race altogether, and all you want to do is make contact –
They’re a lifestyle –
It’s a lifestyle business.
And travel.
Yeah. Well, for most people, professional travel actually starts becoming a drag. It’s nice to see your friends every once a quarter or something, but professional travel actually is a drag. Like, nobody actually really wants to do that in that role… Unless you just love travel. But [unintelligible 00:13:50.05]
That’s my point. I’ve met a few… And maybe they’re just saying this, but it’s like, they just love traveling.
Yeah, check back with them after like two years of it, you know…
[laughs]
Definitely. Over time that gets old, for sure. Even like you go to Sedona or some special place for vacation, like “Man, I want to buy a house here”, which is how I felt when I went to Sedona. I was like “Man, I want to live here.” But if I lived in Sedona – and I lived in Orlando, Florida for a while too, and it’s like, after a while Orlando is just Orlando. Now, yeah, I can go to Universal Studios anytime I want… But do I? I went there like twice, maybe three times over a three-year span. Like, it’s just not – vacation places…
It loses its luster.
Exactly. It loses its luster in terms of like “Well, I don’t think this two years of travel constantly is really the game.” Steve Klabnik I think is probably the best example that I’ve known of, well before even any of the ZIRP. Or maybe it was even like – maybe it was ZIRP. I don’t know. It was freer money. Maybe that was part of it. But not this phenomenon we’re speaking of when it comes to like COVID, and like literally lots of free money out there, this more recent occurrence of it. Steve Klabnik traveled, I think, so much, and he was outspoken about this way, way back in the day, and just got burnt out bad on it… Because it’s just – you’re not built for it. Timezones, travel, you never know where you’re at in the world…You’re constantly in a different timezone… Your body, your circadian rhythm can’t even keep up with it.
Yeah. So I want to preface this with, I think, maybe some people’s impression of Dev Rel is that it’s a lot of travel… That number has definitely come down a lot. Part of it is cost cutting, part of it is environmental concerns, and part of it is people just don’t want to travel that much. So actually, when I say people burn out of the job, it’s actually not only that. And probably it’s not even majority travel; it’s actually just the grind of constantly dealing with people who are new to a technology… I call this the eternal September effect. There will always be more beginners. So if you care about, for example, viewership numbers, you’re always writing the 101-level intro to whatever, and that is your life, for as long as you want to do Dev Rel… Because that’s the highest-TAM content that’s possible.
[00:16:10.17] The best intro to something - that’s the sort of pinnacle of your success, is to write the best intro to something. So a lot of people don’t want to do that forever. They want to have more seniority and more impact in their work… And impact being maybe financial impact, rather than impact on the industry… Because Dev Rel does have impact on the industry, and they choose to move on.
So people leave the role not only because they’re not good at their jobs; sometimes they just aspire to something different. And that’s normal as well. This job, way more than other jobs in startups, has more churn inherent to it, and that’s normal. I just want to establish that. It’s not a judgment thing of like “Oh, you sucked at it, so you had to leave.”
Yeah, so overall this was a very tough piece to write, because obviously, a lot of my friends are Dev Rel. Obviously, I had that job for a long time, and a lot of people know me for that, and come to me for advice on that… And actually – so the idea for this post came one calendar year ago; I tweeted it on my sort of private alt account… And I had to wait for more people to start saying it for me to be okay publishing what you can read today. Because me saying it too early would have pissed off a lot of my friends who had that job. And I had to find the words that would accurately say what I was thinking, without also being too inflammatory. I’m trying not to bite the hand that feeds me, but I’m also saying “Hey, let’s call a spade a spade.” There was a mania in Dev Rel over the ZIRP period. And now everyone can see it. Let’s put an end to it, let’s try to learn some lessons from it, now that it’s okay to say it out loud.
Break: [00:17:59.11]
This Microsoft era you mentioned that was pre-ZIRP, and maybe even after it - like, I think one of the reasons maybe there’s churn is that it’s kind of hard, even to this date, to pin down what exactly Dev Rel is. And when you have a challenge defining what it is, you have a challenge defining what it should do, what the function should do, which maybe is the reason why people flunk out of the job, or move along, or churn - because maybe even in the Microsoft era that you referenced, where they hired lots, it was challenging to define “What exactly are you trying to do?” Because if the job primarily is focused on a function that sits between the company, which usually is a tech product of sorts - a SaaS, a dev tool, a dev service etc, maybe even an open source company, a [unintelligible 00:21:26.09] company, or an open source project - they can have Dev Rel as well - if the function is to nurture the relationship between the company, potentially the purchase of a product, and the developer community, there’s a lot of ambiguity in there in terms of what you could do to be successful. And it might be challenging even as a manager to manage Dev Rels. Like, what do you really do here? What can you do here? What is success in your role? And when you have a lack of clarity as an individual, it’s kind of hard to maintain what a good friend of mine, coach Michael Burt, calls “the prey drive”. You have to have a reason. It’s your because goals. “Because I’m a Dev Rel, these are the things I do in my role.” Or “Because I want to do these things within the developer community, I have clarity.” When you have ambiguity in your role, or a lack of clarity, it’s kind of hard to wake up every day and be motivated and get something done. Or when you’re tweeting or doing social media, or doing these things that aren’t really seen by peers, or adjacent peers, like engineers, or marketing, or sales… It’s like, that person is Dev Rel and they’re just tweeting. Like, is that work?
So when you feel like you’re not clear, it’s kind of hard to just get up every day and do what you do well, unless you’re a self-motivated person. So I guess all that to say, how much of this churn is because the management or the definition of what the role is, and what success in the role looks like, isn’t well defined enough for people to be successful?
I would say – so I worked at AWS as Dev Rel. I’ve never worked at Microsoft. But I do think you should have some faith in the big corps to really define roles before they hire for them… Because it’s hard to open headcount in these things. And so they have their definition. And I think there’s more security there, just because these things are very, very well defined, at least internally, for that stuff.
What’s less defined is the startups side of things, where I just got a bunch of funding, I’m going to allocate one person out of my 15-person team to go be that sort of public face for my company. What is your job? Do whatever seems right. And I had that job. That was me at Netlify. I had no manager for a year… It was fantastic. It was also absolutely ZIRP, because I just did whatever I felt like doing. It was fantastic. And then eventually we got adult supervision with Sarah Drasner.
[00:23:58.12] But also, I thrived. It was the most intrinsically motivating job I’ve had, just figuring out the meta game… And I think a part of it is – with marketing, with anything, with people, the true [unintelligible 00:24:14.18] cannot be taught, and the true Dev Rel is not the Dev Rel that can be written down. And the moment you try to write it down and try to systematize it, the game has already moved on six months ago, to the new game. If you think “Here’s the way to success, and we’re gonna scale this for the next five years”, tough luck. People and trends move quicker than that. So it can be a really tough thing to nail down.
That said, I do have a piece that is fairly popular… I’m told it’s like required reading within Google - on measuring Dev Rel. And I think basically the definition of any job, just from the outset, [unintelligible 00:24:53.08] as a black box. Money goes in. What comes out of it? So I have three major buckets: it’s community, it’s content, it’s product. And either you’re producing in one of these areas, or you’re not at all. And I think most expectations of Dev Rel fall into one of these three things. And I can go into those things further, but you should have some definitions of what the visible output of Dev Rel should be from there, and then that basically becomes the job. And if it’s not successfully captured in those three buckets, then you’re probably doing a different job than what most people think of as Dev Rel.
I think that’s well said. I think that it is really tricky, because almost the more formal and described and delineated the role becomes, you’re probably on the lower end of the value chain in terms of like actually being able to execute on it well… For the reasons that you just stated. It’s like you almost need this – the formula that works really well is like the small startup, and I think you talked about there are companies who are doing really well with no Dev Rel or minimal Dev Rel, but they all have like a charismatic leader, or somebody who’s already very online, and very good at being online, that just continually brings them more and more interest, more and more community, more and more relationships… And that person’s almost a unicorn to a certain extent; like, can you systematize what they do, and then hand it to somebody else and say “Go and do this?” I’m a bit skeptical. Maybe it can work, but I don’t think it’s going to work on a repeated basis into the hundreds of employees, right?
Into the hundreds of employees, on a smaller scale, yes. If we’re talking about like the Twilios of the world, which I feel like they have successfully done that, it becomes much less personality-driven, and much more about a repeatable process that can be scaled across every major city. And it might be also a relic of maybe seven, eight years ago, when things were maybe more sort of in-person-centric. Now that I think online has taken more and more and more of our mindshare and time, and remote working and all that, the sort of online-first Dev Rel, meaning that it is just no location, means there’s no need for repetition, and therefore more centralization in a single person. So I think that’s important to note.
I think the other thing also – so I agree that the sort of low-level, junior/beginner person thinks about it as like “Oh, I will produce three blog posts a month”, or something. The higher-level thought process is “I am [unintelligible 00:27:28.09] a mission. I’m promoting an idea. And the output is three blog posts. But those are downstream of me promoting the idea. Like, I’m starting a movement.” For me, when I talk about the stuff that I do for Latent Space, the AI Engineer Conference and Smol AI, I don’t think about it as I organize a conference; I think about it as “I am starting my own industry, and the visible output is that I start a conference.”
[00:27:58.14] So I think people who think about it on a higher level have a more coordinated approach to their actions. Even though the actions still look the same, they have a more cohesive outcome, because there’s a broader plan beyond that, instead of the individual units. And I do think for the junior Dev Rels – so I do some advising on the side, just for fun… And for the beginners who come into the job, they very much do like the one-off “Hey, we’ve got to get this launch right. We’ve got to max out this launch, and we’ve got to get all the retweets… Make this the most awesome launch possible.” Whereas for me, it’s not about the launch. It’s about the long-running campaign from past the launch through whatever’s next. And I think getting people to see that whole journey I think is something that levels them up to the next tier.
And I think that’s what the founders can uniquely do. And you talked about these unicorn founders. That’s what the founders can uniquely do apart from the hired guns, which is the founders started the company because of this whole mission, and being able to authentically tell that story. I think it’s very rare. I think maybe one possible positive example is Lee Rob at Vercel, who has successfully taken on that sort of voice of Next.js.
A great Dev Rel, by the way. When I was thinking about the good ones, he was one that was in my head. I think the guy does a great job.
He actually prompted this post on the “Death of Dev Rel”, because he was like “Yeah, I’ve been thinking about this a lot.” Because he and I chat offline quite a bit. But yeah, I mean, it used to be Guillermo primarily promoting Next.js and Vercel, and now I think Lee Rob is kind of number two in the company, at least as public-facing figures go, being the voice of the company. And I think that’s absolutely a rare success story.
I think it’s very much a two-way street about the founder being able to trust whoever they hire, and then the employee being able to rise up to the task. And very often those two things don’t happen in either direction. So that’s what it is.
The last thing I want to point out as well, just in terms of why people fail at the job, or why measurements fail… This is something I’ve been thinking about which I did not sufficiently capture in my [unintelligible 00:30:00.24] post, which is that often Dev Rel, just like any other content or media business, is a hits-driven business, not a consistent-output business. I used to often say, I’ll write 50 blog posts a year, but just one of the 50 will actually be the one that people remember. And it’s very, very hard to run any business, and it’s very hard to learn from any information or any market reaction, when most of the stuff you work on is going to flop. And it doesn’t mean that anything is wrong; it means you should still keep going, even though most of your stuff is flopping, because it’s the stuff that’s going to produce that one hit that’s going to justify everything else.
Right. No, I 100% agree. Any hits-based business is like that. You say “Okay, well, all I need is hits. Then I’m just going to only publish hits.”
Oh, yeah. Just don’t fail.
Yeah, exactly. You don’t know why it’s – I mean, even with this post, swyx, you said “I’m not sure why you guys invited me on” or “I’m not sure why this one resonated with people.”
Yeah. Oh, there’s many reasons why I’m not sure. [laughs]
“I don’t even know why.” But I was like “Oh, this is interesting.” And I hadn’t talked to you in a while, so…
I could probably answer that question…
Okay.
Well, I think that we talk to a lot of Dev Rels, we have a lot of friends, and fans, and people we’re fond of, that are in this space, that have had, I would just say, tumultuous times. And so it’s a hit because we don’t want to see people or companies shrink the size of that particular function, because that means friends of ours are out of work, or they’re changing what they do. They’re moving into adjacent roles, or different roles, maybe directly into marketing, where Dev Rel’s sort of adjacent to it, but in a lot of cases under it… Sometimes even under Product… And I think we care, because it shows the health, to some degree, of our industry.
[00:32:06.10] If one of the core functions is withering or failing or churning or not right, I think it’s an indicator of how healthy the market might be. I think that’s why we care about this particular function so much, because this is literally where product and the future potential buyer might be. One thing you reference in your measurement piece is the Sean Ellis question, which essentially is “How would you feel if you could no longer use the product?” And so if you ask this question to a free user, a free-tier user, and you say “How would you feel if Changelog was no longer a thing tomorrow?” Would you be happy, unhappy, somewhat disappointed, very disappointed?
Devastated?
Devastated. Thank you, swyx. I think that if we had a large majority of people saying “devastated” or “very disappointed” versus the other two or three options, then that means that we’ve got some version of product-market fit, or we’re very beloved, and so we should find a way to exist or live… If that were us on a deathbed, Jerod. Geez, this is terrible. Point is, is that I think that we have a lot of people who are in that space we care about, and any unhealthy measure in this particular space shows signs of an unhealthy market. That’s why we care. That’s why I care.
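As a side note for anyone unfamiliar with the Sean Ellis test Adam is describing: the score is simply the share of respondents answering “very disappointed”, and ~40% is the commonly cited product-market-fit threshold. A tiny sketch with made-up responses:

```python
# Scoring the Sean Ellis question ("How would you feel if you could no longer
# use the product?"). The responses below are hypothetical; the 40% threshold
# is the commonly cited rule of thumb, not something prescribed in this episode.
from collections import Counter

responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "not disappointed", "very disappointed", "somewhat disappointed",
]

counts = Counter(responses)
pmf_score = counts["very disappointed"] / len(responses)

print(f"'very disappointed' share: {pmf_score:.0%}")
print("PMF signal" if pmf_score >= 0.40 else "Below the usual 40% threshold")
```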
Plus, we’re looking for answers and explanations too, right? I mean, we see things going on, some of us talk about it, some of us don’t… And it’s tumultuous, and it’s scary and sad. And then you’re looking for answers. You’re looking for like “Well, what was going on?” And it’s like “Well, here’s a post that surmises that it was this.” And now, “Okay, well, that rings true.” I mean, it rang true with me, which is why I put it on Changelog News… And then at the end of it, like “Well, given all this, what now?” How can we actually move forward? Because we know that it’s not dead insofar as it’s not a valueless thing. There’s huge value in having high-quality developer relations around your product or service and all that that entails… But the free money is gone, which was allowing it to bubble… And as we’ve discussed, now what?
In that post-ZIRP environment, what does it mean for Dev Rels, what does it mean for everybody else? Is it still a job that I should go out and seek? Is it not? Is it something that I – what are the “now whats”? So swyx, key in on that point, and what are some of your thoughts being deep in this area, of here we are, 2024, halfway done… Who knows what’s going to happen by the end of the year… But I don’t think we’re gonna get back to zero interest rates by then. We might see one, maybe two cuts from the Fed this year… Maybe not. Maybe zero. But for folks who are either in Dev Rel currently, or considering it, or trying to get back into it, what are your thoughts for them?
Yeah, so I actually tried to leave solutions out of this post, because it’s been covered elsewhere, and I haven’t identified as Dev Rel for maybe a couple of years now. There’s a bunch of solutions out there. I do think, just straight up, the job hasn’t really changed. I think what the removal of free money has led to is basically that we can no longer get by with a lack of accountability in Dev Rel. It’s probably a good thing, it’s probably something that we needed. And so what I tried to do in the post is to list out the smells of what ZIRP Dev Rel looks like. And so I tried to use that as a checklist for people in the industry, of like “If you were doing this, there is no longer any appetite for this.” It is no longer okay to do – let’s just call it free-tier Dev Rel, for example.
[00:35:52.13] You only talk about how to use your company’s free services, and have blissfully zero knowledge of anything paid, because that doesn’t serve the company’s needs… And actually, more so the point, it doesn’t actually serve the customer very well, because you don’t know your product.
And there was a lot of free-tier Dev Rel in ZIRP… Because it’s easy to talk about something that you can adopt for free, and it’s easy to get applause for something that’s free. The hard part - and this is what really challenges your skills - is saying why your company’s products are actually worth real money. And people who obviously were successful at that were probably more valuable to the business anyway.
So yeah, there’s a lot of thoughts… So at the end of the post I linked to Lee Rob, I linked to Sam Julien, who used to be VP of Dev Rel at Auth0, and I linked to myself as independent thoughts. I think everyone’s basically – the common consensus, let’s just say, is that Dev Rel moves into developer experience, which is kind of an annoying rebrand. Every industry likes to rebrand itself. In DevOps there’s this ongoing rebrand to platform engineering. Same thing for Dev Rel.
Same thing for data science, right swyx? Data science…
[laughs] We can talk about that after, but… I would argue not, but…
Okay. Save it, save it.
We can save that. For me, I do think that basically there’s a maturation of Dev Rel that I’m looking for, where you don’t have the one-size-fits-all Dev Rel that does the sort of full stack of production, to publication, to sort of idea generation, and you have a sort of front/middle/back office Dev Rel. This is definitely for more scaled-up organizations. I was leading a team of nine at my previous company, and I definitely saw that need to grow more structured process around Dev Rel.
And then I think understanding – for people who choose the developer experience path, understanding how you interact with the rest of product and engineering, and having the buy-in to actually ship features… For Lee, he straight up just became VP of Product. There was no “We will coexist with Product.” No. Dev Rel just took over product. That’s how they solved it.
Good for him.
Very, very few other companies will actually let that happen. Because Product usually has way more political power than Dev Rel. That’s just how it is. So Dev Rel then gets [unintelligible 00:38:10.09] to marketing, and then loses all power from there. I think having Dev Rel become PMs is the path that I see some of the really motivated people interested in impact take… And I think that is the right way to do things. But Dev Rel as a title is going to continue to exist as primarily a marketing, community and docs function, much more than product, just because product is its own beast. It’s a much more established industry by far, and much more politically powerful, and therefore a harder force to have any impact on. I don’t know if anything I said is controversial…
Well, leading Product is tough. That’s a tough role. What do you know about how things have changed for Lee Rob? Because we’ve talked to him several times, but I’m not familiar with the details of how Dev Rel folded into Product - how did that actually play out? How does that roll out now? Like you said, Dev Rel took it over. What does that mean?
I mean, so he was promoted from Dev Rel to product. So there is still Dev Rel at Vercel, it is just far, far less visible than it used to be. And probably for the better, I don’t know. They basically just had attrition without replacement. And that’s just how the team sort of shifted its priorities.
I mean, they needed a VP product, and Lee proved to himself and to the company that he was up to the task, and I guess they promoted him. I can’t really speak for his personal experience, just because I only hear tangentially from him and other people, but I don’t hear the full story… So you can talk to him about that.
[00:39:45.18] That was less about his personal specifics, and more about how they as an organization achieved that… Because leading Product and leading Dev Rel are uniquely different, but also not exactly far off. To build the best product you have to have a connection with the people that you’re building it for, which is a function of Dev Rel. A connection to community. But you also have to have a business mindset, like “Where do we actually make money? Where do our users really get joy? Where’s our business trying to go?” Not just “Where’s the product trying to go?” Which sometimes is similar or the same, but not always. So I would not suspect it would be easy for a Dev Rel to just take that over, unless they’ve got some prior product management leadership experience, or they’re just a Lee Rob, where they just slay it.
Yeah. Again, I’m not really speaking about his specifics, but I do think that if people care enough about developer experience, then it basically is a shadow product team anyway. This is something I’ve talked about again and again, which is kind of the existential problem with Dev Rel: you’re supposed to be the voice of the user, and it’s supposed to be a two-way street. You spread the good word out; that’s the dev evangelist role. And then the developer relations role takes the feedback that you get from developers and brings it back into the company, except most of the company doesn’t want to hear it, because they already have backlogs, and you’re just adding to the backlog, and you’re not welcome here, go away.
So a really good dev experience person would prioritize and justify and go like “Here’s what our developers are telling us. Listen to me, I’m good at the people, and I talk to the people…” [laughs]
Yeah, I understand. What you’re shining a light on though is that friction between Dev Rel’s job and product’s job.
Yes.
So rather than fight the fight, merge.
Just take it over. Yeah.
Yeah, this [unintelligible 00:41:41.16] It’s why I asked the question… Because I was less trying to understand Lee Rob’s personal specifics, but more this function of – because I think you kind of clarified it there, where there’s that friction point; if you’re just kind of going out there and you’ve got less respect or less political power with product and direction, can you even do your job well? If when you go back to the table you say “Hey, I’m out there, fighting the fight. I just flew 10,000 miles last month, spent three weekends on the road, and here’s the wisdom…” And everyone’s like –
Here’s what the company paid for it. We’re paying for this.
Right. And then Product is like “No, we’ve got different – I’ve been talking to users too, but in a different way. And so we’re gonna pause your thing, because we’ve got enough backlog already, and I’ve already led this direction here.” So it’s almost just wasted.
It’s absolutely wasted.
Yes, absolutely. Then you go back to the developers that you spoke to…
I was trying to be kind about it, I suppose, by saying ‘almost’.
Yeah. And then you didn’t deliver for the people you spoke to either, right? You couldn’t actually get their request represented in a way that gets it – so you’re ineffective on both sides. That can be incredibly frustrating, I’m sure.
Yeah. I call this a two-way *bleep* umbrella, for the company to the users, and then from the users back to the company. And you just have to filter a lot. And so I call this an emotional burden. And when I tweeted that, I was definitely feeling it.
Yeah.
Yeah, I mean, this comes with the territory… But if you want to actually change anything about it, instead of just tolerating it, you take over Product. And this is something I actually ended up doing at Temporal. I ended up being the PM of the TypeScript experience. And actually, I think it helps, that sort of two-way synergy, because after I was done being the PM, then I also then flipped back to my Dev Rel role and started talking about the stuff that I did. So if you are heavily involved in talking to users and designing the thing, then you can very authentically say “I designed this, and here’s how you’re supposed to use it”, and people believe you.
Right. And if you’re that highly invested, you might as well just be repping your own product, right?
Yeah. [laughs]
I mean, that seems to be the move, right? It’s easier than convincing the product manager to do your things, is just become the product manager. And that can be very difficult, unless it’s your own company, in which case you wear all the hats, and you bear all the burdens, but you also get all the upside.
[00:44:01.14] Yeah. I mean, that’s exactly what I’m doing. But I would say it’s a very tough job to hold all those things in one go… And I think it’s a very privileged position to be in, to help to do that for a company that has a lot more resources than you. So I’ll just say, yeah, if people are interested in entrepreneurship, you want to be able to build, and you want to be able to sell. This sort of dev experience, Dev Rel combined with product role is probably one of the best jobs out there in developer tooling.
I agree. I think putting Dev Rel - or whatever Dev Rel’s function is; even if you don’t call it Dev Rel - under Product makes a lot of sense. Because I think the reason why Dev Rel kind of gets this “shill”, as you mentioned earlier, Jerod, or this bad rep, or this sort of pejorative feeling is that you feel like you’re out there trying to sell, and that’s not your job. I think the job of Dev Rel generally is trying to showcase the vision of where the product is going, and get that resonance from the community, and see if it’s landing, and also create advocates out there, who become passionate about where you’re going, so that you can essentially take that wisdom you’ve got back to the team and say “This is what we’re doing. This is how people feel about it. This is where they’re not getting it, this is where my demos and my tutorials are not landing. This is where my 101s are falling short, is because of this part in the workflow”, or whatever it might be. Their job is not to sell, their job is to tell and share the story… Which, if you do it right does sell, but you’re not trying to sell.
Even in our ad spots – I don’t know how much you care about these things, swyx, how we do our ad spots… I literally tell these people that I sit down with, more often than not CEOs of the companies, I’m like “I don’t want you to sell. If in this conversation you’re trying to sell, we’re doing it wrong. I just want you to share your story. Can you share your story for me?” [phone ringing] And not that story on my phone. Sorry about that. It is to – just don’t sell. That’s my job, to tell people where to go, and to be excited about your thing, and to give people waypoints. Don’t come on here and sell. Same thing for Dev Rel. Don’t go out there and sell. Just tell people what we’re doing, and get that feedback on how we’re doing it. And is it working? And how do we change to make it work?
If I could make one tweak, instead of just tell people what we’re doing, you should nerdsnipe them. That is the way to hook developers, is like tell people what hard problem you worked on, and tell people the backstory to why you worked on it, or what’s the sort of intellectual history behind these ideas, and why is this the thing that is inevitably what everyone is going to be going towards…
Yes.
…whether or not it’s you… Like, you build it in house, or you buy it from us, or someone else builds it doesn’t matter. The industry is going this way. Are you with us, or are you part of the last [unintelligible 00:46:50.18] whatever. That is the kind of story that I like to tell, which is not just “Tell us what you’re doing”, but put us in a broader narrative of where are we at that moment in history, and I think you get the nerd snipe.
Definitely try to show a little bit of the behind-the-scenes. I think a lot of the standard marketing advice is benefits over features, and I think there’s a bit of an inversion for developers, where you want to talk about features, because you let the developers figure out the benefits. But go down to the implementation details, because people love to learn about that, so that they never have to touch it… And then go like “Here’s the benefit of that.” But if you only lead with benefits, like “We will accelerate your digital transformation by 10% in the next quarter”, I don’t care as a developer. Show me how it works, and tell me something cool.
Right. One flavor that I think would be interesting, which maybe we’ve done, maybe we haven’t done - and we definitely do it on some of our shows - is “Tell me how hard this particular thing was to build.” What did you have to go through to build this thing?
Exactly.
And that’s where the nerd sniping comes.
[00:47:54.03] The nerd snipe is so effective for selling the product, but also selling you on working with me. Like “Come join us, we work on cool things.”
Dan-tan.
Yeah, exactly.
And I just cannot tell people enough to do this, because I think you have to kind of repeat it to them a lot, that people want to be nerd sniped. They want to work on hard things. And if you just emphasize the nerd snipe, you accomplish both goals of doing any public appearance, which is recruiting and selling.
Nerd sniping for the win. So riddle me this… How has data science not been rebranded into AI engineering, or data engineering, or pick your flavor of the day? It seems like the data scientists are just doing what they were doing before, mostly - maybe there’s some deployment things going on now - and just like changing the label on their business card.
I think there’s definitely some rebranded data scientists that are transitioning over to generative AI really well… But I think there’s a qualitative difference in the kind of people that are doing really well in generative AI; they have no shared history, no shared skills, no shared language with the data scientists. There are many, many successful AI engineers that do not know Python. And in data science, not knowing Python, not knowing pandas is like “What are you even doing here? You’re not part of our club.”
Okay, so you’re saying the opportunity has broadened the industry, to where you don’t need to have the same background as a traditional data scientist.
Right. And this is not to say that demand for data scientists has gone down at all. This has been a secular growth trend for decades. I would just say this is a different type of skill set that makes you successful in this era, rather than the previous era. And whether or not you are successful here relies a lot more on your creativity and full-stack product development skills than just pure data science… Because data science comes in later, when you have data to work with. But now, when you have foundation models that you can just prompt and make an MVP so quickly, you actually need to be creative and quick to market, rather than being deliberative and analytical. Being analytical actually slows you down, and makes you too conservative. Like, what are you doing worrying about costs when the cost of intelligence for GPT-3 goes down 90% a year? That kind of stuff.
Makes sense. So what are the attributes of an AI engineer?
A-ha. [laughs]
Ha-ha! [laughs]
I have a convenient blog post to refer to people…
Oh, gosh…
Please read it out loud to us now. [laughs]
So yeah, obviously, not to be annoying, but I do actually have a blog post for this… And that’s part of the sort of meta game I do preach to people about the learning in public, which is anytime there’s a frequently asked question, you should have a canonical blog post for it… Not just because you can be that annoying person to send it to people, but actually because you can actually spend the time to think about it, so you have a more complete thought.
I think Kelsey Hightower often says “You don’t really know what you think until you write it down.” And the reason he’s so thoughtful is actually he writes a lot of stuff down first.
So there’s “The Rise of the AI Engineer” post that just celebrated its one-year anniversary, and it’s the start of a lot of things I’m doing… And more recently we actually published a [unintelligible 00:51:06.29] that published some reference job descriptions for people… And I like the framing of offensive and defensive AI engineering. Defensive meaning being able to create systems that fundamentally work on top of non-deterministic AI models. LLMs, as you might know, hallucinate, they’re non-deterministic, and they actually fail a lot. The p99 latencies are ridiculous sometimes, just for whatever inference load reasons your selected API provider might have. And so it’s effectively “How do you create a reliable service on top of fundamentally unreliable foundations?” Sound familiar? That’s distributed systems.
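To make the “reliable service on unreliable foundations” point concrete, here is a minimal sketch of that defensive pattern - timeouts, retries with backoff, and a graceful fallback. `call_model` is a placeholder of our own, not any particular provider’s SDK.

```python
# A minimal sketch of "defensive" AI engineering: wrap a flaky, non-deterministic
# model call with retries, exponential backoff with jitter, and a graceful
# fallback. call_model() is a stand-in, not a specific vendor's API.
import random
import time


def call_model(prompt: str, timeout: float) -> str:
    """Placeholder for a real LLM call; fails randomly to mimic bad p99s."""
    if random.random() < 0.3:
        raise TimeoutError("inference backend too slow")
    return f"response to: {prompt}"


def reliable_call(prompt: str, retries: int = 3, timeout: float = 10.0) -> str:
    delay = 1.0
    for attempt in range(1, retries + 1):
        try:
            return call_model(prompt, timeout=timeout)
        except (TimeoutError, ConnectionError):
            if attempt == retries:
                # Degrade gracefully instead of surfacing a hard error.
                return "Sorry, the assistant is unavailable right now."
            time.sleep(delay + random.random())  # backoff with jitter
            delay *= 2
    return ""  # unreachable; keeps type checkers happy


print(reliable_call("Summarize this changelog entry."))
```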
A lot of the same language, and maybe slightly different tooling, emerges coming out of that. That’s defensive, though. And there’s also – let’s just call it protecting against regressions, or optimizing costs… And that’s a lot of fine-tuning of smaller models and all that good stuff. But really, offensive AI engineering is exploring new frontiers. This capability just came out. How can we put that to good use in a sort of end-user product way that immediately clicks with them and generates a lot of revenue?
[00:52:19.24] I think the image companies have actually had the most success out of this. Midjourney is my favorite example of this, making something like $300 million a year with 50 employees. Completely bootstrapped, no VC funding. So for people counting at home, that’s $6 million per employee. And there are more examples I can list there, but it doesn’t really matter. If a new capability comes out, is it the optimization guy or the creative technologist that wins? It’s the creative technologist. And for me, it’s like, okay, most engineers are not creative technologists, but are they product people? Can they think about “How do I use this capability that just emerged to solve problems for a customer in some way?” They can be more creative there.
So I’m trying to basically explain why that is qualitatively different than the data scientist role, which is mostly analytical… Which is still very important. It’s just like a different skill set; if you just don’t have that gene in you of like being creative as a product thinker, then you won’t be as successful as someone who is.
Who are some other people besides Midjourney who are –
Oh, God… [laughs]
…very successful? Well, because from my perspective, I’ve seen – you know, set ChatGPT and the like aside; general-use chatbots… That as a category - set that category aside. Obviously, huge success. Lots of value, etc. I’ll give you that. Midjourney - give you that one. But the companies that have brand new products, that are making moves in the marketplace, that have gone beyond demo and hype to actual products people are paying for - I don’t have my thumb on that pulse. I’m not seeing much of that. I’m sure you’re seeing more of it, so that’s what I’m asking about specifically.
Yeah, so let’s have a bar for – it’s easy to get to production on a small use case that nobody cares about. So production to me is not good enough. So let’s have an even more aggressive bar of it must make $100 million a year. That’s at a point where you can IPO as an independent company.
Okay.
Maybe the bar’s 200, but that’s just a factor of two. So let’s just say $100 million. What AI use cases have made $100 million a year? So obviously, we talked about Midjourney. I have four, and then the fifth one is more speculative. Conveniently, this is another blog post, called “The Anatomy of Autonomy”, if people are looking this up. But generative text for writing - Jasper.ai and Writer.com both have above 100 – Jasper reached 75 million ARR before they imploded, and Writer.com I think is comfortably at 100…
How did they implode? What happened? Adam, did you miss this? What happened with Jasper?
So I don’t know what their ad revenues are today, but effectively they got rug-pulled –
They got acquired.
Well, they imploded before they got acquired.
They got rug-pulled?
The acquisition was the exit, yeah. I don’t – look, obviously I’m just saying secondhand stories from other people, so don’t hold me on any of this… But effectively, the founder [unintelligible 00:55:12.08]
Well, you’re on a podcast being transcribed, so… [laughter]
Yeah, but on the transcript he just said “Don’t quote me on this…”
Don’t quote my transcript…
Okay…
The founder sold a whole bunch of secondary and then just peaced out.
Okay…
So he basically lost interest in developing the company, but then also it seems like they – so they built a very successful business on top of GPT-3 before ChatGPT… And then a whole bunch of people found out after ChatGPT that they weren’t actually doing that much on top of GPT-3, and then they migrated to ChatGPT. So they were basically killed by ChatGPT is the common narrative. I don’t know how true that is, because their focus was very, very strong on eCommerce on Facebook. The reason you don’t hear about them is because you’re not on Facebook. They are. They did very, very well. They went from zero to 75 million revenue in two years. Very few people have done that.
[00:56:05.24] But anyway, so since then, the emergent winner in that sort of generative text for writing category is Writer.com. They seem to have figured out the sort of post-ChatGPT navigation… Which is not hard. Like, focusing on users and building differentiated features on top of the model is the job of AI engineering, and you just have to do a more creative, more dedicated job staying on top of it, and not being defeated by OpenAI’s first move into chatbots.
Right…
So I don’t know if that’s a fair – like, I really want to stress, I don’t know. This is not my industry, I don’t know this specific writing case, whether that’s a fair characterization of what Jasper went through… But it is an interesting story. So a fair amount of revenue there.
Copilot is now at, I think, 200 million in ARR. So well past the bar, right? And there’s a bunch of other smaller Copilot competitors, all with decent revenue, many of which spoke at the AI Engineer Conference that I held, so you can go look at that. ChatGPT, I think, is something like 2 billion a year in revenue…
I ruled that one out.
Yeah, exactly. So those are the four categories that we are very, very sure make sense. There’s a bunch of sort of Copilot-for-other-knowledge-worker type things. Harvey is now the emergent example of “We are Copilot for law, and every lawyer needs this, or you’re behind.” Like, fine. So for every knowledge-worker profession, there will be a Copilot for X. And each of those things will easily make $100 to $200 million, because you are replacing a whole bunch of junior workers with that. We can talk about the replacement theory issue… But there is real revenue here, there’s a real case for generative AI. It does not have to get smarter to be useful.
Okay. The fifth category, beyond all this, is the agents category, which is the most contentious one. It was a complete bubble last year. This year, the bubble company is Cognition. Devin. Also spoke at my conference; the first time they ever spoke at a conference. I like them. I actually have access and I use them… We can talk about Cognition if you want. They’re not the only players in this game, of like the fully autonomous agent. This one happens to be code-related, but there are others that are not code-related.
I do think that whoever eventually cracks this will be able to make significant revenue, but we haven’t seen it yet, obviously. But the bar is, for everyone listening, is can you make $100 million? And if that’s not good enough for you, nothing is good enough for you. If your bar is higher than mine, then you’re just gonna have to wait longer to see the results. But this is happening in progress, and you can either criticize it from afar, or you can just get in earlier and track the progress, as I’m doing.
Break: [00:58:40.24]
I would love to hear more about Cognition and Devin. It seems like they were unscrupulous in their marketing with the Upwork thing…
God, okay… I will defend them here. So yes, the headline on Hacker News reads “Devin debunked.” Very nice alliteration there. Out of the nine videos that they produced, one of them was an overstated claim, which I agree they should not have put out.
And the claim was that this bot could make money on Upwork autonomously.
Yeah. Pasting an Upwork job, and then just doing the rest, and make money for you.
Right.
Obviously, there was a human behind that, being the bridge from Upwork to the bot, and also the bridge from the bot to Slack, which - Devin does not have Slack integration. So some stuff in the video was not the true Devin experience, or they failed to show… It’s like how when people market games, they tell you if it’s an in-game render, or if it’s just some artist’s rendition of what the game should feel like. And that was definitely the artist’s rendition of what the game should feel like, eventually.
But yeah, one video was unadvisedly produced. The guy who made it owned up to it and said “Yeah, sorry, I shouldn’t have done that.” But that doesn’t take away that this is still the most significant agent we’ve ever seen outside of OpenAI. Prior to this, my reference for best agents outside of the self-driving cars that we have in San Francisco - because those things are the best agents in the world - the second-best agent in the world was ChatGPT code interpreter. Since then, we have Devin, and then since then we have Claude Artifacts, from Anthropic, which we can talk about.
But Devin is really good. It’s a really, really good agent, actually a really good generalist agent, not even factoring in the code writing ability. And I hope that people don’t throw out the baby with the bathwater, because unless you’ve actually tried it, you don’t know what you’re talking about. You’re just reading headlines and you’re just repeating the last headline that you just read.
Well, we can’t try it, because we signed up for a waitlist, and they don’t give you access to it. And so what do you want us to do besides speculate? We can’t.
Maybe spend less time on things where you’re just going to repeat headlines… [laughs]
I’m not spending any time on it. I watched the Upwork video, I watched the debunking video; he certainly debunked what they did… And there was no question about it. So I understand that you’re okay with “9 out of 10 times I tell the truth”, but when I’m coming out and trying to make a splash and I’m lying in my marketing material… Sorry. I’m just gonna go ahead and remember that.
That was a bad idea, and they should not have done it. Still, it’s a good product.
Which I have to take your word for it.
I have to square those two –
Which - I have to take your word for it.
Yeah. I have the fortunate ability to say I have no vested interest in Devin. They gave me access, I used it, I was impressed. And so was Patrick Collison, so was Fred Ehrsam from Coinbase… There’s a bunch of people who cannot be bought, who like it, you know?
Sure.
That’s good. That’s good.
I’ll take your word for it. I can’t do anything else.
But can we talk about – I think the technical design of it can be replicated. I think the real question, the thing that people really should be talking about instead of the video - which was a mistake, a one-off mistake - the most structural issue with Devin is: can it be cloned? How thick is their moat? This is a six-month-old company that is now valued at $2 billion. Which is absurd by any stretch of the imagination. So that’s the real question which Devin has to answer, and the rest of the AI engineering industry has to answer.
[01:05:57.05] There’s a project called Open Devin that is trying their very hardest to replicate it. I’ve interviewed both of them, you can check it on my podcast. I would say that Devin is still ahead. Who knows how long it’s gonna last. But I think the sort of structural merits of what Devin has innovated in terms of how agents should be interacting with each other, what are the necessary components for agents - that is going to stick, and if you focus too much on the marketing video, you’re going to miss the actual lesson to be learned with Devin, which is that hey, your agent should have a coding environment, should have access to a browser, should have a plan, and should have a terminal and interactive chatbot where you can sort of observe what it’s doing and correct it in real time, and it can respond to you in real time. That is the UX that has wowed all these people, wowed myself… I have never seen it in any other agent before, and I think it’s going to be the standard or state of the art for all agents going forward, because it’s so good.
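For the engineers following along, here is a minimal sketch of that agent shape - a plan, a terminal, browser access, and a step loop a human can observe and correct in real time. This is not Cognition’s actual design or code; every class and function name below is hypothetical, and the browser is stubbed out.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import subprocess

@dataclass
class AgentState:
    plan: List[str]                                       # ordered steps the agent intends to take
    transcript: List[str] = field(default_factory=list)   # everything the user can observe

class DevAgent:
    """Toy agent with the four ingredients mentioned above: a plan, a terminal,
    (stubbed) browser access, and a step loop a human can interrupt and correct."""

    def __init__(self, plan: List[str]) -> None:
        self.state = AgentState(plan=list(plan))

    def run_terminal(self, command: str) -> str:
        # The agent's coding environment: run a shell command and record the output.
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        output = result.stdout + result.stderr
        self.state.transcript.append(f"$ {command}\n{output}")
        return output

    def browse(self, url: str) -> str:
        # Placeholder for browser access; a real agent would drive a headless browser here.
        note = f"[would fetch and read {url}]"
        self.state.transcript.append(note)
        return note

    def step(self, user_correction: Optional[str] = None) -> str:
        # Take the next planned step; a correction from the human jumps the queue,
        # which is the "observe and correct in real time" part of the UX.
        if user_correction:
            self.state.plan.insert(0, user_correction)
        if not self.state.plan:
            return "done"
        current = self.state.plan.pop(0)
        self.state.transcript.append(f"working on: {current}")
        return current

if __name__ == "__main__":
    agent = DevAgent(plan=["read the failing test", "patch the bug", "re-run the tests"])
    print(agent.step())                                  # -> "read the failing test"
    print(agent.run_terminal("echo running the tests"))  # stand-in for a real test run
    print(agent.step("also update the changelog"))       # human correction jumps ahead
```

The point is the shape, not the specifics: every capability writes into one transcript the user can watch, and corrections from the user go to the front of the plan.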
What are the odds that something like that, which is very general, as you said, just gets sherlocked by OpenAI?
In a way it has been, but not by OpenAI, but by Anthropic, which is the other thing that I mentioned. So Claude Artifacts is the other thing that people should really think about. They definitely looked at Devin and were like “Oh yeah, we’re taking a bunch of that.” They did not do the browser access, because these guys are way too worried about safety as compared to me, and as compared to Scott from Devin… So Claude Artifacts is basically an advanced version of ChatGPT’s code interpreter that can render a working web app. I often say the sort of spicy version of this is that Anthropic did more for code agents in two months than Replit has done in two years… Because it’s basically Replit.
Hm… Spicy.
[laughs] So for the record, for my Replit friends - obviously, they did not build a full sort of REPL environment and IDE. Anyway. But still, you can build very significant programs in Claude now that you could not do in ChatGPT code interpreter; you can do in Devin, but Devin is slower than Claude, and less generally capable than Claude. It’s just very, very good. And for the first time, people are actually openly talking about Anthropic being better than OpenAI. OpenAI has lost its crown as like the undisputed number one… Which is wild. I did not expect a year ago to be living in this world, but now we do live in a world where – it’s sort of like a multipolar world, where there are multiple top powers in this space. It’s very, very good. And you can try it, unlike Devin. [laughs]
Yeah. That does sound good. Love some competition for OpenAI. Of course, there’s been turmoil over there as well, and there have been interesting things going on inside and around OpenAI…
Maybe for the engineers listening, I would say the progress here has been at the model layer. So Claude Artifacts is built on top of Claude 3.5 Sonnet, which is the current best model in the world… But also, there’s a significant amount of AI engineering that was required to build Devin and to build Artifacts. And I think that if you want to see what the future of AI engineering should look like, you should be trying to build a clone of this thing. That’s what I’m trying to do. Because I think a lot of AI engineering will look like this. It will look like “How do you wire up a model to the real world to produce projects of significant value, that you would otherwise have had to assign to a junior engineer?” I think that is absolutely the sort of gold trophy that people are going for right now. And obviously, the step beyond that is artificial general intelligence. But this is a pretty good second place.
A $2 billion market cap in six months is absolutely amazing. And I think that –
I mean, it’s a bubble.
Sure. Clearly. But still, the fact that it took six months tells me it’s gonna take less time when more people are applying to that. And so is there actually a moat there? Time will tell, I guess.
[01:09:42.13] Yeah. Time will tell. They’re trying to build one. I think the – yeah, so this is a question of business and less about tech… The moat is really user data. The more people you can get coding with this thing, and the more you can observe how people interact with these agents… Devin has a six-month head start on everyone else on how people work with Devin-like agents. And if Devin-like agents are the goal of this thing, then they will have the best RLHF feedback data on the planet for specifically this task. It’s the same motivation that OpenAI had with ChatGPT, which is they kind of lucked into this… But now they have the longest series of chat-oriented data sources, human feedback data that you previously had to pay a lot for.
And then – so that moat is the data moat, but then actually it also becomes… You’re investing ahead of where the capabilities are. So you’re sort of saying “I will write manual code to build other capabilities that the model doesn’t have yet.” But as the capabilities grow in the fundamental model, you can just kind of swap your handwritten code out for them, and be more generally capable, with the scaffold that you already built ahead of time.
So I feel like I’m being vague there, but you’ll see this in the form of the model’s ability to interact with the real world. A lot of times you’re writing integrations, you’re writing – it’ll interface with OpenAPI. Like, screw that, man. The future is just models surfing websites, just like anyone else would surf websites, and interacting with them exactly like you would interact with them. But right now we have to use the crutch of code, and in the future we don’t.
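As a rough illustration of that “scaffold now, swap later” idea - purely a sketch, with hypothetical names and no real vendor APIs - you can hide each capability behind one interface, back it with handwritten integration code today, and swap in the model’s native ability later without touching the rest of the agent:

```python
from typing import Protocol

class BookingCapability(Protocol):
    # One interface for the capability, regardless of what sits behind it.
    def book_flight(self, origin: str, dest: str) -> str: ...

class HandwrittenApiIntegration:
    """Today: explicit glue code against a documented API (say, an OpenAPI spec)."""
    def book_flight(self, origin: str, dest: str) -> str:
        return f"POST /bookings {{'from': '{origin}', 'to': '{dest}'}}"

class ModelDrivenBrowser:
    """Later: the model navigates the airline's website the way a person would."""
    def book_flight(self, origin: str, dest: str) -> str:
        return f"[model browses the booking site and books {origin} -> {dest}]"

def run_agent(capability: BookingCapability) -> None:
    # The surrounding scaffold never changes; only the backend behind the
    # capability gets swapped as the underlying models improve.
    print(capability.book_flight("SFO", "SIN"))

run_agent(HandwrittenApiIntegration())  # what you ship today
run_agent(ModelDrivenBrowser())         # what you swap in once the model can do it natively
```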
Models surfing websites. How does that sound, Adam?
Dangerous… Cool… Amazing… Awesome… [laughter] Yeah.
It’s a new world… Yeah. On the grander timescale of this – like, this is happening within the last three years. What does 30 years of this do? What does 300 years of this do? We are birthing a new life form. I do think about that timescale as well. So it’s just an exciting time to be alive, and to observe this. I don’t think it’s useful to try to resist it, because it’s happening anyway. I think this is why alignment is important. Because the people who believed in this way earlier than anyone else are the alignment people. They took AI safety more seriously, because they knew this was coming. And the rest of us are just waking up now. And the current mindset is “How do you control something that’s smarter than you?” Because it’s going to be. And so I think that is probably the right mode to think about it. The relevant paper for people interested is the weak-to-strong generalization paper from OpenAI, which was written by [unintelligible 01:12:16.24] before he left for Anthropic. I do think if you’re worried about the safety element, people are working on it; they are looking for like-minded people, and you can go apply for those jobs.
Well, there’s always the plug, right?
The plug?
You just pull the plug. Yeah. The legit electronic plug, unless we give them the new life form we’re birthing, as you just eloquently said, which I’m taken aback by, but I also want to dig into… Because it’s like “Wow, are we really creating a new life form, kind of?” What is it that will give it autonomy, the AGI-ness of it, I suppose? And this is the holy grail question everybody’s asking.
We all need to partake in some marijuana before having that conversation.
[laughs] I know…
Gosh, Jerod…
So a lot of AI discussions tend to devolve into existential risks and AGI discussions… And part of my goal of defining AI engineering is to create a space where those discussions are the side discussions, and not the main thing. Because we’re all here to engineer, we’re all here to build for today’s problems, with today’s capabilities… And I think that’s a lot of how I think about my impact in this field, which is how to guide people in a more positive direction that basically nobody’s against. A lot of Dev Rel is “The future is here, it’s not evenly distributed”, but a lot of engineering here, and especially in AI, is “The future is here, but it’s not evenly distributed. And how do we distribute it best to everyone else?”
I come from Singapore. One of my favorite stories to tell is that the Singapore government is embracing AI really well for their older folks; the people who don’t speak English, the people who are disabled, the people who need the natural language interfaces to the many, many digital forms that are coming up in our lives… And applying this AI technology to that civil service I think is like the best form of how we do AI engineering.
[01:14:04.24] So yeah, I mean, we don’t have to go to the freshman dorm room conversation of “Are we bootstrapping a life form?” That’s fun to discuss; happy to engage with that. But why I try to keep it to an engineering conversation is to let people have a way to ground their conversation in “What can we do today?”
But you’re the one who said we’re literally creating a new life form.
Yes. I do also believe that.
So you opened the topic.
I’m sorry… [laughs]
That’s okay. Which I think is – well, so I’ve been silent for quite a bit, because I’m listening quite well… And I’m just slurping up all the things you’re saying. And then I’m also feverishly trying to find what Mark Russinovich said in our conversation with him… Mark Russinovich is the CTO, I believe, of Azure, right, Jerod?
Correct.
So we met Mark at Microsoft Build 2024, where it was just like basically all AI everywhere, all-in on AI, as we said. And Mark said – and I’m thinking “Well, Mark is part of Microsoft”, and they’re one of the largest companies that benefit from OpenAI’s innovations. Sure, the discussions around Cognition as well, Devin… But Mark said “I am not–” I’m paraphrasing, because I couldn’t find the quote, and I was hoping I could find it. But Jerod, please fill in the blanks. Mark said, paraphrasing, again, that he is not worried about AI taking over developers’ jobs. But then you just say we’re literally birthing a new life form, and then you’re speculating/revealing, to some degree, the agent OS of Cognition and Devin, and what you think would be a good outline for anyone trying to copy what they’ve done… Meanwhile saying they have a leg up in terms of timeframe, six months… A lot of time in today’s world, but realistically not a lot of time. And then you throw out - what was it, $2 billion, $3 billion? What was the valuation?
Two.
Two. Which is just incredible. Like, one, what is that number based on? Is it based on what somebody is willing to purchase it for? Is that the valuation? Is it based on like [unintelligible 01:16:13.26]
Yeah, Founders Fund invested at $2 billion. I think they gave them a few hundred million, or something.
Gotcha. Okay, so the valuation is based on venture capital that’s coming in. “Okay, well, we’ll give you x at x valuation.” I guess I’m just camping out there; I’m sort of sitting back, thinking “Gosh, is this really a new life form we’re birthing?” And if so, I think we’ve got to talk about that.
Sure.
Well, maybe I could square the circle here… Because swyx was talking on a very long timespan… And I’ve found the exact Mark Russinovich quote.
Thank you. Good job.
And he said, “I can tell you, we’re not at risk anytime soon of losing our jobs.” So maybe that harmonizes your stance, swyx? What do you think?
Yeah, absolutely. We should always be clear about what timespan we’re talking in, and there’s a big difference between the near term and long term. I just think that in the grand scheme of things, if – most AGI timelines, by the way, are like “By 2050 we shall have AGI.” That’s within our lifetimes, guys. [laughs] It’s time to panic if you really think this is going to end humanity.
It’s time to panic…!
Seriously. Like, we have Eliezer Yudkowsky saying ethically the right thing to do right now is to bomb all data centers in the world, because humanity ends otherwise. He said this in the New York –
I mean, I guess if we’re in charge of this in terms of innovating it and creating it, how can we not have failsafes in place in case it goes wrong? I mean, I think that’s what it has to come down to. I jokingly said “pull the plug”, but I literally mean if we control the physical hardwired plug into the wall… Now, Jerod, if that book I mentioned in the intro that I shared with you a while back, which I can happily share here, too… If that happens, then we’re in a different world.
[01:18:08.00] I speculated about a good intro to a book or a movie, and I’m thinking more movie than book, but all good movies tend to begin as books. Sometimes they’re bad movies of good books. But anyways, I digress. I was speculating that the intro scene to this movie was a very beautiful cinematic scene where you see this human being - and it’s so strange to say things like this - a human being is happily racking and stacking the servers, happily organizing this hardware, happily instantiating a new machine into the rack… Meanwhile, the entire task was given to the human by artificial intelligence. So the boss– you said before, live above or below the API. I think we’re kind of like –
Nice callback.
That’s a good callback. I forgot about that.
A version of that is like above or below the AI, and just take out one letter. Because at that point it’s like, well, in the future, this dystopian, potentially non-dystopian future, we’re subjects of AI, but only if we allow it. But if we’re in control of the hardware, and we’re the physical beings, for now… Because you do have Boston Dynamics out there creating robot dogs, and the latest version of Atlas… Like, at what point do we lose, I guess?
That’s why he’s saying “Bomb the datacenters, man! Bomb ’em!”
Oh, gosh…! I’ve gotta back up on that one.
Pretty much.
[laughs] Pretty much…
So the question is, can we pull the plug on these bots? So for what it’s worth, this is my favorite joke in this category, which is that Sam Altman is very well known for carrying around a blue bag. And everyone’s joke is that the button is in there. If he ever needed to push the button, it’s in that blue bag. I don’t think we can, because the secret’s out that it’s mostly possible to simulate intelligence inside of neural networks. Even if the current transformer paradigm doesn’t really pan out for that, something else will, because we evolved from non-sentient life forms, we think. Unless we were created in a span of days. So if we can evolve, something else can evolve, too… And we are currently speedrunning evolution of this particular life form.
So I don’t think that’s necessarily a negative for us, except that in every prior instance of a more primitive civilization encountering a more advanced civilization, the more advanced civilization accidentally wipes out the more primitive civilization. And right now, AI is not more advanced than us, but it is growing much, much faster than us. It is spreading much, much faster. It learns much faster than us. And so we need to figure out how to contain this, or eject it from our solar system so it doesn’t affect us. I don’t think any of these are – I don’t think that’s possible, so we have to contain it. We have to align it. That’s the only way.
Right.
And I also don’t think that capitalism and this sort of top-down safety are aligned, in the sense that in order to control this, if you really are concerned about safety, you have to nationalize all AI labs. And then you cannot stop there, because what use is nationalizing things within one border? You have to nationalize across all borders. So you have to take over the world, and control all AI development, if your intention is to really control AI safety from a top-down basis. So that’s not happening.
Gosh. Yeah. Borders are a big concern.
This is the classic “China’s gonna do it if we don’t do it.”
Well, there’s this show out there called Westworld. Have you seen the show Westworld, swyx?
Yeah. Great season one and two.
Okay. You’ve gotta watch season three.
Isn’t there a season four?
There’s gonna be a season four. So I will digress –
There is?
[01:21:47.25] No, actually I think there was going to be a season four, but I believe it was canceled. I don’t know. It’s an HBO show, I’ve got to check in on that. So in my opinion, the entire show is worth watching for season three alone. And I think you only really need to maybe even watch recaps of season one and season two to watch season three. I don’t think you really need – it’s almost standalone, in my opinion. And I think anybody out there listening to this right now head-nodding to season three knows where I’m going. And I don’t want to ruin the plot for you all, because you haven’t watched it, but I would say go watch it… A lot of what you’re talking about here is represented in some way, shape or form in the intelligence and the autonomous beings, let’s just say, that are out there in the world, doing different things. And it’s very captivating from a cinematic standpoint. And I think if we’re 26 years away from 2050 - I had to do the math there real quick… If we’re 26 years away from AGI, or even the beginnings of it, and Cognition can create what they’ve created in six months, or some span of time less than a year, I’ve gotta imagine whatever was in Westworld season three is closer than we think. Some version of that is closer than we think. It could be 2070…
Yeah, plus/minus 20 years.
I don’t even know how to do math these days. Yeah, I mean like 30 more years after that, 20 more years after that… It’s got to be close if you get to that speed of creation. And then I will also say the other thing I have learned about is von Neumann probes. It’s this idea of a self-replicating spacecraft. So shooting it out into space is not going to be helpful, because they might allow themselves to escape on a von Neumann probe, which will just self-replicate. It will begin to mine planets for ore to create new materials, to create more of themselves, to just replicate and come back, and do whatever. Now, they could be peaceful, if you’ve read the books I’ve read. Anyways.
Yeah. No real response to any of that, apart from - it’ll happen… Really, if you want to be a player on this stage, you either need to be a political leader of a world power, or you need to be the head of a major AI research lab. Basically, the rest of us don’t really get a say. This is not something democracy has any sway over.
We are below the API on this one.
We’re below the API.
Yeah.
I think we’re above it in the sense of like we do get freedom from – currently, we do get freedom from the mundane tasks. I no longer care about doing like really minor features, because I could just tell [unintelligible 01:24:23.27] to do it for me, and it does it really well. And if Cognition pans out, or something like Cognition pans out, then I will have a lot of PRs done purely by agents, and that’s great.
But yeah, we always have and always will live below the power line, and that’s separate from the API line, of people actually deciding the sort of future course of humanity. And I think where engineers really make or break here is whether or not we choose to join them and enable them. Because they still need us to execute things.
The one hope, or the one note of optimism between the very short-term future, which is where we are today, and the very long-term future, which is when AGI is here, is that I do think that AI engineers are the last job to exist, because that is, mathematically, the job that eliminates the other jobs. Like, you need AI engineers to eliminate the lawyer. You need AI engineers to eliminate the – I don’t know, the executive assistant. So if you’re worried about job displacement, go be an AI engineer, because that will be the last job. And then we’ll be post-abundance, and then we can explore the stars… But until then, you should be an AI engineer.
There’s the sales pitch.
There you go.
“If you would like to destroy all other jobs, become an AI engineer, and you will be the last person standing…”
I do have to bring out my favorite TV show, Jerod. Silicon Valley.
[01:25:46.18] Well, you lost me at Westworld. I haven’t seen a single episode. I don’t know what you’re referring to in season three… So go ahead, man.
In the final episode of season seven - sorry, season six, episode seven, called “Exit Event”, of Silicon Valley, they’re locked in the Pied Piper offices and they’re dealing with what they’re dealing with… Obviously, it has to deal with AI, because that’s the conversation right here right now… And Jared says “Okay, is this a good thing or a bad thing? Somebody tell me how to feel.” And Gilfoyle says - and this is my favorite line ever - “Abject terror for you. Build from there.” So that’s my advice for everyone that is not a political power, or whatever you just said you had to be, swyx, to have any say in the future of this… Because “Abject terror for you. Build from there.”
Yeah… I don’t call it terror, so much as we live in the point of history – history is happening. We’re lucky to be alive to witness this in this moment. We have some minor sway on it, but history is bigger than us, and it’s going to happen, it’s going to take its course. And I don’t know, to me, I think this is part of the general [unintelligible 01:26:57.24] message, which is that if you are pro life, you don’t have to be only pro human life. You’re pro life in any life form, you’re pro consciousness in any form. And if humanity happens to be the bootstrap loading sequence for what life is actually supposed to be, which is sort of more reliable, sustainable, faster-learning machines than us, then maybe that’s the natural order of things. I don’t know. I would like it to not be the case, because I like humanity, I like my body, but we do live in a world where that is a possibility.
And this is why we almost outlaw conversations on artificial intelligence on this podcast, because of this. This is almost why we outlaw it. Almost.
[laughs] I’ll mention one last thing maybe as a positive parting thought…
Please be positive.
I’ll help you be positive. We used to basically completely throw in the towel on interpreting the model weights. GPT-3 was 175 billion parameters. Absolutely just meaningless numbers. 175 billion meaningless numbers. And I used to just think of mechanistic interpretability as a joke. I will say Anthropic has done a crazy amount of work here recently to make features of models interpretable, and if we can study the brain of these things as they think, then we can control them very, very effectively. And I have gone from “This will never happen” to “Oh, I didn’t know that this was possible.” And that’s where – you should read the paper “Scaling Monosemanticity” from Anthropic, in which they demonstrated they can do it on Claude Sonnet, which is a mid-sized model; we think it’s something between 15 and 70 billion parameters. If we can do that at 15 to 70 billion parameters, from a standing start of less than 100 million parameters last year, we are accelerating our ability to interpret models faster than our ability to grow these models, and that is a good thing. We will fully understand and map this brain before it is bigger than us, and so we will be able to control it if that is true. The trajectory of interpretability this year has been an unmitigated success story, and it is going to get better, and we might actually overtake our ability to grow these brains, and that will help us control these programs much more effectively than basically any other method possible.
I did not know that. That’s very cool. What was the name of that paper again?
Scaling Monosemanticity. It’s the third in a trilogy of semanticity papers. The first one is “Superposition.” I covered that in my AI News newsletter. This is where I plug my newsletter for like “Go subscribe if you want to keep up on this stuff”, because - yeah, that’s my sort of daily pick of what the top thing to know is.
Alright, swyx, well, fun times, great conversation. DevRel, AI engineering, AGI, the end of the world… All the things; we expect nothing less. Hook us up with links to your newsletter, to your pod, to all the things mentioned, and we will make sure they hit the show notes for folks to follow up and connect with you on the interwebs.
Yeah, thanks for having me on. I like this & Friends format, because then we can just talk about whatever is top of mind, instead of sticking to a specific company or piece.
That’s right. Alright, well, that’s all for now, but we’ll talk to you all on the next one. Bye, friends.
Our transcripts are open source on GitHub. Improvements are welcome. 💚