Changelog Interviews – Episode #580

Leading in the era of AI code intelligence

with Quinn Slack, Co-founder & CEO of Sourcegraph

This week Adam is joined by Quinn Slack, CEO of Sourcegraph, for a “two years later” catch-up following his last appearance on Founders Talk. This conversation is a real glimpse into what it takes to be CEO of Sourcegraph in an era when code intelligence is shifting more and more into the AI realm: how they’ve been driving toward this for years, the subtle human leveling up we’re all experiencing, and the direction of Sourcegraph as a result. Quinn also shares his order of operations for understanding the daily state of their growth.

Sponsors

CrabNebula Cloud – CrabNebula Cloud is here! Distribute Tauri apps and Electron apps with a best-in-class updater. At the heart of CrabNebula Cloud is a purpose-built CDN ready for global scale, and secure updates as a first-class citizen. Learn more at crabnebula.dev/cloud

Tailscale – Adam loves Tailscale! Tailscale is programmable networking software that’s private and secure by default. It’s the easiest way to connect devices and services to each other, wherever they are. Secure, remote access to production, databases, servers, Kubernetes, and more. Try Tailscale for free for up to 100 devices and 3 users at changelog.com/tailscale, no credit card required.

imgproxy – imgproxy is open source and optimizes images for the web on the fly. It makes websites and apps blazing fast while saving storage and SaaS costs. It uses the world’s fastest image processing library under the hood: libvips. It is screaming fast and has a tiny memory footprint.

Fly.io – The home of Changelog.com! Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Chapters

1 00:00 This week on The Changelog
2 01:22 Sponsor: CrabNebula Cloud
3 04:15 Start the show!
4 04:53 It's been two years
5 06:02 Being CEO
6 11:50 Why are fewer people using code AI?
7 14:47 Subtle human leveling up
8 20:07 Latency between the prompt/response
9 22:44 Sponsor: Tailscale
10 25:37 Is AI stealing the joy work?
11 29:12 What's changed with Sourcegraph?
12 33:19 Selling Sourcegraph then vs now
13 48:59 Sponsor: imgproxy
14 52:27 Let's talk winning
15 59:42 Vertical up the customer base?
16 1:05:08 How are you winning the customer?
17 1:07:29 Two co-founders great at comms
18 1:12:52 What's next for Cody?
19 1:14:59 Very different conversation
20 1:15:16 Outro and shout outs

Transcript

Play the audio to listen along while you enjoy the transcript. 🎧

So Quinn, it’s good to see you, good to have you back… I want to really talk about the evolution of the platform, because the last time we talked it was kind of like almost pre-intelligence… It was kind of almost still search. And like just after that you went buck wild and had a bunch of stuff happening… And now obviously a whole new paradigm, which is artificial intelligence, a.k.a. AI… But good to have you back, good to see you. How have you been?

Yeah, it’s great to be back. I’ve been good. I think, like everyone, it’s been quite a whirlwind over the last four years, over the last year with AI… And we’ve come a long way. We talked two years ago, and we talked a lot about code search…

Was it two years ago?

Two years ago.

Time flies.

So a lot changed in two years.

Yeah, there’s been about 10 years in the last two years, through the pandemic, and now AI… And we have grown a ton as a company, our customer base, and all that… And yeah, two years ago we were talking about code search, and that’s what we had built, and we still have code search that looks across all the code, and understands all the calls, the definitions, the references, the code graph, and all those great things. And we’ve got a lot of great customers on that. You know, many of the FAANG companies, and four of the top ten US banks, and Uber, and Databricks, and governments, and Atlassian, and Reddit, and so on.

[00:05:56.12] But it’s not just code search anymore. The world has changed for software development in the last year so much.

What is it like being CEO of a company like that? I mean, you’re a founding CEO, so this isn’t like “Oh, you inherited this awesomeness.” You built this awesomeness. How does it feel to be where you are?

Sometimes it is exciting and scary to realize that I as CEO have to make some of these decisions to go and build a new product, to change our direction. And I feel so lucky that I have an amazing team, and that people are on board, and people are bringing these ideas and the need to change things up… But it’s definitely weird. I mean, it’s one thing to try a new side project that is dabbling with some of these new LLMs; it’s another thing to shift a 200-person company to going and building that. But as soon as you do that, just the progress that you see - I mean, it’s so validating. And then obviously, hearing from users and customers, it’s so validating… So it’s been I think a whirlwind for everybody.

When you make choices like this, do you have to go to – I know you’re funded, so you probably have a board. So you have other people who help you guide the ship. So it’s not “Hey, I’m CEO, and we just do whatever I want…” You also have Beyang Liu, your founding co-founder, CTO. I’m very keen on Beyang, I’ve known him for years. I want to backtrack a little bit, but I want to stay here for a minute or two. When you make a choice to go from code search to code intelligence, to now introducing Cody, your product for code gen, for – and code understanding as well. I mean, it’s so much more. It’s got a lot of potential. When you make a choice like that, how do you do that? Do you have to go to the board? What’s that like? Give me an example of what it takes to sort of change the direction of the ship, so to speak.

Yeah. If you go back to the very founding of Sourcegraph, we decided on a problem to solve, which is big code. It was way too damn hard to build software. There’s so much code, there’s so many devs, there’s so much complexity… And back when we started Sourcegraph 10 years ago, we felt that, you know [unintelligible 00:08:05.01] companies felt that… Now you start a brand new project and it brings in like 2,000 dependencies, and you have to build on all these super-complex platforms… Stuff is getting so much more complex. So we agreed on a problem, and we got our investors, we got our board, we got our customers, our users, our team members all aligned around solving that problem. And not one particular way that we solve it. And if you go back to our plan, actually, in our very first seed funding deck, it talks about how first we want to build the structured code graph. And then we want to do Intelligent Automation. That’s IA. I think we probably would have said AI, except back at the time if you said AI, people thought that you were completely crazy.

For real, yeah.

You know, it’s unfolding – I won’t say exactly; we didn’t have a crystal ball back then. But it’s unfolding roughly as we expected. And we knew that to do more automation in code, to take away that grunt work from developers, so they could focus on actual problems and the stuff they love, you needed to have the computer have a deep understanding of the codebase. It couldn’t just all be in devs’ heads. So it was no big surprise to our board or our team that this was something that we would do, that we would go and build our code AI. And it was also not some complete luck of the draw that we found ourselves in a really good position to go and build the most powerful and accurate code AI. So none of this is coincidental. But when do we do that? And I think if we had started to do that say in 2018, when there were plenty of other attempts to do ML on code, I think that we would have failed, because the fundamental underlying technology of LLMs was just not good enough back then.

[00:09:57.16] And the danger there is if we had started doing it in 2018 and we failed, we might have as an organization learned that stuff doesn’t work. And then we would have been behind the ball when it actually did start to work. So getting the timing right was the tough part. And I actually think that we probably waited too long, because we could have brought something to market even sooner. But it’s still so early in terms of adoption of code AI - less than 0.5% of GitHub users are using GitHub Copilot. Most devs are not using even the most basic code AI that exists. So it’s still very early.

But getting the timing right, and making a big shift, starting back last December… That was when we started to see Cody really start to do some interesting things. That felt early to a lot of people on the team. And it took a lot of work to get the team on board, and to champion what the team was doing that was working, and to shift some of the things that we could see were not going to be as important going forward.

What a shame though, right? …that fewer people are using code AI-related tooling. I’m not sure what exactly is happening, because there’s this idea that it might replace me, and so therefore I just resist it. And I’m just assuming that’s probably some of the case for devs out there… Because I’ve used it, and I think it’s super-magical. And I’m not trying to generate all the things, I’m trying to move faster, with one more point of clarity, if not an infinite point of clarity that can try something 1,000 times in five minutes for me, so I don’t have to try it 1,000 times in a week or two. Whatever it might be. And that might be hyperbole to some degree, but it’s pretty possible. I just wonder, why are people not using this more frequently? Is it accessibility? Is the access not evenly distributed? What do you think is happening out there? What’s the sentiment out there of why more tooling like this isn’t being used?

Well, I think this applies even to ChatGPT. ChatGPT – it’s amazing. It changed the world. It’s mind-blowing. It can do incredible things. And yet, you ask the average person how often are they using ChatGPT in their everyday life or their everyday workweek, and the answers I usually get are “Maybe one or two times.” You hear the stories of people that say “I ask it to write emails for me. And what it writes is ten times too long.” And the technology is there, the promise is there, but in terms of actually something that is so good, and understands what someone is doing, understands the code they need to write for developers - that’s still not completely there yet. And at the same time, the promise is there.

So I really want to make sure that we as an industry, everyone building code AI, that we level with developers out there, with what exists today, what works well today, what doesn’t, what’s coming in the future, and not lose credibility by overhyping this whole space. I think that’s a huge risk. And I actually look at self-driving cars. 10-15 years ago you started to hear about autonomous vehicles; there was so much hype. People thought “Are humans even gonna be driving in the year 2020?”

They are.

Clearly, we are… And some people are kind of jaded by all that hype, and they just dismiss the entire space. And yet, in San Francisco here, there’s two companies where you can get a self-driving taxi. And that is amazing. That is mind-blowing. The progress is real. It was a little slower than people thought if you were just reading the hype, but I think that most of the industry experts would have said “Yeah, this is about the rate of progress that we’d expect.”

So we don’t want that kind of mistake to happen with code AI. We want to be really clear that it’s not replacing developers. Those tweets you see where it’s like “Oh, you can fire all your junior developers. You can replace them with” whatever AI tool someone is shilling - those are completely false, and they detract from the incredible progress that we’re seeing every single day with code AI getting better and better.

[00:14:20.18] The autocomplete that code AI can do is really powerful. I think that could probably lead to a 20% boost to developer productivity, which is really meaningful. But then having it write entire files, having it explain code, understand code… We’re working on that with Cody, and Cody does a pretty good job of that. It’s really helpful. And you see a lot of other work there. That is really valuable. And it doesn’t need to be at the point where, you know, it’s Skynet for it to be changing the world.

Yeah, for sure. Can we talk about some subtle human leveling up that’s practical, for ChatGPT? I mean, I know it’s not Cody. Do you mind riffing a little bit? So last night my wife and I were hanging up pictures of our beautiful children. We took pictures of them when they were less than one week old, and then we have pictures of them in the same kind of frame at their current ages; one’s seven and one’s three. So it doesn’t really matter about the age. They’re just not one week old anymore. So you have this sort of brand new version of them, and then the current version, to some degree. And it’s four pictures, because we have two sons, and we want to hang them on the wall. And my wife was trying to do the math - and we can obviously do math; it’s not much. It’s like an eight foot wide wall; we want to put them in a [unintelligible 00:15:29.05] with even spacing, all that good stuff. I’m like “Babe, we should ask–” And like, I’m more versed in this than she is; not that she doesn’t use it, she just doesn’t think to. And I think that might be part of why it’s not being more widely used - people don’t think to use it. And I’m like “I want to use it; it’s a word calculator. I don’t want to think about this problem myself. I don’t want to do the math.” I could just tell it the problem - my space, my requirements - and it will probably tell me too much, but it will give me a pretty accurate answer. I’m like “Let’s just try it”, and she’s like “Okay.” And so I tell it “Hey, ChatGPT, I have this eight foot, five inch wall in width, and I want to have these pictures laid out in a grid. They’re 26 inches squared, 26 wide, 26 tall, and I want to have them evenly distributed on this wall in a four grid.”

It gave me the exact answer, told me exactly where to put them at. We did it in five minutes, rather than like doing the math, and making a template, and writing all these things on the wall. It was so easy, because it gave us the exact right answer. That’s cool.
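
For the curious, the layout arithmetic here is easy to check. A minimal sketch in TypeScript, assuming a 2x2 grid and the six-inch gap Adam mentions a bit later; the transcript doesn’t include ChatGPT’s actual reply, so the numbers below are just the math.

    // Horizontal layout for a row of equally sized frames, centered on a wall.
    // Returns the left-edge position of each frame, in inches.
    function framePositions(wallWidth: number, frameWidth: number, count: number, gap: number): number[] {
      const rowWidth = count * frameWidth + (count - 1) * gap;
      const margin = (wallWidth - rowWidth) / 2; // equal space on each end
      return Array.from({ length: count }, (_, i) => margin + i * (frameWidth + gap));
    }

    // Adam's wall: 8 ft 5 in = 101 in wide; two 26-inch frames per row; 6 in between.
    console.log(framePositions(101, 26, 2, 6)); // [21.5, 53.5] -> 21.5 in on each side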

That’s awesome.

That to me is like the most uniquely subtle human way to level up. And I think there are those kinds of problems in software being missed by developers every single day - chances to X their day. And what I mean by that is 1x, 2x, 5x, whatever it might be; if I can do a task in five minutes - not because it does that for me, but because it helps me think faster and get to the solution faster - then I want to do that, versus doing it in 15 minutes, or an hour or so. What do you think about that?

Yeah. So when you asked it that, did it give you the exact right answer on the very first reply?

Yes. Yes, it did.

That’s awesome.

Yeah. I’ve found a way to talk to it that it does that. And I don’t know if it’s like a me thing, but I get pretty consistently accurate answers. Now, it also gave me all the theory in a way, too. “The combined width of this and that, and two times this, and whatever that”, I don’t really care. I just want to know the end, which says “So if you want six inches in between each picture frame, you should do this and that and this and that.” Like, it gave me the ending; just skip to the ending. Just give me the good parts.

But I’m willing to like just wait; literally, maybe 10 seconds extra. That’s cool with me.

Yeah. Well, that’s incredible. And I think that there’s probably –

Isn’t that incredible?

[00:17:52.05] Yeah. There’s so many things like that in your everyday life where you could use it. And it probably won’t get it 100% correct, but I mean, what an amazing time to be living, where that new technology is suddenly possible. And it hasn’t yet trickled down to all the things that it can change. And when you think about that underlying capability, this kind of brain that can come up with an answer to that question, how do we make it so that it can do more code? The way that a lot of people think about code AI is autocomplete the next line, or few lines. And that’s a really good problem for AI, because just like with your picture framing example, the human is in the loop. The human is reviewing a digestible, reviewable amount of AI-suggested code. And so you’re never having to do things that the human cannot look at. If the AI told you “Hey, if you want to put pictures up on the wall, first crack some eggs, and put them on the stove”, you’d be like “That makes no sense.” And you would have caught it.

So that human in the loop is really important. The next step though, and how we get AI beyond just a 20% productivity enhancement, is “How do we have the AI check its own work?” And I don’t mean the LLM, I mean how do we have an AI system? One very simple example is right now any AI autocomplete tool will sometimes suggest code that does not type check, or does not compile. Why is that? That should no longer be the case. That’s one of the things that we’re working on with Cody. So don’t even suggest code that won’t type check. How can you bring in context about the available types in the type system so that it will produce a better suggestion, and then filter any suggestions that would not type-check? And in some cases, then go back to the LLM, invoke it again, with additional constraints. And then why stop at type checking? Let’s make it so you only suggest code where the tests pass; or you suggest code where the tests don’t pass, but then you also suggest an update to the tests, because sometimes the tests aren’t right. And all the advances in the future with code AI that I think are critical to making it so amazingly valuable are about having the AI check its own work and bringing in real-world verification, so it’s not relying on that human in the loop.
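
For illustration, a minimal sketch of the check-and-retry loop Quinn describes: generate a completion, filter out anything that fails the type checker, and re-invoke the LLM with the error as an added constraint. The generateCompletion and typeCheck functions are hypothetical stand-ins; Cody’s actual pipeline isn’t shown in this conversation.

    // Hypothetical interfaces; only the control flow is the point here.
    interface TypeCheckResult { ok: boolean; error?: string }

    declare function generateCompletion(prompt: string): Promise<string>;
    declare function typeCheck(code: string): Promise<TypeCheckResult>;

    async function suggestTypeSafeCompletion(prompt: string, maxAttempts = 3): Promise<string | null> {
      let currentPrompt = prompt;
      for (let attempt = 0; attempt < maxAttempts; attempt++) {
        const suggestion = await generateCompletion(currentPrompt);
        const result = await typeCheck(suggestion);
        if (result.ok) return suggestion; // only surface suggestions that type-check
        // Re-invoke the LLM with the failure as an additional constraint.
        currentPrompt = prompt + "\n\nThe previous attempt failed to type-check:\n" + result.error + "\nFix it.";
      }
      return null; // suppress the suggestion rather than show broken code
    }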

Yeah. I guess my concern would be latency, right? Like, if you’ve got to add, not just generation, but then checking, linting, etc, testing, correctly testing, canceling out… Like, you’ve got a lot more in that buffer between the prompt, which we’re all familiar with, to get that response, and the ending of the response. I always wonder, why does it take ChatGPT in particular time to generate my answer? Is it really thinking and it’s giving me like the stream of data on the fly? Or is there some sort of – is that an interface that’s part of usability, or part of UX? And I just wonder, in that scenario that you gave, would the latency affect the user experience?

Yeah, absolutely.

Of course, right?

Yeah. We have incredibly tight latency budgets. We look at getting the 75th percentile latency well below 900 milliseconds. And once you start invoking the LLM multiple times to check its own work, to go back and redo the work, once you start invoking linters, and type checkers… I think we’ve all been in a situation where we hit Save in a file in our editor, and we see “Oh, waiting for the linter to complete.” Sometimes that can take a few seconds in big projects. So this requires I think a rethinking of a lot of the dev tooling. Because in the past, it was built for a human editing a single file at a time, interactively, or for CI… and in CI, latency is not that sensitive. But I look at just the difference between, say, Bun running tests in a JavaScript project, versus another test runner… And bringing that down to 200-300 milliseconds instead of 5 or 10 seconds or more is really critical. I look at things like Ruff, rewriting a Python linter in Rust to make it go so much faster. I mean, I wish something like that existed for ESLint. And we need to bring the latency of all these tools that devs use in that edit loop down by several orders of magnitude to make this possible. But I think the reward, the pot of gold at the end of the rainbow if we do all of that, is so great, because it will enable AI to take off so much of the grunt work that we ourselves do. So I don’t know if that’s the motivation behind some of these linters and new test runners and so on, but I love that those are coming out there, because that will make this fundamentally possible.
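
To make that budget concrete, a minimal sketch of a nearest-rank percentile check in TypeScript; the 900-millisecond target is Quinn’s, the sample numbers are made up.

    // Nearest-rank percentile: sort the samples, take the value at rank ceil(p% * n).
    function percentile(samples: number[], p: number): number {
      const sorted = [...samples].sort((a, b) => a - b);
      const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
      return sorted[idx];
    }

    const latenciesMs = [320, 540, 610, 700, 880, 910, 1250]; // example measurements
    const p75 = percentile(latenciesMs, 75);
    console.log(p75 <= 900 ? "p75 " + p75 + " ms is within budget" : "p75 " + p75 + " ms is over budget");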

So recently, at All Things Open, Jerod conducted a panel with Emily Freeman and James Q. Quick… And really, one of the questions he asked was – you call it grunt work in this scenario, and Jerod argued that maybe that’s the joy work. Does AI steal the joy work, Quinn? Some of this stuff is fun, and some of it is just a means to an end. Like, not all developers really enjoy writing the full function themselves. And some of them really do, because they find coding joy. What are we doing here, are we stealing the joy?

I love nothing more than having six hours of flow time to fix some tech debt, to do a really nice refactor… And as CEO, sometimes that’s the best code for me to be writing, because I do love coding, rather than some new feature, some new production code… So yeah, I totally feel that. And at the same time, I choose to do that by writing in TypeScript, by using a GUI editor, Emacs or VS Code - I choose to do that by writing in Go. I’m not choosing to do that by going in and tweaking the assembly code, or… You know, we’re not using C. So I’ve already chosen a lot of convenience and quality of life improvements when I do work on tech debt. It’s not clear to me that the current level is exactly right. I think that you can still have a lot of the really rewarding puzzle-solving, the fun parts of the grunt work, and have the AI do the actual grunt of the grunt work. And I think it’s different for everyone… But as we get toward AI starting to – and to be clear, it’s not here yet. But as we work as an industry toward AI being able to take over more entire programming tasks, like build a new feature, then it’s going to take over both the grunt work and the fun work from the programmer. And if someone only wants to use half of that, that’s totally fine. My co-founder Beyang - he uses Emacs, but in a terminal, not in a GUI. So it’s a free country, and devs can choose what they want.

That’s right. Okay. I guess I was saying that more as a caution to you, because half of the audience cringed when you said grunt work, and the other half was like “You’re taking my joy away.” Some of them are happy, and then some of them are like “Let’s not use a pejorative towards the work we love.” You know what I mean?

Well, I think grunt work is different for each person. I think a lot of people would consider the grunt work to be all the meetings, and all the reading of documents, and the back and forth, and the bureaucracy of their job…

For sure.

They hate that part. And they just love coding. And I say we need AI in the future to be able to digest all that information that comes from all these meetings, and to distill the requirements. So let the AI do that for them, and then they can just have a glorious time coding. And we used to joke at Sourcegraph that Beyang and I would create Sourcegraph, and it’d be so damn good that we could just retire and spend all day coding in some cave… And look, I totally feel that, and we want to bring that to everyone. And if they want to do that, then they should be able to do that.

Yeah. So two years ago we would not have opened this conversation up with a discussion on artificial intelligence. Two years ago… you were the one who said that was the last time we talked; you did the work, not me. I didn’t even look at the last time we talked. I knew it was not yesterday, and it was not last year. I just wasn’t sure how far back it was. What has changed with Sourcegraph since then? I mean, you’ve grown obviously as a company, you’ve got two new pillars that you stand on as a company… Code search was the origination of the product, and then you sort of evolved that into more of an intelligence platform, which I think is super-wise… And then obviously, Cody, and code generation, and code understanding, and artificial intelligence, LLMs, all the good stuff. What has changed really, from a company level? What size were you back then? Can you share any attributes about the company? How many of these FAANG and large enterprise customers did you have then versus now? Did they all come for the Cody and stay for the Sourcegraph, or was it all one big meatball? How do you describe this change, this diff?

[00:30:02.09] Yeah, two years ago we were code search. And that’s like a Google for all the code in your company. It’s something that you can use while coding to see how did someone else do this, or why is this code broken? How does this work? You can go to find references, go to definition across all the code… And at the time, we were starting to introduce more kinds of intelligence, more capabilities there. So not just finding the code, but also fixing the code with batch changes, with code insights so you could see the trends. For example, if you’re trying to get rid of some database in your application, you could see a graph where the number of calls to that database is going down, and hopefully, the new way of doing things is going up. So all these other kinds of intelligence. And that stuff is incredibly valuable. Millions and millions of devs love code search and all these things. And with code search, that was about feeding that information to the human brain, which is really valuable. And the analogy that I would draw is ChatGPT, again, changed the world, but we all use Google search, or whatever search you use, way more than we use ChatGPT today. And yet, everyone has a sense that something like ChatGPT, that kind of magic pixie dust, will be sprinkled on search, and we’ll all be using something that’s kind of in between. ChatGPT is probably not the exact form factor of what we’ll be using. Google Search circa two years ago is not what we’ll be using. But there’ll be some kind of merger. And that’s this journey that we’ve been on over the last couple years, taking code search, which fundamentally builds this deep understanding of all the code in your organization - and we’ve got a lot of reps under our belts making that information useful to humans… Now, how do we make that information useful to the AI, and then make that AI ultimately useful to the human? So how can we use this deep understanding of code to have Cody, our code AI, do much better autocomplete, with higher accuracy than any other tool out there? How can we have it use that understanding of how you write tests throughout your codebase, so that it will write a better new test for you using your framework, your conventions? How do we make it really good at explaining code? Because it can search through the entire codebase to find the 15 or 20 relevant code files.

So we’re building on this foundation of code search… And what I’ll say about code search is I use it all the time. I think every dev would do well to use code search more. It’s so good at finding examples. Reading code is the best way to uplevel as a software engineer… But Cody and code AI is something that every dev thinks that they should be using. And they solve so many of the same problems - the problem that caused us to found the company: it’s so damn hard to build software, and it’s really hard to understand code. They both solve the same problem. And if what people want is Cody, more than code search - well, code search still exists, and it’s growing, and it’s the foundation for Cody… But we’re going to be talking about Cody all day, because that’s what people are asking for. And that’s what we hear from our users. We see a lot of people come in for Cody, and then they also realize they love code search… But I think Cody is going to be the door in. It’s so easy to get started, and it is just frankly magical. I think everyone can speak to that magic that they see when AI solves their problem. Like you did with that picture frame example.

Yeah. Can you speak to how easy it was to sell Sourcegraph, the platform, two years ago, versus how easy it is to sell it now? You kind of alluded to it to some degree, but can you be more specific?

Yeah. Two years ago would have been 2021, the end of 2021, which was the peak of the market; the peak of kind of everything. And I think there’s been a lot of big changes in how companies are hiring software engineers, and budget cuts, and so on. So we’ve seen a big change over the last two years. Code search has grown by many, many times since then…

[00:34:01.11] But what we saw, with companies realizing “Hey, maybe we’re not going to be growing our engineering team at 50% each year”, was a lot of developer platform, developer happiness, developer experience initiatives get paused in favor of cost cutting. “How can we figure out the five dev tools that we truly need, instead of the 25?” Whereas in the past, if a dev loved something, then yeah, they’d go in and plop down a bunch of money.

And so we were well positioned, because we had such broad usage… And because a lot of companies looked at us as a platform - they built stuff against our API, and every team used it - we were in a good position there. I think though, if AI had not come out about a year ago, then I don’t know what the dev stack would look like. I think you’d have a lot of companies that realized “Hey, we’ve been keeping our eng hiring really low for the last two years…” I’m not sure now – companies see AI as a way to get as much as they were getting in the past, but with fewer developers. And developers see it as a way to improve their productivity. And I think the missing piece that we’re not fully seeing yet is there’s a lot of companies out there that would love to build more software, but were just unable to, because they didn’t know how to, they were not able to hire a critical mass of software engineers, they were not in some of the key engineering hiring markets, developers were too expensive for them to hire… All these companies that would have loved to build software were just bottlenecked on not being able to find the right engineers. I think that AI is going to help them overcome that, and you’re gonna see software development be much more broadly distributed around a lot of companies. And that is what’s exciting.

So looking at the overall software developer market - around 50 million professional developers, and around 100 million people who write code in some way in their job, including data analysts - I fully expect that number to go up, and I fully expect that pretty much every knowledge worker in the future is gonna be writing some code in some way. So I’m not pessimistic on the value of learning how to code at all… But there’s just been massive change in how companies see software development and the structure of teams over the last couple of years.

I think when we talked last time you were saying, either exactly or in a paraphrasing way, that it was challenging to sell code search. That it was not the most intuitive thing to offer folks. You, the founders, obviously understood how deeply useful it was, because you worked inside of Google, you saw a different lens towards code search… And most people just saw Command+F, or even Command+Shift+F, as something that was built in, rather than something that you went and bought, and stood up separately as a separate instance, that had this other intelligence. And that was hard to sell. However, code search that is being understood by an LLM, Cody, is a lot easier to offer, because you can speak to it. Very much like we’ve learned how to chat with artificial intelligence to generate and whatnot like that.

So I’m curious… Even when we were done talking last time on Founders Talk, you weren’t ready to share this intelligence side, which was also the next paradigm. I think this intelligence factor - obviously, code search gives you intelligence, because you can find and understand more… But it was the way that you built out insights and different things like that, that allowed you to not only manually, like a caveman or cave person, type in all the things you can into search; you could also form an intuitive graph towards, like you mentioned before, the calls to a database going down, and calls to the new database going up, and you can see the trend line towards progress. Clearly. And even share that dashboard with folks who are not in development, in engineering. Sharing with comms, or marketing, or CEOs, or whomever is just not daily involved in the engineering of their products. And I’m just curious… Give me real specifics - how easy is it to sell now, because Cody makes the accessibility, the understandability of what Sourcegraph really wanted to deliver so much easier?

[00:38:11.25] Yeah, Cody does make it so much easier. And yeah, going back two years ago, we had a fork in the road. We could have either made just code search, something that clicked with so many more developers, and overcome that kind of question which is “You know, I’ve been coding for 10 years. I haven’t had code search. I have it in my editor. Why would I need to search across multiple repositories? Why would I need to look through different branches? Why would I need kind of global [unintelligible 00:38:39.15] definition? Why would I need regex search that works?” We got a lot of questions like that. We could have just doubled down on that and tried to get, for us, way more devs using it for open source code, and within our customers 100% of every developer, and all of our customers using code search. We could have done that. What we decided to do was go deeper into the intelligence, to build things that were exposed as more power user tools, like the code insights. Code Insights is something that platform teams, that architects, and security teams, managers - they love, it has incredible value for them, but for the average application engineer they’re not really looking at code insights, because they’re not planning these big, codebase-wide refactors. Same with batch changes. Platform teams love it, people that have to think in terms of the entire codebase, rather than just their feature, they need it. And I think we got lucky, because given that right around that time, that’s when developer hiring began to really slow down. It was really helpful for us to get some really deep footholds in these critical decision-makers, just from a sales point of view, in companies, to have like very deep value, instead of kind of broad, diffused value.

So that ended up being right. It also ended up being right in another way, which is we got deeper in terms of what does Sourcegraph know about your codebase? And that was valuable for those humans over the last couple of years, but it’s also incredibly valuable now, because we have that kind of context that can make our code AI smarter. But I do really lament that most devs are not using code search today. I think it’s something that would make them much better developers, and there’s absolutely a part of me that wishes I could just go have 50 amazing engineers here work on just making it so that code search was so damn easy to use, and solved every developer’s problem. Now we’re tackling that with Cody, because we’ve got to stay focused… And to your point, they do solve the same problem. And with code search, if you’re trying to find out “How do I do this thing in code?”, code search will help you find how all of your other colleagues did it. Cody will just look at all those examples and then synthesize the code for you. And so there’s so much similarity… And we are just finding that Cody is so much easier to sell.

But we did have a cautionary moment that I think a lot of other companies did. Back in February through May of this year, 2023, if you said AI, if you said “Our product has AI”, literally everyone would fall over wanting to talk to you, and they’d say “My CEO has given me a directive that we must buy AI. We have this big budget, and security is done, legal is done, we have no concerns. We want it as soon as possible.” And it didn’t matter if the product wasn’t actually good. People just wanted AI. And that I think created a lot of distortions in the market. I think a lot of product teams were misled by that. I’m not saying that the customers did anything wrong. I think we were all in this incredible excitement. And we realized that we didn’t want to get carried away with that. We wanted to do the more boring work, the work of “Take the metrics of accuracy, and DAUs, and engagement, and overall a lovable product, and just focus on those.” We did not want to go and be spinning up the hype.

[00:42:04.06] So we actually really pulled back some of this stuff and we level-set with some customers that we felt wanted something that nobody could deliver. And that was one of the ways that we came up with these levels of code AI taking inspiration from self-driving cars. We didn’t want the hype to make it so that a year from now everyone would become disillusioned with the entire space. So definitely a big learning moment for us. And if there’s an AI company out there that is not looking at those key user metrics that have always mattered, the DAU, the engagement, the retention, the quality, then you’re gonna be in for a rude awakening at some point, because exploratory budgets from customers will dry up.

Well said. I think it’s right place, right time, really. Or rather, the right insight a long time ago got you to the right place, at the right time. Because everything that is Cody is built on the thing you said you lament that more developers don’t use; it’s built on all the graph and all the intelligence that’s built by the ability to even offer code search, at the speed that you offer it. And then obviously, your insights on top of that. It’s like having the best engine and putting it in the wrong car, and nobody wants to buy the car… And then suddenly, you find this shell that performs differently - I don’t know, just in all ways it feels better to use, it’s more straightforward to use; you still have the same engine, it’s still the same code search, but it’s now powered by something that you can interact with in a meaningful way, like we’ve learned to do by having a humanistic conversation with software running on a machine.

I think that’s just such a crazy thing - that’s why I wanted to talk to you about this. I mean, some people who know your name think that Sourcegraph was born a year or two ago. And you’ve been on a decade journey. I don’t even know what your number is; it’s getting close to a decade, if not past a decade, right?

Yeah. We started Sourcegraph a decade ago.

And so I’ve been a fan of y’all’s ever since then. And for a long time, just a fan hoping that you would get to the right place, because you provided such great value that was just hard to extract, right? The ability to extract the value from Sourcegraph is easier thanks to Cody than it was through code search, because of the obvious things we just talked about. That’s an interesting paradigm shift to be in, because you’re experiencing, I’m assuming, to some degree, hockey stick-like growth, as a result of the challenges you faced earlier being diminished to some degree, if not all degrees, because of the ease of use of Cody and things like Cody.

Yeah. And code search, when we started bringing that to market in 2019 - that was a hockey stick. But now we realize that was a little league hockey stick, and this is the real hockey stick.

And I’ve been reading – I love reading the history of economics, and inventions, and so on… And I’ve been reading about the oil industry. The oil industry got started when someone realized “Oh, there’s oil in the ground, and this kerosene can actually light our homes much better and much more cheaply than other kinds of oil - from whales, for example.” And initially, oil was all about illumination. Make it so that humans can stay up after 6pm when the sun goes down. And that was amazing. But that’s not how we use oil today. Oil is just this energy that powers everything; that powers transportation, that powers manufacturing, that powers heating, and so on. And there were people that made fortunes on illumination oil, but that pales in comparison to the much better uses of oil in our everyday lives. And now, of course, you have renewables, and you have non-oil energy sources… But for a long time, we saw that that initial way of using oil was actually not the most valuable way.

[00:46:14.09] So we see that this just happens over and over: a new technology is introduced, you’re not quite sure how to use it, but you know that it’s probably going to lead to something… And that’s how we always felt with code intelligence - and that’s why getting to Intelligent Automation is so exciting for us now.

One of the really exciting things we’re seeing is - so many people are shocked that with these LLMs, you speak to them like humans. They seem to feel much more human-like than what we perhaps anticipated AI would be like. We think of AI from movies as being very robotic, as lacking the ability to display empathy, and emotion, and thought processes. But actually, that is exactly what we see in LLMs. I’ve even seen some studies that show that LLMs can be better at empathy than a doctor with a poor bedside manner, for example. And for us, this is absolutely critical, because all this work we put into bringing information about code to the human brain - it turns out that AI needs that same information. As a human, if you started a new software engineering job, you’d get your employee badge, you’d go read through the code, read through the docs; if there’s an error message you’d look at the logs, you’d go in team chat, you’d join meetings… That’s how humans get that information. And AI needs all that same information. But the problem is, you cannot give AI an employee badge and have it roam around the halls and stand at the watercooler. That’s just not how AI works.

So we just happen to have broken down all that information into a form we can work with programmatically. And now that’s what we teach Cody.

I always throw the word yet in there whenever I talk about the status quo with artificial intelligence or innovation… Because my son - he’s three; he loves to watch “the robot dance video”, he calls it. It was Boston Dynamics, that “Do You Love Me” song. And they have all the robots dancing to it. And I’m just thinking, “When is the day when it’s more affordable, to some degree, to produce that kind of humanoid-like thing that can perform operations?” Now, I know it’s probably not advantageous to buy an expensive Boston Dynamics robot to stand at your water cooler. But that’s today. What if 50 years from now it’s far more affordable to produce those, and they’re mass-produced with techniques that are completely separate from today’s? Maybe it might make sense eventually to have this water cooler-like scenario where you’ve got a robot that’s the thing that you’re talking to. I’m just saying. That’s why I said the word yet.

Yeah, yeah… And you’ve got to have this humility, because who knows…?

Okay, so let’s talk some about winning. Can we talk about winning for a bit? So if you were on this little league hockey stick with search, and now it’s obviously a major league hockey stick - I think you’re head-nodding to that to some degree, if not vocally affirming it…

When I search “GitHub Copilot versus” - because I think Copilot has brand recognition, since they were one of the first AI code-focused tools out there. Now, obviously ChatGPT broke the mold and became the mainstream thing that a lot of people know about… It’s not built into editors directly. It might be through GitHub Copilot and Copilot X… But even when I search “GitHub Copilot X”, or just Copilot by itself, versus, Cody does not come up in the list. Tabnine does, and even VS Code does… And that might be biased to my Google search. And this is an example where I’m using Google versus ChatGPT to give me this versus. Now, I might query ChatGPT and say “Okay, who competes with GitHub Copilot?” And you might be in that list. I didn’t do that exercise. What I’m getting at is, of the lay of the land of code AI tooling, are you winning? Who is winning? How has it been compared? What are the differences between them all?

Yeah, Copilot deserves a ton of credit for being the first really good code AI tool, in many ways… And I think at this point it’s very early. So just to put some numbers to that, GitHub itself has about 100 million monthly active users, and according to one of GitHub’s published research reports - that’s where I got that 0.5% number from - they have about a million yearly active users. And that’s the people that are getting suggestions, not necessarily accepting them even. So a million yearly actives - what does that translate into in terms of monthly actives? That’s a tiny fraction of their overall usage. It’s a tiny fraction of the number of software developers out there in the world. So I think it’s still very early. And for us, for other code AI tools out there, I think people are taking a lot of different approaches. There are some that are saying “We’re just gonna do the cheapest, simplest autocomplete possible”, and there are some that are saying they’re gonna jump straight to trying to build an agent that can replace a junior developer, for example. I think that you’re seeing a ton of experimentation. What we have, which is unique, is this deep understanding of the code. This context. And another thing that we have is a ton of customers, where Sourcegraph is rolled out over all of their code. And working with those customers - I mean, I mentioned some of the names before… These are customers that are absolutely on the forefront, that want this code AI, and it’s a goldmine for us to be able to work with them.

So when you look at what’s our focus, it’s how do we build the very best code AI that actually solves their problem? How do we actually get to the point where the accuracy is incredibly high? …and we see Cody having the highest accuracy of any code AI tool, based on completion acceptance rate. How do we get to the point where every developer at those companies is using Cody? And that’s another thing we’ve seen - there’s a lot of companies where, yeah, they’re starting to use code AI, and five devs over here use Copilot, five over here use something else… But none of this has the impact that we all want it to have until every dev is using it. As we learned with code search, it’s so important to make something that every dev will get value from, that will work for every dev, that will work with all their editors, that will work with all their languages. And that’s the work that we’re doing now.

[00:56:17.09] So I don’t know the particular numbers of these other tools out there… I think that everyone has to be growing incredibly quickly, just because of the level of interest, but it’s still very early and most devs are up for grabs. I think the thing that’s going to work is the code AI that every dev can use and instantly see working. And what are they gonna look at? They’re gonna say “Did it write good code for me? Is that answer to that code question correct or not? Did it cite its sources? Does it write a good test for me?” And it’s not going to be based on hype.

So we just see a lot of – it’s kind of like eat-your-vegetables work. That’s what we’re doing. Sometimes it’s tempting, when I see these other companies come out with these super-hyped-up promises that - you know, ultimately, I think we all try their products and it doesn’t actually work. We do not want to be that kind of company, even though that could probably juice some installs, or something like that. We want to be the most trusted, the most rigorous. And if that means that we don’t come up in your Google Search autocomplete - well, I hope that we solve that by the time Cody is GA in December… But so be it, because our customers are loving it, our users are loving it, and we’re just so laser-focused on this accuracy metric.

And by the way, that accuracy metric - we only can do that because of the context that we bring in. We look at, when we’re trying to complete a function, where else is it called across your entire codebase? That’s what a human would look at to complete it. That’s what the AI should be looking at. We’re the only one that does that. We look at all kinds of other context sources. And it’s taken a lot of discipline, because there is a lot of hype, and there’s a lot of excitement, and it’s tempting to do all this other stuff… But I’m happy that we’re staying really disciplined, really focused there.

Yeah, the advantage I think you’re alluding to directly is that Sourcegraph has the understanding of the codebases that it has already available to it. That might require some understanding of how Sourcegraph actually works, but to be quick about it, you sort of ingest one or many repositories, and Cody operates across those one or many in an enterprise. You mentioned a couple different companies; pick one of those and apply it there. Whereas, famously and infamously, GitHub Copilot - not Copilot X - was trained primarily on code available out there in the world… Which is not your repository; it’s sort of everybody else’s. So you sort of inherit, to some degree, the possibility of error as a result of bad code elsewhere, not code here, so to speak.

I think Tabnine offered something similar, where they would train an artificial intelligence code tool based upon your own code’s understanding, although I’m not super-deep and familiar with exactly how they work. We had their CEO on the podcast, I want to say about two years ago, again. So we’re probably due for a catch-up there, to some degree. But I think it’s worth talking through the differences, because I think there’s an obvious advantage with Sourcegraph when you have that understanding. And not only do you have understanding; like you said, you’ve done your reps. You’ve been eating your vegetables for basically a decade, you know what I’m saying? So you’ve kind of earned the efficiencies that you’ve built into the codebase and into the platform. Getting to this understanding was step one, and actually having an LLM that can produce a result that’s accurate is step two. You already had the understanding before, and now you’re layering on this advantage. I think it’s pretty obvious.

It sounds like a lot of your focus is vertical, in terms of your current customer base, versus horizontal across the playing field? Like, you probably are going after and maybe attracting new customers, but it sounds like you’re trying to focus your reps on the customers you already have, and embedding further within. Is that pretty accurate? What’s your approach to rolling out Cody, and how do you do that?

[01:00:07.02] Here’s my order of operations when I look at our charts, every three hours. First, I look at what is our accuracy.

Every three hours?

Oh, yeah. Yeah. I love doing this.

Do you have an alarm or something? Or is this a natural built-in habit you’ve got?

I think a natural built-in habit. So first, I look at what is our accuracy, our completion acceptance rate, and how is that trending, broken up by language, by editor, and so on. It’s the first thing I look at. Next, I look at latency. Next, I look at customer adoption, and next I look at DAU, and retention… And that’s what gets us all this broad adoption. And everything is growing. Everything is growing in a way that makes me really happy, but the first and most important thing is a really high-quality product. That is what users want. That’s what leads to this growth in users. But that’s also what helps us make Cody better and better. That’s what helps us make Cody so that it can do more of the grunt work, or whatever parts of the job that developers don’t like. If we were just to be at every single event, and we had all this content, we could probably get our user numbers higher, faster than by making the product better. But that’s not a long-term way to win.
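
As an illustration of that first metric, a minimal sketch of computing completion acceptance rate (accepted suggestions over shown suggestions), broken up by language. The event shape here is a hypothetical stand-in, not Sourcegraph’s actual telemetry.

    // One record per completion shown to a user.
    interface CompletionEvent { language: string; accepted: boolean }

    function acceptanceRateByLanguage(events: CompletionEvent[]): Map<string, number> {
      const shown = new Map<string, number>();
      const accepted = new Map<string, number>();
      for (const e of events) {
        shown.set(e.language, (shown.get(e.language) ?? 0) + 1);
        if (e.accepted) accepted.set(e.language, (accepted.get(e.language) ?? 0) + 1);
      }
      const rates = new Map<string, number>();
      for (const [lang, n] of shown) rates.set(lang, (accepted.get(lang) ?? 0) / n);
      return rates;
    }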

And so instead, we’re asking “How do we use our code graph more?” How do we get better, entire-codebase references? How do we look at syntactical clues? How do we look at the user’s behavior? How do we look at - of course - what they’ve been doing in their editor recently, like Copilot does, but how do we take in other signals from what they’re doing in their editor? How do we use our code search? How do we use conceptual search and fuzzy search to bring in where a concept - say, GitLab authentication - exists elsewhere in their code, even if it’s in a different language? How do we bring in really good ways of telling Cody what goes into a really good test? If you just asked ChatGPT “Hey, write a test for this function”, it’s gonna write some code, but it’s not going to use your languages, your frameworks, your conventions, your test setup and teardown functions. But we have taught Cody how to do that. That’s all the stuff that we’re doing under the hood, but we don’t need developers to know about that. What they need to see is just: this works. The code that it writes is really good. And by the way, with the things I mentioned - those are six or so context sources, and if you compare to other code AI tools, they’re maybe doing one or two. But we’re not stopping there, because - take a simple example; if you want the code AI to fix a bug in your code - well, it’s probably gotta go look at your logs. Your logs are probably in Splunk, or Datadog, or some ELK stack somewhere… And so we’re starting to teach Cody how to go to these other tools. Your design docs are in Google Docs. You’ve probably got tickets in Jira that have your bugs; that’s important for a test case. And you also have your product requirements in Confluence. Jira, Confluence… You want to look at the seven static analysis tools that your company uses to check code, and that’s what should be run… So all these other tools - Cody will integrate with all of them. And they come from so many different vendors, including companies that have in-house tools… And that ultimately is the kind of context that any human would need if they were writing code. And again, the AI needs that context, too.
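
To sketch how several context sources might be blended - purely as illustration; the interfaces and ranking below are hypothetical, not Cody’s actual architecture - each source contributes snippets that are ranked and packed into the prompt under a size budget.

    // A context source: the code graph, conceptual search, recent editor
    // activity, logs, design docs, tickets, and so on.
    interface ContextSnippet { source: string; text: string; relevance: number }

    interface ContextSource {
      name: string;
      retrieve(query: string): Promise<ContextSnippet[]>;
    }

    async function assembleContext(query: string, sources: ContextSource[], budgetChars: number): Promise<string> {
      const all = (await Promise.all(sources.map(s => s.retrieve(query)))).flat();
      all.sort((a, b) => b.relevance - a.relevance); // most relevant first
      const picked: string[] = [];
      let used = 0;
      for (const snip of all) {
        if (used + snip.text.length > budgetChars) continue; // respect the prompt budget
        picked.push("// from " + snip.source + "\n" + snip.text);
        used += snip.text.length;
      }
      return picked.join("\n\n");
    }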

We are universal. We’ve always been universal for code search, no matter whether your code is in hundreds of thousands of repos, or across GitHub, GitLab, Bitbucket and so on… And now it’s - well, what if the information about your code, the context of your code, is scattered across all these different dev tools? A good AI is going to need to tap all of those, and that’s what we’re building. And then you look at other tools from vendors that are – you know, maybe the future of their code AI will tap their version of logging, their internal wiki… But very few companies use a single vendor’s suite for everything and are totally locked in. So that universal code AI is critical. And that’s how we’re already ahead today with context, which leads to better accuracy… But that’s also how we stay ahead. And developers have come to look at us as this universal, independent company that integrates with all the tools they use and love. So I think that’s gonna be a really long-term, enduring advantage, and we’re putting a ton of investment behind this. We’re putting the entire company behind this. And it takes a lot of work to integrate with dozens and dozens of tools like this.

[01:04:27.25] For sure. What does it take to sell this? Do you have a sales organization? Who does that sales organization report to? Does that report to both you and Beyang collectively, or to you because you’re CEO, or is there somebody beneath you they report to, and that person reports to you? And whenever you go to these metrics every three hours and you see, let’s say, a customer that should be growing at a rate of x, but they’re not, do you say “Hey, so-and-so, go and reach out to them and make something happen”, or get a demo to them - because we’re really changing the world here, and they need to be using this world-changing thing, because we made it and they’re using us, and all the good things? How does action take place? How does execution take place when it comes to really winning the customer, getting the deal signed? Are there custom contracts? I see a way where I can sign up for free, and then also a contact option. So it sounds like it’s not pure PLG. Kind of PLG-esque. You can start with a free tier, but… Are most of these deals homegrown? Is there a sales team? Walk me through the actual sales process.

Yeah, everyone at Sourcegraph works with customers in some way or another… And we’ve got an awesome sales team, and we also have an awesome technical success team that goes and works with the users at our customers. We see a few things come up. When I look at a company, sometimes I’m like “Man, if every one of your developers had Cody tomorrow, they would be able to move so much faster.” And yet, you know, I can’t just think that and expect it to happen… So one of the reasons we see companies adopt code AI more slowly than perhaps they themselves would like is that they’re not sure how to evaluate it. They’re not sure how to test it. They’ve got security and legal questions, and sometimes they want to see what the improvement to developer productivity is. Sometimes they want to run a much more complex evaluation process for code AI than they would for any other tool out there, just because there’s so much scrutiny, and nobody wants to mess this up. So what we advocate for, what GitHub advocates for, is that there’s so much latent value here. Look at accuracy, look at that completion acceptance rate - that is the quality metric. And there’s a lot of public research out there showing that if you can show a favorable completion acceptance rate inside of a company, that will lead to productivity gains, rather than having to do a six-month-long study inside of each company. So that’s one thing that helps.
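As a rough illustration of the completion acceptance rate mentioned here - a minimal sketch with hypothetical event fields, computing what fraction of completions shown to a developer were actually accepted:

```python
# Hypothetical sketch of the completion acceptance rate metric:
# of all completions shown, what share did the developer accept?
from dataclasses import dataclass


@dataclass
class CompletionEvent:
    shown: bool      # the completion was displayed to the developer
    accepted: bool   # the developer kept it


def acceptance_rate(events: list[CompletionEvent]) -> float:
    shown = [e for e in events if e.shown]
    if not shown:
        return 0.0
    return sum(e.accepted for e in shown) / len(shown)


# e.g. three of four shown completions accepted -> 75%
events = [CompletionEvent(True, True), CompletionEvent(True, False),
          CompletionEvent(True, True), CompletionEvent(True, True)]
print(f"{acceptance_rate(events):.0%}")  # prints: 75%
```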

Another thing is sometimes companies say “We want to pick just one code AI tool.” And I think that’s not the right choice. That would be like a company picking one database in the year 1980, and expecting that to stick forever. This space is changing so quickly, and different code AI tools have different capabilities. So we always push for “Get started with the people that are ready to use it today”, rather than trying to make some big top-down decision for the entire organization.

Okay, so two co-founders deeply involved day to day… One thing I really appreciate - and I often reference Sourcegraph, and I suppose you indirectly by mentioning Sourcegraph… Sometimes you by name, you and Beyang by name, but sometimes just “the co-founders”. So I lump you into the moniker of “the co-founders”. And I will often tell folks, like “Hey, if you’re a CEO–” I talk to a lot of different CEOs, or founders… And they really struggle to speak about what they do. They literally cannot explain what they do in a coherent way. It happens frequently, and those things do not hit the air, let’s just say. Right? We’re a podcast, primarily.

[01:08:13.11] Or I have bad conversations about possible partnerships, about possibly working with them, and it’s a red flag for me. If I’m talking to a CEO in particular who has a challenge describing what they do, I’m just like “Do we really want to work with them?” But you can speak very well. Congratulations… You and Beyang are out there as almost mouthpieces and personas in the world, not just to sell Sourcegraph - you really do care. I think you both do a great job of being the kind of folks who co-found and lead and can speak well about what you do, why you’re going the direction you’re going, and that’s just not always the case. How do you all do that? How do you two stay in sync? Has this been a strategy, or did you just do this naturally? What do you think made you all get to this position, to be two co-founders who can speak well about what you do?

We have learned a lot on this in particular since we started Sourcegraph. Even when describing Sourcegraph, we say “Code search. And now we also do code AI.” And I think some people are definitely relieved when they ask “Hey, what does Sourcegraph do?” and the answer is four words. Because there are a lot of companies that do struggle to describe what they do in four words. And yet, we were not always at this point. I’m coming here from a position where we have a lot of customers. We’ve validated that we have product-market fit, that a ton of people use those products, so I can say that. But before we had that, there was a lot of pressure on me from other people, and from me internally, to make us sound like more than code search. Because code search feels like a small thing… Which seems silly in hindsight. Does Google think that search is a small thing? No. But there was a lot of pressure to say “We’re a code platform, a developer experience platform”, or that we revolutionize and leverage, and all this stuff. There’s a lot of pressure –

…but nothing beats the confidence of product-market fit, of having a lot of customers and users, and just saying what you actually do. And one way we started to get that, even before we had all that external validation, was that Beyang and I use our product all the time. We code all the time. I don’t code production features as much, but we fundamentally know that code search is a thing that is valuable. That Cody, that code AI, is the thing that’s valuable. And we felt that two weeks after we started the company. We were building Sourcegraph and we were using Sourcegraph, and for me, it saved me so much time, because it helped me find that someone had already written a bunch of the code that I was about to write over the next three weeks. So it saved me time in the first two weeks. And from then on, it clicked. So I think as a founder, use your product, and if you’re not using your product, make it something - make it so good that you would use it all the time. And then iterate until you find the thing that starts to work, and then be really confident there. But it’s tough until you’ve gotten those things.

That’s cool, man. It does take a journey to get to the right place. I will agree with that. And just know that out there you have an Adam Stacoviak telling folks the way to do it is Sourcegraph.

Thank you.

You guys are great co-founders, you guys seem to work great together… I see you on Twitter having great conversations… You’re not fighting with people, you’re not saying that you’re the best, you’re just sort of out there, kind of iterating on yourselves and the product, and just showing up. And I think that’s a great example of how to do it in this world where all too often we’re just marketed to and sold to. And I don’t think that you all approach it from a “We must sell more, we must market more.” That’s kind of why I asked you the sales question, like how do you grow? And you didn’t fully answer, and that’s cool… You kind of gave me directional answers, you didn’t give me particulars. But that’s cool.

[01:12:04.16] Yeah. Well, look… If you just take the customers that we have today, we could probably become the highest-adoption, highest-value code AI tool just by getting to all the devs in our existing customers - not even adding another customer. And that just seems to me to be a much better way to grow - through a truly great product that everyone can use, that everyone can adopt, that’s so low-friction - rather than through something that’s not scalable, like getting billboards, or buying ads… That’s all part of the portfolio approach that you’ve got to take, but ultimately, the only thing that’s gonna get really big is a product that people not only love so much that they spread it, but that gets better when they use it with other people. That’s the only thing that matters. Anything else, you’re gonna get to a local maximum.

Very cool. Okay, so we’re getting to the end of the show… I guess, what’s next? What’s next for Cody? Give us a glimpse into what’s next for Cody. What are you guys working on?

For us it’s really two things. It’s keep increasing that accuracy. Just keep eating our vegetables there. Maybe that’s not the stuff that gets hype, but that’s the stuff that users love. And then longer term, over the next year, it’s about how do we teach Cody about your logs, about your design docs, about your tickets, about performance characteristics, about where it’s deployed? All these other kinds of context that any human developer would need to know. And ultimately, that’s what any code AI would need to know if it’s going to fix a bug, if it’s going to design a new feature, if it’s going to write code in a way that fits your architecture. And you don’t see any code AI tools even thinking about that right now. But that’s somewhere I think we have a big advantage, because we’re universal. All those pieces of information live in tools from so many different vendors, and we can integrate with all of them… Whereas any other code AI is going to integrate with its own locked-in suite… And you’re probably not using that vendor’s tools for a wiki, for example, and their logs, and all that. So that’s a huge advantage. And that’s how we see code AI getting smarter and smarter. Because it’s going to hit a wall unless it can tap that information. And you already see other code AI tools hitting a wall - not getting much better over the last one or two years, because they cannot tap that context. It’s all about context, context, context - whether you’re feeding that into the model at inference time, or fine-tuning on it… It’s all about the context. So that’s what we’re gonna be completely focused on, and we know the context is valuable if it increases that accuracy. And what a beautiful situation with this incredibly complex, wide-open space, that you actually can boil it down basically to a single metric.
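As a rough sketch of what “feeding that into the model at inference time” can look like - hypothetical code, not Cody’s implementation - ranked context snippets (logs, docs, tickets, code) get packed into the prompt under a size budget before the model is called:

```python
# Hypothetical sketch: pack the highest-ranked context snippets into the
# prompt until a size budget is hit, then hand the prompt to the model.
def build_prompt(task: str, snippets: list[str], max_chars: int = 4000) -> str:
    """Assumes `snippets` is already sorted by relevance, best first."""
    parts, used = [], 0
    for s in snippets:
        if used + len(s) > max_chars:
            break  # budget exhausted; drop the rest
        parts.append(s)
        used += len(s)
    context = "\n---\n".join(parts)
    return f"Context:\n{context}\n\nTask: {task}"


print(build_prompt("Fix the login timeout bug",
                   ["log: auth timeout after 30s in session handler",
                    "ticket: users report being logged out mid-session"]))
```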

So that’s our roadmap - just keep on making it better, and smarter, in ways that mean developers are going to say “Wow, it wrote the right code, and I didn’t think that it could write an entire file. I didn’t think it could write many files. I didn’t think it could take that high-level task and complete it.” That’s what we’re gonna be working toward.

Well said. Very different conversation this time around than last time around, and I appreciate that. I appreciate the commitment to iteration, the commitment to building upon the platform you believed in early on to get to this place, and - yeah, thank you so much for coming on, Quinn. It’s been awesome.

Yeah, thank you.

