Practical AI – Episode #241

AI's impact on developers

with Emily Freeman & James Q Quick


Chris & Daniel are out this week, so we’re bringing you a panel discussion from All Things Open 2023 moderated by Jerod Santo (Practical AI producer and co-host of The Changelog) and featuring keynoters Emily Freeman and James Q Quick.


Sponsors

Neo4j – NODES 2023 is coming in October!

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Fly.io – The home of Changelog.com. Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Chapters

1 00:00 Welcome to Practical AI
2 00:35 Sponsor: Neo4j
3 01:31 Introduction
4 02:22 Long-term sentiments
5 03:56 Steps to get started
6 07:25 Being in a hype cycle
7 08:17 Moving up the value chain
8 12:34 On cutting dev jobs
9 16:30 Making the cut
10 20:09 Unionize against AI?
11 21:37 What coding AI is good at today
12 25:41 Impact on learning development
13 29:38 Will AI steal developer joy?
14 34:13 Impact on open source
15 37:31 Disallowing GPTBot
16 40:39 Audience Q&A!
17 40:55 Most underhyped AI things?
18 43:03 AI tools: shovels or opiates?
19 45:19 AI for code maintenance?
20 47:16 Wrapping up
21 47:37 Outro

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Hello! Jerod Santo here, Practical AI’s producer and co-host of the Changelog podcast. Chris and Daniel are out this week, and I just got back from Raleigh, North Carolina attending the All Things Open conference. While there, I moderated a panel all about AI’s impact on developers featuring keynoters Emily Freeman and James Q Quick. We thought you might enjoy listening in on that discussion, so here it is.

The opening question didn’t get recorded, but I asked each of them to introduce themselves and tell us all if they’re long-term bearish or bullish on the impact of AI on developers.

James Q Quick, developer, speaker, teacher… I’ve done some combination of those things professionally for 10 years now, which is pretty fun. And on the AI front - this is something I’ve actually talked a lot about; I really enjoyed your talk, by the way… That was my first pitch, an AI talk, and they were like “No, we already have somebody that’s taken that.” So my take, which I would love to get into more, is a super-positive one. A thing that I’ve talked about a lot recently is people’s fear of it replacing their jobs, and hopefully changing your mindset around that fear - the fear that you might have - into something more positive… So hopefully we can get more into that long-term.

I love that. I love that we’re starting with bullish or bearish, like “Yes/No, go.” I’m Emily Freeman, I lead community engagement at AWS. That means I come to communities and conferences like these to really show up as a partner for the communities that already exist. I ran Developer Relations at Microsoft prior to that, and I’ve certainly been in the community for a long time; I wrote “DevOps for Dummies” and “97 Things Every Cloud Engineer Should Know.” I am bullish on artificial intelligence, because it’s happening, right? Like, this is happening. We have to kind of make it our own and lean into it, rather than try and fight it, in my opinion.

And your response. I guess you guys agree, so you don’t have to –

Yeah, are we supposed to – yeah, we should have made that [unintelligible 00:03:45.27]

Yeah, we should have set this up so you have a debate to kick it off.

Here’s my response…

[laugh] Alright, well, let’s reel it in then. So that’s long-term both very positive. I think I’m also in that camp, so we won’t debate too harshly on that… But what about today? Where does it stand? I know we’ve had some good demos, we have people using certain things… It’s here, we think it’s staying, so to developers, it sounds like the message is “It’s time to adopt…” But how? How do I get started? If I’m just seeing the demos on social media, or my colleague talks about it, and they show me what they’re doing with it, what do I do today to actually start my AI journey?

I think getting started today is really about acknowledging sort of where we’re at with AI, and the tools that are available to us in this moment. I think learning as much as you can… This isn’t new to us, right? We have to learn all the time, and adapt our skills, and grow as our technology grows. So I believe that we have to, again, lean into AI, learn these things… I mentioned prompt engineering earlier. I don’t think it’s a permanent role, but I think it is something that we have to engage with right now, and learning to design our prompts to really lean into the specific vectors of the model that you’re using is important.

Learn as much as you can about how it actually works on the backend, right? I’m doing this right now. I don’t have a degree in data or artificial intelligence… I’m learning, and I’m watching the content that already exists, and gleaning as much as I can from it. So that’s been a great experience, and it’s opening my eyes to sort of how we proceed with this. But I think for now, it’s just exploring the tools, recognizing the strengths and the limitations, and being ready to adapt and change as we move forward.

That’s perfect. I love the adapt-and-change mindset, and I think if you don’t adapt and change and embrace AI to a certain extent - this is dramatic, but you’ll get left behind. But the reason that’s not as scary as it sounds is that’s been the case with every technological advancement that we’ve ever had. If you were writing machine code 30 years ago and you were still doing that today, you’d not be very productive. Maybe some of you are, and that’s cool… But we have abstractions, and we continue to have abstractions, where the world that we live in as developers is totally different than it was five years ago, 10 years ago, 20-30 years ago.

[00:06:16.00] So this is just one of those things, and it doesn’t happen overnight. It’s a progression. So I think you look at what’s the easiest way. Can you add an extension to your text editor to give you prompts? Can you go to ChatGPT? I use that almost on a daily basis; not just for code - actually less so for code, but just a creativity standpoint. Like “Give me an idea of a project I can build”, or “Give me questions to ask my Discord” is actually something that I’ve done… So I think that’s kind of the easy way to do it, and I think like where we are now is really, I guess, very similar to what you said about the iron-clad stage… I forget the exact phrasing, but basically the verification phase, where everything you do with AI has to be verified. And that means that our jobs don’t go away, because we have to be developers and have that knowledge to be able to do that verification process… But I think that’s – you’re able to get a lot, but I think you also have to invest a pretty good amount of time into the verification process to make sure that it works, it works correctly, and then if you’re doing it for things outside of code, it also fits your tone. I use it for blog posts and ideas for content and things, but I have to take that output and convert that into something that is genuine for me. So there’s a lot that goes into just confirming, verifying and tweaking the output that you get.

I also just wanted to say, I think there is currently a bit of a misunderstanding about what a hype cycle actually is. And so you’ll hear this phrase, that we’re in a hype cycle of AI… And they’re right. But the hype cycle, if you actually go look, it was made by Gartner. Thank you, Gartner… So it’s really just this sort of extreme expectation. So we’re very excited about it right now, and we haven’t begun to really see the technical limitations and the difficulties that we will come across later. So being in a hype cycle does not necessarily mean that AI is going away. It is just inflated right now.

Right. Well, to James’s point, I think very few of us are writing machine code, but the ones who are, are getting paid very well to write it.

Like ridiculous amounts of money.

Don’t sleep on COBOL. Still a thing, and still will be for a time to come. So in my experience, I think that AI codegen in the small is very much here, at the function level, at the line level, maybe at the module level… As you get into broader strokes, understanding the system at large, the things that really are in the mind of the developers at this point, do you think it’s always going to stay there? Do you think it’s going to move higher and higher up the abstraction, to where I can say “Hey, AI, make me a Facebook for dogs”, and it will say, “Okay, I’m done.”

Please, no… [laughs]

Well, that’s the ridiculous end point. But if we look at what – yeah, I mean, there actually is one of those. [laughter] Or was, perhaps. If we look at the way that a client would hire, for instance, an indie developer, like a contract, freelance dev, and they have a business idea. And the client has some sort of idea of what that business is, and so maybe they’re at like the user story level… Now, most people aren’t quite there yet. You have to help them flesh that idea out. But at a certain point, that becomes a feature that is given to that person, and then they go and implement it. And right now, I think it’s fair to say that person will use AI tooling in order to do that faster, better, stronger etc. But is there a point - and if so, please prognosticate when that point comes - when I can simply be the writer of the user story, and we don’t need anybody in between me and the computer?

[00:09:56.11] I think we’re a long ways off from that. I think any time you’re talking about an abstraction, even the best developer tools on the market right now, the difficulty really comes in plugging everything together. We have access to so many different tools, that operate wonderfully, and provide incredible benefits… But making them all integrate and flow together is always the hard thing… And I see artificial intelligence as the exact same thing. It will do really well in small sort of pockets of where we need it to, and then plugging it all together will be the sort of last moment, I think, where we’re involved.

I think the abstraction just gets higher and higher. And again, that’s been the evolution of humankind. That’s the reason we have technology and inventions, is so that we don’t have to do the stuff that we wasted a bunch of time doing, looking back now like 100 years, or whatever.

So all the abstractions that we see in development - like you no longer have to manage your own servers, you no longer have to do patches, you no longer have to do firmware updates, and that kind of stuff… That’s just the continual path that we’ll go down, and I’m glad that you started with that: it’s a very long way away… Because people’s irrational fear is like tomorrow they lose their job because they use ChatGPT to build the app. And that’s not anywhere near the case. But I don’t see why the evolution of this wouldn’t be exactly that, where you say “I want Facebook for dogs”, and it gives it to you… Because that code and that logic is out there. It takes a lot to put it together and to figure it out. And this “Prognosticate when…” - years. But that could be the goal.

But one interesting thing - in doing some research for one of the talks I gave, I came across the Jevons paradox. Anybody heard of that?

Cool. So it makes me sound smart. So a lot of people fear – like, if something can do my job faster, that means I’m going to lose my job, because it’s going to do my job. But the Jevons paradox looks across – we’re only thinking that way in a mindset of what we’re capable of doing now. We’re not thinking forward about what as a whole we’re capable of doing with these augmented tools. So we can’t even imagine what problems we can solve in 10, 15, 20, 50 years. So even if right now we have this idea of Facebook in our head, we know what that is tangibly, even if ChatGPT or whatever can do that, we don’t know what problems we’ll be solving that are infinitely more difficult than that at the time… So it’s going to continue like that - tools are getting better, but we’re continuing to do more, I think, as an ecosystem.

Okay. So we’re gonna get past Facebook is what you’re saying.

Okay. Okay, well, how about the other – I know we’re all optimistic long-term, but what about this very real possibility…? I’m a C-level executive, I’m watching TikTok… Somebody else on TikTok, who’s a C-level executive coach, says “Look, developers are getting more and more efficient thanks to AI. They are now 40% more efficient. You can just cut that directly off of your top line and save your bottom line. We’re in an economic downturn; you need to cut your engineering team today.” Like, that seems like a very real fear, and a very real possibility. What are your thoughts?

Sure, I’ll take it. No danger in that question… No, I think plenty of CEOs are probably watching these kinds of videos on the TikTok… I don’t know why that amuses me so much, like a CEO – yes, I call it the TikTok, because I think it’s funny. I remember when Facebook was The Facebook… And I’m a millennial, so you know… Millennials have been coming up a lot today… In a good way. Yes, we’re good, despite what the baby boomers say.

So I think it is a very real possibility to cut, and for that to be the impetus and the sort of thought around this. And you see this throughout history - as we become more efficient and effective, instead of earning ourselves more time to live the life that we want, we prioritize work, and are always chasing that edge of the bottom line. Societally, I think we could do better with that… But it’s always going to be a reality. And I think this is where we have to learn and grow and adapt. If we sit still, to James’s point earlier, that will not behoove you long-term. So learning, adding value in different ways, and adapting to this new technology is key, I think, to increasing our value and having some more longevity in our roles.

[00:14:16.13] That said, I think the roles are going to change. And again, we’re not new to this. Our roles have changed completely. We had sysadmins, and now you rarely see that job title. But the population of people in technology roles has only grown from there. And so I think that there’s extreme opportunity, if again, we lean in and we’re not approaching this in a fear-based mentality, of trying to dig our heels in and maintain the current system as it stands.

I feel like we need to be more controversial. No, I don’t have it; I’m saying, like, all those things I agree with as well. To your point earlier in your talk, the – again, I forget the exact phrasing, but we kind of had to go through the iron-clad situation to learn what the pitfalls were, and to then get to this next iteration of building ships that was so much better in so many ways… And I can see a scenario where what you’re saying happens, and I can see them getting bit in the ass really quickly from not having developers for when things go wrong… Because as we all know, no matter who writes the code, stuff goes wrong. And somebody has to fully understand that. And like maybe somebody with non-technical background can go into ChatGPT and say “Here’s what I’m getting. What’s going on?” but probably in that case, you really want someone with the technical experience.

I just think it’s such a slow – although it seems super-fast, I think it’s a much slower process than we give credit for, and I think we just go down this rabbit hole of really thinking it’s happening now, and it’s just not. And if that has happened with a company, please share a story, but I just haven’t heard of that. But I can see a time, and I think there’ll just be learnings with that… But I also go back to the Jevons paradox - we still approach this conversation now with a fairly limited mindset of what we can think about being capable of building right now, and we just don’t know what else we’ll be building. And I 100% agree jobs will be augmented, but not really in any different way, although maybe slightly accelerated, than how they’ve been augmented over the course of time… Because that’s what inventions are for. So I really just go back to that one, instead of going down the fear rabbit hole, or the question marks around the benefits going forward.

Okay. If there was a 30% cut, and I didn’t want to be a part of it, what would I do today?

Learn. We have to learn. And ChatGPT has come up a lot, and that’s sort of the leader right now… We don’t know that that’s going to stay that way. So you’re going to see a ton of new tools come forward, you’re going to see a ton of startups get funded… This is where venture capitalists are putting their money right now. There’s going to be a lot of new tools entering the market, and a lot of churn, as we sort of hone in on who the big players will be long-term.

So I think learn… I think you have to sort of make demands where you can. I’ve talked about responsible AI… This is super-critical. And we are in the place where it is truly our responsibility to push for this, and push against the sort of market forces that would say we’re moving forward quickly, with a profit-based approach to this, a profit-first approach.

We have to go forward with a set of guidelines and standards that protect everyone, and use this in that responsible way. So that for me is key as we proceed, and really owning that as the people who not only build these tools, but utilize these tools, that we are clear on our approach and our tolerance of that behavior.

[00:17:57.09] I’ll double down and go a little bit deeper on the ownership piece of learning. If we’re really honest, we’re in a really shitty time right now, economically, and jobs… And I feel like every month, I have a friend of mine who reaches out, or I just hear about having gotten let go… I was let go from my role before really this started, like a year and a half ago, that summer… And the reality is that’s happening. And it really, really sucks. And it’s really, really hard. But I think your skill set has never been more important, your ability to communicate what you bring to the table has never been more important… I talk about this a lot from a career perspective; you have to be able to share your benefit and your value, and you have to be able to communicate that effectively, and also confidently, when you go into potential interviews, or just how you show up and talk to people in general. That’s never been more important.

I also think - and I go back to this a lot, because it’s very important to me - community. Like, you never know when someone in this room might be the person that helps you find your next job. You never know what one of those connections is. And to clarify this, from a networking perspective, it doesn’t mean find people that work at a company, so that when you go to apply there, you can just have an in. That’s not why you do it. You invest in the community, you show up, you’re a part of the conversations, and you’re genuine, and that will have a significant return, or at least can.

A little personal story - when I was let go from my job a year and a half ago, it was kind of a debate for me of whether or not I was going to go full-time to work for myself, something I’d been thinking about for a while… And so I posted on Twitter, saying “If anyone is hiring for developer relations or like management positions in that realm, send me a message.” And I got 50 or so DMs of people not only saying “We’re hiring”, but also kind of like “We’d like to hire you.” And I don’t say that from like a braggy perspective. What I’m saying is like, my network at that point - I had nothing to worry about, because I could find an opportunity because I’d earned trust in that community. And so the people that you’re sitting next to, the people that you talk to, the people on stage - you never know what that’s going to do for you. So there’s in recent times never been a more important time for your skill set to be very sharp, and for you to be continuing to evolve that, like you said… And then also your network and how you show up in community, because you just never know.

I love that emphasis on community… And we are not a collection of individuals who form a humanity, we are a whole. And not everyone should have to have the gumption or tenacity or privilege to demand certain things from their specific workplace or role. And I think part of being a community is protecting each other, and standing up for each other, and showing up for each other. And if you have the room to do that, or the natural personality to do that, the more that you can kind of be a leader in this community, and push for those things in your own workplaces, and locations, the better off we will all be.

Can I add one more thing? [unintelligible 00:20:47.16] so much. The automotive industry right now is going through strikes, and stuff… And they did it in an interesting way, where they did it in like bits and pieces of like taking more people off the line, so that they can continue to budget to be able to do that longer. There’s also – I forget what the acronym is for writers, but like the strikes in writing… That’s, I think, the power of community and people being able to come together as a community to stand up for what they think they deserve… And I don’t know that we’re here right now, but I think it’s just an example of what people that come together with a common goal can do for an entire industry. And maybe we get to a point where we unionize against AI; I don’t know, that’s – maybe not. But the power of those connections, I think, can lead to being able to really make positive influence wherever we end up.

Unionize against AI. You heard it here first. Okay. Let’s dive into the adoption weeds a little bit. So we talk about learning, adopting, trying things… What have you all found is particularly beneficial today, how I would go about adopting, and things that let you down? For instance - I’ll give one, because I write Elixir, which makes me a little bit weird… AI does not know Elixir very well. So yes, it’s here, but it’s not evenly distributed; for our more obscure technologies, you’re gonna have worse generations, you’re gonna have worse advice. It’s all good. So I use it less in that context. But when I’m writing the frontend stuff, it knows JavaScript very well. So that’s just an example of what’s good and what’s not good.

[00:22:25.15] I’ve heard the advice that you should use it to generate your tests, and then you write the implementation. Maybe that’s a good idea; maybe that’s backwards. Maybe I should have it write the implementation and I write the tests, because I am the verifier. So thoughts in the weeds of like what it’s good at today… You don’t have to go into the future, but if I was actually going to go code after this, and I was going to adopt or die, what would I do to really level me up?

You can probably speak more, actually, to how good or bad it is in different scenarios, or maybe – I don’t know, but I can’t do that. So I’ve used it… But I also come from a perspective of knowing nothing about how AI works. And so it’s interesting that you were saying you were learning about how it works, and what the underpinnings are, and stuff like that… And I’ve taken a different approach, where it’s like, I’m just a regular developer; I have none of that knowledge, and I’m just seeing what it does for me. And I think there’s a time where we continue to get better and learn more.

So I think the adoption for me - again, no specific advice of like how well it does in different segments of the industry, but just throwing it in there and seeing. Because I think it’s going to change from language to language, framework to framework, and it’s up to you to kind of figure out what works for you, and maybe your team, and just kind of figure that out for yourself. Again, not super-specific, so maybe you can help me out there.

I think right now we see various tools on the market. I can think of about five that are sort of leading the way. I think we’re going to see a lot more models be developed and released, and kind of see where that goes, and experiment there. I think your point about the languages is such a good one, where you’re seeing a ton of JavaScript. Obviously, I expect Python to be in there…

Python. Yup.

[00:24:06.07] Yes. The data people love Python, so I get it. But I think as we proceed, making it an even playing field as far as code generation… But also, keep in mind that part of the major issue with generative AI is you take a prompt and it generates something based on expectations. And so it produces what we call hallucinations. Gen AI is on drugs. [laughter]

Lots of breaking news on this panel.

Breaking news. Yes. And so what happens is it will just hallucinate something, and it kind of goes off on these tangents… And you see this when it becomes really verbose in its language, or it kind of goes off… Or if an image, someone’s missing an ear… That type of thing. And those exist right now, and they’re fairly common in gen AI. And I expect as we kind of move closer, and again, hone these models, that that becomes better, and we have fewer of those… But right now that is one of the major challenges with gen AI.

If you really want to be entertained/trigger warning - very weird. I had this video in a slide, and I took it out, because it’s so weird. If you’re interested, search for “the Toronto Blue Jays AI generated hype” video; that’s for the baseball team. Fair warning, if you want to. It’s very entertaining, but also extremely weird, going back to like people missing ears, and stuff. Check it out if you want.

So when we talk about it being hallucinatory, what that really means is that it’s wrong; it gave the wrong answer. Right? And as an experienced developer - I’m sure many of you here are experienced developers - I can look at the wrong bit of code, maybe I’ll execute it once, but I can be like “Yeah, that’s not right.” What does this do to people learning software? Because they can’t do what we can do and say “That’s not right.” They’re just gonna be like “Alright, let’s rock and roll and throw this into production.”

Is that what you did when you were a junior? [laughs] Because I did not do that.

Okay, well, different paths, you know… [laughter]

I appreciate the Yolo approach to production there. No, I think you bring up so many different things… So yes, it’s wrong. It doesn’t know that it’s wrong yet. And when we go through and we’re talking about juniors – someone on Twitter right after the keynote mentioned that “Well, gen AI is getting rid of juniors.” I don’t believe that for a moment. And please, please don’t take that approach into your companies. That’s going to be bad.

I think the same approach with juniors with gen AI should exist as we always have, which is where the more experienced senior and principal engineers not only review that code, but also coach the juniors on what works and what doesn’t, and why, so that we can all learn and progress together. Again, such an emphasis on learning and evolving as a community.

I also think this is where – I know for Amazon CodeWhisperer, when it generates code, you have options. So it will give you a few options that you can scroll through and read, and decide which works best for you. And I love that approach, because one, you can see multiple ways of solving the same problem, and two, you still have some ownership and direction that you can inject into the code, based on your, again, personal style, or approach, or belief, knowing the whole system. From that one comment, no code sidekick is going to know exactly what is actually happening at the large scale; it can pick up on things as it learns… But being able to see it as the whole and not just that one piece of code is really one of the values you bring.

My initial reaction to the impact or influence on learning when you use AI is - several different things. First and foremost, it’s the fact that you have to also understand what you’re accepting, whether you’re copying, pasting, or pressing Enter, or Tab, or whatever to get that code. You have to understand it, because you have to be able to decide “Is it going to work?” Hopefully, you’re not just shipping directly to production, although, you know… And in some ways, it’s not that different than how we’ve always been. Stack Overflow has been here for years; we have memes about Ctrl+C, Ctrl+V keyboards, because that’s all we need, right? We’ve done that for a long time, and we’ve learned sometimes to be responsible with how we do that. So I think we have to take time, especially for people that are early on, to pay attention to what’s there, maybe go and do outside research, to really have at least a decent understanding of it…

But I’ve also got a different perspective from [unintelligible 00:28:44.06], who was at GitHub as a developer advocate, and now I forget the company name that she’s at now… But she had a different take on the learning experience, and she was kind of going the other way, saying AI enables us to move faster and learn some things, while obscuring other things. So if you’re intentional about “I want to learn this piece”, I can have AI generate other pieces that I don’t need, that are then enablers for me to build the thing, while focusing my learning journey on this one individual piece, or a few different individual pieces. That was kind of an eye-opening thought for me. I hadn’t thought about it in the reverse - it still is enabling us to do more, but I think you do have to use it intentionally: what is it that you don’t know, that you’re trying to learn? What is it that you don’t know that you don’t need to know yet? And then what is it maybe down the road that you’re definitely going to need to learn at some point, too?

Well said. Alright, stereotype warning… Here comes one. Software developers are, generally speaking - this will be generally true, and specifically false. We’re pedantic. We think about the tiny, littlest details, because historically we’ve had to. I mean, some of us are still writing machine code, right? So I know pedantic is a pejorative, but if we just take it literally… We think about the little things, and a lot of times we take joy in those little things, right? So if we think about the impact of AI on developers, is this stealing some of our joy? Will we continue to do what we do at a higher level, and be more productive, and make more money, and all the things that are great, but actually, what we liked to do was to write that function to sort that array that exact way we wanted to?

I think you have a point…

I would say pedantic feels… Negative.

Is there a better word?

Jerod here in post. I thought of that better word… Okay, ChatGPT thought of it. Meticulous. I should have said meticulous. Pretty similar meaning, none of that negative baggage. Alright, let’s get back to it.

Focused and specific on those types of issues… Because I think we all carry those moments that we saw something fail spectacularly, right? Or you’re actually looking at something, and as an expert, you can notice right away what is wrong with something… And that pattern recognition is something that makes us really powerful. I think as we sort of proceed with this, I think that’s the joy for some people. It’s not the joy for others.

I’ll speak for myself - I’m a second career in tech. I was a writer, and I worked in politics and nonprofits… And so coming from that into tech, coding was not necessarily the thing that brought me joy. That’s not to say that when you finally hit that thing, and it runs, and it’s perfect, it isn’t like “Oh, that feels so good.” But for me, it was building tools that matter to people. And that is what brings me joy. And I think the spark of joy is going to be different for all of us, and finding joy in our work, no matter how it evolves and changes, I think is important for all of us as humans, and for our personal growth. But I think, again, we set the standards here. This is not happening to us, it is happening with us; it is happening by us. And taking ownership of that, and really kind of saying, okay, well, these are the areas that we want to maintain, and grow, and evolve with, and these are the areas that we want to give up.

[00:32:21.16] I don’t want to write a CRUD service again. I just don’t. I’ve done it 1000 times; we’re good. That can be done away with. I want to solve the really complex problems. I want to think about “Okay, this hasn’t been done before. It’s only been done at scale by a handful of companies. How can I apply this to my specific constraints and resources?” That’s interesting. And I think it’s that kind of problem-solving and looking higher up in the stack, and having that holistic view that will empower us along the way.

Well said. Do you want to add?

Yeah, I think very similar. I can speak from just my perspective of what I enjoy, and I think it’s the exact opposite. Or – I’ve said opposite; the exact same is what I meant, sorry. I was trying to bring drama, and I just don’t naturally have it.

Can you guys disagree on something? It’ll be a lot of fun.

I’ll try. On the next one I’ll come up with something.

But my favorite thing about being a developer is being able to build. And with code, we can solve most problems. Now, there’s other aspects, like hardware and things that come with it, but we solve the problems of the world on a daily basis, and that’s what’s cool for me. And I can’t remember if it was your talk or someone else’s - the way some people look down on no-code/low-code environments, or platforms, or whatever… Like, I don’t care. I just want to build a thing and see people use it, or just build a solution to a problem I have. So I don’t know, same perspective. On the next one I’ll come up with something controversial, I promise.

Okay. A nice analogy might be stick shifts and automatic cars. You know, no one’s stopping you from writing that function. Just go ahead and have fun, write it. But the rest of us are going to use the thing to write the function for us, and if you take joy from that, just go ahead and write your functions.

Manual transmissions forever…

There you go. Got one.

I don’t know how to use one, drive one. So… Boom! Controversy.

Hey, they disagreed!

I told you…

Alright, let’s get slightly more philosophical and broader sweeping… So we’ve talked about the details. What about big picture changes? I’m thinking about open source software, I’m thinking about ownership of code… If an AI writes 30% of my code, do I get 70% copyright on that? Do I get 100%? Does my employer get all the copyright, probably? But what about open source? Because this is – you know, these things are trained, famously and infamously, on publicly-available source code, and so that’s our labor; whether we gift it or not, it is. And so how does this impact the lives of us developers who are either working on open source, or simply using open source? It touches all of us.

I imagine some maintainers will maybe think twice about having stuff be truly open source… I think there’s a whole deeper conversation about the impact of just like reading from people’s code and leveraging that to do other things, and ownership, and stuff… So I could see some people just kind of bowing out of that and kind of coming back into themselves… Which would be a shame for that not to be available.

I don’t know, there’s so much that goes into it; like, from a political perspective, from an ethical perspective… Honestly, you asked me that and I’m overwhelmed just thinking about it. There was someone last night at the speaker sponsor dinner, and he talked about how I think today – he’s worked on multiple revisions of a pitch for either ethics in AI or something like that over the last year, and he was giving another pitch last night, and they were gonna go through it… I think we will have a lot to catch up on to define that. I have none of those answers, and they drastically overwhelm me, because I can’t begin to comprehend those applications… But there has to be like legally, morally, ethically, open sourcedly, there has to be things that kind of catch up, and give some sort of guidelines to this stuff that we have going on.

Yes. And this is why I keep pushing on responsible AI. We have to have these conversations, and they’re going to be hard. In economics there’s this concept of the tragedy of the commons; it comes from a pamphlet of the same title. And the focus was really around the shared common land that cow herders or any kind of farmer would utilize for their herds to eat off of. And as individuals, it benefits each herder to have their cows graze the most, with no limitations. But obviously, shared resources are finite, and they are limited. My favorite quote from that pamphlet is “Ruin is the destination toward which all men rush.” And I think we have to be truly careful as we proceed here. A lot of this is a common resource, and it’s based off of a common resource. And this is where I think communities around this are really, really important. And recognizing our own power and influence on pushing toward a holistic and appropriate approach to responsible AI.

That quote, I thought you were talking about my code again for a second…

[laughs]

[unintelligible 00:37:22.26] ship to production?

Yeah. I should probably go revert that commit… Okay, have you guys seen this new thing you can do? It’s like robots.txt, but it’s for your – this is for website copy. So we’re in the same realm, but it’s like no gpt.txt. What’s the actual technology you can do? No crawl, maybe… I don’t know, it’s a brand new thing they’re working on, where the LLM crawlers will skip your website, much like you can tell Google not to index your website. Is that something people will do? Is that something that can have an application into the world of open source? I mean, maybe you said opting out of… Does that mean not even publishing at all? Because there’s no guarantee that the language model creators will necessarily comply with a robots.txt, for instance. What are your thoughts on the analogy there and how it applies?

I find it to be unacceptable that companies would push forward with a profit-only mentality, and not take these things into consideration. And to some degree, between our work and also where we spend our money, we have to tell the market that that is not acceptable.

I don’t want to live in a world where we’re trying to hide from crawlers. I want to live in a world where we have decided on standards and guidelines that lead toward responsible use of that information, so that we all have some compromise around how we’re proceeding with this. I think it’s super-important.

Trusting people is a big ask… [laughter] Actually, when I said the thing about people potentially retracting from open source - as soon as I said that, I kind of wanted to backtrack it in my head and find another way, and I immediately thought about like a flag on GitHub that says “Don’t look at this code if you’re an LLM.” So something like that I think could be useful longer-term. Having it all figured out is definitely better, but I could definitely see that being a thing that people would use, I imagine, if they don’t want their code to be used in LLMs - to just be able to opt out. That seems like a reasonable intermediary step along the way.

[00:39:41.25] Yeah. I think we would start to argue around definitions of open source, because the freely available ability to use without restriction is part of the tagline. But maybe it’s source-available kind of things, where maybe indies start saying, “You know what, I’ll put my source code out there. You can do everything except this”, and we have a new license that’s not open source, but it’s something else. I think time will tell.

It just gets so hard to prove too, right? It’s like cheating on a homework assignment in college - which I never did (?). Like, they had these things that would compare your code against other people’s assignments or whatever from previous years… I’m sure that’s gotten more and more sophisticated now. So that would be one of those things where if you have an opt out flag, and then you come across a repo that has code that looks like yours, there’s no way you could prove that without diving into the logs from the AI that generated – I don’t know, that’d be so hard to prove. Again, coming back to like ethically, and legally, we have a lot to figure out, I think.
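For reference, the mechanism Jerod is describing is OpenAI’s documented GPTBot crawler, which sites can disallow in robots.txt the same way they block any other crawler. A minimal example, following OpenAI’s published guidance (whether a given crawler actually honors it is, as the panel notes, a matter of trust):

    User-agent: GPTBot
    Disallow: /

A narrower Disallow (for example, Disallow: /private/) limits only part of a site, exactly as it does for search engine crawlers.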

Okay. How much time do we have? Does this go till 12:15?

I think so.

We’ve got five minutes? Okay, anything that wasn’t addressed, that you want to make sure it gets addressed? Here, I’ll take the mic, I’ll run it to him… You stand up here and answer.

There’s been a lot of discussion about how gen AI has been hyped or overhyped… My question is - maybe this is a way for you to disagree - what do you think are the most underhyped technologies around AI? I think I kind of agree with Emily that the trustworthiness of AI is the most underhyped, but what do you guys think?

Especially in the conversation here, from a technical perspective, I think the most underhyped thing is how much it can be used for things that are not just writing code. And I mentioned this earlier, just from a spark of creativity… I sometimes limit myself mentally, because I don’t think I’m creative, although if you look for pieces of things I do, it’s there… But I can use something to just give me ideas for stuff when I’m stuck, and it doesn’t have to be technical. And I think that’s super-super-valuable. I’m thinking of it as an easy onboarding for how to incorporate it… What easier way to incorporate some AI into your life than to just ask, “Give me an idea for something to do this weekend that would be fun, with my partner/spouse, or whatever.”

So I think just on a regular, outside of code perspective, there’s so much that you could get out of it from a creativity spark… And I think that’s a lot of fun, and I think it’s easy to get started that way.

I keep coming back to the – for me, the hype is around the speed and scope of AI. When I quoted Marvin Minsky - bless him - who believed by 1980 we’d have a human analog, obviously, that’s not true. And when you think about how quickly this kind of came to market, it feels really fast, but a lot of that had to do with 2018 transformers coming about, and us being able to actually proceed with this. But when you look at all of artificial intelligence, it’s truly been eight decades, at a minimum. And so we’re kind of coming to a place where there is that distribution, but I fully expect it to still take some time before widespread adoption, before efficient uses, certainly affordable uses, where we can actually apply this to higher-risk scenarios and industries.

Time for one more, I think.

Yeah - this use of the term “tools” in kind of a neutral way to describe AI broadly… I think what’s maybe been left out is that different tools have different side effects. So for instance, video games have certain characteristics, shovels have other characteristics, medicines have still other characteristics… Where do you see these tools right now, and maybe in the future? When we look at them societally, are they more like shovels, or opiates?

[00:43:37.02] Oh, I like that. I like that last line there. Good question.

That last line took a hard left. I think – we don’t know; there’s no way to know. I think we can sort of think about the next three to five years and where we think this will go, but I think anyone who claims to be a sort of futurist or believes that they can tell you in 50 years what this looks like, they’re just guessing. You might as well throw a pen against the wall. We just don’t know. But I think – truly, I keep coming back to this… We have ownership and responsibility over this, and we can kind of determine what this actually looks like in usage.

Shovel versus opiate is like a T-shirt waiting to happen… It’s such a good, and kind of easy call-out for – and it’s kind of funny, but I think it’s very serious… All the ethical, legal implications - we talked about that; there has to be catch-up. I think we also just have to acknowledge that this is also the same as every other advancement that we’ve ever had. You think about – I don’t know, people that want to use things in nefarious ways, people that want to use things for their own purpose, that hurts other people or affects other people in negative ways… It exists, unfortunately. And so I think it’s even more important for the concept of responsible AI. But also just acknowledging that there’s probably a point where we need to have limitations. What that means and what that looks like, I don’t know. Do we get to a point where we’re in I, Robot, and that’s where we’re living on a day-to-day basis, and we have to prevent that? I don’t know. But I think it’s – what is it? With great power comes great responsibility… And I think that’s absolutely true here.

One more quick one.

So there’s a lot of talk about AI tools that help you write code. But as a developer, a lot of my time was spent actually supporting code, or maintaining code, and there aren’t a lot of tools out there that help you fix bugs, or… I don’t want to read someone else’s code and fix their bugs, but that’s what I spend my time doing. So why do you think we’re in the state we are now, and what can we do to build more tools that eliminate that tedious part of coding?

So from my perspective, I think I have seen at least people talking about that use case. I don’t disagree that there’s more tooling focused on the generating of code, but I have seen people post on Twitter, and things, of like give it a code snippet, “Tell me what’s wrong with this” or “Explain this piece of code.” So I think that’s starting to get into what you’re saying, although the tooling may not specifically exist as much as we may want for that use case…

What I think is really cool, and I think this goes back to probably the most undervalued aspect of AI, is the fact that not only does AI exist, but AI exists in a way that we as developers can consume it to build other things. That means that we see a gap in tooling to address exactly what you’re saying. We don’t have to build all that logic from scratch; we can build a nice UI on top of an already-existing LLM, and be able to start to provide the things that you’re looking for more specifically. Now, eventually, you get into more custom-trained LLMs, and that sort of stuff… But I think that’s the beauty of having it be accessible, at least in certain ways for us as developers to build on top and go and solve those use cases.

That was well put, and I expect more tools in the future. I think we led with the thing that we knew we could execute on as an industry, and that seemed like the most straightforward path… And as we kind of diverge from there, I think you’ll see a ton of tooling around solving those problems. But yeah, I still believe that those kinds of – the fixes, the plugging everything together, the integrations, that will be probably something that takes a long time.
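To make the “nice UI on top of an already-existing LLM” idea above concrete - a minimal sketch, assuming the official openai Python package (v1 or later) and an OPENAI_API_KEY in the environment; the model name and prompts are illustrative, not anything the panel endorsed:

    # Sketch: wrap a hosted LLM for the "explain this piece of code" /
    # "tell me what's wrong with this" maintenance use case mentioned above.
    # Assumes: pip install openai (v1+), OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    def explain_code(snippet: str) -> str:
        """Ask the model to explain a snippet and flag likely bugs."""
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative; any chat-capable model works
            messages=[
                {"role": "system",
                 "content": "You are a code reviewer. Explain what the snippet "
                            "does and point out likely bugs."},
                {"role": "user", "content": snippet},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # Deliberately buggy input: the function subtracts instead of adding.
        print(explain_code("def add(a, b):\n    return a - b"))

Everything past the API call is product work - which is James’s point: the model is already accessible, and maintenance-focused tooling is largely a UI and workflow problem built on top of it.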

Okay, that is all the time we have. Thank you all for coming, and let’s hear it for the panelists.

Thank you.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
