Practical AI – Episode #187

AI IRL & Mozilla's Internet Health Report

with Solana Larsen & Bridget Todd


Every year Mozilla releases an Internet Health Report that combines research and stories exploring what it means for the internet to be healthy. This year’s report is focused on AI. In this episode, Solana and Bridget from Mozilla join us to discuss the power dynamics of AI and the current state of AI worldwide. They highlight concerning trends in the application of this transformational technology along with positive signs of change.


Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another episode of Practical AI. This is Daniel Whitenack. I’m a data scientist with SIL International, and I’m not joined today by Chris, who is currently in a plane somewhere, taking his daughter to Disney World, I think, to have a wonderful time… So we’ll give him the week off. But in lieu of Chris, we have some amazing guests with us today to talk through some of what Mozilla is putting out with their IRL Podcast and their latest Internet Health Report. We have with us Solana Larsen, who is the editor of the Internet Health Report, and Bridget Todd, who is host of Mozilla’s IRL Podcast. Welcome! Great to have you both.

Thanks for having us.

Yeah, so excited to be here.

I was so excited that we got to do this. Of course, you’re putting out amazing content through the Internet Health Report and the IRL Podcast, which this time around is focused on AI, and I’m sure we’ll get into that. But maybe before we do, Solana, would you mind just sort of introducing, for those that aren’t familiar with it, what is the Internet Health Report, how did it come about, and maybe just a little bit of context there?

Sure. Well, it’s an annual report, and it’s published by Mozilla. We started five editions ago, asking the big question, “What does it even mean for the internet to be healthy? And what happens when we think about it as an ecosystem that can be either healthy or unhealthy, or bits of both at the same time?” And then the important question, of course, is how do we make it healthier?

So when we’re talking about healthy in this case, we’re thinking a lot about how it acts as an ecosystem for humans, for humanity. Is it a benefit to the world? Is it something that is good for people? And so when we think about the things that are unhealthy, it’s everything from disinformation, or hate speech, but it can also be things like how many people are connected to the internet, how many women are online, whether people are able to build and code and compete. What is this ecosystem that we’re building?

[03:49] So every year, we would step back and look across a lot of different topics, everything from undersea cables, to codes of conduct in open source communities, and so forth. And I think over the years, a lot has changed in how we talk about the internet and how we understand the internet, both in the media and in technical circles, how we think about regulation… And so in terms of moving with the times a little bit, I think right now is the moment to talk about AI. And so it’s the first year that we have taken just one big topic as the focus area for the Internet Health Report and gone deep on just that.

And with AI, it’s really all of the things that hurt or harm the health of the internet the most. We see those magnified or amplified with AI, in a lot of ways, but there’s also a lot of opportunity, right? And there’s a lot of things that are in flux, and things that are adaptable, I think, to what we do right now. So this is an exciting moment and an important moment to be talking about it.

That’s so well put, and I appreciate Mozilla digging into this subject, and covering a lot of really important aspects of it. I’ve listened to the first episode of the podcast, the IRL Podcast that’s coming out with a lot of these stories. Bridget, you’ve been hosting that… From your perspective, as you were talking with Solana and maybe the team around this and thinking about “Why is now the time to talk about AI?”, what were some of your initial thoughts or your perceptions about AI kind of at the outset of this project?

Yeah, I mean, pretty much plus-plus to everything that Solana said. But I think for me, not really having a hard tech background - I’m not an engineer, I’m not an AI expert - I’m somebody who cares about technology and sees the ways that it impacts all of us, even if you don’t think of yourself as a techie. And so I deeply appreciate the way that Solana, with the Internet Health Report, and the entire team at Mozilla have really made these conversations accessible.

I’m sure a lot of folks out there - probably not the people who listen to this podcast - might say, “This has nothing to do with me. AI - what does that have to do with me?” And I think this podcast and the Internet Health Report really push back on that notion, because from the way that our medical issues are diagnosed, to the way that we vote in elections all over the globe, AI impacts all of us. And so it is imperative that we all understand the way it impacts us, the potential for harm, but also the potential for good things, too. And so not just focusing on the harmful impacts, but asking, “Well, what can be better? And why does it have to be like this? How can we have space for dreaming and hoping that things can be better than they are?” So I think the thing that really draws me to these stories is the way that they are made so accessible, where everybody can understand, “Hey, this really impacts all of us.”

It’s interesting you brought up the concept of a lot of people maybe not thinking so much about the impact of AI on their lives… Do you think there’s a side of that that has been sort of exacerbated by the kind of futuristic Terminator-scenario hype - that when people say AI, what comes to mind as the harmful thing that could happen is maybe that, not so much automating weapon systems, or things that could happen in the healthcare system, or other things? From your side, and the people that you’ve talked to, or maybe just in your day-to-day life, do you find that to be part of the issue? Or how can we think about the general population’s perception of AI, and maybe how that needs to shift a little bit?

[07:49] I think that there’s so many ways, so many directions you could take an answer to that question… But I definitely think there is this kind of exclusionary, “it’s magic”, dark-arts mystique around AI that really serves the people who use it to exert power in different ways. And so when we’re asking for AI to be more transparent, and more understandable, it’s partly about demystifying to an extent that we can actually get to the heart of “How do these systems work? How does it affect me? What can I do?”

And so yeah, I think that kind of obscurity, and also elevating it to this higher art form that normal people can’t understand - I think that’s part of where the power lies. You have that with other forms of knowledge and power systems as well. So a big part of what we’re doing is bringing it down to a level where we’re explaining how it works, how things can work differently… And oftentimes - which is what we have in the podcast - it’s people explaining how they were harmed by a system, and then deciding to design it in a different way, to do something different. And sometimes it’s through those stories of people just building something, even if it’s on a smaller scale, that you get this kind of realization, “Oh yeah, I was just completely taking for granted that we have to collect data in this way, or that we have to ignore privacy in this way.” It just opens your horizons for the hoping and the dreaming.

And I think even when we’re talking about data futures, or AI in the future, a better future, what we don’t want to get into is “Oh, robots…” It’s not like SciFi future really, it’s more like you and me real-life kind of future. Because even though we’re talking about very advanced technology in some cases, it affects people who aren’t even on the internet. It affects you when you’re walking down the street. There are all kinds of ways that aren’t very high-tech, and just very basic daily life where you encounter these technologies. And so yeah, it’s bringing it down to the ground level where we can talk about it, and where we can also approach it with grassroots communities when it’s appropriate.

Yeah. And just to add on to that, when we were first working on the podcast scripts, something that Solana said that will really always stick with me - when we were talking about “Oh, how do you phrase certain things as it pertains to technology or AI?”, she said, “Oh, we don’t like to say AI does this, or the technology does that”, because that’s not actually true. It’s people who are programming AI to do this, or people who are programming technology to do this. And that really blew my mind - the way that I had sort of believed this idea that “Oh, the technology is going to do what the technology is going to do; it’s this mystical robot that I have no insight into”, and how that really obscured the humans with power to make decisions about AI. It really obscured their role in a way that I think really benefited them. And so I just really agree with all of the points that Solana just made.

Yeah, I think that one of the themes, even in the first episode that I listened to, which was so wonderful, and I encourage everyone to maybe finish this if they’re listening to this now, and then immediately go over and listen to the IRL Podcast… But yeah, one of the themes that I think was starting to come out for me was the connection of this technology to the data side as well. You were both talking about there’s a human element behind this; part of it is what humans decide to do with the technology, but another side of it is that this technology is inherently behaving in a certain way because of the data that humans have generated, and the data that they’ve chosen to put into the training of these algorithms. And this data isn’t sort of just created in a vacuum. There’s a human element behind the data side.

[12:17] As you were kind of looking at the stories that were coming in, and what you were curating for the podcast and the report, how much of it was around the applications of AI, versus the data side as well? Because I know that that’s a huge part of what can go wrong with these sorts of systems.

In the second episode we were talking about the gig economy, and workers tracking their own data, and taking ownership of their own data in order to reverse-engineer the algorithms of the gig platforms to figure out if they’re getting a fair deal, or if they’re even getting what they’re being told they’re getting, which is difficult to assess. So tools like that, where you’re thinking about – well, changing the perspective on “Who does data belong to?” The data that’s generated by you, or by your community, or by you as laborers - who should that belong to? And who should have power and control over it? Those are questions that are being asked, not just in the sphere of technology, but also among regulators.

We also have an episode where we look at geospatial data, and who has access to geospatial data and to labeling that data in order to interpret what you see. And the stories that you can tell about a place, or about the people who live in a place, can change a lot depending on what your motives are. And so we have a story from one of the research fellows at the DAIR Institute, who is looking at the spatial legacy of apartheid in South Africa - looking at how townships have changed over time, and how you measure that with geospatial data, and what kind of datasets you can create to actually document the differences in the landscape that are not being tracked or talked about by the government.

So much of what we talk about on this podcast has to do with the data side of things - whether it’s the practicalities, because often that’s where practically building these systems gets difficult, or it’s the element of ownership, like you’re talking about, and these other elements… I’m thinking of language communities in particular; that is the area that I work in. This idea that in certain cases big tech, or whoever it is, is really kind of mining these communities for the language data that they have to offer, and the systems that are built out of that really don’t provide a benefit back down to those communities where their data has been leveraged. So yeah, I really appreciate that perspective.

Before we get into a couple of the individual stories, Bridget, as you were kind of curating these stories for the podcast, how did you decide what themes to kind of focus on? I’m sure there were so many stories related to AI that were interesting in one respect, and not another… What went into the kind of curation process and deciding kind of what to focus on and what stories to feature on the podcast?

[16:02] Oh, well I wish I could say it was all me. But it definitely was a team effort with Solana and the rest of the folks at Mozilla, an amazing team of writers and researchers who put this together. I would say - and Solana, I would love your thoughts as well… I would say the stories that resonated the most are the ones that really have that human element at the center. The stories where you hear “Oh, I was an engineer at Google, and I experienced this, and this is how it felt for me to experience that. This is what that felt like to be going through that experience.”

I think the stories aren’t just about the tech and the people who make it and the policy folks that shape it, but really about what brought them there and how they wound up there, and the emotional experience of being in those situations. Having that as the focal point I think is really at the center of what makes the podcast tick. And again, I wish I could take all the credit, but I really cannot.

Yeah, and it’s hard… I mean, how do you pick stories? I think we wanted to get into different corners of the issue. With the Internet Health Report we have this collection of data, visuals, a compilation of research that really asks the question, “Who has power over AI?” And so you scroll through that, and you have some different perspectives that show “What do we even mean when we talk about power in that context?” and what are some of the facts around how the technology is distributed, controlled, who dominates in that space, who’s making more money in that space. And then the podcast kind of answers the question, “What can be done?”

So it’s looking at some of those areas… What areas of big tech dominance of AI could we put a question mark to? What areas of surveillance could we put a question mark to? You know, AI-powered surveillance? And then thinking about where are the opportunities. A lot of people are talking about the opportunities of AI in healthcare, the opportunities of AI for addressing poverty, or whatever. But I think it’s also important to figure out “Well, how are you actually critical? How do you assess whether AI is trustworthy?” Because a lot of people will tell you, “This is really good, this is good for you, or this is good for Africa, this is good for women”, or whatever, but it’s not always true. Sometimes you need more people to help assess whether that is really the case. And so it’s a discourse, it’s a conversation, and it needs to be many-sided in order for us to really get smarter about how do we build AI that’s better.

Yeah. And I would also just love to add - something that you just said really jumped out at me, this idea of being critical. I love technology; technology gave me my wings when I was a young person, but part of that love is also criticism. Part of that love is challenging it to be better, and challenging the people who have power and the people that make it… And I don’t know, I want to get to a place where being a tech critic, or a skeptic even, is seen as a form of love of technology, because you want it to be better. You want to be able to ask the question, “Why can’t this be better?”

That’s exactly it. And I think when we’re talking about opening up that conversation to others, so that it’s not just tech people who are talking among themselves about this, it’s – for instance, the thing I’ve been repeating a lot in the past couple days is, you know, I might not be a data scientist or an engineer, but if your AI system is harming me, then I know something about it that you maybe don’t. And so there has to be a way for me to be able to engage with you in some kind of – maybe it’s not a conversation where I call you on the phone, but maybe there’s a way that I interact with your system, or I’m able to get through to a helpline, or something.

Whether these systems work on a small scale or on a mass scale, we need to make them adaptable to the input of people who have knowledge to contribute to them.

[20:03] Yeah, I’m so happy that all these things are getting brought up. And I think certain things - like, you brought up the idea of this sort of power imbalance, which you highlight both in the podcast and in the report, and the facts around that… I think that that term gets thrown out in relation to AI a lot, but people might not have certain ways to think about “What does that mean? What does the power imbalance mean, and what are the end user implications of that?” And I think in the facts that you’re showing on the website, but also in the stories – so one of the ones in the episode that I listened to was from Shmyla Khan, I think was her name… Describing what happens when Western entities sort of create this technology for certain purposes that have implications for someone all the way over on the other side of the world. I’m not sure if you could maybe kind of bring out some of those things - how does the power imbalance play out in that sort of situation?

So here’s a line from her that I feel like really gets that. She says, “The relationship that many of us have with technology is one-sided, especially in the Global South, where a lot of this tech, the apps that we use, the devices that we use, have been built in other places and other contexts by people who have not really sort of imagined us as the end users. And that is a really important issue, because that tech is not built for you, with you in mind, or your needs in mind. That is a sign that you’re excluded from those conversations.” And I think that something about that line really gets it for me - the people who are designing the technology that you’re going to be using have not even really thought of you, not just as an end user, the way that she describes it, but as a person; they have not thought about how this might fit into your life, what kind of life you are living, what your life looks like. And so there are all the different ways that that can be used against people. And it really goes back to what you were saying before about data, how so much of technology, the way that it’s used, is so extractive, and how – I don’t know, it’s just such a limited perspective that folks will design technology that just takes and takes and takes from us, but doesn’t really give us a lot back, or even really see us as people or humans.

Yeah. And the added context, which we don’t really get into in this podcast episode, is that Shmyla Khan leads research for the Digital Rights Foundation in Pakistan, and that’s an influential digital rights group there. And so when things do go wrong with some of the big platforms or with content moderation and such, they’re the group that gets invited in to give advice to the big platforms. And so what she’s saying is, “We get asked to come in and fix it after it’s broken, after it’s caused harm, instead of the systems being designed from the beginning in a way that they’re not intended to cause harm.”

And in terms of how big platforms - do they care about the people who are using their systems, the vast majority of people who use their systems who are not in the United States, for instance, and then how do they collaborate with local groups around the world, research groups, groups that also use AI to track disinformation, track hate speech, that kind of thing - how does that work? How do these systems – how can we make them work better?

I’m just thinking of our listeners, maybe there’s certain people out there that are thinking “Well, the technology that we’re building in our team, we have an expectation for how it’s going to be used, and we don’t see that as harmful. And how can we possibly know all the different ways that people could use our technology?” What would you encourage them with in terms of their own thinking, maybe, about how this sort of technology could be used, versus how they’re envisioning it being used, which might be two different things?

[24:20] Well, one thing that I have to say right off the bat - I think this is definitely a question for Solana, but I would encourage people to listen to the episode that we put out last week, about the tech we won’t build. Laura Nolan’s story of the work that she refused to do at Google I think is such an interesting one, because she talks about how she didn’t really know what she was building, and the way that the team she was on at Google working on Project Maven was designed, you really couldn’t be sure what you were building, what you would be working on, or what it would go on to be used for. But once she started poking around and asking the right questions, and talking to the people in other departments, she did that work of finding out “Oh God, I’m building something that can be used for horrible purposes, and that is not what I set out to do, and that is not what I want to be doing.”

Her story is one that really resonates with me, because I think it provides a really interesting blueprint for how folks can do a little bit of that investigation into the potential for harm - what the technology that they’re working to build could be used for.

I think another thing that could speak to your question, Daniel, would be this idea that you can design one-size-fits-all technology solutions for everything. I think that’s a tricky one sometimes, where there’s this default imagined user, which often ends up looking a lot like the developer themselves. One example we have is the databases of images that are used for dermatology, and for systems that are used to diagnose skin diseases, or skin cancer, where the datasets are almost entirely of people with white skin, and then don’t work for people who don’t have white skin. So what is the solution? Why does it have to be just that one big dataset that is going to lead to misdiagnosis for countless numbers of people? Why don’t we make other systems? Why don’t we have community-based systems? Why don’t we have indigenous communities, or language communities, people who are building their own tools and technologies and datasets that actually work for them, and then not have to deal with the arrogance of somebody telling you “No, this works for you. This really works for you”, even though you can prove that it doesn’t, and the wrong people get misidentified and sent to jail…

We have so many harms at this point across the use of these technologies, whether it’s biometrics, or facial recognition technology, or… It gets endless; you can pick any topic almost and you can find some kind of harm. So if we’re going to diminish that, if we’re going to make systems that are more trustworthy, we need to learn from these experiences, because there are so many. Now, there’s no reason for it to be that way. Like, let’s just make it better.

I know that one of the things that you draw out in some of the information that you put up online, and I’m sure it will come out in the podcast - one element is why do things have to be this way, or what tech should we maybe not be building, even if it is possible? The other side of this is accountability, I think, that you draw out in terms of “Well, there are a lot of people who stand to gain from AI.” There’s a lot of applications that are already permeating our lives. I think even on the last episode with Chris, we were talking about just how quickly AI and applications of AI are spreading, at a rate that’s much faster than regulation is happening. So who is really accountable here, and to whom? I guess that’s connected to the power element of it as well. What did you learn in terms of this side, accountability in AI, as you put together the material for the report?

I think here there’s accountability in a lot of different areas as well, because you have accountability from businesses, or governments… Big tech accountability is the one that I think is probably most familiar to people, where we ask for more information about the content that’s harmful, and how it’s moderated, and how recommendation systems work on social media platforms, that type of thing.

But you also have – one example that stood out for me is with the gig work. In the episode we did on gig work there’s one woman who is a delivery worker and the head of an association of delivery workers in Ecuador. And she’s on the streets of Quito, and when she’s interviewed about how these systems work and what concerns her about them, one of the things she said is that the government is so scared to fall behind with AI that they’re willing to go with anything. They’re so happy to have all these gig platform companies coming from all different kinds of countries, with almost no demands made of them for fairness or for the human rights of workers, because they want the system to be there and to thrive and to be part of this story, this happy story of AI success. And she’s saying, “But in this eagerness to have a seat at the table, the governments are willing to overlook all kinds of things that they wouldn’t necessarily overlook in other areas of labor.”

[30:17] And so you have a lot of that, I think, where you have this obfuscation through the technology that somehow makes people look the other way, whether it’s the consumers, or the workers, or the governments, or the tech workers themselves. And that’s part of what we need to – when we’re talking about transparency, it’s not just “Can you make a privacy policy that’s really clear?”, it’s all these other things as well, like being really clear and honest about what the limitations of a system are, and who it’s actually working for - all that stuff. So it’s a difficult question to answer, because AI is everywhere.

Yeah. And, I mean, let’s be real - don’t you both kind of feel like as consumers or users of technology there is a little bit of looking the other way and not asking too many questions involved? Like, every single time I order something from Amazon, or Uber Eats, I know that if I think about it hard enough, I’m like, “Well, I really shouldn’t be doing this. It really isn’t good.” And I don’t know, I just wonder – I think if we’re being honest, a lot of us have probably experienced that in one way or the other. And I think that the work of the Internet Health Report that Solana is doing really asks us to confront that a little more clearly, and maybe not look the other way, and maybe not just be like “Oh, let’s order it. It’s convenient. I won’t think too much about it”, or whatever… But to really grapple with that and the implications a little bit.

Yeah, but at the same time, it shouldn’t be on the end user to navigate all those things as well. That’s also why we’re talking so much about the systems and the regulation and everything, because it can’t just be on one party or another to figure this out. It has to be joint solutions, joint responsibility to make sure that things improve.

Yeah, I think that there’s all sorts of things at play here, on the user side and on the backend system side. There’s certainly things within a system, if you think about even something as simple as how data is transmitted, right? If I’m processing speech on a device, I can choose to transmit that speech up into the cloud, and it’s stored somewhere, and then things happen with it, maybe that I intend and maybe that I don’t intend… Or I can choose to process that speech on the device, and maybe only send very anonymized metadata back up to some type of centralized system. So there’s very real implications for how you design a system. And then also, I think there’s really amazing work that can be done even by people who aren’t technical, but realize - like you were talking about earlier, I forget which one of you - how users can tell developers that a system is failing them or harming them in ways that the developers never even envisioned.
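To make that design difference concrete, here is a minimal, hypothetical sketch of the second option Daniel describes - transcribing speech locally and sending only coarse, anonymized metadata upstream instead of the raw audio. Every name in it (the Utterance type, transcribe_on_device, the metadata fields) is an illustrative assumption, not any real device API.

```python
# Sketch: on-device speech handling that reports only anonymized metadata.
# All names here are illustrative assumptions, not any real library's API.

import json
import uuid
from dataclasses import dataclass


@dataclass
class Utterance:
    """Raw audio captured on the device; in this design it never leaves the device."""
    audio_bytes: bytes   # assume 16 kHz, 16-bit mono PCM
    language: str


def transcribe_on_device(utterance: Utterance) -> str:
    # Placeholder for a local speech-to-text model running entirely on the device.
    return "turn on the kitchen lights"


def anonymized_metadata(utterance: Utterance, transcript: str) -> dict:
    # Send only what a backend might need: no audio, no transcript, no user ID.
    return {
        "event_id": uuid.uuid4().hex,                      # random, not derived from the user
        "language": utterance.language,
        "duration_ms": len(utterance.audio_bytes) // 32,   # 32 bytes per ms at 16 kHz / 16-bit
        "intent": "home.lights.on",                        # derived locally from the transcript
    }


def handle_locally_then_report(utterance: Utterance) -> str:
    transcript = transcribe_on_device(utterance)  # the audio stays on the device
    payload = json.dumps(anonymized_metadata(utterance, transcript))
    # In a real system, only this small payload would be sent to a central service.
    return payload


if __name__ == "__main__":
    u = Utterance(audio_bytes=b"\x00" * 32000, language="en")  # ~1 second of silence
    print(handle_locally_then_report(u))
```

The alternative Daniel mentions first - uploading the raw speech to the cloud - would replace transcribe_on_device with a network call carrying audio_bytes, which is exactly the design choice with very different privacy implications.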

So I think - yeah, there’s a lot of things at play here, for sure, that come into it on both sides of this. And I guess the third side that might be kind of interesting to discuss here is the research side. So research - I think people have always thought, “Well, we should just research things and figure out what’s possible. It doesn’t matter what it is, we just need to figure out what’s possible, and what we can do”, right?

[33:42] And something that’s always been fascinating to me is just the extremely short cycle in AI between research and the application in real systems. It’s like a paper is published - maybe even before a conference happens, it’s published on arXiv - and there’s like seven different GitHub repos that implement the thing… It’s all out there already, and people can just grab it and go with whatever was literally just researched and peer-reviewed… Which is so strange to me, how quickly that can happen. And I know that you highlight certain elements of that also in the report and in the podcast. How often, from both of your perspectives, as you were working on this, did the research side of things flavor the conversation - the fact that research was actually impacting users on maybe a shorter cycle than people were thinking?

I actually never thought of this cycle in particular, but it has been a real eye-opener how influential research and journals are on technology. Like, even without thinking of the speed, it was very confusing to me that the academic publishing cycle could be so influential to the business sector in a way that – I don’t know, is there any other sector where that happens in that way, and (I guess) on that scale?

Certainly not other sciences, like physics or chemistry. It’s much longer, from my perspective.

Yeah, it’s a different sort of thing, which is also why we chose to highlight and visualize results of research about the research papers themselves - which wasn’t where I thought we would end up at the beginning, but it made a lot of sense when we were talking about “How do you get to the core of where decisions are made around AI?” If the proofs of concept are being driven and funded - and really the tone for what is developed is being set - by incentives coming from big tech, then you get a certain kind of research, which is different from what you would get if it was coming from a different angle, or maybe not from an elite university in the United States, but from somewhere else, or in a different language altogether.

Yeah, so we look a lot at what kinds of datasets are being used for benchmarking in AI research… And again, this isn’t original research by us; this is compilations of research that we’ve put together. Sometimes we’ve found research and then visualized it, made it more beautiful, made it more accessible, so that more people can enjoy it and understand some of the lessons from it.

Yeah, I would encourage people, our listeners to check out – if you go to the Internet Health Report site, which will be linked in our show notes, there’s a Facts page where some of these things are visualized, and I’m sure more content will be coming too, but there’s some really interesting perspectives on both the sort of power imbalance on various different scales, whether that be by sort of frequency of dataset usage, or investments in AI in different parts of the world… All very interesting, sort of different angles at this, which tell a certain aspect of the story.

Maybe as we get closer to the end here, I’ll ask both of you to respond… For the practitioner out there who’s listening to this podcast, and might be thinking, “Oh, I wasn’t really thinking as much about maybe having to say no to developing certain technologies”, or they’re thinking, “Oh, there really is a lot to dig in here in terms of thinking more about the data that I’m using, thinking more about kind of downstream uses of the technology that I’m building” - how would you encourage them based on the stories that you have told and are telling through the report and the podcast? How would you encourage them to really be a positive force in this AI field and kind of help shape the future of what AI is becoming? Any thoughts? Either one of you could start.

[38:21] Well, I’ll start. I just love the question, and I think in making the podcast, one of the things that really struck me is what a great resource we have in folks who work in tech, whether you’re an engineer, or honestly, whatever you’re doing in tech… Like, the stories of people who have pushed back and challenged power from within tech companies have been so impactful and inspiring to me… And so I would just say, if you’re a [38:47] tech employee, you have so much power and so much agency, and there are so many folks who are using that power and wielding it in such interesting and inspiring ways. So yeah, I would say really recognizing and owning and walking in that power.

These are fields where people are constantly learning and constantly pushing themselves, and so it can also be – maybe it’s time to learn from a different source, learn different things, ask different questions, be curious in other directions, particularly when you’re thinking about the potential social harms or risks of these technologies. Like, listen to the people - there’s brilliant, magnificent research about how things can harm, but also how things can be done better. And so it does require an open mind to say, “Okay, well maybe the way that I’ve been taught to do this, or the way that I’ve been doing this for 10 years, maybe that isn’t the only way. Maybe there could be a different way.” But that does require some real empathy and willingness to listen and to engage with others.

A lot of the people that we highlight in the show are, I think, what we would consider heroes, even if they have small projects, big projects. I would say if you’re inspired or moved by what they’re doing, reach out to them, support them, back them, share their work with others, elevate it. Because it’s not that these things aren’t happening. It’s not that you don’t have great datasets that are created in other parts of the world. It just takes somebody to vouch for them and help elevate and help create more diversity in the types of ideas that we consider a part of this greater discourse about AI.

Yeah, that’s so encouraging. And I’m just, I’m so thrilled by the content that you’re putting out, and just the thought of like a person working in tech, listening to these stories, and whether it’s at lunch one day, or in a meeting, like bringing up this story and saying, “Hey, I heard about this… What do you think? Have you thought about this before? What are we thinking in this area? Or have you ever considered this?” That’s just so encouraging to me, to think that those conversations will be happening.

I really appreciate both of your amazingly hard work on this, and just the content that you’re putting out. For our listeners, we’ll link everything that we talked about in our show notes, so please, don’t wait. After you listen to this episode, just go over and start streaming the IRL Podcast and catch up on that. I know I’ll be watching as the episodes come out. So thank you both, I really appreciate you taking time to join us.

Thanks a million from us as well. Thank you. You just described our dream. [laughter]

Thanks so much.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
