Practical AI – Episode #245

AI trailblazers putting people first

with Solana Larsen, editor of Mozilla's IRL podcast


According to Solana Larsen: “Too often, it feels like we have lost control of the internet to the interests of Big Tech, Big Data — and now Big AI.” The latest season of Mozilla’s IRL podcast (edited by Solana) features stories that highlight the trailblazers who are reclaiming power over AI to put people first. We discuss some of those stories along with the issues they surface.


Sponsors

Traceroute Podcast – Listen and follow Season 3 of Traceroute starting November 2 on Apple, Spotify, or wherever you get your podcasts!

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Notes & Links


Chapters

1 00:08 Welcome to Practical AI
2 00:35 Sponsor: Traceroute Podcast
3 02:23 Mozilla & AI
4 07:13 Curating season 2
5 11:18 People over profit
6 17:50 AI doomerism
7 23:55 Regulation's effect
8 26:47 Ghost workers?
9 30:15 Lend me your voice
10 34:34 Mass experimentation
11 38:23 We're all crash test dummies
12 42:15 An encouraging future
13 46:57 Outro

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another episode of Practical AI. This is Daniel Whitenack. I’m the founder at Prediction Guard, and I’m joined as always by my co-host, Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

Doing great, Daniel. Having a nice day here, and having lots of interesting conversations about just all the things in AI. The episode last week really hit a nerve, I think.

I think so, yeah, and I’m actually in London this week and giving a talk tomorrow about “Trustworthy AI”… I’m hoping that our guest can enlighten me on a few other aspects of that prior to my talk, which will be convenient… So if I have to change my slides tonight, that’ll be useful. But…

You’re cheating. Oh, boy, listen to this…

[laughs] We’re privileged to have back with us Solana Larsen, who is the editor of Mozilla’s IRL Podcast, or Online Life is Real Life. Solana, it’s so great to have you back.

I’m so glad to be here. Hi.

I was just looking up the date and the episode number… So we had you on episode 187, back in July of 2022, talking about the podcast season that you had released all about AI. We talked about concerning trends, and how the technology was transformational, positive signs of change… And of course, it’s just so interesting that it’s only been up until now, so call it a year…

Yeah, what a year.

…so much has changed. It almost seems like AI has just been invented. I’m wondering if you could just give a little bit of context for why and how Mozilla is putting together these seasons of the IRL Podcast, and why the focus is on AI… And then we can go from there.

Yeah, sure. Yeah, what a year. What a week. What a month. I feel that way all the time. It’s just going so quickly. Yeah, I mean, I guess people who might be familiar with Mozilla might know Firefox, and think “What does that have to do with AI? Why are you talking about that?” The Mozilla Foundation, the nonprofit arm of Mozilla, for the past couple of years has been really focused on what we call trustworthy AI, the term that you just used. And I guess you have a buffet of choices when you’re trying to talk about ethical AI, or fair AI, or equitable AI… Trustworthy was the one that we went with as an organization. It checks a lot of the boxes, especially at the time that we chose it… And it resonates in policy circles in particular, in some of the contexts that we like to be in.

So there’s this element of “What is the future of the internet?” And if we’re an organization that has a manifesto that cares all about making the internet healthier, creating tools that enable people to create and be a part of the internet, if AI is the future of that, then what’s our role there, and how can we help make sure that all the mistakes that happened with the internet over these 20, 25, 30 years aren’t just all being repeated again in AI?

And so we’re thinking about privacy, of course, but there’s also consolidation of power… You know, the way that big tech kind of takes control of everything, and squeezes out opportunities for smaller players… And a whole bunch of other things.

[06:04] So we have a whole foundation of people, we have fellowships, we have grants that we make for different people who are innovating, and trying to think about AI in a way that big AI tech companies aren’t thinking about it, just so that we can have an alternative kind of conversation happening around these things that are changing our society, and our industry, and everything so quickly.

So part of this is we do have a big mouthpiece, I guess, as Mozilla, and one of those platforms that we have is this podcast, IRL, that the Firefox team started years ago, and that we took over on the Mozilla Insights team a couple of years ago, and we’re now doing season seven. It’s the second season that we’ve dedicated entirely to AI. And part of that exercise is thinking about “Well, who do we want to lend the microphone to? Who do we give the microphone to? What kind of voices would we like to have in this dialogue about AI that we don’t maybe get to hear as often when we’re tuning into the US mainstream tech press?”

Yeah, that’s really great. I love – in the announcement of the new season of the podcast you talk about… Like you just said, it too often feels like we’ve kind of lost control of the internet to these larger players, and you want to speak to that kind of reclaiming of power over AI to put people first. I’m wondering, as you were preparing for this season of the IRL podcast - you know, it has been a transformational year in AI, and we talked about some trends, both positive and concerning, last year in terms of the IRL report and season… But I’m wondering if you could talk a little bit about how all the unique things that have happened recently, especially around the public discourse around AI and the public adoption of this technology, wove into how you wanted to curate this season, and particularly how the topics that you covered kind of bubbled up, which we’ll get into here in a bit.

Yeah, I think front of mind, a lot of people are curious now in a way that they weren’t before. I mean, you must experience this on your podcast as well, that people now have this hunger to know about AI, where a couple of years ago they were like “Oh, what’s that? How does that concern me?” Now, everybody’s like “This really concerns me, what should happen.” And I think there are a bunch of areas where nobody is entirely sure what to do.

The first topic that we took on in episode one is around open source and large language models, this whole question where you have on the one side folks who are saying “It’s got to be open. We can’t audit the models. We don’t know what’s happening with the data.” And then on the other, you’ve got people saying that it’ll be the doom of all of us, and everything’s got to be shut down and closed for security purposes. And so you have these – I think a lot of discussion these days, it’s really polarized sometimes… And so it’s trying to figure out how do you make a nuanced argument that kind of explains not just different sides of the story, but how there’s a spectrum. And there are a lot of AI topics that get sandwiched together just under this umbrella that’s called AI, and it’s just so many different contexts, and so many different business purposes… It makes less and less sense to talk about it all as one thing. But we’re right on the cusp, where we’re still talking about it as one thing and we’re still trying to grapple with how we should regulate, how we should build, how we should design, what we should think about personally… And so it’s a really exciting moment to try and figure out those things.

[10:07] And the challenge as a podcast creator is that each of our episodes is like 20 minutes long. So we pack in three, four different voices, there’s some really deep analysis… We work with our host, Bridget Todd, who’s great… A whole bunch of people work together on this thing, and it’s like this very highly-polished/produced, lovely kind of white paper in audio almost of a big issue, a big topic. So I’m really proud of it. And last season, which was a little bit ahead of the curve in terms of talking about some of these AI issues, we actually won the Webby for Best Tech Podcast.

Congratulations. That’s awesome.

Congratulations. Yeah. Wow.

I was surprised, because we were the only tech podcast in the nominated group there that was hosted by black women, that was featuring voices from Africa, from India. We’re really kind of digging into the corners, I think, of thoughts around AI that aren’t just concerned with how much money a technology is making; that isn’t necessarily the criteria of success for why you would elevate somebody’s voice.

There’s so much that you’ve just said that I want to go into even more… You talk about the larger public’s discourse over these topics, and topics with an s is crucial. And it’s very nuanced. And yet you kind of alluded to that sense of responsibility that you have in your own podcast, about what voices do you want to raise, and where do you want to get them. And I know that Daniel and I feel that way very intensely. We’re at a moment now where the whole world is really hopping on to this topic. So you’re bringing the people we had before, but there’s so many new people that want to understand… Because they really do get that it’s going to affect their life.

And I guess, selfishly, as we have tried to do that, trying to avoid just like people wanting to self-promote, and there’s always these efforts at that… But trying to bring the right conversation to bear. How do you think about that? How did you and your team think about the fact that you’re in such a responsible position in terms of being able to either have a voice, or lend that voice to others at such a crucial moment in time? We’re at a unique point in history… How do you parse that? How do you help the public get that discourse right, and talk about the right things? And how do you recognize, for instance, on this one – you know, this is too big of a topic to be thought of as one unit anymore; we now need to kind of segregate through the different concerns within it… How do you approach that? Because it really affects how the public thinks about it.

Yeah, I mean, I think we have an advantage in the sense that the organization as a whole is also thinking about this on a daily basis. So when we think about who we give fellowships to, or who we give grants to, or who we partner with when we do different things, that’s part of the thought process. And it’s extremely difficult, because it’s almost like every single AI startup or project is something for good. Like, everybody says that they’re doing it more ethically…

Making the world a better place.

Everybody, right? But you need to figure out, I think, what are the values that guide you when you want to make a decision about what that means for you. And even in an organization like Mozilla there’s a lot of diversity in terms of what people are comfortable working with, and what their opinions on this are… We chose this theme, “People over profit”. And the idea wasn’t to just only look at nonprofits; it was to also be able to look at “Are there ways that you can profit, that you can make money, and still kind of have a sense of putting people ahead?” Which is what we try and do with Firefox and with other Mozilla products. We’re trying to figure out “How do you make money and not sell people out, not sell their data?”

[14:07] And so there’s that sort of critical lens on it, and what happens very easily is that you end up veering a lot to the people who are criticizing. They’re being critical, they’re criticizing, they’re pointing out the flaws and the errors… And you can also get a little bit too much of that, I feel. And I think that’s where we really put a lot of effort in, is to figure out – you know, we listen to those voices very intensely. Who’s being constructive? Who’s kind of trying to rethink this from a different angle that we hadn’t thought about before?

One area – in the second episode we look at content moderation, and we look at data work. The ghost workers, the exploitation of labor and data, and how the content moderators in Kenya were being paid really terribly, and not treated well, and how they’re fighting back. This story, right? But then as part of the package we also talked to an organization called [unintelligible 00:15:06.12] in India, that’s trying to rethink “Okay, well, if we’re doing data work, how could we remunerate people differently?” And what they did is they have these voice datasets that they’re making in different languages, in rural India in particular, and they’re working with a lot of women, and they do it as part of this educational project, and have people be able to do work from home… And there’s a whole philosophy around it. But every time they resell a voice dataset to a new client who’s building some kind of voice recognition tech, they send more money to the person who donated their voice to help train the system.

And so what they’re asking is “Well, if we’re paying pennies for this work in the industry, and companies are making hundreds of thousands of dollars on people’s labor, why don’t we just give them a bigger cut?” We can still have really good business, but we could be thinking differently about what a contribution is, and have kind of royalties that build over time, as long as this dataset has value. Why don’t we think differently about how we share that value across more people?

And that’s like a very simple thing, where you hear about it and you’re like “Oh, yeah, right. We could just be thinking differently about this.” And there are a lot of examples like that in AI, where [unintelligible 00:16:30.26] a business; maybe they’re not a unicorn, but that’s not their goal. That’s not their ethos. And so when we’re challenged, I think, by people who are innovating in entrepreneurial ways as well, I think it really – it helps us see how we’ve been goaded into thinking about AI in ways that are really defined by an industry that has a specific set of norms and values. Also around how data is used, and how humans are treated… And we can rethink a lot of those things. It could be different.

And when we’re talking about now in the regulation space, how do we make things safer, and how do we stop harms from happening - all that stuff is really important, but we also need to have people who are working on figuring out where do we want to end up? What is the vision for what we want to accomplish with this tech? Because it’s not going away. And yeah, so we need more examples, I think, of what is good, what could good be… That’s our kind of guiding star for how we pick who we choose to feature in the episodes, and how we kind of build up a story that has a bit of tension, but also like a silver lining.

[17:51] I’m already super-excited to dig into these various topics, and I think people at this point maybe they’re wanting to just hop over and start binging the IRL podcast… And I totally give people permission to, because it’s just so good. So you can pause this podcast you’re listening to and hop over and jump right into IRL. But I do want to take the time to talk through some of these subjects on this podcast, because these are topics that have come up in various ways, and bringing these other voices into that I think would be really good.

The first episode, which you titled “With AIs wide open”, I think is super-interesting and relevant even this week, with OpenAI releasing a whole new set of features, which are just incredible. I mean, I think anyone would have to admit, these features are incredible. And the vision stuff… And it seems like there are these sorts of proprietary model providers or API providers that are really leading the way in some of this functionality. But there is this really amazing undercurrent of open models… And it’s multifaceted, as you’ve alluded to; some people approaching it in various ways, like Stability releasing some amazingly performing models, but maybe licensed in a restricted way for research use, and other purposes. Others that are releasing under various licenses… So there’s the licensing side of this.

There’s also, as you alluded to, “Hey, if we start opening up these models–” Can we really say to OpenAI, “Hey, just open everything up, and everything will be okay”? Are there reasons why we shouldn’t be saying things like that? So this is a very multifaceted topic, and I’m curious if you could just kind of lay out how you thought about approaching this topic in particular, and who spoke into this, and what were some of the highlights that were mentioned?

What I was most concerned about in the beginning was not knowing where to end up, if that makes sense…

…because the jury’s still out on a whole bunch of things related to this. I was worried about AI doomerism. I was a little bit worried about making it sound too scary. But then I was also worried about not making it sound scary enough, you know?

What’s the right balance, yeah.

Yeah. We called in David Evan Harris, who used to work on the AI responsibility team at Meta, who’s probably a little bit more on the worry side than I am at least, but also has a history in open source, and was able to, I think, also speak in practical terms about what is and isn’t being done at an organization like Meta, and the reasons that he’s concerned, and how it connects to, for instance, social media, which is something that everyday people are really concerned about, right? Like, how is this going to affect elections and that kind of thing? So that’s something that we looked at.

And then we talked to [unintelligible 00:21:01.14] who is a fascinating researcher, very vocal on Twitter as well, who’s looking for openness in datasets, and who’s been doing a lot of auditing of datasets, and who, among a group of researchers, has really tried to engage with companies and encourage them to be more responsible about what they do and do not include in the datasets in regards to hate speech, terrible representations of women, black people, and so forth. And that’s a really important tenet in the way that Mozilla thinks about why it’s necessary for these models to be open. Because the companies time and again have proved that we can’t really entirely trust them. And so open source is like our safety net, in a way.

[21:56] So I think on the spectrum, we’re leaning more towards the open side than the closed side, even taking into account all the things that we know. We talked to Sasha Luccioni from Hugging Face, who does research around climate change and open source models, and just hearing just the simple perspective around – well, simple, but just this idea of if it’s open, if we’re working together in large communities, we might actually have these AI models have a smaller carbon footprint. Because they gobble up lots of energy.

And so those are the kinds of concerns that are outside of this framework of dangerous/not dangerous, where we need to think about what would it mean to have researchers who speak different languages working on these things? What would it mean if we could have a more diverse set of people than the people who work at OpenAI working on some of these large language model capabilities?

And then finally, we end up with an interview with [unintelligible 00:22:56.00] from a startup called Nomic, and they’ve made this system called gpt4all, which you may have played with at some point. It’s like an alternative to ChatGPT that you download onto your computer, and you can ask it questions and chat with it offline. And it doesn’t take your data if you ask it not to; it doesn’t take your private information and use it to retrain models. So it’s, again, a different approach to thinking about “Oh, well, what if we did want this stuff to be offline? What if we did want it to be totally compressed, so you could have it on an everyday computer, in a low bandwidth society? What could we do then? How would we be designing differently?” I find that inspiring, but it’s not clear-cut for any of these things. Nobody has the answer. But the important thing is that we need to be really contextual about how we think about these things.

A lot of the concerns that you’ve just enumerated are – certainly, Daniel and I share this; it’s been an interesting conversation on the safety side… But then in the last week we had the Bletchley announcement, and in the US we had the executive order, which was quite detailed in terms of that… In the context of us agreeing with you that open source is a good basis on which to try to work on solutions that are in the daylight, so to speak, that everybody can be part of, and see, and verify - how do you see regulation? It looks like we’re right on the cusp; after many years of talking about it, it’s starting to happen. How do you think it will affect some of the ideas that you were just talking about? Do you see us making some turns, or do you think it’s not gonna have much impact? Because I’m still trying to process it myself.

I think it has yet to be seen. Because if we’re thinking about risk, for instance, like, in the EU AI Act, if we’re thinking about who is responsible for the risks, and upstream, and downstream risks, and how would you regulate, and how that affects communities, or how would that affect individuals, or small organizations, or nonprofits that were working on these things - I think we still don’t entirely know.

I think part of what we tried to communicate, and then I see Mozilla communicating in other contexts as well, is that it’s not a matter of – open isn’t always just good in its own right. It’s open for a purpose that is good.

Yes. It’s a start.

It’s a start, but it’s also – like, you can be open in different ways. You can be open to consolidate the market in your favor and stamp out competition. That’s not the kind of openness that we’re in favor of. We’re in favor of openness that leads to transparency and accountability. We’re in favor of openness that enables collaboration between people who have good intentions. And so openness that allows people to build, and create, and do things for their own countries and languages and societies and stuff. So if we start thinking about how do we protect those functions of open, then it’s not just open for openness’ sake; it’s open with a kind of - yeah, a purpose.

[26:13] So I think it’s important for the regulation to start thinking about those things. And consolidation of power is a big thing; enabling free and open competition on some of these issues I think is really important. Having a global lens on the effects of these technologies is really important. I mean, overall, it’s good to see the things that are happening, and certainly we agree with a lot of what is being put forward. I think we just want more in certain areas.

Yeah. On the dataset front you mentioned some of the necessity to have some sort of transparency about what went into a dataset in order to proceed with kind of efforts that are reversing bias, or trying to prevent hate speech, or toxicity coming out of these models… Which I think is really good. I also think – just to really highlight that kind of global piece of this… So one of our engineers at Prediction Guard - we were working with LLaMA 2 the other day, and doing some experiments in a variety of languages… He’s a Hindi speaker, I think he speaks a few other languages as well… And we were trying some things in Hindi, and he’s like “Hey, this doesn’t seem quite right”, and then he went to the LLaMA paper and looked at kind of the distribution of language data in the dataset, and he was able to very quickly understand “Oh, this is why this is this way. Maybe if we do this, or that, we could make some improvements in the tasks that we’re trying to do.” So even just – I don’t think it always has to be… I guess what I’m saying is maybe not every single thing needs to be open, and in all of the same ways, but even being transparent about the makeup of a dataset and where it came from, and provenance, that can be actually quite helpful and powerful. And maybe that ties into this kind of second topic as well, which is that some of this data comes from these sorts of data workers, or crowd workers, or ghost workers, and highlighting that as – you label it as the human in the machine. Why was that an important thing for you to include as part of this discussion?

Because it’s invisible, and it’s overlooked at all parts of the food chain, from the consumers at their computers to the people actually developing AI. The systems that you interact with in order to hire thousands of task workers to help you do things with your data - they’re designed to shield you from actually having any kind of human connection or sense that you’re dealing with a human. The whole thing is meant to feel like a machine. And unfortunately, I think it also shows the callousness of a lot of the industry, because thousands of people are saying that they’re traumatized, and are suffering, and can’t eat. And yet, these practices continue as just a cost of doing business. And you’ve got millions of people in countries like Kenya, and the Philippines, and India, places that have a lot of tech graduates who are doing this kind of work, which is thoughtful and requires reading long policy documents… And people take a high degree of responsibility for the outputs a lot of times on some of these projects, and yet they’re treated really terribly.

So it just seems like – does it have to be this way? No… It wouldn’t have to be this way. And it’s tied in with this – you know, we can do better. Again, it just seems like an area that like “We can do better on this.” The fourth episode that we look at, actually – I hope you weren’t expecting to go through them chronologically, but we actually –

[30:11] No, no, you’re all good. They’re all interconnected. Yeah.

Yeah. Well, because we return to this question of like open, not open… The title of it is “Lend me your voice”, and it’s actually about voice datasets and what it means to belong to like a small language community, and you build a dataset to be able to do voice recognition tools in your own language. And there’s a lot of open source AI that’s really useful in that context for them to be able to build stuff in ways that are affordable, and sustainable also if you’re trying to build a nonprofit. And what some of them describe… We talked to this one person, [unintelligible 00:30:50.22]. He’s based in New Zealand, he’s actually Hawaiian… But he’s working with the indigenous community in New Zealand for the Maori language, and they have this network of radio stations, and they started building their own voice recognition systems to be able to transcribe historic broadcasts that they’ve had for many years from the community. And they have this dataset, and they’re trying as hard as they can to protect it from big tech, because big tech wants to gobble it up. Just how they gobble up everybody’s videos on YouTube, or the transcripts of them, and they build their own multilingual, large model datasets that are supposed to be able to do all languages. Suddenly, a small organization like the one that’s working with the Maori language, or like the one that’s doing something with the Swahili language etc., all around the world - they’re suddenly in competition with these big tech companies who claim that they can do the languages as well as they can, who often don’t have the same attention to detail that they do. It’s not that they’re bad, but they’re just different, and they’re not created with the same set of values of like uplifting, or supporting a community.

And so if you’re a startup developer in, let’s say, South Africa, or in Kenya, and you’re trying to build something with your own local language model… A lot of the VC funders, people who want to give you money, they’re not going to say – they’re going to be like “Well, why don’t you just use OpenAI’s thing? Why are you using something else that costs more, when you can have all these different languages at once?” And so suddenly, you’re in competition with the biggest companies in the world, and you’re just trying to create a sustainable startup ecosystem in your country.

A lot of these communities are grappling with these similar questions around openness, not because of nuclear war, or fear, or anything like that, but more like “This is our data. How do we protect it?” How do we make sure that it’s used for the intended purpose? But at the same time, we want it to be open; we just want to be able to choose who can work with it.

[unintelligible 00:33:04.11] – he’s a really amazing speaker on this topic. They made their own license, an indigenous data sovereignty license. And they try and treat data kind of like land. That’s the metaphor that they use for it. And they treat it as a resource, as a natural resource. He said “They’ve taken our land, now they’re coming for our data. Let’s protect it.” And it’s a kind of – sort of like a spin on… You know, you’re familiar with the Creative Commons license, where you can use it under certain circumstances… In their case, you cannot use it at all unless you have permission. But if you are an indigenous language community, they’re likely to give you permission. So they kind of set themselves up as the stewards of this data, with a sense of responsibility for what it’s for. So this question around “Should AI be open or closed?” - it’s not so clear cut. It’s not like all the civil society voices are saying “This has to be open”, and big tech is saying “This should be closed.” It’s this whole confusing mishmash really, and arguments about what it even means for something to be open these days when it comes to AI. Because it’s not just opening the code. It’s so much more complex than that, and there are so many people who are doing – just like we talked about AI for good, there’s also all this “open washing”, people call it, where they say that it’s open, but it’s not really open; or only when it suits them.

[34:33] Well, Solana, I’m really intrigued by this other topic on mass experimentation with AI systems, which I think – is it right to say I’m the subject of this, or…? I guess I’m the – yeah, participant/subject of this mass experimentation with AI systems… So could you talk a little bit about what you mean by this term, and why it came up as part of this focus on putting people first in the development of these AI technologies?

I think we did good with the titles of the episodes this year… This one is called “Crash test dummies.”

It’s good. [laughs]

The central question of it is “Are we crash test dummies of AI?” And we kind of are, because we start off with the story of the automated vehicles in San Francisco, and we talk to somebody who works in traffic safety there, to get actually a nuanced perspective on it… Because what I realized in doing research for the show is that people really love these AVs, and they’re really excited about them, and think that they’re great. So again, I didn’t want to be like – you know, I’m sitting in Germany, in Berlin, and think “This sounds dangerous. Why would anybody want to get into a self-driving car?” But they’re exciting, and there’s a lot of hope that they might make things safer in some way. And so again, it’s a topic to approach with nuance. Are we the crash test dummies? Like, between the time that we did the first interview and the episode went live, [unintelligible 00:36:11.16] they got their license to go completely driverless pulled. And so it’s another fast-moving topic where - yes, our cities, our streets, are actually testing labs for technologies. And they’re experimenting not just on a focus group of people, or a small – you know, it’s like millions of people, and it’s kind of life and death situations. And you just kind of ask yourself “How did we get to this point where we have companies that we know that we can’t entirely trust, because they don’t put people over profit, and they show that over and over again… And yet, we trust them when they say that they’re there to make things safer?”

And so there’s this element, I think – this comes up in a lot of government processes, for instance, where you have companies that are selling predictive systems, or algorithmic decision-making things. You have it in the banking industry, you have it for hiring people, you have it over and over again; you have evidence that it’s not really working as intended, or it’s biased, or it’s stereotyped… And at this point, who can really be surprised? So why aren’t things being tested better? Or better yet, why aren’t they just being designed differently in the first place? And I think that the difficult thing, which - we also fall into this trap where we’re talking about AI as a topic again, right? But maybe the best way to make streets safer doesn’t have anything to do with AVs. Maybe it has to do with sidewalks, or street lighting, or… There’s a whole bunch of different things that you could be doing in terms of public planning, and so forth, that have nothing to do with AI. And it’s the same with fraud, and different things. It’s like, maybe the answer isn’t AI. Maybe this is actually creating new problems, instead of fixing old ones.

[38:20] I guess to extend that a little bit, as we’re talking about kind of crash test dummies, and in this context kind of literal crash test dummies… But it seems like that’s almost happening across – I mean, it’s certainly happening across social, it’s happening across many, many touchpoints within civilization. And now we’re contending with - you have large groups of people that are doubting elections, and others saying “No, no, that one went just fine.” But it’s created all of these social issues of being a participant in our society… And I’m kind of curious when we arrived at that? Are we just kind of destined to go down that path, where we’re all crash test dummies in all factors of life to AI? Or is there a better path potentially that you identified?

Yeah, well, we actually ended up thinking about regulation… So to carry the metaphor forward, we’re like “Well, the seatbelts are going to be regulation for us”, right? That’s part of the answer, is like thinking about how to have the right amount of transparency, openness; how to have accountability. How to make sure that people can argue with systems that treat them badly, or threaten them, their lives or their livelihoods.

The reason that I bring up regulation is we look at the – I think it was a couple of years ago. Now I don’t remember… The blueprint on AI that the White House put out recently… It’s actually a really remarkable document. It has a whole bunch of good advice about how you could regulate things, and what are the principles that should be in place. And it’s the kind of things that are showing up in some of the pronouncements that we’ve looked at this week, and that are bubbling up, I think. It’s like, well, you can build differently, and you can design differently.

And we also talked to a woman called Navrina Singh. She used to be one of the Mozilla Foundation’s board members, but she works with this company, or founded a company called Credo AI. And I’m sure there are many other companies that do this kind of thing, but they sort of try and operationalize what it means to make responsible tech on a large scale. And it’s kind of interesting, because operationalizing, in this case, means they make dashboards, for instance. Like, they make tech dashboards where they ask the companies “Okay, well, what are your values, and how would you measure that? What are your benchmarks for how you measure whether you’re successful or not, or whether you are creating harm in society or not? Who is going to hold you accountable? What kind of people are you pulling into the process?” And so sort of trying to create - they call it AI governance, but it’s basically a process to help companies think through “Are they actually doing what they’re saying that they’re doing?” Because a lot of them aren’t used to thinking about how tech affects society. And you have to look at it not just at the beginning when you deploy, but throughout the lifetime of your operations. And that’s a different mindset.

So there are different ways, I think, to build with safety and with risk, trying to think about risk as part of your operations and part of your business. And then the final part of that is also - okay, so they make these regulations… How is a giant Fortune 50 company supposed to comply with regulations when they have thousands of different AI systems that they’re managing across different client portfolios, and so forth? So you need partners in that to help you technically figure out how to do that, but also in terms of process, procedure. So it’s not that these things can’t be solved, it’s that there’s very little attention paid to them at the moment. For sure, it can be better.

[42:14] As we’re kind of getting close to an ending point here - I love the direction this is already headed, in the sense that we aren’t kind of destined to have everything just be as it is… And I know that there’s other content that we didn’t quite get to cover that’s part of your newest season. I encourage people to take a listen. Go ahead and just start the first episode and binge it after finishing this one… But in looking towards next year, when I hope we have you back on the show, or Bridget, or both of you, to talk about what’s going on with AI and kind of internet health next year… What are some of the encouraging things that kind of bubbled up to the surface as you were looking at what’s going on in the AI industry in terms of people that are being transparent, making a positive difference, and kind of pushing us towards thinking in new ways? What are some of the things that maybe encouraged you, and what are you encouraged by kind of looking towards the next year in terms of what could positively happen in the industry?

I’m encouraged by the fact that we’re all becoming more literate on these topics, because it’s a high learning curve to pick up some of these topics around AI and all their complexities. But I think the general public, I think people who are building AI, I think regulators, I think journalists - I think everybody’s really stepped up their understanding of a lot of the issues. And there’s room for complexity still in these conversations. So I think that’s really great.

The other thing that happens - and my background is media, it’s also activism… I care a lot about social movements, and how they work, how do you make change in society… And so one thing that we look at a lot at Mozilla is these intersections. Like, how is AI – now that it’s embedded in all these systems that affect our lives, and that affect discrimination, and education, and every -ation, how are social movements picking up these topics in different ways? How is AI intersecting with migrant rights, or women’s rights, and so forth? And so how are these different social movements going to make this part of their mantle, in a way? And I mean that like worldwide - people who are fighting for human rights, the people who are fighting for free speech, privacy, against surveillance, against facial recognition… Everybody’s becoming more literate, and so I think there are many, many more people who are going to be paying attention to this topic, and I think it will actually have a positive effect. Because for sure, it can’t just be the tech industry that’s figuring out how to make themselves better on their own terms, in ways that also make them gazillionaires… That doesn’t work, right? It’s too big. The AI sandwich is too big. I think that’s something that I find very interesting, is where are those intersections, and how do we fan out, in a way, and start working together to really bring in folks who are directly affected by these systems in their design, and also as builders. They’re going to be building stuff differently.

Yeah. I think that’s a really good place for our listeners to end, and something for us all to think about… Because there are a lot of builders that listen to this show, there’s a lot of people across the various industries… I think you did a great job in kind of expressing how we can all be thinking maybe a little bit more nuanced, but also a little bit more intentional about some of these topics… And again, I encourage people to go check out the IRL Podcast, the latest season, and go ahead and listen to the previous season as well, because it’s awesome… And we’ll link that in our show notes, and we certainly look forward to having you back on the show very soon, Solana.

And I should say - anybody listening, come back to Practical AI. Go to us, but come back here. We like this show.

[laughs] Thank you. We appreciate that very much. That’s very meaningful coming from you.

I’ve learned a lot from your show. I really appreciate it a lot.

Good, good. Well, we’re very happy to have your voice on our show, and thank you so much for taking time to join us.

Thank you both.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
