Practical AI – Episode #31

AI for social good at Intel

with Anna Bethke


While at Applied Machine Learning Days in Lausanne, Switzerland, Chris had an inspiring conversation with Anna Bethke, Head of AI for Social Good at Intel. Anna reveals how she started the AI for Social Good program at Intel, and goes on to share the positive impact this program has had - from stopping animal poachers, to helping the National Center for Missing & Exploited Children. Through this AI for Social Good program, Intel clearly demonstrates how a for-profit business can effectively use AI to make the world a better place for us all.




Transcript


Welcome to the Practical AI Podcast. This is Chris Benson, your co-host, as well as the Chief AI Strategist at Lockheed Martin, RMS APA Innovations. This week you’re going to hear one of a series of episodes recorded in late January 2019, at the Applied Machine Learning Days Conference in Lausanne, Switzerland. My co-host, Daniel Whitenack, was going to join me, but had to cancel for personal reasons shortly before the conference.

Please forgive the noise of the conference in the background. I recorded right in the midst of the flurry of conference activities. Separately from the podcast, Daniel successfully managed the AI For Good track at Applied Machine Learning Days from America, and I was one of his speakers. Now, without further delay, I hope you enjoy the interview.

My guest today is Anna Bethke, who is the Head of AI for Social Good at Intel. Welcome to the show!

Thank you, and thanks for having me on here.

Could you start us off by telling us a little bit about you and how you got where you are?

Sure. I studied aerospace engineering at MIT, and focused in grad school on human factors engineering. This is basically how users interact with computers. Specifically, my lab was looking at complex algorithms and scenarios for having a single operator operate multiple drones; this is where the aerospace ties very loosely into it. And this was probably about ten years back, so drones were starting to be utilized more and more… But then how does somebody integrate all this information and do path planning, etc.? It gave me a taste of statistics, as well as data visualization.

Once I graduated, I was doing some geospatial data analytics, first at MIT Lincoln Labs, then at Argonne National Labs. I moved to a data science consulting type of place, and now I am here at Intel.

When I joined Intel, I was still doing data science, looking at natural language processing in particular, doing some deep learning research, trying to figure out how do we make these algorithms really run quickly. But I’d always been very interested in applying these skills in a way that was more beneficial for humanity, beneficial for the world.

I’d been volunteering with an organization called Delta Analytics. They pair data science and software engineering volunteers with non-profit organizations… And I just wanted to make it more of my day-to-day job. I had seen a number of different projects we’d been doing at the company that had these missions, like helping detect which kids are most at risk from online predators based on their conversations, and other things like this… I can go into those more, but… I sort of decided that this was what I really wanted to do, but I didn’t see a very easy way of getting involved in them. It was like, you know, “Go talk to that person, or that person”, and it’s just scattered and crazy… So I suggested this role. I said “I think we should have a program. I think that there should be a way to bring in more of these types of programs - talking to the non-profits, talking to individuals, talking to organizations, or for-profits too, that are really trying to move the meter on helping out individuals, helping out the environment, helping the world, basically.” I know that sounds a little bit cliche, but these social impact projects… So that’s sort of how I became what I am doing today, which is being that coordinator, point of contact, as well as an advocate for these programs.

[04:26] I love the fact that you saw that need and kind of created your own job by way of suggestion. Before you got to that point - you just alluded to some of these initiatives you got involved in before you ever even got to Intel that had an impact - was there a moment in particular, maybe one of the projects you were working on, that made you realize this was what you wanted to do? And what was it about that project that did it?

Sure, and I guess it actually even started before I was a volunteer with Delta Analytics. I had been hearing about this AI for good, and data for good type of idea, and went to the Data Science for Social Good (DSSG) Conference Outbrief at the University of Chicago. They had a two-day conferency-type thing and just showcased a bunch of these different projects, talked about what these grad students were doing, what these non-profits were up to.

When I heard about Delta, I started to follow these different things on social media, and it seemed perfect, because it was a way to really get my hands dirty. I was working with the organization Open Media Foundation, and basically what they do is they go into these local town halls and governments and help them record their meetings, so that everybody in the community can hear what happened. They do some speech-to-text translation to transcribe all of the meetings, so they have a bunch of different text. So I was like, “Oh, this is my kind of data. I love this.” Tons of data, tons of text… Very messy, no punctuation, which was really difficult with a lot of different NLP techniques… And the issue that OMF was seeing is that they had all this information, but they didn’t have any tags on it. So they didn’t know if a town hall was about water usage, urban development planning, taxes etc. So what they wanted to do was labeling. This is something that NLP is quite good at.

So with a team of three other individuals we looked at the data and tried to figure out how to do this. One of the hardest parts was that there were no labels whatsoever, so it was completely unsupervised. We used a technique called LDA (latent Dirichlet allocation); this is an unsupervised type of text clustering, and, you know, it figures out suggested topics. We had a few very simple dashboards - this was mostly done in Python, the dashboards were done in R, so a combination of different types of tools along the way. And at the end of the day we were able to figure out three or four tags per meeting… It worked pretty well. We handed that off to OMF, and now they are putting that into their website, so that the local town governments can bring that into their APIs and say “Okay, this is what we are talking about”, and then somebody who is very interested in this type of information can then say “I really want to know just in general, or at a local level, or an entire national level”, because they have different government groups that are national… Like, “I wanna know whenever there’s any talk about gun control” potentially, or whenever there’s anything about health.
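To make the approach Anna describes concrete, here is a minimal sketch of unsupervised topic tagging with LDA, assuming scikit-learn and a plain list of transcribed meetings; the sample texts, model sizes, and two-tags-per-meeting choice are illustrative assumptions, not the team's actual pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative stand-ins for transcribed meetings (no labels available).
meetings = [
    "discussion of water usage rates and reservoir maintenance",
    "urban development planning and zoning variance requests",
    "property taxes and the annual budget review",
]

# Bag-of-words counts; real transcripts are messy and unpunctuated,
# so stop-word removal (and a minimum document frequency) helps.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(meetings)

# Unsupervised topic model: each meeting becomes a mixture over topics.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)

# Tag each meeting with its strongest topics, summarizing each topic by its top words.
words = vectorizer.get_feature_names_out()
for doc_idx, topic_weights in enumerate(doc_topics):
    tags = []
    for t in topic_weights.argsort()[::-1][:2]:
        top_words = [words[i] for i in lda.components_[t].argsort()[::-1][:3]]
        tags.append("/".join(top_words))
    print(f"meeting {doc_idx}: {tags}")
```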

Fascinating. When you got to Intel and you had had these experiences, I assume you came in under a different role initially?

Yeah…

[08:00] And what prompted you to say “Hey, this is what I wanna do. I’m gonna go invent my own job, and go make this thing happen, here in the organization.” Had you been there long, or were you still new?

I was still pretty new. I have been at Intel for about two years now, and I created this role last April, so I’d been there just a little bit over a year. My title beforehand was “deep learning data scientist”, so completely different - very much hands-on coding, research-oriented… So I guess a little bit more into the back-story - when I had joined Intel, there was this idea of having sort of a more core data science type of group, and I thought that I was gonna be working on more of a social impact project, and – no, I didn’t come in as a deep learning data scientist, I came in as sort of a data scientist just in general, into another group. That group got merged into another – all sorts of complexities, but I was essentially doing sports data analytics first, and then I was doing the deep learning natural language processing. But it was short, so I usually just skip that over.

Well, that’s not unusual. Right now, with the field evolving as fast as it is - not only deep learning specifically, but data science in general - it seems like people are moving around from position to position within organizations pretty quickly.

Exactly. Now, I knew that sports data analytics was not gonna be where I wanted to be; I know for many people that’s like their dream job, because “Sports, and data science? Hell yeah!”, but no… I call a lot of sports “sportsball”, and you know… It is interesting data, but it wasn’t the data for me.

It wasn’t just the right fit…

Exactly.

So how did you know that this was gonna be the right fit? What actually got you to put it forth?

It’s just something that I knew I was super-passionate about. I’d been loving the work that I was doing with Delta Analytics, and… I don’t know, I guess I just decided to give it a shot. I’ve always heard that you can create your own role, that you can advocate for what you would like to do, and you can talk to your manager, talk to other managers… I approached probably five or six different people about this, and I was very pleasantly surprised when they were unanimously for this, and had my back, and helped me clarify what I was going to be doing… And it’s changed a little bit, and it’s ever-morphing, but… I don’t know, it’s cool!

Can you share with us what your vision is for this role? I’d like to get a sense of what did you pitch to them originally, and how has it evolved in this time that you’ve been in the role?

For sure. One of the biggest things that I would like to do - and this is something that I’m still figuring out how to do most efficiently - is to bring in more programs… So to be able to help more organizations that are having a social impact. The hardest thing with that so far has been trying to figure out where are all of the resource areas. So who are the different people that have the capacity to take on another project… Because we’re all working as hard as we can already on a lot of other things, whether it’s research and development, or working on these proof of concept projects with other groups. Their time is pretty much tasked, so trying to figure out with the managers of other organizations what we can do and how to sort of leverage the company.

One of the things I’d love to be able to do (and I’m still figuring it out) is to involve more than just my own business unit. Intel is ginormous. I don’t know the stats on how many employees we have, but it’s a lot, and it’s everywhere; it’s a very global organization. So how do I get more people that are like me, really wanting to work on these types of projects. Sometimes I get emails like “Oh, I wanna do this so badly!”, so I have a little list going of people that can help on the projects, but I also don’t want to be taking them from their day jobs, too. It’s sort of like a line, I guess, to walk.

[12:36] It sounds great. So as you did this, and as you were making the pitch internally to get the role into place, how did you approach justifying it, given the fact that you’re working for a for-profit corporation that is in business to make a profit, as are many of our employers (certainly mine)? How did you get them to see the value of this kind of role?

For sure. I think one of the things that’s been the most beneficial about this last nine-month-or-so period is to really start to see how to do that. Coming from an engineering background, I didn’t really have a lot of business classes, I didn’t really have a lot of marketing classes… But one of the things that I have been doing a lot is talking about these projects, both internally and externally, and showing a few different things.

So when I pitched it, I didn’t give any business objectives, or any metrics, or things like that, and now I’m starting to put those together. A lot of what we’re seeing though - it sort of helps the business in a few different ways. One is marketing. Talking about these really socially beneficial projects gives you all the warm feels, and they’re lovely to talk about, they’re really interesting as well, so that’s one thing.

The other is hiring and retention. A lot of the workforce today really just wanna work on projects that are impactful. Recommender systems are great, or figuring out the sentiment of Twitter - also great. These projects have a place in things, but a lot of the workforce wanna do something that is more impactful. So instead of looking at any social media (we’ll just anonymize the social media source) or online source for sentiment or categorization, you look at it to figure out what is harassing text or not, or to try to figure out what types of information kids globally have access to. Those are the things that we really wanna be working on.

The third is actually really relevant to our hardware as well. At Intel we sell a bunch of hardware, that’s our bread and butter, and without being able to do these types of projects we wouldn’t see the entire range of use cases. We’ve done a bunch of different medical types of projects; one of them is using very large 3D images and trying to figure out where tumors are - basically revolutionizing the healthcare industry.

The issue with these datasets though is that the images are so large that it takes a large amount of memory to fit them on your compute. There’s something called tiling, so if you can’t fit it all in memory, you can chunk it up… But that doesn’t do very well if you’re doing a segmentation type of deep learning, where you’re trying to show an entire area. So you wanna keep your image whole. That really helps us then be able to make certain that our hardware is designed in a way that supports these datasets. If we were all just looking at ImageNet, then it’s these tiny, tiny images… And that has a place, but we wanna see the breadth of what is out there. And that’s just one example. A lot of these other datasets are also very large, very messy, so creating the tools to support those…
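As a rough back-of-the-envelope illustration of why whole-volume segmentation strains memory compared to ImageNet-sized inputs - the volume dimensions below are assumptions for the sake of the arithmetic, not figures from the interview:

```python
import numpy as np

def input_bytes(shape, channels, dtype_bytes=4):
    # Memory for one float32 input tensor of the given spatial shape.
    return int(np.prod(shape)) * channels * dtype_bytes

# A typical ImageNet-style input: 224 x 224 RGB.
imagenet = input_bytes((224, 224), channels=3)

# A hypothetical whole 3D medical volume: 512 x 512 x 256 voxels, single channel.
volume = input_bytes((512, 512, 256), channels=1)

print(f"ImageNet image: {imagenet / 1e6:.1f} MB")   # ~0.6 MB
print(f"3D volume:      {volume / 1e9:.2f} GB")      # ~0.27 GB
print(f"ratio:          ~{volume / imagenet:.0f}x larger before any feature maps are allocated")
```

And that is just the input; a segmentation network like a U-Net keeps many intermediate feature maps of comparable size alive at once, which is why tiling - or hardware with enough memory to keep the volume whole - comes up at all.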

[16:11] I’m wondering, you had just mentioned hardware support, and I know we’re working through some of the different initiatives that you’ve done… If you could take us through some of the initiatives, and then afterwards I would like to delve into what kind of hardware support you’ve needed, how that’s affected Intel’s business, and also which algorithms you all are tending to use… But let’s start at the beginning, before I rush forward too far, and just talk about some of the different projects that you’ve done at Intel.

Sure. There have been a lot, so I’m just going to highlight a few, and then we have a few more on our website, which will be in the show notes.

Yeah, we’ll have those in the show notes.

So this one actually gets to the hardware support as well - it’s called TrailGuard AI. The premise behind this is that poaching is a giant issue, both in Africa and globally. Actually, there is an employee who reached out to try to figure out if we could help install this type of camera in (I think) Sedona, Arizona, where a wild horse herd has been drastically impacted by people who are killing these wild horses… So it is an issue everywhere. But the park rangers that are monitoring these areas - there are not a lot of them. One of the statistics that I’ve heard is that in (I believe) the Serengeti there is an area about the size of Maryland, and 150 park rangers. So it’s a large area, and the poachers basically have a very large financial incentive to poach these animals, because the ivory that they’re getting from it, or the bushmeat, whatever they’re trying to pull out, is very financially valuable to them.

So basically what we did was we worked with a company called Resolve, and they have these motion capture cameras that they’ve been trying out. Motion capture cameras are great - they’re able to detect if there’s any movement, take images, and then an early version of TrailGuard sent all these images to the park rangers. Now, the issue with the system, though, is that they’re very noisy - a change in lighting, any movement in the trees… If the bushes move, the motion capture camera goes off; you want them to be pretty sensitive. So these park rangers were getting tons and tons of images without anything in them.

We helped Resolve embed the Movidius vision processing unit. This is a very small chip that’s low-power, specifically designed for inference on the edge, so you don’t have to send any of these images to the cloud, which saves on battery power - and also there’s not a lot of cloud connectivity in these wildlife reserves; they’re pretty remote.

So basically what happens is an image is taken, because the motion capture camera goes off. That image is sent to the Movidius VPU, and there is an SSD-type neural network (a single-shot detector, which is a type of convolutional neural network) that runs, and it detects if there is a person or a vehicle. These are the things that the park rangers are the most interested in. Yes, of course, we can extend this to animals as well - so just basic object detection.

And if there is a person or a vehicle detected, it’ll place a bounding box around that object and send that image with the bounding box to the park rangers, along with a little text file that says the probability. This drastically reduces the false alarms… Which has a few different advantages for the entire unit. One, it really saves on the battery life, so basically this unit can be out in the field for like a year, a year-and-a-half; it also reduces the noise that the park rangers are getting… So hopefully they can now intervene before the poachers get to the animals, and they can also see what information is being given. They’re able to decide “Yes, this is a poacher”, or “It’s a lot of poachers, with a lot of guns. We need to respond in a different way”, or “It’s a farmer, they’re getting their cows”, things like that.
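As a rough illustration of the filter-before-transmit idea Anna describes, here is a minimal sketch using OpenCV's DNN module with a generic VOC-trained MobileNet-SSD; the model files, class IDs, and confidence threshold are assumptions for illustration, and this is not the actual TrailGuard firmware or network.

```python
import cv2

# Assumed files for a VOC-trained MobileNet-SSD (illustrative stand-in for TrailGuard's model).
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")
PERSON, CAR, BUS = 15, 7, 6          # VOC class IDs of interest (assumed)
CONF_THRESHOLD = 0.5

def should_transmit(frame):
    # Run detection on a motion-triggered frame; return (send_it, annotated_frame, detections).
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()       # shape [1, 1, N, 7]: (_, class_id, confidence, x1, y1, x2, y2)

    hits = []
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        class_id = int(detections[0, 0, i, 1])
        if confidence >= CONF_THRESHOLD and class_id in (PERSON, CAR, BUS):
            x1, y1, x2, y2 = [int(v) for v in detections[0, 0, i, 3:7] * [w, h, w, h]]
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            hits.append((class_id, confidence))

    # Only frames containing a person or vehicle are worth sending to the rangers;
    # dropping the rest on-device is what saves bandwidth and battery.
    return len(hits) > 0, frame, hits
```

On the real device the same idea runs on the Movidius VPU (for example via OpenVINO) rather than on a CPU, but the filtering logic is the point here.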

[20:21] For any of our listeners who have listened to many of our podcasts, they may have heard that that’s something I’m very passionate about - and I know you know that as well - animal advocacy… So I would like to say thank you very much for taking that particular issue on. I just absolutely love that you guys are doing work in that. It definitely touches my own heart. What are some of the other things that you all have engaged in?

One of the other projects - this one’s also a vision one, but it’s using facial or gesture recognition. We worked with a company called Hoobox on this vehicle called the Wheelie. Basically, it’s designed for somebody who’s had a spinal cord injury, potentially someone who is quadriplegic, and so they don’t have the use of their arms to be able to control a motorized wheelchair. This lets them use whichever facial gesture is most natural for them - smiling, open mouth, raised eyebrows etc. - to control the motorized wheelchair in public spaces, or at home, or wherever they wanna go… Basically, allowing them to have more options and mobility. A lot of the devices that are out there can be expensive or invasive - one of them is this little pipe thing… So the existing options are not great for them.

We worked with them on that, basically using a bunch of different hardware choices… Intel has a RealSense 3D camera, so that helps capture a lot of information about the face, and then all the processing is done on the NUC, which is a miniaturized PC with a customizable board, so it can all be done on the device. Again, you don’t have to send it to the cloud, because you want it to go really, really fast. Like, if I wanna stop, then I wanna stop now.
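To make the idea concrete, here is a minimal sketch of mapping one facial gesture to a chair command, using MediaPipe's face mesh and an ordinary webcam as stand-ins for the RealSense pipeline; the landmark indices, the open-mouth threshold, and the drive_forward/stop commands are all assumptions for illustration, not Hoobox's actual system.

```python
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)

# Assumed indices for the inner upper/lower lip landmarks in MediaPipe's face mesh.
UPPER_LIP, LOWER_LIP = 13, 14
OPEN_MOUTH_THRESHOLD = 0.04   # normalized gap; would be tuned per user (assumed value)

def gesture_command(frame):
    # Return a hypothetical chair command based on whether the mouth is open.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return "stop"                                     # fail safe: no face detected, no motion
    lm = results.multi_face_landmarks[0].landmark
    mouth_gap = abs(lm[LOWER_LIP].y - lm[UPPER_LIP].y)    # coordinates are normalized to the frame
    return "drive_forward" if mouth_gap > OPEN_MOUTH_THRESHOLD else "stop"

cap = cv2.VideoCapture(0)                                 # webcam stand-in for the RealSense camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    print(gesture_command(frame))                         # a real system would drive the chair controller
```

The fail-safe default matters here: as Anna says, "if I wanna stop, then I wanna stop now", so anything ambiguous should resolve to stopping.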

That sounds fantastic. There are so many use cases that I can imagine that being applied to, in terms of people whose mobility depends entirely on a wheelchair, out in the larger world; I would imagine that can be pushed out everywhere.

You’ve talked about two of them so far - when you’re engaging in these kinds of initiatives, how do they affect the larger organization, beyond just your group that’s bringing them to bear? Both from a business opportunity standpoint and from a social good standpoint, how do you spread your ripples out through this large corporation?

For sure. There’s a bunch of different programs that have been helping these projects, and they span a number of different business units as well. There’s something called the Software Innovator program, and the AI Academy - the Hoobox example came through that. Basically, they help provide access to hardware, as well as software, and anybody can help in those projects and programs - any employee of Intel. Basically, anybody who wants to be involved with it can go there, and I can send you those links, too.

There’s a few different things… One of the things that we have done in the past and we’ll continue doing is - gosh, I’m forgetting the name of it right off the top of my head, but we have a program where employees can volunteer to do hackathons, or more in-depth types of teaching programs in local communities as well. One of the cool things that we’ve done in the past - especially at these hackathons - is utilize some of the AI for Good programs and use them as a way to teach students (high schoolers, middle schoolers, college kids) about AI, about computer programming, and basically spread the knowledge that way.

[24:20] One of the things that we do as a company that I love is that we try to open source as much as we can. We have courses online, and we have different Python packages or other language packages that are out there to serve as examples. Whenever we are doing a project, either for research or with a customer, whenever we can… we put it out there for somebody to take up, to utilize and use as their own as well. So that’s one way of doing it.

One of the biggest things is to continue to talk about it. We have these internal groups where we come and discuss different interests, and stuff. There’s a deep learning community of practice, there is an ethical AI group, there is an AI for Social Good group where we have these online spaces and forums to chat.

As you do this, and you’ve talked about these different organizations within the larger organization, different capabilities - how do you engage them? I assume that you’re thinking of them from “Wow, that group over there has a capability we could really use in this social good project.” So as you typically bring them in, how do you do that? Does it tend to surprise them, compared to their normal day jobs? I can’t imagine they wouldn’t be enthusiastic about being able to help, but I’m just curious how the politicking of those internal communications across departments works in this case.

That’s a great question, and I think it’s one that I’m still figuring out. It’s been a lot of email, for the most part, or IM chat, but… It’s funny, because when I introduce myself or when I am introduced to somebody, one of the responses that often happens is like “Oh, you exist?”, basically. It’s like, “We have an AI for social good?”

It’s really interesting, because we have projects that go back years that I would totally put under the umbrella of social good, and on the web page that we have highlighting these programs - a lot of them happened even before I joined Intel. So just because we didn’t have a program, or a person that was taking it under their wing, it doesn’t mean that we weren’t doing it.

It’s fantastic, though… I totally get that social good didn’t start when you came to the company, but you essentially created a group where you have a flag to plant, and it gives you a firm place for the company to rally around for these kinds of things, and to tie different components together, I assume.

For sure, yeah. And one of the nice things is that a lot of groups and individuals reach out and talk to me about – like, when we were talking about the Wheelie, on the International Day of Disability back in December, I got a bunch of different emails from our disability group, and they were like “Hey, these are the things we’re doing. We’re super glad that you exist, we love this story… Can we use it in our slides?” It’s like, “Yes, please. Of course.” So there’s that communication.

It’s really helped me see more of the projects that are happening at Intel which are super-interesting. There’s things on education, there’s things on accessibility, there are things on trying to make sure that we’re using even – one of the projects that we did a few years back is making sure that we are using conflict-free minerals in all of our silicon, when we’re making our chips; making sure that that’s not having a harmful impact as well.

[28:01] All of these different pieces and parts, and the players who have been advocating for this, I’ve gotten to know… And then when somebody asks, “I wanna do something on education. Who do I talk to?”, it’s like “Oh, go talk to [unintelligible 00:28:15.23]. She’ll hook you up.” Or knowing the AI Academy people, or the AI Builders group; they help startups get access to AI technology. All of those different pieces and parts - connecting them to each other, as well as to organizations that I think we can help.

I love the fact that not only are you doing social good, but there’s the benefit for the company, because that’s gonna keep them motivated on doing these. When you talked about making sure that the raw materials that go into the chips are from conflict-free areas, so that people are not being exploited, and all that… And with the work that you’ve done, obviously, in the poaching, and with accessibility, with wheelchairs and such… Do you have any other areas that you’re either engaged in now, or would like to get engaged in? What’s your aspiration there?

One of the things that has really interested me about this program, and seeing what the problems are and what we can do, is that we can really just reutilize a lot of the technology that we already have. So we can use the compute power, we can use the frameworks - be it computer vision, NLP etc. - and really just rejigger them into these new use cases. The segmentation example that we were talking about earlier for cancer detection - that same technology is used to show where in an image a dog is, or where in an image a person is, if you’re doing some sort of self-driving car type of thing… The same technology, but just utilized in a different - and in my opinion more meaningful - way, I guess.

That’s a great example. One of the things I was just thinking about is that some of the examples we’ve talked about so far in the conversation have been very much around computer vision, where you’re going to apply different CNN architectures to solve it… I’m just curious - and maybe you know what’s coming - but outside of computer vision, have you found there are any other deep learning algorithms in particular, or even algorithms outside deep learning, that have been particularly useful, or that you expect you may be seeing, based on some of the conversations?

No, for sure. One of the projects that Intel did a few years back was called Hack Harassment. Basically, what they were doing was working with Vox and the Lady Gaga Foundation to identify harassing speech online, and be able to work in these communities to mitigate it. We were using LSTMs and other NLP architectures to try to detect these types of comments as they were occurring. It’s interesting, using that there… And we actually are working with some grad students now to continue those types of projects and bring the state of the art forward in that area.
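For a sense of what an LSTM-based harassment classifier looks like in code, here is a minimal sketch in Keras; the toy comments, vocabulary size, and architecture are illustrative assumptions, not the Hack Harassment models.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Toy stand-ins for labeled comments (1 = harassing, 0 = benign); a real labeled corpus is needed.
texts = ["you are wonderful", "nobody wants you here, leave"]
labels = [0.0, 1.0]

VOCAB_SIZE, MAX_LEN = 20_000, 100
vectorizer = layers.TextVectorization(max_tokens=VOCAB_SIZE, output_sequence_length=MAX_LEN)
vectorizer.adapt(texts)
X = vectorizer(tf.constant(texts))

# Embed tokens, read the sequence with an LSTM, emit a harassment probability.
model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 64, mask_zero=True),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, tf.constant(labels), epochs=3, verbose=0)

print(model.predict(vectorizer(tf.constant(["go away, no one likes you"]))))
```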

There’s other things that you could do, like what we’ve done with the National Center for Missing and Exploited Children (NCMEC). They get a whole bunch of different pings from anyone that has data online - if there’s ever any content that looks like a child might be in danger from an online or a real-life predator, they get a report. It takes them a large amount of time to go through it; basically, it takes 30 days to respond to every single report. They need to figure out where it is located, whether it’s actually hazardous - because they get some false information, too - and what response is necessary, what’s going on.

[31:53] We worked on a couple of different algorithms, some of which are NLP, some of which are just machine learning, to determine - if there are multiple different IP addresses - which one is the one where it’s located, who are the different authorities that need to be brought into the case, as well as to do a prioritization, saying “Yes, these are definitely ones that we have to look into rapidly” - and with missing kids especially, the sooner you respond, the better - or “This is a case that is important, but might not need the same response.” We’re working with them on that.

That’s amazing. You have so many amazing examples that you’re working on, and aside from the animal advocacy, I love children’s issues, and the elderly’s issues are also something I personally care a lot about. So if I’m ever on the market for another job, I may come knocking on the door at Intel and beg you to take me onto your team here.

[laughs] Very cool.

So with this success - you mentioned that you only came into this role in April, and you’ve had tremendous success in a very short amount of time - I have to pick your brain a little bit… There are going to be other people out there in other organizations that really want to do something similar in their own organization, be it a small or a large one. As you have come through and maybe have some battle scars from setting this up and having to figure it all out, what kind of recommendations do you have to help people do something similar?

For sure, and one thing that I definitely want to mention is that the projects that I have talked about are all ones that I didn’t bring in, or that I didn’t work directly on. This is the work of many, many people, over many years… And I think that’s important, to make sure that the credit goes where it’s due.

What I would suggest though is if you’re wanting to do this type of role - kudos; I think it’s great. One of the things that I did before this was to volunteer with Delta Analytics, which is an organization located in the San Francisco Bay Area, but there are ones that operate more nationally and globally. DataKind is one, and there are many others… And that really helps you start to see what the issues are out there, and what the ways are that you can help. It does help to have a data science/software engineering background, so that you understand the tech, you understand the AI lingo… And then, you know, get ready to network, because a lot of it is figuring out who has the issues, who has the solutions, and how do we get each other to work together? So it’s a lot of networking… But it’s interesting.

[34:46] So I definitely suggest going to some of the AI For Good workshops or symposiums that are starting up. A lot of them are occurring at the traditional ML, AI, different types of conferences. Here at AMLD there’s a couple different sessions, as well as on the big stage… So it’s becoming more of a topic that is spoken about, and they’re great. If you’re a grad student, check out DSSG, check out some of these other labs that are at universities.

One thing that I neglected to do at the beginning of the conversation - while I know listeners know that we’re at Applied Machine Learning Days in Switzerland, I neglected to say that you were one of the speakers, and I was too, and we were on the AI For Good track. Our good friend Daniel Whitenack, my co-host - he ironically was not able to be here at the last minute, due to a family situation - actually organized the track, and a lot of us on the AI For Good track kind of banded together… So I wanted to say thank you very much for everything you’re doing, and for being here and taking the time to not only do the work, but to share it with us.

For listeners who might wanna reach out, get in touch with you, how is best to do that?

I’m definitely on Twitter, and I check that a lot. I’m @data_beth on Twitter, and there’ll be a link to the website, which is just intel.ai/ai4socialgood (the four is a number, because I am a nerd, and I love that).

We will definitely include that in the show notes.

So those are great ways to reach out and get more information about what I’m doing. I would definitely not suggest emailing me, because my inbox is a little back-logged at the moment, so we’ll go the Twitter route for now.

Sounds good. Anna, thank you so much for coming on the show, sharing all this with us. I’m quite sure there’s some people out there that are inspired to do the same, and thanks for giving some advice. Thanks so much, I’ll see you at the next AI For Good conference, somewhere in the world.

For sure. Now I look forward to it.
