Longtime listeners know that we’re always advocating for ‘AI for good’, but this week we have taken it to a whole new level. We had the privilege of chatting with James Hodson, Director of the AI for Good Foundation, about ways they have used artificial intelligence to positively impact the world - from food production to climate change. James inspired us to find our own ways to use AI for good, and we challenge our listeners to get out there and do some good!
O'Reilly Open Source Software Conference – OSCON has been ground zero for the open source community for 20 years. This year they’ve expanded to become a “software development conference” — because in 2019, software development IS open source. The program covers everything from open source and AI to infrastructure, blockchain, edge computing, architecture, and emerging languages. Use the code
CHANGELOG20 to get 20% off Bronze, Silver, and Gold passes.
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.
Click here to listen along while you enjoy the transcript. 🎧
Welcome to another episode of the Practical AI Podcast. We are the podcast that tries to make artificial intelligence practical, productive and accessible to everyone. I am Chris Benson, I am chief AI strategist at Lockheed Martin RMS AI Innovations, and with me today is Daniel Whitenack, my co-host, who is a data scientist with SIL International. How’s it going, Daniel?
It’s going well. A little bit jet-lagged at the moment, but happy to be talking.
I know you’ve been traveling… Where are you at this point?
I’m in the Netherlands, I’m meeting with a few different teams that I collaborate with.
Great, sounds good. Well, I am very excited about this episode… Anyone who has been listening to us for a while knows that you and I are very passionate about using AI for good; we’re always talking about AI for good.
Yeah, it comes up in many episodes… So today we’re gonna end up really dedicating an episode to that. Before we dive in, I know that I have some stuff that I do in that space, and so do you. For me, I work on humanitarian assistance and disaster relief, applying AI to those areas… And for my own personal project - everyone that listens to me knows that I love animals, I’m always talking about that - I’m trying to use convolutional neural networks to detect dog fighting rings and puppy mills.
I know that you do some stuff in terms of AI for minority language community stuff… Did I get that right, Daniel?
Yeah, so I actually work for a non-profit. SIL is a non-profit. I’m working on AI for minority language communities; things like Google Translate are only available in 50 or so languages, but the world has about 7,111 languages at the last count, and there’s a lot of places that need humanitarian assistance. Most of the time, those places that have that need also have a lot of language diversity… So I’m working on some of those problems.
Great. Well, you know, not long ago – I have a friend named Paul [unintelligible 00:03:58.11] who used to work at Thomson Reuters, and he actually interviewed me for an article that he wrote at Thomson Reuters… And we’ve kept up with each other ever since then; he was talking about the fact that he had just come to the AI for Good Foundation… When we were talking, I asked him if I could interview James Hodson, who is the CEO of the AI for Good Foundation, and we have the good fortune of James joining us today. Welcome, James!
Thank you very much. I’m very happy to be here.
[04:29] We’re excited about this, because we’re actually able to have a conversation about the work that you do, and really have an entire episode just about AI for Good, so this is gonna be a good one… But I was wondering, if you’d just kind of start us off, telling us a little bit about your background, how did you get interested in AI, and what’s the story that led to this organization at a personal level.
That’s a great place to start. Now, I think obviously one episode for AI for Good is probably not sufficient to cover everything, but I guess we’ll see how far we can get.
The organization itself started in 2015, so we’re not a particularly old organization… But it started with a lot of the machine learning and AI research behind it. It started specifically out of a set of workshops at Stanford University in 2014, where we were trying to think what the big challenges would be over the next 10, 15 years, that as AI researchers we should be dedicating our time towards.
This set of workshops was attended by many of the big names in artificial intelligence that you would recognize, and one of the mandates that really came out of everybody there is that we need to get more of the research community and more of the practitioner community thinking about how they can use their skills, and the methodologies that are now becoming so widespread in other business areas, for social challenges. We don’t exactly lack social challenges, at the moment, where we could be applying these technologies.
Now, from my personal perspective, I’ve been working in artificial intelligence for about 15 years. I actually started, similarly, in machine translation. I was working on low-resource languages and on machine translation for the European Parliament. This was at the German National Research Center for Artificial Intelligence, back in 2008-2009. I also spent some time in industry proper. I was managing the AI research lab at Bloomberg for some time, in New York, which also allowed me to explore some aspects of attempting to use the technology for social impact. Obviously, as you can imagine, in an industry setting that’s not always the primary goal, but as you know, the Bloomberg Foundation has many projects in the oceans, climate and other areas that Michael Bloomberg in particular feels very strongly about… So there was certainly some precursor to the organization in ideas that started with the Bloomberg Foundation, and with various projects that we did in collaboration with Academia back then.
The turning point in 2015 was really this set of workshops, and the realization that the types of technology that we’re developing today can have an enormous impact on these social challenges. But the question that remained was which social challenges should we really be attacking first, which ones are the most important, where can AI have an impact… And the fortuitous answer that we came to was that the United Nations had already done this work for us. The United Nations built the Sustainable Development Goals, which is a set of 17 goals, 16 that are thematic and one that involves building infrastructure that is strategic across the entire set… And they cover problems like removing poverty, and ensuring that everybody has access to clean water, and ensuring that everybody has enough food to eat, and ensuring that we don’t damage the environment on our planet to the point where it’s unlivable… All things that if we don’t think about them long and hard very quickly and take big steps, are going to make certainly some people’s lives much worse than they could be, and ultimately make our entire planet harder to live on… Whether that’s through geopolitical actions, or through the actions of individuals on the environmental health of the planet.
[08:29] So that’s where we began, and what we tried to do as an organization is to be a community builder, first and foremost. So we pool together the research community, we bring volunteers and AI practitioners on board who want to help us, and we host workshops, host conferences, but we also do build infrastructure. We are actually trying to get involved in the field, in the projects, and understand how artificial intelligence can be pushed into those challenge areas, as well as giving researchers and other individuals who are interested the incentives and the mechanisms by which they can contribute.
One thing to note that I think is quite important about us is we’re a public charity, which is an important distinction from many of the other players in this space. And we’re membership-driven, which means that we rely mostly, for our funding, on individuals who want to become members of our organization, and who pay a yearly membership fee for that participation.
We don’t exclude anybody, of course, but we do try to build a strong membership community of supporters and donors, who will support us year after year after year. In light of that, for this particular conversation, I was able to secure with our operational team that any listeners who are interested in becoming members of our organization can do so at a 50% discount from our normal membership rate.
And how will they go about doing that?
On our website, if they sign up for membership, they just put in the coupon code “practicalai”. That will allow them to sign up for half the normal price.
Awesome. We’ll definitely post a link to the website in our show notes. I would really encourage our listeners to look into that. We really appreciate that opportunity.
You talked a little bit about the origins of the AI for Good Foundation, and the workshops that were run at Stanford… How did you go about – I mean, it’s one thing to recognize the problems and the goals listed by the U.N. and also hold a workshop and understand that we can and should address these… But there’s obviously certain things preventing AI practitioners or researchers from really going after these things wholeheartedly, or else more would be going after these things wholeheartedly… So how did you decide what is preventing people from addressing these challenges, and how to incentivize busy researchers, busy practitioners to put their time into these things?
Right. That’s the perfect question, really; that’s the question that we started with. The incentive mechanisms for researchers are really skewed towards publication. Publication is, especially at top universities, the only metric that really matters for tenure, and tenure is the only thing that really matters to junior researchers if they want to have a job in the future. So the easiest thing if you want publications is to find a good source of funding and data, and to publish your work using that funding and that data.
[11:43] The problem with sustainable development goals like those of the United Nations is that they tend to be in areas that neither have funding, nor have data. As a result, very few people have the time to spend in the five or six years they might have before they come up for review at their universities, to actually explore ways of getting money, potentially from foundations and grant-making institutions, and find ways of unlocking data from companies or government agencies and so on that might be holding data, or potentially even go out and crawl or scrape or build sensor networks in order to get specialized, new types of data.
So that’s one side of this issue, and that’s where we decided we could have the biggest impact. It was essentially to build the capacity for the researchers and also practitioners within companies who have time to dedicate to this separately from their main job, or maybe there are ways that they can make it part of their main job as well, by providing the access to data resources, providing access to infrastructure, and building bridges between the organizations that need this work to be done in the field, and the community that has the appetite and ability to do it.
If you ask researchers at Stanford, at Carnegie Mellon, at Columbia, Princeton, anywhere, “Do you want your work to be used for social good?”, I have never received the answer “No.” I’ve always received the answer, “Yes, but…” And that but is usually that it takes too long to figure out how to do that effectively, in a way that mixes with their normal career.
So would it be fair to say you’re essentially providing them with an alternate incentive path that they can follow, so that they can achieve the output that they’re producing specifically toward a good purpose that they have in mind, bettering the world? Is that a fair way of looking at it?
Exactly. Now, we are partners with the United Nations on defining how technology gets used for the sustainable development goals, and that means that we have connections into the various U.N. agencies like UNESCO, that deal with these challenges directly, as well as a whole set of non-profits that operate in this area, government agencies around the world… And what we can do very quickly is, as you mentioned, plug the researchers into a community that already wants their input, and already has data that they can use, and is very willing to invest additionally in order to make things happen… Because you can have a huge impact with very limited new types of models, on data that previously has been unexploited, because there are so few people working on this aspect of humanitarian intervention.
So if I’m a researcher, maybe I’m an associate professor, or whatever it is, or I’m in an R&D lab in industry, and I’m interested in exploring this route, could you describe what it’s like to engage with the AI for Good Foundation? Is that kind of like becoming a member and then starting those conversations around what is my expertise, and then how does that match up with the problems, and then you kind of match me up with these organizations and other things? How does that process typically go? Or maybe it starts at a workshop, or a conference, or something.
The answer is, of course, a combination… But we primarily work with research labs in Academia. We build strategic partnerships with labs at certain universities where we have a presence, and those universities are starting to number in the several dozens at this point. So if there are people at universities, then we’re very happy to get them involved in those communities, and actually go out there and organize workshops on the university campuses, get people involved, understanding what we do, what the opportunities are, and build that way.
[16:02] We also have what we call our global volunteer force. Now, this is a database, if you will, of people across industry, Academia, so it includes anybody from masters and undergraduate students, to post-doctoral fellows, researchers in Academia, researchers in industry, practitioners in industry, programmers who maybe don’t usually work on artificial intelligence, but are interested in the area… And we build strategic task forces out of this volunteer set for particular projects.
When we identify, say, with UNESCO, that there is a need for looking into tracking student behavior in certain types of classes in India, then we will go and identify 5-6 individuals from the global volunteer force to get involved in that project, with the policymakers, with the datasets that are available, and with a specific goal in mind. Those task forces will always be overseen by what we call a faculty mentor, who is somebody from the research side with experience in that particular vertical, that particular domain… But the teams will be cross-disciplinary, and they will be drawn from wherever there is interest. That allows us to flexibly build the capacity.
I think one of the things with my experience involving volunteers in non-profit, tech-related stuff is a lot of times there’s this initial excitement, on these really exciting and meaningful projects, and maybe an initial great effort at a hackathon or something like that, and then basically always the project dies out because there’s no structure around it… So in terms of what you’re talking about, it sounds like – I don’t know, is that something you’ve seen? And maybe having the AI for Good Foundation as a backbone and putting these mentors in place helps with that, but I was wondering if that’s an issue you see, if that’s something you’re fighting.
Yeah, the mentorship structure was built specifically in order to mitigate the concerns that you raised. Some of the initial projects that we did suffered a lot, and we had some disappointed non-profits and government agencies because it seemed like people were very interested in the beginning, everybody would attend kick-off calls, everybody would even come maybe first on-site, but then other priorities would come up. So we’re very careful now in two senses - we ask a lot of questions before we qualify people to go on the global volunteer force… And that includes the number of hours they’re willing to put in, the timeframe over which they’re willing to do it, the specific skills that they think they can contribute, and we vet those people to make sure that when we build teams, they will be teams that have the capacity to actually build something reasonable.
The faculty mentor obviously is not a manager, is not somebody who’s going to manage the psychological well-being of the people on the team, but it does help a lot in terms of setting a pace. Also, people really enjoy being able to work with top researchers from Academia in order to get a taste of their work, and also be able to cross-pollinate the types of things happening on the academic side with the types of things happening in industry, which we all know are two completely different worlds otherwise, which hardly interact.
James, I know when we started the conversation you made a reference to the United Nations Sustainable Development Goals, and I was looking across some of the program of activities that you guys offer on your website. Just to enumerate some of them for our listeners - there were workshops and conferences, education outreach, standards and guidelines, tools and platforms, research program funding and support, and local chapters… You’ve talked a little bit about how these volunteers can start engaging, become members and start trying to do it – could you talk about it in the context of some of the programs that you guys offer, and maybe give some examples, a little bit of a case study about what you’ve done?
[20:23] Absolutely, I’d be very happy to. There are two case studies that I think would be interesting to talk briefly about… The first big program that we ran with a network of universities, companies, non-profits and the government was around food security. We call this the Food Security AI Challenge, and what we did in the first instance was go to many different companies operating in this sector, whether on the actual agricultural output side - farmers, farming conglomerates, seed producers and so on; the logistics side - people who actually go out to the farms, purchase the goods, move them from one warehouse to another, and eventually move them into refining and other plants that they need to go through in order to make it to market; the markets themselves; and then finally the food waste side, the consumption side of that equation.
We gathered datasets, and we tried to bring people on board with a view to contributing the information that they had about their part of that puzzle. We then made those datasets available - climate data, phenotypic and genotypic data about seed varieties, growing data, supply chain data, where food was being consumed, when, and so on - to a community of people who signed up. Those people came from industry, so we had entrants from all over the world, but especially from the U.S., Canada, Europe, China, Australia and South America… And what we were looking for was for people to apply interesting metrics to this data, to help us first understand the whole landscape.
We then brought people together for a series of workshops. We held workshops at the Santa Fe Institute, and we held workshops also at several AI conferences. In particular, we have a very close relationship with the ACM Conference on Knowledge Discovery and Data Mining, which is one of the largest machine learning conferences in the world; it’s about 5,000 people and it takes place in August of each year. We partnered there in order to build continuous topical workshops and theme days around the SDGs, and how researchers and practitioners can get involved.
We glued all of these pieces together, and one of the outputs that we got from the models that we’d built was actually the ability to improve the seed yield of particular varieties of seed that are purchased, especially across the U.S. Midwest regions, by an additional 50% per year in terms of the rate of yield improvement.
So yield improvement is around 1% a year, on average, based on the enormous amounts of resources and research that seed production companies put into growing seeds, testing them, splicing them, regrowing them, and keeping track of test fields. Everything is done with traditional methods - since GMO has been criticized for many years, seed manufacturers have gone back to more traditional types of splicing… And they get roughly a 1% improvement per year. Now, just through the data science aspect of this, just through looking at it through machine learning eyes, if you will, they were able to push that up to 1.5%.
[23:55] Now, to give you an idea of the effect that could have if implemented across the board: if we don’t come up with a way of doubling our productive capacity, then by 2050 we basically run out of food - and that’s based on fairly conservative population projections, and on the fact that the African population in particular is going to be exploding over the next 20 years. Now, that doesn’t even account for climate change scenarios and changes in agricultural land use… So we need to make a change here, and this is one way that we can contribute towards it.
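To make the compounding behind those numbers concrete, here is a quick back-of-the-envelope sketch. The 1% and 1.5% annual rates come from the conversation; the 30-year horizon is our rough stand-in for "by 2050", and everything else is illustrative:

```python
# Back-of-the-envelope compounding of annual yield improvements.
# Rates (1% traditional vs. 1.5% ML-assisted) come from the discussion;
# the 30-year horizon is an approximation of "by 2050".
def relative_yield(years, annual_gain):
    """Yield relative to today after compounding a fixed annual improvement."""
    return (1 + annual_gain) ** years

baseline = relative_yield(30, 0.010)   # ~1.35x today's yield
improved = relative_yield(30, 0.015)   # ~1.56x today's yield
print(f"1.0%/yr: {baseline:.2f}x   1.5%/yr: {improved:.2f}x")
```

Even a half-point bump in the annual rate compounds into a meaningfully larger gap over decades, which is why the 1% to 1.5% result matters - though closing the full gap to a doubling of capacity would still require further gains.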
Yeah, obviously that’s super-exciting, and I’m so happy to hear that this process happened, and the outcome… I was wondering about your perspective on – you kind of mentioned at some point if implemented, what effect this would have… So once you have this outcome from one of these efforts, what is the process to get that information and those techniques back into the hands of people that can do the implementation? Is that through the organizations that you have connections to, through the U.N.? How would that actually get back into the hands of the seed producers or the researchers in industry that could actually work towards those implementations?
Right. It’s a very good question, again. We try to involve the full lifecycle of stakeholders throughout the process. That means bringing the government representatives and the NGO representatives, and even farming representatives into the room for our workshops. It also means going and having specific meetings in strategically located areas, where this can have the biggest impact. Now, the U.S. Midwest is a huge growing region of global significance, as are large parts of Brazil, as are large parts of Eastern and South-Eastern Europe… So we actually go out and talk to people in those areas and help them to understand how the technology might be integrated with their current practices.
This is hard, because often the biggest barrier is not that the technology is not available, but it’s the fact that there is no mechanism by which to get people to shift the way that they’re currently doing things, to use the technology.
It sometimes involves a cultural shift, as well.
It sure does.
Exactly. That’s the hardest part, and we’re still learning how to do that effectively. I think everybody’s still learning how to do this really effectively. There are reasons why despite billions of dollars in aid over the last 30, 40, 50 years to certain countries, we still haven’t been able to shift the quality of life of individuals in those countries, and it’s not because there wasn’t enough money, and it’s not because there weren’t enough people wanting to do it, but it’s because the reality of this area is that there are certain societal frictions and cultural frictions, as you mentioned, that make implementation hard. We’re ultimately a market-based economy, and it’s about supply and it’s about demand. You can’t always shape everything just by having the technology available.
I’ve been looking across your projects page too, and I saw that you covered food security… And that’s a very inspirational use case, in terms of being able to do that with food. Also, just to share with the audience, you have projects in ocean life protection, education, urban development, traffic safety, media bias, carbon sequestration, health, sleep and nutrition, and also transparency in government and corruption… Do you have any other use cases that you can share with us along the way?
Yeah, a big area where we’re really trying to have an impact now is climate change… But this is an area where you can’t just dive in in the same way as many of the others. There are many climate scientists and environmental scientists working on the question of climate change; it’s a huge area of research right now. The IPCC is the main international body that publishes research findings relating to climate change, and predictions about what is likely to happen in the future if we do or don’t change our behavior… And machine learning researchers have not had much of an impact in this area, let’s put it that way. If you look at the latest IPCC report, there are almost no citations to machine learning research, or AI-related research.
What are some of the inhibiting factors that are making that the reality currently?
The main factor is that you have a very strong research community that is not an AI research community, so there has been no perceived reason for them to reach out and get involved with this. Now, some of those papers may include some machine learning methodology, but actually very few of them do. The reason is they have their own science-based modeling techniques, which they have been developing fairly independently for decades… And as a result, there just isn’t much cross-pollination between these research areas.
If you go to industry, there also isn’t very much cross-pollination between the for-profit motivated companies that may benefit from one or the other area. There are hardly any machine learning startups in the solar energy space, for instance, or in any other energy space.
When you’ve been making efforts in that area and you have identified this as a major barrier, how would you go about getting those communities to talk? Is that part of the workshop and conference projects that you have going on, or how have you been making strides in that area?
We’ve got two prongs on this particular area right now. The first is that we are organizing what we’re calling The Earth Day Summit in Anchorage, Alaska, in August. This will bring together machine learning researchers, machine learning practitioners, scientists who work with the IPCC, and scientists from the NSF and various other large international or national grant-making organizations that work in this area. That’s the first time that we’re going to see an organized and large-scale set of conversations exactly on the topic of how machine learning can help with the various climate change-related challenges that we face.
[32:09] Now, many people don’t realize it, but most datasets used by the IPCC are tiny. They’re on the order of tens of samples, because you can’t take more than tens of samples of [unintelligible 00:32:21.21] and you can’t measure gas concentrations in more than 10 or 20 different locations globally without it becoming cost-prohibitive. So many of the problems aren’t big data problems - but if we’re talking about practical AI, there’s no reason why machine learning has to be a big data problem. That’s a new myth that has been generated. We have methods for dealing with small data too; some problems converge faster than others, and some problems require less data in order to achieve the same performance, depending upon how you go about finding solutions.
So we’re all about starting those kinds of conversations, and not hiding behind the stereotype of machine learning as being large convolutional neural nets with millions of samples.
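One classic example of the kind of small-data method James alludes to is Gaussian process regression, which produces a prediction plus an uncertainty estimate from only a couple dozen observations. This sketch is purely illustrative - the dataset is synthetic and the implementation is minimal, not anything from the Foundation's projects:

```python
# A sketch of "small data" machine learning: Gaussian process regression,
# implemented from scratch with NumPy, gives a prediction *and* an
# uncertainty estimate from only 20 samples. The data is synthetic;
# the technique, not the dataset, is the point.
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 10, 20))        # just 20 observations
y = np.sin(X) + rng.normal(0, 0.1, 20)     # noisy measurements

noise = 0.1 ** 2
K = rbf_kernel(X, X) + noise * np.eye(len(X))

x_new = np.array([5.0])
k_star = rbf_kernel(x_new, X)              # covariance with training points
alpha = np.linalg.solve(K, y)

mean = k_star @ alpha                      # posterior mean at x_new
var = rbf_kernel(x_new, x_new) - k_star @ np.linalg.solve(K, k_star.T)
std = np.sqrt(np.clip(np.diag(var), 0, None))  # posterior uncertainty
```

The uncertainty estimate is what makes this style of model useful when samples number in the tens: it tells you where the model is extrapolating rather than interpolating.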
I have a little bit of a side question that occurred to me as you were talking, in particular about your Earth Day Summit in Alaska in August… Think of me as a podcaster - or the hammer trying to find my nail. How can those of us who are in some form or fashion part of the media - podcasters, bloggers and the like - help? Considering that your challenges are often cultural - changing attitudes, saying “Hey, we have some great tools that can be applied to the great problems of our time” - how can we help at large, in terms of getting the word out and starting to change minds and how people are perceiving these situations?
We are all about getting the media and people who have an audience to share what we do, and also to come and experience what we do directly. We do have, for example, media passes to all of these events, where we can get people into the room and try to record as much of it as possible for dissemination. Many of our workshops and conferences are freely available to view, either through our website or on videolectures.net, which is the largest platform for graduate-level and above content in these sciences, including computer science and machine learning.
Oh, great. We’ll have a link to that in the show notes.
So we definitely want to get the word out, we want you guys to come and be part of the conversation as much as possible, so that you can offer that gateway to your listeners. We also want your listeners to come to the conferences and workshops and be part of that directly. All of our events are open. Even our board of directors meetings are open. We have minutes of what we talk about on every aspect of our organization, and as a result, we hope that that helps create a culture of wanting to get hands dirty, wanting to get involved, and ultimately having a bigger impact down the road.
I have one follow-up, back over to the data side that you were mentioning, in terms of having small datasets. I mentioned at the top of the show that at work I’m working on humanitarian assistance and disaster relief, and there’s certainly a lack of data in certain areas, that being one of them… But I imagine there are many different areas where AI for good can be applied. How much of your focus is on generating datasets, versus having the luxury of going right in and trying to model a situation into improvement? Do you have a large focus on dataset generation, by chance?
[35:56] Yes, we do have to get involved in this area. As anybody who works in artificial intelligence knows, having data is often a red herring, because if you look at medical data, for instance, it’s collected in a particular way, for a particular purpose, and often, when you take somebody else’s data that’s been collected for a different purpose, you’re missing key information about the assumptions that were made during the collection process, and about the methods of collection and storage. How accurate were the sensors? Did you decide to fudge together two variables because you couldn’t really be bothered to measure where one ends and the other begins? As a result, it’s often the case that we find that the datasets that look like they might be useful in the beginning are just not, because the margin of error on the key variables of interest is too high for our particular use case.
Unfortunately, especially in the research world, but in many places, people ignore the aspect of understanding the data appropriately before jumping in, and this leads to results that look good on paper, but don’t really convert into something that’s usable on the ground. We have to be very careful about this, because we only have one chance with certain stakeholders, and people will never trust us again if we promise that we give them an improvement and it doesn’t pan out because we weren’t careful about what type of data we were using to infer a particular decision for them.
I love what you said in this whole discussion about small data and certain techniques that maybe the AI community as a whole isn’t so focused on. I think we’re oftentimes blinded by building a bigger language model, with more text data, and all the data we can get, but at the same time, that kind of steers us away from a lot of research areas that are really valuable… And I’m just curious, in these sorts of challenges that you’re providing and the data that people are working on, are they finding new, interesting techniques that others maybe have not run across or have not explored, because the problem doesn’t involve a lot of data, or because researchers aren’t focused on these issues? It just seems like, in addition to solving really important problems, we could stumble on really important technical discoveries as well, because we’re exploring a larger variety of problems.
Yes, that’s precisely what happens. I’m actually really glad that you brought this up, because I feel like over the last ten years or so, as artificial intelligence has gained a new meaning, and as more and more people have associated themselves with the area in one way or another - whether it’s to raise money for their startup or to look cool on TV shows, or whatever the underlying reason might be - we’ve kind of lost track of the fact that there are some problems you can consider solved. Once you’ve achieved a certain threshold of ability to recognize a cat in an image, the problem of cat identification is fairly well solved. You can improve it by half a percent, maybe even 5%, but improving it by 5% doesn’t open up any new use cases that previously weren’t accessible. Once you’ve had a breakthrough, further work doesn’t make it possible to do things you couldn’t do before; it just maybe gives you a slight improvement in the ability to do them.
[39:43] What we’re focused on as an organization is solutions to problems that currently don’t have any viable solution. And that’s an important thing to think about from an AI research perspective. Would you rather be spending your time, as you said, in a machine translation context, improving your BLEU score by 0.1 on French-to-English, or would you rather have a breakthrough on an under-resourced language that, by the way, has 350 million people using it in under-privileged areas around the world, where now all of a sudden you’ve given them access to the internet and all of the knowledge on it? Which of those problems is more impactful for you to be working on? One is already solved - you can get an easy publication out of it; there are ten journals that will accept it - and the other one will be a harder sell, but it’s ultimately gonna have a bigger impact, and that problem is actually going to be worth something in the real world. That’s what we’re trying to do - we’re trying to get people to work on the latter, not the former.
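[Editor’s note: for readers unfamiliar with the metric James mentions, BLEU scores a machine translation against a human reference by combining modified n-gram precisions with a brevity penalty. Here is a minimal, single-reference sketch in Python - real toolkits such as sacrebleu add multi-reference clipping, smoothing, and standardized tokenization on top of this idea.]

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """Count all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU for a single reference, with uniform weights.

    Combines modified n-gram precisions (n = 1..max_n) via a geometric
    mean, then applies a brevity penalty for hypotheses shorter than
    the reference.
    """
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hypothesis, n)
        ref_counts = ngrams(reference, n)
        # Clipped overlap: each hypothesis n-gram counts at most as
        # often as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # any zero precision zeroes the geometric mean
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: only punish hypotheses shorter than the reference.
    if len(hypothesis) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / len(hypothesis))
    return bp * geo_mean


# A perfect match scores 1.0; a truncated hypothesis scores lower.
ref = "the cat sat on the mat".split()
print(bleu(ref, ref))  # → 1.0
```

A 0.1-point BLEU gain of the kind described above is a tiny shift in these precision averages - which is exactly the contrast being drawn with opening up translation for a language that has no working system at all.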
And you segued right into where I was about to go next, which has to do with impact. I wanted to wind up by asking a two-pronged question - you made the generous membership offer earlier, and we definitely encourage our listeners to go check that out… If someone has a passion for a particular area within the larger AI for good space and they wanna join, is there a way they can bring a project into the organization or sponsor one? How do you make those choices? And the other side I’ll go ahead and pose - if they’re not part of the foundation itself, but they’re just kind of out there on their own, do you have any guidance on how they might drive their own passion for AI for good forward?
Yes. We don’t really make a distinction between people who are members of our organization working on AI for good, and people who are out there by themselves, trying to do something good with the techniques they know, the datasets they have available, and their passion. We’re as inclusive as we can possibly be. And as I said, whether people choose to become a member or not is irrelevant to the work that we do. We obviously need money, like any other organization, but if there are people out there who need support, where there’s a connection we could potentially help them make that will drive their project forward, that will make it a little bit more likely that it gets picked up and used for something beneficial - we want to hear about it. You can write to us through the website, or at email@example.com, or you can reach out to me directly, and we’re always going to be interested in having those conversations, regardless of whether it ends up being considered an AI for Good Foundation project or something that is being done entirely separately.
And that can be anywhere around the world. We’re especially interested in focusing on areas that currently don’t have as many AI practitioners and resources - places that are maybe not the first ones you’d think of for hosting a conference, like Kyiv in Ukraine, for instance, or Dhaka in Bangladesh, or even São Paulo in Brazil. There are many places around the world that are not New York or San Francisco or London, and those are the places where we can also have a big impact by bringing more focus and energy towards solving these challenges… So please do get in touch, no matter what you’re working on, if we can help - because you do need a network in order to get projects from the prototype phase to actually being deployed, and there’s no point duplicating the effort. That’s why we exist.
That’s more inspiring than I can express. On behalf of everyone listening to the show, I would like to thank you very much for the work that you and the foundation are doing in this space. I would also like to challenge our listeners in turn - if you are a practitioner in the AI and ML space, take your expertise, pick some sort of side project where you think you can make a difference, and use AI for good.
James, thank you so much for coming on and sharing what you’re doing, and giving us some guidance on how we can do it ourselves. I really appreciate it.
Thank you, guys. It’s a fantastic opportunity for us to be able to talk to your listeners. It was very enjoyable, thank you.
Our transcripts are open source on GitHub. Improvements are welcome. 💚