Daniel and Chris do a deep dive into The AI Index 2019 Annual Report, which provides unbiased, rigorously vetted data that one can use “to develop intuitions about the complex field of AI”. Analyzing everything from R&D and technical advancements to education, the economy, and societal considerations, Chris and Daniel lay out this comprehensive report’s key insights about artificial intelligence.
DigitalOcean – DigitalOcean’s developer cloud makes it simple to launch in the cloud and scale up as you grow. They have an intuitive control panel, predictable pricing, team accounts, worldwide availability with a 99.99% uptime SLA, and 24/7/365 world-class support to back that up. Get your $100 credit at do.co/changelog.
The Brave Browser – Browse the web up to 8x faster than Chrome and Safari, block ads and trackers by default, and reward your favorite creators with the built-in Basic Attention Token. Download Brave for free and give tipping a try right here on changelog.com.
Welcome to another Fully Connected episode of the Practical AI podcast, where Daniel and I keep you fully connected with everything that’s happening in the AI community. We’ll take some time to discuss the latest AI news and we’ll dig into learning resources to help you level up on your machine learning game. My name is Chris Benson, I’m the principal AI strategist at Lockheed Martin, and with me as always is Daniel Whitenack, a data scientist with SIL International. How’s it going today, Daniel?
It’s going great. It was a cold and rainy/snowy weekend, but on the good side, as some of our listeners will know, we had a season in our household of flu and sickness, and that’s kind of ending, so I’m very happy about that. No coronavirus yet.
Yeah, I was about to make a joke, welcome back to the land of the living, but you know… Yeah, with that in the news right as we’re recording this - that’s been a big thing the last few days.
Yeah. And I know you pointed a link to me earlier, which was pretty interesting for our listeners. Do you wanna tell people about that?
I was just scanning across news articles, and it was actually on Wired. A couple of days ago the title was “An AI epidemiologist sent the first warnings of the Wuhan virus” - the virus that started in China and is spreading to some parts around the globe as a type of coronavirus. The short version of the article is that there’s a company called BlueDot that has algorithms which take in a lot of data sources - health data, airline ticketing data and such - to predict the spread of disease… And in this case, they really got there first, actually - on December 31st, before New Year’s, they sent out their first note that this outbreak was expected…
And it was really another week before here in the U.S. the CDC, which is the U.S. Centers for Disease Control and Prevention, got the word out… I don’t know the detail of their algorithm, but they refer to it as an AI-driven algorithm, and they got those first reports out. We like to talk about AI for good, and that certainly seems like a good thing, to get an early warning of a major outbreak like this.
Yeah, definitely. It’s a super-interesting thing, and in some ways it seems very much science-fictiony to me… Like movie-like, “Oh, we’re detecting all of these signals around the world, and correlating them to say there’s gonna be this pandemic, or whatever…” Their article says that they scour foreign-language news reports, animal and plant disease networks - I’m not quite sure what that network is - official proclamations and other things like that.
[00:04:01.10] So it’s definitely pretty interesting, and they’re really doing this; it’s pretty cool. If anyone knows anyone at BlueDot out there, we’d love to have them on the podcast to discuss that. Maybe we can make that happen sometime soon.
That sounds fantastic. Yeah, timely news about AI having an effect on the world, and possibly able to save lives - especially as they gain recognition and others start really watching them, it might make a big difference. With the way AI is really revolutionizing medicine at large, I think over the next few years this will be one of many such cases.
Yeah. Speaking about how AI is revolutionizing things, you also found an interesting thing that we’re gonna be talking about today on our Fully Connected episode… The goal of these episodes, again, just being to keep up with the AI news and keep ourselves - but also our listeners - updated, and dive into the topics that people are currently looking at.
Stanford’s Human-Centered AI Institute, in partnership with a variety of others, came out with an AI Index Report 2019, which provided a lot of – I mean, we did a sort of top-five things of 2019 episode, but that wasn’t really based on rigorous research and data collecting and that sort of thing. This was based more on actual data - who published articles, what’s going on in the AI world as we move into 2020… And there were some interesting things in there that would be great for us to dive into, so that’s what we’re going to do today.
Yeah, it’s quite a lengthy document. I haven’t counted, but I think it’s somewhere around 150-ish…
I think 291 pages…
I was far short, okay…
It was quite long; I didn’t look at the page count… But they broke things out into a number of sections, and it was a really interesting overall view of the world of artificial intelligence in general. It gave a lot of statistics and facts… The breakdowns were pretty interesting, actually…
Yeah. I’m just looking at the steering committee here from Stanford, also McKinsey Global Institute, Partnership on AI, Harvard, OpenAI, MIT, SRI International… So it wasn’t just Stanford that put this together. They also mentioned partners like Google, PwC and others… So yeah, it will be interesting to see what they think were the noteworthy things that are happening now in AI, some of which are maybe different than the things that we talked about in that first episode of the new year.
Let’s go ahead and dive in. It looks like the first section of what they talk about is research and development… Which I know both of us aren’t professors or anything like that, or involved in academic research in that context, but we both interact with research people… What, if anything, surprised you about what they talked about, Chris?
Well, they started off really focusing on how much growth there has been in the space… And I don’t think that was surprising in itself, but we’ve really seen the rise of China in terms of just raw numbers of publications coming out… And as part of that, they finally passed Europe; they had already passed the U.S. But despite those raw numbers, the field-weighted citation impact of U.S. publications is still about 50% higher than China’s… And that really rang true.
I was at an internal artificial intelligence meeting last week with my employer, and we were actually discussing that specifically - just the fact that you’re seeing more publications. The discussion was in terms of the quality… And obviously, we were speculating on why that is and when that might change going forward.
Yeah, and I think actually on our next episode we’re gonna be talking to someone from the Semantic Scholar team at Allen AI. Of course, that project is really concerned with the discoverability of research, and kind of weeding through the noise, and all of that sort of thing… So that’ll be interesting, to talk to them about how this giant surge in research and archive papers and all of those things has increased, and in some ways made it hard to find the real notable things that are happening, in some cases.
[00:08:23.21] I thought another thing that was interesting was that they noted several small countries that were having a relatively high increase in deep learning papers on a per-capita basis. These are countries like Singapore, Switzerland, Australia, Israel, the Netherlands, Luxembourg… They’re talking about these countries who, despite their small size, are really doing or encouraging AI research, and believing in that as a future driver of economic prosperity, and innovation, and all of those things.
Yeah, it’s interesting… And we’ve talked about this before - I know we’ve talked specifically about Singapore… It was very obvious to me about a year ago when I was in Switzerland for a conference - these countries have really committed to some degree maybe their national identity a little bit to saying “Hey, this is something that we’re gonna do.” Technology in general, and specifically AI in a lot of these cases.
Obviously, we’re both pro-AI, I think… I wonder just kind of off the top of my head if that’s kind of making a dent in other research areas in terms of taking away funding, or putting the focus less on other still very important areas, but now that everybody’s kind of all-in for AI, what effect that’s having on more traditional biology and medical research, and those sorts of things.
That’s an interesting idea. I’m not sure. I would say obviously there’s a finite amount of research and development dollars and time…
Or whatever your currency is…
Yeah… Available, and stuff. I suspect, as we look at different strategies over time, AI is one of the great enablers of our time… So I think they may be selecting certain specialties and then kind of going after AI as an enabler in this.
I know that’s at least partly why I am in industry versus in Academia… Because I originally wanted to go the professor route, and that sort of thing… But I was in physics in grad school, and – I mean, physics, as a discipline, I think is exciting. There’s a lot of people that think it’s exciting… But in terms of paradigm-shifting things that have happened in recent years, there haven’t been a lot on the order of the paradigm-shifting things in physics of, let’s say, the ’20s and the ’30s, and that sort of time period.
Physics has become a sort of – I think it’s plateaued in terms of its excitement, to some degree, and that’s made a lot of the jobs in physics research very competitive… Because there’s maybe not as many universities that are really filling their bench with physics people; maybe they’re now filling their bench with computer science, AI type of people. I think it definitely will be interesting to see how that plays out on those fronts.
Interesting. Another interesting note there was the fact that we’re seeing, especially in Western-European countries, but not limited to that - in countries such as the Netherlands, Denmark, Argentina, Canada, and even Iran - a relatively high presence of women that are involved in AI research.
Yeah, where’s the U.S. on that list?
Yeah, I would ask the same question.
What’s up with that…?
But it’s nice to see this field – I know that since the very beginning of this podcast that’s been a big goal of ours, is to see this field be a truly equal field in all respects… So that was a stat that caught my eye, that I was really thrilled to see.
[00:11:55.20] Yeah, I think we need to take note of some of these countries and see what they’re doing to promote that, and try to increase that more. I know there’s been another thing – the next section of the index report is about conferences, and one of the things maybe that has driven that change is they talk about the Women in Machine Learning workshop that happens throughout… I’m not sure how long it’s happened. They talk about 2014… But they say that it has 20 times more alumni than it had in 2015. So I think that this is one of the contributing factors; I think it’s a pretty big contributing factor, that there’s been intentional effort to have these sorts of workshops and local chapters and all of these things focused on women in machine learning and AI.
Yeah, I agree. It’s a good sign. It’s a sign that we’re moving in the right direction. And you know, just in general, we’re seeing the popularity of and the number of people attending AI-related conferences just exploding.
Yeah. NeurIPS was I think upwards of 10,000 attendees, I believe, at this point.
Yeah, I think it was almost 14,000. 13,000-something… Which is just insane to me.
Yeah. I mean, 13,000 people… I don’t know what we’re gonna have to start doing… Like, rent out football stadiums or something for AI conferences… What is the future? I don’t know…
One personal note on that is I would love to see – and there was some of this last year, where there’s a lot more effort to livestream things. I watched several conferences livestreamed, at least some of the content. I really appreciated that.
And also, from an environmental standpoint, if we have 13,000 people taking plane rides to go to a conference… I would love to see that many people involved, but having livestreaming resources, and better livestreaming and remote conference events would be something I would love to see.
I would, too. We’re seeing this explosion, and being able to participate as we see so many people wanting to get to conferences…
And getting visas rejected…
Getting visas rejected… In a lot of cases just not able to get in. I mean, NeurIPS is famous for its lottery, and so many people that would like to go, cannot go, and that’s despite the massive number of attendees it already has… So yeah, livestreaming would be a fantastic way of being a little bit more inclusive for those who either can’t travel, or are wanting to be responsible and avoid the environmental impact by getting on a plane.
Yeah. The next thing - which is kind of obvious that they would go into this - is that the technical performance of AI models has shifted in several ways. Generally, they talk about image classification as an example task, and they talk about how the time required to train these sorts of models has drastically decreased, and the cost to train them has drastically decreased… So there seems to be this more general availability of compute resources in the cloud that allows you to train these systems… So the availability of that, but also efforts to speed this up - maybe architecture-, framework-, or language-wise as well.
Yeah, I know one of the topics you and I like to talk about a lot is transformers, for instance… And I’ve noticed – just a few weeks ago I was talking to some folks, and you’ll see these large transformer models come out, and then these follow-ups that are huge performance enhancements… And they may reduce the size of the model, but you end up getting dramatically faster training based on these optimizations… And I think when you couple that with the fact that we’re seeing GPU, TPU and other hardware architectures really accelerating… You’ve got cloud options, you have options for maybe having a GPU right on the desktop that you’re working on, or whatever… And I think the combination of that has made a huge difference in accessibility for people to be able to actually do these things.
Chris, I’ve found one really interesting thing in the index report that is related to technical performance, and if people are following along while they’re listening, this is on page 68 of the report, so a good ways in there… But they go through and talk about the milestones that have been achieved in terms of AI reaching or beating human-level performance in certain tasks. There probably are things like this compiled in other places, but I thought the compilation and timeline here was really interesting. It starts with Othello, back in 1980, and it goes all the way to 2019, detecting diabetic retinopathy with specialist-level accuracy. This was really cool. I don’t know how many of these things were familiar to you, Chris…
A few of them are. They talk about AlphaGo… There’s a number of them - especially given that back when I was doing gaming when I was younger, a lot of the older ones ironically are more familiar to me than some of the newer ones… The ones that have been in the news a lot in terms of AI in the last few years, I’m familiar with, but they had some that I had not noticed before. The prostate cancer grading I had not seen.
Yeah, so pre-2011 there are three milestones that they list, which are Othello, Checkers and Chess. Then after 1997 we skip all the way to 2011, and you can kind of see this rapid advance… So between 2011 and 2019 - I’m not gonna be able to count these, but there’s probably at least 15 or something, or…
Yeah, something like that.
…15-20 after that, including things like Dota and video games, machine translation, all those things.
Something I noticed just as an aside comment, since it leaps from 1997 to 2011 - those were the years of the most recent AI winter. I almost said “nuclear winter.” It was not a nuclear winter… [laughs] But yeah, the most recent AI winter was right there, and you saw zero progress made, as everyone turned away from neural networks for those years.
[00:20:00.11] Also, going back to the conference thing, they have this graph in the report where they track AI conference attendance, and you can see – so they track back to 1985… This is one of those - an aside for a minute - plots that, being slightly colorblind, I have no chance of reading, because all the lines are colors that blend together for me… So just FYI, there are color palettes out there to help with that, but maybe you can help me know which one goes back to 1985.
There were a couple of conferences back in 1985 that had at or over 5,000 attendees… And then you can see 1990, 1995, 2000, 2005 - it actually decreases all the way to about 2010. Some of them start to increase again, and then from 2015 on it’s just skyrocketing attendance at these conferences. So it is just historically interesting to see - back then there were these very high-profile conferences with 5,000 attendees, no joke.
Yeah, that’s quite a conference. And for those of you who haven’t been to one that size - you get lost in them, in terms of trying to find your content, and everything… I actually prefer smaller conferences; they’re a lot more intimate, more fun, from my standpoint.
Yup. Before we leave the technical performance side of things, one of the trends I saw them point out which was interesting to me - and they drew it out in terms of NLP, but I think it’s true of computer vision in some ways as well - is that they had benchmarks for certain tasks in NLP or computer vision, like object recognition or machine translation, or entity recognition, reading comprehension, co-reference, and all these different benchmarks… And as they’ve reached human-level performance on these - “they” meaning the research community, or those in the research community - many have had to go back and say “How can we make this more challenging? Because we’re reaching human-level performance in so many of these tasks.”
For example, in the AI world there’s this benchmark called GLUE, which I’m gonna mess this up off the top of my head… I think it’s General Language Understanding – hold on…
Come up with that E there… [laughs]
It’s something like that. Let’s see… There’s GLUE and then there’s SuperGLUE. So it’s General Language Understanding Evaluation, there we go. They had that, and then that wasn’t challenging enough - human-level performance was reached… And this graph is in the index report as well, so you can see models surpassing human-level performance on GLUE, this sort of test that combines a bunch of NLP tasks to make it harder in itself… So then they developed this other one, SuperGLUE, which kind of ups it from there… And others, like AllenNLP or the Allen Institute for AI and others, are producing other benchmarks to further challenge things.
So as we’ve reached a lot of those milestones, now we’re in this season of “What’s next? How do we make this harder for computers?” A lot of those things are common sense understanding and reasoning that are really hard for computers to do. We still have a lot of room for growth there… And of course, in languages other than English, and multi-modal settings, where we’re combining video and imagery and text and all of those things… It will be interesting to see what benchmarks come about there.
I agree. Just as a side thought there - we’re having a couple of interesting conversations (meaning the community at large) right now. We’re seeing these things that we’re calling out here in terms of how far we’ve come, and having to adjust benchmarks. Then you see people in the artificial general intelligence community saying “Oh, we need to have completely new models”, and stuff… And I think people tend to get caught up in one or the other conversation. It’s interesting that in terms of the deep learning basis we’re on right now, we’re really still making pretty immense progress in terms of applicability and performance improvements… I think it’s pretty remarkable that we’re in an industry where you can have both of those conversations - whether we’re going fast, or whether we’re not making much progress at all in the larger scheme of things…
[00:24:35.26] But I really think it shows how vastly artificial intelligence has moved into culture and society and industry at large. That astounds me repeatedly, and I think this report on the applicability of these technologies is really amazing… Just as you just called out, the fact that NLP is making such fast progress.
Yeah, sometimes I think we overcomplicate the question of how much progress we’re making. One indication of that is economic investment in AI and application within industry - that’s another point that they call out in the report… And they throw out really huge numbers, like AI investment being over $70 billion, with AI-related startup investments over $37 billion.
Honestly – I mean, I don’t know about you, Chris… Those numbers don’t – it’s hard for me to grasp those numbers, because I’ve never seen a billion dollars… But it’s a lot of money, and there’s definitely – they also cite different percentage increases in jobs, and AI investment… And I’m guessing some of that may be hype, but there is actual proof that AI applications within industry are driving a lot of change, and people are responding with investment.
There was a particular stat - a U.S.-centric stat that I noticed - and that is, if you take the total number of jobs in the economy and look at the share of those that are AI-related jobs, at least in terms of titles or tangentially, it’s approaching nearly 1% AI jobs to total jobs.
And that’s the AI jobs meaning like humans doing AI…
…not AI doing jobs.
I’m glad you called that out, just for clarity on that. The number of jobs that we as humans are engaged in that are AI-related compared to the total economy - it’s approaching 1%, and that is remarkable, because we’re still at such an early stage in this industry… So you can see – I believe back around 2015 it was just a fraction of 1%; I think it might have been 0.3%… So we are growing so fast in terms of how AI is impacting the economy, represented by the number of jobs being created to do just that.
And job-wise, they did a bunch of analysis of LinkedIn… I found this interesting, because they have a lot of – so they have asterisks next to India, for example, because I think they said 40% of workers in India are on LinkedIn… So the numbers are likely not accurate in that sense. If anything, they’re understated, I would say… And yet, India was at the top of a lot of job statistics in terms of how many people are involved in AI, and also the fastest growth in AI hiring… Again, these other countries that have invested heavily in AI as part of their national strategy, like Singapore, Brazil, Australia and Canada, were right at the top of those AI hiring stats…
[00:27:46.14] And it’s interesting to – I mean, it makes sense with what I’ve heard; I’ve got a few people that I work with in Singapore, and from what I understand, AI people in Singapore are pretty much snatched up instantly. So if you’re trying to hire someone in AI in Singapore, there’s just so much hiring going on, and there’s not enough people to go around, which is one of the reasons why they establish some of these things like AI Singapore, which is trying to feed AI expertise in the industry… But there’s just so much hiring going on. The demand is so high that AI people could get hired pretty much right away. So if you’re interested in getting an AI job, consider Singapore. It’s a beautiful place.
They may be the most extreme case, but I don’t think it’s a problem just for Singapore. We’re seeing that really everywhere. All employers that are invested in AI - which is obviously a steadily increasing number - are contending with that same issue in terms of finding qualified people who can be productive quickly. So the university system - they’re snatching them straight out of universities that are oriented toward AI… It is just an explosive growth area.
You mentioned all the billions of dollars a few minutes ago in terms of global private AI investment, and along with that, I was really astounded to see the annual growth rate being around 50% year-over-year in terms of investment in startups, and it’s continuing to go… And that’s despite various economic concerns that people have about life in general. So it’s still quite staggering, the explosiveness of the field in general.
Alright, so you kind of started into the conversation about education, and me being in a university town, I’m also kind of monitoring this and thinking about it… But more people are going to school to learn specifically AI-related things than ever before… And one of the things they talk about is international Ph.D. students pursuing AI specialization in computer science, so that’s up… But also, there’s this interesting trend where Ph.D.s who are graduating in AI are not going the academic route in general, in terms of getting professorships and that sort of thing… There’s this drain of AI talent, where a lot of these people are going into industry to work at awesome, cool places, whether that be Google Brain, or OpenAI, or whatever it is… So much so that AI faculty are leaving Academia for industry, and that’s continuing to accelerate.
That’s exciting, in some ways, that there’s this infusion of expertise into industry, but it’s also concerning in some ways… Because at least from my perspective, it seems like the gap between Academia and industry is widening, in certain ways… So I would love to see industry and Academia be closer, and people coming out of university programs really ready, practically, to do jobs… But I’m not sure that that’s really happening, because the AI professors in Academia who are interested in industry stuff are just leaving to go to industry, and then what’s left is pure academic research - which is interesting, but maybe in some ways less connected to industry problems.
[00:32:29.14] Yeah, it’s really changed both Academia and industry, this trend… And it’s changed the relationships we have. One of the things that I’m involved with at my own employer, at Lockheed Martin, is engaging with a lot of universities on artificial intelligence. There’s a number of us that do this. And the nature of those collaborations is changing… Whereas once upon a time you might think of the brain trust at these universities, and that industry would access them for help, we’re seeing so many people that - as you pointed out - might otherwise have been on an academic career path going to industry because of the opportunities, and because of the compensation…
That has changed, and it’s interesting to see the partnerships that we’re having between industry and Academia where both sides are doing cutting edge research in different topics, and collaborating across… And you’re seeing a substantial amount of that in industry, where it used to be mostly in academic and then industry would kind of take that and apply it to what they’re doing… But this field is moving so fast, and the brain drain is happening from Academia into industry for those reasons. It’s kind of rebalanced that. I think both sides are trying to figure their way through that at this point. It’s also driven compensation rates through the roof for AI specialties, obviously… And that becomes another thing, where different companies are competing for the talent.
Yeah. It will be interesting. I know one of the other things we’ve talked about before on the podcast is a trend to more formalization around data science and AI programs within universities… Where before a lot of universities took the strategy of AI as a graduate discipline within computer science - which it definitely is, and should continue to be… There’s efforts to kind of embed data science and AI across all organizations, with some universities even taking steps to establish the center for data science, or center for AI, or whatever… And there’s kind of cross-discipline collaborations that happen within those centers. So that’s interesting… I think there’s success stories within that, and there’s not so great success stories with trying to apply that… Yeah, I don’t know what all of the solution is to this sort of balance between industry and Academia. Maybe it kind of just flows back and forth… But yeah, it’ll be interesting to follow, for sure.
Another thing that was emphasized in the economic section, but also called out as a whole separate chapter in the index report, was autonomous systems, specifically autonomous vehicles – so autonomous vehicles received the largest share of global investment over the last year, followed by things like drug and cancer therapy, facial recognition, video content, fraud detection, and other things… But autonomous vehicles were at the top.
I know this is something you’ve been involved with personally, and of course, Lockheed Martin is interested in… But it took me off-guard a little bit, because you hear a lot about self-driving cars and that sort of thing, but it doesn’t seem to me to have penetrated markets as much as something let’s say like facial recognition or computer vision, and yet it’s at the top of investment.
[00:35:59.05] I think you will, though. One of the things that was notable is that the state of California licensed testing for over 50 companies, with an enormous number of autonomous vehicles… And they noted that those had already driven over two million miles. When people hear “autonomous vehicles”, they’re often thinking about cars on the road, but what we’re really seeing here is a transformation - and they call it out in this report. I think we’re right on the cusp, now that California has done that, and you’re gonna see other states and other countries engaging in the same thing, as people recognize that these vehicles can be safely integrated into society, whether on the road, or in the air, or on the water, wherever we happen to be. And we’re gonna see that more and more.
I think this action by California demonstrates that we’re right at that tipping point as we’re recording this. We’re gonna see it everywhere. In the company that I work at, autonomy is a big part of it, as it is in many different industries… So I think you’re gonna see autonomy becoming fairly common over the next few years, whether it be on our nation’s roads, or those of other nations, or from the bottom of the ocean, to the surface, all the way to outer space. I think that it’s gonna become…
In the Space Force.
That’s exactly right. That’s another thing worth calling out - in the last few months, the U.S. Space Force has been created out of what had been the Space Command in the United States Air Force. So we’re now at a point where it made sense to separate those out as their own concerns… So I think you’re gonna see autonomy in every facet of transportation, sooner than most people might expect.
I’m full of asides today, but just as an aside - did you see how close the Space Force logo and the Star Trek thing…?
I saw people have them side-by-side on Twitter, and… Yeah, I don’t know what to think about that, but it’s kind of interesting.
We’ll leave it to people to call out on Twitter and Slack to us. We welcome your comments. I’ve seen some pretty funny ones so far…
What’s your opinion about Star Trek and the Space Force? Anyway… See, I made a really good transition with that aside to talk about public perception and societal considerations…
…which were some of the last things that were talked about in the index, along with national strategies. They talked about the public perception of AI, and societal considerations around things like fairness and interpretability… One of the things that I thought was good in the report is that they specifically called out the 17 United Nations Sustainable Development Goals, which cover a lot of things around education, and climate, and other things.
So there are 17 goals, and then there are 169 targets, and they talk about how AI can contribute to each of these… And if you remember, we had another guest on the show with the AI for Good Foundation, who is directly working with the United Nations to apply AI to the Sustainable Development Goals in really interesting and amazing ways… So if you’re interested in that side of things, take a listen to that other episode. I think that’s really worth calling out - now more than ever, because we have reached so many milestones in terms of AI, we’re at a point where we can really apply AI to all of these different problems that matter and make a difference for the quality of life for people, to give them a better life.
So if you’re interested, that’s really a great effort to be a part of. And in terms of if you’re looking for side-projects, or just to learn about AI, why not take on some side projects related to the sustainable development goals, related to AI for Good… Yeah, I think it’s a really great time to be part of that sort of thing.
[00:40:09.07] I agree completely. As we talk about AI for good and societal impact, maybe we can finish up with one last point that they note in this document - they really point out the rise of fairness, interpretability and explainability within AI (what we tend to call ethics), and they identified that those topics, in terms of references to AI ethical principles, have become an enormous conversation that we’re having globally at this point… We’re recognizing that we have these powerful tools, and that we need to be thoughtful before unintended consequences arise.
I love the fact that people are engaging on this, and trying to say “How can we think about fairness before we have problems?” We’ve had some bumps in the road over the last few years, obviously, but I’m very optimistic as we go into the 2020s about people at least engaging on these topics, on these ethical AI principles, on the front-end of the decade as we surge forward.
I just wanted to end on that note of optimism, and ask people to continue to do that. Don’t just do the engineering side and the data science side of AI, but think about the world that you want… And AI for good, as you mentioned, is a great place to be thinking, whether it’s in your primary job, or whether it’s what you’re doing for a side project when you go home at night.
Yeah, definitely. And we always like to share learning resources as part of these Fully Connected episodes. One related to what Chris was just talking about, which we could share, which I poked around a bit with is the AI Fairness 360 toolkit from IBM. I think we mentioned it maybe once on the show…
Yup, we did.
But if you just go to AIF360.mybluemix.net, there’s a toolkit there where you can experiment with their tools for fairness, and analyzing datasets, and modifying models, and all of those sorts of things. They have a web demo, but also as a resource, they have links to read more about bias mitigation concepts, terminology, they have a Slack channel where you can ask questions related to that, they also have tutorials that show examples of code that checks bias in different industries, in different applications…
I’m scrolling down - it seems like there’s even more here than what I remember the last time I checked it. They’re talking about all sorts of things - Disparate Impact, Manhattan Distance, Average Odds Difference, Equal Opportunity Difference… All sorts of different methods. Then also talking about Adversarial Debiasing, Reweighing… Really cool stuff, so I would suggest to check it out.
Of course, they have notebooks where you can try things, and it’s easy these days to spin up a notebook on Colab or other resources to try out a toolkit like this.
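To make a couple of those ideas concrete before you dive into the toolkit itself - here’s a minimal sketch, in plain Python with a made-up toy dataset, of two concepts the toolkit covers: the Disparate Impact metric and Reweighing-style weights. This is an illustration of the underlying math only, not the AIF360 API.

```python
# Toy illustration of Disparate Impact and Reweighing (not the AIF360 API).
# The dataset below is made up for demonstration.
from collections import Counter

# Each record: (privileged_group?, favorable_outcome?)
records = [
    (True, True), (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False), (False, False),
]

def favorable_rate(group):
    """Fraction of records in this group that got the favorable outcome."""
    outcomes = [fav for priv, fav in records if priv == group]
    return sum(outcomes) / len(outcomes)

# Disparate Impact: ratio of favorable-outcome rates, unprivileged over
# privileged. A value near 1.0 suggests parity; a common rule of thumb
# flags values below 0.8.
disparate_impact = favorable_rate(False) / favorable_rate(True)
print(f"Disparate impact: {disparate_impact:.2f}")  # 0.25 / 0.75 -> 0.33

# Reweighing-style weights: weight each (group, outcome) cell so that,
# under the weights, group membership and outcome look statistically
# independent: w(g, l) = P(g) * P(l) / P(g, l).
n = len(records)
group_counts = Counter(priv for priv, _ in records)
label_counts = Counter(fav for _, fav in records)
cell_counts = Counter(records)

weights = {
    (g, l): (group_counts[g] * label_counts[l]) / (n * count)
    for (g, l), count in cell_counts.items()
}
for cell, w in sorted(weights.items()):
    print(cell, round(w, 3))
```

Under-represented cells (like the unprivileged group with a favorable outcome) get weights above 1, so a downstream model trained on the reweighed data sees a more balanced picture - which is the intuition behind the preprocessing algorithms in the toolkit.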
Absolutely, sounds good.
Yeah, awesome. Well, great to go through this with you, Chris. I’m interested to see what the index looks like next year, but it was great to talk through it with you, and looking forward to a great year of AI.
As am I. Sounds good. Talk to you later, Daniel. Thanks.
Our transcripts are open source on GitHub. Improvements are welcome. 💚