Practical AI – Episode #71

2019's AI top 5

get Fully-Connected with Chris and Daniel


Wow, 2019 was an amazing year for AI! In this fully connected episode, Chris and Daniel discuss their list of top 5 notable AI things from 2019. They also discuss the “state of AI” at the end of 2019, and they make some predictions for 2020.


Sponsors

DigitalOcean – The simplest cloud platform for developers and teams. Whether you're running one virtual machine or ten thousand, DigitalOcean makes managing your infrastructure easy. Get started for free with a $50 credit. Learn more at do.co/changelog.

Brain Science – For the curious! Brain Science is our new podcast exploring the inner-workings of the human brain to understand behavior change, habit formation, mental health, and being human. It’s Brain Science applied — not just how does the brain work, but how do we apply what we know about the brain to transform our lives.



Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another Fully Connected episode, where Daniel and I keep you fully connected with everything that’s happening in the AI community. We’ll take some time to discuss the latest AI news, and we’ll dig into learning resources to help you level up on your machine learning game.

My name is Chris Benson, I am principal AI strategist at Lockheed Martin, and with me as always is Daniel Whitenack, who is a data scientist at SIL International. Hey, how’s it going today, Daniel?

It’s going pretty good. It’s 2020. Crazy, man.

Happy new year, man! We have just put to bed our first full calendar year of the podcast, as we started mid-2018… It's pretty exciting.

I know. I think if I do the math right, this is episode 71, unless we switch anything up… But yeah, 70+… It's pretty exciting. I don't know what we'll do when we hit 100, but we'll make sure to have something exciting for our listeners when we get there.

We need to think about that. And if any of the listeners out there have any suggestions for that, let us know. Join us in our Slack channel, where we are on all the time, every day, talking to people… Or you can reach us on LinkedIn or Twitter. We are definitely out there, having conversations with you guys.

Yeah, definitely. And pretty soon - just as a final reminder for people; I think I might have mentioned this on other episodes - both of us will be at the Project Voice conference, which, as of when this episode airs, will be the following week… January 13th through the 16th. So if you're around at Project Voice, come find us. We'll be recording some episodes in the SIL International booth, as well as giving a keynote together, both Chris and I. It'll be fun to be there, and think a little bit about speech and voice and AI, and what's going on in that world.

Absolutely. That will be in Chattanooga, Tennessee. I think it’s Monday the 13th, if I’m recalling correctly.

Exactly, yeah.

Today we’re gonna do the same thing we did about this time last year, as we got into 2019. We really wanted to look back on a couple of notable points in the AI world in 2019, talk about why we think they were notable, and also assess the current state of AI, where we are right now, and then look ahead to 2020… And of course, it would not be a start-of-year show if we didn’t try to make a few predictions, each of us, on where things are going over the next year.

[04:07] It is practical AI, so predicting has to be a part of it.

Of course, absolutely.

And the predictions will likely be wrong, but… Maybe after we do this for like 20 years, we'll have a proper test set of predictions, so we can really determine what our accuracy was.

Oh, I’m not looking forward to that result. I’m not sure that’s good. And I’m not sure if we should call them predictions or inferences, considering the field we’re in here…

Oh yeah, maybe that would be better.

One of the things that we were talking about before we started recording – it's been an amazing ride so far, and that is entirely due to our listeners and our guests. As you pointed out, we just would not be where we're at, with the show being as popular as it is, and so many people expressing how helpful it's been for them to get into this field and understand the details…

Yeah. Thank you to our listeners and our guests. The guests for sure, of course – a lot of the great content comes directly from them. Chris and I are mostly – I feel like a lot of times we’re just facilitators and there to listen to the great content that is there… So thank you to our guests.

It has been great to get feedback on our Slack channel, talk to people on Twitter, talk to people at conferences who are aware of the podcast, and are getting value out of it… And a lot of that is because we do get feedback; we hear “It would be awesome if you did a show on this” or “I’d love to hear about this”, and we definitely try to integrate those things in… So thank you for being part of the community. I hope you feel welcome and are excited for 2020 like we are.

Absolutely. I think a huge part of this show is the community aspect of it, even more so than the technical. It gives people an ability to connect, so thank you all for constantly talking to us over this past year and a half, and making sure that we’re still on track on how best to meet your needs. So I guess with that said, I know that on a couple of things looking back, we definitely [unintelligible 00:06:07.29]

Yeah, top things of 2019.

Yeah, absolutely. We definitely are in agreement on quite a few of those. One of those big topics is transformers… Do you wanna jump into that and set that up?

Sure. So when we are thinking about the top AI milestones or notable things of 2019, we both drew up our own list of things that we were interested in, or thought were notable. There was a little bit of overlap, but the big piece that was the overlap was transformers. We have an episode that talks about this in a lot more detail, specifically related to BERT, and we referenced GPT-2 a couple times… But if you aren’t aware of those episodes or haven’t listened to them, 2019 has kind of been the year of the large language models, the year of the transformers…

So this kind of got kicked off with BERT, and GPT-2, and other models that were really large-scale language models that in essence were able to learn a lot about language in general by being trained on many, many documents. Lots of data scraped from the web, or other places… And were able to transfer to a lot of different NLP tasks, whether that’s machine translation, or reading comprehension, or named entity recognition, text classification, all sorts of things.

These models have allowed us to have a zoo of really large, pre-trained models that know a lot about language, and transfer those easily to these various tasks… So we can kind of stand on the shoulders of giants, in a sense, of OpenAI, and Google and others who have trained these large models, have a lot of data, and then allow us to kind of just level up our own NLP game by utilizing these pre-trained models. That’s been a huge boost to NLP, this year in particular.

[08:18] I think the thing that really struck me about it is you and I actually come at this from different perspectives. You are a true NLP expert; anyone who has listened to our episodes very much when we talk about this will know that. It’s what you do all the time. I observe it, but it’s not my specialty, so I’m kind of coming from an outsider’s perspective on that. And the thing that really struck me is these new large language models just impacted the entire world of deep learning and industry at large, whether or not you were neck-deep in it, the way you are, or whether you’re really watching this from outside, and just being a user of these externally, the way I am.

So it was like the big hits just kept on coming through 2019, as we did this. I was absolutely - as probably most people - blown away when OpenAI did their first blog post on GPT-2 early in the year. I think it was February, if I'm recalling correctly… And even as they introduced it, they noted a couple of things. As we have specified, it's a transformer language model, and it's used as a generative model of language, where you can essentially give it a sentence to start with and it will generate a great deal of text, which in many cases is indistinguishable to the casual observer from text written by a human. It was pretty amazing.

We saw that, and they did that initial release, which was a scaled-down version, just to let the world try it. They recognized there could be security implications, so they were slow to release, and released in stages; but ultimately, if I'm recalling correctly, the larger model they released later in the year was trained on WebText, which contains over 8 million documents for a total of 40 GB of text. If that was images, that wouldn't be so much, but for text, that's enormous. And they pulled that from URLs on the internet in an unsupervised approach - URLs shared in Reddit submissions with at least three upvotes. So they had a huge, huge corpus of text to pull from.

I just remember seeing those early examples of what was possible, and thinking “Okay, we’re in a new place on the NLP front at this point.”

Yeah, definitely. I've talked to many colleagues who have expressed - specifically with that blog post that you mention - that prior to the blog post, if you were to ask them "Hey, what's the best that an AI model could do in generating text, regardless of architecture and everything that's been done - what's the best we could do?", they would have guessed a much lower quality than what was published in that blog post.

And just kind of being blown away by that… And of course, that fueled all sorts of things throughout the years. So I think these years, 2018-2019, have been referred to as NLP's ImageNet moment. If you remember further back, when ImageNet came out - which was a challenge around object recognition and computer vision - there was a huge boost in computer vision and AI. And this, I think, is kind of a parallel to what's gone on. So there's just been an explosion in all sorts of things that build on this technology.

The technology itself - these large language models, again, they’re kind of building blocks, in a way. We talked in the blog post about BERT, about how these are structured often into encoding layers, and decoding layers, and how you can utilize BERT or these other models to create these word embeddings or representations of text that can be used for a variety of tasks.
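To make that concrete, here's a minimal sketch of pulling contextual embeddings out of a pre-trained BERT model using the Hugging Face transformers library that comes up later in the conversation; the model name and shapes are illustrative, and the exact calls may differ slightly between library versions.

```python
# Minimal sketch: contextual embeddings from a pre-trained BERT model.
# Assumes `transformers` and `torch` are installed (pip install transformers torch).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "Transformers changed NLP in 2019."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One embedding vector per token; these representations can feed downstream
# tasks like classification, named entity recognition, or semantic search.
token_embeddings = outputs.last_hidden_state  # shape: (1, num_tokens, 768)
print(token_embeddings.shape)
```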

[12:13] That's spurred not only innovation in text generation, but innovation in all sorts of NLP tasks - like I mentioned, in translation and other areas. And I just saw – one of the big indications of this is that Google Search, which is arguably Google's bread and butter, just switched over to actually integrating BERT, one of these transformer large language models, directly into Google Search, in production, live now. I don't think Google would be taking that risk if they weren't convinced that this was a transformative technology. So that's pretty cool.

There was almost a meta issue around this… There was quite a bit of controversy in how GPT-2 was released, and we already talked about the staged release that they did… In that original blog post, under release strategy, they say "We're not releasing the dataset, the training code, or the GPT-2 model weights", and they specify that "We expect safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research."

That was really the first time that a major AI research organization had done that – everybody up until that moment was just publishing as fast as they could as new stuff came out, and that was the moment where they suddenly said "We have a greater concern." And there was quite a lot of debate about whether or not that was the right approach. I know we talked about it on the show a little bit, but… It was just interesting to see how that policy debate played out over time.

Yeah. I would specifically like to note - and long-time listeners of the show will know that I like to mention this group quite a bit, because I really think that they’re a big part of what’s happening… I specifically don’t think that the momentum that’s built up this year around transformers would have been quite as much without Hugging Face’s contribution.

Absolutely.

We had Clément from Hugging Face on a while back; we'll reference that episode. That was actually before a lot of the stuff I'm about to talk about really built up… But after that episode, Hugging Face came out with a few things. One of those was this application called "Write With Transformer", which - for non-technical people - you can just go to this app and choose any of these language models you want and just try to generate some text with it. It's kind of like a Word document where you can integrate these models. And I think that was just a huge eye-opener for people, because a bunch of non-technical people could go in there and do this.

It also forced Hugging Face to really deal with this "How do we productionize these models? How do we integrate them practically?", which led them to release the transformers library, which is one of the most widely used NLP/AI libraries; it's been mentioned a lot this year at top conferences… Not just the research conferences, but industry conferences - even TensorFlow Dev Summit, even though Hugging Face has traditionally worked with PyTorch, I think… So this was really transformative. Even in my car, I often listen to NPR…

I do, too.

I was listening to NPR and there was someone on there - I forget the exact topic. I don't remember the context, but they were talking about AI, and they were like "Well, I can use an AI model to generate some new NPR show titles for this show", and they used Write With Transformer, the app from Hugging Face, to do that sort of on the show, which was pretty cool.

[16:05] I’m looking over what they do, and they’ve done such a good job of integrating their transformers in with the existing tooling, as TensorFlow 2 was out this year, and PyTorch… And those two are still probably - I know there’ll be some people disagreeing with me, but probably the dominant two frameworks… And the tight integration - they’ve really made NLP not only powerful, but incredibly accessible to people.

In your view - I know that you follow them very closely, even beyond us having them on the episode - what do you think Hugging Face has done so well and so right that they’ve kind of become, to some degree, the darlings of the NLP world this year? That’s at least my own feeling of it.

Yeah, for sure. I mean, I think a couple things that maybe can be extracted from that, and we can learn for our own work, is that they have focused on making things, sort of giving people an immediate satisfaction with using these tools. The Write With Transformer thing - you don’t have to even go to GitHub, or download any models or anything; you just go and you try it out. So that I think is one thing that we could extract from this - making AI consumable to all sorts of audiences is something that is incredibly valuable.

Then also for developers, prior to the transformers library, it was still rather difficult to integrate these large-scale models into a normal workflow, and transformers really gave a standardized API that people could use to pull in these models, utilize them for various tasks, or just utilize them for generating embeddings… And so I think that sort of standardization is something that we also saw with SpaCy. SpaCy, which we had on the show recently…
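As an illustration of that kind of standardized API, the transformers library's pipeline helper wraps a pre-trained model for a given task in a couple of lines; this is a rough sketch assuming a recent library version, not code discussed on the show.

```python
# Sketch of a standardized NLP API: one helper, many tasks.
from transformers import pipeline

# Text generation in the spirit of "Write With Transformer" (GPT-2 under the hood).
generator = pipeline("text-generation", model="gpt2")
print(generator("Practical AI is a podcast about", max_length=30))

# The same interface covers other tasks, e.g. sentiment analysis with a default model.
classifier = pipeline("sentiment-analysis")
print(classifier("Hugging Face made NLP far more accessible this year."))
```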

That’s true.

…it has been and it still is extremely popular in the NLP space. And I think those are also characteristics that we've seen with SpaCy, where they value good design, they value a good user experience, they have a nice way to standardize the workflow around NLP into these sorts of pipelines… So I think those are really key ideas that we could take away… And just to give Hugging Face a final congrats - they ended the year with an announcement of $15 million in funding to continue development of transformers and what they're doing. So I think it's worth taking time to mention them, and always happy to.

Absolutely. They’ve had such a profound impact on the industry this year. I’ve just been very impressed with them. It was a great episode. If anyone out there hasn’t listened to this episode, you definitely should.

So there were a couple of things this past year - I don’t know that they were the most important things necessarily, but they were certainly events that really captured my imagination, and we did have actually episodes on both of the things I’m about to mention. The first one, people may recall a few months back OpenAI did some work with robotic dexterity, using a robotic hand; the hand was trying to solve a Rubik’s Cube.

And just to specify - and we had a whole episode talking about this - it wasn't the algorithm of the Rubik's Cube that the AI portions were solving, because those were known solutions out there, so they just implemented one of those… But what they were doing is using reinforcement learning to get the dexterity and sensitivity of the robotic hand to a whole new level that it had not been at before, and they shared some videos out there showing the robot - the single robotic hand - manipulating the cube in all sorts of ways… And it really inspired me, seeing the delicateness of it, the capability of being able to do very minute turns on the cube with its digits…

It was interesting - it would make you hold your breath as you watched the video, and at moments the Rubik’s Cube would roll right up onto the fingertips of the robot, and it would stop, balance there, and then spin… And it just made me realize that we were at the dawn of a new age for robotics in terms of what AI could do to supercharge where robotics are right now. Not only in more traditional movements and such, but also in these tiny little dexterity things, with sensors that were able to capture delicate things.

After watching that video, you could easily think of robots - you know, as we’ve talked about medicine and things - doing incredibly dynamic and precise forms of surgery on humans in that way… That if you had all of the right sensors and everything, that you could take AI and robotics and medicine to a whole new level. And that really had a fairly profound - for just one story - impact on my perception of the state of the art. How about yourself?

Yeah, it was interesting… And this is a space – you're much more familiar with this space, but I think the thing that stood out to me with that was their focus on making the models robust against perturbations and new scenarios. They developed these techniques around domain randomization, increasing the randomization during training such that the hand was able to deal with all sorts of unexpected scenarios… And I don't know if it's accurate - maybe you can tell me - but it seems like maybe one of the things that's held back AI and robotics a bit is this challenge of generalizing to all sorts of different scenarios…

Like you were saying with medicine and surgery - people come in all sorts of different shapes and sizes and ages, so a hand that would perform certain procedures would need to deal with all sorts of scenarios, and you can't have all of those in your training set. So how do you make sure that your system is able to extend and generalize to different scenarios? I think that that focus in the project was really interesting to me. If that focus continues, maybe there's a way to push the boundary there.

[24:11] Yeah, it's really created a revolution in robotics, in terms of – you know, we've had robotics for decades, deployed in various industries, particularly industrial uses, and for a long time everything about, for instance, an assembly line had to be very precisely measured, and there could not be substantial variability in those workflows. So we've really seen over the past 2-3 years - at the moment culminating in this robotic dexterity demonstration that we saw - the ability to accommodate some variability and to make changes based on that variability dynamically, in real time.

So when we're looking forward - and I know we're gonna talk about the world ahead, the time ahead, later on in this episode - it really starts creating new possibilities in terms of using these in scenarios where it just wasn't practical and realistic before. So it was a neat demo; I don't think people should get too hung up on the Rubik's Cube aspect itself, I just think that was a tool to show what they were getting to… But it was a pretty cool moment.

The other thing that had a very profound effect on me this last year - and we had an episode on it kind of mid-year (I think it was in June) - was deepfakes, and more specifically the very realistic types of deepfake videos where you're using a generative adversarial network (GAN) to generate those videos. And I think the thing that became obvious - not just to us in this field, but to the public at large; you know, we had Congressional hearings on it - was the fact that you were now entering a moment with this tool which could be used for both wonderful and nefarious purposes. It's not all bad, but… You're really blurring the lines of what is real and what is not with this capability. And there can be fantastic, wonderful things…

You go to an amusement park where they’re able to implement deepfakes in rides, and that could be a lot of fun; it personalizes the [unintelligible 00:26:05.24] and you could do some pretty cool stuff… Or obviously, you could have things as bad as national security concerns about the U.S. elections in 2020, where we already have had the American FBI and the intelligence community warn us that it is highly likely that we’ll have adversaries and strategic competitors trying to interfere in the elections…

And then I've heard some other people talk about what happens – right now, as we look at this technology, we have a little bit of time to assess it, in some cases; but what happens when we get to a situation where there is no time to figure out what is real and what is not? If you had a deepfake that showed the president of the United States describing that he had just launched nuclear weapons, and you're somebody out there who may be the target of that, and that's not a real video, how do you assess that in a responsible, appropriate, but expeditious manner?

So we’re in a world that has changed in terms of our ability to know what’s real and know it in essentially real time. Any thoughts on that?

Yeah, I mean… I think you're definitely right. And along with that, we've seen an increase in research into detecting fakes, which is encouraging, and I hope that continues… And then also, I know in a few of the episodes after we talked about those sorts of things, it's come up that there definitely are good uses of this technology. We've talked about generating medical imagery of tumors, and that sort of thing, which is very expensive to annotate and generate manually… But we can create simulated data using these methods that can improve tumor detection algorithms, and that sort of thing.

So with any technology, I think, there’s good and bad sides that you could draw from it. I think this one, the deepfake things and the videos that came out emphasize the bad ones first… So it’ll be interesting to see, as GANs become more and more practical and integrated into different systems, what the positives are and how we deal with those other negatives.

[28:17] Yeah. There was one website I came across, and it used deepfakes that were GAN-powered to animate the Mona Lisa… So it took what is the most famous painting in the world, and the Mona Lisa was busy gesturing and talking and stuff like that. I think we're gonna see many good uses.

Yeah, some of the uses are just kind of interesting, in that sense. I don't know who uses that animated Mona Lisa for any practical purpose, but it is still fun, and it's pushing the boundaries.

One thing - slight downer; not national security-level downer, but I've read some things suggesting that telemarketing calls are supposed to be the next big wave of this, as people scrape social media sites to get images of you and people you know, and then try to mimic voices on those calls… So beware, as we go forward over the next year or two, that that kind of thing could happen at a very personal level; it doesn't always have to be these giant, end-of-the-world scenarios. It can be something that is very immediate and known to you.

What about you? What are some of the things that you noticed in 2019 that were awesome?

I think one thing that we definitely have to note is TensorFlow 2.0. I think the final official release of TensorFlow 2.0 was November 9th, if I searched that right… I mean, I used Google search, so if that’s the wrong date, then I guess they can blame themselves.

I was gonna say, if anyone should know the date, it should be them.

But yeah, TensorFlow 2.0… So for those that aren't aware, with the release of TensorFlow 2.0, TensorFlow made quite a few significant changes, especially to the default API, which is now Keras… And also to the way in which computations happen - eager execution is now the default, instead of always generating a static graph that has to be executed later.

I think TensorFlow 2.0 was just an amazing demonstration that the TensorFlow team - and this is coming, I should say, from a PyTorch guy… I’ve used PyTorch way more than I’ve used TensorFlow, and I really enjoy PyTorch, and still really enjoy PyTorch and use it a lot… But for me, I think it’s a great demonstration that the TensorFlow team saw that “Oh, we have this really powerful technology, but we’ve gathered feedback that we need to kind of shift some focus in some areas, and make it more usable, make it more approachable, and make it more practical.” So I think the usability and practicality of TensorFlow 2.0 is just amazing… And I think they should be given congratulations for an amazing release, and I can’t wait to see more.
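For anyone who hasn't tried it, here's a minimal TensorFlow 2.x sketch in the spirit of the quickstart mentioned a bit later in the conversation: Keras is the default high-level API and operations run eagerly, with no separate graph-and-session step. The dataset and layer sizes are just illustrative.

```python
# Minimal TensorFlow 2.x sketch: Keras as the default API, eager execution by default.
import tensorflow as tf

print(tf.add(2, 3))  # executes immediately - no session or static graph required

# A small classifier on MNIST, in the style of the TF 2.0 quickstart.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
model.evaluate(x_test, y_test)
```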

Yeah, as someone who has used both version one and version two now, I much prefer version two. And you can use the Keras interface for the vast majority of use cases that most people are likely to see; it's much more user-friendly.

It was funny, this past year at a couple of conferences – I tend to keep my skills up; I’ll go to TensorFlow classes, and stuff, and I remember it was several months after the TensorFlow 2.0 beta had come out [unintelligible 00:31:32.14] But I remember going to a class, a TensorFlow class - the entire class at the beginning of that first day was immensely disappointed that we weren’t using the TensorFlow 2.0 beta in the class, instead of TensorFlow 1.0. I felt sorry for the instructor; I’m gonna keep all the identities out of it, but… It made that big of a difference in that community. So kudos to the TensorFlow team for listening to user feedback and turning out a great product that made great strides on the first one.

[32:03] Yeah, I just tried this to see how easy it was to find, and I just searched for TensorFlow 2.0 Colab notebooks, because that’s probably where I would start if I was trying to find something. You could probably also search TensorFlow 2.0 quickstart, because the first two results are TensorFlow 2.0 quickstart for experts, TensorFlow 2.0 quickstart for beginners… And if you go in there, it walks you through the code itself, but also they have nice Colab notebooks that you can open and try things out without even running anything locally… So it’s super-easy to get into and I would recommend people to take a look.

Probably the last thing I wanna mention - I mean, there was so much in 2019, so sorry to all of you out there whose favorite thing from 2019 we're leaving out… But the other thing that I wanted to mention in 2019 is something that I detected throughout the year, and that was a sort of realization which hadn't been there in 2018, at least the way I felt it this year… It was that training AI models is super compute-intensive, and this year I felt a little bit of pause from the community in saying "Hey, how much energy are we expending to train these AI models, and what can we do to make that more efficient and more responsible in terms of the environmental impact and all of that?"

An article was released in 2019 which caught a lot of people's attention, showing that training a single AI model - one of these larger language models, for example - just training it once can emit as much carbon as five cars during their whole lifetime of use… Which is pretty staggering, and I personally felt like not everybody took this seriously this year in the community necessarily… And it's not like training large models has stopped. But I do think there is beginning to be a sense that we need to really pursue technologies that make AI more efficient and responsible in that sense.

Yeah, I don’t remember which episode it was, but I remember when it came out we talked a little bit about that, and I know that both of us are very environmentally-focused in terms of being responsible… So I was very happy to see people taking it seriously as well. I heard a lot of conversations through the year about the topic. So I think it’s a problem still to be solved; when you have very large-scale model training you have to do, there are currently not enough – we don’t have enough solutions out there yet in terms of having the compute capability and yet still be able to be responsible… Because this technology is here to stay; we’re gonna be computing more and more, so we need to be thinking about those responsible solutions, just as we have in other aspects of AI that have come to pass, that we’ll be talking about in a few minutes.

Yeah. There’s multiple facets to this. There’s the side of things which is, of course, making data centers more efficient, and also running those off of sustainable energy sources, and I think that’s been going on prior to this year… And there’s been a good amount of effort put into that. But also, I think the pieces that I’ve seen developed this year, much more emphasis in distilling and optimizing models to make them more efficient, make them run faster… Which is partly driven by just practicality.

If you’re using a model in production and it’s smaller, or you’re wanting to port it to a mobile device or something like that, it needs to be smaller… So some of those things factor in, as well. But also, I’ve seen some efforts in envisioning new, more efficient architectures for modeling, so not always relying on, let’s say, the next larger transformer model, but are there other architectures - maybe just regular RNNs - that can do this task just as well (or almost as well) as using the full, large-sized BERT model… And are much smaller and can be trained in much less time. We need to approach this from various angles, but I think it’s something that people started hopefully taking seriously in 2019.
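As a rough sketch of the distillation idea mentioned here - training a small "student" model to mimic a large "teacher" - the core loss usually combines the teacher's softened predictions with the true labels. The temperature and weighting below are illustrative choices, not values from any particular paper.

```python
# Sketch of a knowledge distillation loss: the student matches the teacher's
# softened output distribution while still fitting the true labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between softened teacher and student distributions.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature ** 2

    # Hard targets: ordinary cross-entropy against the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Random logits just to show the shapes involved (batch of 8, 10 classes).
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```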

[36:32] I think we have the benefit of the fact that it doesn’t require only a mindset in terms of responsibility toward the environment, but also just sheer performance. If you’re able to find these other approaches that are allowing us to actually get there sooner, it’s better for all concerned.

One of the things, before we turn to what the future looks like, is kind of – let’s take a moment and assess where we are right now. We’ve just gotten to the end of 2019, we’re at the beginning of 2020; not only the beginning of a new year, but the beginning of a whole new decade… So what are your thoughts towards where we are now, as we hit this point, Daniel?

Yeah, sure. I think one super-positive point of where I think we are, and will continue to be in 2020, is really an amazing place in terms of the practical side of AI, which is what we're concerned about a lot here on the Practical AI podcast. And I say that because you have these things, like we already talked about, like transformers, but other libraries as well, and other toolkits or just code on GitHub, whatever it is, infrastructure pieces, tooling - I feel like, as compared to where we were at the end of 2018, there are just a lot more ways to be robust and build AI systems that have a lot of integrity, in a much shorter period of time than we used to be able to.

It kind of used to be very much the Wild West, and maybe we are still a little bit in the Wild West, but I think that a lot of the principles from software engineering have kind of come into the AI world and we’re a lot more focused on versioning things, tracking things, monitoring things, whether that be with tools like TensorBoard, or other things… Or it’s infrastructure pieces like Pachyderm, Kubeflow, and things like that. We’re just thinking a lot more about the AI systems that we’re building, rather than just AI models, and I think that’s really encouraging and it helps people that are actually trying to build products and be practical, and integrate AI. I think there’s so much opportunity there, and there are so many choices available in that regard.

Agreed. Just seeing – you called out something a moment ago that really struck me, and that is when we talked about this a year ago now, going back to that episode - so much has changed. When we first started this podcast, we were always searching around for good tutorials and examples, and sometimes we would struggle a little bit to find them… In just that amount of time, and especially in the last year, there’s so much available out there.

The open source tools have really matured. Great communities, the tutorials enabling people to do that… And we're finally seeing some of the surrounding infrastructure and tooling improving. I think there's still a struggle there, as people really try to productize how they get models not only trained, but deployed in the rest of their environment. But I think that's definitely something people are working hard on now.

[39:45] Another thing that I’ve really noticed… At my job at Lockheed Martin I’m very involved in our own AI ethics and responsibility initiatives, so I spend a lot of time focusing on that. And over the past year we’ve seen pretty much all the major players out there, whether they be Google, or Microsoft, and many others, releasing ethical frameworks, and their principles, and such… And I think it’s gotten called out.

The difference between now and last year at this time, when people were just starting to talk about ethical AI, is that the conversation has matured a great deal. And there's a recognition that even with some of the limitations of where we are right now in terms of what deep learning can accomplish, the dangers of [unintelligible 00:40:32.16] are very real. And we're seeing lots of the significant luminaries in our field kind of calling that out, and expressing a need for standardization as we go forward on that. So that has been a fairly significant change in the last 12 months.

I think things like, for example, China’s use of facial recognition, which we’ve talked about on the show before, and Russia’s use of behavioral modeling and that sort of thing to influence, for example, elections - those have hit everybody, and have been just kind of widespread, or have been acknowledged in a sort of larger sense that AI isn’t something that is really cool and for Sci-Fi, but there’s real uses of it that are going on. Not only real uses, but potentially really bad uses as well.

Yeah, and I know we’ve also talked about it in previous episodes, but as an example of something that – it depends on where you are in the world and your values, but I know based on generalized Western values, China has their social credit system. And as we have been looking at that and talking about that for some time now, they’re using AI to not only surveil, but analyze and monitor their citizens, and either reward or punish them accordingly. That’s such a profound effect upon that particular country and the society, that it’s given us a lot to think about in terms of what do we want.

If you live in a democracy where you have a say-so in how things are implemented, and you’re one voice of many that can contribute to that voice, I certainly hope people are thinking about what is right for you and the community that you live in, no matter where you are, and where does that make sense. So that’s gone from being a fringe conversation to becoming a mainstream conversation in this past year, I’d say.

Sure. One thing that I’ll bring up in terms of where we currently are in terms of the state of AI going into 2020 is I think that as we move forward, it’s gonna be more and more crucial that if we’re really serious about using AI to tackle large-scale problems like climate change and the death of languages around the world, access to good healthcare around the world, we’re going to have to better involve researchers and developers from all over the world.

We've had some really encouraging things this past year, and things going into next year around that, like various workshops being held around the world, in South-East Asia and Africa; there have been conferences that have been placed in those areas… There's the Deep Learning Indaba in Africa that's going on, and offices of Google and others that are opening in those areas… But we're definitely not where we need to be. For example, at NeurIPS still this year there was a huge problem with researchers from around the world getting to NeurIPS and having their visas denied. If you just look at publishing, we're still pretty much driven by the U.S. and by Europe in certain areas.

[43:57] So if there's been one thing that's been clear to me as I've worked more with the NGO I'm a part of, and also other NGOs, it's that if we really want to make an impact on these sorts of problems, we need to have representation from these local communities. We can't just take – for example, if we wanna extend translation like Google Translate to all sorts of languages, we can't not involve these communities. We can't just publish research papers that say we're studying low-resource languages and we just under-sample English as our low-resource language… Because that leaves out so much. It leaves out unique scripts, unique domain issues, and cultural things… So I think there's a lot of shifting that needs to happen in this area, and I certainly hope that that continues to happen as we move into 2020.

I think that's a really great point that you make there. Before we move on to predictions, the last thing I wanted to mention about the state of where we are right now is that I also think there's a general consensus developing in the industry. We're seeing a lot of top luminaries – I know the VP of AI at Facebook recently said that we are very far from human intelligence. That was in an article that Wired had, I believe. And I think there was another article, ironically, that Wired also had, with some comments about the fact that we're hitting some limitations on the types of problems that deep learning is likely to be able to solve, given the fact that it is a technique that is very narrow, in terms of you get highly specialized results in a narrow scope…

One of the things at NeurIPS that was talked about was the fact that we really need to get to biological roots of natural intelligence to understand what our next steps are gonna be in the AI space… So what I think is that you may end up having people trying to reassess as they enter this new year about where they wanna focus their research on, and trying to do that. Any thoughts on that before we move into predictions?

No, I think it’s a great point, and I’ve definitely seen – I think we’ll put some links into the show notes about various luminaries’ statements on this sort of stuff. I’ve seen those as well, and I think that we can get into a pattern that is kind of natural, but can be limiting. For example, we’re all about transformer models, and we just do transformer models over and over, and it breeds this sort of “NLP is transformers.” But actually, there’s a lot of things that have happened historically in AI that we could pull from, and there’s new things that we could pull from maybe, like you say, that are rooted in other sorts of ideas related to biology, or evolutionary algorithms, or whatever it is. So I think we need to keep our flexibility intact maybe is a good way to put it.

I would agree. And I think the industry at large would agree with those sentiments, based on the sentiment we saw at NeurIPS, and that I think has been building over this past year in general. Let’s look ahead to 2020…

Alright. Inference time.

Inference time now… Figure out what we think might happen. I will start us off with a couple of them, and then turn it over to you. As we talked about AI ethics and responsibilities, I think we're now at an inflection point where we've had many organizations putting out their principles on what they think should be, but we don't have a very good way to execute on that. Not everybody is going to be an ethicist, especially in the engineering field… So I think we're seeing a consensus that the next step now is to turn toward the creation of supporting tools, or retrofitting existing tooling, that enables non-ethicists to appropriately implement the various aspects of ethical AI. Everything from eliminating bias from datasets, to being able to think about where different types of AI should be applied to different types of solutioning… So I'm predicting that we're gonna see a surge over the next year and beyond in tooling to support ethical AI.

[48:22] I hope so.

Any comments on that? You hope so?

Yeah. I’ll be looking for it.

That sounds good. Another thing that I think is happening already - I see a lot of conversations, and I've been a part of a lot of conversations about this - is the fact that we're getting to a point where instead of deep learning and neural network development being a separate little shiny object, with dedicated people that only do the modeling, you get to the problem of "How do you implement this in real life?" You can build a great model, but then people in organizations really struggle to get it deployed into production, and to get a kind of DevOps and feedback loop associated with what they're doing and those activities… So I think you're gonna see a lot of effort put into moving neural network development into the existing software development lifecycles and workflows that organizations already have in place, and they'll make adjustments to those workflows to accommodate these new technologies. I think that's really important for them to see a good return on investment for their efforts in this space.

Yeah, we’ve talked about that a little bit. Maybe Joel Grus’ episode on responsible AI development practices would come into play here. I’ll link that in the show notes.

Sounds good. Another thing I'm seeing - and we actually already talked a little bit about it; I think TensorFlow 2.0 is an example of this - is that we're gonna continue to see simplification of neural network tooling and trying to make that learning curve more manageable. And I think you'll see different users and developers within this technology being able to buy into toolsets that are suitable for them and their own backgrounds.

So I think that you’ll see more tooling that is specific, that may cater to certain types of data scientists, versus certain types of software developers, and you’ll be able to customize that tooling to match your level of knowledge, expertise, and your background as well, so that you can be productive quicker.

And then I guess my final prediction: I think that, given what we talked about, there's this acknowledgment at large developing within the deep learning field that it's not well-suited for certain problems. It takes a lot more data to learn than a human might use to learn something, it is less flexible, and it's overly focused on a particular solution - not able to move, by way of example, from one game that a deep learning algorithm has learned to another game and leverage what it learned from the first one.

So we've seen many examples of that, and I think as that message really permeates through the field, we will see people reassessing, and maybe - and this is really the first time I have ever said this on our podcast - we will start looking at AI in the future as moving into a post-deep-learning world. When I'm talking about the present, I tend to tell people I think of AI personally as equivalent to deep learning right now, as we are at the beginning of 2020; but we may get to the end of this year and that may not be a true statement anymore, and I may have a different answer.

So I think that is where we’re going… What about you, Daniel? What are some of yours?

[51:49] Well, I decided to go the safe route, and my prediction is that at least one of the following three things is going to be a huge player and a huge emphasis in 2020, and will really pick up steam. Or maybe all of them, or maybe just two of them or one of them, I'm not sure… I'm kind of covering my bases there; that way my test scores are better when we look back at things after some time.

The three things that I was thinking of were first multimodal learning, then mobile AI (or AI on mobile devices), and then federated learning. Multimodal learning is where, for example, you make inferences off of multiple modalities of input data. Maybe you have an image and text that are input to a model, and then you make some inference.

I think this was already emphasized recently by our guest from Etsy in their search technology, where they have titles for their products and descriptions, but there's also more information in the uploaded pictures of the products. So you could take both of those input signals and do much more than you could with just the text or just the imagery alone. And I think that this is gonna be really revolutionary and pick up steam in terms of a lot of things, whether it be chatbots, or recommendation, like in that Etsy case, or whatever it is. I think we're gonna see a lot more of that. In fact, we saw that also with OpenAI's robot hand Rubik's Cube work, where they were taking signals off of the hand itself, but also using the imagery from cameras and all of that.
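To give a feel for what multimodal learning can look like in practice, here's an illustrative Keras sketch that fuses an image branch and a text branch into one prediction; the architecture and sizes are made up for the example and aren't taken from the Etsy or OpenAI work discussed above.

```python
# Illustrative multimodal model: image and text inputs fused into one prediction.
import tensorflow as tf

# Image branch: a small convolutional encoder.
image_input = tf.keras.Input(shape=(64, 64, 3), name="image")
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(image_input)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

# Text branch: token IDs -> embeddings -> pooled representation.
text_input = tf.keras.Input(shape=(50,), dtype="int32", name="text")
t = tf.keras.layers.Embedding(input_dim=10000, output_dim=32)(text_input)
t = tf.keras.layers.GlobalAveragePooling1D()(t)

# Fuse the two modalities and predict, e.g., a product category.
combined = tf.keras.layers.Concatenate()([x, t])
output = tf.keras.layers.Dense(10, activation="softmax")(combined)

model = tf.keras.Model(inputs=[image_input, text_input], outputs=output)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```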

I think you're right. I know that at my own company multimodal learning is a big deal. One globally-impacting use case where we're seeing it is humanitarian assistance and disaster relief. As you're trying to get datasets for a particular disaster scenario (maybe a wildfire), if you can get data from lots of different imagery, the various types of radio calls that are occurring and all that, then you can create a model that is much more robust and accurate, and able to accommodate many more scenarios.

So I totally think you’re right on that, on multimodal learning. I think that is gonna be huge going forward. That was a good call.

Good. Well, hopefully at least that one comes true. [laughs]

I have faith in you, man.

Alright, cool. The other ones, I think, are really driven out of my sense that privacy, of course, has been important, but is increasingly important, and just the scale of AI extending to all parts of the globe. I think we will see in 2020 a lot of deployments to mobile devices, and a lot more tooling around that, maybe along with deployment to things like browsers and that sort of thing, where we're running models on user devices and fine-tuning them on user devices.

Along with that goes federated learning, which is the idea that we’re not really centralizing data from all sorts of users and then running a centralized training, and then porting the model back, but there is this sort of federated, distributed training that’s happening, where a lot of the data from user devices doesn’t have to leave user devices… There’s advantages to that, of course, because of privacy, but also data transfer and all of that.
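As a conceptual sketch of the federated learning idea described here - devices update a model on their own data and only the model updates, not the raw data, travel to a central server - here's a toy federated-averaging loop with a linear model. It's purely illustrative and not tied to any particular framework.

```python
# Toy federated averaging: each device trains locally, the server averages weights.
import numpy as np

def local_update(weights, local_x, local_y, lr=0.1):
    # One gradient descent step for a linear model on the device's own data.
    predictions = local_x @ weights
    gradient = local_x.T @ (predictions - local_y) / len(local_y)
    return weights - lr * gradient

# Simulated private datasets on three devices (never sent to the server).
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]

global_weights = np.zeros(5)
for round_num in range(10):
    # Each device starts from the current global model and trains locally...
    local_weights = [local_update(global_weights.copy(), x, y) for x, y in devices]
    # ...and the server only sees the updated weights, which it averages.
    global_weights = np.mean(local_weights, axis=0)

print(global_weights)
```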

I’ve seen this talked about over the last years, but haven’t really seen it come about in a widespread way, and possibly this is the year… I don’t know.

I think you’re right. You have stuck with safe, but they’re very good bets. [unintelligible 00:56:05.20] on all three of those, actually… So yeah, I think you nailed it.

Cool. Well, I’ll probably then have to learn a little bit of – I need to learn a little bit of mobile development or something. Maybe we’ll have an episode where we have some learning resources on that.

But yeah, I’ve enjoyed this lookback and lookahead, Chris. It’ll be interesting to look back at this episode at the end of 2020 and see what came true and what didn’t.

Yeah, so much has changed in the past year, as we’ve called out, and I suspect we’ll have even more so this coming year. It was a good conversation. Happy new year again, and looking forward to seeing you in Chattanooga at Project Voice, and doing all sorts of cool stuff in the year ahead.

Awesome. Happy new year!


Our transcripts are open source on GitHub. Improvements are welcome. 💚
