Practical AI – Episode #6
Government use of facial recognition and AI at Google
In this episode, Chris and Daniel discuss the latest news, including an article about Google’s AI principles, and they highlight some useful resources to help you level up.
Hired – Salary and benefits upfront? Yes please. Our listeners get a double hiring bonus of $600! Or, refer a friend and get a check for $1,337 when they accept a job. On Hired companies send you offers with salary, benefits, and even equity upfront. You are in full control of the process. Learn more at hired.com/practicalai.
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.
Linode – Our cloud server of choice. Deploy a fast, efficient, native SSD cloud server for only $5/month. Get 4 months free using the code changelog2018. Start your server - head to linode.com/changelog.
Rollbar – We catch our errors before our users do because of Rollbar. Resolve errors in minutes, and deploy your code with confidence. Learn more at rollbar.com/changelog.
Notes & Links
- ACLU calls for a moratorium on government use of facial recognition technologies – TechCrunch
- Doing good data science - O’Reilly Media
- An Overview of National AI Strategies – Politics + AI – Medium
- Meet CIMON, the 1st Robot with Artificial Intelligence to Fly in Space
- AI at Google: our principles
- Foundations of Machine Learning
- Machine Learning Crash Course | Google Developers
- Best Laptop for Machine Learning - YouTube
Click here to listen along while you enjoy the transcript. 🎧
This is Daniel, and I have Chris, my co-host here with me, who is an AI expert and specializing in deep learning… How’s your deep learning been going, Chris?
It has been going 100 miles an hour. This field is moving so fast, and so many new things are happening that I’m trying to keep moving forward and keep my head above water. How about you, Daniel?
Awesome. Yeah, it’s been crazy, it’s been good. I’ve been doing a lot of data munging and cleaning this last week, which has been fun… I’m working a little bit with graph databases, so maybe at some point we’ll bring that into the show.
I’m glad that you mentioned that there’s a lot going on. For our new listeners, this is one of two different formats that we’re doing in this show, where we’re gonna just kind of give some news and updates that we’ve seen in the community, and also provide some resources for those starting out in AI, or wanting to level up in AI, some learning resources that we’ve found out there.
I’m excited to talk about what’s going on in the community. What did you see going on this week, Chris?
There are so many news stories that we’re getting each week, and just picking a few is really kind of the hardest part of this. I get asked all the time about where AI is going in the global scale, in terms of different countries… You know, everyone always asks about the U.S. and China, and how Russia fits in… So I came across a Medium post this past week called Artificial Intelligence Strategies. It kind of maps out what the known strategies are for a bunch of different countries out there, and kind of puts them on a timeline. We’ll share the link in the show notes.
[03:54] It’s really cool, and it has some great graphical stuff… But it kind of starts off with an overview of National AI strategies, and it gives you a table of contents, which is a couple dozen countries, and then each of those kind of has a one or two-paragraph blurb about what those countries are doing. It’s just a great single point to go to and say “Hey, what is China doing? What have they announced? What are they interested in?” and you can go down to China and find out… And there’s many others as well, such as - right below China is Denmark. I’ll leave it to our listeners to go explore that through the show notes.
It was a great starting point if you want to understand how AI is being seen at the nation-state, strategic level. How about yourself?
Well, I’ll tell you what China is doing with AI - they’re identifying all those people at their pop concerts with facial recognition. Have you seen those stories?
At the one – I forget the singer’s name – they’ve nabbed… I forget how many people at this point, but at his concerts, people with outstanding warrants that they’re looking for, or however that works in China… But that’s what I know China is doing with AI.
You know, it creates this huge issue of the ethics of how to use these technologies. China is approaching that in that way, which I certainly am not comfortable with.
More recently, Amazon, with some of the stuff in the U.S. about facial recognition with law enforcement - there was a big uproar a week or so ago about that… I think you’re seeing the populations of all these different countries having to react to this rapid onslaught of this new technology, and how each of these governments is choosing to use it with or without oversight.
It’s a fascinating time in terms of understanding how we’re moving forward from an ethical standpoint, and I think that’s certainly gonna be a show coming up where we’re gonna talk about that in some great depth.
Yeah, for sure. And to go along with that, one of the things I saw was that the ACLU called for a moratorium on government use of facial recognition technologies, which goes right along with what you’re saying. I just think it’s interesting that – and I think actually the guest on a Changelog podcast was talking about this – there’s kind of this spectrum of how people perceive AI. On one side they think that it’s so amazing that it can do everything, that it’s gonna solve all our problems and automate everything; and then on the other side people also think it can do more than it can, and a lot of creepiness comes out of that… Where, in both respects, those expectations need to be dialed back somewhat. So I think there is this problem of setting expectations for what AI is even capable of, but then certainly once technologies like this come out, there’s definitely a lot of conversations that need to happen around the use of them, especially by government entities.
I agree. And unlike the highly educated listeners of our podcast – they’re here learning about this kind of stuff, as are we – I think the vast majority of people out there are hearing this in the news every day, but they don’t have any basis upon which to evaluate it… And there’s so much education that needs to happen, even while this field is just racing forward at light speed.
It’s really creating a lot of social, cultural and economic turmoil in terms of our lives changing so fast… So it’s definitely something we need to dig down into on an upcoming episode.
[08:06] For sure.
So another article that I saw that came from Space.com was that the first robot that is using AI flew in space recently…
It is called - I’m assuming I’m pronouncing it right - CIMON…
We’ll say CIMON [symon] or CIMON [simone] to make things not awkward.
There you go. If somebody can tweet us or hop onto our Slack and correct us in terms of what that is… But my first reaction when I saw all of this was “Really? This was the first?” I would have expected it a long time ago, that maybe some deep learning from a CNN standpoint might have been used, but apparently, as of the morning of June 29th, which is not too far back - a couple weeks back as we record this - it says “A small robot endowed with artificial intelligence launched a two-day trip to the International Space Station.”
What does it do? Does it just wake them up? Is it like an assistant?
It’s a small flying sphere, and it has kind of a cartoon-like face on the front of it…
Yeah, that’s super creepy…
It is pretty creepy, actually… And the pictures of it… And apparently, it’s able to propel itself around in the ISS through little puffs of air, and it interacts with the astronauts. So we finally got to that moment of 2001 futurism, it’s there now… So I imagine this is gonna be just pervasive in space missions going forward… Not only this one particular robot, but probably many to come.
Yeah, it might be the first thing that has motivated me not to go into space, so I don’t have to stare at that face for like months on end…
Well, you could just be out there with an entire crew of CIMON robots. It’s just you in the spacecraft, and all you do for the next six months is interact with them.
Yeah, I’ll pass on that one. [laughter] But I don’t know if that falls into this category, and speaking of kind of governments and nations’ strategy around AI, I saw that DJ Patil, who was the first chief data scientist of the U.S. - I’m not entirely sure what he’s up to now, but he, along with Hilary Mason and a couple of others, came out with an article about doing good data science, and I think it’s a good read for everyone.
It brings up a lot of good things, and it talks about basically how we need to have the space and the time to address the ethical questions that are coming up in data science and AI work, and share those at conferences, and be open about that side of the work. So if that’s something you’re interested in, it’s definitely a good read, and by some of the leaders in the field, for sure.
Definitely. We keep talking about ethics and AI – so far it’s more of an ancillary topic in our episodes, and I know we’re gonna have it as a primary topic coming up, but I ran across an article that Google’s CEO posted, called “AI at Google: our principles”, which I was happy to see, because I think this is something that most companies need to be framing, in terms of how they’re approaching using these technologies from an ethical standpoint, and their objectives, and such.
[11:56] I thought I’d take two seconds and run through what he put, kind of a highlight… The article is longer and people can go read it, but he said under “Objectives for AI applications” to “Be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence”, and he finished with “Be made available for uses that accord with these principles.”
He also goes on and lists some things that they would not do, and talks about their long-term approach… But I was just happy to see that they were actually thinking their way through it and publishing it, so that so many other organizations can kind of follow suit and put forward their own objectives, and hopefully put them out there so people can see what they are, and we can continue to have this conversation so that AI can continue to be used for wonderful things, like feeding people in Africa, which was a previous episode of ours.
Yeah, for sure… And it definitely fits into the same spirit of that other article, about doing good data science and publishing those thoughts in blog posts and other things… Definitely a good read.
The other thing that we for sure wanna do in these news and updates episodes is to share some learning resources with people. Maybe you’re starting out in AI, or you just wanna keep yourself fresh or learn new things - we definitely wanna help expose you to some of those resources.
The first one that I found recently was that Bloomberg published a free online course in machine learning fundamentals. You can view it, I think, as a series of lectures on YouTube, but they have their own site now that you can go to, and you can run through the lectures in order; they have extra resources and things there. I haven’t been through it, but it seems really useful and people seem positive about it.
That sounds like a great one. Likewise, Google has their machine learning crash course that uses TensorFlow APIs, and that is freely available. We can put the link in the show notes… They describe it as a self-study guide for aspiring machine learning practitioners. So that’s also one of the many great, free resources out there, where people can get their hands dirty on these.
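[Editor’s note: to give a flavor of the kind of first exercise a course like this covers – fitting a line to data with gradient descent – here’s a minimal sketch. It’s written in plain Python rather than the TensorFlow APIs the crash course actually uses, and the data is made up for illustration.]

```python
# Fit y = w*x + b to data by gradient descent on mean squared error.
# A toy stand-in for an intro machine-learning exercise; not from the course.
def fit_line(xs, ys, lr=0.01, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic points lying on y = 3x + 1; the fit should recover w≈3, b≈1
xs = [0, 1, 2, 3, 4]
ys = [1, 4, 7, 10, 13]
w, b = fit_line(xs, ys)
```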
[14:37] On a slightly different tangent, I have a lot of conversations with people who are trying to figure out how they’re gonna handle the computation side. We use cloud services, obviously, from the major providers, and in some cases maybe we’re lucky and we can afford some pretty good deep learning-oriented hardware with GPUs or TPUs or whatever… But there is a YouTube video by Siraj Raval (if I’m pronouncing his name correctly), who is one of the better-known luminaries in the deep learning space. He does a lot of YouTube videos and courses, and he’s fairly well-known for these. He did a “Best Laptop for Machine Learning” video, and it’s a good resource if you’re on a budget - a great way of saying “Okay, how could I get into this? If I’m gonna build a system or if I’m gonna buy one, what are those tradeoffs?” It was just a good, basic overview.
I suspect that on the hardware side we’re gonna see a lot more of those types of recommendations as people get more and more into this, as this space becomes more accessible to people.
Yeah, that’s great. I have always been thinking about what laptop is next for me; I could watch those videos all day. Definitely, after seeing Kelsey Hightower demo from a Pixelbook, I’ve got my eyes set on those, although you’re probably not gonna train too many neural networks on them… But yeah, that’s always a fun one to watch.
So keep your eyes out for more episodes like this, where we share some things going on in the community. If you have suggestions for things you would like to talk about, or maybe links that you think are relevant, join our community. You can go to Changelog.com/community, you can ping us on Twitter or Slack - all of those links are there. There’s people already discussing things in our Slack channel, so join the community. We’d love to talk to you.
Stick around for next week. On the topic of learning, we’re gonna have Jared Lander with us, who is really big in the R community. He’s gonna talk a little bit about that, but even more we’ve kind of asked him to give us a little bit of an overview of the landscape of AI techniques and how certain things like deep learning fit into that… So I think that’ll be great, to hear from one of the experts in the field.
Thanks for finding some interesting stuff, and I’ll talk to you next week, Chris.
Yeah, I’ll talk to you later, Daniel. Have a good one.
Our transcripts are open source on GitHub. Improvements are welcome. 💚