Chris and Daniel help us wade through the week’s AI news, including open ML challenges from Intel and National Geographic, Henry Kissinger’s views on AI, and a model that can detect personality based on eye movements. They also point out some useful resources to learn more about pandas, the vim editor, and AI algorithms.
Hired – Salary and benefits upfront? Yes please. Our listeners get a double hiring bonus of $600! Or, refer a friend and get a check for $1,337 when they accept a job. On Hired companies send you offers with salary, benefits, and even equity upfront. You are in full control of the process. Learn more at hired.com/practicalai.
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.
Linode – Our cloud server of choice. Deploy a fast, efficient, native SSD cloud server for only $5/month. Get 4 months free using the code changelog2018. Start your server - head to linode.com/changelog
- RFP for National Geographic AI earth innovation
- Intel - AI Interplanetary challenge
- App that lets Alexa read sign language
- The mythos of model interpretability
- Artificial Intelligence Can Predict Your Personality By Simply Tracking Your Eyes
- Think You Know How Disruptive Artificial Intelligence Is? Think Again.
- How the Enlightenment Ends
Play the audio to listen along while you enjoy the transcript. 🎧
Hello, everyone! This is Daniel Whitenack, and I’m joined by my co-host here, Chris Benson. Today we’re gonna do another one of our news and updates shows for you and just kind of update you on some of the goings on in the AI community, some things that have caught our attention this week, and then also we’re gonna give you some more learning resources.
Again, we’re trying to make AI practical for you, so getting some of those learning resources out I think is super useful, and I know I’ve already appreciated getting some of those links from Chris.
I’ll kind of start us out this week, is that alright, Chris?
Absolutely. Go for it, Daniel.
Awesome. Well, I saw a couple of things for you guys out there – like maybe Kaggle challenges, or other challenges, that sort of thing… There were a couple of challenges or RFPs that drew my attention this week. The first is this AI for Earth request for proposals from National Geographic.
First of all, it was interesting to me that National Geographic was putting out a request for proposals related to AI, which I think is super cool… But also, it’s a big passion for me in terms of sustainability and the environment, and I’ve really been interested to see more applications of AI in this space. So if you’re at all interested in the environment and using AI for good in that sense, definitely check out this link. I think you have until maybe October to submit proposals.
It says proposals can request up to 200k, maybe if you’re part of a research organization, or maybe you’re a grad student, or something… It might be a good link for you.
The other one is Intel AI is putting on this AI Interplanetary Challenge, which sounds pretty epic.
[04:00] The sub-heading is Super Explorer Mission, which is a lot of great words there… But essentially, in my understanding, this is a way to solicit proposals for space-related applications of AI, and I think if you win, then you get a lunch with Bill Nye, and some other people. This is a super-fun one, and maybe less of a barrier than the National Geographic one in terms of expectations. I think this one would be a good one for everyone to explore.
Those are pretty cool. I was pretty excited to see both of those.
So I ran across an article in Neuroscience News entitled “Artificial intelligence can predict your personality by simply tracking your eyes.”
I know, I know… That caught my attention because, you know, going back to past conversations - you know, how kind of invasive AI could become in certain use cases… So I read that, and there’s a university - it’s the University of South Australia - that had done this process where they had 42 people that participated in this, and they gave them personality surveys; apparently, it was one of the standard personality surveys (I’m not familiar with which one) that kind of covers all aspects…
They actually monitored their eye movements… Not in the lab, but they apparently wore a device and went around through their daily lives, and it ended up tying together the way you use your eyes and the types of movements that you have with your personality and how you might behave in certain scenarios, compared to other people… Which is a little bit creepy.
In the last news episode we talked about the law enforcement or government monitoring using different types of AI techniques… So this caught that morbid fascination for me in terms of that thought.
It was very interesting; they didn’t take it farther than that, maybe fortunately… I got to the end of the article and kind of wiped my brow in relief. I just thought I’d pass that on… We can put the link in the show notes, in the “Interesting, but slightly creepy” category.
Yes, definitely in that category. It’s funny to me, because now it’s like, well, we can’t use Facebook data anymore post-Cambridge Analytica and GDPR and all that stuff, but maybe there’s hope for the creepy personality detectors out there using webcam data, or something like that. That’s pretty interesting though, I have to say.
The next one that I found - it was a Fast Company article, and I think this is just, like, awesome… You know I have a passion for applying AI for good, and this article highlights what they call a creative coder - I actually don’t know his association; I think this was kind of a hobby project for him (correct me if I’m wrong in our community on Slack). His name is Abhishek Singh (sorry if I pronounced that wrong).
He basically built a sign language interface to the Amazon Alexa API, or the Amazon Alexa, I should say. I think this is awesome. It’s, you know, making this tech accessible to a whole other community that was totally left out of that technology before. He basically has this setup to where it will actually – there’s a camera that’s watching you do sign language, and you can sign-language something. It’s interpreted to text, which I think is sent to Alexa via their API, and then you get the response.
[08:03] This is, I think, just super-cool… I mean, not even in the realm of smart speakers, but in the realm of making more tech like this accessible to people with disabilities… Like, maybe they’re deaf, or they need to use sign language… I know that there’s been other AI applied in a similar fashion for blind parents, helping them understand their environment for maybe their seeing kids.
I just think this stuff in this category is super-useful, and just an encouragement from my end to any of you out there who are kind of exploring how to apply AI and what projects you might work on. I encourage you to at least consider doing something in this realm, if there’s a way for you to do it and there’s time for you to do it… It’s awesome to see this.
First of all, I love that application of it. It’s a fairly obvious one, but it does so much good and I think there are so many other opportunities for similar applications, whether it be Alexa or on other platforms… And in general, I definitely join with you on the aspiration of using these tools in AI to do good. I am actively looking at using AI for animal advocacy causes that I’m so passionate about, so maybe in a future episode we can talk a little bit about how we get into that in terms of our aspirations for AI + good, so… I’m looking forward to that conversation.
Yeah, that’d be great. We’ll have to arrange a Twitch stream where we live-code some example.
Excellent! Okay, so one of the things that I have been talking about a lot with people lately is how AI is impacting digital transformation, and it’s changed the nature of it. That seems to become a more and more popular thing, for people to try to understand the implications of.
I ran across a Forbes article that’s entitled “You think you know how disruptive artificial intelligence is? Think again.” The basic idea there - they’re kind of saying people talk about job displacement, and automation, stuff like that, but really, the effect of AI over time is really gonna be driving digital transformation throughout organizations. They kind of finish up with the idea of “It’s not about a job, it’s about how an entire business is set up, and how it achieves its function and how it serves its customers”, and they describe it as “Digital Transformation 2.0, rise of the fully-automated business.”
Beyond the article itself, I just find this a really fascinating topic, and not only in the way it reshapes technology, but in the way that it’s reshaping business itself. Some jobs are automated away, but totally new jobs are coming into existence, and as you combine these technologies with the humans that make up the business, how do you organize all of that together going forward to best serve the customer need?
I’m seeing more and more of these types of articles, and probably we’ll continue to share some… But I think the intrinsic change that business is now entering will be a pretty interesting topic for us for some time to come.
Yeah, that’s great, and I am thinking about – I know next week we’re gonna have Mike Bugembe join us, who I’ve talked to before about how he kind of changed in essence a lot of his company’s perception around how decisions are made, and thinking about that in terms of data, and this whole new realm of artificial intelligence and algorithms… So I’m excited to hear his perspective on some of those things, and think that will be really good.
Yeah. The last one that I wanted to draw people’s attention to was this article titled “The mythos of model interpretability.” I know I’ve talked to a lot of different people and we’ve even talked on this show before, I think with the guest from Immuta about what really is model interpretability.
I think there’s a lot of people that are skeptical about this idea of model interpretability, but I think that this article really kind of – it’s a pretty long article (I’ll kind of give that context), but it dives a lot into details about how we think about model interpretability, where it comes up in our decision-making, and why we should be thinking about interpretability, maybe where we shouldn’t be thinking about interpretability…
I love certain statements, like “An interpretation may prove informative even without shedding light on a model’s inner workings.”
There’s a lot of great perspective here I think about kind of stepping back from all of these discussions around model interpretability and looking at that field and that idea as a whole. I definitely recommend reading through that, especially in light of a lot of things coming out, like GDPR, which we’ve talked about on another episode, which has connections to model interpretability. We all need to understand a little bit more about that, so I’d recommend this article.
I’m looking forward to reading that. After we stop recording, that’s my next thing. My final article that I wanted to draw your attention to goes back to a topic we’ve alluded to a little bit, but it was really who wrote it that caught my attention. It was in The Atlantic, and it’s called “How the enlightenment ends.” It’s going down the dark path about the dangers of AI to humanity… And I know there are lots of different perspectives on that, from different people, but it was written by Henry Kissinger…
Yeah, and for those who may be younger in our audience and aren’t familiar with him - once upon a time, Henry Kissinger, who’s now a very old man, was one of the world’s premier guys in terms of diplomacy, his expertise in foreign affairs and such was just world-renowned. He was our Secretary of State, I believe, back in the Nixon era – he opened up China back in that day… And he had a company ever since, that was one of the top companies in the world in this space.
Even though he is not a technologist by any stretch, he is a brilliant thinker… He kind of starts off the article saying that he was almost about to walk out of this talk; they turned toward AI and he didn’t have any particular interest, but he happened to catch the beginning of it and it started him thinking. He sat through the rest of that presentation, and then he started going to many of the world’s top AI luminaries and asking them their thoughts in different ways.
He has really landed personally in the same space as Elon Musk and others who are warning us of the dangers in the long-term to humanity… He kind of walks through a process that really spans a historical narrative starting with the Enlightenment, roughly 300 years ago, and talks about how humans have developed technically through that period, and where he thinks AI will go.
He ends in a very dark place. It’s a cautionary note that basically says “Let’s be very careful in terms of in the years ahead, as new AI develops, how we implement that AI.” There are many articles similar to this out there, where people are warning us of such things, but it really – like I said, Henry Kissinger is one of the greats of the 20th and early 21st century, and certainly a great living thinker today… That made me pause a little bit, and as someone who myself tends to celebrate AI in all its possibilities going forward, I do give a little bit of thought to Mr. Kissinger’s perspective there. Any thoughts, Daniel?
[16:17] Yeah, I’m glad you pointed it out; I’m looking forward to reading it. I kind of wonder – I mean, it isn’t really the case with Elon Musk necessarily, but I think there’s this kind of balance between the people who think AI can currently do more than it actually can, who kind of hype up AI to thinking “Oh, it’s gonna do all these amazing things”, and other people, who kind of go down the darker path, like you were saying.
At least in my opinion, in reality, I think we’re at a point where our expectations around AI need to be moderated in a certain way… But I also appreciate the fact that we as practitioners of AI need to be understanding how the influencers in our world are thinking about AI, and also how we as AI practitioners can better communicate and impress upon them the proper expectations around what AI can do, and the proper way to go about thinking about AI ethics and where we should – you know, obviously, that is an important thing that I don’t wanna shy away from, but I think it also has to be kind of wrapped in this cloak of proper expectations.
So yeah, it’s very interesting.
I agree. In general, I think I would agree that the current state of deep learning and AI technologies today doesn’t feel very threatening to me. There are certainly use cases - we talked about the Chinese government identifying people, and stuff, but it doesn’t have that… I leave a tiny door open in the back of my mind to some future development in AI that’s beyond where we’re at today, well beyond where we’re at today in terms of what could happen decades or even centuries down the road… But I think we get far outside of the practical when we get to that. So I absolutely agree with you, that the reality check is pretty important - what’s possible today, and then in the foreseeable future.
Yeah, the human element here is really important. I was just having a conversation with someone on a Slack channel about “Does AI have morality?” and my thought around that was – I mean, similar to other tech, I think the morality of the creators is what infuses any sort of morality in the technology, in the same way that a certain technology can be used for good, to automate emails and all of those things, but it also could be maybe used for bad, in like phishing scams and all of those things… That really comes back to the root in the creators of that technology.
We need people thinking about this and pushing us, and we also need people with a head looking towards the ethics of what we’re doing as the creators of AI, which is especially a technology which has kind of a more subtle infusion of the creator’s morality and fairness and bias into it than maybe other technologies.
I agree, and I guess I’ll finish by saying, as we’ve touched on ethics again, in an upcoming episode we will have an ethics expert relating to AI on, and so that will be a good surprise coming.
[19:53] That’s a much-anticipated episode. We’ve already had a lot of requests for that. So now we’ll kind of go – that was the news that caught our attention over the week. Definitely let us know in our Slack channel - you can join us on Changelog.com/community and join our Slack channel and let us know what news articles you’re finding interesting from the week, related to AI. But to finish off, as always, we wanna give you a few learning resources to help level up your skills in Practical AI, and maybe help you be more productive as practitioners of AI, or maybe learners or students of AI.
One of those that I’ve found this week was this article called “Fast, Flexible, Easy and Intuitive”, which are all good things, I guess…
Yeah, I know… So I’ve been guilty in the past of maybe slamming Pandas on a few occasions…
And just to clarify, we’re not talking about the animals, right?
Right. This is the Python package called Pandas, which is a kind of data munging and manipulation package that kind of organizes data into what’s called data frames, and series, and other things.
I just wanted to save you the hate mail on that, sorry.
Yeah, I appreciate that. I have nothing wrong with pandas in general, and actually I have nothing bad to say about the Pandas package either. It’s amazing, and I use it most days, I think; I love it.
I think I’ve been guilty a little bit in the past of probably using poor Pandas skills or patterns, and blaming the slowness or the lack of good results in terms of performance on Pandas, when it’s actually been my kind of poor use of Pandas.
I think this article lays out some good patterns that you can use when you’re selecting data, when you’re looping through data, when you’re working with time data, and other things… I still don’t think Pandas is, you know, obviously, right for every single use case, but I think it’s incredibly powerful; just an amazing project, and I think this gives you some good patterns to use with it.
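To give a flavor of the kind of patterns being discussed – vectorized column operations and datetime selection instead of row-by-row loops – here’s a tiny sketch. The data and column names are made up for illustration; they’re not from the article.

```python
import pandas as pd

# Hypothetical energy-usage data, purely illustrative
df = pd.DataFrame({
    "date_time": pd.to_datetime([
        "2018-01-01 00:00", "2018-01-01 13:00", "2018-01-01 19:00",
    ]),
    "energy_kwh": [0.58, 0.64, 0.72],
})

# Anti-pattern: looping row by row, e.g. with df.iterrows()
# cost = [row["energy_kwh"] * 12 for _, row in df.iterrows()]

# Idiomatic pattern: one vectorized operation over the whole column
df["cost_cents"] = df["energy_kwh"] * 12

# Idiomatic time-data selection: use the .dt accessor on datetime columns
evening = df[df["date_time"].dt.hour >= 17]
```

The vectorized versions run in optimized C under the hood, which is typically where the “Pandas is slow” complaints evaporate.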
Sounds good. I’m looking forward to that one. I ran into an article this week that was on Medium, actually, called “An introduction to Gradient Descent Algorithm.” It was by a lady named Sara Iris Garcia. We’ll put a link in the show notes to her post.
She basically talks about Gradient Descent, which finds the parameters that minimize the cost function - that is, the error in prediction - and she kind of takes you through what a gradient is, and then talks about the learning rate associated with that gradient, and talks about what big learning rates versus small learning rates do, and what the implications of those are in your training, and then continues on with a working example, and talks about the various steps in gradient descent, and some of the variants to that.
The reason this drew me in was Gradient Descent is really one of the very first things you learn when you step into the world of deep learning, and if you’re new to the field, you may not be familiar with it, and you may need to ramp up… Some of us who have been in this for a while kind of take it for granted, but it’s one of those fundamental building blocks that you need to learn in those early days… So I wanted to put this article out there, so that people could get a start here, especially considering how well she puts the introduction together.
Awesome. Yeah, that’s a great resource. The last one that I have - I think you have one more, but I found this link to a newly-released package of eBooks… But one eBook particularly focused on Vim. The editor Vim, which if you’re in a terminal on some UNIX machine, you can use Vim to edit various code, or text documents, or whatever it might be.
[24:13] I actually use Vim as my primary code editor, and I definitely feel like I have not mastered Vim. I know a lot of people give Vim a hard time because you get into it and then you can’t figure out how to get out of it, or whatever other jokes you might have about Vim… But I think it’s useful for everybody to learn a little bit about Vim because maybe you’re SSH-ing into a machine where you’re running a cron job, or whatever it might be, and you wanna be able to edit some script or something on the machine in a quick way, right there in the terminal. Vim is a great choice for that, even if you don’t use it as your primary editor, like I do… Which you should.
I won’t get into that… But I think this is definitely for people that maybe struggle when they’re SSH-ing into a machine and they wanna modify stuff - this is a great resource to kind of level up your skills on that front, and be a little bit more effective in that way.
I wish that I was using Vim as my primary editor, and for years–
…I keep trying to, and then of course I run into a situation where I get frustrated and I roll back to one of the other editors out there. But I keep trying, and certainly when I SSH in, it’s what I’m using. So maybe this is my path forward, Daniel.
Yeah. Well, I’ve definitely got a ways to go. I know that some Vim masters probably cringe when they watch me scroll through various parts of the document in a non-efficient way, so… I’m looking forward to learning a few things here, too.
Okay, well I’m definitely gonna dive into that one. So the last thing that I am introducing today for learning is O’Reilly has an article called “Introducing capsule networks.” To give people a quick background, capsule networks are I guess an invention by Geoffrey Hinton, who is one of the luminaries in the deep learning world, and it is what you might think of as an alternative to convolutional neural networks.
[26:25] It’s a really hot topic right now, there’s a lot of interest in it, but what this article does is it kind of takes you through CapsNets (which is what they’re called for short), and it differentiates them with convolutional neural networks, and talks about some of the different ways and places that you might use them, it talks about the differences in architecture and approach, strengths and weaknesses, and kind of gives you a thorough introduction, so that if this feels like it’s one of the architectures that you’re interested in for your use case, you can then take it forward and learn more about it.
I’ve been looking for a really good intro to this, and I thought this was a good way of dipping your toe into it and deciding if it’s something that you wanna do further. Any thoughts on capsule nets?
Awesome. My only comment is that I haven’t gone through the article yet, but it looks like there’s some really great figures in there to kind of help visually walk through some of the concepts. I think if you’re interested in this subject, it might be a good starting place to jump off from… So definitely take a look at that.
Great. Well, I appreciated all the stuff you’ve found this week, Chris. As always, it’s an exciting week in AI, and I’m excited to talk to you next week to interview Mike Bugembe. So we’ll talk to you next week!
Sounds good, Daniel. Have a good one, and talk to everyone later!
Our transcripts are open source on GitHub. Improvements are welcome. 💚