Bharat Sandhu, Director of Azure AI and Mixed Reality at Microsoft, joins Chris and Daniel to talk about how Microsoft is making AI accessible and productive for users, and how AI solutions can address real world challenges that customers face. He also shares Microsoft’s research-to-product process, along with the advances they have made in computer vision, image captioning, and how researchers were able to make AI that can describe images as well as people do.
Lucy D’Agostino McGowan, cohost of the Casual Inference Podcast and a professor at Wake Forest University, joins Daniel and Chris for a deep dive into causal inference. Referring to current events (e.g. misreporting of COVID-19 data in Georgia) as examples, they explore how we interact with, analyze, trust, and interpret data - addressing underlying assumptions, counterfactual frameworks, and unmeasured confounders (Chris’s next Halloween costume).
What’s linked is the official PyTorch implementation of a paper published in April of this year called Bringing Old Photos Back to Life.
We propose to restore old photos that suffer from severe degradation through a deep learning approach. Unlike conventional restoration tasks that can be solved through supervised learning, the degradation in real photos is complex and the domain gap between synthetic images and real old photos makes the network fail to generalize. Therefore, we propose a novel triplet domain translation network by leveraging real photos along with massive synthetic image pairs. Specifically, we train two variational autoencoders (VAEs) to respectively transform old photos and clean photos into two latent spaces.
The results are impressive!
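The two-VAE pipeline described in the abstract can be sketched in a few lines. This is purely illustrative: the linear maps, dimensions, and the `translate` step stand in for the paper's actual encoder/decoder networks and latent-space mapping, which are learned deep models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's two VAEs: each "encoder"/"decoder"
# here is a single random linear map. Dimensions are hypothetical.
D_IMG, D_LAT = 16, 4  # illustrative image and latent dimensions

enc_old = rng.standard_normal((D_LAT, D_IMG)) * 0.1    # VAE-A encoder (old photos)
dec_clean = rng.standard_normal((D_IMG, D_LAT)) * 0.1  # VAE-B decoder (clean photos)
translate = rng.standard_normal((D_LAT, D_LAT)) * 0.1  # learned latent-to-latent map

def restore(old_photo):
    """Old photo -> latent space A -> translated latent B -> clean photo."""
    z_old = enc_old @ old_photo   # encode the degraded photo
    z_clean = translate @ z_old   # map between the two latent spaces
    return dec_clean @ z_clean    # decode with the clean-photo VAE

old = rng.standard_normal(D_IMG)
restored = restore(old)
print(restored.shape)  # (16,)
```

The key idea the sketch preserves is that degradation is removed by translating *between latent spaces*, rather than mapping pixels to pixels directly.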
Daniel and Chris do a deep dive into The AI Index 2019 Annual Report, which provides unbiased, rigorously vetted data that one can use “to develop intuitions about the complex field of AI”. Analyzing everything from R&D and technical advancements to education, the economy, and societal considerations, Chris and Daniel lay out this comprehensive report’s key insights about artificial intelligence.

Here’s a new acronym for you: Generative Teaching Networks (GTNs)
GTNs are deep neural networks that generate data and/or training environments on which a learner (e.g., a freshly initialized neural network) trains before being tested on a target task (e.g., recognizing objects in images). One advantage of this approach is that GTNs can produce synthetic data that enables other neural networks to learn faster than when training on real data. That allowed us to search for new neural network architectures nine times faster than when using real data.
Fake data, real results? Sounds pretty slick.
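The loop described above can be illustrated with a toy example: a "generator" emits a small synthetic dataset, a fresh learner trains only on that data, and the learner is then evaluated on real data. Everything here is a hypothetical stand-in; in particular, a real GTN *learns* the generator via meta-gradients, an outer loop omitted from this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_synthetic(n_per_class=5):
    """Hand-crafted stand-in for a GTN generator: one noisy prototype per class."""
    protos = np.array([[-1.0, -1.0], [1.0, 1.0]])  # class 0 and class 1
    X, y = [], []
    for label, p in enumerate(protos):
        X.append(p + 0.1 * rng.standard_normal((n_per_class, 2)))
        y += [label] * n_per_class
    return np.vstack(X), np.array(y)

def train_learner(X, y):
    """Freshly initialized learner: a nearest-centroid classifier."""
    return np.vstack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# "Real" target task: points drawn from the same two clusters, with more noise.
X_real = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y_real = np.array([0] * 50 + [1] * 50)

centroids = train_learner(*generate_synthetic())
acc = (predict(centroids, X_real) == y_real).mean()
print(acc)  # high accuracy despite never training on real data
```

The learner never sees a real example during training, yet performs well on the real task — the same property that lets GTNs speed up architecture search.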
Chris and Daniel talk with Greg Allen, Chief of Strategy and Communications at the U.S. Department of Defense (DoD) Joint Artificial Intelligence Center (JAIC). The mission of the JAIC is “to seize upon the transformative potential of artificial intelligence technology for the benefit of America’s national security… The JAIC is the official focal point of the DoD AI Strategy.” So if you want to understand how the U.S. military thinks about artificial intelligence, then this is the episode for you!
Colin Lecher reporting for The Verge:
Last week, Gov. Gavin Newsom signed into law AB 730, which makes it a crime to distribute audio or video that gives a false, damaging impression of a politician’s words or actions.
While the word “deepfake” doesn’t appear in the legislation, the bill clearly takes aim at doctored works. Lawmakers have raised concerns recently that distorted deepfake videos, like a slowed video of House Speaker Nancy Pelosi that appeared over the summer, could be used to influence elections in the future.
This is the first (but likely not the last) piece of legislation aimed at fighting the potential impact of GANs Gone Wild.
It’ll be interesting to watch this game play out. I think the only long-term, sustainable solution will emerge from the same arena where the problem began: technological advances.
Chris and Daniel talk with Keith Lynn, AlphaPilot Program Manager at Lockheed Martin. AlphaPilot is an open innovation challenge, developing artificial intelligence for high-speed racing drones, created through a partnership between Lockheed Martin and The Drone Racing League (DRL).
AlphaPilot challenged university teams from around the world to design AI capable of flying a drone without any human intervention or navigational pre-programming. Autonomous drones will race head-to-head through complex, three-dimensional tracks in DRL’s new Artificial Intelligence Robotic Racing (AIRR) Circuit. The winning team could take home up to $2 million in prizes.
Keith shares the incredible story of how AlphaPilot got started, just prior to its debut race in Orlando, which will be broadcast on NBC Sports.
Woo hoo! As we celebrate reaching episode 50, we come full circle to discuss the basics of neural networks. If you are just jumping into AI, then this is a great primer discussion with which to take that leap.
Our commitment to making artificial intelligence practical, productive, and accessible to everyone has never been stronger, so we invite you to join us for the next 50 episodes!
This week we bend reality to expose the deceptions of deepfake videos. We talk about what they are, why they are so dangerous, and what you can do to detect and resist their insidious influence. In a political environment rife with distrust, disinformation, and conspiracy theories, deepfakes are being weaponized and proliferated as the latest form of state-sponsored information warfare. Join us for an episode scarier than your favorite horror movie, because this AI bogeyman is real!
This isn’t just an awesome list of resources. The repo IS the resource! If you click through and the content is a bit too dense, start with the latest episode of Practical AI where Daniel and Chris explain many of these concepts in detail.
Daniel and Chris explore three potentially confusing topics - generative adversarial networks (GANs), deep reinforcement learning (DRL), and transfer learning. Are these types of neural network architectures? Are they something different? How are they used? Well, if you have ever wondered how AI can be creative, wished you understood how robots get their smarts, or were impressed at how some AI practitioners conquer big challenges quickly, then this is your episode!
The latest machine learning research from my friends at Fast Forward Labs. Shioulin Sam and Nisha Muktewar teach us what meta-learners are and how they learn.
Chris and Daniel take you on a tour of local and global AI events, and discuss how to get the most out of your experiences. From access to experts to developing new industry relationships, learn how to get your foot in the door and make connections that help you grow as an AI practitioner.
Then drawing from their own wealth of experience as speakers, they dive into what it takes to give a memorable world-class talk that your audience will love. They break down how to select the topic, write the abstract, put the presentation together, and deliver the narrative with impact!
At the recent O’Reilly AI Conference in New York City, Chris met up with O’Reilly Chief Data Scientist Ben Lorica, the Program Chair for Strata Data, the AI Conference, and TensorFlow World.
O’Reilly’s ‘AI Adoption in the Enterprise’ report had just been released, so naturally Ben and Chris wanted to do a deep dive into enterprise AI adoption to discuss strategy, execution, and implications.
This week Daniel and Chris discuss the announcements made recently at TensorFlow Dev Summit 2019. They kick it off with the alpha release of TensorFlow 2.0, which features eager execution and an improved user experience through Keras, which has been integrated into TensorFlow itself. They round out the list with TensorFlow Datasets, TensorFlow Addons, TensorFlow Extended (TFX), and the upcoming inaugural O’Reilly TensorFlow World conference.
While attending the NVIDIA GPU Technology Conference in Silicon Valley, Chris met up with Adam Stooke, a speaker and PhD student at UC Berkeley who is doing groundbreaking work in large-scale deep reinforcement learning and robotics. Adam took Chris on a tour of deep reinforcement learning - explaining what it is, how it works, and why it’s one of the hottest technologies in artificial intelligence!
Longtime listeners know that we’re always advocating for ‘AI for good’, but this week we have taken it to a whole new level. We had the privilege of chatting with James Hodson, Director of the AI for Good Foundation, about ways they have used artificial intelligence to positively impact the world - from food production to climate change. James inspired us to find our own ways to use AI for good, and we challenge our listeners to get out there and do some good!
GIPHY’s head of R&D, Nick Hasty, joins us to discuss their recently released celebrity detector project. He gives us all of the details about that project, but he also tells us about GIPHY’s origins, AI in general at GIPHY, and more!
While at the NVIDIA GPU Technology Conference 2019 in Silicon Valley, Chris enjoyed an inspiring conversation with Anima Anandkumar. Clearly a role model, not only for women but for anyone in the world of AI, Anima relayed how her lifelong passion for mathematics and engineering started when she was only 3 years old in India, and ultimately led to her pioneering deep learning research at Amazon Web Services, Caltech, and NVIDIA.
GIPHY is proud to release our custom machine learning model that is able to discern over 2,300 celebrity faces with 98% accuracy. The model was trained to identify the most popular celebs on GIPHY, and can identify and make predictions for multiple faces across a sequence of images, like GIFs and videos.
The White House recently published an “Executive Order on Maintaining American Leadership in Artificial Intelligence.” In this fully connected episode, we discuss the executive order in general and criticism from the AI community. We also draw some comparisons between this US executive order and other national strategies for leadership in AI.
While covering Applied Machine Learning Days in Switzerland, Chris met El Mahdi El Mhamdi by chance, and was fascinated with his work doing AI safety research at EPFL. El Mahdi agreed to come on the show to share his research into the vulnerabilities in machine learning that bad actors can take advantage of. We cover everything from poisoned data sets and hacked machines to AI-generated propaganda and fake news, so grab your James Bond 007 kit from Q Branch, and join us for this important conversation on the dark side of artificial intelligence.
Person re-identification (re-ID) can be viewed as an image retrieval problem. The emergence of this task can be attributed to 1) the increasing demand for public safety and 2) the widespread deployment of large camera networks in theme parks, university campuses, streets, etc. Both causes make it extremely expensive to rely solely on brute-force human labor to accurately and efficiently spot a person-of-interest or to track a person across cameras.
Based on PyTorch.
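Framing re-ID as retrieval means: embed a query image of a person, then rank a gallery of embeddings by similarity. A minimal sketch of that ranking step, with random placeholder embeddings standing in for the output of a trained feature extractor (dimensions and identities are illustrative):

```python
import numpy as np

def rank_gallery(query, gallery):
    """Return gallery indices sorted from most to least similar (cosine)."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q              # cosine similarity of each gallery image to the query
    return np.argsort(-sims)  # best match first

rng = np.random.default_rng(7)
query = rng.standard_normal(128)                        # embedding of the query image
gallery = rng.standard_normal((10, 128))                # embeddings from other cameras
gallery[3] = query + 0.05 * rng.standard_normal(128)    # same person, different camera

ranking = rank_gallery(query, gallery)
print(ranking[0])  # 3 - the matching identity tops the ranking
```

In a real system, the hard part is the learned extractor that makes embeddings of the same person land close together across cameras; the retrieval step itself stays this simple.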
Chris caught up with Jennifer Marsman, Principal Engineer on the AI for Earth team at Microsoft, right before her speech at Applied Machine Learning Days 2019 in Lausanne, Switzerland. She relayed how the team came into being, what they do, and some of the good deeds they have done for Mother Earth. They are giving away $50 million (US) in grants over five years! It was another excellent example of AI for good!