GANs are at the center of AI hype. However, they are also becoming genuinely practical, and they're being used to develop solutions to real problems. Jakub Langr and Vladimir Bok join us for a deep dive into GANs and their applications. We discuss the basics of GANs, their various flavors, and open research problems.
Chris and Daniel talk with Keith Lynn, AlphaPilot Program Manager at Lockheed Martin. AlphaPilot is an open innovation challenge, developing artificial intelligence for high-speed racing drones, created through a partnership between Lockheed Martin and The Drone Racing League (DRL).
AlphaPilot challenged university teams from around the world to design AI capable of flying a drone without any human intervention or navigational pre-programming. Autonomous drones will race head-to-head through complex, three-dimensional tracks in DRL’s new Artificial Intelligence Robotic Racing (AIRR) Circuit. Teams are competing for up to $2 million in prizes.
Keith shares the incredible story of how AlphaPilot got started, just prior to its debut race in Orlando, which will be broadcast on NBC Sports.
This post by Lauren Reeder of Segment goes over the different layers to consider when working with a data lake. What’s a data lake, you ask?
A data lake is a centralized repository that stores both structured and unstructured data and allows you to store massive amounts of data in a flexible, cost effective storage layer.
Her article explains what tools are needed and provides code & SQL statements to get started. 🤟
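Reeder's walkthrough uses AWS services; as a toy, stdlib-only illustration of the raw-vs-curated layering a data lake provides, here's a Python sketch that lands raw JSON events untouched and then flattens them into a queryable CSV (all paths, fields, and values are hypothetical):

```python
import csv
import json
import tempfile
from pathlib import Path

# Hypothetical lake root with a raw zone (events stored as they arrive)
# and a curated zone (flattened, schema-enforced tables).
lake = Path(tempfile.mkdtemp())
raw = lake / "raw" / "events" / "dt=2019-04-01"
curated = lake / "curated" / "events"
raw.mkdir(parents=True)
curated.mkdir(parents=True)

# Land raw events untouched -- the lake keeps the original payloads.
events = [
    {"user": "a", "type": "click", "props": {"page": "/home"}},
    {"user": "b", "type": "view", "props": {"page": "/docs"}},
]
(raw / "part-0000.json").write_text(
    "\n".join(json.dumps(e) for e in events)
)

# A batch job flattens raw JSON into the curated, tabular layer.
with open(curated / "events.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["user", "type", "page"])
    writer.writeheader()
    for line in (raw / "part-0000.json").read_text().splitlines():
        e = json.loads(line)
        writer.writerow(
            {"user": e["user"], "type": e["type"], "page": e["props"]["page"]}
        )

# Downstream consumers query the curated layer, not the raw one.
with open(curated / "events.csv", newline="") as f:
    rows = list(csv.DictReader(f))
print(rows[0]["page"])
```

In a real lake the zones would live in object storage (e.g. S3) and the "batch job" would be a Spark or Glue job, but the layering idea is the same.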
Since data science has a huge impact on today’s businesses, the demand for data science experts keeps growing. As I write this, there are 144,527 data science jobs on LinkedIn alone. Still, it’s important to keep your finger on the pulse of the industry and stay aware of the fastest and most efficient data science solutions.
Click through for key takeaways and trend analysis.
Woo hoo! As we celebrate reaching episode 50, we come full circle to discuss the basics of neural networks. If you are just jumping into AI, then this is a great primer discussion with which to take that leap.
Our commitment to making artificial intelligence practical, productive, and accessible to everyone has never been stronger, so we invite you to join us for the next 50 episodes!
This week we bend reality to expose the deceptions of deepfake videos. We talk about what they are, why they are so dangerous, and what you can do to detect and resist their insidious influence. In a political environment rife with distrust, disinformation, and conspiracy theories, deepfakes are being weaponized and proliferated as the latest form of state-sponsored information warfare. Join us for an episode scarier than your favorite horror movie, because this AI bogeyman is real!
Daniel and Chris explore three potentially confusing topics - generative adversarial networks (GANs), deep reinforcement learning (DRL), and transfer learning. Are these types of neural network architectures? Are they something different? How are they used? Well, if you have ever wondered how AI can be creative, wished you understood how robots get their smarts, or were impressed at how some AI practitioners conquer big challenges quickly, then this is your episode!
The latest machine learning research from my friends at Fast Forward Labs. Shiou Lin Sam and Nisha Muktewar teach us what meta-learners are and how they learn.
Chris and Daniel take you on a tour of local and global AI events, and discuss how to get the most out of your experiences. From access to experts to developing new industry relationships, learn how to get your foot in the door and make connections that help you grow as an AI practitioner.
Then drawing from their own wealth of experience as speakers, they dive into what it takes to give a memorable world-class talk that your audience will love. They break down how to select the topic, write the abstract, put the presentation together, and deliver the narrative with impact!
At the recent O’Reilly AI Conference in New York City, Chris met up with O’Reilly Chief Data Scientist Ben Lorica, the Program Chair for Strata Data, the AI Conference, and TensorFlow World.
O’Reilly’s ‘AI Adoption in the Enterprise’ report had just been released, so naturally Ben and Chris wanted to do a deep dive into enterprise AI adoption to discuss strategy, execution, and implications.
This week Daniel and Chris discuss the announcements made recently at TensorFlow Dev Summit 2019. They kick it off with the alpha release of TensorFlow 2.0, which features eager execution and an improved user experience through Keras, which has been integrated into TensorFlow itself. They round out the list with TensorFlow Datasets, TensorFlow Addons, TensorFlow Extended (TFX), and the upcoming inaugural O’Reilly TensorFlow World conference.
A curated list of applied machine learning and data science notebooks and libraries across different industries. The code in this repository is in Python (primarily using Jupyter notebooks) unless otherwise stated. The catalogue is inspired by awesome-machine-learning.
While attending the NVIDIA GPU Technology Conference in Silicon Valley, Chris met up with Adam Stooke, a speaker and PhD student at UC Berkeley who is doing groundbreaking work in large-scale deep reinforcement learning and robotics. Adam took Chris on a tour of deep reinforcement learning - explaining what it is, how it works, and why it’s one of the hottest technologies in artificial intelligence!
Longtime listeners know that we’re always advocating for ‘AI for good’, but this week we have taken it to a whole new level. We had the privilege of chatting with James Hodson, Director of the AI for Good Foundation, about ways they have used artificial intelligence to positively impact the world - from food production to climate change. James inspired us to find our own ways to use AI for good, and we challenge our listeners to get out there and do some good!
This is an explainer on how to build a GitHub App that predicts and applies issue labels using Tensorflow and public datasets. Hamel Husain writes:
In order to show you how to create your own apps, we will walk you through the process of creating a GitHub app that can automatically label issues. Note that all of the code for this app, including the model training steps, is located in this GitHub repository.
See also: Issue Label Bot
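Husain's app uses a TensorFlow model trained on public GitHub data; as a much smaller, self-contained sketch of the same idea (predicting a label from issue text), here's a pure-Python bag-of-words scorer with add-one smoothing. The training titles and labels below are made up for illustration:

```python
from collections import Counter, defaultdict

# Tiny, made-up training set: (issue title, label).
TRAIN = [
    ("app crashes on startup", "bug"),
    ("fix crash when file is missing", "bug"),
    ("error thrown parsing config", "bug"),
    ("add dark mode support", "feature_request"),
    ("please support csv export", "feature_request"),
    ("how do I install this?", "question"),
    ("question about the api usage", "question"),
]

def tokenize(text):
    return text.lower().split()

# Count how often each word appears under each label.
word_counts = defaultdict(Counter)
label_counts = Counter()
for title, label in TRAIN:
    label_counts[label] += 1
    word_counts[label].update(tokenize(title))

def predict(title):
    """Score each label by smoothed word overlap with its training titles."""
    def score(label):
        counts = word_counts[label]
        total = sum(counts.values())
        # Add-one smoothing so unseen words don't zero out a label.
        return sum((counts[w] + 1) / (total + 1) for w in tokenize(title))
    return max(label_counts, key=score)

print(predict("crash on file open"))  # -> bug
```

A real labeler would use learned text embeddings (as the post's TensorFlow model does), but the interface is the same: issue text in, label out.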
GIPHY’s head of R&D, Nick Hasty, joins us to discuss their recently released celebrity detector project. He gives us all of the details about that project, but he also tells us about GIPHY’s origins, AI in general at GIPHY, and more!
While at the NVIDIA GPU Technology Conference 2019 in Silicon Valley, Chris enjoyed an inspiring conversation with Anima Anandkumar. Clearly a role model - not only for women - but for anyone in the world of AI, Anima relayed how her lifelong passion for mathematics and engineering started when she was only 3 years old in India, and ultimately led to her pioneering deep learning research at Amazon Web Services, Caltech, and NVIDIA.
Google, Intel, and others have recently been targeting AI at the edge with things like Coral and the Neural Compute Stick, but NVIDIA is taking things a step further. They just announced the Jetson Nano, which is a $99 computer with 472 GFLOPS of compute performance, an integrated NVIDIA GPU, and a Raspberry Pi form factor. According to NVIDIA:
The compute performance, compact footprint, and flexibility of Jetson Nano brings endless possibilities to developers for creating AI-powered devices and embedded systems.
And it’s not only for inference (which is the main target of things like Intel’s NCS). The Jetson Nano can also handle AI model training:
since Jetson Nano can run the full training frameworks like TensorFlow, PyTorch, and Caffe, it’s also able to re-train with transfer learning for those who may not have access to another dedicated training machine and are willing to wait longer for results.
Check it out! You can pre-order now.
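To illustrate what "re-train with transfer learning" means in practice: keep a pretrained base model frozen as a feature extractor, and train only a small new head on your own data. Here's a minimal NumPy sketch of that idea - the "frozen base" is a random stand-in for a real pretrained network, and the dataset is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "frozen base": stands in for a pretrained network's feature extractor.
# Its weights are fixed and never updated during re-training.
W_base = rng.normal(size=(8, 4))

def extract_features(x):
    return np.tanh(x @ W_base)

# Toy dataset whose labels depend on the frozen features,
# so a small head can learn them.
X = rng.normal(size=(200, 8))
Z = extract_features(X)
w_true = rng.normal(size=4)
y = (Z @ w_true > 0).astype(float)

# Transfer learning step: train ONLY the new head (a logistic
# regression) on top of the frozen features.
w = np.zeros(4)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1 / (1 + np.exp(-(Z @ w + b)))  # head predictions
    grad_w = Z.T @ (p - y) / len(y)     # gradients w.r.t. head params only
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean(((Z @ w + b) > 0) == (y == 1))
print(f"head accuracy on training data: {accuracy:.2f}")
```

On a Jetson Nano the base would be a real pretrained network in TensorFlow or PyTorch, but the division of labor is the same: expensive features come precomputed, and only the lightweight head is trained on-device.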
China has committed to becoming the world leader in AI by 2030, with goals to build a domestic artificial intelligence industry worth nearly $150 billion (according to this CNN article). Prompted by these efforts, the Semantic Scholar team at the Allen AI Institute analyzed over two million academic AI papers published through the end of 2018. This analysis revealed the following:
Our analysis shows that China has already surpassed the US in published AI papers. If current trends continue, China is poised to overtake the US in the most-cited 50% of papers this year, in the most-cited 10% of papers next year, and in the 1% of most-cited papers by 2025. Citation counts are a lagging indicator of impact, so our results may understate the rising impact of AI research originating in China.
They also emphasize that US actions are making it difficult to recruit and retain foreign students and scholars, and these difficulties are likely to exacerbate the trend towards Chinese supremacy in AI research.
OpenAI, one of the largest and most influential AI research entities, was originally a non-profit. However, they just announced that they are creating a “capped-profit” entity, OpenAI LP. This capped-profit entity will supposedly help them accomplish their mission of building artificial general intelligence (AGI):
We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit—which we are calling a “capped-profit” company.
The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity. But any returns beyond that amount—and if we are successful, we expect to generate orders of magnitude more value than we’d owe to people who invest in or work at OpenAI LP—are owned by the original OpenAI Nonprofit entity.
To some this makes total sense. Others have criticized the move, arguing that it frames money as the only barrier to AGI, or that it implies OpenAI will develop AGI in a vacuum. What do you think?
Learn more about OpenAI’s mission from one of its founders in this episode of Practical AI.
The White House recently published an “Executive Order on Maintaining American Leadership in Artificial Intelligence.” In this fully connected episode, we discuss the executive order in general and criticism from the AI community. We also draw some comparisons between this US executive order and other national strategies for leadership in AI.
While covering Applied Machine Learning Days in Switzerland, Chris met El Mahdi El Mhamdi by chance, and was fascinated with his work doing AI safety research at EPFL. El Mahdi agreed to come on the show to share his research into the vulnerabilities in machine learning that bad actors can take advantage of. We cover everything from poisoned data sets and hacked machines to AI-generated propaganda and fake news, so grab your James Bond 007 kit from Q Branch, and join us for this important conversation on the dark side of artificial intelligence.
Those of you following AI related things on Twitter have probably been overwhelmed with commentary about OpenAI’s new GPT-2 language model, which is “Too Dangerous to Make Public” (according to Wired’s interpretation of OpenAI’s statements). Is this discussion frustrating or confusing for you?
Well, Ryan Lowe from McGill University has published a nice response article. He discusses the model and results in general, but also gives some perspective on the ethical implications and where the AI community should go from here. According to Lowe:
“The machine learning community really, really needs to start talking openly about our standards for ethical research release”
Claire Jaja, Manager of Data Science at TalentWorks, was curious about how many job requirements are actually required, so she analyzed job postings and resumes for more than 6,000 applications across 118 industries from the company’s database. The results are quite interesting…
Your chances of getting an interview start to go up once you meet about 40% of job requirements.
You’re not any more likely to get an interview matching 90% of job requirements compared to matching just 50%.
…these numbers are about 10% lower, i.e. women’s interview chances go up once they meet 30% of job requirements, and matching 40% of job requirements is as good as matching 90% for women.