Streamlit recently burst onto the scene with their intuitive, open source solution for building custom ML/AI tools. It allows data scientists and ML engineers to rapidly build internal or external UIs without spending time on frontend development. In this episode, Adrien Treuille joins us to discuss ML/AI app development in general and Streamlit. We talk about the practicalities of working with Streamlit along with its seemingly instant adoption by AI2, Stripe, Stitch Fix, Uber, and Twitter.
There’s a lot of hype about knowledge graphs and AI methods for building or using them, but what exactly is a knowledge graph? How is it different from a database or other data store? How can I build my own knowledge graph? James Fletcher from Grakn Labs helps us understand knowledge graphs in general and some practical steps towards creating your own. He also discusses graph neural networks and the future of graph-augmented methods.
Everyone is talking about it. OpenAI trained a pair of neural nets that enable a robot hand to solve a Rubik’s cube. That is super dope! The results have also generated a lot of commentary and controversy, mainly related to the way in which the results were represented on OpenAI’s blog. We dig into all of this on today’s Fully Connected episode, and we point you to a few places where you can learn more about reinforcement learning.
What’s the most practical of practical AI things? Data labeling of course! It’s also one of the most time-consuming and error-prone processes that we deal with in AI development. Michael Malyuk of Heartex and Label Studio joins us to discuss various data labeling challenges and open source tooling to help us overcome those challenges.
Time series data is everywhere! I mean, seriously, try to think of some data that isn’t a time series. You have stock prices and weather data, which are the classics, but you also have a time series of images on your phone, time series log data coming off of your servers, and much more. In this episode, Anais from InfluxData helps us understand the range of methods and problems related to time series data. She also gives her perspective on when statistical methods might perform better than neural nets or at least be a more reasonable choice.
We’ve mentioned ML/AI in the browser and in JS a bunch on this show, but we haven’t done a deep dive on the subject… until now! Victor Dibia helps us understand why people are interested in porting models to the browser and how people are using the functionality. We discuss TensorFlow.js and some applications built with it.
The United States has blacklisted several Chinese AI companies working in facial recognition and surveillance. Why? What are these companies doing exactly, and how does this fit into the international politics of AI? We dig into these questions and attempt to do some live fact finding in this episode.
Chris and Daniel talk with Keith Lynn, AlphaPilot Program Manager at Lockheed Martin. AlphaPilot is an open innovation challenge, developing artificial intelligence for high-speed racing drones, created through a partnership between Lockheed Martin and The Drone Racing League (DRL).
AlphaPilot challenged university teams from around the world to design AI capable of flying a drone without any human intervention or navigational pre-programming. Autonomous drones will race head-to-head through complex, three-dimensional tracks in DRL’s new Artificial Intelligence Robotic Racing (AIRR) Circuit. The winning team could win up to $2 million in prizes.
Keith shares the incredible story of how AlphaPilot got started, just prior to its debut race in Orlando, which will be broadcast on NBC Sports.
Chris and Daniel take some time to cover recent trends in AI and some noteworthy publications. In particular, they discuss the increasing AI momentum in the majority world (Africa, Asia, South and Central America, and the Caribbean), and they dig into Hugging Face’s recent model distillation results.
The All Things Open conference is happening soon, and we snagged one of their speakers to discuss open source and AI. Samuel Taylor talks about the essential role that open source is playing in AI development and research, and he gives us some tips on choosing AI-related side projects.
In this very special Fully Connected episode of Practical AI, Daniel interviews Chris. They discuss High Performance Computing (HPC) and how it is colliding with the world of AI. Chris explains how HPC differs from cloud/on-prem infrastructure, and he highlights some of the challenges of an HPC-based AI strategy.
We’re talking with Sherol Chen, a machine learning developer, about AI at Google and AutoML methods. Sherol explains how the various AI groups within Google work together and how AutoML fits into that puzzle. She also explains how to get started with AutoML step-by-step (this is “practical” AI after all).
Redis is an open source, in-memory data structure store, widely used as a database, cache, and message broker. It now also supports tensor data types and deep learning models via the RedisAI module. Why did they build this module? Who is or should be using it? We discuss this and much more with Pieter Cailliau.
Chris and Daniel take the opportunity to catch up on some recent AI news. Among other things, they discuss the increasing impact of AI on studies of the ancient world and “good” uses of GANs. They also provide some more learning resources to help you level up your AI and machine learning game.
We’re talking with Joel Grus, author of Data Science from Scratch, 2nd Edition, senior research engineer at the Allen Institute for AI (AI2), and maintainer of AllenNLP. We discussed Joel’s book, which has become a personal favorite of the hosts, and why he decided to approach data science and AI “from scratch.” Joel also gives us a glimpse into AI2, an introduction to AllenNLP, and some tips for writing good research code. This episode is packed full of reproducible AI goodness!
Woo hoo! As we celebrate reaching episode 50, we come full circle to discuss the basics of neural networks. If you are just jumping into AI, then this is a great primer discussion with which to take that leap.
Our commitment to making artificial intelligence practical, productive, and accessible to everyone has never been stronger, so we invite you to join us for the next 50 episodes!
This week we bend reality to expose the deceptions of deepfake videos. We talk about what they are, why they are so dangerous, and what you can do to detect and resist their insidious influence. In a political environment rife with distrust, disinformation, and conspiracy theories, deepfakes are being weaponized and proliferated as the latest form of state-sponsored information warfare. Join us for an episode scarier than your favorite horror movie, because this AI bogeyman is real!
Interpreting complicated models is a hot topic. How can we trust and manage AI models that we can’t explain? In this episode, Janis Klaise, a data scientist with Seldon, joins us to talk about model interpretation and Seldon’s new open source project called Alibi. Janis also gives some of his thoughts on production ML/AI and how Seldon addresses related problems.
Daniel and Chris explore three potentially confusing topics: generative adversarial networks (GANs), deep reinforcement learning (DRL), and transfer learning. Are these types of neural network architectures? Are they something different? How are they used? Well, if you have ever wondered how AI can be creative, wished you understood how robots get their smarts, or were impressed at how some AI practitioners conquer big challenges quickly, then this is your episode!