Chris Benson

Chris Benson is Principal Artificial Intelligence Strategist at Lockheed Martin. He came to Lockheed Martin from Honeywell SPS, where he was Chief Scientist for Artificial Intelligence & Machine Learning. Chris built and operationalized Honeywell’s first dedicated AI team from the ground up. Before that he was on the AI Team at Accenture.

As a strategist and thought leader, Chris is among the world’s most in-demand professional keynote speakers on artificial intelligence, machine learning, emerging technologies, and visionary futurism. His inspirational keynotes are known for their passion, energy, and clarity. He is a seasoned storyteller who delights in captivating his audiences with inspiring narratives and insightful analysis at conferences, broadcasts, interviews, forums, and corporate events around the world.

Chris is an innovative, hands-on solutions architect for artificial intelligence and machine learning, and for the emerging technologies they intersect with: robotics, IoT, augmented reality, blockchain, mobile, edge, and cloud.

He is Co-Host of the Practical AI podcast, which reaches thousands of AI enthusiasts each week, and is also the Founder & Organizer of the Atlanta Deep Learning Meetup - one of the largest AI communities in the world.

Chris and his family are committed animal advocates who are active in animal rescue, and strive to make strategic improvements on specific animal welfare issues through advocacy for non-partisan, no-kill, and vegan legislation and regulation.

Chris Benson’s opinions are his own.

https://chrisbenson.com

Practical AI #176

MLOps is NOT Real

We all hear a lot about MLOps these days, but where does MLOps end and DevOps begin? Our friend Luis from OctoML joins us in this episode to discuss treating AI/ML models as regular software components (once they are trained and ready for deployment). We get into topics including optimization on various kinds of hardware and deployment of models at the edge.

Practical AI #171

Clothing AI in a data fabric

What happens when your data operations grow to Internet-scale? How do thousands or millions of data producers and consumers efficiently, effectively, and productively interact with each other? How are varying formats, protocols, security levels, performance criteria, and use-case specific characteristics meshed into one unified data fabric? Chris and Daniel explore these questions in this illuminating and Fully-Connected discussion that brings this new data technology into the light.

Practical AI #166

Exploring deep reinforcement learning

In addition to being a Developer Advocate at Hugging Face, Thomas Simonini is building next-gen AI in games that can talk and have smart interactions with the player using Deep Reinforcement Learning (DRL) and Natural Language Processing (NLP). He also created a Deep Reinforcement Learning course that takes a DRL beginner from zero to hero. Natalie and Chris explore what’s involved, and what the implications are, with a focus on the development path of the new AI data scientist.

Practical AI #160

Friendly federated learning 🌼

This episode is a follow-up to our recent Fully Connected show discussing federated learning. In that previous discussion, we mentioned Flower (a “friendly” federated learning framework). Well, one of the creators of Flower, Daniel Beutel, agreed to join us on the show to discuss the project (and federated learning more broadly)! The result is a really interesting and motivating discussion of ML, privacy, distributed training, and open source AI.

Practical AI #158

Zero-shot multitask learning

In this Fully Connected episode, Daniel and Chris ponder whether in-person AI conferences are on the verge of making a post-pandemic comeback. Then it’s on to BigScience from Hugging Face, a year-long research workshop on large multilingual models and datasets. Specifically, they dive into T0, a series of natural language processing (NLP) AI models trained for researching zero-shot multitask learning. Daniel provides a brief tour of what’s possible with the T0 family. They finish up with a couple of new learning resources.
