In this Fully-Connected episode, Chris and Daniel explore the geopolitics, economics, and power-brokering of artificial intelligence. What does control of AI mean for nations, corporations, and universities? What does control of or access to AI mean for conflict and autonomy? The world is changing rapidly, and the rate of change is accelerating. Daniel and Chris look behind the curtain in the halls of power.
In this Fully-Connected episode, Daniel and Chris explore DALL-E 2, the amazing new model from OpenAI that generates incredibly detailed novel images from text captions for a wide range of concepts expressible in natural language. Along the way, they acknowledge that some folks in the larger AI community are suggesting that sophisticated models may be approaching sentience, but together they pour cold water on that notion. Still, they can’t seem to get away from DALL-E’s images of raccoons in space, and of course, who would want to?
In this Fully-Connected episode of the podcast, we catch up on some recent developments in the AI world, including a new model from DeepMind called Gato. This generalist model can play video games, caption images, respond to chat messages, control robot arms, and much more. We also discuss the use of AI in the entertainment industry (e.g., in the new Top Gun movie).
It has been a big week for AI news. BigScience is training a huge language model (while the world watches), and NVIDIA announced their latest “Hopper” GPUs. Chris and Daniel discuss these and other topics in this Fully-Connected episode!
The term “foundation model” has been around since about the middle of last year, when a research group at Stanford published the comprehensive report On the Opportunities and Risks of Foundation Models. The naming of these models created some strong reactions, both good and bad. In this episode, Chris and Daniel dive into the ideas behind the report.
What happens when your data operations grow to Internet-scale? How do thousands or millions of data producers and consumers efficiently, effectively, and productively interact with each other? How are varying formats, protocols, security levels, performance criteria, and use-case specific characteristics meshed into one unified data fabric? Chris and Daniel explore these questions in this illuminating and Fully-Connected discussion that brings this new data technology into the light.
From MIT researchers who have an AI system that rapidly predicts how two proteins will attach, to Facebook’s first high-performance self-supervised algorithm that works for speech, vision, and text, Daniel and Chris survey the AI landscape for notable milestones in the application of AI in industry and research.
The time has come! OpenAI’s API is now available with no waitlist. Chris and Daniel dig into the API and playground during this episode, and they also discuss some of the latest tools from Hugging Face (including new reinforcement learning environments). Finally, Daniel gives an update on how he is building out infrastructure for a new AI team.
In this Fully-Connected episode, Daniel and Chris ponder whether in-person AI conferences are on the verge of making a post-pandemic comeback. Then it’s on to BigScience from Hugging Face, a year-long research workshop on large multilingual models and datasets. Specifically, they dive into T0, a series of natural language processing (NLP) models trained for research into zero-shot multitask learning. Daniel provides a brief tour of what’s possible with the T0 family. They finish up with a couple of new learning resources.
Each year we discuss the latest insights from the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and this year is no different. Daniel and Chris delve into the key findings in this Fully-Connected episode. They also check out a study called ‘Delphi: Towards Machine Ethics and Norms’, about how to integrate ethics and morals into AI models.
Federated learning is increasingly practical for machine learning developers because of the challenges we face with model and data privacy. In this fully connected episode, Chris and Daniel dive into the topic and dissect the ideas behind federated learning, practicalities of implementing decentralized training, and current uses of the technique.
Polarity Mapping is a framework to “help problems be solved in a realistic and multidimensional manner” (see here for more info). In this week’s fully connected episode, Chris and Daniel use this framework to help them discuss how an organization can strike a good balance between human intelligence and AI. AI can’t solve everything and humans need to be in-the-loop with many AI solutions.
We’re back with another Fully Connected episode – Daniel and Chris dive into a series of articles called ‘A New AI Lexicon’ that collectively explore alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI. The fun begins early as they discuss and debate ‘An Electric Brain’ with strong opinions, and consider viewpoints that aren’t always popular.
Inspired by a recent article from Erik Bernhardsson titled “Building a data team at a mid-stage startup: a short story”, Chris and Daniel discuss all things AI/data team building. They share some stories from their experiences kick-starting AI efforts at various organizations and weigh the pros and cons of things like centralized data management, prototype development, and a focus on engineering skills.
How did we get from symbolic AI to deep learning models that help you write code (i.e., GitHub and OpenAI’s new Copilot)? That’s what Chris and Daniel discuss in this episode about the history and future of deep learning (with some help from an article recently published in ACM and written by the luminaries of deep learning).
Chris and Daniel sit down to chat about some exciting new AI developments, including wav2vec-u (an unsupervised speech recognition model) and Meta Learning (a new book subtitled “How To Learn Deep Learning And Thrive In The Digital World”). Along the way they discuss engineering skills for AI developers and strategies for launching AI initiatives in established companies.
In this Fully-Connected episode, Chris and Daniel discuss low code / no code development, GPU jargon, plus more data leakage issues. They also share some really cool new learning opportunities for leveling up your AI/ML game!
What’s it like to try and build your own deep learning workstation? Is it worth it in terms of money, effort, and maintenance? Then once built, what’s the best way to utilize it? Chris and Daniel dig into questions today as they talk about Daniel’s recent workstation build. He built a workstation for his NLP and Speech work with two GPUs, and it has been serving him well (minus a few things he would change if he did it again).
The multidisciplinary field of AI Ethics is brand new, and is currently being pioneered by a relatively small number of leading AI organizations and academic institutions around the world. AI Ethics focuses on ensuring that unexpected outcomes from AI technology implementations occur as rarely as possible. Daniel and Chris discuss strategies for how to arrive at AI ethical principles suitable for your own organization, and what is involved in implementing those strategies in the real world. Tune in for a practical AI primer on AI Ethics!
Daniel and Chris get you Fully-Connected with open source software for artificial intelligence.
In addition to defining what open source is, they discuss where to find open source tools and data, and how you can contribute back to the open source AI community.
This Fully-Connected episode has it all: news, updates on AI/ML tooling, discussions about AI workflow, and learning resources. Chris and Daniel break down the various roles to be played in AI development, including scoping out a solution, finding AI value, experimentation, and more technical engineering tasks. They also point out some good resources for exploring bias in your data/model and monitoring for fairness.
Daniel and Chris go beyond the current state of the art in deep learning to explore the next evolutions in artificial intelligence. From Yoshua Bengio’s NeurIPS keynote, which urges us forward towards System 2 deep learning, to DARPA’s vision of a 3rd Wave of AI, Chris and Daniel investigate the incremental steps between today’s AI and possible future manifestations of artificial general intelligence (AGI).
On the heels of NVIDIA’s latest announcements, Daniel and Chris explore how the new NVIDIA Ampere architecture evolves the high-performance computing (HPC) landscape for artificial intelligence. After investigating the new specifications of the NVIDIA A100 Tensor Core GPU, Chris and Daniel turn their attention to the data center with the NVIDIA DGX A100, and then finish their journey at “the edge” with the NVIDIA EGX A100 and the NVIDIA Jetson Xavier NX.
Daniel and Chris get you Fully-Connected with AI questions from listeners and online forums:
- What do you think is the next big thing?
- What are CNNs?
- How does one start developing an AI-enabled business solution?
- What tools do you use every day?
- What will AI replace?
- And more…
Daniel and Chris do a deep dive into The AI Index 2019 Annual Report, which provides unbiased, rigorously vetted data that one can use “to develop intuitions about the complex field of AI”. Analyzing everything from R&D and technical advancements to education, the economy, and societal considerations, Chris and Daniel lay out this comprehensive report’s key insights about artificial intelligence.