This Fully Connected episode has it all: news, updates on AI/ML tooling, discussions about AI workflows, and learning resources. Chris and Daniel break down the various roles played in AI development, including scoping out a solution, finding AI value, experimentation, and more technical engineering tasks. They also point out some good resources for exploring bias in your data/model and monitoring for fairness.
Daniel and Chris go beyond the current state of the art in deep learning to explore the next evolutions in artificial intelligence. From Yoshua Bengio’s NeurIPS keynote, which urges us forward towards System 2 deep learning, to DARPA’s vision of a 3rd Wave of AI, Chris and Daniel investigate the incremental steps between today’s AI and possible future manifestations of artificial general intelligence (AGI).
The CEO of Darwin AI, Sheldon Fernandez, joins Daniel to discuss generative synthesis and its connection to explainability. You might have heard of AutoML and meta-learning. Well, generative synthesis tackles similar problems from a different angle and results in compact, explainable networks. This episode is fascinating and very timely.
On the heels of NVIDIA’s latest announcements, Daniel and Chris explore how the new NVIDIA Ampere architecture evolves the high-performance computing (HPC) landscape for artificial intelligence. After investigating the new specifications of the NVIDIA A100 Tensor Core GPU, Chris and Daniel turn their attention to the data center with the NVIDIA DGX A100, and then finish their journey at “the edge” with the NVIDIA EGX A100 and the NVIDIA Jetson Xavier NX.
Chandler McCann tells Daniel and Chris about how DataRobot engaged in a project to develop sustainable water solutions with the Global Water Challenge (GWC). They analyzed over 500,000 data points to predict future water point breaks. This enabled African governments to make data-driven decisions related to budgeting, preventative maintenance, and policy in order to promote and protect people’s access to safe water for drinking and washing. From this effort sprang DataRobot’s larger AI for Good initiative.
Daniel and Chris get you Fully-Connected with AI questions from listeners and online forums:
- What do you think is the next big thing?
- What are CNNs?
- How does one start developing an AI-enabled business solution?
- What tools do you use every day?
- What will AI replace?
- And more…
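One of the listener questions above, "What are CNNs?", can be grounded concretely: a convolutional neural network slides small filters over its input to detect local patterns. A minimal sketch of that core operation (the image and filter values below are just illustrative; this is the sliding-window product used in CNN layers, technically cross-correlation):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel over the image,
    taking an elementwise product and sum at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image that is dark on the left, bright on the right,
# and a filter that responds where left and right columns differ.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
edge_filter = np.array([
    [1, -1],
    [1, -1],
], dtype=float)

response = conv2d(image, edge_filter)
# The response is nonzero only at the vertical edge in the middle.
```

In a real CNN the filter values are learned rather than hand-crafted, and many such filters are stacked in layers.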
Daniel and Chris have a fascinating discussion with Anna Goldie and Azalia Mirhoseini from Google Brain about the use of reinforcement learning for chip floor planning - or placement - in which many new designs are generated, and then evaluated, to find an optimal component layout. Anna and Azalia also describe the use of graph convolutional neural networks in their approach.
In the midst of the COVID-19 pandemic, Daniel and Chris have a timely conversation with Lucy Lu Wang of the Allen Institute for Artificial Intelligence about the COVID-19 Open Research Dataset (CORD-19). She relates how CORD-19 was created and organized, and how researchers around the world are currently using the data to answer important COVID-19 questions that will help the world through this ongoing crisis.
AI legend Stuart Russell, the Berkeley professor who leads the Center for Human-Compatible AI, joins Chris to share his insights into the future of artificial intelligence. Stuart is the author of Human Compatible, and the upcoming 4th edition of his perennial classic Artificial Intelligence: A Modern Approach, which is widely regarded as the standard text on AI. After exposing the shortcomings inherent in deep learning, Stuart goes on to propose a new practitioner approach to creating AI that avoids harmful unintended consequences, and offers a path forward towards a future in which humans can safely rely on provably beneficial AI.
So many AI developers are coming up with creative, useful COVID-19 applications during this time of crisis. Among those are Timo from Deepset-AI and Tony from Intel. They are working on a question answering system for pandemic-related questions called COVID-QA. In this episode, they describe the system, related annotation of the CORD-19 data set, and ways that you can contribute!
Daniel Wilson and Rob Fletcher of ESRI hang with Chris and Daniel to chat about how AI powers modern geographic information systems (GIS) and location intelligence. They illuminate the various models used for GIS, spatial analysis, remote sensing, real-time visualization, and 3D analytics. You don’t want to miss the part about their work for the DoD’s Joint AI Center in humanitarian assistance / disaster relief.
Catherine Breslin of Cobalt joins Daniel and Chris to do a deep dive on speech recognition. She also discusses how the technology is integrated into virtual assistants (like Alexa) and is used in other non-assistant contexts (like transcription and captioning). Along the way, she teaches us how to assemble a lexicon, acoustic model, and language model to bring speech recognition to life.
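The three components Catherine describes fit together in a noisy-channel-style decode: the recognizer picks the word sequence maximizing the acoustic score (how well the audio matches the words' phone sequences, via the lexicon) plus the language model score (how plausible the word sequence is). A toy sketch of that final step, with all log-probabilities invented purely for illustration:

```python
# Hypothetical log-scores for two candidate transcriptions of the same audio.
# In a real system the acoustic score comes from the acoustic model (via the
# lexicon's word-to-phone mappings) and the language score from the language
# model; these numbers are made up.
candidates = {
    "recognize speech": {"acoustic": -12.0, "language": -3.0},
    "wreck a nice beach": {"acoustic": -11.5, "language": -9.0},
}

def decode(candidates, lm_weight=1.0):
    """Pick the hypothesis maximizing
    log P(audio | words) + lm_weight * log P(words)."""
    return max(
        candidates,
        key=lambda w: candidates[w]["acoustic"]
        + lm_weight * candidates[w]["language"],
    )

best = decode(candidates)
```

Note that with `lm_weight=0` (acoustic model alone), the acoustically closer but implausible "wreck a nice beach" would win; the language model is what steers the decode toward sensible text.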
Emily Robinson, co-author of the book Build a Career in Data Science, gives us the inside scoop on optimizing the data science job search, from creating one’s resume, cover letter, and portfolio to recognizing the right job at a fair compensation rate.
Emily’s expert guidance takes us from the beginning of the process to its conclusion, including how to be successful during your early days in that fantastic new data science position.
Matt Brems from General Assembly joins us to explain what “data science” actually means these days and how that has changed over time. He also gives us some insight into how people are going about data science education, how AI fits into the data science workflow, and how to differentiate yourself career-wise.
Craig Wiley, from Google Cloud, joins us to discuss various pieces of the TensorFlow ecosystem along with TensorFlow Enterprise. He sheds light on how enterprises are utilizing AI and supporting AI-driven applications in the Cloud. He also clarifies Google’s relationship to TensorFlow and explains how TensorFlow development is impacting Google Cloud Platform.
Expanding AI technology to the local languages of emerging markets presents huge challenges. Good data is scarce or non-existent. Users often have bandwidth or connectivity issues. Existing platforms target only a small number of high-resource languages.
Our own Daniel Whitenack (data scientist at SIL International) and Dan Jeffries (from Pachyderm) discuss how these and related problems will only be solved when AI technology and resources from industry are combined with linguistic expertise from those on the ground working with local language communities. They illustrate this approach with their work pushing voice technology into emerging markets.
Daniel and Chris explore Semantic Scholar with Doug Raymond of the Allen Institute for Artificial Intelligence. Semantic Scholar is an AI-backed search engine that uses machine learning, natural language processing, and machine vision to surface relevant information from scientific papers.
Daniel and Chris do a deep dive into The AI Index 2019 Annual Report, which provides unbiased, rigorously vetted data that one can use “to develop intuitions about the complex field of AI”. Analyzing everything from R&D and technical advancements to education, the economy, and societal considerations, Chris and Daniel lay out this comprehensive report’s key insights about artificial intelligence.
Production ML systems include more than just the model. In these complicated systems, how do you ensure quality over time, especially when you are constantly updating your infrastructure, data and models? Tania Allard joins us to discuss the ins and outs of testing ML systems. Among other things, she presents a simple formula that helps you score your progress towards a robust system and identify problem areas.
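As one concrete flavor of what testing an ML system can look like (this is an illustrative sketch, not Tania's scoring formula): beyond measuring model accuracy, you can unit-test the deterministic pieces of the pipeline, for example checking that preprocessing is idempotent and that feature vectors keep a fixed shape as the system evolves.

```python
import numpy as np

def preprocess(text):
    """Toy preprocessing step: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

def featurize(text, vocab):
    """Toy bag-of-words featurizer with a fixed-size output."""
    tokens = preprocess(text).split()
    return np.array([tokens.count(w) for w in vocab], dtype=float)

VOCAB = ["model", "data", "test"]

# Property-style checks you might run in CI alongside model-quality tests:
raw = "  Test   DATA "
assert preprocess(preprocess(raw)) == preprocess(raw)          # idempotent
assert featurize("model model data", VOCAB).shape == (len(VOCAB),)  # stable shape
```

Checks like these catch infrastructure regressions (a changed tokenizer, a reordered vocabulary) that model metrics alone can silently absorb.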
One of the things people most associate with AI is automation, but how is AI actually shaping automation in manufacturing? Costas Boulis from Bright Machines joins us to talk about how they are using AI in various manufacturing processes and in their “microfactories.” He also discusses the unique challenges of developing AI models based on manufacturing data.
Chris and Daniel talk with Greg Allen, Chief of Strategy and Communications at the U.S. Department of Defense (DoD) Joint Artificial Intelligence Center (JAIC). The mission of the JAIC is “to seize upon the transformative potential of artificial intelligence technology for the benefit of America’s national security… The JAIC is the official focal point of the DoD AI Strategy.” So if you want to understand how the U.S. military thinks about artificial intelligence, then this is the episode for you!
Wow, 2019 was an amazing year for AI! In this fully connected episode, Chris and Daniel discuss their list of top 5 notable AI things from 2019. They also discuss the “state of AI” at the end of 2019, and they make some predictions for 2020.
We have all used web and product search technologies for quite some time, but how do they actually work and how is AI impacting search? Andrew Stanton from Etsy joins us to dive into AI-based search methods and to talk about neuroevolution. He also gives us an introduction to Rust for production ML/AI and explains how that community is developing.
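As background for the discussion, classical search ranking at its simplest scores documents by term frequency weighted by how rare each term is across the corpus (TF-IDF), and learned methods build on signals like these. A hedged toy sketch (the documents are invented, and this is nothing like Etsy's actual system):

```python
import math
from collections import Counter

docs = {
    "d1": "handmade ceramic mug",
    "d2": "ceramic vase handmade in italy",
    "d3": "wool scarf",
}

def tfidf_score(query, doc_text, all_docs):
    """Sum of term-frequency * inverse-document-frequency over query terms."""
    tf = Counter(doc_text.split())
    n = len(all_docs)
    score = 0.0
    for term in query.split():
        df = sum(1 for text in all_docs.values() if term in text.split())
        if df:
            score += tf[term] * math.log(n / df)
    return score

def search(query, docs):
    """Rank document ids by descending TF-IDF score for the query."""
    return sorted(docs, key=lambda d: tfidf_score(query, docs[d], docs),
                  reverse=True)

results = search("ceramic mug", docs)
```

The rare term "mug" dominates the ranking, so the mug listing outranks the vase even though both mention "ceramic".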
Evan Sparks, from Determined AI, helps us understand why many are still stuck in the “dark ages” of AI infrastructure. He then discusses how we can build better systems by leveraging things like fault tolerant training and AutoML. Finally, Evan explains his optimistic outlook on AI’s economic and environmental health impact.
spaCy is awesome for NLP! It’s easy to use, has widespread adoption, is open source, and integrates the latest language models. Ines Montani and Matthew Honnibal (core developers of spaCy and co-founders of Explosion) join us to discuss the history of the project, its capabilities, and the latest trends in NLP. We also dig into the practicalities of taking NLP workflows to production. You don’t want to miss this episode!
GANs are at the center of AI hype. However, they are also becoming extremely practical, driving solutions to real problems. Jakub Langr and Vladimir Bok join us for a deep dive into GANs and their applications. We discuss the basics of GANs, their various flavors, and open research problems.
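For readers new to the basics discussed here: a GAN pits a generator against a discriminator in a minimax game over the value V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))], which the discriminator maximizes and the generator minimizes. A toy numerical sketch of estimating that objective (the discriminator outputs below are invented for illustration):

```python
import math

def gan_value(d_real, d_fake):
    """Monte Carlo estimate of the GAN value V(D, G).
    d_real: discriminator outputs D(x) on real samples.
    d_fake: discriminator outputs D(G(z)) on generated samples."""
    real_term = sum(math.log(p) for p in d_real) / len(d_real)
    fake_term = sum(math.log(1 - p) for p in d_fake) / len(d_fake)
    return real_term + fake_term

# A confident discriminator (high on real, low on fake) achieves a
# higher value than one completely fooled by the generator (0.5 on
# everything, the game's equilibrium).
confident = gan_value(d_real=[0.9, 0.8], d_fake=[0.1, 0.2])
fooled = gan_value(d_real=[0.5, 0.5], d_fake=[0.5, 0.5])
```

Training alternates between the discriminator pushing this value up and the generator pushing it down, until (ideally) the generator's samples are indistinguishable from real data.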
Streamlit recently burst onto the scene with their intuitive, open source solution for building custom ML/AI tools. It allows data scientists and ML engineers to rapidly build internal or external UIs without spending time on frontend development. In this episode, Adrien Treuille joins us to discuss ML/AI app development in general and Streamlit. We talk about the practicalities of working with Streamlit along with its seemingly instant adoption by AI2, Stripe, Stitch Fix, Uber, and Twitter.
There’s a lot of hype about knowledge graphs and AI-methods for building or using them, but what exactly is a knowledge graph? How is it different from a database or other data store? How can I build my own knowledge graph? James Fletcher from Grakn Labs helps us understand knowledge graphs in general and some practical steps towards creating your own. He also discusses graph neural networks and the future of graph-augmented methods.
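To ground the "what is a knowledge graph" question: unlike a relational table, a knowledge graph stores facts as (subject, relation, object) triples and answers questions by traversing relations across entities. A minimal, hypothetical sketch (the entities and relations are invented for illustration):

```python
# Facts stored as (subject, relation, object) triples.
triples = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "located_in", "Poland"),
    ("Marie Curie", "field", "Physics"),
]

def query(triples, subject=None, relation=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

# Multi-hop traversal: in which country was Marie Curie born?
city = query(triples, subject="Marie Curie", relation="born_in")[0][2]
country = query(triples, subject=city, relation="located_in")[0][2]
```

That two-hop traversal (person → city → country) is exactly the kind of inference a plain key-value store doesn't express naturally, and graph neural networks generalize it by learning over such relational structure.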
Everyone is talking about it. OpenAI trained a pair of neural nets that enable a robot hand to solve a Rubik’s cube. That is super dope! The results have also generated a lot of commentary and controversy, mainly related to the way in which the results were represented on OpenAI’s blog. We dig into all of this on today’s Fully Connected episode, and we point you to a few places where you can learn more about reinforcement learning.