
Machine Learning

Machine Learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

Practical AI #166

Exploring deep reinforcement learning

In addition to being a Developer Advocate at Hugging Face, Thomas Simonini is building next-gen AI for games: agents that can talk and have smart interactions with the player using Deep Reinforcement Learning (DRL) and Natural Language Processing (NLP). He also created a Deep Reinforcement Learning course that takes a DRL beginner from zero to hero. Natalie and Chris explore what’s involved, and what the implications are, with a focus on the development path of the new AI data scientist.
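
Deep RL can sound intimidating, but its core update is compact. As a flavor of the “zero” end of that journey, here is a minimal tabular Q-learning sketch in Python; the toy environment, state/action counts, and hyperparameters are illustrative placeholders, not material from the course:

```python
import random

# Minimal tabular Q-learning sketch: the core update behind many DRL methods.
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # learning rate, discount, exploration
N_STATES, N_ACTIONS = 16, 4

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: random transitions (ignoring the inputs),
    reward 1.0 for landing in the final state."""
    next_state = random.randrange(N_STATES)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection: mostly exploit, sometimes explore.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
    td_target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (td_target - Q[state][action])
    state = next_state
```

Deep Q-learning, where a DRL course heads next, replaces the Q table with a neural network so it can handle state spaces far too large to enumerate.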

Go Time #213

AI-driven development in Go

Alexey Palazhchenko joins Natalie to discuss the implications of GitHub’s Copilot on code generation. Go’s design lends itself nicely to computer-generated authoring: thanks to go fmt, there’s already only one Go style. This means AI-generated code will be consistent and seamless. Go’s focus on simplicity & readability makes it tailor-made for this new approach to software creation. Where might this take us?

Chip Huyen huyenchip.com

Real-time machine learning: challenges and solutions

Chip Huyen:

In the last year, I’ve talked to ~30 companies in different industries about their challenges with real-time machine learning. I’ve also worked with quite a few to find the solutions. This post outlines the solutions for (1) online prediction and (2) continual learning, with step-by-step use cases, considerations, and technologies required for each level.
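
To make the two levels concrete, here is a minimal pure-Python sketch (mine, not from the post) of a model that does both: online prediction scores each request as it arrives, and continual learning takes an SGD step as soon as the label comes back. The feature names and learning rate are made up for illustration:

```python
import math

# Continual learning sketch: a logistic regression model updated one
# example at a time, so prediction and learning both happen online.
weights = {}
LR = 0.1

def predict_one(x):
    """Online prediction: score a single feature dict as it arrives."""
    z = sum(weights.get(f, 0.0) * v for f, v in x.items())
    return 1.0 / (1.0 + math.exp(-z))

def learn_one(x, y):
    """Continual learning: one SGD step on a freshly labeled example."""
    error = predict_one(x) - y  # gradient of log loss w.r.t. the logit
    for f, v in x.items():
        weights[f] = weights.get(f, 0.0) - LR * error * v

# A request arrives, we predict, and once the label is known we update.
x = {"clicks_last_hour": 0.8, "is_weekend": 1.0}
print(predict_one(x))
learn_one(x, y=1.0)
```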

Alex Strick van Linschoten github.com

ZenML helps data scientists work across the full stack

ZenML is an extensible MLOps framework to create production-ready machine learning pipelines. Built for data scientists, it has a simple, flexible syntax, is cloud and tool agnostic, and has interfaces/abstractions that are catered towards ML workflows.

The code base was recently rewritten from the ground up, with better abstractions, to set us up for ongoing growth and for more integrations with the tools that data scientists love to use.
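
The pitch is easiest to see in code. The snippet below is not ZenML’s actual API (which has evolved across versions); it is a plain-Python stand-in for the decorator-based step-and-pipeline style the framework is built around, where the same pipeline definition can be pointed at local or cloud backends:

```python
# Hypothetical stand-in for a decorator-based step/pipeline API in the
# style ZenML uses; these decorators are defined here, not imported.
def step(fn):
    return fn  # a real framework would track, cache, and deploy this step

def pipeline(fn):
    def run():
        print(f"running pipeline: {fn.__name__}")
        return fn()
    fn.run = run  # a real framework would dispatch to an orchestrator
    return fn

@step
def load_data():
    return [(x, 2 * x) for x in range(100)]  # toy dataset: y = 2x

@step
def train_model(data):
    # "Training" a one-parameter model by averaging the y/x ratios.
    return sum(y / x for x, y in data if x) / sum(1 for x, _ in data if x)

@pipeline
def training_pipeline():
    data = load_data()
    slope = train_model(data)
    print(f"learned slope: {slope:.2f}")

training_pipeline.run()
```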

AI (Artificial Intelligence) deepmind.com

A 280 billion parameter language model named Gopher

In the quest to explore language models and develop new ones, we trained a series of transformer language models of different sizes, ranging from 44 million parameters to 280 billion parameters.

Our research investigated the strengths and weaknesses of those different-sized models, highlighting areas where increasing the scale of a model continues to boost performance – for example, in reading comprehension, fact-checking, and the identification of toxic language. We also surface cases where model scale does not significantly improve results – for instance, in logical reasoning and common-sense tasks.

Sometimes size matters, sometimes it doesn’t as much. Fascinating analysis.
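
For a back-of-the-envelope sense of those sizes, a decoder-only transformer’s parameter count is dominated by its per-layer attention and feed-forward matrices, roughly 12·L·d² plus embeddings. The configurations below are illustrative guesses, not the actual Gopher family specs:

```python
def approx_params(n_layers, d_model, vocab_size):
    """Rough decoder-only transformer size: attention (~4*d^2) plus
    feed-forward (~8*d^2, assuming a 4x hidden expansion) per layer,
    plus token embeddings. Ignores biases, norms, and other details."""
    per_layer = 12 * d_model ** 2
    embeddings = vocab_size * d_model
    return n_layers * per_layer + embeddings

# Illustrative configurations only; vocab size of 32k is also an assumption.
for name, layers, width in [("small", 8, 512), ("mid", 28, 4096), ("large", 80, 16384)]:
    print(f"{name}: ~{approx_params(layers, width, 32_000) / 1e6:,.0f}M params")
```

Plugging in widths and depths in these ranges lands near the 44 million and 280 billion endpoints the post mentions, which is why parameter counts scale so quickly with model width.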

Practical AI #160

Friendly federated learning 🌼

This episode is a follow up to our recent Fully Connected show discussing federated learning. In that previous discussion, we mentioned Flower (a “friendly” federated learning framework). Well, one of the creators of Flower, Daniel Beutel, agreed to join us on the show to discuss the project (and federated learning more broadly)! The result is a really interesting and motivating discussion of ML, privacy, distributed training, and open source AI.
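
As a flavor of what a framework like Flower orchestrates, here is a minimal sketch of federated averaging (FedAvg), the canonical aggregation step: the server combines client updates weighted by local dataset size, and raw data never leaves the clients. The client updates here are simulated with noise rather than real local training:

```python
import random

# Federated averaging (FedAvg) sketch. A framework like Flower handles
# the real networking, scheduling, and client selection.
def fed_avg(client_weights, client_sizes):
    """Average client weight vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

def local_update(global_weights):
    """Stand-in for one round of local training on private data:
    returns perturbed weights and a simulated local dataset size."""
    return [w + random.gauss(0, 0.1) for w in global_weights], random.randint(50, 500)

global_model = [0.0] * 4
for round_num in range(5):
    results = [local_update(global_model) for _ in range(10)]  # 10 clients
    weights, sizes = zip(*results)
    global_model = fed_avg(weights, sizes)  # only updates reach the server
    print(round_num, [f"{w:.3f}" for w in global_model])
```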

Practical AI #158

Zero-shot multitask learning

In this Fully Connected episode, Daniel and Chris ponder whether in-person AI conferences are on the verge of making a post-pandemic comeback. Then it’s on to BigScience from Hugging Face, a year-long research workshop on large multilingual models and datasets. Specifically, they dive into T0, a series of natural language processing (NLP) models trained to study zero-shot multitask learning. Daniel provides a brief tour of what’s possible with the T0 family. They finish up with a couple of new learning resources.
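
“Zero-shot” here means the task is described to the model in plain language at inference time, with no task-specific fine-tuning. A minimal sketch with the Hugging Face transformers library might look like the following; “bigscience/T0_3B” is the smaller published checkpoint, but verify the exact model name and its memory requirements on the Hub before running:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Zero-shot prompting sketch with a T0 checkpoint; check VRAM needs first.
name = "bigscience/T0_3B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# The task is stated in natural language; no task-specific training here.
prompt = ("Is this review positive or negative? "
          "Review: the cinematography was stunning but the plot dragged.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```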

Practical AI #157

Analyzing the 2021 AI Index Report

Each year we discuss the latest insights from the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and this year is no different. Daniel and Chris delve into the report’s key findings in this Fully Connected episode. They also check out a study called ‘Delphi: Towards Machine Ethics and Norms’, about how to integrate ethics and morals into AI models.

Practical AI #156

Photonic computing for AI acceleration

There are a lot of people trying to innovate in the area of specialized AI hardware, but most of them are doing it with traditional transistors. Lightmatter is doing something totally different. They’re building photonic computers that are more power efficient and faster for AI inference. Nick Harris joins us in this episode to bring us up to speed on all the details.

Machine Learning cerebralab.com

Boring machine learning is where it's at

It surprises me that when people think of “software that brings about the singularity” they think of text models, or of RL agents. But they sneer at decision tree boosting and the like as boring algorithms for boring problems.

To me, this seems counter-intuitive, and the fact that most people researching ML are interested in subjects like vision and language is flabbergasting. For one, because getting anywhere productive in these fields is really hard, for another, because their usefulness seems relatively minimal.
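
Part of the appeal is how little ceremony the “boring” tooling needs. For instance, a gradient-boosted tree model on tabular data is a few lines with scikit-learn; the dataset below is synthetic, purely for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# "Boring" ML sketch: gradient-boosted trees on synthetic tabular data.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```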

Practical AI #154

🌍 AI in Africa - Makerere AI Lab

This is the first episode in a special series we are calling the “Spotlight on AI in Africa”. To kick things off, Joyce and Mutembesa from Makerere University’s AI Lab join us to talk about their amazing work in computer vision, natural language processing, and data collection. Their lab seeks out problems that matter in African communities, pairs those problems with appropriate data/tools, and works with the end users to ensure that solutions create real value.

Practical AI #152

The mathematics of machine learning

Tivadar Danka is an educator and content creator in the machine learning space, and he is writing a book to help practitioners go from high school mathematics to the mathematics of neural networks. His explanations are lucid and easy to understand. You’ve never had such a fun and interesting conversation about calculus, linear algebra, and probability theory!
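
The jump he describes is smaller than it sounds: backpropagation through a single neuron is just the high-school chain rule applied a couple of times. A small self-contained sketch (all numbers illustrative) that checks the analytic gradient against a finite difference:

```python
import math

# One sigmoid neuron with squared loss: loss = (sigmoid(w*x + b) - y)^2.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, y):
    return (sigmoid(w * x + b) - y) ** 2

def grad_w(w, b, x, y):
    """Analytic dloss/dw via the chain rule:
    dloss/da * da/dz * dz/dw = 2(a - y) * a(1 - a) * x."""
    a = sigmoid(w * x + b)
    return 2 * (a - y) * a * (1 - a) * x

w, b, x, y = 0.5, -0.2, 1.5, 1.0
eps = 1e-6
numeric = (loss(w + eps, b, x, y) - loss(w - eps, b, x, y)) / (2 * eps)
print(grad_w(w, b, x, y), numeric)  # the two values should agree closely
```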
