
Machine Learning

Machine Learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

The New Stack

How I built an on-premises AI training testbed with Kubernetes and Kubeflow

This is part 4 in a cool series on The New Stack exploring the Kubeflow machine learning platform.

I recently built a four-node bare metal Kubernetes cluster comprising CPU and GPU hosts for all my AI experiments. Though it makes economic sense to leverage the public cloud for provisioning the infrastructure, I invested a fortune in the AI testbed that’s within my line of sight.

The author shares many insights into the choices he made while building this dream setup.


Practical AI #127

Women in Data Science (WiDS)

Chris has the privilege of talking with Stanford Professor Margot Gerritsen, who co-leads the Women in Data Science (WiDS) Worldwide Initiative. Professor Gerritsen’s profound insights into how we can all help the women in our lives succeed - in data science and in life - make this a ‘must listen’ episode for everyone, regardless of gender.

Python github.com

A PyTorch-based speech toolkit

SpeechBrain is an open-source and all-in-one speech toolkit based on PyTorch.

The goal is to create a single, flexible, and user-friendly toolkit that can be used to easily develop state-of-the-art speech technologies, including systems for speech recognition, speaker recognition, speech enhancement, multi-microphone signal processing and many others.

Currently in beta.
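For a sense of the intended ergonomics, here’s a minimal sketch of transcribing a file with one of the project’s pretrained models (the model identifier and audio path are illustrative, and the exact API may shift while the toolkit is in beta):

```python
# Minimal sketch of transcribing audio with a SpeechBrain pretrained model.
# The model identifier and audio path below are illustrative.
from speechbrain.pretrained import EncoderDecoderASR

asr = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-crdnn-rnnlm-librispeech",  # a pretrained LibriSpeech ASR model
    savedir="pretrained_models/asr-crdnn-rnnlm-librispeech",
)
print(asr.transcribe_file("example.wav"))  # prints the recognized text
```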

Python github.com

`whereami` uses WiFi signals & ML to locate you (within 2-10 meters)

If you’re adventurous and you want to learn to distinguish between couch #1 and couch #2 (i.e. 2 meters apart), it is most robust when you switch locations and train in turn - e.g. first in Spot A, then in Spot B, then start again with A. Training in Spot A, then Spot B, and then immediately running “predict” will usually yield Spot B as the answer. No worries - the effect of this temporal overfitting disappears over time, and it’s only a real concern for very short distances. Just take another sample in both locations after some time and it becomes very robust.

The linked project was “almost entirely copied” from the find project, which was written in Go. It then went on to inspire whereami.js. I bet you can guess what that is.
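The underlying idea is simple enough to sketch: treat per-access-point signal strengths as features and train a classifier on samples labeled by location. Here’s a rough illustration of that approach with scikit-learn - not the project’s actual code, and the BSSIDs, labels, and readings are made up:

```python
# Sketch of the WiFi-fingerprinting idea behind tools like whereami
# (illustrative only; the real project also handles WiFi scanning and persistence).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline

# Each sample: {access point BSSID: signal strength in dBm}, labeled with a spot.
samples = [
    ({"aa:bb:cc:01": -40, "aa:bb:cc:02": -70}, "couch_1"),
    ({"aa:bb:cc:01": -45, "aa:bb:cc:02": -65}, "couch_1"),
    ({"aa:bb:cc:01": -60, "aa:bb:cc:02": -50}, "couch_2"),
    ({"aa:bb:cc:01": -65, "aa:bb:cc:02": -48}, "couch_2"),
]
X, y = zip(*samples)

# DictVectorizer handles access points that appear in some scans but not others.
model = make_pipeline(DictVectorizer(sparse=False),
                      RandomForestClassifier(n_estimators=100))
model.fit(X, y)

print(model.predict([{"aa:bb:cc:01": -44, "aa:bb:cc:02": -66}]))  # expected: ['couch_1']
```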

HackerNoon

Why ML in production is (still) broken and ways we can fix it

Hamza Tahir on HackerNoon:

By now, chances are you’ve read the famous paper about hidden technical debt by Sculley et al. from 2015. As a field, we have accepted that the actual machine learning is only a fraction of the work going into successful ML projects. The resulting complexity, especially in the transition to “live” environments, leads to a large number of failed ML projects never reaching production.

Productionizing ML workflows has been a trending topic on Practical AI lately…


Practical AI #122

The AI doc will see you now

Elad Walach of Aidoc joins Chris to talk about the use of AI for medical imaging interpretation. Starting with the world’s largest annotated training data set of medical images, Aidoc is the radiologist’s best friend, helping the doctor to interpret imagery faster, more accurately, and improving the imaging workflow along the way. Elad’s vision for the transformative future of AI in medicine clearly soothes Chris’s concern about managing his aging body in the years to come. ;-)

Elixir thinkingelixir.com

ML is coming to Elixir by way of José Valim's "Project Nx"

Elixir creator José Valim stopped by the Thinking Elixir podcast to reveal what he’s been working on for the past 3 months: Numerical Elixir!

This is an exciting development that brings Elixir into areas where it hasn’t been used before. We also talk about what this means for Elixir and the community going forward. A must listen!

Queue up this episode and/or stay tuned for an upcoming episode of The Changelog where we’ll sit down with José after his LambdaDays demo to unpack things even more.

Machine Learning marksaroufim.substack.com

Machine Learning: The Great Stagnation

This piece by Mark Saroufim on the state of ML starts pretty salty:

Graduate Student Descent is one of the most reliable ways of getting state of the art performance in Machine Learning today, and it’s also fully parallelizable over as many graduate students or employees as your lab has. Armed with Graduate Student Descent you are more likely to get published or promoted than if you took on uncertain projects.

and:

BERT engineer is now a full time job. Qualifications include:

  • Some bash scripting
  • Deep knowledge of pip (starting a new environment is the suckier version of practicing scales)
  • Waiting for new HuggingFace models to be released
  • Watching Yannic Kilcher’s new Transformer paper the day it comes out
  • Repeating what Yannic said at your team reading group

It’s kind of like Dev-ops but you get paid more.

But if you survive through (or maybe even enjoy) the lamentations and ranting, you’ll find some hope and optimism around specific projects that the author believes are pushing the industry through its Great Stagnation.

I learned a few things. Maybe you will too.

Practical AI #119

Accelerating ML innovation at MLCommons

MLCommons launched in December 2020 as an open engineering consortium that seeks to accelerate machine learning innovation and broaden access to this critical technology for the public good. David Kanter, the executive director of MLCommons, joins us to discuss the launch and the ambitions of the organization.

In particular we discuss the three pillars of the organization: Benchmarks and Metrics (e.g. MLPerf), Datasets and Models (e.g. People’s Speech), and Best Practices (e.g. MLCube).

Machine Learning huyenchip.com

The MLOps tooling landscape in early 2021 (284 tools)

Chip Huyen:

While looking for these MLOps tools, I discovered some interesting points about the MLOps landscape:

  1. Increasing focus on deployment
  2. The Bay Area is still the epicenter of machine learning, but not the only hub
  3. MLOps infrastructures in the US and China are diverging
  4. More interest in machine learning production from academia

If MLOps is new to you, Practical AI did a deep dive on the topic that will help you sort it out. Or if you’d prefer a shallow dive… just watch this.

Practical AI #118

The $1 trillion dollar ML model 💵

American Express is running what is perhaps the largest commercial ML model in the world: a model that automates over 8 billion decisions, ingests data from over $1T in transactions, and generates decisions globally in mere milliseconds. Madhurima Khandelwal, head of AMEX AI Labs, joins us for a fascinating discussion about scaling research and building robust and ethical AI-driven financial applications.

Practical AI #115

From research to product at Azure AI

Bharat Sandhu, Director of Azure AI and Mixed Reality at Microsoft, joins Chris and Daniel to talk about how Microsoft is making AI accessible and productive for users, and how AI solutions can address real world challenges that customers face. He also shares Microsoft’s research-to-product process, along with the advances they have made in computer vision, image captioning, and how researchers were able to make AI that can describe images as well as people do.

Machine Learning blog.exxactcorp.com

A friendly introduction to Graph Neural Networks

Graph neural networks (GNNs) belong to a category of neural networks that operate naturally on data structured as graphs. Despite being a potentially confusing topic, GNNs can be distilled into just a handful of simple concepts.

Practical uses of GNNs include traffic prediction, search ranking, drug discovery, and more.
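To put the “handful of simple concepts” in concrete terms, here’s a tiny GCN-style message-passing step in NumPy (my own sketch, not taken from the linked post): each node averages features over itself and its neighbors, then applies a learned linear transform and a nonlinearity.

```python
import numpy as np

# One GCN-style message-passing layer on a toy 4-node graph (illustrative sketch).
A = np.array([[0, 1, 1, 0],      # adjacency matrix
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.randn(4, 8)        # node features: 4 nodes, 8 features each
W = np.random.randn(8, 16)       # learnable weights: 8 -> 16 features

A_hat = A + np.eye(4)                               # add self-loops
D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt            # symmetric normalization

H_next = np.maximum(0, A_norm @ H @ W)              # aggregate neighbors, transform, ReLU
print(H_next.shape)                                 # (4, 16)
```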

Practical AI #114

The world's largest open library dataset

Unsplash has released the world’s largest open library dataset, which includes 2M+ high-quality Unsplash photos, 5M keywords, and over 250M searches. They have big ideas about how the dataset might be used by ML/AI folks, and there have already been some interesting applications. In this episode, Luke and Tim discuss why they released this data and what it takes to maintain a dataset of this size.

AI (Artificial Intelligence) nullprogram.com

You might not need machine learning

Chris Wellons:

Machine learning is a trendy topic, so naturally it’s often used for inappropriate purposes where a simpler, more efficient, and more reliable solution suffices. The other day I saw an illustrative and fun example of this: Neural Network Cars and Genetic Algorithms. The video demonstrates 2D cars driven by a neural network with weights determined by a genetic algorithm. However, the entire scheme can be replaced by a first-degree polynomial without any loss in capability. The machine learning part is overkill.

Yet another example of a meta-trend in software: You might not need $X (where $X is a popular tool or technique that is on the upward side of the hype cycle).
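For the concrete flavor of that argument, here’s a hypothetical sketch of the “first-degree polynomial” alternative - the sensor layout and coefficients are made up, not taken from the article - where steering is just a linear function of the car’s sensor readings:

```python
# Illustrative sketch: steering a 2D sensor-based car with a first-degree
# polynomial of its sensor readings instead of a neural network.
# The sensor setup and coefficients here are hypothetical.

def steer(sensors, coeffs, bias):
    """Steering output as a linear (first-degree) function of sensor distances."""
    return bias + sum(c * s for c, s in zip(coeffs, sensors))

# e.g. three rangefinders (left, center, right): turn toward the more open side,
# with positive output meaning "steer left" by convention.
left, center, right = 0.9, 0.4, 0.2
print(steer([left, center, right], coeffs=[1.0, 0.0, -1.0], bias=0.0))  # 0.7 -> steer left
```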

Practical AI #113

A casual conversation concerning causal inference

Lucy D’Agostino McGowan, cohost of the Casual Inference Podcast and a professor at Wake Forest University, joins Daniel and Chris for a deep dive into causal inference. Referring to current events (e.g. misreporting of COVID-19 data in Georgia) as examples, they explore how we interact with, analyze, trust, and interpret data - addressing underlying assumptions, counterfactual frameworks, and unmeasured confounders (Chris’s next Halloween costume).
