Machine Learning

Machine Learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

Practices eugeneyan.com

The first rule of X: start without X

Eugene Yan, in a post titled The first rule of machine learning: start without machine learning:

Applying machine learning effectively is tricky. You need data. You need a robust pipeline to support your data flows. And most of all, you need high-quality labels. As a result, most of the time, my first iteration doesn’t involve machine learning at all.

Eugene is stating the obvious with this post, but hey, sometimes you just gotta state it. What’s even more interesting to me is how nicely the format generalizes! Let’s pattern-match this sucker:

The first rule of X: start without X

Now, apply the pattern a few times and see if it holds:

  1. The first rule of Kubernetes: start without Kubernetes
  2. The first rule of goroutines: start without goroutines
  3. The first rule of coding: start without coding

Yeah, that abstraction holds pretty true. Surely there will be cases where it falls flat on its face, though. Can you think of any examples?

AI (Artificial Intelligence) github.com

Jina – build search-as-a-service powered by deep learning in just minutes

Jina calls itself a “cloud-native neural search framework”. What is neural search, exactly?

The core idea of neural search is to leverage state-of-the-art deep neural networks to build every component of a search system. In short, neural search is deep neural network-powered information retrieval. In academia, it’s often called neural IR.

And what can it do for you?

Thanks to recent advances in deep neural networks, a neural search system can go way beyond simple text search. It enables advanced intelligence on all kinds of unstructured data, such as images, audio, video, PDF, 3D mesh, you name it.

For example, retrieving animation according to some beats; finding the best-fit memes according to some jokes; scanning a table with your iPhone’s LiDAR camera and finding similar furniture at IKEA. Neural search systems enable what traditional search can’t: multi/cross-modal data retrieval.

This project looks quite established and collaborative. 172 contributors and counting…
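
If “neural search” still feels abstract, here’s the skeleton of the idea in plain numpy: encode everything into vectors, then rank by similarity. This is a minimal sketch, not Jina’s API, and the random vectors stand in for a real deep encoder:

import numpy as np

def cosine_top_k(query_vec, index_vecs, k=5):
    # Cosine similarity between the query and every indexed vector.
    sims = index_vecs @ query_vec / (
        np.linalg.norm(index_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    return np.argsort(-sims)[:k]  # indices of the k best matches

# Stand-in for a neural encoder; a real system maps text, images, audio,
# etc. into a shared vector space with a deep model.
rng = np.random.default_rng(0)
index_vecs = rng.normal(size=(1000, 128))  # 1,000 embedded "documents"
query_vec = rng.normal(size=128)           # one embedded "query"
print(cosine_top_k(query_vec, index_vecs))

Swap in real embeddings and the multi/cross-modal retrieval described above falls out: as long as two encoders share a vector space, a text query can rank images (or memes, or furniture).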

The Verge

OpenAI Codex translates English into code

Codex is a descendant of GPT-3 – its training data contains both natural language and billions of lines of source code from publicly available sources, including code in public GitHub repositories.

“We see this as a tool to multiply programmers,” OpenAI’s CTO and co-founder Greg Brockman told The Verge. “Programming has two parts to it: you have ‘think hard about a problem and try to understand it,’ and ‘map those small pieces to existing code, whether it’s a library, a function, or an API.’” The second part is tedious, he says, but it’s what Codex is best at. “It takes people who are already programmers and removes the drudge work.”

Mozilla

Mozilla Common Voice adds 16 new languages and 4,600 new hours of speech

That’s a big addition. Here’s what Hillary Juma (Common Voice’s community manager) had to say about it:

Internet access is increasingly mediated through speech: voice assistants and smart speakers give us directions, search for information, connect us to friends, are used in assistive technology, and much more. Yet this technology doesn’t work for millions of people. For example, neither Amazon’s Alexa, Apple’s Siri, nor Google Home supports a single native African language.

By giving individuals the ability to share their speech, we can help ensure all communities have access to voice technology and the opportunity it unlocks.

What a great initiative! (I first heard about Common Voice on Practical AI.)

Chip Huyen huyenchip.com

A free book on how to survive the machine learning interview process

Chip Huyen has been on both sides of ML-related interviews and has a lot of expertise in the process:

If you’ve picked up this book because you’re interested in working with one of the key emerging technologies of the 2020s but not sure where to start, you’re in the right place. Whether you want to become an ML engineer, a platform engineer, a research scientist, or you want to do ML but don’t yet know the differences among those titles, I hope that this book will give you some useful pointers.

Facebook Engineering

A data augmentations library for audio, image, text, and video

AugLy is a great library to utilize for augmenting your data in model training, or to evaluate the robustness gaps of your model! We designed AugLy to include many specific data augmentations that users perform in real life on internet platforms like Facebook’s – for example making an image into a meme, overlaying text/emojis on images/videos, reposting a screenshot from social media. While AugLy contains more generic data augmentations as well, it will be particularly useful to you if you’re working on a problem like copy detection, hate speech detection, or copyright infringement where these “internet user” types of data augmentations are prevalent.
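
To make those “internet user” augmentations concrete, here’s a tiny text-overlay transform. Note this is a PIL sketch of the flavor of transform AugLy ships, not AugLy’s own API, and the file names are placeholders:

from PIL import Image, ImageDraw

def overlay_text(in_path, text, out_path):
    # Stamp caption text onto an image, meme-style.
    img = Image.open(in_path).convert("RGB")
    ImageDraw.Draw(img).text((10, 10), text, fill=(255, 255, 255))
    img.save(out_path)

overlay_text("photo.jpg", "one does not simply deploy to prod", "photo_meme.jpg")

AugLy’s versions add the knobs you’d actually want (fonts, opacity, positioning) and cover audio, text, and video too.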


Command line interface github.com

Command-line tools for speech and intent recognition on Linux

This isn’t merely a speech-to-text thing. It also provides intent recognition, which makes it great for doing voice commands. For example, when trained with this template, the following command:

$ voice2json transcribe-wav \
      < turn-on-the-light.wav | \
      voice2json recognize-intent | \
      jq .

Produces this JSON event:

{
    "text": "turn on the light",
    "intent": {
        "name": "LightState"
    },
    "slots": {
        "state": "on"
    }
}

And it can be retrained quickly enough to do it at runtime. Cool stuff!

AI (Artificial Intelligence) exxactcorp.com

Disentangling AI, machine learning, and deep learning

This article starts with a concise description of the relationships and differences among these three commonly used industry terms. Then it digs into the history.

Deep learning is a subset of machine learning, which in turn is a subset of artificial intelligence, but the origins of these names arose from an interesting history. In addition, there are fascinating technical characteristics that can differentiate deep learning from other types of machine learning…essential working knowledge for anyone with ML, DL, or AI in their skillset.


The New Stack

How I built an on-premises AI training testbed with Kubernetes and Kubeflow

This is part 4 in a cool series on The New Stack exploring the Kubeflow machine learning platform.

I recently built a four-node bare metal Kubernetes cluster comprising CPU and GPU hosts for all my AI experiments. Though it makes economic sense to leverage the public cloud for provisioning the infrastructure, I invested a fortune in the AI testbed that’s within my line of sight.

The author shares many insights into the choices he made while building this dream setup.


Python github.com

A PyTorch-based speech toolkit

SpeechBrain is an open-source and all-in-one speech toolkit based on PyTorch.

The goal is to create a single, flexible, and user-friendly toolkit that can be used to easily develop state-of-the-art speech technologies, including systems for speech recognition, speaker recognition, speech enhancement, multi-microphone signal processing and many others.

Currently in beta.
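
Transcribing a file looks something like this, going by the project’s README (treat the model identifier and API as assumptions that may shift while it’s in beta):

from speechbrain.pretrained import EncoderDecoderASR

# Downloads a pretrained LibriSpeech model on first use.
asr = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-crdnn-rnnlm-librispeech",
    savedir="pretrained_models/asr-crdnn-rnnlm-librispeech",
)
print(asr.transcribe_file("example.wav"))  # path to your own audio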

Python github.com

`whereami` uses WiFi signals & ML to locate you (within 2-10 meters)

If you’re adventurous and want to learn to distinguish between couch #1 and couch #2 (i.e. 2 meters apart), it is most robust when you switch locations and train in turns: first in spot A, then in spot B, then starting again with A. Doing this in spot A, then spot B, and then immediately using “predict” will usually yield spot B as the answer. No worries, the effect of this temporal overfitting disappears over time. In fact, it’s only a real concern for very short distances. Just take a sample after some time in both locations and it should become very robust.

The linked project was “almost entirely copied” from the find project, which was written in Go. It then went on to inspire whereami.js. I bet you can guess what that is.
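
Under the hood this is a classic supervised-learning setup: each WiFi scan (access point → signal strength) is a feature vector, and the label is where you were standing. A hypothetical sketch with scikit-learn, with made-up access points and readings:

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline

# Each sample: {access point: RSSI in dBm} -> the spot you trained from.
scans = [
    ({"ap1": -40, "ap2": -70}, "couch_1"),
    ({"ap1": -45, "ap2": -65}, "couch_1"),
    ({"ap1": -60, "ap2": -50}, "couch_2"),
    ({"ap1": -65, "ap2": -45}, "couch_2"),
]
X, y = zip(*scans)
model = make_pipeline(DictVectorizer(sparse=False), RandomForestClassifier())
model.fit(X, y)
print(model.predict([{"ap1": -42, "ap2": -68}]))  # -> ['couch_1']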

HackerNoon

Why ML in production is (still) broken and ways we can fix it

Hamza Tahir on HackerNoon:

By now, chances are you’ve read the famous paper about hidden technical debt by Sculley et al. from 2015. As a field, we have accepted that machine learning itself is only a fraction of the work going into successful ML projects. The resulting complexity, especially in the transition to “live” environments, leads to large numbers of ML projects never reaching production.

Productionizing ML workflows has been a trending topic on Practical AI lately…


Elixir thinkingelixir.com

ML is coming to Elixir by way of José Valim's "Project Nx"

Elixir creator José Valim stopped by the Thinking Elixir podcast to reveal what he’s been working on for the past 3 months: Numerical Elixir!

This is an exciting development that brings Elixir into areas it hasn’t been used before. We also talk about what this means for Elixir and the community going forward. A must listen!

Queue up this episode and/or stay tuned for an upcoming episode of The Changelog where we’ll sit down with José after his LambdaDays demo to unpack things even more.

Machine Learning marksaroufim.substack.com

Machine Learning: The Great Stagnation

This piece by Mark Saroufim on the state of ML starts pretty salty:

Graduate Student Descent is one of the most reliable ways of getting state-of-the-art performance in Machine Learning today, and it’s also fully parallelizable over as many graduate students or employees as your lab has. Armed with Graduate Student Descent, you are more likely to get published or promoted than if you took on uncertain projects.

and:

BERT engineer is now a full time job. Qualifications include:

  • Some bash scripting
  • Deep knowledge of pip (starting a new environment is the suckier version of practicing scales)
  • Waiting for new HuggingFace models to be released
  • Watching Yannic Kilcher’s new Transformer paper the day it comes out
  • Repeating what Yannic said at your team reading group

It’s kind of like Dev-ops but you get paid more.

But if you survive through (or maybe even enjoy) the lamentations and ranting, you’ll find some hope and optimism around specific projects that the author believes are pushing the industry through its Great Stagnation.

I learned a few things. Maybe you will too.

Machine Learning huyenchip.com

The MLOps tooling landscape in early 2021 (284 tools)

Chip Huyen:

While looking for these MLOps tools, I discovered some interesting points about the MLOps landscape:

  1. Increasing focus on deployment
  2. The Bay Area is still the epicenter of machine learning, but not the only hub
  3. MLOps infrastructures in the US and China are diverging
  4. More interest in machine learning production from academia

If MLOps is new to you, Practical AI did a deep dive on the topic that will help you sort it out. Or if you’d prefer a shallow dive… just watch this.

Machine Learning blog.exxactcorp.com

A friendly introduction to Graph Neural Networks

Graph neural networks (GNNs) belong to a category of neural networks that operate naturally on data structured as graphs. Despite being what can be a confusing topic, GNNs can be distilled into just a handful of simple concepts.

Practical uses of GNNs include traffic prediction, search ranking, drug discovery, and more.
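
The “handful of simple concepts” mostly boils down to message passing: each node averages its neighbors’ features (plus its own) and pushes the result through a learned transformation. A minimal sketch of one graph-convolution layer in numpy:

import numpy as np

def gcn_layer(A, H, W):
    # Add self-loops so each node keeps its own features, average over
    # neighbors, then apply the learned weights and a ReLU.
    A_hat = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

# Toy graph: 3 nodes in a path (0-1-2), 4 input features, 2 output features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(3, 4))
W = np.random.default_rng(1).normal(size=(4, 2))
print(gcn_layer(A, H, W))

Stack a few of these and information propagates across the graph, which is what the traffic and drug-discovery applications lean on.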

AI (Artificial Intelligence) nullprogram.com

You might not need machine learning

Chris Wellons:

Machine learning is a trendy topic, so naturally it’s often used for inappropriate purposes where a simpler, more efficient, and more reliable solution suffices. The other day I saw an illustrative and fun example of this: Neural Network Cars and Genetic Algorithms. The video demonstrates 2D cars driven by a neural network with weights determined by a genetic algorithm. However, the entire scheme can be replaced by a first-degree polynomial without any loss in capability. The machine learning part is overkill.

Yet another example of a meta-trend in software: You might not need $X (where $X is a popular tool or technique that is on the upward side of the hype cycle).
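
To see how little the “simpler solution” can be: if the steering function really is (near-)linear in the sensor readings, ordinary least squares recovers it directly, with no network or genetic algorithm in sight. A hypothetical sketch with invented data:

import numpy as np

rng = np.random.default_rng(0)
sensors = rng.uniform(0.0, 1.0, size=(200, 2))        # (left, right) distances
steering = 0.8 * sensors[:, 0] - 0.8 * sensors[:, 1]  # pretend observed behavior
coeffs, *_ = np.linalg.lstsq(sensors, steering, rcond=None)
print(coeffs)  # ~ [0.8, -0.8]: the whole "model" is two numbers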

Craig Kerstiens info.crunchydata.com

Building a recommendation engine inside Postgres with Python and Pandas

Craig Kerstiens told me about this on our recent Postgres episode of The Changelog and my jaw about dropped out of my mouth.

… earlier today I was starting to wonder why couldn’t I do more machine learning directly inside [Postgres]. Yeah, there is madlib, but what if I wanted to write my own recommendation engine? So I set out on a total detour of a few hours and lo and behold, I can probably do a lot more of this in Postgres than I realized before. What follows is a quick walkthrough of getting a recommendation engine setup directly inside Postgres.

Craig doesn’t necessarily suggest you put this kind of solution in production, but he doesn’t come out and say don’t do it either. 😉
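
Craig’s walkthrough runs pandas inside Postgres via its Python procedural language; here’s a standalone sketch of the item-similarity core with toy ratings, just to show how little code it takes:

import numpy as np
import pandas as pd

# Toy ratings; the real version reads these from a Postgres table.
ratings = pd.DataFrame({
    "user":   ["a", "a", "b", "b", "c", "c"],
    "item":   ["x", "y", "x", "z", "y", "z"],
    "rating": [5, 3, 4, 2, 4, 5],
})
matrix = ratings.pivot_table(index="user", columns="item", values="rating").fillna(0.0)

# Item-item cosine similarity: recommend items most similar to ones a user liked.
norms = np.sqrt((matrix ** 2).sum())
sims = (matrix.T @ matrix) / np.outer(norms, norms)
print(sims["x"].drop("x").sort_values(ascending=False))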

Machine Learning blog.acolyer.org

The case for a learned sorting algorithm

Adrian Colyer walks us through a paper from SageDB that’s taking machine learning and applying it to old Computer Science problems such as sorting. Here’s the big idea:

Suppose you had a model that given a data item from a list, could predict its position in a sorted version of that list. 0.239806? That’s going to be at position 287! If the model had 100% accuracy, it would give us a completed sort just by running over the dataset and putting each item in its predicted position. There’s a problem though. A model with 100% accuracy would essentially have to see every item in the full dataset and memorise its position – there’s no way training and then using such a model can be faster than just sorting, as sorting is a part of its training! But maybe we can sample a subset of the data and get a model that is a useful approximation, by learning an approximation to the CDF (cumulative distribution function).
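
To make the recipe concrete, here’s a sketch with the simplest possible “model”: an empirical CDF built from a random sample. (The paper’s learned models and bucketing strategy are far more sophisticated; this just shows the predict-then-fix-up shape.)

import bisect
import random

def learned_sort(data, sample_size=100):
    n = len(data)
    # "Train": sort a random sample; its ranks approximate the CDF.
    sample = sorted(random.sample(data, min(sample_size, n)))

    # "Predict": drop each item near its predicted sorted position,
    # probing forward when a slot is already taken.
    out = [None] * n
    for x in data:
        pos = bisect.bisect_left(sample, x) * (n - 1) // len(sample)
        while out[pos] is not None:
            pos = (pos + 1) % n
        out[pos] = x

    # Fix up: the predictions are only approximate, so finish with
    # insertion sort, which is cheap on nearly-sorted input.
    for i in range(1, n):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

print(learned_sort([random.random() for _ in range(10_000)])[:5])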
