Machine Learning

Machine Learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

Python github.com

Efficient, reusable components for 3D computer vision research with PyTorch

PyTorch3D is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data. For this reason, all operators in PyTorch3D:

  • Are implemented using PyTorch tensors
  • Can handle minibatches of heterogeneous data
  • Can be differentiated
  • Can utilize GPUs for acceleration

Get started with tutorials on deforming a sphere mesh into a dolphin, rendering textured meshes, camera position optimization, and more.
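To get a feel for what a "differentiable operator" means here, a pure-Python toy (not PyTorch3D itself, which does this with autograd on batched tensors): a chamfer-style distance between two point sets, plus a numerical check of its gradient with respect to one point coordinate.

```python
def chamfer(a, b):
    # symmetric chamfer distance between two 2-D point sets:
    # for each point, squared distance to its nearest neighbor in the other set
    def one_way(p, q):
        return sum(min((px - qx) ** 2 + (py - qy) ** 2 for qx, qy in q)
                   for px, py in p) / len(p)
    return one_way(a, b) + one_way(b, a)

def num_grad(a, b, i, axis, eps=1e-6):
    # finite-difference derivative of the distance w.r.t. one coordinate of a[i],
    # standing in for what autograd would compute analytically
    bumped = [list(p) for p in a]
    bumped[i][axis] += eps
    return (chamfer(bumped, b) - chamfer(a, b)) / eps

a = [(0.0, 0.0), (1.0, 0.0)]
b = [(0.0, 1.0), (1.0, 1.0)]
# moving a[0] upward (toward b) decreases the distance: num_grad(a, b, 0, 1) ≈ -2.0
```

In PyTorch3D the same kind of quantity is a tensor op whose gradient comes for free from autograd; the finite difference above just demonstrates that the distance is smooth in the point positions, which is what makes gradient-based 3D optimization (like the sphere-to-dolphin tutorial) possible.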

Uber Engineering

Uber's new GTN algorithm speeds up deep learning by 9x

Here’s a new acronym for you: Generative Teaching Networks (GTN)

GTNs are deep neural networks that generate data and/or training environments on which a learner (e.g., a freshly initialized neural network) trains before being tested on a target task (e.g., recognizing objects in images). One advantage of this approach is that GTNs can produce synthetic data that enables other neural networks to learn faster than when training on real data. That allowed us to search for new neural network architectures nine times faster than when using real data.

Fake data, real results? Sounds pretty slick.

Victor Zhou victorzhou.com

A gentle introduction to Visual Question Answering using neural networks

Show us humans a picture of someone in uniform on a mound of dirt throwing a ball and we will quickly tell you we’re looking at baseball. But how do you make a computer come to the same conclusion?

Visual Question Answering

In this post, we’ll explore basic methods for performing VQA and build our own simple implementation in Python.
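As a rough sketch of the shape of such a pipeline (not Victor's actual code): extract an image feature vector and a question feature vector, merge them elementwise, and score candidate answers with learned weights. The feature values and weights below are invented for illustration:

```python
def fuse(img_feats, q_feats):
    # merge the image and question representations elementwise,
    # so the answer can depend on both at once
    return [i * q for i, q in zip(img_feats, q_feats)]

def answer(fused, answer_weights):
    # linear scorer over candidate answers; highest score wins
    scores = {a: sum(w * f for w, f in zip(ws, fused))
              for a, ws in answer_weights.items()}
    return max(scores, key=scores.get)

# invented numbers: feature 0 ≈ "ball-ness", feature 1 ≈ "water-ness"
img = [1.0, 0.2]
q = [0.9, 0.1]
weights = {"baseball": [1.0, 0.0], "swimming": [0.0, 1.0]}
# answer(fuse(img, q), weights) → "baseball"
```

In a real VQA model, the image features come from a CNN, the question features from a text encoder, and the weights from training on question-answer pairs.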

JavaScript github.com

7 simple functions to give you a feel for how machines can actually "learn"

NanoNeuron is an over-simplified version of the Neuron concept from Neural Networks. NanoNeuron is trained to convert temperature values from Celsius to Fahrenheit.

The NanoNeuron.js code example contains 7 simple JavaScript functions (which touch on model prediction, cost calculation, forward/backward propagation, and training) that will give you a feel for how machines can actually “learn”. No 3rd-party libraries, no external datasets or dependencies, only pure and simple JavaScript functions.

This is not a complete guide to machine learning. Just a primer.
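The repo is JavaScript, but the same idea translates directly to Python: a "neuron" with two parameters, a cost, and a gradient-descent training loop that recovers F = 1.8·C + 32 from examples. This is a paraphrase of the approach, not the repo's code:

```python
def predict(w, b, c):
    # the whole "neuron": a linear function of one input
    return w * c + b

def train(data, lr=0.01, epochs=10000):
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        # gradients of the (half) mean-squared-error cost
        dw = sum((predict(w, b, c) - f) * c for c, f in data) / n
        db = sum((predict(w, b, c) - f) for c, f in data) / n
        w -= lr * dw
        b -= lr * db
    return w, b

data = [(c, 1.8 * c + 32) for c in range(21)]   # Celsius → Fahrenheit pairs
w, b = train(data)
# w ≈ 1.8, b ≈ 32: the conversion formula, "learned" from data
```

That's the whole trick: prediction, cost, gradients, repeat — the same loop that trains networks with billions of parameters.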

Learn github.com

A booklet on machine learning systems design with exercises

This booklet covers four main steps of designing a machine learning system:

  1. Project setup
  2. Data pipeline
  3. Modeling: selecting, training, and debugging
  4. Serving: testing, deploying, and maintaining

It comes with links to practical resources that explain each aspect in more detail. It also suggests case studies written by machine learning engineers at major tech companies who have deployed machine learning systems to solve real-world problems.

TensorFlow github.com

TensorFlow 2.0 focuses on simplicity and ease of use

Folks have been talking about TensorFlow 2 for some time (see Practical AI #42 for one excellent example), but it’s finally here. The bulleted list:

  • Easy model building with Keras and eager execution.
  • Robust model deployment in production on any platform.
  • Powerful experimentation for research.
  • API simplification by reducing duplication and removing deprecated endpoints.

This is a huge release. Check out the highlights list in the changelog to see for yourself.

Victor Zhou victorzhou.com

Random Forests for complete beginners

Victor Zhou has been killin’ it lately with these explainers:

In my opinion, most Machine Learning tutorials aren’t beginner-friendly enough.

Last month, I wrote an introduction to Neural Networks for complete beginners. This post will adopt the same strategy, meaning it again assumes ZERO prior knowledge of machine learning. We’ll learn what Random Forests are and how they work from the ground up.
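The gist of the post in miniature (a hedged toy, not Victor's implementation): train many weak trees — here, one-split "stumps" — on bootstrap resamples of the data, then let them vote:

```python
import random

def train_stump(sample):
    # weakest possible tree: one threshold; predict True when x >= t
    best_t, best_err = None, None
    for t in sorted({x for x, _ in sample}):
        err = sum((x >= t) != y for x, y in sample)
        if best_err is None or err < best_err:
            best_t, best_err = t, err
    return best_t

def forest(data, n_trees=25, seed=0):
    rng = random.Random(seed)
    # each tree sees its own bootstrap resample (drawn with replacement)
    return [train_stump([rng.choice(data) for _ in data])
            for _ in range(n_trees)]

def predict(thresholds, x):
    votes = sum(x >= t for t in thresholds)
    return votes * 2 > len(thresholds)          # majority vote

data = [(float(x), x >= 5) for x in range(10)]  # toy rule: big numbers are True
model = forest(data)
```

Individual stumps trained on different resamples land on slightly different thresholds, but the majority vote is more stable than any single tree — that's the random forest intuition in one screenful.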

Netflix Technology Blog

Python at Netflix

From the Netflix Technology Blog on how they’re using Python.

As many of us prepare to go to PyCon, we wanted to share a sampling of how Python is used at Netflix. We use Python through the full content lifecycle, from deciding which content to fund all the way to operating the CDN that serves the final video to 148 million members. We use and contribute to many open-source Python packages, some of which are mentioned below. If any of this interests you, check out the jobs site or find us at PyCon. We have donated a few Netflix Originals posters to the PyLadies Auction and look forward to seeing you all there.

Hamel Husain towardsdatascience.com

How to automate tasks on GitHub with machine learning for fun and profit

This is an explainer on how to build a GitHub App that predicts and applies issue labels using TensorFlow and public datasets. Hamel Husain writes:

In order to show you how to create your own apps, we will walk you through the process of creating a GitHub app that can automatically label issues. Note that all of the code for this app, including the model training steps, is located in this GitHub repository.

See also: Issue Label Bot
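The real app trains a TensorFlow text model on public GitHub data; purely to illustrate the predict-then-apply step, here's a stand-in classifier with hand-picked keywords (the labels and keyword sets are invented, not the bot's):

```python
# hypothetical labels and keywords; the real bot learns these from data
LABEL_KEYWORDS = {
    "bug": {"error", "crash", "broken", "traceback"},
    "feature_request": {"add", "support", "feature", "request"},
    "question": {"how", "why", "what"},
}

def predict_label(issue_title):
    words = set(issue_title.lower().split())
    scores = {label: len(words & kws) for label, kws in LABEL_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None   # abstain when nothing matches
```

A GitHub App wraps exactly this kind of function: receive an issue-opened webhook, predict a label, and apply it via the API — the article's contribution is doing the predict step with a properly trained model.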

NVIDIA Developer Blog

NVIDIA Jetson Nano - A $99 computer for embedded AI

Google, Intel, and others have recently been targeting AI at the edge with things like Coral and the Neural Compute Stick, but NVIDIA is taking things a step further. They just announced the Jetson Nano, which is a $99 computer with 472 GFLOPS of compute performance, an integrated NVIDIA GPU, and a Raspberry Pi form factor. According to NVIDIA:

The compute performance, compact footprint, and flexibility of Jetson Nano brings endless possibilities to developers for creating AI-powered devices and embedded systems.

And it’s not only for inference (which is the main target of things like Intel’s NCS). The Jetson Nano can also handle AI model training:

since Jetson Nano can run the full training frameworks like TensorFlow, PyTorch, and Caffe, it’s also able to re-train with transfer learning for those who may not have access to another dedicated training machine and are willing to wait longer for results.

Check it out! You can pre-order now.

The Allen Institute for AI

China to overtake US in AI research

China has committed to becoming the world leader in AI by 2030, with goals to build a domestic artificial intelligence industry worth nearly $150 billion (according to this CNN article). Prompted by these efforts, the Semantic Scholar team at the Allen AI Institute analyzed over two million academic AI papers published through the end of 2018. This analysis revealed the following:

Our analysis shows that China has already surpassed the US in published AI papers. If current trends continue, China is poised to overtake the US in the most-cited 50% of papers this year, in the most-cited 10% of papers next year, and in the 1% of most-cited papers by 2025. Citation counts are a lagging indicator of impact, so our results may understate the rising impact of AI research originating in China.

They also emphasize that US actions are making it difficult to recruit and retain foreign students and scholars, and these difficulties are likely to exacerbate the trend towards Chinese supremacy in AI research.

OpenAI

OpenAI creates a "capped-profit" to help build artificial general intelligence

OpenAI, one of the largest and most influential AI research entities, was originally a non-profit. However, they just announced that they are creating a “capped-profit” entity, OpenAI LP. This capped-profit entity will supposedly help them accomplish their mission of building artificial general intelligence (AGI):

We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit—which we are calling a “capped-profit” company.

The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity. But any returns beyond that amount—and if we are successful, we expect to generate orders of magnitude more value than we’d owe to people who invest in or work at OpenAI LP—are owned by the original OpenAI Nonprofit entity.
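The announcement puts the cap for first-round investors at 100x their investment. The arithmetic of the split is simple enough to sketch (the dollar amounts below are hypothetical):

```python
def split_returns(invested, gross_return, cap_multiple=100):
    # investors keep returns up to cap_multiple times their investment;
    # anything beyond the cap belongs to the original OpenAI Nonprofit
    to_investors = min(gross_return, invested * cap_multiple)
    return to_investors, gross_return - to_investors

# e.g. $10M invested, $1.5B returned: investors get $1B (100x), nonprofit $500M
```

So the structure only diverges from a normal for-profit in the (hoped-for) scenario where returns blow past the cap.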

To some, this makes total sense. Others have criticized the move, saying it misrepresents money as the only barrier to AGI or implies that OpenAI will develop it in a vacuum. What do you think?

Learn more about OpenAI’s mission from one of its founders in this episode of Practical AI.

Python github.com

GIPHY's celebrity-detecting deep learning model 🕵️‍♀️

GIPHY is proud to release our custom machine learning model that is able to discern over 2,300 celebrity faces with 98% accuracy. The model was trained to identify the most popular celebs on GIPHY, and can identify and make predictions for multiple faces across a sequence of images, like GIFs and videos.

Give it a try on the demo page or download the model yourself and follow along with the examples.
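One detail worth noting is the "across a sequence of images" part: per-frame predictions have to be combined into a single call for the whole GIF. A minimal sketch of one way to do that — averaging per-frame probabilities, which is an assumption, not necessarily GIPHY's method (names are placeholders):

```python
from collections import defaultdict

def aggregate(frame_predictions):
    # frame_predictions: one {celebrity: probability} dict per frame of the GIF
    totals = defaultdict(float)
    for frame in frame_predictions:
        for name, p in frame.items():
            totals[name] += p
    best = max(totals, key=totals.get)
    return best, totals[best] / len(frame_predictions)  # name, mean probability
```

Averaging over frames smooths out the occasional bad frame (motion blur, weird angle), which is part of why video identification can beat single-image identification.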
