AI (Artificial Intelligence)

Machines simulating human characteristics and intelligence.

Machine Learning cerebralab.com

Boring machine learning is where it's at

It surprises me that when people think of “software that brings about the singularity” they think of text models, or of RL agents. But they sneer at decision tree boosting and the like as boring algorithms for boring problems.

To me, this seems counter-intuitive, and the fact that most people researching ML are interested in subjects like vision and language is flabbergasting. For one, because getting anywhere productive in these fields is really hard, for another, because their usefulness seems relatively minimal.
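For a taste of what the author means by "boring" machine learning: a minimal sketch of gradient-boosted decision trees on a synthetic tabular problem. The dataset and parameters here are made up for illustration, standing in for the kind of structured business data these "boring" algorithms quietly excel at.

```python
# "Boring" ML: gradient-boosted decision trees on a tabular problem.
# Synthetic data stands in for a typical structured business dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

No GPUs, no pretraining, and for many structured-data problems this is still the thing to beat.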

AI (Artificial Intelligence) github.com

Jina – build search-as-a-service powered by deep learning in just minutes

Jina calls itself a “cloud-native neural search framework”. What is neural search, exactly?

The core idea of neural search is to leverage state-of-the-art deep neural networks to build every component of a search system. In short, neural search is deep neural network-powered information retrieval. In academia, it’s often called neural IR.

And what can it do for you?

Thanks to recent advances in deep neural networks, a neural search system can go way beyond simple text search. It enables advanced intelligence on all kinds of unstructured data, such as images, audio, video, PDF, 3D mesh, you name it.

For example, retrieving animation according to some beats; finding the best-fit memes according to some jokes; scanning a table with your iPhone’s LiDAR camera and finding similar furniture at IKEA. Neural search systems enable what traditional search can’t: multi/cross-modal data retrieval.
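Stripped of the framework, the core loop of any neural search system looks something like this sketch: embed documents and queries into one vector space with the same network, then rank by similarity. (This is not Jina's API; the random vectors below are a toy stand-in for a real deep encoder.)

```python
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def search(query_vec, doc_vecs, top_k=3):
    """Rank documents by similarity to the query in embedding space."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: -scores[i])[:top_k]

# Toy stand-in for a real encoder: in practice a deep network maps
# text, images, audio, etc. into this shared vector space.
rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(10, 64))
query_vec = doc_vecs[4] + 0.01 * rng.normal(size=64)  # query "about" doc 4
print(search(query_vec, doc_vecs))
```

Because everything becomes a vector, the same ranking loop works across modalities, which is where the multi/cross-modal retrieval claim comes from.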

This project looks quite established and collaborative. 172 contributors and counting…

The Verge

OpenAI Codex translates English into code

Codex is a descendant of GPT-3 – its training data contains both natural language and billions of lines of source code from publicly available sources, including code in public GitHub repositories.

“We see this as a tool to multiply programmers,” OpenAI’s CTO and co-founder Greg Brockman told The Verge. “Programming has two parts to it: you have ‘think hard about a problem and try to understand it,’ and ‘map those small pieces to existing code, whether it’s a library, a function, or an API.’” The second part is tedious, he says, but it’s what Codex is best at. “It takes people who are already programmers and removes the drudge work.”

Mozilla

Mozilla Common Voice adds 16 new languages and 4,600 new hours of speech

That’s a big addition. Here’s what Hillary Juma (Common Voice’s community manager) had to say about it:

Internet access is increasingly mediated through speech: Voice assistants and smart speakers give us directions, search for information, connect us to friends, are used in assistive technology, and much more. Yet this technology doesn’t work for millions of people. For example, neither Amazon’s Alexa, Apple’s Siri, nor Google Home supports a single native African language.

By giving individuals the ability to share their speech, we can help ensure all communities have access to voice technology and the opportunity it unlocks.

What a great initiative! (I first heard about Common Voice on Practical AI.)

Licensing fsf.org

Free Software Foundation declares GitHub Copilot "unacceptable and unjust"

The FSF is funding white papers on “philosophical and legal questions around Copilot”. In their post announcing the fund, Donald Robertson states:

The Free Software Foundation has received numerous inquiries about our position on these questions. We can see that Copilot’s use of freely licensed software has many implications for an incredibly large portion of the free software community. Developers want to know whether training a neural network on their software can really be considered fair use. Others who may be interested in using Copilot wonder if the code snippets and other elements copied from GitHub-hosted repositories could result in copyright infringement. And even if everything might be legally copacetic, activists wonder if there isn’t something fundamentally unfair about a proprietary software company building a service off their work.

One thing is for sure: there are many open questions that need answering. How we (as a community / industry) go about answering those questions is much less clear. But it’ll probably take place on blogs, forums, GitHub Issues, and even court rooms over the next decade.

AI (Artificial Intelligence) exxactcorp.com

Disentangling AI, machine learning, and deep learning

This article starts with a concise description of the relationships and differences between these three commonly used industry terms. Then it digs into the history.

Deep learning is a subset of machine learning, which in turn is a subset of artificial intelligence, but the origins of these names arose from an interesting history. In addition, there are fascinating technical characteristics that can differentiate deep learning from other types of machine learning…essential working knowledge for anyone with ML, DL, or AI in their skillset.


Python github.com

A PyTorch-based speech toolkit

SpeechBrain is an open-source and all-in-one speech toolkit based on PyTorch.

The goal is to create a single, flexible, and user-friendly toolkit that can be used to easily develop state-of-the-art speech technologies, including systems for speech recognition, speaker recognition, speech enhancement, multi-microphone signal processing and many others.

Currently in beta.

Tooling github.com

Search inside YouTube videos using natural language

Use OpenAI’s CLIP neural network to search inside YouTube videos. You can try it by running the notebook on Google Colab.

The README has a bunch of examples of things you might search for and the results you’d get back. (“The Transamerica Pyramid”, anyone?)

The author also has another related project where you can search Unsplash in like manner.
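The underlying recipe is simple enough to sketch: sample frames from the video, embed them and the text query with CLIP, and rank frames by cosine similarity. The embeddings below are random stand-ins for actual CLIP model outputs (so only the ranking logic is real), and the landmark timestamp is hypothetical.

```python
import numpy as np

def best_moments(text_emb, frame_embs, fps_sampled=1.0, top_k=3):
    """Rank sampled frames by CLIP-style similarity; return (seconds, score)."""
    # CLIP embeddings are compared by cosine similarity after L2-normalizing.
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    scores = f @ t
    order = np.argsort(-scores)[:top_k]
    return [(i / fps_sampled, float(scores[i])) for i in order]

# Hypothetical embeddings stand in for CLIP's image/text encoders.
rng = np.random.default_rng(1)
frame_embs = rng.normal(size=(120, 512))   # 2 minutes sampled at 1 frame/sec
text_emb = frame_embs[30] + 0.05 * rng.normal(size=512)  # query "matches" ~30s
print(best_moments(text_emb, frame_embs))
```

Swap the random arrays for real `encode_image`/`encode_text` outputs and you have the gist of both the YouTube and Unsplash projects.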

Ines Montani github.com

Introducing spaCy 3.0

You may recall spaCy from this episode of Practical AI with its creators. If not, now’s a great time to introduce yourself to the project. 3.0 looks like a fantastic new release of the wildly popular NLP library. The list of new and improved things is too long for me to reproduce here, so go check it out for yourself.

There are also three YouTube videos accompanying the release. That’s evidence of just how much effort and polish went into this.

Machine Learning marksaroufim.substack.com

Machine Learning: The Great Stagnation

This piece by Mark Saroufim on the state of ML starts pretty salty:

Graduate Student Descent is one of the most reliable ways of getting state of the art performance in Machine Learning today and it’s also fully parallelizable over as many graduate students or employees as your lab has. Armed with Graduate Student Descent you are more likely to get published or promoted than if you took on uncertain projects.

and:

BERT engineer is now a full time job. Qualifications include:

  • Some bash scripting
  • Deep knowledge of pip (starting a new environment is the suckier version of practicing scales)
  • Waiting for new HuggingFace models to be released
  • Watching Yannic Kilcher’s new Transformer paper the day it comes out
  • Repeating what Yannic said at your team reading group

It’s kind of like Dev-ops but you get paid more.

But if you survive through (or maybe even enjoy) the lamentations and ranting, you’ll find some hope and optimism around specific projects that the author believes are pushing the industry through its Great Stagnation.

I learned a few things. Maybe you will too.

Machine Learning blog.exxactcorp.com

A friendly introduction to Graph Neural Networks

Graph neural networks (GNNs) belong to a category of neural networks that operate naturally on data structured as graphs. Despite being what can be a confusing topic, GNNs can be distilled into just a handful of simple concepts.

Practical uses of GNNs include traffic prediction, search ranking, drug discovery, and more.
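One of those simple concepts is message passing: each node averages its neighbors’ features and runs the result through a learned transform. A minimal NumPy sketch, with a hypothetical 4-node graph and hand-picked weights:

```python
import numpy as np

def gnn_layer(A, H, W):
    """One round of message passing.
    A: adjacency matrix (n x n), H: node features (n x d), W: weights (d x d')."""
    A_hat = A + np.eye(len(A))            # add self-loops so nodes keep their own state
    deg = A_hat.sum(axis=1, keepdims=True)
    messages = (A_hat / deg) @ H          # mean over each node's neighborhood
    return np.maximum(messages @ W, 0)    # linear transform + ReLU

# Hypothetical path graph 0-1-2-3 with 2-d node features; identity weights
# so the aggregation step is easy to read off.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.array([[1, 0], [0, 1], [1, 1], [0, 0]], dtype=float)
W = np.eye(2)
print(gnn_layer(A, H, W))
```

Stack a few of these layers and information propagates across the graph, which is most of what a GNN is doing.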

AI (Artificial Intelligence) nullprogram.com

You might not need machine learning

Chris Wellons:

Machine learning is a trendy topic, so naturally it’s often used for inappropriate purposes where a simpler, more efficient, and more reliable solution suffices. The other day I saw an illustrative and fun example of this: Neural Network Cars and Genetic Algorithms. The video demonstrates 2D cars driven by a neural network with weights determined by a genetic algorithm. However, the entire scheme can be replaced by a first-degree polynomial without any loss in capability. The machine learning part is overkill.

Yet another example of a meta-trend in software: You might not need $X (where $X is a popular tool or technique that is on the upward side of the hype cycle).
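Wellons’ replacement really is that simple. Here’s a sketch of a first-degree-polynomial steering controller; the sensor layout and gain are hypothetical, not taken from the video:

```python
# A first-degree polynomial replacing the neural network: steering is a
# linear function of the car's left/right distance sensors. Sensor names
# and the gain k are hypothetical illustration, not the video's exact setup.
def steer(left_dist, right_dist, k=1.0):
    """Turn toward the side with more room: steering = k * (right - left)."""
    return k * (right_dist - left_dist)

print(steer(left_dist=2.0, right_dist=5.0))   # positive: turn right
print(steer(left_dist=4.0, right_dist=4.0))   # zero: go straight
```

One multiply and one subtract per frame, versus training a network: that asymmetry is the whole point of the post.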

Learn github.com

A roadmap to becoming an AI expert in 2020

Below you find a set of charts demonstrating the paths that you can take and the technologies that you would want to adopt in order to become a data scientist, machine learning expert, or AI expert. We made these charts for our new employees to make them AI experts but we wanted to share them here to help the community.

I didn’t embed the roadmap images because they are too many and too vertical to fit. It sounds like an interactive version is Coming Soon™️, but don’t wait on that to get started here. 2020 is almost over. 😉

InfoQ

AI training method exceeds GPT-3 performance with 99.9% fewer parameters

A team of scientists at LMU Munich have developed Pattern-Exploiting Training (PET), a deep-learning training technique for natural language processing (NLP) models. Using PET, the team trained a Transformer NLP model with 223M parameters that out-performed the 175B-parameter GPT-3 by over 3 percentage points on the SuperGLUE benchmark.
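PET’s core trick is rephrasing classification as a cloze task: wrap the input in a pattern with a mask slot, and map the language model’s preferred fill-in word to a label via a “verbalizer”. A toy sketch of that idea (not the LMU team’s code; a hypothetical keyword scorer stands in for the masked language model):

```python
# Sketch of PET's pattern/verbalizer idea. In a real setup a pretrained
# masked LM scores candidate words at [MASK]; here a toy keyword counter
# stands in for the model so the mapping logic is runnable on its own.
PATTERN = "{text} All in all, it was [MASK]."
VERBALIZER = {"great": "positive", "terrible": "negative"}

def mlm_score(cloze, candidate):
    """Stand-in for a masked LM's score for `candidate` at [MASK]."""
    positive_words = ("loved", "wonderful", "fun")
    positivity = sum(w in cloze.lower() for w in positive_words)
    return positivity if candidate == "great" else 0.5 - positivity

def classify(text):
    cloze = PATTERN.format(text=text)
    best_word = max(VERBALIZER, key=lambda w: mlm_score(cloze, w))
    return VERBALIZER[best_word]

print(classify("I loved this movie, it was wonderful."))
print(classify("It was awful and boring."))
```

Because the pattern speaks the pretrained model’s own language, a comparatively tiny model can be fine-tuned from very few examples, which is how 223M parameters can punch above 175B on SuperGLUE.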

NVIDIA Developer Blog

NVIDIA's new GAN reduces video bandwidth by orders of magnitude

This is bonkers:

New AI breakthroughs in NVIDIA Maxine, a cloud-native video streaming AI SDK, slash bandwidth use while making it possible to re-animate faces, correct gaze, and animate characters for immersive and engaging meetings.

Instead of transferring your face at N frames per second, they transfer it once at the beginning of the call and then update key positions over time. The results are super impressive (and just a bit creepy?).
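Some back-of-envelope arithmetic shows why the keypoint scheme wins. The numbers below are hypothetical, not NVIDIA’s published figures, and a real call uses a compressed codec rather than raw frames, but the orders-of-magnitude gap is the point:

```python
# Back-of-envelope comparison with hypothetical numbers: raw pixels every
# frame vs. a one-time keyframe plus a small keypoint vector per frame.
fps = 30
frame_bytes = 640 * 480 * 3            # uncompressed 480p RGB frame
keypoints = 68                         # hypothetical facial landmark count
keypoint_bytes = keypoints * 2 * 4     # (x, y) pairs as 32-bit floats

video_rate = fps * frame_bytes         # bytes/sec, naive raw video
keypoint_rate = fps * keypoint_bytes   # bytes/sec after the initial keyframe
print(f"raw video:  {video_rate / 1e6:.1f} MB/s")
print(f"keypoints:  {keypoint_rate / 1e3:.1f} kB/s")
print(f"reduction:  ~{video_rate // keypoint_rate}x")
```

Even granting modern codecs a couple orders of magnitude of compression on the video side, sending a handful of floats per frame still comes out far ahead.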

Microsoft github.com

Microsoft's deep learning approach to restoring old photos

What’s linked is the official PyTorch implementation of a paper published in April of this year called Bringing Old Photos Back to Life.

We propose to restore old photos that suffer from severe degradation through a deep learning approach. Unlike conventional restoration tasks that can be solved through supervised learning, the degradation in real photos is complex and the domain gap between synthetic images and real old photos makes the network fail to generalize. Therefore, we propose a novel triplet domain translation network by leveraging real photos along with massive synthetic image pairs. Specifically, we train two variational autoencoders (VAEs) to respectively transform old photos and clean photos into two latent spaces.

The results are impressive!
