
AI (Artificial Intelligence)

Machines simulating human characteristics and intelligence.

Uber Engineering

Uber's new GTN algorithm speeds up deep learning by 9x

Here’s a new acronym for you: Generative Teaching Networks (GTN).

GTNs are deep neural networks that generate data and/or training environments on which a learner (e.g., a freshly initialized neural network) trains before being tested on a target task (e.g., recognizing objects in images). One advantage of this approach is that GTNs can produce synthetic data that enables other neural networks to learn faster than when training on real data. That allowed us to search for new neural network architectures nine times faster than when using real data.
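
Curious how that meta-learning loop fits together? Here’s a minimal PyTorch-flavored sketch of the idea (my own toy version, not Uber’s code): a generator dreams up synthetic batches, a freshly initialized learner trains on them for a few inner steps, and the generator is then updated based on how well that learner performs on (stand-in) real data.

```python
import torch
import torch.nn.functional as F

# Generator: noise -> synthetic 784-dim "images". Labels are sampled at random here;
# the full GTN also learns a curriculum, which this sketch skips.
generator = torch.nn.Sequential(
    torch.nn.Linear(64, 256), torch.nn.ReLU(), torch.nn.Linear(256, 784)
)
gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

def inner_train(weights, bias, inner_steps=5, inner_lr=0.1):
    """Train a fresh linear learner on generated data, keeping the autograd graph."""
    for _ in range(inner_steps):
        z = torch.randn(32, 64)
        labels = torch.randint(0, 10, (32,))
        logits = generator(z) @ weights + bias        # learner forward pass on a synthetic batch
        loss = F.cross_entropy(logits, labels)
        g_w, g_b = torch.autograd.grad(loss, (weights, bias), create_graph=True)
        weights, bias = weights - inner_lr * g_w, bias - inner_lr * g_b
    return weights, bias

for step in range(100):
    w = torch.zeros(784, 10, requires_grad=True)      # freshly initialized learner
    b = torch.zeros(10, requires_grad=True)
    w, b = inner_train(w, b)
    # Stand-in for a real batch; swap in e.g. MNIST for an actual experiment.
    real_x, real_y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    meta_loss = F.cross_entropy(real_x @ w + b, real_y)  # how well did the learner learn?
    gen_opt.zero_grad()
    meta_loss.backward()                              # gradients flow back through the inner loop
    gen_opt.step()
```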

Fake data, real results? Sounds pretty slick.

Victor Zhou victorzhou.com

A gentle introduction to Visual Question Answering using neural networks

Show us humans a picture of someone in uniform on a mound of dirt throwing a ball and we will quickly tell you we’re looking at baseball. But how do you make a computer come to the same conclusion?

In this post, we’ll explore basic methods for performing VQA and build our own simple implementation in Python.
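
To give a flavor of what a simple implementation can look like, here’s a hedged Keras sketch of a bare-bones VQA model (not necessarily the post’s exact code): a tiny CNN encodes the image, a dense layer encodes a bag-of-words question vector, the two are merged, and a softmax picks from a fixed set of answers. The vocabulary and answer-set sizes are made up for illustration.

```python
from tensorflow.keras import Input, Model, layers

NUM_WORDS = 1000    # assumed vocabulary size
NUM_ANSWERS = 13    # assumed size of the fixed answer set

# Image branch: a tiny CNN
img_in = Input(shape=(64, 64, 3))
x = layers.Conv2D(8, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(32, activation="tanh")(x)

# Question branch: bag-of-words vector -> dense encoding
q_in = Input(shape=(NUM_WORDS,))
q = layers.Dense(32, activation="tanh")(q_in)

# Merge the two modalities and classify over the answer vocabulary
merged = layers.Multiply()([x, q])
hidden = layers.Dense(32, activation="relu")(merged)
out = layers.Dense(NUM_ANSWERS, activation="softmax")(hidden)

model = Model(inputs=[img_in, q_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```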

TechCrunch

Hugging Face raises $15 million to build their open source NLP library 🤗

Congrats to Clément and the Hugging Face team on this milestone!

The company first built a mobile app that let you chat with an artificial BFF, a sort of chatbot for bored teenagers. More recently, the startup released an open-source library for natural language processing applications. And that library has been massively successful.

The library in question is called Transformers, which is billed as ‘state-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch.’
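
Here’s roughly what getting started with the library looks like: load a pretrained BERT and pull contextual embeddings for a sentence. The exact API has shifted a bit across versions, so treat this as a sketch.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Download a pretrained BERT model and its matching tokenizer from the model hub.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hugging Face just raised $15 million.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

last_hidden_state = outputs[0]   # shape: (batch, tokens, hidden_size)
print(last_hidden_state.shape)
```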

If any of this rings a bell, it may be because Practical AI co-host Daniel Whitenack has been a huge supporter of Hugging Face for a long time and mentions them often on the show. We even had Clément on the show back in March of this year.

TensorFlow github.com

NVIDIA's StyleGAN2 TensorFlow implementation

The style-based GAN architecture produces impressive image generation results, but it’s not without its limitations. NVIDIA’s research team has been hard at work fixing some of StyleGAN’s problems, most notably its image artifacts.

In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably detect if an image is generated by a particular network.
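
For the curious, here’s a hedged PyTorch sketch of the path length regularization idea the paper describes (this is not NVIDIA’s TensorFlow code): encourage a step in latent space to produce an image change of roughly constant magnitude, with a running average standing in for the target constant.

```python
import math
import torch

def path_length_penalty(generator, w, mean_path_length, decay=0.01):
    """`generator` maps latents w (shape [N, latent_dim], requires_grad=True) to images [N, C, H, W]."""
    images = generator(w)
    # Random projection of the images, scaled by image size so magnitudes are comparable.
    noise = torch.randn_like(images) / math.sqrt(images.shape[2] * images.shape[3])
    # Jacobian-vector product: how much the projected image moves per unit step in w.
    grad, = torch.autograd.grad((images * noise).sum(), w, create_graph=True)
    path_lengths = grad.pow(2).sum(dim=1).sqrt()
    # Exponential moving average stands in for the target constant.
    mean_path_length = mean_path_length + decay * (path_lengths.mean().item() - mean_path_length)
    penalty = (path_lengths - mean_path_length).pow(2).mean()
    return penalty, mean_path_length  # add `penalty` (weighted) to the generator loss
```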

Check out the video of StyleGAN2 in action or, if you’re feeling brazen, dive right into their paper.

Learn github.com

A booklet on machine learning systems design with exercises

This booklet covers four main steps of designing a machine learning system:

  1. Project setup
  2. Data pipeline
  3. Modeling: selecting, training, and debugging
  4. Serving: testing, deploying, and maintaining

It comes with links to practical resources that explain each aspect in more detail, and it points to case studies written by machine learning engineers at major tech companies who have deployed machine learning systems to solve real-world problems.
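
To make the outline concrete, here’s a toy end-to-end illustration of those four steps in miniature (my own sketch, not material from the booklet).

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# 1. Project setup: define the task (classify iris species) and a metric (accuracy).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 2. Data pipeline + 3. Modeling: preprocessing and the model live in one Pipeline,
#    so training and serving share identical transforms.
model = Pipeline([("scale", StandardScaler()),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# 4. Serving: persist the artifact; a web service would load it and call predict().
joblib.dump(model, "model.joblib")
```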

AI (Artificial Intelligence) github.com

Meet the new AI that knows you better than you know yourself

Winner of Mozilla’s $50,000 award for art and advocacy exploring AI.

Stealing Ur Feelings is an augmented reality experience that reveals how your favorite apps can use facial emotion recognition technology to make decisions about your life, promote inequalities, and even destabilize American democracy. Using the AI techniques described in corporate patents, Stealing Ur Feelings learns your deepest secrets just by analyzing your face.

If you haven’t tried this yet, drop what you’re doing and give it a go. Top-notch production.

The Verge

California has banned political deepfakes during election season

Colin Lecher reporting for The Verge:

Last week, Gov. Gavin Newsom signed into law AB 730, which makes it a crime to distribute audio or video that gives a false, damaging impression of a politician’s words or actions.

While the word “deepfake” doesn’t appear in the legislation, the bill clearly takes aim at doctored works. Lawmakers have raised concerns recently that distorted deepfake videos, like a slowed video of House Speaker Nancy Pelosi that appeared over the summer, could be used to influence elections in the future.

This is the first (but likely not the last) piece of legislation aimed at fighting the potential impact of GANs Gone Wild.

It’ll be interesting to watch this game play out. I think the only long-term, sustainable solution will emerge from the same arena where the problem began: technological advances.

TensorFlow github.com

TensorFlow 2.0 focuses on simplicity and ease of use

Folks have been talking about TensorFlow 2 for quite a while (see Practical AI #42 for one excellent example), but now it’s finally here. The bulleted list:

  • Easy model building with Keras and eager execution.
  • Robust model deployment in production on any platform.
  • Powerful experimentation for research.
  • API simplification by reducing duplication and removing deprecated endpoints.

This is a huge release. Check out the highlights list in the changelog to see for yourself.
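
Here’s what the first bullet looks like in practice: a minimal Keras model built and trained with eager execution on by default (toy data, purely illustrative).

```python
import numpy as np
import tensorflow as tf

print(tf.executing_eagerly())  # True by default in TensorFlow 2.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy data: label is 1 when the features sum past a threshold.
x = np.random.rand(256, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))
```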

OpenAI

Microsoft is investing $1 billion in OpenAI

Straight from the horse’s mouth:

We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI. We’ll jointly develop new Azure AI supercomputing technologies, and Microsoft will become our exclusive cloud provider—so we’ll be working hard together to further extend Microsoft Azure’s capabilities in large-scale AI systems.

Sometimes it’s hard to see the value traded in large-scale investments like these. What do both sides get? With this particular investment, however, it’s pretty obvious what Microsoft is getting (Azure++) and what OpenAI is getting (an expanded R&D budget). It’s also worth noting that this is specifically focused on Artificial General Intelligence, not merely advancing the current state of the art in Machine Learning.

Mozilla

Mozilla has published their 2019 Internet Health Report

The report focuses on 5 questions about the internet.

  • Is it safe?
  • How open is it?
  • Who is welcome?
  • Who can succeed?
  • Who controls it?

The answer is complicated, and the report doesn’t draw firm conclusions so much as share research and stories about each topic. It includes some fascinating looks at what’s going on in AI, inclusive design, open source, decentralization, and more.

NVIDIA Developer Blog

NVIDIA Jetson Nano - A $99 computer for embedded AI

Google, Intel, and others have recently been targeting AI at the edge with things like Coral and the Neural Compute Stick, but NVIDIA is taking things a step further. They just announced the Jetson Nano, which is a $99 computer with 472 GFLOPS of compute performance, an integrated NVIDIA GPU, and a Raspberry Pi form factor. According to NVIDIA:

The compute performance, compact footprint, and flexibility of Jetson Nano brings endless possibilities to developers for creating AI-powered devices and embedded systems.

And it’s not only for inference (which is the main target of things like Intel’s NCS). The Jetson Nano can also handle AI model training:

since Jetson Nano can run the full training frameworks like TensorFlow, PyTorch, and Caffe, it’s also able to re-train with transfer learning for those who may not have access to another dedicated training machine and are willing to wait longer for results.
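
For a sense of what that kind of re-training involves, here’s a generic Keras transfer learning sketch (not NVIDIA’s Jetson tutorial code): freeze a pretrained MobileNetV2 backbone and train a small new head on your own classes. The five-class head and the `train_ds` dataset are placeholders.

```python
import tensorflow as tf

# Pretrained backbone without its classification head; keep its weights frozen.
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                          include_top=False, weights="imagenet")
base.trainable = False  # only the new head gets trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # assumed: 5 target classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# train_ds would be your own tf.data.Dataset of (image, label) pairs, e.g.:
# model.fit(train_ds, epochs=5)
```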

Check it out! You can pre-order now.

The Allen Institute for AI

China to overtake US in AI research

China has committed to becoming the world leader in AI by 2030, with goals to build a domestic artificial intelligence industry worth nearly $150 billion (according to this CNN article). Prompted by these efforts, the Semantic Scholar team at the Allen AI Institute analyzed over two million academic AI papers published through the end of 2018. This analysis revealed the following:

Our analysis shows that China has already surpassed the US in published AI papers. If current trends continue, China is poised to overtake the US in the most-cited 50% of papers this year, in the most-cited 10% of papers next year, and in the 1% of most-cited papers by 2025. Citation counts are a lagging indicator of impact, so our results may understate the rising impact of AI research originating in China.

They also emphasize that US actions are making it difficult to recruit and retain foreign students and scholars, and these difficulties are likely to exacerbate the trend towards Chinese supremacy in AI research.

OpenAI

OpenAI creates a "capped-profit" to help build artificial general intelligence

OpenAI, one of the largest and most influential AI research entities, was originally a non-profit. However, they just announced that they are creating a “capped-profit” entity, OpenAI LP. This capped-profit entity will supposedly help them accomplish their mission of building artificial general intelligence (AGI):

We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit—which we are calling a “capped-profit” company.

The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity. But any returns beyond that amount—and if we are successful, we expect to generate orders of magnitude more value than we’d owe to people who invest in or work at OpenAI LP—are owned by the original OpenAI Nonprofit entity.

To some, this makes total sense. Others have criticized the move, arguing that it misrepresents money as the only barrier to AGI or implies that OpenAI will develop it in a vacuum. What do you think?

Learn more about OpenAI’s mission from one of its founders in this episode of Practical AI.

Casey Newton The Verge

The secret lives of Facebook moderators in America

Eventually, artificial intelligence will take over Facebook’s human-powered content moderation jobs. Until then, a small population of humans employed by Cognizant (on behalf of Facebook) in Phoenix, Arizona accepts the job of subjecting themselves to the worst of humankind to provide “a better Facebook experience.”

Casey Newton writes for The Verge:

The video depicts a man being murdered. Someone is stabbing him, dozens of times, while he screams and begs for his life. Chloe’s job is to tell the room whether this post should be removed. She knows that section 13 of the Facebook community standards prohibits videos that depict the murder of one or more people. When Chloe explains this to the class, she hears her voice shaking.

Returning to her seat, Chloe feels an overpowering urge to sob. Another trainee has gone up to review the next post, but Chloe cannot concentrate. She leaves the room, and begins to cry so hard that she has trouble breathing.

No one tries to comfort her. This is the job she was hired to do…

AI (Artificial Intelligence) towardsdatascience.com

A response to OpenAI's new dangerous text generator

Those of you following AI related things on Twitter have probably been overwhelmed with commentary about OpenAI’s new GPT-2 language model, which is “Too Dangerous to Make Public” (according to Wired’s interpretation of OpenAI’s statements). Is this discussion frustrating or confusing for you?

Well, Ryan Lowe from McGill University has published a nice response article. He discusses the model and results in general, but also gives some perspective on the ethical implications and where the AI community should go from here. According to Lowe:

The machine learning community really, really needs to start talking openly about our standards for ethical research release

NVIDIA Developer Blog

NVIDIA's PhysX project goes open source and beyond gaming

PhysX is NVIDIA’s hardware-accelerated physics simulation engine, now released as open source to take it beyond its most common use case in gaming and open it up to embedded and scientific fields: think AI, robotics, computer vision, and self-driving cars.

PhysX SDK has gone open source, starting today with version 3.4! It is available under the simple 3-Clause BSD license. With access to the source code, developers can debug, customize and extend the PhysX SDK as they see fit.
