AI (Artificial Intelligence)

Machines simulating human characteristics and intelligence.
297 Stories

Python github.com

Imagen (Google's text-to-image neural net) implemented in Pytorch

Last week I logged the very impressive Imagen project, which smarter people than me have said is the SOTA for text-to-image synthesis. Now a WIP implementation is just a pip install imagen-pytorch away.

Architecturally, it is actually much simpler than DALL-E 2. It consists of a cascading DDPM conditioned on text embeddings from a large pretrained T5 model (attention network). It also contains dynamic clipping for improved classifier-free guidance, noise level conditioning, and a memory-efficient U-Net design.
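
For flavor, here is roughly what training looks like with the library, based on its README at the time. Treat it as a sketch: the project is a WIP, so names like Unet, Imagen, text_embeds, and unet_number may shift between releases.

    import torch
    from imagen_pytorch import Unet, Imagen  # pip install imagen-pytorch

    # Base 64x64 U-Net plus a 256x256 super-resolution U-Net form the cascade
    unet1 = Unet(dim=32, cond_dim=512, dim_mults=(1, 2, 4, 8))
    unet2 = Unet(dim=32, cond_dim=512, dim_mults=(1, 2, 4, 8))

    # Cascading DDPM conditioned on text embeddings from a frozen T5 encoder
    imagen = Imagen(unets=(unet1, unet2), image_sizes=(64, 256),
                    timesteps=1000, cond_drop_prob=0.1)

    # Mock batch: images plus precomputed text embeddings
    # (embedding dim depends on which T5 variant you encode with)
    images = torch.randn(4, 3, 256, 256)
    text_embeds = torch.randn(4, 256, 768)

    # Each U-Net in the cascade is trained on its own objective
    for unet_number in (1, 2):
        loss = imagen(images, text_embeds=text_embeds, unet_number=unet_number)
        loss.backward()

After enough training steps, the README exposes a sampling method that turns text embeddings into an image by running the cascade from low to high resolution.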

Google

A text-to-image diffusion model with an unprecedented degree of photorealism

Google researchers are giving DALL-E a run for its money:

Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model.
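The text encoder is the part anyone can poke at today. As a rough illustration (not Google's code, and using a small checkpoint in place of the frozen T5-XXL the paper describes), pulling per-token embeddings out of a text-only T5 encoder looks like this with Hugging Face Transformers:

    import torch
    from transformers import AutoTokenizer, T5EncoderModel

    # A small T5 stands in for the frozen T5-XXL encoder used in the paper
    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    encoder = T5EncoderModel.from_pretrained("t5-small").eval()

    tokens = tokenizer("a corgi riding a skateboard in times square",
                       return_tensors="pt")
    with torch.no_grad():
        text_embeds = encoder(**tokens).last_hidden_state  # (1, seq_len, d_model)

    # In Imagen, embeddings like these condition the cascaded diffusion
    # model; the text encoder itself is never fine-tuned.

The quoted finding is that scaling this encoder helps sample quality more than scaling the diffusion model it feeds.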

Practical AI #178

Active learning & endangered languages

Don’t all AI methods need a bunch of data to work? How could AI help document and revitalize endangered languages with “human-in-the-loop” or “active learning” methods? Sarah Moeller from the University of Florida joins us to discuss those and other related questions. She also shares many of her personal experiences working with languages in low resource settings.
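
If "active learning" is new to you, the core loop is small enough to sketch. This toy example (mine, not from the episode) uses uncertainty sampling: train on the few labels you have, then route the examples the model is least sure about to the human annotator.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_pool = rng.normal(size=(500, 8))                        # unlabeled pool
    y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)    # oracle stands in for the human

    labeled = list(range(20))                                 # tiny seed set of labels
    for _ in range(5):
        model = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
        probs = model.predict_proba(X_pool)
        margin = np.abs(probs[:, 0] - probs[:, 1])            # small margin = uncertain
        ask = np.argsort(margin)[:10]                         # queries for the annotator
        labeled.extend(int(i) for i in ask if i not in labeled)

In a low-resource language setting the pool might be unglossed sentences and the annotator a field linguist, so spending their time only on the most informative examples is the whole point.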

Practical AI #176

MLOps is NOT Real

We all hear a lot about MLOps these days, but where does MLOps end and DevOps begin? Our friend Luis from OctoML joins us in this episode to discuss treating AI/ML models as regular software components (once they are trained and ready for deployment). We get into topics including optimization on various kinds of hardware and deployment of models at the edge.
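
One concrete way the "models as regular software components" idea shows up in practice is exporting a trained model to a portable format and treating the file like any other build artifact. A minimal sketch (my example, not something prescribed in the episode) using PyTorch's ONNX export:

    import torch
    import torch.nn as nn

    # Any trained model will do; a tiny classifier stands in here
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).eval()
    example_input = torch.randn(1, 16)

    torch.onnx.export(
        model, example_input, "model.onnx",
        input_names=["features"], output_names=["logits"],
        dynamic_axes={"features": {0: "batch"}},
    )
    # model.onnx can now be versioned, tested in CI, optimized per hardware
    # target, and deployed to servers or edge devices like any other binary.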

Practical AI #171

Clothing AI in a data fabric

What happens when your data operations grow to Internet-scale? How do thousands or millions of data producers and consumers efficiently, effectively, and productively interact with each other? How are varying formats, protocols, security levels, performance criteria, and use-case specific characteristics meshed into one unified data fabric? Chris and Daniel explore these questions in this illuminating and Fully-Connected discussion that brings this new data technology into the light.

AI (Artificial Intelligence) nautil.us

What would it take for artificial intelligence to make real progress?

Gary Marcus makes the case that deep learning has hit a wall:

“Let me start by saying a few things that seem obvious,” Geoffrey Hinton, “Godfather” of deep learning, and one of the most celebrated scientists of our time, told a leading AI conference in Toronto in 2016. “If you work as a radiologist you’re like the coyote that’s already over the edge of the cliff but hasn’t looked down.” Deep learning is so well-suited to reading images from MRIs and CT scans, he reasoned, that people should “stop training radiologists now” and that it’s “just completely obvious within five years deep learning is going to do better.”

Fast forward to 2022, and not a single radiologist has been replaced.

But Marcus doesn’t stop there. After laying out multiple examples of deep learning failures, he changes tone:

For the first time in 40 years, I finally feel some optimism about AI.

Read the article to find out why that is.

Practical AI #166

Exploring deep reinforcement learning

In addition to being a Developer Advocate at Hugging Face, Thomas Simonini is building next-gen AI in games that can talk and have smart interactions with the player using Deep Reinforcement Learning (DRL) and Natural Language Processing (NLP). He also created a Deep Reinforcement Learning course that takes a DRL beginner from zero to hero. Natalie and Chris explore what’s involved, and what the implications are, with a focus on the development path of the new AI data scientist.

Go Time #213

AI-driven development in Go

Alexey Palazhchenko joins Natalie to discuss the implications of GitHub’s Copilot for code generation. Go’s design lends itself nicely to computer-generated authoring: thanks to go fmt, there’s already only one Go style, which means AI-generated code will be consistent and seamless. Go’s focus on simplicity & readability makes it tailor-made for this new approach to software creation. Where might this take us?

Chip Huyen huyenchip.com

Real-time machine learning: challenges and solutions

Chip Huyen:

In the last year, I’ve talked to ~30 companies in different industries about their challenges with real-time machine learning. I’ve also worked with quite a few to find the solutions. This post outlines the solutions for (1) online prediction and (2) continual learning, with step-by-step use cases, considerations, and technologies required for each level.
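
To make the two halves concrete, here is a deliberately tiny sketch (mine, not from the post) of online prediction plus continual learning: serve predictions from the current model with low latency, then fold newly labeled events back in with incremental updates rather than full retraining.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(1)
    model = SGDClassifier()

    # Bootstrap with an offline batch, then switch to incremental updates
    X_init = rng.normal(size=(200, 4))
    y_init = (X_init.sum(axis=1) > 0).astype(int)
    model.partial_fit(X_init, y_init, classes=np.array([0, 1]))

    def handle_event(features, label=None):
        pred = model.predict(features.reshape(1, -1))[0]          # online prediction
        if label is not None:                                     # feedback arrives later
            model.partial_fit(features.reshape(1, -1), np.array([label]))
        return pred

    handle_event(rng.normal(size=4), label=1)

Real systems layer on streaming feature pipelines, monitoring, and safe rollout of updated models; the post covers what is required at each level.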

The Changelog #472

AI-assisted development is here to stay

We’re joined by Eran Yahav to talk about AI assistants for developers. Eran has been working on this problem for more than a decade. We discuss his path to now, how the idea for Tabnine came to life, the AI revolution taking place and the role it will play in developer productivity, and the elephant in the room: how Tabnine compares to GitHub Copilot, and what they’re doing to make Tabnine the AI assistant for every developer, regardless of the IDE or editor you choose.