Large language models (LLMs)

A language model is a probability distribution over sequences of words. Given any sequence of words of length m, a language model assigns a probability P(w₁, …, wₘ) to the whole sequence. Language models generate these probabilities by training on text corpora in one or many languages. Whew!
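To make that concrete, here is a minimal sketch (ours, not from any episode) of the simplest kind of language model, a bigram model, assigning a probability to a sequence via the chain rule. The toy corpus is hypothetical; real models train on vastly larger data.

```python
from collections import Counter

# Toy corpus; real language models train on vastly larger text corpora.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigrams = Counter(corpus)                  # word counts
bigrams = Counter(zip(corpus, corpus[1:])) # adjacent-pair counts

def sequence_probability(words: list[str]) -> float:
    """Chain rule: P(w1..wm) = P(w1) * product of P(wi | w_{i-1})."""
    p = unigrams[words[0]] / len(corpus)
    for prev, cur in zip(words, words[1:]):
        p *= bigrams[(prev, cur)] / unigrams[prev]
    return p

print(sequence_probability("the cat sat".split()))  # plausible sequence: p > 0
print(sequence_probability("the mat sat".split()))  # unseen bigram: p == 0.0
```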
37 episodes

JS Party #331

Building LLM agents in JS

2024-07-18 · #javascript +2 · 🎧 8,860

KBall and returning guest Tejas Kumar dive into the topic of building LLM agents using JavaScript: what they are, how they can be useful (including how Tejas used home-built agents to double his podcasting productivity) & how to get started building and running your own agents, even all on your own device with local models.
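The episode works in JavaScript; purely as an illustration of the same loop, here is a Python sketch. It assumes a local Ollama server at its default port and a pulled model (both assumptions), plus a hypothetical word_count tool.

```python
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumes a local Ollama server

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """One completion from a locally hosted model (model name is an assumption)."""
    resp = requests.post(
        OLLAMA_URL, json={"model": model, "prompt": prompt, "stream": False}
    )
    resp.raise_for_status()
    return resp.json()["response"]

def word_count(text: str) -> str:
    return str(len(text.split()))        # a hypothetical tool the agent can call

TOOLS = {"word_count": word_count}

def run_agent(task: str, max_steps: int = 3) -> str:
    """Minimal agent loop: the model either calls a tool or answers directly."""
    transcript = task
    reply = ""
    for _ in range(max_steps):
        reply = ask_local_model(
            "Answer directly, or call a tool by replying with JSON like "
            '{"tool": "word_count", "input": "..."}\n' + transcript
        )
        try:
            call = json.loads(reply)
            result = TOOLS[call["tool"]](call["input"])
            transcript += f"\nTool {call['tool']} returned: {result}"
        except (ValueError, KeyError):
            return reply                 # plain-text answer, so we are done
    return reply

print(run_agent("How many words are in 'building agents is fun'?"))
```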

Practical AI #277

Vectoring in on Pinecone

2024-07-10 · #ai +2 · 🎧 21,296

Daniel & Chris explore the advantages of vector databases with Roie Schwaber-Cohen of Pinecone. Roie starts with a very lucid explanation of why you need a vector database in your machine learning pipeline, and then goes on to discuss Pinecone’s vector database, designed to facilitate efficient storage, retrieval, and management of vector data.
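For a feel of what a vector database does under the hood, here is a minimal in-memory sketch (ours, not Pinecone's API): normalize vectors on upsert, then rank by cosine similarity on query. The 3-d embeddings are hypothetical stand-ins for real embedding-model output.

```python
import numpy as np

# Toy store: id -> embedding. A real vector DB adds indexing, persistence,
# and metadata filtering on top of this basic upsert/query interface.
store: dict[str, np.ndarray] = {}

def upsert(doc_id: str, vector: np.ndarray) -> None:
    store[doc_id] = vector / np.linalg.norm(vector)  # normalize for cosine similarity

def query(vector: np.ndarray, top_k: int = 2) -> list[tuple[str, float]]:
    """Return the top_k most similar stored vectors by cosine similarity."""
    q = vector / np.linalg.norm(vector)
    scores = [(doc_id, float(q @ v)) for doc_id, v in store.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

# Hypothetical 3-d embeddings; real ones come from an embedding model.
upsert("doc-a", np.array([0.9, 0.1, 0.0]))
upsert("doc-b", np.array([0.0, 1.0, 0.2]))
upsert("doc-c", np.array([0.8, 0.2, 0.1]))
print(query(np.array([1.0, 0.0, 0.0])))  # doc-a and doc-c rank highest
```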

Practical AI #274

The perplexities of information retrieval

2024-06-19 · #ai +2 · 🎧 24,820

Daniel & Chris sit down with Denis Yarats, Co-founder & CTO at Perplexity, to discuss Perplexity’s sophisticated AI-driven answer engine. Denis outlines some of the deficiencies in search engines, and how Perplexity’s approach to information retrieval improves on traditional search engine systems, with a focus on accuracy and validation of the information provided.
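The summary doesn't expose Perplexity's internals, but the general retrieve-then-cite pattern of an answer engine is easy to sketch; every function below is a hypothetical stand-in.

```python
# Sketch of the retrieve-then-answer-with-citations pattern of an answer engine.
def search(query: str) -> list[dict]:
    """Stand-in for a web-index query returning ranked sources."""
    return [
        {"id": 1, "url": "https://example.com/a", "text": "Fact A about the topic."},
        {"id": 2, "url": "https://example.com/b", "text": "Fact B about the topic."},
    ]

def build_prompt(query: str, sources: list[dict]) -> str:
    """Ask the model to answer only from the sources, citing each claim as [n]."""
    numbered = "\n".join(f"[{s['id']}] {s['text']}" for s in sources)
    return (
        "Answer using only the sources below and cite each claim as [n].\n"
        f"Sources:\n{numbered}\n\nQuestion: {query}"
    )

def model(prompt: str) -> str:
    return "Fact A [1], supported by Fact B [2]."  # stand-in for a model call

query = "What is known about the topic?"
print(model(build_prompt(query, search(query))))
# The [n] citations let the system link each claim back to a source URL
# for the accuracy and validation the episode focuses on.
```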

Practical AI #272

Rise of the AI PC & local LLMs

2024-06-04 · #ai +2 · 🎧 32,507

We’ve seen rising interest recently, and a number of major announcements, related to local LLMs and AI PCs. NVIDIA, Apple, and Intel are all entering the space, alongside small models like Microsoft’s Phi family. In this episode, we dig into local AI tooling, frameworks, and optimizations to help you navigate this AI niche, and we talk about how this might impact AI adoption in the longer term.
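As one hedged example of local tooling, this sketch runs a quantized model fully on-device with the llama-cpp-python package; the model path is an assumption, so point it at any GGUF file you have downloaded.

```python
# Runs fully on-device via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./phi-3-mini.Q4_K_M.gguf",  # hypothetical path to a local GGUF file
    n_ctx=2048,    # context window
    n_threads=8,   # tune for your CPU; quantization is the main optimization here
)

out = llm("Explain what an AI PC is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```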

Practical AI #271

AI in the U.S. Congress

2024-05-29 · #ai +2 · 🎧 24,230

At the age of 72, U.S. Representative Don Beyer of Virginia enrolled at George Mason University (GMU) to pursue a Master’s degree in computer science with a concentration in machine learning.

Rep. Beyer is Vice Chair of the bipartisan Artificial Intelligence Caucus & Vice Chair of the NDC’s AI Working Group. He is the author of the AI Foundation Model Transparency Act & a lead cosponsor of the CREATE AI Act, the Federal Artificial Intelligence Risk Management Act & the Artificial Intelligence Environmental Impacts Act.

We hope you tune into this inspiring, nonpartisan conversation with Rep. Beyer about his decision to dive into the deep end of the AI pool & his leadership in bringing that expertise to Capitol Hill.

Practical AI #267

Private, open source chat UIs

2024-04-30 · #ai +2 · 🎧 25,812

We recently gathered some Practical AI listeners for a live webinar with Danny from LibreChat to discuss the future of private, open source chat UIs. During the discussion we hear about the motivations behind LibreChat, why enterprise users are hosting their own chat UIs, and how Danny (and the LibreChat community) is creating amazing features (like RAG and plugins).
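As a rough sketch of the RAG flow a chat UI wires up (retrieve, augment, generate), here is a toy version with a keyword-overlap retriever and a stand-in model; real systems rank by embedding similarity over a vector index.

```python
# Toy RAG pipeline: retrieve a relevant doc, prepend it to the prompt, generate.
docs = [
    "LibreChat supports plugins and multiple model backends.",
    "RAG grounds model answers in your own documents.",
]

def retrieve(question: str) -> str:
    """Toy retriever: pick the doc sharing the most words with the question."""
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def chat_model(prompt: str) -> str:
    return f"(answer grounded in: {prompt.splitlines()[0]})"  # stand-in model

def answer(question: str) -> str:
    context = retrieve(question)                          # retrieve
    prompt = f"Context: {context}\nQuestion: {question}"  # augment
    return chat_model(prompt)                             # generate

print(answer("How does RAG ground answers?"))
```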

Practical AI #266

Mamba & Jamba

2024-04-24 · #ai +1 · 🎧 23,493

First there was Mamba… now there is Jamba from AI21. This is a model that combines the best non-transformer goodness of Mamba with good ol’ attention layers. The result is a highly performant and efficient model that AI21 has open sourced! We hear all about it (along with a variety of other LLM things) from AI21’s co-founder Yoav.
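The hybrid idea is concrete enough to sketch. Below, a stub stands in for a Mamba block (a real one uses selective state-space scans, not the causal convolution used here) just to show how attention layers can be interleaved with cheaper sequence-mixing layers; all dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class SSMStub(nn.Module):
    """Stand-in for a Mamba block: cheap sequence mixing, no attention."""
    def __init__(self, dim: int):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=4, padding=3, groups=dim)
    def forward(self, x):                                   # x: (batch, seq, dim)
        y = self.conv(x.transpose(1, 2))[..., : x.size(1)]  # causal trim
        return x + y.transpose(1, 2)                        # residual

class AttnBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
    def forward(self, x):
        y, _ = self.attn(x, x, x)
        return x + y                                        # residual

# Jamba-like stacking: mostly SSM-style layers, attention every few layers.
dim = 64
layers = nn.Sequential(*[
    AttnBlock(dim) if i % 4 == 3 else SSMStub(dim) for i in range(8)
])
print(layers(torch.randn(2, 16, dim)).shape)  # torch.Size([2, 16, 64])
```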

Practical AI #263

Should kids still learn to code?

2024-04-02 · #ai +3 · 🎧 25,877

In this fully connected episode, Daniel & Chris discuss NVIDIA GTC keynote comments from CEO Jensen Huang about teaching kids to code. Then they dive into the notion of “community” in the AI world, before discussing challenges in the adoption of generative AI by non-technical people. They finish by addressing the evolving balance between generative AI interfaces and search engines.

Practical AI #261

Prompting the future

2024-03-20 · #ai +2 · 🎧 30,262

Daniel & Chris explore the state of the art in prompt engineering with Jared Zoneraich, the founder of PromptLayer. PromptLayer is the first platform built specifically for prompt engineering. It can visually manage prompts, evaluate models, log LLM requests, search usage history, and help your organization collaborate as a team. Jared provides expert guidance on how to implement prompt engineering, but also illustrates how we got here and where we’re likely to go next.
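This sketch shows the general shape of that workflow, not PromptLayer's actual API: versioned prompt templates plus an append-only log of every request, so usage history is searchable. All names are hypothetical.

```python
import json
import time

PROMPTS = {  # versioned prompt templates (hypothetical)
    ("summarize", 1): "Summarize this: {text}",
    ("summarize", 2): "Summarize this in one sentence: {text}",
}

def logged_llm_call(llm, prompt_name: str, version: int, **variables) -> str:
    """Render a versioned template, call the model, and log the request."""
    prompt = PROMPTS[(prompt_name, version)].format(**variables)
    start = time.time()
    response = llm(prompt)          # `llm` is any callable wrapping a model API
    record = {
        "prompt_name": prompt_name, "version": version,
        "prompt": prompt, "response": response,
        "latency_s": round(time.time() - start, 3),
    }
    with open("llm_log.jsonl", "a") as f:   # append-only, searchable request log
        f.write(json.dumps(record) + "\n")
    return response

fake_llm = lambda p: "A short summary."     # stand-in model
print(logged_llm_call(fake_llm, "summarize", 2, text="long article..."))
```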

Practical AI #260

Generating the future of art & entertainment

2024-03-12 · #ai +3 · 🎧 24,832

Runway is an applied AI research company shaping the next era of art, entertainment & human creativity. Chris sat down with Runway co-founder and CTO Anastasis Germanidis to discuss the company’s rise and how it’s defining the future of the creative landscape with its text-to-video and image-to-video models. We hope you find Anastasis’s founder story as inspiring as Chris did.

Practical AI #256

Gemini vs OpenAI

2024-02-14 · #ai +2 · 🎧 29,877

Google has been releasing a ton of new GenAI functionality under the name “Gemini”, and they’ve officially rebranded Bard as Gemini. We take some time to talk through Gemini compared with offerings from OpenAI, Anthropic, Cohere, etc.

We also discuss the recent FCC decision to ban the use of AI voices in robocalls and what the decision might mean for government involvement in AI in 2024.

Practical AI #255

Data synthesis for SOTA LLMs

2024-02-06 · #ai +1 · 🎧 24,612

Nous Research has been pumping out some of the best open access LLMs using SOTA data synthesis techniques. Their Hermes family of models is incredibly popular! In this episode, Karan from Nous talks about the origins of Nous as a distributed collective of LLM researchers. We also get into fine-tuning strategies and why data synthesis works so well.
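As a hedged illustration of the general shape of data synthesis (not Nous Research's actual pipeline), this sketch uses a stand-in "teacher" model to generate instruction/response pairs as JSONL for fine-tuning.

```python
import json

def teacher_model(prompt: str) -> str:
    return "1. Explain RAG.\n2. Compare CPUs and GPUs."  # stand-in for an API call

def synthesize(seed_topics: list[str], out_path: str = "synthetic.jsonl") -> None:
    """Generate instruction/response pairs from seed topics with a teacher model."""
    with open(out_path, "w") as f:
        for topic in seed_topics:
            instructions = teacher_model(
                f"Write two training instructions about {topic}."
            ).splitlines()
            for inst in instructions:
                inst = inst.lstrip("0123456789. ")   # drop list numbering
                if not inst:
                    continue
                answer = teacher_model(f"Answer concisely: {inst}")
                f.write(json.dumps({"instruction": inst, "output": answer}) + "\n")

synthesize(["retrieval", "hardware"])  # pairs are then filtered and used to fine-tune
```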

Practical AI #254

Large Action Models (LAMs) & Rabbits 🐇

2024-01-30 · #ai +2 · 🎧 27,284

Recently the release of the rabbit r1 device resulted in huge interest in both the device and “Large Action Models” (or LAMs). What is an LAM? Is this something new? Did these models come out of nowhere, or are they related to other things we are already using? Chris and Daniel dig into LAMs in this episode and discuss neuro-symbolic AI, AI tool usage, multimodal models, and more.
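To ground the discussion, here is a hypothetical sketch of the symbolic half of a LAM-style system: the model proposes an action as JSON, and a dispatcher validates it against a registry before executing it (the neuro-symbolic handoff). Every name here is an assumption.

```python
import json

def order_ride(destination: str) -> str:
    return f"Ride booked to {destination}."      # stand-in effect

def play_music(artist: str) -> str:
    return f"Playing {artist}."                  # stand-in effect

ACTIONS = {"order_ride": order_ride, "play_music": play_music}

def execute(model_output: str) -> str:
    """Parse the model's proposed action and run it only if it's registered."""
    try:
        call = json.loads(model_output)
        return ACTIONS[call["action"]](**call["args"])
    except (ValueError, KeyError, TypeError):
        return "Rejected: not a known action."

# A LAM would emit something like this from a natural-language request:
print(execute('{"action": "play_music", "args": {"artist": "Example Band"}}'))
```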

Practical AI #253

Collaboration & evaluation for LLM apps

2024-01-23 · #ai +1 · 🎧 27,171

Small changes in prompts can create large changes in the output behavior of generative AI models. Add to that the uncertainty around proper evaluation of LLM applications, and you have a recipe for confusion and frustration. Raza and the Humanloop team have been diving into these problems, and, in this episode, Raza helps us understand how non-technical prompt engineers can productively collaborate with technical software engineers while building AI-driven apps.
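One way to tame that sensitivity, sketched below under our own assumptions (this is not Humanloop's API), is a tiny evaluation harness that scores each prompt variant against a shared test set, so prompt changes are measured rather than eyeballed.

```python
def model(prompt: str) -> str:
    """Stand-in for a real model call."""
    return "positive" if "great" in prompt else "negative"

variants = {  # hypothetical prompt variants under evaluation
    "v1": "Classify the sentiment of: {text}",
    "v2": "Is this review positive or negative? {text}",
}
test_set = [("This was great!", "positive"), ("Terrible product.", "negative")]

for name, template in variants.items():
    correct = sum(
        model(template.format(text=text)) == label for text, label in test_set
    )
    print(f"{name}: {correct}/{len(test_set)} correct")
```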

Practical AI #238

Fine-tuning vs RAG

2023-09-06 · #ai +1 · 🎧 38,229

In this episode we welcome back our good friend Demetrios from the MLOps Community to discuss fine-tuning vs. retrieval augmented generation. Along the way, we also chat about OpenAI Enterprise, results from the MLOps Community LLM survey, and the orchestration and evaluation of generative AI workloads.

Practical AI #237

Automating code optimization with LLMs

2023-08-29 · #ai +1 · 🎧 33,318

You might have heard a lot about code generation tools using AI, but could LLMs and generative AI make our existing code better? In this episode, we sit down with Mike from TurinTech to hear about practical code optimization using AI “translation” of slow code into fast code. We learn about their process for accomplishing this, along with the impressive results when automated code optimization is run on existing open source projects.
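As a generic sketch of the idea (not TurinTech's pipeline), the snippet below asks a stand-in model for a faster rewrite, then keeps it only if it matches the original's behavior and wins a timing benchmark.

```python
import timeit

slow_src = (
    "def dedupe(xs):\n"
    "    out = []\n"
    "    for x in xs:\n"
    "        if x not in out:   # O(n^2): membership test on a growing list\n"
    "            out.append(x)\n"
    "    return out"
)

def llm_rewrite(source: str) -> str:
    """Stand-in for a model call returning an order-preserving O(n) version."""
    return "def dedupe(xs):\n    return list(dict.fromkeys(xs))"

def load(source: str):
    ns: dict = {}
    exec(source, ns)                     # compile candidate code into a namespace
    return ns["dedupe"]

old_f, new_f = load(slow_src), load(llm_rewrite(slow_src))
sample = list(range(2000)) * 2
assert old_f(sample) == new_f(sample)    # behavior must be preserved
t_old = timeit.timeit(lambda: old_f(sample), number=10)
t_new = timeit.timeit(lambda: new_f(sample), number=10)
print(f"speedup: {t_old / t_new:.1f}x")  # accept the rewrite only if > 1
```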
