Large language models (LLMs)

A language model is a probability distribution over sequences of words. Given any sequence of words of length m, a language model assigns a probability P to the whole sequence. These probabilities are learned by training on text corpora in one or more languages. Whew!
21 episodes

Practical AI #256

Gemini vs OpenAI

2024-02-14 · #ai +2 · 🎧 22,660

Google has been releasing a ton of new GenAI functionality under the name “Gemini”, and they’ve officially rebranded Bard as Gemini. We take some time to talk through Gemini compared with offerings from OpenAI, Anthropic, Cohere, etc.

We also discuss the recent FCC decision to ban the use of AI voices in robocalls and what the decision might mean for government involvement in AI in 2024.

Practical AI #255

Data synthesis for SOTA LLMs

2024-02-06 · #ai +1 · 🎧 21,269

Nous Research has been pumping out some of the best open access LLMs using SOTA data synthesis techniques. Their Hermes family of models is incredibly popular! In this episode, Karan from Nous talks about the origins of Nous as a distributed collective of LLM researchers. We also get into fine-tuning strategies and why data synthesis works so well.

Practical AI #254

Large Action Models (LAMs) & Rabbits 🐇

2024-01-30 · #ai +2 · 🎧 24,141

Recently the release of the rabbit r1 device resulted in huge interest in both the device and “Large Action Models” (or LAMs). What is an LAM? Is this something new? Did these models come out of nowhere, or are they related to other things we are already using? Chris and Daniel dig into LAMs in this episode and discuss neuro-symbolic AI, AI tool usage, multimodal models, and more.

Practical AI #253

Collaboration & evaluation for LLM apps

2024-01-23 · #ai +1 · 🎧 24,603

Small changes in prompts can create large changes in the output behavior of generative AI models. Add to that the uncertainty around proper evaluation of LLM applications, and you have a recipe for confusion and frustration. Raza and the Humanloop team have been diving into these problems, and, in this episode, Raza helps us understand how non-technical prompt engineers can productively collaborate with technical software engineers while building AI-driven apps.

Practical AI #238

Fine-tuning vs RAG

2023-09-06 · #ai +1 · 🎧 36,628

In this episode we welcome back our good friend Demetrios from the MLOps Community to discuss fine-tuning vs. retrieval augmented generation. Along the way, we also chat about OpenAI Enterprise, results from the MLOps Community LLM survey, and the orchestration and evaluation of generative AI workloads.

Practical AI #237

Automating code optimization with LLMs

2023-08-29 · #ai +1 · 🎧 32,637

You might have heard a lot about code generation tools using AI, but could LLMs and generative AI make our existing code better? In this episode, we sit down with Mike from TurinTech to hear about practical code optimizations using AI "translation" of slow code into fast code. We learn about their process for accomplishing this task, along with impressive results when automated code optimization is run on existing open source projects.

Practical AI #236

The new AI app stack

2023-08-23 · #ai +2 · 🎧 34,298

Recently a16z released a diagram showing the “Emerging Architectures for LLM Applications.” In this episode, we expand on things covered in that diagram to a more general mental model for the new AI app stack. We cover a variety of things from model “middleware” for caching and control to app orchestration.

Practical AI #235

Blueprint for an AI Bill of Rights

2023-08-09 · #ai +3 · 🎧 32,245

In this Fully Connected episode, Daniel and Chris kick it off by noting that Stability AI released their SDXL 1.0 image generation model! They discuss its virtues, and then dive into a discussion regarding how the United States, European Union, and other entities are approaching governance of AI through new laws and legal frameworks. In particular, they review the White House's approach, noting the potential for unexpected consequences.

Practical AI #232

Legal consequences of generated content

2023-07-18 · #ai +2 · 🎧 30,934

As a technologist, coder, and lawyer, few people are better equipped to discuss the legal and practical consequences of generative AI than Damien Riehl. He demonstrated this a couple years ago by generating, writing to disk, and then releasing every possible musical melody. Damien joins us to answer our many questions about generated content, copyright, dataset licensing/usage, and the future of knowledge work.

Practical AI #231

A developer's toolkit for SOTA AI

2023-07-12 · #ai +2 · 🎧 28,658

Chris sat down with Varun Mohan and Anshul Ramachandran, CEO / Cofounder and Lead of Enterprise and Partnership at Codeium, respectively. They discussed how to streamline and enable modern development in generative AI and large language models (LLMs). Their new tool, Codeium, was born out of the insights they gleaned from their work in GPU software and solutions development, particularly with respect to generative AI, large language models, and supporting infrastructure. Codeium is a free AI-powered toolkit for developers, with in-house models and infrastructure, not another API wrapper.

Practical AI #230

Cambrian explosion of generative models

2023-07-06 · #ai +3 · 🎧 29,580

In this Fully Connected episode, Daniel and Chris explore recent highlights from the current model proliferation wave sweeping the world - including Stable Diffusion XL, OpenChat, Zeroscope XL, and Salesforce XGen. They note the rapid rise of open models, and speculate that just as in open source software, open models will dominate the future. Such rapid advancement creates its own problems though, so they finish by itemizing concerns such as cybersecurity, workflow productivity, and impact on human culture.

Practical AI #225

Controlled and compliant AI applications

2023-05-31 · #ai +2 · 🎧 27,700

You can’t build robust systems with inconsistent, unstructured text output from LLMs. Moreover, LLM integrations scare corporate lawyers, finance departments, and security professionals due to hallucinations, cost, lack of compliance (e.g., HIPAA), leaked IP/PII, and “injection” vulnerabilities.

In this episode, Chris interviews Daniel about his new company called Prediction Guard, which addresses these issues. They discuss some practical methodologies for getting consistent, structured output from compliant AI systems. These systems, driven by open access models and various kinds of LLM wrappers, can help you delight customers AND navigate the increasing restrictions on “GPT” models.

Practical AI #224

Data augmentation with LlamaIndex

2023-05-23 · #ai +1 · 🎧 29,114

Large Language Models (LLMs) continue to amaze us with their capabilities. However, using LLMs in production AI applications requires the integration of private data. Join us for a captivating conversation with Jerry Liu from LlamaIndex, where he provides valuable insights into data ingestion, indexing, and querying tailored specifically for LLM applications. Along the way, we uncover different query patterns and venture beyond the realm of vector databases.

Practical AI #223

Creating instruction tuned models

2023-05-16 · #ai +1 · 🎧 27,503

At the recent ODSC East conference, Daniel got a chance to sit down with Erin Mikail Staples to discuss the process of gathering human feedback and creating an instruction-tuned Large Language Model (LLM). They also chatted about the importance of open data and practical tooling for data annotation and fine-tuning. Do you want to create your own custom generative AI models? This is the episode for you!

Practical AI #222

The last mile of AI app development

2023-05-11 · #ai +3 · 🎧 29,433

There are a ton of problems around building LLM apps in production, and the last mile of that problem. Travis Fischer, builder of open source AI projects like @ChatGPTBot, joins us to talk through these problems (and how to overcome them). He helps us understand the hierarchy of complexity from simple prompting to augmentation, agents, and fine-tuning. Along the way we discuss the frontend developer community that is rapidly adopting AI technology via TypeScript (not Python).

Changelog Interviews #532

Bringing Whisper and LLaMA to the masses

2023-03-22 · #llm +1 · 🎧 32,964

This week we’re talking with Georgi Gerganov about his work on Whisper.cpp and llama.cpp. Georgi first crossed our radar with whisper.cpp, his port of OpenAI’s Whisper model in C and C++. Whisper is a speech recognition model enabling audio transcription and translation. Something we’re paying close attention to here at Changelog, for obvious reasons. Between the invite and the show’s recording, he had a new hit project on his hands: llama.cpp. This is a port of Facebook’s LLaMA model in C and C++. Whisper.cpp made a splash, but llama.cpp is growing in GitHub stars faster than Stable Diffusion did, which was a rocket ship itself.
