Practical AI #238

Fine-tuning vs RAG

In this episode we welcome back our good friend Demetrios from the MLOps Community to discuss fine-tuning vs. retrieval augmented generation. Along the way, we also chat about OpenAI Enterprise, results from the MLOps Community LLM survey, and the orchestration and evaluation of generative AI workloads.


Discussion


2023-10-08T19:25:36Z

Loved this episode! I would love a deeper dive on RAG vs. fine-tuning for domain-specific data. It was quite surprising to me that fine-tuning isn't necessarily effective at producing a better model from company-specific data (internal docs, etc.).

2023-10-16T17:23:23Z

RAG is used to reduce hallucination, right?
I mean, RAG and fine-tuning have different use cases; fine-tuning is more about bringing uniqueness to the output. Shouldn't both go hand in hand?
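To make the "hand in hand" point concrete, here is a minimal sketch of how the two could be combined: retrieval grounds the answer in company documents (addressing hallucination), while the generator that consumes the prompt could itself be a fine-tuned model that shapes tone and format. All names here (`Document`, `embed`, `generate`) are hypothetical placeholders, not a specific library's API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Document:
    text: str
    embedding: List[float]

def cosine(a: List[float], b: List[float]) -> float:
    # Simple cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_emb: List[float], docs: List[Document], k: int = 3) -> List[Document]:
    # Grounding step: pull the k most relevant internal docs for the query.
    return sorted(docs, key=lambda d: cosine(query_emb, d.embedding), reverse=True)[:k]

def answer(question: str,
           embed: Callable[[str], List[float]],
           generate: Callable[[str], str],  # could be a base or fine-tuned LLM
           docs: List[Document]) -> str:
    # RAG: stuff retrieved context into the prompt, then let the model generate.
    context = "\n\n".join(d.text for d in retrieve(embed(question), docs))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```

In this framing the retrieval step supplies facts the model was never trained on, and swapping in a fine-tuned `generate` only changes how the answer is expressed, not what facts it has access to.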
