13% of the time, Devin works every time
Jerod, KBall & Nick discuss the latest news: Devin, Astro DB, The JavaScript Registry, Tailwind 4 & Angular merging with Wiz. Oh, and a surprise mini-game of HeadLIES!
Daniel and Chris are out this week, so we’re bringing you conversations all about AI’s complicated relationship to software developers from other Changelog pods: JS Party, Go Time & The Changelog.
Daniel & Chris explore the state of the art in prompt engineering with Jared Zoneraich, the founder of PromptLayer. PromptLayer is the first platform built specifically for prompt engineering. It can visually manage prompts, evaluate models, log LLM requests, search usage history, and help your organization collaborate as a team. Jared provides expert guidance on how to implement prompt engineering, but also illustrates how we got here, and where we’re likely to go next.
Runway is an applied AI research company shaping the next era of art, entertainment & human creativity. Chris sat down with Runway co-founder / CTO, Anastasis Germanidis, to discuss their rise and how it’s defining the future of the creative landscape with its text & image to video models. We hope you find Anastasis’s founder story as inspiring as Chris did.
While everyone is super hyped about generative AI, computer vision researchers have been working in the background on significant advancements in deep learning architectures. YOLOv9 was just released with some noteworthy advancements relevant to parameter efficient models. In this episode, Chris and Daniel dig into the details and also discuss advancements in parameter efficient LLMs, such as Microsoft’s 1-Bit LLMs and Qualcomm’s new AI Hub.
We’re all thinking about it and wondering if our job is safe from AI. Maybe. Maybe not. In this episode, Johnny Boursiquot is joined by some industry veterans who have been through multiple innovation cycles to share their insights and advice on this subject.
This week Adam is joined by Quinn Slack, CEO of Sourcegraph for a “2 years later” catch up from his last appearance on Founders Talk. This conversation is a real glimpse into what it takes to be CEO of Sourcegraph in an era when code intelligence is shifting more and more into the AI realm, how they’ve been driving towards this for years, the subtle human leveling up we’re all experiencing, the direction of Sourcegraph as a result — and Quinn also shares his order of operations when it comes to understanding the daily state of their growth.
Recently, we briefly mentioned the concept of “Activation Hacking” in the episode with Karan from Nous Research. In this fully connected episode, Chris and Daniel dive into the details of this model control mechanism, also called “representation engineering”. Of course, they also take time to discuss the new Sora model from OpenAI.
Chris & Daniel explore AI in national security with Lt. General Jack Shanahan (USAF, Ret.). The conversation reflects Jack’s unique background as the only senior U.S. military officer responsible for standing up and leading two organizations in the United States Department of Defense (DoD) dedicated to fielding artificial intelligence capabilities: Project Maven and the DoD Joint AI Center (JAIC).
Together, Jack, Daniel & Chris dive into the fascinating details of Jack’s recent written testimony to the U.S. Senate’s AI Insight Forum on National Security, in which he provides the U.S. government with thoughtful guidance on how to achieve the best path forward with artificial intelligence.
This week we’re joined by Stefano Maffulli, the Executive Director of the Open Source Initiative (OSI). They are responsible for representing the idea and the definition of open source globally. Stefano shares the challenges they face as a US-based non-profit with a global impact. We discuss the work Stefano and the OSI are doing to define Open Source AI, and why we need an accepted and shared definition. Of course we also talk about the potential impact if a poorly defined Open Source AI emerges from all their efforts.
Note: Stefano was under the weather for this conversation, but powered through because of how important this topic is.
Google has been releasing a ton of new GenAI functionality under the name “Gemini”, and they’ve officially rebranded Bard as Gemini. We take some time to talk through Gemini compared with offerings from OpenAI, Anthropic, Cohere, etc.
We also discuss the recent FCC decision to ban the use of AI voices in robocalls and what the decision might mean for government involvement in AI in 2024.
We’re taking you back to the hallway track at THAT Conference where we have 3 MORE fun conversations: one with Samuel Goff about the future of energy, one with YouTuber Jess Chan about the future of content creation & one with Vanessa Villa / Noah Jenkins about ag tech & the future of food.
Nous Research has been pumping out some of the best open access LLMs using SOTA data synthesis techniques. Their Hermes family of models is incredibly popular! In this episode, Karan from Nous talks about the origins of Nous as a distributed collective of LLM researchers. We also get into fine-tuning strategies and why data synthesis works so well.
This week on The Changelog we’re talking with Joe Reis about data engineering and the beginning of generative AI. We discuss phone hacking via frequency, the role of a data engineer, this AI hype cycle we’re in, build vs buy, the disconnect between data analysts and the business, ethical considerations around AI-generated content, and more. We also discuss the tension between AI and traditional engineering, as well as the inevitability of AI integration into pretty much everything.
Recently the release of the rabbit r1 device resulted in huge interest in both the device and “Large Action Models” (or LAMs). What is an LAM? Is this something new? Did these models come out of nowhere, or are they related to other things we are already using? Chris and Daniel dig into LAMs in this episode and discuss neuro-symbolic AI, AI tool usage, multimodal models, and more.
Small changes in prompts can create large changes in the output behavior of generative AI models. Add to that the confusion around proper evaluation of LLM applications, and you have a recipe for confusion and frustration. Raza and the Humanloop team have been diving into these problems, and, in this episode, Raza helps us understand how non-technical prompt engineers can productively collaborate with technical software engineers while building AI-driven apps.
Recently, Intel’s Liftoff program for startups and Prediction Guard hosted the first ever “Advent of GenAI” hackathon. 2,000 people from all around the world participated in Generative AI-related challenges over 7 days. In this episode, we discuss the hackathon, some of the creative solutions, the idea behind it, and more.
We scoured the internet to find all the AI-related predictions for 2024 (at least from people that might know what they are talking about), and, in this episode, we talk about some of the common themes. We also take a moment to look back at 2023, commenting with some distance on a crazy AI year.
Prashanth Rao mentioned LanceDB as a stand out amongst the many vector DB options in episode #234. Now, Chang She (co-founder and CEO of LanceDB) joins us to talk through the specifics of their open source, on-disk, embedded vector search offering. We talk about how their unique columnar database structure enables serverless deployments and drastic savings (without performance hits) at scale. This one is super practical, so don’t miss it!
The new open source AI book from PremAI starts with “As a data scientist/ML engineer/developer with a 9 to 5 job, it’s difficult to keep track of all the innovations.” We couldn’t agree more, and we are so happy that this week’s guest Casper (among other contributors) has created this resource for practitioners.
During the episode, we cover the key categories to think about as you try to navigate the open source AI ecosystem, and Casper gives his thoughts on fine-tuning, vector DBs & more.
In this enlightening episode, we delve deeper than the usual buzz surrounding AI’s perils, focusing instead on the tangible problems emerging from the use of machine learning algorithms across Europe. We explore “suspicion machines” — systems that assign scores to welfare program participants, estimating their likelihood of committing fraud. Join us as Justin and Gabriel share insights from their thorough investigation, which involved gaining access to one of these models and meticulously analyzing its behavior.
Gergely Orosz is back for our annual year-end update on the tech market, writ large. How is hiring? Has AI really changed the game? What about that OpenAI fiasco?
We also talk in-depth about Gergely’s self-published book, The Software Engineer’s Guidebook, which has been four years in the making.
Daniel & Chris conduct a retrospective analysis of the recent OpenAI debacle in which CEO Sam Altman was sacked by the OpenAI board, only to return days later with a new supportive board. The events and people involved are discussed from start to finish along with the potential impact of these events on the AI industry.
This week we’re joined by Emil Sjölander from Figma — talking about bringing Dev Mode to Figma. Dev Mode is their new workspace in Figma that’s designed to bring developers and design to the same tool.
The question they’re trying to answer is “How do you create a home for developers in a design tool?” We go way back to Emil’s startup that was acquired by Figma called Visly, how we iterated to here from 20 years ago (think PSD > HTML days), what they did to build Dev Mode, what they’re doing around codegen, the popularity of design systems, and what it takes to go from zero to Dev Mode.
Shopify recently released a Hugging Face space demonstrating very impressive results for replacing background scenes in product imagery. In this episode, we hear the backstory and technical details of this work from Shopify’s Russ Maschmeyer. Along the way we discuss how to come up with clever AI solutions (without training your own model).