Of prompts and engineers
Tips, tricks, best practices and philosophical AI debates abound when OpenAI ambassador Bram Adams joins Natalie, Johnny & Mat to discuss prompt engineering.
The Astro team releases a new documentation builder, legendary computer scientist Donald Knuth plays with ChatGPT, over 500 volunteer mods have signed an open letter to Stack Overflow Inc, Reddit faces a revolt due to their new API pricing & the Technology Innovation Institute release Falcon, a new open source LLM that’s topping Hugging Face’s leaderboard.
This week Adam is joined by Scott Johnston, CEO of Docker. Scott shares his journey to the CEO role, how he’s leading the company to not only grow revenue but also invest in developer-facing features, their shift from an enterprise sales focus to a PLG-driven model, and we even talk about Docker Desktop, the competition it faces, and the struggle they face when considering making it open source.
Gerhard is back! Today we continue our Kaizen tradition by getting together (for the 10th time) with one of our oldest friends to talk all about the continuous improvements we’re making to Changelog’s platform and podcasts.
KBall interviews Nick Nisi about the Pandora’s box that is his tooling/developer setup. Starting at the lowest layer of the terminal emulator he uses, they move upwards into command line tools, into Tmux (terminals within terminals!), his epic NeoVim configuration, and finally into the tools he uses for notekeeping and productivity.
This week on The Changelog we’re continuing our Maintainer Month series by taking you back to the hallway track of The Linux Foundation’s Open Source Summit North America 2023 in Vancouver, Canada. Today’s anthology episode features: Stormy Peters (VP of Communities at GitHub), Dr. Dawn Foster (Director of Open Source Community Strategy at VMware), and Angie Byron (Drupal Core Product Manager and Community Director at Aiven).
Special thanks to our friends at GitHub for sponsoring us to attend this conference as part of Maintainer Month.
Return guests Ben Johnson & Chris James join Mat & Kris to talk about the files and folders of your Go projects, big and small. Does the holy grail exist, of the perfect structure to rule them all? Or are we doomed to be figuring this out for the rest of our lives?
You can’t build robust systems with inconsistent, unstructured text output from LLMs. Moreover, LLM integrations scare corporate lawyers, finance departments, and security professionals due to hallucinations, cost, lack of compliance (e.g., HIPAA), leaked IP/PII, and “injection” vulnerabilities.
In this episode, Chris interviews Daniel about his new company called Prediction Guard, which addresses these issues. They discuss some practical methodologies for getting consistent, structured output from compliant AI systems. These systems, driven by open access models and various kinds of LLM wrappers, can help you delight customers AND navigate the increasing restrictions on “GPT” models.
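To make “consistent, structured output” concrete, here’s a minimal sketch of the general pattern: ask the model for JSON, then validate it against a schema and reject (or retry) anything that doesn’t conform. The SupportTicket schema is a hypothetical example for illustration, not Prediction Guard’s actual API.

```python
import json
from typing import Optional

from pydantic import BaseModel, ValidationError

# The shape we require from the model; anything else is rejected.
class SupportTicket(BaseModel):
    category: str
    urgency: int
    summary: str

def parse_llm_output(raw: str) -> Optional[SupportTicket]:
    """Validate raw LLM text against the schema; return None on failure."""
    try:
        return SupportTicket(**json.loads(raw))
    except (json.JSONDecodeError, ValidationError):
        return None

# In practice you'd retry the prompt (or fall back) whenever validation fails.
print(parse_llm_output('{"category": "billing", "urgency": 2, "summary": "Card declined"}'))
print(parse_llm_output("Sure! Here's a summary of the ticket..."))  # -> None
```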
The Gorilla team is building an API store for LLMs, DeviceScript is Microsoft’s new TypeScript programming environment for microcontrollers, Nyxt is a hackable browser written in Lisp, Morgan Housel writes about expectations debt & I issue a gentle reminder to my fellow software engineers: there’s still no silver bullet.
What if your favorite conference’s hallway track continued year round? That’s the vibe we’re trying to capture with Changelog & Friends, a new Friday talk show from your friends at Changelog. In this intro episode, Adam & Jerod talk all about our new MWF plan for The Changelog, discuss what this Friends flavor is all about, and have a lot of fun along the way.
Nick is excited to explain CVA to us like we’re five (then again like we’re 41).
KBall is excited to share details of his new stack (for the new app he’s building).
Jerod is excited to share some recent news items (but he’s the only one).
And finally, we’re all excited to debate TypeScript vs JSDoc comments!
This week on The Changelog we’re taking you to the hallway track of The Linux Foundation’s Open Source Summit North America 2023 in Vancouver, Canada. Today’s anthology episode features: Beyang Liu (Co-founder and CTO at Sourcegraph), Denny Lee (Developer Advocate at Databricks), and Stella Biderman (Executive Director and Head of Research at EleutherAI).
Special thanks to our friends at GitHub for sponsoring us to attend this conference as part of Maintainer Month.
Now that you’ve aced that CFP, the gang is back to share our best tips & tricks to help you give your best conference talk ever.
Large Language Models (LLMs) continue to amaze us with their capabilities. However, using LLMs in production AI applications requires integrating private data. Join us for a captivating conversation with Jerry Liu from LlamaIndex, where he shares valuable insights into data ingestion, indexing, and querying tailored specifically for LLM applications. Along the way, we uncover different query patterns and venture beyond the realm of vector databases.
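As a rough illustration of that ingestion → indexing → query flow, here’s a minimal LlamaIndex-style sketch. Exact imports and class names vary across llama_index versions, and the “data” directory and question are placeholders.

```python
# Ingest local documents, build a vector index over them, then query it.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # ingestion
index = VectorStoreIndex.from_documents(documents)     # indexing
query_engine = index.as_query_engine()                 # query interface
print(query_engine.query("What does our refund policy say?"))
```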
Will McGugan’s Trogon auto-generates friendly TUIs for your CLI apps, Stability AI’s official open source variant of DreamStudio, John Calhoun writes about life after 26 years programming at Apple, Google’s new TLDs could be a boon to scammers & Pablo Meier documents a way to discuss programming languages.
Nick & KBall sit down with the brilliant Stephen Haberman to discuss all things ORMs! 💻🔍
From the advantages and disadvantages of ORMs in general, to delving into the intricacies of his innovative project Joist, which brings a fresh, idiomatic, ActiveRecord-esque approach to TypeScript. 🚀
So sit back, relax, and let’s dive deep into the world of ORMs with the experts!
This week Sarah Drasner joins us to talk about her book Engineering Management for the Rest of Us and her experience leading engineering at Zillow, Microsoft, Netlify, and now Google.
At the recent ODSC East conference, Daniel got a chance to sit down with Erin Mikail Staples to discuss the process of gathering human feedback and creating instruction-tuned Large Language Models (LLMs). They also chatted about the importance of open data and practical tooling for data annotation and fine-tuning. Do you want to create your own custom generative AI models? This is the episode for you!
Thunderbird is thriving on small donations, Syncthing is a super-cool continuous file sync program, LLMs are so hot right now and they’re making vectors hot by proxy & MDN defines a Baseline for stable web features.
Developer slash artist Alex Miller joins Jerod & Amelia to discuss the challenge he faced after deciding to eschew fancy frameworks and libraries in favor of vanilla JS to build an interactive essay called Grid World for the html review.
Conferences are an integral part of the Go community, but the experience of conferences has remained the same even as the value propositions change. In this episode we discuss what conferences generally provide, how value propositions have changed, and what changes conference organizers could make to realign their conference experience to a new set of value propositions.
There are a ton of problems around building LLM apps in production, especially in that last mile. Travis Fischer, builder of open AI projects like @ChatGPTBot, joins us to talk through these problems (and how to overcome them). He helps us understand the hierarchy of complexity from simple prompting to augmentation, agents, and fine-tuning. Along the way we discuss the frontend developer community that is rapidly adopting AI technology via TypeScript (not Python).
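The first two rungs of that hierarchy are easy to show side by side: plain prompting versus augmentation, where retrieved context gets stuffed into the prompt before the call. Here’s a minimal Python sketch using the OpenAI client’s v1-style API; the model name, question, and retrieved text are placeholders.

```python
from typing import Optional

from openai import OpenAI  # v1-style SDK; adjust for your client version

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(question: str, retrieved_context: Optional[str] = None) -> str:
    """Plain prompting when no context is given; simple augmentation otherwise."""
    prompt = question
    if retrieved_context:
        prompt = (
            f"Answer using only this context:\n{retrieved_context}\n\n"
            f"Question: {question}"
        )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Plain prompting vs. augmentation with a (hypothetical) retrieved document:
print(answer("When was our SOC 2 audit completed?"))
print(answer("When was our SOC 2 audit completed?",
             retrieved_context="Audit completed 2023-03-14."))
```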
This week we’re celebrating Maintainer Month along with our friends at GitHub. Open source runs the world, but who runs open source? Maintainers. Open source maintainers are behind the software we use every day, but they don’t always have the community or support they need. That’s why we’re celebrating open source maintainers during the month of May. Today’s conversation features Alyssa Wright (Bloomberg), Chad Whitacre (Sentry), and Duane O’Brien (Creator of the FOSS Contributor Fund and framework). We get into all the details, the why, the hows, and the struggles involved for companies to support open source.
Jeremy Howard thinks Mojo might be the biggest programming language advance in decades, Amelia Wattenberger is not impressed by AI chatbots, a leaked Google memo admits big tech has no AI moats & Werner Vogels reminds us that monoliths are not dinosaurs.
This week Adam is joined by Michael Grinich, Founder & CEO at WorkOS. Michael shares his journey to build WorkOS, what it takes to cross the Enterprise Chasm, and how he’s building his sales organization for growth.
Dax Raad joins KBall and Nick to chat about SST, a framework that makes it easier to build full-stack applications on AWS. We chat about how the project got started and its goals. Then we discuss OpenNext, an open source, framework-agnostic serverless adapter for Next.js.
The DevCycle team joins Jon & Kris for a deep conversation on WebAssembly (Wasm) and Go! After a high-level discussion of what Wasm is all about, we learn how they’re using it in production in cool and interesting ways. We finish up with a spicy unpop segment featuring buzzwords like “ChatGPT”, “LLM”, “NFT” and “AGI”.
José Valim joins Jerod to talk all about what’s new in Livebook – the Elixir-based interactive code notebook he’s been working on the last few years.
José made a big bet when he decided to bring machine learning to Elixir. That bet is now paying off with amazing new capabilities such as building and deploying a Whisper-based chat app to Hugging Face in just 15 minutes.
José demoed that and much more during Livebook’s first-ever launch week. Let’s get into it.
Model sizes are crazy these days, with billions and billions of parameters. As Mark Kurtz explains in this episode, this makes inference slow and expensive, even though 90% or more of those parameters often don’t influence the outputs at all.
Mark helps us understand the practicalities and the progress being made in model optimization and CPU inference, including the increasing opportunities to run LLMs and other Generative AI models on commodity hardware.
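To make “parameters that don’t influence the outputs” concrete, here’s a minimal sketch of unstructured magnitude pruning using PyTorch’s built-in utilities. It’s a generic illustration of the idea (with a made-up toy model), not the specific optimization tooling discussed in the episode.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A tiny toy model standing in for a much larger network.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Zero out the 90% of weights with the smallest magnitudes in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# Count how many parameters (weights and biases) remain non-zero.
total = sum(p.numel() for p in model.parameters())
nonzero = sum((p != 0).sum().item() for p in model.parameters())
print(f"{nonzero}/{total} parameters are non-zero")
```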
Hyperswitch is like the adapter pattern for payments, Austin Henley writes about the future of programming by summarizing recent research papers, Thoughtworks published the 28th volume of their Tech Radar, the team at General Products reminds us to scan our technical writing for words such as “easy”, “painless”, “straightforward”, “trivial”, “simple” and “just” & we finish with a lightning round of cool tools.