Join us at the After Party, the 12-Factor Agent, how to build an agent, Pocket Flow, getting forked by Microsoft & much more

Changelog News

Developer news for endless flow state

Jerod again! 👋

A recently published study called The Effect of Deactivating Facebook and Instagram on Users’ Emotional State proved what we all knew to be true, but previously didn’t have a study to point to: social media, at least in the form we’ve been using it for the past decade plus, is a total drag.

Oh well, let’s get into this week’s news.


🎧 Fresh beats for endless flow state

Our fourth full-length Changelog Beats album dropped today! It’s called After Party and it features beloved BMC tracks from our outros, ad rolls & transitions. It’s basically 26 chill beats to help you get into a state of flow and stay there. 🧘

cover art: blue to black gradients with a diamond outlining the words “After Party” in the center.

And for the many, many people who have asked us to make The Changelog’s epic outro, Last Light, available to stream/buy… we got you! Our long-time outro is also the outro track on After Party 💚

🤖 The 12-factor Agent

Dex Horthy, who has been hacking on AI agents for a while, set out to answer the question:

What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?

What he came up with is 12 factors, in the spirit of The Twelve-Factor App:

  1. Natural Language to Tool Calls
  2. Own your prompts
  3. Own your context window
  4. Tools are just structured outputs
  5. Unify execution state and business state
  6. Launch/Pause/Resume with simple APIs
  7. Contact humans with tool calls
  8. Own your control flow
  9. Compact Errors into Context Window
  10. Small, Focused Agents
  11. Trigger from anywhere, meet users where they are
  12. Make your agent a stateless reducer

If those 12 bullet points are more confusing than enlightening, that’s because they’re just bullet points! Click through for the full explainers.
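To make one of those bullets concrete: factor 12 treats the agent as a pure function from (state, event) to a new state. Here’s a minimal sketch of that idea in Python — the state shape, event names, and `reduce` function are illustrative, not taken from Dex’s post:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AgentState:
    """Immutable conversation state: the full context so far."""
    messages: tuple = ()          # (role, content) pairs
    done: bool = False

def reduce(state: AgentState, event: tuple) -> AgentState:
    """Stateless reducer: same (state, event) in, same state out.
    No hidden globals, so runs can be paused, resumed, or replayed."""
    kind, payload = event
    if kind == "user_message":
        return replace(state, messages=state.messages + (("user", payload),))
    if kind == "tool_result":
        return replace(state, messages=state.messages + (("tool", payload),))
    if kind == "final_answer":
        return replace(state,
                       messages=state.messages + (("assistant", payload),),
                       done=True)
    return state  # unknown events leave state untouched

# Replaying the same event log always rebuilds the same state:
events = [("user_message", "hi"), ("final_answer", "hello!")]
state = AgentState()
for event in events:
    state = reduce(state, event)
```

The payoff is that pause/resume (factor 6) falls out for free: persist the event log, replay it later, and you’re exactly where you left off.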

🏗️ How to build an agent

Speaking of agents… you might now be thinking it’s super hard to build one. Thorsten Ball says you are wrong:

It’s not that hard to build a fully functioning, code-editing agent.

It seems like it would be. When you look at an agent editing files, running commands, wriggling itself out of errors, retrying different strategies - it seems like there has to be a secret behind it.

There isn’t. It’s an LLM, a loop, and enough tokens.

Turns out you can build a “small and yet highly impressive agent” in less than 400 lines of code. Thorsten proves this out by building an Anthropic-based agent in Go over the course of this blog post. I followed along and I have to say, it’s all very basic stuff. Thorsten’s closer:

These models are incredibly powerful now. 300 lines of code and three tools and now you’re able to talk to an alien intelligence that edits your code. If you think “well, but we didn’t really…” — go and try it! Go and see how far you can get with this. I bet it’s a lot farther than you think.

That’s why we think everything’s changing.
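If “an LLM, a loop, and enough tokens” sounds too good to be true, the skeleton really is this small. Here’s a hedged sketch in Python — `call_llm` is a stand-in for any chat-completion API and the single tool is made up; this shows the shape of the loop, not Thorsten’s Go code:

```python
# A bare-bones agent loop: an LLM, a loop, and tool dispatch.
TOOLS = {"upper": lambda text: text.upper()}  # illustrative tool

def call_llm(messages):
    # A real model decides whether to answer or to request a tool call.
    # This stub requests one tool call, then answers with its result.
    if messages[-1]["role"] == "tool":
        return {"type": "answer", "text": messages[-1]["content"]}
    return {"type": "tool_call", "name": "upper",
            "args": {"text": messages[0]["content"]}}

def agent(user_input: str) -> str:
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = call_llm(messages)
        if reply["type"] == "answer":
            return reply["text"]
        # Run the requested tool, feed the result back in, loop again.
        result = TOOLS[reply["name"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
```

Swap the stub for a real API client, add file-editing and shell tools, and you’ve rebuilt most of what the blog post builds.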

💰 Teams with faster builds ship faster

Thanks to Depot for sponsoring Changelog News

Depot’s CEO Kyle Galbraith recently shared his thoughts on the age-old question: “build or buy?” for CI/CD.

The fact is teams with faster builds ship faster and more often. And teams that know how to CI well create faster builds. Imagine what would happen if your builds got 10x faster…

Kyle found that when companies talk about “building their own CI,” what they actually mean is self-hosting runners from another CI provider. After analyzing hundreds of engineering teams, he identified three distinct archetypes:

First, there’s “The Abstraction-First Team” - these folks know what matters and move fast. They discover Depot and immediately think, “It’s ten times faster, half the price of GitHub Actions, and I just change one line of code? No brainer.”

Then there’s “The Infra-Curious Team” - they’re in that honeymoon phase with self-hosting. “GitHub Actions is slow,” so they spin up their own runners in AWS. But now they literally own the uptime, and they have to learn how to make the builds faster AND keep them secure.

These folks eventually become the last archetype, which I’ll let Kyle cover in his post. After you read that, do yourself a favor and check out Depot!

🏭 Pocket Flow is a 100-line LLM framework

The author of Pocket Flow, Zachary Huang, thinks current LLM frameworks (LangChain, CrewAI, LangGraph, etc.) are bloated. After reading Thorsten Ball’s essay on how to build an agent, I can believe it. The 100 lines in Pocket Flow capture “the core abstraction of LLM frameworks” and you build on top of that to do multi-agent, workflows, RAG, etc.
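What does a “core abstraction” that small look like? Roughly: nodes chained into a flow, each transforming a shared store. Here’s a toy version in Python to give you the flavor — this is an illustration of the pattern, not Pocket Flow’s actual API:

```python
# A toy "node + flow" core: the kind of abstraction a 100-line
# framework is built around.
class Node:
    def __init__(self, fn):
        self.fn = fn          # each node wraps one transformation
        self.next = None

    def then(self, node):     # chain nodes into a pipeline
        self.next = node
        return node

    def run(self, shared: dict) -> dict:
        shared = self.fn(shared)  # transform the shared store...
        # ...then hand it to the next node, if any.
        return self.next.run(shared) if self.next else shared

# Agents, workflows, RAG, etc. are just graphs of such nodes:
retrieve = Node(lambda s: {**s, "docs": ["doc about " + s["query"]]})
answer   = Node(lambda s: {**s, "answer": f"Based on {len(s['docs'])} doc(s)."})
retrieve.then(answer)
result = retrieve.run({"query": "flows"})
```

Everything else (retries, branching, batching) gets layered on top of a kernel like this, which is why the framework itself can stay tiny.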

One great example of Pocket Flow in action is this repo, which “crawls GitHub repositories and builds a knowledge base from the code. It analyzes entire codebases to identify core abstractions and how they interact, and transforms complex code into beginner-friendly tutorials with clear visualizations.”

🍴 Getting forked by Microsoft

As the sole maintainer of an open source project, Philip Laine was excited when Microsoft showed interest in Spegel, his tool designed to enhance Kubernetes cluster scalability through peer-to-peer (P2P) image distribution. But then things got… weird. At KubeCon Paris, Laine attended a talk about strategies to speed up image distribution that mentioned a Microsoft project called Peerd.

While looking into Peerd, my enthusiasm for understanding different approaches in this problem space quickly diminished. I saw function signatures and comments that looked very familiar, as if I had written them myself. Digging deeper I found test cases referencing Spegel and my previous employer, test cases that have been taken directly from my project. References that are still present to this day. The project is a forked version of Spegel, maintained by Microsoft, but under Microsoft’s MIT license.

Spegel is also MIT licensed, which does allow for forking and modification without contributing back, but does not allow “removing the original license and purport that the code was created by someone else.” Not cool, Microsoft. What’s an open source maintainer to do in this circumstance?

As an effort to fund the work on Spegel I have enabled GitHub sponsors. This experience has also made me consider changing the license of Spegel, as it seems to be the only stone I can throw.

🎧 Making DNSimple

Anthony Eden, founder of DNSimple, joins the show to talk about the world of managed hosting for DNS and more. VIDEO

Art for the episode: Smiling faces. Title text. That kind of stuff.


🎙️ Vibing into the vibe

Nick Nisi joins us to confess his AI subscription glut, drool over some cool new hardware gadgets, discuss why the TypeScript team chose Go for their new compiler, opine on the React team’s complicated relationship with Vercel, suggest people try Astro, update us on his browser habits, and more. VIDEO

Art for the episode: Smiling faces. Title text. That kind of stuff.

💬 Exmeralda is an AI chatbot just for Elixir

Exmeralda helps you ask questions about Elixir libraries and get accurate, version-specific answers. Powered by Retrieval-Augmented Generation (RAG), it combines the latest AI with real documentation to deliver helpful, grounded responses.

I don’t know if RAG ultimately matters in a world of ever-expanding context windows, but I do know I’ve been asking if each programming community needs its own AI chatbot, and it looks like the folks at bitcrowd decided (at least for Elixir) the answer is: yes.

🔝 Pipelining might be my favorite language feature

Pipelining, for the uninitiated, is a programming language feature that lets you pass the result of one expression as an argument to the next call, chaining operations left to right instead of nesting them. I absolutely love it, and so does Mond:

If you’re writing pipelined code—and not trying overly hard to fit everything into a single, convoluted, nested pipeline—then your functions will naturally split up into a few pipeline chunks.

Each chunk starts with a piece of ‘main data’ that travels on a conveyer belt, where every line performs exactly one action to transform it. Finally, a single value comes out at the end and gets its own name, so that it may be used later.

And that is—in my humble opinion—exactly how it should be. Neat, convenient, separated ‘chunks’, each of which can easily be understood in its own right.
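Python doesn’t have a pipeline operator, but you can fake the conveyor-belt feel with a tiny helper. This `pipe` function is my own sketch (mimicking Elixir’s `|>`), not something from Mond’s post:

```python
from functools import reduce

def pipe(value, *fns):
    """Thread `value` through each function left to right,
    like Elixir's |> or F#'s |> pipeline operator."""
    return reduce(lambda acc, fn: fn(acc), fns, value)

# Nested:    total = sum(map(int, "1 2 3".split()))
# Pipelined: the 'main data' rides the conveyor belt instead.
total = pipe(
    "1 2 3",
    str.split,                # ["1", "2", "3"]
    lambda xs: map(int, xs),  # 1, 2, 3
    sum,                      # 6
)
```

Same result either way, but the pipelined version reads top to bottom as “take this string, split it, parse the pieces, add them up” — one action per line, just like Mond describes.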

🖼️ Awesome collection of GPT-4o images & prompts

Y’all know how impressed I’ve been by OpenAI’s latest image generation efforts. As we all know, you can use it to do a whole lot more than turn people into walruses (or my preferred pluralization: walri). The linked repo has tons of cool ideas with accompanying prompts to help bring them into the world. I used this one to create playing cards of Adam, Nick, and myself from this week’s Friends recording. Nick’s was generated last, so it had already started to lose the thread 🤣

A series of 3 digital trading cards: Jerod on the left, Adam in the middle, Nick on the right.


📐 Don’t forget your (un)ordered list


That’s the news for now, but we have some great episodes coming up this week:

Have a great week, forward this to a friend who might dig it & I’ll talk to you again real soon. 💚

–Jerod