git push deployments on your own servers, a nanoGPT pipeline in a spreadsheet, you'll regret using natural keys, a grand unified theory of the AI hype cycle & more

Changelog News

Developer news worth your attention

Happy Monday! 👋

The AI gold rush has Nvidia breaking profit records left and right. Turns out at their current market cap they’re now valued at $102M per employee. I think it’s time for some serious raises… Ok, let’s get into the news.


🎧 Good pods for the week ahead

🎙️ Mark Russinovich, Eric Boyd & Neha Batra changelog.fm/594
👂🗣️ “Best cast of 2024 so far!”
💚 Our #define game show returns changelog.com/friends/47
👂🗣️ “This was the techiest improv show I’ve ever been to”
🪩 Debate! Should web development need a build step? jsparty.fm/326
🚀 Gina Häußge on 3D printed infrastructure shipit.show/107
🤖 Rise of the AI PC & local LLMs practicalai.fm/272

⌛️ Apple finally gets Siri-ous

The seemingly sleepy tech giant in Cupertino woke up today at their annual WWDC event. Their response to the recent transformer-infused language model boom: AI, which is short for Apple Intelligence (see what they did there?).

This “new” AI will weave its way through the entire suite of Apple platforms / 1st-party apps, but the primary interface is still Siri. Yes, that Siri.

This means Siri is getting a new look & feel, the ability to query it via text, better natural language detection, on-screen awareness, app intents, personal context & a gazillion other things. The demo was quite impressive, but aren’t they all?

Oh, and Siri can also ask ChatGPT when it doesn’t have an answer for you. If nothing else, this is a huge upgrade from “Here’s what I found on the web.”

💪 Git push deployments to your own servers

We wanted a Heroku/CloudFoundry-like way to deploy stuff on a few ARM boards, but since dokku didn’t work on ARM at the time and even Docker can be overkill sometimes, a simpler solution was needed.

piku is currently able to deploy, manage and independently scale multiple applications per host on both ARM and Intel architectures, and works on any cloud provider (as well as bare metal) that can run Python, nginx and uwsgi.

Heroku 100% changed the deployment game with its git push UX. Ever since, people have been trying to replicate that experience in different places, with varying degrees of success & failure (but mostly failure). From my early reading, it appears that the piku team has done it for 80% of common use cases. The best part: it’s not a PoC or early alpha software:

piku is considered STABLE. It is actively maintained, but “actively” here means the feature set is pretty much done, so it is only updated when new language runtimes are added or reproducible bugs crop up.

📦 A nanoGPT pipeline packed in a spreadsheet

This “Spreadsheet Is All You Need” repo (a play on the Attention is All You Need paper that originally introduced the Transformer architecture) is great if you’re still trying to wrap your head around how GPTs actually work. It’s also just an impressive feat. Dabo Chen:

While reading about LLMs, I realised that the internal mechanisms of a transformer are basically a series of matrix calculations connected in a certain order. I started to wonder if the whole process could be represented in a spreadsheet, since all the calculations are fairly simple. I’m a visual thinker, and I couldn’t think of a better way to do it. Then, with some trial and error, I wrote the full inference pipeline of the nanoGPT architecture into a single spreadsheet.

nanoGPT spreadsheet screenshot
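The quote above frames a transformer as nothing more than matrix calculations chained in a fixed order, which is exactly what the spreadsheet lays out cell by cell. Here’s a minimal NumPy sketch of that idea for a single causal self-attention head (shapes and weight names are illustrative, not nanoGPT’s actual dimensions):

```python
import numpy as np

# One causal self-attention head as a chain of matrix operations.
def attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv            # three linear projections
    scores = q @ k.T / np.sqrt(k.shape[-1])     # scaled dot-product scores
    # Causal mask: each token may only attend to itself and earlier tokens.
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -np.inf
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    return weights @ v                          # weighted mix of value vectors

rng = np.random.default_rng(0)
T, d = 4, 8                                     # 4 tokens, 8-dim embeddings
x = rng.normal(size=(T, d))
out = attention(x, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)  # (4, 8)
```

Every line here is a projection, a product, or a normalization, which is why the whole thing squeezes into spreadsheet formulas.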

🩺 What’s causing those poor Core Web Vitals?

Thanks to Sentry for sponsoring Changelog News 💰

Join Salma Alam-Naylor & Lazar Nikolov from Sentry in this FREE workshop so you can learn how to identify the issues causing your poor Core Web Vitals. Then, discover how to trace issues to slow database queries or the dreaded server-side request waterfall. You’ll learn how to:

  • Discover common sources for poor web vitals
  • Set up tracing with Sentry
  • Trace issues through your stack to the code-level with Sentry

The workshop is on June 20th. Don’t miss out: register now right here!

🗝️ You’ll regret using natural keys

I’ve often talked about the nature of software development and its lack of truly generalizable rules. Still, experience does reveal anti-patterns that we can pass down/around to save others the pain & suffering we had to endure to uncover them. One such database design anti-pattern that Mark Seemann wants to save you from is using natural keys:

Is it ever a good idea to use natural keys in a database design? My experience tells me that it’s not. Ultimately, regardless of how certain you can be that the natural key is stable and correctly tracks the entity that it’s supposed to keep track of, data errors will occur. This includes errors in those natural keys.

You should be able to correct such errors without losing track of the involved entities. You’ll regret using natural keys. Use synthetic keys.

Take his word for it (he does explain why, of course) or learn the same lesson for yourself the hard way? Your call…
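Mark’s core argument is that a synthetic key stays stable even when the natural key turns out to contain a data error. A toy sqlite3 sketch of that (table and column names are made up for illustration):

```python
import sqlite3

# The surrogate `id` is the primary key, so a data error in the natural
# candidate key (email) can be corrected in place without re-pointing any
# rows that reference the user.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,      -- synthetic key
        email TEXT NOT NULL UNIQUE      -- natural key, kept as plain data
    );
    CREATE TABLE orders (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id)
    );
""")
con.execute("INSERT INTO users (id, email) VALUES (1, 'jeord@example.com')")  # typo!
con.execute("INSERT INTO orders (user_id) VALUES (1)")

# Fixing the typo touches one row; the order still tracks the same user.
con.execute("UPDATE users SET email = 'jerod@example.com' WHERE id = 1")
row = con.execute(
    "SELECT u.email FROM orders o JOIN users u ON u.id = o.user_id"
).fetchone()
print(row[0])  # jerod@example.com
```

Had `email` been the primary key, that UPDATE would have orphaned (or required cascading through) every referencing row.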

♻️ A grand unified theory of the AI hype cycle

Glyph Lefkowitz describes a 13-phase AI hype cycle and then enumerates five cycles we’ve already been through:

  1. Neural networks and symbolic reasoning in the 1950s.
  2. Theorem provers in the 1960s.
  3. Expert systems in the 1980s.
  4. Fuzzy logic and hidden Markov models in the 1990s.
  5. Deep learning in the 2010s.

Each of these cycles has been larger and lasted longer than the last, and I want to be clear: each cycle has produced genuinely useful technology. It’s just that each follows the progress of a sigmoid curve that everyone mistakes for an exponential one. There is an initial burst of rapid improvement, followed by gradual improvement, followed by a plateau.

So where are we now? Glyph provides some heuristics, but it’s hard to say exactly when the current cycle will end. But he does feel confident about saying this:

What I can tell you is that computers cannot think, and that the problems of the current instantiation of the nebulously defined field of “AI” will not all be solved within “5 to 20 years”.


🤨 Maybe AI shouldn’t implement code either?

Back on News #96 I floated the idea of us writing the tests and having LLMs write the implementation. Turns out Jeffrey Riggle had already tried exactly that a couple months back (with ChatGPT) and wrote up his findings. Mixed results:

In the end, we worked out a solution that got all tests passing but still wasn’t quite what I wanted. Ideally, I wanted it to use generics for values on a log, but due to how I wrote the tests, string was sufficient. A bit tired from the conversation, I gave up and accepted the result.

His post lays out a few other efforts he made; he even found that his experience was better with ChatGPT writing the tests instead. His conclusion:

In the end, many of the assumptions I had about the tool had been wrong in ways I didn’t expect. However, instead of being completely disappointed I did find one workflow I quite enjoyed with this tool and it did not end up being a complete waste of time. ChatGPT may be useful for many but for the way I like to work with my tools, it doesn’t quite cut it for me.

🦆 Announcing DuckDB 1.0.0

DuckDB (an analytical in-process SQL database management system) has reached a milestone release that’s been a long time coming:

It has been almost six years since the first source code was written for the project back in 2018, and a lot has happened since: There are now over 300 000 lines of C++ engine code, over 42 000 commits and almost 4 000 issues were opened and closed again.

But why is now the time for DuckDB to get stamped with the 1.0 version? In a word: stability.

Of course, there will be issues found in today’s release. But rest assured, there will be a 1.0.1 release. There will be a 1.1.0. And there might also be a 2.0.0 at some point. We’re in this for the long run, all of us, together. We have the team and the structures and resources to do so.

🐘 Fleets of Postgres

Thanks to Neon for sponsoring Changelog News 💰

Did you know that Retool manages a massive database fleet with only one engineer?! They do it by partnering with our friends at Neon – a serverless Postgres database with a robust API. Here’s what one Retool engineer has to say about it:

We’ve been able to automate virtually all database management tasks via the Neon API. This saved us a tremendous amount of time and engineering effort. The scale-to-zero functionality of Neon allows us to offer dedicated databases to our customers without worrying about the cost of idle resources.

😰 Managing my motivation, as a solo dev

Marcus Buffett:

One of the biggest sticking points of being a solo dev is maintaining motivation. I’ve been keeping a journal entry about how to hack my motivation, what works and what doesn’t. Here are the things that have worked.

He covers converting external sources to motivation, leaving tasks unfinished, using the thing himself & doing nothing, amongst other tactics.


🧰 Dev tools for the ol’ toolbox

  • lsix is like “ls”, but for images
  • MistCSS is a component generator using pure CSS
  • Omakub is an opinionated Ubuntu dev setup
  • Ice is a powerful menu bar manager for macOS
  • hn-text is a text-first Hacker News client
  • next-flag is feature flags via GitHub Issues + Next.js
  • Manifest is a complete backend in one file of simple code

That’s the news for now, but we have some great episodes coming up this week: Kelsey Hightower joins us on Wednesday & Justin Searls joins us for WWDC reactions on Friday.

Have a great week, forward this to a friend who might dig it & I’ll talk to you again real soon. 💚

–Jerod