Changelog News – Episode #43
Mojo might be huge, chatbots aren't it, big tech lacks an AI moat & monoliths are not dinosaurs
Jeremy Howard thinks Mojo might be the biggest programming language advance in decades, Amelia Wattenberger is not impressed by AI chatbots, a leaked Google memo admits big tech has no AI moats & Werner Vogels reminds us that monoliths are not dinosaurs.
Sentry – Get to the root cause of an error or latency issue faster by seeing all the technical details related to that issue in one visual replay on your web application. Use the code CHANGELOG and get the team plan free for three months.
Notes & Links
All links mentioned in this episode of Changelog News (and more) are in its companion newsletter.
(3) 01:53 - Boo to AI chatbots
(5) 03:50 - No AI moat
(6) 04:48 - Monoliths != dinosaurs
(8) 06:05 - Rajiv Shah on LLMs
What up, nerds? I’m Jerod and this is Changelog News for the week of Monday, May 8th 2023.
Let’s get into it.
The just-announced Mojo is a Python superset aimed at fixing Python’s performance and deployment problems. It has a great pedigree (Chris Lattner, whom you may know from LLVM, Clang & Swift) and Fast.ai’s Jeremy Howard, who is also an advisor to Modular, Mojo’s creators, is very excited about it. Jeremy says: “I remember the first time I used the v1.0 of Visual Basic. Back then, it was a program for DOS. Before it, writing programs was extremely complex and I’d never managed to make much progress beyond the most basic toy applications. But with VB, I drew a button on the screen, typed in a single line of code that I wanted to run when that button was clicked, and I had a complete application I could now run. It was such an amazing experience that I’ll never forget that feeling. It felt like coding would never be the same again. Writing code in Mojo is the second time in my life I’ve had that feeling.”
The Mojo team has lofty goals: full compatibility with the Python ecosystem, predictable low-level performance and low-level control, and the ability to deploy subsets of code to accelerators, all without creating ecosystem fragmentation. This is a brand new code base and a lot of work is left to be done, but people are excited. This could be huge.
Here is JS Party panelist Amelia Wattenberger’s review of AI chatbots:
Wait, no. Sorry, that was Princess Buttercup’s nightmare in The Princess Bride. Here’s JS Party panelist Amelia Wattenberger on AI chatbots: “Last night, over wine and seafood, the inevitable happened…
Someone mentioned ChatGPT. I had no choice but to launch into an unfiltered, no-holds-barred rant about chatbot interfaces. Unfortunately for the countless hapless people I’ve talked to in the past few months, this was inexorable. Ever since ChatGPT exploded in popularity, my inner designer has been bursting at the seams. To save future acquaintances, I come to you today: because you’ve volunteered to be here with me, can we please discuss a few reasons chatbots are not the future of interfaces?”
Bullet points from Amelia’s argument: 1) text inputs have no affordances, 2) prompts are just a pile of context, and 3) responses are isolated.
Alright, it’s now time for some sponsored news.
Have you tried Sentry’s interactive Sandbox? It is the coolest/easiest way to see if Sentry’s app monitoring and error tracking services jibe with the way you think. For the small price of your work email address, you have free rein to poke around at a real-world(esque) Sentry dashboard and kick all the tires. Performance, Profiling, Replays, Crons. It’s all there. Check the link in your chapter data and in the newsletter and try it today.
Thanks to Sentry for sponsoring this week’s Changelog News.
I’ve been hopeful about open source LLMs ever since our episode with Simon Willison last month. My hopes continue to wax strong after this (admittedly vaguely sourced) memo that leaked from inside the GOOG: “While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.”
The landscape of LLMs is moving crazy fast right now. On a recent episode of Practical AI, Daniel asked Hugging Face engineer Rajiv Shah to describe it, and his response was very good, but also necessarily lengthy. I’ll attach that clip to the end of this episode for those interested.
Amazon CTO Werner Vogels pens what looks like a defense of monolithic architectures, but is in actuality a defense of there being no silver bullet. Werner says: “There is no one-size-fits-all. We always urge our engineers to find the best solution, and no particular architectural style is mandated. If you hire the best engineers, you should trust them to make the best decisions.”
He speaks to S3’s microservice architecture and how well it has served the org, but reiterates that there isn’t one architectural pattern to rule them all.
That is the news for now. Read the companion newsletter for additional stories about the Craigslist test, the “rewrite everything in Rust” movement, the beginning of the end of the password, and much more.
I want to go to there.
In honor of Maintainer Month, our Changelog interview this week features three people who are dedicated to funding open source maintainers: Alyssa Wright, Chad Whitacre & Duane O’Brien.
Have a great week, share Changelog with your friends if you dig it, and I’ll talk to you again real soon.
Our transcripts are open source on GitHub. Improvements are welcome. 💚