The Changelog The Changelog #533  – Pinned

A new path to full-time open source

After years of working at Google on the Go team, Filippo Valsorda quit last year to experiment with more sustainable paths for open source maintainers. Good news: it worked! Filippo is now a full-time open source maintainer, and he joins Jerod on this episode to tell everyone exactly how he’s making the equivalent of his total compensation package at Google in open source.

Django suor.github.io

Ban 1+N in Django

Alex Schepanovski:

I always thought of 1+N as a thing that you just keep in your head, catch on code reviews or via performance regressions. This worked well for a long time; however, the less control we have over our SQL queries, the more likely it is to sneak through those guards…

I stumbled on a couple of 1+Ns while reading a project’s code for an unrelated reason, and it got me thinking – do I ever want Django to do that lazy loading stuff? And the answer was never.

Turns out the implementation is quite easy: ~15 lines of code for the naive version and ~35 for a more robust one. Give it a try if you, like Alex, never want to allow a 1+N in your Django app again.
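The core idea is to make an unplanned related-object access raise loudly instead of silently issuing a hidden query. Here’s a framework-free sketch of that pattern — the class and attribute names are illustrative stand-ins, not Django’s actual internals:

```python
class NoLazyLoad(Exception):
    """Raised when a related object is accessed without being prefetched."""

class RelatedDescriptor:
    """Toy stand-in for Django's lazy-loading related-field descriptor.

    Returns the related object only if it was loaded up front; otherwise
    it raises instead of issuing a hidden per-row query (the 1+N).
    """

    def __init__(self, name):
        self.name = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        cache = getattr(instance, "_prefetched", {})
        if self.name not in cache:
            raise NoLazyLoad(
                f"{owner.__name__}.{self.name} was not prefetched; "
                "use select_related()/prefetch_related() instead"
            )
        return cache[self.name]

class Post:
    author = RelatedDescriptor("author")

    def __init__(self, prefetched=None):
        self._prefetched = prefetched or {}

# A prefetched relation resolves normally; a lazy access raises.
ok = Post(prefetched={"author": "alex"})
assert ok.author == "alex"
```

The real version patches Django’s `ForwardManyToOneDescriptor` (and friends) the same way, so that `select_related()`/`prefetch_related()` become the only paths to related objects.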

AI (Artificial Intelligence) futureoflife.org

A petition to pause all AI experiments for at least 6 months

This open letter by the Future of Life Institute has been signed by 1,380 people (so far), including notable technologists such as Steve Wozniak, Stuart Russell, Emad Mostaque (Stability AI) & Elon Musk.

Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

Postman – Sponsored

What do 37,000 developers say about Postman?


Postman surveyed over 37,000 developers to ask them how they worked with APIs. Most of those findings are in their State of the API Report (2022), but there were a few things to highlight separately. Here’s what they learned:

  • 89% would be unhappy if they were not allowed to use Postman anymore
  • 81% say Postman is necessary for enabling an API-first development model
  • 51% say a majority of their organization’s development effort is spent on APIs
  • 75% say Postman helps them collaborate with developers better than other platforms or tools

This is the fourth year in a row for Postman’s State of the API survey and report. It’s the largest and most comprehensive survey and report on APIs. You should check it out.

Practical AI Practical AI #216

Explainable AI that is accessible for all humans

We are seeing an explosion of AI apps that are (at their core) a thin UI on top of calls to OpenAI generative models. What risks are associated with this sort of approach to AI integration, and are explainability and accountability achievable in chat-based assistants?

Beth Rudden of Bast.ai has been thinking about this topic for some time and has developed an ontological approach to creating conversational AI. We hear more about that approach and related work in this episode.

Hardware Twitter

Introducing ALOHA 🏖

ALOHA stands for “A Low-cost Open-source Hardware System for Bimanual Teleoperation”, which is certainly a stretch as far as acronyms go, but the project itself is so cool that I don’t think it really matters… Here’s the pitch:

Fine manipulation tasks, such as threading cable ties or slotting a battery, are notoriously difficult for robots because they require precision, careful coordination of contact forces, and closed-loop visual feedback. Performing these tasks typically requires high-end robots, accurate sensors, or careful calibration, which can be expensive and difficult to set up. Can learning enable low-cost and imprecise hardware to perform these fine manipulation tasks? We present a low-cost system that performs end-to-end imitation learning directly from real demonstrations, collected with a custom teleoperation interface.


The Changelog Changelog News

GitHub Copilot X, Chatbot UI, ChatGPT plugins, defining juice for software dev, Logto, Basaran & llama-cli

GitHub announces Copilot X, Mckay Wrigley created an open source ChatGPT UI built with Next.js, TypeScript & Tailwind CSS, OpenAI is also launching a ChatGPT plugin initiative, Brad Woods writes about juice in software development, Logto is an open source alternative to Auth0, Basaran is an open source alternative to the OpenAI text completion API & llama-cli is a straightforward Go CLI interface for llama.cpp.

Kafka github.com

FastKafka is a Python library for building Kafka-based services

Dave Runje:

We were searching for something like FastAPI for a Kafka-based service we were developing, but couldn’t find anything similar. So we shamelessly made one by reusing beloved paradigms from FastAPI, and we shamelessly named it FastKafka.

The point was to set the right expectations: you get pretty much what you would expect, namely function decorators for consumers and producers with type hints specifying Pydantic classes for JSON encoding/decoding, automatic message routing to Kafka brokers, and documentation generation.
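To see why that decorator pattern is appealing, here’s a self-contained toy sketch of the idea (not FastKafka’s actual API — `MiniKafkaApp`, `OrderCreated`, and `dispatch` are illustrative names, and a plain dataclass stands in for a Pydantic model): a consumer registers itself for a topic, and the message class is read from the handler’s type hints so JSON payloads can be decoded automatically.

```python
import json
from dataclasses import dataclass
from typing import Callable, get_type_hints

@dataclass
class OrderCreated:  # stand-in for a Pydantic model
    order_id: int
    amount: float

class MiniKafkaApp:
    """Toy sketch of the FastAPI-style decorator pattern FastKafka reuses."""

    def __init__(self):
        # topic -> (handler, message class inferred from its type hints)
        self._consumers: dict[str, tuple[Callable, type]] = {}

    def consumes(self, topic: str):
        def register(fn: Callable):
            # The first annotated parameter names the message class.
            hints = get_type_hints(fn)
            msg_type = next(v for k, v in hints.items() if k != "return")
            self._consumers[topic] = (fn, msg_type)
            return fn
        return register

    def dispatch(self, topic: str, raw: bytes):
        # Decode the JSON payload into the handler's declared type.
        fn, msg_type = self._consumers[topic]
        return fn(msg_type(**json.loads(raw)))

app = MiniKafkaApp()

@app.consumes("orders")
def on_order(msg: OrderCreated) -> float:
    return msg.amount

total = app.dispatch("orders", b'{"order_id": 1, "amount": 9.5}')
```

The real library layers actual Kafka brokers, async consumers/producers, and Pydantic validation on top, but the ergonomics are this: declare the type, decorate the function, and routing plus serialization follow from the signature.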

Alex speedtyper.dev

SpeedTyper – type racing for programmers

Alexander Lotvall:

It’s a typing app specifically for software developers. You type code snippets from real open source projects. It supports inviting friends to a room and competing against them in real time, and you can get your result on the global leaderboard.

I like how it uses code snippets from popular projects. I don’t like how slow I am at writing Rust code! 🤣

Brain Science Brain Science #34

Develop a high-performance mindset

In this episode, Adam and Mireille discuss what it takes to develop a high-performance mindset. Your mindset is the mental framework that influences your actions, your decisions, and your overall approach to life. Discover how to nurture a growth-oriented, positive mindset, fostering resilience, adaptability, and a commitment to self-improvement. This episode is a must-listen for anyone looking to optimize their mental framework and achieve success in their personal and professional lives.

Steve Yegge about.sourcegraph.com

Cheating is all you need

Steve Yegge is very excited about LLMs and thinks the rest of us should be as well:

There is something legendary and historic happening in software engineering, right now as we speak, and yet most of you don’t realize at all how big it is.

LLMs aren’t just the biggest change since social, mobile, or cloud–they’re the biggest thing since the World Wide Web. And on the coding front, they’re the biggest thing since IDEs and Stack Overflow, and may well eclipse them both.

Steve’s been in the industry a long time. He worked at Amazon back when AWS was just a demo on some engineer’s laptop and he worked at Google when Kubernetes was just a demo on some engineer’s laptop.

The point: when Steve Yegge gets excited about something it probably means more than when most people get excited about something.

Apple github.com

Transformer architecture optimized for Apple Silicon

Use ane_transformers as a reference PyTorch implementation if you are considering deploying your Transformer models on Apple devices with an A14-or-newer or M1-or-newer chip, to achieve up to 10 times faster inference and 14 times lower peak memory consumption compared to baseline implementations.

We were just discussing Apple’s next AI move on yesterday’s JS Party live (ships to the feed next Friday). They’ve been the quietest tech giant since the GenAI movement kicked into high gear. My guess: they’ll have a LOT to say at this June’s WWDC…

The Changelog The Changelog #532

Bringing Whisper and LLaMA to the masses

This week we’re talking with Georgi Gerganov about his work on Whisper.cpp and llama.cpp. Georgi first crossed our radar with whisper.cpp, his port of OpenAI’s Whisper model in C and C++. Whisper is a speech recognition model enabling audio transcription and translation. Something we’re paying close attention to here at Changelog, for obvious reasons. Between the invite and the show’s recording, he had a new hit project on his hands: llama.cpp. This is a port of Facebook’s LLaMA model in C and C++. Whisper.cpp made a splash, but llama.cpp is growing in GitHub stars faster than Stable Diffusion did, which was a rocket ship itself.

Podcasts from Changelog

Weekly shows about software development, developer culture, open source, building startups, artificial intelligence, brain science, and the people involved.
