Tracy Durnell thinks AI has already poisoned its own well, Adam Hill’s microsite catalogs everything you need to UnsuckJS, Lionel Dricot thinks we need more Richard Stallman, not less & the Vulcan team proves you can’t trust ChatGPT’s package recommendations.
Sentry – See the untested code causing errors - or whether it’s partially or fully covered - directly in your stack trace, so you can avoid similar errors from happening in the future. Use the code CHANGELOG and get the team plan free for three months.
All links mentioned in this episode of Changelog News (and more) are in its companion newsletter.
Play the audio to listen along while you enjoy the transcript. 🎧
What up, nerds?
I’m Jerod and this is Changelog News for the week of Monday, June 26th 2023. Hey that sounds familiar…
Hello, friends. I’m Jerod and this is Changelog News for the week of Monday, June 27th 2022. What the what?
That was me one year ago this week. That’s right, Changelog News is one year old! Cool Cool Cool.
Let’s get into the news.
Here’s a quick clip of me and Simon Willison talking Stable Diffusion back in September of 2022:
That’s oh so relevant today because of a new study on AI model collapse that says “We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs.”
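The mechanism is easy to see in miniature. Here’s a toy sketch (not from the study; a simplified single-Gaussian analogue of their Gaussian Mixture Model case) where each “generation” of the model is fit only to samples drawn from the previous generation’s fit. Estimation noise compounds, and the distribution’s spread (its tails) collapses toward nothing:

```python
import numpy as np

# Toy illustration of "model collapse": repeatedly fit a Gaussian to
# samples drawn from the previous generation's fitted Gaussian.
# With a finite sample each generation, estimation noise compounds and
# the fitted distribution's spread shrinks generation over generation.
rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0  # the "real" data distribution
n_samples = 20        # small "training set" per generation
generations = 500

for _ in range(generations):
    data = rng.normal(mu, sigma, n_samples)  # model-generated "content"
    mu, sigma = data.mean(), data.std()      # retrain on it

print(f"final sigma: {sigma:.3e}")  # tiny compared to the original 1.0
```

No real LLM needed: even this best-case setup loses the tails of the original distribution, which is the irreversible defect the study describes.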
Tracy Durnell writes that she believes AI has already poisoned its own well. “I suspect tech companies (particularly Microsoft / OpenAI and Google) have miscalculated, and in their fear of being left behind, have released their generative AI models too early and too wide. By doing so, they’ve essentially established a threshold for the maximum improvement of their products due to the threat of model collapse. I don’t think the quality that generative AI will be able to reach on a poisoned data supply will be good enough to get rid of all us plebs.”
Since there’s no consistent system for marking up generated content online as computer generated, the toothpaste is already being squeezed from its proverbial tube. Here’s Tracy again:
Because of this approach, 2022 and 2023 will be essentially “lost years” of internet-sourced content, even if they can establish a tagging system going forward — and get people hostile or ambivalent to them to use it.
If she’s right, this is a big deal.
I’d love to see this resource go beyond the basic information and table format it currently has. But still, I’m a big proponent of this “less JS” movement and there are some high quality libraries featured here (and some I’d never heard of!). Having them all in one place is a win.
We need more of Richard Stallman, not less. That’s the title of a recent article by Ploum (a.k.a. Lionel Dricot). After a big fat disclaimer differentiating the man’s philosophy from the man himself, he writes: “RMS was right since the very beginning. Every warning, every prophecy realised. And, worst of all, he had the solution since the start. The problem is not RMS or FSF. The problem is us. The problem is that we didn’t listen.”
The core of Stallman’s beliefs was the four freedoms of software. The right to use the software at your discretion. The right to study the software. The right to modify the software. And the right to share the software, including the modified version.
These four freedoms were formalized as copyleft, but according to Ploum, RMS’s theory had a weakness: copyleft itself wasn’t part of the four freedoms it secured. This allowed other, non-copyleft licenses to come along and secure all four. There’s too much to quote it all on the show, so read the piece, which includes Ploum’s suggested amendment (one obligation) to RMS’s four freedoms of free software.
Then let me know what you think in the comments. Was RMS right? Did we just not listen? Would Ploum’s amendment fix things? I’d love to hear your thoughts on the matter.
It’s time for some Sponsored News!
Just because you don’t record a problem doesn’t mean it didn’t happen.
Stay ahead of latency issues and trace every slow transaction to a poor-performing API call or database query. Sentry is the only developer-first application monitoring platform that shows you what’s slow, down to the line of code. But don’t take their word for it. Matthew Egan (Engineering Team Lead at DiviPay) has this to say about it: “Unlike past tools we’ve used, Sentry provides the complete picture. No more combing through logs — Sentry makes it incredibly easy to find issues in our code to deliver a much smoother payment experience and a better overall customer experience.”
Check the link in the show notes and get a demo today. Why not, right?
Can you trust ChatGPT’s package recommendations? Maybe not so much. The team at Vulcan has published a new security threat vector they’re calling AI package hallucination. It relies on the fact that ChatGPT (et al.) sometimes answers questions with hallucinated sources, links, blogs, and statistics. It’ll even generate questionable fixes to CVEs and offer links to libraries that don’t actually exist!
“When the attacker finds a recommendation for an unpublished package, they can publish their own malicious package in its place. The next time a user asks a similar question they may receive a recommendation from ChatGPT to use the now-existing malicious package. We recreated this scenario in the proof of concept below using ChatGPT 3.5.”
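One cheap mitigation follows directly from the attack: never pipe an AI’s package suggestions straight into your installer. Here’s a minimal sketch (the vetted list, package names, and function name are all hypothetical, not from Vulcan’s write-up) that separates suggestions into names already in your vetted dependency set and names a human should check against the registry first:

```python
# Hypothetical helper: treat any AI-suggested package that is not already
# in your project's vetted dependency list as suspect until someone has
# checked the registry entry (author, release history, downloads) by hand.
VETTED = {"requests", "numpy", "flask"}  # e.g. parsed from your lockfile

def triage_suggestions(suggested):
    """Split suggested package names into (trusted, needs_review)."""
    trusted = [name for name in suggested if name.lower() in VETTED]
    needs_review = [name for name in suggested if name.lower() not in VETTED]
    return trusted, needs_review

trusted, review = triage_suggestions(["requests", "some-hallucinated-pkg", "numpy"])
print("install:", trusted)      # install: ['requests', 'numpy']
print("review first:", review)  # review first: ['some-hallucinated-pkg']
```

It won’t catch a malicious package you’ve already vetted, but it does stop the exact scenario above: blindly installing a name that only exists because an attacker squatted on a hallucination.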
Be careful out there…
That is the news for now!
On Wednesday I’m talking yak shaves, system architecture, -10x devs & more with Taylor Troesh. And on Friday Kelsey Hightower joins Adam and me on Changelog & Friends!
Have a great week, share Changelog with your peers who might dig it & I’ll talk to you again real soon.
Our transcripts are open source on GitHub. Improvements are welcome. 💚