Angular Signals
KBall & Amal interview Alex & Pavel from the Angular Signals team. They cover the history, how the Angular team decided to move to signals, what the new mental model looks like, the migration path & even dive into community integrations and the future roadmap.
The Preact team introduces Signals
The Preact team dropped a new state management solution on us:
Signals are a way of expressing state that ensure apps stay fast regardless of how complex they get. Signals are based on reactive principles and provide excellent developer ergonomics, with a unique implementation optimized for Virtual DOM.
Adding Signals to your Preact app only adds 1.6kB to your bundle size. So what’s the big idea?
The main idea behind signals is that instead of passing a value directly through the component tree, we pass a signal object containing the value (similar to a `ref`). When a signal’s value changes, the signal itself stays the same. As a result, signals can be updated without re-rendering the components they’ve been passed through, since components see the signal and not its value. This lets us skip all of the expensive work of rendering components and jump immediately to the specific components in the tree that actually access the signal’s value.
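Here’s what that looks like in practice: a minimal sketch using @preact/signals (the component and wiring are ours):

```tsx
import { render } from "preact";
import { signal } from "@preact/signals";

// The signal object is created once; its identity never changes, only .value does.
const count = signal(0);

// Reading `count` directly in JSX binds just that text node to the signal,
// so a click updates the text without re-rendering Counter at all.
function Counter() {
  return <button onClick={() => count.value++}>Clicked {count} times</button>;
}

render(<Counter />, document.body);
```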
Human-friendly process signals
This is a map of known process signals with some information about each signal. Unlike `os.constants.signals`, this includes:
- human-friendly descriptions
- default actions, including whether they can be prevented
- whether the signal is supported by the current OS
Handy!
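Usage is about what you’d expect. A quick sketch (field names per our read of the package’s README):

```ts
import { signalsByName } from "human-signals";

const { description, action, forced, supported } = signalsByName["SIGINT"];
console.log(description); // human-friendly description, e.g. "User interruption with CTRL-C"
console.log(action);      // default action, e.g. "terminate"
console.log(forced);      // whether that default action is forced (can't be prevented)
console.log(supported);   // whether the current OS supports this signal
```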
`whereami` uses WiFi signals & ML to locate you (within 2-10 meters)
If you’re adventurous and you want to learn to distinguish between couch #1 and couch #2 (i.e. 2 meters apart), it is the most robust when you switch locations and train in turn. E.g. first in Spot A, then in Spot B then start again with A. Doing this in spot A, then spot B and then immediately using “predict” will yield spot B as an answer usually. No worries, the effect of this temporal overfitting disappears over time. And, in fact, this is only a real concern for the very short distances. Just take a sample after some time in both locations and it should become very robust.
The linked project was “almost entirely copied” from the find project, which was written in Go. It then went on to inspire whereami.js. I bet you can guess what that is.
Deep-dive into DeepSeek
There is crazy hype and a lot of confusion related to DeepSeek’s latest model, DeepSeek R1. DeepSeek’s product (their version of a ChatGPT-like app) has exploded in popularity, but ties to China have raised privacy and geopolitical concerns. In this episode, Chris and Daniel cut through the hype to talk about the model, privacy implications, running DeepSeek models securely, and what this signals for open models in 2025.
Getting a pulse on your Core Web Vitals 🩺
This week, Amal and Nick are joined by Rick Viscomi and Annie Sullivan from the Chrome team to dive into Core Web Vitals, a set of performance metrics geared towards helping developers surface web page quality signals that are key to delivering great user experiences.
We deconstruct the different vitals and learn how they are helpful, as well as introduce the newest vital to hit the scene, Interaction to Next Paint (INP). Join us for a fun and nerdtastic discussion as we dive into the humbling universe of web performance!
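Want to take that pulse yourself? Google’s web-vitals library makes collecting each metric a one-liner. A minimal sketch (the /analytics endpoint is hypothetical; point it at your own collector):

```ts
import { onCLS, onINP, onLCP } from "web-vitals";

// Ship each vital to a collector as it's measured in the field.
function report(metric: { name: string; value: number; rating: string }) {
  navigator.sendBeacon("/analytics", JSON.stringify(metric));
}

onINP(report); // Interaction to Next Paint, the newest vital
onLCP(report); // Largest Contentful Paint
onCLS(report); // Cumulative Layout Shift
```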
In defense of apps that don’t need updates
Vivian Qu states her case as to why Apple’s decision to remove outdated apps from the App Store is dumb, especially for indies like her.
Never mind the fact that my app has a 5-star rating and was still being downloaded, with no complaints from any of my users. Also disregard the fact that I had other highly-rated apps up on the App Store, some of which had been updated much more recently than July 2019, clearly showing that I have not abandoned these apps entirely. If there had been an actual reviewer who checked my outdated app, they would have discovered that I architected the app from the beginning to dynamically scale the UI so it resizes to fit the latest iPhone devices. All these could be signals that indicate to Apple that this is not a garbage-filled scam app that is lowering the quality of their App Store.
She goes on to tell the entire saga that she (and others) were put through to keep their apps on the store. Sometimes an app isn’t outdated, it’s just complete. Ya know?
What do oranges & flame graphs have in common?
Today we are talking with Frederic Branczyk, founder of Polar Signals & Prometheus maintainer. You may remember Frederic from episode 33 when we introduced Parca.dev.
This time, we talk about a database built for observability: FrostDB, formerly known as ArcticDB. eBPF generates a lot of high cardinality data, which requires a new approach to writing, persisting & then reading back this state.
TL;DR: FrostDB is sub-zero cool & well worthy of its name.
The clever cryptography behind Apple's 'Find My' feature
In upcoming versions of iOS and macOS, the new Find My feature will broadcast Bluetooth signals from Apple devices even when they’re offline, allowing nearby Apple devices to relay their location to the cloud… it turns out that Apple’s elaborate encryption scheme is also designed not only to prevent interlopers from identifying or tracking an iDevice from its Bluetooth signal, but also to keep Apple itself from learning device locations, even as it allows you to pinpoint yours.
WIRED with a fascinating explanation of an utterly fascinating scheme.
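The anti-tracking half of the trick is the easier half to sketch: broadcast identifiers are derived from a secret only your own devices hold and rotated constantly, so an observer can’t link one beacon to the next. A toy illustration of that rotation idea (our construction; Apple’s actual scheme broadcasts rotating public keys so finders can encrypt locations to them):

```ts
import { createHmac } from "node:crypto";

// Toy rotation: each 15-minute epoch yields a fresh, unlinkable identifier.
// Only someone holding `secret` can recompute and recognize the sequence.
function beaconId(secret: Buffer, epoch: number): string {
  return createHmac("sha256", secret).update(`epoch:${epoch}`).digest("hex").slice(0, 16);
}

const secret = Buffer.from("shared-between-your-devices-only");
const epoch = Math.floor(Date.now() / (15 * 60 * 1000));
console.log(beaconId(secret, epoch));     // what the lost device broadcasts now
console.log(beaconId(secret, epoch + 1)); // 15 minutes later: unlinkable to the first
```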
CTRL-Labs lets you control machines with your mind
No, this isn’t science fiction! CTRL-Labs is using neural signals and AI to build neural interfaces. Adam Berenzweig, from CTRL-Labs R&D, joins us to explain how this works and how they have made it practical.
Going offline
Jeremy Keith in an excerpt from his new book Going Offline on A List Apart:
The internet is a network of networks, all of them agreeing to use the same protocols to shuttle packets of data around. Those packets are transmitted down fiber-optic cables across the ocean floor, bounced around with Wi-Fi or radio signals, or beamed from satellites in freakin’ space.
As long as these networks are working, the web is working. But sometimes networks go bad…
When the network fails, the web fails. That’s just the way it is, and there’s nothing we can do about it. Until now.
I’m really excited to see Jeremy Keith write a whole book on service workers. I’ve dabbled a bit here and there, but now that support for them continues to grow, I’m excited to dive in.
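If you’ve been meaning to dabble too, the heart of it is small. A minimal offline fallback (filenames are ours):

```ts
// sw.ts: cache an offline page at install time, serve it when the network fails.
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("install", (event) => {
  event.waitUntil(caches.open("v1").then((cache) => cache.add("/offline.html")));
});

self.addEventListener("fetch", (event) => {
  event.respondWith(
    fetch(event.request).catch(async () => (await caches.match("/offline.html"))!)
  );
});
```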
pow: Zero-configuration Rack server for Mac OS X
I’ve been a long-time Passenger user, relying on it to switch between multiple Ruby web apps during development without needing to crank up `rails s` for each. When I began using RVM to switch back and forth between multiple Ruby versions, Passenger no longer solved my problem. That’s why I’m excited to try out Pow from 37signals. Pow aims to be “a zero-config Rack server for Mac OS X.”
To install, run the install script:

$ curl get.pow.cx | sh
To add apps, just symlink them into your `~/.pow` folder:
$ cd ~/.pow
$ ln -s /path/to/myapp
… and browse to http://myapp.dev/.
NATS and the CNCF kerfuffle
Derek Collison — creator of NATS and Co-founder & CEO of Synadia — joins the show to dive into the origins, design, and evolution of NATS, a high-performance, open-source messaging system built for modern cloud-native systems and part of the CNCF. Derek shares the story behind NATS, what makes it unique, and unpacks the recent tensions between Synadia and the CNCF over the future of the project.
Matched from the episode's transcript 👇
Derek Collison: Yeah, so what the audience might not know is that the CNCF was kicked off at a large meeting at the Switch Data Center in Las Vegas, of which I attended. And I was actually part of the founding governing board. So a lot of people might think “Oh, Derek’s role with the NATS project and NATS being a project in the CNCF - that’s kind of the totality of it.” But that’s not the case. And even early on, we were looking at what value CNCF could bring to the ecosystem from a consumption perspective, the users, to the people or the companies that want to utilize that, and then of course, the projects themselves… And then for me, it was like the companies that are driving said projects. In the case of something like Kubernetes, where it’s all of the big boys, so to speak, there’s not a lot that the CNCF needs to do there. It’s kind of taken on a life of its own and it’s marching down a path.
What we started to realize is that as time goes by - because this was, I think, over a decade ago - that both the CNCF and projects can evolve. And where we were at was looking at the question “Are we the best fit for CNCF?” And no fault of CNCF’s per se, but it just felt pretty clear to us that certain projects were at this tier, and other projects were way down here, and we kind of were in the latter category.
In no way were we attempting to relicense the whole codebase. I mean, one, you can’t do it, because once it’s released, it’s AP2, it’s there. That will never change. But the CNCF in early discussions – because we said “Hey, would you be open to letting the project leave if the landing spot made sense?” Meaning we could do a joint statement where it’s like “No, it’s landing in this new foundation, made up of lots of big users and customers or production uses of the NATS ecosystem. Commitment to AP2, X, Y, Z.” And during those early conversations, which started in February, we were asked “Hey, are you considering a license change?” And we’re very well aware that we can fork and do a Synadia version enterprise server, and X, Y, Z, and so we said “We’re considering, just for the server, that we might do like a BSL.” And at the time, the BSL - for us, it wasn’t translated well out into the media… But the reason that we were considering it was we felt that it was the best show of commitment to both OSS and to our customers. And I’ll explain what I mean by that. The BSL has usage clauses that you have to define when you release the software. And one of them is there’s a period of time after which it converts back to whatever language you pick, which - we would pick AP2. And it doesn’t matter what Synadia does, the Linux Foundation does, CNCF does, that is a legal contract that once we release, let’s say, a server with the BSL license, after two years it’s converted to AP2. Even if we don’t actually update the copyrights, or update the license, or whatever. And so we felt that from an OSS ecosystem perspective…
And again, we’ve gotten lots of additional information, lots of additional feedback. Most of it constructive, some of it not. And we’ve been listening. But our initial thought process as we were going down this path was “Hey, this signals really strongly that we’re not going to hold anything back. It will always kind of revert back to generic drug prices.” Some of the good parts of what the patent system originally was trying to do early on. I’m not claiming that the patent system is necessarily good these days. It’s a necessary evil, right?
[36:16] In addition, if you would imagine a two-year BSL window, and let’s say Jerod wanted to be a customer of Synadia - and customers are worried about vendor lock-in, price gouging, all those normal things - you could say, “I want to do a two-year contract, so I have predictable pricing”, and at the end of that, all that stuff that I’m paying for converts to AP2, and I can now use it for free. Or I could say “Oh, I like the next version, too. I want to use that and [unintelligible 00:36:39.10] and do that.”
So I’m not saying that it transpired the way we wanted it to, but I think people’s thought process that we’re just evil and we’re just trying to price gouge and be greedy and stuff like that - I don’t feel that that necessarily is correct, at least as we were discussing internally. We care deeply about our commitment to open source. But we are starting to see a very disturbing trend where customers that everyone would recognize - they’re in the Fortune 50 - that are using NATS to power production-level services or functions or products, not only had never reached out for any type of commercial agreement with us, but actually have policies that say “If you’re in the CNCF and you’re incubating and graduating, we’re not paying for it, period.”
The era of durable execution
Stephan Ewen, Founder and CEO of Restate.dev, joins the show to talk about the coming era of resilient apps, the meaning of idempotency and what it takes to achieve it, this world of stateful durable execution functions, and when it makes sense to reach for this tech.
Matched from the episode's transcript 👇
Stephan Ewen: [01:18:24.08] Yeah, absolutely. So if you want a quick summary, I’m very biased, but I think there’s almost no reason to not reach for Restate. I think it really is this solution from first principles with amazing developer experience, with a very powerful abstraction that allows you to build what you can build with workflows and signals, but also so much more. And yeah, just the journey from the beginning, downloading the binary, then migrating, scaling out is an absolutely – it’s a great experience. I mean, the project is newer than other projects, so it will have a rough edge here or there, but it’s also moving very quick. It’s very good at reacting to community feedback fast… So I think it’s a good choice. It has made a lot of users happy so far.
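If “durable execution” is new to you, the core idea is compact: journal the result of each completed step under a stable key, so a retried invocation replays finished steps instead of re-running their side effects. A toy sketch of the concept (ours, not Restate’s API; a real engine persists the journal, this Map does not):

```ts
// Journal of completed steps, keyed by invocation + step name.
const journal = new Map<string, unknown>();

async function step<T>(key: string, fn: () => Promise<T>): Promise<T> {
  if (journal.has(key)) return journal.get(key) as T; // replay: already ran
  const result = await fn();
  journal.set(key, result); // record the outcome before moving on
  return result;
}

// Hypothetical side effects, standing in for real payment/email calls.
async function chargeCard(orderId: string): Promise<string> {
  return `charge_for_${orderId}`;
}
async function sendReceipt(orderId: string, chargeId: string): Promise<void> {
  console.log(`receipt sent: ${chargeId}`);
}

// If this crashes after the charge and is retried, the charge step replays
// from the journal instead of billing the customer twice: idempotency.
async function checkout(orderId: string) {
  const charge = await step(`${orderId}/charge`, () => chargeCard(orderId));
  await step(`${orderId}/email`, () => sendReceipt(orderId, charge));
}

await checkout("order-42");
```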
Securing ecommerce: "It's complicated"
Ilya Grigorik and his team at Shopify have been hard at work securing ecommerce checkouts from sophisticated new attacks (such as digital skimming), and he’s here to share all the technical intricacies and far-reaching implications of this work.
Matched from the episode's transcript 👇
Ilya Grigorik: Sure. Let’s see how far do we want to rewind. I started my professional career as a founder of a startup. This was back in the 2011 era. Our insight at the time was on the heels of Web 2.0 and all of the social things that are happening, blogs at their heyday, and all the rest… We figured that we could create a better search algorithm. So if you think of PageRank, the original PageRank, of treating links to perform the ranking, effectively that’s a thumbs up… Except that when we approached this problem - and actually it wasn’t 2011, it was 2008; we observed that there was a lot of extra signals available. There was literal thumbs up from different social platforms, you could leave comments, you can share them on different surfaces… So if we could aggregate all of those signals, we could build a better, kind of human-driven algorithm for identifying what are the interesting topics.
So that was kind of the technical underpinning. And then we went on to build a number of products around it, which were analytics for publishers, to help them understand where their audience is, where the content is being discussed, where people are engaging… There was a product for marketing agencies, which kind of worked in reverse, which is, “Hey, if I have a thing that I’d like to seed, who are the folks that I should be engaging? What are the communities, and all the rest?” And through that work, that led us to Google, which acquired the company, and I ended up working on Google Analytics at the time, integrating a lot of this kind of social analytics know-how that we acquired into the product… And later took a hard pivot into infrastructure, technical infrastructure within Google, where we did a lot of fun things, like building radio towers to figure out if we could build a faster and better radio network, and then learning that that’s a hard problem… [laughter] But then later, that actually became [unintelligible 00:07:55.20] which is an overlay network.
Reaching industrial economies of scale
Beyang Liu, the CTO & Co-founder of Sourcegraph, is back on the pod. Adam and Beyang go deep on the idea of “industrializing software development” using AI agents, using AI in general, using code generation. So much is happening in and around AI, and Sourcegraph continues to innovate again and again. From their editor assistant called Cody, to Code Search, to AI agents, to Batch Changes, they’re really helping software teams industrialize the process - the inner and the outer loop - of being a software developer on high-performance teams with large codebases.
Matched from the episode's transcript 👇
Beyang Liu: Yeah, yeah. We’re working our way up to 15. I think we’ll be here for the next 50 years, so it’s still early days. Actually, it’s funny that you should mention industrialized software with agents and how that’s kind of a shift… I don’t know if you have show notes that you can link to, but I can link you to a version of our seed deck back in April, May 2013, when we were pitching the company for the first time… And it has this phrase, industrialized software engineering. And so that part of the mission has stayed largely constant.
The high-level goal for us was really about basically making professional software engineering as enjoyable and as efficient as hacking on your side project is… That was really the motivator for us in starting this company. It was the delta: every programmer starts from a point of love or delight, at some point. The reason that you get into programming is there is this joy of creation, this spark of creation that everyone experiences at some point, whether it’s at Hello World, or when you first get your first working program to run, and it’s cool, and it does something, and you share it with your friends.
I think everyone who’s working as a programmer is, to some extent, trying to chase that original high; that’s the dopamine rush that makes the job joyful. And it also maps to doing useful stuff. Like, you get joy out of shipping features that impact your users’ lives, and actually get used.
But then you contrast that with the day to day of a professional software developer, most of whom are working in a large existing codebase that they didn’t write themselves, that is the product of the contributions of hundreds or thousands or tens of thousands of shared hands. And that experience is very different. And what we wanted to do is solve a lot of the problems that create toil for professional software engineers in large production codebases, and make it so that it’s possible to focus on this creative part of the job.
[00:07:49.14] So the initial way that we solved that with the technology landscape at the time was focusing on search, because that to us was the thing that we spent a lot of our time working on. We got our career started out at Palantir, but Palantir by extension meant very large banks and enterprise codebases, because that’s the team that we were on. And so the core problem there was just figuring out what all the existing code does, and figure out what the change you need to make, how that fits into the broader picture, and what sort of dependency or constraints that you’re working with. And so that was the first thing that we brought to market.
AI was always sort of in the back of our minds. I had done a concentration in machine learning while I was at Stanford. I was actually – Daphne Koller was my advisor, and then published some research there in the domain of computer vision… In those days, the state of the art models weren’t nearly as good. This was pre-ChatGPT, pre-Transformer, pre even the deep neural net revolution. So in those days, the conventional wisdom was neural nets worked in the ‘80s for limited use cases, but they’re mostly toy use cases, and real machine learning engineers use support vector machines, and boosted decision trees, and things like that. So it was a very different world.
We’re always keeping our eye on the evolution of this technology, and we actually started integrating current large language model embeddings based signals into the search ranking starting in early 2022. And so that was something that we’d kind of been playing around with, and then once ChatGPT landed, we were actually pretty well situated in terms of our understanding of the technology to be able to roll that into our platform.
We launched the first sort of context-aware chat, chat that pulls from the context of your broader codebase, and pages that into the context window and uses that to steer the generated code or the question answering that the LLM provides for you. And that helped launch us into the age of AI, so to speak, because that was – I think now it’s table stakes, right? Like, context-aware code generation. Everyone has that, because everyone realizes that that is absolutely essential for making AI useful in the enterprise. But we were first to market there, and that helped us establish our presence inside folks like Palo Alto Networks, Leidos (a big government contracting agency), and Booking.com, the largest travel site in the world… All these very large enterprises with complex coding needs that have adopted Sourcegraph and our AI coding system, Cody, as, essentially, their preferred platform for accelerating, automating, industrializing software development.
Turso is rewriting SQLite in Rust
Glauber Costa, co-founder and CEO of Turso, joins us to discuss libSQL, Limbo, and how they’re rewriting SQLite in Rust. We discuss their efforts with libSQL, the challenge of SQLite being in the public domain but not being open for contribution, their choice to rewrite everything with Limbo, how this all plays into the future of the Turso platform, how they test Limbo with Deterministic Simulation Testing (DST), and their plan to replace SQLite.
Matched from the episode's transcript 👇
Glauber Costa: We’re all in. And in fact, we wrote a blog post last week, telling the story, a lot of what we’re discussing here. Look, in my wildest dreams, in my wildest dreams, I would expect maybe this to gain like 2,000 more stars in a month or so, then from 1,000 it goes to 3,000… Maybe a couple of other engineers that would come and contribute as well, and start slowly, but we would see some potential on it… That was my definition of success. And every single metric that we thought Limbo could be successful at, we saw three times more, four times more than what we anticipated. So we decided to go all-in.
And there is a blog post that we wrote recently with all the changes that we’re going to make to the platform to allow us to do this, with all the changes that we’re doing to the company… We had a lot of reorganizations internally. And this is really something that we decided in a couple of weeks in January, because we’re just like “How can we ignore this?” I mean, it seems very clear to us now that the world at large really wants an evolution of SQLite. The signals are very strong, so I think we just need to get behind it.
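On the Deterministic Simulation Testing mentioned up top: the whole trick is that every source of nondeterminism (randomness, scheduling, injected faults) flows from a single seed, so any failing run replays exactly. A toy sketch of the idea (ours, not Limbo’s actual harness):

```ts
// Seeded PRNG (mulberry32): same seed, same sequence, same run, every time.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), seed | 1);
    t = (t + Math.imul(t ^ (t >>> 7), t | 61)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Simulated disk: whether each write "fails" depends only on the seed.
function runSimulation(seed: number): string {
  const rand = mulberry32(seed);
  const log: string[] = [];
  for (let op = 0; op < 5; op++) {
    log.push(rand() < 0.2 ? `write ${op}: injected fault` : `write ${op}: ok`);
  }
  return log.join("\n");
}

// A failure found under seed 42 reproduces byte-for-byte, forever.
console.log(runSimulation(42) === runSimulation(42)); // true
```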
The power of the button
Rachel Plotnick joins us for the first show of 2025 to discuss her book “Power Button” and the research she did, and why we love/hate buttons so much. We also discuss her upcoming book “License to Spill” as well as the research she’s doing on energy drinks.
Matched from the episode's transcript 👇
Rachel Plotnick: That’s a great question. I think a lot of people are interested in that and wanting some kind of standardization or some kind of system for deciding when to put these things in which situations… And I’m not a UX expert and I’m not an HCI expert, so I don’t do much in the way of kind of saying “This is right or wrong.” I know there are regulators that are starting to do this kind of work. I just saw in the EU that they’re actually requiring cars go back to having physical buttons instead of touchscreens for things like turn signals, and windshield wipers… So I think we’re going to see more legislation like that, maybe more standardization of things… Especially in safety situations.
I’ve also talked to people about things like defibrillators, and CT scanners, and X-rays - you know, medical situations when you have to push a button… And it seems like any time life or death is involved, or people’s well-being is at stake, probably a touchscreen is not going to be the right way to go, because it’s going to involve a lot more machinations to do that. And I do know that studies have shown that it just takes longer to push buttons on touchscreens. Even simple things take more time than when you’re reaching for a physical button. So I wish I had a list of best practices… I do think a lot of it is situation-specific.
Mozart to Megadeth at CHRP
Daniel and Chris groove with Jeff Smith, Founder and CEO at CHRP.ai. Jeff describes how CHRP anonymously analyzes emotional wellness data, derived from employees’ music preferences, giving HR leaders actionable insights to improve productivity, retention, and overall morale. By monitoring key trends and identifying shifts in emotional health across teams, CHRP.ai enables proactive decisions to ensure employees feel supported and engaged.
Matched from the episode's transcript 👇
Jeff Smith: No, I’d be glad to. As you mentioned those three things, it is the perfect storm, especially for topics today… And it was a journey to get here. And I can give you a little bit about my journey and how we ended up with AI and mental health.
I would say in my background - I’m a corporate guy gone good. A classically-trained entrepreneur. I built six companies, three nonprofits… It’s where I find the joy. Identify a problem in the world, create a unique solution, wrap a company around it and build it to scale.
And this one’s called Chrp, named from the story of the canary in the coal mine. When that bird stops chirping, you get the heck out. It can send signals that we cannot. Methane, carbon monoxide… And so it becomes an early indicator for health and wellness. And we’ve created a platform harnessing that, using music.
And so to go even further back on myself and how I even got in this business - for years, I was the go-to guy for most ad agencies in New York to do all their social impact branding, corporate social responsibility… I became an expert on weaving purpose into the brand narrative, and bringing people alive at work, and through the products, and the brands, and these global campaigns… And along the way found a significant disconnect, I would say, between the leadership – the leadership that cares. They care about purpose, they care about their employees, they’re throwing millions at perks, rewards, telehealth, but their employees aren’t feeling it. They’re not feeling seen, they’re not feeling heard. There’s work-life enmeshment. They’re depressed. They’re anxious. They’re looking for jobs. And so a few of us decided “Hey, let’s take that on.”
We created a small company to come up with solutions to address workplace flourishing, and thought “That’d be kind of cool. Let’s bring people alive in the workplace.” And we started there, and said “Hey, what’s this disconnect between leadership and employees? What’s the problem? If there’s intent and there’s resource, but not results, where’s the breakdown?” And we’ve found that it was an information problem. We’ve found it was the corporate survey, the quarterly polls. How are you feeling? Nobody answers it, they lie in their responses, fear of reprisal, and they’re just making blind bets.
The data they’re getting back is at fault, mainly because people hate surveys. They don’t want to fill them out. And so we said, “Let’s start there. Let’s come up with a better diagnostic tool, so people can feel seen and heard and where they’re truly at. And how do we do that?” And we set out to improve the survey; the survey tool, different modalities, different lengths, maybe even the happy faces you see in the bathrooms at airports… You know, just make it super-simple. And none of those were really tracking after a few months, and I was training for a Spartan race. One of those crazy things that we do to keep ourselves alive… And I found my mood changing with my music. I switched to, I think it was Motley Crue, Kickstart My Heart, and just found my energy level shifting, and started thinking, as I was running, “What’s happening here? Am I being affected by the music? Am I making certain choices in my music selection that’s a reflection of this?” And that was where I had the a-ha moment, to say “Hey, is music the signal that we’ve been looking for? Is that a reflection of how I feel?” And so we dug into that, looked at music science, listening behaviors, research AI, and found a direct link, as music is a mirror for your mood.
In the simplest form, Chris, if you’re driving in the car and you’re listening to the radio, and you change, change, change until you find a song you like, that’s just your mirror neurons lining up your emotion with that song. It’s how you’re feeling or how you want to feel. It’s very hard to listen to music you’re not feeling. It’s that grind… And so we thought, “Okay, if we can bottle this up, we’ve got a rocket ship. If not, we’ll sell the algorithm and move on to the next task.”
[08:07] And so fast-forward, raised a bunch of capital, surrounded myself with brilliant people, technologists, HR leaders, music… So a buddy of mine, Suman Debroy - we’ve built some amazing things together. With a doctorate in machine learning, he jumped in to help figure out the models… HR leaders from enterprise companies, managing hundreds of thousands of employees, speaking into “What does that experience need to look like inside of the company?” The music industry, songwriters, musicians, former execs from the big music streaming companies saying “Hey, this is the data that’s available to you, or even the intent from the musicians.” And that was phenomenal, to understand: what were they feeling when they wrote a song? What do they want to put out in the world?
And so that was fascinating. And then even the attorneys, legal counsel. We’ve got the former privacy chief of Homeland Security to really look at “What are the privacy blockers? How do we hold integrity in this conversation?” Because music is so personal. And so we brought them together and said “Hey, let’s solve this problem. Music is our answer.” And like any, I guess now a new tech company, you’re testing it across an alpha group, looking at everything: adoption, the science… You end up with a black box on the table, it works beautifully. And so that was like end of last year. Now you’re shifting to product-market fit. Who is this best built for, right? A healthcare company at 2,500, a sports team, automotive company? So that’s where I’d say the rubber hits the road, and where we’re at today.
And just incredible leaders - you mentioned Greg Enos and others - just looking at this, saying “Hey, here’s a direction. Let’s really look at how we can apply it.”
AI makes tech debt more expensive
Evan Doyle says AI makes tech debt more expensive, Hunter Ng researches the ghost job ad phenomenon, Gavin Anderegg analyzes Bluesky in light of its recent success, Martin Tournoij rants against best practices & Evan Schwartz tells us why he thinks binary vector embeddings are so cool.
Matched from the episode's transcript 👇
Jerod Santo: Up to 21% of job ads may be ghost jobs
We’ve been talking about fake developer job postings around these parts for a while, but now someone’s gone and done some actual research on the phenomenon! Hunter Ng:
Using a novel dataset from Glassdoor and employing a LLM-BERT technique, I find that up to 21% of job ads may be ghost jobs, and this is particularly prevalent in specialized industries and in larger firms. The trend could be due to the low marginal cost of posting additional job ads and to maintain a pipeline of talents. After adjusting for yearly trends, I find that ghost jobs can explain the recent disconnect in the Beveridge Curve in the past fifteen years. The results show that policy-makers should be aware of such a practice as it causes significant job fatigue and distorts market signals.
If you don’t mind, I’ll remain DRY by copy/pasting from issue 115 when these “ghost jobs” were first percolating through the dev community: Be careful out there & give yourself a little leeway, too. Maybe you didn’t get the job. But then again, maybe nobody got the job…
It's all about documentation
Carmen Huidobro joins Amy, KBall & Nick on the show to talk about her work, the importance of writing docs, and her upcoming conference talk at React Summit US!
Matched from the episode's transcript 👇
Nick Nisi: No, I think we’re seeing a lot of that… Maybe central – not centralization, but standardization of different paradigms, and approaches to things. JSX is an example maybe, of like “Here’s a prescribed way to represent your DOM, or your view layer”, that React obviously uses, but other tools use kind of versions of that as well, like Solid or other tools. And then speaking of Solid, you have Signals. And Signals are now a stage one proposal, so maybe that’s actually going to be standardized.
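For the curious, the stage one Signals proposal sketches an API along these lines (subject to change, like all stage one things; signal-polyfill is the tracking polyfill):

```ts
import { Signal } from "signal-polyfill";

// State signals hold values; computed signals derive from them lazily.
const counter = new Signal.State(0);
const isEven = new Signal.Computed(() => (counter.get() & 1) === 0);

counter.set(1);
console.log(isEven.get()); // false: recomputed lazily, when read
```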
Kind of a big deal
Jerod & the gang play “Twenty” Questions to get to know Amy, review the big Svelte 5 release, discuss commercial open source & get Nick’s report from SquiggleConf!
Matched from the episode's transcript 👇
Kevin Ball: I mean, at this point – I think the speed, if I’m understanding correctly, is coming from moving to signals, as reactivity, which lets your granularity of reactivity be the single point of data, and the particular things that depend on it, rather than a component-level reactivity, which is what they had before… Which is very reminiscent of the difference between like a Solid and a React. React, anytime you change a prop, you’re re-rendering the component, whereas Solid can actually go down to those individual fields.
The one other big change they talked about, that I think is interesting, especially in the context of what’s been going on in the web world, or like the arguments of the day, is that they changed the way that they’re dealing with event handlers, and I think also slotted content, to move it to everything’s a prop, it’s all kind of in JavaScript land, programmable and composable… And they explicitly say the reason they didn’t do that before is that they were aligning with web components, because web components do have this sort of distinction between what is the template, and what have you… And they’ve decided that that’s not actually worth it. And so they’re no longer trying to really cater to having web components as a building block, and instead kind of going all in on the sort of “Everything is JavaScript”, and therefore treating it like JavaScript… Which - I think that’s controversial. I don’t personally have a super-strong opinion on it, but I think it’s interesting to see that kind of big shift going on, especially in the context of all the discussion about web components - are they the future, should we be building with them, etc.
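For a taste of the signals-based reactivity in question, a rough Svelte 5 runes sketch (ours): the compiler tracks which expressions read count and updates only those, no component-wide re-render required.

```svelte
<script lang="ts">
  // $state makes count a signal; $derived depends on it at field granularity.
  let count = $state(0);
  let double = $derived(count * 2);
</script>

<button onclick={() => count++}>
  {count} doubled is {double}
</button>
```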
AI for Observability
Yasir Ekinci joins Johnny & Mat to talk about how virtually every Observability vendor is rushing to add Generative AI capabilities to their products and what that entails from both a development and usability perspective.
Matched from the episode's transcript 👇
Johnny Boursiquot: I mean, I keep struggling with the whole just AI for the sake of AI thing, because I see it everywhere. So I’m really trying to get to useful things, genuinely useful things, without having to tell me that it’s AI and this and that… Because these are just signals that I as a customer don’t care about.
Towards high-quality (maybe synthetic) datasets
As Argilla puts it: “Data quality is what makes or breaks AI.” However, what exactly does this mean, and how can AI teams properly collaborate with domain experts towards improved data quality? David Berenstein & Ben Burtenshaw, who are building Argilla & Distilabel at Hugging Face, join us to dig into these topics along with synthetic data generation & AI-generated labeling / feedback.
Matched from the episode's transcript 👇
Ben Burtenshaw: Yeah, so one thing that you can do that is really straightforward is actually to write down a list of the kinds of questions that you’re expecting your system to answer. And you can get that list by speaking to domain experts, or if you are a domain expert, you can write it yourself. And it doesn’t need to be an extensive, exhaustive list. It can be quite a small starting set. You can then take those questions away and start to look at documents or pools and sections of documents from this lake that you potentially have, and associate those documents with those questions, and then start to look if a model can answer those questions with those documents. In fact, by not even building anything. By starting to use, say, ChatGPT, or a Hugging Chat, or any of these kind of interfaces, and just seeing this very, very low, simple benchmark - is that feasible? Whilst at the same time, starting to ask yourself, “Can I, as a domain expert, answer this?” And that’s kind of where Argilla comes in, at the very first step.
So you start to put these documents in front of people with those questions, and you start to search through those documents, and say to people “Can you answer this question?” Or “Here’s an answer from a model to this question, in a very small setting.” And you start to get basic, early signals of quality. And from there, you would start to introduce proper retrieval. So you would scale up your – you would take all of your documents… Say you had 100 documents associated with your 10 questions. You put all those 100 documents in an index, and iterate over your 10 questions, and see “Okay, are the right documents aligning with the right questions here?” Then you start to scale up your documents and make it more and more of a real-world situation. You would start to scale up your questions… You could do both of these synthetically. And then if you still started to see positive signals, you could start to scale. And if you start to see negative signals, “I’m no longer getting the right documents associated with the right questions…”
I personally would always start from the simplest levers in a RAG setup, and what I mean there is that you have a number of different things that you can optimise.
So you have retrieval, you can optimise it semantically, or you can optimise it in a rule-based retrieval, you can optimise the generative model, you can optimise the prompt… And the simplest movers, the simplest levers are the rule-based retrieval (the word search), and then the semantic search.
So I would first of all add like a hybrid search. What happens if I make sure that there’s an exact match in that document for the word in my query? Does that improve my results? And then I would just move through that process.
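Here’s what pulling both of those levers at once can look like: blend an exact word-match score with an embedding cosine score and re-rank (a toy scoring function of ours; alpha tunes the mix):

```ts
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hybrid score: alpha weights rule-based (exact word) hits vs. semantic similarity.
function hybridScore(
  queryTerms: string[],
  docText: string,
  queryVec: number[],
  docVec: number[],
  alpha = 0.5
): number {
  const text = docText.toLowerCase();
  const hits = queryTerms.filter((t) => text.includes(t.toLowerCase())).length;
  const keyword = queryTerms.length ? hits / queryTerms.length : 0;
  return alpha * keyword + (1 - alpha) * cosine(queryVec, docVec);
}
```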
Developer (un)happiness
Abi Noda, co-founder and CEO at DX, joins the show to talk through data shared from the Stack Overflow 2024 Developer Survey, why devs are really unhappy, and what they’re doing at DX to help orgs and teams to understand the metrics behind their developer’s happiness and productivity.
Matched from the episode's transcript 👇
Abi Noda: Okay. I see. Okay. I mean, this I can’t share out loud, but… So for risk of attrition, we look at it very similar to like a blood test that you might get at the – so when you get a blood test, they tell you “Here’s the healthy range.” If your blood pressure or cholesterol is within this nanomilligrams to this, you’re normal. So with attrition risk, the normal range - and I maybe haven’t had coffee this morning, but I think it’s 7% to 10% is the healthy range. So if your attrition risk is 7% to 10% of your organization has signals of being at risk of attrition, that’s normal. And I’ll just say we’re at the high end of the normal range… But I’m looking at the data; our reporting will tell you where that kind of risk of attrition is. So I’m looking at it now and I’m aware of what is going on here.
Leveling up JavaScript with Deno 2
Jerod is joined by Ryan Dahl to discuss his second take on leveling up JavaScript developers all around the world. Jerod asks Ryan why not try to fix or fork Node instead of starting fresh, how Deno (the open source project) can avoid the all too common rug pull (not cool) scenario, what’s new in Deno 2 & their pragmatic decision to support npm, they talk JSR, they talk Deno KV & SQLite, they even talk about Ryan’s open letter to Oracle in an attempt to free the unused “JavaScript” trademark from the giant’s clutches.
Matched from the episode's transcript 👇
Ryan Dahl: Yeah, it’s hard to answer in general, but ideally with data. Ideally, we look at some data and we say “Okay, obviously, this is the way to go. This method is faster than that method, thus obviously we do this.” Or “We took a survey, and people prefer this to this.” But very, very often you don’t have clear signals like that, or you just have some dirty signals or some intuition.
[00:56:06.12] Yeah, you talk to the people you trust, you take their opinions… I don’t - not back in Node days, nor currently - believe that a project should be run as a democracy. I just took a poll today about something, and I value people’s feedback, people’s opinions on stuff, but ultimately you’ve just got to think about it and weigh all the evidence that you have, and decide what is going to level up JavaScript, what is going to further the company, and try to decide that as best as you can.
Undirected hyper arrows
Chris Shank has been on sabbatical since January, so he’s had a lot of time to think deeply about the web platform. On this episode, Jerod & KBall pick Chris’ brain to answer questions like, what does a post-component paradigm look like? What would it look like if the browser had primitives for building spatial canvases? How can we make it easier to make “folk interfaces” on the web?
Matched from the episode's transcript 👇
Chris Shank: It’s a reactive system, not in signals or reactivity, but it’s a reactive system as it’s constantly – the user interfaces that we’re writing are constantly communicating between these two worlds.
MySQL performance
Silvia Botros joins Justin & Autumn for a phenomenal conversation about databases, her career path & the ins/outs of writing High Performance MySQL.
Matched from the episode's transcript 👇
Silvia Botros: Yeah, what is real time? But when people come to you with conversations like that, and this feature is sitting on top of a database that is relational, which by design means it optimizes for consistency, not availability, even in the managed solutions, this is your sign. You need to start talking about – either you need to move the business, which is a harder sell, or you need to start introducing layers of reliability between this database and how you deliver the feature to the customers. Introduce queues, introduce caches. When the writer is failing over, you have to figure out what degraded mode looks like. It needs to become far more nuanced. It’s not going to be binary from then on. That ship sailed. But yeah, that’s definitely one of the biggest signals. If you start saying “I want Aurora, but I want it to fail over without any downtime”, I’m going to be like “Well, meet the CAP theorem.”