A pure Go implementation of jq
This is cool because portability. But also because you can embed it as a library in your Go projects. It’s not identical to jq
in practice, though. Here’s a long list of differences between the two.
Jacob Kaplan-Moss, who has been writing a lot about good interview questions and how to hire well:
Work sample tests are an exercise, a simulation, a small slice of real day-to-day work that we ask candidates to perform. They’re practical, hands-on, and very close or even identical to actual tasks the person would perform if hired. They’re also small, constrained, and simplified enough to be fair to include in a job selection process.
To give you a more concrete idea of what I’m talking about, here are several examples of work sample tests I’ve used…
And just in case you think he’s prescribing whiteboarding…
However, work sample tests are also a minefield: the space is littered with silly practices like whiteboarding, FizzBuzz, Leetcode, and “reverse a linked list”-style bullshit. The point of this series is to separate these silly practices from the good ones and to give you a framework and several examples to use in your hiring rounds.
Nikita Sobolev:
Your sync and async code can be identical, but still, can work differently. It is a matter of right abstractions. In this article, I will show how one can write sync code to create async programs in Python.
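The trick the article describes can be sketched outside Python, too. Here's a minimal, hypothetical JavaScript analogue of the idea (all names are made up for illustration, and this is not code from the article): the business logic is written once as a generator that describes its I/O as plain values, and a sync or async "driver" supplies the results.

```javascript
// Business logic written once: a generator that *describes* I/O as plain
// values and receives the results back. It never calls sync or async APIs.
function* greetUser(name) {
  const user = yield { kind: "lookupUser", name }; // an I/O request, as data
  return `Hello, ${user.displayName}!`;
}

// Synchronous driver: fulfills each yielded request immediately.
function runSync(gen, perform) {
  let step = gen.next();
  while (!step.done) {
    step = gen.next(perform(step.value));
  }
  return step.value;
}

// Asynchronous driver: drives the *same* generator, awaiting each request.
async function runAsync(gen, perform) {
  let step = gen.next();
  while (!step.done) {
    step = gen.next(await perform(step.value));
  }
  return step.value;
}
```

The identical `greetUser` generator runs under either driver; only the driver knows whether `perform` blocks or returns a promise.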
Last week, I told you all about an incoming security patch for Postgres. Well, today, it’s here. Please check out this page and upgrade your Postgres. As the Postgres team says, ‘This is the first security issue of this magnitude since 2006.’
As always, you can find the latest information about security patches via the CVE system. Here’s the one for this vulnerability, CVE-2013-1899.
There are three things that can happen with this vulnerability:
Damn.
Versions 9.0, 9.1 and 9.2.
The Postgres team has a FAQ for this release, and here are the release announcements.
You can also see the commit that fixed the issue, with all the gory details.
So, I’m sure you’ve all been waiting with bated breath for me to begin my licensing series. I got lots of great feedback, but something’s made me put it off for a moment: coding. I plan on starting the series in earnest next week, but in its stead, I offer you this: rstat.us.
If you didn’t hear, a week ago Friday Twitter changed their terms of service. This got a lot of people upset, including me. My friends and I started thinking about it, and the real problem is this: any software that’s owned by one entity, corporate or not, is open to the possibility of being abused.
So we decided to fix it. Ten days later, here we are: http://rstat.us/ is born.
To boil it down, rstat.us is a Sinatra application that clones the basic functionality of Twitter. Fine. But here’s the interesting part: if you want to follow someone that’s not on the main rstat.us site, you can copy/paste a URL into a form, and from then on out, it just transparently works. We’re building on the OStatus protocol that other sites like Identi.ca use, so you can actually follow Identica users on rstat.us right now, and after we work out a kink or two, they can follow you, too.
Oh, and I should mention that this is very much an alpha release. rstat.us was put together by 6 or 8 of my closest friends in a marathon coding session, so there’s some refactoring work to be done. The documentation is also a bit obtuse, partially to slightly discourage people from running their own nodes just yet. Eventually, this should be a two or three line process, and you’ll be able to run your own node on Heroku. We also want to significantly improve our test coverage.
There are some pretty big plans for the future: we want to extract a Sinatra extension that will enable anyone to easily build their own distributed network. We’re also releasing three Ruby gems that will let anyone work with the few standards that we build upon, so that other people can make their own tools that work with us, or build their own implementations and copies of the site. Check it out on GitHub, or drop by #rstatus on Freenode if you’d like to say hello.
It’s a distributed world that we live in. Own your own data. Build decentralized networks. Take control of your own social networking. And help us do it. :)
[GitHub] [README] [Discuss on HN]
Two-way communication in the browser isn’t easy. James Coglan aims to change that with Faye:
Faye is an implementation of the Bayeux protocol, a publish-subscribe messaging protocol designed primarily to allow client-side JavaScript programs to send messages to each other with low latency over HTTP.
Essentially, Bayeux lets applications publish and subscribe to data in named channels, both in the browser and the server:
fayeClient.subscribe('/path/to/channel', function(message) {
// process received message object
});
fayeClient.publish('/some/other/channel', {foo: 'bar'});
Faye supports long polling and callback polling, depending on whether you would like to keep a persistent HTTP connection open in the browser or use a JSONP callback.
The really cool part is that Faye ships with functionally identical server implementations for both Rack and Node.js.
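To make the channel model concrete, here's a toy in-memory hub in the same spirit. This is purely an illustrative sketch, not Faye's actual implementation or API; real Bayeux traffic travels over HTTP via long polling or JSONP.

```javascript
// A toy in-memory publish/subscribe hub with Bayeux-style path channels.
class Hub {
  constructor() {
    this.channels = new Map(); // channel path -> array of subscriber callbacks
  }
  subscribe(channel, callback) {
    if (!this.channels.has(channel)) this.channels.set(channel, []);
    this.channels.get(channel).push(callback);
  }
  publish(channel, message) {
    // Deliver to every subscriber on this exact channel; drop otherwise.
    for (const callback of this.channels.get(channel) || []) {
      callback(message);
    }
  }
}

const hub = new Hub();
const received = [];
hub.subscribe("/path/to/channel", (message) => received.push(message));
hub.publish("/path/to/channel", { foo: "bar" }); // delivered to subscriber
hub.publish("/some/other/channel", { baz: 1 }); // no subscribers, dropped
```

The shape mirrors the client calls shown above; Faye's job is to make the same subscribe/publish pairing work across the network between browsers and a server.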
We just might have to try this out to enhance Tail.
Ryan Carniato joins Amal & Nick to discuss Solid with a major focus on Signals, which are the cornerstone of reactivity in Solid.
Matched from the episode's transcript 👇
Ryan Carniato: [01:12:17.26] Okay. Yeah. I think I follow you. That one doesn’t exist as of yet, but I can see how it would be possible. And actually, it was [unintelligible 01:12:23.24] who was basically talking about this. He was very excited about Signals for a bit, and he was like – because someone had brought a Signals port to Rust, in a framework called Leptos, which is almost identical to Solid, but in Rust. And so it was in his language, and in his place, and he was talking about basically using stuff in the IDE to be able to trace it. I don’t believe anyone’s built that. But a more common, practical thing - library writers. I work a lot with Tanner Linsley, who creates the TanStack. React location – or I guess it’s now called TanStack Router, TanStack Query, and numerous other libraries. And one of the interesting things for him was trying to figure out how to manage state. And you’re like “Why do you need to manage state? Because these are these universal libraries.” But even if you’re dealing with data fetching or whatnot, he builds dashboards, and large tables, and very interactive stuff, and he builds tools so people can use it. Quite often, when he was working with stuff for React, for example - and all his libraries work with React - he’d be in this interesting place where he couldn’t use React state primitives and context because they weren’t efficient enough. And this is a common case. Things like Redux at one point switched to using pure React context with set state, and then they realized it was too slow, and they went back.
Basically, anytime you write any kind of state library with React, you have to basically make your own external state management, and then have that feed back into React’s update cycle. So yeah, this was one of those challenges, because then React had a concurrent mode. And this meant that now you use external source. Basically, you had to jump through even more hoops to try and get it to play nice on the React side. And it got to – I mean, I’ve seen this; this is probably why a lot of the people who seem to be the most critical of things like use effect, or people who write state management libraries for React, like XState, TanStack Query. You see a certain trend here. And it’s because they’re basically pushed into being kind of like this outside second-class citizen.
And one of the cool things, one side, when you have a framework that manages state, it means that yes, each version of that plugin for the different frameworks are going to need to use that framework state. But because there’s a set way of doing it, you just adopt it in. And ironically, if it wasn’t for React needing to work the other way, and since it’s the most popular TanStack library, Tanner could have just like basically just pulled in Solid’s version for Solid, Vue’s version for Vue, Svelte’s version of the state management primitives, and just like not have to ship all this extra code with the library. And this has been one of the big challenges, because when the state primitives understand concurrency, when they understand the pieces - because they’re built into the framework and they’re completely exportable - then you are giving those powers to the library authors.
In Solid we have something called create resource, which is a basic async primitive. But when I say basic, I mean it handles data fetching, it automatically serializes for SSR, and it automatically triggers suspense on the server and client, so it automatically does out of order streaming… Because the framework knows this. All Tanner and group have to do is just make TanStack Query wrap that create resource, and then suddenly it works in all of those functionalities. Suddenly, you just take Solid query, which is the TanStack Query version for Solid, and everything above works. Out of order streaming, automatic data serialization… The whole suspense, the whole transitions, the whole thing just works.
[01:16:24.17] It’s hard, just me saying it, to convey the impact of that, but when we added server functions - they’re called server dollar sign, but now we use server like React does for syntax; sometimes it’s easier to go along with them. Those just fall right in, too; all the pieces just compose in the third party, because they’re all built with knowledge that the other pieces exist. And that’s sort of what I mean by primitive design. It’s a real case where Solid query ended up being so successful there that the TanStack Query dev tools - the one that everyone installs, the ones for React, Svelte, Vue - are now actually built in Solid. So yeah, it was a good use case for us.
Alex & James Moore, founding members of the Open Web Advocacy (OWA), join Amal to talk about the critical work the OWA has been doing to ensure users have browser choice and that web apps can be first-class citizens on mobile devices. We learn about how an ad-hoc group of software engineers worked with regulators, legislators & policymakers to help drive some of the most impactful legislation curbing anti-competitive behaviors on the web for tech giants such as Apple, Google & Microsoft via the EU’s Digital Markets Act (DMA).
Tune in for this deeply important & timely discussion as we also unpack recent events with Apple and their DMA (un)compliance, and how the OWA helped successfully organize thousands of web developers from around the world to hold ground for a free & open web.
Matched from the episode's transcript 👇
Alex Moore: Before the Blink fork. So at this point we were still developing web apps, and we were waiting for these major features… But then Blink and WebKit forked. Suddenly, all the features that were getting piled into WebKit sort of got cut off, and now they were only going into Chromium and Blink. And then the gap between what was possible on Android and Firefox, and what was possible in Safari, started to get bigger and bigger as time went on.
Then what we experienced was just an enormous number of bugs. Literally almost every second release, Safari would break something critical. So there’s IndexedDB, which the listeners will be familiar with; it’s for local storage. That would quite often break, to the point where James and I used to joke it was built by the work experience kid at Apple… Because it was just so unstable, and parts of it didn’t work.
Now, that was unfair. The reality actually is that Apple’s investment in Safari was just so small that you couldn’t possibly work on all these different things at the same time and have them work. So it wasn’t a skill issue, it was simply not investing enough resources… Because they had no incentive; they moved parts of their team off onto their native app ecosystem to focus on that instead.
But we persisted, and we were still building web apps, including for iOS… Because you don’t get a choice where your customers are. You have to build for your customers. And in particular, you have to build for the phone of the CEO, or the CTO of the company you’re building for.
So that kept on going till 2015-2016, but the noise from developers was getting bigger and bigger. They’re like “Right, we want push notifications.” But Apple would pretty much – they’d give these sort of pro forma non-responses, like “We hear you” or “We wanna hear about the use cases”, for which our response would always be “Well, the use cases are the same as they are for all the native apps. They’re identical.” Do you want a chat app without notifications? Do you want a social media app without chat notifications…?
[00:13:47.27] We got all the way to 2020, and James and I at the time were writing software, doing a point of sale system… And the first thing our customer asked us was “Hang on… These printers you’re making us buy are really expensive. Can we use Bluetooth printers?” And we’re like “Nah, sorry. Apple doesn’t support web Bluetooth. We can do it for your Android devices, we can’t do it for all the iPads.” Because it was a franchisee kind of arrangement, they obviously couldn’t get all the franchisees to swap out all of their iPads with Android devices, so it just wasn’t an option.
Then a few months later he came up and he said “Oh, for the product ordering system we really need notifications when products turn up.” And again, we said to him “Well, we can’t do that, because Apple hasn’t implemented that functionality”, to which he said “Why can’t I just install another browser and get notifications on that browser?” And we’re like “Ah, because they’re all essentially the same.” And that got me thinking. I was like “Yeah, that’s actually ridiculous.” Obviously, if there was another browser, you could just tell your users to switch to that browser. And the fact that there isn’t one means that Apple has no competitive pressure to implement any of these features, or invest in Safari, or make it stable, because there is no fear of losing users to another browser.
If there’s a huge bug in Safari that breaks lots of apps, then the apps rightly - and this is a good use case for it - could say “Look, it’s broken in this one. Go use one of these other browsers instead.” But that doesn’t happen, because as developers we know they’re all functionally identical. We’re not gonna push our users to go use another browser, because that’s annoying, and bad practice, and all… So that competitive pressure never turns up, which means that the Apple management – and I wanna be really clear here; this isn’t stuff that’s within the Safari/WebKit team’s control. It’s management deciding what budget they get. And presumably, in some cases, what they can invest in.
So it was at that point I started reaching out to people that worked at Apple, and I’m like “Look, we really need notifications. It’s been 10 years… What can we do to get this off the ground?” And we got no response. So we kept pushing, and eventually we created a big post at WWDC, and we said “Look, Safari is years behind Firefox and the Chromium browsers. We’re missing all these core features for web apps. Safari needs a much bigger budget. What can we do to make sure that happens?” And it was the most upvoted, most viewed post at WWDC that year.
This week we’re joined by FreeBSD & OpenZFS developer, Allan Jude, to learn all about FreeBSD. Allan gives us a brief history of BSD, tells us why it’s his operating system of choice, compares it to Linux, explains the various BSDs out there & answers every curious question we have about this powerful (yet underrepresented) Unix-based operating system.
Matched from the episode's transcript 👇
Allan Jude: The ZFS code itself is basically the same. So since FreeBSD 13.0, it’s been the OpenZFS version, as opposed to the previous Illumos version of ZFS. So in FreeBSD 13.2, I think it’s ZFS 2.1.6, which is almost identical to what’s in Ubuntu, which is 2.1.5.
This week we’re talking about Swift with Ben Cohen, the Swift Team Manager at Apple. We caught up with Ben while at KubeCon last week. Ben takes us into the world of Swift, from Apple Native apps on iOS and macOS, to the Swift Server Workgroup for developing and deploying server side applications, to the Swift extension for VS Code, Swift as a safe C/C++ successor language, Swift on Linux and Windows, and of course what The Browser Company’s Arc browser is doing to bring Arc to Windows.
Matched from the episode's transcript 👇
Ben Cohen: Yeah, so like I say, Saleem and the folks at the Browser Company have done some really heroic efforts in wrapping some of the Windows SDKs. And I’m not going to speak for them, because it’s their work, but I suspect the answer is mostly that they’re not looking to create a cross-platform SDK. They want a language, in this case Swift, that will compile Windows binaries, and have that able to access the existing SDK, just like Swift accesses the existing SDK on Apple platforms.
So there is a bit of a runtime in terms of obviously we have a standard library that you can use, we also have taken the next level above the standard library, which is something called Foundation… That’s something that Apple developers will be very familiar with. It’s been around for a long time as the core part of Apple’s SDK.
When Swift first launched, Foundation itself was obviously written in Objective-C. And the initiative at the time was that we were going to create this parallel version of Foundation written in Swift. Unfortunately, the challenge there is those two things got a little bit out of sync, because the Objective-C implementation on Apple’s side wasn’t identical in every way to the implementation in what we refer to as Corelibs Foundation on the Linux side. And people found that a little bit challenging. I think that was one of the reasons why Swift on Linux adoption stalled a little bit in the early days.
[24:13] So about a year ago, at the Swift on Server conference, Tony Parker from the Foundation team announced something new, which is that we were open sourcing a new, pure Swift implementation of Foundation - one that was actually going to be the implementation of Foundation that, if you are running iOS 17, is on your phone. And so that’s actually identical code now that we’re open sourcing, and that you can run on Linux and Windows as a package that you download and compile into your binary… Whereas on iOS, it’s there in the frameworks that you use.
And it was a while to get to this point, because we had to do quite an interesting trick, which is we actually had to invert things. We originally had a library written in Objective-C, and then we were sitting on top of it as Swift. And we had to flip that around, so that actually the implementation of Foundation was written in Swift, and then we had to reexpose all of that functionality back to existing Objective-C apps.
One of the things we have on Apple platforms is we have this ABI stable platform where you can write an app, put it up on the App Store, and then the operating system upgrades underneath it without you having to redownload the apps. And that’s really important, and that relies on the technology of ABI Stability, which is something that Swift implemented, I guess three years ago now, with Swift 5.0… Which was a really important point for us, because that allowed us to start implementing parts of our operating system in Swift. Up until that point it was only a technology that we could use internally within the operating system, but we couldn’t expose frameworks written in Swift. But once we achieve that, we were able to do that inversion of Foundation, and now we’re at the point where we’re starting to open source code that is literally the identical code that you’ll be running on your phone, built into the operating system on Windows or Linux as well.
Mat Ryer returns with his guitar, an unpopular opinion & his favorite internet virus.
Matched from the episode's transcript 👇
Jerod Santo: You’re a 10-eggs developer… 64 votes, 72% popular. On Mastodon 73% popular, so pretty much identical. 40 votes.
A hoy hoy! Our old friend Nick Nisi does his best to bring up TypeScript, Vim & Tmux as many times as possible while we discuss a new batch of web browsers, justify why we like the ones we do & try to figure out what it’d take to disrupt the status quo of Big Browser.
Matched from the episode's transcript 👇
Nick Nisi: I think so. I think that a lot of people have a problem paying for software like that, like browsers. But when you think about it as “Oh, this is a dev tool”, if it actually provides enough value - which I think both of them do; they both have pretty much identical features. I think Sizzy is a little bit nicer, just in its interaction and its UI. Its UX is better, a little bit. But they’re both Chromium browsers that support this ability to show you the browser, show you what you’re editing, and have dev tools with it. But then they can also do things like “Here’s a phone, here’s a desktop, here’s a tablet”, and see them all at once, and see them all sync together. And then if you’re developing the Open Graph stuff, the Open Graph links, you can see that, “And here’s an example of what it would look like on Twitter, here’s what it would look like on Facebook.”
Go’s known for its fantastic standard library, but there are some places where the libraries can be challenging to use. The html/template package is one of those places. So what alternatives do we have? On today’s episode we’re talking about Templ, an HTML templating language for Go that has great developer tooling. Co-hosts Kris Brandow and Jon Calhoun are joined by Adrian Hesketh, the creator of Templ, and Joe Davidson, one of the maintainers on the project.
Matched from the episode's transcript 👇
Adrian Hesketh: Well, Joe maintains the Neovim plugin, if you like. The LSP itself is identical between all the different editors… But what you do have to add on is the syntax highlighting, which is sadly distinct for each kind of editor… So I maintain the VS Code plugin - although Joe did a PR on it the other day, so we both do now - and that uses a different type of syntax highlighting structure than I think GoLand does, and I think Neovim uses a different one again. So that’s probably the most irritating thing.
So I think Microsoft, if they’re listening out there, it’d be ideal if you could in the LSP write back the syntax highlighting rules directly onto “Highlight this and this.” Because then I could use the existing parser that we’ve already got, instead of trying to recreate the parser in regular expressions, just to do the syntax highlighting bit.
As a technologist, coder, and lawyer, few people are better equipped to discuss the legal and practical consequences of generative AI than Damien Riehl. He demonstrated this a couple years ago by generating, writing to disk, and then releasing every possible musical melody. Damien joins us to answer our many questions about generated content, copyright, dataset licensing/usage, and the future of knowledge work.
Matched from the episode's transcript 👇
Damien Riehl: Sure. If I were to create this machine-created coloring book, for example, which under the US Copyright Office today, that entirely machine-created thing is therefore uncopyrightable. This really goes to the heart of what is copyright in the first place. And all copyright is is a monopoly. It is a government-sanctioned monopoly, giving you the author a monopoly of life of the author plus 70 years on the thing you created. But as an exchange for that monopoly, the government says “This has to be original. It has to be your creative work that does this. And if it is truly original, and it is truly creative, we will give you that monopoly of 70 years; life of the author plus 70 years.”
[25:59] So really, the question is, is there anything copyrightable in the machine-generated work? Well, probably not, because there was no human creativity in that thing. So that’s thing number one. But then let’s look at another scenario - what if somebody else did a human-created coloring book that was identical to what the machine had done? Does that turn it from unoriginal, therefore uncopyrightable, with a machine-created one [unintelligible 00:26:21.02] if a human does it, it is copyrightable, even though they’re identical?
Red Hat’s decision to lock down RHEL sources behind a subscription paywall was met with much ire and opened opportunity for Oracle to get a smack in and SUSE to announce a fork with $10 million behind it.
Few RHEL community members have been as publicly irate as Jeff Geerling, so we invited him on the show to discuss.
Matched from the episode's transcript 👇
Jeff Geerling: Yeah. I keep seeing this statement, “All of our code, everything in Red Hat is in CentOS Stream.” It’s like, well, that’s not entirely true. 99%, probably more than 99% is. And in terms of the whole complete source availability, it doesn’t meet that standard. But the license agreement saying you can download the sources if you have an account and you have a subscription in good standing - that does meet it. But they can’t make that statement, that everything is in Stream. Because what they’re trying to say is “Why are you doing this downstream? We have this upstream, which is better, and it has everything.” It’s like, well, if they did their work all in Stream, and if they had coordinated the releases, especially the minor releases through Stream, that’s one thing. But they don’t. Red Hat is kind of a little bit of a fork of Stream. It’s not a big fork, but it’s a downstream of Stream, so it’s not one-to-one identical, but they’re trying to sell it as that to the press, I think. Anyone like us can kind of see through that; it’s not identical.
They’ve tried selling Stream for three or so years now, and nobody outside of the Red Hat ecosystem - like the Red Hat Enterprise Linux subscribers themselves, and Red Hat - has bought into it. I don’t know how they’re gonna change that without changing Stream and making it better.
This week we’re joined by Adam Jacob and we’re talking about his mission at System Initiative to rebuild DevOps. They are out of stealth mode and ready to show off their transformative new power tool that reimagines what’s possible from DevOps. It’s an intelligent automation platform that allows DevOps teams to build detailed interactive simulations of their infrastructure and use them to rapidly update their production environments.
Matched from the episode's transcript 👇
Adam Jacob: It is rebuilding it from the ground up. Here’s the thing. So I went back and watched John Allspaw and Paul Hammond’s talk from 2009, the “10 deploys a day at Flickr.” That was basically the moment that DevOps started. Patrick Debois I think was in the audience, or at least saw the talk shortly thereafter. I think he was there. And that’s sort of what led to DevOps. And if you watch them give that talk, it’s an amazing talk today, and they describe how they deploy Flickr 10 times a day… And the way that they deployed Flickr in 2009 is essentially exactly the way that we tell people to do DevOps today. The tools are completely different, we’ve replaced the tools 50 times… You’ve got a ton more options about which things slot into which part of the workflow, or whatever… But that workflow - unchanged since 2009. You put in some code, it goes into CI, at some point someone [unintelligible 00:06:29.01] a button, you do some feature flags, you do a little dark launching… Like, you put it in a blender, get some monitoring and observability… They were using Ganglia; now you’d be like “We use Honeycomb”, because observability is better than monitoring, or whatever… You got some Datadog… But essentially, it’s identical to what we did in 2009.
Homebrew project leader Mike McQuaid joins us to weigh in on Apple’s big Vision Pro announcement. We also hit on our favorite (and least favorite) non-AR things from the WWDC 2023 keynote.
Matched from the episode's transcript 👇
Adam Stacoviak: [25:49] It could be an homage, they could have licensed it… I don’t know. I’d love to know the legalities there. But it was exactly like Ready Player One. And Ready Player One is all about escaping, but that’s not where Apple went. But the point I’m trying to make is if you’ve seen that film, these goggles, Vision Pro, look almost identical shape-wise, and you can see through them. So there’s a lot of inspiration. You’ve got – what was the thing called from Star Trek? I’m not a Star Trek fan, unfortunately… That was like the iPhone.
Tips, tricks, best practices and philosophical AI debates abound when OpenAI ambassador Bram Adams joins Natalie, Johnny & Mat to discuss prompt engineering.
Matched from the episode's transcript 👇
Johnny Boursiquot: So I’m curious – if we look at sort of the vast world of interactions, and imagine somebody goes to ChatGPT, and you have millions of people using it every day… Is each interaction unique to that person? Is it factoring in the nuance and context of me and how I ask my questions, and our history of conversations before? Is it truly customized to me, or could someone else who asked a similar question in a nearly identical way get the same exact answer?
KBall interviews Nick Nisi about the Pandora’s box that is his tooling/developer setup. Starting at the lowest layer of the terminal emulator he uses, they move upwards into command line tools, into Tmux (terminals within terminals!), his epic NeoVim configuration, and finally into the tools he uses for notekeeping and productivity.
Matched from the episode's transcript 👇
Nick Nisi: Yup. And that enables me – like, it’s not associated with any specific terminal window, and so one really cool thing… Well, two things, actually. If you’re ever in a situation where you’re presenting, like up on stage, you can actually attach to the same tmux session twice. And so you could have like on your separate monitor, which is the projector, a terminal window, and have another one locally, and just look at the one on your computer and not have to like look back behind you and see what you’re typing, and all of that. You can have them identical, and mirrored, and just see exactly what you’re doing… And that is actually another really cool way to pair with people. I’ve never done this, but in theory, it’s awesome… Because you could just have someone SSH into your machine, and tmux attach to the same thing, and then you’re both editing in the same place. The downside of it is even if they’re a Vim user, which is like less and less likely, it’s such a personal editor, with personal key bindings.
Dax Raad joins KBall and Nick to chat about SST, a framework that makes it easier to build full-stack applications on AWS. We chat about how the project got started and its goals. Then we discuss OpenNext, an open source, framework-agnostic serverless adapter for Next.js.
Matched from the episode's transcript 👇
Dax Raad: Yeah, so at the end of the day, there’s a few things – if you’re trying to do something like this, there’s a few things you just need to do. And I think people typically start in the same place. So when you do these serverless systems, another way to think of them is they’re very serviceful, so you’re really taking advantage of all these primitives that your cloud provider has. Obviously, people think of things like functions, but it’s also things like queues, or event buses, or cron jobs… All kinds of little primitives that you need, spun up, so you can actually use it.
So the place where a lot of these frameworks start is on the infrastructure as code side. So you want to be able to define all the things you need, and this tool needs to be able to actually like deploy them. And that’s kind of where SST started; it started as an infrastructure as code tool, where you can define all these things; we’re built on top of CDK, so even if you go outside of the bubble we focus on, you can still use SST to orchestrate all kinds of things.
[06:09] So that’s where we started… Then we just discovered rough edge by rough edge by rough edge, and we continued to progress that way. So the first thing is local development, right? Spinning up a full copy of your whole app is actually a pretty awesome way to do development when you’re working on an application. But the feedback loops whenever you update function code, or whatever - now you need to recompile that code, upload to AWS, wait for the function to restart… And that was at best a four to five-second feedback loop, which is just kind of unacceptable. So we said, “Okay, that’s a clear problem that’s going to make people use this stuff and say ‘I hate it. I don’t want to use it’, so we’ll try to figure out how to address that.”
Now, that was kind of the first feature that really put us on the map, which was our live lambda debugging. So we made it so the feedback loop can now be measured in like milliseconds, and is effectively instant. You can add breakpoints, all this kind of stuff that you’re used to in a local environment, even though it’s still running in the cloud world. Your environment is still 99% identical to what actually gets deployed, so there aren’t really any discrepancies of “Okay, it works on my machine. Why isn’t it working when I deploy?”
So that’s where we started. But basically, that’s the pattern that we took - we’ll solve the biggest pain point, then we’ll solve the next biggest pain point, then we’ll solve the next biggest pain point. And over time, our scope has gotten extremely broad. We’ll probably talk about this in a little bit, but now we’re going all the way to like “How do we deploy these more complex frontends to AWS?” Now we’re very deep in the frontend world, doing a bunch of things to help frontend projects get deployed.
Our scope is crazy now, and I love it, because I feel like no matter what’s going on in the tech world, we have a way to participate in that, which is really awesome. I don’t ever feel okay we’re kind of stuck in our little zone. But yeah, it started out pretty narrow, and it’s just gotten wider over time.
This week Adam talks with Andy Klein from Backblaze about hard drive reliability at scale.
Matched from the episode's transcript 👇
Andy Klein: So that’s a really good question, and it does come up… And you’re absolutely right, somebody with a handful of drives, or a small number of drives has to think differently. And I think one of the reasons why the data, what we do, has been popular, if you will, for the last number of years is because there’s such a dearth of information out there.
Other than that, you go to the manufacturer, and you could take every data sheet produced by every single manufacturer and just change the name, and they look identical. And they almost have the same numbers on them.
The panel discuss the parts of Go they never use. Do they avoid them because of pain in the past? Were they overused? Did they always end up getting refactored out? Is there a preferred alternative?
Matched from the episode's transcript 👇
Carl Johnson: I think if you look at the disassembly, it’s identical, yeah.
This week Evan Prodromou is back to take us deeper into the Fediverse. As many of us reconsider our relationship with Twitter, Mastodon has by-and-large been the target of migration. It helped to popularize the idea of a federated universe of community-owned, decentralized, social networks. And, at the heart of it all is ActivityPub. ActivityPub is a decentralized social networking protocol published by the W3C. It is co-authored by Evan along with Christine Lemmer-Webber, Jessica Tallon, Erin Shepherd, and Amy Guy. Today, Evan shares the details behind this protocol and where the Fediverse might be heading.
Matched from the episode's transcript 👇
Evan Prodromou: …which is that I started a service called identi.ca in 2008, that was a distributed social network. We had software that is now known as GNU Social, so I’ll call it that… That you could download and install on your own servers, and you could connect to identi.ca. So it was this kind of open federated social network. We used a protocol called OStatus, that was based on a number of existing standards for sharing data across the web… And it turned out really nicely; it was great. I think at our peak we were at about 2 million users, we were seeing a lot of activity… But this was also at a time when, say, Twitter and Facebook were also surging in their growth, and it did not pan out. So we lost out to those other networks.
I have – you know, that was a lot of effort. What came out of that was the software GNU Social, and then another stack that I created after that called pump.io, which was the first kind of ActivityPub implementation, and it was the kind of like “I’m going to just make up a protocol to see how it works.” And that’s what’s running identi.ca right now. I think the time period here, 15 years - for a long time, there’s been a lot of a sense of “This is a good idea, but I’m not ready to join up, be a part of it, maybe put my time or my social effort into it”, for a lot of people. For me, yes; for a lot of people, no.
In October of last year - I don’t know if she’s gonna like me telling this story, but my wife came to me, and if there’s anyone else who’s suffered for the federated social web more than me, it’s my wife. She’s put up with a lot of late nights, and long trips, and so on.
Una & Adam from The CSS Podcast defend their Frontend Feud title against challengers David & Shaw from the keyframers. Let’s get it on!
Matched from the episode's transcript 👇
David Khourshid: I was gonna say Amazon, but that’s like – the only reason you would work there is for the compensation, no offence… But if it’s identical, then – you know what, let’s do Netflix. That’s a big one.
Our “what’s new in Go” correspondent Carl Johnson joins Mat & Johnny to discuss… what’s new in Go 1.20, of course! What’d you expect, an episode about Rust?! That’s preposterous…
Matched from the episode's transcript 👇
Carl Johnson: What is change…? What is comparable? So this is another – it’s one of those definition things. So in the Go spec there’s this idea of comparable types, and then types that are not comparable. So a comparable type is like if you have two strings, you can say String x equals string y. Or string x does not equal string y, right? It’s very simple; if they’re the same, then they’re comparable. And if they’re not…
But there are things that are not comparable in Go. So if you have two slices, you cannot say “Does slice one equal slice two?” and part of that is just because it’s a little bit ambiguous. It’s, again, that ambiguity of “Do you mean that these two slices are identical, or do you mean that these two slices have the same contents?” So there could be like – if you have a slice that’s 123, it could be they both are 123, but they actually are different areas in memory. So if you modified one, you wouldn’t be modifying the other.
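A minimal Go sketch of the distinction Carl draws here: `==` between slices is a compile error, so content comparison has to go through something like `slices.Equal` (standard library as of Go 1.21); the helper name `sameContents` is just for illustration.

```go
package main

import (
	"fmt"
	"slices"
)

// sameContents reports whether two int slices hold equal elements,
// even when they point at different backing arrays.
func sameContents(a, b []int) bool {
	return slices.Equal(a, b)
}

func main() {
	a := []int{1, 2, 3}
	b := []int{1, 2, 3}
	// a == b is a compile error: a slice may only be compared to nil.
	fmt.Println(a == nil)           // false: the one legal slice comparison
	fmt.Println(sameContents(a, b)) // true: same elements
	a[0] = 99                       // distinct backing arrays...
	fmt.Println(b[0])               // ...so b is unchanged: prints 1
}
```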
So anyway, just to get rid of the ambiguity, you’re just not allowed to compare slices to each other; you can only compare them to nil. That’s the only legal comparison you can make. And so for interface types, this gets a little bit weird. So if you have an interface type, and you want to say x equal to y, and x and y are both some interface type, you can do that even if the type of x, the concrete type underneath the interface is something that wouldn’t normally be comparable. So let’s say you’re comparing two errors, and you want to say “Does error one equal error two?” And it just so happens that error one is implemented by a slice; or error two is implemented by a function. And those are not comparable types. Well, because we’re just thinking about them as errors, it’s okay. We can do that, the language will let you do it, and it’s only going to blow up if it gets down to brass tacks, and it finds out that “You know what - these two, error one and error two, are both implemented by slices, and so the only way I could tell you if they’re the same is if I compare the slices, and I’m not allowed to compare slices”, and so then it’ll panic at runtime.
So that’s sort of just all background for – when generics were introduced, generics have a keyword called comparable. I think it’s maybe technically not a keyword; it might be like a pre declared identifier. But whatever, it’s essentially a keyword. So comparable - when you’re doing a generic, you have to be able to say what you want the types to be. So if you want to do a generic over a map, you can say “I want there to be this type k, and it should be comparable (which means it’s usable as a key and a map), and I want this type v, and it can just be any type, because it’s the value of the map, and I don’t care what the value is.”
[27:47] So for a while, the problem was that in Go, even with generics, you couldn’t write a generic function for a map that used interfaces as the key type, because they weren’t considered to be generically comparable. Because there was that risk of it blowing up at runtime, of it panicking at runtime, the Go team were trying to be very conservative and say, “Let’s just leave that out of generics, at least for now, so that if we decide later that we want to make it more expansive”, which is what they did, “nobody’s going to have their code broken. But if we decide later that we want to make it more narrow, well, then people will have their code broken.” So they started off with just a very narrow definition of what a generically comparable thing was, and now they’ve expanded it to include interfaces as well, even though it runs that runtime risk of having a panic.
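A short sketch of the expansion Carl mentions, assuming Go 1.20 or later: a generic function constrained by `comparable` now accepts interface types like `any` as the key type, with the same runtime-panic risk if a key's dynamic type is itself non-comparable. The function name `keysOf` is illustrative.

```go
package main

import "fmt"

// keysOf collects the keys of a map. K must satisfy comparable,
// which is what makes it usable as a map key type.
func keysOf[K comparable, V any](m map[K]V) []K {
	out := make([]K, 0, len(m))
	for k := range m {
		out = append(out, k)
	}
	return out
}

func main() {
	// Since Go 1.20, an interface type such as any satisfies comparable,
	// so a map keyed by interfaces works with this generic function.
	// Before 1.20, instantiating keysOf with K = any was a compile error.
	m := map[any]int{"a": 1, 2: 2}
	fmt.Println(len(keysOf(m))) // 2
}
```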
Heroku’s free plans officially reach EOL, Swyx explains the mixed reaction to Stable Diffusion 2.0, a real Twitter SRE explains how it continues to stay up even with ~80% gone, Tyler Cipriani tells us about one of Git’s coolest, most unloved features & we chat with Joel Lord about brewing beer with IoT & JavaScript at All Things Open 2022.
Oh, and help make this year’s state of the “log” episode awesome by lending your voice!
Matched from the episode's transcript 👇
Jerod Santo: Stable Diffusion 2.0 dropped last week to much excitement… but it quickly turned to mixed responses once people started playing with the results. Shawn “swyx” Wang has a solid rundown of what’s new and what’s disappointing the Stable Diffusion community on his L-Space Diaries Substack. After displaying a bunch of side-by-side results from identical prompts Swyx says: If you looked closely and couldn’t decide if SD2 was better than SD1, you weren’t alone…
That said: the task of deciding if a generated image is “better” or “worse” is quite subjective and hard to quantify across a literally infinite unbounded latent space - FID scores being the best we have so far.
and: prompts are a moving target - the same prompt generates different things in SD1 vs Midjourney 3 vs Midjourney 4 vs Dall-E 2 vs SD2 - and users will discover new magic keywords and best practices that subjectively improve results. So perhaps SD2 initially looks “worse” than SD1, but then improves as users learn how to wield it better.
He goes on to explain why “prompt engineering” itself is a product smell. Good stuff, worth a read.
For our last 2022 Kaizen episode, we went all out:
All of this, and a whole lot more, is captured as GitHub discussion 🐙 changelog.com#433. If you want to see everything that we improved, that is a great companion to this episode.
Matched from the episode's transcript 👇
Jerod Santo: So like you said, pull requests were very much an invention of GitHub, and have been copied; I mean, they’re called merge requests over on GitLab, which I think is actually a more accurate name… But sorry, pull request has the inertia, so it just doesn’t quite land the same way, but they’re pretty much identical. Thoughts on different platforms, neither of which were a Git-native action.
Chris sits down with Ankur Goyal to talk about DocQuery, Impira’s new open source ML model. DocQuery lets you ask questions about semi-structured data (like invoices) and unstructured documents (like contracts) using Large Language Models (LLMs). Ankur illustrates many of the ways DocQuery can help people tame documents, and references Chris’s real life tasks as a non-profit director to demonstrate that DocQuery is indeed practical AI.
Matched from the episode's transcript 👇
Ankur Goyal: What’s really interesting about this is OCR is not a new thing, neither is reading data from invoices, or other kinds of documents. But for some reason, most businesses don’t take advantage of it. And I think that’s because the solutions out there are just not easy enough to use. And so we’ve always thought about this from the standpoint of “What does it take to make something that’s actually so easy to use that it provides value for someone?”
[07:58] The solutions that existed prior - they fell into a few different buckets. One is something called an OCR template, where basically you take OCR text, and then you draw a box of XY coordinates around exactly where the text needs to be. And if you’re working maybe at the DMV or something, and taking identical documents and scanning them with an identical scanner every time, that approach can actually work really well. In reality, I’m sure with the invoices that you’re working with in your business it’s never that simple, right? And so that’s an example where the user experience and cost barrier in practice can be just prohibitively high.
Another technique that was really emerging as more popular when we started is this really big pretrained model approach. So AWS has a product called Textract, for example, which is actually a great product. And what it allows you to do is upload any document into it, and it will give you back some data structure about what’s in the document. And the nice thing about this approach is you don’t need to do any of that template definition, or anything like that. But the challenging thing about it is that if the results aren’t what you expected, then you don’t really have any recourse to solve for it. A number of our early customers were using Textract and building machine learning models on top of Textract to normalize the data to be consistent, and they realized “This is just not – what are we doing here?”
This week we’re talking fresh, faster, and new web frameworks by way of JS Party. Yes, today’s show is a web framework sampler because a new batch of web frameworks have emerged. There’s always something new happening in the front-end world and JS Party does an amazing job of keeping us up to date. So…what’s fresh, faster, and new?
The first segment of the show focuses on Deno’s Fresh new web framework. Luca Casonato joins Jerod & Feross to talk about Fresh – a next generation web framework, built for speed, reliability, and simplicity.
In segment two, AngularJS creator Miško Hevery joins Jerod and KBall to talk about Qwik. He says Qwik is a fundamental rethinking of how a web application should work. And he’s attempting to convince Jerod & KBall that the implications of that are BIG.
In the last segment, Amal talks with Fred Schott about Astro 1.0. They go deep on how Astro is built to pull content from anywhere and serve it fast with their next-gen island architecture.
Plus there’s an 8 minute bonus for our ++ subscribers (changelog.com/++). Fred Schott explains Astro Islands and how Astro extracts your UI into smaller, isolated components on the page, and the unused JavaScript gets replaced with lightweight HTML — leading to faster loads and time-to-interactive.
Matched from the episode's transcript 👇
Miško Hevery: They are; they are absolutely connected. Actually, I’m also not an expert at Svelte, but my understanding is that they only have one entry-point; I don’t think they can create separate ones. The thing that Svelte does really well is they can prune the tree; because they don’t have VDOM, they can prune the tree and say “Oh, these things never change, and therefore I don’t have to do updates on them.” But they still have hydration, because in order to recover the state – like, Svelte is also reactive, which means like if something changes, they know how to just update a specific part on the page, which is all great. But in order to rebuild the information about where the components are, where the reactivity are… Like, if I change this data, I have to change this component, and so on and so forth; in order to rebuild all this information, they have to execute the application, at least once, at the very beginning.
[47:59] The theme for all of these frameworks is that in order to recover the internal state of the framework, they have to execute the application. The process of executing of the application is what rebuilds the internal state of the framework.
And you’re correct, that different frameworks you can say have different efficiency factors in terms of how good they are at rebuilding. But I think Qwik is in a category of its own, because it just serializes everything, and you don’t have to download anything in order to make it page-interactive, right?
So imagine, anything you can build in Svelte, you can build in React, and vice-versa, right? We all agree that all these frameworks are kind of universally the same thing kind of apps that they allow you to build. And the same is true also for Qwik; whatever you can build in Svelte, React, Vue, Angular and so on, you can also build in Qwik. So the kinds of applications you build are absolutely identical. What’s different is how the application resumes on the client, and all kinds of other implications we can get into in this show. But the resumability is kind of the key difference.
AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment, and is accelerating research in nearly every field of biology. Daniel and Chris delve into protein folding, and explore the implications of this revolutionary and hugely impactful application of AI.
Matched from the episode's transcript 👇
Daniel Whitenack: Yeah, exactly. There’s this kind of looping that happens… And from what I was reading, using deep neural networks to predict protein structure in and of itself is not an innovation of this work. So people have tried this for quite a while. But I think that there’s two kind of main pieces here that really kind of set this apart. One is this Evoformer architecture, which is unique to what they’ve done… And the second is this kind of iterative process, which kind of helps the network learn across these representations and the predicted structure in a really powerful way.
[31:47] So yeah, it’s interesting in… We can kind of dive into a couple of these things, but the first one - it kind of reminded me a lot of some NLP things to some degree, because you’ve got this input sequence, which again, is just a sequence of amino acids, and they generate two representations from this. Maybe people are more familiar with NLP - you might have a sequence of characters, and you might assign a number to each of these characters; because you have to represent text as numbers to a computer, because a computer knows how to calculate numbers, right? So here, they’re in some ways doing a similar thing. They’re taking this input sequence and they’re representing it by numbers, but in two kind of really interesting ways. One which kind of tries to identify - not identical, but other sequences that have been identified in living organisms, and it kind of creates what they’re calling this multiple sequence alignment. So it’s actually an alignment of this sequence with other sequences; a multi-sequence alignment. And then they have this pair representation where they’re actually trying to identify proteins that have a similar structure, and construct an initial representation that’s kind of a pair representation of these two things, thinking that “There’s similar things maybe in the whole database that we’ve learned about, and similar proteins, so maybe we can learn from those things.”
So the initial sequence goes into these two representations: the multiple sequence alignment, and then this pair embedding. So one is kind of a matrix of sequences, and one is a pair representation of one sequence with another.