13% of the time, Devin works every time
Jerod, KBall & Nick discuss the latest news: Devin, Astro DB, The JavaScript Registry, Tailwind 4 & Angular merging with Wiz. Oh, and a surprise mini-game of HeadLIES!
High expectations for performance in both life and work are common, but what do you do when you get stuck and you’re not able to achieve the results you desire? In this episode, Mireille and Adam talk through the different aspects of perfectionism and ways in which it can be adaptive and helpful, and other ways in which it poses additional challenges. What happens when we avoid the possibility of failure as opposed to simply having high standards for our performance? How can we begin to focus on healthy striving as opposed to reaching for perfection?
Matched from the episode's transcript 👇
Mireille Reece, PsyD: Yeah, and so this sense of focus on effort and opportunity. Does it matter to me? Is it worth it if I fail? I think I’ve shared this in other episodes about Brené Brown’s work as it relates to daring greatly. It’s the man in the arena who counts. It’s the one who’s willing to get dirty, experience loss and hardship because they’re trying to push themselves forward towards their desired ideals.
The University of Texas at Austin had specifics in regards to this, related to healthy striving. They said this - setting standards that are high, but are within reach. If I want to go, “How can I make mini-goals, so that I’m to some degree again buffering the possibility of threat or loss?” Enjoying the process.
I tend to make sense of this so much in the lane of health and exercise, and going, “What am I trying to work on, what am I getting better at?” This is why people run marathons. This is why– I mean, why do you ride your bike, Adam?
This is a minimal implementation of DALL·E Mini. It has been stripped to the bare essentials necessary for doing inference, and converted to PyTorch. The only third-party dependencies are torch for the torch model and flax for the flax model.
How much mini-er can it get from here? 🤔
DockerSlim promises a lot:
docker-slim will optimize and secure your containers by understanding your application and what it needs, using various analysis techniques. It will throw away what you don’t need, reducing the attack surface for your container. What if you need some of those extra things to debug your container? You can use dedicated debugging side-car containers for that.
Their minification examples are impressive…
Victor Zhou uses a lot of SVGs on his blog. Do you? So now he optimizes their size as part of his build process. Do you?
62 SVGs minified, reducing the total size from 459322 bytes to 208897 bytes, a reduction of 54.5%! That’s a total of 250 KB, or 4 KB per SVG. Keep in mind that all of my SVGs were already saved in the Optimized SVG format - these savings were on top of already optimized SVGs. If you haven’t thought about minifying your SVGs before, chances are you’d see much more drastic results.
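If you’re curious what an SVG minifier actually does, here’s a toy whitespace-and-comment stripper. It’s only an illustration of the idea; real tools like SVGO go much further (rewriting path data, merging attributes, etc.):

```javascript
// Toy illustration of SVG minification: strip XML comments,
// collapse whitespace between tags, and trim the ends.
function minifySvg(svg) {
  return svg
    .replace(/<!--[\s\S]*?-->/g, '') // remove XML comments
    .replace(/>\s+</g, '><')         // collapse whitespace between tags
    .trim();
}

const input = `<!-- logo -->
<svg xmlns="http://www.w3.org/2000/svg">
  <rect width="10" height="10"/>
</svg>`;

console.log(minifySvg(input));
```

Even this naive version shaves a meaningful number of bytes off pretty-printed SVGs; the drastic savings Victor reports come from the smarter structural rewrites real minifiers add on top.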
Alex Russell answers the question, “If not React, then what?” Csaba Okrona identifies four core problems that create and reinforce knowledge silos, Rob Koch’s Markwhen is like Markdown for timelines, Jeff Geerling is quite impressed by Apple’s latest iteration on the Mac mini & Sylvain Kerkour took the time to draw a comparison of Amazon’s O.G. S3 service with Cloudflare’s R2 competitor.
Matt Asay thinks the open source licensing war is over, LangUI is an open source Tailwind component library for your AI chat app, Ivan Kuleshov modded a Mac mini to run via PoE, Apple joins Pixar and others in the Alliance for OpenUSD & John D. Cook says sometimes you shouldn’t pick the best tool for the job.
Asahi Linux is a project and community with the goal of porting Linux to Apple Silicon Macs, starting with the 2020 M1 Mac Mini, MacBook Air, and MacBook Pro.
Our goal is not just to make Linux run on these machines but to polish it to the point where it can be used as a daily OS. Doing this requires a tremendous amount of work, as Apple Silicon is an entirely undocumented platform. In particular, we will be reverse engineering the Apple GPU architecture and developing an open-source driver for it.
Asahi Linux is developed by a thriving community of free and open source software developers.
Yes, please!
Taco de Wolff:
This is about the fastest minifier for web formats and is written in Go. For example, it can minify JS files at about 25MB/s, which is orders of magnitude faster than existing solutions that are typically 100-300 kB/s. There is also a command line tool that can be incorporated in your setup easily.
A fun walkthrough of creating a mini data viz component in React, which teaches useful concepts like drawing with SVG and d3.js scales.
This repo contains the source code for the Udemy course “Build 20 mini frontend projects from scratch with HTML5, CSS & JavaScript (No frameworks or libraries)” of the same name. Each mini project has its own README and live demo, too.
Adam, Jerod, and Tim get together to put a spotlight on Apple’s October 30th Mac/iPad event from a developer’s perspective. They cover the specs of the new MacBook Air and the viability of having it as a development machine, the new Mac Mini in the ever popular Space Gray, and whether or not Tim will be able to stop pulling his hair out to find an affordable, yet powerful desktop machine with it, and the gorgeous new iPad Pro.
Why use Day.js?
- 🕒 Familiar Moment.js API & patterns
- 💪 Immutable
- 🔥 Chainable
- 📦 2kb mini library
- 👫 All browsers supported
It’s worth noting that the author doesn’t claim 100% API compatibility with Moment.js, but they say it’s close enough that “If you use Moment.js, you already know how to use Day.js.”
If your app targets modern browsers and is currently packing Moment.js’ 16.4kb in its JS bundle, maybe you can get by with Day.js’ 2kb instead…
Layout Land is a new YouTube channel from Mozilla and Jen Simmons focused on helping you learn about the new layout possibilities in web design.
If you’re looking to use CSS Grid but you need to support IE users, then you should check out this video and the rest of the series when you’re done. New videos every Tuesday and Thursday.
People always ask, “How can I dare to use this new CSS, when I need to support IE users?” It is very important to make sure every user can do what they came to your website to do, whether or not their browser supports the hot, new CSS you want to use. CSS Grid shipped in March 2017, and is already supported by the vast majority of web browsers in use. But there is a significant portion of users who do not have support for CSS Grid — not just IE, but also UC, Samsung Internet, Opera mini, QQ, and more. So can we use CSS Grid? How do we support these old browsers and use CSS Grid?
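The usual pattern Jen teaches is: write a simple fallback layout first, then layer Grid on top inside an `@supports` feature query. Old browsers (IE, Opera Mini, and friends) don’t understand `@supports`, so they skip the whole block and keep the fallback. A minimal sketch (the class names here are made up for illustration):

```css
/* Fallback first: floats work everywhere, including IE. */
.gallery .item {
  float: left;
  width: 33%;
}

/* Only browsers that understand Grid take this branch;
   browsers without @supports ignore the entire block. */
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
  }
  .gallery .item {
    float: none;
    width: auto;
  }
}
```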
Tablesaw is a group of plugins by Zach Leatherman of Filament Group that help you build responsive HTML tables. It's always nice to see solid solutions to problems all web developers share.
Here's a gif of their "stack" table in action:
Check the README for more table modes, IE8 support, and even a mini map!
From the project homepage:
Sass Media Queries is a collection of useful media queries mixins (including iOS devices like iPhones and iPads) for Sass 3.2.0+
Sass Media Queries adds support for @media query Sass mixins (including iOS devices like iPhones and iPads) for Sass 3.2.0+, and is also Compass and Bourbon safe.
Requirements:
Sass 3.2.0+
Version:
2.1 // developed on 16/02/2013
Variables:
$units: 1px; // default units for 'screen' mixins - it must include a digit!
Mixins:
@include min-screen(width) // shortcut for @media screen and (min-width ...)
@include max-screen(width) // shortcut for @media screen and (max-width ...)
@include screen(min-width, max-width) // shortcut for @media screen and (min-width ...) and (max-width ...)
---
@include iphone3 // only iPhone (2, 3G, 3GS) landscape & portrait
@include iphone3-landscape // only iPhone (2, 3G, 3GS) only landscape
@include iphone3-portrait // only iPhone (2, 3G, 3GS) only portrait
---
@include iphone4 // only iPhone (4, 4S) landscape & portrait
@include iphone4-landscape // only iPhone (4, 4S) only landscape
@include iphone4-portrait // only iPhone (4, 4S) only portrait
---
@include iphone5 // only iPhone (5) landscape & portrait
@include iphone5-landscape // only iPhone (5) only landscape
@include iphone5-portrait // only iPhone (5) only portrait
---
@include ipad // only iPad (1, 2, Mini) landscape & portrait
@include ipad-landscape // only iPad (1, 2, Mini) only landscape
@include ipad-portrait // only iPad (1, 2, Mini) only portrait
---
@include ipad-retina // only iPad (3, 4) landscape & portrait
@include ipad-retina-landscape // only iPad (3, 4) only landscape
@include ipad-retina-portrait // only iPad (3, 4) only portrait
---
@include retina // devices with retina
Now it’s easy to do stuff like:
@include min-screen(320) { ... }
@include max-screen(1024) { ... }
@include screen(320, 640) { ... }
And get back this when compiled to CSS:
@media screen and ( min-width: 320px ) { ... }
@media screen and ( max-width: 1024px ) { ... }
@media screen and ( min-width: 320px ) and ( max-width: 640px ) { ... }
Check out the project from Rafal Bromirski on GitHub and also the simple, yet neat “resize me” demo.
Discuss on Hacker News if you must.
Nodefront from Karthik Viswanathan aims to make a front-end developer’s life easier. Built in Node.js, Nodefront bundles a local web server for serving up HTML, CSS, and JavaScript assets and supports live reloading via web sockets. Additionally it ships with a command line interface:
nodefront compile - Compiles Jade and Stylus files to HTML and CSS. Can compile upon modification, serve files on localhost, and even automatically refresh the browser/styles when files are changed.
nodefront fetch - Automatically fetches CSS/JS libraries for use in your project. Provides an interactive mode to add new libraries.
nodefront insert - Inserts CSS/JS libraries directly into your HTML or Jade files.
nodefront minify - Minifies CSS and JS files. Can also optimize JPG and PNG images.
Karthik has put together a twenty minute screencast for a full demonstration.
Check out the project web site or source on GitHub for more.
Caolan McMahon, the creator of Async, has released a new project called Jam. After using it for a short while, I can say it has huge potential. Jam is a package management tool that allows you to install, use, and compile (minify) JavaScript libraries. It’s a browser-focused project that uses RequireJS (AMD modules) to load your website’s requirements.
Installation is simple:
npm install -g jamjs
Likewise, installing libraries is easy:
jam install jquery
Finally, you use RequireJS to include the libraries:
<script src="jam/require.js"></script>
<script>
require(['jquery'], function($){
console.log("jQuery loaded!");
});
</script>
One of the coolest features is the compilation, which will package all (optionally a specific set) of your libraries with RequireJS and allow you to load the optimized library as one optimized file.
jam compile jam/compiled.min.js
Then include compiled.min.js instead of require.js; the rest works like before:
<script src="jam/compiled.min.js"></script>
<script>
require(['jquery'], function($){
console.log("optimized jQuery loaded!");
});
</script>
If you have a library you would like added as a package, feel free to fork the project and add it in. If, in the process, you run into conflicts or issues with NPM’s package.json, Caolan wants to hear about it!
Head on over to the project page or browse the source on GitHub.
Grunt is a project that allows you to use JavaScript to configure and run command line tasks. It is installed as an npm module, and very simple to get started with. If you have Node.js and npm installed, simply run:
npm install -g grunt
Some of the common tasks include: JavaScript Lint, QUnit, minify, JSHint, and file concatenation.
Any project that has grunt.js in its root (or subdirectories) can use grunt tasks. An example configuration to use grunt lint:
module.exports = function(grunt){
grunt.initConfig({
lint: {
all: ['grunt.js', 'lib/**/*.js', 'test/**/*.js']
}
});
};
Whenever you run grunt lint
in this project, the array of files (grunt.js, every JavaScript file in lib/ and every JavaScript file in test/) will be run through JavaScript Lint.
For some extra grunts and giggles, you can check out grunt’s own gruntfile. It showcases some common uses and is very self-explanatory.
As a disclaimer, grunt is currently in beta and could possibly (will probably) change before 1.0. The good news is the changes will be based on your feedback. Head on over to GitHub to browse the source code and help get this project to its final release!
After 15 beta releases, Sprockets has finally released the big 2.0.
Why’s this a big deal? Well, Sprockets is the gem that’s powering the new asset pipeline in Rails 3.1. It can automatically process, minify, combine, and serve all of your assets in a variety of formats.
If you’re using Rails, I’d recommend upgrading your application to Rails 3.1. Even though it’s still just an RC release, the final is coming very soon. Then, Sprockets is all set up for you. If you’re not on Rails, you should check out this awesome blog post by Envy Labs. It’ll get you going. Of course, you don’t need to use the Git URL now that version 2 is out.
Check out the source on GitHub and the readme for more details.
Thanks to work by Google and Yahoo, we’re all better informed about how to speed up our web pages. For those on Apache, Google has made it easier to implement these ideas at the Apache Module level. For those running on Ruby web frameworks, Julio Cesar offers up the same goodness as Rack middleware.
Rack Pagespeed offers some HTML output filters to help you implement page optimization best practices with minimal effort. To get started, install the gem:
gem install rack-pagespeed
For Sinatra, Rack, or Padrino apps, configure Rack Pagespeed in your config.ru Rackup file:
require 'rack/pagespeed'
require 'myapp'
use Rack::PageSpeed, :public => "/app/public/dir" do
store :disk => Dir.tmpdir # require 'tmpdir'
inline_javascript :max_size => 4000
inline_css
combine_javascripts
end
run Sinatra::Application
For Rails apps, create a rack_pagespeed.rb initializer with:
require 'rack/pagespeed' # somewhere
class Application < Rails::Application
config.middleware.use Rack::PageSpeed, :public => Rails.public_path do
store :disk => Dir.tmpdir # require 'tmpdir'
inline_javascript :max_size => 4000
inline_css
combine_javascripts
end
# ...
end
Out of the box, Rack Pagespeed supports filters like the ones configured above, plus a data-uri image filter.
You can even roll your own filters.
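The data-uri filter’s trick is simple: embed small images directly in the page as base64 data URIs, trading a little extra HTML size for one fewer HTTP request. Rack Pagespeed does this in Ruby; here’s the core transformation sketched in JavaScript, with a made-up helper name:

```javascript
// Sketch of what a data-uri filter does: turn raw image bytes into
// an inline base64 data URI that replaces the image's URL reference.
function toDataUri(bytes, mimeType) {
  return `data:${mimeType};base64,${Buffer.from(bytes).toString('base64')}`;
}

// Just the "GIF" magic bytes, for illustration (not a complete image):
const pixel = Buffer.from([0x47, 0x49, 0x46]);
console.log(toDataUri(pixel, 'image/gif')); // → data:image/gif;base64,R0lG
```

The usual caveat applies: this only wins for small assets, since base64 inflates the payload by roughly a third and the embedded bytes can’t be cached separately.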
Rack Pagespeed currently supports two storage options: disk and memcached. See the well designed docs for advanced options.
We sit down with Scott Hanselman at Microsoft Build 2025 to discuss open sourcing all the things, cool stuff Windows can do, where we want (and don’t want) AI to fit into our lives, building arcade cabinets, and so much more.
Matched from the episode's transcript 👇
Adam Stacoviak: Is that how you feel then about the way you lead, and what you lead, is that you feel like you run like a mini business within?
Drew Wilson is back! It’s been more than a decade since Adam and Drew have spoken and wow, Drew has been busy. He built Plasso and got acquired by GoDaddy. He built a bank called Letter which didn’t work out…and now he’s Head of Design at Clerk and back to chasing that next big thing.
Matched from the episode's transcript 👇
Drew Wilson: Yeah. So at clerk, I’m actually on the leadership team there, and I am the head of design. So I’ve got a team of like - I think there’s 12 of us now designers. And then I’m also the head of commerce and billing, which is our new product, which has a very large team right now, as we’re prepping for launch in the next week. So yeah, lots of responsibility there. That is like pretty much all that I work on, is Clerk.
The reason I can work on Cortical and Clerk at the same time - Cortical’s designs needs are vastly less than what Clerk needs. And there’s not a team, essentially, that I have there. It’s just me. So it’s way easier to manage. Also, it’s not like a paid job. I mean, I have equity, but it’s not a paid job where I’m making a salary, or anything. And Clerk is also cool with people doing side projects. A lot of people at the company do side projects, and have side businesses. So as long as it’s not interfering with work and you’re not doing it during times that you need to be at work, then you’re good to go. So that’s my situation there. But I just really wanted to be a part of Cortical. I’m not kidding, man… It is literally the first commercially-available biological computer. And I freakin’ designed some of it… I mean, I wasn’t the guy that came up with the idea or anything like that, but I designed the whole site, and helped design a little bit, and it’s cool to be a part of that. There’s a lot of good that can come out of that technology, personalized medicine being one of them. You can take your skin cells - literally, Adam, you can have your skin cell turned into a neuron, put into a CL1, and instead of testing… Let’s say you have some brain disorder, or some chronic disorder that you’re trying to get rid of. Epilepsy, cerebral palsy… There’s a bunch of stuff. Let’s say you want help, but here’s these drugs that were developed, and they could be fatal. Like, there could be all these risks. You take that thing, that’s your one shot, right? Well, let’s say you could just put that thing, drop it into your other Adam, your mini Adam, your brain, and see how it responds. How does it affect that? If it dies off, you’re good to go. It didn’t affect you at all.
There’s just so many cool things that can come about. And then as a drug manufacturer, you’re like “How do I test this in people?” Well, you can’t; you have to get all these approvals. You have to first do mice, then monkeys, and this, and that, and it takes a freaking decade by the time you get to people. So with this, you can just start testing it right away. There’s just so many cool things that can come out of that technology. It’s very nascent, so folks don’t know exactly how it’s going to be used, what it’s going to be used for, but there’s just so many potential use cases.
And then Clerk, separately, is a freaking awesome company. The thing I like about Clerk is that if I were to run a company, it would be very similar to how Clerk runs. So I feel very at home at that company, which is really cool. And they’re like a step or two beyond what I was able to get to with my companies.
[01:14:10.00] Clerk is essentially Plasso, my old company, 2.0. It’s the exact same thing. Embeddable auth, embeddable billing… That’s exactly what Plasso did back in 2014. And Clerk is the same thing, just the React way, right? The newer style. So it’s cool in some ways. I have a lot of experience with that stuff, so it ends up working out.
Daniel Kokotajlo and the AI Futures Project lay out a potential scenario of superhuman AI’s impact, Liam ERD generates beautiful, interactive ER diagrams from your database, Mozilla takes on Gmail with “Thundermail”, algernon explains why grepping remains terrible & Vitor M. de Sousa Pereira rants about the insanity of being a software engineer.
Matched from the episode's transcript 👇
Jerod Santo: Grepping logs remains terrible
algernon first stated his public opinion that grepping logs is terrible a decade ago. The good news is he’s back, the bad news is his opinion hasn’t changed. After a quick comparison of running alike queries against 4.4 GB of JSON logs on his modern M2 SSD desktop and loading the same data into VictoriaLogs on an old Mac Mini, he finds:
What we saw here is that a computer with much more RAM, and computing power, and faster storage got absolutely obliterated by an overloaded Mac Mini that is ten years its senior. And not only in speed, but resource use too: I haven’t shown it, but working with raw text used a lot more CPU time, disk IO, and memory; meanwhile the Mac Mini barely blinked.
And not because it was all cached in memory! These are all cold queries. Cached queries are much, much faster. But how can it be that such an underpowered device runs circles around a powerful desktop?
Purpose-built software, dear reader. Purpose built software.
The moral of the story: databases are good!
Don’t grep your logs. Shove them in a log-ready database and query that. Not only will it require considerably less disk space, it will require less computing power, less RAM, and as a result, less time to perform the queries. And if that’s not enough, you can give the logs structure! You no longer need to remember the order of fields in a httpd’s Common Log Format, you can query against a named field instead. You can run aggregations, computations, build stats, correlate, join, and do all kinds of other database-y stuff. Not only does it perform better, you can do more, and do so more conveniently.
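The “query a named field instead of remembering field order” point is the heart of it. Here’s a toy contrast between the two approaches (VictoriaLogs does this at scale; the sketch below only shows the shape of the query, not the performance win):

```javascript
// Toy contrast: grepping raw log lines vs. querying structured records.
const rawLines = [
  '2025-04-07T10:00:00Z GET /index.html 200',
  '2025-04-07T10:00:01Z GET /missing 404',
  '2025-04-07T10:00:02Z POST /login 500',
];

// Grep-style: match a pattern and hope the field order never changes.
const grepped = rawLines.filter((line) => / 500$/.test(line));

// Database-style: parse once into named fields, then query by name.
const records = rawLines.map((line) => {
  const [ts, method, path, status] = line.split(' ');
  return { ts, method, path, status: Number(status) };
});
const errors = records.filter((r) => r.status >= 500);

console.log(grepped.length, errors.length); // both find the one 500
```

Once the records are structured, aggregations, joins, and stats come along for free, which is exactly the “database-y stuff” the post is advocating for.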
Beyang Liu, the CTO & Co-founder of Sourcegraph is back on the pod. Adam and Beyang go deep on the idea of “industrializing software development” using AI agents, using AI in general, using code generation. So much is happening in and around AI and Sourcegraph continues to innovate again and again. From their editor assistant called Cody, to Code Search, to AI agents, to Batch Changes, they’re really helping software teams to industrialize the process, the inner and the outer loop, of being a software developer on high performance teams with large codebases.
Matched from the episode's transcript 👇
Beyang Liu: One year ago – 2023, right? 2023, Copilot was king, they seemed unassailable… Everyone wanted autocomplete. No one even cared about chat, or context awareness in that chat, right? Because autocomplete was where it’s at. You want it to be fast, you want it to be snappy… You want it to be a little bit context aware, but people just wanted it to be fast, right? And that was the UX paradigm that was dominant at the time.
And then in 2024 the paradigm completely shifted. I think it first shifted with GPT-4, where - okay, that was a model that had a step function improvement in its ability to one-shot applications. So now it’s like, why would I sit there, autocompleting stuff, when I could literally just generate an entire mini app from scratch? It also got a lot better at incorporating context, so now you can do kind of context-constrained code generation, which we deployed, and it had massive success in the enterprise. But then there were still certain things that it couldn’t do well. The code was often sometimes broken, or didn’t quite compile.
And then when Sonnet rolled out, that was like another step functions. Now all of a sudden these things that were kind of just beyond the capability, are now firmly within – like, I can trust this thing to reliably emit JSON, for instance. That used to be – that’s a solved problem now. People used to write entire thought pieces around “How do you constrain the model output to produce a valid JSON?” These days you don’t even think about that, because the model has been trained to treat that as a solved problem. And I think now we’re seeing it in terms of in-editor agents. That is the future, and that’s what we’re building towards. A year ago, I think if you were pushing agents, there were certain companies that were like “We’re going to build the agent that eliminates the developer entirely.” I think now people recognize that they were kind of selling beyond their existing capabilities at the time. It made for a great kind of marketing moment, and they got their name in the headlines, but there was a disappointment in the actual product experience.
Now, with the newest set of models, we’re seeing this approach, just like – this is kind of the new paradigm. That’s what we’re building for. We have kind of like a wait list of people who want to get access to the thing that we’re currently building in collaboration with a lot of these frontier shops… And it works very well in large, messy codebases as well, because again, that’s the problem that we’re solving for.
And so I think the future is very bright, and I don’t – at some point I think we might plateau in terms of model capability, but at least for the past two years, there’s just been this regular cadence of step function improvements in model capabilities. It’s just been really exciting to see that, and sort of ride the wave of the development and maturation of this technology.
Antirez has returned to Redis! Yes, Salvatore Sanfilippo (aka Antirez), the creator of Redis has returned to Redis and he joined us to share the backstory on Redis, what’s going on with the tech and the company, the possible (likely) move back to open source via the AGPL license, the new possibilities of AI and vector embeddings in Redis, and some good ’ol LLM inference discussions.
Matched from the episode's transcript 👇
Salvatore Sanfilippo: Now I see younger developers not really caring about this, but we basically started to write code, and starting to be mini lawyers, understanding all the subtle things about GPL, BSD, MIT… So yes, I care about license, because the way that you express your willing about what others can do or cannot do with your code.
Also, I think that without the copyleft idea, the computers technology could not accelerate to the point that it accelerated. Because when open source was created, basically in order to create a startup, you had to buy complicated workstations, Unix licenses, database licenses… So it was impossible to have the landscape that then it was created by the open source movement. So for me, licensing is very important.
Initially, what I did with my software was to use GPL. Then I started to realize that GPL had two problems. It created problems to myself, because sometimes I thought – and if this becomes big, I want to have a business model. And I don’t want to get some paper signaled by all the contributors that I had so far. So I started to switch to BSD, saying “This is a protection for me. Also, it’s a protection for other people based on the environment they use my software.” And sometimes it can be a problem, even if they don’t want to violate the license, and stuff like that.
[00:32:00.06] When I started Redis, I was very into this BSD stage of my life, so I released the Redis as BSD also because there is also behind that an idea about accelerating society, improving society to be more important than basically what is going to happen to me, in some way. However, then it must be said that the cloud situation changed the landscape, because even if it was very complicated to create a product business model, even before - basically, Red Hat was the only one that really succeeded in this kind of game in the open source, and a few more… But still, if you wanted to sell services, you were the to-go person as a creator of the software.
And then after AWS everything changed, because there was no longer need of somebody supporting you, because it was handled for you. And also, you couldn’t even compete with the others, because in order to compete in cloud services, you have to pay for the instances, and they have them for free. Also, even the billing is complicated. There are many companies that just because of billing will just get what AWS has, and stuff like that.
So I understood at some point that the BSD license that I picked, with the changing world of the software, created serious issues to create business. Now we can go a step back and say “But why it should create business? It’s an open source software.” And I believe that more or less every complicated open source software has in one way or the other an economic system behind it, because it’s a lot of hard work for many years. Either people are paid very well, or they will not afford to do all this kind of work. So I believe that both things are needed to redistribute to the community, and also.
And inside Redis, they didn’t want to change the license, and I didn’t want for a long time… But there was this discussion, but it was some kind of taboo. So it never happened as long as I was there that somebody asked me “But what do you think? We want to change the license.” It’s a conversation that didn’t happen. I just created the module system, and Redis the company started to have the modules that were enhancing Redis capabilities, and that was it.
Then when they changed the license, I understood that it was basically some kind of a forced move in some way, because with BSD it was too complicated to compete in this market. However, now we are realizing - me and also inside Redis - that SSPL was not accepted by the community in some part. And we care about this thing, because you know, I don’t believe that SSPL is a terrible license, because it’s very similar to other GNU licenses. It’s just a couple of sentences… But the reality is that culturally it’s not accepted. And so we are starting to discuss inside the company about this problem.
[00:35:52.25] Also, one important thing is that because of that, we are going to add in Redis a lot of the features that are now only for the paying users. For example, now I’m working a lot to vector sets, which is the first fundamental data type that Redis gets after many years… And it will be released in the community edition, like normal Redis. And like everything I did in the past, it’s no dependencies, so it compiles because the data structure – so the HNSW data structure for vector similarity I wrote from scratch, the quantization I wrote from scratch, the hybrid search… So it’s some code, it’s like 6,000 lines of code in total, the other Redis data structure. So you can open the code and understand how it works. And it was impossible before to do that, but still, maybe there are setups that make everybody happy, enough protection and the community will be more happy. So there is an ongoing discussion inside the community now. I’m not sure what will happen, but we are focused on the problem.
It’s Kaizen 18! Can you believe it? We discuss the recent Fly.io outage, some little features we’ve added since our last Kaizen, our new video-first production, and of course, catch up on all things Pipely! Oh, and Gerhard surprises us (once again). BAM!
Matched from the episode's transcript 👇
Adam Stacoviak: I think thick is a good thing. Anyways, I won’t go back there. It’s an exhaustive, thoughtful comment, that I haven’t even read the whole thing yet, but I was like “Wow, there’s this super-huge comment that somebody’s like actually talking about relevant things, and not how we suck.” So that was cool. I loved that.
I was pushing for this, because I was like “This is what we need to do. There’s a whole audience there that we can tap into, that we’re not.” And clips are great, but they’re not the full-length podcast. I’m now sad that when I share with people that we’re on YouTube, that they’re like “Hey, did you just start producing this podcast?” I’m like “Nah, man. It’s been like forever, basically.” And so we have this huge backlog that’s not there. And that kind of makes me sad, because there’s a lot of visuals and a lot of just like seeing the reactions, like Gerhard mentioned, with Tim… Just being able to see his pause, or his thinking, or my thinking whenever I’m talking, or him pointing to his mini stacks behind him… I think that’s – it’s not for everybody, but I think there’s a large majority of people who are gravitating more and more towards that, who do listen on YouTube, pay attention when they want to, but when they want to, they can go and look at the screen. And that’s been my use case for it personally, and so I wanted that for us for so long, and I just felt… Not so much bored, but there was a missing, necessary, humanistic component that was visual, that wasn’t there. And so when you’re audio-only, I feel like you’re stuck in this box, and I feel like we’re now – we’re like the genie out; we’re the cats out of the box, so to speak. We’re able to explore the bigger world of YouTube, and capture not so much more of an audience, but I think there’s a lot of people that are waiting, wanting what we produce. And now we’re there, in full form.
Jerod and Adam use Chris Kiehl’s post on development topics he’s changed his mind on (over the last 10 years) as a proxy for discussion on dev things they HAVE and HAVE NOT changed their minds on.
Matched from the episode's transcript 👇
Jerod Santo: A Mac Mini.
Techno Tim joins Adam to catch up on the state of Homelab for 2025, the state of AI at home and on-prem (AI Homelab) and where that’s heading, building a creator PC, choosing the parts for your build, GPU availability, Windows being user hostile, and why Tim is happy to be using Windows, Mac AND Linux.
Matched from the episode's transcript 👇
Adam Stacoviak: That’s right. Or the RAM. Did you see that speculation about the Mac Mini basically being free, the lowest tier basically being free? Because if you, I think, added a couple of things to it, it’s like double the price?
Jerod & Adam discuss Nvidia’s recently announced personal AI supercomputer, Waymo’s latest infinite loop, what’s involved in getting a “modern” terminal setup, and whether or not AI has gone mainstream… warts & all!
Matched from the episode's transcript 👇
Adam Stacoviak: That is where his – there’s a Tesla headquarters in North Austin, there’s a Tesla something or other about 20 minutes from me here, that’s like just massive… And then in Bastrop they have their – what is it called? It’s like a city; like a little mini city he’s building. It’s crazy what he’s doing here. I imagine that whatever he may launch will probably launch here first, because so many folks are migrating to Texas.