Ship It! – Episode #60

Kaizen! Post-migration cleanup

gettin' cozy with Jerod & Gerhard


In our 6th Kaizen, we talk with Jerod about all the things that we cleaned up after migrating changelog.com from managed Kubernetes to Fly.io. We deleted the K8s cluster, moved wildcard cert management to Fastly, and moved all our vanity domain certs to Fly.io. We migrated the Docker Engine that our GitHub Actions workflows use - PR #416 has all the details. We did a few other things in preparation for our secrets plan. Thank you Maikel Vlasman, James Harr, Adrian Mester, Omri Gabay & Owen Valentine for kicking it off in our Slack #shipit channel.

Gerhard’s favourite improvement: the new shipit.show domain.

Featuring

Sponsors

Sourcegraph – Transform your code into a queryable database to create customizable visual dashboards in seconds. Sourcegraph recently launched Code Insights — now you can track what really matters to you and your team in your codebase. See how other teams are using this awesome feature at about.sourcegraph.com/code-insights

LaunchDarkly – Fundamentally change how you deliver software. Innovate faster, deploy fearlessly, and make each release a masterpiece.

Retool – The low-code platform for developers to build internal tools. Some of the best teams out there – Brex, Coinbase, Plaid, Doordash, LegalGenius, Amazon, Allbirds, Peloton, and so many more – trust Retool as the platform to build their internal tools. Try it free at retool.com/changelog

Akuity – Akuity is a new platform (founded by Argo co-creators) that brings fully-managed Argo CD and enterprise services to the cloud or on premise. They’re inviting our listeners to join the closed beta at akuity.io/changelog. The platform is a versatile Kubernetes operator for handling cluster deployments the GitOps way. Deploy your apps instantly and monitor their state — get minimum overhead, maximum impact, and enterprise readiness from day one.

Notes & Links


Gerhard & Jerod

Transcript


Changelog


Welcome, everyone, to Kaizen 6. It’s slightly different this time, and it’s for you to decide whether it’s better or worse in this format.

Or how much better.

Or how much better, exactly. Just me and Jerod this time; we’re keeping it cozy… Even cozier than last time, Lars…

Ooh…

So Lars gave us feedback in our Slack channel that episode 50 was a very cozy Kaizen. Nice conversation, collapsing into laughter, and lots of assorted TikToks. So I’m thinking more laughter this time, because we don’t have serious Adam, so we can go crazy… There’s no one to stop us.

Right. We can go crazy. He’ll never try to bring us back to center.

Yeah, exactly. If we go off the rails, we will not recover from that. [laughter] So yeah, cozier. We’re getting cozier, Lars. And we blame you for getting cozier, because you gave us the idea.

What does cozy mean exactly? Because I’m thinking like fireplace, bearskin rug, coffee…

I didn’t ask him, but I think he may have been referring to that Swordfish scene… [laughter] I don’t know. Lars, can you clarify for us, please, in the comments below, what do you mean by cozy?

Yes. Just how cozy is this getting?

But I think just like getting closer. Like, closer.

Okay. Intimate.

Not as close as Swordfish. Yeah, exactly.

[03:59] But not that intimate.

Not too intimate.

So I want to start – I could hardly wait; it’s been a week since we had this, but I’ve been mentioning to everyone…

You’re pretty excited about this.

…my favorite biggest improvement… I’m really, really excited about this. So we have our own vanity domain.

And that domain is…?

Shipit.show.

Shipit.show.

I’ll repeat that, Shipit.show.

Pretty cool, right?

So some people, especially non-English speakers, thought I said s**t show. I didn’t say that. [laughter]

We did not say that. This was your concern from day one, wasn’t it?

Shipit.show it is.

Yeah. Have we had any complaints about that particular aspect of the show so far? This probably crossed nobody’s mind until just now… And then we’re like “Oh, man…!”

Oh, dang it!

Now they’re gonna hear that every time you say it.

Okay, we may need to cut that out. [laughter] So Shipit.show, and if you do Shipit.show/60, that will be this episode.

How nice is that?

So cool. Let’s talk about vanity domains writ large for a minute, because it’s like… Short URLs - they used to be cool. Remember tinyurl.com? Going way back, there was a time when URL size mattered… And shorter was better.

And for us it’s still true, but most people don’t care about that anymore, right? I think it was probably because of Twitter… It used to be that every character of a long URL inside a tweet counted toward your character count. And we had 140 back then? What did we have? Yeah, we had 140 characters.

Yeah. Then they went to 280, but you’re right, links weren’t being shortened automatically back then, so that wasn’t great.

Yeah. So if you had a really long URL and you were trying to link to it, you couldn’t even put any content into the tweet. So tinyurl.com I think was the first one that I ever used… And then bit.ly became huge. And bit.ly had a really cool feature where you could do, of course, bit.ly/whatever. But you could also sign up and get your own custom domain. And that’s where us nerds really started to nerd out, in like “What’s the shortest, coolest –” I tried so hard to get san.to… For Tonga, because of the .to domains. And somebody in Tonga owns it, and I have a recurring reminder to email them once a year and be like “Hey, can I have it now?” Because this still would be cool, right? San.to.

That’s a cool one.

But nobody cares as much anymore, because now Twitter has it built-in, pretty much every social media thing is building it in… And it just has not been all that necessary. But when you podcast, when you try to point people to places audibly, and you have to say “Go to changelog.com/ship-it/60”, it’s so much cooler to say “Shipit.show/60”.

Yeah, that’s true.

Now, we held off on this one for a while, because we were trying to get shipit.fm… Which actually - you like this one better, don’t you?

I do, because I like to think of this as a show. I really do.

Right. Yeah, for us it was just consistency, because we’ve used fm for everything else. So every one of our other podcasts had fm… And we’ve spent – I mean, it’s probably been a year now trying to get it. And then also trying to get ship.it, which we couldn’t get either… But Shipit.show is cool.

I think so, too. I think it’s a good third option, which was also available, and that’s what matters in this case. I mean, we’re still open to getting shipit.fm. So if someone knows the person that has it, or if you are the person that has it, and you have a reasonable ask price-wise…

Yeah, we’ve got money… Though not that much money.

[07:53] Exactly, yeah. We’re not millionaires yet. When we are, we can pay you more if you want. Or even better, if Ship It makes many millions, we don’t mind. But for now, let’s just be reasonable. Shipit.fm would be cool. Ship.it would also be cool, but it was a bit more complicated. Actually, we didn’t get any replies.

Super-short…

Yeah. We tried, we didn’t get any replies. And now Shipit.show - for me, the most important thing was that when you announce those new Ship It episodes in Slack, we don’t unfurl the links, which means that it’s just a link and you don’t know what it is… But if you look at GoTime.fm, or JSParty.fm, or even Changelog.fm, you see a preview of the episode. And I really miss those, because it’s so easy to miss when a new episode gets announced.

They are nice. And the reason why it doesn’t work without the vanity domain is because if we’d unfurl every changelog.com URL, it’d be like the same thing over and over again. So we just permanently banned unfurls on the changelog.com domain. But it’s nice for the podcast episodes, because they actually have a pretty nice unfurl, which shows you the album art, it shows you the title, the description… But those are unbanned on vanity domains, and Ship It, for the first 57-58 episodes (I can’t remember when we turned it on) did not have that. And Gerhard - it was just eating you alive, man. It was just eating you alive.

It was, yeah. I really wanted that. Like “Seriously, surely this Kaizen…? Nope. Not this Kaizen. It’s not Christmas yet.”

“No, not this Kaizen…” So maybe we talk very briefly about how these things work, because it sounds like there’s a problem… I think I know what the problem is. You put in our notes there’s a “Something went wrong” page if you go to shipit.show/61, for example, once this one goes live… It’s because there’s no content there.

Yeah, the same thing happens for every episode.

Every 404, I think, actually.

And maybe every 404, you’re right. Yeah.

So I was digging into that a little bit… It’s like we’re serving the 404 response status, but for some reason we’re not serving the content correctly, the actual template… So that’s in the app. I thought maybe it was a way that Fastly was not doing it right, but I took Fastly out of the equation and it still happens.

So I think there’s just something – it used to work. Somewhere along the line, the way we serve the 404 HTML just doesn’t… Like, it tries to download as a – maybe the MIME type is wrong. I don’t know, I have to look into it. But that’s really what it is. It’s not the vanity domain that’s a problem, it’s not Fly, it’s not Fastly. It’s just like the app serves the 404, but can’t serve the content for a 404. So it just tells you something went wrong.
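The suspected bug can be sketched in a few lines: the status code is right, but the body isn’t declared as HTML, so the browser either shows a generic error or tries to download the response. This is a hedged illustration, not the actual changelog.com (Phoenix) code; the handler names and HTML are made up:

```python
# Hypothetical sketch of the 404 symptom described above.
NOT_FOUND_HTML = "<html><body><h1>Page not found</h1></body></html>"

def handle_404_broken():
    # 404 status, but no Content-Type header - the body may be treated
    # as an opaque download instead of being rendered as a page
    return 404, {}, NOT_FOUND_HTML

def handle_404_fixed():
    # same status, but the body is explicitly declared as HTML
    return 404, {"Content-Type": "text/html; charset=utf-8"}, NOT_FOUND_HTML
```

In a Phoenix app the equivalent fix would likely live in the error view that renders the 404 template; the point is only that serving the status and serving the content are two separate things, and here the second one broke.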

I always thought that was an improvement that we were waiting for someone from the community to do. It wasn’t really an Easter Egg. We knew about this for a while; I’ve known for at least a year. I’ve seen it before. But I always thought that someone would pick up on it and would want to improve it. It’s a great small improvement. It hasn’t happened… So at what point do we improve it ourselves?

Yeah, exactly. Or put a bug bounty out there, or something.

Yeah. I mean, November is still far away, but that would be a nice one for November.

Oh, you mean for Hacktoberfest?

That’s October though… [laughs]

Oh, October. Sorry. I’m thinking Movember… No. Mustache? No, no, no.

No mustache. [laughter] Yeah, no bounties for mustaches. But yeah, maybe we could put something out there. I mean, honestly, it’s probably a 15-minute fix once I actually go into it… I just haven’t – I’ve known about it for a while as well… I just can’t be bothered sometimes. I’m like “So it’s a 404… You’re not a page that exists…” I would much rather have a nice thing there, but… Now that it’s on Kaizen, I’ll probably fix it, darn it.

Yeah, Kaizen-driven development. KDE.

Yeah, Kaizen 70… Dang it. Added to the list.

I really like that. [laughter] That’s cool.

That’s how I do most of my development.

Yeah, Kaizen-driven. I love that. Me too. I try to, as much as I can. So Shipit.show/50 does work, because it’s already out… That was the previous Kaizen.

There’s a lot of cool – I pasted you a link… Maybe we can link to the codebase where it does this as well. The way the vanity redirector works - there’s a bunch of other cool little URLs you can use.

Ah, yes.

[12:00] So like Shipit.show/apple will get you to the Apple Podcasts URL, which is much longer. So this is like a classic shortener. /spotify. /android. /merch if you wanna get to our merch shop. These are just nice ways to have it in your mind when you’re talking to a friend and you’re like “How do you listen? On Apple?” “Yeah.” “Shipit.show/apple” will get you to the right spot, and you don’t have to know that. So we have like 10-15 of those, which are pretty cool, I think. I use them quite often. And we link to them on YouTube, and in our Twitter pinned tweets, and stuff. Just clean.
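The redirector behavior described here is essentially a slug lookup with a numbered-episode fallback. A minimal sketch - the slugs come from the conversation, the /60 pattern comes from earlier in the episode, but the other target URLs are assumptions, not the real config:

```python
# Hedged sketch of a vanity-domain shortener like shipit.show.
# Target URLs below (except the episode pattern) are illustrative only.
REDIRECTS = {
    "apple":   "https://podcasts.apple.com/ship-it",  # assumed URL
    "spotify": "https://open.spotify.com/ship-it",    # assumed URL
    "merch":   "https://merch.changelog.com",         # assumed URL
}

def resolve(path: str) -> tuple:
    """Return (status, location) for a shipit.show path."""
    slug = path.strip("/")
    if slug.isdigit():  # shipit.show/60 -> the numbered episode page
        return 302, f"https://changelog.com/ship-it/{int(slug)}"
    if slug in REDIRECTS:
        return 302, REDIRECTS[slug]
    return 404, ""

print(resolve("/60"))  # (302, 'https://changelog.com/ship-it/60')
```

The real implementation lives in the changelog.com codebase; this just shows why saying “Shipit.show/apple” out loud works - it’s one table lookup away from the long URL.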

One URL which reminded me of something that one of our listeners gave feedback on is Shipit.show/request, which people can use to request an episode, or propose an episode. It works the same way. So this was - and I’m trying to find it - [12:51] He said “Why on Earth would I possibly want to create a Changelog.com account just to send you guys a little tip, pointer, suggestion for an episode? There’s simply no point in not letting people come up with suggestions easily, without having to go through hoops for it.” So what do we think about that?

Well, I have two responses. The first one to Leo is “How did you submit this feedback?”

An email.

So just write your suggestion into that email. I’m being spicy a little bit, but my point is, there’s ways to reach us. You can tweet at us, you can email us, you can be in our Slack… We are very open to communication. This is an official channel, and we get tons and tons of these requests. I mean, hundreds. So while it’s a little bit of a pain on your end to sign up and do the submission form, that’s kind of on purpose… To just put a little bit of a barrier between us and requests, just because we get so many.

How bad do you want it, Leo? If you want it badly, you will create an account. Okay, so again, still spicy; I’m continuing down the same alley. But let’s try a different one. From my perspective as a show host, I really like when I see those requests in the admin area, because it allows me to – when I basically create a new episode, I can use the request that you created to generate the episode. And that helps me.

The other thing that helps me, and I think it basically helps you, the ones that request the episode, is you’ll get notified when the episode goes out. I’m not sure whether we notify when the status changes for a request…

Not automatically. So you can decline a request and send a message. There’s multiple ways. You can decline it silently, you can decline it with a message and let them know “This didn’t work out. Here’s why.” You can also fail it with a message, and that’s like handwritten. So it’s basically a markdown-based email that we send out, that says “This one didn’t work out. We tried to make it.” I just sent one of those today, actually. We had a request from two years ago. I was working on it. I had a yes, we had a reschedule etc., life came up, the pandemic hit… It fell by the wayside, and I finally failed it today with a message that was like “Look, we worked really hard on this, I wanted to do this show; it’s just not working out. Maybe another time.” And we can send that message back.

But when you set it to accepted, or you set it to “scheduled” and stuff, we don’t wanna send people a bunch of emails, plus it still might fall apart… So the only thing we send is if you tell them “I wanna send a message” or when the show does go live, we automatically send an email at that point.

So in summary, Leo, if you want a quick suggestion, you can do it via Twitter, @ShipItFM, @Changelog, @gerhardlazu… All work equally well. You can also send an email, as you have, which is gerhard@changelog.com. I always read those. Or any other email that is out there from me. It will still get to me.

And if you go via the website, and if you do create an account, it helps us to keep track of those, it helps us to basically manage them better, and then you will get a notification when the episode goes live, when the episode goes out as a request episode.

I like to see those nice “requested” tags on them. I find that helpful. And the last one which we had - and it just shows how well I remember it - was the Docker Swarm request. I forget who exactly requested it, I forgot the name, but it’s very easy to go and see that… But that’s the last one that was requested, and it’s just a nice flow.

[16:23] Yeah, it’s really nice. And sometimes people will email in, like for the Changelog specifically, and have an idea… And we’ll actually say “This is a good idea. Can you go open up a request on the website?” Because that way it actually sits in our queue of ideas, right there in the admin, with everything else. It just has a much higher likelihood of becoming real, versus on Twitter or versus an email. But those are still avenues for starting a conversation.

So Leo, you don’t have to create an account. You can try those other ways. But the form is there for reasons - for our workflow, and also just as a little bit of a bump in the road for people who would spam us otherwise. If it was just completely open, we’d get hundreds a day, versus - now we’re getting hundreds a month.

Cool. Now that we addressed that - and I’m happy with how we addressed it; thank you, Jerod - we can go back to talking about episode 50, and what other topics we’ve covered since episode 50.

For me, the big ones were episode 51, Shipit.show/51 (you already know how this works), with Mark Ericksen, where we talked about the clustering part and the multi-region PostgreSQL integration, which I was convinced I would do by episode 60, and it hasn’t happened. It just shows reality versus plan. And this is normal.

With other things, which are more important - we’ll get to them in a minute. But that’s the one thing that we talked about, I didn’t have time to action on. Anything that you wanna mention about episode 51, Jerod? First of all, did you listen to it?

I did.

I listen to most of them. I haven’t listened to 59 yet, which you’re gonna talk about next… I listened to 51, I enjoyed it quite a bit. I think the biggest surprise to me during that one is that Postgres is not really managed, like I thought it was. It seems – I wouldn’t wanna call it an afterthought, but it’s kind of like… It was a good idea, of like “Well, we have these runtimes, and it’s just this version of one.” It’s just like anything else - you have an app container… What do they call them? Containers? Runtimes? Pods?

I think it’s like an application.

Okay, apps.

I mean, the application in like the Fly context - there’s a special one, PostgreSQL, where you can create a cluster, and the Fly CTL, the Fly CLI, it’s slightly more integrated, you can scale them better, you have visibility into back-ups… Things like that.

Yeah, it’s kind of like any other Fly app, but with special privileges. Special benefits.

Correct. That’s right.

And we were thinking more as like “This is our Postgres service that we’re gonna manage for you.” And there is some management going on, but it was less formal, and maybe a little bit less than I was expecting, and kind of hoping for.

Same here.

Yeah. So that’s what I remember from that episode. And then I remember you giving an ode to Erlang at the end, which I very much enjoyed.

Yeah, I remember that coming out like as a special – I’ve seen it.

Yeah. I put that out as a clip.

That’s how I know that you listen to parts of it, because there’s a clip appearing on Twitter, and I know that that’s something that resonated with Jerod, and he thought it was good enough to share. That’s how that works.

So related - again, this is the episode which hasn’t come out yet as we record this. That is episode 59, with Ben Johnson.

Right. Very much looking forward to that, by the way.

Oh, that was a good one, yes. SQLite instead of PostgreSQL. That was a great one.

I guess by the time people are listening to this, they’ve heard it. But I haven’t heard it. So you can’t really tease it for them in like past tense, and I haven’t heard it yet… So I don’t know, just move on? [laughs]

That’s exactly what I’m thinking, move on.

Okay, moving on, moving on… Fair enough.

Generating some interest for you… You know, trying to prepare you for it when it comes out, because I thought it was very good… But it’s more like future-looking. Again, I don’t want to spoil the fun for everyone, but it’s more future-looking for us, and it’s also some important things that we would require in Fly, in the context of Fly, for us to be able to use it… Which is coming, but it’s not there yet. So that’s great, we know that we’re about to ride the wave which hasn’t even appeared. My favorite thing.

So yeah, we can move on.

Speaking of moving on, one thing that we definitely moved from is our LKE, our managed Kubernetes. Because 30 days after the migration I deleted – I won’t say everything; I definitely deleted the deployment, and everything that we were running on it… And one thing which I wanna mention is the importance of keeping things around just in case.

So the reason why we didn’t delete it right then, or shortly after, is that if there’s a problem and we need to go back, I wanted to make sure that we have something to go back to. So I’m thinking of it as a very long blue/green. Think months. In our case I think it was like two months.

So if within two months we realize that this plan is not working as we think it is, reality vs plan, we always have the option of going back, or stopping the plan, depending on where we are.

So a month before, as we were standing up Fly, we were still running on the old one, still running on LKE, which was the current production, and after we went to Fly, we always had the option of going back. And that was like a very deliberate choice. Luckily, we didn’t have to use it, but it’s one way to make sure that you always have something to fall back on. And I keep mentioning my love for running two things - at least two things - and this is what it looks like in practice.

So how did you pick 30 days or 60 days? Because for me, always the question is “How long do I end up running this?” And then I turn into a hoarder, and it’s like six years later and I’ve got all these services that I haven’t been using for a long time, but I’m still running them just in case.

For me it’s the Kaizens.

Okay…

So that’s why – the recurrent theme is really powerful in a lot of things that I do. And the reason why I have these Kaizens is because they force us to do certain things.

So what do you say to somebody who doesn’t have a cadence? Like, they’re not gonna record a podcast once every ten weeks to Kaizen.

How do they do it? They can’t count on –

Get a podcast and record it every ten weeks. [laughs]

Yeah, everybody needs a podcast. “Get a podcast” is the solution.

Yeah, that’s the conclusion. [laughs] So to be honest, I would set myself a reminder, whatever that looks like for you. So remind yourself to do something in the future. Whatever app you use, whatever system you use, set yourself a reminder to do something.

Even if that reminder is simply to reevaluate. It doesn’t mean you’re necessarily going to delete it, right?

Exactly.

You’re like “Okay, after 30 days I’m gonna have this thought.” Maybe it takes 30 seconds, maybe it takes half an hour to actually go ahead and execute on it… But remind yourself to think about it later. “Is it time to delete this now, or do we wanna keep it there for another 30?” It’s usually a pretty easy decision to make at the time. What’s hard is making that decision six months in advance.

Exactly. And I think people kind of know when it’s been long enough. So for some, it may be seven days, for others it may be a month. Others still may need more, a longer period. And that’s okay. All those options are valid.

In our case, it is arbitrary, I have to say. 30 days - it just so happened, it was roughly 30 days. Maybe it would have been 31, 32. I can’t remember exactly the time.

But there is a transition period, which starts whenever the new idea starts, and you still have to run your old system, or your current system, until you’ve migrated onto the new one. But when you have migrated, your work isn’t finished. The migration is done, in that you’re running on the new system, but the old system is still around, because you want to give yourself a plan B if things go south. And it has happened for us. I’m not going to go into details, but episode 50 - we were delayed by a week, or two weeks, something like that.

So in my case, I kept it around just in case we needed to go back, just in case we discovered something that we didn’t know, until we switched across. And for me, after 30 days it was just a reminder, because I was thinking about the Kaizen, and what to do next… And this is actually linked to our TLS certs. So we had cert-manager running in LKE, and cert-manager was syncing – actually, first of all renewing the wildcard certificate for changelog.com. But also we had – I wanna say the job… Was it the job? Yes, it was the job which was keeping it in sync with Fastly.

So I knew that I had to migrate the certificate somewhere, and what I did - I just delegated the management of the wildcard changelog.com certificate to Fastly. There’s a limitation in that only one provider can manage the wildcard certificate, because in DNS you end up creating a CNAME record, and a name can’t have multiple CNAMEs; it can only have one. So there’s that limit. We’d have been happy with TXT records, but there you go - it’s just a limitation of how that’s implemented. So we cannot use both cert-manager (or Certbot) and Fastly. We could only have one. In our case we had cert-manager, because we were managing all our certificates in LKE, using cert-manager, for all the vanity domains.
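The DNS rule being described is that a CNAME must be the only record at a name, so a delegated validation name (e.g. an `_acme-challenge` record for the wildcard) can point at exactly one provider, while TXT records could have coexisted. A rough model of that rule - the record names and targets below are illustrative, not the real DNS zone:

```python
# Hedged model of the DNS constraint: many TXT records are fine at one
# name, but at most one CNAME, and a CNAME excludes all other records.
class DnsName:
    def __init__(self):
        self.txt = []      # multiple TXT records can coexist
        self.cname = None  # at most one CNAME, excluding everything else

    def add_txt(self, value):
        if self.cname is not None:
            raise ValueError("name already has a CNAME; no other records allowed")
        self.txt.append(value)

    def set_cname(self, target):
        if self.cname is not None or self.txt:
            raise ValueError("a CNAME must be the only record at a name")
        self.cname = target

# Delegate wildcard validation to one provider (hypothetical target):
challenge = DnsName()
challenge.set_cname("acme.fastly-validation.example.")

# A second provider (say, cert-manager) can't also claim the same name:
try:
    challenge.set_cname("acme.cert-manager-validation.example.")
except ValueError as err:
    print(err)  # a CNAME must be the only record at a name
```

That exclusivity is why the wildcard had to be handed wholesale to Fastly rather than shared between Fastly and cert-manager.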

Right.

[26:25] But when we migrated to Fly, the easiest thing was to let Fly manage the certificates for the vanity domains. So shipit.show, for example, to configure it was super-easy. Really, really easy. And the wildcard ones - I can’t remember why we don’t do it. Oh, yes –

Because you have to upload it, right?

Exactly, because something needs to upload it. So then we have to manage that.

Fastly has to have it, and Fly has to have it, if it was going to be managed by Fly.

Exactly.

Whereas if Fastly just manages it, Fly doesn’t care about it, for the application to run.

Exactly.

So *.changelog.com is on Fastly, and everything else we do, which is mostly the vanity domains, is on Fly, in terms of DNS – or not in terms of DNS; in terms of certs.

Yeah, correct.

That’s not too bad. I mean, it’s not super-clean, but it’s not super-dirty either.

The thing which we’re still missing is documenting that… Like, to basically capture that. And this goes back to episode 44, with Kelsey, where he mentioned that.

Kelsey would be so upset with you on the documentation front.

I know. I’m sorry, Kelsey, I’m still working on that. I want to show you my list; I know that you wouldn’t care, but it’s there, trust me. [laughs]

Here’s a workaround. All these Kaizen episodes - you know we transcribe the entire conversations. So in a way, you’ve just documented it right now.

True. That’s true. But it’s spread across like four or five different episodes.

Well, he didn’t say the documentation needed to be good, he just said you needed to document it, right?

That’s true, that’s true…

So we’re working on it…

Yeah, you’re right.

Just speak it out loud and it documents itself. It’s self-documenting podcasts.

I like that. So I just need to record myself waffling for like two hours, and then you have all of it. [laughs]

So once again, the solution is everybody needs a podcast. Their documentation needs would be taken care of.

There you go. We just keep coming back… I mean, if only we didn’t already own a podcast, and a podcast network…

Right.

And you need to run that podcast on our infrastructure. That’s the other requirement.

That’s right. So you talked about having two things hanging around, and how long it’s gonna stay that way… And I do tend to be the cut it off too early type more so, and I have regretted that in the past… But I’m somewhat of a purger. I like to purge. I don’t like things that infinitely grow, such as blockchains, and Twitter.com… [laughter] Because the tweets just keep coming; you’re never gonna get to the end.

What about podcast episodes? Do you like those to keep growing? [laughter]

No, they need to stop at some point.

When do we reset the Ship It number?

Right…

With some exceptions. This is one.

Yes, yes. Podcasts should just always keep going. You have two directories - 2021, 2022, and there’s like a 2022.fly… And then there’s others, Junk, going on… [laughter] And I just wonder –

Don’t call it junk…

Okay, one man’s treasure is another man’s trash… Or I can’t remember how that saying goes. But for me, the only reason why this bugs me is because I need to copy the fly.toml into the root directory, unless I wanna have another tmux session in the subdirectory… And I don’t. I don’t want another tmux session. So I’m wondering why we need to have those years anymore. I feel like we’re kind of beyond that. I’m just curious when is that whole thing gonna get cleaned up.

I see. Okay. [laughter] That’s a great question, Jerod. I’m so glad that you joined this episode…

Very cozy in here. It’s getting very cozy.

Right. Yeah, because we’re talking about junk, so you have to, right? [laughter] There’s no way to talk about it without getting cozy.

Oh, my…

Right. So…

We need to put that Explicit tag back on this one…

Where is that? I think we need it now. [laughter] I think Adam is talking out loud, and he doesn’t realize he’s not part of the podcast as he’s listening to this…

It’s getting hot in here.

[30:04] So 2021 - there’s one thing that we still need there… So while I did delete the infrastructure, the directory is still there, with how we did the config. I just found myself referencing something for James Harr. So James Harr was following up on episode 58, the one where we talk about how to keep a secret with Rosemary and Rob, the Vault episode… And I gave him an example in the 2021 directory of how we integrate LastPass - secrets in LastPass - with Kubernetes. Or how we used to do that. So it was helpful for that reference.

Gotcha.

But I could have done it in a number of ways. I didn’t have to keep it around. So in the 2021 directory, the most important thing is actually our CI integration. So we’re running Dagger; that runs locally. It also runs in GitHub Actions. And if you remember, we did that to migrate from Circle CI, and that was episode 33. So that’s when that happened.

So the only config that we have is the Dagger config, which describes everything that needs to happen in our CI. That config is pinned to Dagger version 0.1.0. Since then, Dagger 0.2.0 has come out, and that changes a couple of things. I didn’t have time to rewrite the config from 0.1 to 0.2. I should have, but there were other things which kept trumping it. So that’s the most important thing that would need to move from 2021, from that directory. 2022 was for the new Kubernetes cluster, which, as you remember, is the one that we had around when we did the migration.

Right.

We felt we’d go to Fly, we couldn’t go to Fly, episode 50 has all the details… We ended up on this new 2022 Kubernetes cluster. So we could delete that directory, we don’t need it anymore, but what we do need is the 2022 Fly. That directory contains all the config for the Fly setup. The reason why I use the year - because if you remember, every year we used to upgrade those.

Yes. Legacy. It’s legacy.

Exactly.

Fair enough.

And then eventually we would delete them. So I think that we will be able to delete the 2021 one as soon as I migrate Dagger. By the way, 2022 Fly is already using the new Dagger, and I tried to configure it to work with Fly. But as I was doing the migration to Fly, I realized that there’s quite a few things which I need to figure out, and I had to separate the work that I had to do on Dagger versus the work that I had to do for this migration.

So at some point I said “Okay, what’s more important?” So I refocused all my energy and effort on the migration, and I left the Dagger integration with Fly secondary. So the Dagger in the 2022 Fly is already at 0.2. That means that the two are incompatible. So until that migration happens, I can’t change it.

Another thing which happened in the 2022 Fly directory - I’ve added the Docker Engine integration. This is a nice segue into pull request 416, where I explain why we had to deploy and use Docker Engine on Fly.io.

So this is the one where we had those connecting to a Tailscale that you were running at your house, or in some –

Exactly. It’s right there.

I’ll put a picture. I have a fanless NixOS bare metal server. There’s no fan; not even in the PSU. I’ve been waiting to do an episode on that.

No fans.

Yeah, no fans.

Is there a heat sink on the CPU?

[laughs] I thought maybe you were going super – no cooling.

Not quite, though - that’s actually what makes me hot in this room. [laughter]

Here I thought it was all of our cozy talk…

[33:56] Yeah. [laughs] It’s that fanless server. So anyways… It was connecting via Tailscale to that host. The problem with Tailscale is that when you generate an auth key for the GitHub Actions runners, the maximum TTL is 90 days.
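For context, the runner join step looks roughly like this - the secret name and hostname here are placeholders, not Changelog’s actual setup:

```shell
# Hypothetical sketch of a CI runner joining a tailnet.
# TS_AUTHKEY would be generated in the Tailscale admin console, where
# the maximum key expiry you can select is 90 days - hence the
# recurring breakage described above.
sudo tailscale up --authkey="${TS_AUTHKEY}" --hostname=ci-runner
```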

Oh yeah, this one bit me. I was trying to deploy some stuff –

A couple of times. Gone.

Tell us about it.

I had to go “Gerhard…!” Well, I’ll tell you what happened - it wouldn’t deploy, because Tailscale couldn’t log in, or something… Or couldn’t connect, I don’t know. It would fail on the Tailscale, and you would see either TLS, or SSL, or who knows; some sort of error right there. And I said “Gerhard…! I know this is running in your house! Help me!”

Yeah, “Fix it!” [laughter] Exactly.

“I’m coming over!”

Yeah. So that was that thing… If I knew that you were coming over, I wouldn’t have fixed it, to force you to come over. [laughter] Not that you would have known how to fix it, by the way, because there’s two other issues…

Oh, okay. Yeah.

So let me tell you about the other two issues.

Oh, okay… There’s two others…

So there is a Docker Engine running on that host; it’s a bare metal host, so very beefy, very fast… It has an amazing SSD, an NVMe drive, a Samsung 980. Super, super-fast. 64 gigs of DDR4, and a Ryzen 3, 16 cores, I think. Or 12 cores. I can’t remember. One or the other. Anyways. So it’s really fast.

The reason why we do that is because right now - and this is something which we’re fixing in Dagger - the caching doesn’t work for volumes. Out of the box it doesn’t work. What that means is that in our application we compile a lot of dependencies, in our Phoenix application. So to compile those dependencies, if you don’t have a cache so that you can mount all the compiled ones, it would take five minutes easy. Maybe even more, because we know that the GitHub runners only have two CPUs by default, so they’re really not that beefy. So we want a Docker Engine that is persistent across runs.

When you start mounting volumes in GitHub Actions, if you’ve done it, you’ll notice there’s all sorts of issues now and then. So your CI isn’t as stable and reliable, because GitHub Actions runners are meant to be ephemeral. And if you have state, to recreate that state, to get it back from the cache, to restore it, it can be slow. Sometimes things fail, because it’s a distributed system, and sometimes it fails… And then your CI becomes less reliable. So the best thing to do is to have somewhere a runner that you can trust, which is not ephemeral. It will only work for your CI, and in this case it’s not the runner, it’s actually the Docker Engine itself. But this Docker Engine is running in Fly, it’s a Fly application. Thank you, Kurt, for the starter. I even mention it in pull request 416…

So in our 2021 Fly directory we have the config that we used to deploy the Docker Engine on Fly. So now, whenever GitHub Actions runs using Dagger, it connects to this Docker Engine running on Fly.

So you get the caching, you get the volume, you get all that stuff; it’s just as fast. The difference is that it’s using WireGuard to connect to your Fly app. And by the way, if you’re looking at the WireGuard GitHub Action in the marketplace, that one didn’t work for me. So I had to basically follow the Fly.io instructions on how to connect the GitHub Actions runner to the Docker Engine which is running on Fly, and that worked. And we are using that for our app, so you can see that.
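A rough sketch of that Fly.io recipe, with placeholder org, region, and app names - not the exact commands from PR #416:

```shell
# 1. Create a WireGuard peer config for the CI runner (the org,
#    region, peer name and file name are all placeholders).
flyctl wireguard create personal lhr ci-peer wg.conf

# 2. Bring the tunnel up inside the GitHub Actions job.
sudo wg-quick up ./wg.conf

# 3. Point Docker (and therefore Dagger's BuildKit) at the engine
#    running as a Fly app, reachable over Fly's private .internal DNS.
export DOCKER_HOST=tcp://docker-engine.internal:2375
docker info
```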

I can see that. Let’s see - we have a dedicated-cpu-4x, with 50 gigabytes of storage running there.

Yeah, exactly. And that’s for the caching, and everything. And there’s something really cool. There’s this really cool – again, we are mentioning all these things because I’m excited about them, and I haven’t tried it out yet, but I can hardly wait to try it out.

[38:00] So Fly introduced Machines, which can spin up in milliseconds. So imagine VMs that start really, really fast. So if you think about our Docker Engine, the one that’s there for the CI, we’re using it less than 1% of the time. So 99% of the time it’s running, it’s using those –

For no reason.

…for no reason, exactly.

Should we set up a SETI@home, or…?

We could do. Or a Bitcoin miner. [laughter] We could. It’s there, we’re paying for it… But you know, to be honest, the reason why this is exciting is because we can spin up on-demand, and we can have maybe more than four CPUs. We only chose four CPUs because of the cost. It just hits the balance nice. But if we could use 16 CPUs on-demand, that would be a lot better. And with Fly machines we can, because they spin up so quickly… You don’t wait minutes for it to come up.
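If Machines pan out, the on-demand idea might look something like this - the app name and machine ID are hypothetical, since this hasn’t been tried yet:

```shell
# Keep the big Docker Engine machine stopped between runs; start it
# only for the duration of a CI job, then stop it again.
flyctl machine start "$MACHINE_ID" --app docker-engine
# ... CI runs with DOCKER_HOST pointed at the engine ...
flyctl machine stop "$MACHINE_ID" --app docker-engine
```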

Like, really? Really that quickly? What about the disk? Is it like connected? You just connect it to some sort of –

There’s a blog article that says 250 milliseconds. If it takes more than that, I think we should complain to Kurt. That’s what I think we should do. [laughs]

I think so, too. Okay…

Again, I haven’t tried it out, but I really do want to. I think that’s like an improvement to make on this. The first improvement was to move it off my server, because as I mentioned, the Tailscale key would expire, I would need to manually renew it…

There was the other issue - because I’m using it for my development, it’s Linux-based, so it’s NixOS… So when I’m running my development version of the Dagger CLI, that may need a new –

Wait, are you telling me this is your dev box?

Yes, it doubles up as my dev box

Your dev box was part of our critical infrastructure for months… [laughs]

Well, critical… I mean, if it doesn’t work for me, it’s more important. It’s something that – I basically use it often. So my dev box has actually three hosts, okay?

[laughs]

I have an iMac Pro… So it depends on the day.

So you moved on from “Have two of everything.” You wanna have three of these.

Three of these, exactly.

Yeah, three is better than two.

I have a MacBook Pro, an iMac and the NixOS. Exactly, yeah. [laughs] Yeah, and it’s running a bunch of other things. The point being that sometimes when I would develop – so I don’t run Docker on my Mac.

Too slow.

Yeah. That’s basically it. The TL;DR is “Too slow.”

That’s why I didn’t do it, yeah. Too slow.

Yeah. And even now, it’s still too slow. Especially in networking… Anyways, really weird, because of how it works with the virtualization on MacOS. So Docker and Linux - all the way, but not anywhere else. I didn’t even try it on Windows, but I imagine it’s just as bad, if not worse. Anyways…

So because Dagger manages its own BuildKit, I was constantly upgrading the BuildKit that Dagger 0.1 was using to the latest version as I was developing Dagger… And whenever the CI would run, it would downgrade it, and I would upgrade it, and it was like this constant battle of “Well, which version should you run?”

[laughs] Oh, gosh…

Okay, you can configure it, you can specify a new thing… I haven’t. That was the other thing which bit me. But again, the fix was so quick that by the time you realize it was an issue, I already fixed it. And because I was using it – so again, it’s an issue that was happening, but you weren’t aware of it, because I would always fix it.

Now, are we still vendoring Dagger in our source code?

We’re not vendoring Dagger as the packages…

The Dagger packages. Is that common practice, or is that because we’re on edge, or bleeding?

We haven’t figured this problem out in Dagger yet, but right now, any Dagger packages that you use, you’re supposed to link them to the Dagger CLI version, and you do that by running dagger project update. And then anything that you’re running locally – so it doesn’t have package management like Go, or Python, or some other language.
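Concretely, the 0.2-era workflow being described is something like:

```shell
# Dagger 0.2: packages are vendored into the project and tied to the
# CLI version, rather than resolved by a package manager.
dagger project init     # once, to set up the project
dagger project update   # re-vendor the packages to match the CLI
```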

Isn’t this built in Go?

Dagger is built in Go, yes.

Can’t you just universal-binary that thing? Or I guess there’s too many packages. You wouldn’t wanna throw them all in there.

So how about we put a pin in this? [laughter] Because I wouldn’t want to use the rest of our time to talk about this… It’s something that you’re very passionate about.

Oh, okay…

This is where I’m stopping myself, okay? …to go too far.

Alright, I’ll stop prodding…

[42:11] So yeah, there was the BuildKit issue. And the other issue - and this was like the most recent issue - was where I had a PostgreSQL container binding on the same TCP port, and then the Changelog [42:27] PostgreSQL container was failing to start. So while BuildKit was okay, the Tailscale key was okay, there was a collision on the TCP port, so CI couldn’t run. And that was basically the last thing for me. The straw that broke Gerhard’s mind. I said “Table flip. I’m fixing this.” So that’s why I fixed the Docker Engine and I migrated to Fly.

Once and for all.

I hope so.

[laughs] For now, and for a few people.

Yeah, for now. At least for the next two and a half months this is okay.

Well, while it’s on my mind as we’re in this kind of headspace, I wanna say one more thing I’m excited about with Fly is that I was able to, via Fly Proxy - just proxy the Postgres connection… We talked about this, how we’re doing back-ups, which is – I mean, they’re doing them as well; we don’t have much visibility into that. But if you want an ad-hoc back-up, you just connect with Fly Proxy and you can connect to psql directly. That feels a little weird to me, but what’s great about that is now I can actually just use it as my database. I don’t do that when I’m developing –

Don’t do that…

No, I don’t do it.

Don’t do that… [laughs]

But I could. [laughs]

You could, but don’t.

The possibility is there, and it’s tantalizing. But what I can do is I can connect Postico, which is my favorite little Postgres query UI, directly to production. Because every once in a while you’ve just gotta munge a little bit of data and fix a thing. And with Kubernetes I could never get through whatever layers there were in order to get that done, and I think the option was like “Let’s not expose it publicly to the world.” But this is all just set up and working, and so I just proxy that sucker, connect Postico to production… I had this big, red background, versus green, to know “Hey, be careful. You’re in production.”
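The ad-hoc connection described here looks roughly like this - the app and database names are guesses, not the real ones:

```shell
# Forward the production Postgres port to localhost over Fly's
# WireGuard tunnel...
flyctl proxy 5432 -a changelog-db &
# ...then point psql (or a GUI like Postico) at the local end.
psql "postgres://postgres:${PGPASSWORD}@localhost:5432/changelog"
```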

[laughs] Oh, my goodness me…

But I can just run arbitrary queries against our production database. And that’s a best practice, isn’t it?

Okay, so before we do that, we need to take you through a certification program, which will be “I develop in production.” That’s the end result. [laughter]

Yeah, there should be a cert for that.

Yeah, exactly.

A+. Isn’t that what A+ gets you?

“I fly planes for real. I don’t just work with Fly.”

You should have like a two-button commit on that thing, where like you and I have to both hit the button at the same time to run the query. That would be fun.

That was Adam’s idea. I still remember it. From deleting the DNS records, remember? That’s what he said.

Oh, that was. That’s right.

[laughs] Adam, you’re still here.

Yeah. Here in spirit. Okay, so I just wanted to get that in there. I was thinking of it, because you were talking about connecting things to different places.

That was a good one.

Yeah. Okay, so we have pull request 416. Obviously, there’s additional stuff to do there eventually… But good to go. Specifically, we would like to spin it up and spin it down on-demand. If that works, it’d be sweet.

That’s right, using Fly Machines. That’s right.

Yeah. Fly Machines… Cool.

But the thing that I want to go back to is the improvement which I mentioned in episode 50. And I think you realized why we can’t run more than one instance of the Changelog app.

The clustering.

It’s that lack of clustering… So looking at the Fly docs, how to integrate that looks really, really simple, but I haven’t done it yet. So I would really want to do that. I mentioned two and a half months ago I’ll do it. Maybe now I’ll do it. I don’t know, we’ll see. It’s summer, lots of holidays coming up, but still, I will get to do that.

[45:54] Yeah. That’s something also I could maybe take a crack at, but I also have other things which I’m working on in the application space that are probably higher priority than that… Which we’d love to have ready for episode 70, which we haven’t teased at all yet… But maybe now we take an opportunity to say we are working with our friend, Lars Wikman, on chapter support for our podcast episodes. And it’s in-progress; no promises. We would love to have it by 70. And if we do have it by 70, we’ll talk through all the details, hopefully get Lars on the show and make a big deal out of it. But that’s what I’ve been focusing my efforts on. I probably won’t get around to this…

But I read that doc and I was like, “I could probably do this as well in the app, to get clustering set up.” So it does look pretty straightforward. I think it’d be a good step for us to do.

Oh, yes.

Just to continue with the teasing theme, episode 61, when it comes out, which is right after this one, we are talking to John and Jason from Transistor.fm.

Ah, okay…

And I’ve mentioned this to them, and they’re excited to find out more.

That’s cool.

Now, they’re running Ruby on Rails, we’re running Elixir in the context of this library, but they were curious to see how we do it. They use FFmpeg extensively, for other purposes, but I was saying that we’re using FFmpeg today just for that.

So they would also have to ditch FFmpeg.

I think the bigger problem would be Ruby on Rails.

I would expect there’d already be tooling in Ruby for this. It would just be slow and memory-consumptive. Is “consumptive” a word? I would expect there to be some tools for this in Ruby land.

Memory-hungry.

I mean, not Ruby in general, but for this particular task I think it might be… Because these are large files that you’re reading into memory and modifying… I don’t know, Lars can speak better to it. But now we’re getting too far into the weeds on it.

Yeah, that’s cool… I’m a big fan of Transistor, and excited for that episode. And yeah, hopefully what we do can at least be looked at by them and integrated. Because we are gonna be editing our chapters in the CMS, and having it syndicated into the mp3 files and into the podcast feeds themselves, because the new podcasting spec has chapters built into it…

…and so you don’t have to put them in your mp3, you can put them in your feed… So we’re gonna do both. And then we could also have them on the website, on the episode page as well, for easy clicking around too. That’s the coolest part about it. But anyways. Now I’m revealing too much.

No, that’s all cool stuff. All that’s coming. I think we touched on it in one of the Kaizens before, where we talked about –

Why we don’t –

It might have been 40…

[52:12] …where we talked about block storage, and local volumes, and this came up in the context of some of the issues that you had to overcome to migrate from a local volume to object storage, to S3, for assets, for mp3s.

Yeah. And now everything’s out of the way, and we are getting the one last thing blocking us from good chapter support; we’re getting that taken care of, and then we will be good to go.

They’ll be so cool… Because I find myself looking for specific parts of episodes that I want to link to… If there were chapters, it would be much easier to find them. And especially linking them to the transcripts. So if I know which transcript is where, it will be much easier for me to find the portion of the mp3 file where I want to link to. That’s really cool. And if you add player support - now you’re just blowing my mind.

Not all by 70, but these are all things we wanna do.

That’s okay. That’s, again, another reason to do those improvements regularly, and just release them bit by bit.

Small steps.

Exactly. Exactly. For us, small steps means 2.5 months, but it still shows that with life happening, sometimes just other things become more important, and you just handle them. But we are very conscious about what is happening, because it’s always in our minds. We always check the previous episodes, what we said we will do, versus what we did, and what we still want to do.

So speaking of that, one thing which I still want to do is to figure out clustering. So I don’t know whether we end up pairing for an hour, what that looks like, but I’m very keen by the next Kaizen to have a cluster of Changelog apps running, because I really want to have one running in London. Multi-regions, one running in Virginia… The one in Omaha - I don’t think they have one, but that will be nice. That will be nice.

[laughs] Ohio, I bet, is probably the closest thing to me.

Yeah, Ohio. So again, I don’t know whether they still have that region.

It looks like they have Chicago, Illinois. ORD is a Fly region. That one’s very close to me.

So one there, one in Singapore, one in Sydney, one in Toronto… Where else? I mean, once we’re clustered, right? One in India… We’re huge in Germany. Probably we’ll put one there.

I think we should put one on every continent, to be honest… And we can use Honeycomb to see which data centers get the most traffic - which continents send the most traffic.

That’s a good idea.

And then based on that, we can put one in each region. And this is like a small step towards maybe one day using SQLite. I mean, that’s a crazy idea…

Yeah, because now we can just have the database in each region, right there with the app, and let Litestream do its deal… Or whatever they’re cooking up next over there.

Yeah, pretty much. But for now, we can just basically have dynamic requests responding much quicker. One thing which I’ve noticed today is – do you remember those vanity domains, and the redirects which we have, like Shipit.show, for example? So that one used to take more than 200 milliseconds for me when the request had to travel from the U.K, from London, all the way to Virginia, and back. But now, they only take 23 milliseconds. And the reason why they take so little is because there’s a Fly proxy. And the Fly proxy is distributed.

So I don’t know exactly what magic that is, because I know that the speed of light would take more than 23 milliseconds if it had to go to Virginia today. Because that’s the only place where we currently have our Changelog app instance running. So it’s not hitting the app, it’s hitting the proxy.

The proxy is just caching maybe?

[55:57] Must do. Because otherwise I can’t explain –

How else would it know the answer?

Exactly. That fast. Because it must hit the app. And I can see the Fly.io IP address. Now, I didn’t run an MTR to see where it is. 16 milliseconds. Somewhere in London. NTT.net. And then it’s hitting the internal network, and then all I’m getting is this IP. So that’s the firewall, basically. So it’s my ISP, cw.net, I’m not sure who runs that, lns ltw, but it’s all London12.uk.bbg.ge.entity.net. So that is the entrypoint to the datacenter wherever this is running. And there’s three more hops, which is through the actual datacenter and eventually it hits the proxy IP address. And that’s 6651126203. Maybe that’s one for Kurt. But anyways, it was really cool to see those redirects working so quickly, because if you remember, we were saying at some point “Why don’t we set up those redirects on the CDN, so they respond quicker?”

Right.

We don’t have to do that anymore. Isn’t it amazing? An improvement that you don’t make, and you just find out that it’s just happened?

Those are the best kind.

That’s the best one.

Procrastinating for the win. You know, just sit around and let somebody else figure it out by happenstance. What else should we not do? What else should we strategically not do, so that other people get it done for us? Maybe that 404 thing, should I…

I don’t know, I think we need to ask Kurt. Hey Kurt, what’s your roadmap? Tell us what’s coming. [laughter] But seriously, the whole Litestream thing, the way Ben talks about it - again, I’m just teasing what’s coming in… Actually, no - what came, sorry. Episode 59. So that’s already out by the time –

You’re teasing me, but you’re not teasing our listener, who already listened.

Exactly, yeah. So you already know what we talked about. He’s talking about how there will be a special directory in your Fly deployment, in your Fly app. And if you use that directory, and you have SQLite there, it will automatically be synchronized with the other instances. So at least that’s the thinking. How it will work in practice, we’ll see. But if you put your SQLite in that directory, it’ll automatically be synchronized with your other app instances running on Fly. Now, that’s really cool.

With zero work to do that.

There you go. No Litestream integration. You don’t have to run the process, you don’t have to configure it…

It’s just like, “Put it here, and we’re gonna replicate it around the world to everywhere else your app exists.”

Exactly.

Sign me up. Except we have some Postgres-specific features that we’re using, but… We will address those things as time allows. But very cool, very cool.
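For contrast, running Litestream yourself today looks roughly like this - the bucket and file names are made up:

```shell
# Continuously replicate a SQLite database to object storage...
litestream replicate ./changelog.db s3://backup-bucket/changelog.db
# ...and restore from the replica on a fresh instance.
litestream restore -o ./changelog.db s3://backup-bucket/changelog.db
```

The hosted version being teased would make both steps invisible: put the file in the special directory, and replication just happens.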

Yeah, I can see 2023 starting as an experiment, and we see how far it goes. But this is an interesting idea. And then it doesn’t matter that your database is not managed. Do you even need it to be managed?

It’s kind of managed.

There you go, it’s your application.

It kind of just exists everywhere, all at once. I mean, talk about back-ups…

Now, that’s really cool.

Yeah, that’s cool. You still want snapshots though, because you could screw something up and you wanna go back, right?

Feature request.

[laughter] Get ’em in early!

Exactly.

That’s all it is, it’s a feature request. I love it.

Yeah. And we won’t be the only ones, I’m sure of it. Anyways…

Oh, no. So it’s just the first thing you think of.

So clustering - I think that’s going to be a big deal. We’ll see how far we can push PostgreSQL. I know that Adam was mentioning Crunchy Data, using their managed PostgreSQL… That’s interesting; I would like to try it out. But again, let’s see where it fits with everything else. SQLite - I’m very excited about that, especially with Litestream. It wasn’t even an option until Ben joined Fly. See, that’s how things happened - we migrated to Fly, Ben joined Fly, and then amazing things are being discussed. And we’ll see how far it gets, but I’m excited.

The other thing which is on my mind - and this is episode 58; I blame Rosemary and Rob… No. I blame – who recommended it? Someone else recommended that episode, and I forgot. Let me check it out. Shipit.show/58. So easy. Seriously. I love it. Just that. It’s just saving me from typing more.

It is. It’s nice.

[59:51] And let’s see, do we have – ah, we don’t have the transcript.

Not yet, it just came out.

Thomas Eckert.

Oh yeah, Thomas.

Thank you, Thomas Eckert, for the intro. Yeah, Thomas did the intro. So thank you, Thomas, for the intro. And we had some amazing comments… This is actually the episode which had the most comments in the Ship It Slack channel.

Hm. That’s because everybody has opinions on how to do secrets…

Exactly. So we have to take a popular subject, share an unpopular or unconventional approach, and let the comments come in. That’s how we do it. We’ve been doing it wrong all along.

Yeah. Magic.

So Maikel Vlasman - he was a guest for episode 56, we had him on. “DevOps teams with shared responsibilities.” He mentioned that he’s using Sealed Secrets, but a lot of the statements that we make in that episode, in episode 58, he agrees with them. You can see it in Slack. James Harr, he was asking how we do the LastPass integration with Kubernetes… So I referenced the link, it’s there.

But the one which was really interesting is Owen Valentine - thank you very much for sharing the link to the HashiCorp Vault plugin that integrates with 1Password. That was interesting. So 1Password is not too dissimilar from LastPass.

By the way, Omri Gabay, he had an amazing input… “People store application secrets in LastPass?” Yes, we do.

Yes, they do.

For years. And it works… Kind of. [laughs] Minus like a few small issues.

So if we were to migrate to 1Password, we would still need to have a password store for Vault. So where would they be persisted? And we can use various integrations, but 1Password is one of them. Do you use 1Password, Jerod?

Adam does. I do not. He’s tried to get me on the 1Password team, or whatever their – Pro, or Business, or I don’t know what their plan is called… And I’m open to it, but I was always like – I’m spread across so many at this point; why do I want to add yet another? But if we’re gonna consolidate, I’m open to that.

Well, one thing which I haven’t shared is that I’ve been experimenting with 1Password for about 3-4 months now. So I’ve switched from LastPass to 1Password. I still have LastPass around, but I’m using it less and less. And I know that 1Password works pretty well… So maybe we can take this opportunity to consolidate at 1Password, move everything across, and with HashiCorp Vault do that integration. So we can have the best of all worlds.

We can have the LastPass CLI locally, we can have the browser extension, we can have the HashiCorp Vault integration for the application… That would be really cool.

Did you say LastPass CLI, or did you mean 1Password CLI?

I meant 1Password CLI.

Okay, because you confused me there.

That’s what I meant.

Okay. So they have a CLI, just like LastPass does.

They do.

So you can have the best of all worlds.

Exactly.

1Password CLI, HashiCorp Vault…

Yeah. HashiCorp Vault, with integration to 1Password Connect. Yes.
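A taste of the pieces being discussed here - the vault, item, and mount names are all made up for illustration:

```shell
# 1Password CLI: read a single secret by reference.
op signin
op read "op://Changelog/postgres/password"

# Vault side: the community 1Password secrets plugin would be enabled
# with something along these lines (plugin registration omitted).
vault secrets enable -path=op onepassword
```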

And we can share passwords amongst ourselves, or you can also have your personal 1Password stuff… It’s like one big, happy, secure family.

[01:03:08.10] It sounds too good. We need to try it out.

Okay, I’m open to that.

I think that’s cool. I’m very excited about that… And not to mention I already use it, so… The more difficult question is which plan do we add people on? And I think, realistically, we’ll join multiple plans, right? I think if Adam has his, we may join that, especially if Changelog has one… And then we can store those secrets there.

Yeah. We would have a Changelog plan of some kind. And if it can be multiples, then…

This is an action item for Adam. This was a retro… This is an action item for Adam. Adam, you’re listening, right?

Right.

Add myself and Jerod to that plan, so that we can start using it. That would be really cool.

Alright. Anything else?

Well, was there enough laughter? What do you think?

Should we work in a little bit more laughter before we go?

Maybe…

I’ve got a few laughs…

I don’t know, it depends… It’s gotta be funny enough, I don’t know… [laughs]

Well, it’s tough to just demand more laughter right here at the end… But if you have any dad jokes… Or dirty jokes. It sounds like they’re more your style… [laughs] Maybe you can squeeze one in here.

Okay, okay… Noah told me a good joke, and he said “Is that a dad joke?” And I said “Yes, it’s a dad joke.” But I forgot the joke. So hang on, I have to ask Noah…

Okay…

I have to go and ask him, “Hey, Noah, what was the joke?” I forgot that. I mean, this is actually a question for you, for the listener. Do you think there were enough jokes? How do we improve the Kaizen?

How many jokes per Kaizen are you expecting? What would be your threshold for happiness, and then pure joy?

And the ratio to technical content. Like, was there enough technical content? And how much do you miss Adam? Because if you don’t miss him, I think this is the new format… [laughter]

It’s much more efficient. [laughs]

So yeah, let us know.

That’s how you end up with Boaty McBoatface. You ask the people what they want. [laughter]

Boaty McBoatface… Oh, nice. Alright, people, let us know.

Any last important takeaways, other than happy 4th of July?

Shipit.show.

Shipit.show. That’s a great one. I love that. Let’s end on that. Shipit.show. Never s**t show. Never. Okay?

Never! [laughs] How dare you even bring that up again?!

No. No, no, no. It’s funny, right? We wanted more jokes.

That’s true.

Okay, so here’s a crazy idea… If someone registers s**tshow, and they just redirect to this… I hope that doesn’t happen.

Why are you giving them ideas? [laughs]

We just said more jokes.

You’re teaching people how to troll us. This is not the way it’s supposed to work.

Alright, I think this is a great place to stop. On that bombshell, as someone famous that I very much like and admire would say… It’s time to end. See you in two-and-a-half months, everyone. Have a great one, keep kaizening. See y’all.

Kaizen! See ya.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
