This week we’re talking about product development structures as systems with Lucas da Costa. The last time we had Lucas on the show he was living the text-mode only life, and now, more than 3 years later, he has doubled down on all things text mode. Today’s conversation with Lucas maps several ideas he’s shared recently on his blog. We talk about deadlines being pointless, trajectory vs roadmap and the downfall of long-term planning, the practice of daily stand-ups and what to do instead, measuring queues not cycle time, and probably the most controversial of them all — actually talking to your customers. Have you heard? It’s this newly disruptive Agile framework that seems to be working well.
Dave Farley, co-author of Continuous Delivery, is back to talk about his latest book, Modern Software Engineering, a Top 3 Software Engineering best seller on Amazon UK this September. Shipping good software starts with you giving yourself permission to do a good job. It continues with a healthy curiosity, admitting that you don’t know, and running many experiments, safely, without blowing everything up. And then there is scope creep…
Inbal Cohen, Product expert and Agile evangelist, joins Natalie & Angelica for a conversation about all things Agile. Inbal lays out some agile tips for Go devs, discusses if and how remote work changes things, describes some downsides of the methodology, and more.
Egon Elbre and Roger Peppe join Mat for a conversation all about bloat (and how to avoid it). Expect talk of code bloat, binary bloat, feature bloat, and an even-more-bloated-than-usual unpopular opinion segment.
This article isn’t arguing against writing user stories, it’s arguing against keeping user stories:
If you put the story in your icebox/backlog/cooler/hat, it’s time for it to go. It’s now duplicate documentation. That story was only there because you were yet to break it down into tasks. Now you have, so you delete it.
I don’t mean tick it off as complete, I mean right-click on it and hit the ‘delete’ button.
Julia Evans shares five things you can do to get better at debugging, which is a critical skill for everyone in tech! They are:
- learn the codebase
- learn the system
- learn your tools
- learn strategies
- get experience
Each thing comes with an explanation and she shares a great quote at the end (from a paper she extracted these things from):
Their findings did not show a significant difference in the strategies employed by the novices and experts. Experts simply formed more correct hypotheses and were more efficient at finding the fault. The authors suspect that this result is due to the difference in the programming experience between novices and experts.
Having a smaller website makes it load faster — that’s not surprising.
What is surprising is that a 14kB page can load much faster than a 15kB page — maybe 612ms faster — while the difference between a 15kB page and a 16kB page is trivial.
The reason for this is how TCP works, which is nicely explained in the post. But is 14kB even a feasible reality? The author thinks so:
That 14kB includes compression — so it could actually be more like ~50kB of uncompressed data — which is generous. Consider that the Apollo 11 guidance computers only had 72kB of memory.
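The arithmetic behind that 14kB figure is TCP slow start. Here’s a rough sketch (the 10-segment initial congestion window and 1460-byte segment size are typical modern defaults, assumed here rather than taken from the post):

```python
# Back-of-the-envelope look at why ~14kB is a magic number: TCP slow start.
# Assumptions (typical modern defaults, not figures from the article):
INITIAL_CWND_SEGMENTS = 10  # initial congestion window (RFC 6928)
MSS_BYTES = 1460            # typical maximum segment size on Ethernet

def round_trips_needed(response_bytes: int) -> int:
    """How many round trips slow start needs to deliver a response,
    doubling the congestion window after each round trip."""
    cwnd = INITIAL_CWND_SEGMENTS * MSS_BYTES
    sent = 0
    trips = 0
    while sent < response_bytes:
        sent += cwnd
        cwnd *= 2  # slow start doubles the window each round trip
        trips += 1
    return trips

# The first window carries 10 * 1460 = 14,600 bytes.
print(round_trips_needed(14 * 1024))  # 1 -- fits in the first round trip
print(round_trips_needed(15 * 1024))  # 2 -- spills into a second round trip
print(round_trips_needed(16 * 1024))  # 2 -- no worse than 15kB
```

That extra round trip is where the “maybe 612ms” comes from: on a high-latency link (like satellite), one more round trip is very expensive.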
Most of our audience knows that we operate on the mantra “Slow and steady wins,” and yet there are lessons to be learned by reading a post advocating the need to “Move fast or die.” Let me explain…
There’s a key phrase that sets this post and its lessons apart from us here at Changelog Media — it’s “Here’s how we did it at Facebook.” Clearly, we are not Facebook, so we should not operate on advice that’s focused on Facebook. However, we can learn something.
Of the five lessons shared, each can be appreciated, but one in particular stands out.
We embraced asking for forgiveness, never for permission.
This, to me, is synonymous with “Hire people smarter than you,” because it assumes everyone can bring something to the table that former wisdom might not. It gives permission to try something new and see if something beautiful comes as a result. That’s a good thing.
In this episode, we’ll be further exploring PRs. Check out The art of the PR: Part 1 if you haven’t yet. What is it that makes a PR a good PR? How do you consider PRs in an open source repo? How do you vet contributions from people who aren’t a part of the repository? How does giving feedback and encouragement fit into the PR process? We’ll be debating the details, and trying to help our fellow gophers perfect the art of the PR. We are joined by the awesome Anderson Queiroz, hosted by Natalie Pistunovich & Angelica Hill.
Struggling through the tech job interview process? We feel you! On this episode, Amal, Nick & Amelia get together to discuss the various ways the interview process disappoints, share their own interview stories, and suggest ways we can improve the process for everyone.
In this episode, we will be exploring PRs. What makes a good PR? How do you give the best PR review? Is there such a thing as too small, or too big, of a PR? We’ll be debating the details, and trying to help our fellow gophers perfect the art of the PR. We are joined by three wonderful guests: Jeff Hernandez, Sarah Duncan, and Natasha Dykes. Hosted by Angelica Hill & Natalie Pistunovich.
Daily stand-ups are a classic example of learned helplessness. We all know they’re useless, but we tell ourselves “that’s just how things are” and do nothing about it.
Lucas provides a set of five symptoms that indicate you’re doing stand-ups wrong and says if your team hits at least three of the five, your stand-ups are useless.
But, instead of just telling you to stop doing them (like I probably would), he provides a bunch of solid advice on how to make them useful again.
A lot of ink is spent on the “monoliths vs. microservices” debate, but the real issue behind this debate is about whether distributed system architecture is worth the developer time and cost overheads. By thinking about the real operational considerations of our systems, we can get some insight into whether we actually need distributed systems for most things.
Scaling up has always been easier than scaling out. It’s amazing what one beefy server can do these days…
How using OpenAPI (previously called Swagger) helped ship a new service more effectively, by removing the need to write scaffolding and freeing the team to focus on the business logic instead.
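To sketch the idea, here’s a minimal, hypothetical OpenAPI 3.0 fragment (the service, path, and schema names are invented, not from the article). Code generators can turn a spec like this into server scaffolding and client SDKs, leaving only the handler bodies to write:

```yaml
# Hypothetical OpenAPI 3.0 spec fragment for illustration only.
openapi: "3.0.3"
info:
  title: Example Service
  version: "1.0.0"
paths:
  /users/{id}:
    get:
      operationId: getUser
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested user
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: string }
                  name: { type: string }
```

The win is that the request/response shapes live in one place, and the generated routing and validation code enforces them.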
If you’ve been developing software for a while, you know that code has this natural tendency to turn into a mess. Keeping software simple over time is a challenge that keeps me thinking. My last post left you hanging without much concrete advice. This time I will outline a few high-level strategies to keep software simple.
Ben Congdon explains why he’s enthusiastic about stacked PRs (and how they differ from stacked commits). We discussed many of these benefits in our conversation with the Graphite guys, but that’s just one way of going about the practice. Ben also lists out a few helpers tools for going about it.
This article confirms my biases because I’ve always despised every soft delete implementation I’ve come up with. Most of them have looked something like what the author describes:
the technique has some major downsides. The first is that soft deletion logic bleeds out into all parts of your code. All our selects look something like this:
SELECT * FROM customer WHERE id = @id AND deleted_at IS NULL;
And forgetting that extra predicate on deleted_at can have dangerous consequences as it accidentally returns data that’s no longer meant to be seen.
ORMs help with this, but not enough. You set it as a default scope and then there’s that one time where you also want the deleted records so you come up with a custom query or dig into your ORM and try to find how to bypass the rule. Yuck!
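Here’s a minimal sqlite3 sketch of that footgun (the table and column names follow the article’s example; the data is invented):

```python
# Demonstrating the soft-deletion footgun: one forgotten predicate
# and "deleted" rows come back. Schema follows the article's example.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, deleted_at TEXT)"
)
db.execute("INSERT INTO customer VALUES (1, 'Ada', NULL)")
db.execute("INSERT INTO customer VALUES (2, 'Bob', '2023-01-01')")  # soft-deleted

# Correct: every single query must remember the extra predicate.
live = db.execute(
    "SELECT name FROM customer WHERE deleted_at IS NULL"
).fetchall()

# The bug: forget the predicate once, and deleted data is live again.
oops = db.execute("SELECT name FROM customer").fetchall()

print(live)  # [('Ada',)]
print(oops)  # [('Ada',), ('Bob',)] -- Bob was "deleted"
```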
He goes on to describe other problems as well. Maybe it’s all a big case of YAGNI?
Once again, soft deletion is theoretically a hedge against accidental data loss. As a last argument against it, I’d ask you to consider, realistically, whether undeletion is something that’s ever actually done.
When I worked at Heroku, we used soft deletion.
When I worked at Stripe, we used soft deletion.
At my job right now, we use soft deletion.
As far as I’m aware, never once, in ten plus years, did anyone at any of these places ever actually use soft deletion to undelete something.
While most teams I talk to would love to be able to run A/B tests to accurately assess the performance impact of certain changes, the problem is pretty much every popular A/B testing tool on the market has such a negative impact on load performance that it’s essentially unusable for this purpose.
If you’re just trying to determine which of two marketing headlines converts better, then perhaps this performance difference isn’t so bad. But if you’re wanting to run an A/B test to determine whether inlining your CSS in the <head> of your pages will improve your FCP, well, those popular A/B testing tools are just not going to cut it.
He goes on to describe his process for running performant A/B tests on his static site using Cloudflare workers to swap in/out his experiments.
You don’t need to look far on web dev social media to find somebody lecturing everybody and nobody about how you should do web development. Hell, I’ve been guilty of that myself.
You should use this or that framework.
Or, you should not use this or that framework and instead use a cognitive framework like Model-View-Controller with ‘vanilla’ web components.
No, no, no Functional Reactive is where it’s at.
You should be writing code test-first and unit test everything.
Or, you should go light on the unit tests and focus instead on integration tests.
Or, end-to-end tests. Just test the shit out of the actual running app.
Or, you should use this different kind of unit test which doesn’t look anything like that other kind of unit test.
Those aren’t integration tests! This is an integration test! What’s wrong with you?!
You don’t pair program? Wow, your code must suck.
Sound familiar? Unfortunately, you don’t get to be a “famous” social media influencer by posting reasoned, nuanced takes that end with something like: “I dunno, this works for me but it may not for you. It depends…”
It feels bizarre saying this, having spent so much of my life advocating for and selling a distribution of Kubernetes and consulting services to help folks get the most of out it, but here goes! YOU probably shouldn’t use Kubernetes and a bunch of other “cool” things for your product.
This post turns out to be less about Kubernetes and more about premature optimization and doing more with less. Also it’s about Kubernetes. 😉
A deep discussion on that tension between development speed and software quality. What is velocity? How does it differ from speed? How do we measure it? How do we optimize it?
Brandon Rhodes’s solution to conflicts between his ~/bin scripts and system binaries: the humble comma.
I heartily recommend this technique to anyone with their own ~/bin/ directory who wants their command names kept clean, tidy, and completely orthogonal to any commands that the future might bring to your system. The approach has worked for me for something like a decade, so you should find it immensely robust. And, finally, it’s just plain fun.
Building products is a difficult and time-consuming effort. Figuring out what the problem is, finding a potential solution to that problem, and then building that solution all take a decent chunk of time and effort. It’s due to this process that the minimum viable product was born. The motivation for building an MVP is still valid. Build something small and easy to test, launch quickly, and pivot or trash it if it doesn’t perform as desired.
There is another, less selfish way.
I read an article by Jason Cohen a few years ago which changed the way I think about product development. Instead of building MVPs, we should be building SLCs. Something Simple, Loveable, and Complete.
I like the thinking behind SLCs. So simple, so loveable, so…
Hat tip to Henry Snopek for linking this up in the #gotimefm channel of Gophers Slack! When it comes to thinking about your projects, Henry says:
I like to use MVP for fast projects, and SLC for “effective” projects…
Yeah, I like that framing too. So simple, so loveable, so…
Anyone who’s worked in the tech industry for long enough, especially at larger organizations, has seen it before. A legacy system exists: it’s big, it’s complex, and no one fully understands how it works. Architects are brought in to “fix” the system. They might wheel out a big whiteboard showing a lot of boxes and arrows pointing at other boxes, and inevitably, their solution is… to add more boxes and arrows. Nobody can subtract from the system; everyone just adds.
Nolan posits the center cannot hold and the current market shift from bull to bear might help bring about the collapse of complex software. But it’s never that simple, is it?
One thing working in complexity’s favor, though, is that engineers like complexity. Admit it: as much as we complain about other people’s complexity, we love our own. We love sitting around and dreaming up new architectural diagrams that can comfortably sit inside our own heads – it’s only when these diagrams leave our heads, take shape in the real world, and outgrow the size of any one person’s head that the problems begin.