Domain-driven design with Go
Matthew Boyle, the author of Domain-Driven Design with Golang, sits down with Jon & Mat to talk about (you guessed it!) DDD with Go.
Jerod is joined by Yehonathan Sharvit, author of Data-Oriented Programming, to discuss the virtues of treating data as a first-class citizen in our applications and the four principles that make it possible.
A solid rundown of the discrepancy between what we hope to get from microservices and what we often get instead. TLDR:
Architecture is hard sometimes–people keep offering up some new idea that quickly becomes the mainstream “way to do it” without any context or nuance, and the industry, desperate to find ways to improve their architecture, snaps it up without hesitation. Microservices was the latest in the trend, and it’s time we dissected the idea and got to the real root of what’s going on.
Kirill Rogovoy:
It took me several years to learn how to write code that scales to 10s of team members and a million lines of code. It took even more time to learn to write stupid code again.
Let the debate begin (again)! This time we’re arguing whether or not single-page apps were a big mistake. This premise was inspired by Chris Ferdinandi’s SPAs were a mistake post.
Divya & Nick represent Team Yep and KBall goes solo on Team Nope. Jerod, as per our usual arrangement, is on Team Winner.
As it gets easier and easier to deploy to the cloud, and with increased concern about data sovereignty, regional architectures are becoming more common. This article gives you an overview of how we easily maintain our regional services on AWS at Apptrail.
In this post, I will talk about important factors you should consider when architecting systems that are powered by third-party systems. The factors I detail are:
Dan Luu:
Wave is a $1.7B company with 70 engineers whose product is a CRUD app that adds and subtracts numbers. In keeping with this, our architecture is a standard CRUD app architecture, a Python monolith on top of Postgres. Starting with a simple architecture and solving problems in simple ways where possible has allowed us to scale to this size while engineers mostly focus on work that delivers value to users.
Despite the unreasonable effectiveness of simple architectures, most press goes to complex architectures. For example, at a recent generalist tech conference, there were six talks on how to build or deal with side effects of complex, microservice-based, architectures and zero on how one might build out a simple monolith… Larger conferences are similar; a recent enterprise-oriented conference in SF had a double digit number of talks on dealing with the complexity of a sophisticated architecture and zero on how to build a simple monolith.
He goes on to describe boring choices they’ve made and counter-balances that some by also describing why they’ve made some more complex choices such as GraphQL and Kubernetes. An excellent, nuanced piece.
Fine-grained authorization in microservices is hard. Definitely not impossible, but hard. You would expect that a more standardized, all-around, fool-proof solution is out there, but I am afraid there isn’t. It’s a complex matter and depending on what you are building, implementation varies.
You will probably start with a boolean admin flag in your User model and then you will replace it with a role field, as we all did. However, as things progress and the business model becomes more and more complex, so do the solutions that we need to implement in order to deal with that complexity.
But how do you actually go from a simple flag to Role Based Access Control (RBAC) and then to Attribute Based Access Control (ABAC), especially in a microservices environment? In the following post I hope to help you get there.
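To make that progression concrete, here’s a minimal Go sketch (not from the post; all type names, roles, and rules are illustrative) of how a check tends to evolve from a role lookup (RBAC) to a policy over attributes of the user and the resource (ABAC):

```go
package main

import "fmt"

// Stage 2 of the progression: a role field on the user (RBAC).
// Stage 3 adds attributes like Dept, so decisions can consider
// the user, the resource, and their relationship (ABAC).
type User struct {
	ID   string
	Role string
	Dept string
}

type Document struct {
	OwnerID string
	Dept    string
}

// RBAC: a static role -> permissions table.
var rolePerms = map[string][]string{
	"admin":  {"read", "write", "delete"},
	"editor": {"read", "write"},
	"viewer": {"read"},
}

func canRBAC(u User, perm string) bool {
	for _, p := range rolePerms[u.Role] {
		if p == perm {
			return true
		}
	}
	return false
}

// ABAC: a policy over attributes, falling back to role rules.
func canABAC(u User, d Document, perm string) bool {
	if u.ID == d.OwnerID {
		return true // owners may do anything with their own documents
	}
	if perm == "write" {
		// editors may write, but only within their own department
		return u.Role == "editor" && u.Dept == d.Dept
	}
	return canRBAC(u, perm)
}

func main() {
	alice := User{ID: "u1", Role: "editor", Dept: "eng"}
	doc := Document{OwnerID: "u2", Dept: "eng"}
	fmt.Println(canRBAC(alice, "write"))      // true: the role allows it
	fmt.Println(canABAC(alice, doc, "write")) // true: same department
	doc.Dept = "sales"
	fmt.Println(canABAC(alice, doc, "write")) // false: attribute mismatch
}
```

In a microservices environment the hard part is that this policy data no longer lives in one database, which is exactly the territory the post covers.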
Amal, KBall, and Nick welcome David Khourshid to the show to talk about his project, XState. XState brings state management to a new level using finite state machines and is compatible with your stack. We talk about how the idea came to fruition, its practical uses, and where it’s going.
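If finite state machines are new to you, here’s the core idea in a generic Go sketch (this is not XState’s JavaScript API, just an illustration of the concept): every legal state change is written down in a transition table, so impossible states simply can’t happen.

```go
package main

import "fmt"

// A simple fetch workflow in the spirit of a statechart:
// idle -> loading -> success/failure, with retry from failure.
type State string
type Event string

const (
	Idle    State = "idle"
	Loading State = "loading"
	Success State = "success"
	Failure State = "failure"

	Fetch   Event = "FETCH"
	Resolve Event = "RESOLVE"
	Reject  Event = "REJECT"
	Retry   Event = "RETRY"
)

// The transition table is the whole machine.
var transitions = map[State]map[Event]State{
	Idle:    {Fetch: Loading},
	Loading: {Resolve: Success, Reject: Failure},
	Failure: {Retry: Loading},
}

// Send returns the next state, or the current one if the
// event isn't valid in this state.
func Send(s State, e Event) State {
	if next, ok := transitions[s][e]; ok {
		return next
	}
	return s
}

func main() {
	s := Idle
	s = Send(s, Fetch)   // idle -> loading
	s = Send(s, Reject)  // loading -> failure
	s = Send(s, Retry)   // failure -> loading
	s = Send(s, Resolve) // loading -> success
	fmt.Println(s)       // success
}
```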
In this episode we talk with Daniel and Steve about their experience with event-driven systems and shed some light on what they are and who they might be for. We explore topics like the complexity of setting up an event-driven system, the need to embrace eventual consistency, useful tools for building event-driven systems, and more.
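As a rough illustration of the moving parts (an in-memory Go sketch with invented names, nothing like a production broker): producers publish facts, consumers react on their own schedule, and because handlers run asynchronously, downstream state is only eventually consistent with the producer’s view.

```go
package main

import (
	"fmt"
	"sync"
)

// Event is a fact that happened, e.g. "order placed".
type Event struct {
	Name string
	Data map[string]string
}

// Bus fans events out to subscribers over channels.
type Bus struct {
	mu   sync.Mutex
	subs map[string][]chan Event
}

func NewBus() *Bus {
	return &Bus{subs: make(map[string][]chan Event)}
}

func (b *Bus) Subscribe(name string) <-chan Event {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan Event, 16) // buffered: producers don't block on slow consumers
	b.subs[name] = append(b.subs[name], ch)
	return ch
}

func (b *Bus) Publish(e Event) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs[e.Name] {
		ch <- e
	}
}

func main() {
	bus := NewBus()
	done := make(chan struct{})

	// A consumer (say, a billing service) reacts in its own time.
	orders := bus.Subscribe("order.placed")
	go func() {
		e := <-orders
		fmt.Println("billing saw order:", e.Data["id"])
		close(done)
	}()

	bus.Publish(Event{Name: "order.placed", Data: map[string]string{"id": "42"}})
	<-done // real systems have no such barrier: consumers simply catch up eventually
}
```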
Dropbox Engineering tells the tale of their new SOA:
The majority of software developers at Dropbox contribute to server-side backend code, and all server side development takes place in our server monorepo. We mostly use Python for our server-side product development, with more than 3 million lines of code belonging to our monolithic Python server.
It works, but we realized the monolith was also holding us back as we grew.
This is an excellent, deep re-telling of their goals, decisions, setbacks, and progress. Here’s the major takeaway, if you don’t have time for a #longread:
The single most important takeaway from this multi-year effort is that well-thought-out code composition, early in a project’s lifetime, is essential. Otherwise, technical debt and code complexity compounds very quickly.
SMTP should be blocked on public networks.
Email technology offers no effective means to stop phishing, so it’s been a runaway success for the attackers, and a disaster for millions of victims.
Sunsetting SMTP is clearly necessary and feasible. So, I’ve drafted a protocol called TMTP and I’d like to tell you about it.
What is a microservice, and what is a monolith? What differentiates them? When is a good time for your team to start considering the transition from monolith to microservice? And does using microservices mean you can’t use a monorepo?
This had me (literally) lol-ing and thinking about Kelsey Hightower. Viewer beware: there is an NSFW moment near the end. Put on some headphones if you have to, because it’s worth every effort.
The Ship of Theseus is a thought experiment that considers whether an object that has had each of its pieces replaced one-by-one over time is still the same object when all is said and done. If every piece of wood in a ship has been replaced, is it the same ship? If every piece of JavaScript in an app has been replaced, is it the same app? We sure hoped so, because this seemed like the best course of action.
Fascinating look behind the scenes at both the process of rewriting a massively used application and the particular architectural choices made along the way. The approach used was at once incremental and all-encompassing, rewriting a piece at a time into a gradually growing “modern” section of the application that utilized React and Redux. And the results? 50% reduction of memory use and 33% improvement in load time… not too shabby.
What’s the front-end equivalent of a micro-services architecture? A micro-frontends architecture of course. This approach makes a ton of sense, though in my opinion you will definitely want to have an internal components library and some cross-frontend coordination so your UI doesn’t degrade into a series of disconnected, disjointed experiences.
It’s hard to argue against the benefits stated by author Cam Jackson:
Micro frontends are all about slicing up big and scary things into smaller, more manageable pieces, and then being explicit about the dependencies between them. Our technology choices, our codebases, our teams, and our release processes should all be able to operate and evolve independently of each other, without excessive coordination.
This is Segment’s story from monorepo to microservices back to monorepo — “from 100s of problem children to 1 superstar child.”
Software Engineer Alexandra Noonan writes on the Segment Engineering blog:
As time went on, we added over 50 new destinations, and that meant 50 new repos. To ease the burden of developing and maintaining these codebases, we created shared libraries to make common transforms and functionality … Over time, the great benefit we once had of reduced customization between each destination codebase started to reverse. Eventually, all of them were using different versions of these shared libraries.
The woes of operational overhead with each expansion into more microservices.
The number of destinations continued to grow rapidly, with the team adding three destinations per month on average, which meant more repos, more queues, and more services. With our microservice architecture, our operational overhead increased linearly with each added destination. Therefore, we decided to take a step back and rethink the entire pipeline.
One of the original motivations for separating each destination codebase into its own repo was to isolate test failures. However, it turned out this was a false advantage. With destinations separated into their own repos, there was little motivation to clean up failing tests.
I’d love to dig into this story more on The Changelog with the team behind this transition back to a monolith and discuss the deeper details of their lessons learned.