Lars is big on Elixir. Think apps that scale really well, tend to be monolithic, and have one of the most mature deployment models: self-contained releases & built-in hot code reloading. In episode 7, Gerhard talked to Lars about "Why Kubernetes". There is a follow-up YouTube stream that showed how to automate deploys for an Elixir app using K3s & ArgoCD.
More than a year later, how does Lars think about running applications in production? What do "simple" & "straightforward" mean to him? Gerhard's favourite: what are "human scale deployments"?
Matched from the episode's transcript
Lars Wikman: It's a tricky one. I think I'm still a bit in exploration there... Because what my current day-to-day production looks like is pretty dominated by what my current client is. So I do consulting, and run a team for a product that's being developed at one of my clients. And there, we run things on Fly. I picked Fly because we - we were doing an Elixir project, and we wanted to reduce the amount of ops we have to do, and just focus mostly on development.
[06:19] And I will say, I've been pretty happy with Fly. It has been a mixed bag, because this - it's still an early company, it's still an early platform. So definitely sort of a mixed experience. But they essentially do what a Kubernetes-type solution would do for me. They do platform engineering, so I don't have to; that's kind of the idea of platform as a service. But I still have to fiddle around with a bunch of YAML, and CI/CD pipelines, and all of that... And currently, that runs in GitLab, because the client had GitLab when I came there.
So I rolled with whatever was there, and made some choices based on that, based on what I saw... Sort of "Oh, the team's experience level is about yay-high. Okay, we should not spend a ton of our time on the server. Someone else should deal with most of the ops."
If I needed to get something off the ground on a budget, or if I built my own SaaS, I think I would probably set up a dedicated server for it, potentially with failover. It depends a little bit on the service. Not everything needs to be highly available, really... And in that case - right now I'd probably pick Debian, or Ubuntu, and I'd be slightly - not thrilled with that choice, because it's not ideal, but it's what I know well enough. Nix seems like it would be cooler; I'm not sure how convenient it would be, because I haven't explored Nix yet. There are, of course, nice things about immutability. But for me, I like to try to package as much of the deployment aspects of the app into the app itself. I run Elixir applications that can provision their own SSL certificates, for example. And whether I would include NGINX, or a specific load balancer, would depend on "Do I need high availability? And in what way do I think I could conveniently provide that?" Sometimes you can load-balance with DNS, sometimes that's not really appropriate. Sometimes you need something in front of your application, sometimes you don't. So there are always those trade-offs, but I like to boil away as many of the layers as possible... At least when I don't feel I need them.
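A note on the "provision their own SSL certificates" part: Lars doesn't name a tool, but in the Elixir ecosystem this is commonly done with the site_encrypt library, so the sketch below assumes it. It shows a Phoenix endpoint configured to obtain its own Let's Encrypt certificate; module names, domains, e-mail addresses and paths are placeholders:

```elixir
# Sketch assuming the site_encrypt library; all names, domains and paths are placeholders.
defmodule MyAppWeb.Endpoint do
  use Phoenix.Endpoint, otp_app: :my_app
  # site_encrypt hooks into the endpoint, answers the ACME challenge
  # and renews certificates, so no external proxy is needed for TLS.
  use SiteEncrypt.Phoenix

  @impl SiteEncrypt
  def certification do
    SiteEncrypt.configure(
      # built-in Elixir ACME client
      client: :native,
      domains: ["example.com", "www.example.com"],
      emails: ["admin@example.com"],
      # persistent folder for account keys and issued certificates (placeholder path)
      db_folder: "/var/lib/my_app/site_encrypt",
      # switch to the production directory once the staging run works
      directory_url: "https://acme-staging-v02.api.letsencrypt.org/directory"
    )
  end
end
```

The endpoint then has to be started through site_encrypt's supervision wiring rather than directly; the library's documentation covers that detail, and the Let's Encrypt staging directory shown here is the safer target while experimenting.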
And there's a big difference between doing Elixir and when I was doing Python. Because if you were doing Python and you set up an app server, you absolutely should put NGINX in front of it, because that app server was never intended to meet the world. But when you're dealing with the Erlang VM, and well-established servers, it's "Yeah, no, they're fine." I've seen a lot of people set up Cowboy, which is the common Erlang and Elixir web server, and be like "Oh yeah, we had Cowboy, and then we had NGINX... And we had an outage on the first big day, because we had a misconfigured NGINX." It's "Okay..." Both NGINX and Cowboy can, of course, handle a ton of load, and the more layers you have, the more you have to make sure that they're all playing nicely. That's what I want to avoid.
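To make the "Cowboy can meet the world" point concrete, here is a minimal, self-contained sketch of a Plug application served directly by Cowboy, with nothing in front of it; the module name and port are made up for the example:

```elixir
# Hypothetical minimal example; run with: elixir serve.exs
Mix.install([:plug_cowboy])

defmodule HelloPlug do
  @behaviour Plug
  import Plug.Conn

  @impl Plug
  def init(opts), do: opts

  @impl Plug
  def call(conn, _opts) do
    # Cowboy terminates the HTTP connection itself; there is no NGINX layer.
    send_resp(conn, 200, "served directly by Cowboy")
  end
end

# Bind Cowboy straight to the port the outside world talks to.
{:ok, _pid} = Plug.Cowboy.http(HelloPlug, [], port: 8080)
Process.sleep(:infinity)
```

In a real deployment you would bind to the public HTTP/HTTPS ports and terminate TLS in the same place (for example with certificates provisioned as sketched above), which is the single-layer setup Lars is describing.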