In this episode, Gerhard follows up on The Changelog #375, the last time that he spoke about Crossplane with Dan and Jared. Many things have changed since then, such as abstractions and compositions, as well as using Crossplane to build platforms, all of which were mostly ideas back then.
Fast forward 18 months, 2k changes, as well as a major version, and Crossplane is now an easy choice - some would say the best choice - for platform teams to declare what infrastructure means to them. You can now use Crossplane to define your infrastructure abstractions across multiple vendors, including AWS, GCP & Equinix Metal. The crazy ideas from 2019 are now bold and within reach. Gerhard also has an idea for the changelog.com 2022 setup. Listen to what Jared & Dan think, and then let us know your thoughts too.
Fly – Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.
Sentry – Working code means happy customers. That’s exactly why teams choose Sentry. From error tracking to performance monitoring, Sentry helps teams see what actually matters, resolve problems quicker, and learn continuously about their applications - from the frontend to the backend. Use the code SHIPIT and get the team plan free for three months.
SignalWire – Build what’s next in communications with video, voice, and messaging APIs powered by elastic cloud infrastructure. Try it today at signalwire.com and use code SHIPIT for $25 in developer credit.
Grafana Cloud – Our dashboard of choice. Grafana is the open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.
This is another KubeCon 2019 follow-up; that was episode #375 of The Changelog, when we talked with Dan and Jared about Crossplane. It was about two years ago, at the end of 2019. But Marques was here as well… So Jared, where is Marques?
Marques is actually still within the Crossplane ecosystem, which is actually pretty awesome. Equinix, [unintelligible 00:02:57.02] so he’s over there, and still contributing to Crossplane a lot. We don’t miss him too much, because we still get to see him.
Should we have added him to this invite? Was it my fault for not adding him? I think it was, right? Marques, it’s my fault.
Yeah, we definitely miss him on this episode here, but you could probably get him on a podcast that’s more focused on what Equinix is doing as well too, specifically.
Okay, it’s great to know that I’m not the only one thinking that. That was like my follow-up thought; that’s great, I love that. Okay, so many things happened since 2019… 2020 was a very interesting year, from so many perspectives… But let’s just think about Crossplane. I would like to focus on that, and we’ll explain why. Dan, do you remember which Crossplane version was out in November 2019, just before KubeCon? Let’s see how good your memory is.
That is a hard question, and my memory must not be that good. I would guess somewhere around 0.10, but that could be way off.
Jared, what do you remember?
Yeah, I think Dan you’re pretty accurate there. It was either like 0.8, 0.9 or 0.10 or so. Definitely not 1.0 yet, I know that for sure.
[04:09] Before I checked, the only thing that I knew was that it was pre-1.0. That’s all I remembered… But I did check, and I know that it was 0.5.0. You’d just cut that release a few days before KubeCon.
Yeah, that early. So what version are we on now, just so that listeners have a point of reference?
We’re now on 1.3, and we have an official support policy as well for maintaining older branches… So our active branches right now are 1.1, 1.2 and 1.3.
I love that. We’ll come back to that later… But I still want to continue with this train of thought, 0.5.0, and 1.3.0. I did a GitHub Compare to see how many commits there have been between 0.5.0 and the latest tag. Do you wanna guess how many?
Oh, I would say over a thousand, probably…
Jared, what do you think?
I’m gonna go like 1,300 is my guess.
Dan, do you wanna readjust, or are you happy with over a thousand? That’s a bit generic.
I’ll go with 1,299.
Okay. 1,838, across 24 contributors. That’s a lot of changes since November 2019. Now, I would have loved to see how many lines have been added and deleted, but the compare was just too big to show that. Now, if we take a step back from the specific changes, and contributors, and versions, what do you remember changing about Crossplane in the last two years?
A big part of the experience has actually changed as well, too. That’s something I tell people a lot when I’m talking about the history and the evolution of the Crossplane project… It took us a good while to land on the final experience here around compositions and building your own platforms and abstractions… And that was not in 0.5. We had maybe some early hints towards that; we were doing something that was more tightly modeled after storage classes in upstream Kubernetes… But now we have the ability to define your own compositions and abstractions that are much more flexible and much more powerful. That’s one of the biggest things, experience-wise, that’s changed over the past year and a half since we’ve talked.
So before we go into what are compositions and abstractions, Dan, what is Crossplane?
So Crossplane is a way for infrastructure platform teams to build their own platform. A lot of folks come to it, and it’s interesting that we talk about this 0.5 release to 1.3, because I think in a lot of ways the experience now kind of reflects the maturation of the project as well… So a lot of folks come to Crossplane because they want to provision infrastructure using the Kubernetes API, the API they’re already familiar with for deploying their workloads… And as they grow in their adoption of the project, they start to move into these higher-level concepts. Jared already mentioned composition; we also have a concept of different types of packages and extension mechanisms… And as they move through it, they kind of start to evolve from just deploying things on Kubernetes to actually building a platform for others to consume and deploy. And we really like to give you that experience of building a platform right off the bat… So if you go to the documentation for example, you create something generic, a database, and you can select whether that’s provisioned on AWS, or GCP, or Azure, or anywhere else you’d like… And you can also select different configurations that can match that database type. You may want a VPC with your database, you may want to connect it to an existing one. So we try to give you that upfront, but a lot of folks still come in and provision their infrastructure and then grow into building a platform on top of that.
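As a concrete illustration of the generic database Dan describes, the developer-facing request might look roughly like the claim below. This is a hedged sketch: the API group, kind, and parameter names are invented for illustration, not taken from a published package; only the general claim shape (composition selection, connection secret) follows Crossplane’s documented conventions.

```yaml
# Hypothetical claim for a generic database abstraction.
# The group (database.example.org), kind, and parameters are invented;
# compositionSelector and writeConnectionSecretToRef are real claim fields.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: changelog-db
  namespace: default
spec:
  parameters:
    storageGB: 20
  compositionSelector:
    matchLabels:
      provider: gcp            # choose which cloud's composition satisfies this
  writeConnectionSecretToRef:
    name: changelog-db-conn    # the app reads credentials from this Secret
```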
So are the abstractions in your description a database? Would that be an abstraction? And then the implementation would be specifically to a provider, or… Is that how that works?
[08:18] Yup. It could absolutely be specific to a provider, and if you’re across the different clouds, or you’re across on-prem and on the cloud, those could be different implementations… But also different combinations of resources. For any single kind of abstract type or composite type, you can have any number of managed resources, which are the granular things that actually represent external APIs, like an RDS instance, or a VPC. You can combine any number of those to satisfy the abstract type.
So it may be the actual destination for where the underlying requests are being made, or it may be the configuration of the different resources that make up that abstract type.
That makes sense. And the composition - it comes as explained, these things being composable, and then having – do you have stacks, or what do you call those compositions as a whole? Do they have a name?
So the general mechanism is referred to as composition, which is also an API type in our schema. I think the closest thing to what you’re describing right now is configuration packages. This is a way to basically say “This is an abstract type definition, this is the schema for it, this is a set of compositions that can satisfy this, and these are the dependencies it has on providers, which are another type of package.” The providers are things like provider AWS, provider GCP, provider Helm. And that configuration package, when you install it into your cluster, is gonna bring along those dependencies in the form of providers, and it’s also gonna bring along those abstractions… And you can also declare dependencies on other configurations.
We really like to see people doing – what we do see really mature Crossplane users doing is composing compositions inside of each other. So if you describe an abstract type like a database, you may make that into a higher-level type called an app, that may provision a VM and a database, or something like that. So you can kind of nest these and build them together, which gives you really powerful building blocks for constructing a platform.
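To make the terminology concrete, the two API types discussed here might look roughly like the sketch below: a CompositeResourceDefinition declaring the abstract type and its schema, and a Composition satisfying it with a provider-specific managed resource. The group, kinds, and field values are illustrative placeholders; the overall structure follows Crossplane’s v1 composition API.

```yaml
# Illustrative XRD: defines the abstract (composite) type and the
# namespaced claim that application teams request.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xpostgresqlinstances.database.example.org
spec:
  group: database.example.org
  names:
    kind: XPostgreSQLInstance
    plural: xpostgresqlinstances
  claimNames:
    kind: PostgreSQLInstance
    plural: postgresqlinstances
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                parameters:
                  type: object
                  properties:
                    storageGB:
                      type: integer
---
# Illustrative Composition: satisfies the abstract type with an AWS RDS
# managed resource, patching the abstract field onto the concrete one.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: xpostgresqlinstances.aws.database.example.org
  labels:
    provider: aws
spec:
  compositeTypeRef:
    apiVersion: database.example.org/v1alpha1
    kind: XPostgreSQLInstance
  resources:
    - name: rdsinstance
      base:
        apiVersion: database.aws.crossplane.io/v1beta1
        kind: RDSInstance
        spec:
          forProvider:
            engine: postgres
            dbInstanceClass: db.t3.small   # placeholder sizing
            masterUsername: adminuser
      patches:
        - fromFieldPath: spec.parameters.storageGB
          toFieldPath: spec.forProvider.allocatedStorage
```

A second Composition with `provider: gcp` labels could satisfy the same abstract type with a CloudSQL resource, which is the multi-cloud flexibility being described.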
To add onto that too, if I can… Getting back to what I was saying about how the experience has changed drastically in Crossplane over the past year and a half, something that’s quite relevant here is that earlier on, in the earlier versions of our experience we were building, we as a project were defining what the abstractions are, like a MySQL abstraction, a Postgres abstraction, a Kubernetes cluster abstraction. And we quickly found that that’s gonna lead to a one-size-does-not-fit-all type of scenario, and we learned that the community really wants to define their own abstractions, so they have complete autonomy and they’re empowered to define what is the shape of the API that’s important to them, and what does MySQL even mean to them, because one size does not fit all, and you don’t want a lowest common denominator problem… So enabling a lot of flexibility to define exactly what these higher-level abstractions mean to your organization, to your business, to your scenarios and needs was a huge part of upgrading and making the experience in Crossplane a lot more powerful.
I think that makes a lot of sense, and I really like the way you think about this. I love it. However, there’s another element which I feel is very important to this flexibility… How do you discover all those abstractions? Is there a central place that you go to just to see them? How do people share these abstractions amongst themselves?
Yeah, good question. So there’s a couple different ways to do that, and Dan has done a lot of work on this, so you can jump in on that and add more, Dan, definitely… So we can package these abstractions up and share them in any OCI-compliant registry. So they have that sort of reuse, and the ability to make themselves available to a broader audience through any sort of registry.
[12:01] We at Upbound are building a registry that builds in a lot of those rich discoverability, search, and sharing features, using the semantic understanding of these Crossplane packages to make them easier to find, share, and reuse. But at the end of the day they’re packaged into just a regular old OCI image, so they can be shared and reused fairly easily with any registry.
I think you mentioned discoverability there, which is really important… So it’s great to share them, but how will users discover them, and how will users understand how this, for example, abstraction combines with something else? How do you link them together? How do they – a tree-like structure, or some sort of relationships? When you said, Jared, that you built this experience, where, how?
Yeah, this experience is being built in our Upbound Cloud service. Our startup, Upbound, is the creator of the Crossplane project, and we’re building a SaaS product and a whole enterprise-focused experience around Crossplane. So since we have a complete understanding of what the package structure is, what the contents are, what it means to be a Crossplane composite resource and configuration, and all that domain-specific knowledge, we’re able to build a rich experience with discoverability and sharing and all those types of things in our Upbound Cloud service on upbound.io.
That makes perfect sense. Anything to add, Dan, to that?
Yeah, well I really liked that Jared pointed out that any OCI-compliant registry can host Crossplane packages. That makes them extremely portable, and that becomes really important when you’re an organization that has potentially really high security concerns, or only run an on-prem setting, or something like that. OCI-compliant registries have become ubiquitous in the industry, kind of alongside the rise of Kubernetes… And the ability for folks to be able to build their private images and push them to their private registries is definitely a big win.
But I know you mentioned this notion of a graph, which I think is a really big part of the untapped potential of the Crossplane community and kind of the marketplace around that. So I mentioned before that those configuration packages can declare dependencies, and you can kind of infinitely compose those. What happens is when you install a configuration, it is going to resolve all the dependencies in there. Crossplane will do that for you. It actually generates kind of a manifest that says “These are the lists of packages, and these are the relationships between them.” And it can go through and actually resolve to the correct version of them. So when you declare a dependency in a configuration package, you say something like “I need provider AWS, and it needs to be greater than v0.18”, and Crossplane will make sure that provider AWS is present in your cluster like that. And if you have two configuration packages with a common parent, it can go through and resolve that there will be no conflicts.
So we actually generate a directed acyclic graph for all the packages that are installed, which gives you that powerful ability to create a reproducible platform, where you get to the point where if you just install that parent node, that top-level node in your DAG, then you’re actually able to reproduce your platform in any Kubernetes cluster where Crossplane is installed.
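The dependency resolution Dan walks through can be sketched in package terms: a configuration’s package metadata declares versioned dependencies, and installing the built package into a cluster pulls the whole graph in. Package names, registry paths, and version constraints below are placeholders, and the meta apiVersion may differ between Crossplane releases.

```yaml
# Illustrative crossplane.yaml package metadata for a configuration,
# declaring dependencies on a provider and on a parent configuration.
apiVersion: meta.pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: platform-example
spec:
  dependsOn:
    - provider: crossplane/provider-aws        # placeholder package name
      version: ">=v0.18.0"
    - configuration: example-org/platform-base # a shared/parent configuration
      version: ">=v0.1.0"
---
# Installing the built package (a separate manifest applied to the cluster);
# Crossplane resolves the dependency graph from here.
apiVersion: pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: platform-example
spec:
  package: registry.example.com/platform-example:v0.1.0   # placeholder
```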
That answers so many of the questions which I didn’t ask, but think about, so thank you, Dan. That’s perfect. There’s one more question which I’m thinking about, because I know that we answered the What fairly well, like what it is; how it works - we went into that to a fair bit… But I don’t think we answered the most important question, and this one I think is perfect for Jared - why Crossplane? Why is it important? Why does it matter?
Yeah, great question. I think there are maybe two different branches of thought there to perhaps explore. The first one is that some of us that created the Crossplane project also created the Rook project as well, too. Rook is storage orchestration for Kubernetes. We were there in the early days of persistent storage for Kubernetes; the story needed to evolve a little bit before people started to become more comfortable with running storage or data-persistent sort of things inside the cluster.
[16:09] So we found there that some of the work that the special interest group for storage in Kubernetes had done was really strong. Persistent volume claims, storage classes, things like that. And we found very early on that applying those same patterns for dynamically provisioning storage would also work very well for other types of infrastructure platform resources, such as databases, and buckets, and even clusters themselves.
So that was the original why of Crossplane, is “Hey, we’ve done great things with Kubernetes for storage… Let’s do more infrastructure resources inside of Kubernetes and bring them in to being managed and provisioned and controlled by the control plane itself.”
And then beyond that, we’ve found that there’s a very strong story too for businesses that are starting to have their own shared services infrastructure platform teams as well, too. They have a responsibility to provision infrastructure and get new services up and running for a whole set of application teams around them… So being able to have some reproducibility and to enable self-service for the application teams is a really strong story - it makes their jobs easier, it lets the application teams get to production faster with reliable infrastructure, and it normalizes on the standards and practices of the whole organization. It just makes the software delivery story, which has a huge dependency on infrastructure, all the more strong.
So I can see how this Why is captured really well in the cloud control plane, which is the abbreviation or the short explanation for what Crossplane is. But I think the Why goes deeper into “Why I would want to use it, why is it important.” And I really like this idea – I think, this is my perspective… You take the best bits of Kubernetes and specifically the API, the unified API, the resources, and you make that available – or actually, no. You make infrastructure available via that very simple API, and you bring all the cloud providers. When I say “all” - that’s always a work in progress; there can always be more. But that’s like a growing ecosystem. And thinking about infrastructure as just an API request to your Kubernetes - that’s really, really powerful. So Dan, why is Crossplane important to you?
I think I have a bit of a unique perspective on this, as someone who’s a younger individual in the industry. I like to say that I kind of grew up in the Heroku generation, in that it was always really accessible for me and folks in my generation to get access to hardware. I have this familiarity with AWS, and even these higher-level things like Heroku and other services, where you just said “I’d like a database to run my app”, or something like that.
One of the things that when I was experimenting with those different services I noticed really quickly is you always were operating at someone else’s defined level of abstraction. So AWS you can think of as being pretty granular, and you have to understand a lot of moving parts to be able to use it effectively. So you have to understand a lot about networking to use almost every service on AWS. You have to maybe understand a little bit about how Postgres works or MySQL works to use RDS. On Heroku, at the other end of the spectrum, you get a database and you don’t really get to tune it to your own liking, or anything like that.
So when I first saw Crossplane open sourced - I believe at KubeCon Seattle in 2018 - I was finishing up my schooling, and I recognized it as something that was gonna really revolutionize the way that organizations were able to provide a platform like that… Because if I as an individual college student was feeling the pain of these different services, you can only imagine what a large enterprise organization was feeling… So the ability to actually take that and build your own platform, but also use other people’s platform… We’ve talked already about this marketplace, right? I envision a future, and I think others do as well, where you go and you get the bits of the infrastructure stack that you don’t care about - you get those from other companies. They might publish them, individuals might publish them… Just like we consume libraries from GitHub, or something like that, and you’re able to say “I wanna take these off-the-shelf bits and sprinkle in my own personalized touch”, and then you get an infrastructure platform that’s tailored to your needs, but also doesn’t require a lot of effort for you to build out, which definitely sounds like a magical experience to me.
You mentioned discovering all the things that made sense for other people - they package them, they put them out there. Jared, you mentioned Upbound Cloud, where I imagine that some of this exists; people can discover it, people can get started… I also imagine that some of these building blocks you curate yourselves and make available to the community… Do you need to, I imagine, create an account with GitHub…? I haven’t tried it. I definitely do wanna try it, maybe (you’re right) after we finish recording this… But what I’m trying to understand is how much do you get when you get started, in terms of the experience? What is the experience that you get to begin with, and at what point do you need to say “Okay, I like it, I’m serious about this. I wanna start paying for Upbound Cloud”? What does that onboarding and what does that early experience look like, Jared?
Yeah, I think that since this has been an open source project for over two years now, we’ve always strongly believed in investing in the community as well, too… So we had to first build this experience and iterate on it to get to where we are today… But through that process we’ve gotten some great adoption, and we’ve gotten folks that are heavily invested in the project themselves as well, too… So with the core of Crossplane, the open source upstream edition, you can do a whole lot of this – the core functionality is all there. So you can provision infrastructure in any cloud provider, or on premises. You can package your own abstractions and define your own platform and push those to a registry to share with other people. So the core of the value proposition is there in the upstream project.
I think that when you start getting to enterprise scenarios and you wanna get maybe some more visibility and a richer experience around the core concepts, then that’s when you can start getting more involved with what we’re building in Upbound Cloud as well, too. For instance, if you want to manage a bunch of different Crossplane instances, or you’ve got multiple of them, maybe one for each team, or one for each environment, having some functionality to be able to manage the teams around that, and the permissions, and auditing and all that sort of stuff starts becoming important… And then I think there’s a whole bunch of really great experiences you can build that provide insight and observability and debuggability and all that sort of stuff into Crossplane as well, too… With a rich browser to see all of your infrastructure that’s being managed, and what the relationships between them are… I think that things like that - insight, observability, manageability of the platform - start becoming quite interesting as well too, which is some of the experience we’re building in Upbound Cloud also.
[24:05] I think that makes a lot of sense, because once you reach a certain scale, then you start having problems that you just wanna pay someone else to handle, because that’s not the value that you’re adding… And that makes perfect sense. But I’m wondering more around that discoverability feature. For example, do I need to define my own abstraction to get started? And sure, I will, but what can I use out of the box quickly to understand how this fits together? How can I discover what’s out there before I grow and before I am bought into Crossplane? What does that look like?
Yeah, let me take a quick stab at that one, too… So I think there’s a couple different depths that you can start diving into. One is the open source upstream crossplane.io docs; they’re very, very useful. We have a Getting Started guide that kind of introduces you into what is a composition, what is a composite resource, how do you connect Crossplane to your cloud providers to start provisioning infrastructure… And it walks you through a very simple scenario where you’re creating an abstraction around a database and providing that to your application teams so that they can self-service and get their databases…
So that’s a great place to get started, and I think that anybody that walks through that getting started guide on Crossplane.io, the docs there, is going to start understanding the concepts and start being productive there.
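For reference, the getting-started flow Dan outlines boils down to a couple of manifests: install a provider package, then point it at cloud credentials via a ProviderConfig. In this sketch the package tag, project ID, and secret details are placeholders.

```yaml
# Install a provider package into the cluster where Crossplane runs.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-gcp
spec:
  package: crossplane/provider-gcp:v0.17.0   # placeholder version tag
---
# Tell the provider which account/project to use and where its
# credentials live. Secret name and key are placeholders.
apiVersion: gcp.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  projectID: my-gcp-project                  # placeholder
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: gcp-creds
      key: credentials.json
```

With that in place, applying a claim like the database example earlier is what actually provisions infrastructure.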
To go another step further, something else that we’ve done in the open source is we’ve created a set of reference platforms. These are higher-level abstractions that start trying to show what are some scenarios, what are some use cases, what are some things you can accomplish that go a little bit deeper than just the Hello World, Welcome, Getting Started type of guide.
So we have a handful of them… Some of them are around creating clusters and data services in the different cloud providers, like in AWS, or GCP… And then we’ve got one for how do you create a multi-cloud Kubernetes, how do you create an abstraction around Kubernetes and be able to provision a cloud of your choice, and provide a set of services inside of that cluster for your applications and your workloads to consume…
And then I did one in a recent talk as well too: we’ve created a cloud-native reference platform that composes together a lot of different projects within the CNCF ecosystem and shows some of the more modern approaches, such as using GitOps and having observability and service mesh and all sorts of things inside of your application cluster as well, too. So those reference platforms are a big help to take you from getting started to “Oh, this is what I can do at a higher level, and some of the more complicated scenarios that I can solve.” So we try to kind of take you through a little spectrum of your journey with Crossplane.
I really like everything that you said so far, especially the reference architectures… And I would really appreciate having some of the links, and this is why - in 2022, for the Changelog.com setup, I see Crossplane being part of it. So I would like to have fewer makefiles, fewer commands to run locally, and more of this control cluster, the C cluster, cluster zero, that then sets up all the other clusters, and composes the entire Changelog.com infrastructure. So that’s one of my goals for 2022. And I think that Crossplane is at a point where it can enable that relatively easily. But I see some components missing, and this is where Dan will guide me through what those steps look like. For our Kubernetes provider - it’s a managed Kubernetes, and we’re using Linode. So the first thing that we’d need to do is somehow provision Kubernetes clusters in Linode. I imagine that would need a Linode provider, which I don’t know whether it exists yet, but it definitely didn’t exist when I last checked Crossplane about a year ago. That would be the first thing.
[27:57] The other thing - and this is more of a nice-to-have - is integrations with Fastly, the CDN. There’s certain configurations that would need to happen… And I know that this is not the [unintelligible 00:28:06.29] that we were talking about, like AWS, GCP, Azure; this is a CDN. But I see Crossplane fitting there really, really well, declaring our CDN as a Crossplane resource. Because it’s all part of the Changelog.com stack. And success in my eyes is being able to define the entire Changelog setup in these Crossplane abstractions. So how does that sound to you, Dan, and what are we missing that I don’t know yet?
That sounds like a perfect use case, and I will admit that I listened to the 2021 infrastructure for Changelog.com episode, and you kind of enumerated that in the past - I think it was six months before that - you had kind of said “We’re moving to Kubernetes to do this”, and folks had said “You’re running an NGINX server, a Phoenix web app, and a MySQL database” - or a Postgres database, I believe you said… And you got a little bit of pushback on moving to Kubernetes, because folks said “You don’t have a microservice architecture. Why are you doing that?” And I love how in that episode you went through and said “Well, you know, there are all of these kind of hidden dependencies that we have.” I believe you mentioned certificates, CDN, CI/CD, monitoring, all of those things. And as someone who’s worked on Crossplane for a significant period of time now, that was really music to my ears. So getting to your specific question, I do know that there is a provider Linode that is very early on, but does exist, and I believe is usable. So that’s one side of it.
Getting to things like CDN - that’s absolutely in scope for Crossplane. That’s a little different from other infrastructure-as-a-service. But we have providers for all types of things, and that cloud-native platform that Jared was just mentioning - it makes use of a really important provider that I wanna bring up, and also a newer provider that just landed, that I think would be useful.
The first one is provider Helm. So what Jared’s talking about is provisioning Kubernetes clusters, and then provisioning Helm charts into them, with that being a single package. So you create your Changelog.com instance as a Kubernetes object; behind the scenes that spins up a Kubernetes cluster, maybe it installs Linkerd into it - I know y’all had some issues with measuring latency on some requests - it puts your Phoenix app in there, NGINX, whatever else you need… It also spins up your managed Postgres instance on your cloud provider of choice - I know you mentioned that y’all might want to continue running that in a cluster, but as you alluded to, and many folks like Kelsey Hightower have said, that’s definitely something that we would encourage you to look into managed offerings for; so you’re gonna include that in a single package. And just recently, one of our co-workers at Upbound, a contributor to the Crossplane project who’s worked a lot on provider Helm, created a new provider called provider Kubernetes. So if you don’t wanna use Helm as your abstraction, you can actually now create Kubernetes objects directly in both the cluster that Crossplane is running in, as well as any tenant clusters you spin up.
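The two providers Dan mentions roughly work like this: provider Helm exposes a Release managed resource that installs a chart into a target cluster, and provider Kubernetes exposes an Object resource that applies an arbitrary manifest there. A rough sketch, with placeholder names, and API versions that may differ by provider release:

```yaml
# provider Helm: a Release managed resource that installs a chart
# (Linkerd here, echoing Dan's example) into a target cluster.
apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: linkerd
spec:
  forProvider:
    chart:
      name: linkerd2
      repository: https://helm.linkerd.io/stable
    namespace: linkerd
  providerConfigRef:
    name: tenant-cluster   # placeholder; refers to the tenant cluster's kubeconfig
---
# provider Kubernetes: an Object managed resource that applies an
# arbitrary manifest to a target cluster, without going through Helm.
apiVersion: kubernetes.crossplane.io/v1alpha1
kind: Object
metadata:
  name: changelog-namespace
spec:
  forProvider:
    manifest:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: changelog
  providerConfigRef:
    name: tenant-cluster
```

Both resources can sit inside a composition alongside the cluster they target, which is how "cluster plus workloads" becomes one package.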
So I think altogether we’re gonna have a lot of pieces for exactly what you wanna do, and something that would be really exciting to me is, you know, y’all might create a template or a configuration package for deploying a Phoenix web app, and someone else might come along and see that in the registry and say “Hm, I also have a Phoenix web app with these components. Let me just put in the other bits I need to be able to provision my website.” And you can share that, and it can be verified, and go through our conformance testing, and that sort of thing, and be available to others. So I think you’re on the exact right track with the direction you’re going.
That’s amazing. I knew that our journeys would meet at some point, and I think they were getting very close to that point where we start walking together, in a way… I’m very excited about discovering what it looks like. I’m imagining that your documentation has all the examples I need. I know exactly who to reach out to if I get stuck, so that’s great… And I know that all of this happens in the open, so everybody will benefit from this and it’ll be visible to everyone who wants to see how this is done…
[32:09] I’m also wondering - this is another component which I would like to introduce in our stack, which I feel will solve a lot of the “It runs on my computer”, “It doesn’t work on Jerod’s computer” sort of thing. I mean a different Jerod - Jerod Santo from Changelog. And I’m wondering, what is the relationship between Crossplane and Argo CD when it comes to deploying apps and keeping configuration in sync?
Absolutely. I’m really glad you brought that up, because a ton of the Crossplane users we’re seeing are using Argo CD, and that’s something both the Crossplane ecosystem and Upbound Cloud are definitely in support of. Typically, when folks are using Argo CD with any sufficiently-sized architecture, a lot of times they’ll move to this app of apps model. So you kind of have your initial app, which tells where to get your other Helm charts from, or whatever you’re using to deploy…
So a big thing that can be enhanced is now alongside your applications the infrastructure is defined in the same repo… I know you mentioned you like monorepos, so we can definitely give you that experience. And you can start using GitOps to provision your database, or using GitOps to provision your CDN, or something like that… And it’s tied to your deployment of your application. So you’re moving from these nice packaging mechanisms for workloads to a nice packaging mechanism for an application.
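As an illustrative sketch of that pattern (the repo URL, paths, and names are hypothetical), an Argo CD Application in the app-of-apps style can point at a directory that holds both the workload manifests and the Crossplane claims for the infrastructure that workload depends on, so a database or CDN gets provisioned through the same GitOps flow as the app:

```yaml
# Hypothetical app-of-apps entry: one Application syncing a directory
# that contains both the app's manifests and its Crossplane claims.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: changelog
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/changelog-infra  # hypothetical monorepo
    targetRevision: main
    path: apps/changelog   # Deployment/Service YAML plus Crossplane claims
  destination:
    server: https://kubernetes.default.svc
    namespace: changelog
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert drift back to what Git declares
```

With this in place, merging a change to a claim in the monorepo is what provisions or updates the infrastructure, tied to the same revision as the application code.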
And because there’s a standardization on the Kubernetes API, that means if you’re running Crossplane on your Linode Kubernetes service, then Argo CD can target that; if you’re running a hosted control plane on Upbound Cloud, where we actually run Crossplane for you on our own infrastructure, then you can target that with Argo CD, because we give you kubectl access to that cluster. So there are definitely a lot of benefits in going with this GitOps approach. We certainly encourage that kind of outlook.
What would you recommend, Jared? Would you recommend that we set up a Kubernetes cluster where we run Crossplane, and that controls everything else? Or would you recommend that we use Upbound Cloud?
Yeah, it depends on what you’re going for. I think the model in Upbound Cloud works really well if you want to have a single, centralized control plane that is going to be managing a lot of other control planes in other places, so it becomes kind of a central point of managing all of your infrastructure, and you can spin up new clusters, workload clusters, and deploy applications and services into them… I think that that’s a really good model for Upbound Cloud, is having a centralized point there.
I think it’s a perfectly relevant model as well, too… If you’re running one cluster or you want to have it on premises, then you can run a Crossplane instance yourself there, and it’ll have all the workloads, all the applications, all the services within one single place as well, too. It’s a perfectly fine model for that.
One thing that we started doing as well too is that we actually have released a distribution of Crossplane that helps you run Crossplane on premises; even if you’re not going to run the hosted Crossplane instance inside of Upbound Cloud, you can still connect it to Upbound Cloud and get all those observability and manageability features as well, too. Even if you’re running everything on premises and having all your workloads in a single place that is under your control.
One of the biggest reasons why I think I would want us to use Upbound Cloud is because the most important thing that controls everything else is a managed service. So if there’s an issue with Kubernetes - well, we don’t know about that. Actually, we don’t even care how you run that managed Crossplane service. All we care about is that it’s always available. If there’s a problem, you’ll fix it… And we will always know that the thing which manages everything else is healthy. I think that’s a very big value proposition. And we’re not asking you to manage our entire infrastructure, because you don’t even know what it is, it keeps changing, so on and so forth, so I really like this decoupling… But what I do expect to happen is whatever manages everything else, you take care of it, because you know it inside out. And to me, that is like “Yes, please. Sign me up.” That’s what I’m thinking. Would you disagree, Dan?
[36:10] Absolutely. And one of the things that I think is a really important distinction here from other ways to provision infrastructure - so you have your kind of legacy ClickOps, if you will, where you go into the console and you create it –
No, hang on… This is too good. Please say that one more time. This is so good… This is the first time I hear that, and I love that… I think others need to take notice. We can’t just skim over it; this was too good. Please say that one more time.
I definitely can’t take credit for the term, but the term is ClickOps, where you go in and provision your infrastructure by clicking around in the console. I don’t know who to attribute for that, but it’s certainly not myself. But hopefully, that’s not what most organizations are doing… But kind of the next evolution of that - with things like Terraform or Pulumi or infrastructure-as-code tools. And those are really great, because you can version that config that you run to go ahead and provision your infrastructure. That’s an awesome model.
One of the things that could be nice about that is that you don’t have a service that you have to worry about to provision that infrastructure. You run it from your local machine. The drawback of that is that if you’re not actively running something, then that infrastructure is free to change or be modified, and that’s especially a big deal in an organization where you have lots of people provisioning and modifying infrastructure, and things like that. So having that hosted control plane, as you’re saying, you can allow someone like Upbound to host that for you. And then you also don’t have to worry about your infrastructure getting out of sync, because as soon as it is, you can get an alert for it. As soon as something goes down, we can bring it back up for you, according to how you would like that behavior to be reflected. So I think you’re spot on with both the fact that having that hosted kind of central point of provisioning infrastructure is really important, but also just having something that’s constantly evaluating your infrastructure is a big gain over what most organizations are doing.
I mean, even if – we’re not a big team, right? Changelog is a fairly small team, like 3-4 people… And some of us are spending very little time - myself included - on the actual infrastructure side; and I think people miss this.
Now, we wouldn’t want this knowledge to be stored in a wiki or captured in some docs, or even captured in some code. We would want this to be automated so that you don’t need to know much once you encode what you want to happen. And as long as the control plane is a managed service, which is very important, then things will just keep being applied, and everything will be healthy on the management side.
Now, if there is a problem in the integration with the providers - well, that’s a separate problem. That will happen regardless. But at least, you don’t need to be an expert in SRE, an expert in ops to run this thing; it just runs itself, literally. And that’s a dream. You’re literally automating yourself out of the job, and I think that’s the best possible approach to this kind of thing. If you automate it all, it just takes care of itself. How amazing is that?
Yeah, and I think that something you said there, Gerhard, is kind of interesting as well, too. A lot of folks say “automating yourself out of a job”, but in reality, what you’re doing is you’re automating yourself into an ability to handle more important problems. There’s so many services and components and things underneath the stack that people are building and delivering applications today. It’s not really reasonable to know every single thing and worry about every single component as well, too.
So the ability to automate and to offload some of that into managed services, or well-founded processes around automation as well too is really nice to be able to free you up, to be able to worry about more things that are important, and I guess recording other episodes of awesome podcasts as well too, in your case there.
It’s scary how well I could anticipate that. I was expecting one of you two to say what you’ve just said, Jared, and it’s scary that I could anticipate that. It’s like, wow. You’re blowing my mind right now… Because you’re right. What about rather than doing some tedious ops work, SRE work, what about trying new services out? What about trying to level everybody else up? What about helping the industry grow? How amazing would that be? What about trying things out and helping those things improve, such as Crossplane? Now, isn’t that a much more interesting proposition than configuring load balancers and figuring out why your NGINX config is wrong?
[40:30] I mean, that’s what I wanna see, and that’s what I wanna promote… So thank you, Jared, for preparing everything so nicely for that mental picture. You’re not automating yourself out of a job, but you’re automating yourself out of tedious tasks… Which - they get old. I mean, if you’ve been doing this for 10-20 years - sure, things change slightly, but it’s more or less the same thing. We are proposing a new model. Crossplane is proposing a new model, and that’s what gets me most excited about it.
Yeah, and you could see the same exact thing Kubernetes did for applications, of being able to - instead of dealing with running services on a particular VM or making sure they’re up and running with Systemd, Systemctl, whatever, being able to run those completely across an entire fleet of VMs, and have machines that have redundancy and consistency, and just everything working overall, and being able to self-heal, and have all that reliance over time is such a nice model… And continuing to do that in other areas of the stack, in a broader scope as well too - I think it’s just a really good way to keep going with all this.
Is there anything else that I should keep in mind as I explore this Crossplane integration, Dan?
I would really appreciate if you keep in mind the pain points. There’s a lot of really powerful technology in Crossplane, especially around designing compositions, packaging configurations… But the experience is still a little bit painful, in my approximation. Right now, to design your schemas for your abstract types, you actually have to write an OpenAPI v3 schema in YAML and push that, which - obviously, that’s a much lighter lift than doing something like writing an application, and writing some logic, and that sort of thing.
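For a concrete sense of what that authoring experience looks like today, here is a minimal sketch of a CompositeResourceDefinition with its hand-written OpenAPI v3 schema inline (the group, kinds, and fields are hypothetical):

```yaml
# Minimal sketch of defining an abstract type in Crossplane.
# The group/kind names and the schema fields are illustrative only.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xpostgresqlinstances.example.org
spec:
  group: example.org
  names:
    kind: XPostgreSQLInstance
    plural: xpostgresqlinstances
  claimNames:                      # enables the namespace-scoped claim variant
    kind: PostgreSQLInstance
    plural: postgresqlinstances
  versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:             # the hand-written OpenAPI v3 part
        type: object
        properties:
          spec:
            type: object
            properties:
              parameters:
                type: object
                properties:
                  storageGB:
                    type: integer
                required:
                - storageGB
            required:
            - parameters
```

Writing this nested schema by hand is exactly the part of the experience being discussed as a candidate for better tooling.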
That being said, that’s an experience that we really want to improve, both in the Crossplane community, and on the Upbound Cloud side as a product. We’ve definitely started to invest in some of those areas, particularly around editor support, being able to do things in the browser… We recently had a hack week at Upbound, where we worked on some of those things and made some big strides… But we definitely appreciate feedback from folks like yourself, who have that knowledge of what they want the experience to be… Because for us, this is a product we use ourselves within Upbound to manage our infrastructure, and that sort of thing. That being said, it’s a bit of a small sample size within our own organization.
[43:58] So all types of input we get, whether it’s folks coming in Slack, folks opening issues, jumping on calls with us or doing a podcast episode with us - those all help us make the product better. And the great thing that you’ve alluded to multiple times now - it’s all open source, so if you want it to be different, you can come along and make it better as well. And I think we’ve developed a really strong community around that for new folks to come in and empower them to be able to add the features to Crossplane that they want, or work with us to add them as well.
You’re ticking all my boxes right now. It’s scary, literally. Slack, how to give feedback, GitHub, the experience, the focus… I’m hearing all the right things. It’s scary how excited this makes me, so I have to dial it down a bit, because it’s just like - again, you’re ticking all my boxes. So - okay, that’s good to know…
How does this sound to you, Jared? …if we wanted to use more than just Kubernetes to run our Changelog app in - I’m thinking multi-cloud; if we wanted for example to try out Fly.io, and Render.com, and Kubernetes on Linode - what would that look like in Crossplane? Is it even possible?
Yeah, that’s a good question. I am not super-familiar myself with at least Fly; a little bit with Render… But I think something that’s really important to remember here is that the machinery and framework in Crossplane is all there, such that the support for lots of infrastructure already exists - providers for all sorts of in-cloud, on-premises sorts of things - but anything that has an API can be managed by Crossplane. So we have an extension mechanism for anybody to write providers that can have a lot of coverage of a lot of different places, and with that base layer of “Hey, here’s a simple provider” - you can almost think of them as a driver for Crossplane to talk to some set of infrastructure or some set of services. Writing a provider for that then gives you the ability to plug it into the rest of your infrastructure, compose them together, have a consistent model for all of your infrastructure, applications and services… So it’s really nice to be able to extend Crossplane through anything, or to anything that has an API, and there are a lot of examples around that.
And a lot of community people are building interesting things that we didn’t expect as well either. For instance, the providers you expect in Crossplane to manage cloud resources like GCP and Azure etc. - those are all there. But then we’ve also got some community ones to manage things like GitHub and GitLab, to be able to manage repos and teams etc. in those places. And then one for SQL to be able to create users, or tables, or things as well, too. So really, literally, anything that has an API can be incorporated into Crossplane, and so giving you the ability to then start stitching and managing everything together with a very simple, normalized, consistent interface.
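As a rough sketch of what installing such a provider looks like (the package reference and version tag shown are illustrative), installation is itself declarative - you point Crossplane at an OCI package and it installs the CRDs and controller:

```yaml
# Installing a Crossplane provider: Crossplane pulls the OCI package,
# installs its CRDs, and starts the provider's controller.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-gcp
spec:
  package: crossplane/provider-gcp:v0.18.0   # illustrative version tag
```

The same mechanism covers the community providers mentioned here - a GitHub, GitLab, or SQL provider is installed the same way, and its resources then compose alongside cloud resources under one consistent interface.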
This reminds me a lot of how Terraform used to work, and how we used to use Terraform for many of these things, like managed DNS, for example; we used to have that integration. What is the comparison between Terraform and Crossplane, if any?
Yeah, I think that’s a question that we get a lot, and I know even on the Crossplane blog we have some posts about “What are the differences between them?” We’ve already talked about a few of them, one of them being that active reconciliation. That’s kind of the obvious one. The difference between a control plane and an infrastructure-as-code tool. We see that as a really big benefit.
You know, getting down into some more specific details… And this may not be super-applicable to Changelog, for instance, because you all have a small number of folks in your company - but you know, you may grow in the future. But one of the big parts that we think is really important about the Crossplane composition model compared to other infrastructure tooling broadly is this concept of bringing the level of permissioning to the level of abstraction. If I break that down a little bit… When you use something like Terraform, you can create modules and compose them into higher-level concepts to where you as the person actually executing it and requesting infrastructure - you don’t have to understand all the underlying bits.
[48:03] That being said, whether you’re executing on your local machine, or you have some sort of jump box that you log into that has the proper credentials - whatever gets rendered out at the end of that pipeline when all the modules are resolved and the conditionals are evaluated, you need to have those permissions, or the system you’re using needs to have permissions to actually create those resources on AWS, or on Linode, or wherever you’re actually provisioning that infrastructure. And that’s fine if you as an infrastructure admin are gonna have those credentials anyway, if you’re the only person doing it. However, when you move to a platform approach, what you want to be saying is “I’m giving you the ability to create the abstract type, and I define the policy and mapping behind that. I’m never giving you permission to create the granular resource…” And that abstraction that you create is going to be long-lived, right?
So one of the big aspects of composition, kind of getting into more of the technical implementation, is that there are two flavors of every abstract type that you create (you can optionally disable one of them): there’s a cluster-scoped version, a Kubernetes cluster-scoped resource, and then a namespace-scoped resource. So you as a developer requesting infrastructure for your application would likely create something at the namespace scope. And you can have [unintelligible 00:49:16.11] to say “This developer and this team can create a database in this namespace”, and then you control the mapping as an infrastructure admin to how that actually gets rendered out… And the provider controller that actually provisions the infrastructure is what is given the credentials to create that.
So you’re never giving the app developer and their namespace credentials to even talk to AWS. You’re giving them credentials to basically be able to provision what you’ve defined as an abstraction, which may go to AWS, may go to Linode, may go to your on-prem infrastructure… But that isolation is really important, and persisting that isolation. That database object continuing to exist in their namespace is a really important distinction from other infrastructure systems, which we believe as you scale and as you grow and as more and more folks are provisioning infrastructure using Crossplane, that becomes even more important.
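A hedged sketch of what that permission boundary can look like in practice (the claim kind, names, and namespace are hypothetical, reusing a PostgreSQL-style abstraction): the developer's RBAC grants access only to the abstract claim type in their namespace, never to cloud credentials or to the underlying managed resources.

```yaml
# The developer requests the abstraction in their own namespace...
apiVersion: example.org/v1alpha1
kind: PostgreSQLInstance       # namespace-scoped claim for the abstract type
metadata:
  name: app-db
  namespace: team-a
spec:
  parameters:
    storageGB: 20
---
# ...and their RBAC covers only that claim type - not AWS, GCP, or the
# granular managed resources the composition renders behind the scenes.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: db-claimer
  namespace: team-a
rules:
- apiGroups: ["example.org"]
  resources: ["postgresqlinstances"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
```

The claim object stays in the namespace for the life of the database, which is the long-lived isolation being described: the mapping to actual cloud resources, and the credentials to create them, live entirely with the infrastructure admin and the provider controller.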
And from an Upbound Cloud perspective, we’re giving you services around managing your credentials and getting a view into your global infrastructure picture, being able to have a view of that graph, of the relationship of requesting infrastructure and what actually gets rendered out, and what credentials are being used, and when you give someone the ability to create a database in their namespace, what does that mean in terms of their ability to create something on Linode? Those are all really important things that we think sets Crossplane apart from other infrastructure tooling systems.
That is a great answer, thank you very much for that. I’m sure this is something which I’ll be referring back to, so I love having this recorded… Because I’m sure as I gain more experience with Crossplane, this will become more and more relevant, and even necessary to go beyond the getting started part.
You mentioned - I think either Dan or Jared, I can’t remember exactly who, but you mentioned about the hack week that you recently had at Upbound… And I’m wondering, Jared, what other things came out of that hack week that you were excited about?
Yeah, really good question. The hack week was something I was super-pumped about. I’m kind of more involved in engineering leadership these days than hands-on-keyboard technically focused… So that was something that for the team I was super-excited to make happen. So the whole guiding principle there was that people were going to be focusing on what was important to them, what’s something that either they had been dreaming of, or something maybe that was a big pain point for them as well too, so just making themselves more productive, or collaborating with new teammates as well, too.
[51:41] I think in a hack week there’s a lot of different ways you can take it, and it’s really up to the individuals participating in it to get what they want out of it. For instance, some of the other things that came out of it - Dan had mentioned that provider-kubernetes had come out of it; so a brand new provider was one thing that came out of it. The ability to monitor and get metrics from on-premises instances of Crossplane and being able to surface those up is something that came out of the hack week project. Some developer tooling around the way we build our client browser side apps as well, too; our frontend apps. Some strong developer tooling to get designers integrated more into the process and designers being able to change different UI values and styling of an app, and have that ship to production as well too was something that came out of it… And then also developer tooling for being able to have remote debuggability for clusters as well too was something that came out of it.
So it was just a whole spectrum of things… People working together on some things in open source, some things for Upbound… There’s been a lot of people making progress, and people get really inspired when they get to work on something that’s very important to them internally as well, too. So that was just a really cool experience.
I love the sound of that. I’m wondering, is there a blog post or something public that people can go to and see these specific tools?
We just wrapped up the hack week recently… We had made a little bit of noise on social media about it on Twitter and stuff like that, and at the end of the week we did a demo session where we were kind of live tweeting information about it… So on Twitter there’s a little bit of information, but I think we’re gonna do a write-up to have a blog post about it coming up soon as well too, on Upbound’s blog.
I would love to get that. So maybe by the time this episode goes live, I would love to have a link to put it in the show notes, so that others can see… Because there’s a lot of cool stuff.
One thing which I haven’t heard, and maybe I’m getting confused as to whether this came out of the hack week or not - it’s the k8s container registry. Dan, what can you tell us about that?
Yeah, so k8s container registry was a project – it’s about a month old at this point… And for folks that aren’t familiar, Crossplane (we’ve already said) uses OCI images for its packages… And it actually doesn’t go through the Kubernetes node to be able to pull that. So on an individual Kubernetes node you have a container runtime which basically facilitates pulling images from various registries, and that’s how you get an image to run a pod, or a deployment, or something like that.
Crossplane - our packages are very small, our OCI images are very small, because they actually just contain a stream of YAML in them… So we actually go directly from Crossplane to the registry and we pull that in. We have our own cache for those packages that are just stored in a volume, and you can use whatever backing storage you want for that.
So one of the things that I saw as a pain point when people were developing new packages was that they were having to build their package and push it to the registry and then install it declaratively into Crossplane. And this is a good model, and it’s definitely really useful when you’re consuming packages from elsewhere… But if you’re just trying to get a fast development loop, you definitely don’t wanna be pushing your package to a registry just to use it in your local kind cluster, or something like that. So what k8s container registry does is it actually utilizes the Kubernetes API server itself to push images through its proxy functionality.
Behind the scenes, the Kubernetes API server is just a REST API. And one of the endpoints for pods is the proxy endpoint. So k8s CR basically is just a CLI tool which will pull an image from your Docker daemon, or a tarball that you have on your local system, and it will push it to a registry running in your Kubernetes cluster through the API server, so you don’t have to actually expose your pod; you don’t have to create a service or a load balancer or anything like that as long as you have kubectl access and you have [unintelligible 00:55:27.06] to hit that proxy endpoint. You can actually just push straight into your Kubernetes cluster… Which means something like Crossplane that needs a registry has one running right beside it.
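The underlying mechanism can be illustrated with plain kubectl (the pod name, namespace, and port below are hypothetical): any request you can make against a pod's port can also be made through the API server's pod proxy subresource, which is what lets a CLI push image layers to an in-cluster registry without exposing it.

```shell
# Talk to an in-cluster registry pod through the API server's proxy
# endpoint -- no Service, Ingress, or LoadBalancer required.
# (Pod name "registry-0" and port 5000 are illustrative.)
#
# Raw REST path shape the proxy uses:
#   /api/v1/namespaces/<ns>/pods/<pod>:<port>/proxy/<path>
kubectl get --raw "/api/v1/namespaces/default/pods/registry-0:5000/proxy/v2/"
# A registry that speaks the OCI distribution API answers /v2/, so this
# checks reachability before pushing layers through the same proxy path.
```

All you need is kubectl access and RBAC permission on the pod proxy subresource - the same prerequisites described for pushing straight into the cluster.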
[55:39] We’ve shown things where Crossplane is running and has a sidecar container which is an OCI-compliant registry. And importantly, a lot of this functionality was very easy to build, because there’s a library that we depend on in Crossplane that a lot of folks are big fans of at this point called go-containerregistry. This gives you kind of the low-level bits of how you actually construct an image. We’ve continued to kind of evolve our usage of that package.
And actually kind of alluding to that hack week, myself and my co-worker Michael - we worked on a way to actually build OCI images in your browser. Since we’re just putting YAML in them, you can imagine putting an editor in a web page that you receive, and using Rust and WebAssembly we’re actually able to build and push an image from inside your browser itself.
So lots of fun stuff around that, and lots of stuff that will help the developer loop maybe not be used in most production settings, but getting folks to the point where they can have a package that’s consumable and really useful to them as an organization as quickly as possible - definitely a goal for us.
So what I’ve heard is pushing OCI container images straight into Kubernetes, with no external container registry, just using kubectl and the Kubernetes API. That sounds amazing. I love that. And I have ten follow-up questions, especially around the WebAssembly and the web browser… But we’re running out of time. So the only way we can solve this is by having a follow-up, which we’ll talk about next. But for now, just to wrap this up nicely, this is the last thing which I’m thinking about - if someone was to take away one thing, so if one of the listeners was to take away one thing from our conversation, what would that be, Jared?
I think one of the biggest things here for me is that folks are starting to buy into Kubernetes and really getting understanding and seeing the power of a control plane type of approach for many of their applications, that “Hey, you can do that for your own infrastructure as well, too”, and that we have a super-welcoming community that loves to talk about these things, support people, get more people involved as well, too. We’ve been watching the community continue to grow, and the community helping itself, and building contributions for themselves as well, too… So the more, the merrier in that party. So if you wanna manage infrastructure in Kubernetes, come to Crossplane.io and join the community.
That sounds amazing to me. What about you, Dan? What would you want people as a listener to take away from this discussion?
I would say that my ask for folks is to think a little bigger with their infrastructure. And what I mean by that is envision a future that seems impossible right now. A lot of folks think kind of like this pie-in-the-sky vision of being able to just consume these infrastructure packages and build them into higher-level abstractions, and then have my own control plane, have my own version of Heroku seem kind of far-fetched, and likely feels like it would be super-hard. There is a lot of tooling in place to be able to do that today in Crossplane, and folks going in and exercising that and saying “This is not quite there” or “This part is great” is gonna help us make that more of a reality. So if that’s something that sounds of interest to you, which I imagine for most folks it would be, please come and try it out. It’s free to try all these things, they’re all open source… And see what you can build. Maybe you’ll build the infrastructure package that large companies start to depend on, and that’ll be useful for you for both managing your infrastructure, but also maybe getting some stars on your GitHub as well.
This was too much fun. It was very difficult to contain myself and not be more excited. Thank you very much for this lovely conversation. See you next time.
Thank you so much for having us. It’s always a pleasure to talk with you.
Our transcripts are open source on GitHub. Improvements are welcome. 💚