Docker, CoreOS, and Industry Leaders Unite to Create and Support an Open Container Spec
There’s a big announcement today coming from Docker, CoreOS, and other industry leaders around standardizing a software container format.
Fedora CoreOS is a container-focused, (mostly) immutable Linux distribution designed to be lightweight and secure. It features Ignition, an early-boot provisioning system that eliminates the need for post-boot configuration; OSTree, an atomic update mechanism; and podman, a secure, daemonless container runtime.
If you’ve ever asked yourself WHY you need to SSH in to configure a system, why your cloud server OS comes with inkjet printer packages, or how you can escape the burden of critical but uninspiring kernel updates… then check out Fedora CoreOS!
This is a big deal. We’ve been tracking CoreOS since the beginning — we’re huge fans of Alex, Brandon and the team behind CoreOS.
Red Hat has signed a definitive agreement to acquire CoreOS, Inc., an innovator and leader in Kubernetes and container-native solutions, for a purchase price of $250 million.
Red Hat is a publicly traded company, and while this announcement hasn’t really impacted shareholder value (yet), we, the open source community, have been immeasurably impacted by the team behind CoreOS.
Also, check out Alex Polvi’s announcement on the CoreOS blog which includes some details and backstory.
Alex Polvi, CEO of CoreOS, joined the show to talk about their new open source product rkt, their App Container Spec, and CoreOS, the container-only server OS focused on securing the internet.
We’re talking with Gerhard Lazu, our resident ops and infrastructure expert, about the setup we’ve rolled out for 2019. In late 2016 we relaunched Changelog.com as a new Phoenix/Elixir application, and that included a brand new infrastructure and deployment process. 2019’s infrastructure update includes Linode, CoreOS, Docker, CircleCI, Rollbar, Fastly, Netdata, and more — and we talk through all the details on this show.
This show is also an open invite to you and the rest of the community to join us in Slack and learn and contribute to Changelog.com. Head to changelog.com/community to get started.
Talos touts:
- Security: reduce your attack surface by practicing the Principle of Least Privilege (PoLP) and enforcing mutual TLS (mTLS); see the sketch after this list.
- Predictability: remove needless variables and reduce unknown factors from your environment using immutable infrastructure.
- Evolvability: simplify and increase your ability to easily accommodate future changes to your architecture.
Hit up the README if you’re curious about the name, why there’s no shell/SSH access, or how it’s different from CoreOS, RancherOS, or LinuxKit.
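If “enforcing mutual TLS” sounds abstract, here’s a minimal Go sketch of what it means on the server side: the server verifies client certificates against a trusted CA, so both ends authenticate each other. Talos wires all of this up itself; the certificate paths and port below are placeholders for illustration only.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Placeholder paths; in Talos the PKI is managed for you.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("failed to parse CA certificate")
	}

	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			ClientCAs:  pool,
			ClientAuth: tls.RequireAndVerifyClientCert, // the "mutual" part: clients must present a valid cert
			MinVersion: tls.VersionTLS12,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello, authenticated client\n"))
		}),
	}

	// server.pem / server-key.pem are also placeholders.
	log.Fatal(srv.ListenAndServeTLS("server.pem", "server-key.pem"))
}
```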
It’s the end of 2020 and on this year’s “State of the log” episode Adam and Jerod carry on the tradition of looking back at our favorite moments of the year – we talk through our most popular episodes, our personal favorites and must listen episodes, top posts from Changelog Posts, and what we have in the works for 2021 and beyond.
Matched from the episode's transcript 👇
Adam Stacoviak: On Go Time, Kelsey Hightower - well-known in the cloud space, well-known in the Kubernetes space… Came from CoreOS, did a bunch of cool stuff there, has been really well-known in the Kubernetes space, doing a lot of cool stuff around containers, a lot of things around Docker, a lot of things around Go… He’s been an MC many times, at many conferences, [unintelligible 00:50:36.01] I’ve seen him recently - or at least a couple years ago - at GopherCon… But he wrote a post for us called “Monoliths are the future.” Technically, he didn’t write it, but technically, he did… Maybe you can give a peek behind the veil to the process there… But on Go Time, Kelsey shared an unpopular opinion called “Monoliths are the future.” And he laid it all out there. And we turned that into a post via the transcript, and shared that…
Because hey, what happens often is content will get stuck in a podcast. And we’ve had Alex doing great transcripts for us since episode 200 of The Changelog. We’re now at 400-and-something now? I don’t know what number we’re at. 430-something, I think, if I can recall correctly… So for many years now, basically. So this post from Kelsey - I mean, I think it got 150,000 uniques… What was the number? 200,000?
Monitoring and debugging distributed systems is hard. In this episode, we catch up with Kelsey Hightower, Stevenson Jean-Pierre, and Carlisia Thompson to get their insights on how to approach these challenges and talk about the tools and practices that make complex distributed systems more observable.
Matched from the episode's transcript 👇
Kelsey Hightower: [00:55:58.21] When I was at CoreOS, when I started to learn about the Raft protocol, Etcd, the implementation of using that to do a key-value store, the biggest thing that I’ve seen was people not really understanding the system. Most people were trying to add more nodes for performance, even though Etcd is a single-writer, single-leader system… And then people didn’t understand that the mechanism it chose for consistency could also get in the way of availability. We’d make that trade-off of saying “In order to keep the data store consistent, I will take an outage to figure out the consistency”, then have a leader election to decide which node should be the leader, based on the data that we have committed in our logs.
People not understanding how that system makes the trade-offs - you tend to get into these issues where, for example, I had people put Etcd in an autoscaling group. So the way that works is three nodes, you have a quorum, everything is healthy; one node crashes, another node comes up. And it looks like three nodes still, based on all the things you can see and observe, but inside of the actual cluster membership API you would see four nodes. And what customers would do is they would be getting lucky for about a year. They would have a machine crash, and six months later now you have five nodes inside of the cluster API, but you actually only have three nodes running. Luckily for you, that’s enough for a quorum.
It’s when you get to that sixth node of this kind of crash/automatic-repair cycle that you now have six nodes in the API and you can no longer reach quorum, because the three dead nodes can no longer vote. Now you’re hard down, and the system is down forever, and there’s nothing you can do to recover until you understand that you’ve gotta go remove those three members. But guess what - you can no longer do it online, because in some systems it requires a quorum to actually remove the three dead members from the membership.
So those are kind of things where things go wrong - people may not understand the trade-offs of the things they’re doing at the consistency layer or the availability layer, and they find themselves in a situation where they can’t troubleshoot… So guess what people are doing? They’re blowing away their entire Etcd setups and starting from scratch because that seemed to fix the problem.
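To make the recovery step Kelsey describes concrete, here’s a hedged sketch using etcd’s official Go client: list the registered members, compare them against the nodes actually running, and remove the dead ones so the live nodes can form a majority again. The endpoint and member ID are placeholders, and as he notes, the removal call itself needs a working quorum in many setups.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://etcd-0.example.com:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// List registered members; compare this against the nodes actually running.
	resp, err := cli.MemberList(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range resp.Members {
		fmt.Printf("member %x name=%q peers=%v\n", m.ID, m.Name, m.PeerURLs)
	}

	// Remove a dead member by ID so the remaining nodes can reach quorum again.
	// Note: this call itself requires a functioning quorum to succeed.
	const deadMemberID = 0xdeadbeefcafef00d // placeholder ID
	if _, err := cli.MemberRemove(ctx, deadMemberID); err != nil {
		log.Fatal(err)
	}
}
```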
We’re talking with Gerhard Lazu, our resident SRE, ops, and infrastructure expert about the evolution of Changelog’s infrastructure, what’s new in 2020, and what we’re planning for in 2021. The most notable change? We’re now running on Linode Kubernetes Engine (LKE)! We even test the resilience of this new infrastructure by purposefully taking the site down. That’s near the end, so don’t miss it!
Matched from the episode's transcript 👇
Gerhard Lazu: I think that’s a really good place to start… Because last year, as exciting as it was to roll out that infrastructure for 2019, we were using Docker Swarm, and the big difference was that we didn’t have to install Docker, we didn’t have to do any of that management, because it came with the operating system. We were using CoreOS at the time, and CoreOS out of the box just had Docker, so we didn’t have to install it.
So there were fewer things for our scripts, our Ansible to do, and we could switch to something like Terraform, and we could worry about managing not just the VM, but also integrating with the load balancer - NodeBalancer, in Linode speak - and it was a much simpler configuration. But it still meant that we had a single VM. And some might frown upon that, like “Why single VM?”, but looking at our availability for the entire year, it wasn’t that bad, and any problems that we had were fixed relatively quickly, except one; we may go into that later.
For the entire year, we had downtime less than four hours. That was pretty good for a single VM. So it just goes to show that some simple things can work, and you can push them really far. And I know that Jerod is a big fan of simple things, because they’re easy to understand, when something goes wrong it’s easy to fix it…
I know that Adam was really excited about us going to Kubernetes. We wanted to do that for a while, but the time wasn’t right… And it wasn’t right because Linode didn’t have a simple, one-click Kubernetes story. You had to do a bunch of things… You could do it if you really wanted to, but it wasn’t easy. And then, in 2019, at the end, November, the magic happened. Linode Kubernetes Engine entered beta; I was at KubeCon, I met with Hillary Wilmoth and Mike Quatrani from Linode, we gained access to Linode Kubernetes Engine, it was in beta…
[00:08:13.27] And one command later, we had a three-node Kubernetes cluster. That was really simple; that was the experience we wanted and were waiting for. And once we had that, things kind of flowed from there. It was really simple to add all these other components.
Now, compared to what we had before, we had to worry about, I suppose, the migration from CoreOS to Flatcar, because CoreOS was end-of-lifed following the Red Hat acquisition… So we had to do that migration, and we knew the end of life was coming… So rather than doing that and continuing with the complications of a single VM and Docker Swarm, we went to something simpler, which was Kubernetes… Because we had this one API, and we could provision everything… Which meant less Terraforming. We didn’t have to provision NodeBalancers, we didn’t have to create volumes and then attach them to VMs using Terraform… We didn’t have to do any of that. This Kubernetes API would do all of those things for us, which meant that it was a much simpler system to work with.
What is cloud native? In this episode Johnny and Aaron explain it to Mat and Jon. They then dive into questions like, “What problems does this solve?” and “Why was Go such a good fit for this space?”
Matched from the episode's transcript 👇
Aaron Schlesinger: [00:19:44.11] The last job I had, we were building a platform as a service… And it was all these containers. We started on CoreOS, with their Fleet system. I think that’s deprecated now. It was sort of “We will take a container and we’ll put it on X number of machines”, and that was what Fleet did. And that was pretty powerful at the time, because you had these sort of beginnings of that abstraction, of “I don’t need to care about VMs anymore. I can give an API a container name, and it’ll do its thing.”
And once we got to that point, we then had to start breaking things apart, because a platform as a service has lots of different logical components that don’t necessarily fit together. And this is right along the lines of what you said, Johnny. It has a Git SSH server, it’s got a logging component, it’s got an administrative interface, and a control plane… And the list goes on.
So once we hit that point where we said “We just can’t have a monolith with all of that stuff in it at once, because managing that thing, opening all the different ports and managing certificates and all that - that’s just not feasible for us.” So once we got all of our stuff running on Fleet, we then had to reinvent the wheel and figure out how to do secrets, and distributed locking, and all that stuff. And then Kubernetes came out and then we just adopted all the primitives that Kubernetes gave you… But stripping away the Kubernetes part, even though that was great, and stripping away Fleet as well - the idea that we could have implemented it ourselves would have been painful, but we probably could have - I’m not gonna say definitely; we probably could have… It’s the fact that - yes, we had a technical requirement that stuff was split up, while at the same time stuff could interact with the other stuff.
Service A could interact with services B and C in a way that was manageable and that didn’t require two different operations and release management teams to manage services A, B and C. And for me, right at that moment - and I remember this - I was dreading having to build those systems, to manage all the things and route network traffic and all that stuff. And once we’ve found Fleet, that was when – we went down this road of starting to think about an abstraction, and starting to think about independently scaling, and starting to think about how to organize the team around all these different services, and manage the sort of organizational aspect of this… I started thinking about a lot more things, too… But right then and there was the seed that got planted in my mind, that started me down this whole cloud native road.
Brad Fitzpatrick returns to the show (last heard on episode 44) to field a mixed bag of questions from Johnny, Mat, and the live listeners. How’d he get into programming? What languages did he use before Go? What’s he up to now that he’s not working on the Go language? And of course… does he have any unpopular opinions he’d like to share? 😏
Matched from the episode's transcript 👇
Brad Fitzpatrick: So now I’m looking at like “Do I wanna use Flatcar Linux, like the CoreOS continuation project, or do I wanna use K3S, or maybe I just wanna use Podman…?” So I’m kind of like debating all my options now to build a more simple thing to run containers at my house.
Distributed systems are hard. Building a distributed messaging system for these systems to communicate is even harder. In this episode, we unpack some of the challenges of building distributed messaging systems (like NATS), including how Go makes that easy and/or hard as applicable.
Matched from the episode's transcript 👇
Jon Calhoun: And the hard part there is I think almost every open source project has been mostly wrong when trying to figure out how to build a business around it. Even ones – I’m thinking of like CoreOS… I don’t know how well they did, but they had to be acquired, and I assume that if they had a better alternative, they wouldn’t have done that… So you see ones like that, and I’m like “CoreOS seemed like it was doing very well, and… Unfortunately, no.”
In this episode, we’re joined by Kelsey Hightower to discuss the evolution of cloud infrastructure management, the role Kubernetes and its API play in it, and how we, as developers and operators, should be adapting to these changes.
Matched from the episode's transcript 👇
Kelsey Hightower: Now we’re talking about the problem with merge conflicts. At least my experience has been “How do we avoid merge conflicts? I know… Let’s start another repository. Let’s have a better API contract.” We’ve been so bad at language-level API contracts, we decided to leverage things like JSON and RESTful interfaces to give us a much harder contract. They’re very hard to violate, because they’re so rigid. You can’t reach behind a class and call a method, because you can’t do that with REST. It’s not easy.
So I think what happens in the team aspect is – I like the idea of modules. When I saw the way Go did modules - you can actually have separate teams building modules, but that’s independent of how you compile the modules into the final deployable.
At CoreOS I remember we used to do this a lot, we used to have a lot of individual modules, and then package main is where the collaboration happened. I would bring in my module and maybe add a route to it, or something. But once you touch that file, it’s only because you’re saying “Hey, I’m now part of the contract. Here’s my route”, but then I would just go do the rest of my work in the module, and allow the build system to take all of our work and combine it together. And if you’re using tools like Bazel, it could be build one big binary, build three little binaries, with flags… But either way, that’s a separate concern, the way you lay out your source trees and how you develop code collaboratively, versus how you deploy the results of that effort.
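A minimal sketch of the pattern Kelsey describes, with everything inlined so it stays runnable: in practice, billingHandler and paymentsHandler would live in separate modules owned by separate teams (the names here are made up), and package main would be the only file they all touch.

```go
package main

import (
	"log"
	"net/http"
)

// In the pattern described above, these two constructors would live in separate
// modules (e.g. example.com/billing, example.com/payments); they are inlined here
// only so the sketch compiles on its own.
func billingHandler() http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("billing\n"))
	})
}

func paymentsHandler() http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("payments\n"))
	})
}

func main() {
	mux := http.NewServeMux()

	// package main is the shared "contract": each team adds one line here to
	// register its route, and does the rest of its work inside its own module.
	mux.Handle("/billing/", billingHandler())
	mux.Handle("/payments/", paymentsHandler())

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```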
Changelog’s resident infrastructure expert Gerhard Lazu is on location at KubeCon 2019. This is part one of a two-part series from the world’s largest open source conference. In this episode you’ll hear from event co-chair Bryan Liles, Priyanka Sharma and Natasha Woods from GitLab, and Alexis Richardson from Weaveworks.
Stay tuned for part two’s deep dives into Prometheus, Grafana, and Crossplane.
Matched from the episode's transcript 👇
Natasha Woods: I would just go a step further with the documentation… A little story of working on the release team… When we were working on release 1.7 and 1.8, more from the communications side - and I mean communicating with the different stakeholders and contributors, and people that are really giving back, but also the companies that are seeing what’s coming down the line for the new releases, and things like that… We didn’t have a ton of documentation on that. So for 1.9, [unintelligible 00:31:09.01] really championed documenting the process, and documenting everybody’s roles.
So you can go and you can see all this documentation now, but what are each release team member’s roles and responsibilities, what is the previous experience that they should have before they are elected to these roles, because it changes each time for the release… And what are the key steps that need to happen from a timeline perspective. I mean, they’ve always had a timeline, but it was a little bit more detailed. And then we’ve just iterated on it for several releases. So Jace came to me and he said “We need to write down something for marketing.” And marketing is always a very last thought, but there’s so much more into it than just maybe tweeting something out, or calling a reporter to get an article.
[00:32:03.01] So I sat down and I documented everything I did for that release, and we implemented it, and then we implemented it for 1.10. Then I was lucky enough to have my second child, and I was on maternity leave, and so I wasn’t able to really update the next person and help them through if they had questions, because I wasn’t available. So they were able to take everything that I documented and implement it from a marketing perspective for the next few releases, which was really great and really helpful…
So some of those things were, you know, learning what is coming down the pipeline from the SIGs, and what is going to make it into the release and what’s not going to make it in the release, and why is this relevant to the audience, and who is the audience; why is it relevant to a vendor, why is it relevant to a customer, why is it relevant to another developer, and then making sure that that is communicated not only within the blog post, but it’s also communicated to any press that are covering it, because you don’t want misinformation out there and confusion to happen. And then how is this being communicated to, say, the companies, the Red Hats, and CoreOS back then, and those types of companies who are following along with the releases, how are you communicating that to them. So we actually created this really great, detailed process out of it.
Obviously, Kubernetes is in a different league, the same league as Linux. But for the smaller projects, you can take pieces of this and see the importance in communication across everything, and also documenting.
Johnny and Mat are joined by Kris Nova and Joe Beda to talk about Kubernetes and Cloud Native. They discuss the rise of “Cloud Native” applications as facilitated by Kubernetes, good places to use Kubernetes, the challenges faced running such a big open source project, Kubernetes’ extensibility, and how Kubernetes fits into the larger Cloud Native world.
Matched from the episode's transcript 👇
Kris Nova: Yeah, and I think it’s important to call out - there’s tooling in this space as well. We’re starting with a prototype; there’s the Operator Framework, which came out of the folks at Red Hat and CoreOS. We have Kubebuilder, which is an open source upstream effort… So we are starting to look at ways of building out frameworks for us to start developing controllers and operators, but again, it’s a lifetime of iterating and working on it, and we’re just not there yet, I don’t think.
Joseph Jacks, the Founder and General Partner of OSS Capital joined the show to share his plans for funding the future generation of commercial open source software based companies. This is a growing landscape of $100M+ revenue companies ~13 years in the making that’s just now getting serious early attention and institutional backing — and we talk through many of those details with Joseph.
We cover the whys and hows, why OSS now, deep details around licensing implications, and we speculate on the types of open source software that make sense for the kinds of investing Joseph and others plan to do.
Matched from the episode's transcript 👇
Joseph Jacks: Yeah, that is a really good question. I think there’s lots of ways to answer that question, but one potential way of looking at it is 2018 is a really huge year for commercial open source. I guess open source software overall is about 20-25 years old, and just in the last five(ish) years we’ve seen a huge amount of growth in this kind of emerging category of companies that you could classify as commercial open source software companies. Your mention of the index that we’ve been managing/maintaining for several years kind of indicates that obviously there’s quite a lot of activity in companies that have formed, that have reached huge levels of scale.
100 million dollars in annual revenue (or 25 million per quarter) was chosen as a metric just based on that revenue number being pretty relevant to companies that can kind of go public, or have large outcomes.
So “Why now?” I think is really a function of 2018. I’m actually just at the GitHub Universe event right now, so GitHub’s a good one to mention, but… So far, in aggregate, we’ve had over 30 billion dollars in either IPOs, private equity events, or mergers and acquisitions of commercial open source software companies. Most of those are companies that have been in existence for several years, at least 5, 6 or 7 years, in some cases 10 years… So it takes a sort of similar or same amount of time roughly for commercial open source software companies. I believe it’s possible to do this in a shorter period, but roughly between 8 to 10 years to become large, sustainable, public, IPO-able companies…
[00:08:11.24] And I think 2018 has been sort of this tipping point year for public markets, and then seeing lots of these large outcomes occur, where most of these companies are venture-funded. But as we’ve talked about in the opening, we haven’t really seen a focused firm I think mostly because there just hasn’t been the synthesis of appreciating these companies are fundamentally different as compared to proprietary, closed source enterprise software companies or just software companies in general.
We very strongly believe that commercial open source software companies are fundamentally different functionally in almost every way, as compared to proprietary closed source software companies. And that’s kind of another motivator for starting OSS Capital - the founders need to be served differently, the support structure is very different along lots of different dimensions, and the companies kind of just grow and evolve and go to market and build products for businesses also quite differently.
But just to answer your question, I think 2018 has been a really remarkable year for large open source outcomes, obviously GitHub being bought by Microsoft, MuleSoft having their second exit to Salesforce after IPO-ing last year, Magento getting acquired by Adobe, SUSE getting acquired by a private equity firm, Elastic’s IPO, CoreOS getting acquired by Red Hat, Alfresco getting acquired by a private equity firm… We think there’s quite a lot of IPO dominoes (if you will) that are gonna fall over the next several months even. We think there’s probably a few more between now and the end of the year… So that’s one of the other factors of 2018.
I guess maybe one last comment - in 2018 almost on a monthly basis, like January, February, March, April, May, every single month we’ve actually seen a large commercial open source outcome… Every single month. So it’s been pretty amazing so far, as a year.
Carmen Andoh joined the show and talked with us about inclusivity, the 2017 Go Developer Survey, visualizing abstractions, and other interesting projects and news.
Matched from the episode's transcript 👇
Carmen Andoh: [00:20:06.09] Yeah, so the one thing that I always kind of latch onto is this inclusion question. And I think about inclusivity because I have been a person who has maybe internalized that maybe I don’t belong. It’s really more relevant to people who maybe don’t feel like they belong, whether that is true or only perceived… But one of the things I noticed in the Go – so I was asked by a part of the Go team, because I’m one of the Working Group through Golang… “Hey, can you take a look at the survey and see if there’s anything we should modify for the upcoming year?” So I looked at it, and everything looked okay. I didn’t at the time see anything that I would change about it.
But when the survey results came out, I kind of took a closer look at the inclusivity question and I was like “Oh, these are interestingly-worded things… Maybe we could discuss and see if we could reword some of the questions.”
I’ll give you a little back-story… My son was diagnosed with autism when he was two, and I’ve talked about this at my GothamGo keynote, but one of the things that I did to try to understand that was to get a master’s in it… And my master’s did a lot of work with survey design and analysis. So one of the things that’s in surveys is designer bias, whether we like it or not. The perspective of inclusivity in a survey design is by definition you have to decide what is default and what is not default, and why you would identify as an under-represented group, because that was the nature of the question, right?
So last year’s survey, or the results of 2016, there were a lot of write-ins. One of the things that surprised me was “objection to the question” as a write-in… And I would love to speak to those 150 individuals and say “Well, what is it that you object to? Do you object to the way that the question was worded? Do you object to the fact that the question was asked at all? Do you object to the fact that maybe there isn’t a category, or maybe the way that the multiple-choice was put in?” I would love to speak at length further with them.
The other thing that I thought was interesting was that about 33% to 37% in both years did not answer the question at all. They just skipped it. I guess it wasn’t a required thing that you had to choose, right? So I would also love to know - well, that’s interesting… One in three gophers didn’t wanna answer this question; now, what kind of assumptions can I make there? Can I make assumptions that they just didn’t think that this was something worthwhile to answer, and why that is? One of the things is “I don’t consider that I belong to any under-represented group”, whatever that might be; I’m not gonna make any assumptions.
So anyway… Those are the two things in the survey in terms of inclusion that I talk about. Because we always see about diversity and inclusion, and I always focus on inclusion, and it’s something that’s sometimes hard to scale because belonging and inclusion can be intensely personal. I’ve had talks with people who maybe don’t feel included because they feel that they’re of older age; I’ve had conversations about that with some gophers.
Some also are a part of a religious group that cannot drink alcohol or coffee, and so some of our conferences and meetups and events - they feel a little bit left out or there’s no alternate for them. So it’s things like that that we maybe could include, but then again, there’s this whole idea of “Okay, well then do we have to – maybe just allow that write-in, but…”
[00:23:58.10] Anyway, so I went and I kind of looked at other tech communities. There was the Stack Overflow Developer Survey. That, of course, gets far more submissions, but one of the things they did that kind of created a stir on Twitter was that they asked the gender question and they didn’t have any options for trans, non-binary or other gender minorities… So then the next year they included that, and that helped those who identify as non-binary feel included. I felt like we didn’t have anything like that, and lo and behold, I went and I looked at the Rust survey, because gosh, how many times over time we think of Go, sometimes we think of other languages, and there’s Rust… And for Rust, that was an option to check.
I have some non-binary friends and co-workers, and I kind of said [unintelligible 00:24:49.13] I kind of scrubbed the communities as these and I said, “Which survey design or question do you like better?” and they were like “Well, I don’t feel erased in the second one”, meaning Rust.
These are just things to think about, and I don’t know what the answers are, but I do know that – the thing that I’ve talked about is I remember walking into GopherCon in 2015 to the CoreOS pre-party… Do you remember that, Brian and Erik?
Damian Gryski joined the show and talked with us about perfbook, performance profiling, reading white papers for fun, fuzzing, and other interesting projects and news.
Matched from the episode's transcript 👇
Erik St. Martin: Yeah, and even if you look at Raft - Raft itself kind of explains how the consensus works, but you take an implementation of that, like etcd or Consul or something like that, and there’s so much more that goes into it… Especially when you talk about having a service that can stand the test of time; you need to be able to back-up the data and compact logs and all these things. There’s so much more work than just that; that’s kind of the root of it, but like you said, there’s so much more engineering and problems to solve around it.
[00:32:03.25] Moving on from performance, another topic you are very into is fuzzing. For anybody who may not be familiar with what fuzzing is, do you wanna give a rundown on what fuzzing is, and then we can get into why you think everybody should be doing it?
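For readers who want a concrete picture before the discussion, this is roughly what fuzzing looks like with Go’s built-in support (added in Go 1.18, well after this episode aired): the fuzzer mutates inputs and fails the test when an invariant breaks. The Reverse function and its invariants are illustrative, not from the episode; both pieces can live in a single reverse_test.go for a quick experiment.

```go
package reverse

import (
	"testing"
	"unicode/utf8"
)

// Reverse reverses a string rune by rune (an illustrative target for the fuzzer).
func Reverse(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

// FuzzReverse checks two invariants: reversing twice returns the original
// string, and reversing valid UTF-8 yields valid UTF-8.
func FuzzReverse(f *testing.F) {
	f.Add("hello, 世界") // seed corpus entry
	f.Fuzz(func(t *testing.T, s string) {
		if !utf8.ValidString(s) {
			t.Skip("fuzzer produced invalid UTF-8 input")
		}
		if doubled := Reverse(Reverse(s)); doubled != s {
			t.Errorf("Reverse(Reverse(%q)) = %q, want %q", s, doubled, s)
		}
		if !utf8.ValidString(Reverse(s)) {
			t.Errorf("Reverse(%q) produced invalid UTF-8", s)
		}
	})
}
```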
Jeff Lindsay joined the show to talk about workflow automation, designing APIs, and building the society we want to live in… plus a surprise special announcement!
Matched from the episode's transcript 👇
Erik St. Martin: So I didn’t do a lot of development this week. Well, a little bit… So I’ve been in New York City at this Open Hack thing that Microsoft’s been hosting, which is like a cool little hacking challenge conference, and I’ll write up something, a little bit more about that, but it’s been super fun. As part of the thing, we had to deploy a Kubernetes cluster with metrics and stuff, and the Prometheus operator by CoreOS is badass… It’s the first time I’ve used it, because all the Kubernetes clusters I’ve set up and administered was prior to this. But the pattern is really awesome, because it actually uses custom resource definitions, and I kind of hinted at this in our conversation about building abstractions using the operator pattern…
So literally, to get our service monitored by Prometheus and into our Grafana graphs, we had a sidecar process that scraped this stuff over a custom protocol and used the Go library, which automatically gives you an HTTP listener for Prometheus with a /metrics endpoint to expose the gauges… So boom, that part’s done. And then in order to get Prometheus to find it, it was just a – because they use CRDs, they have a custom Kubernetes resource called the ServiceMonitor, where basically I told it the label to look for on my custom service, and it automatically knew how to find all instances of that service and scrape its metrics. That was it.
[01:00:18.27] That’s so useful, not having to custom-configure Prometheus every time you launch a new app, and then reload the configuration for Prometheus, and stuff… I just thought that was really cool.
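For context on the “/metrics endpoint” Erik mentions: the Prometheus Go client gives you that handler almost for free, and the Prometheus Operator’s ServiceMonitor then selects the service by label and scrapes it. Here’s a minimal sketch with a made-up counter name:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// A made-up counter, just to show a metric being registered and exposed.
var requestsTotal = promauto.NewCounter(prometheus.CounterOpts{
	Name: "myapp_requests_total",
	Help: "Total number of requests handled.",
})

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.Inc()
		w.Write([]byte("ok\n"))
	})

	// This is the /metrics endpoint a ServiceMonitor (or a plain scrape config)
	// would point Prometheus at.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```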
Dan Kohn, Executive Director of the Cloud Native Computing Foundation, joined the show to talk about what it means to be Cloud Native, the ins and outs of Dan’s role to the foundation, how they make money to sustain things, membership, the support they give to open source projects, the home they’ve given to Kubernetes, Prometheus and many other projects that have become the de facto projects to build cloud native applications on.
Matched from the episode's transcript 👇
Jerod Santo: That’s interesting… I wanna talk about that architecture a little bit, because from the outside looking at it, even if you go to CNCF.io and you look at the platinum members and you see AWS and CoreOS and Google and Docker and all these large corporations, and then you see some of the membership fees that they’re paying, which for the platinum is like 370k a year - there’s like this aura of this as a pay-to-play type of a situation, and it’s so interesting that the structure you all put in place is specifically to fight against that happening… Is that what you’re saying?
Ivan Porto Carrero joined the show to talk about generating documentation (with Swagger), pks, kubo, and other interesting Go projects and news.
Matched from the episode's transcript 👇
Ivan Porto Carrero: Yes, it does… NSX-T. It includes NSX-T, which is VMware’s overlay network; it’s the second generation of it. What this does over any of the other solutions that are out there - because most people will typically go with flannel originally, and then maybe look at something like Calico for the policies. It actually gives every pod a container interface that can be managed outside of just the environment. You can have a network administrator who sets up a bunch of global policies in some other system - the NSX management plane - and that will then translate into rules for Kubernetes, for example.
There is more stuff to it, because NSX-T is quite an extensive piece of work. So it’s pretty optimized in how it deals with sending traffic and doing the routing rules and so on, but those are implementation details of NSX-T itself. What is unique I think is that it has a centralized management plane for all types of container interfaces, and that is where Kubernetes also takes advantage of it. The NSX team has an integration for Kubernetes that also works with some of the other Kubernetes distributions, so yes, it’s a very important piece of it, the security aspects that NSX-T brings to bear.
Liz Rice joined the show to talk about containers, cloud security, making complex concepts easier to understand, and other interesting Go projects and news.
Matched from the episode's transcript 👇
Erik St. Martin: Being you have a networking background, what about something networking related? I think there’s a lot of stuff going on in the cloud networking space now; you know, CNI, and you’ve got things like Flannel and Calico and all of these things that create these mesh networks and things. Understanding that a little bit might be interesting.
This is an anthology episode from OSCON 2017 featuring awesome conversations with Kelsey Hightower (OSCON Co-Chair and Developer Advocate at Google Cloud Platform), Safia Abdalla (Open Source Developer and Creator of Zarf), and Mike McQuaid and Nadia Eghbal (GitHub Open Source Programs).
Matched from the episode's transcript 👇
Kelsey Hightower: No, I didn’t, and I think that’s when I started to do them more. I got a little bit more confident. I was at one of the very first Kubernetes ones where we were all getting around the 1.0 launch, and we were all in San Francisco. This was when I was still at CoreOS, and I met the core engineering team, and we were all there for the Kubernetes summit. I was doing this demo - it was smooth… I was actually doing it on my laptop. And then the networking switched, and all the VMs crashed, and I’m like almost out of time… I was like, “Anyone wanna see me finish this?” and they were like “Yeah!!!”, because everyone was on the edge of their seats to see how this thing goes down. So I deleted the whole cluster and I built it back from scratch, walked it back up, and we got the whole thing done, and it was like “mic drop.”
Someone came up to me afterwards and was like “You did that on purpose, you were just trying to show off.” I was like “Man, I’m sweating bricks, dude!” That was so dope, and then that told me that it’s okay to mess up. What people come to see is you make it through it, and that’s what gave me that confidence… “If that’s the worst, then I’m good from here.”
Aaron Hnatiw joined the show to talk about being a security researcher, teaching application security with Go, and a deep dive on how engineers and developers can get started with infosec. Plus: white hat, black hat, red team, blue team… Aaron sorts it all out for us.
Matched from the episode's transcript 👇
Carlisia Pinto: That is true. There was that big CoreOS bus. They’re not doing that?
Kris Nova joined the show to talk about developer empathy, running K8s on Azure, Kops, Draft, editors, containerizing odd things… and what it’s like to play a keytar.
Matched from the episode's transcript 👇
Erik St. Martin: That’s one of those things that keeps getting on my list, since I’ve seen it on – I know Tectonic does that too, but since I’ve seen it done, it’s like “Oh, man… I really wanna do that…”, just manage the Kubernetes components inside Kubernetes, too.
Tim Hockin and Aparna Sinha joined the show to talk about the backstory of Kubernetes inside Google, how Tim and others got it funded, the infrastructure of Kubernetes, and how they’ve been able to succeed by focusing on the community.
Matched from the episode's transcript 👇
Jerod Santo: And then I also noticed that – you know, we’re talking about the architecture, kind of the underpinnings here, and I just love to see when there’s other open source things that are involved, because nobody’s building these things in a vacuum, and you have Etcd being used for service discovery, which is a highly lauded tool out of CoreOS, which is very cool… So you’re pulling together things - Docker, Etcd, and of course all this custom stuff as well… At the end of the day, it makes very much sense from a command line, but surely there’s some sort of declarative way that I can define, similar to a Dockerfile – is there a Kubernetes file where I can say “Here’s what I want it to look like”, and I can pass that off and it builds and runs? Or how does that work?
Marc-Antoine Ruel joined the show for a deep dive on controlling hardware, writing drivers with Go, and other interesting Go projects and news.
Matched from the episode's transcript 👇
Brian Ketelsen: Secret thing, secret thing… You know what I ran into on the internet two days ago? I was doing my typical late-night surfing through GitHub thing, looking for interesting projects to star and talk about on the show, and I ran across a fork of CoreOS that Jessie Frazelle maintains, and it looks very clearly to me, in her fork of CoreOS’s build scripts, that she is using CoreOS as a desktop OS, because she’s added X11 and all kinds of other stuff to it. I can’t wait to find some time to talk to her about that - maybe at GopherCon - and find out what that looks like.
[00:48:06.25] That is crazy, because CoreOS - it’s got Chrome’s updating system, but it’s Gentoo in the background… So it’s really powerful how you could build the whole OS just by changing a couple of config files and rerunning a script and waiting a couple hours.
Mat Ryer joined the show to talk about creating your own Gopher avatar with Gopherize.me, the importance of GitHub Stars, his project BitBar, and other interesting Go projects and news. Special thanks to Kelsey Hightower for guest hosting too!
Matched from the episode's transcript 👇
Kelsey Hightower: [00:07:42.28] I always think about that… You brought up a good point - Rails did a lot for Ruby, and I would say maybe Docker did the same thing for Golang. Docker adopted Go really early on, and I think most people – because they attracted a huge open source community of contributors, and I can even remember when I was at CoreOS that all their stuff was also written in Go, and I think those projects force a lot of people to look at Go seriously, because they wanted to contribute and get their features in. So in some ways, in my mind, I consider Docker the Rails for Go; even though it wasn’t a frontend app, it was just one of those applications that was so popular, had so many contributors that it introduced so many people to Golang for the very first time.
Travis Jeffery joined the show to talk about Go, Jocko, Kafka, how Kafka’s storage internals work, and interesting Go projects and news.
Matched from the episode's transcript 👇
Erik St. Martin: If you’re Brandon Philips from CoreOS, you hang out and you work until the second somebody taps your shoulder and tells you to go on stage. [laughs] I’ve never seen somebody so calm before having to talk.
Alright, so #FreeSoftwareFriday… I know we’re on a tight timeline with Carlisia having a hard stop.
Johnny Boursiquot and Bill Kennedy joined the show with Erik and Carlisia to talk about a hard subject — Imposter Syndrome. Not often enough do we get to have open conversations about the feelings of inadequacy we all face at some point in our careers; some more often than others. You are !imposter.
Matched from the episode's transcript 👇
Erik St. Martin: It’s funny you say that, because it’s kind of the same thing from my perspective too, and for years I’ve avoided public speaking and blogging about stuff. I just wanted to work on cool stuff, I didn’t want to share it out of that fear. Even the podcast… We’re on episode 30 and I’m just starting to get to a point where my anxiety isn’t just making my heart pump out of my chest every time the mic turns on.
People don’t see that, right? They see the outward perspective, and you’re analyzing everything you’re doing, and every uhm and uh and nervousness and things like that, but most people don’t realize, so they perceive that you’re this walking ball of confidence, just walking out on stage, preaching to people and stuff like that. They don’t see the nervous wreck that everybody is for months beforehand, preparing.
[00:08:19.29] Although there are some people who can just do it. Brandon Philips from CoreOS… I watched him backstage, he’s just working on his computer, just waiting for them to tap him on the shoulder and be like, “Alright, you’re up”. He just goes on stage, and everybody else is just kind of like rocking in their shoes, comforting themselves before they go up on stage.