In this episode, we will be talking to Russ Cox, who joined the Go team at Google in 2008 and has been the Go project tech lead since 2012, about stepping back & handing over the reins to Austin Clements, who will also join us! We also have Cherry Mui, who is stepping into Austin’s previous role as tech lead of the “Go core”.
Austin Clements: Yeah, absolutely. So I do want to first say that stability is a very important part of Go. So I have no plans to come in and completely shake up the house. [laughter] So yeah, in terms of what’s sort of top of mind for me and what I’m thinking about… We like to say that Go is about engineering at scale, and engineering for scale. One of the things that was really top of mind for me coming into this role is how we can scale the engineering of Go itself without losing its minimalism and approachability. There are both technical and people aspects to this. For example, on the more technical side, we’re starting to see dividends from the new diagnostics strategy that [unintelligible 00:21:58.09] has been driving. There we basically said “There’s no way that we on the Go team at Google have the resources to build all of the runtime diagnostics tools that people want and need. So how can we create a platform on which people can build their own tools?” That’s a very technical approach to sort of scaling the engineering of Go itself.
And then there are a lot of people processes. I’m thinking a lot about how we better communicate with and leverage the amazing developer community that we have. How can we improve our transparency? How can we better accept contributions? How can we keep people around longer? How can we create long-term stability in the Go project, while scaling up the engineering of Go itself? Another thing that’s really top of mind for me right now is the engineering-at-scale perspective.
One of the sort of founding principles of Go was that programming should be fun. And while I do strongly believe that programming in Go is still fun, I also think that one of the costs of Go’s stability is that there’s been a lot of experimentation in a lot of other languages. And there are places where I think other languages have been finding ways they can do better, ways to become more productive, more fun. Go has never been about cargo-culting from other languages, but I think we can learn a lot from the movement that other languages have had in recent years.
Go sort of started the development of more systems programming languages, of sort of more exploration in what can a programming language be… And I think it’s fantastic that we have both sort of Go on the more stable end of the spectrum, and also other languages that are willing to just like go wild and explore things. And I think we can learn from some of the best ideas in there, and bring them back into Go, and sort of keep things fresh… Because the bar for what makes programming fun moves. It’s not in one place. The bar for what makes programming productive moves. And I think we need to keep up with that, and we need to be learning and exploring.
[24:19] And then on engineering for scale - I’ve been thinking about how to engineer systems that scale for a long time. My PhD thesis was on the formal relationship between software interface design and multicore scalability. And I’ve been thinking a lot about Go’s approach to performance recently. We spent many years basically trying to float all boats, and we’re kind of running out of opportunities on that front. Obviously, we’re going to keep trying to do that, but it gets harder and harder over time.
So I think we’re moving into a new phase with performance and scalability of Go, where we’re going to have to give people more mechanisms to do explicit performance engineering. But that said, I also deeply believe that performance engineering in Go needs to be incremental and composable, so that engineers can trade higher engineering cost for lower resource cost just where it matters, without having to pay the high engineering cost everywhere, and so that they don’t have to be constantly thinking about the effects of performance optimizations as a system evolves.
Some of our experiments with memory allocation are actually a great example of this. A few years ago we put out experimental support for explicit arena-based allocation. We never moved forward with that, because it really wasn’t composable: you had to be pretty careful to keep track of what was allocated how, and the details of that tended to leak out of APIs and libraries that used arenas. So it was incremental, but it wasn’t composable. Now we’re experimenting with a new variation of arena allocation that is composable, that works on these principles, so that an engineer doing performance optimization can drop in a few annotations in the code, and then not really worry about it again for a while.
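To make the composability problem concrete, here’s a toy sketch. The `Arena` type below is hypothetical, written for illustration only (it is not the real experimental arena package): the point is that once a function hands out arena-allocated pointers, the arena’s lifetime becomes part of that API’s contract, and freeing the arena silently invalidates values the caller may still hold.

```go
package main

import "fmt"

// Arena is a hypothetical bump allocator, for illustration only.
type Arena struct {
	buf  []int64
	next int
}

func NewArena(size int) *Arena { return &Arena{buf: make([]int64, size)} }

// AllocInt64 stores v in the arena and returns a pointer into it.
func (a *Arena) AllocInt64(v int64) *int64 {
	p := &a.buf[a.next]
	*p = v
	a.next++
	return p
}

// Free recycles the arena's memory; here we zero it to simulate reuse
// clobbering old values.
func (a *Arena) Free() {
	a.next = 0
	for i := range a.buf {
		a.buf[i] = 0
	}
}

// parseID allocates its result in the caller-supplied arena. The arena
// is now part of this API's contract: callers must not use the returned
// pointer after freeing the arena. That lifetime detail has leaked out.
func parseID(a *Arena, raw int64) *int64 {
	return a.AllocInt64(raw)
}

func main() {
	a := NewArena(16)
	id := parseID(a, 42)
	fmt.Println(*id) // 42: fine while the arena is live
	a.Free()
	fmt.Println(*id) // 0: dangling use after Free — the leak in action
}
```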
And it’s possible to do it wrong; otherwise, we could just do it automatically. But if you do it wrong, or your system changes over time, it degrades gracefully. That also means you can revisit those annotations as your software changes, but you only have to do it from time to time, and hopefully the tooling can tell you “You probably need to look at this again and figure out what’s going on here.”
So there are a lot of things at the top of my mind, but those are a few of the things that I’ve been thinking a lot about.
Break: [26:39]