Go Time – Episode #153

GitHub's Go-powered CLI

with Mislav Marohnić


In this episode we discuss Mislav’s experience building not one, but two GitHub CLIs - hub and gh. We dive into questions like, “What led to the decision to completely rewrite the CLI in Go?”, “How were you testing the CLI, especially during the transition?”, and “What Go libraries are you using to build your CLI?”


Sponsors

Linode – Get $100 in free credit to get started on Linode – our cloud of choice and the home of Changelog.com. Head to linode.com/changelog

Equinix – Equinix Metal is built from the ground up to empower developers with low-latency, high performance infrastructure anywhere. Get $500 in free credit to play with plus a rad t-shirt at info.equinixmetal.com/changelog

Pace.dev – Minimalist web-based management tool for your teams. Async-by-default communication and simple task management give you everything you need to build your next thing. Brought to you by Go Time panelist Mat Ryer. Try it out today!

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.



Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Hello, and welcome to this episode of Go Time. Welcome back, for those of you who are joining us once more, and for those of you who are new to the show - yeah, welcome for the first time. Hopefully, this is not your last; hopefully, you enjoy today’s panel… Which actually includes Mr. Jon Calhoun. How are you doing, Jon?

Good, Johnny. How are you?

You know, generally speaking I just answer that “Yeah, I’m fine”, whatever it is, but I think I’m gonna give a different answer today. I think I am – not everything in my life is going quite right, but I’m choosing to focus on the things in my life that are going quite right. That way, I can be a bit more… How do you say this…? I can take stock of everything that’s going on, and be thankful for the things that are going right. Because this is 2020. It could be a lot worse. So yeah, that’s how I’m gonna answer that today.

And joining us today - a special guest - is Mr. Mislav… And I know I’m gonna mispronounce your name… Marohnić. Yeah, I knew I was gonna mess it up. [laughs]

It’s pretty good.

Thanks for joining us, Mislav.

Thank you.

Yes, yes, awesome to have you. So Mislav, for those of you who do not know, is the maintainer of a project you probably use, or have used in the past, called hub. So Mislav is gonna give us hopefully a little bit of a history around hub, how it came to be, what he’s been doing with it for the last few years…

[04:00] And also, if you haven’t heard, there’s a new GitHub CLI that was released recently, which Mislav also had the opportunity to work on at GitHub. So we’re looking forward to unpacking that, and getting to know how you got so lucky… And again, to learn from his learnings. Also, basically following up on our Intro to Go, or rather Introducing Go to Your Team episode from last week - hopefully, this will add some flavor to the stories that Mislav is gonna tell us about Go at GitHub in general.

So yeah, let’s get into this. Mislav, do you wanna give us – I mean, I’ve already introduced you as the person who maintains hub, and now the GitHub CLI officially, but can you give us a little bit of an intro to yourself? Who are you, my friend?

Sure. I’ve worked at GitHub for seven years now, and that’s the place I feel like I have some impact, because I get to – like with these command line tools, they let me experiment a lot. And I get to open source a lot of what I do, and I get to ultimately help this platform, which is a large part of what it is for - it’s about sustaining the open source world. That’s what makes me happy, and this is what I like to contribute my free time to. Nowadays I’m lucky enough to also be doing open source as my full-time job.

Awesome. You are the envy of many out there, my friend; you get to work on things you love, and open source, and contribute back, and somebody pays your bills for that. That is pretty awesome. So do give us a little bit of background on hub. How did you come to inherit that project as a maintainer on it, what was its original goal as a project, and how did that evolve under your stewardship?

The story of hub starts about ten years ago (or more; eleven), when it was just a short script made by Chris Wanstrath (known by his online handle @defunkt), then CEO and co-founder of GitHub… And it was basically a little gimmick; it was meant to extend the interface of Git in a way that it just feels slightly more GitHubby, and defaults certain shorthands to GitHub URLs, as opposed to somewhere else… And it just makes Git make more sense when working with GitHub.

That was really well received, also by me, who was at the time really nerding out about CLIs in general, and Git, and had already been a very active GitHub user. So I started contributing a lot, and eventually - as often becomes of open source projects - the participation of the people who started it, especially Chris, who kicked off the whole project, kind of faded out. Somebody sometimes comes along and takes a project over, and I just appeared to be that person.

So since then, I continued writing it, but over time it had this organic adoption into a tool that eventually was considered GitHub’s official CLI tool… Which it was definitely not, but I guess now in hindsight I can see how moving the project under the GitHub org eventually might have contributed to that, even though it was ultimately maintained by a then non-GitHub employee. It was something that signaled strongly that this is something that has an organization backing it, whereas it was mostly still just a pet project, and it was an experiment. It just kind of grew beyond being a little gimmicky tool that people use in the command line.

[07:51] Eventually, I started to feel a really large responsibility about it, because so many people had been using it, so I kind of stepped up my project maintenance to give it more and more time. Eventually, GitHub noticed that this is something that is really worthwhile having, to the extent of investing the resources of a whole team into it. They asked me whether I’d want to participate, because I had advertised that I wanted to switch teams at that point, so it worked out great for me. I got to change up my job after six years, where I feel I had a good run, and then do something completely different.

So you started working on hub before you were working at GitHub, correct?

Okay. So did that help you when you were applying to GitHub? How did that work, I guess? Did that help you get through the interview process a lot easier, or was that something you talked about with them?

I’m sure it did. The way that GitHub hired then, about 7-8 years ago, and the way that it hires right now, of course, is very different, because GitHub is a very different organization since then… But I do remember having this privilege of having known at that point most people who have either founded GitHub, or were otherwise there, in a really high clout capacity. And not in the way just like buddies or something, but in a way that I have actually spent tons of hours, and even with some of them up to several years collaborating on open source projects.

So I think, as an interview - if you can see a person in front of you, and instead of them solving a blackboard problem for you, you know that with this person you have years of experience collaborating, coding, reviewing PRs, and things like that… I think all that experience before was basically a very prolonged interview process… And I must have made a good impact, because they had the trust in me that I’ll be really passionate about what I’m doing, even though I was just a remote employee who is always traveling and working from different timezones. And I think the trust paid off eventually. Since then, I was really excited to work here.

So you said the first hub started off as like a Ruby script. Or you said a script; I think I read that it was a Ruby script. Is that correct?

Yeah, the idea was that hub was a single-file script, so it can be just easily copied over to any system.

So the initial version was kind of meant to replace Git, and I think over time it evolved; at least the GitHub CLI that we have now doesn’t feel like it’s meant to be a Git replacement. It’s not supposed to be an alias for that. So around what time did you start to feel that wasn’t the case with hub, as it evolved?

Well, maybe we should first define what replacing Git means. The way I see it, if somebody wanted to replace Git, there would probably be two main ways of doing it. They could either abstract it away, in the sense of replacing its entire API - which on the command line is all these commands that we use: log, commit, rebase, and things like that - and replacing it with a smaller API, something that makes sense for an abstraction. Of course, abstractions want to have a smaller surface area.

And another way would be extending it. So on top of all those commands we’d have some more commands, and we’d have certain extra flags. And even though Git is by nature extendable as a core feature - it can invoke other git-something executables if they’re found in the PATH - it’s not extendable in a way that really allows extra flags to be added to its existing commands. So that part is kind of really hacked on, in a little bit of a Frankenstein manner that’s hard to maintain.

[11:51] We also considered eventually doing an abstraction of Git, in the sense that what if we could capture the essence of Git’s API mostly as it matters to 80% or 90% of GitHub users, and only expose that? But that was such a scary concept to hold as a team, both then when we were considering that with hub, and now, when before we released the CLI we were considering that for the CLI.

In both instances we decided not to do it, so I think for somebody who decides to do it, that’s a really bold undertaking - one that I wouldn’t necessarily wanna discourage people from… But I think it is much less feasible than doing other kinds of extensions, which still allow Git to be used in its full capacity.

So just to add a little context… When we talk about extending it, we’re talking about doing things like – normally, if you type “git clone”, you have to give a URL. So the extensions that I remember, at least with hub, were things like you’d type “git clone” and then a username/repo, and it would just know “Okay, I’m gonna go to GitHub and pull it from that user and that repo”, and it would sort of expand that stuff. Are there other ones you can think of that might have stuck out? But I think that’s probably a good example of what type of extensions it was doing…

That’s a very good example, yeah. That one is basically taking an argument and transforming it, massaging it a little bit before it reaches Git proper. Other kinds could be adding a certain flag that only makes sense in conjunction with GitHub, that doesn’t exist there with Git otherwise.

Some others would be adding a completely separate command. For instance, hub has a sync command, which - if you wrap Git as hub - means you can type “git sync” and have all the local branches synced up with the remote ones.
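To make that first kind of addition concrete, here is a minimal sketch - not hub’s actual code - of what massaging an “owner/repo” argument into a full GitHub clone URL before handing it to Git might look like in Go:

```go
package main

import (
	"fmt"
	"strings"
)

// expandRepoArg turns an "owner/repo" shorthand into a full GitHub clone
// URL and leaves anything that already looks like a URL alone.
// This is an illustration of the idea, not hub's real implementation.
func expandRepoArg(arg string) string {
	if strings.Contains(arg, "://") || strings.HasPrefix(arg, "git@") {
		return arg // already a full URL
	}
	if parts := strings.Split(arg, "/"); len(parts) == 2 {
		return fmt.Sprintf("https://github.com/%s/%s.git", parts[0], parts[1])
	}
	return arg // leave anything else for Git to interpret as usual
}

func main() {
	fmt.Println(expandRepoArg("cli/cli")) // https://github.com/cli/cli.git
}
```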

So all these different types of additions to Git were shipped as the same tool, and that was powerful in the way that it adds a lot of features at once, and it can feel to somebody who has studied it like a really good toolset to add to their arsenal… But I think overall, it was just too many different layers of additions; people would mostly pick one and benefit from some of them, but then some others they wouldn’t even notice or appreciate, or sometimes those would even get in their way through subtle bugs in the layer that’s added on top of Git. So it was hard to maintain in that way.

Yeah, that makes sense. I imagine it being incredibly hard to get something that sits on top of Git and still doesn’t alter something or somehow break some functionality of Git without realizing it.

So you talked about things like sync and cloning, that would sort of add functionality that was GitHub-specific. When you were building hub, did you think about adding stuff that was not GitHub-specific, it was just sort of extensions onto Git that you wanted, or did you leave that to the – you said that you could extend Git itself with just like git/ – I think they’re Bash scripts; is that what they normally are? Or scripts of some sort… Did you usually leave that to people, or were you doing stuff like that as well with hub?

Well, hub does add some of its extra commands to Git, while GitHub CLI, as the next iteration of the tool, decidedly doesn’t wrap Git at all. But going back to hub - it doesn’t prevent the user from also adding their own extensions… It’s just that extensions of the same name would clash. So somebody would add a custom command, which could be implemented in Bash, but the beauty of Git is that it will execute anything executable…

So it really doesn’t matter which language something is written in; it will be invoked with a certain set of parameters, and then it’s up to that thing to do whatever it needs to do. And hub did add some extensions, and sometimes those extensions would clash with people’s own - in rare cases, but those people were vocal. So it was also testing the limits of this extension framework of Git a little bit, which is very barebones, and it’s not really meant to be taken to the extent that hub was taking it…
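As a rough illustration of that mechanism: any executable named git-&lt;something&gt; on your PATH can be invoked as “git &lt;something&gt;”, whatever language it’s written in. A toy, hypothetical example in Go (the command name is made up):

```go
// A toy "git hello" subcommand. Build this as an executable named
// "git-hello", put it on your PATH, and Git will run it when you type
// `git hello <args>`; everything after the subcommand name is passed
// straight through to the program.
package main

import (
	"fmt"
	"os"
)

func main() {
	fmt.Println("hello from a custom Git subcommand, args:", os.Args[1:])
}
```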

[16:12] So I feel that maybe it was a little bit intruding on this extension system, which was really just meant as a very simple system for users, specifically for their direct environment to maintain. As third-party tools like hub and others would try to move in on the system and plug into the same mechanism, it just wouldn’t scale past that point anymore… So I felt that was a little bit of a misuse of that mechanism in the first place.

Okay, that makes sense. So you said at one point when we talked about hub that it was written in Ruby, and now - I don’t think it would be on the podcast if it wasn’t written in Go; or if the new CLI wasn’t written in Go.

I don’t know, maybe… [laughs]

Maybe… But it might not make as much sense. So how did that evolution happen? How did you go from Ruby to Go? What led to you trying out Go, I guess?

That was probably the most ridiculous thing that happened in any of my projects. Imagine your open source project, that already has a huge codebase, and thousands or tens of thousands of users - somebody comes along and rewrites it all from Go to Rust, or something like that, and says “Here you go.” You’re now supposed to merge this thing, deleting all of your code and replacing it with new code. This is basically what happened.

There was this person who maintained his fork of hub for a while, which was a total rewrite from Ruby. And this person’s - his name is Owen - primary motivation was that Ruby was really slow. The Ruby interpreter took about 60 milliseconds then on my machine, on a really high-end MacBook, just to start… And that was often not even including things like the standard library. So as soon as you would then require Net::HTTP (which has the same name as it does in Go) from the standard library, that would add maybe 20 more, or something like that… So we’re talking about almost 100 milliseconds and the program hasn’t even started doing anything yet.

The other thing is portability. People had to install Ruby on the system, and not only did they have to be at a certain version of Ruby, but over time the Ruby versions that come pre-installed on systems changed, or things got loaded by the Ruby loader by default that weren’t compatible when it booted up for hub’s purposes… There were all sorts of these problems, and it was just really hard to make it seamlessly portable, unless somebody already had a Ruby development environment. That’s all okay, because they understand their environment.

But for somebody who would just like to use hub, having to commit to maintaining their Ruby version across system upgrades over the years was a pain. So a precompiled binary that is just cross-compiled for different systems and dropped in there - that sounded like a dream. And I actually didn’t believe it, because I had no experience. Go was this new thing, and I’m not a very fast learner, or early adopter of things… And it took me a while to warm up to the idea.

Slowly, I started working with Owen to really solidify the test suite around the transition, so we could have some confidence that we didn’t break too much. We knew that every complete rewrite will introduce a lot of bugs; not the normal amount of bugs of a PR or something, but a lot of them. And we at least tried to minimize this, so as not to lose the trust of the hub community. It took us six months of mostly addressing edge cases, and in the meantime every new feature that was merged into the Ruby version was ported over by us to the Go version. So that was almost like a full-time job of itself. I did that in my afternoons or weekends, and things like that, as did Owen.

[20:12] In the end, Owen had the privilege of just hitting Delete on all the Ruby code - erasing the entire thing in the next commit after the rewrite got merged… And suddenly, the project was – it was a pretty solid transition. We really did well on minimizing the bugs, because people largely could upgrade and never realize that the program had changed, up until the point where months later they got the idea to add a feature; they would open the project on its main branch and see, well, nothing familiar like before. The organization structure of a Ruby project is really simple; it’s just a few Ruby files, that’s it. And they went into something like this and they were really confused, opening an issue asking what happened… So yeah, you weren’t here for it. You blinked, and…

Now it’s in Go. [laughs]

In one of the past episodes - I think it was a couple weeks ago - we talked about introducing your team to Go. You know, like if you’re working on a team that doesn’t use Go for anything, some different ideas for getting them to try it with some sort of project… So after using Go for a CLI, do you think that’s a good fit for that type of thing, or for trying the new language out?

Yeah, absolutely. I think that it’s a great opportunity to introduce it this way. I think also CLIs could be considered as like internal tools as a way of showing in the organization how Go can be productive.

The way it was introduced to GitHub, I remember - and also what helped the transition of hub to Go, what gave me confidence - is that I started seeing colleagues around me, that I didn’t directly work with, but who were engineers that I really looked up to, start using Go for microservices. And the first microservice that was extracted from the monolith - which is a Rails app - at github.com is the one that still today delivers avatars; it stores users’ avatars, organization ones and team ones, and things like those. And those developers had a really good time writing this in Go, even if that was unheard of before in the org. I think they just had enough of their hands untied that they could ship a new service written in whatever they saw fit.

And seeing their success with that, and also their involvement and contribution to hub around that time, really helped develop this Go version. We likely couldn’t have shifted without that, and I also wouldn’t have had the confidence to merge in a rewrite to a language that I was still kind of unfamiliar with, had it not been for my colleagues who were at the same time introducing Go to the rest of GitHub… And now Go at GitHub is just this huge slice of the org. So many services are written in it, and I would say that it’s almost as fundamental right now to engineering at GitHub as Ruby is.

Going in a slightly different direction… When you talked about the transition from Ruby to Go, you said that you had to sort of take the test suite and make it so it covered both of them… Can you talk a little bit about how you were testing the CLI in a way that you could actually use it for both languages? Because I think a lot of us when we think testing, we think about running go test and having unit tests run, or integration tests of some sort… But I don’t think that’s what you were doing if it’s something you could run with both Ruby and Go. So what did that look like?

One thing to keep in mind while I’m talking about anything hub-related is that because of the history of hub, and because of its nature of starting up as a mere proof of concept and growing far more rapidly in popularity than it actually evolved technically, hub was always this treasure trove of anti-patterns, I would say. So it was definitely not a project that I would advise anyone to look at for either good Ruby practices in terms of testing, or later good Go practices. I made probably every Go mistake in the book with the hub project, because it was my first Go project, and I hadn’t been on teams of other people working in Go before, where I could see how people with experience were using it. I was mostly just inventing its use as I went along, and that was not really great. But of course, we all have to learn somewhere.

So on the Ruby side, the testing approach was – at first, there were some unit tests, but the test coverage wasn’t really great. Just a very few isolated functions were unit-tested. And overall, hub had very solid and good coverage, but it was done by end-to-end testing through a tool called Cucumber - I would say story-driven development - because we would write it in this human-parsable Cucumber format, which would then execute those feature files, as they were called, and drive the usage of hub from the outside, as if a user is typing into the terminal.

Some of the tests took it to the extreme, where literally we would use tmux, a terminal multiplexer basically, to spawn an internal terminal to the test, a headless one, to literally send key strokes in an interactive shell, and type “hub pull request these and these flags”, press enter, and then inspect what’s happening after that.

And so much of the test coverage was done that way, and I really spent a lot of time making sure that we had really good test coverage across the whole codebase… But it’s not an approach that I would recommend in the long run for the next person who’s listening to this who might wanna use Go to create a command line app. But somehow, in a bizarre twist of things, because of the rewrite from one language to another, we could keep the entire test suite, because the test suite never knew what hub was written in. It just ran the hub executable.
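For a Go-native flavor of that same idea - tests that only ever see the built executable, never its internals - a sketch might look like this. The binary name, flag, and expected output are illustrative; hub’s real suite used Cucumber, as described above:

```go
package main_test

import (
	"os/exec"
	"strings"
	"testing"
)

// TestVersionFlag drives the program strictly from the outside, the way a
// user would: it runs the compiled binary and inspects its output. Because
// the test never imports the program's packages, it doesn't care what
// language the binary was written in.
func TestVersionFlag(t *testing.T) {
	out, err := exec.Command("./hub", "--version").CombinedOutput()
	if err != nil {
		t.Fatalf("running binary: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), "hub version") {
		t.Errorf("unexpected output: %s", out)
	}
}
```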

So when we rewrote it in Go, mostly what we had to change is those parts where we stubbed out things… For instance, the GitHub API is completely stubbed out; we don’t run the test and then have it talk to the GitHub API. That would just not be – well, first, not great performance, but second of all, they would be really hard to maintain when it comes to write actions, as opposed to get actions.

[27:53] But the way this test was set up, a separate server pretending to be GitHub API was spun up in Ruby. And after the rewrite, we had really no reason to rewrite this test server. So the codebase continued to have a test runner, which is really in the Cucumber language, which is executed in Ruby and also uses Sinatra that pretends to be a GitHub API server.
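In Go, the same stubbing idea - a throwaway local server that pretends to be the GitHub API so tests never hit the real thing - can be sketched with net/http/httptest. This is illustrative only; the hub suite really did keep its Ruby/Sinatra fake server, as described above:

```go
package main_test

import (
	"encoding/json"
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestFetchRepo(t *testing.T) {
	// Fake GitHub API: answers one endpoint with canned JSON.
	fake := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/repos/cli/cli" {
			json.NewEncoder(w).Encode(map[string]string{"full_name": "cli/cli"})
			return
		}
		http.NotFound(w, r)
	}))
	defer fake.Close()

	// The code under test would be pointed at fake.URL instead of
	// https://api.github.com; here we simply hit the fake directly.
	resp, err := http.Get(fake.URL + "/repos/cli/cli")
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if len(body) == 0 {
		t.Error("expected a JSON body from the fake API")
	}
}
```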

And in the end, I think that’s why I used this Frankenstein expression earlier, because it was this stitched abomination of different tech, which made no sense if you were to drop into the project and want to open up a pull request. You would think “I thought I was contributing to a Go thing, and now I’m editing a Sinatra API endpoint, or something like that?” But it worked beautifully as long as nobody ever touched it, and as long as there was - for most of that time - precisely one person working on it who understood how it all worked.

It’s fine when you have one developer who understands all of this; it would have been a nightmare if this was any sort of project where there is actual business value to it, or some pressure, or a shipping cadence, or something like that. Or a team of people working on it. So that would not be something I recommend as a developing practice, but it somehow worked out and it brought us that far.

I was chuckling, partly because I know of some projects that are still in production today, that fit that criteria… [laughs] Nobody touch it, because nobody knows how this thing works, and the Frankenstein that we’ve got here, and whatnot… That’s how software evolves over time, and as you bolt on pieces, and different developers, different perspectives, different hands touching that thing - yeah, it can certainly get that way, for sure.

I did wanna mention that Owen, actually, who did that first transition, used to be a colleague of mine over at Heroku. He’s moved on recently… He’s a pretty smart fella, and I think you were lucky to have had somebody like him to help you in your journey. In the back of my mind, I’m thinking “Man, when I was learning Go - wouldn’t it have been amazing to have a super-mentor who knows the ins and outs of the thing to help you along?” kind of thing… But yeah, I think you lucked out there, for sure.

Yeah, it was great to jump into his codebase and then learn Go by literally just tweaking the variables and functions that he had already laid out… And I think, like many people, I prefer not learning from a blank slate. So with a new language, if I had to write out a new program from a blank directory, I’m not gonna do so well. But if I jump into an existing one, then I have this kickstart and wind at my back, and I can start editing things, and seeing his Git log of changes on a certain command and how he got to that point. It was almost as if I got to sit over his shoulder while he was coding. And eventually, I did.

We met up once when I was passing through Vancouver, and we got to hack together. But of course, most of our collaboration was asynchronous, and across continents. I can primarily thank him for getting me through my first year of learning Go, definitely.

So you had mentioned that you eventually rewrote hub into Go, and now we have the new CLI, which I believe is written in Go as well, but I think it’s a complete rewrite from the ground-up. Do you wanna talk a little bit about that rewrite, what caused you to decide to throw out what you had and write something from scratch? And since you were writing from scratch, again, what made you decide to use Go this time?

Well, Go was a very short discussion with the rest of my team, all of whom other than me – well, they were familiar with Go, but they had not used it for anything of substance up to that point in their engineering careers.

[31:56] And I had pitched the Go idea mostly as a way to preserve what we already knew worked well, and that was - well, I’m a big fan of its compiler; I think it’s very robust and it gives me a lot of confidence while I write code, together with good integration with gopls, for instance, right now in the text editor, and having this confidence that everything is wired up properly through static typing… And for me, that was a big departure, coming from very dynamic languages like Ruby, and learning how to let go of that mentality and become very secure in the static typing mentality.

Another of the qualities that we wanted to preserve is portability. I remember it being a very short conversation, because my colleagues were just evaluating those things and thinking about potentially other languages we could write it in… But all of what I just mentioned resonated with them, so they were absolutely on board with “Alright, we’re learning Go now”, and through the next few weeks and months they went from basically zero to also doing Go like me right now, on a daily basis. I would say that they’ve already surpassed my abilities, because I sometimes feel I’m catching up to the rest of my team.

So I guess I’m asking you about the rewrite, because typically, when you hear people talk about projects – or to go back to your Frankenstein. You said that hub was kind of a Frankenstein, with the test suite being in Ruby, with some Sinatra, and other things like that… I think when you’re a new college graduate or you’re newly coming into the field, you learn all these things about best practices.

Then you go in an organization and you see a repo like that, and you think “What are these people thinking? This is a terrible idea”, and you don’t really think about how projects evolve over time… And if you actually saw the whole history, it would make complete sense. But when you just see it brand new, you’re like “This doesn’t make sense.” So a lot of newcomers to the field will think “We need to rewrite this”, but almost always that’s a bad idea, because you spend so much time rewriting and trying to get feature parity that it’s just really hard to do. But in your case, it seems like you successfully rewrote, and it sounds like you think that was the right decision. And I’m not saying it wasn’t, I just – so, I guess, can you share a little bit about what really motivated you to be like “This needs to be rewritten from the ground up”?

Well, when it came to learning what really worked with the hub tool, and then choosing how much of that spirit we wanted to promote into what would eventually be GitHub CLI as an official tool, we mostly went over its feature set and decided that its fundamental design paradigm was not something that we wanted to port over. And then after that, considering importing any of its code - it really just doesn’t align with that first and foremost decision, which we eventually were pretty secure in… Because if you don’t wanna preserve the spirit, the design of how a tool works, then it’s just really hard to get anything from it. Especially due to the fact that, this being my first Go project, I had let the Go packages become basically huge, to the extent where – I think there were largely just two Go packages where most of the hub implementation lived… So to cherry-pick the good parts out of that and leave out the bad parts would have been something that’s not really feasible technically, and I think very bug-prone.

[35:48] Another thing is that we didn’t wanna go with the same testing approach - so if we were copying over parts of the implementation, we were not gonna port over the Ruby tests, because we wanted to commit to the Go stack and the default Go tooling to make the project more approachable. With that line of thinking, it was a little bit obvious that starting from scratch would be the right decision.

I appreciate that you brought it up - it’s not an easy decision. It should never be made lightly. And I think rewrites should never be made lightly, because these are technologically really risky endeavors. But what made it a little bit less risky in our case is that we were promoting a kind of semi-official tool to another one. And even if we broke a lot of things, or didn’t port a lot of functionality over - well, we didn’t actually erode trust, because we were launching this new tool, which starts over from version 0.0.1, and whoever wants to follow us on our journey can, and whoever wants to stay safely embedded with the tool that already works for them also can.

And I felt that we couldn’t have kept the trust if we tried to make radical changes, and then mostly just disappoint people (I would say) that used hub to do a lot of automation; I also personally loved doing that. They would be the ones who were most affected then, trying to upgrade to a newer version and finding out that their scripts are broken, and that that tool that they’ve used as a reliable Swiss Army knife is not as reliable. That would be my nightmare scenario personally, and we avoided it.

Speaking of the evolution of these tools… I imagine at some point you are going to start sunsetting hub, because I can’t imagine you trying to keep up with development of both of these things at the same time… They both have their own sub-communities, and each one is gonna have its own needs, kind of thing… So what is your plan for ultimately retiring hub, and putting all your efforts towards the official CLI tool? And also guiding and helping people who rely on hub today sort of transition over to the new official tool.

I tried to reassure people, around the time that I was gonna be hired onto the new project, that I wouldn’t just ride off away from hub, and archive the project, and nobody gets any updates anymore… So I did release a few - or at least one that I remember - bug fix releases this year, while I was developing the CLI in parallel… And I feel that I fell a little bit short of my own promise of how much I’d be invested in it, because as it turns out, my primary motivation with hub was that I got to nerd out on this CLI that talks to this platform where I host most of my projects.

So GitHub is not just the place where I work, but it’s literally the platform where I host all of my open source projects, and where I communicate with a lot of people on a daily basis as part of my hobby. So that was really important to me, and I got to develop these tools to help me accomplish more with it. Eventually, that is integrally a part of my job as well. So I don’t have the same itch to scratch anymore after-hours. In fact, after-hours what I’m thinking is “Well, I don’t need to now switch over the VS Code tab to another project that is also a command line client, for the same platform that I just worked for eight hours on.”

And I guess because my need for tinkering on CLIs was satisfied, I felt that I had not followed up as much – I didn’t make a strong promise, but as much as I imagined in my head. And I will have to publicly admit, in a sense - I will have to signal that better to the community, about how much I’m actually gonna de-escalate my involvement. But I do wanna make a series of updates before that, to handle things more in the long run, like authentication to GitHub, which is subtly changing in its API versions… And maybe potentially expose things that people have been asking about for a while.

But ultimately, I was just imagining investing more into the features that are about extensibility, and people writing their own scripts - like the hub api command, which has a completely equivalent counterpart in the GitHub CLI, called gh api…

[40:13] And I feel investing in such tools is great for everyone in the long run, because I can make a minimum amount of changes to enable other people to make a lot more changes on top of that, without necessarily shipping updates. So I wanna leave it in a place where it’s still gonna be useful for years to come, and extensible for years to come, but not necessarily have to receive new commands in the future.

I think that makes sense. So you’ve written two CLIs in Go, or you’ve worked on two at this point… Are there any libraries or tools that you’ve found especially useful or especially – what are the ones you’d recommend, what are the ones you’ve used that you sort of didn’t care for? For somebody who wants to build a CLI, what are you recommending to them?

Well, when I started looking at that in Go, and wanted to apply that to hub, by that time Owen had already made his dispatcher command from scratch, so there was no third-party library that we imported for that purpose. And maybe tools like Cobra right now, which is really popular, or urfave/cli – sorry, some of these projects I only know by their GitHub owner/repo pairs, because that’s how we refer to them in Go import statements, I guess.

Those projects - I’m not even sure they existed; it was a while ago… And even if they did, we couldn’t have used them, because as it turns out, the problem of writing a dispatcher that is an extension or a wrapper around something else and its commands is different from writing one that is just its own self-contained command, a completely new CLI like kubectl - and that part would have made none of those tools really usable. So at first it was written from scratch. I would not generally recommend that, unless it’s kind of like an exercise. If somebody is doing this as a hobby, for instance learning Go, I would actually recommend it. It’s a great exercise. If you love writing CLIs and exploring how you can structure them in Go from scratch, it’s a great way of learning.

I would not recommend it for a work project where maybe the CLI that you’re trying to introduce to your team should immediately do something useful, and not be just a code exercise. I feel that’s a great way to get other people’s buy-in on a certain piece of technology that is not just a gimmick, but it’s also very useful, and that you can iterate fast with.

To actually go with those libraries - I would heartily encourage any of those that I mentioned. But we have a closer relationship with the Cobra project; we chose it for the GitHub CLI project. And I think that over time Cobra hasn’t really changed much, regardless of which people were responsible for it… And I know, from a first-hand maintainer perspective, how hard it is to maintain projects for many years, especially when they have a lot of eyes on them and a lot of dependents… Because what that means is extra pressure on the maintainers. I think our risk of burning out actually rises with the popularity of our projects…

So popularity is not always a good thing. And I feel for Cobra that this necessity to maintain backwards-compatibility - which I absolutely agree with - eventually kind of settled into this stalemate in which it’s hard to make any kind of significant change to the project. And some of the initial decisions that they made - about which stream to output to, how they do error handling, how they do the help command and the help flag, and things like that - eventually, a lot of that didn’t work for us in GitHub CLI, and we started working around it, or implementing parts of Cobra outside, orthogonally, in a separate package.

So by now, I feel that for our purposes it would have been maybe a better decision to go with something simpler, that we don’t have to fight against… Or to have just a better overview of what Cobra is and what Cobra isn’t and trying not to delegate too much to the tool if you can’t handle the load.

[44:18] So I kind of feel, in hindsight, that I wish some of the Cobra documentation was pushing you towards better practices, rather than encouraging you like “Here’s how to get started. Generate this file, and generate this file, and add this command here and here.” The way that the tutorial is set up eventually creates a Go command structure that I feel ultimately doesn’t scale… As evidenced by the large Cobra projects that I’ve studied, for instance kubectl, which is an incredible CLI; there’s so much to study there. I had found that they’re using Cobra in such an unusual way… And eventually that made sense, but it was not really apparent why they did so before we ran into all of those roadblocks… I feel that they must have run into them as well, because of how the project is organized now.

But I would recommend, I guess, not relying too much on the CLI library, and using it more as an accessory, as an underlying implementation detail… But structuring the project in a manner where you could imagine the specific CLI library, like Cobra or urfave/cli, being swapped out with minimal disruption. That’s what I would recommend.
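A hedged sketch of that shape: the actual behavior lives in a plain function that knows nothing about the CLI framework, and Cobra is reduced to thin wiring that could be swapped for another library later. The command and function names here are made up for illustration, not GitHub CLI’s real structure:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/spf13/cobra"
)

// listIssues holds the real logic. It knows nothing about Cobra: it takes
// plain inputs and a writer, so it can be unit-tested directly and re-wired
// under any CLI framework. (Hypothetical command; the API call is omitted.)
func listIssues(w io.Writer, repo string, limit int) error {
	fmt.Fprintf(w, "listing up to %d issues for %s\n", limit, repo)
	return nil
}

func main() {
	var limit int
	cmd := &cobra.Command{
		Use:   "issues <owner/repo>",
		Short: "List issues for a repository",
		Args:  cobra.ExactArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			// Thin adapter: translate flags and args, then delegate.
			return listIssues(cmd.OutOrStdout(), args[0], limit)
		},
	}
	cmd.Flags().IntVar(&limit, "limit", 30, "maximum number of issues")

	if err := cmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```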

I think that makes sense, especially if you’re building something large that needs to withstand the test of time. Were there any other tools that stood out to you? If I recall correctly, the CLI has some color-coded text, and some other text formatting, and things like that… Did you find specific tools or libraries were helpful with that sort of stuff?

For me, off the top of my head it’s hard to remember these –

We’re cheating, because we’re probably looking at the go.mod file. [laughter]

Authors, and things like that… But I can also quickly just open it, to refresh my memory. There are definitely some things that I find myself reaching for over and over. Not just myself - I see common dependencies across projects. For instance, the Testify library is not just used by our team for testing, but it’s also used more widely at GitHub, in other Go teams as well. That’s a tool that we reach for often, even though we try to stay as close as possible (with the exception of using Testify) to the Go standard library for testing, and not deviate from that too much. And then some tools that we are using are authored by the GitHub user mitchellh, and another GitHub user called muesli. So a lot of the tools are by those two users. And of course, mattn. mattn published tools like go-colorable and go-isatty, and it seemed that for a lot of the itches that we had, a lot of the problems, there were these prolific GitHub and Go contributors who had already encountered them and made these super-tiny, hyper-specialized libraries… And I was really a fan of those.
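As a tiny example of the kind of hyper-specialized helper being described: go-isatty answers a single question - is this file descriptor a terminal? - which is typically how a CLI decides whether to emit ANSI colors at all. A minimal sketch, assuming the github.com/mattn/go-isatty import path:

```go
package main

import (
	"fmt"
	"os"

	"github.com/mattn/go-isatty"
)

func main() {
	msg := "ok: everything is up to date"
	if isatty.IsTerminal(os.Stdout.Fd()) {
		// stdout is an interactive terminal: safe to add ANSI color codes.
		fmt.Println("\x1b[32m" + msg + "\x1b[0m")
	} else {
		// stdout is a pipe or a file: emit plain text so scripts can parse it.
		fmt.Println(msg)
	}
}
```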

I’m not necessarily always a fan of – in the JavaScript world with the npm, of the super-tiny, micro-specialized JavaScript libraries, but I was very much a fan of that here, because it was something that we could then easily reuse across projects and rely on. And I think if somebody compared the hub codebase and the CLI codebase, they would have found plenty of the same library dependencies.

[47:54] For a markdown library, we’re actually really impressed by the renderer we use. It’s a project from charmbracelet/glamour. Without looking it up, I would say that it uses Blackfriday for markdown parsing, which I’ve also found to be a very useful library. And I guess - yeah, a lot of tools that were already there really made it possible for us to just launch ourselves into this space.

But also, all of these tools that I’ve enumerated - they don’t necessarily have to do specifically with writing a CLI. And I feel that maybe specifically the tools that interact with the capabilities of the terminal, and are able to output different colors - but in a way that respects user settings and the capabilities of the terminal, things like that - I think the fact that those tools are hyper-specialized and so scattered around makes it kind of hard to discover them and assemble them in a proper way.

I had experience writing CLIs in the JavaScript world, where CLI-related libraries were much more mature, and I have not experienced that as much with Go. I’ve felt more that a person really needs to spend a lot of time researching these tools, and sometimes under time pressure to ship. That does not always work out. Maybe it works out for a hobby project.

It sounds like a good blog post.

It could be. But I feel, even as a person who always feels like I wanna contribute back to all this plethora of tools - like the Go project, which itself was made by people in an open source fashion - I’m sometimes thinking “Well, if there’s a big hole there, why not invest some time in filling it?” So I’ll try to channel some of my learnings from doing GitHub CLI into maybe creating more of these Go libraries, because I feel it’s one of the ways that I’ve always given back to the Ruby community, the JavaScript community, and the Bash community… Even though the Bash community doesn’t exist - I like to make that joke, because one of my favorite languages is Bash, and I feel that I’m a little bit overqualified in it for a tool with such limited deployment potential… But I see in my future, and hopefully maybe in my team’s future as well, that we take some of those learnings and we learn a little bit more about what it takes to make these reusable libraries in a space where there are not so many alternatives for the things we feel a library should be doing - things that right now projects are commonly reimplementing and reinventing over and over. So maybe we’ll see something like that.

I have one more question, and I think we have to jump to the Unpopular Opinion segment soon…

No, we don’t have to; we want to. [laughs]

We want to. [laughter]

People look forward to that stuff, man…

I understand. When you’re talking about tooling, I know for me when I’m building web servers, tools like Sentry are kind of a go-to; something that will allow you to track bugs or errors and log them somewhere. But with the CLI, I imagine that’s not really – you’re not running on a web server, you’re running on everybody’s computer. So how did you handle that challenge of actually figuring out what these bugs were, and getting people to report them, and actually handling all of that, I suppose?

At the same time, we feel like the non-addition of monitoring and error capturing and reporting and things like that to our tool saved us a lot of trouble figuring out how to do it consciously, as transparently as possible, and as respectfully to the user as possible. People aren’t necessarily always comfortable with their clients reporting everything that they do, for reasons that I don’t really have to elaborate on. And of course, in our case it’s a little bit different, because people hosting their projects on the GitHub platform already means that they have to some extent given GitHub trust, and while they’re interacting with GitHub’s set of features they wouldn’t mind as much that we report what is the most used command, or what are the most used flags for certain commands, for instance. That would have been a very good insight for us. Unfortunately, we have none of that, because we haven’t built any of that into the tool. It’s not out of the question, but in hindsight I feel that we could have structured the project better to lend itself to that case, because I think eventually we want to, even just for the purpose of debugging and gathering statistics about the execution of the tool - how much time is spent shelling out to Git, how much time is spent in API requests, and things like that. I feel that if we had designed the tool more like a microservice, with the capability of monitoring and error reporting in mind, all of this would be easier to do.

So I see it in our potential future, but I also think that it was a big load off our chests that we didn’t do it initially, and I think if we ever do it, of course, we’ll have to do it in a way that’s probably opt-in, because right now people are using the tool and we just can’t slide in monitoring where people are not necessarily expecting it.

One place where that especially hurt us is that we don’t have any way of doing crash reporting. And a lot of my colleagues, especially initially on the CLI project, were people who had also worked on GitHub Desktop, which is another GitHub client, but it’s graphical; it has nothing to do with CLIs. GitHub Desktop, on the other hand, has an excellent crash reporter. GitHub Desktop’s crash reporter was also always a very good smoke test for whether there was a bad deploy, or something.

It also has a beta stream for updates, so people could opt in to getting beta updates, and from those users’ reports it would be evident if there was something really crashy in a release - a potential blocker… And we don’t have that kind of visibility with GitHub CLI at all. So we just have to do extra diligence so we don’t break it for everyone… And it’s very easy to break a CLI tool for people, because unlike graphical apps, CLI tools can execute in so many different environments, under so many different permutations of circumstances.

I guess their versatility is part of the appeal of the CLI tools, that there’s a very low-barrier to executing them or running them on maybe an embedded system somewhere. But we do have less visibility into it, and these are all trade-offs that we’ve considered. And I wish in the future that we can have some more visibility into that, because I feel that will empower us to then make better decisions about what really matters in the tool, rather than right now relying on self-reporting from users by asking them “Okay, what are your most-used commands?”

I think that makes sense.

Do you wanna lead into the Unpopular Opinions, Johnny?

Oh, no, no, no. I will cede the floor to Mislav. I wanna hear some unpopular opinions… I heard a few were brought in, so let us hear them. [laughter]

I’ll start with a Go-related one, because the other one is not specifically Go-related. A lot of what we were excited to do with the GitHub CLI - the next iteration after hub - was to really try out how it feels using the GraphQL version of the GitHub API, which shipped in between. Of course, hub originally used the REST version, and there was not enough added value in migrating it completely to the GraphQL API, so we only did that experiment with GitHub CLI when we eventually started working on it, thinking that it would be this massive win, this new API paradigm which is supposedly really more powerful… And I’ve found that the exact features of the Go language - static typing and compilation - don’t actually lend themselves well to being a good GraphQL client.

While I’m talking about this, just keep in mind that I’m mostly just talking about an experience of writing in Go a GraphQL client, so something that makes and parses GraphQL requests. I have zero experience of making a GraphQL server in Go, which some of my other colleagues at GitHub have experience with, but I don’t have first-hand experience… So this is not about making a server, which I feel that there is more solid tooling. But when we look at the offering of the different GraphQL clients that are written in Go right now, and mostly used as a de-facto standard when we look at the largest, most prolific projects that are open source right now, if we look at how they make requests, not just to GitHub’s GraphQL API, but to any other, I feel that all of those libraries right now are missing the mark on what makes GraphQL really stand out.

GraphQL is not a query language that wanted to be used by having a pre-generated query, which is always the same per compiled version of an app, and then having different requests come in separately, because they were all statically-generated from maybe a schema, or something like that… GraphQL wanted to first of all allow people to bundle several queries at once, or even several mutations. I don’t think it will allow bundling a query and a mutation acting on the results of those queries; I think that’s decidedly against its design. But it definitely can execute an arbitrary number of queries at the same time, and also an arbitrary number of mutations. So if I wanted to change labels in a hundred GitHub issues in the same request, theoretically I can do that. And I was really excitedly searching for Go tools that allow you to kind of batch up a bunch of queries, and then they all execute transparently over GraphQL. It wasn’t a thing that I was able to find by weeks of searching and studying the other libraries that were – well, the ones that were open source, of course.
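For a flavor of the batching being described: a single GraphQL document can carry many aliased operations, so one HTTP request can act on many objects. Here is a rough, hand-rolled sketch in Go - the endpoint, mutation name, and fields are illustrative, not a specific GitHub schema:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	ids := []string{"ID1", "ID2", "ID3"} // e.g. issue node IDs

	// Build one document with an aliased mutation per object:
	//   mutation { m0: addLabel(...) {...} m1: addLabel(...) {...} ... }
	// "addLabel" is a made-up mutation name used purely for illustration.
	var doc bytes.Buffer
	doc.WriteString("mutation {\n")
	for i, id := range ids {
		fmt.Fprintf(&doc, "  m%d: addLabel(input: {id: %q, label: \"bug\"}) { clientMutationId }\n", i, id)
	}
	doc.WriteString("}\n")

	body, _ := json.Marshal(map[string]string{"query": doc.String()})
	req, _ := http.NewRequest("POST", "https://example.com/graphql", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err) // a real client would handle this and parse per-alias errors
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```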

[59:47] And another thing that GraphQL really lends itself well to is stopping over-fetching. When you make a request to GitHub’s REST API, you don’t get to choose what you get back. You always get this enormous object back. We mostly always return absolutely everything about, let’s say, a pull request that you’re interested in. We return everything about the author of this pull request; all the fields of an author, all the fields of the repository that the pull request is embedded in.

As you can imagine, in a lot of back-and-forth communication, eventually a lot of redundant data is not just being exchanged and parsed, but it’s also just being needlessly collected, and presents some overhead on both the client and the server. In GraphQL, the idea is to only request the fields that you are really interested in. But sometimes, between runs, the set of fields for a certain query changes based on user input parameters. So now we’re back at square one: with the static compilation of the language, we mostly embed a static struct, which is used as a parse destination for a GraphQL JSON response. A lot of libraries do deserialization in a similar way - they deserialize into static structs, or at least they always generate the query from a static representation of the resource itself. There’s no real consciousness about adding APIs that will let us choose the fields that are being queried, right?
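To ground that, here is a sketch of the static pattern being described: both the query document and the struct it parses into are compiled in, so the selection of fields can’t change per run. Field names are loosely modeled on a GitHub-like schema and no particular client library is shown:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// The typical shape of Go GraphQL clients today: the query text and the
// struct it deserializes into are both fixed at compile time, so every run
// requests the same fields whether or not the user needs them.
// Illustrative only; not a specific library's API.
const prQuery = `
query($owner: String!, $name: String!, $number: Int!) {
  repository(owner: $owner, name: $name) {
    pullRequest(number: $number) {
      title
      state
      author { login }
    }
  }
}`

// Matching static struct used as the parse destination for the response.
type prResponse struct {
	Data struct {
		Repository struct {
			PullRequest struct {
				Title  string `json:"title"`
				State  string `json:"state"`
				Author struct {
					Login string `json:"login"`
				} `json:"author"`
			} `json:"pullRequest"`
		} `json:"repository"`
	} `json:"data"`
}

func main() {
	fmt.Println("query sent on every run:", prQuery)

	// Pretend this came back from the API; the parse destination can only
	// ever hold the fields declared above.
	body := []byte(`{"data":{"repository":{"pullRequest":{"title":"Fix bug","state":"OPEN","author":{"login":"octocat"}}}}}`)
	var resp prResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		panic(err)
	}
	fmt.Println("title:", resp.Data.Repository.PullRequest.Title)
}
```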

So I feel that a lot of Go projects right now are using – well, a lot of projects in general right now are using GraphQL because it’s trendy, but I feel that Go is a little bit lagging behind, because I feel that it wants to use GraphQL because it’s trendy, but I feel that the features of the language are precisely what make it a little bit unsuitable… And I’m not saying unsuitable in an absolute sense, but I’m saying it’s a little bit harder to achieve that theoretical idea of what GraphQL is best in.

So I guess my unpopular opinion would be that I’m not really convinced that it’s being used in a really good way now. And right now I’m also not the person who is offering a better way to do it, but I’m really interested in exploring a better way to do it, and I’m really interested in bouncing ideas with potentially better Go developers to figure out how to solve this problem, and potentially create another client, that could be used not just with the GitHub GraphQL API, but for any other. I would be the first project to migrate over to that, because I would really be keen on figuring out how we can batch and squash queries together, and also use more concurrency features that Go is so good in.

So I heard Mislav say that all current Go implementations of GraphQL clients suck. [laughs] And that a new one ought to exist, and PRs are welcome, or brand new projects are welcome.

To say “suck” would be a hard word, but I’m very thankful for – right now we’re using shurcooL/graphql for GitHub CLI, and it’s an excellent library. I would recommend checking it out. But I also would like to explore what we can do on top of that approach - how we can take that approach even further.

On one hand I’m not too shocked that that’s the current state of everything… One, because Go, like you said, does not strike me as a language that is really meant – it doesn’t seem as flexible as some of the other languages. If you’re writing JavaScript, it’s a lot more flexible in what you can get away with. But then the other aspect of it is so many front-end UIs for websites are built in JavaScript - pretty much all of them - or something that compiles into JavaScript at some point.

[01:03:54.23] So as a result, you expect the libraries to be there. But with CLIs, while there are a decent bit of CLIs, I think, being written in Go, I don’t think that numbers – like, the sheer number is not quite there, especially for… You’re building a CLI and you happen to be interacting with an API that’s GraphQL. That’s gotta be a pretty small number right now. Now, that could grow (I don’t know), but I could definitely see that being not a huge audience right now.

That’s a fair assessment. But I believe it will grow. For instance, I would definitely right now prefer to use GitHub’s GraphQL API. And I’m not just saying this because of course I’m biased - I’m a GitHub employee - but I was first and foremost always a GitHub user, and I have used all of their APIs from the first day they shipped until now, through their different iterations. And those powers of GraphQL that I described are definitely where I see it going in the future - they are harder to set up, but they have a bigger pay-off, and I feel tech is inevitably going towards that.

Look at Kubernetes, for instance. Harder to set up, but there’s a huge pay-off at the end of that. So I think this space will evolve.

I’m curious if that’s one of those fields or areas where if Go doesn’t find a good solution, it possibly becomes less useful as a CLI language.

Whoa, whoa, whoa… Jon. Jon.

If a lot of APIs end up being GraphQL and Go doesn’t do it well, I can see that being problematic.

Blasphemy. [laughs]

I mean, I haven’t seen any other static language do it any better, to my knowledge… But I don’t know. I’m sure somebody will tell me Rust does it better.

There’s always somebody who’s gonna tell you Rust does it better. [laughs] So Mislav, you have two unpopular – or was that like a two-in-one?

No, that wasn’t two-in-one. I have another one, if you have time.

Oh, please.

Okay, so the other thing that I’m really opinionated on - and it’s directly related to a lot of things we’ve been talking about, especially early on with hub - is just Git in general. And when I say Git in the context of today’s show, I mean the Git CLI, the Git command line interface… Which I would also dare to say is the primary interface to Git itself… Because Git is a concept, it’s a storage mechanism, it’s also a protocol, and there are implementations of it as C libraries that can be embedded into projects… So Git is all of that. But I think most clients of Git right now still wrap the Git command line.

And a lot of users - I can’t say most, because I have no data on this - like myself still primarily use the command line as their interface to Git. So my unpopular opinion is that Git is actually hard not just to learn, but to use consistently… And I say that as a person who has used it for probably over ten years, because I’ve used it since GitHub was in beta, when I heard of this Git thing that was trending and cool, and probably my one early-adoption instinct kicked in around that time and I wanted to check it out… Since then I’ve been using it probably every day - at least every day that I’m at a computer, which is not every day, but on those days I was using it. And I have interfaced with it so often, read all of its man pages, its documentation, everything, and still to this day, ten years later, sometimes it’s hard for me to explain.

When people come to me with a very basic question like “Oh, I just pushed a change. I really didn’t wanna push that, so how do I undo it?” - from their perspective, that’s a really reasonable ask. And then I’m just like, “I’m sorry… This is gonna sound like I’m teasing you or mocking you, but I’m really being frank, and I’m gonna give you my advice. It’s just not gonna be great.”
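For reference, that “not great” advice usually boils down to one of two routes - the commands below are illustrative and assume the unwanted commit is the tip of the pushed branch:

```sh
# Option 1 (safest): add a new commit that reverses the pushed one, then push normally.
git revert HEAD
git push

# Option 2: rewrite history locally and force-push; this disrupts anyone who already pulled.
git reset --hard HEAD~1
git push --force-with-lease
```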

Or for instance, when people ask “How do I delete a branch?” and then I have to ask them “Well, what do you want to do? Do you wanna delete a local branch, and then just get it recreated when you pull again from the same remote? Or delete the remote-tracking branch? Or delete the remote branch?” And then for the first time I realize that they have never even considered that there’s such a thing - because why would they? A branch is just a single concept in our heads that is made needlessly complicated by – well, I wouldn’t say “needlessly”, but it is made complicated by the inherent distributed nature of Git.
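Those three flavors of “delete a branch” map to three different commands, which is exactly the complexity being described (branch name is illustrative):

```sh
# Delete only the local branch (use -D to force if it isn't merged yet).
git branch -d my-feature

# Delete only the local remote-tracking branch; the branch stays on the server.
git branch -d -r origin/my-feature

# Delete the branch on the remote itself.
git push origin --delete my-feature
```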

It’s not to say that I’ve become disillusioned with the tech itself. I think it’s amazing, and I think the quality of the tech that is Git is a testament to how GitHub has been able to make it this far in the tech space… And I think it has evolved amazingly as well. But in its evolution it’s only getting bigger - it’s just getting more commands. And even though its documentation is getting more approachable for newcomers every year, and there’s really good man page documentation by now, and the error messaging is fantastic because it often suggests what you should do next to get yourself out of the mess you’ve accidentally made…

I feel that with all the power it’s gaining, instead of becoming a more approachable tool, it’s actually becoming a tool that continuously makes people feel frustrated - to the point where I feel that whatever the next version control system is (and it doesn’t have to be something separate from Git; maybe it should just be a really powerful abstraction built on top of Git), whatever the next iteration of people’s version control is, it should be something that is more reflective of how we actually think about version control.

And how people think about things is generally always very simple. “I have some changes. I wanna share it with Jon and Johnny, so they can tell me what they think. And then maybe they can add their ideas. And then we can have a merging of our ideas, and eventually test out if it works, and have it out there and ship it.” That wasn’t hard to explain. I think it’s very easy for all of our listeners to understand that mental model in their heads. But then when we come to physically typing out all those commands, suddenly we need months or sometimes even years of learning this set of tools to become proficient enough with them.

Initially I really resisted the idea of graphical tools for Git, because I was this heavy terminal nerd; I was very much in my terminal bubble of being really proficient with a lot of these things, because for ten years, even before my Git learning, I was using terminal tools in general - I was a real Linux nerd. And even though it was easy for me - not easy, but possible for me - to learn all of that, I feel that nobody should have to spend so much time in a terminal to be able to understand these things. I especially see it when somebody not from my background approaches this.

So I would definitely say it’s not such an unpopular opinion; I’ve heard a lot of people express their anguish, especially on Twitter, at their inability to use the Git command line even after a long while. Just like the Go GraphQL situation, I definitely feel this is an open problem in version control. The hands-on user aspect of version control - how we interact with it - needs to be built as something that is much closer to how humans think about it… Rather than “I will get you to think about a directed graph, or about operations on a directed graph” or something like that. No human thinks about that. Humans think, “I’m gonna save my work and I’m gonna share it with other people. Then I’m gonna step away from this computer and just leave for the day.”

So I heard Mislav say we should all go back to using Visual SourceSafe or something. [laughs]

Oh, boy…

I guess the short TL;DR version would be that the next version of version control systems should not be something that was specifically made for the Linux kernel community. It should be something that was specifically designed to be used by the wider community. And it can still be implemented on top of the Git tech, or it doesn’t have to be.

For me - I love version control, and I’m gonna love it in whatever iteration it appears in. I feel the next one should be designed with a broader set of users in mind than a bunch of people who are already really comfortable in their terminals, reading email from mailing lists in the terminal and unpacking patches by typing out a tar command in one go. The new generations of GitHub users that I see are not those people. They’re not me, and they’re not those Linux maintainers. Sometimes they’re even designers.

We have a designer on our team, and left to her own devices, she would use a graphical tool for version control, like GitHub Desktop. Not just because it’s easier - I think it’s just a saner solution. I’ve felt it too; I sometimes feel very safely coddled by a tool that’s just like, “Okay, save. Here’s a Save button. Okay, here’s a nice rendering of what just happened.”

I’ve grown to now be in between the two worlds - not so seduced into thinking that terminals are the answer to everything, but always considering that there’s a graphical equivalent of things, that there are better abstractions we can build, and that we should be more inclusive with our software in general.

I think Git has sort of fallen victim to the fact that you have a bunch of power users who want to be able to do anything and everything, and it enables that. But like you said now, the average user wants to do 1% of what Git can do.

Average things, right?

Average things, like maybe 1%. And I think because of that it’s just hard to – it’s kind of like what you talked about earlier with libraries, and with hub - you didn’t wanna break it for anybody who’s using it… But realistically, there almost need to be two versions of Git: the average user’s version, and the “you wanna be able to do everything under the sun” version. But that’s hard to maintain. It’s hard to make that work, which is a challenge.

I’m even thinking of – Git has the tools to do a binary search to figure out where something broke, and where a bug was introduced, and the average user probably has no idea how to do that… Which makes sense, because most people probably aren’t doing that. But those things all are there, and they exist, and they’re cool, but it’s just – every time you wanna use them, you’re like “Let me go find a tutorial that teaches me how to do it again, because I sure don’t remember.”
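For the curious, that binary search is git bisect; a typical session looks roughly like this (the revisions are illustrative):

```sh
git bisect start
git bisect bad              # the current commit is broken
git bisect good v1.2.0      # a commit or tag known to be good
# Git checks out a commit roughly halfway in between; test it, then mark it:
git bisect good             # or: git bisect bad
# Repeat until Git reports the first bad commit, then clean up:
git bisect reset
```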

Indeed. Well, it is that time… Sadly, we have to go away. I know you will miss us in our absence, but it’s been a pleasure having you on the show, Mislav. Thank you so much for the insight on hub, how it came to be, and of course, its successor, gh, or the GitHub official CLI. We’re glad you made the decision to write that in Go, even though it has its challenges. From what we hear, we think you think it was a good decision. Are we in the right ballpark?

Definitely. One thing that I’m happy about, having chosen it, is that I now get to learn it better as a result. That also happens sometimes with people’s contributions - they’ll say “Well, I switched to this standard library thing”, and then I’m like “Wow, this exists. This is great.” Something like that happens every week.

Awesome, awesome. Well, thank you for coming. Jon, thank you for being an excellent co-host. We will catch you on the next Go Time!

Changelog

Our transcripts are open source on GitHub. Improvements are welcome. 💚
