Go Time – Episode #337

Crawl, walk & run your way to usable CLIs in Go

with Wesley Beary from Anchor


With the number of libraries available to Go developers these days, you’d think building a CLI app was now a trivial matter. But like many things in software development, it depends. In this episode, we explore the challenges that arose during one team’s journey towards a production-ready CLI.


Sponsors

Fly.io – The home of Changelog.com — Deploy your apps close to your users — global Anycast load-balancing, zero-configuration private networking, hardware isolation, and instant WireGuard VPN connections. Push-button deployments that scale to thousands of instances. Check out the speedrun to get started in minutes.

JetBrains – Sign up for the free “Mastering Go with GoLand” course and receive a complimentary 1-year GoLand subscription at bytesizego.com/goland

Retool – The low-code platform for developers to build internal tools — Some of the best teams out there trust Retool… Brex, Coinbase, Plaid, Doordash, LegalGenius, Amazon, Allbirds, Peloton, and so many more – the developers at these teams trust Retool as the platform to build their internal tools. Try it free at retool.com/changelog


Chapters

1. 00:00 It's Go Time! (00:47)
2. 00:47 Sponsor: Fly (02:29)
3. 03:16 Intro (02:52)
4. 06:08 Setting out to build (07:59)
5. 14:08 Network calls (07:09)
6. 21:17 Sponsor: JetBrains (03:10)
7. 24:28 Versioning (01:53)
8. 26:21 Next step (03:37)
9. 29:58 CLI (07:04)
10. 37:03 From Ruby to Go (04:50)
11. 41:53 Words of wisdom (03:21)
12. 45:13 Sponsor: Retool (01:44)
13. 46:57 Unpopular Opinions! (00:34)
14. 47:31 Wesley's first unpop (05:31)
15. 53:02 Wesley's second unpop (03:35)
16. 56:37 Outro (01:04)

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Hello, hello, hello. Welcome, listener, to another episode of Go Time, where we talk about Go, obviously, and Go-adjacent things. Today, I am joined… Oh, yeah, I’m Johnny. I always forget to introduce myself. I’ve been doing this long enough, and I always forget. Today, I am joined by someone who definitely remembers his own name, Wesley Beary. How are you doing, Wesley?

I’m doing well. How are you?

I’m doing alright. I’m doing alright. So the reason why you and I are talking today is because of an interesting journey you’ve been on building a CLI, and we’ll talk about what you’re building the CLI tooling for. But I’ve found your journey interesting because you’re kind of not using the standard off-the-shelf community projects to do things. You kind of went your own way, so to speak, right? So I wanted to get you on the show for us to talk about basically how did you crawl, walk, run your way to a usable CLI for what you’re trying to build. But before we get into that - yeah, give us a little intro. What have you been up to? What are you working on? Where do you work at? That sort of thing.

Sure. So I’ve been at this kind of thing for a little while now. My first big exposure to doing CLI stuff was back in the day when I first joined Heroku. My first year or so there I was working on the Heroku CLI, which was all in Ruby, actually, which had its own ups and downs, pluses and minuses… I mean, distributing it was not a fun time. But working in Ruby was pretty pleasant overall. I’m still more a Ruby-ist than anything else, I think.

These days - yeah, I work at Anchor, where I do CLI work, as well as API work, mostly, as well as leading the team, because there’s five of us, so we all have a lot of hats to wear… So just helping keep engineering on track, and moving it forward, and doing our best to provide a great CLI experience for making encryption not be terrible… Because some existing things, like trying to use OpenSSL directly and stuff is – at least for me it has always been nightmarish. I can never remember what does what, and I have to go look back at my notes or something to figure out the magic spell I cast the last time with OpenSSL that got me what I needed, kind of thing.

Yeah, that makes a ton of sense. We actually had an episode about Anchor on this show. I’ll put a link to the episode in the show notes. Yeah, we talked about what Anchor is and does, so we don’t have to get too deep into that product on this episode. And yeah, we’re both Heroku alumni, so that’s always fun. Basically, can you get a little bit into what you set out to build? What made it a non-trivial CLI tool - not something that you could just throw the standard library at, or something like that? What did you set out to build?

Sure. I mean, there are some parts. We are in Go because we wanted to be able to distribute more easily, at least in part, and also because the CTO, who was on the show previously, was much more comfortable in Go than I am. This is my first exposure to Go also, for better or worse. That might have something to do with why some of the standard libraries didn’t play out the way that you might expect - because I was trying to figure out how to even do this thing.

But yeah, a lot of the goal was really to just provide the best possible user experience we could. So once we got past distribution and some other things, then I kind of started to try to figure out, given some of the things that I want to do, how can we even do it? What tools are available? What’s going to work? What’s not going to work?

And the other thing too is we’re a really small team, and we want to be able to move quickly… And so some of it was also a question of “How do we quickly iterate?” Because we want to add stuff, but we’re probably going to do it wrong the first time, especially right now as we’re still discovering what it is that we’re even going to do… And so how do we iterate as quickly as possible and not waste a lot of time and effort building out stuff that maybe we’re going to realize was the wrong path anyway, and have to throw out… So yeah, kind of all of those things came together in a way that made it challenging to just use the standard library stuff, and this more standard-based stuff out of the box.

I don’t know, for me especially, I think one of the biggest challenges I had, just looking at existing Bubble Tea and other stuff in that space, was finding good examples of test coverage that were satisfying, that I felt like I could emulate or use for something similar.

[00:08:01.23] A lot of the examples and things from Charm, and a lot of the other examples I saw just had maybe unit tests or something, but not much that really tested what ultimately came out of the CLI on the other side. And that was an ongoing struggle. How we ended up doing some of the things that we did, for instance, was because we just had challenges around that.

So if I understand correctly, you wanted to – your tests were basically given a certain input, you were expecting a certain output… You weren’t necessarily testing the ability of the tooling to be able to find the right target, and trigger the right function, or to do the right thing, but basically, ultimately, if I give this input, could I test what this output was? Which it kind of sounds and feels a little bit different than the regular kind of testing you would do, like to test “Okay, does the right code get executed?” But more so “Do I get the right output given a certain input, and do I get the right expectation?” That sounds a little different, right?

Yeah, I think maybe. I mean, I think especially as I moved into trying to do some things that were a little bit more complicated in the UX… Like, it wasn’t whether or not that particular thing got called; I wanted to be able to test particular cases within that. And also, working on a team and stuff, I wanted to hopefully not make it as terrible for myself to review other people’s changes to the CLI, where I would have to download and install that branch, and manually run a bunch of commands to see if it still looked the way that I thought it ought to look. I don’t know, some of it, I think, comes from also my background of, again, working on APIs, and there’s some similar problems there sometimes, I think, of like trying to keep track of what changes are or aren’t being made, and what the final results of those changes will be… So there’s just like - yeah, kind of a lot of layers, so I wanted to be able to really dig in and look at what those outputs would be, and be able to make assertions about it, and stuff like that… And really drill down, nail down that UX, make sure that we don’t have regressions around it, things like that.

So for someone who’s listening to this episode and they’re hearing “Well, we wanted to iterate quickly, and basically keep things simple, because there will be changes…” To me and to them, it might sound like you’d want to pick up a well-known library, like a Cobra, or - you mentioned Bubble Tea, and there are others… It sounds like these things might already be familiar enough to a lot of Go developers that that would be an easy decision to make, right? “Just pick a well-known library and go with it”, right? Why did you go differently?

Well, I mean, first off, we do have Cobra and Bubble Tea in the app. We are still using those. It was more just there were some, I don’t know, sharp edges, rough edges, whatever, areas where what those were providing to us didn’t quite cut it for what we were trying to accomplish, or we weren’t quite satisfied with the way that they solved for it… I think we’ll get into it more deeply. One of the things we ended up doing relating to some of the testing stuff we were talking about was we now use a lot of golden file stuff. So we basically say “Here are the different things that we expect to happen in the CLI interaction… Like, wait for this prompt to appear. And once it does, then type in these characters and press Enter.” And then at the end of it, we can say “Okay, now take all of the stuff that happened in the display space and print it out to a file. And when I run this in the future, I want you to do that again, and I want you to compare against that file, and I want to know if there’s any discrepancies. Because if there are, we want to make sure those were intentional, we want to update the files if we need to”, all that kind of thing. That Bubble Tea has that is great, but at least last I saw, it was just in an experimental library that Charm ships. It’s not really part of the core Bubble Tea. We had to go hunting a little bit to even find it. And when we did find it, we found that there were some gotchas related to it that made it difficult to use. In particular, there are race conditions basically that come along with it, because of the nature of when you are doing things that cause the UX to render more things, versus when you actually do the check to see what rendered out.

[00:11:58.12] A concrete example is we have some places where in the UX it will say “I’m fetching resources from the API.” It has a little spinner; it’s a pretty common element that you’ve probably seen in a lot of CLIs. The problem was that from the way that it did its golden file tests in the library from Charm, if that spins just until it finishes and then disappears, depending on how long it takes to disappear on different test runs, it might or might not appear in the golden file output. And so now you have this issue of either we need to, I don’t know, mock it out or something, so that we can more consistently know how long it’s going to take, or… You know, it suddenly becomes much more complicated, and you need to remember that and deal with that every time you’re dealing with the test. And we were not keen on doing that. That got to be a headache pretty quickly.

So we actually like are still doing something that’s very similar. The core of it is the same thing that they were doing, but we’ve changed it so that we are making sure that whenever we’re rendering something to the screen, we’re getting some kind of output related to it into our golden files, so that again, it’s consistent from run to run. There’s not those timing inconsistencies. Even if things appear and disappear every time something has changed, it will appear on the screen.
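
For readers who want a concrete picture, a golden-file comparison in Go often looks roughly like the sketch below; the helper name, testdata layout, and -update flag are illustrative assumptions rather than Anchor's actual code.

```go
package cli_test

import (
	"flag"
	"os"
	"path/filepath"
	"testing"
)

// -update rewrites the golden files instead of comparing against them.
var update = flag.Bool("update", false, "rewrite golden files")

// assertGolden compares rendered CLI output against testdata/<name>.golden,
// and rewrites that file when the test is run with -update.
func assertGolden(t *testing.T, name string, got []byte) {
	t.Helper()
	path := filepath.Join("testdata", name+".golden")

	if *update {
		if err := os.WriteFile(path, got, 0o644); err != nil {
			t.Fatalf("writing golden file: %v", err)
		}
		return
	}

	want, err := os.ReadFile(path)
	if err != nil {
		t.Fatalf("reading golden file: %v", err)
	}
	if string(got) != string(want) {
		t.Errorf("output does not match %s\ngot:\n%s\nwant:\n%s", path, got, want)
	}
}
```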

And then because we’re already in the mix of doing all of that, it gave us more capabilities that more recently we added the ability to redact certain things from the golden files, because in some cases it’s like “Oh, well, the name that gets placed here is – we randomly generate a value there, so there’s not always the same thing.” But in terms of the golden file, we just want a name value to appear there. We don’t really care what the name value is, and we don’t want that to like write a new golden file, or whatever.

So since we were already kind of in the guts of things and we already had a pipeline that was touching the golden files, it was pretty easy to make those changes. So it wasn’t that we were like totally starting from scratch, and I don’t think, or at least I hope I didn’t commit the cardinal sin of I came to this language from a different language and I think I know better… I did try pretty hard to see what was there, and try to take from and use it… And then, like I said, it was just like, a lot of it got us pretty far along the way, but not quite all the way there. And so what we’ve done in a lot of cases is just take it and build a little bit more or a little bit different on top of it to just get us over those little bumps that we ran into along the way.
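
A redaction pass like the one described could look something like this; the pattern and placeholder here are made up purely for illustration, not taken from Anchor's pipeline.

```go
package cli_test

import "regexp"

// redactions maps patterns of generated values to stable placeholders so
// they don't churn the golden files from run to run.
var redactions = map[*regexp.Regexp]string{
	regexp.MustCompile(`name: \S+`): "name: <redacted>", // illustrative pattern
}

// redact normalizes CLI output before it is written to, or compared against,
// a golden file.
func redact(out []byte) []byte {
	for re, placeholder := range redactions {
		out = re.ReplaceAll(out, []byte(placeholder))
	}
	return out
}
```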

So I’m going to assume that this CLI has to make network calls to perform certain actions. During your testing, how did that factor into your tests? Did you simulate network request responses, or did you actually make real network calls? How did you handle that?

All of the above, kind of. So there’s a couple of different things. One of the other things that I initially picked up at my time at Heroku and now I’ve really dived into is doing more like spec-first API development. So when we’re working on API endpoints, which - for us, a lot of what we’re doing is basically like we want to add a new capability to the CLI, which relates to CRUD operations on a resource, or something… Well, how are those going to be driven? Well, there’s going to be a REST API on the other side, right? So we have our Open API spec that defines what all of the API ought to be doing.

So usually, when I’m going to develop a new endpoint… Like, I was working on a new thing earlier this week. So I start, I go into the spec, I add in - in this case it was a new operation to create organizations. Previously, if you wanted to do that, you had to just go into the web interface. Now I’m trying to add it in the CLI. So I went in, I defined it in the spec, and then right now we’re using a tool called - hopefully, I get all of these right. There’s a bunch of tools. So I believe Prism is the right one for that… So Prism is a Node-based command line tool; it relates to Open API stuff. You spin that up and you can actually say “Here is a spec that has example values in it. When I make a POST request to the organizations thing to create an organization, just use the example data from this file and return something that doesn’t necessarily quite match up with what I said, but it is a valid representation of an organization.” Because for my initial stuff that’s fine.

[00:15:49.20] So usually, while I’m developing, I’ll start with that. So I’ll be developing the spec and developing the CLI in parallel, and then I can actually build out the CLI endpoint that works just against all of these mocked data. And then that I’ve found in terms of iterating and stuff is super-valuable, because - I don’t know about other people… Even though I’ve been doing API stuff for a while, I almost never get it right the first time. And it’s pretty costly if “getting it right” the first time means writing out all of the end points, and all of the backing stuff to that, and all of the tests to that, and all of the database interactions to that, and the tables… All of that stuff - there’s a lot of stuff to that. And so if you make a mistake, it can be a real pain to fix it.

So being able to just iterate quickly against the spec directly - it’s way easier, because I can be like “Alright, great. I’ll do this. I’m banging out. I’m working on the CLI endpoint. Oh, wait… There’s two or three more fields that need to be serialized onto this record that I just forgot about. Okay, let me go add those to the spec. Great. They’re there. Okay.”

Now the CLI thing does everything it needs to. Now I can go do the implementation, and I have a clear contract that I’m basically implementing against.

So we start with that. And then in the same way when we’re doing tests, we have a lot of tests that are marked to run just in that mocked mode. And so in the test context, we spin up a Prism mock server, and we run tests against that. Not everything works that way, because - the example I gave a second ago of like create an organization, but don’t really pay attention to the parameters I pass into you, or whatever, just give me back something that’s technically valid… Like, for some tests, that’s great. That’s all you really need. But for other tests, it matters that the database really gets touched, and the valid records are there. And especially in our context of X509 stuff - I wish it weren’t this way because it can be a real pain, but sometimes to create a valid record, you’re actually creating a constellation of records. It’s actually like it’s not just this one record that is valid by itself. You need to create these six different records that all have relationships to one another, or something. And so mocking that becomes a nightmare very quickly.
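
As a rough illustration of that mocked mode, a test could be pointed at a Prism mock of the spec (for example one started with `prism mock openapi.yaml --port 4010`); the environment variable, endpoint, and request body below are assumptions for the sketch, not Anchor's actual setup.

```go
package cli_test

import (
	"net/http"
	"os"
	"strings"
	"testing"
)

func TestCreateOrganization_MockedAPI(t *testing.T) {
	// Assumed env var pointing at a running Prism mock server.
	base := os.Getenv("MOCK_API_URL")
	if base == "" {
		t.Skip("MOCK_API_URL not set; skipping mocked API tests")
	}

	// Prism answers with example data from the spec, so the response is a
	// valid organization representation even if it ignores our input.
	body := strings.NewReader(`{"name": "example-org"}`) // shape assumed
	resp, err := http.Post(base+"/organizations", "application/json", body)
	if err != nil {
		t.Fatalf("POST /organizations: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode >= 400 {
		t.Fatalf("mock server rejected the request: %d", resp.StatusCode)
	}
}
```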

In the same way, we have – Prism can also be run in a proxy mode, where instead of being a mock server, it passes requests through, but as the requests pass through in both directions, it checks to make sure that everything that’s passing through matches against the schema that you provided. So that helps to guard us: now we’re running tests against real, live stuff, creating real records or whatever, but if there are discrepancies from that contract that we’ve written, we’ll find out about it. So that helps, again, to keep us honest.

And then on the server side, we have some similar tooling. There’s a – so our server in this case is all in Ruby, that runs the API, and there’s a library that’s called Committee, that also has the schema loaded into it. It’s what’s called a rack middleware. So as the requests come in, it checks the requests against the schema, and if they don’t match, it will reject it and say “Hey, you’re including a field that’s not in the schema. I don’t know what to do with that. Please, don’t include this field.” Or “This is field is the wrong type” or all these kinds of things that it can find out just from the schema.

And by the same token, as the response comes back out, it checks it again against the schema and says “Hey, wait a second. Actually, you included three fields that aren’t in the schema. What’s up with that?”

All of that like provides nice guardrails and helps us iterate faster. And then we don’t always do this, because we’re such a small team, but it can also be really nice in terms of being able to parallelize some of that. Like, once you have a schema that you’ve agreed upon, I can continue working on the CLI and I can potentially hand off the implementation of the API side to one of my colleagues, and we don’t even have to talk to each other or whatever. We’re not stepping on each other’s feet. We’re both implementing against the same contract. As long as the contract stays the same, we can do that without even really having to talk. That forms the talking that needs to happen.

So all of those aspects have been really nice, and definitely have helped us iterate faster. I actually – this is a whole other aside, but in some ways I wish that I had something kind of like that schema set up for CLI stuff, where I could kind of define, I don’t know, somehow what the CLI ought to eventually do or look like… Because then we could work backwards from that. We could have the contract… I haven’t seen anything like that, so if anybody knows of something like that, please let me know.

It seems harder, because the ultimate output of the CLI is much more freeform. In the case of an API you’re talking about JSON blobs in and out, so it seems a lot easier to define something that says like what the shape of those blobs should be, what the types should be, stuff like that. CLI is like a bunch of characters on a screen, so what do you even do?

[00:20:12.02] But yeah, not having that can make it harder, right? There’s a lot of just like guess and check. I don’t know. I mean, the closest we got is – this is iterated over time, but we do a lot of sketching basically, where a sketch is… In the early days, my sketches were basically like I would - again, former Rubyist; still Rubyist, whatever. I would write a Ruby program that was just like a bunch of print lines and stuff basically, that more or less did something in the shape of what we wanted the CLI to do, so you could just like see it happening dynamically… Because in a lot of cases, for me at least, that would give me a much better sense of “Does this feel right? Does this feel close to right? Does something seem off here? Does it seem too noisy? Does it seem like it’s not giving enough feedback? How does this feel?” I don’t know, for me at least it’s hard to do that without a little bit of poking and prodding, some just try, guess, check etc. So yeah, that’s been super-helpful too in terms of iterating quickly.

Break: [00:21:09.01]

How have you dealt with the versioning and the compatibility between things? Because I imagine if you’re distributing a CLI, there’s a possibility that a customer will have a version of the CLI that doesn’t work with the current iteration of the API that it has to talk to. How do you handle that?

It’s a work in progress. In the very early days we just said “We have a small enough user base, or whatever. We’re okay with just moving the API and the CLI forward in lockstep”, where basically we can just have… I mean, in the very early days we just had it where the CLI wouldn’t allow you to run commands if you weren’t on the latest version. That’s the very brute force way of just like –

It forces you to upgrade.

Yup. “Hey, you aren’t on the latest CLI. Sorry, you’re going to have to upgrade before you run this command.” So one thing that we’ve done actually as a way to be able to fall back to that if we feel like we absolutely have to, but to not have it be the normal behavior, is now we have where we’re able to return to the CLI “This is the minimum acceptable version.” So if we get to a point where we know we’re going to release something big in the API that is a breaking change, and we just want to pull everybody forward, because - we don’t want to do that all the time, but sometimes it might be worth it… That is our break glass kind of thing, is to just say “Hey, I know previously we said you could be on whatever version of the CLI, but we just made a big change and we do not want to have to deal with the backwards compatibility. It isn’t worth it”, whatever. So that’s one possibility.
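
A sketch of that “minimum acceptable version” idea might look like the following; the header name and the use of golang.org/x/mod/semver are assumptions for illustration, not the actual Anchor protocol.

```go
package cli

import (
	"fmt"
	"net/http"

	"golang.org/x/mod/semver"
)

// checkMinVersion reads a hypothetical response header that tells the CLI
// the oldest version the API is still willing to talk to.
func checkMinVersion(resp *http.Response, current string) error {
	min := resp.Header.Get("X-Minimum-Cli-Version") // assumed header name
	if min == "" {
		return nil // the server isn't demanding an upgrade
	}
	if semver.Compare("v"+current, "v"+min) < 0 {
		return fmt.Errorf("CLI %s is older than the minimum supported version %s; please upgrade", current, min)
	}
	return nil
}
```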

Similarly, we don’t expect anybody else is consuming the API yet, so we’re not too worried about making those changes… We’re controlling both sides of the equation, but that will not always be true. And API governance and versioning is… I mean, that’s a whole other headache and conversation… But yeah, that’s the main thing we’re doing. Otherwise, it’s similar to other CLIs. If there’s a new version, it’ll let you know, but it won’t make you go to it, unless you really feel like it. But yeah, we’ll continue to revisit that and see how that works over time… But that’s kind of where we’re at right now.

So it sounds like you’re still iterating, trying to find that sweet spot… Or do you think you’ve found it? And perhaps what would the next step look like? Would you be going back and saying “Okay, we’ve learned a ton. We know what works, what doesn’t. We know what works for us and what doesn’t work for us.” Would you build it differently, or “the right way” now? Or would you go back and reinvent that wheel? Or what would your next step look like for you?

I mean, it’s a mixed bag. I think, relating to some of the golden file stuff we were talking about - I like not having to deal with those race conditions, and I like having some capabilities to do some of those redaction or replacement things that weren’t otherwise available. Something like that I think I would still really want. I mean, even to the point that now when I’m doing sketches, I actually don’t do the Ruby file sketches like I used to… I actually will just like basically handwrite golden files. I find that that is a format that all of us are relatively used to. And again, in terms of being fast and easy and cheap, it’s faster and easier and cheaper to edit a text file than it is to write a Ruby program that kind of spits out something that almost looks like the CLI, but always was kind of imperfect and things anyway… So those things were nice.

There are other parts though where it was a pain. As you imagine, as we sometimes deviated from the golden path, the expected standard stuff… Like, good luck finding an example of how to do something when you wrote your own thing yourself. You’re not going to go find a blog post or tutorial or something. Like, there were some parts where it quickly became pretty painful because we just had to figure it out. I mean, there’s still some gotchas related to that, that I’m guessing it has something to do with how we’ve done something wrong. One that’s continued to be a pain is - you know, Cobra does command line flag parsing, right? And so somehow in the way that we’ve set up our test suite, it doesn’t really play nice with the normal flags that you pass to go test.

[00:28:13.16] So like we basically just can’t use those flags, because if we try to either – basically, we’ve gone back and forth and tried different things. Either we mess up the actual Cobra tests, or we mess up the go test flags. One or the other doesn’t play nice. We just have tried a number of things and had a real struggle to get both to play nice next to each other. I’m guessing, again, we must’ve just done something wrong. I don’t know how we got there, but we haven’t been able to like find our way back.
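
For anyone hitting a similar clash, one common way to keep Cobra away from go test's own flags is to hand the command its arguments explicitly in tests rather than letting it read os.Args; this is a generic sketch, not necessarily how Anchor's suite is wired.

```go
package cli_test

import (
	"bytes"
	"testing"

	"github.com/spf13/cobra"
)

func TestVersionCommand(t *testing.T) {
	cmd := &cobra.Command{
		Use: "version",
		RunE: func(cmd *cobra.Command, args []string) error {
			cmd.Println("anchor 1.2.3") // hypothetical output
			return nil
		},
	}

	var out bytes.Buffer
	cmd.SetOut(&out)
	// SetArgs keeps Cobra from parsing os.Args, so flags like `go test -run`
	// never reach the command's own flag parser.
	cmd.SetArgs([]string{})

	if err := cmd.Execute(); err != nil {
		t.Fatalf("execute: %v", err)
	}
	if got := out.String(); got != "anchor 1.2.3\n" {
		t.Errorf("unexpected output: %q", got)
	}
}
```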

So there’s some of that… I mean, I think also, as is frequently a problem that folks run into, there’s a few places where we have some abstractions into generics and things to try to make things simpler, that I’m not sure succeeded in making things simpler… Like, there’s maybe less code duplication than there would be otherwise, but boy, when something goes wrong, it can be so much harder to actually understand what’s going on and how to fix it, versus just having more duplication, having more boilerplate, that sort of thing. Not that we should never do the generics, but I think there’s a few places where I’m like “I wish we would have waited a little bit longer”, because I think we would have been better off maintaining two or three parallel implementations of this for a little bit longer, until we had a stronger idea of exactly what the abstraction should be like.

So there’s some of those things, but I think some of that would likely have come along even if we hadn’t deviated as much from these common things. I mean, the other thing too is like some of this question you would have to ask Ben, the CTO that you guys had on before, because he just doesn’t like the way that Cobra and Bubble Tea did certain things… And so sometimes that’s influenced what we did too, of - not that it isn’t there, not that it doesn’t technically work, but just, it isn’t how he would do it. So there’s a few things that I think we’ve kind of gone our own way a little bit, just because he would prefer to have it be a little bit more - yeah, just structured a little differently, factored a little bit differently, that kind of thing.

Is the API – how much effort do you put into having the API - or rather the CLI, I should say - help somebody who’s learning to use a CLI, who wants to use a CLI to interact with their accounts and whatnot? Is it self-documenting? Like, when you try to do something, does it tell you what you’re doing wrong, or something like that? Or do you need to have a readme or a guide or a documentation site hand in hand to be able to use it properly?

Sure. I definitely try not to require a documentation site alongside, as much as possible. I’ve pushed, I think, extra far in the other direction, I might say. So we’ve done a lot of things that, I don’t know, I think are interesting and compelling, that relate to that.

A really concrete example that has bugged me with other CLIs is, in our CLI, if you – I mean, stepping away from CLIs even. If you go to a website and you try to go to a page that requires authentication and you’re not authenticated, I think we’re kind of used to what happens. Do you just see a page that says “You’re not authenticated. Please go solve this problem and come back to us when you’re done”? No. It redirects you over to the sign-in page, you sign in, and then it redirects you back to where you were and you continue on your way. But somehow, that never came to CLI land. Most CLIs that I’ve interacted with, you try to run an authenticated command and it’s like “Hey, you’re doing this wrong. Maybe go check the manual, or something. You’re not signed in. Please go resolve this problem and come back to us when you’re ready.” So that feels more along the lines of “You need the document hand-in-hand. You’re not going to get anywhere if you don’t have it” kind of thing… Of like “Please get your stuff together and come back to us when you actually know what you’re doing.” That’s very hostile, not helpful.

So in that very narrow example of sign-in, we have all of our auth commands set up where if you’re not signed in, it just says “Hey, number one, we can see you’re not signed in. Number two, we know how to sign you in, we’re well aware of it, so we’re actually going to just help you do that. We’re going to redirect you basically to the sign-in command now.” And “Okay, we’ve finished the sign-in command. Everything went well. Great, you’re signed in now, so let’s just go right back to where you were, because we’re reasonably confident that’s what you were trying to do in the first place.” So we started with that, and with that in mind, then we’ve actually been starting to find a few more places to do similar things.
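
Conceptually, that “sign in, then continue” behavior is just a wrapper around the original action; here's a minimal sketch, where isSignedIn and runSignIn are hypothetical stand-ins for the credential check and the interactive sign-in flow.

```go
package cli

import "context"

// Hypothetical hooks: a real CLI would check stored credentials and run its
// interactive sign-in command here.
var (
	isSignedIn = func(ctx context.Context) bool { return false }
	runSignIn  = func(ctx context.Context) error { return nil }
)

// requireAuth wraps a command action: if the user isn't signed in, run the
// sign-in flow first, then continue with what they originally asked for.
func requireAuth(ctx context.Context, action func(context.Context) error) error {
	if !isSignedIn(ctx) {
		if err := runSignIn(ctx); err != nil {
			return err
		}
	}
	// Either we were already signed in or we just finished signing in, so
	// pick up right where the user left off.
	return action(ctx)
}
```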

[00:32:07.19] Another example is kind of similar to GitHub, or other things - in a lot of cases, you have to define which organization you’re operating inside of. A lot of CLIs you’d expect, probably, you’re going to have to pass like an org flag, or this thing’s just going to crash and tell you “By the way, go get an org flag and come back when you have it.”

We did a similar thing where it’s like if you run a command that needs an org, we get to the point that we’re like “Oh, hey, we need an org. You didn’t provide one. We know how to look up orgs. Let’s just look up the orgs for you and provide you with a selection list, and you can choose which one you want… And then we’ll just continue on our way. We don’t need to interrupt this flow and have you go figure out everything.” Because yeah, I for one don’t enjoy this experience of “I tried to run the command, I got to the first error, I went and figured out how to fix it, I ran the command again, got to the second error, went and figured out how to fix it…” And six or eight iterations later you finally have one command that you can run and it succeeds.
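
The org lookup follows the same pattern; a hedged sketch, with fetchOrgs and selectPrompt standing in for the API call and the selection prompt.

```go
package cli

import "context"

// resolveOrg returns the org to operate on: the --org flag if it was given,
// otherwise it looks up the user's orgs and asks them to pick one inline.
func resolveOrg(ctx context.Context, flagValue string,
	fetchOrgs func(context.Context) ([]string, error),
	selectPrompt func(options []string) (string, error),
) (string, error) {
	if flagValue != "" {
		return flagValue, nil // expert mode: the flag wins
	}
	orgs, err := fetchOrgs(ctx)
	if err != nil {
		return "", err
	}
	// Instead of failing with "missing --org", show a selection list and
	// keep the flow going.
	return selectPrompt(orgs)
}
```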

But among other things, good luck remembering again this magic invocation that it took to get that to work. You’re going to have to go through that same pain the next time you come back. And especially with something like what we’re working on, where it’s around certificates and stuff - we hope in a lot of cases you’re not having to touch this multiple times every day. Like, you set up your certificates and hopefully they’re pretty good for a little while. Otherwise you probably have a big problem.

So if it’s something where you have to go through a lot of pain and like totally forget how it works and come back and have to just learn from scratch again - that’s going to be a repeatedly bad experience. So we do as much as we can to really just help the user through it.

One of the core things we’re working on is based on setting up local secure context for development. So you can go to like lcl.host and see that in action, you can download it and set it up… But we really gear towards trying to have – we want you to be able to basically just run one command, and the command will do what it needs to do. We will figure out what needs to happen, basically, to end up that – like, when you finish running through this command successfully, you have local secure context stuff set up. And some of that’s still a work in progress. I don’t think it’s totally perfect that that’s actually what happens in reality, but it’s pretty close, and it’s getting closer all the time… And there’s other parts to that, too. Like, it’s set up where it’s intended to be reentrant. So if you actually run the command, and for whatever reason - there’s a few cases where you actually like drop out of the command because it tells you you need to go do something first, and come back… When you run the command again, it looks to see the state of your machine, and it can figure out “Oh, well, we can skip this step because we can see that you’ve already done it, so we don’t need to do that one again.” “Oh, but you haven’t – okay, now we know that you’re right here, so let’s continue where we left off and get you to where you need to go.”
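
That reentrant behavior can be modeled as a list of steps, each with a “has this already happened?” check; an illustrative sketch, not Anchor's actual code.

```go
package cli

import "context"

// step is one piece of the setup workflow: done reports whether the machine
// is already in the desired state, and run performs the work if it isn't.
type step struct {
	name string
	done func(context.Context) (bool, error)
	run  func(context.Context) error
}

// runSteps makes the command reentrant: on a later invocation, steps that
// already completed are detected and skipped, so the user continues from
// wherever they left off.
func runSteps(ctx context.Context, steps []step) error {
	for _, s := range steps {
		ok, err := s.done(ctx)
		if err != nil {
			return err
		}
		if ok {
			continue // already done on this machine; skip it
		}
		if err := s.run(ctx); err != nil {
			return err
		}
	}
	return nil
}
```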

So yeah, definitely… It’s been interesting, because obviously there’s a lot of like TUIs and other things that are very interactive, but that’s not really what we’re doing… So it’s like somewhere in some ways in between traditional CLIs and a full-on TUI experience. I don’t think we want to go to full-on TUI, but within the CLI context, we want to – yeah, we want to help you along. We want it to be a more interactive experience. We don’t want it to just be, again, “Get all the flags right from the start, or try again.” We want to say “Let’s really help walk you through this, and if we have questions or you need to answer something, we’ll give you a chance in line to do that.”

I mean, you can also run it with all the flags if you want to, if you want to do the hard mode… But I don’t feel like that’s what is required of anyone. So yeah, just continuing to explore that.

I don’t know, I really like this idea. This is something that kind of in some ways an idea that I originally had when I was at Heroku even, of like - you know, we’ve seen in the browser and on phone apps and other things all of these leaps and bounds in terms of how interfaces can or should work, and how pleasant they can be, and other things… And a lot of CLI stuff just felt really stuck in the past. So there’s been a lot of places where I’ve just been “Hey, what can we borrow from those contexts where people figured out these cool things that are really compelling? Which of those can be reapplied here?” Not all of them are going to translate, but I think at least some of them do… And those ones have been some that have been successful for us so far. And yeah, I’m excited, I guess, to continue to explore that space and what can we bring to this, how can we make it better, smoother, easier, you know…

[00:36:18.16] Yeah, definitely we’ll have the link to your article, which - I went through it and it was sensible. There was nothing I was looking at and – obviously, I wanted to understand why kind of semi-roll your own thing, as opposed to leveraging existing things… And yes, some of the thinking there seems reasonable to me, and I’m looking forward to seeing what else – basically, to continue the documentation of this journey, and whatever you end up with.

Hey, who knows, maybe it becomes a new way of doing things, or an additional way people can have as a point of reference… At the very least, basically, you’ll have your reasoning for why you want to deviate from the norm.

I do want to circle back to the Ruby to Go transition for you in particular. How was that transition for you?

You know, better in parts, worse in parts. I mean, there’s definitely – in the Ruby context, I was much more comfortable. I’d been doing that for a much longer time. So just anything new is going to be a challenge. But I think there’s some big – I don’t know, I at least perceive some big differences in terms of how the… I don’t know, the philosophy, I guess, of Ruby versus the philosophy of Go in some cases… Ruby is about making things simple, intuitive, whatever, even if that means that there’s 10 different ways you could do something, because different people have different intuitions or whatever… Versus in Go, in a lot of cases it’s like “There is one way that you ought to be doing this, and we’re not going to support the other ways.” That seems to be pretty common at least, in terms of Go’s standard library, and stuff. Or, people will be like “How would I do this?” and the response will be “You shouldn’t”, which is not what I would expect in the Ruby community, where it’s more like “Oh, I don’t know, here are some possibilities, or something. Or maybe we’ll even add some implementation to do that.”

I don’t know, in some ways I feel like they both potentially lean too far in different directions. Somewhere in the middle is maybe ideal. But that was a big difference. And then there’s some stuff that you probably would imagine… In Ruby, metaprogramming is much easier, and so in a lot of cases you do that more quickly and maybe too much again… But on the Go side, how much boilerplate there sometimes is is like still amazing to me, I guess. Like, now that I have a snippet for it, it’s not as bad, but this sort of like “if this return value is an error, then do something else” kind of thing that I had to like put everywhere was like “Wait a second… Do I really have to do this?” I understand more why now, and it doesn’t bother me as much, but especially at first, I was just sort of like “I guess I just put this everywhere because I’ve been told to, not because I really understood what was up with that.”

And then there’s other little things… Like, I hadn’t had to do pointers in a while, and I still mess up pointers. I mean, I think probably everybody, if they’re being honest, messes up pointers a decent amount in terms of “Do I need to dereference it here or not dereference it here?” And all those kinds of little things.

The other day I was running into, yet again, this thing of like - I was checking to see if something was nil… But it’s always a pointer, it’s never nil. It just might be a pointer to a nil or something, and I’m just like – I had to have somebody else look at it and tell me what I was doing wrong, because I just couldn’t reason about it somehow.

So yeah, there’s a lot of little gotchas like that… And just getting up to speed on something is always hard. And like I said, I think I had some particular troubles just because I really struggled, for whatever reason - maybe I was just looking in the wrong places - to find good examples of what I felt like were the kinds of tests I was looking for for Bubble Tea. And that’s what I usually do, is if I’m learning something new, I like to go and look at prior art, especially if it’s a space that I’m not familiar with, to see – like, there are smart people that have worked on this before. I don’t think I’m going to come up with the best thing just off the top of my head.

[00:40:08.21] So yeah, it was hard. And then, like I said, when we then went off the beaten path in some cases, not having those examples to draw upon was tricky. But we made it through. But yeah, Go is fine. It’s different. I don’t know. It seems to be pretty good at some things and it’s a bit of a pain in other things… But, I mean, that’s every language for you.

Knowing what you know now about Go, if given a choice, if the choice were yours to build this tool in Ruby or Go, would you stick with Go or go back to your roots, so to speak?

I mean, Ruby still has distribution challenges, for sure, in terms of how do people install it and run it… And it’s interpreted, so they’re gonna have to have that interpreter… And I guess if mine is the only Ruby program you’re running on your machine, that’s fine. But as soon as you’re now trying to maintain that across potentially many programs, and need different versions of Ruby, it gets to be a nightmare really quickly. So not a good experience. So something that could be distributed well would be important.

Go obviously fits into that. I don’t know, I might look at some other – there’s some other language choices I think that could fit into that as well. But Go now I have more familiarity with, so I don’t know that I would want to necessarily start from scratch again. Like I said, even though we needed to build some additional things to get what we needed on top of Bubble Tea or whatever… A lot of the core Bubble Tea stuff is really nice. Like, we have been able to do some really nice UX-y things in it, in terms of – again the dropdown selectors for which organization you want, and stuff like that. Those are all just Bubble Tea components that I can pull off the shelf and use. So yeah, I think I would be fine with doing Go again at this point. I don’t know. I mean, time will tell, I guess. I don’t know how many more CLIs I have in me, you know…

[laughs] Alright. Yeah, so before we transition to unpopular opinions, are there any parting words, a recommendation that you’d give to someone or perhaps a team that is looking to embark on a similar journey?

Yeah. I mean, there’s a couple of things, I guess. One is just whether it’s CLIs or APIs or whatever, they can be really expensive… So look for cheap ways to do things. Partly, we talked about that before - doing sketching, doing mocking, things like that. But another one that I feel like is not always obvious to people is I think – for me, I feel like a lot of what I do is really informed by kind of trying to become a connoisseur of APIs and CLIs. So I’ve made an effort to go write a bunch of API clients against random other APIs, especially back in the day. In Ruby and Fog, I wrote a bunch of IaaS provider stuff, right? So a bunch of different AWS services, Rackspace services, OpenStack, Azure… The list goes on and on to a degree that is painful to me, because I still have to maintain some of them, but… A lot of different things. But it’s a lot cheaper and faster and easier to interact with and learn from somebody else’s API than it is to make all those mistakes yourself. And it’s not perfect, but I do feel like I was able to develop a sense of, I don’t know, taste by being critical of those, to be able to say… I maybe sometimes don’t even know exactly why, but I don’t like that. I don’t like how that feels. I don’t like the gymnastics I had to go through in order to write a client that would actually do that thing in a way that seemed consistent to me. Something is off there, and it helps you to not walk into those things yourself when you’re going on to design something. You don’t get that many swings at making an API in your career probably, because they take a long time, and most companies don’t need a dozen of them or something. You just get a few chances… So whatever you can do in other ways to keep refining your chops, I think is really valuable.

[00:43:59.14] So yeah, I still like looking at other APIs and thinking about what they did or didn’t do, and why, and how I might do it… And it’s the same thing with CLIs. I learned a ton from looking at GH, the GitHub CLI, which also is Go, also uses Bubble Tea, and other things… I’ve looked a bunch of times at how they do their testing, for instance, because that was much closer to what I wanted… But also very specific to their use case, and has a very complicated test harness that gets set up, because of all of the things that they need; I didn’t feel like duplicating all of that was going to be a good situation for me… But I still learned a lot from looking at it. I’ve looked through a ton of their code, used their tool a lot… A lot of what they do - not exactly how I would do it, but again, you develop those senses… But yeah.

So do the cheaper things, because you can do more of those more quickly, and you’re not going to get it right the first time… And so if you’re going to get it wrong anyway, do it wrong and cheap, instead of wrong and expensive. I think that’s probably one of my biggest pieces of advice, is just like work in that space until you feel a little bit more comfort and a little bit more confidence, and then move into the spaces that are more expensive.

Break: [00:45:07.28]

Alright, let’s get ready for some unpop.

Alright, lay it on us.

Alright, I’ve got a couple of CLI-related ones… I didn’t know if they would be sufficiently spicy, so I’ve got a couple of tries. We’ll see if we got one. The first one kind of relating to stuff that I’ve been working on is, I feel like for a lot of things anyway, kind of talking about the Linux approach, the small, sharp tools kinds of things - I feel more and more like that’s necessary, but not sufficient, I guess. And so even within the context of a given tool - so this is something that, again, exploring within our tool, I’ve mentioned this big LCL command that does all of the things. Not very small, sharp tool at all… But one of the things we did is actually underneath the covers, it’s actually a series of commands, basically. So it’s sort of a workflow, almost. And so I’m kind of exploring this idea that it’s important to have all of those small, sharp tools, all of those individual commands… But having something that ties it together for the user in a “You are trying to accomplish this thing. I’m not going to make you figure out which set of 10 commands you need to run. I’m going to pull them together into something that’s coherent”, I think there’s a lot of value there that I guess I wish more things helped with in some way, shape, or form.

It’s similar, I guess – also, I think about this more and more in APIs. Historically, my inclination was to be “What’s the set of resources I need to provide in this API? Alright, I’ll set up CRUD for each of those, and my job is done.” But more and more it’s like I don’t think it has to be either/or. I think you could have that, but also then potentially have some API endpoints that represent, again, a workflow kind of thing or something, that actually pull together what otherwise would be three or four operations, because it’s super-common for people to do these three or four operations together, and why should they have to always wire them separately? And if that is something that people commonly want to do, maybe help them do it a little bit faster and easier, right? Anyway, I don’t know if that’s – is that sufficiently unpopular? Is that sticking out enough? I mean…

I do want to pull on that thread a little bit…

Because I keep thinking of the example you gave before, where if you didn’t provide an org flag, it pauses to ask you - or rather creates some interactive feedback mechanism within the CLI that says “I’m not going to blow up this command” - not the whole “You get it right the first time or I completely exit” kind of thing, but “I’m going to ask you for the detail and then keep going.” To me, that interactive mode perhaps could be the default. But if you’re going to be a good citizen within the world of the small, sharp tools, the Unix philosophy of basically knowing that I could pipe the output from some other command into your CLI… Obviously, I don’t want the interactive mode in that sense. I want the one shot; either get it right, and I need to make sure I provided everything, or fail kind of thing.

Perhaps, if you know your CLI, if you’re building a CLI tool that can be used in either context - one being the interactive mode, whereby I’m the user learning to use the tool, and you’re walking me along… But if I have a more advanced use case where I expect all the values to be there, I know what should be passed in, I know what the flag should be, and if I don’t pass something, I want that failure - having that option, I think, could be the best of both worlds.

Yeah. And I think that this is one place where I’ve also butted heads with my CTO at times, is I totally agree with what you just said… I don’t know that I feel like every command within the CLI has to support both use cases, though. I think it’s quite possible that you have just a distinct set of commands, basically, that you’re like – for instance, the lcl command that is like “Everything and the kitchen sink”, right? Maybe that just doesn’t make sense to ever be run outside of interactive mode, because it’s just trying to do too much. If you actually wanted to run it with like all of the flags or fail, you’re going to have to provide 20 flags. And do you really want to do that? It’s just impractical. Maybe in that case we say “Hey, we do have this nice interactive mode to really hold your hand and help you get through all of this. But if you’re an expert, here’s the breakdown of the 10 commands that get run. Maybe you just want to run those yourself.”

[00:51:38.29] Or maybe there’s some commands even that don’t have an interactive mode at all. The kind of analogy that I thought of is in, again, the GH case, the GitHub CLI… They have all of these nice commands that do a lot of like nice high-level stuff, but then they actually have like an API command, where basically you can use it as a low-level API client and tell it to just run a thing and give you back JSON. So that’s an example of the API style commands are very low-level, that is much more expert mode. That’s probably even too low level maybe for a lot of people. But that kind of thing of, again, it’s like it’s necessary, but not sufficient. It’s like, I want to have all the low-level things that you can reach into and just use those if you want, but I also want to have the high-level things. But yeah, I think – there’ve been a few times at least where I’ve already felt like it would be so hard for me to provide a really nice experience for both of those use cases in the context of this one command… Like, do I really have to do that? Is that something I need to tie myself to? Or can I just say “Hey, sorry, this command is intended to be really nice in the interactive mode, but we had to make trade-offs in doing that that make it not really work outside of the interactive mode. So if you want to work outside of the interactive mode, we do want to provide that to you. Here’s how you go get that. It just isn’t this thing.” It’s kind of like a both/and not an either/or situation, if that makes sense.

Yeah, yeah, that seems sensible. So what’s your second take?

It’s kind of related, but again, it relates to an argument that I’ve had with the CTO… It’s that especially for, I guess, CLIs that have multiple commands - they aren’t just like a one-off command, something like sed or awk that just does one thing, right? So I’m not including those. I’m including things that have multiple commands. I would argue that in that context, positional arguments are just a mistake. You just shouldn’t use positional arguments in a multi-command CLI. And we’ve had - boy, did we have an argument about this… This feels more obviously spicy, so I thought this would be right up your alley in terms of unpopular stuff.

Yeah, yeah, that’s a good one. And I don’t know, you and me both, I think, we are in the same camp there. I don’t like – I don’t even like initializing values and things, like with positional. I want key-value – I want to be specific. I want to say “This value, for this particular property or attribute.” I prefer the explicitness. Because I’ve been bitten by so many bugs whereby somebody changed the order of things, and all of a sudden some assumptions are now – I’m doing the wrong thing because the value type happens to be the same thing, but you’ve changed the order of thing, and now the wrong thing is assigned to the wrong value… The tests still pass, because you’re still passing your strings where strings are expected, it’s still passing [unintelligible 00:54:20.11] are expected, and then you’re having these really nasty logic bugs that… Yeah, I’ve been bitten by that so many times. So yeah, definitely in that camp.

Yeah. And there’s one other thing that I’d push on too, that kind of relates to some of the philosophy that we were talking about in terms of how I approach this stuff, which is that I feel like when I’m designing CLI and other stuff one of the big things that I’m trying to do is transfer knowledge to the user. I’m trying to make it so that they learn what’s going on, that when they finish successfully running a command, they’re a little bit more prepared to successfully run the next command… And the problem with positional args is usually in a multi-command CLI from one to another, the positional args mean something different, they aren’t the same. So whatever you learn in one command does not transfer at all to another command. So you end up in this situation of “I learned what command one and three are in this command. Great.” Does that apply anywhere else? No, not at all. Not even a little bit. It’s not going to help you. It’s not going to transfer. And so I want things that are more transferable, and flags feel much more like that.

So an exception even to the multi command CLI is if somehow you have something where the first positional arg or something could always mean the same thing on every command, then it’s transferable. Then I think you’re safe. Then I think that could be a good experience. But outside of that, just use flags. Yeah, it’s a little bit more typing, but how often do you really have to do it? And that clarity and explicitness is quite valuable. So yeah.
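
To make the contrast concrete, here's a small Cobra sketch of the flag-based style being argued for; the command and flag names are hypothetical, not Anchor's real CLI surface.

```go
package cli

import "github.com/spf13/cobra"

// Flag-based style: something like `anchor org create --name acme` reads the
// same way in every command, so what you learn in one place transfers.
func newOrgCreateCmd() *cobra.Command {
	var name string
	cmd := &cobra.Command{
		Use:  "create",
		Args: cobra.NoArgs, // no per-command positional order to memorize
		RunE: func(cmd *cobra.Command, args []string) error {
			cmd.Printf("creating org %q\n", name)
			return nil
		},
	}
	cmd.Flags().StringVar(&name, "name", "", "name of the organization")
	_ = cmd.MarkFlagRequired("name")
	return cmd
}
```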

Exactly. Yeah, I’m definitely in that camp. So yeah, but we’ll see what the people say. I don’t think you’ll get a lot of pushback on that second one… But maybe in the first one, but we’ll see. We’ll see. Sometimes I get surprised by these things.

Yeah, me too.

Awesome. Wesley, it’s been such a pleasure having you on the show to talk about your journey… And yeah, whenever folks deviate from the standard path, I always love to hear their stories, because not everything works for everybody out of the box the same way. So yeah, I definitely enjoyed chatting with you. And yeah, I’ll keep an eye out on anchor.dev, and the blog, and see – if things pop up, just drop a note. Let me know “Hey, I’ve got some new stuff.” We will bring you back to keep digging into this. This is pretty cool.

Yeah, sounds great. Thanks so much for having me. It was a fun chat.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
