Practical AI – Episode #151

Balancing human intelligence with AI

get Fully-Connected with Chris and Daniel


Polarity Mapping is a framework to “help problems be solved in a realistic and multidimensional manner” (see the show notes for more info). In this week’s Fully Connected episode, Chris and Daniel use this framework to help them discuss how an organization can strike a good balance between human intelligence and AI. AI can’t solve everything, and humans need to be in the loop with many AI solutions.


Sponsors

SignalWire – Build what’s next in communications with video, voice, and messaging APIs powered by elastic cloud infrastructure. Try it today at signalwire.com and use code AI for $25 in developer credit.

Changelog++ – You love our content and you want to take it to the next level by showing your support. We’ll take you closer to the metal with no ads, extended episodes, outtakes, bonus content, a deep discount in our merch store (soon), and more to come. Let’s do this!

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

LaunchDarkly – Ship fast. Rest easy. Deploy code at any time, even if a feature isn’t ready to be released to your users. Wrap code in feature flags to get the safety to test new features and infrastructure in prod without impacting the wrong end users.


Transcript




Welcome to another Fully Connected episode of Practical AI. This is where we will keep you fully connected with everything that’s happening in the AI community. We’ll take some time to discuss some of the latest AI news and topics and we’ll dig into some learning resources to help you level up your machine learning game.

I’m Daniel Whitenack, I’m a data scientist with SIL International, and I’m joined as always by my co-host, Chris Benson, who is a strategist at Lockheed Martin. How are you doing, Chris?

Doing very well, Daniel. It’s a nice – fall weather has arrived and…

Yeah, it’s rainy and gloomy here.

Oh, I’m sorry, man. We’ve had a week of that.

But it is cool. It’s nice. It’s not like an oven outside, which is good.

True. So I’m enjoying the weather at this point. We’ve had a week of rain, and now it’s – so we get to jump into some AI and sunshine, and we’ll see how it goes.

Yeah, that’s good. You know, every week, I introduce you as a strategist, and sometimes I’m thinking, what does a strategist do day-to-day? Maybe my question is more like how is strategy developed at a large organization? And this is related to what we’ll talk about today, but I’m just sort of curious, like, how a large organization thinks about developing their strategy, maybe related to something like AI or a new advanced technology. What does the development of strategy look like at an organization?

Sure. So I would say the short version of an answer is that organizations have needs, and those needs have to get solved. And as simple as that sounds, it is very common for people not to start and say, “What is the need I have to solve?” They say, “I have this cool new technology, man. It’s incredible. It’s called AI, and it does all these awesome things, and we just need this in our product or service. It’s amazing!” And we call that an approach in the strategy world. In other words, it’s a solution in search of a problem.

Yeah, it’s something that could enable something, but it might just –

Yeah, and here’s the funny thing. If you think back through your career, I’ll bet that 99% of your technology conversations where people are excited about the way forward start with them saying, “I have this amazing thing, I’m so thrilled with it, and everyone will need one.” If anyone tells you “Everyone will need this,” that’s a sure sign that you’re starting in the wrong place. So you start with a need and you solve it with an approach, and that’s how you get to a good answer, instead of the other way around. So that’s the short version.

So are there tools or frameworks that you use – maybe some of these can’t be revealed because of your proprietary work, but are there general tools or frameworks that you use in developing strategy? I know, for example, you gave a talk last week at an event that I was helping organize, which – thank you for doing that, it was a wonderful talk.

I think you mentioned some sort of frameworks that you think about; I’m always intrigued by the different ways people frame their strategy… what are some of those tools that you use?

It actually follows through on what I was just talking about, which is that a lot of approaches to strategy are incredibly complex, and because of that, no one sticks with them; there’s a lot of mundane documentation and stuff. There’s a really simple framework that I use as a baseline, and we’ll build around that as we need for specific business cases. It’s called NABC, which is Need, Approach, Benefit, Competition, where you start with a need, and you’re looking at what are all the possible approaches (in this case, they may be AI approaches), and then you figure out how those approaches benefit the solution. And there may be a lot of options there… And what the competition is between them, and you try to get to an honest answer. And that very, very simple framework is how I start my thinking around it. And then we do have various proprietary business-specific frameworks. But I always come back to that, because it’s so simple that it’s hard to go wrong if you keep it in your brain.

Yeah, and this gets – I mean, on these Fully Connected episodes, we’re always talking about maybe the latest AI news, or a new capability that comes out, or a new type of method or new data… And definitely, those things, separate from a strategic mindset… I mean, at the very least, they could not bring value, right? At the very most, maybe they bring harm, if they’re sort of misapplied.

Yeah, suboptimal outcomes are incredibly common. And so even if it’s a simple framework, it takes a lot of discipline. You’ll put something like that out there, and people will kind of nod and go, “Yeah, yeah. Nice, simple four-letter acronym. I’ve got it” and they move on. But then they go do the wrong thing right after that, and you’re like, “No, no, we’re going to come back and solve the need that we actually have identified and is validated.” And that’s hard to do. It’s deceptively simple in speaking it.

Yeah. And I think that maybe a lot of the conversation and blog posts or like conversations I’ve had have been centered around, “Can we justify the use of AI here? Or is AI the strategic choice for this application for XYZ reason?” But I think that – so something happened, actually, earlier today in my own work, where someone asked me to do a bit of analysis of the polarity or the balancing of two things, human intelligence and artificial intelligence. And I thought this was an interesting question that maybe I hadn’t fully thought through yet… Which is not so much like, is AI going to bring value for a particular solution, but how do you weave together human intelligence and human expertise with artificial intelligence, and balance those two in a good way for your organization or for your particular problem that you’re looking at? So that’s what I was maybe going to bring up today. Would you be interested in kind of going through that exercise with me, helping me in my homework assignment?

Absolutely. But I feel – since you have taken me down this path, I feel obligated to point out that we’re kind of starting with an approach there… So maybe we pick a use case and we kind of work our way through that use case, because then we have a validated need to have context for it.

Sure. So yeah, I mean, I think in general we could use the use case, which is definitely the case in – although we’re talking in a little bit more specifics than the project that I work on, we’re generally talking about the application of AI in local language contexts. So let’s say there are languages where something like chat dialogue technology, sort of like named-entity recognition, sentiment analysis, that sort of stuff isn’t being applied yet… Maybe it’s machine translation, which doesn’t support these local languages yet, maybe it’s speech recognition. The premise is that the local language community would benefit from these types of technologies being extended to support their languages, but also that the AI community at large would benefit from local language communities actually being part of that natural language processing conversation in a wider way.

So that’s the sort of context that we’re working in, is this sort of AI technology - should it be applied to do things (we can kind of scope down), should AI technology be applied to support machine translation for all these local languages or speech recognition for all these local languages where it’s not currently supported? How does that work, putting your strategist hat on?

I think it really depends on the stakeholder upfront, in that if you’re an organization – either a commercial or a not-for-profit organization – then there is a mission, there is an objective that your organization has, and you have to figure out whether that use case is going to fit in. If you are a large internet-based company (without getting into specific names) and a significant proportion of your users speak those local languages, it would make sense for you; if not, it might not necessarily make sense for you, but it might make sense for another organization that is trying to help those communities.

So what I’m saying is there’s a cost to be borne, and you have to figure out where the cost should be borne, so that you can serve all users well. And so I think there’s room for everyone in getting in on that. You just have to figure out where that goes.

Yeah. And you could think of a scenario, like the most glaring one that we had recently with COVID, and this sort of massive need around the world for rapid translation of COVID-related information into as many languages as we could get it into, because there was an immediate need.

Now the question comes up, “Well, what role does AI play in that, and how is it balanced with human intelligence in the solution to that problem? What are the advantages of one and the disadvantages of the other? And how should they be balanced together?”

The specific thing that came across my desk today was this idea of a polarity mapping, which I kind of liked, because my background’s in physics, so anything with polarity is probably okay… I just chatted you a link to that, and we’ll link it in our show notes… But there’s this sort of what I would consider a goofy-looking picture. But there’s this framework, a framework for tracking this sort of problem. I guess, according to this wiki article, it was created by Barry Johnson, who I – I don’t know who that is.

I don’t either.

Barry Johnson. There’s probably a million Barry Johnsons, but… Barry Johnson. And it was created to help problems be solved in a realistic and multidimensional manner. Now normally, because I’m a practical person and I like to spend most of my time in Vim and in a Colab notebook or something, I really sort of cringe when I see these kinds of like innovation frameworks or innovation map type things; they sort of like – I don’t know, they give me a bad feeling right away.

Hey, I’m a strategist, and they give me a bad feeling too, so I’m right there with you.

They’re also like – I mean, it’s kind of a goofy picture; it almost looks like maybe what Prince would have on some of his like stage art at a show or something, I don’t know.

With lasers, you have to have lasers.

Yeah, maybe. So the idea of the picture, I think, is there’s like these four quadrants – and you can follow up in the show notes, but it’s really a framework for comparing two things: on one side you’ve got maybe human intelligence, and then on the other side, you’ve got artificial intelligence. And then what you would want to do is think about the positive results and the negative results from each, or like the pros and cons of each, and then in looking at the pros and cons of each of those, think about the action steps and warning signs when they aren’t in balance, or how you can keep them in balance, because we’re assuming that both will play a role. So in our context, like in the machine translation of COVID information into local languages, how does human intelligence play a role? How does artificial intelligence play a role? And when and how would we know if one is out of balance in comparison to the other one? So does the premise make sense?
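(For readers who want to capture the structure Daniel just described in code, here is a minimal sketch of the polarity map as a data structure. The quadrant entries are illustrative paraphrases of this conversation, not taken from Barry Johnson’s framework itself.)

    # A rough sketch of the polarity map structure described above.
    # The entries are illustrative paraphrases of this conversation,
    # not taken from Barry Johnson's framework itself.
    polarity_map = {
        "poles": ("human intelligence", "artificial intelligence"),
        "human intelligence": {
            "positive_results": ["trust", "empathy", "adapts quickly to new domains"],
            "negative_results": ["hard-to-measure bias", "doesn't scale", "slower and costlier"],
        },
        "artificial intelligence": {
            "positive_results": ["scales at low marginal cost", "handles complex, high-volume data"],
            "negative_results": ["black-box decisions", "bias inherited from training data"],
        },
        "action_steps": ["start from a validated need", "keep a human in the loop"],
        "early_warnings": ["outputs aren't being adopted", "selling the AI instead of solving the need"],
    }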

The premise makes sense. And I think it’s an interesting topic that I think is very relevant to our future.

Yeah, definitely, definitely. So maybe we’ll talk about – let’s think about the values of each of these first. So on the top of these quadrants, what they have the user of this framework do is think about the values, which they’ve labeled as “The positive results from focusing on…”, and then you fill in the blank with whichever one you’re thinking about, human intelligence or artificial intelligence.

So what are some positive results from focusing on human intelligence when you’re trying to solve this problem of machine translation into a local language? What do you think? Any ideas?

The target, if you will, of what you’re trying to accomplish is a human. And so having a human involved automatically allows you to connect; there’s a whole bunch of problems that just don’t exist, and you get the benefit of us complicated humans in terms of – not only do you get the benefit of communication and understanding, but things like empathy, and there’s huge value in that.

Yeah. So a human is the target of this technology. So that’s definitely good to keep in mind, I think. Like you were talking about earlier, we can get enamored by the tech, right?

And think about like, “This is a cool solution to a thing”, but ultimately, a human is going to interact with it, right?

Absolutely.

So I don’t know, do you think about that term of empathy when you’re thinking about strategy? I’ve heard that term thrown around.

I do. So not everyone does think that way, but it is of value. And the older I get, the more I care about that. In most problems, there’s the basic problem and there’s a whole bunch of surrounding concerns around that, and you have to figure out what you’re going to address. So for me, I see benefit to that.

So Chris, I think you brought up a couple of interesting things related to the positive results from focusing on human intelligence to solve the hypothetical problem we’re considering. So humans being the target, it sort of breeds trust, maybe, so I’m going to bring up the word trust.

Indeed.

It maybe brings trust when a human does the thing. So like, if I’m translating material about COVID into a local language, if I have a human that does that translation, maybe there’s more trust there? I don’t know, would you agree?

I think it would be a fair statement to say most people are able to connect with other humans long before they connect with a particular technology. There’s an adjustment period to new technologies being accepted. We’ve seen that in many, many cases of technology release and development over the years. And so that’s a big concern; you can look back and say the kind of the classical, my-grandmother-doesn’t-know-how-to-use-a-cell-phone kind of context. So yes, you’re circumventing a whole set of issues by taking advantage of the fact that you inherently will probably have trust with the human-to-human contact.

And the other thing you mentioned, I think, is maybe related to like flexibility or adaptation. So like a human – we’re used to adapting to all sorts of situations, right? But take an AI model for machine translation. There are good general-purpose models, but oftentimes, there are certain domains… Like, if we think about this domain of translation of COVID material, there’s really not that much in public corpora out there, public data that is representative of COVID-type information. And so a machine translation model might just simply not know that domain of translation, whereas it may be fairly easy for a human to adapt to translating COVID information in a very quick period of time. So we’re adaptable, I guess.
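(As a rough illustration of the domain-gap point Daniel is making, here is a minimal sketch of trying a general-purpose pretrained translation model on COVID-style text, using the Hugging Face transformers library. The English-to-French OPUS-MT model is only an illustrative stand-in; many of the local languages discussed here have no such pretrained model at all.)

    # Minimal sketch: a general-purpose pretrained MT model applied to
    # domain-specific (COVID-style) text. The model choice is an illustrative
    # stand-in, not one of the local languages discussed in the episode.
    from transformers import MarianMTModel, MarianTokenizer

    model_name = "Helsinki-NLP/opus-mt-en-fr"
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    sentences = ["Wear a mask and keep physical distance to reduce transmission."]
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    translated = model.generate(**batch)
    print(tokenizer.batch_decode(translated, skip_special_tokens=True))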

Absolutely. I agree.

Now, depending on – and this is, of course, there’s a whole range of philosophical and religious positions on this, but at least I would argue that humans, as opposed to machines… Like there’s a difference between humans and machines. Now, like I said, depending on your worldview, you might think of that in different ways. But I think we could probably all agree on the fact that at least today’s machines have a very task-focused way of solving problems, and humans have a natural creativity, productivity, adaptation; there’s a different element about how they solve a problem, as opposed to a machine. So I don’t know if that’s creativity, in addition to adaptation. I don’t know, if you have thoughts there.

I think that’s right on; it’s the complexity inherent in the human mind that allows for the adaptation. And in many cases, that’s what makes those interactions interesting. I mean, a good example of that is, if you look at humor that’s machine-generated, as of today, in 2021 – I’ve yet to find a source that I find funny. And yet, you and I can have a conversation; neither one of us is a professional comedian, despite evidence possibly to the contrary at times… But we have a good time –

I’m definitely not.

– we can laugh at each other’s jokes. And that’s something that’s uniquely human at the moment, at least. And yet on the other side, as you certainly know, and maybe some of our listeners, I recently got my private pilot’s license… And I would argue that it’s an incredibly manual, procedural thing to learn. And most pilots would probably disagree with me, but I actually think that technology can fly a lot better than we can. I know that I’m prone to make small mistakes. Thankfully, they are small. But there are ways that, by doing things very procedurally and task-oriented, you get superior capabilities from technology.

So I guess, getting to that and thinking about the positive results from focusing on an AI approach to solving this machine translation problem, what comes to mind first?

You’ll have to give me an example to go for that.

So like if we didn’t have a human in the loop at all, and we had a great AI that could or we thought could solve this problem of machine translation of COVID information, what are the benefits of that AI solution in isolation, with no sort of human expertise or intelligence infused?

Once you’ve absorbed the cost of that, assuming for a moment a constant static model based on what your development was, then you can deploy it at scale without incurring additional cost. And not only that, but because of that, you can deploy it much more widespread; there’s a very limited number of human translators out there, and so there are costs associated with that. So long as you’re willing to take the cost of the development into account and the cost of deployment into account, then you can scale a certain capability across a very wide group of users, and that’s a huge benefit. I mean, we see that in all sorts of business use cases.

Yeah. And when I teach classes or workshops, normally the question comes up, like, when is an AI solution appropriate? Or under what conditions? And generally, I do think about that in terms of two things; one is scale… Even if a human would, in most cases, create a better translation if they knew both languages… if we have to translate 4 million sentences, the fact is that it’s going to take a human an incredible amount of time. And so there’s a scale factor, and then there’s also a complexity-of-the-problem factor. So like, some problems, just by their very nature, even if they’re not at scale, are hard for a human to visualize. And maybe it’s combining a bunch of sensor data from IoT devices to determine when anomalies are happening in a network or in a manufacturing system or something. Those are really complex problems; it’s hard for a human to actually parse all of that information and make a decision. Maybe they could retroactively, but in the moment, it’s difficult. So there’s the scale of complexity, but then also the scale of scale.

I get what you’re saying. I’ll offer an alternative point on that one… In translation, if you have a string of words that you’re trying to translate, there may be context around those words, though, that is not captured by the technology. We’ve all lived this pandemic the last couple of years, and there are all sorts of challenging situations that humans have to go through. And sometimes the human can take into account things outside the direct task at hand, which shape the world, the entire interaction is happening within. There’s a trade-off either way.

In this case, you just identified an excellent reason to apply technology, and then we also can turn around and say, “There’s a great reason to have a human in the loop.” So it kind of depends on how you want to value those different attributes.

Yeah. I also think that if we think about the combination – and maybe this is the… I don’t know if it’s a pure AI value, but one feature of the AI side of things is that there are many processes where the combination of the AI model plus a human – the human plus the computer – is actually both faster and higher quality at its task than either one in isolation. And this has been true in healthcare applications, where doctors are recognizing tumors or whatever it is. So there is this element of the two together producing higher-quality results than either one by themselves.

Absolutely. In my industry, we have, I think, a unique term for that, and that is Manned-Unmanned Teaming, is what we call it. We call it MUM-T. So there’s a new acronym for you.

Okay, yeah, I’ll make sure I fit that in, add a few points.

There are great outcomes that can be had by taking advantage of the strengths of both sides.

Yeah. So actually, I think we’ve already sort of got into some of the downsides. But I think it is interesting to think about, on both sides of things… Because a lot of times people think about the downsides of AI, like whether that be bias or whatever… But what are the negative results from over-focusing on human intelligence, rather than bringing in any AI?

Right off the bat, the same thing you just mentioned applies to both sides, and that’s bias. And we in data science tend to focus on bias in data and data processing when we’re building models, but oh, boy, all you have to do is look at the last election cycle with humans and you see that we have the same faults in that way. So yeah, that’s one right off the bat.

Another one that you mentioned earlier was the fact that humans don’t necessarily take all of the data in and give it all processing time before an output, if you’re looking at all those sensors that you mentioned earlier. Some of the things that make us so strong in some areas make us terribly weak in others. A human inference is very different from a computer inference, and they both have strengths and weaknesses.

One of the interesting things about bias on the human side, which my coworker brought up to me today, is he said, “On the human side, biases are covered by shame.” And I was like, “Okay, well, unpack that a little bit for me. What do you mean by that?” And he was basically saying, people hide their mistakes, right? If you have a test set on the AI side, you can just count up how many you got wrong. Now, there could be bias in that, and you can actually measure it, right? But a human – oftentimes either they subconsciously have a bias, and it’s hidden because they just don’t even know that they have it, or they do know that they have a bias and they intentionally try to hide it, right? Which is maybe a more interesting situation.

So when a human makes a mistake, they generally would like people to think that they didn’t make the mistake. So it’s really hard to measure sometimes when people are making mistakes, and what their actual perception is, and how a situation went down… And so in addition to there being bias on the AI side, there’s this interesting kind of bias on the human side that’s really hard to measure and deal with.

Yeah. A few minutes ago we were talking about these external concerns for whatever your primary task or motivation is… And when we talk about the model side, we say, well, a model is creating an inference to solve a particular task. And unfortunately, it didn’t take into account the external environment that it was operating in, and so it didn’t have empathy, it didn’t have other attributes that we might value. But at the same time, that’s also the strength of it, as you just pointed out, in that you are getting the benefit of that inference as it should be… Whereas we humans – we want to look good, we want to sound good, we don’t want to make mistakes, we want our peers to like us, and that creates a whole set of concerns that change both the way we are in interactions and the way that we’re communicating.

We have, I think, on many episodes highlighted some of the downsides of an overfocus on AI solutions. So I don’t know that we need to go into incredible detail on that… I mean, some of those are the sort of black box side of things, where you have a lack of interpretability, which causes problems with trust and debugging, and all of those sorts of things. You’ve got sort of disillusionment of people, where they hype up AI and it’s actually not as great as they think. Of course, AI systems depend on data generated by humans, and humans have bias, and so that data has bias, and humans kind of infuse that sometimes into models. And in general, because AI is hard to explain, it’s hard to trust. And it’s hard maybe also in this context, when we’re talking about translation into local languages, where this sort of power to create these AI systems might be centralized in large tech companies, in big GPU clusters, and not accessible to local language communities… Although I think that is rapidly changing. There’s evidence of that in efforts like Masakhane in Africa, where people from local language communities are doing amazing AI research with things like Google Colab. So I think there’s a balance, of course, to all these things… But anything else you’d like to highlight on that sort of over-focus on the AI side of things?

I think for me, at least having spent several years thinking about this stuff with you and others, I am now in the habit of trusting AI that solves procedural and very task-oriented things, and I’ve seen so many cases where it’s doing it better than humans, even if it’s a series of tasks that together create a complex task, like flying an airplane. I think I’m probably very much in the minority, but I would actually be very comfortable in an aircraft that was mostly model-driven, maybe entirely model-driven, without a human at the wheel, so to speak. And I don’t think most people are there yet… I don’t trust AI to handle things that are complex and nonlinear, and where there are many external concerns that can influence the situation, and that’s kind of where I’ve arrived after several years of thinking about it. That’s kind of how I’m looking at the trust issues.

So what do you think would be a symptom or an early warning sign maybe that you are over-focusing on human intelligence, or that you’re over-focusing on artificial intelligence? What would be a symptom of either one of those conditions? That’s another part of this kind of framework, is “Hey, how would I actually know if I’m over-focusing in one of these areas?”

So the way I would arrive at that is I would not start with those things. I would not start with the human intelligence and what I think of it, versus machine intelligence. I would start with what I’m trying to solve, and I would think “What are the kinds of things in the abstract that will solve this?” And then we’ve called out some fairly substantial differences in those two sides, the human side versus the machine learning side, and I think that there are characteristics that kind of automatically lend themselves to one way or another, and I think I would arrive that way.

I think the biggest warning sign, going back to the very beginning of our conversation, is starting with an agenda, which we call an approach. Starting with an approach and trying to hammer that into whatever it is that you’re trying to solve, instead of going at it from the other end. So if you feel like you’re hammering something and it doesn’t quite fit, it probably means that’s exactly what you’re doing. And you should reassess, go back to what you’re trying to solve, figure the characteristics out, and then decide which way you go with that, to arrive at a natural solution, quite honestly.

Yeah, maybe that’s part of it, on the overemphasis on the AI side. That is, where you have this inclination to love your AI tech solution, and you are the one that has created it, apart from these end users – you know, these people you’re doing the translation for. And you’re just sort of giving the output to them and they’re not using it, they’re not consuming it, it’s not being adopted. Well, that’s probably a sign that, “Hey, you haven’t involved them, maybe. Maybe what you’re doing isn’t as great as you think it is.” Like you were saying, you’re trying to sort of force a solution on another target audience that maybe you haven’t really involved from the beginning. Of course, there is a disadvantage to always involving everybody from the start, right? It makes things slower, more costly, and so there is a balance there, I realize that… But maybe that is one of the early warning signs.

I’ll give you an ironic answer for that. With the podcast that we have here, we have the privilege of meeting and talking with lots and lots of amazing people. But there are also people out there with agendas that will reach out. And in my day job, I also have the same thing about, you know, people trying to reach out. I find myself very interested when people reach out in conversation, where they have a really meaningful need and they’re solving it with AI, because it’s the right solution.

And conversely, the conversations that I don’t find myself really gravitating to are when people are trying to sell their stuff based on AI. And rather than talk about the problem they’re solving and why that matters to people out there, they just – it’s all about the AI, “Hey, we have AI now.” And frankly, ironically, on an AI podcast, I find myself getting very bored with that conversation very quickly.

So I think that goes back to the fact that no matter what your job is, no matter what your role in the world is, we’re all productively trying to solve things that need solving. And I think AI is fantastic, but it also needs to be the right thing to solve your particular problem, and that makes it a fascinating conversation. Whereas if it’s just doing it because the marketing people say you need to do it, that’s a good warning sign that you may be off track.

And the other question here in this framework, in addition to those early warnings, they’re asking, “How will we gain or maintain the positive results from AI without sort of over-focusing on AI, and maintaining some balance with human intelligence?” Any thoughts there?

Kind of what we already said. I mean, humans are amazing; and as much as we talk about how amazing current deep learning technologies are in the AI space, humans are amazing, too. And if you can optimize both sides and you can find that teaming between the manned and the unmanned side, and find a way for them to fit together to serve whoever it is they’re trying to serve, that is a really, really good approach. And if you over-focus on one or the other for whatever alternative reasons you have, then that tends to send you off track.

Yeah, and maybe the way (or a way) to shift our thinking in this respect is, when we think about one of these solutions, like machine translation for COVID information, to be thinking from the start about who is the human that needs to be in the loop in this process, and what is their interaction with the AI model… Rather than simply saying, “How can I create the best AI model?”, which is normally where we start – and to be honest, that’s normally where I start, just out of habit. But I definitely know that I need to be more focused on that human-in-the-loop element; it definitely is important, and I think can help maintain that balance.

I agree with you. And I think COVID is a great – since we’re talking about that as our use case, it is such a strong reason to have language models in many, many languages. And what that does is you can let those models do that work, because they can, because it can be very procedural to do that, but there’s also a role for humans there, in that all of that communication is happening inside of a context. It’s happening inside an environment that is people’s lives, and their emotions, and their relationships. And I think that’s a really good example of where you can scale language across many, many user groups effectively, and yet there’s still room for people to be there to add that human element that is so necessary, especially in times like this.

Yeah, for sure. Well, we will link this polarity map in our show notes in case you want to go through this exercise with your own team, or think through various other balances that need to be struck in technology. I would recommend taking a look. We do normally share some learning resources at the end of our episodes. I’ve got one I wanted to share, which I don’t know how popular it would be, but I hope it’s popular.

So normally, we share like machine learning courses or something like that, but I think I had someone early on in my – oh, I actually know. Shout out to Manish, who is the CEO of Dgraph. In one of our early conversations – this was quite a while ago – he told me that one of the biggest improvements in his own professional development as a software engineer was someone helping him understand that putting time and focus into his code editor, his IDE, and really understanding it very deeply was an extremely important element of development, and can really give you a huge boost in terms of your work.

And I don’t know if you’ve found that to be true, but I definitely have found that to be true over time. So my code editor is Vim. I know it’s not very cool; I don’t use VS Code or anything. But there’s a website called vimcasts.org – I learned about this from a recent couple of episodes on the Changelog podcast… And it’s just a great wealth of info. Also, the guy who runs Vimcasts has a course about Vim, the Core Vim course… And I’ve been going through that and I’ve really been enjoying it.

So I wanted to mention that on the podcast. Thank you so much, Drew at Vimcasts, for putting together this great course that is benefiting me a lot, but also another podcast that I love listening to and learning from. So yeah, I just wanted to give a shout out there. I don’t know if you’ve found that to be true as well in your own work, Chris, with your code editor…

Yep, I do. I am not tied to one strictly, so I kind of bounce between several. I use Vim some. But I also – I think these days, I’m probably using Visual Studio Code most.

It seems to be the popular solution.

Yeah, I’ve kind of moved over to that one as well. If I’m on a Linux server, doing something, I’m always in Vim.

I mean, that’s what a Vim person would tell you. Like, if you go to a Linux server, Vim will be there, and then you’re not crippled anymore.

Indeed.

But you know, that’s also sort of – VS Code is pretty amazing.

It is. I agree with you. I’ll offer up one as well. I’ll go back to what I was talking about at the very beginning of the conversation – the NABC value proposition framework, which is beautiful in its simplicity, allowing you to stay on target and keep disciplined about it. It was actually created by a guy named Curt Carlson, and he has a website at practiceofinnovation.com/nabc-value-propositions, and we will link to it in the show notes. But that is a good place to go and learn a little bit about it. I think that’s its strength – it keeps things straight and simple. And that means that when you’re in the middle of a conversation, it allows you to stay on target. So that’s what I’ll offer up for strategy.

Awesome. Thanks, Chris. I appreciate you letting a novice like me enter into the strategy world and operate where I’m not qualified to operate.

Oh, boy. Oh, gosh…

So thanks so much, Chris. I hope you have a good week and we’ll talk to you soon.

Sounds good. Thanks, Dan.


