Practical AI – Episode #186

The geopolitics of artificial intelligence


In this Fully-Connected episode, Chris and Daniel explore the geopolitics, economics, and power-brokering of artificial intelligence. What does control of AI mean for nations, corporations, and universities? What does control or access to AI mean for conflict and autonomy? The world is changing rapidly, and the rate of change is accelerating. Daniel and Chris look behind the curtain in the halls of power.

Transcript


Welcome to another Fully Connected episode of the Practical AI podcast. This is where Chris and I keep you fully connected with everything that’s happening in the AI community. We’ll take some time to discuss some of the latest AI news and dig into some learning resources to help you level up your machine learning game. I’m Daniel Whitenack, I’m a data scientist with SIL International, and I’m joined as always by my co-host, Chris Benson, who is a strategist at Lockheed Martin. How’re you doing, Chris?

Doing very well, Daniel. How are you doing today?

Doing pretty good. Lots of exciting progress on various fronts, and with projects, lots of new results coming out. I feel like there's a lot of plates spinning, which is good… But then, you have to kind of bring things into focus sometimes. I don't know, do you ever read productivity hack type books and that sort of thing?

I hate to say it, but yeah, I get this feeling of desperation… And I go, “I’ve gotta level up.” So occasionally, yeah; not constantly, but yes, I confess.

Have there been any hacks that have really helped you over time?

Turning off email and Slack? At work I will put a Slack notice on one of our team channels and I'll be like, "I'm going to be gone for a little while, just to focus." I've learned I have to set the expectation. But yeah, I'm starting to really focus on time to think and get things done, versus time to collaborate; both of which are very, very important, but I've learned that if I try to do them all at the same time, it often doesn't turn out the way I wanted.

Yeah, I really like the reminders… You can set up reminders, and there are also automatic reminders in Gmail, like "Remind me of this email next Tuesday", or something like that. That's been really, really helpful for me in all sorts of ways. It's the single greatest feature, at least for my workflows, that I've seen, and one of the things that I've used for quite a while. But yeah, I don't know if they use AI to determine when to remind you about things, or if it's all rules-based, but however they're doing it, it works for me.
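As an aside for the curious: whether Gmail's nudges are learned or rule-based isn't public, but the rule-based version of "remind me next Tuesday" is simple enough to picture. Here's a minimal, hypothetical sketch in Python (all names are made up for illustration):

```python
from datetime import datetime, timedelta
from typing import Optional

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def next_weekday(day_name: str, now: Optional[datetime] = None) -> datetime:
    """Return the next date falling on the named weekday, at 9:00 am."""
    now = now or datetime.now()
    target = WEEKDAYS.index(day_name.lower())
    # A delta of 0 would mean "today"; push it a full week out instead.
    days_ahead = (target - now.weekday()) % 7 or 7
    remind_at = now + timedelta(days=days_ahead)
    return remind_at.replace(hour=9, minute=0, second=0, microsecond=0)

print(next_weekday("tuesday"))  # the following Tuesday at 09:00
```

An AI-driven version would instead predict the remind-at time from features of the email and past behavior; the rules version is just fixed calendar arithmetic.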

That sounds good. No, that works. That works.

[03:52] Well, speaking of redefining workflows, and the power of artificial intelligence and machine learning systems - last week we had a discussion on a Fully Connected episode about large models, sentience, some new paradigms and models, and that sort of thing, which was really fun… Another side of this that I think we wanted to follow up on in this episode is a more global perspective on how artificial intelligence and machine learning are driving geopolitical, social, and economic change in the world, and how, as practitioners, that should be on our radar as we're building the systems that are contributing to it. So yeah, I know that you've put in a lot of thought about this…

I spend a lot of time on this topic, as you know.

And I'm coming at this as a person who maybe doesn't spend as much time on it… I tend to think systematically about things, but not necessarily politically about things. I even remember, around the time GDPR came out, there was a lot of discussion about regulation around algorithms and that sort of thing… It got a lot of news because it was maybe the first really big regulation around this sort of stuff. But as someone following this area more closely, how have you seen the discussions of AI plus politics plus economics plus social change in the world progress over the last couple of years?

Oh, there’s so many paths we can take down that. I’ll actually start with the one that you just brought up, and that’s GDPR. That was the first, as you pointed out, big regulation, to regulate data concerns in Europe. But its scope was fairly limited, and it kind of addressed everything and in a uniform manner.

And a little bit ambiguous.

It was a bit ambiguous. And in European-related conversations that I've had, I've heard a lot of criticism over the subsequent years. I think the hope was that it might be the first very imperfect step, that learnings would occur, and that further regulation would follow that was a little more insightful and thoughtful, having learned a bit as we forged through this new landscape. I think that may have somewhat stalled at some levels. There was a recent conversation I had that was interesting - some strongly-worded criticism of GDPR from the person I was talking with.

I guess one follow-up to that: do you think that regulations around AI or machine learning systems are keeping up with their widespread deployment and application?

Oh, no… [laughs]

Okay, that was sort of a rhetorical question, but I thought I would just mention it to be completely transparent. So there's this wide gap between the deployment and scale of AI and machine learning systems and the regulations around them.

There are so many rabbit holes to go down there. I mean, AI affects politics directly. And I don't mean the output of an algorithm - I'm talking about having the capability of both applying AI and doing novel research. There are things people don't think about; that capability is a form of prestige, for instance. We tend to go to things like economics, and sports, and science and all that, but the ability of a nation to do this is a point of pride, and there are the perceptions among nations or large corporations - it can be any large entity - of what they're able to project in terms of their capacity… It has a huge impact on business and on people's perception of business, and thus on economics at a large scale. That's just one little rabbit hole we can go down, but there's a lot out there.

[08:23] How much of a nation-state's focus on an AI "strategy" - and this is in generalities - do you think is merely for the prestige, and to not get left behind? And how much of it do you think is related to real strategies that are core to the economics, or the social aspects, or the political aspects within a nation?

So it’s a great question, and the answer is yes to everything, but on different timescales and priorities and budgets. So we’re far enough into this current AI revolution – I mean, you and I have been doing this podcast for four years now. We started in July of –

That’s kind of crazy.

Yeah, it’s July of 2022 as we record this, and we started this in July 2018, and we were several years into it when we started this. And so it’s far enough to where it’s a point of prestige; people have been hearing, if you don’t get into it, you’re gonna get left way, way behind. There’s truth to that, and we’re starting to see that truth now in terms of what different social or political groups, whether they be nations or corporations, or whatever social division you want to make in there. We’re already seeing power shifts in a bunch of different areas… And some of that is prestige-based, some of that is the ability to drive economic interests… Obviously, some of that is to drive military interests, which is obviously kind of the industry I’m in at my day job… But they’re all related. And academic, too.

If you are a country that is trying to build its educational system, and you need your universities to be the types of destinations that will draw in not only your citizens, but citizens from other countries - well, there are not enough professors in AI out there; not even close. Here in the United States, it's a massive problem that we don't have enough instructors just to teach the basics. That spreads around the globe, and there are portions of the globe that are really struggling to find anybody competent to teach these areas. And so that impacts each university's ability to be reputable enough to draw in a Daniel Whitenack, or somebody with your interests.

Some years back, as you were going for your PhD, you had to make choices about where you were going to go. The students of today are making those same choices, but the landscape is changing. And now that AI is touching every field there is, it's super, super-important. So it's going to change all of this, and it already is.

And I think if we’re looking at a global scale, there’s the sort of nation-state actors, but then also global companies and organizations… I’m even just thinking of SIL in my own context; just because we’re now kind of intentionally making efforts in the AI and natural language processing area, and we’re establishing intentional projects in that area, the sort of pipeline of talent into SIL - some new opportunities have arisen even with that kind of pipeline of talent, that maybe just wouldn’t have even known about our organization were it not for those efforts. So I think there’s also this pressure at a company level to have a visible AI effort…

[12:12] Indeed.

…regardless of whether they really understand what their goals are there. It's this "I don't want to get left behind, but also I want to make sure to get some of this talent, because it seems like everyone's trying to get it." And I do wonder, both at the political level and the corporate level, among that higher leadership, at what level politicians and corporate execs actually understand the implications of establishing an AI strategy within whatever is under their purview.

So it’s becoming very common to have both national-level AI strategies and corporate-level. And for the most part, a lot of them look a lot alike, as you move across different organizations…

Which is probably a tell.

It probably is. And I think the differentiation occurs with leaders who are very forward-leaning, who are spending a lot of time thinking about where they want to get to, versus what they have today. And I think that makes a big difference in whether or not their approach is actually going to be viable from an investment standpoint, in terms of its outcome.

But yeah, I mean, I know for a fact that there are leaders of state that are directly involved in these efforts. Not because they have expertise in it, but because they understand that their national interest is hinged to it.

So this is maybe a lower-level question, but I think it's connected. If you are an AI practitioner out there, or a technical person, or a tech lead, or a manager - whether in a government organization or a corporate one - where this sort of trickle-down of AI strategy is reaching you, and you've got a mandate to do something with AI, but it's unclear to you what that means or how the value comes out of it… What recommendation would you give to such a person to navigate that scenario? Because I do think it's happening in many places.

Well, it’s funny that you ask that on our show called practical AI, because my answer is incredibly practical, as you won’t be surprised… And that is - for your organization or your nation, what are the challenges that you’re expecting to face? And I think a fantastic example of that is the AI in Africa series that we’ve been doing over the past year, or maybe longer now. It’s been fantastic in seeing these AI are researchers in various African states addressing the needs of their populations, and they are channeling productive AI research to address those.

When I have conversations with other people throughout the world, in other contexts, I actually point to that directly and say, “That’s a fantastic way of approaching that”, because big fluffy AI strategy is fine, but if it’s not something that makes a difference in outcomes, it’s a waste of money and time and effort and stuff. So you’ve got to bring it all the way down to solving real needs.

So Chris, I’ve been reading a few articles related to this, and we’ll link some of those in our show notes… But I think what you were just talking about is really interesting, in that in this series that we’ve been doing about AI and Africa, we’ve learned that applications of AI within the local, either language ecology, or geopolitical situation, or nonprofits, or whatever situation where they’re being applied are very different than often a parallel in a location maybe in a Western country.

For example, the agriculture things we talked about - the way AI is being applied in agriculture in the West is quite a bit different from the sort of large-scale application that's needed within the African context. We learned that with our guests on one of the previous spotlight shows. And as I've been reading these articles, they talk about new models of growth, and how AI will shift power structures, and that sort of thing… But one of the things that's interesting is that AI systems applied systematically and very globally, if they're coming purely from the perspective of one nation-state, might try to scale out globally in a way that's very irrelevant to other contexts.

For example, an effort to apply text machine translation for every language of the world would ignore the fact that some languages of the world have no written form, right?

That’s true.

So what does that mean when we say we're creating this new structure of growth, and enabling wider commerce with machine translation and that sort of thing, when actually, if your context doesn't fit into that model of growth, you're further marginalized, in some senses?

[18:01] Absolutely. I would summarize that by saying that diversity matters - diversity of experience, and diversity of the challenges of a particular culture or group of people - and that the complexities arising in their experiences have to be accounted for if they want AI in the toolbox they use to address those things. So going back to your original point, which I thought made a lot of sense: if you're not customizing how you use AI, and the focus of your research, to the particular needs of your area, and to the issues which arise from your point in a diverse world, you'll get a substandard outcome.

So you can’t take something that might be a good approach in the United States and drop it into a country that has a very different culture and a very different economy and stuff; it’s not going to work well. So it takes that thoughtfulness. When I see somebody, meaning like a nation or a corporation or something like that, just kind of copying what the others are doing, it always makes me cringe a little bit, because it shows me that either they didn’t understand the need for that focus and that customization, or they simply weren’t thoughtful enough about it.

Yeah, and it makes me wonder generally - if the development of AI systems, and the strategy around how they're developed, continue to be dominated by a few actors, that runs the risk of, at minimum, irrelevance when those systems are applied in a whole variety of contexts, and at worst, harm when they're applied in many contexts.

And the number of unintended consequences you can have without that is pretty key. The way you apply these systems, for better or for worse, directly affects the power structures of the institutions and nations we're talking about. So it has a very real and extensive outcome, much of which is outside the scope of what people are thinking about when they're trying to apply it. It can affect the relationships those organizations have with others, or with other nation-states, and it also affects the internals of those organizations - where budgets and power lie going forward, because of the investments people are making. That can be in the private sector or the public sector, in education, in government, and obviously in military investments and approaches… There are so many places where it has consequences that, based on observation, I would say usually aren't extensively seen ahead of time, or predicted.

And what are some of those key shifts in power or shifts in power structures that you think would be worth highlighting? Is one sort of nation-state government versus private sector? What other ones are kind of in your mind when you’re thinking about shifts of power in various ways?

Well, at the highest level, if we're talking about nation-state level competition in a general sense, nations have aspirations, and they compete with each other in a variety of domains. There's economic competition, there's academic competition… On this show we've talked many times about the competition that has arisen in AI between the United States and China, the economics involved around it, the number of academic papers being published… All of these contribute to how nations try to position themselves. And as an offshoot of that, the way power projection works in a military context is changing over time, and AI is certainly affecting that.

[22:08] We’re at a really curious moment in history right now - and I say “curious” not meaning good or bad; just kind of one of those moments where you start watching. And one of those is, as we are recording this, Russia invaded Ukraine a few months ago, and the whole world has kind of banded together, thank God, and stood up for the world order of not invading your neighbors and killing your neighbors. But if you look at how that affects non-military concerns, every nation in the world is watching how the conflict and the economics around it, with the sanctions and everything else, are being affected. And a lot of those mechanisms are now being optimized with AI algorithms. So you have these little AI solutions sprinkled all over the place, economically, and military capability, and all that… And then you have everybody in the world kind of watching to see what happens.

And before I abandon the military thread to move back to the general one, I'll note that we are proliferating AI capability all over the place, which will proliferate autonomy all over the place. And so the nature of conflict between nation-states is also changing. In Ukraine - Ukraine is doing this heroic job of defeating these big platforms, these big tanks and expensive aircraft, with little missiles that only cost a few thousand dollars. Some of those missiles already have smart capabilities, and going forward we will see more and more autonomy and AI enablement in those types of things.

So you’re seeing a world where conflict will be judged by the plural proliferation of many, many, many more than we have now, mostly autonomous things. And so that also changes the need on investment. So as you’re looking at that, countries are having to think “If I’m going to be safe from an aggressor, in this case like Russia, going forward, how do I invest to do that?” They’re having to do that in the military context, they’re having to do that in the economic context, they’re having to do that in an academic context. And then all of these global organizations that are all household names are having to react because they’re operating in those environments. So it’s really – it has this endless web of influence that’s going around.

And those who are in charge of, or have the power in, certain domains of technology oftentimes make an even more visible impact in these conflict zones, potentially, than nation-state actors. In the Ukraine situation - in certain cases, more so than any government or state intervening - you have a lot of companies, whether that be IBM, Dell, Meta/Facebook, Apple, who made a big impact by ceasing operations within Russia as a result of the conflict. And you see the power that pulling away that technological capability has. There's a huge impact from that. And then you have even individuals, like Elon Musk, who did all that stuff with Tesla, and with SpaceX's Starlink satellites in Ukraine…

And whatever you think of Elon Musk, you must realize this was, at least publicly, a very visible show of support from that type of person, with that type of technology. So whether it's a perception thing or an actual tangible impact, those who hold the technology - and more specifically, those who are really plugged into advanced, AI-enabled technology and autonomy - hold a lot of the power, maybe even over nation-states, at least in certain scenarios.

[26:13] Oh, indeed. Yeah. I mean, I agree with that completely. The sway of powerful people with powerful corporate backing has tremendous impact on the decisions that nation-states are making. So AI is an incredibly valuable national resource, or corporate resource, depending on what structure you’re in. And so like any valuable resource, it is now being used, and has been for some time, to change the balance of power and change future paths.

This is an uncommon conversation for us; we've usually focused more on the practice of using AI, or AI research and things like that. But we're living in this larger context, which our community often isn't paying super-close attention to. We'll think about things like AI ethics, but that's at the practitioner level, as opposed to the larger environment above us, which the activities we're all engaged in have a huge impact on. So it's all connected; we're not working in isolation as we do these things.

Chris, you brought up autonomy as one of the things that’s at play in this whole geopolitical side of artificial intelligence. I’m wondering, as you’ve thought a lot about autonomy, both used by governments and used by companies and other things - as that’s becoming more widespread and global in its application, what are the strategic and maybe human security risks associated with a wider spread of autonomy, or systems that maybe operate with very little human input, if any?

Autonomy will be pervasive going forward. I'm not going to put a timeline on that, and you can define pervasive however you want… But what I have certainly observed for a number of years now is a steady progression - you see things happening in the news; you drew Tesla into this, and there are many other companies also driving autonomy forward… We're going toward a world where many of our activities are autonomous, and it will change what it means to live day to day as a person, in any culture. That's some of what we have to navigate going forward. And it changes the power structures associated with those cultures, and who is influencing different things… The creators of autonomy, and those with the ability to apply it at certain points in their society, have outsized influence compared to others.

So we’re definitely going in that direction. Clearly, military applications, clearly, many, many different industries are doing that. I’ve long said that there will be a point in our lifetime where it becomes uncommon for us to drive cars. And I’m not a spring chicken anymore, because the technology is moving really, really fast there. And we’re already seeing - I mean, there’s Teslas all over where people are using Tesla’s technology to drive autonomously, and that’s only gonna get better and better across all autonomy manufacturers. So it will not take long to see crash reports, that the number of autonomously caused crashes is quite tiny compared to the number of human-caused crashes for driving cars, for instance.

[30:05] Same thing for aviation. The military has led the way in autonomy for aviation, mainly because it can - the civilian world is still quite frightened of letting a machine fly the airliner for them… But the data tells a very clear story about safety and capability there. So yeah, that's going to be our world, whether it be robots, or vehicles, or other tools that we have in our work and in our houses… This is part of our lives, and the people who bring us those tools and allow them to happen will be the ones with the power, whether they be politicians, or corporate leaders, or whatever.

What do you think about companies that would explicitly put in their set of principles, "Hey, we are going to build human-in-the-loop AI systems, and we're not going to venture beyond that"? Maybe that's too broad of a statement, but how would you encourage people to think about that side, both in terms of strategy and as a wider-reaching principle within an organization?

I think it depends on the application. It's funny - I have a lot of friends and colleagues that I have these debates with; this is what we're chit-chatting about over coffee on a regular basis. And I'm going to come down in a place where most folks may not agree with me - I tend to come down with opinions that aren't what I want, but what I think is inevitable. What I think is inevitable is that there will be many, many instances where humans and AI interact, because the nature of the work itself is human; it requires both human and AI. Not because we want it to be, but because that's fundamentally how the work gets done - it's human-centered work. But there are also many activities that don't necessarily need a human in the loop. It might make us more comfortable, it might preserve jobs, things like that, but it's not the most efficient route.

And whether I like that or not is irrelevant - I think we will see that going forward. We get to a point where, if it's not a human-centered activity, the partnership with a human in the loop versus no human in the loop just doesn't make sense anymore. The human becomes the big challenge - the limitation, performance-wise, in terms of speed, all sorts of things. And we will see activities that occur without a human in the loop, because at the end of the day, they're going to have to. I say that a lot, and when I get into specifics, it makes people very, very uncomfortable at times. But that doesn't change the fact that I think it will happen.

So there is a need for us to be very, very careful with our decisions on that, but then we’re also inevitably going to have to get comfortable with autonomy all over the place, in some of those cases. I know most people are terrified of the idea of getting on that airliner and flying cross-country with no one in the cockpit… And I don’t think that’ll happen soon; I think there will be a human pilot that sits there and basically does nothing but monitor the systems, with an ability for an override… But that pilot’s skill will be far, far, far below what the autopilot can do automatically. So that strictly will be done to make the humans in the back feel better, because your backup, your human is going to be orders of magnitude less capable of handling that aircraft in an emergency than your autopilot. So that’s the kind of thing that is inevitable at some point here.

Well, I do have to make a confession. This is going to seem off topic, but I’ve been using Vim as my editor since whenever – I don’t know, years and years and years. But I’m now – not completely, but I’m using VS Code a lot because of Copilot.

There we go. I knew that was coming.

Me too.

[34:04] And this I think really brings it home - yeah, I just love it. I know there are mixed opinions on it, but I would say most people I've talked to who have really dug in and legitimately tried to use Copilot are pretty astounded by the efficiency gains, and just what you're able to do with it.

So for those that aren’t familiar, Copilot from GitHub and Microsoft is a coding assistant that’s sort of built into VS Code. And I think that it actually does support other editors now, though I wasn’t able to quite get it set up otherwise… But it’s just amazing. All of those pieces – like, as a human, I can focus on the bits that are really important for me to logically consider, in terms of how the program flows, and maybe more complicated bits of it… And the other things, which are like, “Get this data from this database, write a SQL query” or whatever - boom, it just does it… Almost – like, really good. Maybe I modify a couple of things, but often I just actually don’t.

Because it’s pretty good, yeah.

Yeah, it is amazing. And I think that's a good example of - I really don't believe that programming as a whole is going to be automated. I mean, they've been saying this since programming started - that things are going to be automated. I think there will be a lot of things that will be easy to generate, but programming will not go away. That's my own opinion.

Interesting. The model that drives Copilot is trained on all of that open source code on GitHub. And there's a whole debate - which Microsoft has been criticized over in the last few weeks - about whether that's an inappropriate use of open source to create a business. But the fact is that the model is learning from a wealth of the best code on the planet. And so, much like that airliner that doesn't really need the pilot flying the plane… I'm using Visual Studio Code myself, because I'm working on a project that's hands-on code, so I'm coding every day. But I've got to say, I don't know that I agree with you there. I'm starting to feel like that airline pilot who's sitting there, just saying, "Yes, I'm accepting that code. Yes, I'm accepting that code", and it's just doing it.

Yeah. I think the fundamental difference in my mind - similar to what we were talking about last week - is this apparent coherence that's produced by these types of models. Perception-wise, you as a human coder are like, "Well, I never expected it to be able to produce a function like that." But that's because of this vast wealth of data, out of which it's able to assemble apparent coherence. The bits I've seen in Copilot that are really specialized, logical pieces - the parts specific to my context - still require a lot of tweaking.

And actually - you can comment on this, because you're way more familiar with the aerospace use cases - my impression would be that an autopilot for a 737 or something, flying between known routes in the US, is probably able to do almost everything perfectly. But if you created a completely new airplane and just put the same model in it, it's not going to work, right? So there does still need to be this fine-tuning, and I think that's where the human element comes in. There's still an adaptation to out-of-domain data, right?
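(That "same model, new airframe" point maps onto fine-tuning in machine learning: keep a pretrained model's general knowledge, and retrain a small part of it on data from the new domain. A minimal sketch of the idea, assuming a generic pretrained `backbone` and a `loader` yielding labeled batches from the new domain:)

```python
import torch
import torch.nn as nn

def fine_tune(backbone: nn.Module, feat_dim: int, n_classes: int,
              loader, epochs: int = 3, lr: float = 1e-3) -> nn.Module:
    """Freeze a pretrained backbone; train a small head on new-domain data."""
    for p in backbone.parameters():
        p.requires_grad = False            # keep general-domain knowledge fixed
    head = nn.Linear(feat_dim, n_classes)  # small, domain-specific layer
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:                # (inputs, labels) from the new domain
            with torch.no_grad():
                feats = backbone(x)        # reuse pretrained features as-is
            loss = loss_fn(head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```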

[38:04] I’m gonna make a stretch here… And I’m not speaking literally. You know, we were just talking I think last week about these visual transformers and the amazing things that they can take from the text input… And we were talking about addressing different domains, where you’re taking the same techniques. But what if one of those domains that you’re talking about is conceiving of some of the software systems ahead? So instead of drawing pictures of raccoons - which I was really enjoying, by the way - instead, what if it is conceiving of software architecture for a particular problem set, and then you already have things like Copilot that can go and find just the right code to fulfill each of the things you’re trying to do there?

I’m not saying that we’re there at this moment, but what I’m saying is, I can certainly conceive of putting chocolate and peanut butter together in the context of coding, and having something that’s particularly tasty.

So I don’t know, you have a point there, but I don’t know if that point will survive very long is kind of what I’m getting at, in terms of the require – And I love programming, I think it’s a wonderful thing for human to do, which is why I’ve stuck with it off and on for all these years. But it also won’t surprise me when there’s no utility for a human to be there anymore.

I think it’s one of those things that the domains in which we operate continually evolve as well. So as soon as I’m writing code to do a thing on Mars that I already wrote code for to do the thing on Earth, it seems to me that there will be sort of out of domain issues that are unexpected, and will need human input over time. And so I think part of what you’re saying is the adaptations that we’re able to handle now are kind of fine-tuning adjustments for a domain. And the generalist models that are able to switch between different domains - the switching will probably become easier over time, but also the domains that we’re exploring are becoming increasingly different and big over time. So the question will be “How do both of those trends evolve over time?” That’s a really interesting question, I think.

So I’ll speculate, as we’re kind of winding up a little bit, to kind of bring it back to how is power shifting at the corporate or geopolitical level, and the role of AI - all these capabilities that we’ve been talking about over recent episodes… Those who have the creative insights, where they can take advantage of these capabilities, and see opportunities… But the thing that humans still have right now is you have a form of very limited creativity in AI. In other words, it’s not self-aware, but you can create those raccoon on rocket pictures now, that we were talking about… Which is pretty cool, but it’s not sentient, and it’s not self aware, and it doesn’t have a special understanding of the overall world at all the different scope levels that we have.

Right. There’s an apparent intent behind the model, but it’s only a perception.

Yes. And so we still have the real thing there, and it will take a while to eclipse all that. So there's a role for humans, and the humans who learn to do this really well, and are very flexible and creative in the way they approach the world, are the ones who will have the power.

So you’re saying because I switched from Vim to VS Code and Copilot, I will have the power?

You are a power-monger, Dan Whitenack. You're just grabbing power where you see it. I see this in you; I understand how this works. [laughter] But yes, it will be those who take these resources, recognize something, and can go do something new that their peers are not yet able to do - whether they be technical or non-technical people - who will have the power, and they will sway things at all levels, from the practitioner all the way up to the geopolitical leader.

[42:19] Yeah. Well, I think that's a good way to come to a close. Maybe one more thing, Chris - for practitioners out there who are aware of what they're doing in their own company, and aware of best practices across the industry, but are curious about the conversation happening at the geopolitical level around artificial intelligence - just so they can learn the broader trends and what's being talked about in their industry at the government level - is there a place they can go to at least be exposed to some of those things?

Yeah, I’ll give one in a second, but I’ll lead by saying that most large organizations and most nation-states now have official AI resources on their websites and such. And so whatever country you happen to be listening in - and we have listeners all over the world - your nation has resources there for you. You and I are sitting here in the United States, so I’ll point to our own government’s resource starting point is at the URL ai.gov. If you go there, it is called the National Artificial Intelligence Initiative, and it was created by a law in 2021 called the National Artificial Intelligence Initiative Act – sorry, of 2020. I got the year wrong there.

So it's a website where you can start to see how the United States government at large is thinking - this is not specific to the military; the DOD has a strategy you can Google, and most militaries also have one… So if you have an interest in your country, or its military, or whatever - all of these different domains and dimensions have these resources online.

The ai.gov site the US government has lays out what they call strategic pillars, and it has different sections with strategy documents, various publications, some of the laws associated with the initiative, and other resources.

So if you’re interested to see how the people in power over you are thinking about AI and how it may directly influence you and your life and your family, you should go and see what these governments are thinking. And you know what - I’m going to finish by saying participate in the process, where you’re at, so that you can influence people toward the right decisions.

Yeah, I know – I mean, this is happening at a local level, too. Even in my small town, we recently had a bunch of discussions locally about facial recognition in policing that were going on in our local community… And yeah, so this is happening across the board. Thanks so much, Chris, for helping me learn a bunch today. It was a fun discussion.

Yeah, it was.

Talk to you soon.

Take care.

