Practical AI – Episode #271

AI in the U.S. Congress

with Representative Don Beyer of Virginia


At the age of 72, U.S. Representative Don Beyer of Virginia enrolled at George Mason University to pursue a Master’s degree in Computer Science with a concentration in Machine Learning.

Rep. Beyer is Vice Chair of the bipartisan Artificial Intelligence Caucus and Vice Chair of the New Democrat Coalition’s AI Working Group. He is the author of the AI Foundation Model Transparency Act and a lead cosponsor of the CREATE AI Act, the Federal Artificial Intelligence Risk Management Act, and the Artificial Intelligence Environmental Impacts Act.

We hope you tune in to this inspiring, nonpartisan conversation with Rep. Beyer about his decision to dive into the deep end of the AI pool and his leadership in bringing that expertise to Capitol Hill.


Sponsors

Fly.io – The home of Changelog.com. Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.


Chapters

1. 00:00 Welcome to Practical AI (00:43)
2. 00:43 Congressman Don Beyer (05:17)
3. 06:00 Taking the first step (02:37)
4. 08:37 AI development vs perception (03:00)
5. 11:37 Recognizing the magnitude (01:49)
6. 13:25 Trickling policies (01:15)
7. 14:41 The AI caucus (02:24)
8. 17:05 Keeping up with AI (03:20)
9. 20:25 International policies (01:49)
10. 22:14 AI Geneva Convention (03:09)
11. 25:24 Dealing with misconceptions (03:43)
12. 29:06 A word for practitioners (03:17)
13. 32:24 Future education (05:09)
14. 37:33 AI in Don's life (02:28)
15. 40:01 Outro (00:46)

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I’m the founder and CEO at Prediction Guard, where we’re safeguarding private AI models, and I’m joined as always by my co-host, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?

Doing great today, Daniel. Excited about today’s guest.

I’m thankful that this particular weekend the government has given us a holiday, so we’re about to have a long weekend here in the US… And to kick off the long weekend, we’ve got with us the current congressman from Northern Virginia, Don Beyer. Welcome, Don.

Daniel and Chris, thanks so much for inviting me to be on the show.

Yeah, well, we’re super-excited to have you join us. I think Chris and I were talking before the show, it’s just encouraging to see how you’re engaging with the subject of AI, and encouraging that there are people like yourself in our government, and I’m sure governments around the world as well, thinking very deeply about this topic. Could you give us a little bit of your background in terms of particularly how you started getting more and more interested in science and AI policy?

Well, not to go too deeply, but when I was in high school, in grade school, I just loved math, and math puzzles. When I was a kid, at Scientific American there was a guy named Martin Gardner, who had a big puzzle at the back of Scientific American every month that I loved doing. And when I was in high school, I thought I was going to be a physicist. My dream was to work at the Niels Bohr Institute for theoretical physics in Copenhagen. And then I figured out in college I wasn’t smart enough to do that, so I became a car dealer instead, and spent most of my professional career - although my mom didn’t think it was a profession - using math that I’d learned by fourth grade. Once I had division down, I was okay for the business world.

But I was always interested in it… And when I got to Congress, I was privileged enough to be appointed to the Science Committee. The Science Committee is not anyone’s first choice. Like, nobody gives you money because you’re on the Science Committee. But I found it fascinating, not only because of climate change, but because we got to do the oversight of NASA… We were the first ones to see the pictures of the black hole. When they figured out gravitational waves, we had those scientists come talk to us, and it was just really fun and interesting. So I got to do a lot more science than I’d ever done before.

Some years ago – well, actually 40 years ago – I’d heard a graduation speech by one of those 16-minute guys [unintelligible 00:03:25.09] And this was like the late 1970s, and he talked about how much information we were generating every year, and our inability to see the patterns in that information. It was just too much. And it has always been in the back of my mind that we were being overwhelmed by information. So maybe 8-10 years ago, when AI was really starting to catch gear, there was a Coursera course… I took a couple of Coursera courses. My first one was on gamification, which I thought was pretty cool. And then I took a course on AI, and for the first three weeks I just found it totally fascinating. The idea that you could do signal amplification, that you use mathematical formulas and linear algebra to progressively get closer to actual connections and relationships.

But then I got to the first exam, and literally - I say I got a zero on it, but it was worse than that. I didn’t turn it in, because I couldn’t answer a single question. Because I didn’t know Python, I didn’t know Java, I had never taken linear algebra… And I just put it aside.

And then two and a half years ago, our local university, George Mason University - you know, one of the things you do in politics is you tour all the new sites. Anyplace you can cut a ribbon, right? So we were cutting the ribbon on their new innovation center in Arlington, and I was just really intrigued, and actually jealous. And so as a throwaway, I said “Could I ever take courses here?” And they looked at me sort of funny, didn’t really answer me, but I got the lady’s card, and I wrote her later that night and said “I’d really like to sign up.”

I was late for the filing deadline, the application deadline, but they waived it for me… And I ended up taking precalculus that spring. And again, I’d taken all that 50 years ago - 56 years ago - but it was like taking it all over again. In the meantime, I just finished my sixth course, on object-oriented programming, and I am coding now for the first time in both Python and Java, and getting ready for the other 11 courses, which, as long as I live long enough, will give me a master’s in machine learning.

That’s fantastic. Ironically, before I move on to a question for you - I took that same Coursera certificate course, and it was hard. It was a hard course.

You did much better than me, I’m sure.

[00:05:48.27] It was tough. It was a tough course. And I also had to bone up on some skills that I had long since lost or not touched on… It’s a fantastic story. Just to back up for one second… I actually first heard this story that you’re telling us here when Ian Bremmer interviewed you - I was following his social media releases, and was very inspired by that… Especially because it’s so common for the public just to assume Congress doesn’t understand science, doesn’t understand technology and AI… And there you were, doing it, leaping in unabashed at the age you’re at, which in my view is a plus here… And we have a lot of folks - I’m in my 50s - that are my age and older who follow this show, and I really wanted to bring this out.

I’m kind of curious, were you nervous about it? …in terms of diving into this. There’s this perception - one that even I experienced at my age - that AI/ML is a young man’s or young woman’s game, and we’re all kind of a little bit on the older side for it… Did you have any fear of diving into this topic? Where was your head at in terms of making that step?

Chris, I should have been more afraid than I was. [laughter] I had originally graduated in 1972 with a degree in economics. And I worked for a year or so, and then I decided that – I didn’t know what I wanted to do, I was wandering around… So I thought “Well, I’ll go to med school”, but I didn’t have any of the prereqs. So I went back and did the whole pre-med thing in about 12 months. And at the time, it was just smooth sailing; I was competing against 18, 19, 20-year-olds, and I was 23, I’d worked for a year, I was married… And I crushed it. I just felt like I dominated. So I thought maybe I would again; that all these kids were too busy trying to figure out who their next romantic liaison might be, and how much they could drink tonight… And boy, was I wrong. The kids that I was in school with are so serious, and they have these great technical backgrounds from high school, and they were the ones crushing me. So it was really a great exercise in humility for me.

And I can honestly say - I did well in college, and I’ve taken courses here and there over the years… I’ve never worked harder in a course than I did on this last Java course. I typed 193 pages of notes.

Wow. It’s funny you say that… I’m trying to teach my daughter right now to start taking notes in class, and [unintelligible 00:08:13.26] and stuff like that… But I’m like “The good students, that’s what they do.”

Well, I still have two packed notebooks from the Python course… You know, I look around the classroom at the other 80 kids, and I’m the only one with a notebook. A lot have laptops open, some have their phones, and some are asleep… You’ve got the whole range.

And as you’ve been diving into this subject at a more hands-on level, I’m wondering - on the one side, you’re part of and participating in conversations about AI, and increasing conversations about AI on the government side, the policymakers’ side… And on the other side, you’re kind of in the weeds, so to speak, of creating - literally programming in Python, or whatever it is… What’s been maybe surprising to you about how AI or machine learning is developed at a hands-on level, at a practitioner level, versus the perception at the policymaker level? Is there anything that stood out to you, or things that have surprised you?

Before answering that, though - you touched on something that has always fascinated me. I don’t know if you’ve read Kim Stanley Robinson’s Mars trilogy. It’s a wonderful series. It got me through a gubernatorial campaign, 15 pages a night. And one of the things I really took away from it was the idea of both leadership from the balcony and leadership from the field. One of the lead characters would spend a year or two managing the planet Mars from like a general manager level, a presidential level, and then he’d go out in the field for two years and live in a tent by himself, looking at Mars biology.

[00:10:04.04] And when I was in the car business - just to be mundane once again - I always spent two weeks every summer working in the shop as a mechanic. By the end of the third day, I was cursing [unintelligible 00:10:13.13] and I’d hate management by then, by the third day. But I’d come away from it at the end of those hot summer days with cut hands and a real appreciation for what it was like to work on cars all day long. Replacing the water pumps, and trying to figure out where that obnoxious knocking noise is… And now I’m finding the exact same parallel with artificial intelligence, serving on the AI Task Force, and the AI caucus, and the various committees that we have. It’s really fun to see - and I’m still very much a rookie, a baby in this field… But I can imagine what, two or three years from now, a senior software engineer is doing - the people that are building these wonderful models right now, who understand how the neural networks come together.

On the other hand, trying to figure out: what do we do about deep fakes? What do we do about hallucinations? What do we do to protect our electoral systems? To look at the policy side also. And I’m blessed to have people like – I have a wonderful tech fellow, which means some other foundation’s paying for him for a year, who’s an MIT computer scientist who works in AI… And he actually knows the math and the computer science and the hardware of it, to help me on the policy side.

That’s great.

I’m looking forward to diving into the policy side, but before we go there - how have other members of Congress received your dive into this topic with such intensity? Do they tend to come to you, do they look to you - are you seen that way? And what hopes might you have of members of Congress really digging in, whether they’re in the House or the Senate, digging into this topic with some willingness to recognize that it is a huge topic for our future? Do you have any hope for that, or do you think it’s going to kind of stay this way?

You know, Chris, I’m embarrassed to say this, but I think the primary reaction has been amusement. [laughter] Especially when I was doing single-variable/multivariable calculus, I’d bring the homework to the floor, because sometimes we’d have long vote sessions… Especially during COVID, when we had to vote by proxy. So people would come on over and say “What are you doing?” And then they’d look at it, and get a little anxious and walk away from me.

So no, I don’t expect many other people to be going back and taking undergraduate and graduate courses in it, but I do think many, many members of Congress are trying to read everything they can about it. There’s an abundance of AI books out; you see them all over the place. And people are trying to read every article they can. We have had just a myriad of visits on the Hill from people that are experts in the field. Just this last week we had a number of people on the AI safety side - people from NIST, for example.

That’s very encouraging to hear.

And we’ve got people from industry, and people from academia. Stuart Russell was there a week or two ago, from Berkeley. He apparently wrote the classic textbook on it. Marc Andreessen came a couple of weeks ago to give us his techno-optimist manifesto. So we’ve been listening to as many people as we can in order to try to develop good policies.

Well done. I think some of us who are observing our US government, with things coming out like the executive order - and you mentioned NIST; I know I’ve read a bit of the guidelines, best practices, some of what they’re digging into… Could you help those of us who aren’t well versed in the things that policymakers are involved with around the topic of AI - how would you categorize those? What are the main focuses, and what are the main activities that policymakers, congressmen like yourself, are engaged in? And how should we expect some of that to trickle down to us as practitioners, or kind of hit our desks, so to speak?

[00:14:16.23] I think there’s a tidal wave coming at us right now, because it seems like every group, every committee, every caucus wants to have a little AI specialty. So for example, I’m on the AI caucus, which is pretty large right now. Bipartisan, almost all of it, which is really encouraging… And so we’re doing the education piece for other members, and their staffs; especially their staffs.

And for those maybe in an international context, as we do have international listeners, what is a caucus in terms of the AI caucus that you’ve just mentioned?

Well, the first AI caucus is open to all members of the House and Senate. And literally in earlier years, we’d have 10 or 12 members, and 6 would show up for lunch. Now 150 show up.

A lot of them staff, but really interested in the speakers, and what they can learn. And then the individual – we have smaller groups; half of the Democrats belong to the New Democrat Coalition, which sort of defines itself as being pro-innovation, pro-business, pro-trade… And so they have their own AI working group. The progressives I don’t think have one yet, but they’re really interested in it.

And then probably the most important one: the Speaker, Mike Johnson, and the Democratic leader, Hakeem Jeffries, appointed a 24-person Task Force for this year, to try to look at the 200 different pieces of legislation that have been introduced on AI, and focus them down to the handful that we should pass this year, that would be building blocks for the years to come.

You can probably elegantly put it in four or five buckets. Clearly, there’s the deep fake problem - not just deep fakes, but the whole copyright-plus problem, for music, and illustrations, and photography, and text, and voice - obviously, the Scarlett Johansson piece… Then there’s the whole piece about generative AI, and what can we expect, and what can we rely on from the large language models that are springing up. Then the whole safety concern… By the way, all of this is against the background of social media, and the fact that we’ve done nothing in the 25-plus years of social media except make it impossible to sue them - Section 230. And no one has been able to come up with an agreement on how to modify 230 to allow people to be held accountable without crashing the whole internet, making it just endless lawsuits. So we’re trying to get ahead of that… First of all, humbly, Congress will never be ahead of the American people. But we want to get ahead of where Congress typically is, by looking at significant legislation.

It’s really good to hear some of this, and I don’t think – I’m sure that it’s accessible to people if they know where to look, but I think Daniel and I are learning a lot from you here today in terms of how this works. Compared to previous technologies that we’ve seen over the decades, AI is a bit of a different beast. It’s going much faster, and it is likely to have a much more profound impact on work - certainly in the industry I’m in - on warfare, across the board, even on what it means to be a human as you go forward in time, to some degree… With these changes happening - you know, we’re getting big news in the AI space every week. Less than two weeks ago as we record this, OpenAI announced GPT-4o, and that alone, relative to the previous version of the model, changed how people are using AI in day-to-day life. You have the entire open source arena, with Hugging Face - there are over a million models there. This thing is happening so fast. How do you envision Congress trying to get an appropriate handle on that, whatever that means to you, on such a fast-moving, expansive topic? I struggle as a citizen to envision how that even happens, and I’ve been waiting to ask you that question ever since we agreed to do this.

[00:18:16.24] Chris, you’re not suggesting that Congress acts slowly, are you?

I would never do that to a congressman. [laughter]

That’s a really hard question, because as I’ve discovered in my nine and a half years there, it moves glacially.

Our founding mothers and fathers built in a Senate and a House - competing chambers - and then you have the filibuster, and the one-person hold… And the Senate, which I respect as part of the founding compromise - but a very small fraction of Americans elects a big fraction of the senators; like 30% of Americans elect 70% of the senators. And it’s amazing we get anything done. And then you almost sometimes need what they call a trifecta, where one party controls the House, the Senate and the presidency, to do any major legislation, like the Affordable Care Act, or the Inflation Reduction Act, or Donald Trump’s Tax Cuts and Jobs Act. Those happened under trifectas.

So all we can do is keep this as bipartisan as possible, and then probably deal with whatever emerges as the largest downsides. We’re not really having any big downsides yet. We talk about the threats. I mean, there are very obvious things here… The whole CSAM issue - making an undressed Taylor Swift, or sex videos of underage teenage girls who never participated in them… Those are very real threats, and we struggle to know who to hold accountable for them. But in general, the hope is that by talking about it, looking at these 200+ bills, looking at the plethora of bills that have been introduced at the state level - I understand more than 50 just in California - we figure out the handful that actually make a difference, and actually protect people.

And by the way, I’m very impressed by the President’s executive order - Biden’s. It turns out it’s the largest executive order in American history. They’ve hit all of their benchmarks so far, their timelines, and maybe the most important thing is they set up the AI Safety Institute at NIST, now led by Elizabeth Kelly, which is staffing up. So finally, at the federal government level, there is a group that is specifically charged with dealing with the safety and trust issues.

So we’ve seen that executive order. I think we had a previous show where we talked through certain pieces of it; it’s really encouraging to see some leadership there. How would you view the more international side of this, in terms of how the US and our policymakers are proceeding, versus policymakers across the world? How do you view that from your perspective, and what conversations are going on related to our positioning within that, and the role that AI plays globally?

Daniel, I think that it’s a really important issue, and I think there are lots and lots of conversations. I think I’ve had no fewer than eight meetings with actual European parliamentarians, who have been putting together their EU AI Act. And just in the last three weeks we’ve had one big dinner and a long session during the day with those same people and their lead technical staff, explaining how their EU AI Act is working, and how it differs from ours.

The shorthand - which is an oversimplification - is that they describe themselves as a regulatory superpower, and we are all committed to innovation. So we’re not licensing algorithms, or giving permission to do certain things. But we also know that we have to be there - in the UK when they talk about their [unintelligible 00:21:49.20] doctrine, and we’re going to Japan for their stuff… Ultimately, in the middle run, we need to have something like a Geneva Convention on AI. This is especially true to the extent that we can engage China in it. We know China is concerned – obviously, they’re investing hugely in it, but they also are concerned about the safety parts of it. And we all have to come together, right?

[00:22:14.02] That’s exactly what I was about to ask next… We often bring it up on the podcast - the safety side. You have all these things pulling against each other with tension: you have the innovation just driving forward constantly, and the understandable safety concerns that we all have, which we’ve increasingly been talking about over the last few years, and which our audience is also demanding… There’s a lot of concern out there. Looking at the international balance and tensions… We have Russia doing what Russia is doing with Ukraine, we have China and Taiwan, and all of these things… Ukraine is the first war that is becoming increasingly AI-driven in terms of the technologies being used… China is an AI superpower, along with the US… And we’re all talking about this need for us and the Europeans to work together, but there’s always the concern about bringing everybody on board - I love the idea of the Geneva Convention that you just mentioned, if all the major powers can get on board. How do you envision all these different tensions pulling against each other possibly working out - getting the motivations of all of the Western countries, led by the US, with Russia and its sphere of influence, and China and its sphere of influence, coming together? Do you have any aspiration or expectation on how you see that coming together over the next few years?

Chris, it’s probably more aspiration than expectation… And with you coming in from the defense industrial base, you know how important our warfighters think this is.

Indeed, I do.

The chairs of our intelligence committees, the chairs of our armed services committees - they very much do not want us to be behind China. There’s a lot of debate about how much human agency there should be in the use of kinetic force. Can you have machines deciding who to target, rather than an Air Force pilot - even one sitting in a room with levers in Colorado Springs? At least that’s a human being saying “Let’s take out that car” or “Take out that building”, rather than letting a drone decide who to attack, and the like. And then there’s space, and the weaponization of space, and the role that AI plays in that.

My hope is that we will have some renewed arms control in the days ahead. It’s been pretty sad the last couple of years, as Russia has progressively withdrawn from arms control agreements, and China has been unwilling even to sit down and talk to us. But sooner or later, if we’re going to make the world a safer place, we need to talk about all the nuclear weapons on the planet, and at the same time talk about artificial intelligence, too.

One of the very first pieces of legislation introduced came from Ted Lieu, Ken Buck and me, prohibiting an artificial intelligence algorithm from making the decision to launch a nuclear attack on another country. That has to be the President of the United States, and the Chairman of the Joint Chiefs of Staff, and the Secretary of State - human beings making decisions of that magnitude. And yes, they can use all the data they want, but a machine can’t decide.

I totally get that, 100%… But I’m often kind of shocked at how many people don’t understand that, the idea of nuclear weapons. A few years ago I was with the CTO of Lockheed Martin at the time, who’s since left, and I was doing an event in London, and I was on the stage… And I actually had an audience member say “Could you talk to us about–” With an assumption in the question. “Could you talk to us about the fact that the US has AI controlling nuclear weapons, and what you think the implications are?” And I laughed it off and said “That’s not the case, obviously.” But that was one of those first moments where I realized how much misinformation about these topics was out there.

[00:26:04.01] And obviously, since then, that’s just grown more and more, in so many variations. Deep fakes, constant misinformation that AI enables with nefarious intent… Could you talk a little bit about how to approach that from a regulatory standpoint? You’re obviously deeply into the topic… Give us a little bit of guidance - things I can tell my family, who are not into AI. Because when we sit around at Christmas time and the holidays with the extended family, they don’t know either. And I’m always shocked that my own family doesn’t know. So we’d love to hear your thoughts on how to approach some of these incredible misconceptions that people have.

The number one safety issue is job elimination. We know that it’s going to replace many, many jobs. But the exciting thing is what they call ambient clinical documentation… Doctors and nurses say 25%, 30%, 50% of their time is spent filling out data on the clinical visit they’ve just finished, or are in the middle of. Now there’s software or hardware that listens to the conversation between Chris and his doctor, and writes it all down. By the time the patient leaves, the doctor can read it and check it: “Yup, that’s what we talked about.” Saving an immense amount of time.

But then – so, job elimination… What do we do? We know that that’s happened in every revolution - agricultural, industrial, information… But we also know that it will probably happen much more quickly now than it ever has before. So there’s much less time to react, and for people to adapt to that change.

A second level is all the misinformation, whether coming out of the large language models unintentionally or as intentional stuff. How do we protect against that? Then there’s the whole notion - and some people take this seriously, other people say “Nah…” - that with the desktop ability to generate DNA, and synthesize DNA, and the ability to look up “What’s a smallpox vaccine? What’s the DNA of that?”, or to go from COVID-19 to “let’s make COVID-27”… The whole notion of bioweapons, or other things that can be made based on the information that comes from language models and AI. And then all the way to the existential threat. It’s interesting that so many of the computer scientists I talked to really say “No, no, no, we’re nowhere near Artificial General Intelligence. And even if we were at AGI, it’s not going to be conscious, it’s not going to have will. It’s not going to plan its own things.” I tend to be on the humbler side, which is “We don’t know where consciousness comes from. And we don’t know when and where it’s an emergent property, or from what.” If we’re building machines that can think some things thousands or millions of times faster than we can, why are we so sure that there won’t be an emergent consciousness coming from this? And in my conversations with Elizabeth Kelly at the NIST Safety Institute, my plea is that there’s at least some subset of that group that always keeps the existential threat at the top of mind.

Well done. I’m always thinking of my practical day-to-day work in using models, building models, applying things in sort of an enterprise context… From your perspective - now that you have both views, the hands-on, granular level, and the global, policymaker perspective… If you could say something to everyday practitioners in the AI space who are building these systems, what should we have in mind, or what should we be thinking about moving into the future? You said policy and regulations and all those things will catch up… But there are people building these systems now. So from your perspective, what are some things to keep in mind at a practitioner level?

[00:29:57.17] Daniel, I love your question. I get a blast email - I think it’s once a week; it might be twice - from something called AI Tangle. And in every one there are like 15 new ideas about AI, and the new companies that have just sprung up. And I always read it and ask “What are they going to do?” And they all sound really similar. You know, they’re going to help you manage your enterprise, and they’re gonna coordinate, blah, blah, blah… And it’s never very inspiring. Instead, if I were in that mode - if I could quit my job now and start an AI company - the first thing I’d probably try to do is ask “What are the real big challenges in our lives that we’re not fixing? How can I use AI to improve our climate change posture? How can I use AI to lift all the people that are food insecure in America out of that food insecurity?” Or something that’s very close to my heart… 10 years ago, with a noble Republican, we started a Suicide Prevention Task Force. And if I ever get past my master’s, what I’d love to do, based on where history is, is use AI to work on a predictive model of who’s at risk for suicide. We lost one of our beloved Capitol police officers three days ago. He took his own life; he died by suicide. And I used to say “Good morning” to him every morning; big smile, sweet guy… I talked to a bunch of his fellow officers yesterday; nobody had any idea this was coming. And if we lose 50,000 people a year - that’s just about where we were last year - to death by suicide, wouldn’t it be great to be able to use AI to figure out ahead of time that 1,000 of them were at risk, and intervene and save those lives? Or maybe even more?

And that’s just one small example… But there are so many ways that I’d love for us to use AI. The generative part, yes, but especially the predictive part - to see if we couldn’t make the world just a really better place. You’ve all probably read it - was it Minority Report? - where [unintelligible 00:31:56.18] could look ahead and figure out who’s going to commit a crime, and then throw them in jail ahead of time. Well, we can’t do that. But if you can use artificial intelligence to figure out who’s most at risk of committing a crime, and intervene in a positive way to change their lives, maybe we can have a safer, happier world.

I really love this line of thinking… Daniel and I focus very much on the show on AI for good - not everything being about making a profit in a business, but how it affects society. It’s a big theme on the show. And I run a nonprofit when I’m not working at Lockheed; it’s a pure public service nonprofit, and it’s turned into another full-time job that I don’t get paid for… When you think about how AI can do good in these ways - obviously taking into account the safety and privacy issues that are there… You mentioned climate change, food insecurity, and suicide prevention, which ties into the mental health theme… And as we have these AI agents, as we combine generative AI with some other tools, and there’s an agent for everything in the future… There are some agents that can handle mini tasks, that are mini-specialized, and everyone talks about that coming… You mentioned suicide prevention, and I think that’s a fantastic idea if you have that agent, that personal assistant that’s always there and can kind of take care of you. Beautiful idea. Also in education.

I have a daughter who just turned 12, and I’m trying really hard - and I know education is a big thing for you… How do we see our children today growing up in this world? How can AI help them with education? How can it make their world better? There are the scary things like job loss that we always worry about, but there are also these amazing potentials for good as well. What are your thoughts about AI and education, and where that goes in the future?

[00:33:58.17] I’m very excited about it. By the way, just before that - something that ties together the last couple of questions… One of the things that fascinated me about my last meeting with the European Union people was that apparently their AI Act banned the use of AI for emotional recognition. They didn’t want - not just facial recognition, but reading faces to see how Chris is feeling today. And I was concerned about that, and pushed back on it, because you’d think that might be a really helpful thing as you look at somebody who might be at risk. There are a couple of linguistics professors at Georgetown University who are trying to use people’s writing and texts to spot language that jumps out and suggests suicidal ideation. Once again, in a predictive sense.

But moving to education - I’m sure you guys had the same experience… I know my wonderful [unintelligible 00:34:49.25] tech fellow must have had many boring, boring hours in high school, sitting there while everyone else tried to catch up. I can’t tell you how many plays and poems I wrote while other people were trying to figure out a physics problem. The fact that you can use technology in general, and artificial intelligence in particular, to let people go at their own pace, and learn as much as they can, as fast as they can… And then on the other side, for the kids that can’t read at grade level in second, and third, and fourth grade, I know that you can use technology in a way that can help them learn those reading skills early on, and improve. Applying artificial intelligence to education should make the teacher’s task easier, and personalize education… I’ve never had to teach a classroom full of 30 kids, but you know, 30 kids have 30 different levels of ability. And that’s got to be really challenging.

As you mention the emotional detection thing - I think if you can get through the privacy concerns, and who’s in control of that data, I would imagine that could be a huge plus across mental health, education and the like… And so I’m very encouraged to hear you indicate that, assuming the context is right, assuming we can find the right constraints and barriers around it, that would be a plus going forward, rather than just saying “Nope, we’re not going to do that.”

Leading into this - I tend to go a little rogue with my daughter on her homework. The teachers are telling her “Don’t use any AI on any of your stuff.” And the teachers are bound by the policies of the school board, so I’m not lashing out at teachers at all in that way; I just want to be clear on that. But policy right now is very much against that in the school systems. And I tell her “No, no. I’d much rather you learn the material, but let’s use the technologies to help that learning happen.” Do you think that will prevail, and that we’re going to have these technologies in a really beneficial way, very personalized for each student, as you mentioned? Do you think we can get through the politics and the lobbying against it that currently exist?

[00:36:54.05] I think so. I think it’s natural that there’s gonna be resistance in the short run… But already I’ve talked to a lot of college professors who are like “Don’t worry about it. I’m thrilled that they’re using it, because they’re learning the material. And they’re asking deeper questions, and the AI is often pushing back, and asking what they know…” I think it’s fine. And worst case, we can go back to blue books for the exams, where you handwrite the whole thing.

Yeah, I definitely remember my fair share of all those tests where you have to fill in the little dots with your pencil…

Scantron.

Yeah, Scantrons. Those are interesting. Closing out here, Don - we’re getting near the end of our conversation… We’ve talked a bit about education, policy, international approaches, and how AI is influencing global relations, those sorts of things… I thought it might be fun to end by just asking how AI is influencing your life personally. What have been some things that have been helpful for you, and as a congressman working in our government, how are you thinking AI will shape your job?

I’ll confess at the beginning: I’m a huge AI optimist. Maybe not as far as Marc Andreessen, but still a big AI optimist. And where I see it most meaningfully is in healthcare. The fact that we can now, in some cases, diagnose pancreatic cancer three or four years ahead of when we could otherwise… I met with a bunch of radiation oncologists the other night, and the difference between getting radiation treatment for cancer 20 years ago and today, because of artificial intelligence, is night and day. They can pick out your tumor exactly, to the micron, externally, and put that proton beam or neutron beam on it, and make it dissolve and go away… You know, it’s just remarkable.

And there’s a wonderful new book out called “Why We Die”, on the science of longevity, which argues that the first person to live to be 150 years old has already been born. He or she is among us today. That’s because of the difference that artificial intelligence makes - just the applied knowledge of this extraordinary amount of data that we have. We have some pretty good ideas about physics, and some about chemistry… We know very little about biology, very little about the human brain. But artificial intelligence is going to open up a lot of those doors for us.

Well, thank you for taking time today to give us a bit of that optimism, but also help us understand how a government is thinking about some of the more difficult and safety-related issues with AI. We’re very encouraged to have you in those conversations, and taking time to join us and speak to practitioners directly in this conversation. So thank you so much, Don. It was great to talk to you.

Thanks a lot, Don. We really appreciate it.

Thank you, Daniel and Chris. Good luck.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
