Practical AI – Episode #34

The White House Executive Order on AI

get Fully-Connected with Chris and Daniel


The White House recently published an “Executive Order on Maintaining American Leadership in Artificial Intelligence.” In this fully connected episode, we discuss the executive order in general and criticism from the AI community. We also draw some comparisons between this US executive order and other national strategies for leadership in AI.


Sponsors

Linode: Our cloud server of choice. Deploy a fast, efficient, native SSD cloud server for only $5/month. Get 4 months free using the code changelog2018. Start your server at linode.com/changelog

Rollbar: We move fast and fix things because of Rollbar. Resolve errors in minutes. Deploy with confidence. Learn more at rollbar.com/changelog.

Fastly: Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.

Notes & Links


Transcript



Welcome to another Fully Connected episode of Practical AI, where we’re gonna keep you fully connected with everything that’s happening in the AI community. We’ll take some time to discuss the latest AI news, dig into learning resources, and help you level up your machine learning game.

I’m joined by my co-host, Chris Benson, who is chief AI strategist with Lockheed Martin, RMS APA Innovations, and I’m Daniel Whitenack, a data scientist with SIL International. How are you doing, Chris? Welcome back from your travels!

Yeah, thank you very much, Daniel. It’s good to be back. I was, as you know, in London recently for the Applied AI conference that was there…

It sounds right within our wheelhouse on Practical AI.

It definitely was. I was given the honor of giving the opening keynote, which was a whole lot of fun. I also got to meet and record some interesting people there, so hopefully there may be some episodes coming up that have to do with that.

Awesome! Can’t wait to hear them.

So how about you? What have you been up to?

I came back from vacation recently, so catching up on all things email, and message, and all of that, and finally digging into some projects again. I’m excited to get a little bit more hands-on this week.

Excellent. Well, we’ve had a lot happening since our last episode out there, and I wanted to dive on into it. A lot of our listeners are probably already aware, but we’re always talking about AI in the context of what different countries are doing, and what’s happening in the private sector versus government and things like that, and recently on February 11th, 2019, the White House issued their executive order on maintaining American leadership in artificial intelligence… Which is significant, because many of us in the AI community and beyond had been waiting to hear if the U.S. was gonna have a national AI strategy issued from the top level. So it is out there, and today we’re gonna talk about that.

Yeah, it’s exciting stuff. Well, maybe exciting stuff. We’ll see.

It is. Now, I wanna do something slightly unusual, and since we’re talking about something that is fairly close in some areas to what I do at work, I want to explicitly note that the opinions that I express on this show are strictly my own, and they do not in any way represent Lockheed Martin… Because I love my job and I wanna make sure that everyone knows I’m just speaking for myself.

I probably won’t do that very often, but I will on this show.

[04:06] Awesome. Well, I was looking through this when it came out, the executive order. There’s a lot of different sections of it that we’ll explore, but it has things related to policy, and principles, and objectives, and even data and computing resources… What I think is interesting is that it seems like the U.S. is a little bit late to the game with respect to this executive order. I know on a previous show of ours you pointed us to an article about artificial intelligence strategies, and I see that they have kind of a timeline on that. It’s a Medium article we’ll link in the show notes, but… There have been a whole bunch of these national strategies that have come out around AI, from Canada back in 2017, China, Taiwan, France, Australia, Korea… So this is definitely in line with what other countries have been doing. So it wasn’t so much of a surprise to see it from our government; I don’t know about from your perspective, Chris.

No, I think – I mean, obviously there are many organizations in the private sector, Academia, and the government, in terms of various government agencies, that have been involved in AI, and many of those organizations, both private and public, have put out AI strategies of their own… But I think all of us have been waiting quite a long time for a national strategy, something issued at the highest levels of government… Now that it is out, we wanted to dissect it and talk about the good, the bad, and the ugly with it. I’m kind of looking forward to figuring out how it relates to everything else out there.

Yeah, definitely. And I know before our show you did a little bit of research as far as the origins of the executive order. Obviously, being an executive order, it comes from the White House, the president, but you kind of found out a little bit more information about who might have had some input here. Do you wanna share that?

Sure. Just as part of kind of figuring out who might have written the document - I’m gonna speculate, so I don’t have any specific knowledge of who wrote it, but I was kind of looking around… I’m guessing that the actual executive order was probably put together by Dr. Lynne Parker, who is the Assistant Director for Artificial Intelligence at the White House Office of Science and Technology Policy. Whoever that person was - she or someone else - they probably would have had input from a number of senior-level U.S. officials that have various interests in technology and government policy. One of those was likely Michael Kratsios, who would be the Deputy Assistant to the President for Technology Policy at the White House. They may have also gotten some feedback from people at the Department of Defense and other agencies… But I don’t have any firm knowledge; when I finished the executive order, I thought it was fairly well written in terms of laying out some of the issues… It was written by somebody in the know, but probably somebody without a whole lot of resources at their disposal.

Right. Someone that might know really good directions, but might not have the authority to actually implement a lot of concrete things.

Right. And I say this a little tongue-in-cheek… it reads the way you or I might have written it, knowing that we might not have any authority to actually make stuff happen at the top levels of government… I think it was well-written, all things considered, and I certainly want to note that in my opinion it has to be a very tough job to be in a government advisement position and understand the implications of some of these technologies without really having a whole lot available to do something with it. That’s a personal opinion I hold… I have some sympathy for whoever did write the executive order.

[08:05] Yeah. I mean, we mentioned that a lot of other countries have issued these, but I think probably at the top of people’s minds is China’s recent stance on AI. Even back in 2017 they published this whole plan for artificial intelligence development, in which they wanted to become the world leader in AI, and attached to that a bunch of funding, which we’ll talk a little bit more about. As a first step of that, the plan was to catch up with the U.S. on AI technology and applications by 2020. So that’s just around the corner… So I imagine that some of the pressure from that plan, and its immediate goals, also maybe spurred or motivated the release of this document.

Yeah, I would agree. I think there have been a lot of other countries… You named a whole bunch of them earlier that have jumped out there. Like we started the show with, we’ve been waiting for a while on this, and at least something is out. Frankly, we might be hoping for some iterations on this down the road, but we’ll see where we go on that.

I know that previously there have been research reports on the state of AI and stuff, but not an overall cohesive agenda that’s been laid out at the federal level.

Cool. Let’s maybe jump into what’s in the executive order itself. I’d love to hear some of your perspectives on that, Chris. In general, overall, there’s five major areas of action within the executive order. We will, of course, post links to the executive order itself, and a few articles that we found useful in terms of responding to the executive order. We’ll put those in the show notes for the episode, so make sure and check those out.

Overall, the executive order has five major areas. The first is having federal agencies increase funding for AI research and development. The second is making federal data and computing power more available for AI purposes. The third is setting standards for safe and trustworthy AI. The fourth is training an AI workforce, and the fifth is engaging with international allies, with the caveat of protecting the tech from foreign adversaries. So those are the five sections, if you read through the executive order.

Let’s maybe start with this area of AI research and development. It’s definitely clear from the executive order that there’s a need to increase research and development activity in AI. What were your thoughts about how they presented that in the executive order, Chris?

Well, kind of going back to – they said many of the right things, but without the detail that’s needed. They kind of laid out the bullet points that I think most of us in the AI world would probably tend to agree with, which is why I do think the actual text was written by somebody in the field, and not just a policy person without that background. But since it doesn’t have the detail - you know, detail usually comes from initiative, it comes from the fact that you’re wanting to change the game… And to some degree, the R&D section basically says “Let’s go do R&D” without going into specifics on what areas and why. It could have done a lot more in that area, and as we go forward I’ll talk a little bit more about what’s not in the executive order.

Yeah, in the Objectives section, which is section two, the first item basically just says “Promote sustained investment in AI R&D, in collaboration with industry, Academia, international partners and allies, and all other non-federal entities, to generate technological breakthroughs.” And of course, they say a few other things related to AI budgeting and other things, but… Yeah, I kind of agree with what you’re saying - they’re saying that this is something that we need to pursue, but they’re relatively light on the details of how that actually is going to happen. So it’s good that they’re promoting AI R&D; it’s just not clear at all to me where things will go from there.

[12:24] Sure. And even going on to the next point, where they talk about making federal data and computing power available for AI purposes - as you mentioned before, it’s very generic talk about sharing data, models, and computing resources with researchers in the private sector, and it notes that agencies are expected to help those researchers access those resources, but it kind of stops at that point. So it kind of states the obvious on what we need to do in the background, without making any kind of leap or strong directive in a detailed sense.

Yeah, this one kind of made me think a little bit, because there is a lot of government data available now, and in my experience in working with government data on various projects it’s not so much that it’s not available, but that it’s incredibly hard to work with and access. I don’t know if you’ve worked with government data in general and their APIs and such, but for me, at least with the ones that I worked with, they were kind of prohibitively slow and hard to parse, and other things… Which caused me to have to implement a lot of data caching and all of these sorts of things when I was working with – I forget which API I was working with.

A lot of this data is already available, so I’d be curious to know how they are wanting to promote access. I would be skeptical to think that they’re going to improve all of their APIs, and go in that direction. That’s a very slow process. I don’t know that they could really do a lot very quickly there, so I’m not sure about the directions that they have in mind, I guess, with that one.
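To make the caching idea above a bit more concrete, here is a minimal Python sketch of the sort of local caching Daniel describes for a slow JSON API: save each response on disk and only hit the network on a cache miss. The endpoint URL and parameters are placeholders, not any specific government API from the episode.

```python
import hashlib
import json
import pathlib

import requests

CACHE_DIR = pathlib.Path(".api_cache")
CACHE_DIR.mkdir(exist_ok=True)

def cached_get(url, params=None):
    """Fetch JSON from `url`, reusing a local copy if we've seen this request before."""
    key_src = url + "|" + json.dumps(params or {}, sort_keys=True)
    key = hashlib.sha256(key_src.encode("utf-8")).hexdigest()
    cache_file = CACHE_DIR / (key + ".json")
    if cache_file.exists():                      # cache hit: skip the slow API entirely
        return json.loads(cache_file.read_text())
    resp = requests.get(url, params=params, timeout=60)
    resp.raise_for_status()
    data = resp.json()
    cache_file.write_text(json.dumps(data))      # save the response for next time
    return data

# Hypothetical usage against a slow JSON endpoint:
# records = cached_get("https://example.gov/api/v1/records", {"year": 2018})
```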

Sure. I guess, moving on a little bit - they did note an ethics aspect, which I am glad to see there. It doesn’t go, again, into great depth, but at least they noted civil liberties; I think that was mentioned several times in the piece. So if you compare it to what China is doing with their surveillance state, which is very much AI-driven surveillance, with a score associated with every citizen in China, I’m glad to see that we’re at least keeping that kind of ethical concern about the potential negative aspects of AI; in other words, what bad actors with AI might choose to do. So it was good to see that, to see them address it. I just wish they’d gone into more detail. Any thoughts on that?

I mean, it is interesting… I have no way of knowing exactly what our U.S. government is doing, but it is interesting how the U.S. is home to many large organizations that have shown really poor and concerning use of data over the past couple of years. So even though the government might say “Oh, we’re not gonna do this with AI” - and I hope that they don’t do certain things, like utilize facial detection extensively and assign me a score - I think one of the interesting things will be whether they’re actually willing to put regulations in place to help regulate those large tech corporations that have been shown to have concerning methodologies around how they treat data, how they share it, how they sell it, all of those sorts of things… I’m more interested in seeing that intersection between the private and public sector in terms of regulation.

[15:56] Yeah, I agree with you completely. And the other thing you noted earlier when you were going through the bullets was training workers. Essentially, this is calling for educational grants to be established, and that’s great; I like the call for that, and I think that is a useful thing. I just wish I had seen a little bit more in terms of actual federal commitment to going and doing this. I think this is gonna be a huge issue going forward. We have the most transformative technology maybe ever, that is going to impact our lives, so I think the idea of getting the workforce into alignment with this is pretty critical.

Yeah. We’ve said this many times on the show, that not all tech people might end up working as AI practitioners or as researchers, but most software engineers are going to be interacting with AI somewhere in the software stack, and it’s gonna be a major part of business strategy… So people that even aren’t AI practitioners necessarily are gonna need some exposure to what AI is, how to interact with it, what the concerns are, how the systems work. I think that level of education is something that we could definitely see some improvement on.

So we’ve kind of talked about what’s in the executive order, and I’m sure our listeners are hearing a little bit of disappointment across a number of those, so let’s cut to the chase and talk about what we are not seeing in the executive order issued by the White House. And I guess to start us off, I’ll throw out what I was hoping to see: given the fact that we are at a critical juncture where we’re trying to maintain in the U.S. a superior level of AI expertise, and we are, at this moment, politically speaking, identifying China as sort of an adversary in the space… I was hoping to see more of a powerful national vision that would commit the U.S. to maintaining global leadership in the artificial intelligence space.

I guess considering just how important this technology is and will continue to be in the future, transforming the world around us - not just jobs, but the way we live our lives and stuff - I would love to have seen something along the lines of “John Kennedy’s moonshot speech to Congress”, where in ’61 he put the nation on a course to land on the moon by the end of the decade, because he recognized how important it was to be a leading power in the space race.

So considering that - at least in my personal view - AI is every bit as important to the future of the country and all countries, I would have loved to have seen something a little bit more powerful than that.

Yeah, and I think if we kind of look back to that moonshot speech and think about the space race - although I certainly don’t want to make it out like we think on this show that we in the U.S. are better than Chinese AI researchers or something, and we don’t promote division - at the same time, I would be very excited to see the U.S. lead in this area.

Similar to kind of the Cold War space race era, when they were really pursuing space technology, something that was directly connected to our advances in that area was funding. And as far as this executive order goes, it kind of lays out that we should be doing a lot of these things, but it doesn’t actually allocate any federal funding towards executing these visions.

I feel like if they do really have this vision that we should be leaders in AI, there has to be funding associated with that, and there has to be a plan for funding associated with that, that really isn’t found in this executive order.

[20:12] Daniel, you absolutely called out the elephant in the room. Everybody I know who has an interest in this area, as we and our listeners do, and was hoping to see great things - that was the number one comment I heard as we all consumed this document: “Where’s the funding? How can you tell us that this is so important to America’s own interest, to be able to drive forward in this area, if you’re not gonna allocate funding to do that?”

To draw the analogy with the moonshot, there was funding allocated to NASA to be able to accomplish the tremendous challenge that President Kennedy issued to the country… And just to point out, that was not just a government or military thing, it was a societal effort. It was something where we were all, together, gonna go do a great thing, and that is what I don’t think is present in this executive order. They speak toward things they’d like to do, but there’s no funding to drive it, and therefore I fail to see how the White House is truly leading the way into the future that we all together need to be in. That’s my own personal perspective.

Yeah, so just to make things more specific - really what the executive order does say around funding, at least for R&D sorts of things, is it asks federal agencies to prioritize research in AI by reallocating resources within their existing budgets. So these federal agencies are already funding research, so I’m assuming we’re talking about the NSF, and the DOE, and all of these agencies that are already funding research. When I was doing my Ph.D., we were funded by the DOE.

So they’re already funding certain things, and they’re really not saying “You’re gonna get more money to support AI funding, but we’re asking you to prioritize that”, which means that funding for other things will obviously go down.

The problem is that those agencies are already doing that. They have smart people in these agencies, and they have seen AI coming, they have recognized how it could be useful in their own domains, and they’re already allocating existing funds there wherever they can… So the problem is that the executive order doesn’t change that in any way. It’s basically calling on them to do something that they’re already doing.

Yeah, exactly. If we compare this to China’s approach with funding AI, we can see that China is explicitly stating that it’s spending $150 billion on AI between now and 2030, and certain individual cities are spending upwards of $15 billion on AI initiatives within the city. So they’re already making that commitment. China is executing on this vision to become leaders in AI, and they’re putting money behind it… And as you’ve already stated in the analogy with the space race, I think that trickles down not only to the government and defense organizations, but to universities, even high schools and lower-level education, where people are really emphasizing STEM education, they’re getting educational grants, there are resources available… There’s a whole trickle-down effect from that money being behind the vision and people being on board with it.

[24:14] Yeah, there’s a real dichotomy between what China is doing and what we’re doing – they truly have put a moonshot-level initiative into place and they’re backing it with the funds, and I truly respect them for doing that. They clearly get it, and they get it at all levels of government. And frankly - I have nothing against China at all - they’re doing what I would do if I was in their shoes. I wish that the United States would take a similar initiative on our side, at the same kind of level. I think we will feel the pain down the road if we don’t ride that boat fairly soon.

I wanted to recommend – originally, I heard about a particular book… It was actually my boss, Matt Tarascio, who told me about it; it’s called “AI Superpowers: China, Silicon Valley, and the New World Order.” The book is great, I highly recommend it; it makes a strong argument that China is probably doing much better than most of us in the West have given them credit for.

We have a bias in the U.S. about our leadership in AI, as we like to talk about it. The book would argue that we may not be quite in the leadership position that we think we are, if we’re being honest with ourselves, and I think that that is an important point to take home.

Whether the book is exactly on point or not, it should call us to attention that it is not a foregone conclusion that the United States would automatically be the dominant power in the artificial intelligence domain. I think that that is pretty key right there; we really need to do, in my opinion, what China is doing. I think more power to them for doing what they think is right. Great people, great country, I just wish we could learn a lesson from the Chinese on that one.

Yeah, and I think you’re really getting at the last point that I’ve seen people talk about a lot with respect to this executive order… And it really stems from the fact that we do have this bias in the U.S., many of us - even the title of the executive order, “Maintaining American Leadership in Artificial Intelligence”, kind of implies that U.S. natives are the best at AI there is… But the fact of the matter is that some of the most brilliant minds in AI have not come from the U.S., and many of the most brilliant minds in AI that are in the U.S. have immigrated to the U.S.

The U.S., at least in the past, has really taken a great stance on importing a lot of great minds into the country and been open about that, but it’s become increasingly hard to get students to come here and study computer science or AI, for example, and do AI Ph.D.’s… And it’s getting increasingly hard for them to be able to stay here and contribute to U.S.-led companies.

I know this is something that OpenAI has talked about a lot, and it’s something that has impacted me a lot - seeing friends of mine who I was in a Ph.D. program with and have worked with over time just not having the desire to stay in the U.S. because of all the issues around visas and those sorts of things, and deciding to either go back to their home country or become AI practitioners in another country. So I think this is something that really is at the core of what needs to be addressed for us to maintain leadership.

I completely agree with you. Once upon a time, a few decades back, Ronald Reagan, another Republican president, used to refer to America as a shining city upon a hill, and the idea around that was no matter where in the world you were, America had this reputation as being the place where if you were willing to work hard, you could make anything happen. And accordingly, so many immigrants from around the world that were ready to accept that challenge developed tremendous interest in and loyalty to the United States, and wanted to come here and bring their families here and help America along. I think that we are at risk of losing that in the current climate.

[28:27] We’re now taking some of these great minds that would otherwise love to come and be part of this American experience and asking them to go back to wherever they came from. And of course, they’re gonna take that expertise with them. So it’s not just the immigrants that are losing out, it is our country itself that’s losing out on these great minds to help us in this next great age where artificial intelligence plays such a major role.

Yeah, definitely. Increasingly, as it’s becoming easier to run a company outside of the major tech hubs like San Francisco or New York, and to have a company that’s fully distributed and that sort of thing, it’s hard to convince people that moving to San Francisco to create your AI company is really the best choice, especially with all of these visa issues, and all of that.

So getting to some over-arching, general thoughts - my general thoughts on this executive order are probably not a surprise, based on my previous comments. I’m kind of skeptical as far as the actual change that will be sparked by the executive order. Given that all of the agencies and companies and educational institutions already see the advantage of AI and are already making efforts within their own power to promote AI research and development and education, I think the thing that would spark more change would be actual funding and next steps… So I’m skeptical that this executive order on its own will change anything, but I’m definitely hopeful that maybe there will be some next steps coming along with it that will provide actionable items like funding, and programs, and that sort of thing.

I completely agree with what you’ve just said, and I subscribe to that. I think it’s interesting, from that over-arching view, to even extend that a little bit – I don’t think that this EO will be a major change-creator in our country. One of the things is we have so many forward-thinking organizations in the U.S. that have already developed their own AI strategies in the absence of any overarching national one, and the limitation there is that those strategies tend to stay within that organization’s domain or purview, whether they’re in the private sector or government agencies or whatever.

Within private industry we have the obvious names that all of us associate with the AI world, like Google, Microsoft, Amazon, Apple and others… And they have provided public leadership in the AI space, since there wasn’t something else out there. We should also note that there are major powerhouses elsewhere in this space, like Baidu, Alibaba, and Huawei.

I know you’ve spent some time in Academia, Daniel… What do you think about some of the leadership that we’ve seen from Academia?

I think definitely that’s still one area where we see a lot of leadership in the AI space, especially from places like Stanford; there is just a huge leadership role within U.S. Academia. But that’s gradually changing as well. I think the immigration issue kind of overlaps with that, because we’re also educating a lot of brilliant AI researchers that aren’t staying here… So even if we have that leadership in Academia, which is great, there’s still that issue lingering.

[32:01] Sure. It definitely exists there. There’s one other group – you know, we’ve talked about some government agencies… I work for Lockheed Martin, so I’m particularly aware of the military impact… In terms of leadership, the U.S. Department of Defense recently published a summary of the 2018 Department of Defense Artificial Intelligence Strategy, subtitled “Harnessing AI to Advance Our Security and Prosperity.” They did what other government organizations are doing, where they allocated existing funds into various programs within the Department of Defense to drive things forward.

For decades - since the start of the internet and before - we have had the Defense Advanced Research Projects Agency, which we all call DARPA, and most people, I think, are at least a little familiar with it. They have been funding AI research at a level of about two billion dollars over several years. So that two billion is a good pot of money which smart people can dip into to try to make things happen in AI research. They obviously work with the private sector and they work with Academia quite a lot… So even though that has a military basis, there’s a lot of cross-over into the private industry space.

I should note, as I say this, as I talk about DARPA and the next thing - working at Lockheed Martin, the team that I’m on works directly with DARPA in terms of implementing AI priorities… As well as with this other agency, which is actually a new one; it just came about a few months ago - the Joint AI Center, called JAIC for short. It’s public knowledge that they focus more on applied AI, versus the research side… And they are funding 1.7 billion dollars over five years. I think that was reported by the New York Times recently.

So these organizations are really trying to push forward what we can do in partnership with the private sector and Academia, and that’s great, but they’ve been doing this for some time… And once again, the Department of Defense far outran the White House in this case. So as a private citizen, again, speaking only for myself, I just think that should have been reversed. I think it would have been good if the White House had said, “Hey, this is our national priority” and all the government agencies, as well as private industry patriotically jump on board with stuff… But the best leaders don’t follow the crowd, the best leaders get out in front and lead the way.

Yeah, for sure. I read a couple of books on the space race era… I forget their titles off the top of my head, but I would recommend if you’re interested in this sort of topic around how a government could effectively promote a technological vision, there’s a lot of interesting stuff that happened in that time period that I think is relevant here, and I would recommend reading up on that.

Any other comments on the executive order generally, Chris?

No, I guess I’ll go back to something that I know we both have said several times in this podcast - I would love it if the White House would go back and bring us something a little bit grander, and take a leadership position. For what it’s worth, I say this completely in a non-partisan way - get out there and lead this, and lead the world, and show the amazing things that we can do with this new technology that’s here to stay. I hope there is a round two of the executive order that gives us that AI moonshot.

Yeah, me too, for sure. Well, before we jump off of this Fully Connected episode, like we always do at the end of these, we really want to give you some good learning resources, so that you can level up your machine learning game, learn more about AI, and particularly as relevant to the topic we discussed.

[35:54] In terms of the topic we discussed today, there’s parts of this that overlap with government data, and regulation, and ethics, and general knowledge of AI across the society, so we wanted to point you first to this new course, AI For Everyone, from deeplearning.ai. It just came out – I believe this last week was when I saw it, but… This I think would be a great resource if you’re one of those people that maybe aren’t a practitioner, but you really wanna learn more about AI, how it’s impacting society and what it actually is, beyond the hype. I think that this might be good for you.

I think also for us as AI practitioners this might be a good one to kind of help us learn how to express AI to people that aren’t so technical, and also to point people (managers, or even acquaintances) to this course, so we can help people get a better understanding of AI and proper expectations for what AI is capable of.

Just to note, I agree; I think that is a course that nearly everybody - as it’s called AI For Everyone - should jump into. I’m often asked - my job title is AI Strategist, and that’s kind of a new thing that’s coming into being these days - a lot of people say “Well, how do you do that? How do I understand the business side of how AI can be implemented?”, and a lot of that is understanding where it can be used, and being able to communicate effectively what these capabilities are and what the impact is. A course like the one you just talked about is a great starting point for that… So I would encourage people to do that as well.

Yeah, and a couple of others that I’ll just mention quickly… Intel AI just came out with this article - again, we’ll link all of these in the show notes - listing out some of the existing ethics toolkits for AI. These include things like Deon, which has checklists for data privacy, security… IBM’s AI Fairness 360, the Digital Impact Toolkit, LIME, and others as well that they list out and describe in this article.

I think that would be a good chance for you to look into things that you as a practitioner could go ahead and start making part of your workflow to develop AI responsibly, even in the absence of formal regulation.
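For a rough sense of what folding one of these toolkits into a workflow might look like, here is a small sketch using LIME to explain an individual prediction, the kind of step you could add to a model review. The model and dataset here are stand-ins for illustration, not anything discussed in the episode.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a stand-in model on a stand-in dataset
data = load_iris()
X, y = data.data, data.target
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Build an explainer over the training data
explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction and print the per-feature contributions
explanation = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(explanation.as_list())
```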

Then finally, there are a couple of links that we’ll provide for government data that is available. Of course, in the U.S. there’s the federal data portal, data.gov. Also, one that I’ve found really useful, that’s a little bit closer to home for me, is the City of Chicago data portal, which has just a wealth of data about Chicagoland - a lot of different agencies and processes and information about Chicago that can be really useful if you’re looking into things you can do with public data. So I definitely recommend checking those out.
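As a minimal sketch of pulling records from one of these portals: the Chicago data portal exposes its datasets through Socrata-style JSON endpoints, so something like the following can work. The dataset identifier below is a placeholder you would swap for a real one found on data.cityofchicago.org.

```python
import requests

# Placeholder Socrata dataset identifier -- substitute one from data.cityofchicago.org
DATASET_ID = "xxxx-xxxx"
url = "https://data.cityofchicago.org/resource/{}.json".format(DATASET_ID)

resp = requests.get(url, params={"$limit": 100}, timeout=60)  # SODA-style paging parameter
resp.raise_for_status()
rows = resp.json()   # a list of dicts, one per record
print(len(rows), "records retrieved")
```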

I definitely will. I use data.gov regularly, but I haven’t seen the Chicago site, so I’m gonna go check that out after the show.

Awesome. Well, thanks for helping me pick apart this executive order, Chris. I hope it was useful for our listeners. If there are additional comments on this, or other things that you would like to have us discuss on the show, we’d really love to hear from you. Reach out to us on our Slack channel; you can join that by going to Changelog.com/community. We’re also on LinkedIn, under Practical AI, and we love to hear from you, hear what you’re liking, and get some feedback and additional topic ideas.

Thanks for being part of the community!


Our transcripts are open source on GitHub. Improvements are welcome. 💚
