Practical AI – Episode #263

Should kids still learn to code?

Get Fully Connected with Chris & Daniel


In this fully connected episode, Daniel & Chris discuss NVIDIA GTC keynote comments from CEO Jensen Huang about teaching kids to code. Then they dive into the notion of “community” in the AI world, before discussing challenges in the adoption of generative AI by non-technical people. They finish by addressing the evolving balance between generative AI interfaces and search engines.


Sponsors

Ladder Life Insurance: 100% digital — no doctors, no needles, no paperwork. Don’t put it off until the very last minute to get term life insurance coverage through Ladder. Find out if you’re instantly approved. They’re rated A and A+. Life insurance costs more as you age; now’s the time to cross it off your list.

Fly.io: The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Notes & Links


Chapters

1 00:00 Welcome to Practical AI
2 00:43 Getting fully connected
3 01:57 Jensen Huang's comments on kids learning to code
4 04:07 Future of AI and learning software
5 06:53 Human & AI partnership
6 11:32 Sponsor: Ladder Life Insurance
7 13:40 Connecting with the community
8 17:08 Finding good projects
9 20:47 Building a social web
10 22:34 Getting non-technical people to use AI tools
11 32:02 AI is not a search engine
12 37:51 Join our community!
13 38:33 Outro

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another Fully Connected episode of the Practical AI podcast. This is the episode in which Chris and I keep you fully connected with everything that’s happening in the AI world. We’ll hopefully talk through some of the news, and also keep you up to date with some of the latest learning materials. I’m Daniel Whitenack, I am CEO and founder at Prediction Guard, and I’m joined as always by my co-host, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?

Doing well today, Daniel. How’s it going?

Oh, it’s going great. My wife and I have been traveling a bit around the UK, which has been enjoyable, other than the train delays, and rain, and wetness… I think today we saw all of the above: sunshine, rain, hail, snow… The full gamut of things.

It’s amazing how you can have so much weather in one little island.

Exactly. Exactly. Variety.

You’re just going a few miles and it changes.

Yeah, yeah. Variety. Well, there has definitely been a good variety of interesting AI things happening, too, as there always are. One of the things that was kind of interesting to me, which was circulating around my feeds, was NVIDIA - because they had their GTC… I always want to say GTX, but I think that’s the card. The GTC event, which is their kind of yearly innovation conference… But Jensen, the CEO, was making some comments… I don’t know if it was actually at the event or in another venue, but he was making some comments about kids learning to code, and - I forget how he phrased it, but basically the gist was “Kids shouldn’t learn to code anymore, because AI is going to do that fairly well.” So I don’t know, I don’t have kids, Chris, but what is your thought as a parent?

As the designated father on this podcast?

Yes, exactly. [laughter]

Well, I think he’s right in the long run. I think he mentioned that in the keynote; there was a little section where he covered it, and he said something like “In recent years, we’ve always taught our kids they need to code”, and then “Now, AI is changing all that.” And I don’t think that’s news to anybody in the larger sense. We use AI for coding, and technical controls, and all sorts of stuff all the time. That’s just been built into our ecosystem over the last few years, and it’s increasing constantly. But I have to admit, he was kind of like “We’re at the moment.” I’m not quoting him, but he was kind of like “This is the moment. We’re not teaching them anymore.” And I’m kind of like “Maybe.” Maybe… Because even for us, as adopters of AI technology for coding all the time - I mean, if there’s anything we do, it has to do with that - there are all sorts of complexities, and bumps in the road, and things like that. So in the largest sense, are we moving that way? Of course; I think everybody would agree with that. But I don’t think we’ve all just arrived there because Jensen said it in a keynote. I have great respect for him, and I don’t mean to be negative. But the way he put it was like flipping a light switch, and I think it’s a very, very slow flip, with lots of nuance.

I guess one way to rephrase the question would be at this moment, were you to have a child going into college, let’s say, would you encourage them to pursue software engineering or computer science, or this sort of thing?

I would. I think that in the future - and actually, I’ve spent a lot of time thinking about this topic in general, hundreds and hundreds of hours… And this is one of those things where I think we’re accelerating into the future, and I think with AI capabilities year by year making massive changes to how we live and work, humans are going to have to be fairly dynamic in how they do things… And one of those skills continues to be various technology orientations. And I don’t think that’s going away anytime soon, though AI will continue to change where those boundaries are.

So in the spirit of “We’re always going to have to learn new things anyway”, I don’t see any problem in diving into technology and coding today, with the recognition that the technology will constantly change underfoot, and you’re going to have to change with it. So I don’t think I would recommend against it – and actually, I’m one of the industry advisors for Georgia State University’s computer science school… And I would not dissuade any of those students from pursuing a computer science curriculum at this time. They just need to be dynamic enough to change over the years.

Yeah. And maybe part of it is also the tasks themselves - like, if you go to a three-week bootcamp on frontend engineering, or something like that, think about the things you would learn in such a bootcamp… I do think there will be a need for frontend engineers for some time. But that sort of basic level you would get is maybe at the level of the more and more cookie-cutter things that an AI system is going to be able to do, guided by the hand of a less technical person like a designer, rather than a frontend engineer.

But I run an AI company, and we really need software engineering and programming… So I think at the minimum, all of these AI systems that are coming about are going to need people to build them, and maintain them, and infrastructure that operates well, and scales… Systems are still going to have to scale, and people are going to have to worry about distributed systems, and all of these sorts of things that are really hard engineering problems from my perspective.

I agree. The human-algorithm partnership is going to go a long way for many years, but it will change all the time. And that’s not limited to our fields here. This is something all industries are going to be facing - that constant change in what the partnership looks like. So unless somebody sees an area where AI is going to completely take over all human activity in the next very few years, I don’t see any reason to avoid these things. I think we’re in a perpetual learning mode going forward.

Yeah, I’ve actually been thinking about this sort of dynamic a little bit after we’ve had a couple of guests on the show that are working on systems like Prompt Layer and others, where they’re managing prompts and reasoning workflows at the intersection of domain experts and software engineers… So the technical side, kind of where software meets domain expertise. Speaking of students, I was on a panel at Purdue University, and one of the questions was around this: “Well, hey, I went into this program thinking maybe I would try to become a data scientist. Is that still a thing? Or should I be thinking about something else?” I think one encouraging thing for people is - if you’re a data person, no one really has more than a year of experience architecting and building these types of generative AI systems. So in a sense, you could do one really compelling project and be ahead of most of the people trying to get these jobs. In that respect, it’s really encouraging.

I think it is shifting, though, what a data scientist means - and this is coming from a data scientist who has been one for 10+ years… I think there is a kind of hollowing out of the middle. You had three things: on one side, software engineering and infrastructure - the land of software engineers, and DevOps, and all that… On the other side, domain experts. And in the middle, data scientists, who translated the domain expertise into predictive models, and machine learning and such, and then handed off to the other side to integrate into software. And I think what you see is this hollowing out of that middle, where domain experts are getting much closer to the software side.

And so I think there’s kind of two maybe takeaways from that from my perspective. Either you could go into the AI engineering side, which is maybe less hardcore infrastructure, low-level programming, and more almost narrative writing of prompts, and creating of these reasoning chains and all that sort of thing, and become amazingly good at that, and rely on good software and infrastructure people on the other side… Or there’s still going to be a need, because everything is still software - there’s still going to be a really heavy need for people that can make your chains of reasoning, and hosted models, and software deployments actually go well.

I totally agree. You said one thing in there that really, really resonated with me. Several things did, but one thing that jumped out, that I’d like to reiterate, is the notion that if you do that one big project, you’re really out in front again. It’s one of those things where, because the field changes so fast right now from month to month, it doesn’t take much to do the new thing that’s just coming out. And the people that might have many years of experience in the title haven’t had experience in this new thing. And that keeps on happening.

And so the notion of holding a title - whatever your title is - for X number of years is really losing a lot of meaning. You might have been in the space, but with the space constantly evolving, you can catch up to modern experience pretty quickly. So people are very worried about jobs in this space, but that’s a way you can be super-competitive and jump up in the area of your interest - by leaping out in front, and disregarding the traditional metrics that we tend to use.

Break: [00:11:22.14]

Well, Chris, as people are diving into their first projects, areas of interest, and new things in the field, one of the interesting learning resources that we maybe don’t spend a ton of time talking about, although we are actively engaged in it, is community - community around the AI space, and where people can connect with it. We’ve produced a lot of content, but we’ve also engaged in various spheres over the years. And there might be a lot of new people - let’s say they’re web developers, or backend engineers, or whatever - who are getting into this space and doing projects, and their normal programming conference isn’t covering it, or maybe it has some AI topics, but they’re wondering, “Is there a better place to find people that are doing these sorts of projects?” I know you were at one point involved in the meetup space, although COVID maybe had a little bit to do with the downgrade of some of those communities…

Yeah, it’s a great point. And it’s evolved in interesting ways, kind of what you’re getting at there… And to start with the last thing that you said, for a number of years I ran the “Atlanta deep learning meetup.” And the phrase “deep learning” is kind of antiquated now as well. And it kind of fell off when COVID hit. But we were really the preeminent kind of AI-oriented community in the Atlanta area, which is where I’m at.

It’s interesting, as another counterpoint to go into this, that you and I met in a different community. We met in the Go language community, because we were both Go programmers, and we were kind of the two people thinking a lot about AI and data science in that community, so it was a natural thing for us to gravitate together… But interestingly, if you’re really focused on different aspects of AI, whether it be generative AI or other fields of AI - since we’ve recently pointed out that not all things are just generative AI, even though that’s the hot thing right now - there are many vendor-specific communities. We have a podcast-specific community here, where we engage our listeners all the time, and there are some platform-specific communities.

[00:16:02.16] But where we met, in the Go community, there was an overall community… Whatever you were doing in the Go space, there was a larger community, and you kind of knew all the people and all the names that were there, and you would follow that and be a participant in it. Here, we don’t really have that. We have many, many fragmented AI communities, and we’ll go to Hugging Face to get open source models, and see what’s going on there… And there are lots of these smaller communities. But I would imagine if you’re coming into this space today, and you’re one of those people who really wants to dive into AI here in 2024, it must be very hard to figure out what space you should be in to make all the connections and to ramp up. Any thoughts on that? What would you recommend, Daniel, if somebody were to come into the space today?

Yeah, it is a bit of a challenge, because it is a bit fragmented. And maybe we could split this up into a couple of kinds of engagement: one from a more technical side, and one from a less technical side. So in terms of architecting and building generative AI apps, or other kinds of AI apps, or fine-tuning models, and that sort of thing, what I would generally recommend is starting out with some sort of learning resource that is probably going to be on Hugging Face, LangChain, LLaMA Index, LanceDB, or one of the other vector database providers… These sorts of projects have really good tutorials and guides associated with them. So start out more project-related… Those are trusted projects and platforms in the AI space. Let’s say you find a guide on setting up a multimodal RAG system to search over videos or images with LLaMA Index, or something like that. Well, a lot of these projects - not all of them, but a lot of them - have some type of forum or chat interface where the community around the project gathers. Oftentimes it’s either Discord, or Slack, or a forum type of thing… So I think if you start in those spaces, like LLaMA Index or LanceDB, look at a guide that is similar to what you want to build, and try going through it - but also look to see if those projects have a Discord or a Slack channel associated with them, and go ahead and log into those. It’s okay to lurk for a while, but as you’re going through your example and you don’t understand this, or you’re getting that error, just go ahead and be brave and put something in those spaces. I’ve generally found them to be fairly welcoming.
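
As one concrete illustration of the kind of guide Daniel mentions, here is a minimal, text-only sketch of a LLaMA Index retrieval workflow in Python. It is a simplification rather than the full multimodal video/image setup he describes, and it assumes the llama-index package is installed and an OpenAI API key is configured; the ./docs folder and the query string are placeholders.

```python
# A minimal sketch: index a folder of local documents and ask a question
# over them. A real multimodal guide would swap in image/video readers
# and embeddings, but the overall shape stays the same.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./docs").load_data()  # read files from disk
index = VectorStoreIndex.from_documents(documents)       # embed and index them
query_engine = index.as_query_engine()

response = query_engine.query("What do these documents say about deployment?")
print(response)
```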

For example, if you go to LLaMA Index, there’s a community page, and you’ll see right away “Join Discord”. There are many others of these spaces… Our friends over at the Latent Space podcast have a very, very active Discord server that they’re running. We have a Slack channel associated with this podcast, and there are other projects; LanceDB has a Discord channel, for instance… And these are generally people that are building projects within this sort of space, within this sort of topic, and they’re generally open to “Hey, I might not only be using LLaMA Index; I could ask a question about choices of vector databases, or choices of models.” Everyone in there is working in this space, and may have biased or opinionated thoughts on that, but you gradually learn and meet people that way.

[00:20:16.17] So I think in some ways it’s a little bit more project-related than kind of overall community-related. And Hugging Face is a great community in the sense of GitHub being a community, but it’s not where people are having all of these different conversations about specific projects and guides and that sort of thing. They’re collaborating on models and datasets and that sort of thing, but maybe not in a kind of asynchronous chat sort of way.

So what do you think about the social element? Because there’s some great guidance there on learning, and kind of connecting on a project basis and stuff… But where would you go for the personalities, for the friendships that you develop? How would you approach that, Daniel?

Yeah, it might depend on people’s personality, and what opportunities kind of present themselves to those people. There are a good number of events that are gradually happening. Our friends over at the MLOps community have had a series of online virtual events. I know there was an AI engineering event out on the West Coast; there are events - like, Hugging Face I think is doing some type of Hugging Face tour, with demos at various locations in-person… So that’s a really great place to kind of meet face-to-face with people, and interact and build relationships.

In terms of personalities and that sort of thing, one thing you could do is look at our previous episodes of this podcast. Even if you don’t listen to all the episodes - which of course you should, because they’re all great; or maybe some are better than others, but they’re all pretty good, I think - you could look at the guests from previous episodes and go to LinkedIn, Twitter, BlueSky, whatever your favorite social is, and see if you can find some of those people on those platforms. Those are trusted people that we’ve met over the years… And so in an online sense, you can start following them and see who they are reposting and interacting with online… And that’s how your web of connections can form a little bit.

So I want to turn from the community questions that we were just talking about and broaden out. The community notion we were just discussing was really focused on those of us who are embedded in AI work - we probably do it for a living, and it’s very central to us. But most of the world does not fall into that category. And yet AI is still impacting their lives in tremendous ways, increasingly. One of the things I have been keenly interested in lately is that the other 99% out there - those who are not building their professional lives on AI in every moment the way we are - still need some entrance points into how to use this in a productive way. We get on the podcast with our audience and listeners and talk about Gemini and ChatGPT all the time, and that other 99% are hearing this in the news too, but they don’t really understand it; they’re probably not using it. They might have tested out one of the free interfaces here or there, to see what it was like, but it’s not part of their workflow. Right now, in 2024, we’re seeing organizations starting to explore and even demand that their employees use these tools. They’re making them available, but they’re really struggling with adoption.

I’ve run across all sorts of issues where hitting mainstream adoption with generative AI tools has been a tremendous challenge. So I’d like to dive into that for a couple of moments and talk over where we’re going, because I know the organization I’m part of is interested in this topic, and I talk to people every day who are trying to figure out how to get it out there beyond our software developers and our data scientists.

[00:24:26.07] Any thoughts you have there in terms of - if you have your typical non-technical worker; they’re a knowledge worker, and they have a set of tasks every day… How do you start to crack that nut in terms of getting those people to recognize where some of these generative AI tools can help them do their own work? I have a couple of examples I’ll go to in a moment, but I’m curious what you’ve seen out there, Daniel.

Yeah, I think there’s one side of it which is places for those people to start, but another interesting piece of this is the mechanism for how that knowledge trickles into an organization, which is an interesting topic in and of itself… One pattern that I’ve seen at organizations is a couple of champions higher up on the ladder who see the vision of transformation, and see that this is going to be a transformative technology for their organization… Those leaders might take some type of course, or certification, or a crash course in the topic. MIT has some AI for digital transformation and generative AI-related offerings for non-technical people online… And I think maybe some even live. NVIDIA has some courses along the lines of “What is generative AI” and “Generative AI Explained” - those types of things. So there might be these leaders who see the vision for how this is going to be a transformative technology; they might do one of those things to understand it at a level that makes them comfortable. And part of the trickle-down is leading by example. When they’re having interactions with their team and/or other teams, it can literally be: something comes up in a meeting, you’re sharing your screen, and you just have a tab open that is ChatGPT, or Claude, or Gemini, or whatever, and you go over there and answer a question, or get something done immediately, because you know how to interact with those tools to do something quickly. And that can be a light bulb moment for other people, where they see a person leading by example, using a tool - not in a “Here’s how you use this tool” sort of way, but really in the flow of how they do their own work. I think that seeps through; it’s really impactful, because it shows “Oh, this person who is influential in my organization is operating in this way, and able to do these cool things with these tools. Can I do that?”

I think some of it can also be a little bit directed, where you’re having your one-on-ones with your direct reports, and they’re asking questions that they should be able to source the answer to, or accomplish very quickly, with these tools. You can tell them “Hey, there’s a pretty quick way that you can get this summary or develop this outline for your presentation out of this article. Let me show you how to do that”, and actually have them go to the site and generate the outline for the presentation based on some article, right there in your one-on-ones… Even to the point of encouraging people, “Hey, you should maybe just have this bookmarked, or have it up in a tab.”

So I think that’s kind of how some of that trickle-down could happen, and how I’ve seen it happening, is kind of a foothold in these influential people within an organization, and then leading by example, and kind of in a one-on-one sort of way, rather than a top-down directive of “We shall now do things this way.”

[00:28:25.25] Understood. I actually have an example to illustrate one of the things you were talking about there… In my own employment, we have access to both ChatGPT and internally hosted open source models, which we love - because that way you don’t have to worry about whether you’re sending proprietary information out, things like that. So this morning, in my day job, I was working with a team of people on something that has a PowerPoint presentation as one of its deliverables… And as we were having a group discussion, sharing a screen, I was able to type some of the things we wanted to talk about into the model and generate content dynamically. While we were on a group call, I generated a set of talking points for the various issues we were addressing… Basically, a presentation within the prompts, if you will. I was then able to turn that presentation into VBA code - Visual Basic for Applications code - where it embedded the content in that VBA code. Then I was able to open up PowerPoint right there and copy and paste out of the prompt. This is totally non-technical, what we’re talking about here. Go to the Tools tab, down to Macros, and open the Visual Basic Editor in PowerPoint, which is available to everybody. I pasted the code in as a new module and ran it, and it produced our PowerPoint for our team right there. The whole thing took five minutes to get a 30-page PowerPoint set up.

Now, there was a lot of manual tweaking to be done afterwards, and adding some graphics and stuff like that… But we probably cut five to eight hours’ worth of work out of our workflow by tossing the critical ideas into the prompt, turning them into that code, and copying and pasting. It doesn’t take a developer to do that. Anybody could do that. So that’s one of many possible use cases: you haven’t replaced any of the workers, but you’re accelerating everybody’s productivity dramatically, and saving a lot of time doing it.
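
Chris’s workflow pastes model-generated VBA into PowerPoint’s Visual Basic Editor. As a rough sketch of what that slide-generation step amounts to - not the code from his call - here is the same idea in Python using the python-pptx library; the talking points below are made-up placeholders.

```python
# Hedged sketch: turn a dict of talking points into a .pptx deck.
# Assumes `pip install python-pptx`; titles and bullets are invented.
from pptx import Presentation

talking_points = {
    "Project status": ["On track for Q3", "Two open risks to review"],
    "Next steps": ["Finalize vendor review", "Schedule the demo"],
}

prs = Presentation()
layout = prs.slide_layouts[1]  # "Title and Content" in the default template

for title, bullets in talking_points.items():
    slide = prs.slides.add_slide(layout)
    slide.shapes.title.text = title
    body = slide.placeholders[1].text_frame
    body.text = bullets[0]                  # first bullet fills the frame
    for bullet in bullets[1:]:
        body.add_paragraph().text = bullet  # remaining bullets append

prs.save("talking_points.pptx")
```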

So as I’ve been thinking about how we get more people in the world to use these technologies to their benefit, I think it helps to have a number of typical persona use cases like that, covering things many people might need… So for the PowerPoint strategy folks: right now, in whatever job you’re in, you can do that if you have access to one of these larger models.

And then the other thing I wanted to dive into is the interesting emotional quirks. People are worried about everything from “Will this take my job if I start using it, and make me irrelevant?” to “Who’s watching when I do this? Can my boss see if I stumble, if I’m struggling with something? Who in my company is aware of what I’m doing?” So there’s a lot of FUD - fear, uncertainty and doubt - associated with the use of the tools… And I think part of the answer, in these trainings that you were alluding to earlier, is to be able to have discussions with people about their fears, and see if you can get some interest and uptake by going right at the thing that’s holding them back. A lot of times people think it’s technical. It’s not. The PowerPoint thing - you can have zero technical training and go do that, if you just know to open up that single thing in PowerPoint and copy and paste the code that was produced at the prompt. So I think there are hundreds or thousands of opportunities along this line that people could take advantage of. Any thoughts on what you might do in that way?

[00:32:02.00] It’s interesting that you bring up the fear and uncertainty piece, because there are a lot of misconceptions… And it doesn’t often work to just straight up invalidate them. So if you’re thinking about this adoption in your organization, and you’re working with people, to some degree you have to find an entry point where there’s less of this fear and uncertainty. I think all of us who have been working with these models recognize that working with generative AI models - prompting them, integrating them into your workflow - often isn’t what you expect going into it, and you have to build up your own intuition of “Oh, this is how this model behaves. And this is how that model behaves. And this is how the prompting works.” You have to build up some of that intuition before you get a sense of how they operate. But you’re never going to build up that intuition if you just focus on the use case that people have some fear over, like putting customer information into the interface, or something like that.

So I think, to some degree, you have to find some use cases where people are able to safely interact with these models. It could be a private chat interface that you allow people to use, or it could be a local chat interface, like LM Studio or something like that, that you encourage people to use - because it’s local, nothing is going anywhere, and you can tell people “Oh yeah, this is fine.” Then they get a sense of the models, and you can go from there. So I think it’s about finding that foothold, to some degree.
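
For the curious, a local chat interface like the one Daniel describes can usually also be reached from code. Here is a minimal sketch assuming a tool such as LM Studio serving a model through an OpenAI-compatible endpoint; the URL, port, and model name all depend on your local setup and are assumptions here.

```python
# Hedged sketch: chat with a locally hosted model over an
# OpenAI-compatible API. Nothing leaves your machine.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # assumed local server address
    api_key="not-needed",                 # local servers typically ignore keys
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use whatever your server lists
    messages=[{"role": "user", "content": "Summarize this paragraph: ..."}],
)
print(response.choices[0].message.content)
```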

One of the interesting things I’ve found is that people get disillusioned when they ask a search engine-like question of these models and just don’t find what they need… So that’s some of the intuition I was talking about. These models operate slightly differently than a search engine.

That’s a great point.

And so everyone kind of had to build up a little bit of intuition, I think, when they learned how to Google things. I think there is a skill to Googling things. And there is a similar intuition here… I’m sure you’ve run into this many times; people ask questions in a business context, and you’re like “Why didn’t you just google that?” Well, maybe they don’t have the intuition around how to properly search the Internet to find answers and self-serve - I’ve definitely seen this before. Actually, it’s interesting… I found an article this week that talked about why AI search engines really can’t kill Google. This was from The Verge… It talks about search engines like Perplexity, and You.com, and Google Gemini, and ChatGPT to some degree… It’s a really interesting article; people should look it up, and we’ll include it in our show notes… It talks through some of the main use cases that you might have learned to do in Google - like navigation questions - that don’t really work so well in the current chat interfaces. So there’s a different sort of intuition that needs to be built up, and one isn’t just a drop-in replacement for the other.

[00:35:37.11] That’s a great point. And not only are they distinct skill sets, but there’s a superset of how you use them together for their strengths along the way. A search engine’s primary job, as the article notes, is to get you to a website. Like, I’m still old school - I don’t go to a website that I already know through Google; I will just type it in directly, because I know it. But my daughter, who is 11 - she knows the website, she knows where it’s at, but she still puts it in Google to go there. And her friends do that too. She uses it as a navigation tool, to the point you just made a moment ago. Whereas when we’re prompting these models, we’re really seeking information, in a lot of ways. Instead of getting to a website that has information, we’re getting the model to feed that information to us directly. And I personally tend to use both. It’s very common for me to flip back and forth between Google and a large language model, and use each for what I want; or if I don’t know exactly where to go on Google, I’ll learn a bit from the model, and then I’ll do a deep dive on a website specific to what I just learned, and get there that way. So it’s an evolving landscape of tools to get these things done now.

Yeah. And I would definitely encourage people to check out this article. It’s quite interesting. They go through different types of queries: navigational queries; what they call buried information queries; exploration queries; evergreen information, like “How many weeks in a year?” or “When is Mother’s Day?”; real-time queries, like sports scores, and that sort of thing… The exploration questions I mentioned are things like “Why were chainsaws invented?” - an exploration and learning sort of thing. And they compare some of the answers from the different tools.

So if you’re struggling with this intuition, maybe that’s a good place to jump in - try some of those queries yourself and see what comes back from ChatGPT, or Gemini, or You.com, and those sorts of things.

And then, circling back on our community idea from before - try those things, hop into our community here at the Changelog, and share what you’ve done. We’re very curious to see what people choose to do coming out of the discussions we’ve had today. I’m looking for the most creative ideas to inspire me… So please, send what you’ve got.

Sounds great, Chris. Well, it’s been fun exploring this topic with you, and I look forward to many further exploration questions in the future. I hope you have a great evening.

You too. Take care, Daniel.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
