Practical AI – Episode #270

First impressions of GPT-4o

get Fully-Connected with Daniel & Chris


Daniel & Chris share their first impressions of OpenAI’s newest LLM, GPT-4o, and Daniel tries to bring the model into the conversation with humorously mixed results. Together, they explore the implications of Omni’s new feature set: the speed, the voice interface, and the new multimodal capabilities.


Sponsors

Ladder Life Insurance – 100% digital: no doctors, no needles, no paperwork. Don’t put it off until the very last minute to get term life insurance coverage through Ladder. Find out if you’re instantly approved. They’re rated A and A+. Life insurance costs more as you age, so now’s the time to cross it off your list.

Neo4j – Is your code getting dragged down by JOINs and long query times? The problem might be your database…Try simplifying the complex with graphs. Stop asking relational databases to do more than they were made for. Graphs work well for use cases with lots of data connections like supply chain, fraud detection, real-time analytics, and genAI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it’s easy to integrate into your tech stack.


Chapters

1. 00:00 Welcome to Practical AI (00:43)
2. 00:43 AI & protein synthesis (04:26)
3. 05:09 GPT-4o (01:34)
4. 06:43 Changes from 4 to 4o (03:23)
5. 10:06 Video modality (02:24)
6. 12:31 Sponsor: Ladder Life Insurance (01:52)
7. 14:36 What should we talk about? (01:21)
8. 15:57 Privacy concerns (04:22)
9. 20:19 AI in the real world (04:01)
10. 24:20 Sponsor: Neo4j (01:02)
11. 25:39 Migrating to a Kubernetes cluster (01:07)
12. 26:46 GPT conversations (00:55)
13. 27:40 Future of AI gadgets (01:44)
14. 29:24 Where is value coming from? (04:59)
15. 34:24 Shrinking gap between open & closed models (00:44)
16. 35:07 Hugging Face GPU compute (02:39)
17. 37:46 Learning resources (03:20)
18. 41:06 GPT closes us out (01:36)
19. 42:42 Outro (00:45)

Transcript


Play the audio to listen along while you enjoy the transcript. 🎧

Hello, and welcome to another Fully Connected episode of the Practical AI podcast. In these Fully Connected episodes we keep you connected with everything that’s happening in the AI world, and help you find some resources to level up your machine learning game. My name is Daniel Whitenack, I’m founder and CEO at Prediction Guard, where we’re safeguarding private AI models, and I’m joined as always by my co-host, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?

I’m doing good today, Daniel. How’s it going with you?

It’s all good. Yeah, I got the chance last week to visit Boston and see a bunch of cool stuff, tour a few labs around MIT, which was a lot of fun… I toured a couple labs where they’re using AI to make proteins, like drug candidate proteins…

Very cool.

So the idea is, one of the companies is literally named AI Proteins. Hopefully we can have them on the show sometime. I made the request to their CEO while I was there, since he was giving us the tour, but… Yeah, the idea being that you can use various AI-driven methodologies to explore the space of proteins for drug candidates, and how they bind to certain stuff; I’m not a biologist or anything like that… But then they take those, synthesize them in the lab, test them, and eventually hope to get them into drug candidates, through FDA testing, and all of that stuff. So it’s pretty cool.

I would love to have them on the show. I’ve loosely followed that field over the last couple of years, largely because someone that I used to work for is a Chemistry PhD from Harvard, and is very familiar with biotech… So he’s kind of kept me up to date on some of that. It sounds fascinating. I know that drug discovery is really all about AI these days. I think that’s where all the action is happening in that field.

Yeah, and it’s pretty amazing, at least from what I’ve heard from a couple of those companies, just hopefully the speed, the orders of magnitude faster that they’ll be able to explore the solution space, I guess. So testing thousands and thousands of drug candidates very quickly, rather than maybe a postdoc, or a PhD testing only a handful over the course of many weeks or even years. They’re able to do things much faster, which is really interesting.

And of course, they’re exploring that really useful application of that technology, but I guess this is one of the reasons that some people might have sort of ethical concerns with some of this stuff, because it’s kind of like, you can apply the technology in a really positive way and explore drug candidates… I’m sure you could also think about things that would be harmful to humans, and even think about like biological weapons, and that sort of thing, and explore that solution space in the same way… And those sorts of things don’t need FDA approval. So yeah, I imagine that there’s people smarter than me that have thought more deeply about those concerns. And I know it was mentioned, I think, in our last round of interviews about Mozilla’s report on AI this last year, we had an episode on that… But yeah, I was thinking about that while I was there; it sort of cuts both ways, I guess.

It does. Since you mentioned that, I know in kind of the defense and intelligence world, with AI capabilities being the great equalizer, the idea of malignant forces in the world deciding to focus on such things, which incidentally is very illegal under international law… But certain places in the world don’t care so much about that. And so we’ll have to see. I’ve had a lot of conversations about the very good and the very bad about AI with folks lately, and what an uncharted world we’re moving into at this point.

Yeah, yeah. Well, I’m very happy… At least the people that I’ve run across are quite ethical…

The good guys.

…and moving toward things that’ll hopefully, hopefully benefit us all. But speaking of benefit to many people, there was something rolled out this week that definitely caused a bit of a stir, and also instantly appeared on a bunch of people’s phones and devices… And that was the next GPT, GPT-4o - the “o” standing for Omni, if I got that right. GPT-4o Omni. I don’t know the full background of that naming, if it’s meant to evoke omniscience, or…

[00:05:42.15] The explanations I’ve seen have been about multi-modality; just the fact that it can handle photos, video, voice, everything… And yes, it was quite a release. Everybody’s been talking about the expected release in the summer, potentially, of GPT-5. Interestingly enough, this came out and, even though it’s still part of that GPT-4 family, it’s had quite an impact. I will say that in the last week it’s been the most-opened app on my phone, pretty much around the clock. It’s starting to feel like a family member, because it’s involved in all of our family decisions - things like dealing with a leak in the drywall, and trying to use it for something as mundane as figuring out the plumbing concerns… And when my wife and I are talking about household things, we now have ChatGPT as a third party in all those conversations. It seems to have supplanted my daughter. I’m not sure she likes it very much…

Yeah, well, since you have been getting hands-on and using GPT-4o quite a bit - either from the announcement or from your own use of it, what are the things that stand out as having changed from, let’s say, GPT-4 to GPT-4o?

Well, I don’t have a list of things in front of me at this moment or anything, but one thing that I’ve certainly experienced is that it is much faster than GPT-4 had been. It’s able to respond very quickly in any of the modalities that we’re talking about…

And when you say modalities, you’re meaning sort of text, speech, image…

That’s correct. It seems much faster. I haven’t measured it across the board… But I think the thing that’s been notable in my own workflow is that I’m not having to wait around and kind of figure it out… Before this week, I’d kind of say “Okay, I’m gonna get onto GPT-4 and ask it a question.” I kind of stop everything I’m doing and do that. And I think the difference, the thing that’s really impacted me the most, is maybe the subtlety of no longer waiting around, being able to do it just by speaking and being spoken to… And it’s no longer a stop-and-do-something kind of activity. Now, as we’re in the middle of conversation, it just becomes part of the conversation. I don’t tell my wife “Hold on one second… I’m gonna check real quick on this question with GPT-4. Let’s see what it says, and then we can take that into account as we talk.” Now it’s just right there at the kitchen table. We just do it. So, a third party in the conversations.

Dueling GPTs.

Yeah, that’s right. That’s right.

Yeah. I think that some of the main features, if people haven’t been following it quite as much in the news - which I’m sure a lot of our listeners have been following quite closely… One thing that they focused on was speed. In particular, with the voice interface, when your voice is recorded in, they’re talking about responding in milliseconds, rather than the few seconds it took before, I think… Which, of course, is much faster.

I think, in general, it’s a fast model, in my understanding, in terms of response and streaming across the modalities. Also, in terms of access, both account-wise and cost-wise, it’s another drop in cost as far as cost for performance goes. So that’s a trend that I think continues.

Also, one of the things I was happy to see is that most of the GPT models over time have basically penalized you in terms of token count for putting in languages other than English, because you would get higher token counts. And if you’re charged by how many tokens you put in or generate out, and let’s say you’re putting in Korean or something like that, then it’s actually more expensive to use the tool in those other languages. So, at least in my understanding from what I’ve read - I don’t know if it’s fully equitable at this time, but there was an effort to correct some of those issues as they came up.
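(For anyone curious, you can see the tokenizer change for yourself: GPT-4o ships with a different encoding than GPT-4, and a short script can compare token counts across languages. This is a rough sketch, assuming a recent version of the tiktoken package that knows the "gpt-4o" encoding; the sample sentences are just illustrations.)

```python
# Compare how many tokens the same sentence costs under the GPT-4 vs GPT-4o tokenizers.
# Assumes a recent tiktoken release that maps "gpt-4o" to its o200k_base encoding.
import tiktoken

samples = {
    "English": "How much will this sentence cost me in tokens?",
    "Korean": "이 문장은 토큰으로 얼마나 비용이 들까요?",
}

for model in ("gpt-4", "gpt-4o"):
    enc = tiktoken.encoding_for_model(model)  # cl100k_base for gpt-4, o200k_base for gpt-4o
    for language, text in samples.items():
        print(f"{model:7s} {language:8s} {len(enc.encode(text))} tokens")
```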

[00:10:05.06] Yeah. Have you used the video features much? I have a bit, and it’s very good compared to things that have come before. But sometimes it seems to get amazing context, and occasionally it struggles a little bit. I think it depends on how much context it’s able to get out of the imagery.

Yeah, I mostly use the sort of image-related stuff, versus video. So audio, and text, and image is kind of what I’ve done. But yeah, they show a good number of things in the demo videos related to video, and also even kind of combining one version of this running with another version, and having interactions between the two, and interview prep with the tool, and all sorts of cool stuff… So if people haven’t seen it, I definitely recommend that people go and check out the demos to kind of get a sense of the performance. But yeah, it’s overall quite impressive.
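(If you’d rather poke at the image modality programmatically than through the app, here’s a minimal sketch using the OpenAI Python SDK’s chat completions endpoint; the image URL is a hypothetical placeholder, and it assumes OPENAI_API_KEY is set in your environment.)

```python
# Send a text prompt plus an image to GPT-4o and print the reply.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is going on in this picture?"},
                {
                    "type": "image_url",
                    # Hypothetical image URL, just for illustration.
                    "image_url": {"url": "https://example.com/kitchen-leak.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```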

The subtlety of being able to do these things with that reduced time and across modalities - while it might not be whatever giant jump the upcoming GPT-5 would be, the fact that it’s changing our behaviors and the way that we’re using it in this last week, and enabling things that just weren’t practical before… I think that really makes a difference going forward. To the point where - here in the Atlanta area I work with some of the local universities, which have various computer science colleges and schools… And I was at one on Friday for a kind of day-long strategic planning meeting on computer science and where they were going with it… And we were talking about this - while we’ve obviously been talking about AI’s impact on any kind of computer science program, this may change not only what you can do, but education as well, in a pretty fundamental way, in terms of teaching and being able to do it in real time. And we had quite a rich conversation, that lasted quite a while, around how we might be able to utilize these new capabilities in the classroom going forward, and also how it might change the curriculum. So I think we’re really getting to a point where a lot of new capabilities in education are right around the corner.

Break: [00:12:22.26]

I’m having a discussion with my podcast co-host Chris about the new GPT-4o model. What do you think we should talk about?

That sounds like a great topic for your podcast. You could discuss how GPT-4o compares to previous versions, especially in terms of its capabilities and improvements. Delve into its applications across different industries, like healthcare, creative arts, or customer service, and consider exploring the ethical implications, including privacy concerns and bias in AI. It might also be interesting to discuss real-world case studies or predictions about how AI will evolve in the future. What do you think your listeners would be most interested in hearing about?

Well, a lot of that seems pretty interesting… You mentioned healthcare. What were you thinking in relation to healthcare?

Sorry, I’m having issues right now. Our systems are experiencing heavy load. Please try again later.

Well, at least I’m not the only one having issues. At least I know that GPT-4o has issues itself at times. There’s something slightly satisfying about that, I must say…

Yeah. Well, it was doing pretty good there, until it had volume issues, or whatever they’re experiencing. And I got cut off.

I’m gonna call it a mental blank. I have those from time to time, I’m just gonna say…

So Chris, our friend over in the ChatGPT world, GPT-4o, suggested some things about privacy concerns as related to AI. I probed a little bit on healthcare-related things, but it wasn’t able to give me an answer and got bogged down… But it also mentioned privacy concerns. Yeah, have you thought about that, as you’ve obviously been using the system? What changes now in terms of privacy, now that we have 4o and not 4? How is it different, if at all?

I think it is. And this is a topic that has come up quite a bit this past week, and in various online forums… There was a particular LinkedIn post - I’ll try to find it and include it in the show notes if I can - that brought it up… And with us now talking to it, and receiving it back, how does that impact “Is this recording? Is it not recording? How does this qualify under different state laws?” When we were busy typing it in and getting our questions back, while there were privacy concerns, it’s extending now to audio recording of voices, which is covered under state laws of all states in the US, at least, and I’m sure many countries out there. What do you think? I’m just curious… I know neither of us are attorneys, but now that we’re leaving our phones open to ChatGPT and capturing people, I’m sure – and I’ve done it in public places a bunch this week… So how do you think that impacts – do we need to tell everyone we’re doing it? “Okay, everyone. Quiet, everyone… Okay, I’m starting ChatGPT 4.”

It’s weird, because it’s some of the same feelings I think people had originally when they started bringing Alexas or Google Homes into their home… And it was sort of always supposedly not listening, but it had to be listening at least to get the wake word… So there was this awkwardness there in terms of what’s actually being recorded, and that sort of thing. I think the difference here - you kind of almost got there when you were talking about how you were using it in your everyday life. I think people can see that this technology, because there’s a quick response, so there’s – as I was playing that, you could tell the first response that I got from ChatGPT was pretty quick. I would say it’s still not quite like you and me talking. It’s not natural, right? But it’s pretty quick. And so there’s this tendency then to think “Oh, well, I can leave this on at certain times”, or like you say, have it as part of the dinner table conversation…

[00:18:27.19] You kind of then bring in these devices, like the Meta AI glasses, and maybe I just have ChatGPT watching what I’m watching through my Meta AI glasses, and telling me about this or that… And so you’ve got all of these modalities coming together. It’s recording in your kind of physical space; not only your voice, but potentially images and videos from your physical space… And all of that data is going over an API to OpenAI, or Microsoft, or however the Microsoft-OpenAI conglomeration (that’s not a word) works these days. But yeah, it’s that embedding, I think, of the technology in the physical world, or the clear application of that within our sort of physical world. And like you say, not pausing to go and pull up a tab and talk to ChatGPT. It could be ubiquitous and embedded in our physical world, I guess would be a good way to summarize it.

To extend that a little bit, Sam Altman, the OpenAI CEO - one of the comments he made this week in an interview was – somebody was asking “When should you use it?”, I believe, and he said “Oh, you should just have it on all the time. Just listen…” And I’m paraphrasing him, I’m not quoting him. But the gist was “Never have it off.” I know that was one of those moments where the privacy notion – at least right now, I’m operating under the assumption that it comes into play when I and the people around me are familiar with it, and we’ve kind of made that choice to do that. But certainly, going back to the Alexa notion and stuff, I think this is going to continue to be an issue here. The Alexa stuff - we have those as well. Oddly enough, I don’t find myself paying much attention to them anymore. I guess I’ve just gotten so used to them being part of the environment, and stuff… But we’ll see.

Yeah. Well, AI meeting the physical world is definitely, I think, going to become more and more a reality. At the Boston Logan Airport when I was flying out this last time I saw they had – normally, they have little booths, where there’s a person that’s like your helper at the airport; like if you have some random question about where the bathrooms are, “Am I at the right gate?”, or “How do I catch this bus?”, there’s a helper. And they just didn’t have anyone there at the thing, and then just relabeled it “virtual assistant”, and just had a screen that you could push and talk to… And I know there’s a good number of companies that are working on sort of interactive virtual agents for retail environments, that sort of thing… And then you have this crossover with the glasses, and Rabbit r1, and Humane Ai Pin, and Meta AI glasses and all this stuff. So are you becoming a cyborg, Chris? Or are you mostly just keeping it in your phone?

I think I’ve accepted the fact that it’s inevitable to do that. I say that half tongue in cheek, half not. To that point, actually, it makes me think – you know, this is penetrating so far beyond people like us in this space. And I have a very good friend, who I don’t think would identify as a technology person, and she brought up the fact that – and this isn’t even specific to GPT-4o or anything, but it is to your point there… She brought up that she and her daughter had pulled into Chick-fil-A, and they noticed a sign that said “Robot crossing.” And they didn’t really know what that meant, but then they actually saw a robot delivering food.

[00:22:02.12] Now, that robot I’m sure at this point doesn’t have very sophisticated AI capability for interactions… It’s probably pretty basic. But in the conversation I pointed out that it’s inevitable; as we noted a week or so ago, we’re over a million models already on Hugging Face, and with these kinds of profound releases each week, it’s only a matter of a very short time before even the most mundane retail experience is going to have both robotics and AI in it. And so all of those things raise the privacy concerns that we were talking about before, and they also raise cultural questions, and just folks getting used to it, frankly. And of course, that inevitably led to the concern over jobs and such, as often comes up… But I think this is maybe the first year that it’s moving so fast in terms of these capabilities that even I am struggling to take them in as they come out. How about yourself? …even though you’re in that profession.

Yeah. Well, even out here in the prairie, in Indiana…

The prairie…

It’s becoming the silicon prairie, with Intel building their big factory in Ohio, and new semiconductors coming to Lafayette, Indiana… So plug for the silicon prairie. Anyone out there can hashtag Silicon Prairie on Twitter, and all the places. But even out here – well, I’m close to a university, close to Purdue - they’ve had food delivery robots around campus here for six to eight years now… Which is cool, but –

Oh, wow. You’ve just made me feel very parochial.

[laughs] Yeah. But it’s cool… I don’t know, it’ll be interesting to see also which brands kind of embrace this, and how they embrace it and put it into their retail environments, and how that’s associated with specific brands, and how they create new experiences for customers around these things. It’s quite interesting.

Break: [00:24:12.04]

Hey, so this week I’m going to have to migrate some services out of AWS ECS into a Kubernetes cluster that’s hybrid between my cloud environment and a data center. What should I keep in mind?

That sounds like a big project. Here are a few things to keep in mind when migrating services from AWS ECS to a hybrid Kubernetes cluster. One, assessment and planning. Start by evaluating your current ECS configurations and services. This will help you understand the dependencies, data flows and necessary resources. Mapping out how these will translate into Kubernetes is crucial. Two, Kubernetes environment setup. Setup.
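(For anyone actually facing that migration, the “environment setup” step the assistant was getting to usually means re-expressing each ECS service as a Kubernetes Deployment. Here is a minimal, hedged sketch using the official kubernetes Python client; the service name, image, replica count, and port are all hypothetical stand-ins for whatever your ECS task definitions contain.)

```python
# Recreate a (hypothetical) ECS service as a Kubernetes Deployment via the official client.
# Assumes `pip install kubernetes` and a kubeconfig pointing at the target hybrid cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="orders-api"),  # hypothetical service name
    spec=client.V1DeploymentSpec(
        replicas=3,  # match the ECS service's desired count
        selector=client.V1LabelSelector(match_labels={"app": "orders-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders-api"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="orders-api",
                        # Hypothetical ECR image; reuse the image from the ECS task definition.
                        image="123456789.dkr.ecr.us-east-1.amazonaws.com/orders-api:latest",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```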

Yeah, that was getting kind of boring… So maybe just tell me what cool song I should listen to while I’m doing the migration.

Uh-oh… Oh, boy.

Are you still there? No dice. Alright, well –

A simple entertainment question.

I struck out twice… OpenAI, you had your chance live on the Practical AI podcast, and I got skunked both times. So…

Yeah. Well, we didn’t rehearse enough. They did have some pretty cool videos on the GPT-4o release. They had several different ones, but in one of them they had two phones with ChatGPT talking to each other, and they introduced them, and they gave them little monikers to differentiate between them… But I will admit, I tried that at home right after it got released… I saw that video and I was like “I want to try that”, and I will confess, it did not work well on my end either. Unrehearsed… So I guess those ChatGPT folks at OpenAI have the inside track on smooth conversations.

I’m sure it worked at one point, as most demos do… But it’s still impressive nonetheless, I have to say. I gave it a pretty complicated question there, maybe - one that I could definitely use some help with… So yeah, I think it did pretty well at answering, of course, and was responsive. I’m wondering, Chris, what you think about - now that we have GPT-4o, what is the future of all of these different physical AI device gadgets that have come out in recent times? There’s been the Rabbit r1, there’s been the Humane Ai Pin, there’s been the Meta AI glasses, and probably others that I’m not even aware of… What’s your thought on how this influences these sorts of AI gadgets?

[00:28:08.15] Well, this is also a golden age of AI startups, and the bar keeps getting raised very rapidly and unexpectedly… So you can go from super-cool to obsolete overnight. You can be one announcement away from a tough moment for your product or service. For instance, now that the world has had a little time to try out GPT-4o, and it’s changed the way we do things a little bit, that’s set a new bar - a new expectation of how you’re going to interact with AI… And I will confess that this week, whereas both you and I are always big fans and supporters and advocates of open models, and of being able to do that instead of just having a service provider, when I was using open source models this week, alongside how much I was also using GPT-4o, it was frustrating, because my own expectations had risen. So if I was using one of these products, and the world just changed in terms of the standard expectation of these model capabilities, it wouldn’t take much to not survive that if you can’t react quickly enough. So it’s interesting times that we live in.

Yeah. So where do you think, if anywhere, those out there building AI products, or maybe products that are driven by AI features - where can they capture value? Because certainly from my perspective, even with this GPT-4o release, unless you’re already a certain ways there, it’s probably not just having an LLM API, because that is essentially just a commodity now.

That is.

Now, some are more expensive than others, but essentially, the price is kind of dropping to almost zero, unless you’re at a very high usage rate, which certainly some companies are, and that becomes an issue for them. But yeah, where do you think the value is to be had?

I still think it comes from kind of a classic Steve Jobs throwback comment - it’s not just about the AI. It’s not just about the LLM. It’s about producing something of value that’s trying to solve a problem, and combining all these things together to create the right capability or experience for your customer. And I still think that’s where it’s at. Maybe to play devil’s advocate against my own comments a moment ago - if you’re going to have a product that has AI integrated into it, make sure the AI is really serving the capability of that product, as opposed to it being about the AI itself… Because then you can be undone by the next announcement. So I really think it’s utility for the thing that you’re buying the device for, as we’re buying more and more AI-enabled devices going forward. And most of them will not have the leading-edge capability of the API in them.

Yeah, I think that the space of those that are working on general-purpose, serve-everyone type of AI products - which definitely fits into these kinds of assistant places - it’s a hard road, because like you say, something could knock you off that pedestal quite easily. It’s hard to compete in terms of price and the commoditization of these things. But in the enterprise it’s still very hard to utilize these tools. That report that I’ve referred to a number of times from Andreessen recently, they’re saying there’s these huge budgets in AI across enterprise companies, and 75% of it has nothing to do with the usage of the model at all, or the hosting of any models or anything like that at all. It has to do with engineering integrations around workarounds and malfunctions, and making sure it’s reliable, and dealing with all the issues.

[00:32:09.19] So there’s still a lot of space, I think, even if you’re not vertically focused… But certainly, there’s also people that are vertically focused, that I think will come out really well. One of the companies that I was able to interact with a little bit last week, they’re doing financial workflows in the financial services sector - they’re called Farsight AI - automating things that used to take days with market research, and creating slide decks and all of this stuff… It’s pretty cool things, but they’re bringing their domain expertise into that field, and they’re applying it… And that’s what really creates the value. That’s why someone would pay for that, whereas there’s not really going to be that many people that say “No, I would rather build that from a raw LLM API.” Not very many people are going to do that, because it’s much harder than you might expect. So yeah, I think that in certain verticals, applying domain knowledge, creating these agents, these automations - that’s a really interesting space moving forward as well.

One of the things that you taught us a while back was about the relatively smaller models, in that kind of 7-8 billion parameter range, where you’re able to run them on just one piece of hardware… And I think that was fantastic guidance that you gave us. This was on a previous episode; we can look it up and link back to it. But I think that’s where all the action is. Whereas the press goes to these huge model releases, the real action in creating value in a product is still going to be the smaller models that are fine-tuned very well to the problem they’re solving… And I think those will continue to be wild, because whereas GPT-4o is wonderful in terms of these conversations - usually wonderful - on our iPhones… The iPhone is only one of many things I pick up in a given day. And frankly, as we go forward, I would expect all the other things I pick up are probably going to have some models associated with them, just to do what they do very well.

Maybe that’s the reason why my GPT-4o isn’t performing well - because I’m using it on Android. Anyway, one of the other things I wanted to mention, Chris - and this is kind of tied into some of this as well - is that there continues to be an advance of these closed source models. If you look, there’s a chart that Hugging Face maintains about the sort of convergence of open models and closed models, and the closed models are still ahead… And now, of course, GPT-4o is up there at the peak of it… But those lines are converging. So they’re not just running parallel, with closed models ahead all the way out to infinity; there’s a sort of crossover point… We’ll see if that actually happens, but at least as far as those graphs go, that looks to be what’s happening, which is interesting.

There was some news out of Hugging Face this week, though, that is good news for those who aren’t big foundation model builders with big clusters of GPUs… Hugging Face announced that they’re going to be sharing $10 million worth of GPU compute… And the article that I read said it’s “to help beat the big AI companies.” So this is quite relevant to the discussion that we’re having now. In my understanding, through a project called ZeroGPU, they’re making this compute available within Hugging Face Spaces, their compute and application environment. So for those of you out there who might be sitting around and still wanting to innovate with open models, or try your own things, and feel maybe not adequately resourced in terms of compute, and particularly GPUs… It’s really cool to see Hugging Face take this step and provide some of those GPU resources to the community that’s operating on Hugging Face. So yeah, check it out. If you just search for “ZeroGPU” you can probably find out a little bit about that effort from Hugging Face.
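(For context, at the time of writing a ZeroGPU Space looks like a regular Gradio Space where the GPU-heavy function is wrapped in a decorator from the `spaces` package. Here is a minimal, hedged sketch; the model choice is just an illustration, and the exact details may differ from the current ZeroGPU docs.)

```python
# Minimal Gradio app for a Hugging Face ZeroGPU Space.
# The @spaces.GPU decorator requests a shared GPU only while generate() is running.
import gradio as gr
import spaces
import torch
from transformers import pipeline

# Load the model once at startup; illustrative open model choice.
pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
    device="cuda",
)

@spaces.GPU  # ZeroGPU allocates a GPU slice for the duration of this call
def generate(prompt: str) -> str:
    return pipe(prompt, max_new_tokens=200)[0]["generated_text"]

gr.Interface(fn=generate, inputs="text", outputs="text").launch()
```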

[00:36:26.02] And I love seeing that from them. We’ve long talked about how, if you’re looking a little ways down the road, AI keeps integrating more and more with the software around it, to the point where it’ll be kind of ludicrous to have software that doesn’t have some sort of AI capability in it in the future. It’s feeling more and more like software in that way - we hit a million open models on Hugging Face, and now we’re seeing these capabilities coming up… You know, when you said that, it reminded me that all the major cloud providers offer a kind of limited free tier, so that you can go do some stuff with it, and that’s kind of how Hugging Face’s offering with open models feels to me, in terms of being able to go use something when you might not have the resources otherwise. So yeah, it’s good stuff… But boy, gosh, the world is changing faster, isn’t it…?

Yeah. Clem from Hugging Face gave a quote in the Verge article that I was reading: “It’s very difficult to get enough GPUs from the main cloud providers. And the way to get them, which is creating a high barrier to entry, is to commit on very big numbers for long periods of time.” And of course, that’s something that smaller companies, or even individuals, don’t have the resources to do. So it’s cool to see…

Well, there’s the ZeroGPU thing. So if you’re out there, if you’re wanting to learn, if you’re wanting to run some of these models yourself, that in itself is a great learning resource and option for you. But there are also a couple of really cool things event-wise coming up soon - actually, events where either Chris and/or I will be physically present. So I wanted to mention those to everyone, because there are some good things that will be streamed in terms of content and learning resources, like workshops from people all across the industry.

So the first of these is with our good friends over at the MLOps Community - they’re putting on the AI Quality Conference. It’s at aiqualityconference.com, and it’s going to be June 25th, in San Francisco. And as you all have seen when Demetrios has been on the podcast, that guarantees there’ll be some really great content there. They’ve got really great speakers, including people that have been on this podcast before, like Jerry Liu from LlamaIndex, and others. So I would definitely recommend going there and learning from people at the AI Quality Conference.

[00:39:02.12] Then also that same week in San Francisco - so if you wanted to, you could time this quite nicely, which is what I’m going to do… There is the AI Engineer World’s Fair. So you can go to ai.engineer/worldsfair and find out more about that. That’s also going to be in San Francisco. It’s going to be June 25th through the 27th. And of course, this is at least partially arranged and sponsored by also some good friends involved with the Latent Space podcast and Discord community, which we’ve done collaborative episodes with, and sponsored also by Microsoft… There’s a day of workshops, there’s speakers, and lots of really great content there.

So two wonderful learning opportunities. I am not sure if both, but I’m pretty sure some of that content will be streamed also live online. So if you aren’t able to make it out to the Bay Area, you could still see some of that content and participate. I would encourage everyone to get out, at least online, and see what’s going on in the community, and check out these great events from some of our friends across the community.

That sounds fantastic. I hope people join us there. And there’s one other learning source that I thought I’d mention. It’s a book, I haven’t read it yet, I’m about to read it, and I thought I’d go ahead and share that… With my fascination about AI and the way it’s changing education - we talked a little bit on this show, and some other recent shows… It’s by Salman Khan. Khan is the CEO and founder of Khan Academy, which has helped millions and millions of people learn skills… And fantastic. I use it regularly. The book is “Brave New Words: How AI Will Revolutionize Education, and Why That’s a Good Thing.” It was just recommended by Bill Gates of Microsoft and the Gates Foundation fame… So that’s probably one of the next books I’m about to pick up, and I thought I’d mention it, and maybe we’ll bring it up again on a future show. So I’m recommending that one.

Sounds good. Well, let’s have GPT-4o take us out. Hey, I’m about to go to sleep after this, because I’m recording this podcast late with my friend Chris… I’d like to dream about an amazing future with AI. Could you give me some suggestions about maybe some thoughts I could have as I go to sleep, which might prompt some really amazing dreams about AI in the future?

That sounds like a great way to wind down. Here are a few ideas to let your mind wander into the possibilities of AI in the future. One, imagine an AI that can predict personal health events, and seamlessly coordinates with medical professionals to provide proactive care, ensuring optimal health with minimal effort from individuals. Two. Think about an AI-driven global environmental system that manages resources and energy usage across the world, balancing human needs with planetary health and restoring ecological balance. Three. Envision a world where AI personal assistants evolve to become lifelong learning partners, adapting to your interests and educational needs over time, making learning continuous, engaging, and perfectly tailored to your personal and professional growth. As you drift off, let your mind explore how these advancements could change everyday life, create new opportunities, and solve the big challenges facing humanity. Sweet dreams about the future of AI.

You can’t beat that.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
