Practical AI – Episode #279
Hyperventilating over the Gartner AI Hype Cycle
with Demetrios Brinkmann of the MLOps Community
This week Daniel & Chris hang with repeat guest and good friend Demetrios Brinkmann of the MLOps Community. Together they review, debate, and poke fun at the 2024 Gartner Hype Cycle chart for Artificial Intelligence. You are invited to join them in this light-hearted fun conversation about the state of hype in artificial intelligence.
Featuring
Sponsors
Intel Innovation 2024 – Early bird registration is now open for Intel Innovation 2024 in San Jose, CA! Learn more OR register
Motific – Accelerate your GenAI adoption journey. Rapidly deliver trustworthy GenAI assistants. Learn more at motific.ai
Notes & Links
Chapters
Chapter Number | Chapter Start Time | Chapter Title | Chapter Duration |
1 | 00:00 | Welcome to Practical AI | 00:34 |
2 | 00:35 | Sponsor: Intel Innovation 2024 | 01:58 |
3 | 02:44 | Bad news for Daniel | 02:56 |
4 | 05:41 | Gartner hype cycle | 04:14 |
5 | 09:55 | !sexy AI services | 02:00 |
6 | 11:55 | ML to AI Engineers | 01:38 |
7 | 13:33 | LLMs are not a product | 04:38 |
8 | 18:11 | Reading off the cycle | 01:35 |
9 | 19:47 | Fighting confusion | 01:50 |
10 | 21:37 | Drawing boundaries | 01:44 |
11 | 23:21 | AI vs ML engineer | 00:24 |
12 | 23:45 | Prompt engineering fad | 00:24 |
13 | 24:10 | AI TRiSM | 01:19 |
14 | 25:28 | Synthetic data | 01:20 |
15 | 26:48 | What's missing | 01:28 |
16 | 28:16 | Demetrios' contribution | 01:44 |
17 | 30:00 | Where is multi-modal AI? | 01:34 |
18 | 31:34 | Transformers and SLMs | 02:33 |
19 | 34:22 | Sponsor: Motific | 01:53 |
20 | 36:24 | DB AI tools | 01:47 |
21 | 38:11 | Where's RAG? | 01:45 |
22 | 39:56 | Demetrios' AI hyped items | 04:13 |
23 | 44:10 | Fighting AI nepotism | 01:45 |
24 | 45:54 | Broccoli AI | 03:22 |
25 | 49:17 | Unsustainable AI | 01:13 |
26 | 50:30 | Neighborly AI | 00:46 |
27 | 51:15 | No vectors | 00:49 |
28 | 52:04 | Thanks for joining us! | 02:19 |
29 | 54:23 | Outro | 00:46 |
Transcript
Play the audio to listen along while you enjoy the transcript. 🎧
Welcome to another episode of Practical AI. This is Daniel Whitenack. I am founder and CEO at Prediction Guard. I’m joined as always by my co-host, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?
I’m doing fine. We’ve got a fun one today, Daniel. This is gonna be a good one.
Yes, of course. It was wonderful not that long ago to be in the great city of San Francisco, and run into our friend Demetrios from the MLOps community… And I figured I’d just bring him along for another conversation. So Demetrios, how are you doing?
I’m great, man. We’re back, and I’ve got some bad news to break to you right now. I wanted to do it on air…
Go for it.
Yeah, just to get your reaction.
Oh, boy…
You can be vulnerable. This is how we build community.
Yeah, I’m nervous.
Yeah… So Prediction Guard - awesome. Congratulations on all the success that you’ve had. We’re doing a data engineering for ML and AI virtual conference, and one of your colleagues, Daniel, filled out the CFP… I haven’t gotten back to him yet, but I can’t accept him. I just am way too full, way over my head. And as much as I want to, I’m going to have to divert him to doing his own special event, basically. We’re going to actually take what may have been a bad thing and turn it into a good thing.
That sounds great. I’m looking forward to learning more. [laughs]
There we go. I’ve gotta make sure that you get all the love and shine you deserve, because I’m super-stoked at what you’re doing.
Yeah. Well, I appreciate that. It was great to see you, and… You had your own event in SF. How was that?
I do not recommend doing live events to even my greatest enemies. If anyone out there is contemplating organizing an AI conference, you can do it, but… I don’t recommend it.
You’re gonna hurt.
It’s painful, man… But it was a big success, it was just a lot of work leading up to it, as you can imagine. And we had fun, and on the day of it was like I think over 750 people showed up. A lot of great conversations, a lot of fun, spontaneous, sporadic meetings with people… And that’s the stuff you get at in-person conferences that’s really hard to replicate virtually.
You know what the secret is? The secret is it’s AI, and it needs a lot of hype. It really needs a lot of hype. If there’s one thing we don’t have enough of in AI, it’s we don’t have enough hype. If you had hyped it more, it would have worked.
[laughs] You know, I do a fair amount of hyping… And so for those out there that are sick of the hype, like myself… I’ve only got myself to blame for this.
Well, Chris, you sent me a very interesting-looking, hype-filled chart the other day… Do you want to go into what that was?
[00:05:52.14] I will. And I’m actually blaming it all on Demetrios… He was making fun of the Gartner Hype Cycle. And gosh, I hope they’re not a sponsor, because we’re making fun of them today. And he was going through that, and it was funny… And I said “Dude, we need to do an episode where we all analyze the Gartner Hype Cycle in 2024 for artificial intelligence, and we break it down. And we’re gonna assess it and decide what we think of those things.” And we’re not doing this in our normal, extremely serious manner. We are doing this in the fun way. And lest you don’t know Demetrios out there, which I can’t imagine, because he’s a regular guest on the show here, he is, in addition to being a brilliant guy in this field, he’s also the funniest man in all of artificial intelligence. So this is going to be good, and we’re going to dive into the Gartner Hype Cycle today, and break it down for you. We’re going to start with the real one, and then we’re going to maybe make some adjustments to it.
You know, Chris, you say making fun, but - I mean, Gartner seems to have fulfilled their mission. I mean, we’re talking about the hype cycle, we’re going into it… So maybe their mission was fulfilled, you know?
We are their fulfillment.
Yeah.
Oh, my gosh…
Yeah, we’re hyping it up right now.
We are. Okay, and we’re gonna have fun doing it.
I just have to say - please, if anyone knows how I can get a job doing this kind of stuff, just making up words and then putting them onto a waves graph, let me know, because I would love this as a job. It just seems like it’s too much fun.
Well, let’s see… I think Surf’s Up, the top on the wave, and let’s start talking our way through. Demetrios, do you want to lead off on some of your ideas there?
So I think the most surprising to me out of this whole graph - and for anybody that’s not familiar with the hype cycle, you’ve got the big upward side, and then it goes down, and it kind of crashes, and then it starts to climb back up. And it’s the traditional –
And the two-second version of that - and in a previous episode I did a longer version, when we were looking at some specific things on it… But the two-second version is new technology comes out, everyone’s super-excited about it, they think it’s gonna be the greatest thing since sliced bread… It doesn’t live up to the hype, they get frustrated, they go through “This thing sucks!”, and it falls down on the hype popularity side. And then cooler heads prevail, and they kind of go “Okay, well, maybe you can do something okay.” And then it’s into a reasonable sense of productivity. So that’s Gartner in a nutshell.
So the biggest surprise for me is at the bottom of the slope - so after it’s gone all the way up the hype cycle, it’s come down and crashed down, and it’s at the absolute bottom…
The trough of disillusionment.
Exactly there is cloud AI services. And for me, that is the biggest misnomer, because if anybody is making any money out of any of this – and I guess, maybe hype and actual money, they’re detached, and they’re very decoupled here… But for me, that was like “Wait, what?” There’s no hype in cloud AI services. So Bedrock - out of there. Hype is killed. It’s at the trough of disillusionment. Any type of SageMaker, if you’re using that, or Vertex… No. Out of there. It’s the lowest of the low. And so when I saw that, that was instantly like–
Dude, why are you even doing it?
Yeah… I did not believe a thing that I read afterwards. But that was my thing. Any big surprises from you guys?
I think your point is spot on. If there’s anyone making a killer amount of money on this, it’s Microsoft, it’s Amazon, it’s Google…
[00:09:55.03] Part of my struggle here is some of these terms – like, I could interpret them one way or another way. SageMaker, for example, which - for those that don’t know, it’s kind of like a Model Deployment service within AWS, and there’s various convenience around it, and that sort of thing. Like, that’s been around for quite a while now; a very long time, even before the kind of hyped Gen AI stuff.
Long before it, yeah.
Yeah. So is that a cloud AI service? Like, that’s been around for a huge amount of time. Or are we just talking about hosted model APIs, right?
They don’t say…
Which also, to be fair, have been around a long time. You look at something like OCR, or translation, or something like that… And cloud services have been around for a really long time, and are sort of ubiquitously used.
It’s funny that it’s down there. I get your point… Maybe it’s just that everyone knows that’s where the cloud is, that’s where all the services are, we’re all paying for them…
Yeah. So does hype correspond to usage, I guess? In this chart, is it that people aren’t hyping cloud AI services, even if they’re used? Or…
I think it’s an emotional thing. You know, the hype side is how much people talk – so maybe it’s accurate in this context. There’s nothing sexy about AI services in cloud providers. And maybe that’s what they’re getting at, is like “Yes, we’re paying an arm and a leg, we’re giving them all of our money, but there’s nothing sexy.”
But productivity wise… It’s definitely productive.
I would think so.
Yeah, it’s very pragmatic, too. Especially for those people just starting, I don’t know any easier way than to just grab an API from – like, Amazon Bedrock is just a hosted model; hit that API like you would hit an OpenAI API, but now you have a suite of models. So that seems to me like a near miss. But then at the top of the peak is the other one that was a huge surprise to me, because I’ve noticed this trend… I don’t know if you guys have noticed it, but people who were formerly ML engineers - we’ve all converted into being AI engineers. And an AI engineer is so misleading, because you don’t know, is that somebody that is coming from like a frontend development world, and now they do a little prompt engineering, they use a few frameworks, and they can chain together some prompts to make a bit of a demo on Twitter? And now they’re an AI engineer? Or is it somebody that was deep, deep in the ML platform weeds, and because AI is now the new rage, they call themselves an AI engineer? So I don’t know about that, but it’s at the top?
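Demetrios’s point that Bedrock is “just a hosted model; hit that API like you would hit an OpenAI API” can be sketched in a few lines. This is our illustration, not something from the episode: the model ID is just an example, the request body assumes the Anthropic-on-Bedrock message schema (body formats differ per model family), and the call itself assumes AWS credentials are configured.

```python
import json


def build_bedrock_body(prompt: str, max_tokens: int = 256) -> str:
    """Build a request body for an Anthropic-family model on Bedrock.
    Other model families on Bedrock expect different body schemas."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


def ask(prompt: str) -> str:
    """One-shot completion against a Bedrock-hosted model."""
    import boto3  # assumes AWS credentials and region are configured
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example ID
        body=build_bedrock_body(prompt),
    )
    # Anthropic-style responses carry the text in content[0]["text"]
    return json.loads(resp["body"].read())["content"][0]["text"]
```

The point of the sketch is the shape, not the vendor: swap the client and model ID and the calling pattern is the same for any hosted model API.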
I think it’s the same. I think people use AI, ML, and, before it really fell out of vogue, deep learning interchangeably.
Yeah, exactly.
I don’t know if it’s also maybe connected to the fact - like, Chris and I talked about this, I believe it was maybe last week… The fact that some of the disillusionment around AI is sort of the realization that it turns out AI is integrated in software, and you still have to do engineering to build software… And it doesn’t just sort of – like, having a model as a solution doesn’t really like play out in reality.
You mean I can’t just buy an AI model and stick it out there and magic things happen?
Yeah. I mean, one would think…
I’m so disillusioned.
It’s funny you guys mentioned that too, because I’ve seen a few people talking about how LLMs are not a product; you have to build your product on top of LLMs, or whatever it is, your service that needs to be there. So you can’t look at an LLM as a product per se. And then I’ve also seen – or I’ve been thinking deeply about something that is, like, the companies that are really getting a ton of value out of this AI movement… I’m thinking about one of my friends’ companies, who does like support software, and now he’s leveraging AI and LLMs for creating like multi-agents, and helping answer feedback, or answer questions and queries for support… And he’s using AI. That’s awesome. He’s able to sell that support product to companies really well.
What I haven’t seen is companies that say “Hey, I am fraud detection as a service. And I’m going to sell you this, whatever traditional ML product as a service.” Whereas you can create regular business unit products as a service that leverage AI, but you can’t quite - or at least I haven’t seen anybody crack the nut - create some kind of a traditional ML service type of product. I don’t know if you guys have seen that. And I also don’t know if I’m making much sense right now, because it’s something that’s relatively fresh in my mind.
I’m going to turn that one over to Daniel.
So no, I wasn’t making much sense, I guess is what the nice way of saying it is… [laughter]
I mean, so you’ve got what I would say is – the things that I have seen most are either what you were talking about… So utilizing generative AI embedded in the functionality of sort of domain-specific applications, like the customer service you were talking about, or financial services, or whatever… Or access to models over some API infrastructure.
There’s maybe less general – I guess maybe the biggest one I’ve seen is sort of just general fine-tuning as a service, if you look at something like OpenPipe, or something like that. But that’s still fairly general purpose. It’s not specific to any sort of use case that you might have.
Maybe to some degree certain RAG services would fit into that. We were talking to Pinecone about their recent – they have more kind of prebuilt things to have you do kind of load in all your documents, and have RAG set up, and all that stuff. So I don’t know, that’s maybe the closest that I’ve seen to that sort of scenario.
Yeah. Well, also, the big question is, everybody wants to - and this kind of ties back into the hype cycle. Everybody wants to be doing RAG, and wants to have all these great use cases with their RAG… Like you were talking about with Pinecone, they make it really easy for you to do your RAG. But then at the end of the day, is that a viable business? Or is that actually super-useful? As opposed to somebody’s got this support software that they can come in and really cut down the burden for your customer success engineers, or your customer success people. And that is fascinating to me, because it’s a booming business right now. The RAG business - maybe, yeah, that’s great, and maybe there’s some interest there. Is it a booming business? I don’t know. I haven’t seen numbers. But I think the really fascinating part to me is if you try to juxtapose that with a fraud detection as a service type of product. I just haven’t seen that anywhere, because I think a) you’re not able to really like give away everything as freely, and b) what works for one fraud detection use case doesn’t necessarily… It’s not like you can productize that and then go out and sell it as a service, in my opinion. So this is a little bit of a tangent, I know… But all that to say: we’re at peak hype for AI engineers.
[00:18:08.21] Peak hype, yes.
So I’m going to draw us back over to the hype cycle just for a moment, and I’m going to do something boring for a moment. I’m going to read off the things, and where they are, for our listeners… Because the three of us have the benefit, obviously, of seeing the graph in front of us, and our listeners don’t. So I’m gonna take a moment, and then we can go back and start hitting them.
Very quickly, heading up the curve initially, the innovation trigger. We have autonomic systems, we have quantum AI, we have first principles AI, we have embodied AI, multi-agent systems, AI simulation, causal AI, AI-ready data, decision intelligence, neurosymbolic AI, composite AI, artificial general intelligence, otherwise known as AGI, and then we’re hitting the peak of inflated expectations. At the top of that hype cycle we have sovereign AI, AI TRiSM, prompt engineering, responsible AI, and at the very peak, AI engineering. And then starting to slide down we have Edge AI, foundation models, synthetic data, ModelOps, and generative AI. And just going into the trough of disillusionment is neuromorphic computing, smart robots, followed at the bottom by cloud AI services. And then we slide up the slope of enlightenment to autonomous vehicles, knowledge graphs, intelligent applications, and finally, the singular one on the plateau of productivity, which is where you want to end up, is computer vision, which is basically “Yeah, we can do that. It’s boring and no one talks about it anymore, but hey, we’re making money.”
So if the listeners out there are not confused…
Oh, there’s a whole bunch I don’t have any idea what they are. Gosh.
I was gonna say, which ones do you actually know what they are? Because –
What the hell is embodied AI?
Oh, I learned what that is after I put out the post. So someone said “Oh yeah, embodied AI is when you use AI in robots.”
Is that so? But there’s also smart robots on the cycle.
Yeah. And at a former employer I was specifically doing AI systems in robots, and I’ve never heard of it.
You never called it embodied AI? [laughs]
Well, it’s been a few years, I’ll give you that. But no, we weren’t calling it embodied.
I mean, so I think I’m at like a 30% hit rate on these… And I really would love to know what first principles AI is, because that feels like buzzword bingo to the fullest.
I don’t know.
Let’s see. First…
Yeah, Daniel’s going –
He’s cheating.
He’s going to models to find out.
The AI-generated card in my Google Search says “When applied to AI, first principles AI suggests developing AI systems and algorithms by understanding the foundational principles of machine learning, neural networks and data science from the ground up.”
Don’t we do that anyway? Isn’t that kind of inherent in training new models, and stuff? “Oh, but no, no. We’re really going back. We’re going back to the very first ones. Not the second or third principles.” [unintelligible 00:21:16.20]
Yeah, no, because all you guys out there that aren’t using first principles - that’s lower down on the hype cycle.
Okay.
So the other pieces… I mean, were there any other surprises for you guys? Because I have so many other pieces on here that I’m like “What…?”
I think for me – like, some of these things are themselves correlated, and yet in different places on the chart. So if you look at generative AI foundation models, Edge AI, AI engineering, prompt engineering, probably some others on there - all of those sort of fit into the same-ish bucket, and yet are on different sides of the hump. So yeah, I don’t know, some of these it’s also a matter of where do you draw the boundaries? Where’s the boundary between generative AI and foundation models? Or generative AI and prompt engineering.
[00:22:17.26] I’ll give you one… At the very bottom on the innovation trigger is quantum AI. Okay, so that’s not going to happen anytime soon. And I will note that they have it on the greater than 10 years, but I would suggest it’s probably greater than greater than 10 years.
But isn’t that – I mean, one of the things that’s interesting about this whole cycle is there’s that one… Maybe you all can tell me or I can look it up. There’s one law, it’s like a general law that people talk about where you underestimate short-term innovation and overestimate long-term innovation, or something like that.
I think it’s vice versa.
Yeah, sorry. I said that backwards. Yeah. So especially the time angle of this, it’s hard to – because things just pop up and you really didn’t see certain things coming, and others that you thought would come, don’t. So yeah, it’s extremely difficult.
100%. One thing that I am – just to tag on what you’re talking about, Daniel, with the bucketing these… Please tell me what the difference is between an AI engineer and a prompt engineer. A prompt engineer is someone that only does prompts, I guess, and that’s all that matters? So I can see how it’s like “Where’s the line here?”
When prompt engineering came out - Daniel, you might remember - I kind of made fun of that. People were saying – there were new jobs for prompt engineers, and stuff. And I’m like “That is a passing fad.” Like, that will be just so ingrained in what everybody does, all the time, that the notion of there being someone who that’s their entire job all the time for years is not going to happen.
Yeah. I also didn’t know… So I’ve never heard anyone use the word - if it’s a word; it’s an acronym. AI TRiSM. Do people go around saying that?
Yeah, what is that?
So I looked it up, and you know what’s funny - because this is exactly the area that I’m working in every day. AI TRiSM is tackling Trust, Risk and Security in AI models.
Okay.
You’ve never heard that –
And I’ve never heard that. But now I feel like I should put it on our website. Because it’s hyped.
Yeah, it should definitely be there. That’s right.
The funny part is it’s almost as hyped as prompt engineering, which is basically all you hear about is prompt engineering, right?
Yeah, they’re right there together.
And AI TRiSM you never hear about.
Yeah, there you go. But the TRiSM, it’s out there.
It is.
We hear about the components that make that up all the time…
Sure.
…but just never the – I’ve never heard them put together that way. And I’m sure there are people that are out there that their focus is in that area, and they’re like “Of course it’s TRiSM.” But guess what? Most of us don’t know that.
No, not at all. I don’t even know – if I go and I just look at this, I don’t know what causal AI is, I don’t know what the AI simulation is… The multi-agent I do understand, but then… Even when you say Quantum AI, I don’t know what that is.
The one that I would say is probably in the wrong spot is synthetic data. It feels like that should be still going up on the hype train, because we’re just discovering what we can do with synthetic data. And every week I feel like we unlock new use cases. And synthetic data is just – it’s the gift that keeps on giving in my eyes.
[00:26:15.05] I think that’s the difference between you, who actually does it, and somebody at Gartner, who was tasked to go put the chart together and doesn’t actually do the thing in real life. I’ve terribly offended somebody out there.
Well, we’re glad that it’s out there. Let’s just say that we are very happy that this exists, so we can have a whole episode dedicated to breaking it down.
Yes. It’s a conversation starter. That’s what I mean. Achievement made.
Yeah, unlocked.
So one thing that I noticed isn’t there at all, which really surprises me given how much it’s bantered about, is ethical AI. It’s not on the chart.
And that doesn’t go in the TRiSM?
Maybe it does. Maybe this is where I – is ethical AI now transformed from a labeling standpoint into TRiSM? Is that where we’re going? I don’t know.
Or what is the overlap between responsible AI, TRiSM, and ethical AI?
Okay, well –
And there isn’t really anything on here about GPUs or hardware. I think that’s because they made their own hype cycle for GPUs.
That’s right.
If I’m not mistaken, I feel like I’ve seen that somewhere on the internet.
You’d be cannibalizing your other chart.
Exactly. So you can’t put any GPU, hardware, anything on the AI one. You’ve got to refer people to the GPU hype cycle. And maybe it’s like that with ethical AI. Like, they made a whole other ethical AI chart that is the hype cycle for ethical AI.
Maybe so. I’m not familiar with it.
How many charts can you make? If you’re Gartner, I guess –
I mean, we have just the artificial intelligence hype cycle here, but they probably have – I think I’ve seen multiple subdivisions and stuff out there.
That’s why it’s a great business to be in, Gartner selling all these different hype cycles…
Well, speaking of what to hype - what’s not on the hype cycle, but should be?
Alright, if I could have talked to somebody at Gartner before they were making this, I would have advised - and so this is basically my video job interview right now…
I’m busy typing an invoice up for you to send to them, okay?
Exactly. I would have advised AI Gateway. That is very popular. That’s climbing the hype cycle right now, because people really like to have the option to hit an AI gateway. And if it is not that complex of a query, you don’t need to hit GPT-4. You don’t need the most expensive model. If you have some kind of open source model that is cheap, then let the simple query go to that 7B model.
So I’ve been hearing people call it an AI Gateway. Others I think have called it like an LLM proxy maybe –
Router?
Or router. Yeah, that’s another one. So we would have to agree on the actual name, but that’s gaining hype, for sure.
Yeah, agreed. Yeah, I’ve definitely seen the router language… Whatever it is, the languages overlap with networking, which is basically - like, you’re just routing API calls. So I guess that makes sense.
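However you name it, the gateway/router idea the three of them describe, sending simple queries to a cheap small model and hard ones to an expensive one, boils down to a dispatch function. A minimal sketch of ours, not from the episode: the model names are placeholders, and the word-count-plus-keyword heuristic stands in for the trained classifier or LLM judge a real gateway would use.

```python
def route(query: str,
          cheap: str = "llama-3-8b",
          strong: str = "gpt-4o") -> str:
    """Pick a model for a query.

    Toy complexity heuristic: long queries, or queries containing
    signal words associated with hard tasks, go to the strong model;
    everything else goes to the cheap 7B-class model.
    """
    hard_signals = ("analyze", "refactor", "multi-step", "derive")
    words = len(query.split())
    if words > 40 or any(s in query.lower() for s in hard_signals):
        return strong
    return cheap
```

In a real gateway this function would sit behind a single OpenAI-compatible endpoint, so callers never know (or care) which model answered.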
Yeah. Any that you guys would have liked to have seen on here, and where?
I had the ethical – I’m still wondering what composite AI is. Did we ever get that answer, here, or am I just having a senior moment, or something…?
Yeah, what is it…?
[00:30:00.01] The one that really stands out to me, unless I’m just like – there’s a lot of words on this page, so maybe I’m totally missing it somewhere… But where is multimodal AI?
Oh, good catch there.
It’s not on here, is it?
No.
Who cares about multimodal…?
That’s so weird. That should be in the peak of inflated expectations.
This is like the thing of 2024, like multimodal AI.
Yeah. Even multimodal RAG should be on here, like climbing the innovation trigger. Multimodal models should be on the peak of inflated expectations.
That is such a good catch… I know tons of people who say multimodal and have no idea what it means.
Well, what does it mean, Chris? [laughs] Quiz time.
Well, it’s having different modalities of input there, so that you can combine different inputs to get a rich output, in a very general sense. I have no idea.
Yeah, so voice, photos…
I know it when I see it.
Yeah. Video…
Voice, photos, video… All the things.
Which is what we want. I want to throw a bunch of stuff that I have, and just have it sorted out and give me the best answer. And even with today’s multimodal models, that doesn’t happen very well. I’m often frustrated and disappointed with those outputs. So yeah, I’m expecting better.
Yeah. And along those lines, I have two that I would like to have seen. One is just transformers in general. Where’s that? Where are they on this hype cycle? Because that also feels like – are they climbing or are they going down?
It would be trough of disillusionment, heading downward, because we’re past that. And people are now talking about post-transformer models quite often. So it’s kind of like “Yeah, yesterday.”
So there needs to be another dot for post-transformer models.
Yup. Going up. That’s definitely going up.
That’s right.
And speaking of which, it feels like, okay, we’ve got – small language models… Where are they? Because that is all the rage.
It is.
And maybe it’s all the rage for every vendor who is not OpenAI, because they can’t compete on GPT-4… And so what do they do? They say “Well, you can just host your own small language model and fine-tune it and get better performance than GPT-4.” And so I think small language models are probably – they should be in that innovation trigger, maybe the peak of inflated expectations, because anyone who’s ever used a 7B model might not want to use it if they have the choice…
Well, are you sure that’s going up? Or could it possibly be sliding into that disillusionment that you’ve just referred to?
Potentially. That’s true.
Maybe it is going into the trough of disillusionment, just hypothetically, because I do think that when it gets to the plateau of productivity, small models will be just the workhorse; you’ll have them out on the edge everywhere. Every frickin’ device you’ve ever imagined or seen is going to have small models in it, that are inferencing… We won’t ever have anything that doesn’t have them. It’ll just be “Oh, yawn. Of course we have our small models in our watch.”
Which leads me to the next one that I’m like “Where is this?” Why do they not have wearable AI? That is a perfect buzzword that should be on here. And if you look at like what Meta is doing with the glasses, or if you see any of those necklaces that you can wear and it records everything… That’s wearable AI right there. I may have just made that up, or I may have seen that before, but that one should be on here.
It should be there, I agree.
Break: [00:34:08.17]
Maybe this fits into kind of the agentic stuff that is represented in certain ways on there, but this whole idea of tool/function calling, text-to-SQL, interacting with structured databases, APIs, whatever that is… I don’t know the general name for that, other than tool and function calling, or text-to-SQL, but certainly, that’s sliding into a zone where people are definitely doing some of those things in production, and there’s products released around it. So like the Hex Magic stuff and all that other…
Where is it on the chart though, before I go on?
Oh, where is it on the chart? I mean, it’s got to be somewhere around AI engineering.
So it’s at the peak of –
Maybe. Maybe, I don’t know. Maybe it’s going down…
I think it’s just past that…
…because people are like “Agents aren’t reliable…”
I think that’s right. I think it’s heading down into the trough of disillusionment. That’s where I would guess.
Yeah.
Yup. And if you compare that to where they have it, multi-agent systems, it’s got a long way to go up. It is at the very bottom of this hype cycle. So yeah, I think we instinctively are like “No, please, no more agents.” And Gartner’s just like “Oh, we’re just getting started, baby.”
And they’re like “No, please, more agents together. Multi-agents.”
Yeah. Gartner’s going to create their own agent hype cycle next. That’s gonna be the next one that they can create.
Maybe. Maybe.
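For listeners wondering what the tool/function calling discussed above actually looks like in code: the model is prompted to emit a structured call, and your application parses and executes it. A hedged sketch; the `{"tool": ..., "args": ...}` shape and both stub tools are inventions for illustration (real APIs like OpenAI function calling define their own schemas).

```python
import json

# Stub tools standing in for real integrations (weather API, SQL engine).
TOOLS = {
    "get_weather": lambda city: f"22C and sunny in {city}",
    "run_sql": lambda query: [("n_users", 1234)],
}


def dispatch(model_output: str):
    """Execute a tool call a model emitted as JSON.

    Assumes the model was prompted to reply in the form
    {"tool": "<name>", "args": {...}}.
    """
    call = json.loads(model_output)
    fn = TOOLS.get(call["tool"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['tool']}")
    return fn(**call["args"])
```

The unreliability the hosts joke about lives in the step this sketch skips: getting the model to emit valid, well-chosen calls in the first place.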
So you know, we’ll take a commission for giving you that idea, Gartner. No problem there. One thing… Can we call out the elephant in the room? Because where is retrieval-augmented generation?
Yeah.
How is that not on here?
RAG? What’s that?
Because I was thinking about it and I was like “Oh, you know what they missed? Graph RAG.” That is all the hype these days, and that’s probably right around where sovereign AI is, where it’s maybe at the border of the –
Yeah, it’s going up, nearing the peak of inflated expectations. You’re right.
More hype than the TRiSM.
Yup. More hype than the TRiSM.
But I would argue RAG is heading to the trough of disillusionment. Anyone wanna disagree with that?
No, no, I think so, too.
I think it’s over the hump.
I do, too. I mean, people are kind of hitting the challenges… And actually, Daniel, advanced RAG, which we’ve talked about several times [unintelligible 00:39:06.00] well, we don’t just have RAG now. We have advanced RAG.
[unintelligible 00:39:13.02]
As things are starting to head over that peak of inflated expectations with RAG - “Well, guess what? We can juice it some more. We have advanced RAG.” But I think the whole thing is starting to go over the side, and people are like “Okay, well, we’ve kind of done at least the easy stuff.” To the advanced RAG point, there are people that are doing it better than others. But nonetheless, what’s next?
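Since RAG comes up throughout the episode without being spelled out: the pattern is retrieve relevant documents, stuff them into the prompt, then generate. Here’s a deliberately tiny sketch of ours; the word-overlap scoring is a stand-in for the embeddings and vector store (Pinecone and friends) a real system would use.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query, return the top k.
    Production RAG replaces this with embedding similarity search."""
    q = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

“Advanced RAG” is mostly about improving the retrieve step: reranking, query rewriting, chunking strategy, and so on, while this naive loop stays the same.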
So I’m just curious, two-second deviation… We’ve talked about fine-tuning, we’ve talked about RAG… What’s coming next in that sphere? What are they missing there?
Yeah… A new model?
Yeah, I think you mentioned that you might have had some of these, Demetrios… What are AI hyped items that are your own, that you’ve come up with a name for, that other people will have to interpret to figure out their definition?
[00:40:16.13] [laughs] Alright… Do you want to guess on this one?
Yes.
Alright, here we go. I am going to start you off with a pretty simple one. This one is free range AI.
Free range… Is that open access LLMs?
Close. Close. What have you got, Chris?
Grain-fed…?
I can’t get off the free range thing. I’m an animal guy. I can’t even get into the AI headspace on this one.
That’s AI that was trained without guardrails.
Okay, I like that.
Gotcha. Well, we already talked about one here that you alluded to, Demetrios, but my name for it was Trinket AI.
Wearables?
Yes.
Trinket AI… Yeah, imagine it’s in your fidget spinner.
That sounds a lot – that’s a much better name than wearable AI. Trinket AI. It is. Every little thing you have on your body has a frickin’ model inferencing on it, you know?
Yeah… And it doesn’t bring you any extra value, if we’re gonna follow AI trend… [laughter]
You just don’t have to think anymore.
You can click that button and take a picture, Demetrios.
No, it just gives you some verbose answer to a question that you didn’t really ask… So your shirt is – you’re like “Hey, have I been sweating?” and then it tells you the origin of sweat in a three-page PDF that you have to go download… [laughter]
Do I get senior moment AI? That would be good for me. There’s a huge market for that. Everybody over the age of 50 is going to buy senior moment AI to “What–? Oh–” and “Oh, there we go.” And I can continue, instead of pausing for the next three minutes to try to figure out what it was I was about to do.
Or, I was thinking that that’s how seniors interface with AI, so they don’t get left behind. It’s like, this is the product that will make sure you stay up to date. You’re ahead of the curve.
Okay. Sounds good.
Alright, I’ve got another one for you all… This one is EQ AI.
Empathetic AI?
Yeah. It’s also been known as Empathetic AI…
Emotional Quotient, and stuff?
Yeah, you may hear other people out there on the streets calling it Empathetic AI… This one is a type of AI that has high emotional intelligence, and it feels empathy for you when you get frustrated that it’s not giving you the right answer, and your prompts aren’t working… But it doesn’t actually make your prompts work. It just feels bad for you.
Okay, that minus the AI bit, that happened to me yesterday. I was on Comcast, on their stupid tech support for four hours, texting… They passed me off, and everyone was so empathetic, but they accomplished nothing. If you put that in AI, I’m quitting AI. If you put that into any AI that does that, I’m just done. I’m walking away from the whole field.
Are you sure it wasn’t already AI that you were talking to?
It could have been. I mean, it was just text. It was only text. But it was horrible.
We’ve already passed the Turing test, so…
I’m getting responses of “I’m so sorry. I’m just very sorry. We’re here to help you”, and I’m like “I’m gonna freakin’ kill you.” Yeah… That’s what four hours of texting support will do. If you bring that to AI, it’ll ruin the whole thing for me.
Well, this one, funny enough, is actually on the uptick when you look at the slope. The EQ AI has got a lot of runway left.
Yup.
[00:44:07.28] So my next one is either AI nepotism, or AI anti-nepotism.
Fighting AI nepotism.
Fighting AI nepotism.
You’re gonna have to go into that one for me.
I’ve stumped you. This is exciting.
It’s basically using AI against like the government using AI, or what?
No, no…
Foundation model related maybe?
Yeah, so this would be like multi-model AI, in that you are not preferential to one language model family and only using that family, but you are now multi-model, and as such not practicing nepotism.
But are you multimodal multi-model?
[laughs] Maybe not…
You know, I knew it by its other name, which is polygamy AI…
Yes.
Oh, gosh… Where are we going? [laughter]
Or some in San Francisco call it polyamorous AI, it tends to be… So the next one that I’ve got for you – oh, where is this nepotism AI on the hype cycle, by the way?
I think it’s still a bit on the rise. I saw a16z in their post – one of the things they called out was a multi-model future.
Oh, yeah. There’s a future for this one, that is for sure. So I’ve got one that is called Broccoli AI.
Okay…
This one’s going down on the hype cycle.
Is it related to some sort of graph thing?
No, but that could be nice…
Branching?
Is it synonymous with healthy AI?
Yeah, exactly. Maybe you’ve heard it termed Healthy AI…
Efficient? Sustainable?
Oh, that’s another one that I’ve got though, but we’ll get to that in a minute… Which reminds me - it does feel like sustainable AI should have been on the real hype cycle. Like, that’s an actual term, isn’t it?
Yes, it is. And it’s not –
It’s not on there. The other one that should have been on there, that I was like “Why isn’t it on there?” is Ensemble AI. Ensemble models. That feels like it should have been on there.
See, one of the ones that I looked up was Composite AI.
Yeah, that’s the one I didn’t know.
Well, I don’t know – it’s slightly different than Ensemble, but I think that Composite was combining multiple AIs together, in some way or another…
For one inference? Like you have multiple models inferencing, but you have one inference back out to the user?
Yeah, something like that. I don’t know. Although Ensemble could very much mean for a single inference getting a majority vote, or something like that.
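[Editor’s note: the “majority vote for a single inference” idea mentioned here can be sketched in a few lines of Python. The function name and labels below are illustrative, not from any particular library or product discussed in the episode.]

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label most models agreed on (simple ensemble voting)."""
    counts = Counter(predictions)
    label, _ = counts.most_common(1)[0]
    return label

# Three hypothetical models answer the same query; the ensemble
# returns the single most common answer back to the user.
print(majority_vote(["spam", "spam", "not spam"]))  # → spam
```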
Okay, so it would be where Composite AI is on the chart, assuming they’re correct.
Yeah.
And before we leave it - Sustainable AI? Where is it on the chart?
That’s very much – like, it’s got a lot of hype to go.
Low to mid-level – mid-level on the curve up?
Yeah.
Okay.
Just think about how many people are talking about the energy that is wasted training foundational models…
True.
[00:47:46.10] …and how we need to build out all these data centers, and they need to be sustainable etc, etc. So yeah, sustainable AI for sure has some room to grow. Back to Broccoli AI… A.k.a. Healthy AI. This is AI – and this is very much on the downslope, again. It has passed its peak. People are a little disillusioned with it, because it’s AI that doesn’t taste good for the organization, but it’s needed. And so you can imagine the cybersecurity folks - they love this kind of AI.
Is this like a linear regression model, or what would you consider good for an organization? I think you used the word good.
Yeah. Healthy. It’s healthy for the – we could go to healthy for the organization. What could that be? I mean, I actually didn’t get to do enough market research in this section to figure that part out. I was just throwing spaghetti at the wall. But if I were to think about what’s healthy - yeah, it would probably be the traditional ML. Going back to what I was talking about before, fraud detection is one of those, where it’s not really AI; some people might know it as its former term, ML…
I’m telling you, they’re all the same, from a marketing standpoint.
Exactly.
Well, yeah, the waters are too muddied for them to make any actual difference.
That’s right. So what else you got?
Okay, so I’ve got Unsustainable AI, which is way different than Sustainable AI, just so we’re clear… But it’s a whole different sector of the universe that we’re talking about. It’s not like “Oh, it’s just the opposite of sustainable AI.” Unsustainable AI is at peak hype right now. Let’s be honest. If I could swap it out with the AI engineer, it is at peak hype, because this is AI that was built for a product demo, but not for scale. That is unsustainable AI.
Happens all the time.
Yeah. So anything that you see… Basically, we can – hopefully none of these guys are your sponsors… But let’s just cue Devin, or Rabbit, or Humane… All of those are unsustainable AI.
The trinkets?
The trinkets, yeah. That’s true.
So it’s sort of analogous to prototyping software where you’re never intending to grow it into production.
Exactly. So that’s all of mine that I could think of.
Well, I think that was a pretty good list.
I did realize, I don’t know, maybe, maybe related to some of the discussion we had earlier, but… I don’t see Neighborly AI on here.
That’s kind of creepy when you think of that.
I wasn’t creeped out until you said that. But… [laughter]
[00:50:47.27] I had this image of Mr. Rogers’ Neighborhood. Instead of Mr. Rogers, it’s the AI. “Hi, girls and boys…”
Maybe they can help you clean up a few things with their RAGs?
No… [laughs]
Oh, boy.
Well, I was thinking it was like Nextdoor, where it was almost like the voting system, the ensemble, but it was for local LLMs.
Oh, gotcha. Yeah, I realized there’s nothing about vectors or embeddings on the chart. I was just thinking about that.
Actually, yeah, there are no vector stores on here.
Or even just general embeddings of any type.
Wouldn’t that be plateau of productivity now, that we’ve had this for so long that they’re just lexicon, no emotion left in them?
Yeah… What I was thinking is they probably aren’t on there because Gartner also has one of their best products ever, the Magic Quadrant. And that’ll be the next episode that I come and drop in on; we can remake the Magic Quadrant for the different sectors… And I imagine that they have a Magic Quadrant for vector databases.
Yes. That sounds delightful. Yeah. Well, it has been delightful to have you on, Demetrios. I’m glad you brought your various new AI terms to the hype cycle. And now I have some work to do on my Broccoli AI, so…
Incorporate that into your product, for sure. It’s right around there with TRiSM…
It would be a good AI logo, just like a broccoli floret.
Yeah, the broccoli, or the – I saw a great paper that was all about leeks; it was all about data leakage when you send API calls to OpenAI… And the paper started with an emoji of a leek.
That’s awesome.
Like the leeks you eat. And it was basically showing how you send your data to OpenAI, but a lot of other people are gonna get it too if you’re not careful.
Yeah.
Which is one thing that we haven’t really touched on, but that seems like it’s got some hype around it…
Is what?
Data leakage AI.
Data leakage, data poisoning…
I know in my day job that’s a common conversation.
Prompt injection should be there?
Prompt injection, yes…
I guess this fits under TRiSM…
Yeah.
We’re going over TRiSMs right now…
TRiSMs and trinkets.
On that note, that very profound note, it has been great to discuss all the TRiSMs with you, Demetrios.
I’ve had a blast, as always.
And please come back. As usual, give your own hype about the upcoming event before we close out, and where people can find out more about it.
You know, I always feel bad, I come on here and just shill my stuff. So this time, no shilling. I’ve just had a blast doing this with you guys. So if anybody wants to find out about the next virtual conference or in-person conference, they can just google “MLOps Community” and I’m sure it’ll pop up.
Cool. Alright.
Hey, much appreciated. We’ll talk to you soon, Demetrios.
Thanks, man.
Yeah. Thanks, guys.
Our transcripts are open source on GitHub. Improvements are welcome. 💚