Practical AI – Episode #252

Advent of GenAI Hackathon recap

with Rahul, Ryan, Eugenie & Ralph from Intel

All Episodes

Recently, Intel’s Liftoff program for startups and Prediction Guard hosted the first-ever “Advent of GenAI” hackathon. 2,000 people from all around the world participated in generative AI-related challenges over 7 days. In this episode, we discuss the hackathon, some of the creative solutions, the idea behind it, and more.



Read Write Own – Read, Write, Own: Building the Next Era of the Internet—a new book from entrepreneur and investor Chris Dixon—explores one possible solution to the internet’s authenticity problem: blockchains. From AI that tracks its source material to generative programs that compensate, rather than cannibalize, creators, it’s a call to action for a more open, transparent, and democratic internet - one that opens the black box of AI, tracks the origins of what we see online, and much more. Order your copy of Read, Write, Own today.

Changelog News – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today.

Fly.io – The home of Changelog.com! Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more and check out the speedrun in their docs.

Notes & Links

📝 Edit Notes


1 00:05 Welcome to Practical AI 00:30
2 00:35 Sponsor: Read Write Own 01:10
3 01:56 What is Intel Liftoff? 03:19
4 05:15 Origin of Advent of GenAI 02:49
5 08:04 A look at turnout 03:24
6 11:28 Different levels of challenges 06:49
7 18:17 Intel Dev Cloud 06:34
8 24:51 Ease of use 04:08
9 28:59 Intel & open source 00:48
10 29:57 Sponsor: Changelog News 01:33
11 31:42 Event highlights 05:20
12 37:01 Where to find more 02:42
13 39:43 Ralph's thoughts 01:39
14 41:22 Daniel's takeaway 02:25
15 43:48 Thanks to the team 01:40
16 45:28 Closing thoughts 01:29
17 47:03 Outro 00:36


📝 Edit Transcript


Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to a very special fireside chat, held in conjunction with the ongoing Advent of GenAI hackathon, which will also be reposted on the Practical AI podcast. I am very pleased to have been participating in this hackathon as one of the organizers, but I’m also joined here in the fireside chat by an amazing team from Intel’s Liftoff program for startups, who helped organize this hackathon that we’ll be talking about throughout the day. So I’d like to kick it over maybe to Rahul to describe a little bit about what Intel Liftoff is.

Hey, thank you, Dan. This has been an incredible experience. Before I even talk about the Advent of GenAI, two sentences… This was probably the biggest generative AI hackathon that our team has organized, and the submissions, all the different chats and things that we have seen - this has been really awesome. We got a lot of positive feedback, and a lot of notes on things that we need to improve for next time… So thank you for participating in the hackathon. We will be announcing the winners of the final product development challenge in a couple of hours. Before that, let me talk about Liftoff.

So Liftoff is an accelerator program - specifically, a technical accelerator program for early-stage startups. So if you have an idea, you’re a seed-stage startup, or anywhere up to Series B, you want to scale, you want to build some cool things in AI or machine learning - please join the program. It’s free. I would categorize the benefits into three pillars. The first is world-class technical support and expertise. That’s me and Ryan here - we lead the engineering side of things, and we have an incredible engineering team. Some of the folks are here, like [unintelligible 00:03:55.09] Then you get access to technology, both Intel software and Intel Developer Cloud. Intel Developer Cloud is a production-ready cloud, specifically designed for AI workloads. Prediction Guard is one of the startups that came to our program earlier this year, and they are running on Intel Developer Cloud now, using Gaudi 2 accelerators. And I’m sure that many of you folks who have participated in the hackathon will have used Prediction Guard’s LLM APIs.

The third pillar is co-marketing. Once you have built that product and deployed it, the next thing is to make some money, right? So we co-market your startup and your idea through all of Intel’s channels, and we also have a network of accelerators and folks beyond Intel, so we take the product you have built, the company you have built, and market it all over the world.

We also connect you with our sales teams to see if there’s potential for selling the things that were built through Intel channels. It could be a service on IDC, or it could be something separate that one of Intel’s customers is looking for. I would urge anyone who is looking to really bootstrap and accelerate their startup journey to join Intel Liftoff.

Yeah, that’s great. That’s great, Rahul. Could you describe a little bit – I remember the initial discussions between a few of us; you had this idea for the Advent of GenAI hackathon… Now, a lot of people in the audience out there might be familiar with Advent of Code. So how did you start thinking about this Advent of GenAI hackathon, and what was your initial vision for it?

[00:05:43.25] A lot of the geeks on the call, which most of us are, would know Advent of Code. It’s a set of programming challenges, so you can take any programming language you want to learn and attempt to solve these algorithmic questions. I’ve been doing that for many years, and I thought gen AI is something that’s new, something a lot of people are talking about, but there was no fun set of exercises that you could use to learn and also build cool things with the technology that exists today. So I was thinking, why not create a set of challenges that’s tailored to everyone, from a person who might not know how to code, but has seen prompt engineering and created cool prompts to generate images, all the way to people building with LLM APIs?

So we designed a set of challenges to bring in as wide an audience as possible, to introduce them to gen AI, and to help them learn through the process. Many folks I’ve seen in the chats came with just prompt engineering knowledge and have built really cool things, graduating from one challenge to the next. And there has been really good community help as well - people talking to each other and helping each other figure out how to run and build these things… This has been a really, really good exercise. For some of the challenges, when I was building them, I was like “Whoa, I would like to do that”, because we always wanted to add a fun element. So no challenge is dry, or just an academic question; there is a fun element to each of them. That’s where the whole Advent of GenAI came about.

And we want to do this yearly. So next year it might be a new technology; it could be a multi-modality hackathon… We already saw some cool engineering stuff folks were building for this challenge… But it will be something - in December, an Advent or something else - that Liftoff does for the community.

Yeah, that’s awesome. So if you’re listening to this after the holiday season, as this will be coming out on the podcast later - well, you missed out on participating in 2023, but there are going to be more opportunities to participate in things like this that Intel will put on later. So I definitely recommend people keep an eye on the Liftoff program and social media to hear about them.

Could you speak a little bit to the response to this first Advent of GenAI, and the participation that happened? I think we were all a bit surprised at how many people joined in… So could you speak to that a little bit - and anyone else from the Liftoff program - any of your observations about the type of people that joined in, the range of experiences, and all of that?

This has been fantastic. We had grad students; we even had students who are in school, who have taken prompt engineering courses, who just wanted to have fun working on some of the earlier challenges. There have been experts also - some of them experts in LLMs, engineers who have worked on them… And some of the products they’ve built, some of the challenge solutions - it’s like the MVP a startup would build. We have many startups building similar solutions who are taking six months to a year to build a full, solid product… But some of the challenge answers - especially the RAG example, or the Python code explainer - things like that are difficult. And they have even gone further: when we asked “Can you create a storytelling chatbot?”, people created a story-plus-image chatbot - a multi-modality chatbot.

So there are many levels [unintelligible 00:09:22.25] and even some of the folks from Intel participated. That’s also a very positive thing, where you can work on a level playing field with folks from Intel and solve the challenges together. We had folks from Berkeley, folks from many different enterprises participating.

So this has been a mix, and I was amazed at the level of participation. I didn’t expect this many people to participate, and we had to stop registrations after 2,000 people registered for the event. This has been just great. Ryan, do you want to add something?

[00:10:01.14] Yeah, I just remembered - when [unintelligible 00:10:01.05] first called me, I was cleaning up after Thanksgiving in my basement, or something… He’s like “Do you know Advent of Code?” “Yeah, of course, of course.” “I want to do Advent of GenAI.” “Alright, what do you want to do?” We worked it out, and we set the goals. It was like “Yeah, we can get big. A couple hundred people, at least, would be a success… But stretch it - I mean, we could get maybe a thousand even, and do the biggest event of the year.” Well, we ended up cutting it off at twice that - twice the stretch goal.

So then, of course, the entire time we’re like “Oh, man, did we do a good job? Are people gonna like this? Are we gonna get submissions?” And every time - knock it out of the park; way more submissions than we were hoping for, and the quality was excellent. And the amount of people we saw in the chat helping each other… You know, it’s 24 hours a day, so our team was trying to stand in and answer questions as much as possible; we’d set that as a goal. But what we saw, from go, from when it started, was that when people would ask questions, other people would jump in and link them to the explanation, or the documentation, or whatever. These were all the dreams for the event. So it’s been incredible, and I want to thank every single person who was involved.

Yeah. And maybe it would be good - some people jumped into certain challenges and not others, or they might have hopped in at one point or another, or they’re learning about Advent of GenAI as they’re listening to this… So what were some of the challenges that were presented to the participants, and how would you rank them in terms of relative challenge level, or the skill required to complete them?

We designed these challenges with a progressing level of - I wouldn’t say difficulty, but the ability to code, I would say. It’s not difficulty exactly. And creativity also. Because a lot of gen AI - what we see, at least in Liftoff, is that AI has become truly commoditized, and you don’t really need to go through a masters or a couple of courses on neural networks to build an application right now. We have the amazing transformer ecosystem… It’s really easy to integrate some of these things - these AI superpowers - into the applications you build.

Take the first challenge, for example - it was to create a narrative based on a set of images. And if you look at that challenge, at first it looks very easy. You just create a couple of images using Stable Diffusion… It’s all about prompt engineering; you don’t need to know a single line of code. All the notebooks and the models, everything is available on Intel Developer Cloud. You just create a standard account, log in, open Jupyter Hub and play with it. But the thing is, creating a transition from one image to another, and creating a whole story with a set of [unintelligible 00:12:49.20] images - it’s not easy. It’s really difficult. And we even saw some folks creating a comic book generator from this challenge. That’s the sort of imagination, right? That’s what we really wanted people to do, but I didn’t want to say “Okay, please create a comic book generator” as a challenge, because it’s really difficult. Some of the folks, even without being told, built it.
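As a rough sketch of the loop behind a challenge like this one - the helper names are ours, the model ID is just a common public Stable Diffusion checkpoint, and the `diffusers` import is guarded so the sketch runs even without the library; this is not the hackathon’s actual notebook:

```python
def narrative_prompts(style, scenes):
    # Keep one fixed style suffix so consecutive images read as one story.
    return [f"{scene}, {style}" for scene in scenes]

def generate_story_images(prompts, model_id="runwayml/stable-diffusion-v1-5"):
    # Guarded import: the sketch degrades gracefully if diffusers is absent.
    try:
        from diffusers import StableDiffusionPipeline
    except ImportError:
        return []  # nothing to generate without the library
    pipe = StableDiffusionPipeline.from_pretrained(model_id)
    return [pipe(p).images[0] for p in prompts]

prompts = narrative_prompts(
    "watercolor storybook illustration, soft light",
    ["a fox leaves its den at dawn",
     "the fox crosses a frozen river",
     "the fox reaches a warm village at night"],
)
```

The fixed style suffix is the simple trick behind image-to-image coherence: the hard part of the challenge is choosing scene prompts whose transitions read as one narrative.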

If you take the final challenge - that was the Python code explainer, where, given some Python code, you use an LLM to understand the code and give an explanation of it. We had an additional subchallenge there: “Show the source - documentation, or Stack Overflow questions - where I can go and learn more about it.” These kinds of additions make it very interesting and a little bit more complicated: you have to use a vector database, you have to use Prediction Guard’s LLM APIs to get the right model, and you have to design a UI for it, all within the constraints of a Jupyter notebook.

So I’d say there was a progression of difficulty - difficulty is a very relative word, but yeah, there was a progression of difficulty if you’re just coming to GenAI. And the whole idea is that it’s a single package, so even if you didn’t participate in Advent of GenAI, we have released all the resources that we built for it; you can use them to get an idea of what GenAI can actually help you build and infuse into your applications. And you can just go through the different challenges. Or if you’re an expert in LLM RAG-based applications, go to that particular challenge and take a look at it. So it has now become a learning resource as well, not just a hackathon.
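To make the retrieval step of that final challenge concrete, here is a toy sketch: a bag-of-words “embedding” and cosine similarity stand in for a real embedding model and vector database, the three-document corpus is invented, and the actual LLM call (e.g. through Prediction Guard’s APIs) is left out - we only build the augmented prompt:

```python
import math
import re
from collections import Counter

# Toy corpus standing in for indexed Python docs / Stack Overflow answers.
DOCS = [
    "list.sort sorts the list in place; sorted() returns a new list.",
    "A dict comprehension builds a dictionary from any iterable.",
    "zip() pairs up elements from two or more iterables.",
]

def embed(text):
    # Stand-in for a real embedding model: bag-of-words token counts.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # A vector database does this same nearest-neighbor lookup, at scale.
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(code_snippet):
    # Augment the question with the retrieved source so the model can cite it.
    context = "\n".join(retrieve(code_snippet))
    return ("Explain this Python code and cite the source below.\n"
            f"Source:\n{context}\n\nCode:\n{code_snippet}\n")

prompt = build_prompt("nums.sort()")
```

The prompt that comes out already contains the snippet’s nearest documentation, which is what lets the model show where its explanation came from - the subchallenge Rahul describes.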

[00:14:24.07] If I can just add on to that real quick… It did, on purpose, go up in difficulty. Each challenge was supposed to get harder and more advanced, let’s say in terms of coding ability, as we went on. But the real focus was on skills, and understanding the tools that are being used. Within the industry there’s been a huge focus on creating new levels of abstraction to make neural networks easier to use and build with. It used to be very challenging. Now - I mean, not nearly as much. You don’t even need to stand up your own neural network anymore; you can grab an API.

So if you look at the challenges, each one is focused on a different skill. And if you go through all five, you cover prompting, specifically for images - text-to-image; you cover using an LLM API and the different things you can do with it; you cover image-to-image, so image editing with AI, in the third one; and then RAG-based applications for the LLM APIs… And finally, the fifth, which we thought was the most advanced: the code explainer. You know, there are big companies working on that exact problem, betting that a good solution for code explanation and generation is going to unlock many billions of dollars of value.

So all these things are focused on these different skills, and our hope was that for people who are maybe software engineers looking to move over to AI, or students - anybody who’s interested in learning AI skills - by going through the ones they chose, or all of them, by the end they’d have kind of a portfolio of knowledge in their head about the different skills: both an understanding of how to use them and how to do a good job with prompting, but also, by going through all of the code and all the notebooks, an idea of what else they could build outside of the narrow set of applications that we asked for over the course of five days.

I think one thing that impressed me about the set of challenges you all came up with was that it focused really on image generation - coherent image generation - and, on the LLM side, retrieval-based methods, along with chat. These are the things that people are finding the most utility in when they first implement AI solutions within their actual enterprise, industry, startup, or whatever environment they’re working in… In particular retrieval-based methods and RAG systems - at least for our clients, we’re seeing that’s the first thing everybody is building. You have your own company’s set of data; in the case of the challenges you all put together, maybe that’s external Python documentation for the code explainer, or maybe it’s some external PDFs, or YouTube videos, or whatever it is, for the RAG-based solution… But lots of companies have data with this sort of locked-up potential, which is unlocked via these retrieval-based methods - and that’s often what people build first when they adopt this technology.

So I think it was great that you tied that together for the participants, to give them practical skills in that area - what is a vector database, what is a RAG system, how do you implement this with custom data - rather than immediately hopping to fine-tuning a model, which of course you can also do more easily than ever; but there’s a lot you can do just by integrating your own data with retrieval, or other such methods.

[00:18:04.25] I do want to ask in a second about some of the solutions you saw and what stood out to you, to highlight some of those really cool things… But before we do that - so you can have it in the back of your mind - I’m wondering if you could speak to the Intel Developer Cloud specifically? It’s something I’ve found a lot of utility in, of course, but it was also something kind of unique about this hackathon; for many participants this was their first time using it. And there’s a whole lot available there in terms of different ways to run AI models that maybe some people are less familiar with. So could you describe the Intel Developer Cloud a little, and highlight some of the unique ways people were running AI models, beyond just throwing them on a GPU? There’s actually some interesting tooling, hardware, and software available. Could you highlight a bit of that, and the ways people were running AI models throughout the hackathon?

Sure. So Intel Developer Cloud is Intel’s production-ready cloud specifically for AI and machine learning workloads. And of course, when we say AI and machine learning, it’s [unintelligible 00:19:22.01] so many other compute-heavy workloads can run really well on IDC.

So for this particular hack, we provided anyone registering on Intel Developer Cloud as a standard (free tier) user with a shared Jupyter Hub instance, where you get access to Intel’s data center GPUs and Intel Xeon processors. And I would say that for a free tier user, I don’t think any other service provides a system like this. I mean, there are many services with a Jupyter Hub frontend, but the amount of compute, memory, and even file storage that you get on these systems - I haven’t seen a single cloud service provider offering that. And we have seen a lot of people really using it, and giving us feedback on how we could improve it further.

Today on IDC we already have a lot of models and LLMs, and there are tens or even hundreds more local models we are planning to add to boost this - Stable Diffusion models, LLMs, and things like that. Beyond that, for productionizing the workload - Dan, in your case, you’re using the Gaudi 2 accelerators. Those are specifically designed for workloads that require high bandwidth, like LLM and GenAI workloads… I lost my train of thought, but yeah - we have Gaudi 2 accelerators, which we are seeing be incredibly competitive, sometimes outclassing the best out there for particular workloads.

Along with the Gaudi accelerators - those are specifically designed for GenAI and AI workloads - we have general-purpose GPUs, the data center [unintelligible 00:21:01.09] in both 48-gig and 128-gig versions. So folks in the hackathon actually used both, plus our fourth-generation Xeons - the latest Xeons we have - which specifically accelerate your machine learning workloads. We have dedicated instructions in the CPU that can sometimes even take your workload to 2x the performance you got in an earlier generation.

It’s all about making the CPU as efficient and as fast as possible, while still maintaining the general-purpose utility of a CPU. But then, like I said, the data center [unintelligible 00:21:42.08] a little bit more generic solution, where you can run your AI workloads, [unintelligible 00:21:48.06] HPC workloads. For each of these machines, when you’re productionizing, you get a VM, or an 8-node [unintelligible 00:21:56.22] system - there are also clustered systems available. Then come the Gaudi accelerators - there are single-node machines, and also clustered machines if you want to do pretraining or big fine-tuning; all those cool things. And soon we’ll have a Kubernetes service, object store, file store - all those things are coming… So it’s gonna be great.

[00:22:17.29] What I see is that if you’re building a startup, it would be very difficult to find a performant accelerator cloud like IDC out there. I’m sure there are different hyperscalers… But for startups specifically, from my personal experience, this is a really, really awesome solution.

I’d like to hear more from you, Dan. You were one of the first customers of IDC, right? What were the things that really made you decide to choose IDC - the performance, and also the team side?

I appreciate that, and I appreciate the support that you all have given. I think it’s interesting maybe for people out there that are less familiar with the various options for model deployment to understand that there is really good tooling. Like you say, whether it’s optimizing a model and deploying it on a CPU for an edge environment, or just a cheaper inference solution… Or it’s like all the way to these Gaudi 2 processors that we’ve been experimenting with. I think there’s a lot of interesting and approachable tooling for that.

So I first came across some of the tooling around Gaudi 2 by seeing posts on the Hugging Face blog about Gaudi 2 - and I think at the time the BLOOM model, which is a very large model - running it on either a single accelerator or spread across eight accelerators, with really high throughput on the inference side, using tools like Optimum Habana. So for those of you out there wanting to explore things: if you look up the Hugging Face Optimum library, there’s a lot of great tooling you can play around with there - and not only for Gaudi, but for other processors too, whether that be CPUs, GPUs, the Gaudi 2 HPUs, the Data Center Max GPUs… Optimum kind of provides you a way - if you can visualize, maybe you’re writing your code and importing a model from Hugging Face with AutoTokenizer, or AutoModelForCausalLM, or whatever it is - with Optimum, a lot of times you can just do a couple-line replacement and swap in the Optimum version of those classes, or do some wrapping of the various models with optimizers. And this allows you to run your model very fast on a wide range of architectures.
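The couple-line swap Daniel describes can be sketched roughly like this. The `OVModelForCausalLM` class name follows the optimum-intel (OpenVINO backend) docs; the function wrapper and the graceful fallback to plain `transformers` are our own additions, so this is a sketch, not a definitive recipe:

```python
def load_causal_lm(model_id, backend=None):
    # Sketch of the "couple-line replacement" pattern: same checkpoint,
    # different loading class. Class names follow the optimum-intel docs;
    # the ImportError fallback is ours, so the sketch still works when a
    # given backend isn't installed.
    if backend == "openvino":
        try:
            from optimum.intel import OVModelForCausalLM
            # export=True converts the checkpoint into the backend's format.
            return OVModelForCausalLM.from_pretrained(model_id, export=True)
        except ImportError:
            pass  # fall through to the stock Hugging Face class
    from transformers import AutoModelForCausalLM
    return AutoModelForCausalLM.from_pretrained(model_id)
```

Calling `load_causal_lm("gpt2", backend="openvino")` would then serve the same checkpoint through the optimized backend when available, while the rest of the generation code stays unchanged - which is the point of the drop-in design.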

So to your point, Rahul, I think one of the things we’ve found really useful is the ease of use - coming in and saying “Okay, we have this stuff running on a GPU. Let’s try it on these various other architectures.” I remember maybe two or three years ago trying to do some of these model optimization things for edge deployments. It was very, very challenging. A lot of times I would try to optimize a model - at the time I was working on speech models and other things - and it just wouldn’t work, because operations weren’t supported, or something like that. But this tooling - and it’s cool, because Intel is working directly with Hugging Face on it - the ease of use has been ramped up drastically. And we’ve been applying it with really good results, particularly for LLM inference. So that’s been a key feature, seeing that change happen.

[00:25:48.07] That’s really awesome to hear, Dan… And particularly the thing you mentioned. You can think of Intel in two ways. One is probably the biggest semiconductor manufacturer, with the coolest chips… The other is that Intel is also an open source software company. We contribute to almost all the big open source projects - the Linux kernel, and most everything else. We would be somewhere in the top three contributors to PyTorch, TensorFlow, Hugging Face - any sort of open source solution out there, we work really hard to make sure that your adoption of the technology is as easy as possible, and we try to upstream as much as possible to the core PyTorch or TensorFlow libraries, and things like that.

In cases where we feel there are further optimizations that could be done, but that cannot be upstreamed to the mainline repositories within a couple of months, we release extensions as well.

So for example, if you take out-of-the-box PyTorch and run it on a CPU, you already get a lot of performance, because Intel’s neural network acceleration library, oneDNN, is powering a lot of those operations when you’re running on a machine with an Intel Xeon, for example. But if you want to go a little bit further, we have things like Intel Extension for PyTorch, where with one line of code - essentially, it’s an Intel [unintelligible 00:27:08.24] the model - we add further optimizations to run it as fast as possible.
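Rahul’s “one line of code” can be sketched like this. `ipex.optimize` is the entry point from the intel-extension-for-pytorch docs; the wrapper function and the try/except fallback are our own, so the sketch still runs where the extension (or its hardware) isn’t available:

```python
def optimize_for_xeon(model):
    # Apply Intel Extension for PyTorch's one-line optimization if present.
    # `ipex.optimize` is the documented entry point; the fallback is ours.
    model.eval()  # optimize for inference
    try:
        import intel_extension_for_pytorch as ipex
        return ipex.optimize(model)
    except ImportError:
        return model  # stock PyTorch still hits oneDNN kernels on CPU
```

A typical use would be `model = optimize_for_xeon(model)` right after loading any PyTorch `nn.Module`, leaving the rest of the inference code untouched - the same drop-in philosophy as the extension itself.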

We are also working on upstreaming whatever is possible to the PyTorch mainline. So the thing you mentioned - it’s very important to work with the community and enable the software the community uses, rather than having a completely different architecture - something that’s sometimes closed source - and working on that. That’s not the way Intel thinks. Even the whole concept of oneAPI, heterogeneous programming - everything about it is open, and other vendors can come in, add their accelerators to oneAPI, and use the oneAPI standard… The idea is that if you’re writing code for a CPU, there should be minimal to no changes required to run it on another accelerator. That’s the philosophy we’re working with overall, and the oneAPI architecture sits underneath all these acceleration libraries.

Optimum Habana - yeah, we’ve been working very closely with the Hugging Face team. Almost all LLM models work out of the box. There are models we have tested and benchmarked that are available on GitHub, and things like vLLM and our inference support - all those things are enabled through Intel libraries; for example BigDL, and things like that, giving a higher-level abstraction beyond PyTorch. Because when we talk to startups these days, we feel that PyTorch is considered a low-level library now… And that’s a little bit funny for folks who have worked in [unintelligible 00:28:39.19] or even before that, in 2016 and ’17, coming from the early days of TensorFlow, to see PyTorch go low-level, with [unintelligible 00:28:51.14] higher-abstraction libraries working on top of it. It’s really an exciting time to be in and work with it all.

Yeah, for sure. And I also want to highlight, in addition to open source code, it’s been cool to see Intel recently release NeuralChat, a fine-tune of a Mistral model, which is openly accessible on Hugging Face and permissively licensed. We’ve been experimenting with it, and we saw it used in the hackathon… It’s cool to see - a couple of these models came out right before the event. NeuralChat, the fine-tune of Mistral, came out maybe a week before the hack, and Notus, another fine-tune on Mistral, came out a few days before. Both were being used in the hackathon, which I think demonstrates people’s ability to rapidly adopt this new stuff as it comes out.

Well, I do want to make sure we have time to highlight a couple of cool things… So what were some of the highlights for you in terms of solutions you saw, methodologies you saw, or just cool things you didn’t expect? What stands out in your mind?

I’ll start quickly, and then let Ryan talk about the [unintelligible 00:32:01.22] So we’ve both been spending - I mean, Dan, you were also there - every day going through these submissions… And it was very difficult to figure out which was the best submission, because each time we thought “This is the best”, we’d look at another one and go “Oh, my God. This is incredible!” Even in the first challenge, the quality of the image creation… I was surprised - how can you even create these kinds of images with just the models that were there? The time that was spent on prompt engineering, and even using custom models to combine these images and create these solutions.

The other thing - there were a few really interesting RAG examples: taking YouTube videos, parsing the audio, building a YouTube search… That was something that stood out to me. And for the Python code explainer, there was a submission that came in maybe three hours - the first iteration of that person’s submission. That solution can do Python explanation, but also give references to exactly where the model got the information from. Really, really good use of RAG and LLMs. Ryan, what are the things that stood out for you?

What always stands out is when somebody - the Jupyter notebooks, which [unintelligible 00:33:18.20] had put together - I think they’re really well designed as learning activities; you could get something out of them at school just by going through them. And so people used those to do a lot of amazing work. And I was stunned by the quality.

But what always stands out to me is when somebody takes the concept, takes what’s in there, and then runs with it. And we saw people setting up AI agents for some of these challenges, like the comic book generator… And the fifth challenge, where code explanation comes in - it’s like “Oh, explainability - listen, we’re going to do the explanation, and the model will then cite the sources it’s using.” Which made me think - you know, in years gone by, people were very concerned with explainable AI, and what they always meant was “Well, if the model is making a recommendation, or classifying something in such a way, we should be able to figure out exactly why.” And so there were all these discussions of how best to do that - “Oh, you can use [unintelligible 00:34:19.17] values”, whatever.

And I think what it turns out is - well, now that we have gen AI and we have retrieval-based methods, it’s like, “Just ask.” Okay, so this is your explanation - where did that come from? And we see the citing of sources.

So that creativity, not just in application, which was astounding, but also in people bringing in cutting-edge methods from beyond what we even included in the notebooks - that always blew me away. And there were some people that just always ended up in the top five. [unintelligible 00:34:52.29] I don’t know if I’m pronouncing that correctly, but I actually reached out to him, because it was like “What do you do for a living? Your work is incredible.” Because there are so many. Simon’s team…


Pranav, yeah… I haven’t sat down to compile a list. You know who you are, because you can go back to each winners’ post that Rahul made and find those names. And that’s something we’ll probably follow up on with a blog, and maybe by reaching out to some of these folks to be on a podcast, or to talk to us, or whatever.

I would also like to highlight our youngest participant - I think he might be on the – yeah, I see his name, [unintelligible 00:35:29.22] who is a middle school student, and who owned us every day around the time we were supposed to be posting a video, with the same skeleton meme - waiting, tapping his fingers, like “Patiently waiting for this video that was supposed to be here five minutes ago.” That was a wonderful part of the hackathon for me.

Yeah, even the thing you mentioned, the Python explainer… I mean, there were submissions where it was “Okay, I have explained the code. Now click this button to optimize the code, and I’ll give you an optimized version of the solution.” Taking the challenge in spirit, and not just in words, and going beyond it… Incredible work.

[00:36:13.14] It’s truly – I really feel generative AI and its commoditization have helped a lot more folks, who might not otherwise have been here, to do this kind of AI work. Really democratizing the solution. All the tooling, the API-based approach, for example, from Prediction Guard… The Hugging Face ecosystem. Making it all so easy to use.
And one thing - when Dan mentioned that people were using NeuralChat through the LLM APIs - that was because of Dan’s incredible team adding these models and scaling them in a matter of hours. So it’s still a challenge to deploy and scale this… But you have an incredible team, Dan, and they were also participating in the conversation.

Thanks. Well, I definitely think so. I appreciate that. And speaking of where people can find out more about some of the specific submissions, even seeing some screenshots, some code that people generated… Eugenie, do you want to comment – you’ve created some amazing blog posts already, and I think there’s more in the works… So do you want to just describe to those listening where they can find out more about some of the solutions, and maybe also where they can keep tabs on future events and things coming through the Liftoff team?

Thank you, Daniel. So we have already posted three blog articles on our landing page. And I just want to also give you some insights: as I reviewed the top submissions and the other honorable mentions, I had a look at the profiles of the developers… And it is a really exciting mix across regions. And as you already said, there are students, individual developers, founders, software engineers from big companies… But I also saw very active software developers from Intel. This is very interesting, because - I mean, indeed, Intel Liftoff is targeted more at startups, but it was a very diverse portfolio of developers, from across regions. It’s really amazing, because in our Slack channel for the hackathon we saw messages around the clock, with submissions and questions, because of this diversity. So it’s really a global hackathon for the end of the year. We are very proud of it.

And we will post articles about each of the challenges, including the last challenge, with its two-day development sprint. Right now you can read three articles. Not only the announcements of winners, but also their own comments and the results of their work - that’s what you can find in these blog articles.

Awesome. Thank you so much for your work on those. It was cool to see the traffic coming in, basically, all day and night, which was awesome… And it’s hard to sleep while all of this cool stuff is going on. So as we draw to a close here, I want to kick it over to Ralph, who leads the Liftoff program, and just get any sort of final thoughts. What did you think of this whole process? What were you encouraged to see, and what are you looking forward to in the new year in terms of things related to generative AI and Liftoff?

[00:40:07.06] Hello, everyone. It’s me, Ralph. I’m sorry for the noise here. That’s why I was on mute probably all the time, because at the office there is some kind of year-end party going on. So yeah, I was completely amazed by what happened during this hackathon, and I’m very grateful to the team, starting with Rahul, the rockstar developer of this hackathon. And also, thank you very much to you, Dan, for supporting this, for really running this with us… And then Ryan, who is the second rockstar developer here, and of course, Eugenie, who made it all happen.

And so I really look forward to the impact we can make in the developer ecosystem, in the AI developer ecosystem… I really look forward to what’s going to happen next year, and we want to have a share in what the future might bring us. And I can tell you, the Intel Liftoff team is ready for whatever comes in the startup world. So yeah, see you next time, and it’s great to have you all here.

Thanks, Ralph.

Awesome. I mean, we co-created this together with Prediction Guard from day one. We had meetings on how to do this, and on what we needed to do. Dan, for the folks who don’t know about Prediction Guard, do you want to introduce it? What was your experience working on this hackathon with us? And what are the things we need to do next? And further, we need to do it big; 2,000 is now sort of our baseline. So next time we do maybe 4,000 people.

It’s pretty big…

Yeah. So what’s your take on it, Dan?

I think one takeaway is that when you do a hackathon with Intel Liftoff, you’d better be ready to scale your servers. So we’ll take that takeaway for next year, when it’s 10,000 people participating, I’m sure… But yeah, it’s been great. One of the things, like I said, is we really appreciated actually interacting with people creating practical solutions with LLMs. That’s what we’re about at Prediction Guard. And seeing people actually apply some of the latest models, like NeuralChat, Notice, Zephyr, [unintelligible 00:42:27.16] WizardCoder… Seeing them actually access these things, and even combine them together in unique agents - that gave us such encouragement, to see people actually fulfilling this vision that we have, which is providing these open, privacy-conserving hosted models to people, and them combining them in unique ways to create real enterprise value. That’s what we’re excited to see, and to do it in a way that is actually trustworthy.

Intel, of course, has a great history with security and privacy, confidential computing… But to be able to partner together and see people creating trustworthy, privacy-conserving, and scalable solutions with LLMs in this environment is really encouraging, I think, for the future of AI… Because as we’ve seen even over the past week, with Mixtral being released, and StripedHyena, and all of these models… The open models are just getting better and better, and providing ways for people to access them in a scalable way and build real solutions - yeah, it’s really exciting to see that happen in the industry. So thank you for hosting this, and making it happen. It was a great experience.

Good. And thank you to the entire team: Scott, Ralph… The team that I talked to daily. Ryan, we practically talk every hour. Eugenie, and the whole engineering team at Intel Liftoff. [unintelligible 00:44:01.17] Raj, you guys are incredible. And all the teams on Dan’s side also. Being in the Slack channel, and answering questions… All of us had reservations, but we kept them to ourselves. We didn’t know how it was going to go… But everyone pitched in with really cool ideas, and with the mindset to help… And that really shows in the community; all the messages we get… We had messages where folks were saying “Now I can take this thing to my boss and tell him ‘I need to implement these sorts of things in our day-to-day work.’” And it is really, really gratifying to see that.
And next time around we’ll fix all the shortcomings. We’ll do a careful internal review of any shortcomings - I’m sure there are some - and fix them. Bigger, better, more scalable, cooler challenges… We want to continue this and grow this community.

So any sort of feedback – Eugenie, I’m sure, will be sending [unintelligible 00:44:56.06] I know it’s very difficult to answer any sort of survey; it’s easier to delete that email… But we would really appreciate it. I personally would really appreciate your feedback on what we can improve, what we could add, and how to make this a more community-driven effort. At Liftoff we don’t really like the top-down approach. We really want your feedback, and the things that you want to see, so we can build around that. So thank you once again.

Yeah, thank you all. Closing out here, I just want to encourage you to not only keep an eye out for hackathons - all of you who are building amazing startups, and I know many of you who were part of the hackathon are… They may be too humble to say it, but this Liftoff team is doing amazing things. And speaking as a startup that’s participating in it, your startup should join Liftoff and reach out to them, because you’ll find amazing benefits, scale, and access to expertise and hardware. So reach out to the team. They truly are rockstars, like Ralph said… Reach out and get involved in the program, and in the community.
With that, we’ll close this Advent of GenAI out. We’ll give you the last word, Rahul.

Alright. Yeah, I forgot to mention one person - Kelly. I don’t know how I forgot. She has been incredible, starting from the website, creating the content, editing the videos. I mean, she was sick while she was doing it - she had to take a few hours off - but the pace at which she was able to help us was incredible… So thank you, Kelly, for doing that. I’m sure we’ll be doing many more of these things. Again, to the entire team - if I missed anyone, I’m really sorry… But this was truly a team event. Everyone contributed, and without all the small contributions, this would have just been an idea. So thank you all for doing that.

Thanks, everybody.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
