Practical AI – Episode #242

Deep learning in Rust with Burn 🔥

with Nathaniel Simard


It seems like everyone is interested in Rust these days. Even the most popular Python linter, Ruff, isn’t written in Python! It’s written in Rust. But what is the state of training or running inference on deep learning models in Rust? In this episode, we are joined by Nathaniel Simard, the creator of Burn. We discuss Rust in general, the need to have support for AI in multiple languages, and the current state of doing “AI things” in Rust.


Sponsors

Neo4j – NODES 2023 is coming in October!

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Notes & Links


burn-rs: This library strives to serve as a comprehensive deep learning framework written in Rust, offering exceptional flexibility.

Chapters

1 00:00 Welcome to Practical AI
2 00:35 Sponsor: Neo4j
3 01:43 Nathaniel Simard & Burn
4 03:02 What is Rust?
5 04:53 Go & Rust
6 05:52 Not so low-level
7 07:18 Rust features
8 09:43 Rust workflow
9 13:33 The Rust community
10 14:45 The reason for Burn
11 17:17 Rust ecosystem
12 18:42 Challenges in Rust
13 21:04 Low-level advantage
14 22:28 Current Burn support
15 23:38 What are people using Burn for?
16 25:15 Organizing Burn development
17 26:26 Pushing the limits
18 28:50 Burn features
19 30:28 Versatile backends
20 31:51 The Burn book
21 32:42 Where users come from
22 34:01 Newbie PoV of Burn
23 36:16 Future of Burn
24 38:05 Getting started
25 39:16 Thanks for joining us
26 39:45 Outro (Changelog Beats!)

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another episode of Practical AI. This is Daniel Whitenack. I am the founder at Prediction Guard, and I’m joined as always by my co-host, Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

I am doing very well today, Daniel. It is fall weather out, and I’m enjoying getting outside.

It’s fall, it’s raining here today.

Yeah, it’s a little cloudy out, but I’m enjoying this nice weather. So it’s like, part of me wants to stay inside and do the fun things, like especially about what we’re going to be talking about today, and part of me wants to get outside and enjoy the weather…

Well, it’s that time of year where you just want to curl up next to a fireplace and Burn some firewood.

Oh, my gosh, you took us right there… I’ll tell you what - before you say that, I’ll just say, this is an exciting episode coming up, because I think this is a little moment where we’re going to talk about our industry maturing a little bit through one effort, and with that said, I’ll let you go ahead and do the intro.

Well, the connection to Burn is because Burn is a deep learning framework that’s built in Rust, and today we have with us the creator of Burn, Nathaniel Simard. Welcome, Nathaniel.

Hi. Thanks for having me.

Yeah, well, I admitted to you before the episode that I am basically uninitiated as far as Rust goes. I’ve looked at various articles, I think that I’ve run Rust programs just in a sort of Hello World sort of way… Probably my biggest use of Rust has been via the Python linter called Ruff, which is really great… So that’s kind of a circular thing, but for those others out there in our audience that might not be as familiar with Rust as a programming language, could you just tell us a little bit about what is Rust, and why Rust?

Yeah, Rust I think is falsely categorized as a low-level programming language, probably for historical reasons, but it’s a very general programming language that can be used for high-level stuff as well as low-level stuff. So the main reason to use Rust is maybe when you need to go through multiple abstraction boundaries without having to pay for it in performance. So yeah, that’s how I would define it.

And I could be wrong about this, but I think one of the great features, along with Go having a really great mascot, we’ve got – isn’t it a crab? If you see crabs, or something, for Rust… Isn’t that a thing?

Yeah, I think it’s a cute crab.

It’s a cute crap.

That’s the mascot, yeah. [laughter] I think it’s important for a programming language to have that.

You have Python for the snake. With this programming language we’ve got – I don’t know what it is for Go.

It’s the gopher. The Go gopher. Yeah, it’s quite nice.

Yeah, the gopher.

It’s funny you mentioned that… Go is actually how Daniel and I got to know each other. We met in the Go programming language community… And we were kind of the two data-oriented people at the time; this is going way back… There are many, many data-oriented people these days. But we got to know each other. Subsequent to that, I had been hearing about Rust for a while, and I got very interested in it, not only for – because as you pointed out, it’s a fantastic general-purpose programming language all around, but it also does have a lot of really amazing low-level features and performance capability that attracted me to it. So I’m not nearly as accomplished in the language as you are, Nathaniel. I still love Go, but Rust is now another programming language that I have fallen in love with.

Yeah, I think Go is really well-suited for web services. We’ve got a lot of tooling around that; it’s very pragmatic to use it for that stuff. So yeah, Rust is getting there, but we’ve got the whole async story behind that.

[00:05:53.11] And for Rust itself, you mentioned people have this stereotype of Rust as a low-level programming language, but could you give maybe some examples of the types of things either you’ve built in Rust over time, or that are possibilities, just to kind of give people a sense of what people are doing with the language? Obviously, we’re going to be talking about deep learning, which is, thanks to you, something that can be done with the language… But what are some of the other things that are out there that people are doing right now with Rust?

Well, I think it was first created as a replacement for C++ to write browser engines. So this is maybe why it was known as a low-level programming language. But now I think it’s used in game engines. It’s also used to do web frontends. So you’ve got like [unintelligible 00:06:40.13] which are frontend libraries, like React and Vue. So this is pretty high-level. We’ve got also command line libraries that you can use, with meta programming, so that it’s very easy to do your command line arguments, all of that kind of stuff… So yeah, there are tons of things that are built with Rust, high-level and low-level, so you can mix and match in your own applications. Of course, there are the web services, with Tokio, [unintelligible 00:07:06.27] If you want to do web services, there are also libraries for that. Yeah, these are the projects that are top of mind.
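For readers following along in text, here’s a minimal sketch of the command-line “meta programming” Nathaniel mentions. No specific library is named in the conversation; clap is our illustrative choice, and the struct and flags below are invented for the example:

```rust
// A minimal CLI sketch using clap's derive macros (clap 4.x).
// Requires `clap = { version = "4", features = ["derive"] }` in Cargo.toml.
use clap::Parser;

/// Train a model from the command line (hypothetical example).
#[derive(Parser, Debug)]
struct Args {
    /// Path to the training data
    #[arg(long)]
    data: String,

    /// Number of epochs to train for
    #[arg(long, default_value_t = 10)]
    epochs: usize,
}

fn main() {
    // clap generates parsing, --help text, and error messages from the struct.
    let args = Args::parse();
    println!("training on {} for {} epochs", args.data, args.epochs);
}
```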

It was one of the first languages that really embraced WebAssembly, and got it out there… It’s interesting, speaking as kind of a novice in the language, and coming from most recently Go, there’s always this debate on Go versus Rust that you tend to see in articles out there… And I’ve really found room for both of them, and I go back and forth at this point. I will point out, whereas Go is one of those languages that has a runtime that manages memory for you, Rust has a really cool feature – it’s not specific to what we’re talking about today, but the compiler ensures that you don’t have memory faults, seg faults, which account for something like 70% of security bugs in software, according to Microsoft. And so it has a really interesting way of ensuring that you can produce bug-free software, or at least software with far fewer bugs. So it’s a pretty cool language. I’m just curious, as we’re talking about the language in general, what’s your favorite feature, or what are some of the things that made you turn to Rust, versus some of the other languages you may have worked in?

This is hard, to just choose one feature… I think it’s the whole package.

You said it like a real Rust aficionado there.

Yeah… But my favorite feature is not the reason why I started writing in Rust… But now I think my favorite feature is associated types, because they let you abstract over data types – something that is really hard to do in other languages. So yeah…

And could you explain a little bit of when might that be useful, or how is that useful in terms of like when that might come up in your programming?

It’s when you need to abstract the type you’re going to use, but you let the implementation decide the types. Normally, you have generics; with generics, maybe you have a list, and you have to say “Okay, I want a list of strings.” It’s when you use the list that you decide the type… Whereas with an associated type it’s “Okay, I’ve got maybe a list, but I don’t know of what.” It’s the implementation that decides what the list is going to be of. So sometimes it makes sense. For instance, in Burn we’ve got the backend, the backend [unintelligible 00:09:24.26] for which we can have multiple implementations, like CPU/GPU, and we have associated types for the memory, for the memory [unintelligible 00:09:31.27] for all those things that you can manipulate at a high level, but you don’t have to know which type it is. It’s up to the implementation to decide.
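To see the distinction in code, here’s a minimal sketch. The `Backend` trait below is an illustration only, not Burn’s actual definition; it just shows the implementation, rather than the caller, picking the concrete types:

```rust
// With generics, the *caller* chooses the type:
fn first<T>(items: &[T]) -> Option<&T> {
    items.first()
}

// With associated types, the *implementation* chooses.
// This trait is a made-up illustration, not Burn's real Backend trait.
trait Backend {
    type Device;
    type FloatTensor;

    fn default_device() -> Self::Device;
}

struct Cpu;

impl Backend for Cpu {
    type Device = ();            // this backend decides: a trivial device handle
    type FloatTensor = Vec<f32>; // ...and a plain Vec as its tensor storage

    fn default_device() -> Self::Device {}
}

fn main() {
    let xs = [1, 2, 3];
    assert_eq!(first(&xs), Some(&1)); // caller picked T = i32
    let _device = Cpu::default_device(); // implementation picked Device = ()
}
```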

[00:09:44.08] I’m just going to ask maybe an ignorant question, but I think maybe some people out there might be wondering it… If I’m working in Python, this is a language where I don’t have to compile my Python code. Some of the things that we’re talking about here with the compiler, and other things, a lot of people don’t think about, although there’s some intersection with that… So could you describe, like when you’re writing a Rust program, what does that look like in terms of “Is it a statically-typed language?” You were talking a little bit about type there… It sounds like you talked about a compiler, so am I right in that it’s a compiled program, and then you can run the binary on some architecture? What is it like to work in Rust as compared to something that people might be very familiar with, like Python, where a lot of people that are probably listening to our episode or have their Google Colab notebook pulled up right next to them, and they’re doing all sorts of things with the Python interpreter - what is the workflow and programming like in Rust as far as how the language is set up and how you work with it?

Obviously, it’s a bit different than working in a notebook. Like you said, it’s a strongly-typed, statically-typed programming language, similar to C++, Java, all of those older languages. So for people that come from Python, maybe you’re aware of the Python type hints that you can use. It’s a bit like that, but you have to use them everywhere, in all of your functions and definitions. And the workflow… Something that I like about Rust – I think it’s one of the only programming languages that does this – is that when you write a function, you can just write the test right below it. So that’s a way you can get some feedback on what you’re actually writing. And it encourages good practice, because you’re writing a test that can be reused all the time; it’s not a script that you’re trying to just run on the side. You can actually commit that, and it describes how the code should run, and that’s how you get interactivity with this. And since you have a package manager, which is Cargo, it’s pretty easy to just execute the code directly.
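As a quick illustration of the “test right below the function” workflow – a minimal sketch; `cargo test` picks these up automatically:

```rust
/// A tiny function with its tests living right underneath it.
fn relu(x: f32) -> f32 {
    if x > 0.0 { x } else { 0.0 }
}

// Compiled only when testing; run with `cargo test`.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn clamps_negatives_to_zero() {
        assert_eq!(relu(-2.0), 0.0);
    }

    #[test]
    fn passes_positives_through() {
        assert_eq!(relu(3.0), 3.0);
    }
}
```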

To follow up on that, Cargo the package manager is based on a lot of the best practices we see in some of the other programming languages… For instance, in JavaScript, in the Node community, you have Npm, and there are several others… And the Rust community really drew from kind of best practices on that.

Another thing, to kind of follow up on the compiler notes that Nathaniel was mentioning, was a lot of Rust developers kind of see the compiler almost as a pair programming partner, in a sense, to where instead of just hitting compile from time to time, like you would in Java or something like that, the compiler is so comprehensive that it kind of helps you, and you kind of use it to write the right code, and you get to the end of the process and know that your code will actually work without runtime errors. So it’s a different way of thinking about being a developer. It takes a little bit of a mind shift to adjust over to it.

This is very different. In Python, an important skill is just being able to read the stack trace, because you’re going to have a lot of exceptions when you run your programs, and you have to learn how to debug your program. This is kind of a hard skill you have to develop when you learn Python. In Rust, you have to learn how to read the compiler errors. But they made – at least they tried to make it as easy as possible… Sometimes you’ve even got links to the documentation; it opens a browser, you can read why you have that error; it explains the reasons why. So this is a different set of skills, and yeah, this is quite different from the workflow you use with Python.
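To make the earlier memory-safety point concrete, here’s a minimal sketch of the kind of bug the compiler rejects. Ownership rules catch use-after-move at compile time – a compiler error to read, rather than a runtime crash to debug:

```rust
fn main() {
    let weights = vec![0.1_f32, 0.2, 0.3];
    let moved = weights; // ownership of the Vec moves to `moved`

    // Uncommenting the next line produces a compile-time error
    // (error[E0382]: borrow of moved value: `weights`), not a seg fault:
    // println!("{:?}", weights);

    println!("{:?}", moved); // fine: `moved` is now the sole owner
}
```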

Maybe just one more question about Rust in general, before we dive into some other things… What is the Rust community like – are there active channels where the Rust community communicates with one another, conferences, meetups? And is it growing? How is it changing over time? As you’ve been with the language for some time, how has it developed in the time that you’ve been part of the community?

[00:14:07.03] I’m not sure about all of the community, obviously, but I think it’s pretty friendly. There are some Discord channels where you can just go and ask your questions if you want to. There’s an active GitHub issue tracker, since the language is open source… If you have a problem, just open an issue and people maybe are going to help. So this is a pretty inviting community, I think. This is part of the reason why it succeeds, I think. Because if you don’t answer questions, if you don’t help people use your technology, it doesn’t really work out. I haven’t been to a Rust conference yet, but I know there are many, so maybe I’m going to go to some later.

You know, one of the topics that has been recurring between Daniel and me over a number of episodes – we’ve been tracking the maturity process of the AI community, and kind of what it takes to level up, and to take it to the next level… And on a number of different occasions, we’ve talked about the fact that if you look at other communities that have arisen before this one, often it takes kind of broad support. Whereas in the early days, that we’re really still in, in my view, of modern AI, it has been largely dominated by a single programming language, which most of our listeners are very aware of, which is Python… Which has really been kind of the focus of where all the work is. That’s where all the APIs have been focused, and everything. And we’ve discussed quite a bit about how for AI to mature, it needs to become more broadly available to other languages, so that you have different types of use cases addressing different business needs. And that requires languages other than just Python all the time. How do you get to AI, and what kind of bridging do you need to do?

It leads me, Nathaniel – I wanted to ask you: it’s clearly a need that the community has had, to be able to start getting Rust and other languages in there. I’m curious, how did you approach this? What was it about trying to get Rust working as a framework that could work with the AI tools of the day? How did you get into that? What was your motivation? What did you see as the need, at a personal level?

Well, I started working on Burn because I was experimenting with asynchronous neural networks, and I wanted to make something a bit… Not standard, let’s say that. And I needed multi-threading, concurrency, and stuff like that… And it was really hard to do with Python. And I had a software engineering background, so I said to myself, “Well, if it’s hard for me to do that, then maybe it’s too hard for any researcher to do that. So that’s why maybe we don’t have yet an architecture for that kind of stuff.” So I said, “Well, let me try and make a framework in a language that has support for high-level programming, and concurrency, and all those things.” And yeah, it’s pretty much the description of Rust. So that’s why I started writing a framework in this language. And then it just was a personal project for a long time, I just was experimenting with it, and yeah, it grew with time.

When you first started thinking about Burn and these problems that you were looking at, what was the current support for doing – whether it be kind of, quote-unquote, traditional machine learning – random forests, SVMs, whatever – all the way up to kind of deep learning in Rust? What was kind of the state of things? I’m looking at your Burn repo and I see you’ve at least been submitting pull requests since July of 2022. I’m sure some of it goes back further than that. So back in those days, what did the ecosystem look like in terms of its support for these things?

[00:17:59.00] Well, I don’t think there were a lot of deep learning frameworks in Rust. There were some experiments, but nothing really pragmatic that you could use. I think there was a library for normal, like, SVMs, random forests in Rust. I never used it, but yeah, I don’t think it’s comparable yet to scikit-learn and PyTorch, which are very complete.

It’s interesting, because some of the sort of early stuff that we were doing in Go – well, it was similar there. There were certain packages for, whether it be kinds of regression, or hypothesis testing, statistical things, but not really a robust deep learning framework. One of my questions would be – in Go, I know one of the struggles with trying to support really robust deep learning is not necessarily the fact that you can’t create a nice package with a good API, but that a lot of these sort of specialized libraries and toolkits, like CUDA and GPU support, make things a little bit more difficult. So it might not be that, but what did you see at the time you started working on Burn as the big challenges on the Rust side? And has that been the case as you developed the package, or have other things become the kind of dominant challenges over time?

Yeah, all of those things are hard to work with, like CUDA, having your own GPU kernels, all the drivers… Not necessarily easy to install on all platforms… There are GPU libraries in Rust, that [unintelligible 00:19:44.22] kernels. This is like wgpu, so it’s targeting the web… But when I started working on Burn, I acknowledged that it was pretty important to be generic over the backend, so that we can write the best backend for the specific hardware you’re actually targeting. Because it’s probably always going to be faster to write CUDA for NVIDIA, to write low-level C, or Rust, maybe, with [unintelligible 00:20:13.12] support for CPU. Or to write with the Metal graphics driver for Mac.

So I was aware that one backend cannot be written for all of them, and I just defined the API, and I used LibTorch as a backend, because there were already bindings to LibTorch in Rust. This allowed me to iterate on the abstraction, on the user-space API, and not necessarily worry about speed and writing all of the kernels – just getting the abstractions in place and the software architecture in place. And it’s more pragmatic. It’s probably as fast as LibTorch by default, and then I can just go and write more kernels afterwards, which is what we’re doing right now.
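To give a feel for what “generic over the backend” means in user code, here’s a hedged sketch in the style of Burn’s tensor API. It is based on a 2023-era release; exact trait paths and method names (e.g. `powf`) may differ in newer versions:

```rust
// Sketch only: based on a 2023-era Burn release; method names are assumptions.
use burn::tensor::{backend::Backend, Tensor};

/// Mean squared error written once, against the abstract `Backend` trait.
/// The same function runs on LibTorch, ndarray, WebGPU, etc. - whichever
/// concrete backend `B` ends up being at the call site.
fn mse<B: Backend>(pred: Tensor<B, 2>, target: Tensor<B, 2>) -> Tensor<B, 1> {
    (pred - target).powf(2.0).mean()
}
```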

I’m curious, do you feel – given the low-level capabilities that Rust brings to bear, that so many other languages don’t have, and that when you’re looking at whether it be GPUs over time, and I know you’re talking about using LibTorch in this case… But do you think that as you move forward, that that low-level capability that you have in this language, that other languages don’t bring to bear, will be a helpful part of kind of developing it and maturing Burn over the years ahead? Does that low-level give you an advantage that you might not have with other languages that we’re trying to integrate in?

I think so. Mostly in the part where we need to handle memory. That’s an important part of deep learning frameworks; you don’t have to waste memory. We can leverage all of the type system of Rust to actually do graph optimizations, and all of that kind of stuff that we’re going to work on soon. And I think it’s going to be easier to do that in Rust, with good performance, than it would be in another programming language with garbage collection. Because we have fine control over the memory.

[00:22:10.29] So not necessarily for writing GPU kernels. When you do that, you’re actually writing compute shaders, so the host language – Python, C++, or even Rust – isn’t really relevant there. But if you want to handle memory and write the optimization pipeline, then I think Rust can be really useful.

And just to get a sense of kind of the current state of Burn, what is possible in terms of support and what you can do right now, and what are some of the highest-requested things that you would like to work on, but kind of aren’t there yet?

I don’t know… There are so many things that I want to work on, but time is limited, so it’s quite hard. What I’m really excited to work on is kernel fusion, and really optimizing the compute pipeline with lazy evaluation. So that’s something I’m really excited to work on.

Could you dive into that a little bit and kind of what that might mean for a user specifically?

Yeah, for the user it’s just going to be faster. These are really optimization techniques that a deep learning framework can use. So yeah, there isn’t a lot of impact in terms of user API and usability; it’s just going to be faster.

Gotcha. And would you say that right now in terms of what people are doing with the package – now, you mentioned that part of what got you into it was building kind of experimental models or architectures that maybe you were experimenting with on the research side… So I’m wondering, with this package, what are you seeing as the people that are using it, what are they most doing with the package? Is it that sort of experimental research implementation side? Is it taking models that aren’t maybe experimental and embedding them in Rust applications where they wouldn’t have been able to before? Is it something else? What are you seeing in terms of what people are doing over and over again?

I think a lot of people are using it because it’s easy to deploy on any platform… Because we have different backends, you can deploy on WebAssembly, you can deploy even on devices without an operating system… So this is pretty great in terms of deployment flexibility. But even though I started the framework because I had a research idea I wanted to pursue, the goal of Burn isn’t necessarily to be only for research. I wanted to go with kind of a blank sheet, thinking about all the constraints and who is going to use the framework. So I’m always thinking about the machine learning engineer’s perspective, the researcher’s perspective, and even the backend engineer’s perspective – the one that is going to write the actual low-level kernel code, CUDA kernels, and stuff.

So there’s kind of different user profiles or use cases that you can assign to the framework…

Kind of as a follow-up to that, as you were looking – and I noticed that you had quite a few people that were making contributions. For being a relatively young project overall, you have a lot of people involved in it. So it looks like it’s really getting a lot of traction. How do you kind of organize the work around it, and kind of satisfy the interests of each of those personas along the way? Is there one that tends to lead, or do you tend to try to have certain people that do different ones? How do you approach that?

[00:25:46.13] To be honest, I’m not sure. I think the key is just to be reactive. So if there is an issue, just go and comment on it. If there is a bug, try and go fix it. And I think the most important work I can do in terms of architecture is setting the stones in place, but then if we want to extend – maybe add more tensor operations, or add more neural network modules – then I can open issues, and people that are interested can just assign themselves and actually do a pull request. And I just have to be really conscientious about that, do code review correctly, be kind, and I think that’s pretty much it. I don’t have any other secret.

So Nathaniel, I deploy a lot of models as part of my day job. Let’s say that I’m interested in Rust, and I want to take some model that I might have experimented a little bit with in a Colab notebook or something like that, and – like you said – get the support for multiple backends, implement it in a maybe more efficient application… What would be the process to, let’s say, get one of the kind of popular, quote-unquote, models these days up and running in Rust using Burn? Is that something that’s possible right now? How are people kind of pushing the edges with respect to that?

Well, I think there are two different strategies. We’re actually working on being able to import ONNX models. So if you have maybe an image classification model, then maybe our import is going to work; it’s still a work in progress, but if there is no crash, it’s going to work. Not all operations are supported. But for other models you maybe need to write the model from scratch using our framework, and then translate the weights, and you will be fine to deploy it. So it’s a bit of work, but working with Burn is quite intuitive. The API is similar to PyTorch – the modeling API, at least… So it’s not that hard, depending obviously on the size of the model and the complexity of the model.
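For a taste of what “writing the model from scratch” looks like, here’s a small, hedged sketch of a Burn module in its PyTorch-like style. It follows a 2023-era release; the derive macro and config APIs (e.g. `LinearConfig::new(...).init()`) may have shifted since, so treat the names as assumptions:

```rust
// Sketch based on a 2023-era Burn release; APIs may differ in current versions.
use burn::{
    module::Module,
    nn::{Linear, LinearConfig, ReLU},
    tensor::{backend::Backend, Tensor},
};

/// A two-layer MLP, generic over the backend.
#[derive(Module, Debug)]
struct Mlp<B: Backend> {
    fc1: Linear<B>,
    fc2: Linear<B>,
    activation: ReLU,
}

impl<B: Backend> Mlp<B> {
    fn new() -> Self {
        Self {
            fc1: LinearConfig::new(784, 128).init(),
            fc2: LinearConfig::new(128, 10).init(),
            activation: ReLU::new(),
        }
    }

    /// Forward pass: [batch, 784] -> [batch, 10].
    fn forward(&self, x: Tensor<B, 2>) -> Tensor<B, 2> {
        let x = self.activation.forward(self.fc1.forward(x));
        self.fc2.forward(x)
    }
}
```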

Yeah. And I think I saw a few on the repo, that people have already sort of done this. What are some examples of some of these that people have brought over into Burn?

Yeah, I think there are community models for LLaMA, for Stable Diffusion, for Whisper… This is thanks to the community. I didn’t actually port those models. But yeah, since it’s open source, I think if you actually do the work to port maybe a model, I think it’s great to share it with the community, and people can start using it. So yeah, we have a few, but we would like more.

Yeah, so call-out to the listeners out there that are Rust people in the audience - check it out and submit some of your own model implementations. That’s a great way to contribute, I’m sure.

You mentioned it having a similar API to PyTorch… And I’m kind of looking through some of the documentation here… I’m wondering if you could just comment on a few of the things that you call out as far as features of Burn, and kind of explain what you mean by some of those things. We already talked a little bit about the customizable, intuitive, user-friendly neural network module, so this kind of familiarity with maybe a PyTorch API; maybe there’s more to that. But you also mentioned these comprehensive training tools, including metrics, logging, checkpointing… Could you describe that a little bit, in terms of what the thought process is in the framework around these things? …which are definitely important practically, as you said, for the machine learning engineer, for the actual practical person who’s trying to build models.

[00:29:42.26] Yeah, and the researcher, too. Sometimes they don’t want to actually write all of the training loop; that’s not the core of their research. There is a library which is called Burn Train, which tries to provide the training loop for the user, so they don’t have to write it. You’ve got a basic CLI dashboard where you can follow all your metrics. You have your logging, so if you want to maybe synchronize to a Google Drive account, you can probably do that… It’s similar maybe to PyTorch Lightning, for the PyTorch users that are familiar with that project… We have that for Burn as well, and it just makes it easier to get started with the framework. I think it’s essential now, if you’re starting a new framework, to provide that.

We already talked a little bit about the versatile backends… I don’t know if you want to say any more about the other options for that. You mentioned Torch and WebGPU, but I see a couple others here mentioned. Are there any call-outs that you’d like to make there? …both in terms of other options, but when also those other options might be useful. People might not realize in the audience when you would want to use a torch backend versus something else.

Yeah, I think the Torch backend is probably the fastest if you have an NVIDIA GPU. For the CPU, I’m not sure; it depends on the model. But we also have an ndarray backend. Ndarray is similar to NumPy, but for Rust. This isn’t maybe the fastest backend, but this is extremely portable, so you can deploy the backend everywhere. So if you’ve got a small model, it can be very handy to have that, or to write unit tests, and stuff like that.

We also have a Candle backend. Candle is also a new framework, built by Hugging Face in Rust. They’re trying to make it easier to deploy models with it. So we actually have their framework as a backend for Burn, so we can benefit from their work. And yeah, we have the WebGPU backend as well, so we can target any GPU. So if you don’t have NVIDIA, don’t worry – we have you covered.
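One practical consequence of the backend design: switching hardware can be a one-line change in user code. A hedged sketch – the crate and type names below are from a 2023-era Burn release and are assumptions here; they may have been renamed since:

```rust
// Pick a backend once; code written against the generic `Backend` trait
// runs unchanged. Names from a 2023-era release (assumptions):

// Portable CPU backend (ndarray, NumPy-like - handy for tests, small models):
// type B = burn_ndarray::NdArrayBackend<f32>;

// LibTorch backend - typically the fastest option on NVIDIA GPUs:
// type B = burn_tch::TchBackend<f32>;

// WebGPU backend - targets essentially any GPU, including in the browser:
type B = burn_wgpu::WgpuBackend<burn_wgpu::AutoGraphicsApi, f32, i32>;

fn main() {
    // Everything downstream is written once, generic over `B`.
    println!("using backend: {}", std::any::type_name::<B>());
}
```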

Awesome. So I also noticed on your GitHub repo, in addition to kind of familiarizing us with the capabilities and features, you also have the Burn book, which I assume was maybe inspired by the Rust book – that seems to be a common thing. What is the Burn book, and how can we best use it? What’s it for, in your mind?

Yeah, the Burn book is to help people get started with the framework. It’s like a big tutorial/reference that you can use to actually start using Burn. At the beginning, it tells you how to install Rust, how to get started with the language, how to make basic models, the training loop, the data pipeline, all of that. With all the explanations, and stuff like that. So it’s really there to help people get started with the framework in an easy way.

Of the people that are coming through and learning from the Burn book, interacting with you on the repo, do you see a lot of people coming from the non-Rust community in because they have either performance-related things, or maybe their company is exploring deploying things in Rust, or other people, that sort of thing? So people coming from maybe the Python community? Or do you see more people kind of Rust engineers who are already building things in Rust, and so now that everybody wants to integrate AI into their applications, you sort of have the influx from that way? Are you seeing both? Which side is kind of coming your direction more?

I’m not sure necessarily about the background of users of Burn, but I think the main pain point is that they want to deploy their model reliably, and they’re coming to Burn to do that. And some of them, once they get familiar with the framework, actually port the training part as well, so they can have their whole machine learning workflow working with Burn. So it can be people with a Python background, or Rust engineers – I’m not sure – but I think this is the main attraction point.

[00:34:00.19] I will offer a kind of a Burn newbie perspective on that myself… When I ran across Burn and reached out to you, I was really excited about it, in part because as this industry is maturing and affecting many other vertical industries out there, we are seeing AI capability being pushed out from only being in data centers and stuff, out onto the edge. And you can define the edge in many, many ways, obviously, but the place where processing is happening, and even training is happening is evolving over time. And if you look at businesses and their other use cases, the fact that they need AI in all these other industry things that they’re doing, all these other businesses… They may be platforms that are mobile, such as we have autonomous cars out these days, and you name it; all sorts of stuff that are increasingly relying on AI, and they’re turning – because those are autonomous things, they need the performance, in many cases the safety and low-level performance capability that Rust offers.

I know that I got super-excited when I came across Burn, because I’m in this AI world, but I’m also in this high-performance, things moving around time and space world as well. And being able to combine those into one, have one language that is able to do both at the same time, and deploy out to the edge in a very safe way and highly-performant way was hugely exciting, and it’s been a point of conversation that I’ve had with colleagues for quite some time.

So I think you’ve hit a sweet spot with Burn that is gonna get probably – as people become aware of it, you’ll get a lot more uptake, because it solves what would otherwise be a big problem that they’re going to be faced with in the years ahead.

Yeah. And I think it’s not just about that – there is a good amount of solutions to just deploy inference, like with ONNX, and stuff like that, but that’s not going to cover the training part. And I think it’s valuable to be able to do training everywhere. Like, maybe the next generation of models are going to call backward during inference. We don’t know that. It’s cool to have one tool with which you can do both, on any platform.

As you kind of look to the future of the project itself… I maybe have kind of two elements to this question. What are some of your hopes for what Burn becomes into the future as a framework, in terms of like the sweet spot and what it does really well, what people turn to it for? So what is your kind of hope and vision for the project, I guess? And then for yourself, in terms of your own work and how you’re using the project, or other things, what is your hope for the future? You have your own interests, obviously, in terms of developing AI-related applications, so I’d love to hear both of those things if you have a comment on them.

[00:37:01.09] I think I would like Burn to be widely used for maybe complex models. I think Rust really shines when you’ve got complexity. So if you’ve got a convolutional neural network with just a few layers, maybe the benefits of using Rust aren’t as massive, maybe, for deployment. But if you’ve got big models, and a lot of complexity in the building blocks, then I think Burn will shine in that place. So I would like to see innovative new deep learning applications being built with it, as well as maybe just the normal deep learning models that we’re familiar with – ResNet, Transformers, all of those – but deployed on any hardware, so that everybody can run some models locally; maybe not the big ones, but at least the small ones. And what I would like to do with it is maybe more research. Like I said previously, maybe bigger models, maybe asynchronous neural networks – trying to leverage the concurrent nature of the framework.

Yeah. And as we kind of get close to an end here, just for those – because it is a podcast, people are listening in their car, and maybe taking mental notes of some things, or on their run… Where do people go to find out more about Burn and what would you suggest – let’s say it’s a newbie to Burn. What should they do to get familiar with it and try things out? So where do they go, and what would you suggest they start with?

I think the best place to start is to go to the website. So it’s just Burn.dev, pretty simple. And from there, you can just go in the book that we spoke about, and just follow along. If you are not familiar with Rust, we’re going to provide links so that you can get familiar with the language, and then you can come back afterwards, follow the rest of the book. And if you’re interested, you can also go to the GitHub, try the examples. You can run them with one command line, so you can try to do inference, or to even launch a training on your own laptop… So that can be great. So yeah, that would be the place I would go to start.

Awesome. Well, thank you so much for taking time to join us… And not burn us. You are very kind. So thank you for your time. We’re really excited about what you’re doing, and hope to have you on the show maybe next year some time, to see all the exciting things that are happening in deep learning and Rust and Burn. So thanks so much, Nathaniel.

Thanks a lot, man.

Thanks to you for having me.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
