Practical AI – Episode #209

3D assets & simulation at NVIDIA

with Beau Perschall, Director of Omniverse Sim Data Ops at NVIDIA


What’s the current reality and practical implications of using 3D environments for simulation and synthetic data creation? In this episode, we cut right through the hype of the Metaverse, Multiverse, Omniverse, and all the “verses” to understand how 3D assets and tooling are actually helping AI developers develop industrial robots, autonomous vehicles, and more. Beau Perschall is at the center of these innovations in his work with NVIDIA, and there is no one better to help us explore the topic!


Sponsors

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.


Chapters

1 00:00 Welcome to Practical AI
2 00:42 Beau Perschall
3 05:30 Omniverse 101
4 10:27 What can you do in the Omniverse?
5 13:26 Will Omniverse be unique for everyone?
6 20:13 Will Omniverse have good 3D?
7 26:39 Omniverse in context
8 30:43 Synthetic data
9 35:31 NVIDIA’s plan for low internet areas
10 38:44 What keeps you up at night?
11 41:11 Wrap up
12 41:46 Outro

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another episode of Practical AI. This is Daniel Whitenack. I’m a data scientist with SIL International, and I’m joined as always by my co-host, Chris Benson, who is a tech strategist with Lockheed Martin. How’re you doing, Chris?

Doing good, Daniel. How are you today?

Well, I’m doing great. I had a conversation at breakfast on Monday this week with a company from the UK doing autonomous drones, and I felt very prepared for that, because you’ve talked to me so many times about aeronautics and drones and all that… So thanks for your prep.

No problem. Happy to do it.

Yeah, it was a good breakfast.

Just think of the universe of possibilities out there… So many things.

Exactly. Yeah. Well, speaking of the universe, or I guess rather the Omniverse…

Even better.

…or the Metaverse, or whatever verse you want to think of - we’re gonna get into all the verses today.

We’re gonna be well versed in those verses.

Yes, we’re gonna be well versed. Good stuff. We’ve got with us Beau Perschall, who is the director of Omniverse Sim Data Ops at NVIDIA, which I have to say is a really exciting title; one of the better ones we’ve had on the show… So welcome, Beau.

Thank you very much. I’m pleased to be here. Yeah, I imagine that my title doesn’t make a whole lot of sense to just about anybody… It’s a lot of words.

I bet it’ll make more sense after this conversation.

Hopefully, so.

I was gonna say, you have a whole episode to explain it to us, so we’re good.

[laughs] Fair enough.

I guess spinning off of kind of how Chris and I were starting that, it would be awesome to hear about what does Omniverse mean, and also maybe a little bit about like your background and how you came to be working on Omniverse… So this intersection of what I understand some type of 3D stuff, and AI, and simulation. What was that journey like, and how can we understand generally what Omniverse is?

Sure. So Omniverse is NVIDIA software. It is our computing platform for building and operating Metaverse applications. And again, it’s not necessarily so theoretical; these are industrial metaverses. Whether you’re designing and manufacturing goods, or you’re simulating your factory of the future, or building a digital twin of the planet - which NVIDIA is doing to accelerate climate research - Omniverse is a development platform to help with that kind of simulation work. And it’s doing it in 3D.

Yeah. So it’s not just those people without the legs kind of hopping around in a place.

No, this is very practical. As a matter of fact, we have big and small customers that are using it - over 200,000 downloads for Omniverse as a platform that you can get from the NVIDIA site. You’ve got companies like BMW that are using it to plan their factory of the future, and part of that is worker safety, so they have to have legs. Otherwise you can’t simulate the ergonomics of a repetitive task - are you going to hurt somebody by doing it? Or are they in danger of getting hit by something in a work cell, or something on the assembly line? So there’s all sorts of simulation around that kind of information as part of Omniverse.

[00:04:17.28] But it’s a really broad platform. It’s designed to be extendable, so that customers can come in and write their own tools and connectors. It’s not supposed to be just its own endpoint. In other words, we have connectors, which are basically bridges to other applications, whether you’re coming from the manufacturing side, like Siemens, or you’re coming from architectural software, like Revit, or you’re coming from animation software, like Blender, or Houdini, or Maya, or Unreal, for that matter. All of that data can be aggregated through USD, Universal Scene Description. That’s the file format that Omniverse is based upon, which is an open file format from Pixar. It is very robust. And basically, we figure we’re kind of the connective glue between all of these platforms, so that simulations can be run inside of Omniverse, but all the data can move in and out. It’s not captive data. Hopefully, that gives you a little bit of background on Omniverse in and of itself. It is a visual platform.
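To make that aggregation idea a bit more concrete, here’s a minimal sketch using Pixar’s open-source USD Python bindings (available on PyPI as usd-core). The file names and prim paths are hypothetical placeholders, not anything from NVIDIA’s actual pipelines - the point is just that a single stage can reference assets exported from many different tools.

```python
# Minimal sketch: aggregate assets from different tools into one USD stage.
# Requires the open-source "usd-core" package; asset paths are placeholders.
from pxr import Usd, UsdGeom

# Create a new stage that will act as the aggregation point.
stage = Usd.Stage.CreateNew("factory_scene.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

# A root transform for the scene.
UsdGeom.Xform.Define(stage, "/Factory")

# Reference in assets that might have been authored elsewhere (CAD, Blender,
# Maya, ...) and exported to USD; composition happens non-destructively.
robot = stage.DefinePrim("/Factory/RobotArm")
robot.GetReferences().AddReference("./assets/robot_arm.usd")

conveyor = stage.DefinePrim("/Factory/Conveyor")
conveyor.GetReferences().AddReference("./assets/conveyor.usd")

stage.GetRootLayer().Save()
```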

It does. That sounds fascinating. And as you know from our pre-chat, I knew a little bit about Omniverse before coming into the conversation, but I know that there is a lot of confusion about how this fits in with all the other – you know, we were joking in the beginning about the various verses that people are hearing. There’s a lot of lingo out there. And as recently as yesterday, a friend of mine named Kevin texted me - and I haven’t replied to him yet, but I will have by the time this airs… He texted me saying, “I don’t understand this verse thing, and I know that you’re involved in this. Can you explain it?” And I think Kevin represents a lot of people in that way. And so could you – we’ve heard multiverse, we’ve heard Metaverse, we’ve now definitely heard Omniverse… Can you give us some context for how this whole industry fits together, so that as we dive back into Omniverse in just a moment, we kind of have a sense of where it fits - and some of the other companies? We know you’re with NVIDIA and you’re doing this great work… But we’ve heard things from other big companies, the usual array of social media and cloud companies. So can you kind of set the stage for us a bit on it?

A bit, yes. Metaverse is a very loaded term, and everybody has kind of their own connotation of what that is. For NVIDIA, certainly we consider Omniverse a tool, a platform to help enable an industrial Metaverse. Something that is real world. Not only that can do simulation, but can communicate with the real world and back. So there’s this kind of bi-directional messaging that’s aspirational for us. That’s where we want to be able to be, so that if you have a production line, you can actually understand what’s the uptime of the equipment in there, and then basically schedule maintenance, or be able to do factory planning and optimization, so that you’re getting the most throughput you can at any given moment if you have to move materials around a facility.

Let me ask you a question there, just to draw the distinction… As you just now were defining it, you said “industrial Metaverse”, and I’d like, if you would – I know that people are reading things all the time, and there’s the more generic concept of a metaverse, and then obviously there’s a certain company, formerly known as Facebook, that has kind of taken the word as a brand in some ways… I sense that you’re using the more generic version of metaverse, obviously. Could you define what that is, what a metaverse is, so that we can kind of understand what the Omniverse branding of that fits into?

[00:08:15.04] Sure. So the metaverse - again, a very overworked term, I think. But in general, it’s the next evolution of the internet. Instead of having connected pages, you’ll now have connected living ecosystems - living worlds, if you will - that can actually intercommunicate. You’ll hop between those worlds, as opposed to just moving between pages. So it’s all based on this kind of 3D-centric representation of our existence, in some ways.

You’ve seen it - the gaming industry has things like Fortnite and Roblox already, that are very much kind of persistent, ongoing worlds. The metaverse is designed to take that to a much broader level, in everything from entertainment to business and industry. And so NVIDIA is taking their software platform and the hardware that supports it to help real world applications. I mean, it’s why we’re building an entire platform essentially around how we start to do weather prediction decades into the future, called Earth 2, so that we can start to help with unlocking the climate as far as that goes.

We have customers like Ericsson building digital twins of cities, so that they can place cell towers in optimal locations for maximum coverage before they ever deploy in the real world. So trying to find real-world value. That’s kind of the distinction between the gaming space and the entertainment or personal spaces that the metaverse can represent, with Meta and different companies that are helping work on that. And everyone thinks everyone’s competing, and that’s like saying, “Who’s building the internet?”

Fair enough.

It’s gonna require all of us cooperating at some level. There’s so much greenfield as far as this space goes that, yeah, it’s really exciting.

Yeah, I really love this sort of parallel that you’re giving, or metaphor of the internet… Because some of the applications that I’ve heard you talk about - it’s making some connections in my brain that make this maybe a little bit more practical to me… So when I think of the internet generally, and what you can do on the internet, and what has happened with the internet over time, there have been things that happened in the “real world” that kind of had a parallel on the internet, right? I can go into a bookstore and I can buy a physical book. Well, now there’s a way for me to do that on the internet.

But then the internet also had this sort of segment of new things that didn’t happen before the internet, but now happen because of the internet. Would you say it’s similar in terms of what you’re seeing with the metaverse space, these 3D worlds, and the Omniverse, in terms of some of what you’ve talked about, like the cell tower thing? Like, in theory, you could do that in the real world, and learn what you need to learn. There’s probably cost advantages to not doing that, and that sort of thing, but there’s a parallel there. Is there another set of things - I don’t know if this would fit into the climate modeling stuff, or other things that you’re talking about - where you can do legitimately new types of things in this world, that maybe we don’t know the full extent of yet, but we’re beginning to see? Do you see it that way, or…? You’re much more plugged in.

Absolutely. I certainly see it with autonomous vehicles, which is another big industry for us, with our Drive Sim platform that’s based on Omniverse. If you’re trying to simulate multiple kinds of traffic situations and different scenarios, a lot of them you can’t capture in the real world. They’re dangerous. What you want to do is be able to train the algorithms to react accordingly, before you ever get out into the real world. But you also want to have the connectivity so that it doesn’t matter if the data coming in is synthetic - the sensors, the lidar and radar on the car, with hardware in the loop, essentially… You’re now at the point where it can’t distinguish whether it’s a real-world scenario or a simulation. It treats them both equally. So that sort of thing I think is absolutely critical to safety.

[00:12:37.27] I think that also gets to the industrial and manufacturing side of things as well - there will be ways to train things in more efficient ways, too. So you’re saving cost. If you’re training a robotic arm on a production line for a new task, instead of having to take that work cell down in the real world and accrue costs while you’re going through and programming it and testing it, now you can go in and actually test it and teach it, essentially, in the simulation, and then just pass all of that data back to the physical world, so that the robot changes its program pretty much on the fly. That’s a huge, huge benefit.

For a moment, just as we finish up how and what the ecosystem looks like, and as you’re talking about these use cases, I want to go back for one second and talk about - with both NVIDIA and the other organizations that are participating in this with their various solutions, some gaming, some not - what that evolution looks like for a user. If we go a short distance into the future, and it’s becoming commonplace for users to have different destinations in terms of metaverse-style 3D worlds… In the beginning, are they all very distinct and separate, almost like using separate applications on your laptop, where you close one and you go into another one? Or will it take a while to get to connection between those different types of environments? And what does that cross-compatibility across multiple environments start to look like?

I think that’s part of why I was hired a year ago, was to help kind of solve this. I was hired to create a new standard that we call Sim Ready, for 3D content specifically. Because yes, what you’re describing is essentially a walled garden kind of approach, where everyone’s doing their own thing, and nothing talks to one another, and it’s all kind of disjointed… And that’s not the goal of the metaverse. The whole idea of the metaverse is to be interconnected, and allow people to move, and allow data to move.

And so with Omniverse being based on a file format called USD - again, Universal Scene Description - a very robust format, now what we’re trying to do is understand how to standardize that, how to make it so that based on your needs… And this is what’s been fascinating for me in the last year, because I did not come from a data science background; I was a 3D artist for 20+ years. In fact, I learned 3D before the internet was a thing, just to carbon-date myself. I had manuals and didn’t see my family for months, and had to work on super-slow computers. But we’re now getting to a point where interchange is absolutely paramount, so everyone is starting to look at it from a very cooperative place.

So USD being an open file format, being something that is open-sourced… We’ve got connections to the Academy Software Foundation, which helps try and manage standards, and the Linux Foundation for standards… It’s a long, hard process to figure out what is valuable for everybody. Because as you can imagine, everybody’s use cases are different. What BMW is trying to do is going to be different than what a watchmaker does, or what Ericsson is doing, or what autonomous vehicle manufacturers are trying to handle directly. And what we’re trying to do with Sim Ready is build this framework that allows Sim Ready to have flexibility, based on your needs.

[00:16:22.04] If you’re doing synthetic data generation, where you need thousands and thousands of images to identify what a car is, that’s one need. So you need a semantic label; you need something in the data, in that 3D model, that says “I am a car.” Fairly simple, but you can get very specific, even within a single 3D model. These are the tires, these are the doors, this is the windshield, and you can start to semantically label more and more granularly based on your needs.
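As a rough illustration of what a semantic label can look like at the file level, here’s a small sketch using the open USD Python API. The "semantic:class" attribute name is a made-up convention for this example - it is not the actual Sim Ready schema - but it shows how a label can sit on the whole asset and, more granularly, on its sub-parts.

```python
# Illustrative sketch: attach semantic labels to prims in a USD asset.
# "semantic:class" is an invented attribute name, not the Sim Ready schema.
from pxr import Usd, Sdf

stage = Usd.Stage.Open("car.usda")  # hypothetical asset

def label(prim_path: str, value: str) -> None:
    prim = stage.GetPrimAtPath(prim_path)
    attr = prim.CreateAttribute("semantic:class", Sdf.ValueTypeNames.String)
    attr.Set(value)

# A coarse label on the whole asset...
label("/Car", "car")
# ...and finer-grained labels on sub-parts, as granular as the use case needs.
label("/Car/Body/Windshield", "windshield")
label("/Car/Wheels/FrontLeft/Tire", "tire")

stage.GetRootLayer().Save()
```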

I’ve been at this for just under a year, trying to learn what is important, and it’s like drinking from the fire hose; everybody has different needs. Daniel, being a data scientist, I assume you have very specific needs for the kinds of data that you are processing. And how you want that data organized is somewhat different than what an NVIDIA researcher might need.

So instead of trying to funnel people into one workflow, we’re trying to make sure that Sim Ready becomes this living, breathing organism that must evolve over time, and has that flexibility, so that we’re providing the planter and the soil, and saying “Plant your tree. Here’s how you do it so that you can customize it to your own needs.”

Again, another practical example is that with Sim Ready, specifically, a piece of content right now has semantic labels. And what was shocking when I got here was asking our research scientists, “Well, what semantic labels are you using right now? What’s your taxonomy? How are you identifying things, and what’s coming with those datasets?” And they’re like, “We get nothing.” It’s like, what? Yes, they were basically having to create their own semantic label taxonomies from whole cloth. I’m like, “Well, that’s crazy. But what taxonomy would you like to use?” And everybody was a little bit different, so it’s like, “Okay, what do we do there?”

So there’s kind of a starting point, a simple taxonomy, that will allow people to identify the car. But some people want to call it a car, some want to call it an automobile… If you’re a French researcher, you might call it a voiture, if I remember my high school French correctly. How do you synchronize all of those? You’re crazy if you try. [laughs]

So essentially, what we’ve done is we’re building a framework and a reference implementation to be that planter, so that we can say, “Here’s how you can implement it for your specific needs. And what data do you want to manage? Do you want physics? Do you want to have rigid body physics on the objects right now? Great, you can go ahead and add those.” We have that as part of PhysX, which is built into the Omniverse platform. So when I said simulation, it can do collisions and collision detection, but there’s more.
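For a sense of what “adding physics” to an asset can look like, here’s a minimal sketch using the open UsdPhysics schema, which ships with recent USD releases and which PhysX-based simulators can consume. The prim path and mass value are invented for illustration.

```python
# Minimal sketch: mark an asset as a rigid body with collision, using the
# open UsdPhysics schema. Prim path and mass are illustrative only.
from pxr import Usd, UsdPhysics

stage = Usd.Stage.Open("pallet.usda")  # hypothetical asset
prim = stage.GetPrimAtPath("/Pallet")

UsdPhysics.RigidBodyAPI.Apply(prim)   # object participates in rigid body dynamics
UsdPhysics.CollisionAPI.Apply(prim)   # object can collide with other geometry
mass_api = UsdPhysics.MassAPI.Apply(prim)
mass_api.CreateMassAttr(12.5)         # kilograms; made up for the example

stage.GetRootLayer().Save()
```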

When you think about building digital twins, you’re trying to represent the real world as accurately as possible, and that is an endless quest, which is why it has to evolve over time. We’ll build stuff now, but in the future we’ll have more sophisticated electromagnetic materials that have thermal properties, and sonic properties, and deformation, tensile strength and things like that, that we’ll want to build in so that the simulation can actually process it.

[00:19:56.08] So it is the rest of my life’s work and then some, I think. It’s going to continue to evolve, so what we’re trying to do right now, in the very early days, is set the standard up so that it does have that ability to breathe and move along as we get more sophisticated.

Well, Beau, I love how you kind of brought what to me is honestly a little bit of an intimidating subject, which is this whole area of 3D… And I’m sure you have a different perspective coming from the art world, but I’m very much – let’s just say I shouldn’t design any sort of applications that humans look at with their eyes. I’m not that guy, I don’t have that skill… So it’s a little bit intimidating for me to think about these spaces, but I think the practicality that you’ve just described around – I can definitely see even applications… I don’t work in manufacturing, but I can see those. But even in my own space, I work in natural language processing, and in language, of course, a big area that is really neglected in the NLP space is sign language, which by its very nature is a 3D thing, right? A lot of people might think, “Oh, it’s just hands, and you can look from one direction…” Well, there’s gestures, there’s facial movement, there’s 3D movement that happens with sign language… And if you want to, for example, have an avatar where you could type something in and the avatar signed in American Sign Language, or Japanese Sign Language or something, that’s a 3D environment and would require certain labels around facial features, and hands, and all of those things. So all of that really connects with me well.

I’m wondering if you could kind of break down this Sim Ready project that you’ve been working on, and maybe think about it from the perspective of, let’s say, I am a manufacturer, I’m coming into the space, I want to kind of figure out – like you say, you’ve got the planters ready… What does it look like for me to come into the space and think about my use case, and then map that onto Sim Ready, the standard, and the file formats in the 3D space… What’s sort of required for me to enter that space as it stands now?

That’s a great question, because a lot of people understand 3D is still very hard to achieve with any degree of fidelity. And Omniverse is trying to help create the highest visual fidelity, on top of simulation fidelity, possible. So that kind of pyramid of what it takes to build 3D content in the first place is still difficult, even with photogrammetry, and the new NeRF technologies, and things that can help start to capture that. And those are going to evolve. And NVIDIA, being an AI company, is certainly pushing into those areas to make this kind of art asset acquisition easier. But in terms of what it takes right now – well, let me back up here… I’m kind of front-running myself in my head… Essentially, with 3D being difficult, it’s hard for anyone to come in and just have a dataset and be able to do a lot with it.

I’ve never taken an animation class or anything, so you’re working with that sort of clay…

That’s okay, neither have I. [laughter] Essentially, it’s adding the value on top of the art asset. So if you’re a manufacturer, or if you’re doing sign language, one, you have to have the asset library. And ML researchers and data scientists have a voracious appetite for content, because you can’t have just one thing to train against. It is thousands, or tens of thousands. For humans it’s diversity, not just in terms of age, and ethnicity, and sex, and clothing, and look, and facial features… I mean, it’s endless there, just to be able to train the model with as little bias as humanly possible. The same thing for any other kind of research where you’re using 3D.

[00:24:26.23] I had a researcher ask me early on when I first started, “Can I get everything you find in a garage?” I was like “No. That’s an unbounded question. Let’s focus… What do you want?”

There’s a lot of strange garages out there.

Exactly. Am I a woodworker? Am I a mechanic? Am I a hoarder? Is it my garage? All of that comes into play when focusing down on, first, what does the dataset consist of, and then what metadata is important for the use case? So that’s really where Sim Ready starts to differentiate - it says, “Okay, now that I’ve got this dataset, what adds the value to it?”, from this set of tooling that we’re building on top of Omniverse. So that at the end of the day I can take beautiful art assets - stuff that has no metadata for simulation or for AI at all - and be able to push them through this tooling to add semantic labels, to add physics, to add physical materials, to add all of the kinds of things that matter - the dimensions of the object, whatever other kinds of metadata are important to that customer - and then be able to validate it and export it, so that now you’ve got a dataset that a data scientist can consume directly, practically, without having to spend their life trying to figure out how they add the value on their own.
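As a rough sketch of the kind of validation pass such tooling might run before export, here’s a small traversal that flags meshes missing the metadata a downstream consumer would need. It reuses the made-up "semantic:class" attribute from the earlier example and is not NVIDIA’s actual Sim Ready validator.

```python
# Rough sketch of a pre-export validation pass: walk a USD stage and report
# meshes that lack a semantic label or collision setup. The "semantic:class"
# attribute is the invented convention from the earlier example.
from pxr import Usd, UsdGeom, UsdPhysics

def validate(path):
    stage = Usd.Stage.Open(path)
    problems = []
    for prim in stage.Traverse():
        if not prim.IsA(UsdGeom.Mesh):
            continue
        if not prim.HasAttribute("semantic:class"):
            problems.append(f"{prim.GetPath()}: missing semantic label")
        if not prim.HasAPI(UsdPhysics.CollisionAPI):
            problems.append(f"{prim.GetPath()}: no collision applied")
    return problems

for issue in validate("dataset/forklift.usda"):  # hypothetical asset
    print(issue)
```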

At the end of the day, I don’t think NVIDIA envisions themselves, or me, having a team build all the content in the world for people. We want to enable all of the suppliers for BMW, the Siemens, and [unintelligible 00:26:07.01] and companies like that, who build infrastructure and build content, to also embrace the idea of Sim Ready, and the tooling, so that all of that content just plays nicely together. And then, again, it flows into and out of other simulation platforms. If you’re pushing it somewhere else, it’s a USD file, so that data is available to you, regardless of what platform you’re using it within. So that’s really kind of the benefit there.

I just would like to extend exactly what you’ve just said… Could you give us – and we often will ask guests just to kind of give us a nice clarifying way… What you’ve just said is you’ve described the concepts of going through that process… Could you give us either a fictional or a real world - whatever works for you; and I suspect you probably have one ready to go - of like, pick a manufacturer or whatever you want, and kind of walk us for a moment at a high level through the steps of what they’re doing, where you reference Omniverse, you reference Sim Ready, you reference the things in context, in a use case, so that we kind of follow your footsteps through that, and it kind of brings the concepts into a very tangible, touchable kind of understanding.

Right. So we actually have a project ongoing right now, and I can’t mention who, but essentially, there is a pick and place robotic arm on a conveyor system, that actually has sensors to indicate where parts are on that platform at any given moment. And what they want to be able to do is build that simulation inside of Omniverse, so that both the simulation can drive and time the real world application, and the real world application can report back so that there is this kind of cyclical nature of having data moving both ways. So a feeder drops a part onto the conveyor belt, the system always knows where it is, it can count it, it can track where it is in the process, when the arm is supposed to pick it up, it knows how to do that and move it into the right location.

[00:28:14.22] Those are the kinds of use cases where, if you have Sim Ready content that can identify itself - “This is a package, this is a conveyor” - Omniverse can trigger when the real light sensor is tripped and understand that as “Hey, this is where this product should be.” So if the simulation or the real world is off, they can adjust on the fly, so that now you’ve got this kind of self-fueling, round-trip ability to track content.
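A toy sketch of that bidirectional idea, with everything - conveyor speed, sensor positions, the interface - invented for illustration: the twin predicts where a part should be, the real sensor reports where it actually is, and the difference gets folded back in so the two stay in sync.

```python
# Toy sketch of sim/real synchronization on a conveyor. All values and the
# sensor interface are invented for illustration.
import time

CONVEYOR_SPEED = 0.25          # metres per second (assumed)
SENSOR_POSITIONS = [0.5, 1.5]  # metres along the belt where real sensors sit

class PartTwin:
    def __init__(self, part_id, dropped_at):
        self.part_id = part_id
        self.dropped_at = dropped_at  # time the feeder dropped the part
        self.offset = 0.0             # correction learned from real sensors

    def predicted_position(self, now):
        # Where the simulation thinks the part is right now.
        return (now - self.dropped_at) * CONVEYOR_SPEED + self.offset

    def on_sensor_trip(self, sensor_pos, now):
        # The real world says the part is at sensor_pos; fold the difference
        # back into the twin so simulation and reality stay aligned.
        error = sensor_pos - self.predicted_position(now)
        self.offset += error
        return error

twin = PartTwin("part-001", dropped_at=time.time())
# ...later, the real light sensor at 0.5 m trips:
drift = twin.on_sensor_trip(SENSOR_POSITIONS[0], time.time() + 2.1)
print(f"corrected a drift of {drift:+.3f} m for {twin.part_id}")
```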

So is it fair to say you would take 3D assets and apply USD, the Universal Scene Description, to them, to give them the context, so that they are “Sim Ready”, and you can use the Sim Ready tools on those assets to do whatever it is you’re doing?

Right. USD is actually the file format, but it’s more than that. Most applications now export USD directly - just like, if you’re working in a CAD application, you might export a DWG file, or a DXF file, or a SolidWorks part file if you’re in manufacturing - you can now export USD directly in many of these apps. They’re all starting to get on board, which is great for the 3D industry, because I can tell you that when I was coming up, every 3D app, every tool had its own 3D file format. And so nothing played well together. It was always a nightmare to try and get content from one place to another, without question. It wasn’t like 2D imagery, where a pixel is a pixel. 3D is much more complex as far as that goes, by orders of magnitude.

And so now we’re starting to hone in on USD as a primary file format. Technically, there’s another open file format, run by the Khronos Group, called glTF, and it is essentially a web standard for 3D. I was part of the group that was helping define the standard for 3D commerce, so that you could see things on your Apple phone, and spin them around on websites, and things like that as well. So that’s kind of the JPEG version of 3D, while the USD file is more like a layered Photoshop file - much more robust. But they play very well together, and Omniverse supports both of them too, which is great.

So one of the things that you mentioned briefly, Beau, which I think is a really fascinating topic, but also a really important topic for the future of sort of practical artificial intelligence and machine learning, is the idea of simulated data. Now, you’ve briefly mentioned this topic of creating 3D worlds, all the file formats and the things that are needed to label those, to make them useful for data scientists. You talked about the example of a digital twin running in parallel with the real world robot arm… Could you set the context now for usage of this technology for synthetic data production, and from your perspective, where you’ve seen people do that successfully - maybe a couple of examples - and maybe help people understand what synthetic data means and why it might be useful?

[00:31:40.26] Sure. So synthetic data, as far as I have been involved, is essentially generated through what we call domain randomization - taking lots of objects and randomly placing them in scenes, with all of their labels in place, so that you can train machine learning for computer vision to be able to identify something in a room, or a space, or an environment. So it doesn’t matter what the lighting conditions are, it doesn’t matter what the material is, it doesn’t matter what the orientation of the model is - it can be upside down, in some arbitrary orientation - but at the end of the day, when you have that image, or that video sequence, or whatever, the computer algorithm can always pick out whatever that piece is.
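Schematically, a domain-randomization loop looks something like the sketch below: scatter labeled assets with random pose, lighting, and materials, render, and keep the labels alongside each frame. The asset names and material list are invented, and the render() call is a stub standing in for whatever renderer actually produces the images and ground truth.

```python
# Schematic domain-randomization loop for synthetic data generation.
# Asset names, materials, and ranges are invented; render() is a stub.
import json
import random

ASSETS = ["car", "forklift", "pallet", "toolbox"]        # labeled 3D assets
MATERIALS = ["matte_paint", "brushed_metal", "plastic"]  # assumed materials

def random_scene(num_objects=8):
    objects = []
    for _ in range(num_objects):
        objects.append({
            "class": random.choice(ASSETS),
            "position": [random.uniform(-5, 5), random.uniform(-5, 5), 0.0],
            "rotation_z_deg": random.uniform(0, 360),
            "material": random.choice(MATERIALS),
        })
    lighting = {"intensity": random.uniform(200, 2000),
                "color_temp_k": random.uniform(3000, 6500)}
    return {"objects": objects, "lighting": lighting}

def render(scene, out_path):
    # Stub: a real pipeline would invoke the renderer here and also write
    # ground truth (bounding boxes, segmentation) derived from the labels.
    pass

for frame in range(1000):
    scene = random_scene()
    render(scene, f"frame_{frame:05d}.png")
    with open(f"frame_{frame:05d}.json", "w") as f:
        json.dump(scene, f)  # the labels travel with the image
```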

We have a version of our CEO, Jensen, that we call toy Jensen. He’s a little 3D toy model - you’ve probably seen him in our GTC talks and keynotes… And they wanted to do kind of a Where’s Waldo for SDG - synthetic data generation - with him as well, just to be able to train “Where is he in a scene?”, with all sorts of other random 3D content. So you would change lighting, you would change materials, you would change the orientations of everything, to train the algorithm to be able to spot toy Jensen no matter where he was in the scene, or how much he was obscured by blocks, or sofas, or things like that.

From a more practical standpoint, think about what furniture manufacturers are trying to do today with augmented reality. They want to be able to scan your room; they want to eventually say, “I know that that’s a sofa, and that’s a chair, and that’s a table, and I want to be able to replace it with my stuff instead, and show you what my stuff looks like in your space.” And so having that computer vision trained against a huge variety of content now gives their algorithms the ability to kind of find and identify that stuff with high accuracy, or good fidelity.

I just wanted to say tongue in cheek that I think finding Jensen is not as hard as you say, because he always has his trademark motorcycle jacket on. I’m just saying. It’s always the jeans and the motorcycle jacket, so…

He does indeed. They actually put him in the midst of all of our Marbles content, the real-time sequence that they put together as a demo for GTC two years ago, and there are hundreds of elements. And so he would get pretty obscured, where you couldn’t see either his jeans or his jacket.

Okay, fair enough.

And you would see like a part of his gray hair and that would be about it.

Gotcha.

Some fascinating stuff. But you know, to circle this all back around to Sim Ready - AI is important for Sim Ready in the future, too. I mean, again, I’m just starting, less than a year in. But my vision is to work with our data researchers as well, so that at the end of the day, instead of having a tool that you manually have to process content with, why wouldn’t our Sim Ready tools live in the cloud as a service for people to upload their content? And it doesn’t matter how materials are named; is it named metal, is it named wood? Ideally, AI would help us identify what that material should be, name it properly, and then do semantic labeling on it, and be able to apply the right physics… So that you can upload your library, no one has to get involved, the system can process your library and give you a dashboard and a dataset that is now valuable. That’s my long-term vision specifically for AI for Sim Ready.

I’d like to ask you - and part of this just comes from the kind of the… I work for a company that has to deal with edge scenarios that are adversarial and challenging in all sorts of ways, and so I’m always going to that… One of the things I’m always curious about is as we look at simulation, and built on the larger cloud approach that we’ve been doing for the last 20 years, 15 years I guess now, as you move these capabilities, and you’re talking about having 3D assets, you’re doing augmented reality, and you want to be able to merge those, as you mentioned, like with the room, with your own stuff… But there’s an infinite number of variations there that we could talk about from a use case standpoint.

Absolutely.

[00:36:09.21] As you get out and you’re doing things that are away from the cloud, you either don’t have enough bandwidth to get all the GPU computation from the cloud back to you where you are out in Everest base camp, because – that actually probably does have enough of an internet connection. But let’s say you’re up in camp two, and you’re doing something in a fairly remote region… How do you envision these starting to merge into that, in terms of being able to have a consequential user experience, something that’s impactful in terms of augmented reality, where you’re combining all of these 3D assets that are Sim Ready, and it’s merging with your world, when you don’t have bandwidth and cloud assets immediately available due to technical limitations? How is NVIDIA thinking about it? Because I know you want it to be everywhere, so how are you thinking about bringing this future that we’re all hurtling toward, and that you’re inventing, into those spaces that are not just “I’m on a gigantic internet connection, sitting in my office, doing my thing”?

Right. I mean, certainly NVIDIA wants things to live in the cloud as much as any company at this point. Jensen publicly announced that in the keynote for GTC this past fall. And we have that unique position of having hardware and software - our GPUs and the Omniverse platform - which gives us some distinct advantages, where you can actually do quite a bit from your own small workstation. In terms of streaming content and how we might do that in the future - honestly, I don’t know. To be completely fair, I don’t know what that looks like at this point. I’m only a year old here, so…

I would argue that NVIDIA is very well positioned for answering that question, because you’re not strictly 100% in the cloud. I have bought products from you that I can go place into a computer that is not in the cloud, or that may have a connection, but I’m doing the GPUs out on the edge; you have a whole large product line of things. So I do think that you’re well positioned for that, but I think it’s a fair answer to say “I don’t know”, because we’re moving fast, and… What’s the cliché…?

It is still early days, there’s no question, and there’s going to be a lot of evolution. I know that what we’re focusing on this year as a company is awe-inspiring, and I can’t wait to see how we progress throughout the next 12 months, or 11 months now, to get closer to those goals. So there’s a lot to be done. But yeah, I don’t know.

As you do look to the future of your own work, and what NVIDIA is doing, but maybe also - now that you’re in this space of 3D and interfacing with data scientists, thinking about how that can influence AI, how AI could help you build the things that you’re doing, what’s on your mind as you’re looking towards the future? What excites you? What sorts of opportunities really keep you up at night and really keep you thinking about the potential in this space?

I know that you mentioned your background in art, and of course, this last year has been an amazing year in terms of the generative capabilities of AI, and that even sparks things in my mind about how the things you’re working on in 3D interface with that sort of generative capability… What are you thinking about, what are you looking forward to as you’re moving forward?

For me - one, there’s almost nothing to not be excited about, including generative AI. But for me, when it comes to Sim Ready, my focus is really the sophistication of what we’re trying to achieve with AI. It’s starting to understand what the value is today and how you start to extend it forward, so that we can extrapolate out much further. Building that bi-directional communication between the simulated world and the real world - wow. I cannot wait to see how that starts to really manifest, where you have data cleanly flowing both ways, and things start to synchronize, so that you’re not just simulating at that point - you’re now replicating things. I think that’s huge.

I was lucky enough to be around when 3D first went mainstream, when you could have consumer PCs, instead of $50,000 workstations, that could do 3D. And with AI, I feel like we’re in a similar early phase of creation and understanding, so that there is just this enormous greenfield in front of us to explore. And it’s going to take all of us, too. It’s not just NVIDIA - I want to make that clear. We’re focusing on things that we feel we have distinct advantages on, but we need collaborators. Again, it’s back to the adage of “How do you build the internet?” With a lot of people and a lot of cooperation. There’s so much opportunity across the board that we’ve all got to pull together and do it.

Awesome. Well, I think that’s a super-inspiring and encouraging way to close things out. It’s been an awesome conversation, Beau. I really appreciate you taking time to talk about all the things that NVIDIA is doing in this space, and the things that you’re working on around standardization and making things useful and practical for people like myself and Chris. Thank you so much for your work and your contributions.

Thank you guys for having me. This has been a blast. I’ve enjoyed it thoroughly.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
