Ro Gupta from CARMERA teaches Daniel and Chris all about road intelligence. CARMERA maintains the maps that move the world, from HD maps for automated driving to consumer maps for human navigation.
Ro Gupta: Totally. I've seen a few different ways of framing this. Some people frame it as - they use a dichotomy of AI versus rule-based, or something like that. I also more recently saw it, which I kind of like, as someone using sort of nature vs. nurture… Maybe let's start with that. We were talking about priors before, and that's where I've seen - actually, the nature vs. nurture dichotomy I saw because we're connected… So one of the several academic institutions that we have pretty close ties with is NYU here in New York. NYU is known for Yann LeCun and his work, but also Gary Marcus… And in some ways they're sometimes friendly at odds with each other, and I've seen them even debate this concept of nature vs. nurture for AI… And I might be morphing it a little bit simplistically, or off from what they mean in their arguments, but when it comes to what we think about, and what Elon's thinking about when he's saying what he's saying about Tesla's eschewing of things like maps, it's "Can an AI get to where it needs to get purely on just learning, and essentially nothing else? Or is there a need for there to be a certain…" - I think Gary Marcus uses the term "innateness". Again, I'm kind of morphing it for this conversation a little bit… But in our case, that might be analogous to the use of priors. So I think a lot of these debates boil down to that.
Actually, yeah, we kind of have a dog in this fight, sure; we are a mapping company, and that's used for priors. But for me, I've always felt like the question is not just what's actually going to solve the problem now, but where you can future-proof yourself, including on the business side. It's really important to future-proof yourself, so that if and when certain trends materialize, you can sort of seamlessly ride that wave, as opposed to completely flipping the technical approaches that you've taken. This SD/HD to MD shift is a perfect example of that. Right now, I don't care who you ask - especially for really high levels of autonomy, where the driver is truly not in the loop, and in complicated environments - no one is able to do that without priors, and no one thinks that will be possible, from a safety case or a regulatory and societal acceptance case, for several years at least, if not more than that. The "but" is: what if that changes?
The thing is, we always have to be humble, because these things change in a very nonlinear way, and our lizard brains still really struggle with nonlinearity in predicting things, because we just don't know where we are in those [unintelligible 00:35:38.08] So that's why we always exercise humility there, and kind of think about what-if scenarios… And I think the what-if scenario of allowing this AI to be more nurture than nature - so just purely whatever you expose it to, it learns, and it just gets better, and it needs less and less of what it was hardwired with from the beginning - I think it would be a great thing. For us, it would actually allow us to focus on higher-level problems, where you're switching from certain problems on the lower rungs of the hierarchy…
[36:17] Again, I don't mean to plug our blog, but another post we referred to in this last post was this thing we call "The mapping hierarchy of needs." It's sort of a take on the Maslow thing. Over time, there are higher-order problems that the data we create is still really important for. It's just that it's stuff like user experience, or compute efficiency, or economics… Whereas the first-order problem that everyone's really trying to get over the bar with is safety. So that's where, as I said, right now everyone really wants to use good priors for that.
But in the steady state, you could totally envision maps being used more for things like comfort, and monetization, and things like that. If you think about aviation or other industries, there are certain datasets that were much more critical for safety, but are now really much more for comfort. Say, for example, turbulence or weather data. I'm old enough to remember when we did worry about that - poor-taste jokes about TWA and stuff, about their safety.