Practical AI – Episode #229

Automated cartography using AI

with Gabriel Ortiz from Gobierno de Cantabria


Your feed might be dominated by LLMs these days, but there are some amazing things happening in computer vision that you shouldn’t ignore! In this episode, we bring you one of those amazing stories from Gabriel Ortiz, who is working with the government of Cantabria in Spain to automate cartography and apply AI to geospatial analysis. We hear about how AI tooling fits into the GIS workflow, and Gabriel shares some of his recent work (including work that can identify individual people, invasive plant species, buildings, and more from aerial survey data).

Featuring

Gabriel Ortiz – Principal Geospatial Information Officer, Government of Cantabria
Daniel Whitenack – Host (Prediction Guard)

Sponsors

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Fly.io – The home of Changelog.com. Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.

Typesense – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!

Changelog News – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today.

Notes & Links


Chapters

1 00:00 Welcome to Practical AI
2 00:42 Gabriel Ortiz
3 03:41 Dipping your toes in deep learning
4 06:05 AI meets geospatial
5 12:17 Deep learning tools in geospace
6 16:17 Counting people on beaches
7 21:55 What data are you working with?
8 24:58 Sponsor: Changelog News
9 27:07 More discoveries
10 30:33 Automated cartography
11 33:33 Implications of instant cartography
12 36:09 Current limitations of AI
13 41:45 What excites Gabriel?
14 43:35 Outro

Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to another episode of Practical AI. This is Daniel Whitenack. I’m a data scientist and founder at Prediction Guard, and I am really excited today, because my life has been filled with large language models for the past months, and I feel inundated with information about those… But there’s so much going on, so many amazing things happening in the AI world outside of the text modality… And today we have with us Gabriel Ortiz, who is principal geospatial information officer at the government of Cantabria in Spain. Welcome, Gabriel. How’s it going?

Thank you. Thank you. Thanks for having me on, and giving me the opportunity to share with you and the audience what we have been up to in the last few years regarding geospatial analysis, and particularly artificial intelligence.

Yeah, and one of the things that stood out when we started talking - well, first of all, you’re a listener of the show, so I love it that you now get to be a guest on the show… That’s so wonderful. I’m glad that we have listeners who are doing amazing things as practitioners… But also, you’re in Spain, which is one of my favorite places. My collaborators during my grad school days were in San Sebastian, and I spent time there, and I know there’s so much innovation happening in that region, and in Spain in particular. What’s it like to be working in AI in Spain?

Well, Spain, I think, is a great country to find professionals in all branches of engineering. There are many things happening in the AI industry; there's a good environment of startups growing, and I really encourage you to engage and hire people from Spain.

Yeah, that’s so great. And not only is there amazing work going on there, but it’s one of the most beautiful places I’ve been. And even when you logged in – so our listeners can’t see it, but I see the beautiful sunshine, and trees, and town behind you through your windows… So I’m a little bit jealous.

You mentioned San Sebastian… I am pretty close to San Sebastian. Santander is a really, really beautiful city.

Yeah, yeah. So we mentioned that you work in geospatial… So I’ve been on the MapScaping Podcast a few times, which has been fun, and I know that that industry is really wrestling with kind of uses of Deep Learning, uses of AI, and understanding how to integrate that into workflows… If my understanding is right, you didn’t come from a data science researcher sort of background into this topic. You came more from the geospatial side. So could you tell us a little bit about how, as a geospatial practitioner, you first started kind of dipping your toes into deep learning and understanding what it meant for your industry?

Sure. I have been working in the geospatial industry for more than 30 years. I started working on topographic control and bathymetric control of works… Then I moved on to engineering companies designing highways and railroads, dealing with environmental data… always using GIS, which stands for Geographic Information System. As many of you know, it’s a technology that lets you operate on and analyze huge amounts of data.

Then I started to work for the government of Cantabria. I am now in the role of principal geospatial officer, as you mentioned… although literally, if you translate directly from the Spanish, it would be something like chief of the service of cartography and GIS. My role here has involved not only being in charge of data production, but also developing the infrastructure for the analysis of geospatial data. Within our organization, that means for our staff, but also for our stakeholders outside, which is something very important for us: for the citizens, for the community, and for the companies that work with geospatial data.

My team and I have something very ingrained in our DNA, which is the public service that we provide using AI and other technologies, and every day we try to do our best to fulfill that mission.

[05:31] Yeah, it’s really inspiring to hear kind of the motivations behind how you think about doing your work, and the people you’re serving, which is so great. I’m wondering, just practically, you mentioned kind of GIS tooling, and the processing of data in that space… Of course, deep learning and the AI space has its own sort of unique tooling, and sometimes weird tooling… So I’m wondering, could you comment in terms of – what is it like for a geospatial practitioner to start adopting deep learning techniques and all of that, which I’m assuming have a different set of tools than geospatial people have used in the past? So what is the current state of the tooling around mixing deep learning with geospatial? Is it difficult? Is it fairly segmented, or is it more integrated at this point?

Well, at first sight it seems daunting and intimidating, but I have to say that it is not so difficult - to demystify AI technology a little bit. As you mentioned before, I am not an AI researcher; I am an expert in the geospatial industry, and I will tell you my story of how I began. My first contact, or at least the first time I paid attention to AI, was in 2012, with AlexNet and what happened in the ImageNet challenge. At that point in time the classification of images was great, but it was not very applicable to the geospatial industry. It has applications, and you can leverage that, but it is not what we do every day.

Previous to that, I have to say that it was in 2010 or 2011, something like that, when I learned about the work NVIDIA was doing with GPUs, the general-purpose GPUs. I think Bill Dally talked about this in one of your earlier episodes… And that was very interesting for me, because in the geospatial industry we often have a lot of demand in terms of computing power. We operate with what we call raster data, which is no more than data organized topologically in a grid - an image is raster data, for instance. But also, for example, a digital terrain model, which is a grid where you store, at the center of every cell, the value of the altitude of the terrain over mean sea level. And you perform calculations over those data models - for instance, getting the watershed or the [unintelligible 00:08:18.20] of one part of the territory. And those calculations can span several days, or even weeks, because even though the mathematics running under the hood are not very complex, you have so many pixels that it ends up being very demanding. And what NVIDIA started to do in those days was to make it possible to parallelize a lot of calculations - instead of using 4 or 8 computational threads on your CPU, they were able to spread all the calculations among hundreds or even thousands of computational threads.
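To make the raster idea concrete, here is a minimal sketch (not from the episode; the DEM values, cell size, and the optional CuPy usage are illustrative assumptions) of a per-pixel slope calculation over a digital terrain model grid - exactly the kind of embarrassingly parallel arithmetic that maps well onto thousands of GPU threads:

```python
import numpy as np

# A tiny digital terrain model: a grid of elevations (meters above mean sea level).
# In practice this would be loaded from a GeoTIFF with rasterio or GDAL.
dem = np.random.default_rng(0).uniform(0, 800, size=(2048, 2048))
cell_size = 5.0  # meters per pixel (assumed)

def slope_degrees(dem, cell_size):
    # Finite-difference gradients along y and x, then the slope angle per pixel.
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

slope = slope_degrees(dem, cell_size)

# The same per-pixel math parallelizes across thousands of GPU threads.
# With CuPy (if available), the code is nearly identical:
#   import cupy as cp
#   dem_gpu = cp.asarray(dem)
#   dz_dy, dz_dx = cp.gradient(dem_gpu, cell_size)
#   slope_gpu = cp.degrees(cp.arctan(cp.hypot(dz_dx, dz_dy)))
```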

That caught my eye, because it was very important for me. But at that point in time, I thought, “Gabriel, you are going to need a GPU”, but not for artificial intelligence. I was not thinking about that… But for calculations of a different nature.

Then in 2015-2016 we witnessed the blossoming of a whole new generation of deep model architectures. Just to mention some of those that had a big impact in computer vision: ResNet in 2015… I think it was presented in 2016; I’m not very sure. Then U-Net, which has been extensively used. In 2017 the Facebook Artificial Intelligence Research group proposed Mask R-CNN, an evolution of Faster R-CNN. And in 2018 I saw for the first time a demo within the geospatial realm from our provider, which is Esri, actually. I think you also had a couple of people from Esri in a previous episode… And what they were demonstrating was how you can detect swimming pools and oil rigs automatically using a single-shot detector, in those days. And that was kind of an a-ha moment for me, because I realized, “Well, you have to invest your time. This is definitely going to be a game-changer, and you have to start working on this.”

[10:32] So that was the moment, and from that point - you know, there are two kinds of people. I will use a metaphor to explain that. When you see the results of AI, some people think it’s magic. And everybody likes magic, and magicians. Some people end up falling in love with the magician; they are obsessed with the persona, and the mystery, and the whole shtick. But some other people just want to know how the trick is done. And I think I belong to the second group. So it was not only that this looks like magic; the point was how it is done. And from that point, I started to work… We can delve into this if you want, but it’s not so difficult, as I said before.

Yeah, that’s so great. Yeah, I applaud you for digging in not too early, where it was only a research topic, but as it started getting into practical applications, you really took that and figured out how to apply it within your context appropriately, which - I think maybe not everybody takes that approach. So I appreciate that.

So with the tooling that you’re using - I think maybe this is useful for people that haven’t done geospatial as much… So I know there’s major tools, like ArcGIS, and other ones… And then you’ve got sort of like Jupyter Notebooks, where you train models or GPU services, where you can run inference and other things… Have those merged at all? So like from within the tooling that you’re using as a GIS professional, has some of the deep learning tooling been integrated into those tools? Or is it mostly at this point for you “I’m going to export my data from the geospatial side, and then use a notebook, and then import it back in”, or something like that?

Well, that’s a smart question. As I said before, our software provider is Esri. We’ve worked with Esri for a number of years, and they are doing an excellent job integrating many open source frameworks into their platform. And I think – because we try to follow the literature, but we are constantly falling behind. It’s extremely difficult, every week, every month –

It’s impossible. And even completing the puzzle of [unintelligible 00:13:12.09] installing all the frameworks and putting everything to work can be very complex. So we have a big advantage working with the Esri technology. They have a research and development team based in India, and I think those people are doing a great job facilitating the application of all this.
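For a sense of what that integration can look like in practice, here is a minimal sketch assuming an arcgis.learn-based workflow (part of Esri’s ArcGIS API for Python); the paths, backbone, and hyperparameters are placeholders rather than the Cantabria team’s actual configuration:

```python
# A minimal sketch of a GIS-side training workflow with arcgis.learn (Esri's
# ArcGIS API for Python). Paths, backbone, and hyperparameters are
# illustrative placeholders, not the team's actual configuration.
from arcgis.learn import prepare_data, UnetClassifier

# Image chips and label masks exported beforehand from the GIS
# (for example with the "Export Training Data For Deep Learning" tool).
data = prepare_data("data/exported_chips", batch_size=8)

model = UnetClassifier(data, backbone="resnet34")  # ResNet encoder feeding a U-Net decoder
model.fit(epochs=20)                               # train on the exported chips
model.save("landcover_unet_v1")                    # package the model for use back inside the GIS
```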

[13:37] In some of your previous episodes you have been talking about UX interfaces for using artificial intelligence, and whether or not they make a difference… And it really does, because it’s a way of democratizing the technology and making it accessible. That is one part of the story; I think it has facilitated our work a lot, because you not only need the frameworks, you need the whole platform to move across terabytes of data. The geospatial industry is highly demanding in terms of the data that you have to work with. And it’s not only the open source frameworks, it’s how you prepare the labeling, how you structure the databases… There is a lot more to the science.

And also, apart from that, what I did was start to learn the main concepts related to artificial intelligence, from all the great resources that are completely free on the internet - you know, on YouTube you have lessons from MIT, from Stanford, that can introduce you to the simplest concepts, such as a perceptron, or backpropagation, or stochastic gradient descent… So I designed a twofold strategy for myself. First, trying to gain experience by getting hands-on with off-the-shelf models, but at the same time trying to also learn about the concepts underpinning the AI world. I think that’s important. Many people think that artificial intelligence is a black box; it’s not that much of a black box, it’s mathematics in action. Of course, it’s not linear; you cannot fully predict what is going on, but many of the things can be understood.
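As a tiny illustration of those “simplest concepts”, here is a perceptron-style classifier trained with stochastic gradient descent in plain NumPy; the toy dataset is made up for this example:

```python
import numpy as np

# Toy data: 2D points, labeled 1 if x + y > 1, else 0 (made up for illustration).
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)
b = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stochastic gradient descent: update the weights one example at a time.
for epoch in range(20):
    for i in rng.permutation(len(X)):
        pred = sigmoid(X[i] @ w + b)
        error = pred - y[i]      # gradient of the log loss w.r.t. the pre-activation
        w -= lr * error * X[i]   # weight update for this single "neuron"
        b -= lr * error

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```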

Well, I love your perspective, Gabriel, on how you’ve developed a mental model of how these technologies work. I think that’s an encouragement to others to both explore these technologies, but also keep in mind what they are, and how they should interact with them as tools… But I am so fascinated by some of the projects that you’ve been able to accomplish during your time using this technology… And I want to start diving into those a little bit.

One of the ones that you pointed me to that was really fascinating reminded me of standing on the beach in San Sebastian… Although it looks like you have maybe more nice beaches where you’re at. So tell us a little bit about how standing on beaches and counting people on beaches - why is that important, and how did you get into this project of applying deep learning in that context?

Yeah, definitely. I started working with deep learning - I think it was the end of 2019, or something like that. Then came the pandemic. And after the pandemic, with the lifting of restrictions, somebody here at the government of Cantabria said “Hey, we’re a little bit worried about the possibility of having uncontrolled crowds on the beaches”, because I have to say that Cantabria is a notable tourist destination. We have more than 100 beaches, so you can have a big problem in terms of the spreading of COVID-19. And they were worried; the first thing they asked me was “How can we get a calculation of how many people we have on every beach when the tide is up, and when the tide is down”, and things like that. But that was just a simple calculation in terms of the surface area each beach has. And I said, “I think I can go further. I will count the people”, and they said, “What?! Are you crazy?”

[17:37] Yeah, I’m not drunk. I think I can do it… Because I had some experience using single-shot detectors… and at that point in time, more models than single-shot detectors. And that’s what we did - [unintelligible 00:17:50.13] the people - because we normally have an archive of aerial surveys conducted, as is usual, on clear-sky, sunny days, when everybody is on the beach, in summer. So we knew very well the spatial behavior and use of every beach, all across Cantabria. Different days, different months… No matter if it was on a weekend, or for Labor Day, we had a huge amount of data to analyze. And we developed some deep learning models that work even if you change the input signal, meaning you change the aerial survey. We could predict, for every beach, not only the absolute figures of population on the beach, but which sectors the people tend to concentrate in. And after that, we released a small application that you can see in the notes of the podcast, where you can see some maps. Just out of curiosity, if you want a place - say I want to go to a beach and stay quiet and loosey goosey, without many disturbances - you can see which places are the most suitable for that. So it was a great experience, our first experience releasing something.
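The episode doesn’t specify the team’s exact model or framework, but as a hedged sketch of the counting pattern, here is how one might tally “person” detections over aerial image tiles with an off-the-shelf torchvision detector (the paths, threshold, and the use of a COCO-pretrained model are assumptions; in practice a detector fine-tuned on labeled aerial chips would be needed, since COCO models are not trained on nadir imagery):

```python
# Sketch: count "person" detections across aerial image tiles with an
# off-the-shelf detector. Model choice, paths, and threshold are illustrative
# assumptions, not the Cantabria team's actual pipeline.
from pathlib import Path

import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights
from torchvision.transforms.functional import to_tensor

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
PERSON = 1  # COCO class index for "person"

def count_people(tile_path, score_threshold=0.6):
    image = to_tensor(Image.open(tile_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    keep = (output["labels"] == PERSON) & (output["scores"] >= score_threshold)
    return int(keep.sum())

total = sum(count_people(p) for p in Path("beach_tiles").glob("*.jpg"))
print(f"estimated people across tiles: {total}")
```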

Yeah, that’s so fascinating… And it makes so much sense after you say it. I can think of so many more applications for something like this. I know in the US National Parks, thinking about crowding and the impact on the natural environment, or other things like that, and helping plan out for crowds at certain points of the year… There’s so much practical use of this. And this was amazing, because you took this knowledge that you had been building up and really applied it in the moment during COVID-19, when there was this specific need… But then it sounds like there’s continued usage past that, because even if I’m just a normal citizen and I want to enjoy the beaches, this information is really useful to me. I know myself I probably would go to the quiet places of the beach, and sit and listen to the waves. So that’s a–

There are much more interesting problems to try to solve than the one I just described. Later, we started working on modeling certain aspects of how the territory works. You have to understand that the territory as a whole is a living entity, where everything is related to everything. So we started to slice out every variable and try to address those variables with the help of AI.

For example, we have developed some interesting models – we can delve into the architectures later on if you want, or whatever you’re interested in… But some interesting models for detecting and classifying vegetation, also for the evolution of urban growth… Also for things like tracking cars, for example, which is a kind of proxy for society, how society moves… Because everything is in our aerial surveys; you only have to have the skills to bring that information back and convert it into something useful. And as the years went by, we have been able to produce more relevant results - not just deep learning models, but solutions for tracking the territory.

[21:55] Yeah. And you’ve mentioned aerial surveys a couple of times… It may be useful for those in our audience who don’t work in geospatial - they might have in their mind maps and things like Google Maps, where “Oh, I could go and I could look at a satellite image”, but it’s not current. It’s maybe one photo that was taken some while back. And you’ve talked about aerial surveys, where you can actually learn both current information about what’s going on in an area, but also historical information… Could you just help our audience understand, as a professional, what sort of data do you have access to, and how is that gathered practically, and made available to you?

Well, I have to say that everything that I have been talking about can be also executed with satellite images. There are some differences, but of course, you can do it with satellite images. The reason that we work more with aerial surveys is because we are more focused on capturing this kind of information, rather than working with satellite data.

My region, Cantabria, is not very big, and in Spain we have a national plan that covers the whole country with aerial surveys every three years, and we also have a repository of satellite images…

So anyway, you can use both as input signals; the results will differ slightly. But apart from images captured with a sensor, whether airborne or satellite, we also work with a range of technologies - for example, LIDAR data. I know that many in the audience have been working with LIDAR data. LIDAR can also be airborne; in fact, that was the origin of the technology, used from a plane. And it has been increasingly important in our domain.

We also work with systems of record, with traditional databases, and a number of other things. If I had to say something about my job, it’s that it’s extremely interesting, because one day we are working with COVID data, for example, another day with energy data, another day with environmental data… The government of Cantabria has [unintelligible 00:24:23.17] in many domains; it’s kind of like one of your states. If you set aside the difference in area covered - I think Spain as a whole is somewhere in between Texas and Florida in area - but that’s the whole country; my region is quite small. Still, it’s a very interesting place to work, for that reason. And the data comes from many different technologies and many different databases.

Break: [24:59]

So Gabriel, we talked a bit about this kind of first project related to population and crowding on beaches… But you’ve done so much more. Could you highlight a few of these things in terms of other things you’ve been able to identify or track with deep learning from these aerial surveys?

Yeah, we have done extensive work on the detection of vegetation. I have to say that we have only been using supervised learning, that branch of deep learning, and specifically working with different model architectures, such as - I mentioned before - U-Net, Mask R-CNN, and some others. We are now testing SAM, the Segment Anything Model, but we haven’t done anything with zero-shot learning in production. So what I am going to tell you has been achieved using model architectures that have been almost forgotten by the community. Everybody is focused on the [unintelligible 00:28:13.21] architectures, and there is so much that can be extracted from the old school of artificial intelligence… Quote-unquote. It’s not so old, right?

Yeah, yeah. And I think - actually, this is maybe a misconception people have… Occasionally we try to mention this on the show: the majority of applications across the enterprise - not just in GIS, but in manufacturing, or even marketing… People think of marketing with generative AI, but the majority of applications are still “traditional” machine learning… There are a lot of scikit-learn models out there still, or just supervised learning models. And yeah, it’s awesome to highlight that here, because I think it is a misconception.

Yeah, because when a paper appears, normally they do not exhaust the possibilities of the model. Professionals who are not very specialized in the AI domain, but have a lot of knowledge in a specific domain outside of AI - in my case, we can prepare and curate better labels, we can understand the process that we are trying to model, and we have so much to give and to propose to the community… And that’s one of the reasons some people have said, “Your models are quite good. How have you done it? Is it a brand new architecture, something you have created on your own?”, and I always say, “No, it is not. It’s just using, in a smart way, model architectures proposed back in 2015-2016, but with a lot of very well curated data.” I also have to say that the computing power we have at our disposal is quite modest. We don’t have anything big or very extensive on that front. The key is how you curate the data.

[30:18] Yeah. And one of the things that you had mentioned prior to recording was this idea of automated cartography as kind of an integration of a bunch of these different models that you’ve been working on. I’m wondering if you could first describe what you mean by automated cartography, and maybe even, for people that aren’t familiar, what is cartography? I’m assuming modern cartography isn’t like Magellan getting out his pen and drawing maps on parchment, or something… But what does cartography look like these days, and then what do you mean by automated cartography with these sorts of models?

Well, cartography is the art and science of trying to model reality, abstract it, and plot it on a flat surface. It’s a science that has been developing for many centuries. And up until now, it was highly dependent on the human ability to trace and to draw everything on the surface of the Earth. As technology developed from the ’90s on, we started to move very rapidly into digital technologies. And the automation of cartography started not with the advent of AI, but several decades before. However, this is a revolution, because we have never been able to produce such a high degree of quality with so few people working.

There are some similar technologies, like remote sensing, which is the part of the field in charge of analyzing satellite imagery and also producing cartography. It recalls many aspects of artificial intelligence, but AI can match its results in many other fields.

So that revolution continues… It started, as I said before, in the ’80s and ‘90s, but now it’s a complete revolution, and I think that, for the first time – we have an example that you can check out in the description of the podcast, where we have been able to produce a basic land cover map - where you have trees, where you have shrub, where you have no vegetation, where you have buildings, or roads, or railroads - completely generated by AI. Of course, it has some mistakes, but we left those mistakes in on purpose, because we wanted the rest of the community to be able to evaluate the capacity of the models to work on their own.

This is a question that just popped into my mind as you were talking about these models, what’s possible… And it’s not perfect, right? No AI system is perfect, so there’s gonna be mistakes. I’m wondering, as someone who’s been in GIS, and been a practitioner for - I think you said 30 years now; I also imagine that human-based processes are error-prone, or at least they’re slow, right? So by the time a human maybe processes a certain map, or something, things have been updated, and it’s maybe not current anymore.

[33:57] What do you think about the – what are the implications for maybe cartography or GIS as we move to the future, where AI systems maybe can do things more up to date, but with some mistakes, but they’re up to date and can really maybe highlight certain areas that are incomplete, or something, combined with human efforts to correct those mistakes and keep the – what do you see as this balance between trying to be automated with AI-based techniques, and the role that human cartographers or GIS professionals play as these systems expand to more and more places?

Yes, it’s a very interesting question, because one of the big problems we have is keeping every single database that we release - to the market, or to our stakeholders - up to date. That’s a very big problem, because it’s always difficult. And one of the main advantages of artificial intelligence is that you can have a model, and it will probably not work perfectly on the next aerial survey, because it will have some differences in terms of colors, or shadows, or whatever… But you can fine-tune it, or maybe you can train the model from scratch again, and you can update things in a reasonable timeframe.

So that is one of the things I’m most attracted by - the capacity to keep things updated. And it’s a game-changer, as I said before. Artificial intelligence offers things that other technologies really don’t.
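As a generic sketch of that update loop (the library, the placeholder checkpoint name, and the synthetic chips below are assumptions, not the team’s actual pipeline), fine-tuning an existing segmentation model on chips from a new survey might look like this:

```python
# Sketch: fine-tune a segmentation model on chips from a new aerial survey.
# Library choice, hyperparameters, and the stand-in data are illustrative.
import torch
from torch.utils.data import DataLoader, TensorDataset
import segmentation_models_pytorch as smp

# Model with a ResNet encoder and U-Net decoder (stand-in for the previously
# trained model; in practice you would load your saved weights here).
model = smp.Unet(encoder_name="resnet34", encoder_weights=None, classes=5)
# model.load_state_dict(torch.load("landcover_unet_v1.pth"))  # hypothetical checkpoint

# Stand-in for chips from the new survey: random 3-band 256x256 images with
# per-pixel class labels. Real chips would be exported from the GIS archive.
images = torch.rand(16, 3, 256, 256)
masks = torch.randint(0, 5, (16, 256, 256))
loader = DataLoader(TensorDataset(images, masks), batch_size=4, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lower LR than initial training
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(2):  # a few epochs to adapt to the new survey's colors and shadows
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

# torch.save(model.state_dict(), "landcover_unet_v2.pth")  # updated model for the new survey
```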

Yeah. And of course, there’s limitations… AI is never – the expectation should never be that it solves all of our issues… But it also should be that it’s going to solve some of our issues, or solve some of our problems, but not all of them. From your perspective, how do you think about the current limitations of AI within GIS and cartography? What are some of the things on your mind with respect to that?

Yeah. I think, of course, you have to bear in mind that we have limitations. What happens to me also happens to teams in India, or in the US, whose work I am always watching. I would like to point out two limitations. One is computing power, and the other is the limitations of CNNs, convolutional neural networks, which is the technology that we are using right now. We can talk a little bit about model architectures and things; in terms of computing power, I think it’s worth delving into the role of GPUs, because in the geospatial realm it is not well understood why we need a GPU. I don’t know if it happens in other markets, but in our industry, when you talk to somebody about a GPU, normally my fellows and mates try to say it’s something related to the IT department - “I don’t want to be in charge of that.” But it’s not that at all. You have to be aware of what technologies you have for calculation; the hardware is so important, and you have to speak the same language as a data scientist, the language that the rest of the community speaks.

And it is very, very important to understand that the GPU in your laptop is not the same as a DGX, or the [unintelligible 00:37:46.23] - we are talking about NVIDIA hardware… It’s not the same. And everything is related to it: the amount of data you can put into training, the quality of your training, the level of convergence you are going to get - whether you are going to get stuck in a local minimum, or reach the level that the data you are ingesting into the model makes possible… Everything is related to the hardware.

[38:16] I think Bill Dally and Anima Anandkumar, in many of their talks, always talk about the trinity of AI. One part is the data; another is the software, the algorithms, and many of those have been with us for a while - backpropagation and many other algorithms date from the ‘80s, if not before… But the hardware is the third part. Bill Dally always says that it’s the spark that starts this engine of creativity in AI. And I think it’s true. You have to pay a lot of attention to computing power.

And there is also another limitation that is ingrained in the DNA of CNNs. As far as I know from my experience, you cannot expect them to perform exactly like a human being, and sometimes, even though you curate your labels and your [unintelligible 00:39:11.12] and your data very well, the models do not learn as well as you expect. But somewhere in between, you can have a reasonable amount of success. What we do to overcome that is combine different model architectures; it’s something very useful and widespread in the geospatial industry. For instance, we combine models at two different levels. At the architectural level, it’s quite common to see the combination of ResNet with, for example, U-Net. You remove the last part of the ResNet, the fully connected layers, and connect the remaining part to the U-Net. So you are using ResNet for feature extraction, and then the decoding happens in the rest of the U-Net architecture. It also happens with Mask R-CNN - we constantly use ResNet as the backbone, and then the rest of the model goes from there.
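Both combinations Gabriel mentions are available off the shelf; here is a hedged sketch (the library choices are assumptions - the episode only names the architectures) of a U-Net with a ResNet encoder and a Mask R-CNN with a ResNet-50 backbone:

```python
# Two off-the-shelf forms of the "ResNet as feature extractor" combination.
# Library choices are illustrative; the episode only names the architectures.
import torch
import segmentation_models_pytorch as smp
from torchvision.models.detection import maskrcnn_resnet50_fpn

# 1) U-Net with a ResNet encoder: the ResNet's fully connected layers are
#    dropped and its convolutional stages feed the U-Net decoder.
semantic_model = smp.Unet(encoder_name="resnet34", encoder_weights=None, classes=3)

# 2) Mask R-CNN with a ResNet-50 + FPN backbone: ResNet extracts features,
#    the region-proposal network and box/mask heads do the rest.
instance_model = maskrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=3).eval()

chip = torch.rand(1, 3, 256, 256)  # dummy 3-band image chip
with torch.no_grad():
    semantic_out = semantic_model(chip)       # (1, 3, 256, 256) per-pixel class scores
    instance_out = instance_model([chip[0]])  # list of dicts: boxes, labels, scores, masks
print(semantic_out.shape, sorted(instance_out[0].keys()))
```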

And there is a second level, which is combining the results of inference from two different model architectures. For example, talking about vegetation - imagine that you have one model that detects the big areas of vegetation very well, but fails on the smaller spots. And you have another model that works very well for smaller spots, but fails at detecting the big areas, because in the big areas it creates artificial holes and mistakes… You can combine the outcomes of those model architectures with traditional GIS techniques to mash all the results together and obtain the best quality for the layer that you want to infer. That has worked for me, and it’s one of the ways we are trying to overcome the limitations of artificial intelligence.
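And for that second level, here is a small NumPy/SciPy sketch of the kind of post-processing described - keeping large connected patches from one model and small patches from the other, then merging them; the masks and area thresholds are made-up placeholders:

```python
# Sketch: merge binary vegetation masks from two models with simple GIS-style
# post-processing - large connected patches from model A (good on big areas)
# plus small patches from model B (good on small spots). Masks and thresholds
# are illustrative placeholders.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
mask_a = ndimage.gaussian_filter(rng.random((512, 512)), sigma=8) > 0.5  # stand-in for model A output
mask_b = ndimage.gaussian_filter(rng.random((512, 512)), sigma=2) > 0.5  # stand-in for model B output

def keep_components(mask, min_pixels=0, max_pixels=None):
    # Label connected components and keep only those within the area range.
    labels, _ = ndimage.label(mask)
    sizes = np.bincount(labels.ravel())
    good = sizes >= min_pixels
    if max_pixels is not None:
        good &= sizes <= max_pixels
    good[0] = False  # label 0 is the background
    return good[labels]

big_patches = keep_components(mask_a, min_pixels=500)    # trust model A for large areas
small_patches = keep_components(mask_b, max_pixels=500)  # trust model B for small spots
merged = big_patches | small_patches                     # combined vegetation layer

print("vegetation pixels:", int(merged.sum()))
```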

That’s great. Super-practical, and I know that’s what a lot of our listeners want to hear, is some of the practical ways they can explore these technologies. Well, Gabriel, it’s been an amazing pleasure to talk to you. As we close out here, there’s a million things we could talk about; I know some we didn’t get to, and we’ll link in the show notes… But as you look to the future, could you just briefly, in the last minute or so, just briefly share with us what’s exciting for you as a GIS professional looking to the future? …that either you want to dig into next, or what are you encouraged by or optimistic about as you look to the future of your own work, and how AI influences that?

Well, I have to say that in my 30-plus years of working in the geospatial industry, these last two years or so have been the most exciting part of my career… Because it’s so creative. We are just scratching the surface of AI; great things are coming. I think that with the advent of zero-shot learning - we have been watching since the first week of April what can be done with SAM, the Segment Anything Model, and I’m sure that new versions of SAM will come. When we combine that with LLMs, with large language models, and we can interact with them and say, “Hey, draw me all the trees in the image”… It will be much easier to use this set of technologies.

Anyway, just to finish, I would like to send a message to the audience, to those who are not artificial intelligence researchers, like me: it’s possible to apply this set of technologies even though you are not a specialist in the domain. The key is to get hands-on - take one of the off-the-shelf models and start playing around with it… And I know the future will be absolutely focused on artificial intelligence. There will be a different geography in the next few decades.

Awesome. Yes, so inspiring. Thank you for your work, Gabriel, and it was awesome to have you on the show. Thank you so much.

Thank you.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
