What can art historians and computer scientists learn from one another? Actually, a lot! Amanda Wasielewski joins us to talk about how she discovered that computer scientists working on computer vision were actually acting like rogue art historians, and how art historians have found machine learning to be a valuable tool for research, fraud detection, and cataloguing. We also discuss the rise of generative AI and how this technology might cause us to ask new questions like: “What makes a photograph a photograph?”
Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.
Typesense – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
| Chapter Number | Chapter Start Time | Chapter Title |
|----------------|--------------------|---------------|
| 1 | 00:00 | Welcome to Practical AI |
| 3 | 04:28 | What is art history? |
| 4 | 10:00 | Integrating artworks for ML? |
| 5 | 13:47 | How are art historians adapting to ML? |
| 6 | 19:16 | Art models and the Tank Classifier |
| 7 | 24:27 | What ML devs can learn from art history |
| 8 | 32:06 | Deep learning paradoxes |
| 9 | 38:12 | Where is the field going? |
Play the audio to listen along while you enjoy the transcript. 🎧
Welcome to another episode of Practical AI. This is Daniel Whitenack. I’m a data scientist with SIL International, and I’m joined as always by my co-host, Chris Benson, who is a tech strategist at Lockheed Martin. How’re you doing, Chris?
Doing well, Daniel. How are you today?
I’m actually super-excited for this conversation, because I don’t know about you, but I’ve just been like swimming in generative text AI for like weeks and weeks…
Yes… As have we all, I think.
Yeah, this conversation feels like I can come up for air and think more about both computer vision and generative image AI, and other things like that, because we’re privileged to have with us Amanda Wasielewski, who is an art historian working in the digital humanities program at Uppsala University, and she’s the author of a new book coming out in May, “Computational Formalism: Art History and Machine Learning.” Welcome, Amanda.
Hi, thanks. Thanks for having me.
Yeah. Like I say, I’m really excited about this… So I have to be honest, I was a little bit intimidated maybe, because I don’t know a lot about art history… But in looking at your book, and also looking at your amazing research that you’ve been up to - there’s so much practicality in this, both in terms of what is applicable to art historians and those working in that area, but also the things that you’re talking about in terms of how we think about machine learning and art, and how those relate, and especially in light of generative things in recent years. So yeah, I’m super-excited about this conversation. I’m wondering, you mentioned in the lead-up, when we were talking pre-episode, that your background is more on the art history side. Where did the art history and machine learning start to collide for you?
I actually started out – well, I studied chemistry as an undergraduate briefly, before kind of discovering art and art history, and was a practicing artist for many years before I went back to studying art history again. So I guess I’ve never been formally trained, but I had a kind of sideline doing artwork based on various digital technologies and certain kinds of programming… And I also worked a little bit in web design, and things like that. So I had a background in computational things, both from an art perspective and a professional perspective, before I actually went into academia and academic art history. So I’ve always had those kinds of interests in how art and technology collide… And I came to this whole field, or the kind of emerging image and AI fields, through older things like image databases and how they’re sorted by textual metadata. So that was the entry point, and then suddenly it seemed that more and more art collections or digital image collections were starting to use different computer vision techniques… And so that’s how I came at the field, through the way that computer vision was increasingly being used to sort large image collections, and image collections of art, in kind of institutional contexts.
It’s interesting that you mentioned both elements of using machine learning to sort art, but also this background of like people using textual metadata to describe art. And I know that you used this word “formalism”, which in my understanding has some history in the art world… But how standardized is the sort of literature and research around how you describe the features of an artwork? That’s probably a very naive way to ask that question, as a person not in the field, but I imagine metadata to describe artwork is like artist Van Gogh, medium, like, whatever… It seems like what you’re talking about goes well beyond that. Can you kind of describe that space a little bit?
As a quick add-on, can you also add just a little bit about what is art history coming into that? Because we probably have a lot of people that are doing machine learning, but not a lot of art history background, and some people may be wondering - including me, a little bit - about trying to understand what it is. So kind of working your way towards where Daniel was, but starting a little bit earlier for me…
Well, one of the ideas of the book was actually to kind of, in my own way, try to like bridge this gap… Because as I said, I don’t have any formal training in any of these, like sort of the computer science side. But I’ve been in this kind of digital humanities milieu, where it’s a kind of combination of some computer science techniques with a kind of humanities focus and research. So I wanted with the book to both kind of introduce art history concepts to those people working in maybe computer vision, but also introduce people in art history to some of the things that are happening in computer vision. So kind of trying to play both sides a little bit, but obviously, from my own perspective in art history.
[06:14] So art history is not a very old academic discipline at all. Its origins in the 19th century revolved around sort of practices of collecting antiquities, so ancient Greek and Roman artifacts. And that kind of collecting practice started to become a more sort of studied and systematic area, coalescing into - like, the first academic art history departments came about in the late 19th century. And back then, all academic sort of subject matter, the humanities included, kind of aspired to the scientific model, in the same way that the natural scientists did. So empiricism, taxonomy, these kinds of things. So people at that point in time treated art objects kind of like specimens, like if they were studying plants, and the kind of evolution of plants. Early art historians studied art in much that same way; they sort of traced the evolution of art through time and through history… And so it was really focused on how the kind of superficial qualities of art change over time, rather than a kind of focus on other contextual things, like the artist’s biography, or other kind of circumstantial things about the historical time period.
But this has been a long-standing debate in the field pretty much since the beginning… So it goes both ways, and often falls into two camps: the so-called formalists, who are the ones who just care about the kind of external appearance of images or works of art, and then the people who care about the other stuff; what the artist was thinking, what their intentions were, what their kind of historical context was, and all that sort of thing.
So I’m kind of reaching back into that history of art history… One thing that interested me in this area was that I saw computer vision research, research that had no contact with the art history world really, using datasets of artworks to answer computer science questions. So not answering art historical questions per se, but in the process, because they’re using artworks, they’re touching on things that are important to art historians, or that art historians might be interested in. But I saw that there was this kind of callback to these formalist methodologies, similar to what was happening in the late 19th and early 20th century. So I was interested in what I saw as a revival of these taxonomic methods: matching style categories in a really simple way, or even the kind of object recognition of finding different motifs, or things like that.
So yeah, having had training in art history and these methodologies, that was what piqued my interest in what was happening in computer vision, because I saw it as kind of like rogue art history that was happening without art historians having any knowledge of it. So I wanted to call attention to it on one hand for art historians, but on the other hand call attention to some of the art historical issues that computer vision researchers may not have encountered or had access to. So it had that kind of bidirectional interest for me.
I think Daniel and I probably really liked the rogue art historian designation… Who knew that machine learning practitioners would be kind of the pirates of the art history world in that sense…
I’ve seen a lot of good parallels or memes recently… I think one of my recent ones was like AI is like computer LSD. I think probably like rogue art historian is another good one…
[09:59] So you mentioned that machine learning people were integrating artworks into their datasets, or to answer certain types of questions. Were those related to – I can imagine, oh, if I have these different artworks in my dataset, maybe I can do image classification and classify “This is an artwork”, or maybe even more detail, like “This is an artwork by person, or like in this time period, or in this medium”, or something. But I could also imagine artwork has objects in it, right? Like, can I recognize objects within an artwork, or certain features, that sort of thing? Is that the sort of questions that were being asked? Or what were these questions that you kind of started running across, that you connected with the art history world?
Yeah, so you hit on two of the main areas that were being addressed. And I think, from my reading of the computer vision literature, as I understand it, obviously object recognition in images has been a huge focus for the last 20 years plus, because it has so many quotidian and nefarious applications. You get lots of surveillance applications, but also lots of, you know, we-open-our-phones-with-our-face kinds of applications, and the ability of a machine learning system to recognize an object has obvious practical applications. And so I came across a lot of papers that said something along the lines of “Well, recognizing objects in a photograph is a solved problem.” So I think at a certain point in the last 10 to 15 years - I cover about a 15-year trajectory of this research in my book - researchers were looking for more difficult datasets to tackle, and one of those was art datasets, because recognizing an object in a kind of stylized painting would be slightly more difficult.
So we had these sort of object recognition activities that were happening, but from my perspective, in art history it’s not a very useful exercise. I don’t care really, as an art historian, if there are a bunch of dogs, if you can identify a dog in a painting. It’s not that interesting as like a tool to use for my research.
Simultaneously, there was a lot of research happening which is the kind of categorization by style. And this was really interesting to me, because this term “style” in art history is a really fraught term. It has a complicated history, and art historians have fought a lot about what does style mean, and how do we define it… And the categorization by style, in these terms - you’re looking at a kind of superficial quality, and you’re categorizing it by a known kind of textual label.
I think it’s interesting, because this now has really important knock-on effects in generative AI. Like, if you open DALL-E, you see a kind of suggestion for the initial prompt right there; they say “An impressionist oil painting of sunflowers in a purple vase.” So right there, in the generative AI platforms, you always have these “style markers.” So I really wanted to, I guess, unpack what style means for art history, and what it might mean when we’re suddenly applying terms like “impressionist” in the context of generative AI.
Amanda, I love how you brought us along to understand both this intersection of art history and machine learning, and how machine learning was sort of dipping into these formalism elements over time… You talked about the prompts in DALL-E, or something like that; like, the style. When you’re talking about art historians now realizing how they can employ machine learning within art history, is that the sort of thing that they’re thinking about? I can imagine if I take a bunch of artwork, clustering image embeddings to look at the style of what is actually similar between all of these images, and that sort of thing… That was kind of where my mind went when you were talking about style, but how have art historians practically been employing this once they realized that machine learning people were extracting some of these interesting features?
Yeah, so exactly in the way that you just described. One of the sort of founding fathers of art history, Heinrich Wölfflin, he pioneered the – you know, art historians have always been kind of using tech for various teaching and/or research purposes… And he pioneered in the early 20th century the idea of having a double slide projector in an art history lecture, so that you could compare… Which doesn’t sound like much to us now, but it was the idea that you could compare side by side, in a lecture setting, two artworks at once, and so you would kind of see… But the human eye is only able to sort of kind of take in so many comparisons at once, and so the way that these type of technologies have been used in an art history context is exactly in this kind of mass comparison sense, comparing many, many artworks, many, many more than could be possibly compared in a kind of one single view.
So in literary studies they have something called distant reading, and there’s a kind of corollary in art historical studies called distant viewing. And the idea is you get a kind of top-down, very far away view of general patterns, or general trends, and the hope was that you can kind of notice new things through looking from this distant point of view. One of the things that is important in that is, again, you’re looking primarily at visual characteristics.
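The kind of mass comparison that distant viewing relies on can be sketched in a few lines. In the snippet below, random vectors stand in for the embeddings a real vision model would produce for digitized artworks (the model choice and the 512-dimensional size are assumptions for illustration); cosine similarity then compares every pair at once, far beyond what a double slide projector allows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for embeddings of 5 digitized artworks; in practice these
# would come from a pretrained vision model (an assumption here).
embeddings = rng.normal(size=(5, 512))

# Normalize rows so dot products become cosine similarities.
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# Every pairwise comparison at once: a 5x5 similarity matrix.
similarity = normed @ normed.T

# The most similar *distinct* pair - the machine's "side by side".
np.fill_diagonal(similarity, -np.inf)
i, j = np.unravel_index(np.argmax(similarity), similarity.shape)
print(f"Most similar pair of artworks: {i} and {j}")
```

With real embeddings, the same matrix scales to thousands of works, which is exactly the "top-down, very far away view" described here.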
Can I ask a non-technical question? When you’re doing that distant viewing, and you’re making those comparisons - just to give me a sense of the field, what might be a typical example of something you’re trying to compare? …aside from whether it’s machine learning, or entirely without technology in the process; just to give me a touchstone on what that is.
In terms of what that point of comparison is, or…?
Yeah, I’m just kind of curious, as a newbie to art history, learning from you as we go… Momentarily setting aside the machine learning side of it, what are some of the things you’re trying to get at with it?
Yeah, so this is like the classic art history 101, something we call formal analysis, or visual analysis, where the basic step of art history is first looking; without jumping to context or content of an image, or work, to look at things like texture, line, shape, color, those sorts of basic building blocks of visual information. And once you’ve kind of understood that, you start to notice details. And I think it’s a way of looking very closely at an image, or an artwork, to sort of understand what that is doing visually, what the composition is doing.
And then the next tool to add on to that is comparison. So once you understand kind of what’s happening on a visual level, purely visual level, you start comparing it, and then you see, “Okay, so there’s different things going on in this other artwork, maybe from the same time period, or maybe from just after it”, and so you kind of start to build an idea or a narrative around how artworks change over time. So that’s the kind of standard art history like 101 skill that we start to cultivate.
[18:12] I’m sorry that I took you there, but I appreciate you doing it; it is helpful for me.
Yeah, no, of course. I mean, it’s important because it ties back into thinking about what we want to do; if we want to use machine learning methods to perform those same tasks, we have to realize or recognize that machine vision doesn’t understand images in the same way that we do, as much as we might remove how we interpret content, or context. The way we kind of dissect an image visually, or the way we kind of analyze the visual properties is going to be very different in a machine learning exercise. And the first way that that’s different is that the vast majority of things we’re dealing with are physical objects that have been digitized. So there’s like a kind of layer of representation; they’re photographs already, so there’s already a difference between, say, looking at an artwork in-person in a museum, and looking at the kind of digital reproduction. I think it is important to sort of understand the foundation as well.
So while you’re talking about that, and kind of the understanding – my best parallel would be from the NLP world, where ChatGPT or something does not understand user intent, right? There’s no understanding, right? It can produce text, but we process language differently than ChatGPT does, as humans. And like you’re saying, someone standing in a museum processes that experience of standing in front of an artwork differently than a photograph, an intermediate representation, differently than a machine might find features that are good for image classification, or something like that. I’m wondering, because a lot of these computer vision models are so non-explainable, or like there’s an interpretability problem already, right? In terms of - I might not know why an image was classified in this class with a convolutional neural net, or something like that. Is that a struggle for taking this field forward in terms of applying machine learning in these contexts? Or are there ways to kind of extract some of those main features, like you’re talking about, like shape and color, and line, and other things like that?
Yeah, I think that there’s a lot of similar issues, actually, between the kind of text world and the image world, in terms of this idea of what constitutes meaning or understanding. Are you guys familiar with the tank classifier problem?
The tank classifier?
I’m not, I’m sorry.
I don’t think I am. Although Chris knows about military vehicles, but I don’t know about tanks…
I don’t think that’s what we’re talking about. [laughter]
It was a kind of apocryphal story that was passed around a lot in machine learning circles… The story was - and actually, it dates back to someone making it up as an example at some conference, I think in the ’60s… But it became passed around as if it had actually happened. The story is that the US military during the Cold War wanted to recognize tanks in images…
I do – now that you go into it that way, I do remember this. Yes.
Yeah. So like differentiating Soviet versus American tanks in images… But then it ended up accidentally classifying the images by the background weather or environmental conditions. And that is the kind of thing that I think really illustrates what we deal with when we’re dealing with images, because we understand things like background and foreground, or the kind of subject and surround, in a different way.
[22:02] We interpret the kind of illusionistic space of an image in a certain way, whereas for a lot of algorithmic classification the image is what we might call a kind of democratic surface: all areas are initially treated the same, and it takes some kind of training to differentiate them. And of course, it’s gotten very sophisticated, where we are able to separate those things out a lot of the time. But of course, you still get lots of cases, like in medical imaging… I read a few things about how during COVID they tried to classify, for instance, COVID-infected lungs versus healthy lungs, but they used a training set of children’s lung imagery, and so they accidentally classified children versus adults, which seems like a very silly error to make…
So we get issues like that, and I think they are really important, because what it points to is that essentially we’re dealing with like a two-dimensional surface to interpret, but often, those are two-dimensional representations of a three-dimensional space that we, as kind of three-dimensional beings, intuitively understand when viewing an image like that, or a photograph, for instance. Whereas machine learning algorithms only know that we kind of isolate a certain pattern of pixels to be a specific object. And, given lots of examples, they’re quite good at differentiating whatever object we’ve designated. But still, there’s no kind of understanding of space. It’s not part of the understanding of images, and that framework.
So I think that that’s kind of one of these interesting examples of just because it successfully identifies something doesn’t mean it understands what that thing is. Like a dog in a photograph.
Very good explanation. But I do feel on behalf of the defense industry I should note that we are much better at identifying and classifying tanks today than we used to be.
I don’t know if I want to know how good you are… That might be something that I want to be ignorant of.
I just feel the need to say that, yeah.
I have confidence that things have moved on significantly since the ’60s.
Someone should tell Vladimir Putin. That’s all I’m saying. That’s all the politics I’m inserting.
I’m really interested in all sorts of things about what we just discussed in terms of the understanding elements, and other things… But I’m intrigued by this – in reading through some of the materials about your book and your work, you talk about how computer scientists often process these sort of like art image datasets, or images that are part of their datasets without any real sort of understanding of art, or art history. And one of the things you talk about in the book is how maybe there’s an enrichment of like the data science and computer vision side by understanding more of the sort of humanistic issues and elements of the artwork, and those sorts of things… Could you describe a little bit what you mean by that, and how you think – because we mostly talked about machine learning kind of enriching maybe art history, or things that could be done there… What about the other side of that, in terms of things computer scientists could learn based on this kind of background and research on the digital humanities side?
[25:36] Yeah, I mean, I think one of the things that is really important to me is this idea that the assumption that accepted categories are in some way static, or objective and unchanging, can lead to really misleading findings. So for example, there was one study that I looked at where they were classifying paintings by artistic style, and the authors noted that action painting was confused with abstract expressionism… And said, “Oh, well, in the future we will be able to hopefully rectify this categorization error.” But for an art historian, those are two contextually specific style terms that two competing art critics, or groups of critics, came up with, and they have a kind of ideological background. So there’s a reason that some critics wanted to call this mid-century American art movement abstract expressionism, and some wanted to call it action painting. And neither term is really subservient to the other.
And you don’t need to necessarily understand the full kind of art historical picture. Say, if you’re using DALL-E and you want to make either an abstract expressionist or an action painting as a style, you probably get good results with both of those terms. But the kind of issue is that these are not stable categories; different style categories have very different kind of origins, they’re inconsistent amongst each other… Some of them span a few centuries, some a decade, some are small groups of artists who all knew each other, and work together, some are kind of catch-all terms, or contextual terms…
So I think people in computer science, they’re like “Great, I have a new dataset to work with, and here’s the categories, and I’m gonna work with this and then see how effective it is at categorizing…” And that’s fine, because they’re working on a problem that’s different than necessarily what an art historian might work on. But the reason I kind of insert myself there is I’m like “Hey, well, that is actually kind of an art historical problem that you’re working on, but in a kind of way that doesn’t understand that these terms are not fact.” That they’re not stable in the way that you can kind of - once you insert something into a database, it becomes kind of solid in a way that it doesn’t when you’re discussing it like I am. I could talk for another 20 minutes about who came up with these terms, and why, and what their political beliefs might be, and that sort of thing.
Could you talk - maybe not for 20 minutes, but for some period of time… I’m kind of curious, because you’ve kind of posed this problem that’s kind of brought by the data science is the way I’m seeing it, whereas you’re saying you may not have those categories correct… What are you proposing as a way of mitigating that, in a way that is consistent with art history in terms of approach? …that has that kind of qualitative aspect.
Yeah, something I was talking about with a colleague who comes from a computer science background is how we bring together some of the concerns and interests of computer science with art history in a way that is interesting to both sides. One of those things is that for art historians, the context and the nuance of terms, in a kind of qualitative way, is important. But then how do you integrate that into a data context? That is the question. And unfortunately, I don’t have a really good answer, but I know there are researchers who are beginning to combine text and image, or different modalities of information, together to try to create a sort of network, a bigger picture of how we might understand artworks beyond just the kind of textual category.
So of course, we can kind of dispense with categories altogether and do a kind of purely visual, kind of like unsupervised clustering type thing… But then what do we call those clusters? Or what do we call those collections? And that brings you right back to art history once again.
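That "purely visual, unsupervised clustering type thing" and its limit can be shown concretely. Below is a bare-bones k-means over stand-in embedding vectors (random vectors drawn from two loose visual groups; a real pipeline would use features from a vision model, which is an assumption here). The algorithm recovers two clusters, but its output is only "cluster 0" and "cluster 1"; naming them is where art history comes back in:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in embeddings for artworks from two loose visual groups.
group_a = rng.normal(loc=0.0, scale=0.3, size=(20, 16))
group_b = rng.normal(loc=2.0, scale=0.3, size=(20, 16))
X = np.vstack([group_a, group_b])

# A minimal k-means (Lloyd's algorithm): purely visual, fully unsupervised.
k = 2
centers = X[rng.choice(len(X), size=k, replace=False)]
for _ in range(20):
    # Assign each artwork to its nearest center.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    # Recompute centers, keeping the old one if a cluster goes empty.
    new_centers = []
    for c in range(k):
        members = X[assign == c]
        new_centers.append(members.mean(axis=0) if len(members) else centers[c])
    centers = np.array(new_centers)

# The clusters have sizes but no names - labeling them is interpretive work.
print({c: int((assign == c).sum()) for c in range(k)})
```

The two synthetic groups come out cleanly separated, yet nothing in the procedure can say whether a cluster is "impressionism", "action painting", or neither, which is exactly the problem being described.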
[29:50] So this question of how to integrate all this qualitative nuance within a data context is the big problem as I see it, and it’s something I still haven’t found or heard a really good solution to… But I’ve been talking about it with some of my colleagues, so maybe we’ll come up with some bright idea in that area.
Could that change depending on what question you’re answering with a given training session? Like, you could take different reinforcement learning approaches, but I would imagine that that might change the output, and so you’d be looking for an approach that’s kind of consistent with what you’re trying to achieve from the art history side of things. Is there any thinking around different approaches based on - as you change those, that you get different types of outputs? There’s something that you’re going for that maybe a data science practitioner without the art history might be going for something different, kind of as you’ve already talked about. What’s the thinking around different approaches to it, with generative or reinforcement or a combination of them?
I mean, I don’t think that we can expect that me and a computer vision researcher will have the same goals or desires or outputs out of a research question or problem. But I think, from my end, I would like to add some nuance to the cold data… Because of course, even computer vision researchers, they have a kind of quantitative result, but they end up making an interpretation like the one I just said. They’re like “Oh, well, we’ve had this confusion between these two categories, and we’d like to fix that.”
So there’s always a kind of – you know, as much as data scientists or computer scientists might think they’re just concerned with sort of numbers, or output, or objective facts, there’s always actually a kind of interpretive thing that happens. So from my point of view, we might not be answering the same research questions, but we could come together in the same space somehow to build a bigger, better picture of whatever phenomena, or artworks, or collection of images that we might be looking at.
I think that’s a really good general vision to have, in multiple ways, and probably for multiple problems outside of this one. So one of the things that you discuss in the book is a couple of these paradoxes that I find really interesting - the fact that deep learning, as applied to these features of artwork, can be used to both create and detect forgeries. Both of those things are true. And there’s this side of things where high-art works can become digital assets, and digitally-generated assets are in certain cases being considered more like high art. How are you wrestling with these paradoxes, with machine learning and deep learning operating on both sides of these things?
I obviously think it’s really fascinating, this kind of arms race… You know, there’s a famous quote by Virilio, that the invention of the ship is also the invention of the shipwreck. You can’t have one without the other. So I think it’s interesting that there’s always this sort of positive, forward, and this sort of destructive, negative element going on simultaneously. But I think in terms of – you know, we really saw generative AI explode, especially with the image tools, in the last year and some months… I think the latest, kind of the Pope jacket hoax of the last week really illustrates the extent to which – I mean, we’ve been kind of distrustful of the authenticity of photographs. I mean, since photography was invented, people were aware that it could be manipulated. In the 19th century we had hand techniques to manipulate photographs; there’s always kind of editing, there was always different kinds of manipulation… But of course, it’s only just gotten kind of easier.
[34:04] And Photoshop… A lot of the fears that are currently being talked about in terms of authenticity, or believability, or fakeness, or trust in images were raised in the ‘90s around Photoshop. And then we kind of became accustomed to Photoshop. But I think this question of authenticity, whether that’s in detecting art forgeries, or simply in how we trust the images that we see, is rearing up again, because now everyone has access to quite sophisticated tools to create photorealistic images that aren’t photographs at all. And this is something that I’ve been working on subsequent to writing the book - the idea of “Can the images created by some of these generative AI platforms, that look indistinguishable from photographs, actually be considered photographs?” It’s a kind of new tool to make photographs that doesn’t have a camera, doesn’t have a lens, doesn’t have a photographer. It’s a kind of composite of the learnings of vast datasets.
So all of those questions that I addressed in the book about art authentication, and on the flip side, the idea that you could create a forged or fake artwork with a generative tool, are, I think, even more relevant in the last year or few months, because of this new paradigm for creating manipulated images, or manipulated photographs.
Yeah, it’s interesting that there’s this element of what you’re talking about, where it’s like, well, if you would have asked me a year ago “What is a photograph?”, that would have been fairly clear cut. I think now it’s like “Well, what really –” like you’re saying, is a camera needed?
I saw the Pope running from the police. I’m sure. Did y’all see that one?
I don’t know if I saw the running one. I definitely saw the puffy coat…
It showed the Pope running, with police trying to capture him on the street.
But you know, I’ve been doing a kind of autoethnographic embedded study of lots of these communities on Reddit and Facebook and other social media - communities of amateurs making Midjourney or DALL-E images, or using NightCafe, these kinds of things… And I’ve been on them for over a year, just reading posts and looking at images… And even I, after spending so much time in these kinds of venues, looking at lots of AI-generated images - my husband briefly showed me on his phone, on Twitter, “Oh, look, do you see the Pope was wearing this big puffy coat?” I was like “Oh, that’s weird…” I didn’t question it.
Yeah. And you’ve been embedded.
Yeah. I mean, I’m someone who’s actively working on this… So how can we expect people to be distrustful when we want to believe what we see? And not to get too political or anything, but in the last decade the idea of the photograph as a truth-telling, documentary medium has carried a lot of weight, in terms of things like documenting police brutality, or abuse, and other situations, as a way to expose those things, and in incidents where the police may not have told the truth about what happened. We put a lot of stake in those things. And so yeah, then the question becomes, “What are we facing now?”, when we have a new way to make manipulated images.
As you were describing that, and given the industry I’m in, I can’t help but obviously put the filter of my own employment… But it made me realize that there are common problems that an art historian, and that people in the intelligence community, for instance, are struggling to deal with at the same time. Who knew that there could be career paths crossing between the two, with that kind of maybe ominous point…
Where do you think this field is going? As you look at doing these different types of qualitative analysis, where not everyone is necessarily trying to get the same thing out of combining these fields, and recognizing that there are a set of common challenges that art history has, that other fields may have…? Where do you see, from your perspective, from your filter, where do you see this going? Where do you see your field evolving into? What kinds of questions do you expect to be asked, and what new technologies in the AI world do you either expect, or maybe hope to see, to help you find those answers in the years ahead?
Yeah. I mean, I think art history in particular is fairly technophobic, in the sense that it maybe wouldn’t be among the earliest adopters of AI techniques per se… And maybe I don’t have such a sci-fi dystopian outlook, but rather a very, almost boring one: I think a lot of these tools will simply be integrated into our research practice, the way ChatGPT or other GPT-type tools will be used as a kind of aid to writing.
There’s a lot of fear right now in academic settings about “cheating” with those text generators. But similarly, in terms of image analysis, or image recognition - either stylistic recognition or object recognition - it will be a really useful tool for sorting through large art datasets. For instance, I had a friend who was studying art in Israel, around and before the founding of the Israeli state. There were a lot of art exhibitions, but they didn’t keep very good records of which artworks were being exhibited. So she just had a bunch of photographs of artworks on a wall, and had to set herself the task of determining what those artworks were. And they weren’t necessarily very well-known artworks. It sounds like a boring application, but it might be a very useful tool: if we had the ability to put an image in and try to identify the artist of unknown artworks through these kinds of mechanisms, from my disciplinary perspective that would be very useful.
I mean, these kinds of computer vision and machine learning techniques are already being used to sort large art datasets; rather than accessing artworks through textual metadata, accessing them through what can be interpreted visually in particular images. Or isolating images, extracting them, matching them across different publications, or different exhibition venues…
So I have a very boring outlook, I guess. I don’t think it’ll lead us to some kind of scary, dystopian future; it’ll just become a naturalized tool or resource that we can use. But obviously, with the caveat that we always have to think about ethical issues, and also think about what categories mean, and how we’re organizing and arranging things, not just handing the task of organizing over to some unknown black box.
I don’t think that’s boring. I’m kind of encouraged by that. As our listeners know, this is Practical AI, and I think we all, to some degree, love the practicalities that come out of this. So I think that is actually the exciting part. This goes beyond the hype, and it’s making a difference in people’s day-to-day. I think that’s where things really get exciting.
Well, I really appreciate you joining us, Amanda. It’s been a real pleasure to talk through these things. I’ve learned a lot, and I am so thrilled to see the work that you’re doing, and your contributions, which I think are really important. So yeah, keep up the good work, and happy to have you back on anytime to help us parse through some of these things.
Great. Yeah, thank you guys so much. It’s been really interesting and fun. Thanks.
Our transcripts are open source on GitHub. Improvements are welcome. 💚