Practical AI – Episode #33

Staving off disaster through AI safety research

with El Mahdi El Mhamdi


While covering Applied Machine Learning Days in Switzerland, Chris met El Mahdi El Mhamdi by chance, and was fascinated with his work doing AI safety research at EPFL. El Mahdi agreed to come on the show to share his research into the vulnerabilities in machine learning that bad actors can take advantage of. We cover everything from poisoned data sets and hacked machines to AI-generated propaganda and fake news, so grab your James Bond 007 kit from Q Branch, and join us for this important conversation on the dark side of artificial intelligence.

Sponsors

LinodeOur cloud server of choice. Deploy a fast, efficient, native SSD cloud server for only $5/month. Get 4 months free using the code changelog2018. Start your server - head to linode.com/changelog

RollbarWe move fast and fix things because of Rollbar. Resolve errors in minutes. Deploy with confidence. Learn more at rollbar.com/changelog.

FastlyOur bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.

Notes & Links

Transcript

Play the audio to listen along while you enjoy the transcript. 🎧

Welcome to the Practical AI podcast. This is Chris Benson, your co-host, as well as the Chief AI Strategist at Lockheed Martin, RMS APA Innovations. This week you’re going to hear one of a series of episodes recorded in late January 2019, at the Applied Machine Learning Days Conference in Lausanne, Switzerland.

My co-host, Daniel Whitenack, was going to join me, but had to cancel for personal reasons shortly before the conference. Please forgive the noise of the conference in the background. I recorded right in the midst of the flurry of the conference activities.

Separately from the podcast, Daniel successfully managed the AI For Good track at Applied Machine Learning Days from America, and I was one of his speakers. Now, without further delay, I hope you enjoy the interview.

My guest today is El Mahdi El Mhamdi. He is a Ph.D. student who’s just finishing up here at EPFL in Switzerland. He has been focusing on technical AI safety and robustness in biological systems. Welcome to the show! Did I actually say your name correctly?

That was good.

And if you could start us off, we’ve talked a little bit before we started recording… You have a fascinating background - will you share a bit of that as we start this off with the listeners?

I’ve been trained as a physicist. I studied maths and physics for my bachelor’s in Morocco, and then moved to France, Switzerland and Germany… I even worked in physics research, as a research engineer in physics - the physics of condensed matter, semiconductors for things like photovoltaics/solar cells.

Then I drifted a bit for about five years before coming back for a Ph.D. So I did research in physics, but at the same time, with some friends, we co-founded a media platform in Morocco called Mamfakinch, which was sort of like a news aggregator during the 2011 events that some people call the Arab Spring. During that period, I became more and more convinced that the web, through those platforms, was giving people tools to circumvent the usual intermediary bodies - electoral political parties, established news organizations - and to self-organize… But at the same time, there was a harmful effect, which we would only start to become more aware of five years later, during the recent events in the U.S., for example.

[04:09] And would that be misrepresentation of events, like fake news and that kind of thing?

True. And back then, 2011-2012, there was another thing that caught my attention: whenever my colleagues at Mamfakinch and I put a lot of effort into deep investigative work on some very relevant public issue and then published it, the readership would be very low compared to a three-minute video by some activist who would just record himself or herself with a basic camera and speak in very simple words; that would take off.

Back then, in 2012, I kind of stopped being very involved in Mamfakinch. I was still working in physics, by the way, but I thought that video platforms would play an ever-increasing role as bandwidth and access to heavy content like video became democratized. And I said, “Yeah, okay, video sharing seems to be more powerful than text sharing on the web. I think this can help a lot with something I also care about, which is science education.”

Just as videos on political issues spread more than text, I thought that videos on science would do the same for a general audience - for kids, to get them motivated about science, or just to tutor people. I was not aware of Khan Academy back then; someone showed me Khan Academy after I did my first videos… So I said, “Okay, I will start a video project that does tutoring in physics and maths.” That was something I was kind of good at - tutoring people in maths and physics… So I started a tutoring project in maths and physics while still working in condensed matter physics. But then a professor here in computer science, Rachid Guerraoui, convinced me to join efforts and to do that full-time with him here, in Lausanne.

We got initial funding from Google and from the faculty. Then the faculty took over the funding and we got enough to do it full-time. And I started learning about computer science as a fundamental science, and I was realizing how epistemologically relevant computability and concepts like decidability were for understanding the world.

And about when was that, just for the timeline…? From 2011-2012 - would this have been another year or so on?

Yeah, this transition happened in 2013. By June 2013 I had left my job as a research engineer in physics and came to Lausanne to fully start this tutoring project, which became an official tutoring platform of EPFL. It had a very good reception from bachelor’s students. It’s not the kind of YouTube channel that goes popular, because it’s on very specific and technical topics, and it’s in French; most of the content is in French, because EPFL is a French-speaking university… So the audience was not huge - it was a small audience, but a high-quality one. For example, we had a high retention rate compared to MOOCs. MOOC platforms had a 7% or 8% retention rate; we had something close to a 70% retention rate, because it was tutoring. We were addressing questions bachelor’s students at EPFL would struggle with before an exam, like how to compute the third derivative of some physics object.

[08:09] That really does sound quite a lot like Khan Academy - obviously, you’re doing it in French and for the students here, but we can think of what you were doing in that kind of context. So where did that lead you?

Because I was funded by the Computer Science Department, it led me to learn more about computing. When I was trained as a physicist, I viewed computer science as this engineering thing where you debug Java and C++ code; I didn’t really like that. But I was not aware of, and not educated in, the fundamental science of computing.

Little by little, I started educating myself. I started learning about learning… That’s so meta - so I started reading about learning theory, the work of Leslie Valiant, for example, the work of Vapnik and Chervonenkis, and also the fundamental CS part, Turing. And I kind of bought into a few calls, for example from Leslie Valiant, to make computing a natural science. I think it’s a very powerful epistemological tool to understand natural phenomena. I like to call computing the science of the feasible - what can be done, as in complexity theory, in a given amount of time, with a given amount of resources… And I like to view learning theory as the science of the learnable - what can be learned, given an amount of time and an amount of data points, an amount of samples.

I loved that, so I wrote a proposal to start a Ph.D. trying to understand biological processes with computability tools… Not as a computer scientist collaborating with biologists and coding stuff for them - not bringing the engineering part of CS, but bringing the epistemological part of CS, which views complex systems through complexity theory, resources etc. And the main guiding line was robustness. Could we explain robustness in biological processes with computational tools? Could we explain, for example, why an ant colony is robust to randomly killing some of the ants, up to a certain level, without a central authority allocating tasks and saying, “Oh, by the way, a certain number of foragers died… those of you doing nursing should switch to foraging”? Myrmecologists (biologists who study ants) know that there’s no central authority doing that. It’s self-organized, and it’s robust, and it’s fault-tolerant.

The brain is also a very good example of a robust structure, where there’s no central authority telling neurons what to do. To a certain extent, it’s very distributed and robust, and it can tolerate the failure of some of its nodes.

So that was the starting line… Let’s understand the fault tolerance of biological processes with tools from algorithmic theory and from distributed computing. That was very physics-y - something that could bring the physicist in me back to light, doing research five years after I left my master’s.

[11:51] Little by little, my awareness of more applied aspects of machine learning grew. I was trying to understand fault tolerance in neural networks - how does error propagate in a neural network when some of the neurons are removed? Today, this is not a practical problem, because neurons in neural networks do not fail. A neural network is simulated on a machine, so the unit of failure is the whole machine, not a single neuron.
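
As a toy illustration of the question (my own sketch, not the model from his paper), you can knock hidden neurons out of a small feed-forward network and watch how the output error grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network with random weights, standing in for a trained model.
W1 = rng.normal(size=(64, 32))   # input -> hidden
W2 = rng.normal(size=(32, 10))   # hidden -> output

def forward(x, dead_neurons=()):
    """Forward pass; hidden neurons listed in `dead_neurons` are silenced (output 0)."""
    h = np.tanh(x @ W1)
    h[:, list(dead_neurons)] = 0.0
    return h @ W2

x = rng.normal(size=(100, 64))   # a batch of inputs
baseline = forward(x)

# Knock out an increasing fraction of hidden neurons and measure output deviation.
for frac in (0.05, 0.1, 0.2, 0.4):
    dead = rng.choice(32, size=int(32 * frac), replace=False)
    damaged = forward(x, dead)
    err = np.linalg.norm(damaged - baseline) / np.linalg.norm(baseline)
    print(f"{frac:.0%} of neurons removed -> relative output error {err:.2f}")
```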

This will become a problem when we have neuromorphic hardware - if you’ve heard about this…

Could you define what that is specifically?

Neuromorphic hardware is a class of hardware that is itself built as a neural net. So the hardware itself contains pieces that behave like a neuron, and pieces that behave like a synapse… Whereas today we just simulate neural networks in software.

So would it be fair to say that because you’re implementing the hardware itself in the form of a neural network, you can have - just like with any other machine out there - parts of the machine fail? And therefore, unlike today, where it’s just software and it’s either all working or it’s not, you can have parts of that hardware neural network fail, and it’s a new problem for us to solve - which is why you’re saying it’s not yet practical? Am I understanding you correctly?

It’s not really a new problem… It was a very popular problem in the ’80s and ‘90s, before the last AI winter, because people were expecting neuromorphic hardware to arrive any day. So you’d find a lot of papers in the ’80s and ‘90s about fault tolerance in neural nets, and they would talk about VLSI circuits (Very Large Scale Integration)… But then neuromorphic hardware didn’t happen, we simulated neural nets on machines, and people stopped caring about this problem.

But yeah, I find it a very good problem for someone who thinks like a physicist, like me. So I cared about it, and even though there’s no neuromorphic hardware roaming the earth today, little by little, people who are relevant in machine learning would tell me, “Look, we don’t care about this yet; though if you could understand how error impacts learning in distributed frameworks - like when we train machine learning systems over a set of machines - that might be relevant today.” So I switched my interests a bit.

I published a paper - I proved some bounds on error propagation in neural nets, and the mathematical modeling I did there was also useful for studying biomolecular networks with some friends from the Johns Hopkins medical school, because it turns out that biomolecular networks are just weighted graphs of nonlinear nodes, just like neural nets…

Now, that’s pretty cool. I had never thought of it that way. Okay, so I was going to ask you about that - you had mentioned that you were dealing with robustness in biological systems alongside technical AI safety… Is that the crossover there? Are we getting to that, or am I jumping ahead?

Not yet – but the glue is already there; the glue is fault tolerance. There are two hemispheres in my Ph.D. One hemisphere was robustness in biological systems, and the other was technical AI safety. They don’t seem to be related, but they are actually related through fault tolerance. I cared about fault tolerance - what happens in a complex system when some nodes are knocked out, or are misbehaving, or are lying to the group.

Okay, so you’ve gotten to the crux of it. I know that when we first met and you started talking about this, I couldn’t wait to hear where it goes, because it’s fascinating how you’ve pulled together multiple fields that may not be obviously related upfront, but are connected through fault tolerance… And you were making comments earlier about how this affects things like fake news and falsified information… So take us there.

[15:55] Alright, let’s go to the more technical AI safety part of my research… I like to tell this story - when I say to my friends, “Yeah, for the past two years I switched my interests a bit toward technical AI safety”, they go, “Oh, isn’t that about killer robots, and rogue self-driving cars, and things we’ll have in the far future?” I think that’s partly because the media always shows those kinds of motivations when talking about AI safety. I always like to tell them that there are killer robots already among us; they’re very dumb and primitive and doing very basic machine learning, and they’re called recommender systems.

That’s great, but you’ll have to explain what you mean by that, because that’s a little bit of a shocker when you hear that.

Imagine a young couple of parents who just had a kid, and they go to a search engine and type “medical advice on vaccines for young kids.” They get an initial piece of content that tells them, “This is harmful. It can cause autism, and your kids can die, and this is really a conspiracy by big pharma to make us buy their products.” Then the platform recommends another video telling them similar stuff, and another one, and another one, and another one. Actually, that could also happen to people who didn’t even search for that - they were just looking for medical advice on some random topic, like herpes, and they end up on a video telling them, “Oh, there’s this big pharma conspiracy. Don’t take your kids for vaccines.”

It’s funny that you say that, because I actually have friends, and even extended family members, to whom that exact use case applies, and we have gotten into debates on the benefits of vaccines. I love that you started from that academic perspective, but you’re now touching on something that affects the everyday lives of millions of people out there, and is a very common misconception. So I love where you’re going. Keep going - sorry about that.

This year, for the first time in at least the past five years or so, the World Health Organization listed “vaccine hesitancy” as a major public health issue. I’ll give you the reference, so you can give the link to the audience.

Yeah, we’ll include that in the show notes.

So the World Health Organization listed “vaccine hesitancy” in its 2019 report on the major threats to global health - it’s now up there with HIV and Ebola - because there is a surge of anti-vaccine resentment and a surge of vaccine-preventable diseases. Some estimates I remember from that report - you can check it, I might miss some details - but for example there is a 30% surge in measles in developed countries; I’m talking about countries that used to have that solved. Measles, 30%. There are some other reports that speak of 1,600 deaths per year among young people from vaccine-preventable diseases. That’s three per day - more than terrorism. That’s a less reliable report, but the World Health Organization one is talking about a 30% surge in measles. That’s a vaccine-preventable disease… And the resentment is growing.

There have also been studies on people’s opinions on vaccines in France, today versus ten years ago, and they consistently show growth in this resentment, so this is clearly a public health issue, and we can say with confidence that poisoned machine learning already kills. People think about killer robots; I’d like to tell them, “Let’s care about poisoned recommender systems, and what you do to solve that will probably help prevent something in the long term too.”

[20:22] People tend to think about killer robots as long-term, far-future stuff we shouldn’t worry about too much. I always like to reply, “No, we should care about the killer recommender systems that are pushing parents into not vaccinating their kids.” There are surges of cases like measles, and not only in the U.S. Here in Switzerland there was an outbreak in a primary school, I think, or a kindergarten… in this region, the Lausanne region. This is a serious problem that is literally already killing people.

I think the newer generations didn’t witness the past - my generation didn’t see what a non-vaccinated past looks like. I grew up in Morocco until I was 21; my aunt had polio, and she was handicapped for life. She was born in the ’50s and she was not vaccinated back then… So I can see what a non-vaccinated past looks like. I think it was even uglier than what I could see, because I just saw the survivors… And I think that my generation, in the West, is not aware of how lucky we are today. Recommender systems today maximize watch time - and the problem is that when we maximize for one metric, we tend to screw things up on other metrics. Maybe maximizing watch time is what’s leading to what we see today.

So how could we turn that into formalizable scientific questions? If you look at how machine learning is done today, you’ll find that fundamentally there is an averaging mechanism. When you do gradient descent, that’s just a protocol to update parameters. You do it thanks to some data points - you leverage some data points, you compute gradients using those data points, and then you aggregate those gradients… And the way it’s done today is mostly by averaging those gradients, or variants of averaging.
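
In other words, the standard pipeline is: compute per-user (or per-worker) gradients, average them, take a step. A minimal sketch of that aggregation step, with illustrative names of my own:

```python
import numpy as np

def averaged_sgd_step(params, user_gradients, lr=0.01):
    """Standard aggregation: the model moves in the direction of the *mean* gradient."""
    mean_grad = np.mean(user_gradients, axis=0)
    return params - lr * mean_grad

# Toy usage: five "users", a three-parameter model.
rng = np.random.default_rng(1)
params = np.zeros(3)
grads = rng.normal(size=(5, 3))   # one gradient vector per user
params = averaged_sgd_step(params, grads)
print(params)
```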

If you ask a sociologist about averaging - “Would you use the average to study the socio-economics of a region?” - any reasonable sociologist would tell you, “Please don’t take the average.”

As a funny illustration - maybe it’s not really funny, it’s a bit sad - in my talks I always ask: who thinks that the GDP per capita in Finland, Denmark and Sweden is higher than the GDP per capita in the U.S.? Most people in the room raise their hand, because they think the GDP per capita in Sweden, Finland and Denmark is higher; actually, it’s the opposite. It’s slightly higher in the U.S.

I know that I was one of those people you were referring to. I would have said the other way around. It’s interesting, I did not realize that.

Yeah. And there are even more striking cases. You can compare the GDP per capita in Germany and the U.S., and you’d find that the U.S. one is way higher, I think. But for sure, Denmark, Finland and Sweden have GDPs per capita, according to the latest OECD or CIA reports, slightly lower than the U.S. - yet no one is fool enough to say that the typical Swedish citizen has a poorer life, or even a comparable life, to the typical U.S. citizen. Unfortunately, the typical U.S. citizen tends to have less access to public education, health care etc. Why? Because averaging is not robust. If you take the average and you have a bunch of ultra-rich billionaires and several homeless people - yeah, the average might still look good.

[24:23] I come from a country where this also applies. If you ask any educated Moroccan or Algerian in Europe (Algeria is our neighboring country), “Where do you think the median income or access to health care is higher?”, they would tend to say Morocco… Because there are these big outlier cities like Rabat and Casablanca where you see fancy construction and very nice cars on the road, and they think, “Yeah, this country seems to be a bit richer than Algeria” - but it turns out that’s not the case. The median Algerian has a better life than the median Moroccan. Morocco has a bunch of outliers who think of themselves as middle class while they are not.

So, long story short, sociologists were aware of the weakness of averaging from at least the 19th century - if you read Émile Durkheim… or no, sorry, Weber. The first data scientists were probably sociologists, and they were aware of this problem. And they will tell you, “Yeah, take the salaries, rank them, take the one that splits the distribution into two halves - that could be a better way to evaluate a country than taking the average.”
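
A two-line illustration of the sociologists’ point - one extreme outlier drags the average far from the typical value, while the median barely moves (the numbers are made up):

```python
import numpy as np

incomes = np.array([1800, 2200, 2500, 2900, 3100, 3400, 1_000_000])  # one extreme outlier
print(np.mean(incomes))    # ~145,000: says almost nothing about the typical person
print(np.median(incomes))  # 2900: the value that splits the population into two halves
```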

I see what you’re saying… And I was going to ask you how the weakness of averages ties back into the use case you’re addressing.

So yeah, a naive idea is to say, “Let’s port that into machine learning. Let’s take median gradients instead of average gradients.” People behave on a social network, and their behavior creates gradients. What’s happening today is that the social network uses the average of those gradients to update the model. If there is a minority of hyperactive, hyper-motivated extremists, they might screw up the recommender system.
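
Here is a toy demonstration of that naive idea - a small, coordinated minority can drag the mean gradient wherever it wants, while a coordinate-wise median stays close to the honest majority (my own illustration, not any platform’s actual fix):

```python
import numpy as np

rng = np.random.default_rng(42)

honest = rng.normal(loc=1.0, scale=0.1, size=(90, 4))   # 90 honest users, gradients near +1
poison = np.full((10, 4), -50.0)                         # 10 coordinated extremists pushing hard

grads = np.vstack([honest, poison])
print("mean aggregation:  ", np.mean(grads, axis=0))     # dragged strongly negative by 10% of users
print("median aggregation:", np.median(grads, axis=0))   # stays close to the honest value ~1.0
```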

So to tie this back in - this is exactly what we’re seeing day in and day out with the negative impact of social media on our lives. It’s fascinating - you’ve come in through this academic path, but you’ve landed squarely in the middle of a gigantic problem that we’re facing around the world. As a U.S. citizen, I know we are having a lot of political conversation right now around exactly this… So what are the implications?

The implications might be, for example, what happened last year with the crisis actor conspiracy. I don’t know if you remember - there was this very sad shooting in Parkland, Florida, at that high school, and a few survivors of that shooting - David Hogg, Emma González and others - rose to prominence with their campaign promoting safety measures and gun control measures that would protect high schools from shootings… And there was a video claiming that those kids were not real survivors of the shooting; that they were crisis actors used to promote gun control on television. And this video went on the front page of YouTube.

So basically you’re talking about an instance of pure fake news - you have a bad actor creating a fiction just to serve their own ends, with no basis in reality…

[28:08] But it doesn’t end with the video being featured on the site. If you went to YouTube.com that day in the U.S., that was the featured video on the front page. But it didn’t end there; those kids received death threats, because people believed the video. The video spread and became very popular, and the damage was done. YouTube apologized later, of course, and they fixed the problem, but it was too late; the harm was done. The kids received death threats.

Imagine you are surviving a shooting and then you receive death threats, because people massively saw a video saying that you are a crisis actor, going to the television to promote a political ideology of gun control.

So, your research into robustness - how can it be applied to these real-life situations that we’re all trying to figure out right now? What are your solutions?

Of course, real-life solutions are very complex. I’m not claiming that we have bulletproof solutions to complex real-life problems, but we could at least fix the obvious ones… And one obvious problem is that recommender systems average gradients - they should stop doing that, for example.

I’m not claiming that what happened to YouTube was pure poisoning. I don’t know exactly what happened at YouTube, but I would say a first fix would be to stop taking the average - maybe YouTube has already fixed that, or maybe it was another problem that I’m not aware of… Let’s just say that in any situation where you average people’s behavior, a first fix would be to stop averaging, because you’d be vulnerable to extremist groups.

So earlier you mentioned median - would that be a better selection?

Fundamentally, with the approach we’re taking in machine learning - the choices we make as we put our algorithms together for a given use case or solution - in some cases maybe we’re following the herd and doing what other people have done on other projects. But in the case we’re talking about, it’s not serving us well, because the extreme ends of the distribution are able to take advantage of it.

Most importantly, spotting those extreme ends is becoming harder and harder today. I’ve talked to banks and insurance companies - they’re very good at fraud detection, and they typically do it with tools like PCA. I don’t know how much detail I should go into on this podcast, but it’s a method that detects the big tendencies in a dataset.

The problem is that while it’s very good for spotting outliers, the cost of doing it grows quadratically with the dimension of the data. So it prevents you from leveraging high-dimensional big data, as we like to say today. It narrows the scope of your tool down to simple linear regression, logistic regression. You can’t run those kinds of fraud detection mechanisms on something as massive as a video platform. So we need something that scales at most linearly with the dimension of the model and the data - and finding something that behaves like a median in high dimension is a hard problem.
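
One classical high-dimensional analogue of the median is the geometric median, which can be approximated with the Weiszfeld iteration at a per-step cost roughly linear in the dimension. This is a minimal illustrative sketch of that general idea, not one of the algorithms from his papers:

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iteration: a point minimizing the sum of distances to all inputs.

    Each iteration costs O(n * d) - linear in the dimension d - unlike
    covariance-based outlier detection, which is quadratic in d.
    """
    est = np.mean(points, axis=0)          # start from the (non-robust) mean
    for _ in range(iters):
        dist = np.linalg.norm(points - est, axis=1)
        dist = np.maximum(dist, eps)       # avoid division by zero
        weights = 1.0 / dist
        new_est = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_est - est) < eps:
            break
        est = new_est
    return est

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(size=(95, 1000)),        # honest points around the origin
                 np.full((5, 1000), 100.0)])         # a few extreme outliers
print(np.linalg.norm(geometric_median(pts)))         # stays near the honest cluster
print(np.linalg.norm(pts.mean(axis=0)))              # pulled far away by the outliers
```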

[32:03] The technical solution we’ve been working on - me and my colleagues - since I jumped on this problem two years ago or so… I took a break from the biological robustness track; I’m getting back to it now, but for two years I worked fully on this poisoning resilience and on another AI safety question called safe interruptibility, with some friends.

On the poisoning side, we’ve been trying to find alternatives to the median, because in high dimensions, as I said, you can’t rank. You can rank salaries and then spot the salary that splits them into two halves - half the population earns less than $3,000, half earns more; $3,000 is the median, fine. But how do you do that for vectors, for multi-dimensional data? You can’t rank vectors. Imagine you have a million spreadsheets, each containing a million cells - you can’t rank them. So you want to find the median spreadsheet, in a practical manner - that’s more or less what we’re trying to do. We’ve derived a series of algorithms that behave like a median and that provide guarantees - that the result stays bounded within a majority of the points, etc. - and we proved it.
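
For reference, the NeurIPS 2017 paper he mentions below (“Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent”, Blanchard, El Mhamdi, Guerraoui and Stainer) introduced a selection rule called Krum - roughly, pick the gradient that sits closest to a majority of its peers. Here is a simplified sketch in that spirit; it is not a faithful reimplementation of the published algorithm:

```python
import numpy as np

def krum_style_select(gradients, f):
    """Pick the gradient whose n - f - 2 nearest neighbours are closest to it.

    Intuition: a vector surrounded by many similar vectors is unlikely to be an
    isolated poisoned update. `f` is the assumed maximum number of poisoners.
    """
    n = len(gradients)
    assert n > 2 * f + 2, "needs a sufficient honest majority"
    scores = []
    for i in range(n):
        dists = np.sum((gradients - gradients[i]) ** 2, axis=1)
        dists = np.sort(dists)[1:n - f - 1]   # drop self (distance 0) and the f+1 farthest
        scores.append(dists.sum())
    return gradients[int(np.argmin(scores))]

rng = np.random.default_rng(7)
grads = np.vstack([rng.normal(1.0, 0.1, size=(8, 5)),   # honest gradients near +1
                   np.full((2, 5), -100.0)])            # two poisoned gradients
print(krum_style_select(grads, f=2))                    # returns one of the honest gradients
```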

We’ve also been promoting the idea that security measures should be rigorously proven. Whenever we found a bug, we’d have to go back and modify… But security measures should not be supported only by empirical evidence, because you can never simulate all possible attacks - so we always tried to prove that this protocol, gradient descent, will always converge despite the existence of a fraction of poisoners. We had the first paper on that at NeurIPS 2017. I can give references, if you want…

Yeah, we’ll definitely include those in the show notes. Is it fair to say that these higher-order algorithms you’re talking about are a way of evolving gradient descent, or maybe replacing parts of it, so that we start having real tools to deal with poisoning and fake news instances, and such?

Yeah. Talking about tools - my work has been… I was the guy who would find an algorithm and prove that it satisfies a given requirement… But I’ve also been trying to work with my colleagues and co-authors who are more on the engineering side, to port this into tools as soon as possible. So we had this first paper at NeurIPS, then we published two follow-ups at ICML 2018 - one in asynchronous settings and one in very high-dimensional settings… And now we have a fourth work, where we took TensorFlow - the famous Google framework for machine learning - and replaced every instance of averaging in its gradient aggregation parts with the algorithms I’ve been promoting for the past two years. My colleagues Sébastien, Arsany and Georgios made it work on TensorFlow, and as a side bonus they also made TensorFlow communicate over UDP.

[35:56] This version of TensorFlow, which we’ll publish on GitHub this week, is Byzantine-resilient - it tolerates poisoned gradients up to a certain fraction - and it can also communicate over UDP, which is an unreliable communication protocol, instead of requiring TCP/IP as before, because previously you could not afford to lose packets. So as a bonus, you can now communicate over a faster but less reliable channel. That doesn’t only have to do with the median stuff; they also made some other technical changes.
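
To illustrate the integration pattern only (this is not their TensorFlow code), the aggregation rule becomes a plug-in, and - since an unreliable transport like UDP may drop packets - the step simply aggregates whichever gradients actually arrived this round:

```python
import numpy as np

def robust_step(params, arrived_gradients, aggregate=np.median, lr=0.01):
    """One training step where the aggregation rule is a plug-in.

    `arrived_gradients` may hold fewer rows than there are workers if the
    transport (e.g. UDP) dropped some packets; we aggregate what we received.
    """
    agg = aggregate(np.asarray(arrived_gradients), axis=0)
    return params - lr * agg

rng = np.random.default_rng(3)
params = np.zeros(4)
worker_grads = rng.normal(size=(10, 4))
received = np.delete(worker_grads, [2, 7], axis=0)   # pretend workers 2 and 7's packets were lost
params = robust_step(params, received, aggregate=np.median)
print(params)
```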

So say you’re an engineer out there listening to this and you want to take advantage of that… I had a sense that that’s where you were going with the research - you now have your own approach to gradient descent. Do you foresee that ever being included in TensorFlow itself? Do you think the tools you’ve created for dealing with poisoning, and with bad actors trying to take advantage of the dataset, will become common - that we’ll gradually evolve into using these updated algorithms in place of the average-based ones - or do you think it will always be a bit more of a specialized thing?

I don’t know if you know Stuart Russell, the famous professor at Berkeley… Stuart Russell is one of the pioneers of modern AI; he wrote the textbook “Artificial Intelligence: A Modern Approach” with Peter Norvig… And I like one of his arguments. We met at a conference a few weeks ago in Puerto Rico - the Beneficial AI conference organized by the Future of Life Institute… I like one of his arguments for AI safety, where he said, “If you talk to civil engineers, you will never find someone talking about bridges and someone else talking about safe bridges - bridges that do not fall apart after three hours.” Not falling apart after three hours of deployment is part of the definition of a bridge.

The feeling I had from talking to attendees of Applied Machine Learning Days is that we are slowly going in this good direction, where most of the people involved in machine learning research are more and more aware that not falling apart after a few hours in production is part of the definition. I think we’ll stop talking about “safe AI” - safety should just become part of the definition of AI.

Yeah, so it sounds like it’s a foundational thing that we probably should have been thinking about ahead of time, but it will become the de facto standard. The success of safe AI essentially eclipses itself - it just becomes AI and the tools we use.

Now, coming back to your question - maybe I’m rephrasing and it’s not exactly what you said - but is poisoning really solvable like that? The bad news - there’s always bad news in computing… People tend to forget that computer science was founded on an impossibility theorem. Turing, before proving what algorithms could do, started by proving what algorithms could never do - the halting problem. You can never find an algorithm that audits other algorithms and says whether they will terminate or not.

So algorithmic science - computer science - started out of an impossibility result. We have to remember that. We are a field of science (I like that) where impossibility results are foundational, because they narrow down the scope of what you can do: you cannot do this, so you can only do what is within this scope. Good.

[40:10] Distributed computing - the field I’m (partially) part of - also has strong impossibility results. You can’t solve consensus - you can’t agree - if the fraction of malicious nodes exceeds a certain threshold. For example, if we want to agree on a common decision and 51% of the group is malicious, we will not agree on the safest choice. That much is trivial.

There are similar theorems in game theory, by the way - like the Arrow and Gibbard–Satterthwaite theorems, the impossibility theorems for democracy and social choice. We also have impossibility results for distributed machine learning - or you can just think of it as gradient-based machine learning. These are not new; I don’t claim we were behind them, we just renewed interest in them. They were proven in particular in ‘85 by a Belgian mathematician, Peter Rousseeuw, and the robust statistics community. You can actually prove that if you have a group of random variables following some distribution, an estimator cannot recover their mean if more than half of them are adversarial. That’s what’s called “the breakdown point” - in distributed computing we call it Byzantine fault tolerance, because it relates to a thought experiment called the Byzantine Generals Problem, but we don’t really need to go there. It’s just an agreement problem between three generals surrounding a city: if one of them is corrupt, they can’t agree whether to attack or not. So if you have N generals surrounding a city, the city only needs to corrupt a third of them; it doesn’t need to corrupt everyone. If it corrupts a third of the generals, the generals cannot agree on a common decision. In the same way, you cannot make gradient descent work if more than a certain fraction of participants is unreliable. If most people are promoting anti-vaccine content, of course, no solution will work. I’m not claiming we have something bulletproof.
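
For reference, the classical bounds he is gesturing at can be stated roughly as follows (my paraphrase of standard robust-statistics and distributed-computing results, not a quote from his papers):

```latex
% Breakdown point: the largest fraction of adversarial samples an estimator
% can tolerate before it can be dragged arbitrarily far away.
\varepsilon^{*}(\text{mean})   \;=\; 0            \quad \text{(one corrupted sample is enough to break the average)}
\varepsilon^{*}(\text{median}) \;=\; \tfrac{1}{2} \quad \text{(any fraction strictly below one half is tolerated)}

% Byzantine agreement among n generals, of whom f are traitors, is possible only if
n \;\ge\; 3f + 1
```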

So there are limitations, in other words. There’s success to be had, but there are also limitations. In certain circumstances - when that many are working against you - you won’t be able to overcome it.

But the people at those big platforms are smart enough to realize that, and they are realizing it. I saw a very good press release from YouTube last week, where they said they will now actively work to prevent phony medical advice from being recommended on YouTube. This is not about censorship; it’s just about not recommending. So they are actively looking at the problem, and I believe they have enough smart people thinking about it.

What I’m working on now, as a follow-up to what I mentioned before, are situations where you don’t have a majority of reliable nodes, but you do have a minority of experts. It’s some sort of epistocracy - you give the power to those who know.

Imagine you have the Johns Hopkins Medical School YouTube account, the Pasteur Institute account in France, the Lausanne hospital’s account, etc., and they are producing content on vaccine safety. But then you have a majority of poisoners, of anti-vaxxers. You might want to do something in the PageRank style - some sort of PageRank-flavored gradient descent, where you follow the experts.
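
Purely as an illustration of that direction (my own sketch, not his published or unpublished method), one could imagine weighting each account’s gradient by a trust score, so that a handful of vetted expert sources anchors the update even when unverified accounts form the majority:

```python
import numpy as np

def expert_weighted_aggregate(gradients, trust):
    """Trust-weighted aggregation: experts (trust ~1) dominate unverified accounts (trust ~0)."""
    trust = np.asarray(trust, dtype=float)
    weights = trust / trust.sum()
    return weights @ np.asarray(gradients)

rng = np.random.default_rng(5)
expert_grads = rng.normal(1.0, 0.05, size=(3, 4))    # e.g. hospital / institute accounts, near +1
crowd_grads  = np.full((97, 4), -5.0)                # a poisoned majority pushing the other way
grads = np.vstack([expert_grads, crowd_grads])
trust = np.array([1.0] * 3 + [0.001] * 97)           # hypothetical trust scores

print(np.mean(grads, axis=0))                        # plain mean follows the poisoned majority (~ -4.8)
print(expert_weighted_aggregate(grads, trust))       # trust-weighted update stays on the experts' side (~ 0.8)
```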

[44:07] Gotcha. So you want to take advantage of their expertise, which is a way of countering the fact that you have a majority of poisoners in there… It sounds like you’re almost taking a couple of tools and making a composite out of them.

As we start to finish up - how can practitioners out there start to take advantage of the results you’ve found and the research you’ve done, to help better the situation we find ourselves in now, with so much poisoning going on and so many people in search of an answer? What are some practical tips you can offer?

I would say start by reading the literature. The literature on poisoning was there before I even started doing machine learning; people have been looking at this since at least 2004, and people made significant progress in 2012, 2013… So yeah, there is good literature to be read.

We will release a GitHub repo with code based on the algorithms I’ve been mentioning. My colleagues will release it on GitHub, so people can take it, play with it, find potential bugs in it, find new vulnerabilities we didn’t see… The space of vulnerabilities is technically unlimited, so you can always find new vulnerabilities, or a new threat model - you always work within a threat model, and maybe we overlooked another one - and people can make progress on that. I would also advise taking datasets that give you a sense of what a recommender system does and trying to poison it, just to understand how easy it is - and maybe you’ll find a vulnerability.

We’re now entering an era where you don’t need to be a classical hacker; you don’t need to penetrate the servers and the system to poison a recommender system. You just need to behave - like, comment, dislike, post… So maybe there are many more vulnerabilities that allow people to just behave, look legit, and poison - I don’t know, make a movie platform recommend suicidal content to a depressed user; that’s something we don’t want to have.

I would bet that those things do not require hacking into the servers, finding a zero-day, and switching the code… Because of high dimensionality - we have a paper called “The Hidden Vulnerability of Distributed Learning in Byzantium”, and the hidden vulnerability is basically high dimension. Today, as we make machine learning more powerful, we are learning higher and higher-dimensional models, and that high dimensionality gives a lot of leeway, a lot of margin, to attackers… So the bad news is that as machine learning becomes high-dimensional and powerful, it also becomes very wide in the amount of leeway it gives to attackers.

So yeah, I think a good starting point is to play with those algorithms and find possible vulnerabilities we overlooked. If you are a theoretician, I would also be very happy to hear about what we might have missed in the theoretical analysis; maybe there’s a bug in our proof, and I’d be happy to learn that and work on fixing it. But if you are a practitioner and you don’t care much about the theory, I would say download my colleagues’ GitHub repo, try to improve it, and try to apply it to public datasets that are relevant for recommender systems - and maybe for other things, not only recommender systems.

So yeah, to conclude on something - I’ve been overusing recommender systems here, because I think they’re the most pressing example of killer robots we have. Today, people are not being massively killed by self-driving cars; they’re being killed by hate speech and anti-vaccine content. But of course, poisoning will become a problem for self-driving cars too. If you poison the traffic-sign data and make self-driving cars learn an irrelevant model, you might start leading them into unsafe behavior.

The idea of poisoning resilience is very broad, so it doesn’t apply only to recommender systems. You can think of your own problem and your own motivation and try to improve on that.

That’s fantastic. We’ll certainly include the GitHub repo in the show notes. And I’ll tell you what - that was a strong conclusion. If there’s anything that makes me realize how relevant what you’re talking about is, even beyond social media, it’s the fact that we now have cars and trucks and other vehicles and IoT devices that may be mobile, that could be poisoned along the way - and that itself can present a physical danger. It’s amazing how relevant what you’re working on is going to be to our future.

Thank you very much for coming on the show, and I really appreciate you taking the time late in the conference to do this.

Thank you, you’re welcome.

Our transcripts are open source on GitHub. Improvements are welcome. 💚
