Practical AI – Episode #153

Federated Learning 📱

Get Fully Connected with Chris and Daniel


Federated learning is increasingly practical for machine learning developers because of the challenges we face with model and data privacy. In this fully connected episode, Chris and Daniel dive into the topic and dissect the ideas behind federated learning, practicalities of implementing decentralized training, and current uses of the technique.

Featuring

Sponsors

RudderStack – Smart customer data pipeline made for developers. RudderStack is the smart customer data pipeline. Connect your whole customer data stack. Warehouse-first, open source Segment alternative.

Changelog++ – You love our content and you want to take it to the next level by showing your support. We’ll take you closer to the metal with no ads, extended episodes, outtakes, bonus content, a deep discount in our merch store (soon), and more to come. Let’s do this!

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

LaunchDarkly – Ship fast. Rest easy. Deploy code at any time, even if a feature isn’t ready to be released to your users. Wrap code in feature flags to get the safety to test new features and infrastructure in prod without impacting the wrong end users.

Notes & Links


Transcript



Welcome to another Fully Connected episode of the Practical AI podcast. This is where Chris and I keep you fully connected with everything that’s happening in the AI community. We’ll take some time to discuss some of the latest AI topics and news, and then we’ll dig into some learning resources to help you level up your machine learning game.

I’m Daniel Whitenack, I’m a data scientist with SIL International, and I’m joined as always by my co-host, Chris Benson, who is a strategist at Lockheed Martin. How are you doing, Chris?

Doing very well, Daniel. Looking forward to having a good conversation between the two of us… No guests today, so we’re just gonna have to do something ourselves here, man…

No guests, yeah. And it is – you know, whenever we do these Fully Connected episodes, we try to take a look at what’s trending in the AI world… One of those things is privacy and security concerns as related to AI systems.

I don’t know, before we jump into the topic for today, which impacts those areas - so we’re gonna talk today about federated learning, which is sort of a recent trend that we’re seeing… Before we jump into that, do you wanna say anything about – I know you’ve done some more deep thinking about some of the ethics concerns, and bias, and privacy type things. What are you seeing in recent days as related to that, and how are companies – are companies taking that seriously? What are companies thinking about in regard to that?

[03:53] Well, what I’ve observed is that – and it’s a little bit of a mixture of all these things. There’s a whole bunch of influences that are affecting the way companies are thinking, and a lot of those are legal. It’s where your data is located - moving it across national boundaries, even though it’s electronic, is a big deal because of the laws, things like the global war on terror… The global war on terror created sets of laws to support that, and that’s now having unexpected consequences on how different countries wanna share data down the road here in 2021, even though we’re kind of moving past that era. We have a mishmash of different laws in different countries, some of which have a little bit more thought and strength behind them, some of which don’t… And here in the U.S., where we are, there are some laws that other countries are very wary of - of having their data here in the U.S., depending on what that is. So I’ve seen there are so many factors now that are playing into this, and some of them you would expect and some of them were kind of unintentional… But it’s leaving us in a moment where – you know, going back to the topic, we’ve seen federated learning on the rise lately as a possible solution to some of these issues, or at least a good way to tackle it; the best option available right now.

Yeah, I think everyone that’s working in this space is encountering issues around ethics and privacy… In my space, in the NLP space recently, one of the AI ethics pioneers, Margaret Mitchell, she joined the Hugging Face team, which is, of course, the sort of darling of the NLP community right now, and is developing sort of a five-year plan around open source AI at Hugging Face, and the ethics around that, and what privacy and bias and all of those concerns are… And we’re starting to see more, of course, around the data side as well, and people, as they’re releasing their models and they’re putting their results on leaderboards and such, there’s a sort of general call to provide model cards and such with those, describing a bit more about the model, what data it was trained on, and some more details about the statistics around the data, bias in the data, all of that, which kind of goes along with releasing these models. So it’s definitely a trend –

Transparency.

Yeah, transparency… And on the data side, speaking of that - that kind of gets us into our topic today, because at least in my experience, most of the time when I’ve been tackling an AI problem, my first thought is “Let’s aggregate all of the data together”, do whatever pre-processing and such on it that we need to do, and in an ideal world, if we’re acting responsibly, also analyze that dataset for biases, make sure that we’re not violating any privacy concerns, all of those things on the frontend in terms of the data prep.

But there is a different approach to this that we’re seeing more and more being talked about in the sort of trends of the AI world, and that is federated learning. That’s what we’re gonna talk about today. I’m not an expert, Chris… Are you an expert in federated learning?

I am definitely not an expert in federated learning, but I’m coming across it a lot right now, in a lot of different contexts… So it’s definitely something that’s become part of my world.

So for our listeners - you’re listening to two non-experts in federated learning, trying to get a grip on what federated learning is… We’ve done a bit of looking and can discuss some of that today. Hopefully, as you listen, you learn a little bit of that as well, and of course, we’ll provide some learning resources about that.

[07:43] But federated learning has to do with – the kind of main idea behind it is that we want to train a centralized model on decentralized data. Now, that’s kind of interesting… So we still want a centralized model. There are other paradigms out there that are sort of privacy-preserving ways of going about machine learning… One of those is “Hey, we could get our model, train it on some centralized data, and then port it to mobile devices, and then sort of update or fine-tune it on the device, using device data.”

You could do that with TensorFlow Lite, or integrations with JavaScript, or Swift and other things on mobile devices… However, that’s not what we’re talking about when we’re talking about federated learning. It’s related, but in that case there’s no centralized model being updated. In that case, you’re sort of porting a parent model out to a whole bunch of client devices… And then maybe doing some learning on the device, so updating the model on the device… But all of those changes that are local never get ported back into a centralized model.

So in that case, there are some advantages to that. I don’t know – before we jump into how federated learning is different, from your perspective, Chris, what are some of the advantages of that kind of framework, where you’re porting models to devices, and kind of updating them on devices? Any thoughts?

Sure. Well, for one thing, it’s more mature now. It’s something we’ve been doing for a little while, in terms of having those different models… There’s a history, there’s a track record of that at this point. But it’s also proven itself to be insufficient for a lot of use cases. So at this point – I think it’s interesting, because we’ve seen this topic evolve over time. Federated learning is not new. As we are recording this today in late 2021, this is not a new topic… But it’s really come into its own. I think for a long time it was a discussion, it had limited implementation capability…

As I was looking around at different things for today’s episode - you know, there was talk of federated learning in 2016, 2017, where people were talking about the way forward into that… But in those initial years - with some exceptions that were really kind of edge cases - you didn’t see it on the rise. You saw these other approaches that you’ve just described there.

We’re seeing what I think is a real shift right now, this year, toward trying to find a better solution - being able to have a central model that is decentrally trained. So I think this is a natural evolution… The need is there. Other approaches are effective in some ways, but also have some deficiencies… And the technology, from an implementation standpoint, has finally arrived with federated learning. So we’re starting to see a lot of different implementation paths at this point, from vendors and various frameworks.

Yeah. So that previously adopted way of going about things, which is still valuable… So having the model on the device, maybe updating the model on the device, but never communicating any model changes back to a central model - that has been useful, and maybe one advantage is its privacy. So if you’re using user data to update the model, that data actually never leaves the device. It stays on the device with the model. The model is updated.

However, all the other devices out in the world that are updating their models, they never get the benefit of the model updates that are on your own device. So if we think of something like speech recognition or interfaces on phones, if my device is learning how to better recognize my accent of English or whatever it is, and it fine-tunes on the device - that’s great. But then all of those other people out there that have a similar accent of English to me - I don’t know how many of them there are - they’re not getting the benefit of those updates. So they have to do their own training.

So in some cases it’s a lot of duplicated effort as well, potentially, where people aren’t gaining the value out of other people’s model updates… But it does have an advantage for privacy.

So now that we’re talking about federated learning, we’re also talking about doing certain things and certain training on client devices… For example phones, or it could be people’s own computers, or their tablets, or IoT devices, or whatever it might be…

[12:21] Yeah, edge devices. And I think that’s important at this point to call it out, because you’re kind of going there anyway… And that is that the rise of federated learning in a practical sense is also happening concurrent to the rise of edge computing in a practical sense - it’s tremendously scaled, and widely available.

Yeah. And so that is now widely available… But we also have learned how to do some training on edge devices, and we have the knowledge that hey, if we gathered a whole bunch of data together centrally, or had a central model, it would probably be better than training all these child models, potentially. So federated learning I think tries to take the approach of the best of both worlds. So doing some training and operations locally on client devices, while still having a centralized model which can benefit all users - so it does that in a way that preserves privacy, and distributes the sort of training and compute out to the client devices. So in that sense, you have this sort of decentralized compute, and centralized model. This is definitely a very interesting approach…

So that’s what we mean by federated learning, and when we say a centralized model on decentralized data. Now, the question is sort of how that works, and like you said, I think there’s been a whole bunch of effort in this direction… But before we get there, I wanna emphasize what you’re talking about. I think this has been a topic that has been in our minds for some time. I remember – I think it was 2017, there was a blog post by Google, which you can still read on their research or AI blog… And it had some really cool pictures about phones doing some of the training and communicating things back to a central server… But to be honest, I didn’t hear a lot about it in that sort of interim time. Like, I heard it every once in a while, but now, part of the reason why we’re doing this episode is we’re hearing about it a lot more.

I’m curious, from your perspective, do you think that’s mostly driven by the privacy concerns, or mostly driven by the desire to have decentralized compute, versus trying to always have like a big farm of GPUs?

There’s an answer that I want to give, but I don’t actually believe it. The answer I wanna give is that there’s such concern for privacy issues out there, in the corporate world, that that is driving it. I don’t, however – you know, this is pure opinion… But I don’t however think that that is the driving factor. I think it is primarily legal constraints and logistics, personally. With large organizations that are trying to put products and services into the field and get those deployed, they are constrained by various legal aspects and technical aspects, such as networking… Even if you can move data around, if you’re working on a large model that’s trained on highly scaled data, trying to get that data in the right place, at the right time, especially if you have ongoing training, can be really challenging.

[15:58] And given that federated learning is – and we’ll get into the details later, but basically pushing weights and biases around, as opposed to all the data, it makes it logistically much, much better in that sense. So that’s what my gut is, and that’s what conversations and presentations that I see are largely geared around - logistics and legalities.

Break: [16:22]

Okay, Chris - well, this is Practical AI, so let’s get into some of the practicalities around federated learning. First of all, at least based on my understanding, this sort of architecture of federated learning differs from the typical AI training architecture in that there is a centralized server or set of servers - maybe in the cloud, maybe on premise, it doesn’t matter… But this is centralized. Maybe a larger server, like what we would normally think about doing training or having as a cloud server… Sometimes that’s called a curator. And that coordinates all the training activities with all of the clients. And then, of course, there’s a bunch of clients which are edge devices, and these could be hundreds of devices, thousands of devices, millions of devices, if we’re thinking about phones… And that central curator server coordinates the training of a model with all of these edge devices.

Now, you talked a little bit about what’s communicated back and forth. Do you wanna go into a little bit more detail based on your understanding there?

Sure. And we can dive into the detail of this, but kind of the high level is that you have that model on your central server that you’re talking about…

And model - we’re meaning like a neural network, for example.

A neural network, correct. Thanks for clarifying that. And you are going to put that model out to your client. We’re doing the opposite of what we’ve historically done, where we’ve pulled the data to where our model was gonna be trained, and now we are pushing the model out to be trained where the data is. And there does have to be a capability that at that point you have to have hardware and software on the client that can do training at some level. So it changes the architecture in that sense.

So you’re pushing the model. In the beginning you were pushing the model with its initial values, the weights and biases and such, and it’s going to train based on the dataset that’s there. And there’s different ways, which we can dive into later - there are different ways of evaluating whether or not the data that is available on a particular client supports the training process. So there are some gateways, if you will, that you can evaluate the data with… And you do training on the device.

[19:42] And as we know - you know, we keep talking about our phones being kind of the classical example of this. All of our phones these days are getting these capabilities for doing that kind of training. You know, they have the chips on them now. So you’re doing that, you get a result within a particular accuracy range, then you’re passing the resulting weights back up to the server. So that centralized server is receiving those from all of those hundreds, thousands or millions of client devices connected to it, and it has to do a form of aggregation on all of those model weights coming back in, which is referred to as federated averaging… And we can dive into what that means as we go. But then it averages those out and it measures that, and then it does it again for the next iteration. So without going all the way back through it, you’re gonna keep going through that process, that cycling, over and over again, until your centralized model is yielding the level of accuracy that you’re desiring. Then at that point, then you are able to finally deploy that model with those weights back out, and you can run that as a production model in all those clients.
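To make that cycle concrete, here is a minimal sketch of plain federated averaging on a toy linear model - not any particular framework’s API, and with simulated clients holding random data - just to show the broadcast, local-train, and average steps described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=5):
    """A few epochs of gradient descent on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a toy linear model
        w -= lr * grad
    return w

# Simulated clients: each holds its own small, private dataset that never leaves it.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(10)]

global_weights = np.zeros(3)
for round_num in range(25):
    # 1) The server broadcasts the current global weights to every client.
    # 2) Each client trains locally; only the updated weights leave the device.
    client_weights = [local_train(global_weights, X, y) for X, y in clients]
    # 3) The server aggregates with federated averaging (here, a simple mean).
    global_weights = np.mean(client_weights, axis=0)

print("global weights after federated training:", global_weights)
```

In a real deployment the clients are phones or other edge devices and the loop above runs on the coordinating server, but the shape of one round is the same.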

Yeah, great description. For my mind, I always try to put that sort of description and pair it with some examples. I’m imagining on my phone – I don’t know about you, but sometimes when I’m walking or something like that, I don’t type a text message, I just click the speech recognition thing, and then awkwardly, while everyone else watches me on the sidewalk, I talk into my phone, and it records my voice. But I always look at what was recognized from my voice before I hit send, usually… And then I correct it, because sometimes it didn’t get a correct word, or something like that.

I’m impressed, because I’ll see people do this a lot, and they just hit Send, and everyone just expects the thing to be off. But anyway, go ahead.

Yeah, yeah. So let’s say that I did that a hundred times, or something like that… So I have my voice, I have what was recognized, and I have what should have been recognized, because I have my correction. And I don’t actually know if this is how it works in practice. I’m not on the voice team at Google, or anything like that.

You should be, Daniel.

You know, it would be fun. If anyone wants to fly me out for like a speech visiting scholar position at Google, I’m open to that.

Okay, Google, OpenAI, you guys have heard it. Right there. Daniel Whitenack.

[laughs] So let’s say I have that set… It’s a small set, right? It’s not enough to train a full speech recognition model. But let’s say that Google then - they have their centralized English speech recognition model, and they then send that model - an updated version of that - to my phone, and my phone then, in this federated learning scheme, would use the data that’s just on my phone… So the data hasn’t been transferred back up to Google.

Correct.

It uses that audio and the text from my phone, does a retraining of that model, or a fine-tuning of that model based on the data that I’ve seen, and looks – so now I’m gonna have updated weights and biases, or updated parameters from that model on my phone. And then my phone can send not the data, but the weights and biases, the parameters of that model, the updated ones - or maybe just the deltas, the changes - back up to the centralized server or curation server.

So if thousands or millions of devices do this same thing, they’re gonna all be sending their updated weights back…

Yes, which are evaluated before; they’re not just lumped in. That averaging process scores them based on that… And this approach is really cool, in that you’re getting the benefit of the average weights and values across all the datasets, across all clients. So you’re getting a training benefit as though you have access to everything there is, while each device only has its own limited dataset, which then gets scored as it goes through the process. It’s pretty cool when you think about it.
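For the “scoring” side, one common way a server weights those incoming updates is by how many examples each client trained on. This is a hedged sketch of that idea only - real systems also validate, filter, and secure the updates before aggregating them:

```python
import numpy as np

def federated_average(client_updates):
    """client_updates: list of (weights, num_examples) pairs reported by clients."""
    total = sum(n for _, n in client_updates)
    # Weight each client's contribution by how much data it actually trained on.
    return sum(w * (n / total) for w, n in client_updates)

updates = [
    (np.array([0.9, 1.1]), 200),  # a client with lots of local data
    (np.array([0.5, 2.0]), 10),   # a client with very little data
]
print(federated_average(updates))  # pulled mostly toward the 200-example client
```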

Yeah. And I think it’s important to emphasize that this is a practical reality now… I mean, people are still doing research on this, no doubt. This is an active research topic.

[24:08] But there are practical ways to go about that, that have been developed. We’ll list out a few of those a little bit later in the episode, but I’m just looking at PyGrid, which is one of these that’s been released… And in that, just to give you a sense of what this might look like - there’s a couple of Flask-based applications… So Flask is a Python framework that allows you to build web applications, like APIs, REST APIs, that sort of thing… And so there’s a Flask application that is centralized. I think they call it the network, and it manages and monitors and controls routing instructions to various domains, which are in my understanding hosted on workers, so PyGrid workers… And that domain is another Flask-based application that receives instructions and executes a worker application on the device to do ephemeral updates to the model, and communicate those back.

The device will request to train a model, so the device actually has to sort of opt in to the training bit, which makes sense… And then the model and some sort of parameters about the training plan will be sent to the device. The training will take place on that device, with the private data. Once the training is completed - in this case, with this framework - it’s the delta, the diff of the parameters from the original state of the model, that gets communicated back up to the server, and then that’s averaged, like you say, into the centralized model. So that’s what they called a model-centric federated learning type of technique.
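Since Flask came up, here is a hypothetical, heavily simplified coordinator sketch - this is not PyGrid’s actual API, just an illustration of the shape of the server side: hand out the current model, accept parameter deltas (never raw data), and fold them into an average:

```python
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
global_weights = np.zeros(3)  # toy "model": three parameters
pending_deltas = []

@app.route("/model", methods=["GET"])
def get_model():
    # A device that opted in to training fetches the current global weights.
    return jsonify(weights=global_weights.tolist())

@app.route("/report", methods=["POST"])
def report_delta():
    # A device reports only its weight delta -- never the raw training data.
    pending_deltas.append(np.array(request.get_json()["delta"]))
    return jsonify(status="accepted")

@app.route("/aggregate", methods=["POST"])
def aggregate():
    # Fold the accumulated deltas into the global model (plain averaging).
    global global_weights
    if pending_deltas:
        global_weights = global_weights + np.mean(pending_deltas, axis=0)
        pending_deltas.clear()
    return jsonify(weights=global_weights.tolist())

if __name__ == "__main__":
    app.run(port=5000)
```

The endpoint names and the in-memory state here are made up for illustration; the real frameworks handle authentication, scheduling, and secure aggregation on top of this basic request/report cycle.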

There are other kind of versions or flavors of federated learning that might include some communication of privacy-preserving data, but I think the one that we’ve been mostly emphasizing here is the model-centric version, which is what we’re talking about here, which is the data stays on the device.

Right. Correct.

I mean, that seems practical to me, in the sense that I’ve worked with Flask applications before and I’ve done some AI training a few times… So it seems like something I could work out. Although I haven’t worked with phones much myself. So maybe that part’s a little bit scary to me, in how that actually works. I’ve never developed a mobile app.

I have, but I haven’t done it from a deep learning standpoint.

Yeah, yeah. So I don’t know all of the mobile application development pieces that you’d have to tie in. I know that there’s some JavaScript and Kotlin and Swift libraries for these frameworks that will allow you to build and support that worker capability on the device.

One thing that came up in some of the information that I was reading was “Doesn’t this just suck away all the battery of the device?” Like, what are the implications for the device user? Because it’s maybe useful to talk about the disadvantages for the device user, rather than just the advantages, like they get a cool new model… Which is good, but you kind of – what was that program where it was like a citizen science type thing, where you could register your computer with a science lab…

Oh, I know what you’re talking about.

…and they would run astronomical calculations in a decentralized way on your computer…

Yeah, there was the SETI program that was doing that with computers. That was the earliest one that I can remember. That’s way back now…

And there’ve been others since.

I mean, it’s in the same vein… That’s gonna drain your computer power, right?

[27:54] It can, but I think our conversation right now is also leaning toward that phone assumption… And it’s not always a phone assumption. It can be a larger device. Your edge device might be a mobile platform. And when I say mobile, I mean like with wheels, or wings, or rockets, or something else.

Oh, right. Like a car.

It could be a car, exactly. So it may be, going forward, and this is complete, pure speculation, but if you’re gonna do a lot of federated learning and doing that processing, maybe there’s another battery that’s in that car, that’s there to run your training. Hardware that’s there. So it kind of depends. If it’s the phone - yeah, that’s probably gonna start sucking battery down if you’re doing any substantial amount of training… But if you’re in literally a vehicle that is tremendously benefitting from that over time, as use cases get established… I think we’re still - and we can get into use cases in a few minutes, but I think we’re still at a point where we are exploring use cases and where federated learning gives us a strategic advantage to implement… And there may be cases out there where in doing that you simply architect in the ability to do on-platform learning out there on the edge, so to speak, to accomplish this.

Before we jump into frameworks and some of the use cases that we’ve seen out there, Chris, one of the things that I think is worth noting as related to this federated learning topic is related to security. I remember very clearly in my mind back when I did go to conferences in person… I think it was an ODSC conference, or something like that… I saw Jim Klucar who was with Immuta at the time. Shout-out, Jim. I don’t know if you listen, but… Great guy. He talked about privacy in his talk, and he showed some examples where – I think it was facial recognition; I’m pretty sure it was facial recognition. Anyway, you could take a model – because our models are so big now, and there’s so much encoded into our models, and you could actually, from a prediction, sort of work back to the original data that was used… So you could reconstruct people’s faces in the training set, just from the model parameters.

So in theory, you could imagine sort of reconstructing the data that’s on client devices from what’s sent to the central curator server, or coordinator server… So that’s one thing to note here about the aggregation, and I think this is probably what you were getting to when you were talking about the aggregation. There is a method of securing that aggregation.

So there’s two things - there’s encryption, when the data is sent back to the central server, but then there’s a way to securely aggregate that… And that has to do with differential privacy. So yeah, I don’t know if that’s what you were meaning when you were getting at averaging some of those results back together.

I do… That’s one of those areas that there’s gonna be a lot more research on, in terms of being able to do that. Because right now there are some federated averaging algorithms that are in use… And the most basic one is just called federated averaging, but there are others that are being built on top of that. And I think that’s gonna be one of those areas that people are having to explore, data scientists on the research side are gonna have to explore, is “Can you get back to the data?” And there’s so much that’s gonna depend on where that research goes.

Going back to our example of national boundaries - you have laws protecting citizens (rightly so), but they vary across national boundaries. So for you to be able to participate in federated learning in that capacity, and to be able to deploy a subsequent model across national boundaries, that is one of those areas where we need to ensure that even though the data itself resides on a client, maybe across a national boundary, you cannot recreate it coming back across. So I’m expecting to see a lot more on that in the years ahead.

[31:57] Yeah, I agree with that. Differential privacy and this sort of aggregation could be a topic in and of itself, and maybe we’ll have a follow-up episode on that… Just a few sort of very brief phrases about differential privacy is that – so if we’re thinking about phones and them contributing their data, or edge devices, it limits how much any single contribution from a phone can contribute to the overall changes in the model… And in that way, the model isn’t sort of overly skewed or memorizing results from a single device, which could lead to reconstruction of rare private data…

And also, that noise is added to that sort of rare data. All of those ideas are kind of – if you wanna learn more about that, you might look up more on differential privacy.
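A rough sketch of those two ideas - clipping how much any single device can move the model, then adding noise to the aggregate - might look like this. Real differentially private training calibrates the clipping norm and noise to a formal privacy budget, so the constants here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_update(delta, max_norm=1.0):
    """Bound one client's influence by clipping its update to a maximum L2 norm."""
    norm = np.linalg.norm(delta)
    return delta * min(1.0, max_norm / norm) if norm > 0 else delta

def private_aggregate(deltas, max_norm=1.0, noise_std=0.1):
    clipped = [clip_update(d, max_norm) for d in deltas]
    avg = np.mean(clipped, axis=0)
    # Add Gaussian noise so the average doesn't memorize any one rare update.
    return avg + rng.normal(scale=noise_std, size=avg.shape)

deltas = [rng.normal(size=4) for _ in range(100)]
print(private_aggregate(deltas))
```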

Yeah. I’m speculating that more diversity in your dataset will help protect you from that as well… Because it is an averaging function. So if your core data that you’re training on is close to the average, if the data that’s coming back from all of your client devices is very similar, that obviously is something that would have to be addressed, in that sense.

So when I was looking up how I might go about implementing some federated learning, I was curious as to the state of the various frameworks and tools that you can use to actually achieve this process… I was pleasantly surprised with – of course, I haven’t run a real experiment with millions of devices; maybe that is in my future, it’d definitely be a fun experiment… But it looks like, for example, one of these frameworks is TensorFlow Federated. TensorFlow has an open source framework for doing this computation on decentralized data, and some of what you have to do to your model to enable that is kind of wrap some of your model definitions in classes and helper functions that are provided by the TensorFlow Federated framework… But you do get to sort of preserve your Keras model which you love. You have your Keras model and you kind of wrap things around it to use it in this federated way, which seems like a nice approach. You don’t have to throw out everything that you did with Keras that you love. You can kind of wrap it and use it from there.
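As a rough idea of what that wrapping looks like, here is a sketch along the lines of the TensorFlow Federated tutorials, with simulated client datasets. Exact function names have shifted between TFF versions, so treat this as the general shape rather than a copy-paste recipe:

```python
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

# Simulated "clients": each one is just a tf.data.Dataset over a small local shard.
def make_client_dataset(seed):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(32, 784)).astype("float32")
    y = rng.integers(0, 10, size=(32, 1)).astype("int32")
    return tf.data.Dataset.from_tensor_slices((x, y)).batch(8)

federated_train_data = [make_client_dataset(i) for i in range(3)]

def model_fn():
    # Your ordinary Keras model, wrapped so TFF can use it in federated training.
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
    ])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=federated_train_data[0].element_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
    )

# Build the federated averaging process and run a few simulated rounds.
process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
)
state = process.initialize()
for round_num in range(5):
    state, metrics = process.next(state, federated_train_data)
    print(f"round {round_num}: {metrics}")
```

The nice part, as mentioned above, is that the Keras model itself is untouched; the wrapping only tells TFF how to feed it data and measure loss during the federated rounds.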

There’s a bunch of other organizations, large organizations that are really contributing to open source frameworks. Intel has their open federated learning framework, and Facebook was involved in PySyft and PyGrid, which I mentioned before… PyGrid was the methodology that I talked through before. And then there’s some other ones too, like Flower, which is a friendly federated learning framework, which is nice, and lots of F’s in there… [laughs]

Very important.

That looked really cool. And there’s other ones, too. I’m sorry if I’m leaving out your favorite one for those listening out there…

No, no. I guess at this point I think we’re really to a point where I’m really curious to see if anyone in our audience is actually doing this. I know that the teams at my employer are now into federated learning, and we’d love to hear from folks about who is engaging in it. Have you guys done any federated learning at the non-profit that you work at yet? Or have you not yet had any need to?

We haven’t, although I do wonder about that, because one of our use cases is translations… So we have over 1,000 translation projects going on around the world, and part of what I’m working on is augmented quality assessment types of tools for those translations. That very much fits in this framework where there’s a centralized set of models that are maybe used on all of these client devices, and could be improved by data that can be gathered on those client devices. But also, these are people working all around the world on their own translation stuff that oftentimes includes their own copyright restrictions, for example, where they might not be able to share that translation data in certain contexts or otherwise. So there’s rights holders, and copyright information associated with all those translations… It’s definitely got me thinking along those lines.

[36:29] That’s a fascinating use case right there. You could ask the users’ permission to be able to do it, because as you pointed out earlier, you’re also using some of the power that’s available. And if you’re gonna do this federated learning, then you’re gonna be training on the device, and if their device is capable of doing that, then you’re presumably draining the battery faster… But you also have these other ancillary issues that the end user may or may not know. So have you put any thought into how you might address that? Is there any way of evaluating that?

To be honest, I don’t know. My first thought is that this can be a little bit tricky, because for so long people have been exposed to messages that pop up on their device, that say “Share data with us and we’ll make your experience better.” That’s been a sort of common thing when you accept terms of service, or when you get a new phone, or a new phone service; you’re like “Share data back to us about your network usage, and we can make the network better for everybody.” And that’s really ingrained in people’s minds… So if you try to put some messaging in an app around that, that’s probably what people are gonna assume at first, like “Oh, they’re collecting our data. I don’t wanna share my data.”

So it’s very interesting, how and how much do you share with the end user, and what is the phrasing around that to help them understand what is actually happening? Yes, you are sharing something with a centralized server, but in a differentially private way, and it may suck away some of your battery, or run certain things on your device that you weren’t running before, so maybe there’s battery issues and other things… But I think that there’s a lot of assumptions that we’ll have to overcome as we do that, because people are so used to the fact of like “Oh, everybody’s just gathering my data. And when I see one of these pop-ups, it’s asking me for data, and I don’t wanna share my data.”

It makes for a fascinating consideration to have to try to overcome and mitigate. I work in the defense industry, and governments that are interested - they kind of control their entire environment. It may be contested by another government, so to speak, but at the end of the day, any given government that might be interested in that - they are running both the central server, and they also own those endpoints, those edge devices, whatever those are. And so there is potentially much less to have to consider from a user rights standpoint, a privacy standpoint. So it becomes just a logistical thing, to some degree.

Fascinating consideration, and I think that your use case is much closer to what most end users would have to deal with. They’re gonna have customers, and they’re gonna have user communities they’re serving, so…

Yeah. And there have been examples of successful uses of this across industry, where people have at least started navigating these concerns. Google, of course, I mentioned they were investigating this even a few years ago or more… And they’ve shown various actual real-world applications of this in mobile keyboard development and auto-complete prediction type stuff, voice and audio data being used to improve things like Google Assistant… Also, other tech giants you might expect are investigating this as well. Facebook is sort of rebuilding – my understanding at least, from public articles, they’re rebuilding some of their ad infrastructure and models to do things in a more decentralized way, with federated learning.

[40:08] One that I think is cool, which is dominating the actual applications that I’ve seen, is related to healthcare - I don’t know if you’ve seen some of these, Chris, but one I was reading about in a Nature article from Harvard Medical School, where they were actually predicting sort of clinical outcomes of patients with COVID using federated learning. So they had something running, presumably on patient devices, or at least clinic devices, preserving the privacy around the patients’ actual health data, while also maybe providing some predicted outcomes to doctors to help them augment maybe their treatments, and that sort of thing. So yeah, that was a really interesting example that I ran across.

Another one that I’ve run across is predictive maintenance. You have all of these different types, both in the civilian world and in the military world - all these vehicles out there, all these devices, factory machines - and they do have their own datasets there. And even if privacy is not a concern, just logistically being able to benefit from that diversity of things out there, that you can then train on, and train toward that… I think it lends itself very naturally to federated learning.

So towards the end of each of these Fully Connected episodes we always like to leave you with a few different learning resources. One of the things we will do is include in the show notes links to all of these different federated learning frameworks and open source projects that we’ve mentioned, like TensorFlow Federated, and the Intel Open Federated Learning, so that you can go there and actually get hands-on and try out a few things.

However, one of the things that I think is really great - you know, if this is a new topic to you and you just wanna think about it a little bit more, and its implications, Google put out this federated learning comic, which is really good at sort of leading you through both the motivation of federated learning, how it works, maybe some concerns or questions that come out of that… And you’ll see some of the themes that we’ve talked about in this episode represented in that comic, which is a great way, a sort of visual and fun way to get introduced to federated learning. We’ll include the link in our show notes.

What about you, Chris? What’s some of the stuff that you were looking at?

You know, one of the websites that we have talked about with various learning resources over many episodes is one called Towards Data Science. Towards Data Science had a good tutorial for stepping into federated learning with TensorFlow; it’s called “Federated Learning: A Step by Step Implementation in TensorFlow.” It was from April 10th of 2020, so about a year and a half ago, and it’s a really good introduction into the basics of it, and doing kind of a toy network to try it out. So that was a good one.

And then I’ll mention this maybe – almost funny to say that, but go to the federated learning page on Wikipedia. It has quite a lot of information there. It’s probably not where I would start if you’re not familiar with federated learning; it’s not the first place. But after you’ve read a few other things and maybe gone through some of the other resources, it may throw some terms at you that you missed in certain areas. You can say “Oh, what is this? What is this?” So I thought I’d mention those two - the Towards Data Science one is a good beginner resource, and then after you’ve gotten a little under your belt, Wikipedia.

Awesome. Well, we hope that our listeners will explore this, as Chris said. We’re interested to hear how you are exploring this topic. Connect with us in our Slack community. You can find that at Changelog.com/community. We love to hear what you’re working on, and what you would like us to be talking about on the podcast. We really do value your input.

You can also find us on LinkedIn or on Twitter as well, so keep in contact. Let us know what you’re learning about as related to federated learning and other things. Thanks for the conversation today, Chris. It was fun.

I enjoyed it. Thanks a lot, Daniel.


Our transcripts are open source on GitHub. Improvements are welcome. 💚
