---
title: Debiasing Model Updates for Improving Personalized Federated Training
abstract: We propose a novel method for federated learning that is customized specifically to the objective of a given edge device. In our proposed method, a server trains a global meta-model by collaborating with devices without actually sharing data. The trained global meta-model is then personalized locally by each device to meet its specific objective. Different from the conventional federated learning setting, training customized models for each device is hindered by both the inherent data biases of the various devices, as well as the requirements imposed by the federated architecture. We propose gradient correction methods leveraging prior works, and explicitly de-bias the meta-model in the distributed heterogeneous data setting to learn personalized device models. We present convergence guarantees of our method for strongly convex, convex and nonconvex meta objectives. We empirically evaluate the performance of our method on benchmark datasets and demonstrate significant communication savings.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: acar21a
month: 0
tex_title: Debiasing Model Updates for Improving Personalized Federated Training
firstpage: 21
lastpage: 31
page: 21-31
order: 21
cycles: false
bibtex_author: Acar, Durmus Alp Emre and Zhao, Yue and Zhu, Ruizhao and Matas, Ramon and Mattina, Matthew and Whatmough, Paul and Saligrama, Venkatesh
author:
- given: Durmus Alp Emre
  family: Acar
- given: Yue
  family: Zhao
- given: Ruizhao
  family: Zhu
- given: Ramon
  family: Matas
- given: Matthew
  family: Mattina
- given: Paul
  family: Whatmough
- given: Venkatesh
  family: Saligrama
date: 2021-07-01
address:
container-title: Proceedings of the 38th International Conference on Machine Learning
volume: '139'
genre: inproceedings
issued:
  date-parts:
  - 2021
  - 7
  - 1
pdf:
extras:
---