A Study of Federated Learning with Threshold Secure Aggregation
Date
2024
Abstract
Federated learning is a decentralized, privacy-preserving mechanism that allows multiple clients to collaborate without exchanging their datasets: clients jointly train a model by uploading only their gradients. However, recent research has shown that attackers can reconstruct the original training data from clients' gradients, undermining the security of federated learning. Consequently, a growing number of studies apply various techniques to protect gradients, one of the most common being secret sharing. In previous work that uses secret sharing to protect gradient privacy, however, the loss of even a single share or the failure of a single server makes the original gradient unrecoverable, interrupting the operation of federated learning.
In this paper, we propose a gradient-aggregation method for federated learning that incorporates additive secret sharing, so that attackers cannot easily obtain clients' original gradients. In addition, the proposed method guarantees, with a certain probability, that federated learning continues unaffected even if a server fails or some gradient shares are lost. We also introduce a membership-tier system, under which members of different tiers ultimately receive models of different accuracy.
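As a concrete illustration of the underlying primitive, the sketch below shows plain n-of-n additive secret sharing of a gradient vector: each client splits its gradient into random shares that sum to the fixed-point-encoded gradient, each server aggregates the shares it receives, and combining the per-server sums reveals only the aggregate gradient. The ring modulus, fixed-point scale, and three-server setup are illustrative assumptions, not the thesis's exact protocol; note that this basic scheme requires every share for reconstruction, which is precisely the single point of failure the proposed method mitigates.

```python
# A minimal sketch of additive secret sharing for gradient aggregation.
# MOD, SCALE, and the server count are assumed parameters for illustration;
# they are not taken from the thesis.
import secrets

import numpy as np

MOD = 2**32    # shares live in the ring Z_MOD
SCALE = 2**16  # fixed-point scale for encoding float gradients

def share(gradient: np.ndarray, n_servers: int) -> list[np.ndarray]:
    """Split a float gradient vector into n additive shares mod MOD."""
    encoded = np.round(gradient * SCALE).astype(np.int64) % MOD
    shares = [np.array([secrets.randbelow(MOD) for _ in encoded],
                       dtype=np.int64) for _ in range(n_servers - 1)]
    last = (encoded - sum(shares)) % MOD  # final share makes the sum correct
    return shares + [last]

def reconstruct(shares: list[np.ndarray]) -> np.ndarray:
    """Recombine all shares and decode back to floats."""
    total = sum(shares) % MOD
    signed = np.where(total >= MOD // 2, total - MOD, total)  # undo wraparound
    return signed.astype(np.float64) / SCALE

# Each client shares its gradient across three servers; each server sums the
# shares it holds, so combining the per-server sums exposes only the aggregate.
g1, g2 = np.array([0.5, -1.25]), np.array([1.0, 0.75])
s1, s2 = share(g1, 3), share(g2, 3)
server_sums = [(a + b) % MOD for a, b in zip(s1, s2)]
print(reconstruct(server_sums))  # ~ [1.5, -0.5] == g1 + g2
```

Because reconstruction here needs the sums from all three servers, losing any one of them stalls aggregation; the thesis's contribution is tolerating such losses with a stated probability.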
Keywords
Federated Learning, Secret Sharing