FLPurifier: Backdoor Defense in Federated Learning via Decoupled Contrastive Training
IEEE Transactions on Information Forensics and Security (IF 6.8), Pub Date: 2024-04-04, DOI: 10.1109/tifs.2024.3384846
Jiale Zhang, Chengcheng Zhu, Xiaobing Sun, Chunpeng Ge, Bing Chen, Willy Susilo, Shui Yu

Recent studies have demonstrated that backdoor attacks pose a significant security threat to federated learning. Existing defense methods mainly focus on detecting or eliminating backdoor patterns after the model has already been backdoored. However, these methods either degrade model performance or rely heavily on impractical assumptions, such as access to labeled clean data, and therefore exhibit limited effectiveness in federated learning. To this end, we propose FLPurifier, a novel backdoor defense method for federated learning that can effectively purify possible backdoor attributes before federated aggregation. Specifically, FLPurifier splits a complete model into a feature extractor and a classifier, where the extractor is trained in a decoupled contrastive manner to break the strong correlation between trigger features and the target label. Compared with existing backdoor mitigation methods, FLPurifier does not rely on impractical assumptions, since it purifies backdoor effects during the training process rather than from an already trained model. Moreover, to reduce the negative impact of backdoored classifiers and improve global model accuracy, we further design an adaptive classifier aggregation strategy that dynamically adjusts the weight coefficients. Extensive experimental evaluations on six benchmark datasets demonstrate that FLPurifier is effective against known backdoor attacks in federated learning with negligible performance degradation and outperforms state-of-the-art defense methods.
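To make the two key ideas in the abstract concrete, the sketch below illustrates (a) a supervised contrastive loss on extractor features, which weakens the trigger-to-target-label shortcut by pulling same-class features together regardless of triggers, and (b) an adaptive weighting of client classifiers for aggregation. This is a minimal illustrative sketch in NumPy, not the paper's exact formulation: the function names, the simplified loss, and the softmax-over-credibility-scores weighting are assumptions made for clarity.

```python
import numpy as np

def decoupled_contrastive_loss(z, labels, tau=0.5):
    """Simplified supervised contrastive loss over a batch of features z.
    Positives are samples sharing a label; each anchor's similarity to
    itself is excluded. (Stand-in for the paper's decoupled objective.)"""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise features
    sim = z @ z.T / tau                                # pairwise cosine / tau
    n = len(labels)
    exp_sim = np.exp(sim)
    exp_sim[np.eye(n, dtype=bool)] = 0.0               # drop self-similarity
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    losses = []
    for i in range(n):
        if pos[i].any():
            # -log of probability mass assigned to same-class samples
            losses.append(-np.log(exp_sim[i][pos[i]].sum() / exp_sim[i].sum()))
    return float(np.mean(losses))

def adaptive_classifier_weights(credibility):
    """Hypothetical aggregation rule: softmax over per-client credibility
    scores yields the weight coefficients for classifier aggregation."""
    e = np.exp(credibility - np.max(credibility))      # stable softmax
    return e / e.sum()

# Toy usage: 4 feature vectors from two classes, 3 clients to aggregate.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
loss = decoupled_contrastive_loss(feats, np.array([0, 0, 1, 1]))
w = adaptive_classifier_weights(np.array([2.0, 0.5, 1.0]))
```

In this toy setting the loss is non-negative (it is a negative log-probability), and the aggregation weights sum to one, with the most credible client receiving the largest coefficient.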
