Label noise analysis meets adversarial training: A defense against label poisoning in federated learning
Author ORCID Identifier
Ehsan Hallaji: https://orcid.org/0000-0002-9956-4003
Document Type
Article
Publication Date
2023
Publication Title
Knowledge-Based Systems
Volume
266
First Page
110384
Keywords
Noisy labels, Federated learning, Intrusion detection systems, Label poisoning attacks, Deep learning, Adversarial training
Abstract
Data decentralization and privacy constraints in federated learning systems withhold user data from the server. Intruders can exploit this privacy feature by corrupting the federated network with forged updates computed on malicious data. This paper proposes a defense mechanism based on adversarial training and label noise analysis to address this problem. Specifically, we design a generative adversarial scheme for vaccinating local models by injecting them with artificially generated label noise that resembles backdoor and label flipping attacks. From the perspective of label noise analysis, all poisoned labels can be generated through three different mechanisms. We demonstrate how backdoor and label flipping attacks correspond to each of these noise mechanisms and account for all of them in the introduced design. In addition, we propose equipping the client models with noisy-label classifiers. The combination of these two mechanisms enables the model to learn possible noise distributions, which neutralizes the effect of corrupted updates generated by malicious activity. Moreover, this work conducts a comparative study of state-of-the-art deep noisy-label classifiers. The designed framework and selected methods are evaluated for intrusion detection on two Internet of Things (IoT) networks. The results indicate the effectiveness of the proposed approach.
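To illustrate the kind of artificial label noise the abstract describes, the sketch below injects uniform label flipping noise into a client's labels, i.e., the "completely at random" case among the noise mechanisms studied in the label noise literature. This is a minimal illustration, not the paper's implementation; the function name, signature, and parameters are hypothetical.

```python
import random

def flip_labels(labels, num_classes, flip_rate, seed=0):
    """Simulate a label flipping attack / noise injection.

    Each label is replaced, with probability ``flip_rate``, by a
    class drawn uniformly from the other classes (hypothetical
    helper; uniform noise is only one of the mechanisms the
    paper considers).
    """
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < flip_rate:
            # Pick any class other than the true one.
            candidates = [c for c in range(num_classes) if c != y]
            noisy.append(rng.choice(candidates))
        else:
            noisy.append(y)
    return noisy
```

During "vaccination", such synthetically corrupted labels would be mixed into local training so the client's noisy-label classifier learns to tolerate the corresponding noise distribution.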
DOI
10.1016/j.knosys.2023.110384
ISSN
0950-7051
E-ISSN
1872-7409
Recommended Citation
Hallaji, Ehsan; Razavi-Far, Roozbeh; Saif, Mehrdad; and Herrera-Viedma, Enrique. (2023). Label noise analysis meets adversarial training: A defense against label poisoning in federated learning. Knowledge-Based Systems, 266, 110384.
https://scholar.uwindsor.ca/electricalengpub/187