Fair detection of poisoning attacks in federated learning

Federated learning is a decentralized machine learning technique that aggregates partial models trained by a set of clients on their own private data to obtain a global model. This technique is vulnerable to security attacks such as model poisoning, whereby malicious clients submit bad updates in order to prevent the model from converging or to introduce artificial bias into the classification. Applying anti-poisoning techniques may lead to discrimination against minority groups whose data are significantly and legitimately different from those of the majority of clients. In this work, we strive to strike a balance between fighting poisoning and accommodating diversity, in order to learn fairer and less discriminatory federated learning models. In this way, we forestall the exclusion of diverse clients while still ensuring detection of poisoning attacks. Empirical work on a standard machine learning dataset shows that employing our approach to distinguish legitimate from malicious updates produces models that are more accurate than those obtained with standard poisoning detection techniques.
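The paper itself sits behind the login-protected link below, so the fairness-aware detection rule it proposes is not reproduced here. As a rough illustration of the setting the abstract describes, the sketch below combines plain federated averaging with a generic distance-based filter that discards outlying client updates; the function name, the threshold parameter tau, and the filtering rule are illustrative assumptions, not the authors' method. Notably, exactly this kind of blunt outlier filtering can wrongly exclude legitimate but atypical minority clients, which is the problem the paper addresses.

```python
import numpy as np

def aggregate_with_filtering(updates, tau=2.0):
    """Federated averaging with a simple distance-based poisoning filter.

    `updates` is a list of 1-D numpy arrays, one model update per client.
    An update is kept only if its distance to the coordinate-wise median
    of all updates is within `tau` times the median of those distances.
    This is a generic robust-aggregation heuristic, NOT the fairness-aware
    method proposed in the paper; it illustrates the kind of detector that
    can also filter out legitimate-but-different minority clients.
    """
    stacked = np.stack(updates)                       # shape: (clients, params)
    center = np.median(stacked, axis=0)               # robust estimate of the "typical" update
    dists = np.linalg.norm(stacked - center, axis=1)  # each client's deviation from it
    cutoff = tau * np.median(dists)                   # threshold scaled to the cohort
    kept = stacked[dists <= cutoff]                   # drop suspected poisoned updates
    return kept.mean(axis=0)                          # plain FedAvg over the survivors

# Toy example: eight honest clients plus two crude poisoners.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=100) for _ in range(8)]
poisoned = [np.full(100, 5.0) for _ in range(2)]
global_update = aggregate_with_filtering(honest + poisoned)
```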

Data and Resources
  • Link to Publication (PDF)

Additional Info
Author: Rebollo-Monedero, David
Author: Sánchez, David
Author: Domingo-Ferrer, Josep (josep.domingo@urv.cat)
Author: Blanco-Justicia, Alberto
Author: Khandpur Singh, Ashneet
DOI: 10.1109/ICTAI50040.2020.00044
Publisher: IEEE
Source: 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI)
Thematic Cluster: Privacy Enhancing Technology [PET]
system:type: ConferencePaper
Management Info
Author: Wright, Joanna
Maintainer: Jesus Manjon
Version: 1
Last Updated: 29 April 2021, 11:20 (CEST)
Created: 29 April 2021, 11:20 (CEST)