Date of Award

1-31-2024

Publication Type

Thesis

Degree Name

M.Sc.

Department

Computer Science

Keywords

Fairness in Artificial Intelligence, Machine Learning

Supervisor

Hossein Fani

Rights

info:eu-repo/semantics/openAccess

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Abstract

Team formation aims to form a collaborative group of experts to accomplish complex tasks and is a well-recognized objective in industry. While state-of-the-art neural team formation models can efficiently analyze massive pools of candidate experts to form effective collaborative teams, they overlook fairness. In this work, we adopt state-of-the-art probabilistic and deterministic greedy reranking algorithms to achieve fairness with respect to (1) popularity or (2) gender in neural models, under two notions of fairness: demographic parity and equality of opportunity. Specifically, we ensure a minimum representation of experts from the disadvantaged (nonpopular or female) group by reranking the neural model’s ranked list of recommended experts. Our experiments on two large-scale benchmark datasets yield three key findings: (i) neural team formation models suffer severely from biases toward popular and male experts; (ii) probabilistic greedy reranking algorithms can substantially mitigate such biases while maintaining teams’ efficacy; (iii) in the presence of extreme bias, e.g., 95% male vs. 5% female experts in the training data, post-processing reranking alone falls short, calling for its integration with pre-processing and in-processing debiasing techniques.
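To illustrate the post-processing idea the abstract describes, the following is a minimal sketch of a deterministic greedy reranker that enforces a minimum representation of a protected group (e.g., nonpopular or female experts) at every prefix of the top-k list. The function name, the prefix constraint `ceil(p * i)`, and the input conventions are assumptions for illustration, not the thesis's exact implementation; the thesis also studies a probabilistic variant not shown here.

```python
import math

def rerank_fair(experts, is_protected, k, p):
    """Rebuild the top-k list so every prefix of length i contains at
    least ceil(p * i) protected experts (a hypothetical sketch of a
    deterministic greedy reranking strategy).

    experts      -- candidate experts, sorted by the neural model's
                    relevance score, best first
    is_protected -- predicate marking the disadvantaged group
    k            -- size of the reranked list
    p            -- minimum representation ratio for the protected group
    """
    protected = [e for e in experts if is_protected(e)]
    others = [e for e in experts if not is_protected(e)]
    reranked, n_prot = [], 0
    for i in range(1, k + 1):
        need = math.ceil(p * i)  # protected experts required in prefix i
        if n_prot < need and protected:
            reranked.append(protected.pop(0))  # best remaining protected
            n_prot += 1
        elif others:
            reranked.append(others.pop(0))     # best remaining overall
        elif protected:  # non-protected pool exhausted; fall back
            reranked.append(protected.pop(0))
            n_prot += 1
    return reranked

# Example: a model-ranked list dominated by male experts ('m*'),
# reranked to guarantee at least 50% female experts ('f*') per prefix.
ranked = ["m1", "m2", "m3", "f1", "m4", "f2"]
top4 = rerank_fair(ranked, lambda e: e.startswith("f"), k=4, p=0.5)
# → ['f1', 'm1', 'f2', 'm2']
```

Note that within each group the model's original relevance order is preserved, which is what lets reranking mitigate bias while largely maintaining team efficacy; when the disadvantaged group is extremely underrepresented in training (e.g., 5% female), the protected pool itself is too small or too poorly scored for this post hoc step to compensate.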
