Abstract
Algorithmic decision-making using rankings, prevalent in areas from hiring and bail to university admissions, raises concerns of potential bias. In this paper, we explore the alignment between people's perceptions of fairness and two popular fairness metrics designed for rankings. In a crowdsourced experiment with 480 participants, people rated the perceived fairness of a hypothetical scholarship distribution scenario. Results suggest a strong inclination towards relying on explicit score values. There is also evidence of a preference for one fairness metric, NDKL, over the other, ARP. Qualitative results paint a more complex picture: some participants endorse meritocratic award schemes and express concerns about fairness metrics being used to modify rankings, while others acknowledge socio-economic factors in score-based rankings as justification for adjusting them. In summary, we find that operationalizing algorithmic fairness in practice is a balancing act between mitigating harms towards marginalized groups and societal conventions of leveraging traditional performance scores, such as grades, in decision-making contexts.
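For context, NDKL (Normalized Discounted KL-divergence) is a prefix-based ranking fairness metric: at each ranking prefix it compares the group distribution of the top-i items against a desired distribution, with a logarithmic position discount, so unfairness near the top weighs more. The sketch below is our own minimal illustration of this idea, not code from the paper; the function names and the smoothing constant are assumptions.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between discrete distributions p and q (eps avoids log(0))."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def ndkl(ranking, desired, groups):
    """Normalized Discounted KL-divergence of a ranking (illustrative sketch).

    ranking: list of group labels, one per ranked position (top first)
    desired: dict mapping group label -> desired proportion
    groups:  ordered list of group labels
    Returns 0 when every prefix matches `desired` exactly;
    larger values indicate greater unfairness.
    """
    q = [desired[g] for g in groups]
    counts = {g: 0 for g in groups}
    total, z = 0.0, 0.0
    for i, g in enumerate(ranking, start=1):
        counts[g] += 1
        p = [counts[h] / i for h in groups]   # group distribution of top-i prefix
        w = 1.0 / math.log2(i + 1)            # position discount: top ranks weigh more
        total += w * kl_divergence(p, q)
        z += w
    return total / z                          # normalize by the sum of discounts
```

For example, with a desired 50/50 split, a ranking that alternates groups (`["A","B","A","B"]`) scores lower (fairer) than one that front-loads a single group (`["A","A","B","B"]`).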
BibTeX
@inproceedings{alkhathlan2024balancing,
  title     = {Balancing Act: Evaluating People's Perceptions of Fair Ranking Metrics},
  author    = {Alkhathlan, Mallak and Cachel, Kathleen and Shrestha, Hilson and Harrison, Lane and Rundensteiner, Elke},
  booktitle = {Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency},
  pages     = {1940--1970},
  year      = {2024}
}