

Centre Interdisciplinaire
de Recherche et d’Innovation
en Cybersécurité et Société
1.
Damadi, M. S.; Davoust, A.
Fairness in Socio-Technical Systems: A Case Study of Wikipedia Journal Article
In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 14199 LNCS, pp. 84–100, 2023, ISSN: 0302-9743, (ISBN: 9783031421402, Publisher: Springer Science and Business Media Deutschland GmbH).
Abstract | Links | BibTeX | Tags: Algorithmics, Bias, Case-studies, Causal relationships, Cultural bias, Fairness, Gender bias, Machine learning, Machine-learning, Parallel processing systems, Sociotechnical systems, Wikipedia
@article{damadi_fairness_2023,
title = {Fairness in Socio-Technical Systems: A Case Study of Wikipedia},
author = {M. S. Damadi and A. Davoust},
editor = {C. Alvarez and D. M. Marutschke and H. Takada},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85172720004&doi=10.1007%2f978-3-031-42141-9_6&partnerID=40&md5=172c8c6ae5b09536efdf983e9be965e7},
doi = {10.1007/978-3-031-42141-9_6},
issn = {0302-9743},
year = {2023},
date = {2023-01-01},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
volume = {14199 LNCS},
pages = {84--100},
abstract = {Wikipedia content is produced by a complex socio-technical system (STS) and exhibits numerous biases, such as gender and cultural biases. We investigate how these biases relate to the concepts of algorithmic bias and fairness defined in the context of algorithmic systems. We systematically review 75 papers describing different types of bias in Wikipedia, which we classify and relate to established notions of harm and normative expectations of fairness as defined for machine-learning-driven algorithmic systems. In addition, by analysing causal relationships between the observed phenomena, we demonstrate the complexity of the socio-technical processes causing harm. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.},
note = {ISBN: 9783031421402
Publisher: Springer Science and Business Media Deutschland GmbH},
keywords = {Algorithmics, Bias, Case-studies, Causal relationships, Cultural bias, Fairness, Gender bias, Machine learning, Machine-learning, Parallel processing systems, Sociotechnical systems, Wikipedia},
pubstate = {published},
tppubtype = {article}
}