

Centre Interdisciplinaire
de Recherche et d’Innovation
en Cybersécurité et Société
1.
Davoust, A.; Rovatsos, M.
Social contracts for non-cooperative games (Proceedings Article)
In: AIES 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 43–49, Association for Computing Machinery, Inc, 2020, ISBN: 978-1-4503-7110-0.
Abstract | Links | BibTeX | Tags: Agent society, Agents, Behavioral research, Ethical aspects, Game theory, Game-theoretic, Moral philosophy, Noncooperative game, Selfish behaviours, Social benefits, Social contract, Social welfare
@inproceedings{davoust_social_2020,
title = {Social contracts for non-cooperative games},
author = {A. Davoust and M. Rovatsos},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85082175399&doi=10.1145%2f3375627.3375829&partnerID=40&md5=972ba2201a1c2450895935dc03ec39b9},
doi = {10.1145/3375627.3375829},
isbn = {978-1-4503-7110-0},
year = {2020},
date = {2020-01-01},
booktitle = {AIES 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society},
pages = {43--49},
publisher = {Association for Computing Machinery, Inc},
abstract = {In future agent societies, we might see AI systems engaging in selfish, calculated behavior, furthering their owners' interests instead of socially desirable outcomes. How can we promote morally sound behaviour in such settings, in order to obtain more desirable outcomes? A solution from moral philosophy is the concept of a social contract, a set of rules that people would voluntarily commit to in order to obtain better outcomes than those brought by anarchy. We adapt this concept to a game-theoretic setting, to systematically modify the payoffs of a non-cooperative game, so that agents will rationally pursue socially desirable outcomes. We show that for any game, a suitable social contract can be designed to produce an optimal outcome in terms of social welfare. We then investigate the limitations of applying this approach to alternative moral objectives, and establish that, for any alternative moral objective that is significantly different from social welfare, there are games for which no such social contract will be feasible that produces non-negligible social benefit compared to collective selfish behaviour. © 2020 Copyright held by the owner/author(s).},
keywords = {Agent society, Agents, Behavioral research, Ethical aspects, Game theory, Game-theoretic, Moral philosophy, Noncooperative game, Selfish behaviours, Social benefits, Social contract, Social welfare},
pubstate = {published},
tppubtype = {inproceedings}
}
In future agent societies, we might see AI systems engaging in selfish, calculated behavior, furthering their owners' interests instead of socially desirable outcomes. How can we promote morally sound behaviour in such settings, in order to obtain more desirable outcomes? A solution from moral philosophy is the concept of a social contract, a set of rules that people would voluntarily commit to in order to obtain better outcomes than those brought by anarchy. We adapt this concept to a game-theoretic setting, to systematically modify the payoffs of a non-cooperative game, so that agents will rationally pursue socially desirable outcomes. We show that for any game, a suitable social contract can be designed to produce an optimal outcome in terms of social welfare. We then investigate the limitations of applying this approach to alternative moral objectives, and establish that, for any alternative moral objective that is significantly different from social welfare, there are games for which no such social contract will be feasible that produces non-negligible social benefit compared to collective selfish behaviour. © 2020 Copyright held by the owner/author(s).
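To make the payoff-modification idea in the abstract concrete, below is a minimal Python sketch on the Prisoner's Dilemma. It is not the paper's construction: the game values, the fine size, and the helper names (is_nash, FINE) are illustrative assumptions. The idea it illustrates is the one the abstract states: pick the profile maximizing social welfare (the sum of payoffs), then penalize unilateral deviation from it until that profile is a Nash equilibrium.

# A sketch of the social-contract idea: modify a non-cooperative game's
# payoffs so that the welfare-optimal outcome becomes a Nash equilibrium.
C, D = 0, 1  # strategies: Cooperate, Defect
# Prisoner's Dilemma: profile (row, col) -> (row payoff, col payoff)
payoffs = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def is_nash(game, profile):
    # No player should gain by unilaterally switching strategies.
    for player in (0, 1):
        for alt in (C, D):
            deviation = list(profile)
            deviation[player] = alt
            if game[tuple(deviation)][player] > game[profile][player]:
                return False
    return True

# Step 1: the outcome that maximizes social welfare (sum of payoffs).
target = max(payoffs, key=lambda p: sum(payoffs[p]))
print(target, is_nash(payoffs, target))      # (0, 0) False: (C, C) is not stable

# Step 2: the "contract" fines any player who deviates from the target;
# a fine exceeding the maximum gain from deviating (here, 3 > 2) suffices.
FINE = 3
contracted = {
    profile: tuple(u - (FINE if profile[i] != target[i] else 0)
                   for i, u in enumerate(us))
    for profile, us in payoffs.items()
}
print(target, is_nash(contracted, target))   # (0, 0) True: cooperation is now rational

In this toy game a defector gains 2 by deviating from mutual cooperation (5 versus 3), so any fine above 2 makes the welfare-optimal profile an equilibrium, in line with the paper's claim that a suitable social contract exists for any game when the objective is social welfare.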