

Centre Interdisciplinaire
de Recherche et d’Innovation
en Cybersécurité et Société
1.
Nosrati, S.; Motaghi, H.
The AI Complacency Model: Integrating Bounded Rationality and Information Processing (Proceedings Article)
In: Am. Conf. Inf. Syst., AMCIS, pp. 4590–4599, Association for Information Systems, 2025, ISBN: 979-833132774-3.
Abstract | Links | BibTeX | Tags: AI complacency, AI systems, Artificial intelligence, Behavioral research, Bounded rationality, Cognitive bias, cognitive biases, decision-making, Decisions makings, Heuristic processing, Human computer interaction, Human-AI interaction, information processing, Information systems, Information use, Perceived AI reliability, Reliability theory, Systematic processing, Vigilance
@inproceedings{nosrati_ai_2025,
title = {The AI Complacency Model: Integrating Bounded Rationality and Information Processing},
author = {S. Nosrati and H. Motaghi},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105025349267&partnerID=40&md5=14d676bedd3ea18921ed24a830889a8b},
isbn = {979-833132774-3},
year = {2025},
date = {2025-01-01},
booktitle = {Am. Conf. Inf. Syst., AMCIS},
volume = {7},
pages = {4590–4599},
publisher = {Association for Information Systems},
abstract = {This study addresses a critical gap in understanding why users exhibit reduced oversight when interacting with generative AI systems despite their known limitations. While existing research documents AI errors across domains, theoretical frameworks explaining the underlying psychological mechanisms remain underdeveloped. We propose a comprehensive model of AI complacency that extends beyond traditional automation complacency theories by integrating bounded rationality constraints with dual-process information processing models. Our framework demonstrates how Perceived AI Reliability triggers a shift from systematic to heuristic information processing, subsequently reducing vigilance and impairing task performance. The relationship between Perceived AI Reliability and information processing is moderated by three key factors derived from bounded rationality theory: knowledge limitations, cognitive processing capabilities, and time constraints. The proposed conceptual model of this study contributes to the literature by identifying unique psychological mechanisms underlying AI complacency, explaining the processing shifts in human-AI interaction, positioning vigilance as a critical mediating mechanism, and introducing the Vigilance-Reliability Matrix as a diagnostic tool for identifying different interaction patterns. This framework offers both theoretical insights and practical guidance for maintaining appropriate human oversight as AI systems become increasingly sophisticated and widespread across domains. Copyright © 2025 by Association for Information Systems (AIS). All rights reserved.},
keywords = {AI complacency, AI systems, Artificial intelligence, Behavioral research, Bounded rationality, Cognitive bias, cognitive biases, decision-making, Decisions makings, Heuristic processing, Human computer interaction, Human-AI interaction, information processing, Information systems, Information use, Perceived AI reliability, Reliability theory, Systematic processing, Vigilance},
pubstate = {published},
tppubtype = {inproceedings}
}
This study addresses a critical gap in understanding why users exhibit reduced oversight when interacting with generative AI systems despite their known limitations. While existing research documents AI errors across domains, theoretical frameworks explaining the underlying psychological mechanisms remain underdeveloped. We propose a comprehensive model of AI complacency that extends beyond traditional automation complacency theories by integrating bounded rationality constraints with dual-process information processing models. Our framework demonstrates how Perceived AI Reliability triggers a shift from systematic to heuristic information processing, subsequently reducing vigilance and impairing task performance. The relationship between Perceived AI Reliability and information processing is moderated by three key factors derived from bounded rationality theory: knowledge limitations, cognitive processing capabilities, and time constraints. The proposed conceptual model of this study contributes to the literature by identifying unique psychological mechanisms underlying AI complacency, explaining the processing shifts in human-AI interaction, positioning vigilance as a critical mediating mechanism, and introducing the Vigilance-Reliability Matrix as a diagnostic tool for identifying different interaction patterns. This framework offers both theoretical insights and practical guidance for maintaining appropriate human oversight as AI systems become increasingly sophisticated and widespread across domains. Copyright © 2025 by Association for Information Systems (AIS). All rights reserved.
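
The abstract introduces a Vigilance-Reliability Matrix as a diagnostic tool for interaction patterns but does not spell out its quadrants here. As a purely illustrative sketch, one way to read such a matrix is as a two-by-two classification over perceived AI reliability and user vigilance; the quadrant labels, thresholds, and function below are assumptions for illustration, not taken from the paper.

def classify_interaction(perceived_reliability, vigilance, threshold=0.5):
    """Illustrative 2x2 reading of a vigilance/reliability matrix.
    Inputs are assumed to be scores in [0, 1]; labels are hypothetical."""
    high_rel = perceived_reliability >= threshold
    high_vig = vigilance >= threshold
    if high_rel and high_vig:
        return "calibrated reliance"   # hypothetical quadrant label
    if high_rel and not high_vig:
        return "complacency risk"      # hypothetical quadrant label
    if not high_rel and high_vig:
        return "guarded scrutiny"      # hypothetical quadrant label
    return "disengaged use"            # hypothetical quadrant label

# Example: a highly trusted system paired with low user vigilance
# falls in the quadrant the model associates with complacency.
print(classify_interaction(0.9, 0.2))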



