

de Recherche et d’Innovation
en Cybersécurité et Société
Welcome
Alan Davoust
Professor
Université du Québec en Outaouais (UQO)
Département d'informatique et d'ingénierie
Alan Davoust holds a Ph.D. in computer engineering (2015) from Carleton University in Ottawa. A regular professor at UQO since December 2018, recipient of an NSERC Discovery grant and principal investigator of an FRQSC-funded project on disinformation in Québec, he brings to our team his expertise on issues related to artificial intelligence, viewed from a socio-technical systems perspective.
Publication types included in the search:
AUT (Other), BRE (Patent), CAC (Refereed conference proceedings), CNA (Non-refereed presentation), COC (Contribution to an edited volume), COF (Refereed conference presentation), CRE, GRO, LIV (Book), RAC (Peer-reviewed journal article), RAP (Research report), RSC (Journal article without peer review).
Years: 1975 to 2026
Selected publications
2026
Damadi, M. S.; Davoust, A.: "Fairness in social machines: a systematic review". Journal article. In: Journal of Information, Communication and Ethics in Society, pp. 1–40, 2026, ISSN: 1477-996X.
Abstract: Purpose – The purpose of the paper is to provide a systematic review of biases in social machines to better understand the general problem of fairness in these systems. It aims to identify and categorize phenomena described as biases toward specific demographic groups, frame them normatively as harmful and relate them to established fairness concepts originally defined for algorithmic systems. Design/methodology/approach – The phenomenon of algorithmic bias refers to systematic biases against identifiable demographic groups that occur in automated decision systems. Such biases have mostly been studied in the context of black-box decision systems built using machine learning (ML). However, similar problems have also been reported in complex socio-technical systems such as Wikipedia and Airbnb, known more generally as social machines, where the observed biases cannot necessarily be attributed to specific automated decision systems. Instead, the biases may emerge as a result of complex processes involving numerous users and a computational infrastructure. To gain a better understanding of fairness in social machines, the authors select a representative sample of social machines from six distinct categories, and systematically review the literature reporting biases in these systems, covering 196 papers. The authors classify the reported bias phenomena, identify the affected demographic groups and relate the phenomena to established notions of harm from algorithmic fairness research. Finally, the authors identify the normative expectations of fairness associated with the different problems and discuss the applicability of existing criteria proposed for ML-driven decision systems. The analysis highlights the conceptual similarity of bias phenomena between algorithmic systems and social machines, allowing for a shared vocabulary to describe and compare phenomena across a broad class of systems. Findings – The paper identifies two key biases in social machines: representational harm, from underrepresentation or biased portrayal of disadvantaged groups, and allocative harm, from unfair decision processes, measurable via metrics like demographic parity. Gender bias is prevalent and easier to detect due to explicit markers, offering insights for identifying other biases. Unique biases arise from user categorizations, creating unintended discrimination linked to protected characteristics. These biases result from complex user interactions, not isolated algorithms. Addressing them requires redesigning social machines, focusing on computational infrastructure and interaction norms, such as visibility settings, to mitigate harmful outcomes. Originality/value – The paper's originality lies in its systematic review of biases in social machines, offering a novel perspective on fairness in these systems. Unlike prior studies focusing solely on algorithmic fairness, this work examines the broader socio-technical interactions within social machines, identifying biases that emerge from user interactions and design choices. By linking these biases to established fairness concepts like demographic parity and representational harm, the paper bridges the gap between algorithmic fairness and social dynamics. © 2025 Emerald Publishing Limited.
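The demographic-parity metric that the findings mention can be made concrete with a small sketch. Everything below (the function name and toy data) is invented for illustration and is not taken from the paper:

```python
# Demographic parity asks whether positive outcomes occur at similar
# rates across demographic groups; the gap between the best- and
# worst-treated group is a simple way to quantify the violation.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-outcome rate between any two groups.

    decisions: list of 0/1 outcomes; groups: parallel list of group labels.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy example: group "a" receives positive outcomes 75% of the time,
# group "b" only 25% of the time, so the gap is 0.5.
gap = demographic_parity_gap([1, 1, 1, 0, 1, 0, 0, 0],
                             ["a", "a", "a", "a", "b", "b", "b", "b"])
print(gap)  # 0.5
```

A gap of 0 would mean perfect demographic parity; the paper's point is that in social machines such disparities emerge from many users interacting, not from a single decision algorithm.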
2025
Souza, J. V.; Amamou, H.; Chen, R.; Salari, E.; Gubelmann, R.; Niklaus, C.; Serpa, T.; Lima, M. M. F.; Pinto, P. T.; Kshirsagar, S.; Davoust, A.; Handschuh, S.; Avila, A. R.: "Cross-Lingual Keyword Extraction for Pesticide Terminology in Brazilian Portuguese and English". Journal article. In: Journal of the Brazilian Computer Society, vol. 31, no. 1, pp. 973–990, 2025, ISSN: 0104-6500.
Abstract: Agriculture plays a crucial role in Brazil's economy. As the country intensifies its activities in the sector, the use of pesticides also increases. Hence, the risks associated with pesticide-laden food consumption have become a concern for chemistry researchers. An issue affecting regulatory standardization of pesticides in Brazil is the difficulty in translating pesticide names, particularly from English. For example, the word malathion can be translated from English to Portuguese as malatiom or malatião, resulting in inconsistent labeling. This issue extends to the broader problem of translating highly technical terms between languages, in particular for low-resource languages. In this work, we investigate terminological variation in the chemistry of organophosphorus pesticides. Our goal is to study strategies for domain-specific multilingual keyword extraction. To that end, two corpora were built based on pesticide-related scientific documents in Brazilian Portuguese and English, which led to a total of 84 and 210 texts, respectively, representing the low- and high-resource languages in this study. We then assessed 6 methods for keyword extraction: Simple Maths, TF-IDF, YAKE, TextRank, MultipartiteRank, and KeyBERT. We relied on a multilingual contextual BERT embedding to retrieve corresponding pesticide names in the target language. Fine-tuning was also explored to improve the multilingual representation further. Moreover, we evaluated the use of large language models (LLMs) combined with the recent retrieval-augmented generation (RAG) framework. As a result, we found that the contextual approach, combined with fine-tuning, provided the best results, contributing to enhancing pesticide terminology extraction in a multilingual scenario. © 2025, Brazilian Computing Society. All rights reserved.
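As a rough illustration of one of the six baseline keyword-extraction methods the paper compares, here is a minimal TF-IDF scorer. The tiny pesticide corpus and the function name are invented for the example; the paper's actual corpora contain 84 and 210 scientific documents:

```python
# TF-IDF scores a term highly when it is frequent in one document but
# rare across the corpus, which surfaces document-specific keywords.
import math
from collections import Counter

def tfidf_keywords(docs, doc_index, top_k=3):
    """Return the top_k highest-scoring terms of docs[doc_index]."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter()                       # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    target = tokenized[doc_index]
    tf = Counter(target)                 # term frequency in the target doc
    scores = {t: (c / len(target)) * math.log(n / df[t])
              for t, c in tf.items()}
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [t for t, _ in ranked[:top_k]]

docs = [
    "malathion is an organophosphorus pesticide",
    "parathion is an organophosphorus compound",
    "food safety rules cover pesticide residues in crops",
]
print(tfidf_keywords(docs, 0, top_k=1))  # ['malathion']
```

On this toy corpus, "malathion" wins because it appears in only one document, while shared words like "organophosphorus" are down-weighted; the contextual BERT and fine-tuning approaches the paper favours go beyond such frequency counts.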
Ngouanfouo, C.; Davoust, A.: "Detecting Machine-Generated Text using Grammatical Features". Conference paper. In: Proc. Int. Conf. Tools Artif. Intell. (ICTAI), pp. 843–848, IEEE Computer Society, 2025, ISSN: 1082-3409, ISBN: 979-8-3315-4919-0.
Abstract: Large Language Models (LLMs) have advanced natural language generation but pose ethical and practical challenges, making it crucial to detect machine-generated texts. Traditional detection methods rely on complex, hard-to-interpret neural encodings and model-specific features like perplexity. This study explores whether grammatical patterns, specifically sequences of part-of-speech (POS) tags including punctuation and symbols, can distinguish machine-written texts from human ones. Using a CNN classifier on POS sequences, the approach achieves nearly 90% accuracy on a benchmark dataset. Combining POS-based features with neural embeddings improves performance, and the model shows robustness against adversarial attacks, though it is less effective on short texts. © 2025 IEEE.
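The input representation the abstract describes, a text reduced to its sequence of POS tags with punctuation included, can be sketched as below. The toy lexicon stands in for a real POS tagger (e.g. NLTK's or spaCy's) and is purely illustrative; the paper's classifier is a CNN trained on such sequences, which this sketch does not include:

```python
# Map a text to its part-of-speech tag sequence, keeping punctuation as
# its own token, as in the grammatical features the paper studies.
import re

# Hypothetical mini-lexicon; a real system would use a trained tagger.
TOY_LEXICON = {
    "the": "DET", "a": "DET", "model": "NOUN", "text": "NOUN",
    "generates": "VERB", "writes": "VERB", "fluent": "ADJ",
}

def pos_sequence(text):
    """Return the POS-tag sequence for a text, punctuation included."""
    tokens = re.findall(r"\w+|[^\w\s]", text.lower())
    return [TOY_LEXICON.get(t, "PUNCT" if not t.isalnum() else "X")
            for t in tokens]

print(pos_sequence("The model generates fluent text."))
# ['DET', 'NOUN', 'VERB', 'ADJ', 'NOUN', 'PUNCT']
```

The intuition tested in the paper is that such tag sequences carry stylistic regularities that differ between human and machine writing, giving an interpretable alternative to opaque neural encodings.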



