{"id":4175,"date":"2024-04-28T17:50:58","date_gmt":"2024-04-28T22:50:58","guid":{"rendered":"https:\/\/cirics.uqo.ca\/caroline-blais\/"},"modified":"2025-02-07T10:04:09","modified_gmt":"2025-02-07T15:04:09","slug":"caroline-blais","status":"publish","type":"page","link":"https:\/\/cirics.uqo.ca\/en\/caroline-blais\/","title":{"rendered":"Caroline Blais"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-page\" data-elementor-id=\"4175\" class=\"elementor elementor-4175 elementor-2492\">\n\t\t\t\t<div class=\"elementor-element elementor-element-e09216a e-flex e-con-boxed e-con e-parent\" data-id=\"e09216a\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-9386cfe elementor-hidden-desktop elementor-hidden-tablet elementor-hidden-mobile e-flex e-con-boxed e-con e-parent\" data-id=\"9386cfe\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t\t\t<div class=\"elementor-element elementor-element-ea9c07d elementor-widget elementor-widget-heading\" data-id=\"ea9c07d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Welcome<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-70e53c3 elementor-widget elementor-widget-text-editor\" data-id=\"70e53c3\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>If needed, this section can contain some or all of the following:<\/p>\n<ul>\n<li>A large, engaging image of the university, department, or an abstract representation of the academic field can set a professional and inspiring tone.<\/li>\n<li>A brief welcome message or introduction that explains what visitors will find on the page. This could be a short paragraph detailing the purpose of the page, such as highlighting the academic and research achievements of the faculty.<\/li>\n<li>Key facts, achievements or statistics about the professor or department. 
For instance, number of published papers, years of experience, key projects, or awards won.<\/li>\n<li>Interactive timeline that highlights major milestones, such as significant publications, awards, and other achievements.<\/li>\n<li>A short video where the professor introduces themselves and talk about their work and interests providing a personal touch, and making the page more engaging and approachable.<\/li>\n<\/ul>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t<div class=\"elementor-element elementor-element-9b8f585 e-flex e-con-boxed e-con e-parent\" data-id=\"9b8f585\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t\t<div class=\"e-con-inner\">\n\t\t<div class=\"elementor-element elementor-element-99badd5 e-con-full my-profs-page-image-card e-flex e-con e-child\" data-id=\"99badd5\" data-element_type=\"container\" data-e-type=\"container\">\n\t\t\t\t<div class=\"elementor-element elementor-element-d8d58ce eael-team-align-centered elementor-widget elementor-widget-eael-team-member\" data-id=\"d8d58ce\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"eael-team-member.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\n\n\t<div id=\"eael-team-member-d8d58ce\" class=\"eael-team-item eael-team-members-simple team-avatar-rounded\">\n\t\t<div class=\"eael-team-item-inner\">\n\t\t\t<div class=\"eael-team-image\">\n\t\t\t\t<figure>\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" src=\"https:\/\/cirics.uqo.ca\/wp-content\/uploads\/2024\/04\/Caroline_Blais-227x300-1.webp\" alt=\"\">\n\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\n\t\t\t\t\n\t\t\t<\/div>\n\n\t\t\t<div class=\"eael-team-content\">\n\t\t\t\t<h2 class=\"eael-team-member-name\">Caroline Blais<\/h2><h3 class=\"eael-team-member-position\"><span>Professor<\/span><br> Universit\u00e9 du Qu\u00e9bec en Outaouais (UQO)<br> Department of Psychoeducation and Psychology<\/h3>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<ul class=\"eael-team-member-social-profiles\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<li class=\"eael-team-member-social-link\">\n\t\t\t\t\t\t\t\t\t\t<a href=\"https:\/\/uqo.ca\/profil\/blaica02\" target=\"_blank\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<svg aria-hidden=\"true\" class=\"e-font-icon-svg e-fas-university\" viewBox=\"0 0 512 512\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M496 128v16a8 8 0 0 1-8 8h-24v12c0 6.627-5.373 12-12 12H60c-6.627 0-12-5.373-12-12v-12H24a8 8 0 0 1-8-8v-16a8 8 0 0 1 4.941-7.392l232-88a7.996 7.996 0 0 1 6.118 0l232 88A8 8 0 0 1 496 128zm-24 304H40c-13.255 0-24 10.745-24 24v16a8 8 0 0 0 8 8h464a8 8 0 0 0 8-8v-16c0-13.255-10.745-24-24-24zM96 192v192H60c-6.627 0-12 5.373-12 12v20h416v-20c0-6.627-5.373-12-12-12h-36V192h-64v192h-64V192h-64v192h-64V192H96z\"><\/path><\/svg>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t\t\t\t<\/li>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<li class=\"eael-team-member-social-link\">\n\t\t\t\t\t\t\t\t\t\t<a href=\"http:\/\/lpvs-uqo.ca\/en\/%20\" target=\"_blank\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<svg aria-hidden=\"true\" class=\"e-font-icon-svg e-fas-book-open\" viewBox=\"0 0 576 512\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M542.22 32.05c-54.8 3.11-163.72 14.43-230.96 55.59-4.64 2.84-7.27 7.89-7.27 13.17v363.87c0 11.55 12.63 18.85 23.28 13.49 69.18-34.82 169.23-44.32 218.7-46.92 16.89-.89 30.02-14.43 
Caroline Blais heads the Laboratoire de Perception Visuelle et Sociale at the Université du Québec en Outaouais, where she is also a full professor. Her research focuses on the impact of socio-cultural factors on perceptual and cognitive functioning. She pays particular attention to how the cultural environment in which a person grew up shapes the visual processing of faces and the communication of affective signals. She also holds the Canada Research Chair in Cognitive and Social Vision (Tier 1).

Productions included in the research:

AUT (Other), BRE (Patent), CAC (Refereed publications in conference proceedings), CNA (Non-refereed paper), COC (Contribution to a collective work), COF (Refereed paper), CRE, GRO, LIV (Book), RAC (Refereed journal), RAP (Research report), RSC (Non-refereed journal).

Years: 1975 to 2024
All publications: https://cirics.uqo.ca/publications/?tsr=&auth=155

Selected publications

2026

Côté, L.; Lamontagne, J.; Bellerose, A.; Blais, C.; Fiset, D.: The eyes are central to face detection: revisiting the foundations of face processing. Journal Article. In: Vision Research, vol. 243, 2026, ISSN 0042-6989. DOI: 10.1016/j.visres.2026.108785.

Abstract: Face detection feels effortless, yet it requires finely tuned computations to extract socially meaningful signals from the visual stream. Here, we used the Bubbles method to isolate the facial features and spatial frequency information that support face categorization. Across three experiments varying in task demands and visual context, the eye region consistently emerged as the most diagnostic source of information, particularly in high spatial frequencies. This finding held whether participants distinguished faces from noise, from non-face objects, or from real-world categories, suggesting that the eyes serve as an anchor point for categorization across contexts. Strikingly, this diagnostic profile mirrors that found in face identification tasks, implying that detection and recognition may rely on shared perceptual mechanisms rather than sequential, independent processes. This overlap sheds light on longstanding ambiguities in the prosopagnosia literature, indicating that detection impairments found in patients may stem from a broader failure to extract critical eye information. More broadly, our results invite a rethinking of the early stages of face processing, suggesting that detection already involves selective use of the diagnostic facial features that support recognition, emotional decoding, and social perception. © 2026 The Author(s).
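The Bubbles method mentioned in this abstract reveals random subsets of a stimulus on each trial so that, across many trials, an observer's responses can be tied to the information that happened to be visible. Below is a minimal sketch in Python with NumPy; the function names, the number of bubbles, and the aperture size are illustrative assumptions, not the parameters of the study.

```python
# Bubbles-style sampling (after Gosselin & Schyns, 2001): a sketch, not
# the authors' code. Assumes a grayscale face image as a 2-D NumPy array.
import numpy as np

def bubbles_mask(shape, n_bubbles=20, sigma=10.0, rng=None):
    """Build a mask of randomly placed Gaussian apertures ('bubbles')."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for cy, cx in zip(rng.integers(0, h, n_bubbles),
                      rng.integers(0, w, n_bubbles)):
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)  # cap overlapping bubbles at full visibility

def bubbles_trial(face, background=0.5, **kwargs):
    """Show the face only through the bubbles; elsewhere show background."""
    mask = bubbles_mask(face.shape, **kwargs)
    return mask * face + (1.0 - mask) * background
```

Correlating each location's probability of being revealed with trial-by-trial accuracy yields a classification image of the diagnostic regions; the experiments reported above additionally sampled separate spatial-frequency bands, which this sketch omits.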
Gingras, F.; Fiset, D.; Plouffe-Demers, M.-P.; Estéphan, A.; N’Guiamba, M.; Sun, D.; Zhang, Y.; Blais, C.: Cultural differences in spatial frequency tunings to faces do not generalize to visual scenes and object stimuli. Journal Article. In: Psychonomic Bulletin and Review, vol. 33, no. 1, 2026, ISSN 1069-9384. DOI: 10.3758/s13423-025-02832-0.

Abstract: Previous research has identified cultural differences in visual perception, where East Asians focus more on global object structure and display a larger breadth of attention compared with Westerners. East Asians also rely on lower spatial frequencies (SFs) than Westerners for face recognition, which may be linked to this broader attentional style. Investigating whether such differences extend to other high-level stimulus categories would clarify whether SF tuning differences reflect general or face-specific cognitive processes. The present study compared the SF tunings of Canadians and Chinese during object (Exp. 1; N = 50) and scene (Exp. 2; N = 47) categorization. In both experiments, results did not indicate a significant difference between groups. In Experiment 3 (N = 128), we conducted an online replication of Experiment 1 while measuring the SF tunings of the same participants during face perception. Again, no significant difference between the groups was found during object categorization, but the finding that East Asians rely on lower SFs than Westerners was replicated. Together, these results suggest that unique mechanisms may underlie the cultural differences in face processing, though alternative explanations, such as the feature consistency of faces, could also account for these findings. © The Psychonomic Society, Inc. 2025.
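Spatial-frequency tuning of the kind measured here is usually estimated by filtering stimuli into frequency bands and relating performance to the bands available on each trial. A minimal band-pass filter sketch in Python with NumPy follows; the cutoff values and the hard (non-smoothed) band edges are illustrative assumptions rather than the paper's procedure.

```python
# Band-pass filtering in the Fourier domain: a sketch under assumptions,
# not the study's pipeline. Assumes a grayscale image as a 2-D NumPy array.
import numpy as np

def bandpass(img, low_cpi, high_cpi):
    """Keep spatial frequencies between low_cpi and high_cpi (cycles/image)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h   # vertical frequency, cycles per image
    fx = np.fft.fftfreq(w) * w   # horizontal frequency, cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    keep = (radius >= low_cpi) & (radius <= high_cpi)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * keep))

# e.g., a low-SF version of a stimulus: bandpass(face, 2, 8)
```

Randomly sampling which bands survive on each trial and regressing accuracy on the sampled SF content (the logic behind SF-sampling variants of Bubbles) then gives each observer's tuning curve.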
Ledrou-Paquet, V.; Fiset, D.; Carré, M.; Guérette, J.; Blais, C.: The facial information underlying economic decision-making. Journal Article. In: Perception, vol. 55, no. 3, pp. 243–265, 2026, ISSN 0301-0066. DOI: 10.1177/03010066251387848.

Abstract: Faces are rapidly and automatically assessed on multiple social dimensions, including trustworthiness. The high inter-rater agreement on this social judgment suggests a systematic association between facial appearance and perceived trustworthiness. The facial information used by observers during explicit trustworthiness judgments has been studied before. However, it remains unknown whether the same perceptual strategies are used during decisions that involve trusting another individual, without necessitating an explicit trustworthiness judgment. To explore this, 53 participants completed the Trust Game, an economic decision task, while facial information was randomly sampled using the Bubbles method. Our results show that economic decisions based on facial cues rely on similar visual information as that used during explicit trustworthiness judgments. We then manipulated facial features identified as diagnostic for trust to test their influence on perceived trustworthiness (Experiment 2) and on trust-related behaviors (Experiment 3). Across all experiments, subtle, targeted changes to facial features systematically shifted both impressions and monetary trust decisions. These findings demonstrate that the same perceptual strategies underlie explicit judgments and trust behaviors, highlighting the applied relevance of even minimal alterations in facial appearance. These findings should be replicated with real faces from diverse demographic backgrounds to confirm their generalizability. © The Author(s) 2025 (CC BY-NC 4.0).
class=\"separator-two\"><\/span><\/div>\t\t\t<\/div>\n\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Welcome If needed, this section can contain some or all of the following: A large, engaging image of the university, department, or an abstract representation of the academic field can set a professional and inspiring tone. A brief welcome message or introduction that explains what visitors will find on the page. This could be a [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"open","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-4175","page","type-page","status-publish","hentry","entry"],"_links":{"self":[{"href":"https:\/\/cirics.uqo.ca\/en\/wp-json\/wp\/v2\/pages\/4175","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cirics.uqo.ca\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/cirics.uqo.ca\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/cirics.uqo.ca\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/cirics.uqo.ca\/en\/wp-json\/wp\/v2\/comments?post=4175"}],"version-history":[{"count":1,"href":"https:\/\/cirics.uqo.ca\/en\/wp-json\/wp\/v2\/pages\/4175\/revisions"}],"predecessor-version":[{"id":4177,"href":"https:\/\/cirics.uqo.ca\/en\/wp-json\/wp\/v2\/pages\/4175\/revisions\/4177"}],"wp:attachment":[{"href":"https:\/\/cirics.uqo.ca\/en\/wp-json\/wp\/v2\/media?parent=4175"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}