

Centre Interdisciplinaire
de Recherche et d’Innovation
en Cybersécurité et Société
1.
Allaoui, M. L.; Allili, M. S.
MixLVMM: A Mixture of Lightweight Vision Mamba Model for Enhancing Skin Lesion Segmentation Across High Tone Variability (journal article)
In: IEEE Access, vol. 13, pp. 121234–121249, 2025, ISSN: 2169-3536.
Tags: Attention mechanism, Attention mechanisms, Computational efficiency, Critical challenges, Dermatology, Diagnosis, Image segmentation, Lesion segmentations, Lung cancer, Mixture of experts model, Mixture-of-experts model, Segmentation performance, Skin lesion, Skin lesion segmentation, Skin/lesion tone variability, Vision mamba
@article{allaoui_mixlvmm_2025,
title = {MixLVMM: A Mixture of Lightweight Vision Mamba Model for Enhancing Skin Lesion Segmentation Across High Tone Variability},
author = {M. L. Allaoui and M. S. Allili},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105012036322&doi=10.1109%2FACCESS.2025.3588476&partnerID=40&md5=1cf51dcf43653e1677ad36a1360392ac},
doi = {10.1109/ACCESS.2025.3588476},
issn = {2169-3536},
year = {2025},
date = {2025-01-01},
journal = {IEEE Access},
volume = {13},
pages = {121234--121249},
abstract = {Accurate skin lesion segmentation remains a critical challenge in automated dermatological diagnosis due to heterogeneous lesion presentations, ambiguous boundaries, imaging artifacts, and significant variability in skin and lesion tones across diverse populations. Current segmentation methods inadequately address these multifaceted complexities, particularly failing to handle extreme tone variations that can lead to diagnostic bias. To address these limitations, we present the Mixture of Lightweight Vision Mamba Model (MixLVMM), a novel expert-based framework that enhances segmentation performance across high tone variability through specialized processing. Our approach employs a Siamese network with triplet loss as a gate mechanism to categorize lesions based on tonal characteristics, routing each image to specialized Vision Mamba Model (VMM) experts optimized for specific lesion categories. Each expert utilizes a U-shaped architecture incorporating Focused Vision Mamba blocks and Adaptive Salient Region Attention modules to capture lesion-specific features while maintaining computational efficiency. Comprehensive evaluation on ISIC and PH2 datasets demonstrates that MixLVMM achieves superior segmentation performance with an average Dice coefficient of 93%, surpassing state-of-the-art methods while maintaining efficiency with only 2.5M parameters. These results establish MixLVMM as a robust solution for addressing tone-related segmentation challenges in clinical dermatology, offering both high accuracy and practical deployment feasibility for real-world applications. © 2013 IEEE.},
keywords = {Attention mechanism, Attention mechanisms, Computational efficiency, Critical challenges, Dermatology, Diagnosis, Image segmentation, Lesion segmentations, Lung cancer, Mixture of experts model, Mixture-of-experts model, Segmentation performance, Skin lesion, Skin lesion segmentation, Skin/lesion tone variability, Vision mamba},
pubstate = {published},
tppubtype = {article}
}
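The abstract describes a gate (a Siamese network trained with triplet loss) that assigns each image to a tone category and routes it to a specialized Vision Mamba expert. The routing idea can be sketched as a minimal hard-routing mixture of experts. This is an illustrative sketch only, not the authors' implementation: the prototype embeddings, thresholds, and toy "experts" below are hypothetical stand-ins for the paper's learned components.

```python
import math

# Hypothetical sketch of hard (winner-take-all) expert routing.
# A gate embeds each image (here the embedding is given directly),
# finds the nearest tone-category prototype, and dispatches the image
# to the expert specialized for that category.

def gate(embedding, prototypes):
    """Return the index of the nearest tone prototype."""
    return min(range(len(prototypes)),
               key=lambda i: math.dist(prototypes[i], embedding))

def expert_light(img):  # toy segmenter stand-in for light-tone images
    return [[1 if px > 0.6 else 0 for px in row] for row in img]

def expert_dark(img):   # toy segmenter stand-in for dark-tone images
    return [[1 if px > 0.3 else 0 for px in row] for row in img]

EXPERTS = [expert_light, expert_dark]
PROTOTYPES = [(0.8, 0.8), (0.2, 0.2)]  # made-up tone embeddings

def segment(img, tone_embedding):
    k = gate(tone_embedding, PROTOTYPES)
    return k, EXPERTS[k](img)

# A dark-tone embedding routes the image to expert 1.
k, mask = segment([[0.1, 0.5], [0.7, 0.9]], (0.25, 0.2))
```

In the paper the gate is a learned Siamese embedding and each expert is a full U-shaped Vision Mamba segmenter; only the routing structure is shown here.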



