

Centre Interdisciplinaire
de Recherche et d’Innovation
en Cybersécurité et Société
1.
Allaoui, M. L.; Allili, M. S.; Belaid, A.
HA-U3Net: A modality-agnostic framework for 3D medical image segmentation using nested V-Net structure and hybrid attention Article de journal
In: Knowledge-Based Systems, vol. 327, 2025, ISSN: 0950-7051.
@article{allaoui_ha-u3net_2025,
title = {HA-U3Net: A modality-agnostic framework for 3D medical image segmentation using nested V-Net structure and hybrid attention},
author = {M. L. Allaoui and M. S. Allili and A. Belaid},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105011370963&doi=10.1016%2Fj.knosys.2025.114127&partnerID=40&md5=d98a109f015445adb3001bb4017bf953},
doi = {10.1016/j.knosys.2025.114127},
issn = {0950-7051},
year = {2025},
date = {2025-01-01},
journal = {Knowledge-Based Systems},
volume = {327},
abstract = {3D medical image segmentation is essential for disease diagnosis and treatment planning across a wide range of imaging modalities (e.g., MRI, CT, ultrasound, and PET). However, modality-specific challenges, such as noise, artifacts, low contrast, and anatomical variability, along with the presence of small lesions and fuzzy boundaries, hinder the generalization capability of existing segmentation models. In this work, we present HA-U3Net, a novel 3D U-Net-based model designed to address these limitations through a stepwise approach. First, we introduce a deeply nested U3-shaped structure built upon 3D V-Net modules, enabling multi-scale hierarchical representation learning. Second, we integrate a hybrid attention mechanism combining spatial and channel-wise attention to enhance salient feature extraction and the delineation of small or poorly defined structures. Third, we demonstrate the cross-modality generalization capabilities of HA-U3Net through extensive evaluations on several datasets, where our model consistently outperforms baseline methods. Finally, we propose a lightweight variant, U3Mamba, reducing computational complexity while maintaining high performance. © 2025 Elsevier B.V.},
keywords = {3D medical image, 3D medical image segmentation, Diagnosis, Diagnosis planning, Disease diagnosis, Disease treatment, Generalization capability, Image segmentation, Magnetic resonance imaging, Medical image processing, Medical image segmentation, Nested volume-structure, Net structures, Self hybrid attention, Structures (built objects)},
pubstate = {published},
tppubtype = {article}
}



