

Centre Interdisciplinaire
de Recherche et d’Innovation
en Cybersécurité et Société
1.
Abdollahzadeh, S.; Allili, M. S.; Boulmerka, A.; Lapointe, J. -F.
Visual Safety Mapping for UAV Landings Using Ordinal Regression Networks Article de journal
In: IEEE Transactions on Artificial Intelligence, 2025, ISSN: 2691-4581.
@article{abdollahzadeh_visual_2025,
title = {Visual Safety Mapping for UAV Landings Using Ordinal Regression Networks},
author = {S. Abdollahzadeh and M. S. Allili and A. Boulmerka and J. -F. Lapointe},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-105023324811&doi=10.1109%2FTAI.2025.3635093&partnerID=40&md5=14d5d4e4558cf5f4db08bd7d2a61a945},
doi = {10.1109/TAI.2025.3635093},
issn = {2691-4581},
year = {2025},
date = {2025-01-01},
journal = {IEEE Transactions on Artificial Intelligence},
abstract = {As Unmanned Aerial Vehicles (UAVs) see growing use in civilian applications, reliably identifying Safe Landing Zones (SLZs) in varied environments is essential for autonomous navigation and emergency response. Passive vision sensors offer a low-cost, lightweight solution for real-time terrain analysis and 3D scene reconstruction, making them ideal for onboard systems. We introduce OR-SLZNet, an original deep learning model based on ordinal regression to predict SLZs from UAV imagery. Unlike prior approaches, OR-SLZNet produces dense, multi-level safety maps by jointly leveraging photometric (e.g., color and texture) and geometric cues (e.g., flatness, slope, and depth), assigning each pixel an ordinal safety score that reflects landing suitability. With real-time inference (~0.02s/frame), the model supports onboard deployment and rapid decision-making in time-critical situations. Extensive experiments on five diverse datasets demonstrate OR-SLZNet's effectiveness and strong generalization across a wide range of structural complexities. © 2025 IEEE.},
keywords = {automatic UAV navigation, deep ordinal regression, safe landing zones (SLZ), Semantic segmentation},
pubstate = {published},
tppubtype = {article}
}
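The abstract describes assigning each pixel an ordinal safety score. A common way to pose such ordinal regression (this is a generic sketch, not the authors' OR-SLZNet code) is to train K−1 cumulative binary heads, where head k predicts P(label > k); decoding then counts how many heads exceed a threshold. A minimal NumPy illustration:

```python
import numpy as np

# Hypothetical sketch of cumulative-link ordinal decoding; the head
# layout, threshold, and function name are illustrative assumptions,
# not taken from the paper.

def ordinal_decode(cum_probs, threshold=0.5):
    """Decode per-pixel ordinal labels from cumulative probabilities.

    cum_probs: array of shape (H, W, K-1), where channel k holds
               P(safety level > k) for each pixel.
    Returns an (H, W) integer safety map with values in {0, ..., K-1}.
    """
    return (np.asarray(cum_probs) > threshold).sum(axis=-1)

# Toy 1x2 "image" with K = 4 safety levels (3 cumulative heads).
probs = np.array([[[0.9, 0.8, 0.2],    # pixel 0: two heads fire -> score 2
                   [0.3, 0.2, 0.1]]])  # pixel 1: no head fires  -> score 0
safety_map = ordinal_decode(probs)
print(safety_map)  # [[2 0]]
```

Counting threshold crossings (rather than an argmax over independent classes) is what preserves the ordering of the safety levels.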



