OMNI-CONV: Generalization of the Omnidirectional Distortion-Aware Convolutions
Journal article, Journal of Imaging, 2023

Abstract

Omnidirectional images have recently drawn considerable research attention thanks to their potential and performance in various computer vision tasks. However, processing this type of image requires accounting for spherical distortions, so conventional convolutional neural networks, which were initially developed for perspective images, cannot be directly extended to omnidirectional images. In this paper, we present a general method to adapt perspective convolutional networks to equirectangular images, forming a novel distortion-aware convolution. Our proposed solution can be regarded as a drop-in replacement for existing convolutional layers without requiring any additional training. To verify the generalization ability of our method, we conduct an analysis on three basic vision tasks, i.e., semantic segmentation, optical flow, and monocular depth estimation. Experiments on both virtual and real outdoor scenarios show that our adapted spherical models consistently outperform their perspective counterparts.
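
The record does not include implementation details, so the following Python sketch is illustrative only: the class name EquirectConv2d is hypothetical, and the distortion model is assumed to be a simple 1/cos(latitude) horizontal stretch of the 3x3 sampling grid, which is one common way to approximate equirectangular distortion and may differ from the paper's exact formulation. It shows how a distortion-aware convolution of this kind can wrap a pretrained 3x3 convolution using torchvision's deform_conv2d, reusing the original weights without retraining.

    import math
    import torch
    import torch.nn as nn
    from torchvision.ops import deform_conv2d


    class EquirectConv2d(nn.Module):
        # Wraps a pretrained 3x3 nn.Conv2d (stride 1, padding 1) and replaces its
        # regular sampling grid with one stretched horizontally by 1/cos(latitude),
        # a first-order approximation of equirectangular distortion (assumption).
        def __init__(self, conv: nn.Conv2d):
            super().__init__()
            assert conv.kernel_size == (3, 3) and conv.stride == (1, 1) \
                and conv.padding == (1, 1), "sketch assumes 3x3, stride 1, padding 1"
            self.conv = conv  # pretrained weights reused as-is, no retraining

        def _offsets(self, h, w, device, dtype):
            # Latitude of each output row, in (-pi/2, pi/2); rows near the poles
            # get the largest horizontal stretch.
            lat = ((torch.arange(h, device=device, dtype=dtype) + 0.5) / h - 0.5) * math.pi
            stretch = 1.0 / torch.cos(lat).clamp(min=1e-2) - 1.0                  # (h,)
            # Horizontal tap coordinates of the 3x3 kernel, row-major: -1, 0, +1.
            base_x = torch.tensor([-1.0, 0.0, 1.0], device=device, dtype=dtype).repeat(3)
            dx = stretch[None, :, None] * base_x[:, None, None]                   # (9, h, 1)
            # deform_conv2d expects (dy, dx) pairs per tap -> 18 offset channels.
            offsets = torch.zeros(1, 18, h, w, device=device, dtype=dtype)
            offsets[0, 1::2] = dx.expand(9, h, w)   # modify dx only; dy stays 0
            return offsets

        def forward(self, x):
            n, _, h, w = x.shape
            offsets = self._offsets(h, w, x.device, x.dtype).repeat(n, 1, 1, 1)
            return deform_conv2d(x, offsets, self.conv.weight, self.conv.bias,
                                 stride=1, padding=1)

A wrapper of this kind could then be substituted for each eligible 3x3 convolution of a pretrained perspective network by traversing its modules, which mirrors the drop-in, training-free replacement described in the abstract.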
Main file: VF.pdf (18.72 MB)
Origin: Publication funded by an institution

Dates and versions

hal-03963383, version 1 (30-01-2023)

Identifiers

Cite

Charles-Olivier Artizzu, Guillaume Allibert, Cédric Demonceaux. OMNI-CONV: Generalization of the Omnidirectional Distortion-Aware Convolutions. Journal of Imaging, 2023, 9 (2), pp.16. ⟨10.3390/jimaging9020029⟩. ⟨hal-03963383⟩