Journal article — Biomedical Signal Processing and Control, 2024

Video-based heart rate estimation from challenging scenarios using synthetic video generation

Abstract

Remote photoplethysmography (rPPG) is an emerging technology that allows for non-invasive monitoring of physiological signals such as heart rate, blood oxygen saturation, and respiration rate using a camera. This technology has the potential to revolutionize healthcare, sports science, and affective computing by enabling continuous monitoring in real-world environments without the need for cumbersome sensors. However, rPPG technology is still in its early stages and faces challenges such as motion artifacts, low signal-to-noise ratio, and the difficulty of conducting near-infrared measurements in low-light or nighttime conditions. The performance of existing rPPG techniques has been significantly improved by deep learning approaches, primarily due to the availability of large public datasets. However, most of these datasets are limited to the regular RGB color modality, with only a few available in near-infrared. Additionally, training deep neural networks for specific applications with distinctive movements, such as sports and fitness, would require extensive amounts of video data to achieve optimal specialization and efficiency, which can be prohibitively expensive. Therefore, exploring alternative methods to augment datasets for specific applications is crucial to improve the performance of deep neural networks in rPPG. In response to these challenges, this paper presents a novel methodology to generate synthetic videos for pre-training deep neural networks to accurately estimate heart rates from videos captured under challenging conditions. We have evaluated this approach using two publicly available near-infrared datasets, i.e. MERL (Nowara et al., 2020) and Tokyotech (Maki et al., 2019), and one challenging fitness dataset, i.e. ECG-Fitness (Špetlík et al., 2018). Furthermore, we have collected and made publicly available a novel collection of near-infrared videos named IMVIA-NIR. Our data augmentation strategy involves generating video sequences that animate a person in a source image based on the motion captured in a driving video. Furthermore, we integrate a synthetic rPPG signal into the faces, considering important aspects such as the temporal shape of the signal, its spatial and spectral distribution, and the distribution of heart rates. This comprehensive integration process ensures a realistic incorporation of the rPPG signals into the synthetic videos. Experimental results demonstrate a significant reduction in the mean absolute error (MAE) score on all datasets. Overall, this approach provides a promising solution for improving the performance of deep neural networks in rPPG under challenging conditions.
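
For illustration, the minimal Python sketch below shows the general idea of embedding a synthetic pulse signal into video frames: a periodic waveform is generated at a sampled heart rate and added as a small intensity modulation over a face mask. This is not the authors' pipeline; the function names, the two-harmonic waveform model, the heart-rate range, and the modulation amplitude are all assumptions made for the example.

```python
import numpy as np

def synthetic_rppg_waveform(duration_s, fps, hr_bpm):
    """Toy periodic pulse waveform: a fundamental plus a weaker first
    harmonic at the chosen heart rate (a crude stand-in for the temporal
    shape of a real PPG pulse)."""
    t = np.arange(int(duration_s * fps)) / fps
    f = hr_bpm / 60.0                                  # heart rate in Hz
    wave = np.sin(2 * np.pi * f * t) + 0.3 * np.sin(2 * np.pi * 2 * f * t)
    return wave / np.max(np.abs(wave))

def embed_rppg_in_frames(frames, face_mask, wave, amplitude=1.5):
    """Add a tiny pulse-synchronous intensity modulation to skin pixels.

    frames    : (T, H, W) uint8 grayscale/NIR video
    face_mask : (H, W) boolean skin region
    wave      : (T,) normalized pulse waveform
    amplitude : peak intensity change in gray levels (kept small so the
                modulation stays visually imperceptible, as in real rPPG)
    """
    out = frames.astype(np.float32)
    for i, w in enumerate(wave):
        out[i][face_mask] += amplitude * w
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage: a 10 s clip at 30 fps, heart rate drawn from a plausible range.
rng = np.random.default_rng(0)
hr_bpm = rng.uniform(50, 120)                          # sampled heart rate
wave = synthetic_rppg_waveform(10, 30, hr_bpm)
frames = np.full((300, 128, 128), 128, dtype=np.uint8)  # placeholder frames
face_mask = np.zeros((128, 128), dtype=bool)
face_mask[32:96, 32:96] = True                         # placeholder face region
augmented = embed_rppg_in_frames(frames, face_mask, wave)
```

The published method additionally animates the source image using the motion of a driving video and controls the spatial and spectral distribution of the pulse signal; the sketch above only covers the temporal embedding step.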
Main file: 1-s2.0-S1746809424006566-main.pdf (3.1 MB)
Origin: publisher files authorized on an open archive

Dates and versions

hal-04662709 , version 1 (26-07-2024)

Identifiers

Cite

Yannick Benezeth, Deepak Krishnamoorthy, Deivid Johan Botina Monsalve, Keisuke Nakamura, Randy Gomez, et al. Video-based heart rate estimation from challenging scenarios using synthetic video generation. Biomedical Signal Processing and Control, 2024, 96, pp.106598. ⟨10.1016/j.bspc.2024.106598⟩. ⟨hal-04662709⟩