Exploiting photoplethysmography (PPG) signals for non-invasive blood pressure (BP) measurement is attractive for several reasons. First, PPG can easily be measured using finger-clip sensors. Second, camera-based approaches allow deriving remote PPG (rPPG) signals similar to PPG and therefore provide the opportunity for contactless, non-invasive BP measurement. Various methods relying on machine learning techniques have recently been published. Performance is often reported as the mean absolute error (MAE) over the data, which is problematic. This work aims to analyze the PPG- and rPPG-based BP prediction error with respect to the underlying data distribution. First, we train established neural network (NN) architectures and derive an appropriate parameterization of input segments drawn from continuous PPG signals. Second, we use this parameterization to train NNs on a larger PPG dataset and carry out a systematic evaluation of the predicted blood pressure. The analysis revealed a strong systematic increase of the prediction error towards less frequent BP values across NN architectures. Moreover, we tested different train/test set split configurations, which underpin the importance of a careful subject-aware dataset assignment to prevent overly optimistic results. Third, we use transfer learning to train the NNs for rPPG-based BP prediction. The resulting performance is similar to the PPG-only case. Finally, we apply different personalization techniques and retrain our NNs with subject-specific data for both the PPG-only and rPPG cases. Whilst the particular technique matters less, personalization reduces the prediction errors significantly.
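The subject-aware dataset assignment mentioned above can be sketched as follows. This is an illustrative example only, not the paper's actual pipeline: the `subject_aware_split` helper and the dictionary-based segment format are assumptions made for the sketch.

```python
import random

def subject_aware_split(segments, test_fraction=0.2, seed=0):
    """Assign whole subjects to either train or test so that no
    subject contributes segments to both sets (prevents identity
    leakage between the splits)."""
    subjects = sorted({s["subject_id"] for s in segments})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, round(test_fraction * len(subjects)))
    test_subjects = set(subjects[:n_test])
    train = [s for s in segments if s["subject_id"] not in test_subjects]
    test = [s for s in segments if s["subject_id"] in test_subjects]
    return train, test

# Toy data: three subjects with four PPG segments each.
data = [{"subject_id": sid, "segment": i}
        for sid in ("s01", "s02", "s03") for i in range(4)]
train, test = subject_aware_split(data, test_fraction=0.34)
```

A naive random split over individual segments would typically place segments of the same subject in both sets, which is one source of the overly optimistic results the abstract warns about.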
The estimation of respiratory rates from continuous respiratory signals is commonly performed using either the Fourier transform or the zero-crossing method. This paper introduces an alternative method based on the autocorrelation function of the respiratory signal. Respiratory signals can be measured either directly, using a flow sensor or chest strap, or indirectly, on the basis of the electrocardiogram (ECG). We compare our method against other established methods on real-world ECG signals, using the breathing frequency derived from directly measured respiration as a reference. Our method achieved the best agreement between respiration rates derived from directly and indirectly measured respiratory signals.
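A minimal sketch of an autocorrelation-based respiratory rate estimator, assuming a uniformly sampled, roughly periodic signal. The function name, the breaths-per-minute band limits, and the use of the raw (unnormalized) autocorrelation are illustrative choices, not the paper's exact algorithm.

```python
import math

def respiratory_rate_autocorr(signal, fs, min_bpm=6, max_bpm=40):
    """Estimate the respiratory rate (breaths/min) as the lag of the
    largest autocorrelation value within a plausible lag range."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]          # remove DC offset
    # Lags corresponding to the fastest and slowest plausible breathing.
    min_lag = int(fs * 60 / max_bpm)
    max_lag = int(fs * 60 / min_bpm)
    best_lag, best_r = None, -math.inf
    for lag in range(min_lag, min(max_lag, n - 1) + 1):
        r = sum(x[i] * x[i + lag] for i in range(n - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    return 60.0 * fs / best_lag

# Synthetic respiratory signal: 0.25 Hz (15 breaths/min) at fs = 10 Hz.
fs = 10.0
sig = [math.sin(2 * math.pi * 0.25 * t / fs) for t in range(int(fs * 30))]
rate = respiratory_rate_autocorr(sig, fs)
```

Restricting the lag search to a physiological band plays the same role as band-limiting before Fourier or zero-crossing analysis: it keeps harmonics and slow drift from being picked as the dominant period.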
In recent years, considerable progress has been made in the non-contact detection of the respiration rate from video sequences. Common techniques either directly assess the movement of the chest due to breathing or analyze the subtle color changes that occur as a result of hemodynamic properties of the skin tissue by means of remote photoplethysmography (rPPG). However, extracting hemodynamic parameters from rPPG is often difficult, especially if the skin is not visible to the camera. In contrast, extracting respiratory signals from chest movements has proven to be a robust method. However, the detectability of chest regions cannot be guaranteed in every application scenario, for instance if the camera setting is optimized to provide close-up images of the head. In such a case, an alternative method for respiration detection is required. It is reasonable to assume that the mechanical coupling between chest and head induces minor movements of the head which, as in rPPG, can be detected from subtle color changes as well. Although the strength of these movements is expected to be much smaller in scale, sensing these intensity variations could provide a reasonably suited respiration signal for subsequent respiratory rate analysis. In order to investigate this coupling, we conducted an experimental study with 12 subjects and applied motion- and rPPG-based methods to estimate the respiratory frequency from both head regions and the chest. Our results show that it is possible to derive signals correlated to chest movement from facial regions. The method is a feasible alternative to rPPG-based respiratory rate estimation when rPPG signals cannot be derived reliably and chest movement detection cannot be applied either.
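Estimating the respiratory frequency from such a weak head-derived signal typically comes down to picking the dominant frequency within the respiratory band. The sketch below does this with a brute-force scan of single-frequency DFT magnitudes; the band limits, step size, and function name are assumptions for illustration, not the study's method.

```python
import math

def dominant_respiratory_freq(signal, fs, f_min=0.1, f_max=0.7, step=0.01):
    """Scan the respiratory band (Hz) and return the frequency whose
    DFT magnitude is largest -- a simple stand-in for FFT peak picking."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]          # remove DC offset
    best_f, best_mag = f_min, -1.0
    f = f_min
    while f <= f_max + 1e-9:
        re = sum(x[i] * math.cos(2 * math.pi * f * i / fs) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * f * i / fs) for i in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_mag, best_f = mag, f
        f += step
    return best_f

# Synthetic low-amplitude head signal: 0.3 Hz breathing at fs = 20 Hz, 30 s.
fs = 20.0
head_sig = [0.1 * math.sin(2 * math.pi * 0.3 * i / fs) for i in range(int(fs * 30))]
f_est = dominant_respiratory_freq(head_sig, fs)
```

Because only the peak location matters, the small amplitude of head-coupled motion is not itself a problem; the limiting factor in practice is the signal-to-noise ratio within the scanned band.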