Blood pressure is one of the most informative vital signs of a person's health: deviations from the normal range can change one's state from completely relaxed to extremely unpleasant, which makes blood pressure monitoring a routine procedure whenever something seems wrong or suspicious about a person's health. The most popular and accurate measurement methods are cuff-based, inconvenient, and expensive. On the bright side, many experimental studies show that changes in the intensities of the RGB channels of facial video reflect variations in the blood flowing beneath the skin, which are strongly related to blood pressure. We therefore present a novel approach to blood pressure estimation based on the analysis of human face video using hybrid deep learning models. We analyzed previously proposed approaches and methods in depth and developed combinations of state-of-the-art models, validated by their test results on the Vision for Vitals (V4V) dataset against the performance of other available models. Additionally, we introduce a new evaluation metric: Pearson's correlation coefficient between the predicted blood pressure of the subjects and their per-minute respiratory rate, provided by our own dataset of 60 videos of operators working at personal computers for almost 20 min each. Our method provides a cuff-less, fast, and comfortable way to estimate blood pressure with no equipment needed other than a smartphone camera.
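The core signal described above (per-channel color intensity over time from the face region) can be sketched as follows. This is an illustrative minimal example, not the paper's pipeline: it assumes the face region has already been cropped into a stack of frames, and the function names are hypothetical.

```python
import numpy as np

def rgb_channel_signals(frames):
    """Average each RGB channel over the face ROI of every frame.

    frames: array of shape (T, H, W, 3) holding the cropped face region.
    Returns an array of shape (T, 3): one mean intensity per channel per
    frame -- the raw temporal signal whose variation reflects the blood
    flowing beneath the skin.
    """
    frames = np.asarray(frames, dtype=np.float64)
    return frames.mean(axis=(1, 2))

def normalize(signals):
    """Zero-mean, unit-variance normalization per channel -- a common
    preprocessing step before feeding such signals to a deep model
    (an assumed step here, not one specified by the abstract)."""
    mu = signals.mean(axis=0)
    sigma = signals.std(axis=0)
    return (signals - mu) / np.where(sigma == 0, 1, sigma)
```

A deep model would then consume these `(T, 3)` traces (or windows of them) to regress systolic/diastolic values.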
Meditation practice is a form of mental health training that helps people reduce stress and suppress negative thoughts. In this paper, we propose a camera-based meditation evaluation system that helps meditators improve their performance. We rely on two main criteria to measure focus: breathing characteristics (respiratory rate, breathing rhythmicity, and stability) and body movement. We introduce a contactless sensor that measures the respiratory rate from a smartphone camera by detecting the chest keypoint in each frame, using an optical-flow-based algorithm to calculate the displacement between frames, filtering and de-noising the chest movement signal, and counting the real peaks in this signal. We also present an approach to detecting the movement of different body parts (head, thorax, shoulders, elbows, wrists, stomach, and knees). We collected a non-annotated dataset of ninety meditation practice videos and an annotated dataset of eight videos. The non-annotated dataset was categorized into beginner and professional meditators and was used to develop the algorithm and tune its parameters. The annotated dataset was used for evaluation and showed that human activity during meditation practice can be correctly estimated by the presented approach and that the mean absolute error of the respiratory rate is around 1.75 BPM, which can be considered tolerable for the meditation application.
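The last stage of the respiratory pipeline described above (de-noise the chest-displacement signal, then count its real peaks) can be sketched like this. The smoothing window and minimum peak spacing are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def respiratory_rate(displacement, fps, min_gap_s=1.5, smooth_s=0.5):
    """Estimate breaths per minute from a chest-displacement signal.

    displacement: 1-D vertical chest movement per frame (e.g. obtained
    via optical flow on the chest keypoint); fps: video frame rate.
    min_gap_s: assumed minimum spacing between breaths (~40 BPM ceiling);
    smooth_s: moving-average window used for de-noising.
    """
    x = np.asarray(displacement, dtype=np.float64)
    # De-noise with a simple moving average.
    w = max(1, int(smooth_s * fps))
    x = np.convolve(x, np.ones(w) / w, mode="same")
    # Count "real" peaks: local maxima separated by at least min_gap_s.
    gap = int(min_gap_s * fps)
    peaks, last = 0, -gap
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] >= x[i + 1] and i - last >= gap:
            peaks += 1
            last = i
    duration_min = len(x) / fps / 60.0
    return peaks / duration_min
```

On a clean sinusoidal chest signal this recovers the oscillation frequency in breaths per minute; real signals would need the stronger filtering the abstract mentions.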
Developing a driver monitoring system that can assess the driver's state is a prerequisite and a key to improving road safety. With the success of deep learning, such systems can achieve high accuracy when corresponding high-quality datasets are available. In this paper, we introduce DriverMVT (Driver Monitoring dataset with Videos and Telemetry). The dataset contains information about the driver's head pose, heart rate, and in-cabin behaviour such as drowsiness and an unfastened seat belt. It can be used to train and evaluate deep learning models that estimate the driver's health state, mental state, concentration level, and activity in the cabin. Systems that alert the driver in case of drowsiness or distraction can reduce the number of accidents and increase safety on the road. The dataset contains 1506 videos of 9 different drivers (7 males and 2 females), with a total of 5119k frames and over 36 h of footage. In addition, we evaluated the dataset with the multi-task temporal shift convolutional attention network (MTTS-CAN) algorithm, which achieves a mean absolute error of 16.375 heartbeats per minute on our dataset.
Subject of research. We consider the navigation problem for mobile robots based on simultaneous localization and mapping (SLAM). An INTEL Realsense Depth camera is used to obtain data from the environment. Methods. Real-Time Appearance-Based Mapping is used to build a point cloud, which is projected onto a plane to obtain a two-dimensional costmap. The D* algorithm plans the global path to the desired goal, and the dynamic window approach serves as the local planner. Main results. We present methods for building a point cloud from the INTEL Realsense Depth camera image and a path from the robot's location to the desired goal. Practical significance. The proposed approach is fast and reliable, can be used for indoor navigation (factories, companies, etc.), and runs on a central processor without requiring a graphics processor.
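Global path planning over a 2-D costmap, as described above, can be illustrated with a minimal Dijkstra search. Note this is a simplified stand-in for the D* planner the abstract names: D* additionally repairs the path incrementally when the map changes, while on a static costmap both yield a minimum-cost route. The grid encoding and function name are assumptions for the sketch:

```python
import heapq

def plan_path(costmap, start, goal):
    """Minimum-cost path on a 2-D costmap via Dijkstra search.

    costmap: list of lists; None marks an obstacle, numbers are
    per-cell traversal costs. start/goal: (row, col) tuples.
    Returns the path as a list of cells, or None if unreachable.
    """
    rows, cols = len(costmap), len(costmap[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            path = [cell]
            while cell in prev:          # walk predecessors back to start
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue                     # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and costmap[nr][nc] is not None:
                nd = d + costmap[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None
```

The dynamic window approach would then track this global route locally, respecting the robot's velocity and acceleration limits.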