Capabilities for continuous monitoring of key physiological parameters of disease have never been more important than in the context of the global COVID-19 pandemic. Soft, skin-mounted electronics that incorporate high-bandwidth, miniaturized motion sensors enable digital, wireless measurements of mechanoacoustic (MA) signatures of both core vital signs (heart rate, respiratory rate, and temperature) and underexplored biomarkers (coughing count) with high fidelity and immunity to ambient noise. This paper summarizes an effort that integrates such MA sensors with a cloud data infrastructure and a set of analytics approaches based on digital filtering and convolutional neural networks for monitoring of COVID-19 infections in sick and healthy individuals in the hospital and the home. Unique features include quantitative measurements of coughing and other vocal events as indicators of both disease and infectiousness. Systematic imaging studies demonstrate correlations between the time and intensity of coughing, speaking, and laughing and the total droplet production, as an approximate indicator of the probability of disease spread. The sensors, deployed on COVID-19 patients along with healthy controls in both inpatient and home settings, record coughing frequency and intensity continuously, along with a collection of other biometrics. The results indicate a declining trend in coughing frequency and intensity over the course of disease recovery, but with wide variations across patient populations. The methodology creates opportunities to study patterns in biometrics across individuals and among different demographic groups.
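The abstract names a two-stage analytics pipeline (digital filtering followed by a convolutional neural network) without implementation details. The Python sketch below illustrates one plausible form of that pipeline for cough detection from an accelerometer stream; the sampling rate, pass band, window length, and network architecture are illustrative assumptions, not the published implementation.

```python
# Minimal sketch of a filtering-plus-CNN cough detector: a zero-phase
# band-pass filter isolates mechano-acoustic content, and a small CNN
# classifies short-time spectrogram windows as cough vs. non-cough.
# All parameters here are assumptions, not the authors' configuration.
import numpy as np
from scipy.signal import butter, sosfiltfilt, spectrogram
import torch
import torch.nn as nn

FS = 1600  # Hz, assumed accelerometer sampling rate

def bandpass(x, lo=10.0, hi=700.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass over one axis of acceleration."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def to_spectrogram(x, fs=FS):
    """Log-magnitude spectrogram of a 1-s window, shaped (1, F, T)."""
    _, _, S = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
    return torch.tensor(np.log1p(S), dtype=torch.float32).unsqueeze(0)

class CoughCNN(nn.Module):
    """Tiny binary classifier over spectrogram windows."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(16 * 4 * 4, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Example: classify one second of (simulated) z-axis acceleration.
window = np.random.randn(FS)             # stand-in for real sensor data
spec = to_spectrogram(bandpass(window))  # (1, F, T)
logits = CoughCNN()(spec.unsqueeze(0))   # add batch dimension -> (1, 2)
print("cough probability:", torch.softmax(logits, dim=1)[0, 1].item())
```

In a deployed system a model like this would be trained on labeled cough windows and run over the continuous stream to produce the coughing counts and intensities described above.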
Healthy subjects were recruited to represent a variety of ages, genders, and cultural backgrounds.
Ethics oversight: All procedures in the in vivo trials were performed in accordance with the experimental protocol approved by the Committee on the Use of Humans as Experimental Subjects of the Massachusetts Institute of Technology (COUHES Protocol 2101000301). The participants gave informed consent.
Conventional vision-based systems, such as cameras, have demonstrated enormous versatility in sensing human activities and developing interactive environments. However, these systems have long been criticized for privacy, power, and latency issues that stem from their underlying structure of pixel-wise analog signal acquisition, computation, and communication. In this research, we overcome these limitations by introducing in-sensor analog computation through the distribution of interconnected photodetectors in space, each with a weighted responsivity, to create what we call a computational photodetector. Computational photodetectors can be used to extract mid-level vision features as a single continuous analog signal measured via a two-pin connection. We develop computational photodetectors using thin and flexible low-noise organic photodiode arrays coupled with a self-powered wireless system to demonstrate a set of designs that capture position, orientation, direction, speed, and identification information, in a range of applications from explicit interactions on everyday surfaces to implicit activity detection.
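To build intuition for how spatially weighted responsivities summed into one analog signal can encode a mid-level feature, the toy simulation below recovers the 1-D position of a light spot from two wire-summed channels. The array size, linear weighting scheme, and Gaussian spot model are assumptions introduced here for illustration, not the paper's hardware design.

```python
# Toy model of in-sensor analog computation: each channel is a single
# summed photocurrent (a two-pin readout). Dividing a position-weighted
# channel by a uniform-weight channel yields an intensity-normalized
# centroid, i.e., the spot position, without pixel-wise digitization.
import numpy as np

N = 16                        # assumed number of photodetectors on one axis
x = np.linspace(0.0, 1.0, N)  # normalized detector positions
w_pos = x                     # responsivity graded linearly with position
w_ref = np.ones(N)            # uniform-responsivity reference channel

def spot(center, width=0.08):
    """Gaussian light-spot intensity sampled at the detector positions."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

for true_pos in (0.2, 0.5, 0.8):
    I = spot(true_pos)
    s_pos = np.dot(w_pos, I)  # position-weighted summed photocurrent
    s_ref = np.dot(w_ref, I)  # total-intensity summed photocurrent
    estimate = s_pos / s_ref  # normalized centroid of the spot
    print(f"true: {true_pos:.2f}  estimated: {estimate:.3f}")
```

The design choice this illustrates is that the computation (the weighted sum) happens in the analog domain at the sensor, so only one low-bandwidth signal per feature needs to be read out.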
Estimation of crop damage plays a vital role in the management of fields in the agriculture sector. An accurate measure of crop damage provides key guidance for agricultural decision-support systems. The objective of this study was to propose a novel technique for classifying damaged crops based on a state-of-the-art deep learning algorithm. To this end, a dataset of rapeseed field images was gathered after bird attacks. The dataset consisted of three classes: undamaged, partially damaged, and fully damaged crops. VGG16 and ResNet50, as pre-trained deep convolutional neural networks, were used to classify these classes. The overall classification accuracy reached 93.7% and 98.2% for the VGG16 and ResNet50 models, respectively. The results indicated that deep neural networks have a high ability to distinguish and categorize image-based datasets of rapeseed. The findings also revealed the great potential of deep learning-based models to classify other damaged crops.
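The general setup the abstract describes (a pre-trained network fine-tuned for three damage classes) can be sketched in a few lines of PyTorch. The directory layout, input size, and training hyperparameters below are assumptions for illustration; the paper's exact configuration may differ.

```python
# Hedged sketch of transfer learning with a pre-trained ResNet50 whose
# final layer is replaced by a three-class head (undamaged / partially
# damaged / fully damaged). Paths and hyperparameters are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Assumed layout: rapeseed/train/{undamaged,partially_damaged,fully_damaged}/
train_set = datasets.ImageFolder("rapeseed/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():                   # freeze the ImageNet backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 3)  # new three-class head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Swapping `models.resnet50` for `models.vgg16` (and replacing its classifier head accordingly) gives the second baseline the study compares against.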