Objective monitoring of food intake and ingestive behavior in a free-living environment remains an open problem with significant implications for the study and treatment of obesity and eating disorders. This paper presents a novel wearable sensor system, the automatic ingestion monitor (AIM), for objective monitoring of ingestive behavior in free living. The device integrates three sensor modalities that interface wirelessly to a smartphone: a jaw motion sensor, a hand gesture sensor, and an accelerometer. A novel sensor fusion and pattern recognition method was developed for subject-independent food intake recognition. The device and methodology were validated with data collected from 12 subjects who wore AIM for 24 h, during which neither their daily activities nor their food intake was restricted in any way. Results showed that the system detected food intake with an average accuracy of 89.8%, suggesting that AIM can potentially be used as an instrument to monitor ingestive behavior in free-living individuals.
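The abstract does not detail the fusion method, but a common approach with multiple sensor modalities is feature-level fusion: each stream is split into fixed-length epochs, per-sensor features are computed, and the feature vectors are concatenated before classification. The sketch below is a minimal illustration of that idea; the epoch length, sampling rate, and feature set are assumptions, not the paper's actual parameters.

```python
# Hypothetical feature-level sensor fusion for the three AIM modalities
# (jaw motion, hand gesture, accelerometer). Epoch length, sampling
# rate, and the three statistics below are illustrative assumptions.

def epochs(signal, rate_hz, epoch_s):
    """Split a 1-D signal into non-overlapping epochs."""
    n = int(rate_hz * epoch_s)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

def features(epoch):
    """Mean, standard deviation, and RMS of one epoch."""
    m = sum(epoch) / len(epoch)
    var = sum((x - m) ** 2 for x in epoch) / len(epoch)
    rms = (sum(x * x for x in epoch) / len(epoch)) ** 0.5
    return [m, var ** 0.5, rms]

def fuse(jaw, hand, accel, rate_hz=100, epoch_s=3):
    """Concatenate per-sensor features for each time-aligned epoch."""
    fused = []
    for j, h, a in zip(epochs(jaw, rate_hz, epoch_s),
                       epochs(hand, rate_hz, epoch_s),
                       epochs(accel, rate_hz, epoch_s)):
        fused.append(features(j) + features(h) + features(a))
    return fused
```

Each fused vector would then be fed to a subject-independent classifier; the fusion step itself is classifier-agnostic.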
The presence of speech and motion artifacts has been shown to degrade the performance of wearable sensor systems used for automatic detection of food intake. This work presents a novel wearable device that can detect food intake even when the user is physically active and/or talking. The device consists of a piezoelectric strain sensor placed on the temporalis muscle, an accelerometer, and a data acquisition module connected to the temple of a pair of eyeglasses. Data were collected from 10 participants while they performed activities including quiet sitting, talking, eating while sitting, eating while walking, and walking. The piezoelectric strain sensor and accelerometer signals were divided into non-overlapping 3-s epochs, and four features were computed for each signal. To differentiate between eating and not eating, as well as between sedentary postures and physical activity, two multiclass classification approaches are presented: the first used a single classifier with sensor fusion, and the second used two-stage classification. The best results were achieved when two separate linear support vector machine (SVM) classifiers were trained for food intake and activity detection and their outputs were combined using a decision tree (two-stage classification) to determine the final class. This approach achieved an average F1-score of 99.85% and an area under the curve (AUC) of 0.99 for multiclass classification. With its ability to differentiate between food intake and activity level, this device may potentially be used to track both energy intake and energy expenditure.
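The final combination step of the two-stage scheme can be sketched as a small decision rule over the two binary stage outputs. The rule set below is hypothetical (the paper's actual decision tree is not given in the abstract), and it collapses the two non-eating sedentary activities (quiet sitting and talking) into a single "sedentary" class for illustration.

```python
# Hypothetical stage-combination rule for the two-stage classifier:
# stage 1 (intake SVM) says eating / not eating, stage 2 (activity SVM)
# says active / sedentary, and a small decision tree merges them.
# The four class labels here are illustrative, not the paper's exact set.

def combine(eating: bool, active: bool) -> str:
    """Map the two binary stage outputs to one final class."""
    if eating:
        return "eating while walking" if active else "eating while sitting"
    return "walking" if active else "sedentary"
```

Keeping the two classifiers independent means each can be retrained or tuned separately, with only this final rule tying them together.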
Research suggests a relationship between chew count, chewing rate, and energy intake. Chewing has been used in wearable sensor systems for the automatic detection of food intake, but little work has been reported on the automatic measurement of chew count or chewing rate. This work presents a method for the automatic quantification of chewing episodes captured by a piezoelectric sensor system. The proposed method was tested on 120 meals from 30 participants using two approaches. In a semi-automatic approach, histogram-based peak detection was used to count the chews in manually annotated chewing segments, resulting in a mean absolute error of 10.40% ± 7.03%. In a fully automatic approach, automatic food intake recognition preceded the chew counting algorithm: the sensor signal was divided into 5-s non-overlapping epochs, and leave-one-out cross-validation was used to train an artificial neural network (ANN) to classify epochs as “food intake” or “no intake” with an average F1 score of 91.09%. Chews were counted in epochs classified as food intake with a mean absolute error of 15.01% ± 11.06%. The proposed methods were compared with manual chew counts using an analysis of variance (ANOVA), which showed no statistically significant difference between the two methods. These results suggest that the proposed method can provide objective and automatic quantification of eating behavior in terms of chew counts and chewing rates.
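The core of both approaches is counting chews as signal peaks above an amplitude threshold chosen from the signal's histogram. A minimal sketch of that idea follows; the threshold rule (just above the most-populated amplitude bin, on the assumption that baseline samples dominate) and the bin count are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical histogram-based chew counting: pick a threshold from the
# amplitude histogram, then count local maxima above it as chews.

def histogram_threshold(signal, bins=10):
    """Set the threshold just above the most-populated amplitude bin,
    assuming baseline (non-peak) samples dominate the histogram."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in signal:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    modal = counts.index(max(counts))
    return lo + (modal + 1) * width

def count_chews(signal):
    """Count local maxima above the histogram threshold."""
    thr = histogram_threshold(signal)
    chews = 0
    for i in range(1, len(signal) - 1):
        if signal[i] > thr and signal[i - 1] < signal[i] >= signal[i + 1]:
            chews += 1
    return chews

def chewing_rate(signal, rate_hz):
    """Chews per second over the analyzed segment."""
    return count_chews(signal) * rate_hz / len(signal)
```

In the fully automatic pipeline, `count_chews` would be applied only to epochs the ANN labels as food intake.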