Automatic food image recognition systems are simplifying food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson's disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson's disease patients.
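The abstract above reports both top-one and top-five classification accuracy. As a minimal sketch of how top-k accuracy is typically computed from a classifier's class scores (the function name and the toy data below are illustrative, not taken from the NutriNet paper):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k highest-scoring classes.

    scores: (n_samples, n_classes) array of model outputs (e.g. softmax scores).
    labels: (n_samples,) array of integer ground-truth class indices.
    """
    # Indices of the k largest scores in each row.
    topk = np.argsort(scores, axis=1)[:, -k:]
    # A sample counts as correct if its true label appears anywhere in its top k.
    hits = (topk == labels[:, None]).any(axis=1)
    return float(hits.mean())

# Toy example: 2 samples, 3 classes.
scores = np.array([[0.1, 0.7, 0.2],   # predicted class 1
                   [0.6, 0.3, 0.1]])  # predicted class 0
labels = np.array([1, 2])
print(top_k_accuracy(scores, labels, k=1))  # 0.5: only the first sample is right
print(top_k_accuracy(scores, labels, k=2))  # 0.5: class 2 is still outside sample 2's top 2
```

By construction, top-five accuracy is always greater than or equal to top-one accuracy, which is why the 55% real-world figure is reported at top five.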
As individuals seek increasingly individualised nutrition and lifestyle guidance, numerous apps and nutrition programs have emerged. However, complex individual variations in dietary behaviours, genotypes, gene expression and composition of the microbiome are increasingly recognised. Advances in digital tools and artificial intelligence can help individuals more easily track nutrient intakes and identify nutritional gaps. However, the influence of these nutrients on health outcomes can vary widely among individuals depending upon life stage, genetics and microbial composition. For example, folate may elicit favourable epigenetic effects on brain development during a critical developmental time window of pregnancy. Genes affecting vitamin B12 metabolism may lead to cardiometabolic traits that play an essential role in the context of obesity. Finally, an individual's gut microbial composition can determine their response to dietary fibre interventions during weight loss. These recent advances in understanding can lead to a more complete and integrated approach to promoting optimal health through personalised nutrition, in clinical practice settings and for individuals in their daily lives. The purpose of this review is to summarise presentations made during the DSM Science and Technology Award Symposium at the 13th European Nutrition Conference, which focused on personalised nutrition and novel technologies for health in the modern world.
Objective: The present study tested the combination of an established and validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate data collection and analysis. Design: The methodology combines fake-food image recognition using deep learning with food matching and standardization based on natural language processing. The recognition step is distinctive in that it uses a single deep learning network to perform both segmentation and classification at the pixel level of the image. To assess its performance, measures based on standard pixel accuracy and Intersection over Union were applied. Food matching first describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. Results: The final accuracy of the deep learning model, trained on fake-food images acquired from 124 study participants and covering fifty-five food classes, was 92.18%, while the food matching was performed with a classification accuracy of 93%. Conclusions: The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system. Keywords: Fake food buffet; Food replica; Food image recognition; Food matching; Food standardization. Measuring dietary behaviour using traditional, non-automated, self-reporting technologies is associated with considerable costs, which means researchers have been particularly interested in developing new, automated approaches.
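The segmentation model above is evaluated with pixel accuracy and Intersection over Union (IoU). As a minimal sketch of these two standard metrics on per-pixel class-label maps (function names and the toy arrays are illustrative, not from the cited study):

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == target).mean())

def mean_iou(pred, target, n_classes):
    """Mean Intersection over Union over classes that occur in pred or target.

    For each class c: IoU = |pred==c AND target==c| / |pred==c OR target==c|.
    Classes absent from both maps are skipped rather than counted as 0 or 1.
    """
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with two classes (0 = background, 1 = food item).
pred   = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
print(pixel_accuracy(pred, target))      # 0.75: 3 of 4 pixels agree
print(mean_iou(pred, target, 2))         # mean of 1/2 (class 0) and 2/3 (class 1)
```

Pixel accuracy can be inflated by large background regions, which is why IoU, computed per class and then averaged, is reported alongside it.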
There is a clear need in dietary assessment and health-care systems for easy-to-use devices and software solutions that can identify foods, quantify intake, record health behaviour and compliance, and measure eating contexts. The aim of the present study was to test the combination of an established and validated food-choice research method, the 'fake food buffet' (FFB), with a new food-matching technology to automate data collection and analysis. The FFB was developed as an experimental method to study complex food choice, meal composition and portion-size choice under controlled laboratory conditions. The FFB is a selection of highly authentic replica food items, from which consumers are invited to choose. The FFB method was validated by a comparison of meals served from real and fake foods (1). The food portions served from the fake foods correlated closely with the portions served from the real foods (1). Furthermore, significant correlations between the participants' energy needs and the amounts served were found in several studies (1-4). It has also bee...