Cassava is the third-largest source of carbohydrates for human food in the world but is vulnerable to virus diseases, which threaten to destabilize food security in sub-Saharan Africa. Novel methods of cassava disease detection are needed to support the improved control that will prevent this crisis. Image recognition offers a cost-effective and scalable technology for disease detection, and new deep learning models offer an avenue for this technology to be deployed easily on mobile devices. Using a dataset of cassava disease images taken in the field in Tanzania, we applied transfer learning to train a deep convolutional neural network to identify three diseases and two types of pest damage (or lack thereof). The best trained model accuracies were 98% for brown leaf spot (BLS), 96% for red mite damage (RMD), 95% for green mite damage (GMD), 98% for cassava brown streak disease (CBSD), and 96% for cassava mosaic disease (CMD). The best model achieved an overall accuracy of 93% on data not used in the training process. Our results show that the transfer learning approach to image recognition of field images offers a fast, affordable, and easily deployable strategy for digital plant disease detection.
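A minimal sketch of the transfer-learning recipe described above, assuming a TensorFlow/Keras stack and an ImageNet-pretrained Inception v3 backbone; the backbone choice, class count, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

import tensorflow as tf

NUM_CLASSES = 6  # assumed: 3 diseases + 2 pest-damage types + healthy

# Pretrained feature extractor; top classification layer removed.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False  # freeze pretrained weights; optionally fine-tune later

# New classification head trained on the cassava field images.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

Freezing the backbone and training only the small head is what makes the approach fast and affordable on a modest field-image dataset; unfreezing the top few backbone layers for a second, low-learning-rate pass is a common refinement.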
Convolutional neural network (CNN) models have the potential to improve plant disease phenotyping, where the standard approach is visual diagnosis requiring specialized training. When a CNN is deployed on mobile devices, the model faces new challenges from variable lighting and orientation. If such models are to be reliably integrated into computer vision products for plant disease phenotyping, it is essential that model assessment be conducted under real-world conditions. We train a CNN object detection model to identify foliar symptoms of diseases in cassava (Manihot esculenta Crantz). We then deploy the model in a mobile app and test its performance on mobile images and video of 720 diseased leaflets in an agricultural field in Tanzania. Within each disease category we test two levels of symptom severity, mild and pronounced, to assess the model's capacity for early detection of symptoms. At both severities we see a decrease in performance on real-world images and video, as measured by the F-1 score. For pronounced symptoms in real-world images (the data closest to the training data), the F-1 score dropped by 32%, driven by a decrease in model recall. If the potential of mobile CNN models is to be realized, our data suggest it is crucial to tune recall in order to achieve the desired performance in real-world settings. In addition, the varied performance across input types (image or video) is an important design consideration for real-world applications.
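To make the recall-driven F-1 drop concrete, here is a short illustration of the harmonic-mean relationship; the precision and recall values below are invented for demonstration and are not the study's measurements.

def f1(precision: float, recall: float) -> float:
    """F-1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical controlled-condition performance.
print(f1(precision=0.90, recall=0.90))  # 0.90

# Same precision, but recall degraded by field lighting/orientation:
# the harmonic mean is dominated by the weaker of the two.
print(f1(precision=0.90, recall=0.45))  # 0.60, roughly a one-third drop

Because the harmonic mean punishes the lower of the two components, even a large precision cannot compensate for lost recall, which is why tuning the detection threshold toward higher recall matters for field deployment.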
Nuru is a deep learning object detection model for diagnosing plant diseases and pests, developed as a public good by PlantVillage (Penn State University), FAO, IITA, CIMMYT, and others. It provides a simple, inexpensive, and robust means of conducting in-field diagnosis without requiring an internet connection. Diagnostic tools that do not require the internet are critical for rural settings, especially in Africa, where internet penetration is very low. An investigation was conducted in East Africa to evaluate the effectiveness of Nuru as a diagnostic tool by comparing the ability of Nuru, cassava experts (researchers trained on cassava pests and diseases), agricultural extension officers, and farmers to correctly identify symptoms of cassava mosaic disease (CMD), cassava brown streak disease (CBSD), and the damage caused by cassava green mites (CGM). The diagnostic capability of Nuru and of the assessed individuals was determined by inspecting cassava plants and by using the cassava symptom recognition assessment tool (CaSRAT) to score images of cassava leaves based on the symptoms present. Nuru diagnosed symptoms of cassava diseases with higher accuracy (65% in 2020) than agricultural extension agents (40–58%) and farmers (18–31%). Nuru's in-field accuracy in diagnosing cassava disease and pest symptoms was enhanced significantly by increasing the number of leaves assessed to six per plant (74–88%). Two weeks of practical use of Nuru produced a slight increase in the diagnostic skill of extension workers, suggesting that a longer period of field experience might yield significant improvements. Overall, these findings suggest that Nuru can be an effective tool for in-field diagnosis of cassava diseases and has the potential to be a quick and cost-effective means of disseminating knowledge from researchers to agricultural extension agents and farmers, particularly on the identification of disease symptoms and their management practices.
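A minimal sketch of plant-level diagnosis by aggregating per-leaf predictions, the strategy that raised Nuru's in-field accuracy when six leaves per plant were assessed. The majority-vote rule and labels below are assumptions for illustration; the app's actual aggregation logic may differ.

from collections import Counter

def diagnose_plant(leaf_predictions: list[str]) -> str:
    """Return the most frequent per-leaf label across the sampled leaves."""
    counts = Counter(leaf_predictions)
    return counts.most_common(1)[0][0]

# Hypothetical per-leaf results for one plant (six leaves assessed).
leaves = ["CMD", "healthy", "CMD", "CMD", "CBSD", "CMD"]
print(diagnose_plant(leaves))  # "CMD"

Sampling more leaves reduces the chance that a single misclassified or asymptomatic leaf determines the plant-level diagnosis, which is consistent with the accuracy gains reported above.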
Background Adolescents’ consumption of healthy foods is suboptimal in low- and middle-income countries. Adolescents’ fondness for games and social media, together with their increasing access to smartphones, makes apps suitable for collecting dietary data and influencing their food choices. Little is known about how adolescents use phones to track and shape their food choices. Objective This study aimed to examine the acceptability, usability, and likability of a mobile phone app prototype developed to collect dietary data using artificial intelligence-based image recognition of foods, provide feedback, and motivate users to make healthier food choices. The findings were used to improve the design of the app. Methods A total of 4 focus group discussions (n=32 girls, aged 15-17 years) were conducted in Vietnam. Qualitative data were collected and analyzed by grouping ideas into common themes based on content analysis and grounded theory. Results Adolescents accepted most of the individual- and team-based dietary goals presented in the app prototype to help them make healthier food choices. They deemed the overall app wireframes, interface, and graphic design acceptable, likable, and usable but suggested the following modifications: tailored feedback based on users’ medical history, anthropometric characteristics, and fitness goals; new language on dietary goals; provision of information about each of the food group dietary goals; a wider camera frame to fit the whole family food tray, as meals are shared in Vietnam; the possibility of digitally separating food consumption within shared meals; and more appealing graphic design, including unique badge designs for each food group. Participants also liked the app’s feedback on food choices in the form of badges, notifications, and statistics. A new version of the app was designed incorporating adolescents’ feedback to improve its acceptability, usability, and likability. Conclusions A phone app prototype designed to track food choices and help adolescent girls from low- and middle-income countries make healthier food choices was found to be acceptable, likable, and usable. Further research is needed to examine the feasibility of using this technology at scale.
Background There is a gap in data on the dietary intake of adolescents in low- and middle-income countries (LMICs). Traditional methods of dietary assessment are resource intensive and lack accuracy in portion size estimation. Technology-assisted dietary assessment tools have been proposed, but few have been validated for feasibility of use in LMICs. Objectives We assessed the relative validity of FRANI (Food Recognition Assistance and Nudging Insights), a mobile artificial intelligence (AI) application for dietary assessment, in adolescent females (n = 36) aged 12–18 years in Vietnam against a weighed records (WR) standard, and compared FRANI’s performance to a multi-pass 24-hour recall (24HR). Methods Dietary intake was assessed using three methods: FRANI, WRs, and 24HRs, undertaken on three non-consecutive days. Equivalence of nutrient intakes was tested using mixed effects models adjusting for repeated measures, using 10%, 15%, and 20% bounds. The concordance correlation coefficient (CCC) was used to assess agreement between methods. Sources of error were identified for memory and portion size estimation bias. Results Equivalence between the FRANI app and WR was determined at the 10% bound for energy, protein, and fat and four nutrients (iron, riboflavin, vitamin B6, and zinc), and at the 15% and 20% bounds for carbohydrate, calcium, vitamin C, thiamin, niacin, and folate. Similar results were observed for differences between 24HR and WR, with equivalence at the 20% bound for all nutrients except vitamin A. The CCCs between FRANI and WR (0.60 to 0.81) were slightly lower than those between 24HR and WR (0.70 to 0.89) for energy and most nutrients. Memory error (food omissions or intrusions) was approximately 21%, with no clear pattern apparent in portion size estimation bias across foods. Conclusions AI-assisted dietary assessment and 24HR accurately estimate nutrient intake in adolescent females when compared with WR. Errors could be reduced with further improvements in AI-assisted food recognition and portion estimation.
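For readers unfamiliar with the agreement statistic used above, here is a sketch of Lin’s concordance correlation coefficient, which penalizes both poor correlation and systematic bias between two measurement methods; the data below are simulated for illustration only.

import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's CCC: 2*cov / (var_x + var_y + (mean_x - mean_y)**2)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Simulated energy intakes (kcal) for 36 participants: weighed records
# as the standard, and app estimates as the standard plus noise.
rng = np.random.default_rng(0)
wr = rng.normal(2000, 300, size=36)
frani = wr + rng.normal(0, 200, size=36)
print(round(ccc(frani, wr), 2))

Unlike the Pearson correlation, the CCC drops when one method systematically over- or under-estimates the other, which is why it is the preferred agreement measure for validating a new dietary assessment tool against a reference standard.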