Fruits produce a wide variety of secondary metabolites of great economic value. Analytical measurement of these metabolites is tedious, time-consuming and expensive, and their concentration varies greatly from tree to tree, making it difficult to choose trees for fruit collection. The present study tested whether deep learning-based models built on fruit and leaf images alone can predict a metabolite's concentration class (high or low). We collected fruits and leaves (n = 1045) from neem trees growing in the wild across 0.6 million sq km, photographed them, measured the concentrations of five metabolites (azadirachtin, deacetyl-salannin, salannin, nimbin and nimbolide) by high-performance liquid chromatography, and used these data to train deep learning models for metabolite class prediction. The best of the seven models tested (YOLOv5, GoogLeNet, InceptionNet, EfficientNet_B0, Resnext_50, Resnet18, and SqueezeNet) achieved a validation F1 score of 0.93 and a test F1 score of 0.88. For the fruit model alone, test-set sensitivity and specificity were 83.52 ± 6.19 and 82.35 ± 5.96 for the low class, and 79.40 ± 8.50 and 85.64 ± 6.21 for the high class, respectively. Using a multi-analyte framework, sensitivity was further boosted to 92.67 ± 5.25 for the low class and 88.11 ± 9.17 for the high class, and specificity to 100% for both classes. We incorporated the model into an Android mobile app, Fruit-In-Sight, that uses fruit and leaf images to decide whether to "pick" or "not pick" the fruits of a specific tree based on the metabolite concentration class. Our study provides evidence that images of fruits and leaves alone can predict the concentration class of a secondary metabolite without extensive analytical laboratory procedures and equipment, making the choice of the right tree for fruit collection simple and free of additional equipment and cost.
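The abstract does not spell out the training pipeline; as a rough illustration of the kind of image classifier described, the sketch below fine-tunes a pretrained ResNet18 backbone (one of the seven architectures listed) for the binary high/low metabolite class. The folder layout, hyperparameters, and epoch count are assumptions for illustration only, not the authors' settings.

```python
# Minimal sketch (assumed setup, not the paper's exact pipeline):
# fine-tune a pretrained ResNet18 to classify fruit/leaf images as
# "low" or "high" metabolite concentration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing for a pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: images/train/{low,high}/*.jpg
train_set = datasets.ImageFolder("images/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the ImageNet head with a two-class (low/high) classifier
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):  # assumed number of epochs
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice the trained weights would then be exported (e.g., to a mobile-friendly format) for on-device inference, which is how an app such as Fruit-In-Sight could return a "pick"/"not pick" decision from a photograph; the export step is omitted here.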