An accurate vision system that classifies and analyzes fruits in real time is critical if harvesting robots are to be cost-effective and efficient. However, practical success in this area is still limited, and to the best of our knowledge, there is no prior research on machine vision for date fruits in an orchard environment. In this work, we propose an efficient machine vision framework for date fruit harvesting robots. The framework consists of three classification models that classify date fruit images in real time according to their type, maturity, and harvesting decision. The classification models use deep convolutional neural networks with transfer learning and fine-tuning of pre-trained networks. To build a robust vision system, we create a rich image dataset of date fruit bunches in an orchard, consisting of more than 8000 images of five date types at different pre-maturity and maturity stages. The dataset exhibits a large degree of variation that reflects the challenges of the date orchard environment, including variations in angle, scale, and illumination, as well as date bunches covered by bags. The proposed date fruit classification models achieve accuracies of 99.01%, 97.25%, and 98.59% with classification times of 20.6, 20.7, and 35.9 ms for the type, maturity, and harvesting decision classification tasks, respectively.
INDEX TERMS Dates classification, maturity analysis, automated harvesting, deep learning, convolutional neural networks.
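The abstract does not name the pre-trained networks or training settings, so the following is only a minimal sketch of how one of the three classifiers (the date-type model) could be built with transfer learning and fine-tuning; the Keras framework, MobileNetV2 backbone, input size, and classification head are assumptions made for illustration, not the authors' configuration.

```python
# Hypothetical transfer-learning sketch for the date-type classifier.
# Backbone, input size, and hyperparameters are assumptions, not the paper's setup.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_TYPES = 5          # five date types reported in the dataset
IMG_SIZE = (224, 224)  # assumed input resolution

# Load an ImageNet-pretrained backbone without its classification head.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
backbone.trainable = False  # transfer learning: freeze pre-trained weights first

# Attach a small classification head for the date-type task.
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_TYPES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# ...train the head on the date-bunch images, then fine-tune:
backbone.trainable = True  # fine-tuning: unfreeze and continue with a low learning rate
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

The maturity and harvesting-decision models could follow the same pattern with their own output layers, which is consistent with the three-model framework described above, though the abstract does not confirm the details.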
Electroencephalography-based motor imagery (EEG-MI) classification is a critical component of the brain-computer interface (BCI), which enables people with physical limitations to communicate with the outside world via assistive technology. Unfortunately, EEG decoding is challenging because of the complexity, dynamic nature, and low signal-to-noise ratio of the EEG signal. Developing an end-to-end architecture capable of correctly extracting high-level features from EEG data remains difficult. This study introduces a new model for decoding MI, a Multi-Branch EEGNet with squeeze-and-excitation blocks (MBEEGSE). A multi-branch CNN model with attention blocks is employed to adaptively recalibrate channel-wise feature responses by explicitly modeling channel interdependencies. Compared with existing state-of-the-art EEG motor imagery classification models, the proposed model achieves high accuracy with fewer parameters: 82.87% on the BCI-IV2a motor imagery dataset and 96.15% on the high gamma dataset.
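As a hedged illustration of the channel-attention mechanism the abstract refers to, the sketch below shows a generic squeeze-and-excitation block in Keras. The input layout (22 EEG channels x 1125 time samples x 16 feature maps), the reduction ratio, and the surrounding multi-branch EEGNet convolutions are assumptions for illustration, not the published MBEEGSE design.

```python
# Generic squeeze-and-excitation (SE) block sketch; dimensions and the reduction
# ratio are assumptions, not the MBEEGSE paper's configuration.
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, reduction=8):
    """Recalibrate channel-wise feature responses via squeeze-and-excitation."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                          # squeeze: per-channel statistics
    s = layers.Dense(channels // reduction, activation="relu")(s)   # excitation: bottleneck
    s = layers.Dense(channels, activation="sigmoid")(s)             # per-channel weights in [0, 1]
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                                # rescale the feature maps

# Example: apply the block to a feature map from one convolutional branch.
inputs = tf.keras.Input(shape=(22, 1125, 16))  # assumed (EEG channels, samples, filters) layout
outputs = se_block(inputs)
model = tf.keras.Model(inputs, outputs)
```

In a multi-branch design of this kind, each branch would typically apply its own SE block before the branch outputs are concatenated and passed to the classifier, but the exact arrangement in MBEEGSE is not specified in the abstract.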