The adoption of an automated crop harvesting system based on machine vision may improve productivity and optimize operational cost. The scope of this study is to obtain visual information at the plantation, which is crucial for developing an intelligent automated crop harvesting system. This paper aims to develop an automatic detection system with high accuracy, low computational cost and a lightweight model. Building on the advantages of YOLOv3-tiny, an optimized YOLOv3-tiny network named YOLO-P is proposed to detect and localize three objects at a palm oil plantation, namely the fresh fruit bunch, the grabber and the palm tree, under various environmental conditions. The proposed YOLO-P model incorporates a lightweight backbone based on a densely connected neural network, a multi-scale detection architecture and optimized anchor box sizes. The experimental results demonstrate that the proposed YOLO-P model achieved a mean average precision of 98.68% and an F1 score of 0.97. In addition, the proposed model trained faster and produced a lightweight model of 76 MB. The proposed model was also tested on identifying fresh fruit bunches of various maturities, achieving an accuracy of 98.91%. The comprehensive experimental results show that the proposed YOLO-P model can perform robust and accurate detection at the palm oil plantation.
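The abstract does not describe how the anchor box sizes were optimized; a common approach for YOLO-family detectors is k-means clustering of the labelled box dimensions under an IoU-based distance. The sketch below illustrates that generic approach only, under the assumptions of six anchors and normalized (width, height) annotations; the function names and the synthetic boxes are illustrative and not taken from the paper.

```python
# Minimal sketch of k-means anchor estimation as commonly used for YOLO-family
# detectors. YOLO-P's exact procedure is not given in the abstract, so this is
# an illustrative assumption, not the authors' implementation. Boxes are
# (width, height) pairs normalized to [0, 1].
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors, both treated as centered at the origin."""
    inter_w = np.minimum(boxes[:, None, 0], anchors[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    inter = inter_w * inter_h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=6, iters=100, seed=0):
    """Cluster (w, h) pairs using 1 - IoU as the distance metric."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # nearest anchor = highest IoU
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i) else anchors[i]
                        for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]  # sorted by area

# Example with synthetic box shapes standing in for labelled FFB/grabber/tree boxes.
boxes = np.abs(np.random.default_rng(1).normal(0.3, 0.1, size=(500, 2)))
print(kmeans_anchors(boxes, k=6))
```

In practice the resulting anchors would be rescaled to the network input resolution and split across the detection scales, with the larger anchors assigned to the coarser feature map.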
Marine litter has been one of the major challenges and a well-known issue across the globe for decades. An estimated 6.4 million tonnes of marine debris enter water environments each year, with 8 million items entering each day. These statistics are worrying, and mitigation steps need to be taken for the sake of a sustainable community. The major contributor to marine litter is riverine litter. However, there is not enough data on the amount of litter being transported, which makes quantitative monitoring impossible. Most countries still rely on visual counting, which limits the feasibility of scaling to long-term monitoring at multiple locations. Therefore, an object detector based on a deep learning algorithm, You Only Look Once version 4 (YOLOv4), is developed for a floating-debris riverine monitoring system to mitigate the problems mentioned above. The proposed automated detection method can detect and categorize riverine litter, and its detection speed and accuracy can be improved using YOLOv4. The detector is trained on five object classes: styrofoam, plastic bag, plastic bottle, aluminium can and plastic container. An image augmentation technique is applied to the previous datasets to enlarge the training and validation sets, which increases training accuracy. Several YOLOv4 and YOLOv4-tiny parameters are also studied and varied to observe their effects on training.
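As a rough illustration of the offline augmentation step described above, the sketch below expands a Darknet/YOLO-format sample with a horizontal flip and a brightness change while keeping the labels consistent. The specific transforms and file names are assumptions, since the abstract does not list the augmentations used.

```python
# Minimal sketch of offline image augmentation for a YOLO-format dataset, in the
# spirit of the augmentation step described in the abstract. The authors' exact
# transforms are not stated; horizontal flipping and brightness jitter are
# illustrative assumptions. Labels follow the Darknet convention:
# "class x_center y_center width height", all normalized to [0, 1].
import cv2
import random

def load_labels(path):
    with open(path) as f:
        return [list(map(float, line.split())) for line in f if line.strip()]

def save_labels(path, labels):
    with open(path, "w") as f:
        for cls, x, y, w, h in labels:
            f.write(f"{int(cls)} {x:.6f} {y:.6f} {w:.6f} {h:.6f}\n")

def hflip(image, labels):
    """Mirror the image; only the normalized x-centre of each box changes."""
    return cv2.flip(image, 1), [[c, 1.0 - x, y, w, h] for c, x, y, w, h in labels]

def brightness(image, labels, lo=0.7, hi=1.3):
    """Scale pixel intensities; bounding boxes are unaffected."""
    alpha = random.uniform(lo, hi)
    return cv2.convertScaleAbs(image, alpha=alpha, beta=0), labels

# Example: expand one labelled frame into two extra training samples
# ("river_0001.*" is a hypothetical file name).
img = cv2.imread("river_0001.jpg")
lbl = load_labels("river_0001.txt")
for i, aug in enumerate([hflip, brightness]):
    out_img, out_lbl = aug(img, lbl)
    cv2.imwrite(f"river_0001_aug{i}.jpg", out_img)
    save_labels(f"river_0001_aug{i}.txt", out_lbl)
```

Keeping the labels synchronized with the transformed images is the essential point: geometric transforms such as flipping must update the box coordinates, while photometric transforms such as brightness jitter leave them unchanged.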