2022
DOI: 10.3390/mi13111983

YOLOv4-Tiny-Based Coal Gangue Image Recognition and FPGA Implementation

Abstract: Nowadays, most of the deep learning coal gangue identification methods need to be performed on high-performance CPU or GPU hardware devices, which are inconvenient to use in complex underground coal mine environments due to their high power consumption, huge size, and significant heat generation. Aiming to resolve these problems, this paper proposes a coal gangue identification method based on YOLOv4-tiny and deploys it on the low-power hardware platform FPGA. First, the YOLOv4-tiny model is well trained on th…
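For orientation, here is a minimal inference sketch of the workflow the abstract describes, assuming a Darknet-format YOLOv4-tiny config and trained weights; the file names and the two-class list are placeholders rather than the authors' artifacts, and OpenCV's DNN module stands in for the paper's FPGA pipeline:

```python
import cv2

# Hypothetical file names -- the paper's trained config/weights are not reproduced here.
CFG = "yolov4-tiny-gangue.cfg"
WEIGHTS = "yolov4-tiny-gangue.weights"
CLASSES = ["coal", "gangue"]  # assumed two-class setup

# Load the Darknet-format network and wrap it in OpenCV's detection helper.
net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

img = cv2.imread("conveyor_frame.jpg")
class_ids, scores, boxes = model.detect(img, confThreshold=0.5, nmsThreshold=0.4)
for cid, score, box in zip(class_ids, scores, boxes):
    print(CLASSES[int(cid)], float(score), box)  # box is (x, y, w, h)
```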

Cited by 12 publications (2 citation statements); References 27 publications
“…Batch normalization helps to reduce the problem of internal covariate shift, which occurs when there is high variation in the input data. This can lead to slower convergence and overfitting [34]. Batch normalization is a powerful technique that improves the performance of deep neural networks [35].…”
Section: Deep Neural Network Multitasking Architecture
mentioning
confidence: 99%
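Since the citing work leans on batch normalization, here is a small sketch of how it is typically applied after a convolution, shown in PyTorch as an assumed framework rather than the cited papers' actual code:

```python
import torch
import torch.nn as nn

# A small conv block in the style of YOLOv4-tiny: Conv -> BatchNorm -> LeakyReLU.
# Normalizing each channel over the batch stabilizes the input distribution seen
# by the next layer, which is what the "internal covariate shift" argument refers to.
block = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(32),
    nn.LeakyReLU(0.1, inplace=True),
)

x = torch.randn(8, 3, 416, 416)   # batch of 8 RGB images, YOLO-style 416x416 input
y = block(x)
print(y.shape)                    # torch.Size([8, 32, 416, 416])
```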
“…Their aim is to support multiple network functions while achieving high performance at 100 Gbps. In the deep learning field, Xu et al. (reference [2]) propose a low-power design for the YOLOv4-tiny model using an FPGA. Their design uses 16-bit fixed-point operators, trading precision for more than 10-fold and 3-fold reductions in power dissipation compared with a CPU and a GPU, respectively.…”
mentioning
confidence: 99%
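To make the 16-bit fixed-point trade-off concrete, here is a hedged sketch of symmetric fixed-point quantization of a weight tensor; the fractional width and rounding choices are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def to_fixed_point(w, frac_bits=12):
    """Quantize float weights to signed 16-bit fixed point with `frac_bits`
    fractional bits (quantization step = 2**-frac_bits); out-of-range values
    are saturated to the int16 limits."""
    scale = 2 ** frac_bits
    qmin, qmax = -(2 ** 15), 2 ** 15 - 1           # signed 16-bit range
    q = np.clip(np.round(w * scale), qmin, qmax).astype(np.int16)
    return q, scale

def from_fixed_point(q, scale):
    """Dequantize back to float to measure the precision lost."""
    return q.astype(np.float32) / scale

w = np.random.randn(32, 3, 3, 3).astype(np.float32) * 0.1     # toy conv weights
q, scale = to_fixed_point(w)
err = np.abs(w - from_fixed_point(q, scale)).max()
print(f"max abs quantization error: {err:.2e}")               # about 2**-13 here
```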