2020
DOI: 10.48084/etasr.3573

Multi-national and Multi-language License Plate Detection using Convolutional Neural Networks

Abstract: Many real-life machine and computer vision applications focus on object detection and recognition. In recent years, deep learning-based approaches have gained increasing interest due to their high accuracy. License Plate (LP) detection and classification have been studied extensively over the last decades. However, more accurate and language-independent approaches are still required. This paper presents a new approach to detect LPs and recognize their country, language, and layout. Furthermore, a new …

Cited by 14 publications (8 citation statements) | References 37 publications
“…Research on recognizing license plates from different countries has not yet been studied enough. A full-fledged study [26] carried out classification across Brazil, the USA, Europe, Turkey, Saudi Arabia, and the United Arab Emirates. The use of YOLOv2 [27] as an object detection model and VGG16 [3] as a classifier showed good performance on the country recognition task.…”
Section: Related Work
confidence: 99%
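The YOLOv2-plus-VGG16 pipeline described above is a two-stage detect-then-classify design. A minimal sketch of that pattern is given below, assuming PyTorch/torchvision; the `detect_plates` helper, the country list, and the resized classifier head are illustrative assumptions, not the implementation used in [26].

```python
# Two-stage sketch: an LP detector proposes boxes, a VGG16-based classifier
# predicts each plate's country. `detect_plates` and COUNTRIES are hypothetical
# placeholders, not the cited authors' code.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

COUNTRIES = ["Brazil", "USA", "Europe", "Turkey", "KSA", "UAE"]  # assumed label set

# Country classifier: VGG16 with its final fully connected layer resized.
classifier = models.vgg16(weights=None)
classifier.classifier[6] = nn.Linear(4096, len(COUNTRIES))
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def detect_plates(image: Image.Image) -> list[tuple[int, int, int, int]]:
    """Placeholder for a YOLO-style LP detector returning (x1, y1, x2, y2) boxes."""
    raise NotImplementedError("plug in a trained LP detector here")

def classify_plate_countries(image: Image.Image) -> list[str]:
    """Crop each detected plate and predict its country of origin."""
    predictions = []
    for box in detect_plates(image):
        crop = preprocess(image.crop(box)).unsqueeze(0)  # 1x3x224x224 tensor
        with torch.no_grad():
            logits = classifier(crop)
        predictions.append(COUNTRIES[int(logits.argmax(dim=1))])
    return predictions
```

In practice the VGG16 head would be fine-tuned on plate crops labelled by country; the weights are left untrained here to keep the sketch self-contained.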
“…The first use of a YOLO CNN was attempted by [18] to detect LPs with vastly different plate orientations, yielding a 99.5% F1-score. A YOLOv2 algorithm with a modified ResNet50 CNN was proposed by [19] to localise multi-national LPs and classify their nature (country, size, and language), but it did not address recognising the characters on the LPs; it achieved 99.57% detection precision. [20] also used YOLOv2 because they claimed that YOLOv3 has more layers that slow down training, which is not entirely true and depends on which CNN model is used in the YOLO workflow.…”
Section: Related Work (Transition of ALPR to Deep Learning Algorithms)
confidence: 99%
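The F1-score and detection precision figures quoted here are typically computed by matching predicted plate boxes to ground-truth boxes at an IoU threshold. The sketch below shows one common way to do that; the 0.5 threshold and the greedy matching are assumptions, not the exact evaluation protocol of the cited papers.

```python
# Sketch of detection precision/recall/F1: a predicted box counts as a true
# positive when its IoU with an unmatched ground-truth box exceeds a threshold.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall_f1(predictions, ground_truths, iou_thr=0.5):
    """Greedy one-to-one matching of predicted boxes to ground-truth boxes."""
    matched, tp = set(), 0
    for pred in predictions:
        for i, gt in enumerate(ground_truths):
            if i not in matched and iou(pred, gt) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    fp = len(predictions) - tp
    fn = len(ground_truths) - tp
    precision = tp / (tp + fp) if predictions else 0.0
    recall = tp / (tp + fn) if ground_truths else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```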
“…Nevertheless, they also achieved a 95 to 97.5% precision score with the YOLOv2 algorithm. Similarly, [19] extended the dataset to multinational and multi-language LPs, reaching 99.57% AP in LP detection. [20] further enhanced the dataset by synthesising LPs to overcome the small dataset size and trained a custom CNN model ported to Fast-YOLO to perform ALPR.…”
Section: Related Work (Transition of ALPR to Deep Learning Algorithms)
confidence: 99%
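Synthesising LPs to enlarge a small training set, as [20] is reported to do, can be as simple as rendering random plate-like strings onto blank plates. The sketch below illustrates the idea with Pillow; the plate geometry, character set, and font are assumptions and not the cited synthesis pipeline.

```python
# Minimal LP synthesis sketch for data augmentation: draw a random plate-like
# string inside a bordered rectangle and keep the string as the label.
import random
from PIL import Image, ImageDraw, ImageFont

CHARS = "ABCDEFGHJKLMNPRSTUVYZ0123456789"  # assumed character set

def synth_plate(width=240, height=60):
    """Render a random plate-like image and return it with its text label."""
    text = " ".join("".join(random.choices(CHARS, k=k)) for k in (2, 3, 2))
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, 0, width - 1, height - 1], outline="black", width=3)
    draw.text((15, height // 4), text, fill="black", font=ImageFont.load_default())
    return img, text

if __name__ == "__main__":
    for i in range(5):
        img, label = synth_plate()
        img.save(f"synthetic_lp_{i}_{label.replace(' ', '')}.png")
```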
“…Park, Yoon & Park (2019) concerned USA and Korean LPs describing the problem as multi-style detection. CNN shrinkage-based architecture was studied in Salemdeeb & Erturk (2020) , utilizing the maximum number of convolutional layers that can be added. Salemdeeb & Erturk (2020) studied the LP detection and country classification problem for multinational and multi-language LPs from Turkey, Europe, USA, UAE and KSA, without studying CR problem.…”
Section: Introductionmentioning
confidence: 99%
“…CNN shrinkage-based architecture was studied in Salemdeeb & Erturk (2020) , utilizing the maximum number of convolutional layers that can be added. Salemdeeb & Erturk (2020) studied the LP detection and country classification problem for multinational and multi-language LPs from Turkey, Europe, USA, UAE and KSA, without studying CR problem. These researches studied LPs from 23 different countries where most of them use Latin characters to write the LP number, and totally five languages were concerned (English, Taiwanese, Korean, Chinese and Arabic).…”
Section: Introductionmentioning
confidence: 99%
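Country and language classification for multinational LPs is often framed as multi-class (or multi-head) classification on top of a shared CNN backbone. The sketch below uses a ResNet50 backbone with separate country and language heads sized to the 23 countries and five languages quoted above; it is an illustrative assumption, not the shrinkage-based architecture of Salemdeeb & Erturk (2020).

```python
# Shared-backbone sketch: one CNN feature extractor feeding two classification
# heads, one for plate country and one for plate language.
import torch
import torch.nn as nn
from torchvision import models

class PlateOriginClassifier(nn.Module):
    def __init__(self, num_countries=23, num_languages=5):
        super().__init__()
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features          # 2048 for ResNet50
        backbone.fc = nn.Identity()                 # keep pooled features only
        self.backbone = backbone
        self.country_head = nn.Linear(feat_dim, num_countries)
        self.language_head = nn.Linear(feat_dim, num_languages)

    def forward(self, x):
        feats = self.backbone(x)
        return self.country_head(feats), self.language_head(feats)

# Example: one 224x224 RGB plate crop yields country and language logits.
model = PlateOriginClassifier().eval()
with torch.no_grad():
    country_logits, language_logits = model(torch.randn(1, 3, 224, 224))
```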