2020 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM)
DOI: 10.1109/cenim51130.2020.9297873
Kawi Character Recognition on Copper Inscription Using YOLO Object Detection

Cited by 6 publications (9 citation statements); references 10 publications.
“…According to previous research, you only look once (YOLO) object detection methods are used in [5], [6], which implement the CNN architecture. YOLOv3-tiny was able to recognise the Kawi characters on copper inscriptions owing to its high detection accuracy (an average of 97.93% in [5]) and its high detection speed. Meanwhile, the Oracle Bone Inscriptions (OBIs) were recognised using two deep learning models in [6]: first, YOLOv3-tiny was used to detect and recognise OBIs, and second, MobileNet was used to detect the undetected OBIs, as YOLOv3-tiny's limitations prevent all OBIs from being recognised correctly.…”
Section: Literature Review
confidence: 99%
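Neither cited implementation is reproduced in this report; as a rough, hedged sketch of how a Darknet YOLOv3-tiny character detector of this kind is typically run with OpenCV's DNN module (the config, weight, and image file names below are placeholders, not the authors' files), the inference pass might look as follows:

```python
import cv2
import numpy as np

# Placeholder file names; the cited papers do not distribute these artefacts.
CFG = "yolov3-tiny-kawi.cfg"
WEIGHTS = "yolov3-tiny-kawi.weights"
IMAGE = "copper_inscription.jpg"

# Load a Darknet-format YOLOv3-tiny model via OpenCV's DNN module.
net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)

img = cv2.imread(IMAGE)
h, w = img.shape[:2]

# YOLO expects a square, normalised blob; 416x416 is the usual tiny input size.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Each detection row is [cx, cy, bw, bh, objectness, class scores...].
boxes, confidences, class_ids = [], [], []
for out in outputs:
    for det in out:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5:
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)
            class_ids.append(class_id)

# Non-maximum suppression removes overlapping boxes for the same character.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
for i in np.array(keep).flatten():
    print(f"character class {class_ids[i]} at {boxes[i]} (conf {confidences[i]:.2f})")
```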
“…Yang et al [10] proposed a recognition guided detector (RGD) method that achieves tight Chinese character detection in historical documents. Santoso et al [11] detected Kawi characters using the YOLO architecture. The experimental results of this study demonstrated that the proposed method achieved the highest accuracy compared to other methods.…”
Section: Related Work
confidence: 99%
“…This was the case in Bui, et al, 2016, where their CNN was trained on RGB images but tested on RGB images converted to greyscale, and the greyscale test images yielded a +1.4% improvement in detector accuracy over the same RGB test images (Bui, et al, 2016). In addition, because greyscale lacks colour information, it requires less computing memory for the convolutional calculations (Bui, et al, 2016; Santoso, Suprapto, & Yuniarno, 2020) and allows for faster training (Ng, Tay, & Goi, 2013).…”
Section: Neural Network Training With Greyscale Images
confidence: 99%
“…There have also been studies using greyscale images to train CNNs. In Santoso, et al, 2020, YOLOv3-Tiny was trained on greyscale images and tested on RGB images to detect copper inscriptions, which yielded a high average detection accuracy of 97.93% (Santoso, Suprapto, & Yuniarno, 2020). An investigation completed by Ng, et al, 2013, compared training CNNs with greyscale, RGB and YUV images, and found that training on greyscale images produced the lowest detector error rate (Ng, Tay, & Goi, 2013).…”
Section: Neural Network Training With Greyscale Images
confidence: 99%
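The exact preprocessing scripts behind these studies are not published in this report; a minimal sketch of the training-side greyscale conversion they describe, assuming an OpenCV-based pipeline with placeholder directory names, could look like this:

```python
import cv2
from pathlib import Path

# Placeholder directories; the cited datasets are not distributed with this report.
SRC = Path("dataset/train_rgb")
DST = Path("dataset/train_grey")
DST.mkdir(parents=True, exist_ok=True)

for img_path in SRC.glob("*.jpg"):
    rgb = cv2.imread(str(img_path))               # H x W x 3, uint8
    grey = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)  # H x W, uint8: one third of the input bytes
    cv2.imwrite(str(DST / img_path.name), grey)
```

Dropping from three channels to one is what yields the memory and training-time savings mentioned above; how the RGB test images are then fed to the greyscale-trained detector is a detail of each cited study and is not shown here.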