2020
DOI: 10.1007/978-3-030-57058-3_21
Annotation-Free Learning of Deep Representations for Word Spotting Using Synthetic Data and Self Labeling

Cited by 15 publications (20 citation statements)
References 30 publications
“…QbE systems require that users provide some examples of the word they want to search for in the document collection [13, 14, 15], whereas QbS systems allow users to provide a text string, called a keyword, as the query [16, 17, 18, 19]. In recent years, word spotting systems that support both QbE and QbS search have been proposed by exploiting, for example, an end-to-end deep neural network architecture [20] or pyramidal histogram of characters embeddings [21, 22].…”
Section: Introduction
confidence: 99%
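The pyramidal histogram of characters (PHOC) embedding mentioned in the statement above maps a character string to a fixed-length binary vector, which is what lets QbE and QbS queries share one representation space. The sketch below is a minimal illustration under assumed settings (a lowercase-plus-digits alphabet and pyramid levels 1–3); it is not the exact configuration used by the cited works.

```python
# Minimal PHOC sketch; alphabet and pyramid levels are illustrative assumptions.
import string
import numpy as np

ALPHABET = string.ascii_lowercase + string.digits  # assumed character set (36 symbols)
LEVELS = (1, 2, 3)                                 # assumed pyramid levels


def phoc(word: str, alphabet: str = ALPHABET, levels=LEVELS) -> np.ndarray:
    """Build a binary PHOC vector: one block of len(alphabet) bits per region."""
    word = word.lower()
    n = len(word)
    blocks = []
    for level in levels:
        for region in range(level):
            lo, hi = region / level, (region + 1) / level
            bits = np.zeros(len(alphabet), dtype=np.float32)
            for i, ch in enumerate(word):
                # the i-th character occupies the interval [i/n, (i+1)/n) of the word
                c_lo, c_hi = i / n, (i + 1) / n
                overlap = min(hi, c_hi) - max(lo, c_lo)
                # mark the character if at least half of it falls inside this region
                if ch in alphabet and overlap >= 0.5 * (c_hi - c_lo):
                    bits[alphabet.index(ch)] = 1.0
            blocks.append(bits)
    return np.concatenate(blocks)


# QbS uses phoc("keyword") directly; QbE compares a CNN-predicted PHOC of a
# word image against the same vector space.
print(phoc("spotting").shape)  # (1 + 2 + 3) regions * 36 symbols = 216 dimensions
```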
“…The use of convolutional neural networks [23, 24] increased the performance of word spotting systems, but these networks need a training set with a large amount of annotated data. Many solutions have been proposed to improve word spotting performance without increasing the size of the training set: sample selection [25], data augmentation [23], transfer learning [26, 27], training on synthetic data [22, 28] and relaxed feature matching [29].…”
Section: Introduction
confidence: 99%
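One of the strategies listed above, training on synthetic data, can be sketched very compactly: rendering words from a lexicon with standard fonts produces word images whose transcription is known by construction. The snippet below is a hypothetical illustration using Pillow; the font path, canvas size and margins are assumptions, not the pipeline of any cited paper.

```python
# Hypothetical synthetic word-image renderer for annotation-free training data.
from PIL import Image, ImageDraw, ImageFont


def render_word(word: str, font_path: str = "DejaVuSans.ttf",
                size: int = 32) -> Image.Image:
    """Render a word on a white canvas; the label comes for free."""
    font = ImageFont.truetype(font_path, size)  # assumes the font file is resolvable
    # measure the text to size the canvas (Pillow >= 8 provides textbbox)
    probe = ImageDraw.Draw(Image.new("L", (1, 1), color=255))
    left, top, right, bottom = probe.textbbox((0, 0), word, font=font)
    img = Image.new("L", (right - left + 10, bottom - top + 10), color=255)
    ImageDraw.Draw(img).text((5 - left, 5 - top), word, fill=0, font=font)
    return img


# Each rendered image/word pair is a labeled training sample with no manual annotation.
render_word("keyword").save("keyword.png")
```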
“…In the following, we will discuss some basic technologies involved during the preprocessing step. [Table residue: lists of reference numbers grouped by category, including learning-free or annotation-free methods, segmentation-free methods, and out-of-vocabulary (OOV) KWS for the QbS scenario.]…”
Section: Basic Document Image Analysis Technologies Involved
confidence: 99%
“…Although recent deep learning models for binarization have achieved strong performance, most of the proposed deep learning methods for document image KWS [4, 32, 34, 36, 63, 69, 70, 74, 101, 144, 159, 160, 165, 166, 169, 170, 172-175, 178] prefer features extracted directly from the unprocessed input images, without relying on a binarization step [161] at all. Most of these works argue that deep features computed on preprocessed images might miss distinctive information present in the original input images [80].…”
Section: Binarization
confidence: 99%
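For context, binarization here means thresholding a grayscale document image into ink and background before feature extraction, the step the quoted works skip. Below is a minimal NumPy sketch of Otsu's global threshold; it is a generic illustration, not the preprocessing of any of the cited references.

```python
# Minimal Otsu thresholding sketch (pure NumPy); expects a uint8 grayscale image.
import numpy as np


def otsu_threshold(gray: np.ndarray) -> int:
    """Return the threshold that maximizes between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights below/above t
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # mean of dark class
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1   # mean of bright class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t


# Example (assuming a loaded grayscale page):
# gray = np.array(Image.open("page.png").convert("L"))
# binarized = np.where(gray < otsu_threshold(gray), 0, 255).astype(np.uint8)
```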