2018 13th IAPR International Workshop on Document Analysis Systems (DAS)
DOI: 10.1109/das.2018.70

Word Spotting and Recognition Using Deep Embedding

Cited by 86 publications (82 citation statements)
References 12 publications
“…In this application, text and symbols are treated as noise that must be removed, but the planned use of these features is as information for the automatic creation of map labels as points in a vector map. The word spotting technique [74] will be tested with the query-by-string (QBS) approach [62,75], using the available list of toponyms as input. This approach has already been tested for maps, but text portions on the maps are obtained through image binarization [76,77], usually applying Otsu's global thresholding [78], rather than segmentation, and the focus is on word recognition rather than the separation of areas and labels on the map.…”
Section: Transects in Blue (mentioning)
Confidence: 99%
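
The map-processing pipelines quoted above rely on global Otsu thresholding to binarize text regions. Below is a minimal sketch of that step in Python with OpenCV; the filename "map_tile.png" is a hypothetical placeholder, and the cited works may combine this with further pre- and post-processing.

```python
import cv2

# Hypothetical input file; any grayscale map scan would do.
img = cv2.imread("map_tile.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks the single global threshold that minimizes the
# intra-class intensity variance of foreground and background pixels.
otsu_t, binary = cv2.threshold(img, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```

Passing 0 as the threshold together with the THRESH_OTSU flag tells OpenCV to estimate the threshold from the image histogram and return the chosen value as otsu_t.
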
“…In order to show the generic nature of the proposed architecture and its extension to query-by-string (QBS) spotting, in our set of parallel works [35,36] we have used the HWNet architecture both for embedding into the word attribute space defined by PHOC [35] and in an end-to-end architecture [36] that learns a common subspace between the text and image modalities. This enables both QBE- and QBS-based word spotting, along with word recognition using a fixed lexicon.…”
Section: Query-by-String Spotting Results (mentioning)
Confidence: 99%
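
The word attribute space mentioned in this excerpt is defined by PHOC, the Pyramidal Histogram Of Characters: a binary vector marking which characters occur in which horizontal splits of the word at several pyramid levels. The sketch below is a rough illustration, not the authors' exact implementation; the alphabet, the pyramid levels (2 to 5), and the 50% overlap rule are assumptions, and published versions differ in such details (some, for instance, add bigram levels).

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def phoc(word, alphabet=ALPHABET, levels=(2, 3, 4, 5)):
    """Minimal PHOC sketch: one binary alphabet-sized block per
    pyramid region, concatenated over all levels."""
    word = word.lower()
    n = len(word)
    out = []
    for level in levels:
        for region in range(level):
            r_lo, r_hi = region / level, (region + 1) / level
            bits = np.zeros(len(alphabet), dtype=np.float32)
            for i, ch in enumerate(word):
                if ch not in alphabet:
                    continue  # skip characters outside the alphabet
                c_lo, c_hi = i / n, (i + 1) / n
                overlap = min(c_hi, r_hi) - max(c_lo, r_lo)
                # assign a character to a region if at least half of
                # its extent falls inside that region
                if overlap / (c_hi - c_lo) >= 0.5:
                    bits[alphabet.index(ch)] = 1.0
            out.append(bits)
    return np.concatenate(out)
```

For example, phoc("the") at level 2 marks 't' and 'h' in the left half and 'h' and 'e' in the right half, since 'h' straddles the midpoint.
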
“…In the later set of works [75,76] from the same group, PHOCNet was adapted with a temporal pyramid pooling layer (TPP-PHOCNet) and evaluated under different loss functions and optimization algorithms, which further improved word spotting performance. In [35,36], the features computed from HWNet [37] are embedded into the word attribute space by training attribute-based SVM classifiers and projecting both image and textual attributes to a common subspace. In [83], the authors propose a two-stage architecture in which a triplet CNN is trained to reduce the distance between an anchor word image and a similarly labeled (positive) word image, while simultaneously increasing the distance between the anchor and a negatively labeled word image.…”
Section: Deep Learning (mentioning)
Confidence: 99%
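
The triplet objective described for [83] in this excerpt can be summarized with a short PyTorch sketch. The margin value and the assumption that all three embeddings come from one shared CNN are illustrative choices, not details taken from the cited paper.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.1):
    """Pull the anchor embedding toward the positive (same word) and
    push it away from the negative (different word); inputs are
    (batch_size x dim) embedding tensors from a shared network."""
    d_pos = F.pairwise_distance(anchor, positive)  # same-word distance
    d_neg = F.pairwise_distance(anchor, negative)  # different-word distance
    # penalize triplets where the negative is not at least `margin`
    # farther from the anchor than the positive
    return F.relu(d_pos - d_neg + margin).mean()
```

PyTorch also ships torch.nn.TripletMarginLoss, which implements the same objective out of the box.
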
“…This task of searching for words in images, either by string or by example, is called word spotting [7]. In recent years, the performance of word spotting systems has greatly improved through the use of learning-based approaches built on convolutional neural networks (CNNs), for example in the work by Sudholt and Fink [25] or by Krishnan et al. [14]. However, one drawback of these approaches is the large amount of labeled data required to train them.…”
Section: Introduction (mentioning)
Confidence: 99%
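
Whether the query is an example image (QBE) or a text string (QBS), retrieval in a deep-embedding word spotting system typically reduces to nearest-neighbour search in the learned space. The sketch below assumes word-image embeddings have already been computed by some CNN (for instance a PHOCNet- or HWNet-style network from the works cited above) and simply ranks them by cosine similarity; it illustrates the generic retrieval step, not the method of any particular cited paper.

```python
import numpy as np

def rank_word_images(query_emb, image_embs):
    """Rank word images by cosine similarity to a query embedding.
    The query may come from an example image (QBE) or from a string
    mapped into the same space (QBS)."""
    q = query_emb / np.linalg.norm(query_emb)
    X = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    scores = X @ q                 # cosine similarity to the query
    return np.argsort(-scores)     # best-matching word images first
```
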