Learning Feature Representations with K-Means
2012 · DOI: 10.1007/978-3-642-35289-8_30

Cited by 480 publications (359 citation statements) · References 23 publications
“…K-means has already been identified by computer vision researchers as a successful method for learning features from images. The popular "bag of features" model [15] [16] from the computer vision community is very similar to the pipeline we use in this chapter, and many of the conclusions here match those reached by vision researchers [17] [18]. This paper therefore introduces K-means to build the vein feature representation system, learning the feature distribution without relying on hand-crafted features as prior knowledge; the experimental results demonstrate the efficiency of the proposed single-layer feature learning model on the hand vein recognition task.…”
Section: Introduction
confidence: 57%
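The single-layer pipeline the citing paper describes (learn a dictionary of patch centroids with K-means, then encode inputs against it) can be sketched as follows. This is a minimal illustration, not code from either paper; the function names and the one-hot encoding choice are assumptions.

```python
import numpy as np

def kmeans_dictionary(patches, k, iters=10, seed=0):
    """Learn k centroids from data patches with plain Lloyd's K-means."""
    rng = np.random.default_rng(seed)
    D = patches[rng.choice(len(patches), k, replace=False)].astype(float)
    for _ in range(iters):
        # squared distance of every patch to every centroid
        d2 = ((patches[:, None, :] - D[None]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            members = patches[labels == j]
            if len(members):
                D[j] = members.mean(0)
    return D

def encode(x, D):
    """Hard-assignment feature: one-hot code at the nearest centroid."""
    s = np.zeros(len(D))
    s[((D - x) ** 2).sum(1).argmin()] = 1.0
    return s
```

A usage sketch: extract small image patches, learn `D` once, then map every patch through `encode` and pool the resulting codes over image regions to form the final feature vector.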
“…The expression s(i) stands for the code vector of the input vector x(i). The first constraint guarantees sparsity by allowing at most one nonzero entry, while the second constraint, which normalizes the dictionary, prevents unexpectedly large values from appearing in the feature vectors. The specific solving process follows the method in [18]. After the dictionary has been trained, the final step is to encode new input vectors to obtain the features:…”
Section: Weight Matrix Learning Based On the K-means Algorithm
confidence: 99%
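The two constraints described above (at most one nonzero code entry, unit-norm dictionary columns) admit a closed-form encoding step: with normalized columns, the best single-atom code puts the atom's response at the best-matching index. A minimal sketch, with illustrative function names, assuming the dictionary is stored column-wise:

```python
import numpy as np

def normalize_dictionary(D):
    # dictionary constraint: each column (atom) has unit L2 norm
    return D / np.linalg.norm(D, axis=0, keepdims=True)

def sparse_code(x, D):
    # ||s||_0 <= 1: only the best-matching atom gets a nonzero coefficient
    r = D.T @ x                  # responses of all atoms to the input
    j = np.abs(r).argmax()       # index of the best-matching atom
    s = np.zeros(D.shape[1])
    s[j] = r[j]
    return s
```

Because only one coefficient survives, the reconstruction `D @ s` is simply the chosen atom scaled by its response, which is what makes this formulation equivalent to a K-means-style hard assignment.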
“…As a future extension of our work, we plan to design an unsupervised method (e.g., k-means [13] or a CNN [14]) to find a better control factor for the edge gradient information calculation. Moreover, direct enhancement of RGB images will be explored later to find another efficient framework not only for vein images but also for other low-contrast medical image enhancement problems.…”
Section: Discussion
confidence: 99%
“…Suppose that the maximum decomposition scale is set as n; the result after multi-scale top-hat processing based on the above equations can be expressed as

(12) Xop → {Xop_1, Xop_2, ⋯, Xop_i, ⋯, Xop_n},
(13) Xcl → {Xcl_1, Xcl_2, ⋯, Xcl_i, ⋯, Xcl_n}.

Xop_i and Xcl_i represent the sets of detailed information at adjacent scales, and the key design question for the later enhancement process is how to enhance the detailed information effectively while avoiding importing fake vein information.…”
Section: Multi-scale Morphological Filtering Theory
confidence: 99%
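The multi-scale decomposition in (12)–(13) can be illustrated with a small sketch: take morphological openings/closings at a sequence of structuring-element sizes, and define each Xop_i / Xcl_i as the detail gained between adjacent scales. This is an assumed reading of the scheme, not code from the cited paper; it uses flat square structuring elements and a naive (slow) sliding-window morphology for self-containment.

```python
import numpy as np

def _filter2d(img, size, op):
    """Sliding-window grayscale morphology with a flat size x size element."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = op(p[i:i + size, j:j + size])
    return out

def opening(img, size):   # erosion then dilation: removes bright detail
    return _filter2d(_filter2d(img, size, np.min), size, np.max)

def closing(img, size):   # dilation then erosion: removes dark detail
    return _filter2d(_filter2d(img, size, np.max), size, np.min)

def multiscale_tophat(img, scales):
    """Detail sets between adjacent scales, as in (12)-(13)."""
    Xop, Xcl = [], []
    prev_o, prev_c = img, img
    for s in scales:
        o, c = opening(img, s), closing(img, s)
        Xop.append(prev_o - o)   # bright (vein-like) detail gained at this scale
        Xcl.append(c - prev_c)   # dark detail gained at this scale
        prev_o, prev_c = o, c
    return Xop, Xcl
```

Since opening is anti-extensive and shrinks further as the nested structuring element grows, each Xop_i is nonnegative; the enhancement step then decides how to reweight these detail layers without amplifying noise into fake vein structures.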
“…As mentioned in [18] and [19], the K-means algorithm attains the best performance with only one parameter to tune. On the other hand, [20] proposed a new feature learning algorithm called sparse filtering.…”
Section: Pre-train
confidence: 93%
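For contrast with K-means' single tunable parameter, the sparse filtering objective referenced in [20] is commonly described as: compute soft-absolute features, normalize each feature across examples, normalize each example across features, and minimize the L1 penalty of the result. The sketch below states that objective under those assumptions; the function name and `eps` smoothing constant are illustrative.

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Sparse filtering loss for weights W (features x inputs) on data X
    (inputs x examples), as commonly described for [20]."""
    F = np.sqrt((W @ X) ** 2 + eps)                              # soft absolute value
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + eps)     # normalize each feature row
    F = F / (np.linalg.norm(F, axis=0, keepdims=True) + eps)     # normalize each example column
    return np.abs(F).sum()                                       # L1 sparsity to minimize
```

In practice this objective is minimized over W with an off-the-shelf gradient-based optimizer; like K-means, it has essentially one free choice (the number of features, i.e., the number of rows of W).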