2015
DOI: 10.1049/el.2015.0080
Bag‐of‐binary‐features for fast image representation

Abstract: The possibility of integrating binary features into the bag-of-features (BoF) model is explored. The set of binary features extracted from an image is packed into a single vector form to yield the bag-of-binary-features (BoBF). The efficient BoBF feature extraction and quantisation provide fast image representation. The trade-off between accuracy and efficiency of BoBF compared with BoF is investigated through image retrieval tasks. Experimental results demonstrate that BoBF is a competitive alternative to …

Cited by 4 publications (7 citation statements)
References 8 publications
“…It is common for BoF to use SIFT as the visual detector for feature extraction. However, SIFT extraction is very time-consuming and often fails to extract sufficient interest points for classification problems [27]. Anthimopoulos et al. [28] conducted multiple experiments to find an optimized BoF model for food recognition.…”
Section: Handcrafted-Features Based
confidence: 99%
“…In [21] the visual vocabulary was calculated by binarizing the centroids obtained using the standard k-means. In [82,26,45] the k-means clustering was modified to fit the binary features by replacing the Euclidean distance with the Hamming distance, and by replacing the mean operation with the median operation. In [76] the VLAD image signature was adapted to work with binary descriptors: k-means is used for learning the visual vocabulary and the VLAD vectors are computed in conjunction with an intra-normalization and a final binarization step.…”
Section: Related Work
confidence: 99%
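The Hamming-distance substitution the excerpt above describes can be sketched in a few lines; this is an illustrative example rather than code from the cited papers, and the packed 32-byte (256-bit) descriptor layout, as used by descriptors such as ORB, is an assumption.

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two binary descriptors packed as uint8 arrays
    (e.g. a 256-bit descriptor stored as 32 bytes)."""
    # XOR leaves a 1 bit wherever the descriptors differ; count those bits.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

# Two illustrative 32-byte (256-bit) descriptors
d1 = np.zeros(32, dtype=np.uint8)
d2 = np.zeros(32, dtype=np.uint8)
d2[0] = 0b10110000  # differs from d1 in exactly 3 bits

print(hamming_distance(d1, d2))  # → 3
```

Because the distance reduces to an XOR plus a population count, it is far cheaper than the Euclidean distance used with gradient-based descriptors, which is what makes the k-means modifications above attractive.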
“…Since quantization and aggregation methods are defined and used almost exclusively in conjunction with non-binary features, the cost of extracting local descriptors and quantizing/aggregating them on the fly is still high. Recently, some approaches that attempt to integrate binary local descriptors with quantization and aggregation methods have been proposed in the literature [21,26,45,76,73,82]. In these proposals, the aggregation is applied directly on top of binary local descriptors.…”
Section: Introduction
confidence: 99%
“…As compared to the BoF technique, the proposed method achieved a considerable gain in computational speed with only marginal loss in accuracy. As reported in [4], this was due to the trade-off between accuracy and computational efficiency of the binary feature itself. In addition, the proposed method unexpectedly outperformed the BoBF method, which implemented the same binary feature, both in accuracy and computational efficiency.…”
confidence: 95%
“…These operations are very computationally cheap, and it has been shown that extraction of binary features is nearly two orders of magnitude faster than that of gradient-based local features. In [4], to achieve fast image representation, binary features are aggregated into a single vector by a BoF algorithm to yield bag-of-binary-features (BoBFs). Since binary features consist of bit strings, k-means clustering has been modified to fit them by using the Hamming distance and the median operation in place of the usual Euclidean distance and mean operation, respectively.…”
Section: Introduction
confidence: 99%
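The modified clustering the excerpt describes — Hamming distance for assignment, a per-bit median (equivalently, a bitwise majority vote) for the centroid update — can be sketched roughly as follows. This is a minimal sketch under stated assumptions: the function names, the unpacked 0/1 descriptor representation, and the toy data are illustrative, not taken from the letter.

```python
import numpy as np

def binary_kmeans(X, k, iters=10, seed=0):
    """k-means adapted to binary descriptors: Hamming distance for
    assignment, per-bit majority vote (the binary median) for the
    centroid update. X: (n, d) array of 0/1 values."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # Hamming distance = number of differing bits per (point, centroid) pair
        dists = (X[:, None, :] != centroids[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: the per-bit median of the members is a majority vote
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = (members.mean(axis=0) >= 0.5).astype(X.dtype)
    return centroids, labels

# Toy data: two well-separated binary "clusters" of 64-bit descriptors
rng = np.random.default_rng(1)
a = (rng.random((20, 64)) < 0.1).astype(np.uint8)  # mostly zero bits
b = (rng.random((20, 64)) > 0.1).astype(np.uint8)  # mostly one bits
X = np.vstack([a, b])
centroids, labels = binary_kmeans(X, k=2)
```

The resulting centroids are themselves valid binary strings, so quantising a new descriptor (to build the BoBF histogram) again costs only Hamming-distance comparisons, which is the source of the speed-up the excerpt reports.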