2015
DOI: 10.1016/j.adhoc.2015.01.008

Cooperative image analysis in visual sensor networks

Cited by 22 publications (13 citation statements)
References 31 publications
“…Our paper is different from these works as it provides a characterization of a variety of statistical properties of interest points that are relevant for in-network processing in VSNs. While our study is motivated by applications in VSNs [19]-[21], our results may provide insight into parallel or distributed processing of visual features on multi-core computing platforms and for stream processing in cloud environments [22], [23]. Furthermore, the evaluation and the comparison of the spatial distributions of the interest points using SURF and BRISK are also relevant for visual tasks such as homography estimation, 3D visualization and tracking, as demonstrated in [17].…”
Section: Introduction
confidence: 98%
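The excerpt above concerns the spatial distributions of SURF and BRISK interest points. A minimal sketch of what such a comparison looks like in practice, assuming OpenCV with the contrib xfeatures2d module (SURF is unavailable in some builds) and a hypothetical input file image.png:

```python
# Illustrative sketch only: extract BRISK and SURF keypoints from one image
# and compare simple spatial statistics of their locations.
import cv2
import numpy as np

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

detectors = {"BRISK": cv2.BRISK_create()}
try:
    detectors["SURF"] = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
except AttributeError:
    print("SURF not available in this OpenCV build; comparing BRISK only")

for name, det in detectors.items():
    kps = det.detect(img, None)
    pts = np.array([kp.pt for kp in kps])   # (x, y) keypoint locations
    mean = pts.mean(axis=0)
    std = pts.std(axis=0)
    print(f"{name}: {len(kps)} keypoints, "
          f"mean location {mean.round(1)}, spread (std) {std.round(1)}")
```

Per-detector statistics of this kind (counts, means, spreads of keypoint locations) are the sort of spatial-distribution properties the citing work characterizes for in-network processing.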
“…Similarly, Redondi et al. [81,82] propose a framework for cooperative feature extraction on low-power visual sensor nodes. Several different network configurations and protocols are proposed and empirically evaluated in terms of speed-up of the feature extraction task, network lifetime, and energy consumption.…”
Section: Visual Feature Transmission and …
confidence: 99%
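The cited framework splits feature extraction across cooperating nodes. A hedged sketch of the general idea (not the authors' actual framework; the node count, overlap, and extract() helper are hypothetical, with threads standing in for remote nodes):

```python
# Illustrative sketch: a camera node splits an image into overlapping horizontal
# slices and offloads each slice to a cooperating node for parallel extraction.
from concurrent.futures import ThreadPoolExecutor
import cv2
import numpy as np

def extract(slice_img):
    """Detect BRISK keypoints in one slice (stands in for a remote node)."""
    return cv2.BRISK_create().detect(slice_img, None)

def cooperative_extract(img, n_nodes=3, overlap=16):
    """Split img into n_nodes overlapping horizontal slices, process in parallel."""
    h = img.shape[0]
    bounds = np.linspace(0, h, n_nodes + 1, dtype=int)
    slices = [img[max(a - overlap, 0):min(b + overlap, h)]
              for a, b in zip(bounds, bounds[1:])]
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        results = list(pool.map(extract, slices))
    return results  # per-node keypoint lists; a real system would merge/deduplicate

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
per_node = cooperative_extract(img)
print([len(k) for k in per_node])
```

The empirical questions evaluated in the cited work (speed-up, lifetime, energy) then depend on how such slices are assigned and transmitted across the network.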
“…[Table excerpt] Global encoding: Bag-of-Words (BoW) [5], Pyramid Kernel [60], Tree codebook [61], Kernel codebook (KC) [62], Sparse coding [63], Locality-constrained Linear Coding (LLC) [64], Hamming Embedding (HE) [65], VLAD [66], Fisher Kernel (FK) [67], Super Vector [68], Bag-of-Binary-Words [69], BVLAD [70]. Other: Location coding [71,72], SIFT-preserving JPEG [73] and H.264/AVC [74], Chen and Moulin [75], Hybrid ATC (HATC) [7], interframe patch [9] and descriptor [76] coding, VideoSIFT [10], VideoBRISK [77]. Feature networking (Section V): Yang et al. [78,79], feature extraction offloading [80][81][82], lossy feature transmission [3], Mobile Visual Search [83]. … exploited to encode visual features, providing a significant coding gain with respect to the case of still images. Similar works in the previous literature focus on either feature extraction [12,13] or encoding [14,15].…”
confidence: 99%
“…In [15] we provided closed form expressions for the minimization of the completion time of distributed feature extraction of a single image.…”
Section: Feature Networking
confidence: 99%
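The excerpt above refers to closed-form expressions for minimizing the completion time of distributed feature extraction over a single image. As an illustration of the general idea only (a simple divisible-load split under assumed per-pixel costs, not necessarily the paper's actual expressions):

```python
# Illustrative sketch: assign each node a share x_i of the image's pixels so
# that all nodes finish at the same time, which minimizes the overall
# completion time. costs_per_pixel holds hypothetical per-pixel costs
# (transmission + processing) c_i for each node.
def optimal_shares(costs_per_pixel):
    """Shares x_i proportional to 1/c_i; equalizes x_i * c_i across nodes."""
    inv = [1.0 / c for c in costs_per_pixel]
    total = sum(inv)
    return [v / total for v in inv]

costs = [2.0, 1.0, 4.0]       # e.g. seconds per megapixel for each node
shares = optimal_shares(costs)
pixels = 1.0                  # one megapixel image
completion = max(s * c * pixels for s, c in zip(shares, costs))
print(shares, completion)     # equal per-node times -> minimal makespan
```

Equalizing the per-node finishing times in this way is the standard route to a closed-form minimum for this kind of load-splitting problem.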