2017 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2017.8296652
Multi-view task-driven recognition in visual sensor networks

Abstract: Nowadays, distributed smart cameras are deployed for a wide range of tasks in several application scenarios, including object recognition, image retrieval, and forensic applications. Due to the limited bandwidth of distributed systems, efficient coding of local visual features has been an active topic of research. In this paper, we propose a novel approach to obtaining a compact representation of high-dimensional visual data using sensor fusion techniques. We convert the problem of visual analysis in resourc…
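The abstract is truncated, so the paper's exact compression scheme is not specified here. As an illustrative sketch only, the idea of sending a compact representation of high-dimensional visual features over a bandwidth-limited sensor network can be shown with a simple random-projection encoder (a generic stand-in, not the authors' task-driven method; all names and dimensions below are hypothetical):

```python
import numpy as np

def compress_features(features, target_dim, seed=0):
    """Project high-dimensional visual descriptors to a compact code
    with a Gaussian random projection. This is a generic illustration
    of feature compression for bandwidth-limited cameras, not the
    task-driven method of the paper."""
    rng = np.random.default_rng(seed)
    d = features.shape[1]
    # Johnson-Lindenstrauss-style projection, scaled so that pairwise
    # distances are approximately preserved in expectation.
    projection = rng.standard_normal((d, target_dim)) / np.sqrt(target_dim)
    return features @ projection

# Example: each camera transmits 64-dim codes instead of 4096-dim descriptors.
descriptors = np.random.default_rng(1).standard_normal((10, 4096))
codes = compress_features(descriptors, target_dim=64)
print(codes.shape)  # (10, 64)
```

In a multi-view setting, each camera would apply the same (shared-seed) projection locally, so a fusion node can compare or aggregate the compact codes without ever receiving the full descriptors.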

Cited by 3 publications
(3 citation statements)
References 30 publications
“…As a model is well trained for a specific task, it can hardly be applied to other targets or environments. Although some works have been proposed on generalizing pre-trained models to unfamiliar tasks, such as the target-driven network [4], dueling network [5], context grid [6], and multi-view representation learning [7], these methods fail to make full use of previous experience and to guarantee stability when dealing with novel experiments. To tackle this challenge, rather than setting up a multi-tasking network or other similar approaches to improve compatibility, we introduce a meta-learning mechanism that enables our navigation model to integrate its prior experience with new cognition obtained from the current task.…”
Section: Introduction
confidence: 99%
“…Once a navigation model is fully updated for a particular task, it cannot be employed to solve navigation problems for other targets or environments. To tackle this problem, many works have been proposed, such as the scene-specific model [8], value and advantage saliency maps [9], learning spatial context [10], and the multiview fusion technique [11]. However, none of these approaches can make the best use of former experience and ensure good stability when configured for unfamiliar tasks.…”
Section: Introduction
confidence: 99%
“…Recently, learning-based mapless navigation methods [1,2] have become popular since they do not require any environmental assumptions or human guidance. Generally, there are many works on generalizing pre-trained models to unseen tasks, e.g., additional human checkpoints [3], double Q-learning [4], the dueling network [5], target-driven navigation [2], multi-view representation learning [6], neural task graphs [7], and the multi-task SSD face detector [8]. This paper focuses on parameterized-skills-based methods [9,10,11], which can be adapted to predict optimal models by parameterizing tasks.…”
Section: Introduction
confidence: 99%