2017 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2017.7965889

Fast on-line kernel density estimation for active object localization

Abstract: A major goal of computer vision is to enable computers to interpret visual situations: abstract concepts (e.g., "a person walking a dog," "a crowd waiting for a bus," "a picnic") whose image instantiations are linked more by their common spatial and semantic structure than by low-level visual similarity. In this paper, we propose a novel method for prior learning and active object localization for this kind of knowledge-driven search in static images. In our system, prior situation knowledge is capture…

Cited by 5 publications (3 citation statements)
References 25 publications

“…Many data analysis problems, reaching from population analysis to computer vision [6,28], require estimating continuous models from discrete samples. Formally, this is the density estimation problem, where, given a sample {x_i} ~ p(x), we would like to estimate the probability density function (PDF) p(x).…”
Section: Introduction (mentioning)
confidence: 99%
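The density estimation problem described in the statement above can be illustrated with a minimal batch Gaussian kernel density estimator. This is only a generic sketch, not the fast on-line method proposed in the cited paper; the function name, bandwidth value, and synthetic data are illustrative assumptions.

```python
import numpy as np

def gaussian_kde_1d(samples, query_points, bandwidth=0.3):
    """Estimate p(x) at query_points from 1-D samples {x_i} ~ p(x)
    using a Gaussian kernel with a fixed, hand-chosen bandwidth."""
    samples = np.asarray(samples, dtype=float)
    query_points = np.asarray(query_points, dtype=float)
    # Pairwise differences between query points and samples, shape (m, n)
    diffs = query_points[:, None] - samples[None, :]
    # Gaussian kernel contributions, scaled by the bandwidth
    kernels = np.exp(-0.5 * (diffs / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    # Average over the n samples gives the density estimate at each query point
    return kernels.mean(axis=1)

# Usage (illustrative): recover a bimodal density from 500 synthetic draws
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.5, 250), rng.normal(1.0, 1.0, 250)])
grid = np.linspace(-5.0, 5.0, 200)
pdf_hat = gaussian_kde_1d(data, grid)
print(pdf_hat.max())
```

In practice the bandwidth would be selected from the data (e.g., by cross-validation) rather than fixed; the cited paper's contribution concerns making such estimates fast and incremental, which this batch sketch does not attempt.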
“…Following [30] and [33], in the current study we use a dataset consisting of single pedestrian instances from the Portland State Dog-Walking Images for our proof of concept and comparative experiments [45]. This dataset contains 460 high-resolution annotated photographs, taken in a variety of locations.…”
Section: Dataset (mentioning)
confidence: 99%
“…Following [17] and [20], in the current study we use a subset consisting of single pedestrian instances from the Portland State Dog-Walking Images for our proof of concept and comparative experiments [30]. This subset contains 460 high-resolution annotated photographs, taken in a variety of locations.…”
Section: Dataset (mentioning)
confidence: 99%