2008
DOI: 10.1109/jproc.2008.916383

Application Potential of Multimedia Information Retrieval

Abstract: This paper will first briefly survey the existing impact of MIR in applications. It will then analyze the current trends of MIR research which can influence future applications. It will then detail the future possibilities and bottlenecks in applying MIR research results in the main target application areas, such as consumer (e.g. personal video recorders, web information retrieval), public safety (e.g. automated smart surveillance systems), and the professional world (e.g. automated meeting…

Cited by 30 publications (15 citation statements). References 52 publications (46 reference statements).
“…We propose Emotion Based Contextual Semantic Relevance Feedback (ECSRF) to learn, refine, discriminate, and identify the current context present in a query. We will further investigate: (1) whether multimedia attributes (audio and speech along with visual) can be purposefully used to work out the current context of a user's query and will be useful in reducing the search space and retrieval time; (2) whether increasing the affective features (spoken emotional words with facial expressions) used in identifying and discriminating emotions would increase the overall retrieval performance in terms of precision, recall, and retrieval time; (3) whether increasing the discriminating power of the classifier algorithm in query perfection would increase the search accuracy with less retrieval time. We introduce an Emotion Recognition Unit (ERU) that comprises a customized 3D spatiotemporal Gabor filter to capture spontaneous facial expressions and an emotional word recognition system (a combination of phonemes and visemes) to recognize the spoken emotional words.…”
Section: Problem Formulation (mentioning; confidence: 99%)
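The statement above outlines a relevance-feedback loop in which an emotion label inferred from audio-visual cues is used to narrow the query context and prune the search space. Below is a minimal Python sketch of that idea; the EmotionRecognitionUnit stub, the tag-based matching, and the pruning rule are illustrative assumptions, not the cited authors' implementation.

# Illustrative sketch (not the cited authors' code): emotion-conditioned
# relevance feedback that narrows a retrieval candidate set.
from dataclasses import dataclass, field


@dataclass
class Item:
    name: str
    tags: set = field(default_factory=set)
    emotion: str = "neutral"          # dominant affect annotated on the item


class EmotionRecognitionUnit:
    """Stand-in for the ERU described in the citation; the cited work fuses
    facial expressions with spoken emotional words, while this stub simply
    returns a fixed label for demonstration."""

    def infer(self, query_audio, query_video) -> str:
        # Hypothetical fusion of audio and visual cues.
        return "happy"


def emotion_feedback_search(items, query_tags, eru, query_audio=None, query_video=None):
    """Filter by tags first, then use the inferred emotion to drop candidates
    whose affect conflicts with the query context (the search-space reduction
    the passage argues for)."""
    emotion = eru.infer(query_audio, query_video)
    tag_matches = [it for it in items if query_tags & it.tags]
    refined = [it for it in tag_matches if it.emotion == emotion]
    return refined or tag_matches     # fall back if the emotion filter empties the set


if __name__ == "__main__":
    corpus = [
        Item("clip_birthday", {"party", "family"}, "happy"),
        Item("clip_argument", {"family"}, "angry"),
        Item("clip_beach", {"party", "holiday"}, "happy"),
    ]
    results = emotion_feedback_search(corpus, {"family"}, EmotionRecognitionUnit())
    print([r.name for r in results])  # ['clip_birthday']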
“…3. (The distance transform is applied on a source image [40], downloaded to show the convex and concave regions on the face.) In (2), … is the initial value of the maximum or minimum, for a convex or a concave point respectively, and Z_d is the relative distance. We incorporate the effect of mapping the relative distances in the z-direction with the intensity differences of neighbourhood pixels to calculate the filter's response in the z-direction as the stimulus moves in the z-direction.…”
Section: Customized Spatiotemporal Gabor Filter (mentioning; confidence: 99%)
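The quoted passage combines a distance transform of the face region with neighbourhood intensity differences to obtain a filter response in the z-direction. The Python sketch below illustrates that combination under stated assumptions: the normalisation, the 3x3 neighbourhood, and the multiplicative fusion are illustrative choices, and the actual response formula (Eq. (2) of the cited paper) is not reproduced here.

# Rough sketch of the depth/intensity combination described in the statement;
# all specific parameter choices are assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt, uniform_filter


def relative_depth_response(face_mask: np.ndarray, intensity: np.ndarray) -> np.ndarray:
    """face_mask: boolean array, True inside the face region.
    intensity: grayscale image of the same shape, values in [0, 1]."""
    # Distance of each interior pixel from the face boundary; larger values
    # loosely correspond to raised (convex) regions, smaller to concave ones.
    z_d = distance_transform_edt(face_mask)
    z_d = z_d / (z_d.max() + 1e-9)                   # normalised relative distance

    # Intensity difference against the local 3x3 neighbourhood mean, as a
    # proxy for the neighbourhood-pixel term mentioned in the text.
    d_intensity = intensity - uniform_filter(intensity, size=3)

    # Hypothetical fusion: modulate the intensity contrast by relative depth
    # so that the response concentrates on raised facial regions.
    return z_d * d_intensity


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    mask = np.zeros((64, 64), dtype=bool)
    mask[8:56, 16:48] = True                         # toy "face" region
    print(relative_depth_response(mask, img).shape)  # (64, 64)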
“…Using face annotation for cost-effective management of personal photo collections is an area of current research interest and intense development [6], [10], [11]. In the past few years, considerable research efforts have been dedicated to the development of face annotation methods that facilitate photo management and search [16]-[29].…”
Section: Related Work (mentioning; confidence: 99%)
“…6, the first three most significant dimensions of the PCA feature subspace are plotted for each face image. It can be seen that two outliers (f_q^(5) and f_q^(6)), subject to a significant variation in pose and illumination, are located far from the feature vector f_t^(1) of the correct target subject, compared to the f_t^(2) of the incorrect target subject. As a result, the two outliers may be misclassified as the target identity f_t^(2) when performing FR in an independent way.…”
Section: |C|) (mentioning; confidence: 99%)
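The statement describes face recognition performed independently for each query image in a PCA feature subspace, where queries with strong pose or illumination variation can land closer to the wrong target than to the correct one. The synthetic Python sketch below illustrates that failure mode; the data, dimensionality, and nearest-neighbour rule are assumptions for illustration, not the cited experiment.

# Synthetic illustration of independent per-query matching in a PCA subspace.
import numpy as np


def pca_fit(X: np.ndarray, n_components: int):
    """Return (mean, components) from an SVD of the centred data matrix."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]


def project(X, mean, components):
    return (X - mean) @ components.T


if __name__ == "__main__":
    rng = np.random.default_rng(1)

    # Two target identities as cluster centres in a 50-D feature space.
    target_1 = rng.normal(0.0, 1.0, 50)
    target_2 = rng.normal(3.0, 1.0, 50)

    # Query set: four well-behaved samples of identity 1 plus two heavily
    # perturbed "outliers" (a stand-in for pose/illumination variation).
    queries = np.vstack([target_1 + rng.normal(0.0, 0.2, (4, 50)),
                         target_1 + rng.normal(2.5, 0.8, (2, 50))])

    mean, comps = pca_fit(queries, n_components=3)
    q = project(queries, mean, comps)
    t = project(np.vstack([target_1, target_2]), mean, comps)

    # Independent nearest-neighbour decision per query image: the outliers
    # may be assigned to target 2 even though they belong to identity 1.
    for i, qi in enumerate(q):
        d = np.linalg.norm(t - qi, axis=1)
        print(f"query {i}: assigned to target {d.argmin() + 1}")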