2013
DOI: 10.1007/s11042-013-1609-3
Speech-driven talking face using embedded confusable system for real time mobile multimedia

Cited by 4 publications (4 citation statements)
References 24 publications
“…Finally, simulation results show that the end-to-end delay is minimized in cooperative communication according to the vehicle priority. There are many challenges ahead when the V2V environment meets M2M with various soft computing systems [32,[43][44][45]; in this era, billions of ubiquitous devices are connected together to form a huge mesh of big data [46,47]. Apart from that, voice-controlled vehicles nowadays face real-time implementation challenges. …”
Section: Results
confidence: 99%
“…This will be a problem when the system is implemented for real-time animation. Map performance improvements were also made in the phoneme confusion section [5]. Another study uses linguistics to create P2V maps and uses surveys for validation [6].…”
Section: Introduction
confidence: 99%
“…This special issue covers a wide range of topics in human-machine interaction, including text-or speech-driven facial animation [11,23,28], emphatic speech synthesis [20], head and facial gesture synthesis [10], human pose estimation [24], crowd counting [6], person identification [25] and facial expression recognition [8].…”
confidence: 99%
“…In the mobile computing age, talking avatars, or virtual assistants, are playing an increasingly important role in serving as a natural user interface, due to the physical limits of smartphones and the special features of mobile platforms. In [23], Shih et al. aim to synthesize a real-time speech-driven talking face for mobile multimedia. A lifelike talking avatar requires not only natural speech articulation, but also expressive head motions, emotional facial expressions and other meaningful facial gestures [15].…”
confidence: 99%