2013
DOI: 10.1108/s1548-6435(2013)0000010012
An Introduction to Audio and Visual Research and Applications in Marketing

Cited by 15 publications (12 citation statements)
References 83 publications
“…The CaF belongs to the research domain called “artificial empathy,” defined as “the ability of nonhuman models to predict a person’s internal states (e.g., cognitive, affective, physical) given the signals he or she emits (e.g., facial expression, voice, gesture) or to predict a person’s reaction (including, but not limited to internal states) when he or she is exposed to a given set of stimuli (e.g., facial expression, voice, gesture, graphics, music, etc.)” (Xiao, Kim, and Ding 2013, p. 244). The CaF framework enables us to predict how a person will react when they see someone’s face, without revealing the face itself.…”
Section: Discussion (mentioning)
confidence: 99%
“…For example, face-based perceptions such as maturity, dominance, and competence have been shown to influence the selection of employees (e.g., Gorn, Jiang, and Johar 2008; Graham, Harvey, and Puri 2016; Keh et al. 2013). Second, perceptions from customer facial data can be used to improve targeting and segmentation, firms’ understanding of customer preferences, and marketing effectiveness (Xiao, Kim, and Ding 2013). For example, Lu, Xiao, and Ding (2016) analyzed customers’ facial expressions in videos to infer the customers’ product preferences.…”
Section: Literature Review and Motivation (mentioning)
confidence: 99%
“…FaceReader has been used to capture emotions in previous research by, for example, Danner, Sidorkina, Joechl, and Duerrschmid (2013) and de Wijk, Kooijman, Verhoeven, Holthuysen, and de Graaf (2012). A main assumption is that the individual's facial expressions provide useful indications of the individual's emotions (Xiao, Kim, and Ding 2013). The levels of the seven emotion variables (range 0-1; the higher the value, the higher the emotional intensity) are computed 30 times per second.…”
Section: Methods (mentioning)
confidence: 99%
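For readers unfamiliar with output of the kind described above, the sketch below is a hypothetical illustration, not code from any of the cited papers. It assumes a CSV export with one row per frame (30 rows per second) and one column per emotion, each value between 0 and 1, and it averages each emotion's intensity over a recording; the column names, file name, and export layout are assumptions.

```python
import csv
from collections import defaultdict

# Seven emotion channels of the kind FaceReader reports, assumed here to be CSV column names.
EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised", "scared", "disgusted"]
FRAMES_PER_SECOND = 30  # intensities are computed 30 times per second

def mean_emotion_intensities(path):
    """Average each emotion's intensity (range 0-1) over all frames of one recording."""
    totals = defaultdict(float)
    n_frames = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for emotion in EMOTIONS:
                totals[emotion] += float(row[emotion])
            n_frames += 1
    means = {e: totals[e] / n_frames for e in EMOTIONS}
    return means, n_frames / FRAMES_PER_SECOND  # mean intensities, clip length in seconds

if __name__ == "__main__":
    means, seconds = mean_emotion_intensities("facereader_export.csv")  # hypothetical file
    print(f"Clip length: {seconds:.1f} s")
    for emotion, value in sorted(means.items(), key=lambda kv: -kv[1]):
        print(f"{emotion:>10}: {value:.3f}")
```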
“…Thanks to advanced computer vision techniques, a customer's facial expressions and the areas of a garment that a customer touches can both be automatically inferred from video data with reasonable accuracy (see a recent review by Xiao et al. 2013). We describe the analysis process below (see Figure 2).…”
Section: Step 1: Infer Preferences From (mentioning)
confidence: 99%
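As a purely illustrative sketch, and not the cited authors' actual pipeline, the snippet below shows one simple way the two signals mentioned in the statement above could be combined: given per-frame records of which garment, if any, the customer touches and a facial-expression valence score, it averages valence over the frames in which each garment is touched as a rough preference proxy. The frame format, garment IDs, and valence scale are assumptions.

```python
from collections import defaultdict
from typing import Iterable, Optional, Tuple

def preference_scores(frames: Iterable[Tuple[Optional[str], float]]) -> dict:
    """Average expression valence over the frames in which each garment is touched.

    Each frame is (garment_id, valence), where garment_id is None when nothing
    is touched and valence is an assumed expression score in [-1, 1].
    """
    valence_sum = defaultdict(float)
    touch_frames = defaultdict(int)
    for garment_id, valence in frames:
        if garment_id is None:
            continue  # no garment touched in this frame
        valence_sum[garment_id] += valence
        touch_frames[garment_id] += 1
    return {g: valence_sum[g] / touch_frames[g] for g in touch_frames}

# Example: garment "A" is touched with mildly positive expressions and "B" with
# negative ones, so "A" receives the higher preference score.
frames = [("A", 0.4), ("A", 0.2), (None, 0.0), ("B", -0.3), ("B", -0.1)]
print(preference_scores(frames))  # roughly {'A': 0.3, 'B': -0.2}
```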
“…Applications of video analysis in marketing may include detecting gender, skin color, and eye gaze; recognizing facial expressions and body or hand gestures; tracking trajectories; and counting people (see Xiao et al. 2013). As a first attempt to use video analysis to infer customers' individual preferences, the VAR system may yield benefits such as reducing customer searching effort, increasing retail sales by recommending garments that customers are likely to purchase, and helping companies adjust designs or inventory to match customer preferences.…”
Section: Figure (mentioning)
confidence: 99%