2022
DOI: 10.1016/j.leaqua.2021.101535

Predicting leadership perception with large-scale natural language data

Citations: cited by 29 publications (24 citation statements)
References: 108 publications
“…By initializing our model’s response vectors with pretrained word embeddings, we were able to train effective models using far less data than would have been necessary if we had trained these parameters from scratch. We have previously proposed related techniques for modeling judgments of words on various dimensions (Bhatia, 2019; Bhatia et al, in press; Gandhi et al, 2022; Richie et al, 2019; Zou & Bhatia, 2021; also see Hollis et al, 2017; Van Rensbergen et al, 2016; Sedoc et al, 2017). Other work, including ours, has also shown the applicability of such methods for the study of concept knowledge (Bhatia & Richie, in press; Derby et al, 2019; Lu et al, 2019; Richie & Bhatia, 2021).…”
Section: Discussion (mentioning)
confidence: 99%
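
The statement above describes initializing a judgment model's response vectors with pretrained word embeddings so that far fewer human ratings are needed than when training those parameters from scratch. A minimal sketch of that idea, not the cited paper's actual architecture: an embedding layer is seeded with pretrained vectors and fine-tuned together with a linear readout on a handful of toy ratings (the items, ratings, dimensionality, and optimizer settings are all illustrative assumptions).

```python
# Sketch: seed an item-embedding layer from pretrained word vectors rather
# than random weights, then fine-tune on a small set of human judgments.
import torch
import torch.nn as nn

items = ["apple", "broccoli", "candy"]            # toy judgment targets
ratings = torch.tensor([[0.9], [0.7], [0.2]])     # toy human ratings

# Stand-in for pretrained embeddings; in practice these rows would be looked
# up from word2vec/GloVe-style vectors for each item.
pretrained = torch.randn(len(items), 300)

emb = nn.Embedding.from_pretrained(pretrained, freeze=False)  # initialized, not random
readout = nn.Linear(300, 1)                       # maps a vector to a predicted rating
model = nn.Sequential(emb, readout)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
idx = torch.arange(len(items))
for _ in range(200):                              # small data, brief training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(idx), ratings)
    loss.backward()
    opt.step()
```

Because the embedding layer starts from informative vectors, the trainable readout is the only part that must be learned from the limited rating data.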
“…Semantic judgments about words (e.g., the tastiness of a food) can also be approximated by calculating the relative vector similarity of a judgment target (e.g., apple) to words high (e.g., delicious, tasty) and low (e.g., disgusting) on a judgment dimension (Grand et al, 2022; Richie et al, 2019). However, even better (out-of-sample) approximations of semantic judgments can be made by directly regressing human ratings for a semantic dimension onto the vectors for judgment targets (Bhatia, 2019; Bhatia et al, in press; Gandhi et al, 2022; Hollis et al, 2017; Richie et al, 2019; Utsumi, 2020; Zou & Bhatia, 2021; see Snefjella & Blank, 2020, for a comprehensive list of “semantic norm extrapolation” studies, as well as caveats thereof). The advantage of this approach is that it allows for human data to directly supervise the setting of flexible weights on the attributes of the target representation.…”
Section: Theoretical Background (mentioning)
confidence: 99%
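
The excerpt above contrasts two ways of recovering semantic judgments from word vectors: scoring a target by its relative similarity to anchor words high versus low on the dimension, and regressing human ratings directly onto the target vectors. A minimal sketch of both, using random stand-in vectors and toy ratings (the anchor words, ridge penalty, and all data are illustrative assumptions, not material from the cited studies):

```python
import numpy as np
from numpy.linalg import norm
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
vocab = ["apple", "kale", "candy", "delicious", "tasty", "disgusting"]
vec = {w: rng.standard_normal(50) for w in vocab}   # stand-in word embeddings

def cosine(a, b):
    return a @ b / (norm(a) * norm(b))

# (a) Similarity-based projection: score a target by its similarity to words
# high on the dimension (tastiness) minus its similarity to a low anchor.
def projection_score(target):
    high = np.mean([cosine(vec[target], vec[w]) for w in ["delicious", "tasty"]])
    low = cosine(vec[target], vec["disgusting"])
    return high - low

print({w: round(projection_score(w), 3) for w in ["apple", "kale", "candy"]})

# (b) Regression-based extrapolation: fit human ratings directly onto the
# target vectors, letting the data weight each embedding dimension.
train_words = ["apple", "kale"]
train_ratings = [0.9, 0.6]                          # toy human ratings
X = np.stack([vec[w] for w in train_words])
model = Ridge(alpha=1.0).fit(X, train_ratings)
print(model.predict(vec["candy"].reshape(1, -1)))   # out-of-sample prediction
```

The regression variant corresponds to the excerpt's point about supervision: human ratings set flexible weights on the embedding dimensions rather than relying on fixed anchor words.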
See 1 more Smart Citation
“…This approach has also been used to model evaluations of naturalistic entities on a continuous scale, such as evaluations of the risk posed by technological hazards, the effectiveness of prominent politicians, the excitement generated by consumer brands, and the healthiness of food items (Bhatia, 2019; Bhatia et al, 2021; Gandhi et al, in press; Richie et al, 2019). Results have shown that linear models, common in judgment research, can be combined with word vector representations of these natural entities to predict participants’ evaluations on both an aggregate and an individual level.…”
Section: Judgment and Decision Making (mentioning)
confidence: 99%
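
The statement above notes that linear models over word vector representations of naturalistic entities can predict evaluations at both the aggregate and the individual level. A brief sketch of that distinction under toy assumptions (the entities, vectors, and ratings are made up, and ridge regression stands in for whatever linear model the cited studies used):

```python
# Aggregate vs. individual-level linear models over the same entity vectors.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
entities = ["nuclear power", "bicycles", "pesticides", "air travel"]
vec = {e: rng.standard_normal(50) for e in entities}   # stand-in entity vectors

# ratings[p, i] = participant p's risk rating of entity i (toy values)
ratings = rng.uniform(0, 1, size=(5, len(entities)))

X = np.stack([vec[e] for e in entities])

# Aggregate model: regress each entity's mean rating onto its vector.
aggregate = Ridge(alpha=1.0).fit(X, ratings.mean(axis=0))

# Individual models: one linear readout per participant over the same vectors.
individual = [Ridge(alpha=1.0).fit(X, ratings[p]) for p in range(ratings.shape[0])]
```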
“…Social scientists increasingly use text-based measures as dependent or independent variables (1)(2)(3)(4)(5)(6). Texts are complex, high-dimensional objects; thus, researchers must find simpler, lower-dimensional representations for their texts to use them in scientific analyses.…”
Section: Introduction (mentioning)
confidence: 99%
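
The excerpt above makes the general point that texts are high-dimensional objects and must be reduced to simpler representations before analysis. A generic sketch of one such reduction, TF-IDF followed by truncated SVD, which is only an illustrative choice and not the method of the cited work; the example texts are invented:

```python
# Map raw texts to a small number of dimensions before statistical analysis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

texts = [
    "the ceo outlined a bold new vision for the company",
    "the manager listened carefully and supported the team",
    "quarterly earnings fell short of analyst expectations",
]

tfidf = TfidfVectorizer().fit_transform(texts)      # sparse, vocabulary-sized
low_dim = TruncatedSVD(n_components=2).fit_transform(tfidf)
print(low_dim.shape)                                # (3, 2): one short vector per text
```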