Handbook of Employee Selection, 2017
DOI: 10.4324/9781315690193-44

The Impact of Emerging Technologies on Selection Models and Research

Cited by 8 publications (7 citation statements) | References 1 publication

“…As a final example, Arthur, Doverspike, Kinney, and O'Connell (2017) provide a strong note of caution regarding game-thinking in selection contexts, pointing out that job candidates are likely already highly motivated, so the enhanced engagement that game elements are meant to deliver may not have appreciable effects. Further, game mechanics related to the value of feedback may have negative consequences such as increased anxiety (Arthur et al., 2017). Hawkes et al. (2018) go so far as to note that the reliability of game assessments may be negatively affected by practice effects such as those seen in the gaming world, or by the noisy variance added by hand-eye coordination and mouse control.…”
Section: Effects of Contextual Enhancement
confidence: 99%
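
To see why added motor noise would depress reliability, here is a minimal classical test theory sketch (our illustration; the notation is not drawn from Hawkes et al., 2018):

$$\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}$$

If variance from hand-eye coordination and mouse control behaves as random error, it inflates the error variance $\sigma^2_E$ while leaving the trait variance $\sigma^2_T$ unchanged, so the reliability coefficient $\rho_{XX'}$ necessarily falls.
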
“…There is ample research available to indicate that some measures are equivalent across modes (non-cognitive tests; see the review by Tippins, 2015) and others are not (e.g., speeded cognitive tests; King, Ryan, Kantrowitz, Grelle, & Dainis, 2015; Mead & Drasgow, 1993). For example, non-cognitive measures may be equivalent when moving from PC to mobile assessment, but cognitive measures and SJTs might not be, depending on features like scrolling… (see Arthur et al., 2017, and King et al., 2015, for comparisons). Morelli, Potosky, Arthur, and Tippins (2017) note that "reactive equivalence" studies comparing assessment modes are not theoretically (or even practically) informative because they do not address why mode differences occur or the reasons for construct non-equivalence.…”
Section: Effects of Efficiency
confidence: 99%
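
As a schematic of what mode equivalence requires, consider a standard measurement-invariance setup (an illustrative formalization, not an equation from the studies cited above): for respondent $i$ answering item $j$ in mode $m$ (PC vs. mobile),

$$X_{ijm} = \tau_{jm} + \lambda_{jm}\,\xi_i + \epsilon_{ijm}$$

Scores are comparable across modes only if the item intercepts $\tau_{jm}$ and loadings $\lambda_{jm}$ do not depend on $m$; a feature like scrolling, which changes how an item is processed on a small screen, can break that invariance for cognitive items and SJTs.
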
“…It is also worth noting that the tenets of the SCIP framework are not necessarily limited to UIT devices; indeed, they can conceptually inform discussions of a wider range of assessments, particularly in any domain in which construct-irrelevant information-processing demands associated with the testing method or medium are pertinent. For instance, the SCIP framework would be germane in the context of other technologically mediated assessment methods, such as virtual role-plays, immersive simulations, and gamified assessments (Arthur, Doverspike, Kinney, & O'Connell, 2017). As another example, one could envisage characteristics of situational judgment tests (e.g., item length, complexity of instructions, response format, presentation mode [paper-and-pencil vs. video]) that could very well engender differential construct-irrelevant cognitive load, which then results in differential test scores (Arthur et al., 2014; Chan & Schmitt, 1997).…”
Section: Arthur, Keiser, and Doverspike's (2017) SCIP Framework
confidence: 99%
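
One way to formalize the SCIP argument is a schematic score decomposition (our gloss, not notation from Arthur, Keiser, and Doverspike, 2017):

$$X = T + C + E$$

where $T$ is construct-relevant true score, $C$ is systematic variance driven by the method's construct-irrelevant information-processing demands (scrolling, instruction complexity, response format), and $E$ is random error. If $C$ differs across methods, media, or devices, observed scores $X$ will differ even when $T$ does not.
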
“…As a result, applicants are afforded the opportunity to complete employment‐related tests and assessments in virtually any location and using any device of their choosing. Consistent with the persistent rise in mobile device ownership among the general population (Pew Research Center, 2015, 2019), the use of mobile devices for unproctored employment‐related testing has become increasingly commonplace (Arthur et al., 2017, 2018). Although several advantages associated with UIT have been proffered, the seemingly unrestricted internet access permitted by new technologies also raises concerns regarding potential device‐type effects on the observed scores of UITs.…”
Section: Introduction
confidence: 99%