2020
DOI: 10.1093/arclin/acaa080
The night out task and scoring application: an ill-structured, open-ended clinic-based test representing cognitive capacities used in everyday situations

Abstract: Objective: The night out task (NOT) was developed as a naturalistic, open-ended, multitasking measure that requires individuals to complete eight subtasks comparable to those encountered during real-world functioning (e.g., pack travel bag, prepare tea). We examined psychometric properties and administration feasibility of this direct observation measure within a clinic-like setting using a tablet-based coding application. Method: …

Cited by 14 publications (7 citation statements) · References 48 publications
“…Notably, most studies (16 out of 19) that did rely on associations between the test scores and functional outcomes assessed outcomes using rating scales filled out by participants and/or collateral sources. Only one study examined whether performance on a novel test could predict an actual real-world outcome, that is, occupational status after traumatic brain injury (Scott et al, 2011), and two studies used observations of performance of functional tasks (Josman et al, 2014; Schmitter-Edgecombe et al, 2021). Interestingly, two of the articles that did not explicitly link EV to prediction of functioning (relying instead on participant feedback about the test’s face validity) nevertheless did examine the associations between test performance and functional outcome measures.…”
Section: Results (mentioning; confidence: 99%)
“…Clark et al, 2000; Kenworthy et al, 2020; Renison et al, 2012; Scott et al, 2011; Siu & Zhou, 2014) for assessing tests’ association with functional outcomes, for deriving all results from sufficiently large samples (56 to 274 participants), and for having acceptable participant-to-variable ratios (six to 55; for discussion of sample sizes and participant-to-variable ratios, see Tabachnick & Fidell, 2007; Van Voorhis & Morgan, 2007). In addition to these five studies, Schmitter-Edgecombe et al (2021) also deserve highlighting for conducting half of their EV-related analyses on a large sample (n = 117) with a participant-to-variable ratio of 39 (although the other half of EV-related analyses was conducted on a subsample of only 18 participants). Of note, all six of these highlighted studies appropriately shied away from examining large numbers of variables, interpreting only between two and 11 coefficients.…”
Section: Discussion (mentioning; confidence: 99%)
“…Education effects are likely to impact tests that involve reading and arithmetic skills and could thereby reduce the measures' predictive validity. Age correction is, however, more problematic, as age may affect both performance on the measure and performance in the individual's natural environment, such that adjusting for age could obscure the test-taker's real-world difficulties (Schmitter-Edgecombe et al, 2021).…”
Section: Measure Development and Validation (mentioning; confidence: 99%)
“…To improve the utility of EF assessment, numerous researchers have begun to develop more ecologically valid instruments (Hamera & Brown, 2000; Josman et al, 2009; Jovanovski, Zakzanis, Campbell, et al, 2012; Lalonde et al, 2013; Lamberts et al, 2010; Schmitter-Edgecombe et al, 2021), with the assumption that ecologically valid tests would represent a “silver bullet” that would dramatically improve predictions of patients’ IADL capacities (Burgess et al, 2006). However, after some 30 years of such efforts, only a handful of such tests have been translated into clinical use (e.g., Wilson et al, 1996), and the superiority of such tests over traditional batteries has not been unequivocally demonstrated (e.g., Jansari et al, 2014; Jovanovski, Zakzanis, Campbell, et al, 2012; Jovanovski, Zakzanis, Ruttan, et al, 2012; Maeir et al, 2011; Rand et al, 2009; Robertson & Schmitter-Edgecombe, 2016; Spitoni et al, 2018).…”
Section: Introduction (mentioning; confidence: 99%)