Adjunct Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications 2014
DOI: 10.1145/2667239.2667293
The MIT AgeLab n-back

Abstract: This paper briefly describes the background of the MIT AgeLab implementation of a delayed digit recall, or n-back, task, and the capabilities of an Android application developed to implement a multi-modal version. The MIT AgeLab n-back task is a well-established methodology for inducing graded levels of cognitive workload. It has been adopted for broad use as a multimodal surrogate demand and calibration task, and was recently introduced as a driver and pedestrian distraction education tool.
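The delayed digit recall paradigm the abstract describes can be sketched in a few lines: digits are presented one at a time, and after each one the participant reports the digit presented n positions earlier. The sketch below is illustrative only; the function names, scoring scheme, and parameters are assumptions, not the AgeLab application's actual implementation.

```python
import random

def generate_digit_sequence(num_trials, seed=None):
    """Random single-digit stimuli for one n-back trial block."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(num_trials)]

def score_recall(stimuli, recalled, n):
    """Score a delayed digit recall (n-back) block.

    After each stimulus, the participant reports the digit presented
    n positions earlier; the first n positions have no valid response,
    so `recalled` holds None there. Returns (correct, scorable).
    """
    correct = 0
    scorable = len(stimuli) - n
    for i in range(n, len(stimuli)):
        if recalled[i] == stimuli[i - n]:
            correct += 1
    return correct, scorable

# Example: a 2-back block with perfect recall.
stim = [3, 1, 4, 1, 5]
recalled = [None, None, 3, 1, 4]
print(score_recall(stim, recalled, 2))  # (3, 3)
```

Raising n increases the memory load, which is how the task induces graded levels of cognitive workload (0-back, 1-back, and 2-back are the levels commonly reported for this protocol).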


Cited by 15 publications (5 citation statements)
References 31 publications
“…Therefore, glance data for the current study were manually coded based on video of the driver following the taxonomy and procedures outlined in Reimer et al (2013, Appendix G). Software, now available as open source (Reimer, Gruevski, and Coughlin 2014), allowed for rapid frame-by-frame review and coding. Each task period of interest was independently coded by two evaluators.…”
Section: Methods
confidence: 99%
“…Following procedures detailed in Mehler et al ( 4 ), two trained coders independently manually coded glances in each dataset and a third coder mediated any discrepancies. Open source software, which allowed for frame-by-frame review and annotation of driver eye movements, was used for this coding ( 15 ).…”
Section: Methods
confidence: 99%
“…In the case of manual coding of video images, the timing of a glance is labelled from the first video frame illustrating movement to a ‘new’ location of interest to the last video frame prior to movement to a ‘new’ location. Glance data for this study were manually coded using software, now available as open source (Reimer, Gruevski, and Coughlin 2014), that allowed for rapid frame-by-frame review and coding. Each task period of interest was independently coded by two evaluators.…”
Section: Methods
confidence: 99%