Attention-deficit/hyperactivity disorder (ADHD) is frequently characterized as a disorder of executive function (EF). However, behavioral tests of EF, such as go/no-go tasks, often fail to capture the EF deficits revealed by questionnaire-based measures. This discrepancy is usually attributed to questionnaires and behavioral tasks assessing different EF constructs. We propose an additional explanation: the discrepancy stems in part from the lack of dynamic assessment of decision-making (e.g., continuous monitoring of motor behavior, such as velocity and acceleration in choice reaching) in classical versions of behavioral tasks. We test this hypothesis by introducing dynamic assessment, in the form of mouse-cursor motion, into a go/no-go task. Our results indicate that, among healthy college students, self-report measures of ADHD symptoms become strongly associated with behavioral task performance when continuous assessment (e.g., acceleration of mouse-cursor motion) is introduced.
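As an illustration of the kind of continuous kinematic assessment described above, the sketch below estimates per-sample velocity and acceleration along a cursor trajectory and reduces them to trial-level summary features. It is a minimal Python example under our own assumptions (cursor sampled as timestamped x/y coordinates; the function name and synthetic trajectory are ours, not the authors').

import numpy as np

def kinematics(t, x, y):
    """Estimate per-sample speed (px/s) and acceleration (px/s^2) of a cursor path.

    t: sample timestamps in seconds; x, y: cursor coordinates in pixels.
    """
    t, x, y = map(np.asarray, (t, x, y))
    vx = np.gradient(x, t)           # finite-difference velocity components
    vy = np.gradient(y, t)
    speed = np.hypot(vx, vy)         # tangential velocity magnitude
    accel = np.gradient(speed, t)    # rate of change of speed
    return speed, accel

# Trial-level summary features for one synthetic choice-reaching trial
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.8, 60)                      # 60 samples over 0.8 s
x = np.linspace(0, 300, 60) + rng.normal(size=60)  # hypothetical trajectory
y = np.linspace(0, 150, 60) + rng.normal(size=60)
speed, accel = kinematics(t, x, y)
max_velocity, max_acceleration = speed.max(), accel.max()

Per-trial maxima such as these are the kind of continuous measures that a keypress-only task discards.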
The accurate detection of attention-deficit/hyperactivity disorder (ADHD)
symptoms, such as inattentiveness and behavioral disinhibition, is crucial for
delivering timely assistance and treatment. ADHD is commonly diagnosed and
studied with specialized questionnaires and behavioral tests such as the
stop-signal task. However, in cases of late-onset or mild forms of ADHD,
behavioral measures often fail to capture the deficits highlighted by
questionnaires. To improve the sensitivity of behavioral tests, we propose a
novel version of the stop-signal task (SST), which integrates mouse cursor
tracking. In two studies, we investigated whether introducing mouse movement
measures to the stop-signal task improves associations with questionnaire-based
measures, compared to the traditional (keypress-based) version of the SST. We
also scrutinized the influence of different parameters of the stop-signal task,
such as the method of setting the stop-signal delay or the definition of
response-inhibition failure, on these associations. Our results show that a) the
stop-signal reaction time (SSRT) has a weak association with impulsivity, whereas
mouse movement measures have strong and significant associations with it; b)
machine learning models trained on the mouse movement data from “known”
participants using a nested cross-validation procedure can accurately predict
impulsivity ratings of “unknown” participants; c) mouse movement features such as
maximum acceleration and maximum velocity are among the most important predictors
of impulsivity; and d) using preset stop-signal delays prompts behavior that is
more indicative of impulsivity.
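Point b) refers to a nested cross-validation scheme; the sketch below shows one common way to set it up in scikit-learn, with hyperparameters tuned on inner folds of “known” participants and outer folds serving as held-out “unknown” participants. The feature matrix, the random-forest model, and the hyperparameter grid are illustrative assumptions, not the authors' exact pipeline.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# Hypothetical data: one row per participant with mouse movement features
# (e.g., maximum velocity, maximum acceleration); y holds impulsivity ratings.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 6))
y = rng.normal(size=60)

inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)  # hyperparameter tuning
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)  # held-out "unknown" participants

model = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5]},
    cv=inner_cv,
)
scores = cross_val_score(model, X, y, cv=outer_cv, scoring="r2")
print(f"nested-CV R^2: {scores.mean():.2f} +/- {scores.std():.2f}")

Because tuning never sees the outer test fold, the reported score estimates how well the model generalizes to participants it was not trained on.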
The human mind is multimodal. Yet most behavioral studies rely on century-old measures of behavior—task accuracy and latency (response time). Multimodal and multisensory analysis of human behavior yields a better understanding of how the mind works. The problem is that designing and implementing such experiments is technically complex and costly. This paper introduces versatile and economical means of developing multimodal-multisensory human experiments. We provide an experimental design framework that automatically integrates and synchronizes measures including electroencephalogram (EEG), galvanic skin response (GSR), eye tracking, virtual reality (VR), body movement, mouse/cursor motion, and response time. Unlike proprietary systems (e.g., iMotions), our system is free and open source; it integrates PsychoPy, Unity, and Lab Streaming Layer (LSL). The system embeds LSL inside PsychoPy/Unity to synchronize multiple sensory signals—gaze motion, EEG, GSR, mouse/cursor movement, and body motion—recorded with low-cost consumer-grade devices, in a simple behavioral task designed in PsychoPy and a virtual reality environment designed in Unity. This tutorial shows, step by step, how a complex multimodal-multisensory experiment can be designed and implemented in a few hours. When the experiment is run, all data synchronization and recording to disk happen automatically.
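As a sketch of the embedding described above, the snippet below streams mouse-cursor samples from a PsychoPy script through an LSL outlet so that other streams (EEG, GSR, eye tracking) timestamped on the same LSL clock can be aligned offline. It assumes pylsl and PsychoPy are installed; the stream name, source id, and five-second demo loop are illustrative choices rather than settings prescribed by the tutorial.

from psychopy import visual, event, core
from pylsl import StreamInfo, StreamOutlet, local_clock

# Declare an LSL stream carrying two channels (cursor x and y).
info = StreamInfo(name="MouseTracking", type="Mouse", channel_count=2,
                  nominal_srate=60, channel_format="float32",
                  source_id="mouse_demo_001")
outlet = StreamOutlet(info)

win = visual.Window(size=(800, 600), units="pix")
mouse = event.Mouse(win=win)
clock = core.Clock()

# Push one (x, y) sample per frame, timestamped on the shared LSL clock.
while clock.getTime() < 5.0:
    x, y = mouse.getPos()
    outlet.push_sample([x, y], local_clock())
    win.flip()                     # frame-locked sampling (~60 Hz)

win.close()
core.quit()

Because every outlet pushes samples against the same LSL clock, a recorder such as LabRecorder can store all streams together without manual synchronization.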