Approach biases to foods may explain why food consumption often diverges from deliberate dietary intentions. Yet the assessment of behavioural biases with the approach-avoidance task (AAT) is often unreliable, and its validity remains partially unclear. The present study continues a series of studies developing a task based on naturalistic approach and avoidance movements on a touchscreen (hand-AAT). In the hand-AAT, participants are instructed to respond based on the food/non-food distinction, thereby ensuring attention to the stimuli. However, this requires instruction switches (i.e., from ‘approach food – avoid objects’ to ‘avoid food – approach objects’), which introduce order effects. The present study increased the number of instruction switches to minimize order effects and re-examined reliability. We additionally included the implicit association task (IAT) and several self-reported eating behaviours to investigate the task’s validity. Results replicated the presence of reliable approach biases to foods irrespective of instruction order. Evidence for validity, however, was mixed: biases correlated positively with external eating, increases in food craving, and aggregated image valence ratings, but not with desire-to-eat ratings of the individual images considered within participants, nor with the IAT. We conclude that the hand-AAT can reliably assess approach biases to foods that are relevant to self-reported eating patterns.
Reaction time (RT) data are often pre-processed before analysis by rejecting outliers and errors and aggregating the data. In stimulus-response compatibility paradigms such as the Approach-Avoidance Task (AAT), researchers often decide how to pre-process the data without an empirical basis, leading to the use of methods that may hurt rather than help data quality. To provide this empirical basis, we investigated how different pre-processing methods affect the reliability and validity of this task. Our literature review revealed 108 different pre-processing pipelines among 163 examined studies. Using simulated and real datasets, we found that validity and reliability were negatively affected by retaining error trials, by replacing error RTs with the mean RT plus a penalty, by retaining outliers, and by removing the highest and lowest sample-wide RT percentiles as outliers. We recommend removing error trials and rejecting RTs deviating more than 2 or 3 SDs from the participant mean. Bias scores were more reliable, but not more valid, if computed with means or D-scores rather than with medians. Bias scores were less accurate if based on averaging multiple conditions together, as with compatibility scores, rather than being based on separate averages per condition, as with double-difference scores. We call upon the field to drop these suboptimal practices to improve the psychometric properties of the AAT. We also call for similar investigations in related RT-based cognitive bias measures such as the implicit association task, as their commonly accepted pre-processing practices currently involve many of the aforementioned discouraged methods.
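The recommended pipeline above (drop error trials, reject RTs beyond 2–3 SDs from the participant mean, then compute a double-difference score from separate condition means) can be sketched as follows. This is a minimal illustration, not the authors' actual analysis code; the function names and the 2.5 SD cutoff are illustrative choices within the recommended range.

```python
import numpy as np

def preprocess_rts(rts, errors, sd_cutoff=2.5):
    """Drop error trials, then reject RTs deviating more than
    sd_cutoff SDs from this participant's mean correct RT."""
    rts = np.asarray(rts, dtype=float)
    correct = rts[~np.asarray(errors, dtype=bool)]
    mean, sd = correct.mean(), correct.std()
    return correct[np.abs(correct - mean) <= sd_cutoff * sd]

def double_difference_bias(approach_food, avoid_food,
                           approach_obj, avoid_obj):
    """Double-difference bias score: the avoid-minus-approach RT
    difference for food stimuli, corrected by the same difference
    for control objects. Positive values indicate that food is
    approached faster than it is avoided, relative to objects."""
    food_diff = np.mean(avoid_food) - np.mean(approach_food)
    obj_diff = np.mean(avoid_obj) - np.mean(approach_obj)
    return food_diff - obj_diff
```

Note that each condition is averaged separately before differencing, as with double-difference scores; collapsing compatible and incompatible trials into pooled averages first (as with compatibility scores) was found to be less accurate.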