We report three experiments investigating source memory for words that were called "new" on a recognition test. In each experiment, participants could accurately specify the source of words that they failed to recognize. Results also demonstrated that source memory for unrecognized items varied with the bias to respond "old" in recognition decisions: Participants displayed unrecognized source memory when they were told that 25% of the recognition test words were old (promoting conservative responding) but not when they were told that 75% of the test words were old (promoting liberal responding). Our results were successfully predicted by a multivariate signal detection approach to recognition/source memory.
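The bias manipulation above follows directly from signal detection theory: telling participants that 25% versus 75% of test words are old shifts the response criterion without changing underlying sensitivity. A minimal sketch of that logic, using hypothetical hit and false-alarm rates (the specific numbers are illustrative, not from the experiments):

```python
from statistics import NormalDist

# Inverse of the standard normal CDF (the z-transform used in SDT).
z = NormalDist().inv_cdf

def sdt_stats(hit_rate, fa_rate):
    """Return sensitivity d' and criterion c from hit/false-alarm rates."""
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical rates: conservative responding (told 25% of words are old)
# versus liberal responding (told 75% are old).
conservative = sdt_stats(hit_rate=0.60, fa_rate=0.10)
liberal = sdt_stats(hit_rate=0.90, fa_rate=0.40)

# Same d' in both conditions; only the criterion c moves
# (c > 0 for conservative, c < 0 for liberal).
print(conservative)
print(liberal)
```

Under this framing, conservative instructions push the criterion up, leaving more studied items below it to be called "new" while their source information can still exceed a source-judgment criterion.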
Findings on the effect of delay on prospective memory (PM) are mixed. Research has typically shown that PM either decreases or remains unchanged as the time between intention formation and encounter with a PM cue increases. However, one study demonstrated that PM sometimes increases with increasing delays (Hicks, Marsh, & Russell, 2000). Hicks et al. hypothesized that a longer delay may afford people more opportunity to spontaneously rehearse the intention, or to be reminded of it. In the present work, we tested delays of 6 minutes, 21 minutes, and 36 minutes. Two factors were orthogonally manipulated between subjects. One was the duration of the filler task that came between intention formation and the beginning of the ongoing task in which PM cues were embedded. The second was the duration of the ongoing task prior to the presentation of the first PM cue. Lengthening the ongoing task delay decreased PM. However, lengthening the filler task nominally increased PM. These results suggest that delays within the ongoing task replicate the effects traditionally found in retrospective memory work. In contrast, delays between intention formation and the beginning of the ongoing task may not have straightforward effects on PM retrieval.
Military branches rely on selection and placement testing to help identify individuals who will be well-suited for jobs that applicants likely have little exposure to or experience with. The goal of this research was to update the Direction Orientation Task (DOT), which is used by both the United States Navy and Air Force as part of their aviation selection batteries. The current version (DOT1) has limitations such as ceiling effects in some populations and a restriction of range in test scores. We created a new version (DOT2) that attempted to increase the difficulty and variance of the test (among other changes discussed in the paper). Ninety-five student Naval Aviators completed DOT1, DOT2, and the operation span. Results showed that DOT2 was significantly more difficult and had greater variance than DOT1. Scores on DOT1 were highly related to scores on DOT2; both were similarly unrelated to scores on the operation span, and both were similarly related to the number of math errors made on the operation span. These preliminary data suggest that DOT2 may be a viable upgrade to DOT1.
With the proliferation of automated tasks, software, and systems, humans are shifting from active participants in the performance of a task to passive monitors of an automated system that completes that task. Unfortunately, humans are not well-suited for monitoring roles, and there is a need to better understand the factors that determine whether humans successfully detect automation failures. The goal of this research was to determine whether individual differences in attention control (as measured by the antisaccade task) and working memory capacity (as measured by the shortened operation span) relate to an individual's ability to detect automation failures. In Experiment 1, there was a significant positive relationship (r = .31) between scores on the antisaccade task and the number of automation failures that participants detected. In Experiment 2, there was a significant positive relationship (r = .32) between scores on the shortened operation span and the number of automation failures that participants detected. The results suggest that certain individuals are better suited for detecting automation failures. Selecting for these individuals may be a fruitful endeavor as automated systems continue to grow across society.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.