With the advancement of technologies like in-car navigation and smartphones, concerns about how cognitive functioning is influenced by "workload" are increasingly prevalent. Research shows that spreading effort across multiple tasks can impair cognitive abilities through an overuse of resources, and that similar overload effects arise in difficult single-task paradigms. We developed a novel lab-based extension of the Detection Response Task, which measures workload, and paired it with a Multiple Object Tracking Task to manipulate cognitive load. Load was manipulated either by changing within-task difficulty or by the addition of an extra task. Using quantitative cognitive modelling, we showed that these manipulations cause similar cognitive impairments through diminished processing rates, but that the introduction of a second task tends to invoke more cautious response strategies that do not occur when only difficulty changes. We conclude that more prudence should be exercised when directly comparing multitasking and difficulty-based workload impairments, particularly when relying on measures of central tendency.
The emotional Stroop effect (ESE) refers to longer naming latencies for the ink colors of emotion words than for the ink colors of neutral words. This difference shows that people are affected by the emotional content conveyed by the carrier words even though it is irrelevant to the color-naming task at hand. The ESE has been widely deployed with patient populations, as well as with non-selected populations, because the emotion words can be selected to match the tested pathology. The ESE is a powerful tool, yet it is vulnerable to various threats to its validity. This report identifies potential sources of confounding and includes a modal experiment that provides the means to control for them. The most prevalent threat to the validity of existing ESE studies is sustained effects and habituation brought about by repeated exposure to emotion stimuli. Consequently, the order of exposure to emotion and neutral stimuli is of utmost importance. We show that in the standard design, only one specific order produces the ESE.
Recently, Veksler, Myers, and Gluck (2015) proposed model flexibility analysis as a method that "aids model evaluation by providing a metric for gauging the persuasiveness of a given fit" (p. 755). Model flexibility analysis measures the complexity of a model in terms of the proportion of all possible data patterns it can predict. We show that this measure does not provide a reliable way to gauge complexity, which prevents model flexibility analysis from fulfilling either of the two aims outlined by Veksler et al. (2015): absolute and relative model evaluation. We also show that model flexibility analysis can fail to correctly quantify complexity even in the most clear-cut case, with nested models. We advocate for the use of well-established techniques such as Bayes factors, normalized maximum likelihood, or cross-validation, and against the use of model flexibility analysis. In the discussion, we explore two issues relevant to the area of model evaluation: the completeness of current model selection methods and the philosophical debate of absolute versus relative model evaluation.
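The core quantity in model flexibility analysis can be illustrated with a minimal sketch: sweep a model's parameter space, record which qualitative data patterns it can produce, and divide by the number of possible patterns. The toy model and variable names below are hypothetical, chosen only to show the computation; here a "data pattern" is the ordering of predicted means across three conditions, and the toy model predicts mean = a × difficulty.

```python
import itertools
import numpy as np

# Toy illustration (not the original authors' code): flexibility is the
# proportion of all possible data patterns a model can predict.
difficulties = np.array([1.0, 2.0, 3.0])

# All possible orderings of three condition means: 3! = 6 patterns.
all_patterns = set(itertools.permutations(range(3)))

# Sweep the model's single free parameter and collect predicted orderings.
predicted = set()
for a in np.linspace(-1.0, 1.0, 201):
    means = a * difficulties                      # toy model: mean = a * difficulty
    pattern = tuple(np.argsort(means, kind="stable"))
    predicted.add(pattern)

flexibility = len(predicted) / len(all_patterns)
print(flexibility)  # the linear toy model predicts only monotone patterns
```

Because the toy model can only produce monotonically increasing or decreasing means, it predicts 2 of the 6 orderings, giving a flexibility of 1/3; the critique in the abstract is that this proportion does not reliably track complexity in general.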
FluTracking experienced major growth in 2018, with participation numbers increasing 34.1% from 2017. The addition of 16,881 new participants brought the total number of participants for 2018 to 45,532. A majority of participants continued to complete their survey within 24 hours of the email being sent (mean 74.3% of responses received within 24 hours). The rate of influenza-like illness (ILI) in 2018 was the lowest since FluTracking commenced in 2007 and was consistently low across all ages. The peak weekly ILI rate was consistent with previous years, occurring during the week ending 19 August. This preceded the peak in laboratory-confirmed influenza notifications by three weeks. During the peak week of FluTracking, 2.1% of unvaccinated and 1.9% of vaccinated participants reported fever and cough. By the final survey of 2018, 65.6% of participants had received the annual influenza vaccine, compared with 60.2% in 2017. Vaccination rates in participants under five years of age more than doubled, from 23.7% in 2017 to 55.6% in 2018. During the peak four weeks of reported ILI, a lower percentage of participants sought medical care in 2018 than in 2017 (36.7% and 42.3% respectively), and fewer participants reported a positive laboratory test for influenza (0.8% and 4.8% respectively). Overall, the severity of the 2018 season was one of the lowest FluTracking has recorded. Rates of both influenza laboratory notifications and general practitioner (GP) ILI consultations were lower in 2018 than in most prior years. We found a reduction in the percentage of FluTracking participants with ILI who were tested for influenza (3.2% compared with 5.0% in 2017), and who visited a medical practitioner (36.7% compared with 42.3% in 2017). The drop in laboratory-confirmed cases and in Australian Sentinel Practices Research Network (ASPREN) reported GP consultations concurs with our survey results indicating that 2018 was a milder influenza season than many previous seasons.
The accurate and objective measurement of cognitive workload is important in many aspects of psychological research. The Detection Response Task (DRT) is a well-validated method for measuring cognitive workload that has been used extensively in applied tasks, for example to investigate the effects of fatigue and phone usage on driving. Given its success in applied tasks, we investigated whether the DRT could be used to measure cognitive workload in cognitive tasks more commonly used in experimental cognitive psychology, and whether this application could be extended to online environments. We had participants perform a multiple object tracking task while simultaneously performing a DRT. We manipulated the cognitive load of the multiple object tracking task by changing the number of dots to be tracked. Measurements from the DRT were sensitive to changes in cognitive load, establishing the efficacy of the DRT for experimental cognitive tasks in lab-based situations. This sensitivity persisted, though to a reduced extent, when the task was moved to an online environment (our code for the online DRT implementation is freely available at https://osf.io/dc39s/). This opens up the potential use of the DRT in a much greater range of tasks and situations, while suggesting that in-lab applications are preferable when possible.