Assessment in clinical psychology typically relies on global retrospective self-reports collected at research or clinic visits, which are limited by recall bias and are not well suited to address how behavior changes over time and across contexts. Ecological momentary assessment (EMA) involves repeated sampling of subjects' current behaviors and experiences in real time, in subjects' natural environments. EMA aims to minimize recall bias, maximize ecological validity, and allow study of microprocesses that influence behavior in real-world contexts. EMA studies assess particular events in subjects' lives or assess subjects at periodic intervals, often by random time sampling, using technologies ranging from written diaries and telephones to electronic diaries and physiological sensors. We discuss the rationale for EMA, EMA designs, methodological and practical issues, and comparisons of EMA and recall data. EMA holds unique promise to advance the science and practice of clinical psychology by shedding light on the dynamics of behavior in real-world settings.
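To make the time-based sampling design concrete, below is a minimal Python sketch of how signal-contingent (random time-sampling) prompts might be scheduled within a participant's waking hours. The function name, waking window, prompt count, and minimum spacing are illustrative assumptions, not parameters from any study described here.

```python
import random
from datetime import datetime, timedelta

def random_prompt_times(waking_start, waking_end, n_prompts, min_gap_minutes=30, seed=None):
    """Draw n_prompts random prompt times within the waking window,
    enforcing a minimum gap so prompts do not cluster together."""
    rng = random.Random(seed)
    window_minutes = int((waking_end - waking_start).total_seconds() // 60)
    for _ in range(1000):  # resample until a sufficiently spaced schedule is found
        offsets = sorted(rng.sample(range(window_minutes), n_prompts))
        gaps = [b - a for a, b in zip(offsets, offsets[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return [waking_start + timedelta(minutes=m) for m in offsets]
    raise ValueError("could not space prompts; relax min_gap_minutes or n_prompts")

if __name__ == "__main__":
    # Hypothetical waking window of 08:00-22:00 with five prompts per day.
    start = datetime(2024, 1, 1, 8, 0)
    end = datetime(2024, 1, 1, 22, 0)
    for t in random_prompt_times(start, end, n_prompts=5, min_gap_minutes=60, seed=42):
        print(t.strftime("%H:%M"))
```

Enforcing a minimum gap keeps prompts from clustering while preserving their unpredictability, which is a common motivation for random rather than fixed-interval sampling.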
Doctors often ask patients to recall recent health experiences, such as pain, fatigue, and quality of life.1 Research has shown, however, that recall is unreliable and rife with inaccuracies and biases.2 Recognition of recall's shortcomings has led to the use of diaries, which are intended to capture experiences close to the time of occurrence, thus limiting recall bias and producing more accurate data.3 The rationale for using diaries would be undermined if patients failed to complete diaries according to protocol. In this study we used a newly developed paper diary that could objectively record when patients made diary entries, in order to compare patients' reported and actual compliance with diary keeping. For comparison, we also used an electronic diary designed to enhance compliance, in order to assess what compliance rates might be achieved.

Methods and results: We recruited 80 adults with chronic pain (pain for >3 hours a day, rated >4 on a 10 point scale) and assigned 40 to keeping a paper diary and 40 to an electronic diary. On satisfying the eligibility criteria, each patient was assigned to the next training session for which he or she was available, regardless of which diary it was for. We conducted one training session for each diary each week, with each paper diary training session matched by time and day of the week to an electronic diary training session. Participants were paid $150 and gave their informed consent; patients given the paper diary were not told that compliance would be recorded electronically.

The paper diary comprised diary cards bound into a DayRunner Organizer binder. The cards contained 20 questions drawn from several common pain instruments and included fields to record the time and date of completion. The diary binders were unobtrusively fitted with photosensors that detected light and recorded when the binder was opened and closed; these were extensively tested and validated. The electronic diary was a Palm computer running software for data collection in clinical trials; it presented identical pain questions via a touch screen and recorded the time and date of entries. This system (invivodata) incorporated several features to maximise compliance, including auditory prompts, and has demonstrated good compliance.4

Patients were instructed to complete daily entries at 10 am, 4 pm, and 8 pm, within 15 minutes of the target times. With the electronic diary, entries could not be initiated outside the designated 30 minute windows. We considered paper diary entries to be compliant if they were made within the 30 minute windows; a more liberal secondary outcome allowed a 90 minute window around the target times. Reported compliance was based on the time and date that patients recorded on their paper diary cards. Actual compliance was based on the electronic record (for paper diaries, the record of binder openings): paper diary entries were deemed compliant if the binder was opened or closed at any point during the target time window. We also assessed "hoarding" with the paper...
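To illustrate the compliance criterion described above, here is a minimal Python sketch that classifies entry timestamps against the 10 am, 4 pm, and 8 pm targets using the 30 minute primary window and the 90 minute secondary window. The helper names and example timestamps are hypothetical; the study itself scored paper diary compliance from binder openings at any point during the window, not from the written timestamps alone.

```python
from datetime import datetime, time, timedelta

# Target entry times from the protocol described above: 10 am, 4 pm, and 8 pm.
TARGETS = [time(10, 0), time(16, 0), time(20, 0)]

def is_compliant(entry: datetime, window_minutes: int = 30) -> bool:
    """Return True if the entry falls inside a window of the given total width
    centred on any target time (a 30 min window means within 15 min of a target)."""
    half = timedelta(minutes=window_minutes / 2)
    for target in TARGETS:
        target_dt = datetime.combine(entry.date(), target)
        if abs(entry - target_dt) <= half:
            return True
    return False

def compliance_rate(entries, window_minutes=30):
    """Fraction of entries meeting the window criterion (hypothetical helper)."""
    if not entries:
        return 0.0
    return sum(is_compliant(e, window_minutes) for e in entries) / len(entries)

if __name__ == "__main__":
    entries = [datetime(2024, 1, 1, 10, 10),   # within the 30 minute window
               datetime(2024, 1, 1, 16, 40)]   # outside 30 min, inside 90 min
    print(compliance_rate(entries, 30))   # 0.5
    print(compliance_rate(entries, 90))   # 1.0
```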
Research and treatment assessments often rely on retrospective recall of events. The accuracy of recall was tested using accounts of smoking lapse episodes from 127 participants who had quit smoking. Lapses and temptations were recorded in near real time on a hand-held computer, and these records were compared with retrospective accounts elicited 12 weeks later, with a focus on recall of lapses in four content domains: mood, activity, episode triggers, and abstinence violation effects. Recall of lapses was quite poor: average kappas for items ranged from 0.18 to 0.27, and mean profile rs assessing recall of the overall pattern of behavior were 0.36, 0.30, 0.33, and 0.44 for the four domains, respectively. In recall, participants overestimated their negative affect and the number of cigarettes they had smoked during the lapse, and their recall was influenced by their current smoking status. The findings suggest caution in the use of recall in research and intervention.
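For readers unfamiliar with the two agreement indices reported above, the following Python sketch shows how item-level Cohen's kappa and a within-person profile correlation between real-time and recalled reports can be computed. The function names and the toy data are illustrative assumptions and are not drawn from the study.

```python
import numpy as np

def cohens_kappa(real_time, recalled):
    """Cohen's kappa for two equal-length sequences of categorical responses."""
    real_time, recalled = np.asarray(real_time), np.asarray(recalled)
    labels = np.unique(np.concatenate([real_time, recalled]))
    p_obs = np.mean(real_time == recalled)
    # Chance agreement from the marginal frequency of each response category.
    p_chance = sum(np.mean(real_time == c) * np.mean(recalled == c) for c in labels)
    return (p_obs - p_chance) / (1 - p_chance)

def profile_r(real_time_profile, recalled_profile):
    """Pearson correlation between one participant's real-time and recalled item profiles."""
    return float(np.corrcoef(real_time_profile, recalled_profile)[0, 1])

if __name__ == "__main__":
    # Toy item-level responses (e.g., 1 = endorsed, 0 = not endorsed).
    print(round(cohens_kappa([1, 0, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0]), 2))
    # Toy mood-item ratings for one participant, real time vs recalled.
    print(round(profile_r([1, 3, 4, 2, 5], [2, 3, 5, 2, 4]), 2))
```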