Affect dynamics are often studied by means of first-order autoregressive (AR) modeling applied to intensive longitudinal data. A key target in these studies is the AR parameter, which is often tied conceptually to regulatory behavior in the affective process. The data are typically gathered using experience sampling methods, which are designed to capture fluctuations in affective variables as they evolve over time in naturalistic settings. In this manuscript, we compare classical time-contingent sampling designs to episode-contingent sampling designs, which initiate sampling when an emotional episode has been signalled. We define emotional episodes as periods during which an affective process strays relatively far from its mean. Compared to time-contingent designs, episode-contingent designs leverage increased affective variability, which can benefit the precision of the ordinary least squares (OLS) AR effect estimator. Using an extensive simulation study, we delineate which characteristics of an episode-contingent design are important to consider and how these characteristics relate to estimation benefits. We conclude that episode-contingent designs can markedly improve the precision of the AR effect estimator, and we discuss several challenges in implementing episode-contingent designs in practice.
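As a rough illustration of the mechanism described in this abstract, the following Python sketch simulates an AR(1) process and compares the sampling variability of the OLS AR estimator under a plain time-contingent design and a simplified episode-contingent rule that retains only occasions whose lagged value lies far from the mean. All parameter values and the retention threshold are illustrative assumptions, not the designs evaluated in the paper.

```python
# Minimal sketch (not the authors' code): compare the precision of the OLS
# AR(1) estimate when pairs are kept at random (time-contingent) versus when
# only pairs with a large lagged deviation from the mean are kept
# (episode-contingent), holding the number of retained pairs equal.
import numpy as np

rng = np.random.default_rng(1)
phi, sigma, n_obs, n_reps = 0.4, 1.0, 200, 1000  # assumed illustrative values

def simulate_ar1(n, phi, sigma):
    # Generate an AR(1) series x_t = phi * x_{t-1} + noise
    eps = rng.normal(scale=sigma, size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def ols_ar(y, z):
    # OLS estimate of the AR effect: regress outcome y = x_t on predictor z = x_{t-1}
    return np.sum(z * y) / np.sum(z * z)

est_time, est_episode = [], []
for _ in range(n_reps):
    x = simulate_ar1(n_obs, phi, sigma)
    # Episode-contingent: keep pairs whose lagged value is relatively far from
    # the mean (threshold of 1.0 is an arbitrary illustrative choice)
    keep = np.abs(x[:-1]) > 1.0
    est_episode.append(ols_ar(x[1:][keep], x[:-1][keep]))
    # Time-contingent comparison with the same number of retained occasions
    idx = rng.choice(n_obs - 1, size=int(keep.sum()), replace=False)
    est_time.append(ols_ar(x[1:][idx], x[:-1][idx]))

print("SD of OLS AR estimate, time-contingent:   ", round(float(np.std(est_time)), 4))
print("SD of OLS AR estimate, episode-contingent:", round(float(np.std(est_episode)), 4))
```

Under these assumptions, the episode-contingent estimates show a smaller standard deviation because the retained lagged values have greater variability, which is the precision benefit the abstract refers to.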
In order to shed light on the dynamics of affective processes, researchers often collect intensive longitudinal (IL) data by asking people to repeatedly report on their momentary affect in daily life. Two important decisions when designing an IL study are the number of persons and the number of measurement occasions to be included. These sample size decisions are ideally based on statistical power considerations. Statistical power is a function of the statistical model used to test the research question of interest. A widely used statistical technique for analyzing IL data is multilevel modeling. For multilevel modeling, power analyses are conducted by specifying the population values of all model parameters. This is a daunting task, given the large number of parameters in multilevel models. Therefore, these values are usually set based on data from previous studies. As we show in this paper, using previous studies to determine the parameter values for an a priori power analysis is problematic if one does not account for differences in study design and preprocessing choices. Regarding study design, it is common practice in affect dynamics research to combine an ad hoc selection of specific emotion items into affect scores, but which items are used and how they are combined differs across studies. Regarding preprocessing, removing measurement occasions because of delayed responses, or removing participants because of low compliance, changes the data used to determine the parameter values for computing statistical power. In this paper, we demonstrate how to investigate the effect of different operationalizations of affect and of different preprocessing choices on power-based sample size recommendations, using data from a recent study. This approach paves the way for more thoughtful and robust sample size decisions.
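To make the dependence of power on assumed parameter values concrete, here is a minimal Python sketch of simulation-based power for the fixed slope in a two-level random-intercept model, varying the assumed residual variance (as might result from different affect operationalizations or preprocessing rules). The model, parameter values, and sample sizes are illustrative assumptions, not those of the study discussed in the abstract.

```python
# Minimal sketch: Monte Carlo power for a level-1 fixed effect in a
# random-intercept multilevel model; changing the assumed residual SD
# (e.g., single item vs. averaged affect score) changes the power estimate
# and hence the sample size recommendation. All values are assumptions.
import warnings
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

warnings.filterwarnings("ignore")  # silence occasional convergence warnings
rng = np.random.default_rng(7)

def simulated_power(n_persons, n_occasions, beta=0.2, tau=0.5,
                    sigma_resid=1.0, n_reps=100, alpha=0.05):
    hits = 0
    for _ in range(n_reps):
        person = np.repeat(np.arange(n_persons), n_occasions)
        u = rng.normal(scale=tau, size=n_persons)        # random intercepts
        x = rng.normal(size=n_persons * n_occasions)     # level-1 predictor
        y = u[person] + beta * x + rng.normal(scale=sigma_resid, size=x.size)
        df = pd.DataFrame({"y": y, "x": x, "person": person})
        fit = smf.mixedlm("y ~ x", df, groups=df["person"]).fit()
        hits += fit.pvalues["x"] < alpha
    return hits / n_reps

# Same design, two different assumed residual SDs
for sigma in (1.0, 1.5):
    power = simulated_power(n_persons=40, n_occasions=30, sigma_resid=sigma)
    print(f"assumed residual SD = {sigma}: estimated power = {power:.2f}")
```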
Researchers increasingly use N=1 experience sampling method (ESM) studies to examine short-term dynamic processes that evolve within single individuals. The processes of interest are typically captured by fitting a VAR(1) model to the resulting data. A crucial question is how to perform sample size planning, that is, how to decide on the number of measurement occasions needed. Although the most popular approach is to perform a power analysis, this approach has a number of limitations. Therefore, we propose to consider out-of-sample predictive accuracy as a sample size planning criterion. This criterion quantifies how well the estimated VAR(1) model will predict unseen data from the same individual. To this end, we propose a new simulation-based sample size planning method, called Predictive Accuracy Analysis (PAA), and an associated Shiny App. This approach makes use of a novel predictive accuracy metric that accounts for the multivariate nature of the prediction problem. We showcase how the values of the different VAR(1) model parameters impact power- and predictive accuracy-based sample size recommendations, using simulated data sets and real data applications. The range of recommended sample sizes is lower for predictive accuracy analysis than for power analysis, and for real data the median recommended sample size is lower for the former.
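The following Python sketch illustrates the general idea of out-of-sample predictive accuracy as a planning criterion: fit a VAR(1) model by OLS on a training segment of a simulated N=1 series and score one-step-ahead predictions on held-out occasions. The transition matrix, train/test split, and per-variable MSE used here are illustrative assumptions; this is not the PAA metric or Shiny App from the paper.

```python
# Minimal sketch: simulate a bivariate VAR(1) series, estimate the transition
# matrix by OLS on a training segment, and evaluate one-step-ahead prediction
# error on the held-out segment. All values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.3, 0.1],
              [0.0, 0.4]])          # assumed VAR(1) transition matrix
n_total, n_train = 150, 100

# Simulate the bivariate VAR(1) process
x = np.zeros((n_total, 2))
for t in range(1, n_total):
    x[t] = A @ x[t - 1] + rng.normal(size=2)

# OLS fit of the transition matrix on the training portion: x_t ≈ A x_{t-1}
Z, Y = x[:n_train - 1], x[1:n_train]
A_hat = np.linalg.lstsq(Z, Y, rcond=None)[0].T

# One-step-ahead predictions for the held-out occasions
pred = x[n_train - 1:n_total - 1] @ A_hat.T
err = x[n_train:] - pred
print("Out-of-sample MSE per variable:", (err ** 2).mean(axis=0))
```

Repeating this for different training lengths shows how predictive accuracy stabilizes with more occasions, which is the intuition behind using it as a sample size planning criterion.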
Parental burnout is a growing subject of research, but thus far this research has not examined whether the features of parental burnout fluctuate over time. Moreover, parenting and parental burnout are inextricable from their family context. Therefore, a critical next step involves examining how parental burnout features temporally unfold and interact with the ever-changing family environment. To do so, we developed an 11-item experience sampling methodology (ESM) tool to measure self-reported parental burnout features (specifically emotional exhaustion, emotional distance, and feeling fed up), as well as partner relationship, children’s behavior, behavior toward children, social support, and perceived resources. We conducted two two-week periods of ESM data collection (one with French-language ESM items; n = 9; one with English-language ESM items; n=23) and one eight-week data collection with the French-language ESM items (n=50). We collected the ESM data using formr, an open-source platform, and we provide open access to all materials (including a formr template, allowing free use of the assessment tool), analysis code, and data: https://osf.io/s2yv5/. Participants’ responses indicated sufficient within-person variability (assessed via intraclass correlation) and support for convergent and discriminant validity (assessed by correlating aggregated ESM responses with retrospective questionnaire scores on parental burnout, depression, anxiety, and stress). Lastly, we found that the three parental burnout ESM items had high between-subject reliability and moderate within-subject reliability. Participating parents found the ESM survey easy to answer and not burdensome. Finally, we discuss how assessing parental burnout over time can help usher parental burnout research and treatment forward.
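As an illustration of the variability assessment mentioned in this abstract, the following Python sketch estimates the intraclass correlation (ICC) of a simulated ESM item from an intercept-only multilevel model; the data, variable names, and parameter values are hypothetical and not the study's materials or analysis code.

```python
# Minimal sketch: estimate the ICC of a (simulated) ESM item from an
# intercept-only multilevel model; 1 - ICC indexes the share of within-person
# variability. Variable names and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_parents, n_beeps = 50, 40
person = np.repeat(np.arange(n_parents), n_beeps)
# Simulated "emotional exhaustion" ratings: person means plus momentary noise
exhaustion = rng.normal(scale=0.8, size=n_parents)[person] + rng.normal(size=person.size)
df = pd.DataFrame({"exhaustion": exhaustion, "person": person})

fit = smf.mixedlm("exhaustion ~ 1", df, groups=df["person"]).fit()
tau2 = float(fit.cov_re.iloc[0, 0])   # between-person variance
sigma2 = float(fit.scale)             # within-person (residual) variance
icc = tau2 / (tau2 + sigma2)
print(f"ICC = {icc:.2f}; within-person share = {1 - icc:.2f}")
```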