In response to the coronavirus disease 2019 (COVID-19) global health pandemic, many employees transitioned to remote work, which included remote meetings. With this sudden shift, workers and the media began discussing videoconference fatigue, a potentially new phenomenon of feeling tired and exhausted that is attributed to a videoconference. In the present study, we examine the nature of videoconference fatigue, when this phenomenon occurs, and what videoconference characteristics are associated with fatigue, using a mixed-methods approach. Thematic analysis of qualitative responses indicates that videoconference fatigue exists, often in near temporal proximity to the videoconference, and is affected by various videoconference characteristics. Quantitative data were collected each hour during five workdays from 55 employees who were working remotely because of the COVID-19 pandemic. Latent growth modeling results suggest that videoconferences at different times of the day are related to deviations in employee fatigue beyond what is expected based on typical fatigue trajectories. Results from multilevel modeling of 279 videoconference meetings indicate that turning off the microphone and feeling greater group belongingness are related to lower postvideoconference fatigue. Additional analyses suggest that higher group belongingness is the most consistent protective factor against videoconference fatigue. Such findings have immediate practical implications for workers and organizations as they continue to navigate the still relatively new terrain of remote work.
The psychometric soundness of measures has been a central concern of articles published in the Journal of Applied Psychology (JAP) since the inception of the journal. At the same time, it is not clear that investigators and reviewers prioritize psychometric soundness to a degree that would allow one to have sufficient confidence in conclusions regarding constructs. The purposes of the present article are to (a) examine current scale development and evaluation practices in JAP; (b) compare these practices to recommended practices, previous practices, and practices in other journals; and (c) use these comparisons to make recommendations for reviewers, editors, and investigators regarding the creation and evaluation of measures, including Excel-based calculators for various indices. Finally, given that model complexity appears to have increased the need for short scales, we offer a user-friendly R Shiny app (https://orgscience.uncc.edu/about-us/resources) that identifies the subset of items that maximizes a variety of psychometric criteria rather than merely maximizing alpha.
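The core selection idea can be sketched in a few lines. The Python snippet below is a rough illustration only, not the authors' R Shiny app: it exhaustively searches the item subsets of a simulated scale for the one with the highest Cronbach's alpha, the single-criterion strategy the app is designed to move beyond by weighing several psychometric criteria at once. All names and data here are hypothetical.

```python
# Minimal sketch (not the authors' R Shiny app): brute-force search for the
# k-item subset that maximizes Cronbach's alpha on simulated responses.
from itertools import combinations
import numpy as np

def cronbach_alpha(data: np.ndarray) -> float:
    """data: respondents x items matrix of item scores."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def best_subset_by_alpha(data: np.ndarray, k: int):
    """Return the k items (column indices) with the highest alpha, found by exhaustive search."""
    best_items, best_alpha = None, -np.inf
    for items in combinations(range(data.shape[1]), k):
        a = cronbach_alpha(data[:, list(items)])
        if a > best_alpha:
            best_items, best_alpha = items, a
    return best_items, best_alpha

# Simulated responses to a hypothetical 8-item scale with one underlying factor
rng = np.random.default_rng(0)
true_score = rng.normal(size=(300, 1))
responses = true_score + rng.normal(scale=1.0, size=(300, 8))
print(best_subset_by_alpha(responses, k=4))
```

A real short form would also need to preserve content coverage and validity, which is why optimizing alpha alone, as in this sketch, is the very practice the article cautions against.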
Although interaction hypotheses are increasingly common in our field, many recent articles point out that authors often have difficulty justifying them. The purpose of this article is to describe a particular type of interaction: the restricted variance (RV) interaction. The essence of the RV interaction is that, as the value of one variable in a system changes, certain values of another variable in the system become less plausible, thus restricting its variance. This, in turn, influences relationships between that variable and other variables. These types of interactions are quite common, even if they are not recognized as RV interactions, and they exist at every level of analysis. The advantage of the RV interaction is that, as compared with other interaction types, it is relatively simple to justify. The different forms of RV interaction do, however, contain complexities of which a researcher must be aware. This article explains and illustrates the forms that RV interactions can take and their often counterintuitive implications. It also describes how one should go about testing them. Our intention is to help researchers strengthen and focus their interaction arguments.
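To make the mechanism concrete, the simulation below (our illustration, not material from the article) restricts the variance of a predictor X as a hypothetical moderator Z increases; the same underlying X-Y slope then yields a noticeably weaker observed correlation in the high-Z subgroup, which is the signature pattern of an RV interaction.

```python
# Hedged illustration of a restricted variance (RV) interaction: higher values
# of the moderator Z shrink the plausible range of X, and the restricted
# variance attenuates the observed X-Y correlation even though the underlying
# slope is constant. All variable names and parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.uniform(0.0, 1.0, n)              # moderator that constrains the system
x_sd = 1.0 - 0.8 * z                      # higher Z -> less variance in X
x = rng.normal(0.0, x_sd)                 # X drawn with Z-dependent spread
y = 0.5 * x + rng.normal(0.0, 1.0, n)     # identical X-Y slope at every Z

for lo, hi in [(0.0, 0.25), (0.75, 1.0)]: # compare low-Z and high-Z subgroups
    mask = (z >= lo) & (z < hi)
    r = np.corrcoef(x[mask], y[mask])[0, 1]
    print(f"Z in [{lo:.2f}, {hi:.2f}): sd(X) = {x[mask].std():.2f}, r(X, Y) = {r:.2f}")
```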
Structural equation modeling (SEM) has been a staple of the organizational sciences for decades. It is common to report degrees of freedom (df) for tested models, and it should be possible for a reader to recreate df for any model in a published paper. We reviewed 784 models from 75 papers published in top journals in order to understand df-related reporting practices and discover how often reported df matched those that we computed based on the information given in the papers. Among other things, we found that both df and the information necessary to compute them were available about three-quarters of the time. We also found that computed df matched reported df only 62% of the time. Discrepancies were particularly common in structural (as opposed to measurement) models and were often large in magnitude. This means that the models for which fit indices are offered are often different from those described in published papers. Finally, we offer an online tool for computing df and recommendations, the Degrees of Freedom Reporting Standards (DFRS), for authors, reviewers, and editors.
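For orientation, the arithmetic behind df is straightforward, even though the review shows it is often misreported. The sketch below is a minimal stand-in, not the authors' online tool: it counts the unique variances and covariances among p observed variables (plus p means if the mean structure is modeled) and subtracts the number of freely estimated parameters.

```python
# Minimal df check for a covariance-structure model (not the authors' tool):
# data points = p(p + 1)/2 unique variances/covariances (+ p means if modeled),
# df = data points - freely estimated parameters.

def sem_df(p: int, free_params: int, mean_structure: bool = False) -> int:
    data_points = p * (p + 1) // 2 + (p if mean_structure else 0)
    return data_points - free_params

# Example: a one-factor CFA with 6 indicators and the factor variance fixed to 1
# estimates 6 loadings + 6 residual variances = 12 free parameters.
print(sem_df(p=6, free_params=12))  # 21 - 12 = 9 degrees of freedom
```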