One challenge in using multilevel models is determining how to report the amount of explained variance. In multilevel models, explained variance can be reported for each level or for the total model. Existing measures have been based primarily on the reduction of variance components across models. However, these measures have not been reported consistently because they have some undesirable properties. The present study is one of the first to evaluate the accuracy of these measures using Monte Carlo simulations. In addition, a measure based on the full partitioning of variance in multilevel models was examined. All measures except the Level 2 explained variance measure performed well across our simulated conditions.
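To make the variance-reduction idea concrete, here is a minimal sketch of the classic level-specific pseudo-R² computed from the drop in variance components between a null (intercept-only) model and a model with a predictor. It assumes a two-level random-intercept model fit with statsmodels; the data, variable names, and effect sizes are hypothetical, and this is only one of the measures the study evaluates, not the study's own code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical two-level data: observations (Level 1) nested in schools (Level 2).
rng = np.random.default_rng(0)
n_groups, n_per = 50, 20
school = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0, 1.0, n_groups)[school]       # school random intercepts
x = rng.normal(size=n_groups * n_per)          # Level 1 predictor
y = 2.0 + 0.5 * x + u + rng.normal(size=n_groups * n_per)
df = pd.DataFrame({"y": y, "x": x, "school": school})

# The null model provides the baseline variance components.
null = smf.mixedlm("y ~ 1", df, groups="school").fit()
full = smf.mixedlm("y ~ x", df, groups="school").fit()

sigma2_null, tau2_null = null.scale, float(null.cov_re.iloc[0, 0])
sigma2_full, tau2_full = full.scale, float(full.cov_re.iloc[0, 0])

# Reduction-in-variance pseudo-R^2 at each level.
r2_level1 = (sigma2_null - sigma2_full) / sigma2_null
r2_level2 = (tau2_null - tau2_full) / tau2_null
print(f"Level 1 R^2: {r2_level1:.3f}, Level 2 R^2: {r2_level2:.3f}")
```

Note that these pseudo-R² values can come out negative when adding a predictor increases an estimated variance component, one of the undesirable properties such measures are known for.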
This paper examined the amount of bias in the standard errors of fixed effects when the random part of a multilevel model is misspecified. Study 1 examined the effects of misspecification for a model with one Level 1 predictor. Results indicated that misspecifying random slope variance as fixed had a moderate effect on the standard errors of the fixed effects, a greater effect than misspecifying fixed slopes as random. In Study 2, a second Level 1 predictor was added, allowing examination of the effects of misspecifying the slope variance of one predictor on the standard errors for the fixed effects of the other predictor. Results indicated that only the standard errors of the coefficients relevant to that predictor were affected and that the effect size of the bias could be considered moderate to large. These results suggest that researchers can use a piecemeal approach to testing multilevel models with random effects.
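The misspecification contrast in Study 1 can be sketched as follows: fit the same fixed effects twice, once allowing the Level 1 slope to vary randomly across groups and once treating it as fixed, then compare the resulting fixed-effect standard errors. This is only an illustration using statsmodels MixedLM on simulated data; the variable names and generating values are assumptions, not the paper's design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data in which the slope for x1 truly varies across groups.
rng = np.random.default_rng(1)
n_groups, n_per = 40, 25
g = np.repeat(np.arange(n_groups), n_per)
u0 = rng.normal(0, 1.0, n_groups)[g]           # random intercepts
u1 = rng.normal(0, 0.5, n_groups)[g]           # random slopes for x1
x1 = rng.normal(size=n_groups * n_per)
y = 1.0 + (0.5 + u1) * x1 + u0 + rng.normal(size=n_groups * n_per)
df = pd.DataFrame({"y": y, "x1": x1, "g": g})

# Correctly specified: random intercept and random slope for x1.
correct = smf.mixedlm("y ~ x1", df, groups="g", re_formula="~x1").fit()
# Misspecified: the x1 slope is treated as fixed (random intercept only).
misspec = smf.mixedlm("y ~ x1", df, groups="g").fit()

# Ignoring true slope variance biases the fixed-effect standard error,
# the kind of effect the study quantifies.
print("SE(x1), correct model:", correct.bse["x1"])
print("SE(x1), misspecified: ", misspec.bse["x1"])
```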
Person reliability parameters (PRPs) model temporary changes in individuals' attribute level perceptions when responding to self-report items (higher PRPs represent less fluctuation). PRPs could be useful for measuring careless responding and traitedness. However, it is unclear how well current procedures for estimating PRPs recover parameter values. This study assesses these procedures in terms of mean error (ME), average absolute difference (AAD), and reliability, using simulated data with known values. Several prior distributions for PRPs were compared across a number of conditions. Overall, our results revealed little difference between using the χ and lognormal distributions as priors for estimated PRPs. Both distributions produced estimates with reasonable levels of ME; however, the AAD of the estimates was high. AAD improved slightly as the number of items increased, suggesting that adding items would ameliorate this problem. Similarly, a larger number of items was necessary to produce reasonable levels of reliability. Based on our results, several conclusions are drawn and implications for future research are discussed.
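The recovery criteria named in the abstract are straightforward to compute once true and estimated PRPs are in hand. Below is a small sketch; defining reliability as the squared correlation between true and estimated values is an assumption (the study may define it differently), and the simulated values are purely illustrative.

```python
import numpy as np

def recovery_metrics(true, est):
    """Recovery metrics of the kind the abstract uses: mean error (ME),
    average absolute difference (AAD), and reliability, taken here as the
    squared true-estimate correlation (an assumption, not the paper's
    definition)."""
    true, est = np.asarray(true), np.asarray(est)
    me = np.mean(est - true)            # signed bias
    aad = np.mean(np.abs(est - true))   # average absolute difference
    rel = np.corrcoef(true, est)[0, 1] ** 2
    return me, aad, rel

# Toy illustration with simulated "true" PRPs and noisy estimates.
rng = np.random.default_rng(2)
true_prp = rng.lognormal(mean=0.0, sigma=0.5, size=500)
est_prp = true_prp + rng.normal(0, 0.3, size=500)
print(recovery_metrics(true_prp, est_prp))
```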