Maximum likelihood estimation of generalized linear mixed models (GLMMs) is difficult due to marginalization of the random effects. Computing derivatives of a fitted GLMM's likelihood is also difficult, especially because the derivatives are not by-products of popular estimation algorithms. In this paper, we first describe theoretical results related to GLMM derivatives, along with a quadrature method to efficiently compute the derivatives, focusing on fitted lme4 models with a single clustering variable. We describe how psychometric results related to item response models are helpful for obtaining the derivatives, as well as for verifying the derivatives' accuracy. We then provide a tutorial on the many possible uses of these derivatives, including robust standard errors, score tests of fixed effect parameters, and likelihood ratio tests of non-nested models. The derivative computation methods and applications described in the paper are all available in easily obtained R packages.
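As a concrete illustration of one such use, the minimal R sketch below shows how cluster-robust (sandwich) standard errors might be obtained from a fitted lme4 GLMM. The choice of packages (merDeriv, sandwich) and the example data are assumptions made for illustration, not a statement of the paper's own implementation, and the sketch relies on merDeriv supplying estfun()/bread() methods for glmerMod objects (version-dependent).

```r
## Minimal sketch, not the authors' exact workflow: robust standard errors
## for a GLMM with a single clustering variable, assuming merDeriv provides
## the estfun()/bread() methods needed by sandwich() for glmerMod objects.
library(lme4)
library(merDeriv)   # derivative methods for merMod objects (assumed available)
library(sandwich)   # generic sandwich() estimator

## Fit a logistic GLMM with one clustering variable (herd)
data(cbpp, package = "lme4")
fit <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
             data = cbpp, family = binomial)

## Cluster-robust covariance matrix: sandwich() combines bread(fit) with the
## cross-products of the per-cluster scores returned by estfun(fit)
vc_robust <- sandwich(fit)
sqrt(diag(vc_robust))   # robust standard errors for the estimated parameters
```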
OpenFOAM is an attractive computational fluid dynamics (CFD) solver for evaluating new turbulence models because of its open-source nature and its suite of standard model implementations. Before interpreting results obtained with a new turbulence model, a performance baseline for the OpenFOAM solver and the existing models is required. In the current study, we assess the accuracy of simulation results obtained with standard models for the Reynolds-averaged Navier-Stokes equations implemented in the OpenFOAM incompressible solver. Two planar (two-dimensional mean flow) benchmark cases from the AIAA Turbulence Model Benchmarking Working Group are considered: the boundary layer on a zero-pressure-gradient flat plate and a bump-in-channel flow. OpenFOAM results are compared with those from the NASA CFD codes CFL3D and FUN3D. The sensitivity of the simulation results to grid refinement, the linear pressure solver, compressibility effects, and model implementation is analyzed. Testing is conducted using the standard one-equation Spalart-Allmaras model, Wilcox's 2006 two-equation k-ω model, and the 1994 SST turbulence model. Simulations using wall-resolved (low-Reynolds-number) formulations are considered.
It is well known that, in traditional SEM applications, a scale must be set for each latent variable: typically, either the latent variance or a factor loading is fixed to one. While this has no impact on fit metrics under ML estimation, it can lead to varying Bayesian model comparison metrics because different prior distributions are implied under each parameterization. This is a problem because a researcher could artificially favor their preferred model simply by changing the identification constraint. Using a single-factor CFA as a motivating example, we first show that Bayesian model comparison metrics can systematically change depending on the constraint used. We then study principled methods for setting the scale of the latent variable that stabilize the model comparison metrics. These methods involve (i) placing priors on ratios of factor loadings, as opposed to individual loadings, and (ii) the use of effect coding. We illustrate the methods via simulation and application.
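To make the identification choices concrete, the sketch below writes a single-factor CFA under three parameterizations (marker variable, fixed latent variance, and effect coding) in lavaan-style syntax with blavaan's bcfa(). The variable names y1-y4 and the data frame `dat` are hypothetical placeholders, and this is offered only as an illustrative sketch under those assumptions, not the paper's own code.

```r
## Illustrative sketch only: one single-factor CFA, three scale-setting choices.
## y1-y4 and `dat` are placeholders, not taken from the abstract.
library(blavaan)   # Bayesian SEM via lavaan-style model syntax

m1 <- ' f =~ y1 + y2 + y3 + y4 '   # marker variable: first loading fixed to 1

## Same model, scale set instead by fixing the latent variance to 1
fit_marker <- bcfa(m1, data = dat)
fit_stdlv  <- bcfa(m1, data = dat, std.lv = TRUE)

## Effect coding: all loadings free but constrained to average 1
## (support for this arithmetic constraint may depend on the blavaan
##  version and MCMC target; treated here as an assumption)
m_effect <- '
  f  =~ NA*y1 + l1*y1 + l2*y2 + l3*y3 + l4*y4
  l1 == 4 - l2 - l3 - l4
'
fit_effect <- bcfa(m_effect, data = dat)

## Bayesian fit/comparison metrics (e.g., WAIC, LOO) can then be inspected
## across the three parameterizations
fitMeasures(fit_marker)
```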