Discusses how current goodness-of-fit indices fail to assess parsimony, and hence disconfirmability, of a model and are insensitive to misspecifications of causal relations (a) among latent variables when the measurement model with many indicators is correct and (b) when causal relations corresponding to free parameters expected to be nonzero turn out to be zero or near zero. A discussion of the philosophy of parsimony elucidates relations of parsimony to parameter estimation, disconfirmability, and goodness of fit. The AGFI in LISREL is rejected. A method of adjusting goodness-of-fit indices by a parsimony ratio is described. Also discusses less biased estimates of goodness of fit and a relative normed-fit index for testing the fit of the structural model exclusive of the measurement model.
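As a rough illustration (the particular definitions below are an assumption for exposition, not quoted from the article), parsimony adjustments of this kind are commonly formed by weighting a fit index by the ratio of the tested model's degrees of freedom to those of the null model:

\[
\mathrm{PR} = \frac{d_{\text{model}}}{d_{\text{null}}}, \qquad \mathrm{PFI} = \mathrm{PR} \times \mathrm{FI}.
\]

For example, a model with NFI = .95 that retains only half of the null model's degrees of freedom (PR = .50) would receive a parsimony-adjusted value of about .95 × .50 = .475, penalizing fit achieved by freeing many parameters.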
Numerous procedures have been suggested for investigating behaviors across situations for consistency versus situational specificity. It is proposed here that Confirmatory Factor Analysis (CFA) may provide a useful addition to these procedures. To illustrate the process, a CFA model based on simulated data is presented and tested. The results of this simulation are employed to make recommendations for conducting CFA to test for cross-situational consistency.
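A minimal sketch of this kind of analysis is given below. It is an assumption-laden illustration, not the article's own code: it simulates three behavioral indicators measured in two situations, fits a two-factor CFA with the semopy package (the article does not specify software), and reads cross-situational consistency off the estimated latent covariance.

```python
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

# Simulate three behavioral indicators observed in two situations.
# A latent correlation of 0.8 represents substantial cross-situational consistency.
rng = np.random.default_rng(42)
n = 500
latent = rng.multivariate_normal(mean=[0.0, 0.0],
                                 cov=[[1.0, 0.8], [0.8, 1.0]],
                                 size=n)
loadings = np.array([0.9, 0.8, 0.7])
data = {}
for j, lam in enumerate(loadings, start=1):
    noise_sd = np.sqrt(1 - lam**2)
    data[f"s1_b{j}"] = lam * latent[:, 0] + rng.normal(scale=noise_sd, size=n)
    data[f"s2_b{j}"] = lam * latent[:, 1] + rng.normal(scale=noise_sd, size=n)
df = pd.DataFrame(data)

# Two-factor CFA: one latent behavior factor per situation; the estimated
# factor covariance indexes consistency across the two situations.
desc = """
Situation1 =~ s1_b1 + s1_b2 + s1_b3
Situation2 =~ s2_b1 + s2_b2 + s2_b3
Situation1 ~~ Situation2
"""
model = Model(desc)
model.fit(df)
print(model.inspect())    # parameter estimates, including the factor covariance
print(calc_stats(model))  # chi-square and descriptive fit indices
```

A natural follow-up, consistent with the logic described in the abstract, is to compare this two-factor model against a one-factor alternative; a negligible loss of fit for the one-factor model would favor consistency over situational specificity.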
In this paper we argue that standard calls for explainability that focus on the epistemic inscrutability of black-box machine learning models may be misplaced. If we presume, for the sake of this paper, that machine learning can be a source of knowledge, then it makes sense to wonder what kind of justification it involves. How do we reconcile, on the one hand, the seeming justificatory black box with, on the other, the observed widespread adoption of machine learning? We argue that, in general, people implicitly adopt reliabilism regarding machine learning. Reliabilism is an epistemological theory of epistemic justification according to which a belief is warranted if it has been produced by a reliable process or method [18]. We then suggest that, in certain high-stakes domains with moral consequences, reliabilism does not provide another kind of necessary justification: moral justification. In such cases, where model deployments require moral justification, reliabilism is not sufficient; justifying deployment instead requires establishing robust human processes as a moral "wrapper" around machine outputs. Finally, we offer cautions relevant to the (implicit or explicit) adoption of the reliabilist interpretation of machine learning.