Educational assessments, specifically standardized and normalized exams, owe most of their foundations to psychological test theory in psychometrics. While the theoretical assumptions of these practices are widespread and relatively uncontroversial in the testing community, at least two are philosophically and mathematically suspect and have troubling implications for education. Assumption 1 is that repeated assessment measures, aggregated into an arithmetic mean, represent some real, stable, quantitative psychological trait or ability plus some error. Assumption 2 is that aggregated, group-level educational data collected from assessments can be interpreted to make inferences about a given individual person over time without explicit justification. It is argued that the former assumption cannot be taken for granted; it is also argued that, while it is typically attributed to 20th-century thought, the assumption in a rigorous form can be traced back at least to the 1830s via an unattractive Platonistic statistical thesis offered by one of the founders of the social sciences, the Belgian mathematician Adolphe Quetelet (1796–1874). While contemporary research has moved away from using his work directly, it is demonstrated that cognitive psychology still preserves assumption 1, which is increasingly challenged by current paradigms that cast human cognition as a dynamical, complex system. However, how to deal with assumption 1, and whether it is broadly justified, is left as an open question. It is then argued that assumption 2 is justified only when assessments have ergodic properties, a criterion rarely met in education; specifically, some forms of normalized standardized exams are intrinsically non-ergodic and should be considered invalid for saying much about individual students and their capabilities.
The article closes with a call for the introduction of dynamical mathematics into educational assessment at a conceptual level (e.g., through Bayesian networks), the critical analysis of several key psychological testing assumptions, and the introduction of dynamical language into philosophical discourse. Each of these prima facie distinct areas ought to inform the others more closely in educational studies.
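The ergodicity point above can be made concrete with a toy simulation. In the hypothetical model below (an illustration, not taken from the article), each student's ability drifts over time as a random walk, so individual score trajectories are non-stationary. The group nevertheless looks stable at every administration: the ensemble mean barely moves, while individual time averages scatter widely. In a non-ergodic process like this, group-level statistics say little about any one student.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_tests = 1000, 50

# Toy model (assumed for illustration): each student's "ability" drifts
# as a random walk, so individual trajectories are non-stationary even
# though the group looks stable at every time point.
drift = rng.normal(0, 0.5, size=(n_students, n_tests)).cumsum(axis=1)
scores = 100 + drift + rng.normal(0, 1, size=(n_students, n_tests))

# Ensemble statistic: average over students at each fixed test.
ensemble_means = scores.mean(axis=0)   # hovers near 100 at every test
# Time statistic: average over tests for each fixed student.
time_means = scores.mean(axis=1)       # scatters far from 100

print(ensemble_means.std(), time_means.std())
```

Because the ensemble average is stable while individual time averages are not, the two kinds of mean are not interchangeable, which is exactly the inference assumption 2 requires.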
Much excitement has surrounded the idea of ‘big data’ and its potential to drive science into a new epistemological era–one of data-intensive exploration. However, open questions remain about how to interpret results derived from such massive and nearly complete data sets. One view of big data, called here the ‘End of Theory’ (EOT) view, holds that identifying statistical patterns in large data sets is sufficient for generating scientific results. The present paper demonstrates that EOT is untenable in at least one big data environment in neuroscience: Granger causality analysis (GC). The paper systematically outlines the foundations of GC and its neural correlate, Granger-Geweke causality analysis (GGC), by drawing on statistics and information theory. In doing so, it introduces new terminology that clarifies the role of GGC in the era of big data and establishes a standard for result interpretation. Specifically, the “undecidability property of neuroimaging” (UPN) is introduced. The need for UPN is demonstrated by showing that the concept resolves an existing conflict in the literature that occurred in the late 2010s between Barrett _et al._ (2018a [https://doi.org/10.1073/pnas.1714497115]; 2018b [https://doi.org/10.1016/j.neuroimage.2018.05.067]) and Stokes and Purdon (2017a [https://arxiv.org/abs/1709.10248]; 2017b [https://doi.org/10.1073/pnas.1704663114]). UPN is then applied to a current research thread in cognitive neuroscience to develop a standard for interpreting positive and null results from GGC during big data ‘exploratory analyses.’
Artifact subspace reconstruction (ASR) is an automatic artifact rejection method that can effectively remove transient or large-amplitude artifacts from electroencephalographic (EEG) data. There is little systematic evidence on effective parameter choices for ASR in real EEG data, and no existing study has evaluated ASR’s performance in functional connectivity analysis, such as renormalized partial directed coherence (rPDC). This paper systematically evaluates ASR on 31 EEG recordings taken during a source episodic memory retrieval task. Independent component analysis (ICA) and an independent component classifier, ICLabel, are applied to separate artifacts from brain signals and quantitatively assess the effectiveness of ASR. Effectiveness was quantified on the following metrics: the number of dipolar independent components, the model order for multivariate autoregressive modeling, and the number of preserved trials. Results showed that ASR is as effective as or more effective than manual rejection of artifacts. Contrary to previous literature, the present study shows that the optimal ASR parameter could be substantially higher than 20 to 30 and could be as high as 120, depending on experimenter decisions about what to preserve. As such, the ASR parameter choice should be justified in each study using quantitative preliminary analysis. This is the first study to systematically analyze ASR’s effectiveness in rPDC-based functional connectivity research. NOTE: This is a first draft; several methodological changes may occur at a later time upon further analysis.
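The ASR cutoff parameter discussed above is a standard-deviation threshold relative to clean calibration data. The sketch below is a deliberately simplified toy (it flags noisy windows by RMS amplitude; the actual ASR algorithm performs a PCA/covariance-based subspace reconstruction) meant only to show why the cutoff choice matters: a strict cutoff of 5 catches an injected burst, while a lenient cutoff of 120 leaves everything untouched.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n_sec = 100, 60
clean = rng.normal(0, 1, size=fs * n_sec)   # toy single-channel "EEG"
eeg = clean.copy()
eeg[3000:3050] += 40                        # inject a large-amplitude burst

def flag_windows(signal, calib, k, win=100):
    """Toy stand-in for the ASR cutoff: flag windows whose RMS exceeds
    k standard deviations of a clean calibration segment. Real ASR
    instead reconstructs the flagged subspace rather than rejecting."""
    thresh = k * calib.std()
    rms = np.sqrt(np.mean(signal.reshape(-1, win) ** 2, axis=1))
    return rms > thresh

flagged_strict = flag_windows(eeg, clean[:1000], k=5)
flagged_lenient = flag_windows(eeg, clean[:1000], k=120)
print(flagged_strict.sum(), flagged_lenient.sum())
```

A strict cutoff removes more data (risking loss of brain signal), while a lenient one preserves trials at the cost of residual artifacts, which is why the paper argues the choice should be justified quantitatively per study.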