Undergraduate students who have just completed an introductory statistics course often lack deep understanding of variability and enthusiasm for the field of statistics. This paper argues that by introducing the commonly underemphasized concept of measurement error, students will have a better chance of attaining both. We further present lecture materials and activities that introduce metrology, the science of measurement, which were developed and tested in a pilot study at Iowa State University. These materials explain how to characterize sources of variability in a dataset, in a way that is natural and accessible because the sources of variability are observable. Everyday examples of measurements, such as the amount of gasoline pumped into a car, are presented, and the consequences of variability within those measurements are discussed. To gauge the success of the material, students' initial and subsequent understanding of variability and their attitude toward the usefulness of statistics were analyzed in a comparative study. Questions from the CAOS and ARTIST assessments that pertain to using variability to make
Local structure graph models (LSGMs) describe random graphs and networks as a Markov random field (MRF): each graph edge has a specified conditional distribution depending on explicit neighbourhoods of other graph edges. Centred parameterizations of LSGMs allow for direct control and interpretation of parameters for large- and small-scale structures (e.g., marginal means vs. dependence). We extend this parameterization to account for triples of dependent edges and illustrate the importance of centred parameterizations for incorporating covariates and interpreting parameters. Using an MRF framework, common exponential random graph models are also shown to induce conditional distributions without centred parameterizations and thereby have undesirable features. This work attempts to advance graph models through conditional model specifications with modern parameterizations, covariates and higher-order dependencies.
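As a sketch of what a centred parameterization looks like, consider a binary edge variable \(y_{i}\) with conditional distribution given its neighbourhood \(N_i\). In the centred autologistic form (the general pattern behind centred MRF parameterizations; the specific symbols here are illustrative, not taken from the paper), the dependence term is written in deviations from marginal means:

\[
\operatorname{logit} P(y_i = 1 \mid y_{N_i})
  = \operatorname{logit}(\kappa_i) + \sum_{j \in N_i} \eta_{ij}\,(y_j - \kappa_j),
\]

so that \(\kappa_i\) retains its interpretation as (approximately) the marginal mean of \(y_i\), and \(\eta_{ij}\) separately controls dependence. In an uncentred parameterization, \(\sum_j \eta_{ij} y_j\) enters directly, so changing the dependence parameters also shifts the marginal means, which is the interpretability problem centred parameterizations avoid.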
Phasor measurement units (PMUs) are increasingly important for monitoring the state of an electrical power grid and quickly detecting topology changes caused by events such as lines going down or large loads being dropped. Phasors are complex-valued measurements of voltage and current at various points of generation and consumption. If a line goes down or a load is removed, power flows change throughout the grid according to known physical laws, and the probability distribution of phasor measurements changes accordingly. This paper develops a method to estimate the current topology of a power grid from phasor measurements and considers the design goal of placing PMUs at strategic points in a distribution system to achieve good sensitivity to single-line outages. From a vector of phasor measurements, probabilities are computed corresponding to the scenario that all power lines are operational and to alternate scenarios in which each line goes down individually. These probabilities are functions of the joint distributions of phasor measurements under each possible scenario, obtained through Monte Carlo simulations with random load profiles. We use log-spline densities to estimate marginal distributions of phasor measurements and fold these into a multivariate Gaussian copula to capture important correlations. Sensitivity to outages varies according to which line goes down and where PMUs are placed on the grid. A greedy search algorithm is demonstrated for placing PMUs at locations that provide good sensitivity to single-line outages.
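The copula-based scenario scoring described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the grid size, scenario means, and use of Gaussian kernel density estimates in place of log-spline marginals are all assumptions made for brevity. The structure, however, follows the abstract: fit marginals and a Gaussian-copula correlation from Monte Carlo samples under each scenario, then score a new measurement vector under each fitted joint density.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fit_copula(samples):
    """Fit per-coordinate KDE marginals (standing in for log-splines)
    and a Gaussian-copula correlation matrix from scenario samples."""
    marginals = [stats.gaussian_kde(col) for col in samples.T]
    # Map each margin to standard normal via its empirical CDF (ranks)
    u = np.column_stack([(stats.rankdata(col) - 0.5) / len(col)
                         for col in samples.T])
    z = stats.norm.ppf(u)
    corr = np.corrcoef(z, rowvar=False)
    return marginals, corr

def copula_loglik(x, marginals, corr):
    """Log joint density of measurement vector x: marginal log-densities
    plus the Gaussian-copula correction on the normal scores."""
    u = np.clip([m.integrate_box_1d(-np.inf, xi)
                 for m, xi in zip(marginals, x)], 1e-6, 1 - 1e-6)
    z = stats.norm.ppf(u)
    log_marg = sum(np.log(m.pdf(xi)[0]) for m, xi in zip(marginals, x))
    log_cop = (stats.multivariate_normal.logpdf(z, mean=np.zeros(len(z)),
                                                cov=corr)
               - stats.norm.logpdf(z).sum())
    return log_marg + log_cop

# Hypothetical Monte Carlo training samples for a toy 3-PMU grid:
# correlated measurements when all lines are up, shifted means after an outage.
normal_s = rng.multivariate_normal(
    [1.0, 1.0, 1.0],
    0.01 * np.array([[1.0, 0.6, 0.3], [0.6, 1.0, 0.6], [0.3, 0.6, 1.0]]),
    size=500)
outage_s = rng.multivariate_normal([0.9, 0.8, 1.0], 0.01 * np.eye(3), size=500)

models = {"all lines up": fit_copula(normal_s),
          "line 1 down": fit_copula(outage_s)}

x_new = np.array([0.91, 0.82, 0.99])  # fresh measurement vector
scores = {name: copula_loglik(x_new, *m) for name, m in models.items()}
best = max(scores, key=scores.get)    # most plausible topology scenario
```

In the paper's setting there would be one fitted model per single-line-outage scenario rather than two, and the greedy PMU-placement search would repeat this scoring over candidate PMU locations; this sketch only shows the scoring step.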