The digital age has significantly enhanced our ability to sense our environment and to infer the state of equipment in that environment from the sensed information. Inferring from a set of observations the causal factors that produced them is known as an inverse problem. In this study the sensed information, referred to as the sensor measurement variables, is measurable, while the inferred information, referred to as the target variables, is not. The ability to solve an inverse problem depends on both the quality of the optimisation approach and the relevance of the information used to solve it. In this study, we aim to improve the information available for solving an inverse problem by optimally selecting m sensors from k options. We introduce a heuristic approach to solve this sensor placement optimisation problem, which is not to be confused with the optimisation strategy required to solve the inverse problem itself. The proposed heuristic relies on the rank of the cross-covariance matrix between observations of the target variables and observations of the sensor measurement variables, obtained from simulations with the computational model of an experiment. In addition, the variance between observations of the sensor measurements is considered. A new formulation, the tolerance rank-variance formulation (TRVF), is introduced and investigated numerically on a full-field deterioration problem. The full-field deterioration of a plate is estimated by resolving a parametrisation of the deterioration field for four scenarios. We demonstrate that the optimal sen
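Purely as an illustration of the rank-variance idea described above, and not a reproduction of the paper's TRVF, the following Python sketch greedily selects m sensors from k candidates by maximising the rank of the cross-covariance matrix between simulated target observations and the selected sensor observations, breaking ties on the variance of the candidate sensor; all function and variable names are hypothetical.

```python
import numpy as np

def select_sensors(Y_targets, X_sensors, m, tol=1e-8):
    """Illustrative greedy sensor selection (hypothetical sketch, not the exact TRVF).

    Y_targets : (n_sim, n_targets) simulated observations of the target variables.
    X_sensors : (n_sim, k) simulated observations of the k candidate sensors.
    m         : number of sensors to select.
    tol       : singular-value tolerance used when evaluating the rank of
                the cross-covariance matrix.
    """
    n_targets = Y_targets.shape[1]
    k = X_sensors.shape[1]
    selected, remaining = [], list(range(k))

    for _ in range(m):
        best_j, best_score = None, (-1, -np.inf)
        for j in remaining:
            cols = selected + [j]
            # Cross-covariance block between target observations and the candidate set.
            C_full = np.cov(Y_targets.T, X_sensors[:, cols].T)
            C_cross = C_full[:n_targets, n_targets:]
            rank = np.linalg.matrix_rank(C_cross, tol=tol)
            # Tie-break on the variance of the newly added sensor's observations.
            var_j = X_sensors[:, j].var()
            score = (rank, var_j)
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

In this sketch the rank criterion rewards sensor sets whose simulated observations carry independent information about the targets, while the variance tie-break prefers candidates with larger signal spread; how a tolerance enters the actual TRVF is specified in the paper itself.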
Choosing appropriate step sizes is critical for reducing the computational cost of training large-scale neural network models. Mini-batch sub-sampling (MBSS) is often employed for computational tractability. However, MBSS introduces a sampling error that can manifest as bias or variance in a line search, depending on whether MBSS is performed statically, where the mini-batch is updated only when the search direction changes, or dynamically, where the mini-batch is updated every time the function is evaluated. Static MBSS yields a smooth loss function along a search direction, with low variance but large bias in the estimated "true" (full-batch) minimum. Conversely, dynamic MBSS yields a point-wise discontinuous function along a search direction, whose gradients remain computable with backpropagation, with high variance but lower bias in the estimated "true" (full-batch) minimum. In this study, quadratic line search approximations are used to examine the quality of the function and derivative information available for constructing approximations of dynamic MBSS loss functions. An empirical study is conducted in which function and derivative information are enforced in various ways for the quadratic approximations. The results for various neural network problems show that being selective about which information is enforced helps to reduce the variance of the predicted step sizes.
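As a minimal sketch of the underlying idea, assuming function values and directional derivatives have already been sampled along a search direction under dynamic MBSS, the Python snippet below fits a quadratic to whichever information is supplied and returns its minimiser as the predicted step size. The specific enforcement strategies compared in the study are not reproduced; all names are hypothetical.

```python
import numpy as np

def quadratic_step(alphas, f_vals=None, df_vals=None):
    """Fit f(alpha) ~ a*alpha^2 + b*alpha + c along a search direction and
    return the fitted quadratic's minimiser as the step size.

    Hypothetical sketch: function values (f_vals) and/or directional
    derivatives (df_vals) sampled at step sizes `alphas` are enforced
    through a single least-squares fit.
    """
    rows, rhs = [], []
    if f_vals is not None:
        for a, f in zip(alphas, f_vals):
            rows.append([a**2, a, 1.0])   # a*alpha^2 + b*alpha + c = f(alpha)
            rhs.append(f)
    if df_vals is not None:
        for a, df in zip(alphas, df_vals):
            rows.append([2*a, 1.0, 0.0])  # 2*a*alpha + b = f'(alpha)
            rhs.append(df)
    coeffs, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    a, b, _ = coeffs
    if a <= 0:
        return None        # non-convex fit: no interior minimiser
    return -b / (2 * a)    # minimiser of the fitted quadratic
```

Being selective, in the sense of the abstract, amounts to choosing which of `f_vals` and `df_vals` (and at which sample points) are passed into such a fit, since noisy dynamic-MBSS observations enforced indiscriminately inflate the variance of the predicted step size.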