The Gaussian process is an indispensable tool for spatial data analysts. The onset of the "big data" era, however, has led to the traditional Gaussian process being computationally infeasible for modern spatial data. As such, various alternatives to the full Gaussian process that are more amenable to handling big spatial data have been proposed. These modern methods often exploit low-rank structures and/or multi-core and multi-threaded computing environments to facilitate computation. This study provides, first, an introductory overview of several methods for analyzing large spatial data. Second, this study describes the results of a predictive competition among the described methods as implemented by different groups with strong expertise in the methodology. Specifically, each research group was provided with two training datasets (one simulated and one observed) along with a set of prediction locations. Each group then wrote their own implementation of their method to produce predictions at the given locations, and each implementation was subsequently run on a common computing environment. The methods were then compared in terms of various predictive diagnostics. Supplementary materials regarding implementation details of the methods and code are available for this article online at 10.1007/s13253-018-00348-w.
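The predictive diagnostics used to compare the methods can be illustrated with a short sketch. The code below is not the competition's actual scoring code; it is a minimal Python illustration, assuming Gaussian predictive distributions, of two common diagnostics: root mean squared prediction error (RMSE) and the continuous ranked probability score (CRPS), the latter via its closed form for a normal predictive distribution. All data values are invented.

```python
import math

def _phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def _Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def rmse(y, yhat):
    """Root mean squared prediction error over held-out locations."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS for a N(mu, sigma^2) predictive distribution.

    Lower is better; unlike RMSE, it also rewards well-calibrated
    predictive uncertainty, not just accurate point predictions.
    """
    z = (y - mu) / sigma
    return sigma * (z * (2 * _Phi(z) - 1) + 2 * _phi(z) - 1 / math.sqrt(math.pi))

# Invented held-out truths and one method's predictive means / sds
y_true = [1.2, 0.7, -0.3, 2.1]
pred_mu = [1.0, 0.9, -0.1, 1.8]
pred_sd = [0.5, 0.5, 0.4, 0.6]

score_rmse = rmse(y_true, pred_mu)
score_crps = sum(crps_gaussian(y, m, s)
                 for y, m, s in zip(y_true, pred_mu, pred_sd)) / len(y_true)
```

Ranking each method by such scores on the common prediction locations is the essence of the comparison described above.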
How biological systems such as proteins achieve robustness to ubiquitous perturbations is a fundamental biological question. Such perturbations include errors that introduce phenotypic mutations into nascent proteins during the translation of mRNA. These errors are remarkably frequent. They are also costly, because they reduce protein stability and help create toxic misfolded proteins. Adaptive evolution might reduce these costs of protein mistranslation by two principal mechanisms. The first increases the accuracy of translation via synonymous "high fidelity" codons at especially sensitive sites. The second increases the robustness of proteins to phenotypic errors via amino acids that increase protein stability. To study how these mechanisms are exploited by populations evolving in the laboratory, we evolved the antibiotic resistance gene TEM-1 in Escherichia coli hosts with either normal or high rates of mistranslation. We analyzed TEM-1 populations that evolved under relaxed and stringent selection for antibiotic resistance by single molecule real-time sequencing. Under relaxed selection, mistranslating populations reduce mistranslation costs by reducing TEM-1 expression. Under stringent selection, they efficiently purge destabilizing amino acid changes. More importantly, they accumulate stabilizing amino acid changes rather than synonymous changes that increase translational accuracy. In the large populations we study, and on short evolutionary timescales, the path of least resistance in TEM-1 evolution consists of reducing the consequences of translation errors rather than the errors themselves.

Keywords: molecular evolution | mutational robustness | phenotypic mutations | protein stability | antibiotic resistance
Highlights:
- Flexible and scalable gap-fill algorithm for remotely sensed data
- Tested with MODIS NDVI data featuring up to 50% missing values
- Validated against established software
- Uncertainty quantification of the predicted values
- Software and examples provided in the open-source R package gapfill

Abstract: Remotely sensed data are sparse, which means that the data have missing values, for instance due to cloud cover. This is problematic for applications and signal processing algorithms that require complete data sets. To address the sparse data issue, we present a new gap-fill algorithm. The proposed method predicts each missing value separately based on data points in a spatio-temporal neighborhood around the missing data point. The computational workload can be distributed among several computers, making the method suitable for large datasets. The prediction of the missing values and the estimation of the corresponding prediction uncertainties are based on sorting procedures and quantile regression. The algorithm was applied to MODIS NDVI data from Alaska and tested with realistic cloud cover scenarios featuring up to 50% missing data. Validation against established software showed that the proposed method performs well in terms of root mean squared prediction error. The procedure is implemented and available in the open-source R package gapfill. We demonstrate the software performance with a real data example and show how it can be tailored to specific data. Due to the flexible software design, users can control and redesign major parts of the procedure with little effort. This makes it an interesting tool for gap-filling satellite data and for the future development of gap-fill procedures.

Introduction: Remote sensing is a technology used to study a wide range of Earth surface processes. Because the sensors observe the surface from a long distance, remote sensing offers large spatial and temporal coverage compared with ground-based measurements. It is, however, crucial to understand and correct measurement errors, caused for instance by off-nadir view angles and atmospheric disturbances, and the data gaps that result when contaminated observations are discarded. We take a closer look at the data workflow of satellite observations to understand how these factors influence the resulting data products, and to fill the resulting gaps we introduce a new gap-filling algorithm.
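The spatio-temporal neighborhood idea can be sketched in a few lines. The following Python function is not the gapfill package's actual procedure (which uses sorting and quantile regression over carefully selected subsets); it is a simplified stand-in that predicts a missing value as the median of its spatio-temporal neighborhood and reports the interquartile range as a crude uncertainty measure. The data layout (`cube[t][i][j]`, with `None` marking missing values) is assumed for illustration.

```python
import statistics

def fill_gap(cube, t, i, j, half=1):
    """Predict one missing value from its spatio-temporal neighborhood.

    cube[t][i][j] holds observations, None means missing. Returns a
    (prediction, uncertainty) pair: the neighborhood median and the
    interquartile range. This is a simplified illustration, not the
    gapfill package's quantile-regression procedure.
    """
    neigh = []
    for tt in range(max(0, t - half), min(len(cube), t + half + 1)):
        for ii in range(max(0, i - half), min(len(cube[0]), i + half + 1)):
            for jj in range(max(0, j - half), min(len(cube[0][0]), j + half + 1)):
                v = cube[tt][ii][jj]
                if v is not None:
                    neigh.append(v)
    if len(neigh) < 2:
        return None, None  # too little data: enlarge `half` and retry
    q1, med, q3 = statistics.quantiles(neigh, n=4)
    return med, q3 - q1

# Tiny invented data cube: 2 time steps of a 2x2 grid, one gap
cube = [[[0.20, 0.30], [0.25, None]],
        [[0.22, 0.31], [0.27, 0.33]]]
pred, unc = fill_gap(cube, t=0, i=1, j=1)
```

Because each missing value is predicted independently from a local window, the loop over gaps parallelizes trivially, which is the property that makes the distributed computation mentioned above possible.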
Bayesian approaches to the monitoring of group sequential designs have two main advantages compared with classical group sequential designs: first, they facilitate implementation of interim success and futility criteria that are tailored to the subsequent decision making, and second, they allow inclusion of prior information on the treatment difference and on the control group. A general class of Bayesian group sequential designs is presented, where multiple criteria based on the posterior distribution can be defined to reflect clinically meaningful decision criteria on whether to stop or continue the trial at the interim analyses. To evaluate the frequentist operating characteristics of these designs, both simulation methods and numerical integration methods are proposed, as implemented in the corresponding R package gsbDesign. Normal approximations are used to allow fast calculation of these characteristics for various endpoints. The practical implementation of the approach is illustrated with several clinical trial examples from different phases of drug development, with various endpoints, and informative priors.
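The interim decision logic described above can be sketched as follows. This is not the gsbDesign API; it is a minimal Python illustration, assuming a conjugate normal prior on the treatment difference and a normal approximation to its estimate, of posterior-probability-based success and futility criteria. The cutoffs (0.975 for success, 0.10 for futility) and the minimally clinically important difference `mcid` are placeholder values, not recommendations.

```python
import math

def posterior_normal(prior_mean, prior_sd, est, se):
    """Conjugate normal update for the treatment difference."""
    w0, w1 = 1 / prior_sd ** 2, 1 / se ** 2
    post_var = 1 / (w0 + w1)
    post_mean = post_var * (w0 * prior_mean + w1 * est)
    return post_mean, math.sqrt(post_var)

def tail_prob(mean, sd, threshold):
    """P(delta > threshold) under a normal posterior."""
    z = (threshold - mean) / sd
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

def interim_decision(prior_mean, prior_sd, est, se,
                     success_cut=0.975, futility_cut=0.10, mcid=0.0):
    """Apply posterior-probability success/futility criteria at an interim."""
    m, s = posterior_normal(prior_mean, prior_sd, est, se)
    if tail_prob(m, s, 0.0) >= success_cut:
        return "stop for success"
    if tail_prob(m, s, mcid) < futility_cut:
        return "stop for futility"
    return "continue"

# Weakly informative prior N(0, 10^2), invented interim estimate 2.0 (SE 0.8)
decision = interim_decision(prior_mean=0.0, prior_sd=10.0, est=2.0, se=0.8)
```

Evaluating such rules over many simulated trials (or by numerical integration, as the abstract describes) yields the frequentist operating characteristics of the design.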