Collinearity refers to the non-independence of predictor variables, usually in a regression-type analysis. It is a common feature of any descriptive ecological data set and can be a problem for parameter estimation because it inflates the variance of regression parameters and hence potentially leads to the wrong identification of relevant predictors in a statistical model. Collinearity is a severe problem when a model is trained on data from one region or time and used to predict to another with a different or unknown structure of collinearity. To demonstrate the reach of the problem in ecology, we show how relationships among predictors differ between biomes and change over spatial scales and through time. Across disciplines, different approaches to addressing collinearity have been developed, ranging from clustering of predictors and threshold-based pre-selection, through latent-variable methods, to shrinkage and regularisation. Using simulated data with five predictor–response relationships of increasing complexity and eight levels of collinearity, we compared ways to address collinearity with standard multiple regression and machine-learning approaches. We assessed the performance of each approach by testing its impact on prediction to new data. In the extreme, we tested whether the methods were able to identify the true underlying relationship in a training dataset with strong collinearity by evaluating their performance on a test dataset without any collinearity. We found that methods specifically designed for collinearity, such as latent-variable methods and tree-based models, did not outperform the traditional GLM and threshold-based pre-selection. Our results highlight the value of GLM in combination with penalised methods (particularly ridge regression) and threshold-based pre-selection, provided omitted variables are considered in the final interpretation.
However, all approaches tested yielded degraded predictions under changes in collinearity structure, and the 'folklore' threshold of a correlation coefficient of |r| > 0.7 between predictor variables proved an appropriate indicator of when collinearity begins to severely distort model estimation and subsequent prediction. Using ecological understanding of the system in pre-analysis variable selection, together with the least sensitive statistical approaches, can reduce the problems of collinearity but cannot ultimately solve them.
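The two approaches the abstract singles out, threshold-based pre-selection at |r| > 0.7 and ridge regularisation, can be sketched in a few lines. The following is a minimal illustrative example, not the authors' simulation design: the data, sample size, and shrinkage parameter are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two strongly collinear predictors (x2 ~ x1) plus one independent predictor.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.3, size=n)   # strongly correlated with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 1.0 * x1 + 0.5 * x3 + rng.normal(scale=0.5, size=n)

# Threshold-based pre-selection: flag predictor pairs with |r| > 0.7.
r = np.corrcoef(X, rowvar=False)
collinear_pairs = [(i, j) for i in range(r.shape[0])
                   for j in range(i + 1, r.shape[1]) if abs(r[i, j]) > 0.7]
print("pairs exceeding |r| > 0.7:", collinear_pairs)

# Ridge regression in closed form: beta = (X'X + lambda*I)^-1 X'y.
# Shrinkage stabilises coefficient estimates when X'X is near-singular.
def ridge(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print("OLS coefficients:  ", ridge(X, y, 0.0))
print("ridge coefficients:", ridge(X, y, 10.0))
```

With lam = 0 the function reduces to ordinary least squares, whose estimates for x1 and x2 are unstable because the two predictors carry nearly the same information; increasing lam trades a little bias for much lower variance.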
Species distribution models (SDMs) constitute the most common class of models across ecology, evolution and conservation. The advent of ready-to-use software packages and increasing availability of digital geoinformation have considerably assisted the application of SDMs in the past decade, greatly enabling their broader use for informing conservation and management, and for quantifying impacts from global change. However, models must be fit for purpose, with all important aspects of their development and applications properly considered. Despite the widespread use of SDMs, standardisation and documentation of modelling protocols remain limited, which makes it hard to assess whether development steps are appropriate for end use. To address these issues, we propose a standard protocol for reporting SDMs, with an emphasis on describing how a study's objective is achieved through a series of modelling decisions. We call this the ODMAP (Overview, Data, Model, Assessment and Prediction) protocol, as its components reflect the main steps involved in building SDMs and other empirically based biodiversity models. The ODMAP protocol serves two main purposes. First, it provides a checklist for authors, detailing key steps for model building and analyses, and thus represents a quick guide and generic workflow for modern SDMs. Second, it introduces a structured format for documenting and communicating the models, ensuring transparency and reproducibility, facilitating peer review and expert evaluation of model quality, as well as meta-analyses. We detail all elements of ODMAP, explain how it can be used for different model objectives and applications, and show how it complements efforts to store associated metadata and define modelling standards. We illustrate its utility by revisiting nine previously published case studies, and provide an interactive web-based application to facilitate its use.
We plan to advance ODMAP by encouraging its further refinement and adoption by the scientific community.
Imaging spectroscopy, also known as hyperspectral remote sensing, is based on the characterization of Earth surface materials and processes through spectrally-resolved measurements of the light interacting with matter. The potential of imaging spectroscopy for Earth remote sensing has been demonstrated since the 1980s. However, most of the developments and applications in imaging spectroscopy have largely relied on airborne spectrometers, as the amount and quality of space-based imaging spectroscopy data remain relatively low to date. The upcoming Environmental Mapping and Analysis Program (EnMAP) German imaging spectroscopy mission is intended to fill this gap. An overview of the main characteristics and current status of the mission is provided in this contribution. The core payload of EnMAP consists of a dual-spectrometer instrument measuring in the optical spectral range between 420 and 2450 nm with a spectral sampling distance varying between 5 and 12 nm and a reference signal-to-noise ratio of 400:1 in the visible and near-infrared and 180:1 in the shortwave-infrared parts of the spectrum. EnMAP images will cover a 30 km-wide area in the across-track direction with a ground sampling distance of 30 m. An across-track tilted observation capability will enable a target revisit time of up to four days at the Equator and better at high latitudes. EnMAP will contribute to the development and exploitation of spaceborne imaging spectroscopy applications by making high-quality data freely available to scientific users worldwide.
Societal, economic and scientific interests in knowing where biodiversity is, how it is faring and what can be done to efficiently mitigate further biodiversity loss and the associated loss of ecosystem services are at an all-time high. So far, however, biodiversity monitoring has primarily focused on structural and compositional features of ecosystems despite growing evidence that ecosystem functions are key to elucidating the mechanisms through which biological diversity generates services to humanity. This monitoring gap can be traced to the current lack of consensus on what exactly ecosystem functions are and how to track them at scales beyond the site level. This contribution aims to advance the development of a global biodiversity monitoring strategy by proposing the adoption of a set of definitions and a typology for ecosystem functions, and reviewing current opportunities and potential limitations for satellite remote sensing technology to support the monitoring of ecosystem functions worldwide. By clearly defining ecosystem processes, functions and services and their interrelationships, we provide a framework to improve communication between ecologists, land and marine managers, remote sensing specialists and policy makers, thereby addressing a major barrier in the field.