Floating photovoltaics is an emerging approach to deploying photovoltaics on water bodies. Thanks to its high overall global potential and the extensive experience already gained (more than 2 GWp installed across more than 510 plants up to 2020), it represents a promising avenue for expanding renewable electricity production worldwide. However, a local sustainability assessment is needed for this potential to be converted into specific projects that attract the attention of stakeholders. This paper provides an original and wide-ranging screening checklist for site assessment, with a view to separating suitable from unsuitable sites and emphasising that appropriate design can overcome difficulties linked to site features. It offers an extensive list of activities that international, national and regional authorities, investors, solution providers, local communities and civil society, environmentalists and other stakeholders might undertake for a fruitful dialogue. It explores the possibility that art, architecture and industrial design may play a role in increasing the touristic value and the public acceptance of new plants. Although the checklist can be used in other settings, particular attention is paid to mountain artificial lakes used as reservoirs by hydropower plants, since they offer potentially high synergies (and a global potential of over 3.0 TW) but may also encounter significant implementation issues.
The complex nature of agent-based modeling often yields greater descriptive accuracy at the expense of analytical tractability. This adds a further layer of methodological issues regarding empirical validation, which remains an ongoing challenge. This paper offers a replicable method to empirically validate agent-based models, a specific indicator of “goodness-of-validation” and its statistical distribution, leading to a statistical test broadly comparable to the p-value. The method relies on an unsupervised machine learning algorithm based on cluster analysis. It clusters the ex-post behavior of real and artificial individuals to create meso-level behavioral patterns. By comparing how evenly real and artificial agents are mixed across clusters, it produces a validation score in [0, 1] that can be judged against its statistical distribution. In short, we argue that an agent-based model can be initialized at the micro-level, calibrated at the macro-level, and validated at the meso-level with the same data set. As a case study, we build and use a mobility mode-choice model by configuring an agent-based simulation platform called BedDeM. We cluster the choice behavior of real and artificial individuals with the same ex-ante characteristics and analyze the similarity of these clusters to understand whether the model-generated data contain behavioral patterns observationally equivalent to those in the real data. The model is validated with a score of 0.27, better than about 95% of all possible scores the indicator can produce. Drawing lessons from this example, we provide guidance for researchers who wish to validate their models when micro-data are available.
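To make the clustering step more concrete, the sketch below illustrates the general idea of pooling real and simulated behavior, clustering it, and scoring how well the two populations mix within clusters. It is a minimal illustration only: the choice of K-means, the balance-based proxy score, the normalization to [0, 1], and the synthetic placeholder data are all assumptions of this sketch and are not the paper's actual indicator or its statistical distribution.

```python
# Hypothetical sketch of meso-level validation via clustering.
# Assumptions (not from the paper): K-means clustering, a simple
# cluster-balance proxy score, and synthetic data in place of the
# real and BedDeM-generated mode-choice behavior.
import numpy as np
from sklearn.cluster import KMeans

def validation_score(real_X, artificial_X, n_clusters=8, seed=0):
    """Cluster pooled real + artificial behavior and score how evenly
    the two populations mix within clusters (0 = perfectly mixed,
    1 = fully separated, under this proxy definition)."""
    X = np.vstack([real_X, artificial_X])
    labels = np.array([0] * len(real_X) + [1] * len(artificial_X))  # 0 = real, 1 = artificial
    clusters = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X)

    p = (labels == 0).mean()  # overall share of real agents
    imbalance = 0.0
    for c in range(n_clusters):
        members = labels[clusters == c]
        if len(members) == 0:
            continue
        real_share = (members == 0).mean()
        # Weight each cluster's deviation from the overall mix by its size.
        imbalance += (len(members) / len(labels)) * abs(real_share - p)
    # Normalize by the maximum attainable imbalance (fully separated clusters).
    return imbalance / (2 * p * (1 - p))

# Usage with placeholder data (two slightly shifted Gaussian populations):
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 4))
artificial = rng.normal(0.1, 1.0, size=(500, 4))
print(round(validation_score(real, artificial), 3))
```

Under this proxy, lower values indicate that real and artificial agents are harder to tell apart at the meso-level; whether a given value is "good" would, as in the paper, be judged against the score's distribution rather than in isolation.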