Aim: Recent vegetation changes in mountain areas are often attributed to climate warming. However, the effects of land-use changes, such as the recolonization of abandoned pastures by forest, are difficult to separate from those of climate change. Even within forest belts, changes in stand structure due to forest management and stand maturation can confound the climate signal. Here, we evaluate the direction and rate of plant species elevation shifts in mountain forests, considering the role of stand dynamics.

Location: Forests in the plains and mountains of Southeast France.

Methods: We compared floristic data from the French National Forest Inventory collected in the 1980s and 1990s, which provide a large-scale (30 985 plots) and representative sample of vegetation between 0 and 2500 m a.s.l. Species response curves along the elevation and exposure gradients were fitted with a logistic regression model. To assess the effect of changes in the successional stages of forest stands, we compared plant species shifts in the whole set of stands with those in closed stands only.

Results: A total of 62 species shifted downward, whereas 113 shifted upward, resulting in a significant mean upward shift of 17.9 m. Upward-shifting species were preferentially woody and heliophilous, suggesting a role for forest closure and maturation in the observed changes. When all open forest stages were excluded from the analyses, the upward trend became weaker (−3.0 m) and was no longer significant. Forests of the study area have undergone closure and maturation, more strongly at lower elevations than at higher ones, producing an apparent shift of species.

Conclusions: In the mountain relief of Southeast France, changes in the successional stages of stands appear to be the main cause of the apparent upslope movement of forest species.
Since a similar trend of forest maturation exists across large areas of Europe, forest dynamics should be taken into account among the causes of vegetation change before any climate change effect is inferred.
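The response-curve fitting described in the Methods can be sketched as follows. This is a minimal illustration on simulated presence/absence data, not the inventory itself: the plot count, the species optimum at 1200 m, and the curve width are invented for the example, and the exposure gradient is omitted for brevity. A quadratic elevation term lets the logistic model capture a unimodal (bell-shaped) response along the elevation gradient:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical survey: presence/absence of one species on 2000 plots
# spread between 0 and 2500 m a.s.l. (numbers are illustrative).
elevation = rng.uniform(0, 2500, 2000)

# Simulate a unimodal response centred at 1200 m.
true_optimum = 1200.0
logit = 2.0 - ((elevation - true_optimum) / 400.0) ** 2
presence = rng.random(2000) < 1.0 / (1.0 + np.exp(-logit))

# Work in km so the design matrix [z, z^2] stays well scaled.
z = elevation / 1000.0
X = np.column_stack([z, z ** 2])

# Near-unpenalized fit (large C) approximates maximum likelihood.
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, presence)

# The linear predictor b0 + b1*z + b2*z^2 peaks at z = -b1 / (2*b2),
# which is the species' fitted elevation optimum.
b1, b2 = model.coef_[0]
optimum_m = -b1 / (2.0 * b2) * 1000.0
print(round(optimum_m))  # should recover an optimum near the simulated 1200 m
```

Comparing such fitted optima between the two inventory periods, and between the full sample and closed stands only, is how the elevation shifts discussed above can be quantified.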
This paper proposes a novel representation space for multimodal information, enabling fast and efficient retrieval of video data. We propose describing documents not directly by selected multimodal features (audio, visual, or text), but rather by cross-document similarities relative to their multimodal characteristics. This idea leads us to a particular form of dissimilarity space adapted to the asymmetric classification problem, and in turn to the query-by-example and relevance-feedback paradigm widely used in information retrieval. Based on the proposed dissimilarity space, we then define several strategies for fusing modalities through a kernel-based learning approach. The problem of automatically setting the kernel to adapt the learning process to the queries is also discussed. The properties of our strategies are studied and validated on artificial data. In a second phase, a large annotated video corpus (i.e., TRECVID-05), indexed by visual, audio, and text features, is used to evaluate the overall performance of the dissimilarity space and the fusion strategies. The results confirm the validity of the proposed approach for the representation and retrieval of multimodal information in a real-time framework.
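The core idea of a dissimilarity space can be sketched in a few lines. In this toy version (random feature vectors, Euclidean distances, and a prototype count chosen arbitrarily; the paper's kernel learning and modality fusion are not reproduced here), each document is described by its distances to a small set of prototype documents, which can be precomputed offline, and query-by-example retrieval then operates on those distance vectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multimodal feature vectors for 500 "documents"
# (in practice these would be audio/visual/text descriptors).
features = rng.normal(size=(500, 64))

# A small set of prototype documents: every document is described by
# its distances to these prototypes instead of by its raw features.
prototypes = features[rng.choice(500, size=20, replace=False)]

def to_dissimilarity_space(x, prototypes):
    """Map feature vectors to the dissimilarity space: one coordinate
    per prototype, holding the Euclidean distance to that prototype."""
    return np.linalg.norm(x[:, None, :] - prototypes[None, :, :], axis=-1)

D = to_dissimilarity_space(features, prototypes)  # shape (500, 20)

# Query-by-example: rank all documents by proximity to the query's
# dissimilarity vector (here the query is document 0 itself).
query = D[0]
ranking = np.argsort(np.linalg.norm(D - query, axis=1))
print(ranking[0])  # the query document ranks first (distance 0 to itself)
```

Because `D` is computed offline, answering a query only requires distances in the low-dimensional dissimilarity space, which is what makes the retrieval fast.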
Abstract. In the retrieval, indexing, and classification of multimedia data, efficient fusion of information from the different modalities is essential for a system's overall performance. Because information fusion, its influencing factors, and the bounds on its performance improvement have been actively discussed in recent years across several research communities, we review their latest findings. Most importantly, these findings indicate that exploiting the dependencies between features and modalities yields maximal performance. Data-analysis and fusion experiments with annotated image collections corroborate this conclusion.
Abstract. We present different strategies for learning user semantic queries from dissimilarity representations of audio-visual video content. When dealing with large corpora of video documents, a feature representation requires the online computation of distances between all documents and a query. A dissimilarity representation may therefore be preferred, because its offline computation speeds up the retrieval process. We show how distances related to visual and audio video features can be used directly to learn complex concepts from a set of positive and negative examples provided by the user. Building on the idea of dissimilarity spaces, we derive three algorithms that fuse the modalities and thereby improve the precision of retrieval results. Our technique is evaluated on artificial data and on the fully annotated TRECVID corpus.
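One member of this family of fusion strategies can be sketched as follows. The sketch assumes precomputed per-modality dissimilarity vectors and uses invented toy data and labels; the specific algorithms of the paper are not reproduced. It shows early fusion by concatenating the visual and audio dissimilarity vectors, then learning the user's concept from positive and negative examples with a standard RBF-kernel SVM:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 400

# Hypothetical precomputed dissimilarity vectors: distances from each
# of n documents to 15 prototypes, one matrix per modality.
d_visual = rng.random((n, 15))
d_audio = rng.random((n, 15))

# Toy relevance labels standing in for the user's positive/negative
# examples: documents close to the first visual AND first audio
# prototype are "relevant".
relevant = (d_visual[:, 0] + d_audio[:, 0]) < 0.7

# Early fusion: concatenate the per-modality dissimilarity vectors,
# then learn the query concept with a kernel classifier.
X = np.hstack([d_visual, d_audio])
clf = SVC(kernel="rbf", gamma="scale").fit(X, relevant)
print(clf.score(X, relevant))
```

Alternatives in the same spirit would fuse at the kernel level (e.g., a weighted sum of per-modality kernels) rather than by concatenation; the choice of weights is part of what such strategies must learn from the user's examples.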