Submarine landslides can generate sediment-laden flows whose scale is impressive. Individual flow deposits have been mapped that extend for 1,500 km offshore from northwest Africa. These are the longest run-out sediment density flow deposits yet documented on Earth. This contribution analyses one of these deposits, which contains ten times the mass of sediment transported annually by all of the world's rivers. Understanding how these submarine flows evolve is a significant problem, because they are extremely difficult to monitor directly. Previous work has shown how progressive disintegration of landslide blocks can generate debris flow, the deposit of which extends downslope from the original landslide. We provide evidence that submarine flows can produce giant debris flow deposits that start several hundred kilometres from the original landslide, encased within deposits of a more dilute flow type called turbidity current. Very little sediment was deposited across the intervening large expanse of sea floor, where the flow was locally very erosive. Sediment deposition was finally triggered by a remarkably small but abrupt decrease in sea-floor gradient, from 0.05 degrees to 0.01 degrees. This debris flow was probably generated by flow transformation from the decelerating turbidity current. The alternative is that a non-channelized debris flow left almost no trace of its passage across one hundred kilometres of flat (0.2 degrees to 0.05 degrees) sea floor. Our work shows that initially well-mixed and highly erosive submarine flows can produce extensive debris flow deposits beyond subtle slope breaks located far out in the deep ocean.
The solutions adopted by the high-energy physics community to foster reproducible research are examples of best practices that could be embraced more widely. This first experience suggests that reproducibility requires going beyond openness.
The purpose of this study was to develop an understanding of the current state of scientific data sharing that stakeholders could use to develop and implement effective data sharing strategies and policies. The study developed a conceptual model to describe the process of data sharing, and the drivers, barriers, and enablers that determine stakeholder engagement. The conceptual model was used as a framework to structure discussions and interviews with key members of all stakeholder groups. Analysis of data obtained from interviewees identified a number of themes that highlight key requirements for the development of a mature data sharing culture.
The SOAP (Study of Open Access Publishing) project has analyzed the current supply and demand situation in the open access journal landscape. Starting from the Directory of Open Access Journals, several sources of data were considered, including journal websites and direct inquiries within the publishing industry, to comprehensively map the present supply of online peer-reviewed OA journals. The demand for open access publishing is summarised, as assessed through a large-scale survey of researchers' opinions and attitudes. Some forty thousand answers were collected across disciplines and around the world, reflecting major support for the idea of open access, while highlighting drivers of and barriers to open access publishing.
Higher sensor throughput has increased the demand for cyberinfrastructure, requiring those unfamiliar with large database management to acquire new skills or to outsource the work. Some have called this shift away from sensor-limited data collection the "data deluge." As an alternative, we propose that the deluge is the result of sensor control software failing to keep pace with hardware capabilities. Rather than exploiting the potential of powerful embedded operating systems to construct intelligent sensor networks that harvest higher-quality data, the field still follows the old paradigm (i.e., collect everything). To mitigate the deluge, we present an adaptive sampling algorithm based on the Nyquist-Shannon sampling theorem. We calibrate the algorithm for both data reduction and increased sampling over "hot moments," which we define as periods of elevated signal activity. This departs from previous work, which has emphasized adaptive sampling for data compression via minimization of signal reconstruction error. Under the feature extraction concept, samples drawn from user-defined events carry greater importance, and effective control requires the researcher to describe the context of events in the form of both an identification heuristic (for calibration) and a real-time sampling model. This event-driven approach is important when observation is focused on intermittent dynamics. In our case study application, we develop a heuristic to identify hot moments from historical data and use it to train and evaluate the adaptive model in an offline analysis of soil moisture data. Results indicate that the adaptive model is superior to uniform sampling, extracting 20% to 100% more samples during hot moments at equivalent levels of overall efficiency.
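To make the event-driven idea concrete, the following is a minimal sketch of adaptive sampling driven by a hot-moment heuristic, not the authors' implementation: a simple change-detection rule flags periods of elevated signal activity and shortens the sampling interval during them. The function names, threshold, and interval values are illustrative assumptions, and the trace is synthetic rather than the study's soil moisture data.

```python
# A minimal sketch of event-driven adaptive sampling (illustrative, not the study's code).
# A change-detection heuristic flags "hot moments" and shortens the sampling interval;
# all thresholds and interval lengths below are placeholder values.
import math


def is_hot_moment(history, threshold=0.02):
    """Flag a hot moment when the most recent change in the signal exceeds a threshold."""
    return len(history) >= 2 and abs(history[-1] - history[-2]) > threshold


def next_interval(history, base_interval=10, hot_interval=2):
    """Sample densely during hot moments, sparsely otherwise (intervals in trace steps)."""
    return hot_interval if is_hot_moment(history) else base_interval


def simulate(trace):
    """Offline replay against a pre-recorded trace (e.g. historical sensor readings)."""
    t, history, samples = 0, [], []
    while t < len(trace):
        history.append(trace[t])
        samples.append((t, trace[t]))
        t += next_interval(history)
    return samples


if __name__ == "__main__":
    # Synthetic trace: a quiet baseline with one burst of activity between steps 200-260.
    trace = [0.30 + (0.05 * math.sin(i / 3.0) if 200 <= i < 260 else 0.0)
             for i in range(600)]
    samples = simulate(trace)
    in_burst = sum(1 for t, _ in samples if 200 <= t < 260)
    print(f"{len(samples)} samples total, {in_burst} taken during the burst")
```

In this sketch the calibration step the abstract describes would correspond to tuning the threshold and interval parameters on historical data so that more samples fall inside user-defined events at a given overall sampling budget.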