EDITOR'S SUMMARY
A panel of speakers from three universities explored their challenges and progress in building programs to support research data management, whether within the library system or in partnership with research offices or computing groups. Since 2012, Oregon State University has partnered with its research office and graduate school, helping students prepare data for preservation and sharing and developing a for-credit graduate course in research data management. Based on needs identified through an environmental scan, the University of Washington hired a data services coordinator to promote the services provided and to increase collaboration, visibility, and support. Purdue University pairs data services specialists with subject liaison librarians to reach disciplinary faculty and researchers. These connections identify champions, lead to successful collaborations and, most importantly, position data services specialists as peers and collaborators. With basic services established, each institution looks forward to strengthening relationships and expanding services, skills, and staffing.
The Chronopolis Digital Preservation Initiative, one of the Library of Congress's latest efforts to collect and preserve at-risk digital information, has completed its first year of service as a multi-member partnership meeting the archival needs of a wide range of domains. Chronopolis is a digital preservation data grid framework developed by the San Diego Supercomputer Center (SDSC) at UC San Diego, the UC San Diego Libraries (UCSDL), and their partners at the National Center for Atmospheric Research (NCAR) in Colorado and the University of Maryland's Institute for Advanced Computer Studies (UMIACS). Chronopolis addresses a critical problem by providing a comprehensive model for the cyberinfrastructure of collection management, in which preserved intellectual capital is easily accessible and research results, educational material, and new knowledge can be incorporated smoothly over the long term. Integrating digital library, data grid, and persistent archive technologies, Chronopolis has created trusted environments that span academic institutions and research projects, with the goal of long-term digital preservation.
A key goal of the Chronopolis project is to provide cross-domain collection sharing for long-term preservation. Using existing high-speed educational and research networks and mass-scale storage infrastructure investments, the partnership is leveraging the data storage capabilities at SDSC, NCAR, and UMIACS to provide a preservation data grid that emphasizes heterogeneous and highly redundant data storage systems.
In this paper we explore the major themes within Chronopolis, including:
a) the philosophy and theory behind a nationally federated data grid for preservation;
b) the core tools and technologies used in Chronopolis;
c) the metadata schema being developed within Chronopolis for all of the data elements;
d) lessons learned from the first year of the project;
e) next steps in digital preservation using Chronopolis: how we plan to strengthen and broaden our network with enhanced services and new customers.
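To make the redundant-storage theme concrete, the sketch below shows the general fixity-audit pattern that highly replicated preservation storage depends on: recompute each file's checksum at every replica and compare it against a manifest. This is illustrative only, not Chronopolis's actual tooling; the BagIt-style manifest file name and the three replica paths are hypothetical.

```python
# Illustrative sketch of fixity auditing across replicated copies.
# Assumes a BagIt-style manifest ("<sha256>  <relative/path>" per line)
# and hypothetical replica directories; not the Chronopolis implementation.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large objects fit in constant memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_manifest(manifest_path: Path) -> dict[str, str]:
    """Parse a BagIt-style manifest: one '<checksum>  <relative/path>' per line."""
    entries = {}
    for line in manifest_path.read_text().splitlines():
        checksum, _, rel_path = line.strip().partition("  ")
        if rel_path:
            entries[rel_path] = checksum
    return entries


def audit_replica(manifest: dict[str, str], replica_root: Path) -> list[str]:
    """Return the relative paths whose bytes no longer match the manifest."""
    failures = []
    for rel_path, expected in manifest.items():
        target = replica_root / rel_path
        if not target.is_file() or sha256_of(target) != expected:
            failures.append(rel_path)
    return failures


if __name__ == "__main__":
    # Hypothetical usage: the same manifest is audited at each partner site.
    manifest = load_manifest(Path("manifest-sha256.txt"))
    for site in (Path("/replicas/sdsc"), Path("/replicas/ncar"), Path("/replicas/umiacs")):
        bad = audit_replica(manifest, site)
        print(f"{site}: {'OK' if not bad else f'{len(bad)} file(s) failed fixity'}")
```

A scheduled audit of this kind, run independently at each site, is one way a federation can detect and repair silent corruption long before content is requested.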
In the spring of 2011, the UC San Diego Research Cyberinfrastructure (RCI) Implementation Team invited researchers and research teams to participate in a research curation and data management pilot program. This invitation took the form of a campus-wide solicitation. More than two dozen applications were received and, after due deliberation, the RCI Oversight Committee selected five curation-intensive projects. These projects were chosen on a number of criteria, including how well they represented campus research, the variety of topics, researcher engagement, and the range of services required. The pilot process began in September 2011 and will be completed in early 2014. Extensive lessons learned from the pilots are being compiled and used in the ongoing design and implementation of the permanent Research Data Curation Program in the UC San Diego Library. In this paper, we present specific implementation details of these various services, as well as lessons learned. The program focused on many aspects of contemporary scholarship, including data creation and storage, description and metadata creation, citation and publication, and long-term preservation and access. Based on the lessons learned in our processes, the Research Data Curation Program will provide a suite of services from which campus users can pick and choose as necessary. The program will provide support for the data management requirements of national funding agencies.
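As an illustration of the description-and-citation side of such a service, the sketch below assembles a minimal dataset record and renders a data citation from it. The field names loosely follow common DataCite-style conventions; the class, the example record, and the DOI are hypothetical, and the actual schema used by the UC San Diego program is not specified here.

```python
# A minimal, hypothetical sketch of descriptive metadata plus a data citation,
# loosely modeled on DataCite-style fields; not the program's actual schema.
from dataclasses import dataclass, field


@dataclass
class DatasetRecord:
    creators: list[str]
    title: str
    publisher: str
    publication_year: int
    identifier: str                      # e.g. a DOI assigned at deposit
    resource_type: str = "Dataset"
    keywords: list[str] = field(default_factory=list)

    def citation(self) -> str:
        """Render a simple data citation from the descriptive fields."""
        authors = "; ".join(self.creators)
        return (f"{authors} ({self.publication_year}). {self.title}. "
                f"{self.publisher}. https://doi.org/{self.identifier}")


# Hypothetical example record and its citation string.
record = DatasetRecord(
    creators=["Rivera, A.", "Chen, L."],
    title="Coastal temperature observations, 2011-2013",
    publisher="UC San Diego Library Digital Collections",
    publication_year=2014,
    identifier="10.9999/example",
    keywords=["oceanography", "time series"],
)
print(record.citation())
```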
Preservation of digital content into the future will rely on the ability of institutions to provide robust system infrastructures that leverage distributed and shared services and tools. The academic, nonprofit, and government entities that make up the National Digital Information Infrastructure and Preservation Program (NDIIPP) partner network have been working toward an architecture that can provide reliable, redundant, geographically dispersed copies of their digital content. The NDIIPP program has conducted a set of initiatives that have enabled partners to better understand the requirements for effective collection interchange. The NDIIPP program partnered with the San Diego Supercomputer Center (SDSC) to determine the feasibility of data transmission and storage utilizing best-of-breed technologies from U.S. high-speed research networks and high-performance computing data storage infrastructures. The results of this partnership guided the development of the Library of Congress's cyberinfrastructure and its approach to network data transfer. Other NDIIPP partners are also researching a range of network architecture models for data exchange and storage. All of these explorations will build toward the development of best practices for sustainable interoperability and storage solutions.
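One concrete ingredient of effective collection interchange is a file-level inventory that travels with the content, so that the receiving partner can confirm completeness and integrity on arrival. The sketch below builds such an inventory as a simple CSV; the format, paths, and function are illustrative assumptions, not an NDIIPP or Library of Congress specification.

```python
# Illustrative sketch: build a file inventory (relative path, size, SHA-256)
# for a collection before network transfer. Format and paths are hypothetical.
import csv
import hashlib
from pathlib import Path


def build_inventory(collection_root: Path, out_csv: Path) -> int:
    """Write one CSV row per file and return the number of files inventoried."""
    count = 0
    with out_csv.open("w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["path", "bytes", "sha256"])
        for path in sorted(collection_root.rglob("*")):
            if path.is_file():
                # read_bytes() keeps the sketch short; a streaming hash would
                # be preferable for very large files.
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                writer.writerow([path.relative_to(collection_root),
                                 path.stat().st_size, digest])
                count += 1
    return count


if __name__ == "__main__":
    # Hypothetical usage: inventory a collection before shipping it to a partner site.
    n = build_inventory(Path("collections/maps_2008"), Path("maps_2008_inventory.csv"))
    print(f"Inventoried {n} files")
```

After transfer, the receiving site can regenerate the same inventory and diff it against the one that accompanied the shipment, turning "did everything arrive intact?" into a mechanical comparison.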
perceptions that much data is either derivative, low quality, or gathered from sources that are inappropriate for open sharing.
▪ Ethical Challenges. The ethical dimensions of big data research remain contested, and some researchers are uncertain about best practices for ethical research conduct. Although IRB guidance is valued, some researchers expressed concerns that IRB regulations are not well adapted to new or evolving research methods.
▪ Support and Training. Researchers tend to favor informal training methods, such as internet tutorials, over formal training in big data methods. While such methods work well for solving immediate problems, they are less well suited to acquiring foundational knowledge, leaving the potential for blind spots in academic research.