Purpose: This study aims to develop a systematic approach for assessing local training needs in order to reskill liaison librarians for new roles in scholarly communication and research data management.
Design/methodology/approach: This study followed a training needs assessment approach to develop a survey instrument that was administered electronically to liaison librarians. Survey data were analysed to create an overall prioritization score used to rank local training topics in terms of need. Additional data will inform the design, including formats, of a training agenda to meet these needs.
Findings: Survey results indicated that training for research data topics should be prioritized and addressed using hands-on methods that would allow liaison librarians to develop tangible skills directly applicable to individual outreach activities.
Research limitations/implications: Training priorities often involve factors beyond the scope of this training needs assessment methodology. This methodology also presupposes a list of potential training topics. All training efforts resulting from this study will be assessed in order to determine the effectiveness of the initial interventions and inform the next steps in this iterative training agenda.
Practical implications: Involving potential trainees in the prioritization and development of a training agenda provides valuable information and may lead to increased receptivity to training.
Originality/value: This study provides a model for academic libraries to use to assess training needs in order to reskill current staff to adapt to a rapidly changing research and scholarly communication landscape.
We investigated 24 web-based data repositories with "controlled collections" to determine why and how repositories control access to and use of data. We selected our sample of data repositories from across scholarly and scientific disciplines in order to investigate differences between fields. Using content analysis and surveys, we collected data about current repository policies and practices and underlying motivations for controlling data access and use. Looking across all disciplines, we found no overarching reason for restricting access to data, but "avoiding misuse" was listed most frequently. Ensuring attribution was the dominant reason for controlling use of data. Observed between-field findings are tentative given the small number of repositories in some fields that met study criteria; however, our data do suggest some interesting differences. We also found cross-disciplinary patterns regarding methods for controlling access to and use of data. Better understanding of and attention to access and use control interests may allow repositories to attract more data depositors and ultimately increase the amount of data that can be shared.
This paper describes the range and variation in access and use control policies and tools used by 24 web-based data repositories across a variety of fields. It also describes the rationale repositories provided for their decisions to control data or to offer depositors the means to do so. Using a purposive exploratory sample, we employed content analysis of repository website documentation, a web survey of repository managers, and selected follow-up interviews to generate data. Our results describe the range and variation in access and use control policies and tools employed, identifying both commonalities and distinctions across repositories. Using concepts from commons theory as a guiding theoretical framework, our analysis describes five dimensions of repository rules that create and manage data commons boundaries: locus of decision making (depositor vs. repository), degree of variation in terms of use within the repository, the mission of the repository in relation to its scholarly field, what use means in relation to specific sorts of data, and types of exclusion.
Proton NMR spectra of urine from subjects with multiple acyl-CoA dehydrogenase deficiency, caused by defects in either the electron transport flavoprotein or electron transport flavoprotein ubiquinone oxidoreductase, provide a characteristic and possibly diagnostic metabolite profile. The detection of dimethylglycine and sarcosine, intermediates in the oxidative degradation of choline, should discriminate between multiple acyl-CoA dehydrogenase deficiency and related disorders involving fatty acid oxidation. The excretion rates of betaine, dimethylglycine (and sarcosine) in these subjects give an estimate of the minimum rates of both choline oxidation and methyl group release from betaine and reveal that the latter is comparable with the calculated total body methyl requirement in the human infant even when choline intake is very low. Our results provide a new insight into the rates of in vivo methylation in early human development.
While research funders and journal publishers now encourage or mandate data management and sharing, researchers are often not formally trained in these practices. As a result, many universities have begun to develop programs to assist faculty, staff and students with these needs. One such effort, Research Data Services (RDS) at the University of Colorado Boulder (CU-Boulder), is a collaborative activity between research computing (RC), a division of the Office of Information Technology, and the University Libraries. Similar to other institutions, the range of data services provided includes assistance with writing data management plans (DMPs), data storage and repository advice, data processing and other needs related to research data management. In addition, RDS has experimented with a variety of novel approaches to outreach and engagement across all disciplines at CU-Boulder and with affiliated institutions in the surrounding area. The history, services, outreach and education efforts of the RDS program at CU-Boulder are described in the sections that follow.
History: RDS was developed in 2011 in response to the National Science Foundation (NSF) requirement for DMPs to be included with all grant proposals [1]. To meet the new requirements, RDS initially focused on helping with DMP writing by repurposing existing positions in RC and the libraries. In addition to providing DMP templates via the DMPTool [2], RDS would meet with campus personnel to help them understand the components of the DMP. As a result of a campus-wide Data Management Task Force report [3], a governance structure and additional services were added to RDS to expand offerings, while keeping the primary mission intact.