Background: Two new directors were each charged by their institutions to catalyze transformational change in their libraries and to develop dynamic, evolving information ecosystems ready for the information challenges of the future. The directors pursued this transformation strategically, with a forward-looking approach.
Results: This paper presents examples of actions that served as catalysts for change at the two libraries, using Kotter's Eight Steps of Change as a framework. Both small and large changes are critical for successfully transforming library services, resources, and personnel.
Conclusions: Libraries face incredible pressure to adapt to meet emerging and intensifying information needs on today's academic medical campuses. These pressures offer an opportunity for libraries to accelerate their evolution at both the micro and macro levels. This commentary reports the expansion of new services and areas of support, enhanced professional visibility of the libraries on their campuses, and, overall, a more positive and productive environment at the respective institutions.
Objective: The paper provides a review of current practices related to evaluation support services reported by seven biomedical and research libraries.
Methods: A group of seven libraries from the United States and Canada described their experiences with establishing evaluation support services at their libraries. A questionnaire was distributed among the libraries to elicit information about program development, service and staffing models, campus partnerships, training, products such as tools and reports, and resources used for evaluation support services. The libraries also reported notable projects, lessons learned, and future plans.
Results: The seven libraries profiled in this paper report a variety of service models for providing evaluation support services to meet the needs of campus stakeholders. The service models range from research center cores and partnerships with research groups to library programs with staff dedicated to evaluation support services. A variety of products and services were described, including an automated tool to develop rank-based metrics, consultation on appropriate metrics to use for evaluation, customized publication and citation reports, resource guides, and classes and training. Implementing these services has allowed the libraries to expand their roles on campus and to contribute more directly to the research missions of their institutions.
Conclusions: Libraries can leverage a variety of evaluation support services as an opportunity to meet an array of challenges confronting the biomedical research community, including robust efforts to report and demonstrate tangible, meaningful outcomes of biomedical research and clinical care. These services represent a transformative direction that can be emulated by other biomedical and research libraries.
Institutional data repositories are the acknowledged gold standard for data curation platforms in academic libraries. But not every institution can sustain a repository, and not every dataset can be archived due to legal, ethical, or authorial constraints. Data catalogs—metadata-only indices of research data that provide detailed access instructions and conditions for use—are one potential solution, and may be especially suitable for "challenging" datasets. This article presents the strengths of data catalogs for increasing the discoverability and accessibility of research data. The authors argue that data catalogs are a viable alternative or complement to data repositories, and provide examples from their institutions' experiences to show how their data catalogs address specific curatorial requirements. The article also reports on the development of a community of practice for data catalogs and data discovery initiatives.
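To make the catalog concept concrete, here is a minimal, hypothetical sketch of a metadata-only catalog record: the dataset itself is never deposited, and the record instead carries descriptive metadata plus explicit access instructions and conditions of use. The field names and dataset details below are illustrative assumptions, not a published catalog schema.

```python
# Hypothetical metadata-only data catalog record. No data files are
# archived; the record only describes the dataset and tells readers
# how to request it and under what conditions.
record = {
    "title": "De-identified clinical encounter dataset, 2015-2020",
    "description": "Encounter-level records collected by the originating lab.",
    "subjects": ["clinical data", "health services research"],
    "access_instructions": "Contact the lab's data steward to request access.",
    "conditions_of_use": "Approved researchers only, under a data use agreement.",
    "storage_location": "Held by the originating lab; not deposited in a repository.",
}

# A catalog is then just an index of such records that supports discovery,
# e.g. a simple subject search across records:
def search(catalog, subject):
    """Return catalog records tagged with the given subject."""
    return [r for r in catalog if subject in r["subjects"]]

print(len(search([record], "clinical data")))  # 1
```

Because the record carries only metadata, legally or ethically restricted datasets can still be made discoverable without the library ever hosting the data itself.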
Technology advances and collaborations with information technology and computer science groups have enabled library services to expand into new domains. Listening to user needs, eliminating administrative burden, and saving users time remain strong foundations on which to build new technology-enabled library services. Examples of what is now possible are described, including services to user groups, successes, failures, and challenges. Although technology advances have enabled service enhancements for all user groups, special emphasis is placed on new library services in support of the research enterprise. As Lindberg and Humphreys predicted in 2015, the research enterprise's need for responsible curation of research data has created new opportunities for library services, and examples of those services are discussed. As technology continues to advance, new library services are expected to emerge, potentially including regulatory and compliance services. By developing these services with user feedback to save users time and expedite their work, and in collaboration with technology experts, libraries can expect to offer sustainable and valued services for years to come.
Academic institutions need to maintain publication lists for thousands of faculty and other scholars. Automated tools are essential to minimize the need for direct feedback from the scholars themselves, who can rarely commit the effort needed to keep the data accurate. By relying exclusively on clustering techniques, author disambiguation applications fail to satisfy key use cases of academic institutions. Algorithms can perfectly group together a set of publications authored by a common individual, but, to be useful to an academic institution, they need to programmatically and recurrently map articles to thousands of scholars of interest en masse. Consistent with a savvy librarian's approach to generating a scholar's list of publications, identity-driven authorship prediction is the process of using information about a scholar to quantify the likelihood that the person wrote certain articles. ReCiter is an application that attempts to do exactly that. ReCiter uses institutionally maintained identity data, such as name of department and year of terminal degree, to predict which articles a given scholar has authored. To compute the overall score for a given candidate article from PubMed (and, optionally, Scopus), ReCiter uses: up to 12 types of commonly available identity data; whether other members of a cluster have been accepted or rejected by a user; and the average score of a cluster. In addition, ReCiter provides scoring and qualitative evidence supporting why particular articles are suggested. This context and confidence scoring allows curators to more accurately provide feedback on behalf of scholars. To help users curate publication lists more efficiently, we used a support vector machine analysis to optimize the scoring of the ReCiter algorithm. In our analysis of a diverse test group of 500 scholars at an academic private medical center, ReCiter correctly predicted 98% of their publications in PubMed.
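The core idea of identity-driven authorship prediction can be sketched in a few lines: each identity feature that a candidate article matches contributes weighted evidence toward that article's score. This is a simplified illustration under assumed feature names and weights, not ReCiter's actual scoring model (which combines up to 12 evidence types, user feedback on clusters, and cluster-average scores).

```python
# Hypothetical identity-driven scoring sketch: sum evidence weights for
# every identity feature a candidate article matches. Weights are
# illustrative, not ReCiter's trained values.
WEIGHTS = {
    "name_match": 2.0,          # scholar's surname appears among authors
    "department_match": 1.5,    # scholar's department appears in affiliations
    "degree_year_plausible": 1.0,  # published on/after terminal degree year
    "email_match": 3.0,         # scholar's institutional email on the article
}

def score_candidate(scholar, article):
    """Return the total evidence score for one candidate article."""
    score = 0.0
    if scholar["last_name"] in article["author_names"]:
        score += WEIGHTS["name_match"]
    if scholar["department"] in article.get("affiliations", []):
        score += WEIGHTS["department_match"]
    # Articles published before the terminal degree year are less likely
    # (though not impossible) to belong to the scholar.
    if article["year"] >= scholar["degree_year"]:
        score += WEIGHTS["degree_year_plausible"]
    if scholar["email"] in article.get("emails", []):
        score += WEIGHTS["email_match"]
    return score

scholar = {"last_name": "Chen", "department": "Pharmacology",
           "degree_year": 2010, "email": "chen@example.edu"}
article = {"author_names": ["Chen", "Smith"], "year": 2018,
           "affiliations": ["Pharmacology"], "emails": []}
print(score_candidate(scholar, article))  # 4.5
```

Ranking candidate articles by such a score, then letting curators accept or reject suggestions, is what distinguishes this identity-driven approach from clustering alone, which groups papers by a common author but cannot by itself map those groups to a specific institutional scholar.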
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.