University libraries provide access to thousands of journals and spend millions of dollars annually on electronic resources. Because several commercial entities provide these resources, libraries often end up with siloed systems and processes for evaluating cost and usage, making meaningful analytics difficult. In this research, we examine a subset of journals from a large research library using a web analytics approach, with the goal of developing a framework for analyzing library subscriptions. We implement this foundational approach by comparing impact against cost, titles, and usage for the subset of journals and by assessing the funding area. Overall, the results highlight the benefit of a web analytics evaluation framework for university libraries and the impact of classifying titles by funding area. They also show statistically significant differences in both use and cost among the funding areas when titles are ranked by cost, eliminating the outliers of heavily used and highly expensive journals. Future work includes refining this model for larger-scale analysis that ties metrics to library organizational objectives and creating an online application to automate the analysis.
This research uses web analytics methodology to examine the usage of subscription databases at a major academic library. Our research goal is to develop key performance indicators with which academic libraries can evaluate the business value of their content collections. The library provides access to 1,447 databases, which received nearly 2.5 million customer visits in 2012 via the library's meta-search application used to search them. These visits therefore represent a substantial subset of the total traffic to the university's academic databases. The first level of analysis shows that the 20 most used databases account for over half the traffic to these databases. The second level of analysis categorizes these 20 heavily used databases by provider and compares them with the remaining 428 databases from the same providers. The results show how unequally traffic is distributed between the top databases and the remaining databases from these providers in the context of search. This inequality illustrates the extreme usefulness of select databases and suggests that less popular databases may be dispensable. The third level of analysis is a temporal evaluation of database demand over two semesters (spring and fall 2012), which showed no increase in demand over the course of a semester beyond the top 300 databases. We use this analysis as the starting point for formulating a set of web analytics metrics tailored to academic libraries.
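The traffic-concentration measurement described in the first two levels of analysis can be sketched as follows. This is a minimal illustration, not the study's actual code: the database names and visit counts below are hypothetical stand-ins for the real usage data.

```python
from collections import Counter

# Hypothetical visit counts per database. The real study observed
# ~2.5 million visits across 1,447 databases; these values are
# illustrative only.
visits = Counter({
    "Database A": 410_000,
    "Database B": 305_000,
    "Database C": 180_000,
    "Database D": 150_000,
    "Niche Database E": 1_200,
    "Niche Database F": 800,
})

def top_n_share(visits: Counter, n: int) -> float:
    """Fraction of total traffic captured by the n most-used databases."""
    total = sum(visits.values())
    top = sum(count for _, count in visits.most_common(n))
    return top / total

print(f"Top-4 share of traffic: {top_n_share(visits, 4):.1%}")
```

Applied to the full usage log, this kind of computation yields the abstract's finding that the top 20 databases account for over half of all search traffic.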
University libraries provide access to thousands of online journals and other content, spending millions of dollars annually on these electronic resources. Providing access to these online resources is costly, and it is difficult both to analyze the value of this content to the institution and to discern which journals comparatively provide more value. In this research, we examine 1,510 journals from a large research university library, representing more than 40% of the university's annual subscription cost for electronic resources at the time of the study. We use a web analytics approach to build a linear regression model that predicts usage among these journals. We categorize metrics into two classes: global (journal focused) and local (institution dependent). Using 275 journals as our training set, our analysis shows that a combination of global and local metrics creates the strongest model for predicting full-text downloads. Our linear regression model predicts downloads for the 1,235 journals in our test set with more than 80% accuracy. The implication is that university libraries that use local metrics have better insight into the value of a journal and can therefore manage content costs more efficiently.
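The train/test modeling workflow described above can be sketched with ordinary least squares. This is a hedged illustration using synthetic data: "impact" stands in for a global (journal-focused) metric and "clicks" for a local (institution-dependent) metric; neither the feature names nor the coefficients come from the study.

```python
import numpy as np

# Synthetic data mimicking the abstract's setup: 275 training journals
# and 1,235 test journals. All metric values are fabricated for
# illustration; only the split sizes come from the abstract.
rng = np.random.default_rng(0)
n_train, n_test = 275, 1235
n = n_train + n_test
impact = rng.uniform(0.5, 10.0, n)       # hypothetical global metric
clicks = rng.uniform(0.0, 5000.0, n)     # hypothetical local metric
downloads = 40 * impact + 0.9 * clicks + rng.normal(0, 50, n)

# Design matrix with an intercept column; fit OLS on the training set
X = np.column_stack([np.ones(n), impact, clicks])
coef, *_ = np.linalg.lstsq(X[:n_train], downloads[:n_train], rcond=None)

# Predict full-text downloads for the held-out test journals
pred = X[n_train:] @ coef
y_test = downloads[n_train:]
r2 = 1 - np.sum((pred - y_test) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
print(f"test R^2: {r2:.3f}")
```

Combining global and local feature columns in this way is the shape of the comparison the abstract reports, where the mixed model outperformed either metric class alone.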
This paper examines the decision points over ten years of developing an Institutional Repository. Specifically, the focus is on the impact and influence of the open-source community, the needs of the local institution, the role that team dynamics plays, and the chosen platform. Discussion frequently revolves around the technology stack and its capabilities and limitations. Any technology inherently has features and limitations, and these are important in determining a solution that will work for your institution. However, the people running the system and developing the software, and their enthusiasm for continuing to work within the existing software environment to provide features for your campus and the larger open-source community, will play a bigger role than the technical platform itself. We analyze these lenses at three points in time: the initial rollout of our Institutional Repository, its long-term operation and maintenance, and eventual new development, explaining why we made the decisions we did at each point.