Introduction. Over the past decade, data mining has been widely applied for specific purposes in various institutions. For library services, bibliomining is concisely defined as the application of data mining techniques to extract patterns of behaviour from library systems. The bibliomining process includes identifying a topic, creating a data warehouse, refining data, exploring data and evaluating results. Practical implementations and applications in different areas have shown that a sufficiently complete and consolidated data warehouse is critical to the success of data mining applications. However, creating such a warehouse depends heavily on database techniques and requires considerable information-engineering knowledge. This hampers librarians, who are typically not trained in database disciplines and have little database literacy, from applying data mining techniques flexibly to improve their work. Moreover, most commercial data mining tools are too complex for librarians to adopt for bibliomining in library services and operations.

Method. We applied a rapid-prototyping software development method to build the integrated system. The development team comprised one database designer, three librarians, one system analyst, two library domain experts and one programmer. The system was designed around library experts' views and librarians' command of their own domain knowledge.

Results. We propose a bibliomining application model and have developed an integrated system that lets librarians carry out library data mining operations easily and flexibly.

Conclusion. The primary job of bibliomining is to discover meaningful and useful information that aids library managers' decision making, so close attention must be paid to meeting their requirements. The integrated bibliomining system developed here meets this purpose and helps librarians do data mining work well.
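The bibliomining steps named above (create a data warehouse, refine the data, explore it) can be sketched minimally in plain Python; the circulation-record layout and field names below are illustrative assumptions, not the authors' actual schema.

```python
import re
from collections import Counter

# Hypothetical raw circulation export from a library system.
raw_circulation = [
    {"patron": "P1", "item": "B1", "date": "2024-01-05"},
    {"patron": "P1", "item": "B2", "date": "bad-date"},   # dirty record
    {"patron": "P2", "item": "B1", "date": "2024-02-11"},
]

# Steps 2-3: build a small "warehouse" table, refining out dirty rows.
warehouse = [r for r in raw_circulation
             if re.fullmatch(r"\d{4}-\d{2}-\d{2}", r["date"])]

# Step 4: explore the data -- count loans per item.
loans_per_item = Counter(r["item"] for r in warehouse)

print(loans_per_item.most_common(1))  # [('B1', 2)]
```

In a real deployment the refining and exploration steps would be far richer; the point of the integrated system described in the abstract is to hide exactly this database-level work from librarians.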
Purpose. In the digital library era, library websites are recognized as an extension of library services, and their usability and findability matter increasingly to patrons. However such websites are built, they should let patrons find the information they need quickly and intuitively. Website logs keep track of users' actual information-finding behaviour; based on this evidence, the author attempts to reconstruct websites to improve their internal findability.

Design/methodology/approach. In the past, the card sorting method has generally been applied to reconstruct websites to improve their internal findability. In this research, by contrast, a first attempt is made to use website log data for website reconstruction. The log data were cleaned and user sub-sessions extracted according to the critical time of session navigation. Each sub-session's threshold time on the target page was then calculated with different weights to determine the target page's navigating parent pages, and the differently weighted parent pages were used to reconstruct several candidate websites. A task-oriented experiment with four tasks and 25 participants was conducted to compare findability across the reconstructed websites.

Findings. Analysis of variance of task-completion times shows that the reconstructed websites outperform the current one in time spent to complete the tasks when the weighting focuses more strongly on the target pages. The result clearly shows that when the parent pages of a page are selected, whether that page is a target page is the most important factor in improving website findability; the target page plays a critical role in website reconstruction. The traditional card sorting method was also applied to reconstruct the case website, the findability experiment was repeated, and its task-completion times were compared with those of the log-based reconstructions. The approach proposed here performs better than card sorting.

Originality/value. Mining web log data to discover user behaviour on a library website, this research applies a heuristic method to the collected data to reconstruct websites. By focusing on target pages, the reconstructed websites achieve better findability. Beyond traditional card sorting, this paper provides an alternative way to reconstruct websites so that users can find what they need more conveniently and intuitively.
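The sub-session extraction step described in the methodology can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the 30-minute critical gap and the log layout are assumptions, and the paper's per-sub-session threshold-time weighting is omitted.

```python
# Split one user's ordered page-view log into sub-sessions whenever the
# gap between consecutive requests exceeds a critical navigation time.
CRITICAL_GAP = 30 * 60  # seconds; assumed value, not from the paper

def split_subsessions(events):
    """events: list of (timestamp_in_seconds, page) sorted by time."""
    sessions, current = [], []
    last_t = None
    for t, page in events:
        if last_t is not None and t - last_t > CRITICAL_GAP:
            sessions.append(current)   # gap too long: close sub-session
            current = []
        current.append(page)
        last_t = t
    if current:
        sessions.append(current)
    return sessions

log = [(0, "/home"), (40, "/catalog"), (3700, "/home"), (3760, "/hours")]
print(split_subsessions(log))  # [['/home', '/catalog'], ['/home', '/hours']]
```

The last page of each sub-session would then serve as a candidate target page, with its preceding pages as candidate parents.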
Concept maps can help students learn more meaningfully. In previous works, students were divided into high-score, middle-score and low-score groups according to test scores alone, and researchers then applied the association-rule data mining technique to each group's assessment data to construct corresponding concept maps. However, to evaluate students' performance states more accurately and to account for the various possible distributions of students' assessment data, this research applies the student-problem (S-P) chart to obtain students' response patterns for grouping. We generate six response-pattern groups from 30,131 students. Again using association-rule mining, we then construct more precise concept maps for the students of each group individually.
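The association-rule step can be illustrated with a toy support/confidence computation over wrong-answer records; the concept names, thresholds and data below are invented for illustration and do not come from the study.

```python
from itertools import combinations
from collections import Counter

# Each row: the set of concepts one student answered incorrectly (invented).
wrong = [{"fractions", "ratios"}, {"fractions", "ratios"},
         {"fractions"}, {"ratios", "decimals"}]

MIN_SUPPORT, MIN_CONF = 0.5, 0.6   # assumed thresholds
n = len(wrong)
single = Counter(c for row in wrong for c in row)
pair = Counter(frozenset(p) for row in wrong
               for p in combinations(sorted(row), 2))

rules = []
for p, cnt in pair.items():
    if cnt / n < MIN_SUPPORT:          # prune infrequent concept pairs
        continue
    a, b = tuple(p)
    for x, y in ((a, b), (b, a)):      # test both rule directions
        conf = cnt / single[x]
        if conf >= MIN_CONF:
            rules.append((x, y, conf)) # "missing x implies missing y"

print(rules)
```

Rules of the form "students who miss concept x also miss concept y" then become directed links in the group's concept map.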
The Pareto Principle, also known as the 80/20 rule, is an important and popular management rule applied to marketing and customer relationship management (CRM). It states that a vital few causes, inputs or efforts bring about most of the results, outputs or rewards. Analysing circulation data to understand how library collections are used can help libraries comprehend their patrons' behaviour. However, little research has analysed the circulation data of public libraries to reveal patrons' usage behaviours. This paper analyses the circulation data generated by a municipal public library in Taiwan to gauge whether the Pareto Principle manifests in this context. Using bibliomining analysis, the research further identifies the vital patrons and their characteristics, as well as the distribution of borrowed books, to help analyse patrons' borrowing behaviour and improve the efficiency of library management, marketing and CRM. The circulation data of the public library follow the Pareto Principle, approximating the 80/20 rule: when the cumulative percentage of patrons reaches 24.7 percent, the cumulative percentage of borrowed books is 75.3 percent. The vital few patrons borrow the majority of the collection. This is the first study to show that the Pareto Principle holds in the circulation data of a public library in Taiwan. The finding can help libraries identify vital patrons and major collections and improve the efficiency of their management and marketing activities. Whether the Pareto Principle also holds for other types of libraries would be worth exploring further.
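The 24.7/75.3 split reported above is the point where the cumulative patron share and the cumulative loan share are (nearly) complementary. A minimal sketch of finding that point, with invented per-patron loan counts rather than the library's real data:

```python
def pareto_point(loans_per_patron):
    """Rank patrons by borrowing volume and return the (patron %, loan %)
    pair whose two cumulative shares come closest to summing to 100,
    i.e. the x%-of-patrons / (100-x)%-of-loans Pareto point."""
    counts = sorted(loans_per_patron, reverse=True)
    total = sum(counts)
    n = len(counts)
    best, cum = None, 0
    for i, c in enumerate(counts, 1):
        cum += c
        p_pct, l_pct = 100 * i / n, 100 * cum / total
        if best is None or abs(p_pct + l_pct - 100) < abs(best[0] + best[1] - 100):
            best = (p_pct, l_pct)
    return best

# Invented data: a few heavy borrowers and a long tail of light ones.
print(pareto_point([50, 30, 10, 5, 3, 1, 1]))
```

On real circulation data this would reproduce a split like the 24.7/75.3 figure reported in the paper.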