We present a review of research studies that deal with personalization and synthesize current knowledge about these areas. We identify issues that we envision will be of interest to researchers working in the management sciences, taking an interdisciplinary approach that spans the areas of economics, marketing, information technology (IT), and operations research. We present a framework for personalization that allows us to identify key players in the personalization process as well as key stages of personalization. The framework enables us to examine the strategic role of personalization in the interactions between a firm and other key players in the firm's value system. We conceptualize the personalization process as consisting of three stages: (1) learning about consumer preferences, (2) matching offerings to customers, and (3) evaluation of the learning and matching processes. This review focuses on the learning stage, with an emphasis on utility-based approaches to estimate preference functions using data on customer interactions with a firm.

Keywords: Customization, Choice Models, Internet Marketing, Online Tracking, Learning Consumer Preferences, Recommendation Systems
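The utility-based learning stage described above can be illustrated with a minimal sketch: estimating a linear utility function from observed binary choices via logistic regression. The features, data, and learning rate here are hypothetical illustrations, not the specific estimation procedure from the review.

```python
import math

def fit_utility(interactions, steps=2000, lr=0.1):
    """Estimate a linear utility u(x) = sum(w_i * x_i) from choice data.

    interactions: list of (feature_vector, chose) pairs, where chose is 1
    if the customer accepted the offering and 0 otherwise.
    """
    n = len(interactions[0][0])
    w = [0.0] * n
    for _ in range(steps):
        for x, chose in interactions:
            u = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-u))       # logit choice probability
            for i in range(n):
                w[i] += lr * (chose - p) * x[i]  # gradient ascent step
    return w

# Toy data: customers respond to the first attribute (e.g. a price discount).
data = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]
w = fit_utility(data)
```

The fitted weights give a preference ordering over attributes; in practice one would use richer choice models and regularization, but the update rule above is the core of the utility-estimation idea.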
Although the relational model for databases provides a great range of advantages over other data models, it lacks a comprehensive way to handle incomplete and uncertain data. Uncertainty in data values, however, is pervasive in all real-world environments and has received much attention in the literature. Several methods have been proposed for incorporating uncertain data into relational databases. However, the current approaches have many shortcomings and have not established an acceptable extension of the relational model. In this paper, we propose a consistent extension of the relational model. We present a revised relational structure and extend the relational algebra. The extended algebra is shown to be closed, a consistent extension of the conventional relational algebra, and reducible to the latter.
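One common way such extensions work is to attach a probability to each tuple and propagate it through the algebra operators. The sketch below illustrates that general idea for selection and join; it is an assumption-laden toy, not the specific model proposed in the paper.

```python
# Tuples are paired with a membership probability. Selection preserves
# probabilities; join multiplies them, assuming tuple independence.

def select(relation, pred):
    """Keep qualifying tuples together with their probabilities."""
    return [(t, p) for t, p in relation if pred(t)]

def join(r, s, key_r, key_s):
    """Equi-join on positions key_r and key_s; probabilities multiply."""
    return [((*t, *u), p * q)
            for t, p in r for u, q in s
            if t[key_r] == u[key_s]]

# Hypothetical uncertain relations: employees and departments.
emp = [(("alice", "sales"), 0.9), (("bob", "it"), 0.6)]
dept = [(("sales", "nyc"), 1.0), (("it", "sfo"), 0.8)]
out = join(emp, dept, key_r=1, key_s=0)
```

Closure, as the abstract notes, means the result of each operator is again a relation of the same form, so operators compose freely.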
The focus of this research is to demonstrate how probabilistic models may be used to provide early warnings for bank failures. While prior research in the auditing literature has recognized the applicability of a Bayesian belief revision framework for many audit tasks, empirical evidence has suggested that auditors' cognitive decision processes often violate probability axioms. We believe that some of the well-documented cognitive limitations of a human auditor can be compensated for by an automated system. In particular, we demonstrate that a formal belief revision scheme can be incorporated into an automated system to provide reliable probability estimates for early warning of bank failures. The automated system examines financial ratios as predictors of a bank's performance and assesses the posterior probability of a bank's financial health (alternatively, financial distress). We examine two different probabilistic models: one simpler model that makes more assumptions, and another, somewhat more complex model that makes fewer. We find that both models are able to make accurate predictions with the help of historical data to estimate the required probabilities. In particular, the more complex model is found to be very well calibrated in its probability estimates. We posit that such a model can serve as a useful decision aid to an auditor's judgment process.

Keywords: Belief Revision, Calibration, Classification, Information Theory, Financial Distress, Audit
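The belief-revision scheme at the heart of such a system can be sketched as a Bayesian odds update: a prior probability of distress is revised as each financial-ratio signal is observed. The prior, likelihoods, and ratio categories below are hypothetical illustrations, not values from the study.

```python
def posterior_distress(prior, likelihood_ratios):
    """Update P(distress) given conditionally independent ratio signals.

    likelihood_ratios: list of (P(signal | distress), P(signal | healthy))
    pairs, one per observed financial-ratio signal.
    """
    odds = prior / (1.0 - prior)               # prior odds of distress
    for p_given_d, p_given_h in likelihood_ratios:
        odds *= p_given_d / p_given_h          # Bayes factor per signal
    return odds / (1.0 + odds)                 # back to a probability

# Hypothetical example: low capital adequacy and poor asset quality observed.
signals = [(0.8, 0.2),   # low capital ratio
           (0.7, 0.3)]   # high nonperforming loans
p = posterior_distress(prior=0.05, likelihood_ratios=signals)
```

The conditional-independence assumption corresponds to the simpler of the two models the abstract contrasts; relaxing it requires estimating joint likelihoods from historical data.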
The sharing of databases either within or across organizations raises the possibility of unintentionally revealing sensitive relationships contained in them. Recent advances in data-mining technology have increased the chances of such disclosure. Consequently, firms that share their databases might choose to hide these sensitive relationships prior to sharing. Ideally, the approach used to hide relationships should be impervious to as many data-mining techniques as possible, while minimizing the resulting distortion to the database. This paper focuses on frequent item sets, the identification of which forms a critical initial step in a variety of data-mining tasks. It presents an optimal approach for hiding sensitive item sets, while keeping the number of modified transactions to a minimum. The approach is particularly attractive as it easily handles databases with millions of transactions. Results from extensive tests conducted on publicly available real data and data generated using IBM’s synthetic data generator indicate that the approach presented is very effective, optimally solving problems involving millions of transactions in a few seconds.
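The hiding problem can be made concrete with a toy sketch: modify supporting transactions until the sensitive item set's support falls below the mining threshold. The greedy single-item removal below is a deliberate simplification for illustration, not the paper's optimal formulation.

```python
def hide_itemset(transactions, sensitive, min_support):
    """Drop one sensitive item from supporting transactions until
    support(sensitive) < min_support. Returns the number of
    transactions modified."""
    sensitive = set(sensitive)
    modified = 0
    for t in transactions:
        support = sum(1 for x in transactions if sensitive <= x)
        if support < min_support:
            break                              # item set is now hidden
        if sensitive <= t:
            t.discard(next(iter(sensitive)))   # remove one sensitive item
            modified += 1
    return modified

# Hypothetical transaction database; {"a", "b"} is the sensitive item set.
db = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "d"}, {"c", "d"}]
changed = hide_itemset(db, {"a", "b"}, min_support=2)
```

Minimizing `changed` while guaranteeing the support drop is the optimization the paper addresses; the greedy loop here only shows the effect of sanitization on support counts.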