In business, managers may use association information among products to define promotion and competitive strategies. Mining high-utility association rules (HARs) from high-utility itemsets enables users to weight rules by either utility or confidence, and it provides additional information that can help managers make better decisions. Several efficient methods for mining HARs have been developed in recent years. However, in some decision-support systems, users need only a minimal set of HARs for efficient use. This paper therefore proposes a method for the efficient mining of non-redundant high-utility association rules (NR-HARs). We first build a semi-lattice of mined high-utility itemsets and identify the closed and generator itemsets within it. An efficient algorithm is then developed for generating rules from the built lattice. The approach was evaluated on different types of datasets and shown to run faster than existing methods without requiring more memory. The proposed algorithm can be integrated into a variety of applications and combines well with external systems such as the Internet of Things (IoT) and distributed computing systems. Many companies apply the IoT and such computing systems to their business activities, data monitoring, and decision-making; data can be fed into the system continuously through the IoT or any other information system. Selecting an appropriate and fast approach helps management understand customer needs and make more timely decisions on business strategy.
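The rule-generation step can be pictured with a small sketch. The following Python fragment is illustrative only and is not the paper's algorithm: it assumes each closed high-utility itemset comes paired with its generators and emits the non-redundant rules generator => (closure \ generator), scoring them with a plain support-based confidence. The function name, the toy data, and the confidence measure are all assumptions made for the example.

```python
def non_redundant_rules(closed_itemsets, generators, support, min_conf=0.6):
    """Sketch: emit rules  generator => (closure \\ generator)  for each closed
    itemset, keeping only rules whose confidence clears min_conf.

    closed_itemsets : iterable of frozensets (closed high-utility itemsets)
    generators      : dict mapping each closed itemset to its generator itemsets
    support         : dict mapping itemset -> support (or a utility-based weight)
    """
    rules = []
    for closure in closed_itemsets:
        for gen in generators.get(closure, []):
            consequent = closure - gen
            if not consequent:
                continue  # a rule with an empty right-hand side carries no information
            conf = support[closure] / support[gen]
            if conf >= min_conf:
                rules.append((gen, consequent, conf))
    return rules

# Toy data: two closed itemsets with one generator each (illustrative only).
closed = [frozenset({"bread", "butter"}), frozenset({"bread", "butter", "milk"})]
gens = {closed[0]: [frozenset({"butter"})], closed[1]: [frozenset({"milk"})]}
supp = {closed[0]: 40, closed[1]: 25,
        frozenset({"butter"}): 50, frozenset({"milk"}): 30}

for lhs, rhs, conf in non_redundant_rules(closed, gens, supp):
    print(set(lhs), "=>", set(rhs), f"(conf={conf:.2f})")
```

Rules built from generators and their closures have minimal antecedents and maximal consequents, which is what keeps the resulting rule set free of redundant variants.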
Mining closed high-utility itemsets (CHUIs) serves as a compact and lossless representation of high-utility itemsets (HUIs). CHUIs and their generators are useful in analytical and recommendation systems. In this paper, we introduce a lattice-based approach to quickly extract CHUIs and their generators from a set of HUIs. Experimental results show that mining CHUIs and their generators from a lattice of HUIs is efficient in both runtime and memory usage.
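The definitions of closed itemsets and generators can be made concrete with a naive check. The sketch below is not the lattice-based method described above; it simply classifies each HUI by comparing the supports of its subsets and supersets, which is quadratic in the number of itemsets but easy to read (the function name and toy data are assumptions for illustration).

```python
def closed_and_generators(huis):
    """Naive sketch (not the lattice method): classify each high-utility
    itemset as closed and/or generator by pairwise comparison of supports.

    huis : dict mapping frozenset itemset -> support count
    Returns (closed_sets, generator_sets).
    """
    closed, generators = [], []
    for itemset, sup in huis.items():
        # Closed: no proper superset has the same support.
        if not any(itemset < other and huis[other] == sup for other in huis):
            closed.append(itemset)
        # Generator: no proper subset has the same support.
        if not any(other < itemset and huis[other] == sup for other in huis):
            generators.append(itemset)
    return closed, generators

# Example: {a} and {a,b} share support 3, so {a,b} is closed and {a} is its generator.
huis = {frozenset("a"): 3, frozenset("ab"): 3, frozenset("abc"): 2}
print(closed_and_generators(huis))
```

A lattice of HUIs avoids these pairwise scans by exposing parent/child links directly, which is where the runtime and memory advantages reported above come from.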
We present numerical estimates of the Hausdorff dimension $D$ of the largest cluster and its "backbone" in the percolation problem on a square lattice as a function of the concentration $p$. We find that $D$ is an approximately linear function of $p$ in the region near $p \approx p_c$ ($\approx 0.59$), with a dimension about equal to that of a self-avoiding walk when $p = 0.455$. The dimension of the backbone, or biconnected part, of the largest cluster equals that of the self-avoiding walk when $p \geq p_c$. At $p = p_c$ the dimension of the largest cluster equals the anomalous dimension introduced by Stanley et al.

In analysis of experiments on magnetic systems, Birgeneau et al. [1] and Stanley et al. [2] recently suggested a self-avoiding random walk (SAW) as a model for the largest cluster in a percolating net. Qualitative arguments were presented suggesting that the geometrical properties of the clusters are similar to those of the SAW. Here we study this interesting suggestion by estimating the dimension of each of the two structures numerically. The dimension we estimate is the Hausdorff-Besicovitch [3] dimension $D_H$, defined so that a particular structure is covered by a minimum of $N(\eta)$ disks of radius $\eta$ and $\lim_{\eta \to 0} N(\eta)\,\eta^{D_H}$ is finite. We have checked that direct application of this definition to numerically estimate $D_H$ for a self-avoiding walk in two dimensions gave results consistent with the value [4] $D_{\mathrm{SAW}} = 1.33$ obtained by finding the mean end-to-end distance $\langle r^2 \rangle$ as a function of the number of steps $n$. (Writing $\langle r^2 \rangle = n^{2\nu_s}$, it is easy to show that $D_{\mathrm{SAW}} = 1/\nu_s$.) We have also checked the value $D_{\mathrm{SAW}} = 1.33$ by application of the second method.

In the percolation system we estimate $D_H$ from the relation of the average size $n_c$ of the largest cluster to the total size $n^2$ of the percolating net. We find empirically that $n_c$ varies with $n$ as $n_c = K n^{2y}$, where $K$ and $y$ are constants (see Fig. 1). (This relation is expected when $na$ is less than the coherence length $\xi$.) We note that $n_c$ can also be regarded as an upper bound on the number of disks of size $\eta = a$ required to cover the largest cluster in a net of size $n^2$. For fixed $a$ and for each $n$ we introduce a change of length scale $x' = x/na$. In terms of this length scale, the covering of the largest cluster in the net of size $n^2$ is by disks of radius $\eta' = \eta/na = 1/n$. Thus each value of $n$ corresponds to a different covering. Using $n_c = K n^{2y}$ we have $N(\eta') = n_c = K n^{2y} = K(\eta')^{-2y}$, or $D = 2y$. By the argument just given, these $D$'s are an upper bound on the Hausdorff dimension. We do not have a quantitative estimate of the error involved in treating $D$ as an estimate for $D_H$. We have two indications that the error in using $D$ as
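The covering argument above reduces the dimension estimate to fitting the exponent in $n_c = K n^{2y}$. As a rough illustration of that procedure (not the authors' simulation), the sketch below assumes site percolation on square lattices, measures the mean size of the largest cluster over several lattice sizes, and takes the slope of the log-log fit as $2y = D$. The lattice sizes, trial counts, and the use of scipy.ndimage.label are choices made for the example.

```python
import numpy as np
from scipy import ndimage

def largest_cluster_size(n, p, rng):
    """Site percolation on an n x n square lattice at occupation probability p:
    return the size of the largest nearest-neighbour cluster."""
    occupied = rng.random((n, n)) < p
    labels, num = ndimage.label(occupied)         # 4-connected by default
    if num == 0:
        return 0
    return np.bincount(labels.ravel())[1:].max()  # skip background label 0

def estimate_dimension(p, sizes=(16, 32, 64, 128), trials=50, seed=0):
    """Fit  log <n_c>  against  log n ; the slope is 2y, taken here as an
    estimate of the cluster dimension D (an upper bound on D_H, as in the text)."""
    rng = np.random.default_rng(seed)
    mean_nc = [np.mean([largest_cluster_size(n, p, rng) for _ in range(trials)])
               for n in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(mean_nc), 1)
    return slope  # D = 2y

print("D estimate at p = 0.59:", estimate_dimension(0.59))
```

Well below $p_c$ the largest cluster stops growing with $n$ and the power-law fit loses meaning, which is consistent with the text's caveat that the relation holds only while $na$ is below the coherence length $\xi$.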