Clustering is the practice of dividing given data into similar groups and is one of the most widely used methods for unsupervised learning. Lee and Ouyang proposed a self-constructing clustering (SCC) method in which the similarity threshold, instead of the number of clusters, is specified in advance by the user. For a given set of instances, SCC performs only one training cycle on those instances. Once an instance has been assigned to a cluster, the assignment is never changed afterwards. As a result, the clusters produced may depend on the order in which the instances are considered, and assignment errors are more likely to occur. Also, all dimensions are equally weighted, which may not be suitable in certain applications, e.g., time-series clustering. In this paper, two improvements are proposed. First, two or more training cycles are performed on the instances, and an instance can be re-assigned to another cluster in each cycle; the clusters produced are thus less sensitive to the order in which the instances are presented. Second, each dimension of the input can be weighted differently in the clustering process, with the weight values adaptively learned from the data. A number of experiments with real-world benchmark datasets are conducted, and the results demonstrate the effectiveness of the proposed ideas.

Many types of clustering algorithms have been proposed [2,17]. Similarity or distance measures are their core components: similar data are grouped into the same cluster, while dissimilar or distant data are placed into different clusters [18]. Centroid-based clustering [19][20][21][22][23][24][25][26] groups data instances in an exclusive way; once an instance belongs to one cluster, it cannot be included in another. K-means is one such algorithm, well known in the AI community. To use it, the user has to provide the desired number of clusters, K, in advance. Each instance is assigned to the nearest cluster center, and then the K cluster centers are re-estimated. This process is repeated until the cluster centers are stable. The self-organizing map (SOM) employs a set of representatives. When a vector is presented, all representatives compete with each other, and the winner is updated so as to move toward the vector.

Hierarchical clustering [27][28][29][30][31][32] creates a hierarchical decomposition of the set of data instances according to some criterion. Two strategies, bottom-up and top-down, are adopted in hierarchical algorithms. The user usually has to decide how many, and which, clusters are most desirable from the offered hierarchy of clusters.

Distribution-based clustering [33][34][35][36][37][38] is based on distribution models. Fuzzy C-means uses fuzzy sets to cluster instances: data instances are bound to each cluster by means of a membership function, so each instance may belong to several clusters with different degrees of membership. The Gaussian mixture model with expectation maximization (GMM-EM) takes a fully probabilistic approach, in which each cluster is mathematically represented by a parametric distribution.
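As a minimal sketch of the multi-cycle, weighted-dimension idea described above, the following Python fragment performs several passes over the data and allows an instance to switch clusters in each pass. The weighted Euclidean distance, the fixed weight vector w, and the mean-based center update are illustrative assumptions for exposition; they are not Lee and Ouyang's exact SCC formulation or its adaptive weight-learning rule.

    # Sketch: multi-cycle re-assignment with per-dimension weights.
    # Illustrative only; not the exact SCC method or its weight learning.
    import numpy as np

    def weighted_distance(x, center, w):
        # Per-dimension weights w let some features matter more than others.
        return np.sqrt(np.sum(w * (x - center) ** 2))

    def reassign(X, centers, w, n_cycles=3):
        labels = np.zeros(len(X), dtype=int)
        for _ in range(n_cycles):                     # several passes over the data
            for i, x in enumerate(X):
                d = [weighted_distance(x, c, w) for c in centers]
                labels[i] = int(np.argmin(d))         # an instance may switch clusters
            for k in range(len(centers)):             # re-estimate centers each cycle
                members = X[labels == k]
                if len(members) > 0:
                    centers[k] = members.mean(axis=0)
        return labels, centers

Because every instance is revisited in each cycle, an early mis-assignment caused by an unlucky presentation order can be corrected in a later pass.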
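The K-means procedure described above (assign each instance to the nearest center, re-estimate the K centers, repeat until they stabilize) can be summarized by the following sketch. The random initialization and the convergence tolerance are common choices, not prescribed by the text.

    # Sketch of the standard K-means loop; K is supplied by the user.
    import numpy as np

    def kmeans(X, K, n_iter=100, tol=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=K, replace=False)]
        for _ in range(n_iter):
            # Assignment step: nearest center by Euclidean distance.
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # Update step: each center becomes the mean of its members.
            new_centers = np.array([X[labels == k].mean(axis=0)
                                    if np.any(labels == k) else centers[k]
                                    for k in range(K)])
            if np.linalg.norm(new_centers - centers) < tol:   # centers are stable
                break
            centers = new_centers
        return labels, centers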
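A single competitive step of an SOM-style update might look as follows. The learning rate lr is an assumed parameter, and the neighborhood updates that a full SOM also performs are omitted for brevity.

    # Sketch of one competitive SOM update: the closest representative wins
    # and is moved toward the presented vector.
    import numpy as np

    def som_step(representatives, x, lr=0.1):
        d = np.linalg.norm(representatives - x, axis=1)   # all representatives compete
        winner = d.argmin()
        representatives[winner] += lr * (x - representatives[winner])  # move toward x
        return winner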
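The membership function of fuzzy C-means, which binds each instance to every cluster with a degree in [0, 1], can be sketched with the standard update below; the fuzzifier m (assumed here to be 2) controls how soft the memberships are.

    # Sketch of the standard fuzzy C-means membership update:
    # u[i, k] = 1 / sum_j (d[i, k] / d[i, j]) ** (2 / (m - 1))
    import numpy as np

    def fcm_memberships(X, centers, m=2.0, eps=1e-12):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        power = 2.0 / (m - 1.0)
        ratios = (d[:, :, None] / d[:, None, :]) ** power
        return 1.0 / ratios.sum(axis=2)   # rows sum to 1 across clusters

Each row of the returned matrix gives one instance's degrees of membership, so an instance near two centers receives substantial membership in both clusters.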
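For GMM-EM, the probabilistic binding of instances to clusters appears in the E-step, sketched below under the simplifying assumption of diagonal covariances; the function and parameter names are illustrative.

    # Sketch of a GMM E-step with diagonal covariances: each cluster is a
    # parametric (Gaussian) distribution, and an instance's "responsibilities"
    # are its posterior probabilities over the clusters.
    import numpy as np

    def responsibilities(X, means, variances, weights):
        # Log of mixture weight plus diagonal-Gaussian log density, per cluster.
        log_p = (np.log(weights)[None, :]
                 - 0.5 * np.sum(np.log(2 * np.pi * variances)[None, :, :]
                                + (X[:, None, :] - means[None, :, :]) ** 2
                                / variances[None, :, :], axis=2))
        log_p -= log_p.max(axis=1, keepdims=True)   # for numerical stability
        p = np.exp(log_p)
        return p / p.sum(axis=1, keepdims=True)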