In the big data era, real-world classification problems face several challenges. Among the important obstacles that must still be overcome to build an accurate classification model are data imbalance, the difficulty of the labeling process, and differences in data distribution. Many classification problems involve both differing data distributions and a lack of labels in some datasets, while related datasets have abundant labels. To address this problem, this paper proposes a weighted-based feature-transfer learning (WbFTL) method that transfers knowledge between different but related domains, a setting called cross-domain learning. Knowledge is transferred by constructing a new feature representation that reduces the cross-domain distribution differences while preserving the local structure of each domain. To build the new feature representation, we apply feature selection and an inter-cluster class distance. We propose a two-stage feature selection process to capture the knowledge carried by each feature and its relation to the label. The first stage selects features using a threshold; the second stage uses ANOVA (Analysis of Variance) to retain features that are significant with respect to the label. To enhance accuracy, the selected features are weighted before being used to train an SVM classifier. The proposed WbFTL is compared against 1-NN (baseline 1) and PCA (baseline 2), which represent traditional machine learning and dimensionality reduction without transfer learning, and against TCA (baseline 3), the first feature-transfer learning work on the same task. Experimental results on 12 cross-domain tasks over the Office and Caltech datasets show that the proposed WbFTL improves average accuracy by 15.25%, 6.83%, and 3.59% over baselines 1, 2, and 3, respectively.
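To make the pipeline described above concrete, the sketch below illustrates the two-stage feature selection followed by feature weighting and SVM training, using scikit-learn. This is a minimal illustration, not the authors' implementation: the abstract does not specify the stage-1 threshold criterion or the exact weighting scheme, so a variance threshold and normalized ANOVA F-scores are used here as assumed stand-ins.

```python
# Minimal sketch of a two-stage feature selection + weighting + SVM pipeline.
# Assumptions (not specified in the abstract): stage 1 uses a variance
# threshold, and feature weights are normalized ANOVA F-scores.
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif
from sklearn.svm import SVC

def two_stage_select_and_train(X_source, y_source, X_target, k=100):
    # Stage 1: threshold-based selection (assumed: drop low-variance features).
    stage1 = VarianceThreshold(threshold=0.01)
    Xs = stage1.fit_transform(X_source)
    Xt = stage1.transform(X_target)

    # Stage 2: ANOVA F-test keeps features significantly related to the label.
    stage2 = SelectKBest(score_func=f_classif, k=min(k, Xs.shape[1]))
    Xs = stage2.fit_transform(Xs, y_source)
    Xt = stage2.transform(Xt)

    # Weight the selected features; scaling by normalized F-scores is one
    # plausible choice (the paper's actual weighting may differ).
    weights = stage2.scores_[stage2.get_support()]
    weights = weights / weights.max()
    Xs_w, Xt_w = Xs * weights, Xt * weights

    # Train an SVM on the weighted source features, predict target labels.
    clf = SVC(kernel="linear").fit(Xs_w, y_source)
    return clf.predict(Xt_w)
```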