Many digital painting image resources cannot be converted directly into electronic form because of differences in painting technique and the poor preservation of the original works, and the difficulty of extracting classification features wastes human effort and leads to misclassification. This study addresses these challenges with the aim of improving the practicality and accuracy of painting image classification. Overcoming them with the help of artificial intelligence (AI) techniques is essential: existing classification methods perform well in other fields, but research on painting classification remains limited, and better management of painting collections requires advanced intelligent algorithms for tasks such as feature recognition and image classification, which in turn enable unlabeled classification of massive painting image collections and point to future research directions. The study proposes an image classification model based on AI stroke features. Edge detection and grayscale image feature extraction are used to extract stroke features; a convolutional neural network (CNN) and a support vector machine are introduced into image classification, and an improved LeNet-5 CNN is proposed to ensure comprehensive image feature extraction. To account for the diversity of painting image features, color features are combined with stroke features, and a weighted K-means clustering algorithm is used to extract sample features. Experiments show that the proposed K-CNN hybrid model achieves an accuracy of 94.37% in extracting image information, higher than the 78.24%, 85.69%, and 86.78% achieved by the C4.5, K-Nearest Neighbor (KNN), and Bidirectional Long Short-Term Memory (BiLSTM) algorithms. For image classification information recognition, the models rank, from best to worst, as hybrid model > BiLSTM > KNN > C4.5, with accuracies of 0.938, 0.897, 0.872, and 0.851, respectively. The hybrid model also shows fewer fluctuation nodes, and its sample search time is significantly shorter than that of the comparison algorithms, with a maximum recognition accuracy of 92.64% for the style, content, color, texture, and direction features of an image, allowing image contrast and discrimination to be recognized effectively. The method thus provides a new technical means and research direction for digitizing image information.
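The abstract names the building blocks of the feature pipeline (grayscale conversion, edge detection for stroke features, color features, and weighted K-means clustering) but gives no implementation details. The Python sketch below shows one plausible way such a pipeline could be wired together; it assumes OpenCV and scikit-learn, and the Canny thresholds, the 8-bin orientation histogram, and the feature-group weights are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch only: stroke + color feature extraction followed by a
# simple feature-weighted K-means step. The paper's exact operators and
# weighting scheme are not specified in the abstract.
import cv2
import numpy as np
from sklearn.cluster import KMeans


def stroke_features(image_bgr: np.ndarray) -> np.ndarray:
    """Grayscale- and edge-based descriptor approximating 'stroke' cues."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                    # binary edge map
    edge_density = edges.mean() / 255.0                  # overall stroke density
    # Orientation histogram of gradients on edge pixels (coarse stroke direction).
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angles = np.arctan2(gy, gx)[edges > 0]
    orient_hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
    orient_hist = orient_hist / max(orient_hist.sum(), 1)
    return np.concatenate([[edge_density], orient_hist])  # 9-D stroke vector


def color_features(image_bgr: np.ndarray) -> np.ndarray:
    """Normalized 8x8x8 BGR color histogram (512-D)."""
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256]).flatten()
    return hist / max(hist.sum(), 1)


def weighted_kmeans_features(images, k=5, stroke_weight=2.0, color_weight=1.0):
    """Scale the two feature groups, then cluster with standard K-means.

    'Weighted' is interpreted here as weighting feature groups before
    clustering; the paper's actual weighting scheme may differ.
    """
    X = np.array([
        np.concatenate([stroke_weight * stroke_features(img),
                        color_weight * color_features(img)])
        for img in images
    ])
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    return X, km.labels_
```

The cluster labels (or the weighted feature vectors themselves) could then be used as the sample features described in the abstract; which of the two the authors feed into the downstream classifier is not stated.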
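The abstract also refers to an "improved LeNet-5" CNN without specifying the improvements. As a point of reference, the following PyTorch sketch shows the classic LeNet-5 layout for 32x32 single-channel inputs, with the number of painting classes left as a parameter; any of the paper's modifications (input size, activations, coupling with the support vector machine) would be layered on top of something like this.

```python
# Baseline LeNet-5-style CNN in PyTorch; a reference layout, not the paper's
# "improved" variant, whose changes are not described in the abstract.
import torch
import torch.nn as nn


class LeNet5(nn.Module):
    def __init__(self, num_classes: int = 10, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                            # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # 14x14 -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                            # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = LeNet5(num_classes=8)
    dummy = torch.randn(4, 1, 32, 32)   # batch of 4 grayscale 32x32 crops
    print(model(dummy).shape)           # torch.Size([4, 8])
```

One common way to combine such a CNN with a support vector machine, as the abstract mentions in passing, is to use the penultimate (84-dimensional) activations as features for an SVM classifier; whether that is the coupling used in the paper is not stated.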