In recent years, the online selection of virtual clothing styles has become a way to explore and expand personal aesthetics, and it also poses a broad challenge to the clothing industry. Building on the existing clothing style categories, this paper proposes a style classification method that combines fine-grained and coarse-grained techniques. A new deep neural network is also proposed that improves recognition robustness and avoids interference from the image background through pan learning and background learning of image features. To study the relationship between fine-grained clothing attributes and overall style, clothing types are first learned in order to pre-train the model parameters. Second, the pre-trained parameters from the first stage are transferred and fine-tuned so that they become better suited to identifying coarse-grained style types. Finally, a network structure based on a dual attention mechanism is proposed: different attention mechanisms are added at different stages of the network to strengthen its feature representations and improve the final recognition accuracy. In the experiments, we collected 50,000 images covering 10 clothing styles to train and evaluate the models. The results show that the proposed classification method can effectively distinguish clothing styles and types.
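The dual attention idea can be illustrated with a minimal NumPy sketch. This is not the paper's actual network: the function names and the parameter-free sigmoid gating are illustrative assumptions. Channel attention reweights feature channels at an earlier stage, and spatial attention reweights locations at a later stage.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Reweight each channel by a gate computed from its global average
    (a squeeze-and-excite style gate; a trained module would insert small
    fully connected layers here)."""
    gate = sigmoid(feat.mean(axis=(1, 2)))        # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Reweight each spatial location by a gate computed from the
    channel-averaged response at that location."""
    gate = sigmoid(feat.mean(axis=0))             # (H, W)
    return feat * gate[None, :, :]

# Dual attention: channel attention at an earlier stage of the network,
# spatial attention at a later one.
feat = np.random.randn(8, 4, 4)                   # (C, H, W) feature map
refined = spatial_attention(channel_attention(feat))
print(refined.shape)  # (8, 4, 4)
```

Because both gates lie in (0, 1), the module rescales features rather than replacing them; in a trained network the gates would be produced by learned layers.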
Through an analysis of facial feature extraction technology, this paper designs a lightweight convolutional neural network (LW-CNN). The LW-CNN model adopts a separable convolution structure, which extracts more accurate features with fewer parameters and can recover 3D feature points of a human face. To improve feature extraction accuracy, a face detection method based on an inverted-triangle structure is used to locate the face frame in the training images before the model extracts features. To address the problem that feature extraction algorithms based on the difference criterion cannot effectively extract discriminative information, the Generalized Multiple Maximum Dispersion Difference criterion (GMMSD) and a corresponding feature extraction algorithm are proposed. The algorithm uses the difference criterion instead of the entropy criterion to avoid the "small sample" problem, and QR decomposition extracts more effective discriminative features for facial recognition while reducing the computational complexity of feature extraction. Compared with traditional feature extraction methods, GMMSD requires no preprocessing of the samples; it applies QR decomposition directly to the original samples and preserves their distribution characteristics. With different transformation matrices, GMMSD can evolve into different feature extraction algorithms, which demonstrates its generalized character. Experiments show that GMMSD effectively extracts facial identification features and improves the accuracy of facial recognition.
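A scatter-difference criterion with a QR-based reduction can be sketched in NumPy. This is a simplified reading of the abstract, not the paper's GMMSD: the exact transformation matrices are not given, so the criterion below (maximize tr(Wᵀ(Sb − c·Sw)W) after projecting onto a QR basis of the sample span) and all names are assumptions. The QR step keeps the problem well posed even when there are fewer samples than dimensions, which is the "small sample" situation the abstract mentions.

```python
import numpy as np

def scatter_difference_features(X, y, n_components=1, c=1.0):
    """Extract discriminant directions via a scatter-difference criterion.
    The difference criterion needs no inversion of the within-class scatter,
    so the small-sample singularity problem does not arise."""
    mean = X.mean(axis=0)
    # QR of the centered data (transposed) gives an orthonormal basis of the
    # sample span; projecting onto it preserves the scatter structure.
    Q, _ = np.linalg.qr((X - mean).T)         # (d, r)
    Z = (X - mean) @ Q                        # reduced samples, (n, r)
    r = Z.shape[1]
    Sb = np.zeros((r, r))
    Sw = np.zeros((r, r))
    for cls in np.unique(y):
        Zc = Z[y == cls]
        mc = Zc.mean(axis=0)                  # class mean (global mean is 0)
        Sb += len(Zc) * np.outer(mc, mc)
        Sw += (Zc - mc).T @ (Zc - mc)
    # Difference criterion: an ordinary symmetric eigenproblem.
    vals, vecs = np.linalg.eigh(Sb - c * Sw)
    W = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return Q @ W                              # back to input space, (d, k)

# Toy check: 2 classes in 10-D with only 6 samples (fewer samples than dims).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (3, 10)), rng.normal(4, 1, (3, 10))])
y = np.array([0, 0, 0, 1, 1, 1])
P = scatter_difference_features(X, y)
print(P.shape)  # (10, 1)
```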
In this paper, the latest virtual reconstruction technology is used to conduct an in-depth study of 3D movie animation image acquisition and feature processing. We first propose a time-division multiplexing method based on subpixel multiplexing to improve the resolution of images reconstructed by integral imaging. By studying the degradation in the reconstruction process of a 3D integral imaging system, we propose to improve the display resolution by increasing the pixel information carried by a fixed display array unit. Based on subpixel multiplexing, an algorithm reuses the pixel information of the 3D scene's element images to generate an element image array carrying new information; this array is then output rapidly on a high-frame-rate light-emitting-diode (LED) screen so that, through the persistence of vision, the whole group of element image arrays is perceived as a single planar display. In this way, the information capacity of the finite display array is increased and the display resolution of the reconstructed image is improved. For face reconstruction, we first use a classification algorithm to determine the gender and expression attributes of the face in the input image and filter a corresponding subset of 3D face data from the database according to those attributes; we then use sparse representation theory to select prototype faces similar to the target face from this subset and use the selected prototype samples to construct a sparse deformation model. Finally, the target 3D face is reconstructed by matching the model to the feature points of the target face. The experimental results show that the algorithm reconstructs faces with high realism and accuracy and can also reconstruct expressive faces.
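The multiplexing step can be illustrated with a toy NumPy sketch. The function names and the k × k interleaving scheme are assumptions for illustration, not the paper's algorithm: a high-resolution scene is split into subpixel-shifted low-resolution element images, and the sketch verifies that this group of images jointly carries the full-resolution content that rapid sequential display lets the eye integrate.

```python
import numpy as np

def subpixel_shifts(hi_res, k=2):
    """Split one high-resolution image into k*k low-resolution element
    images by sampling interleaved, subpixel-shifted grids."""
    return [hi_res[i::k, j::k] for i in range(k) for j in range(k)]

def reassemble(frames, k=2):
    """Invert the split: interleave the k*k element images back onto the
    high-resolution grid (the information the viewer's eye integrates
    when the frames are displayed in rapid succession)."""
    h, w = frames[0].shape
    out = np.zeros((h * k, w * k))
    for idx, f in enumerate(frames):
        i, j = divmod(idx, k)
        out[i::k, j::k] = f
    return out

hi = np.arange(16.0).reshape(4, 4)        # toy "high-resolution" scene
frames = subpixel_shifts(hi, k=2)         # four 2x2 element images
fused = reassemble(frames, k=2)           # full resolution recovered
print(fused.shape)  # (4, 4)
```

Each element image fits the fixed low-resolution display, yet the sequence as a whole loses nothing: reassembling the four frames reproduces the original scene exactly.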