To further improve the accuracy and efficiency of network information security situation prediction, this study used the dynamic equal-dimensional method based on gray correlation analysis to improve the GM (1, N) model and tested the resulting network security situation prediction (NSSP) model in a simulated network environment. The predictions of the improved GM (1, N) model were closer to the actual values: taking the 11th hour as an example, its predicted value of 28.1524 exceeded the actual value by only 0.8983. Compared with neural network and Markov models, the improved GM (1, N) model also had a smaller error: its average error of 2.3811 was 67.88% and 70.31% lower than those of the other two models, respectively. Its time complexity was 49.99% and 39.53% lower than that of the neural network and Markov models, respectively, giving it high computational efficiency. The experimental results verify the effectiveness of the improved GM (1, N) model in solving the NSSP problem. The improved GM (1, N) model can be further promoted in practice and deployed in school and enterprise networks to help secure network information.
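The core ideas above can be illustrated with a small sketch. The details of the paper's GM (1, N) variant are not given here, so this sketch uses the univariate GM(1,1) special case plus a rolling window to mimic the dynamic equal-dimensional idea (after each forecast, the predicted value is appended and the oldest point dropped, so the model dimension stays fixed); the function names and window size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gm11_fit_predict(x0):
    """Fit a GM(1,1) grey model to the series x0 and predict the next value."""
    n = len(x0)
    x1 = np.cumsum(x0)                        # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])             # mean-generated background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # development and grey-input coefficients
    # Time-response function of the accumulated series
    x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
    return x1_hat(n) - x1_hat(n - 1)          # inverse AGO recovers the next raw value

def rolling_predict(series, window, steps):
    """Dynamic equal-dimensional forecasting: after each prediction, append
    the forecast and drop the oldest point so the window size never grows."""
    buf = list(series[-window:])
    preds = []
    for _ in range(steps):
        nxt = gm11_fit_predict(np.array(buf, dtype=float))
        preds.append(nxt)
        buf = buf[1:] + [nxt]                 # keep the dimension constant
    return preds
```

On roughly exponential data (the regime grey models are designed for), the one-step forecast tracks the series closely, which is consistent with the small per-hour errors reported above.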
Recently, predicting the labels that co-occur in a picture has become a popular strategy in multi-label image recognition. Previous work has concentrated on capturing label correlation but has neglected to properly fuse image features and label embeddings, which substantially affects a model's convergence efficiency and limits further improvement in multi-label image recognition accuracy. To better classify labeled training samples of the corresponding categories in image classification, a cross-modal multi-label image classification modeling and recognition method based on nonlinear mapping is proposed. Two multi-label classification models based on deep convolutional neural networks are constructed. The visual classification model uses natural images and single-label simple biomedical images to achieve heterogeneous and homogeneous transfer learning, capturing both the general features of the general domain and the proprietary features of the biomedical domain, while the text classification model uses the description text of simple biomedical images to achieve homogeneous transfer learning. The experimental results show that the multi-label classification model combining the two modalities achieves a Hamming loss close to the best performance on the evaluation task, and the macro-average F1 value increases from 0.20 to 0.488, about 52.5% higher. The cross-modal multi-label image classification algorithm better alleviates overfitting on majority classes and delivers better cross-modal retrieval performance. In addition, the effectiveness and rationality of the two cross-modal mapping techniques are verified.
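The two evaluation metrics cited above, Hamming loss and macro-average F1, are standard for multi-label classification and can be computed directly from binary label matrices. A minimal sketch (the toy label matrices are illustrative, not the paper's data):

```python
import numpy as np

def hamming_loss(y_true, y_pred):
    """Fraction of label slots predicted incorrectly (lower is better)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-label F1 scores. Because every label counts
    equally, rare labels drag the score down, which is why macro-F1 exposes
    overfitting to majority classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    f1s = []
    for j in range(y_true.shape[1]):          # one F1 per label column
        tp = np.sum((y_true[:, j] == 1) & (y_pred[:, j] == 1))
        fp = np.sum((y_true[:, j] == 0) & (y_pred[:, j] == 1))
        fn = np.sum((y_true[:, j] == 1) & (y_pred[:, j] == 0))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(f1s))
```

A rise in macro-F1 from 0.20 to 0.488 at a similar Hamming loss therefore indicates that the gains come largely from the harder, less frequent labels rather than from the already easy majority classes.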