Reconstructing the complete light field is difficult, and a reconstructed light field can typically recognize only specific fixed targets, which limits the practical applications of light fields. To solve these problems, this article introduces multi-perspective distributed information fusion into light field reconstruction to monitor and recognize maneuvering targets. First, the light field is represented as sub-light fields at different perspectives (i.e., a multi-sensor distributed network), on which sparse representation and reconstruction are performed. Second, multi-perspective distributed information fusion is established under regional full-coverage constraints. Finally, the light field data from multiple perspectives are fused and the states of the maneuvering targets are estimated. Experimental results show that the light field reconstruction time of the proposed method is less than 583 s and the reconstruction accuracy exceeds 92.447%, outperforming existing methods such as the spatially variable bidirectional reflectance distribution function and the micro-lens array. For maneuvering target recognition, the recognition time of the proposed algorithm is no more than 3.5 s and its recognition accuracy reaches up to 86.739%; moreover, the more viewing angles are used, the higher the accuracy.
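The abstract does not give the sparse representation procedure. As one illustration, each perspective's sub-light field could be sparsely coded against an overcomplete dictionary with a greedy solver such as orthogonal matching pursuit; the dictionary `D`, sparsity level `k`, and the choice of OMP below are assumptions for the sketch, not details from the paper.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse code x with y ~ D @ x.
    D has unit-norm columns (atoms); y is the observed signal."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# tiny demo: recover a 2-sparse signal under a trivial (identity) dictionary
D = np.eye(6)
x_true = np.zeros(6)
x_true[1], x_true[4] = 2.0, 3.0
x_hat = omp(D, D @ x_true, k=2)
```

In the noiseless case with a well-conditioned dictionary, the recovered code matches the true sparse coefficients exactly.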
Compared with traditional imaging, the light field contains more comprehensive image information and yields higher image quality. However, the data available for light field reconstruction are limited, and repeated calculation of data seriously affects both the accuracy and the real-time performance of multiperspective light field reconstruction. To solve these problems, this paper proposes a multiperspective light field reconstruction method based on transfer reinforcement learning. First, a similarity measurement model is established: according to the similarity between the source domain and the target domain relative to a threshold, either the reinforcement learning model or the feature transfer learning model is selected autonomously. Second, the reinforcement learning model is established. It uses multiagent (i.e., multiperspective) Q-learning to learn the feature set in which the target domain is most similar to the source domain and feeds it back to the source domain, which enlarges the source-domain sample set and improves the accuracy of light field reconstruction. Finally, the feature transfer learning model is established. It uses PCA to obtain the maximum-variance embedding space of the source-domain and target-domain features and maps similar features into a new space for label data migration, which resolves multiperspective data redundancy and repeated calculation and improves the real-time performance of maneuvering target recognition. Extensive experiments on the PASCAL VOC datasets demonstrate the effectiveness of the proposed algorithm compared with existing algorithms.
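The abstract does not specify the similarity measure, the threshold value, or the PCA mapping in detail. A minimal sketch of the two ingredients is shown below, assuming cosine similarity between domain mean features and an SVD-based PCA projection into a shared subspace; the decision rule and threshold are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_model(source_feats, target_feats, threshold=0.8):
    """Hypothetical decision rule: similar domains -> feature transfer,
    dissimilar domains -> reinforcement learning."""
    sim = cosine_similarity(source_feats.mean(axis=0), target_feats.mean(axis=0))
    return "feature_transfer" if sim >= threshold else "reinforcement_learning"

def pca_embed(source_feats, target_feats, dim=1):
    """Project both domains into a shared maximum-variance (PCA) subspace."""
    pooled = np.vstack([source_feats, target_feats])
    centered = pooled - pooled.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    W = Vt[:dim].T  # top principal directions as a projection matrix
    return source_feats @ W, target_feats @ W

# toy domains: the target is either a small perturbation or a sign flip of the source
src = np.array([[1.0, 0.0], [1.0, 0.2], [0.9, -0.2]])
choice_similar = select_model(src, src + 0.05)
choice_dissimilar = select_model(src, -src)
s_emb, t_emb = pca_embed(src, -src, dim=1)
```

With nearly identical domains the rule picks the feature transfer branch; with opposed feature distributions it falls back to reinforcement learning.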
Effective analysis of abnormal human behavior can serve as a warning signal before emergencies. At present, however, most abnormal-behavior detection relies on manual monitoring, which is subjective and lacks timeliness. In response to these problems, this paper proposes a multistage analysis method for abnormal human behavior in complex scenes. The method first coarsely distinguishes abnormal behavior within a large monitoring area using a similarity measure applied to the social force model, after which precise analysis is conducted. The multistage analysis, based on the three-frame difference algorithm, is used for intrusion detection, left-behind baggage detection, and motion trajectory identification. Experimental results demonstrate the superiority of the proposed method on the UMV and CAVIAR datasets. To demonstrate its adaptability and generalization ability, the method is further tested on the CVC and JAAD driving anomaly detection datasets, where it is also superior to existing methods.
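The three-frame difference algorithm named above is a standard motion detector: a pixel is marked as moving only if it changes in both consecutive frame pairs, which suppresses the "ghosting" of simple two-frame differencing. A minimal sketch follows; the difference threshold is an assumed value, not one taken from the paper.

```python
import numpy as np

def three_frame_difference(f1, f2, f3, thresh=20):
    """Motion mask for the middle frame: a pixel is flagged only if it
    differs beyond `thresh` in BOTH (f1,f2) and (f2,f3) frame pairs."""
    # cast to int before subtracting to avoid uint8 wraparound
    d1 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    d2 = np.abs(f3.astype(int) - f2.astype(int)) > thresh
    return d1 & d2

# toy sequence: a bright pixel moves from (2,2) to (2,3) across three frames
f1 = np.zeros((5, 5), dtype=np.uint8)
f2 = f1.copy(); f2[2, 2] = 255
f3 = f1.copy(); f3[2, 3] = 255
mask = three_frame_difference(f1, f2, f3)
```

The mask fires only at the object's position in the middle frame, since that is the only pixel changing in both frame pairs.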
The lack of sample data and the limited visual range of a single agent during light field reconstruction hamper the recognition of maneuvering targets. To address these problems, this paper introduces generative adversarial nets (GAN) into light field reconstruction and proposes a GAN-based multiagent light field reconstruction and target recognition method. The proposed algorithm exploits the data generation and data augmentation capabilities of the GAN, which greatly improves the accuracy of light field reconstruction. The consistency mean of all observations is obtained through multiagent data fusion, which ensures the reliability of the sample data and the continuity of maneuvering target recognition. Experimental results show that the light field reconstruction accuracy reaches 94.552% and the maneuvering target recognition accuracy reaches 84.267%; moreover, the more agents are used, the shorter the recognition time.
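The abstract states that a consistency mean of all observations is obtained by multiagent data fusion but gives no update equations. A common way to realize this is distributed average consensus, where each agent repeatedly moves toward its neighbors' values until all agents agree on the global mean; the communication graph, step size, and iteration count below are assumptions for illustration.

```python
import numpy as np

def consensus_mean(observations, adjacency, steps=200, eps=0.2):
    """Distributed average consensus: each agent updates toward the sum of
    differences with its neighbors; for a connected graph and a small enough
    step size, all agents converge to the mean of the initial observations."""
    x = np.array(observations, dtype=float)
    A = np.array(adjacency, dtype=float)
    deg = A.sum(axis=1)  # number of neighbors per agent
    for _ in range(steps):
        x = x + eps * (A @ x - deg * x)  # x += eps * sum_j (x_j - x_i)
    return x

# four agents on a ring, each holding one noisy scalar observation
ring = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
fused = consensus_mean([1.0, 2.0, 3.0, 4.0], ring)
```

No agent ever sees all observations directly, yet every agent's value converges to the global mean (here 2.5), which is the "consistency mean" used for fusion.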