This paper addresses real-time, high-accuracy moving object detection in high-resolution video frames. A previously developed framework for moving object detection is modified to enable real-time processing of high-resolution images. First, a computationally efficient method is employed that detects moving regions on a resized image and maps their coordinates back to the original image. Second, a lightweight backbone deep neural network is used in place of a more complex one. Third, the focal loss function is employed to alleviate the imbalance between positive and negative samples. Extensive experiments indicate that the modified framework achieves a processing rate of 21 frames per second with 86.15% accuracy on the SimitMovingDataset, which contains high-resolution images of size 1920 × 1080.
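To illustrate the first and third modifications, the following sketch shows detection on a downscaled frame with the resulting boxes rescaled to the original resolution, together with a standard binary focal loss (Lin et al., 2017). The detector callable, the scale factor, and the loss hyperparameters are illustrative assumptions, not the paper's implementation.

    import cv2
    import torch
    import torch.nn.functional as F

    def detect_on_resized(frame, detector, scale=0.25):
        # Run the (hypothetical) detector on a downscaled copy of the
        # frame, then map each box back to original-image coordinates.
        small = cv2.resize(frame, None, fx=scale, fy=scale)
        boxes_small = detector(small)  # boxes as (x1, y1, x2, y2)
        return [tuple(int(round(c / scale)) for c in box)
                for box in boxes_small]

    def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        # Binary focal loss: down-weights easy (mostly negative) samples
        # so the positive/negative imbalance does not dominate training.
        ce = F.binary_cross_entropy_with_logits(logits, targets,
                                                reduction="none")
        p = torch.sigmoid(logits)
        p_t = p * targets + (1 - p) * (1 - targets)
        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
        return (alpha_t * (1 - p_t) ** gamma * ce).mean()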
This paper presents the simultaneous use of video images and inertial signals, captured at the same time by a video camera and a wearable inertial sensor, within a fusion framework to achieve more robust human action recognition than either sensing modality used individually. The captured data are converted into 3D video images and 2D inertial images, which are then fed into a 3D convolutional neural network and a 2D convolutional neural network, respectively, for action recognition. Two types of fusion are considered: decision-level fusion and feature-level fusion. Experiments are conducted on the publicly available UTD-MHAD dataset, in which simultaneous video images and inertial signals are captured for a total of 27 actions. The results indicate that both fusion approaches yield higher recognition accuracies than either sensing modality used individually. The highest accuracy, 95.6%, is obtained with the decision-level fusion approach.
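The two fusion schemes can be sketched as follows, assuming each network exposes class logits and penultimate-layer features; the weighted-average fusion rule and the concatenation-plus-classifier head are assumptions for illustration, not the paper's exact design.

    import torch

    def decision_level_fusion(logits_video, logits_inertial, w=0.5):
        # Decision-level fusion: each network classifies independently,
        # and the softmax score vectors are combined (here, a weighted
        # average, an assumed rule) before taking the top class.
        p_v = torch.softmax(logits_video, dim=-1)
        p_i = torch.softmax(logits_inertial, dim=-1)
        return (w * p_v + (1 - w) * p_i).argmax(dim=-1)

    def feature_level_fusion(feat_video, feat_inertial, classifier):
        # Feature-level fusion: features from the two networks are
        # concatenated and passed to a single joint classifier head.
        return classifier(torch.cat([feat_video, feat_inertial], dim=-1))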
This paper presents a semi-supervised faster region-based convolutional neural network (SF-RCNN) approach to detect persons and classify the loads they carry in video captured from several miles away by high-power lens video cameras. For detection, a set of computationally efficient image processing steps identifies moving areas that may contain a person. These areas are then passed to a faster RCNN classifier whose convolutional layers are initialized from a pre-trained ResNet50 via transfer learning. Frame labels for training the faster RCNN classifier are obtained in a semi-supervised manner. For load classification, another convolutional neural network classifier, whose convolutional layers are initialized from a pre-trained GoogleNet via transfer learning, distinguishes a person carrying a bundle from a person carrying a long arm. Despite the challenges of the video dataset examined, namely the low resolution of persons, the presence of heat haze, and camera shake, the developed approach is shown to outperform the standard faster RCNN approach.
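A minimal sketch of the motion-based candidate-region stage is given below, using frame differencing in OpenCV; the threshold, dilation, and minimum-area values are illustrative assumptions rather than the paper's tuned parameters. The returned regions would then be passed to the faster RCNN classifier.

    import cv2

    def moving_region_proposals(prev_gray, gray, min_area=50):
        # Lightweight motion cue: difference consecutive grayscale
        # frames, threshold, and dilate; contours above a minimum area
        # become candidate regions that may contain a person.
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]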