In this paper, we propose an efficient beam tracking method for mobility scenarios in mmWave-band communications. When the position of a mobile device changes in a mobility scenario, the base station must perform beam training frequently to track the time-varying channel, spending significant resources on training beams. To reduce this training overhead, we propose a new beam training approach, called "beam tracking," which exploits the continuous nature of the time-varying angle of departure (AoD) for beam selection. We show that transmitting only two training beams is enough to track the time-varying AoD with good accuracy. We derive the optimal beam pair that minimizes the Cramér-Rao lower bound (CRLB) for AoD estimation, averaged over the statistical distribution of the AoD. Our numerical results demonstrate that the proposed beam tracking scheme yields better AoD estimates than the conventional beam training protocol with less training overhead.
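The averaged-CRLB beam-pair selection described above can be illustrated with a toy NumPy sketch. This is not the paper's implementation: it assumes a half-wavelength uniform linear array, unit-power training symbols, and a Gaussian prior on the AoD, and all names and parameter values are illustrative.

```python
import numpy as np

def steering(theta, n):
    # ULA steering vector, half-wavelength antenna spacing (assumed model)
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

def crlb(theta, beams, n, snr=10.0):
    # CRLB for the AoD theta given a set of training beamformers:
    # Fisher information sums the energy each beam captures from
    # the derivative of the steering vector w.r.t. theta.
    da = 1j * np.pi * np.arange(n) * np.cos(theta) * steering(theta, n)
    fim = sum(2.0 * snr * abs(np.vdot(w, da)) ** 2 for w in beams)
    return 1.0 / fim

n = 16                                        # number of antennas (illustrative)
grid = np.linspace(-np.pi / 3, np.pi / 3, 16) # candidate beam directions
cands = [steering(g, n) / np.sqrt(n) for g in grid]

# Samples from an assumed Gaussian AoD prior centered on the previous estimate
prior = np.random.default_rng(0).normal(0.0, 0.05, 200)

# Exhaustive search: pick the pair of beams minimizing the prior-averaged CRLB
best = min(
    ((i, j) for i in range(len(grid)) for j in range(i + 1, len(grid))),
    key=lambda p: np.mean([crlb(t, [cands[p[0]], cands[p[1]]], n) for t in prior]),
)
```

With only 16 candidate directions the exhaustive pair search is cheap; the point of the sketch is that averaging the CRLB over the AoD prior turns beam-pair selection into a simple scalar minimization.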
Convolutional neural networks (CNNs) have driven significant progress in object detection. To detect objects of various sizes, object detectors often exploit a hierarchy of multi-scale feature maps called a feature pyramid, which is readily obtained from the CNN architecture. However, the performance of these detectors is limited because the bottom-level feature maps, which pass through fewer convolutional layers, lack the semantic information needed to capture the characteristics of small objects. To address this problem, various methods have been proposed to deepen the bottom-level features used for object detection. While most approaches generate additional features through a top-down pathway with lateral connections, our approach directly fuses the multi-scale feature maps using a bidirectional long short-term memory (biLSTM) network in an effort to generate deeply fused semantics. The resulting semantic information is then redistributed to the individual pyramidal features at each scale through a channel-wise attention model. We integrate our semantic combining and attentive redistribution feature network (ScarfNet) with the baseline object detectors, i.e., Faster R-CNN, the single-shot multibox detector (SSD), and RetinaNet. Our experiments show that our method outperforms both the existing feature pyramid methods and the baseline detectors, achieving state-of-the-art performance on the PASCAL VOC and COCO detection benchmarks.
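The fuse-then-redistribute idea can be sketched in a few lines of NumPy. This toy follows only the data flow, not ScarfNet itself: each pyramid level is pooled to a channel descriptor, the descriptors are fused into one semantic vector (a simple mean stands in for the paper's biLSTM), and a sigmoid channel-wise attention gate rescales every level. All shapes, names, and the attention weight matrix are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_and_redistribute(pyramid, w_att):
    """Toy sketch: fuse per-level channel descriptors into one semantic
    vector, then gate every pyramid level channel-wise.
    (The paper fuses with a biLSTM; a mean is used here as a stand-in.)"""
    pooled = [f.mean(axis=(1, 2)) for f in pyramid]  # global-average-pool, each (C,)
    fused = np.mean(pooled, axis=0)                  # (C,) fused semantic vector
    gate = sigmoid(w_att @ fused)                    # (C,) channel-wise attention
    # Redistribute: rescale the channels of every level by the same gate
    return [f * gate[:, None, None] for f in pyramid]

rng = np.random.default_rng(0)
C = 8                                                # channels (illustrative)
pyramid = [rng.standard_normal((C, s, s)) for s in (32, 16, 8)]  # 3 scales
w_att = rng.standard_normal((C, C))                  # hypothetical attention weights
refined = fuse_and_redistribute(pyramid, w_att)
```

Each refined map keeps its original spatial resolution, so the output can drop into a detection head exactly where the original pyramid level was used.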