Human immunodeficiency virus type 1 (HIV-1) infection remains a severe public health problem worldwide. In this study, we investigated the distribution of HIV-1 subtypes and the prevalence of drug resistance mutations (DRMs) among patients with HIV-1 infection in Henan Province, China. HIV-1 strains in blood samples taken from inpatients and outpatients visiting the Sixth People's Hospital of Zhengzhou from August 2017 to July 2019 with a viral load (VL) greater than 1000 copies/mL were subjected to subtype and DRM analysis. Out of a total of 769 samples, subtype and DRM data were obtained from 657 (85.43%). Phylogenetic analysis based on partial pol gene sequences indicated that the most common genotype was subtype B (45.51%, 299/657), followed by CRF01_AE (28.61%, 188/657), CRF07_BC (15.68%, 103/657), CRF08_BC (0.76%, 5/657), C (0.61%, 4/657), A (0.30%, 2/657), and others (8.52%, 56/657). Circulating recombinant forms (CRFs) were most commonly found in patients who were naïve to antiretroviral treatment (ART) (68.67%, 160/233). The percentage of patients with one or more major drug-resistance mutations was 50.99% (335/657) overall and 6.44% (15/233) among ART-naïve patients, occurring primarily in those infected with subtype B (17.74%). Resistance mutations were most common at codons 65, 103, 106, 184, and 190 of the reverse transcriptase gene and codon 46 of the protease gene. Our study provides detailed information about the distribution of HIV-1 subtypes and the prevalence of drug resistance mutations of different subtypes in ART-experienced and ART-naïve patients. This can guide policymakers in making decisions about treatment strategies against HIV-1.
3D object detection from a single image without LiDAR is a challenging task due to the lack of accurate depth information. Conventional 2D convolutions are unsuitable for this task because they fail to capture the local object structure and its scale information, which are vital for 3D object detection. To better represent 3D structure, prior arts typically transform depth maps estimated from 2D images into a pseudo-LiDAR representation and then apply existing 3D point-cloud-based object detectors. However, their results depend heavily on the accuracy of the estimated depth maps, resulting in suboptimal performance. In this work, instead of using a pseudo-LiDAR representation, we improve the fundamental 2D convolution by proposing a new local convolutional network (LCN), termed Depth-guided Dynamic-Depthwise-Dilated LCN (D4LCN), where the filters and their receptive fields are automatically learned from image-based depth maps, so that different pixels of different images have different filters. D4LCN overcomes the limitation of conventional 2D convolutions and narrows the gap between image representation and 3D point cloud representation. Extensive experiments show that D4LCN outperforms existing works by large margins. For example, the relative improvement of D4LCN over the state-of-the-art on KITTI is 9.1% in the moderate setting. D4LCN ranks 1st on the KITTI monocular 3D object detection benchmark at the time of submission (car, December 2019). The code is available at https://github.com/dingmyu/D4LCN.
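The core idea above, filters whose receptive field varies per pixel according to an estimated depth map, can be illustrated with a toy NumPy sketch. This is only an illustration under stated assumptions, not the authors' D4LCN implementation: the function name `depth_guided_conv`, the three-bin depth quantization, and the rule "nearer pixels get larger dilation" are all hypothetical choices made here for clarity.

```python
import numpy as np

def depth_guided_conv(feat, depth, kernels, dilations=(1, 2, 3)):
    """Toy depth-guided local convolution (illustration only).

    feat:    (H, W) single-channel feature map
    depth:   (H, W) estimated depth map, used to pick a per-pixel dilation
    kernels: dict mapping each dilation rate to a (3, 3) filter
    """
    H, W = feat.shape
    out = np.zeros_like(feat, dtype=float)
    # Assumption: quantize depth into three bins; nearer pixels (objects
    # that appear larger in the image) receive a larger dilation rate.
    bins = np.digitize(depth, np.quantile(depth, [1 / 3, 2 / 3]))
    pad = max(dilations)
    fpad = np.pad(feat, pad)  # zero padding so every pixel has neighbors
    for y in range(H):
        for x in range(W):
            d = dilations[len(dilations) - 1 - bins[y, x]]
            k = kernels[d]
            acc = 0.0
            # 3x3 filter sampled with per-pixel dilation d
            for i in (-1, 0, 1):
                for j in (-1, 0, 1):
                    acc += k[i + 1, j + 1] * fpad[y + pad + i * d,
                                                  x + pad + j * d]
            out[y, x] = acc
    return out
```

In a real network the per-dilation filters would be predicted by a depth branch rather than fixed, and the operation would run as a batched depthwise convolution on the GPU; the loops here only make the per-pixel filter selection explicit.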