This study investigated the augmentation of endothelial progenitor cell (EPC) thromboresistance using gene therapy to overexpress thrombomodulin (TM), an endothelial cell membrane glycoprotein with potent anticoagulant properties. Late outgrowth EPCs were isolated from the peripheral blood of patients with documented coronary artery disease and transfected with an adenoviral vector encoding human TM. Transfection conditions were optimized to maximize TM expression, transfection efficiency, and cell viability. TM-overexpressing EPCs showed a fivefold increase in the rate of activated protein C production over both native EPCs and EPCs transfected with an adenoviral control vector expressing β-galactosidase (p<0.05). TM upregulation produced a significant threefold reduction in platelet adhesion compared to native EPCs and a 12-fold reduction compared to collagen I-coated wells. Additionally, the clotting time of TM-transfected EPCs incubated with whole blood was significantly extended, by 19%, over native cells (p<0.05). These data indicate that TM overexpression has the potential to improve the antithrombotic performance of patient-derived EPCs for endothelialization applications.
We present Voxel Transformer (VoTr), a novel and effective voxel-based Transformer backbone for 3D object detection from point clouds. Conventional 3D convolutional backbones in voxel-based 3D detectors cannot efficiently capture large context information, which is crucial for object recognition and localization, owing to their limited receptive fields. In this paper, we address the problem by introducing a Transformer-based architecture that captures long-range relationships between voxels through self-attention. Because non-empty voxels are naturally sparse yet numerous, directly applying a standard Transformer to voxels is non-trivial. To this end, we propose the sparse voxel module and the submanifold voxel module, which operate effectively on empty and non-empty voxel positions. To further enlarge the attention range while keeping the computational overhead comparable to convolutional counterparts, we propose two attention mechanisms for the multi-head attention in these modules, Local Attention and Dilated Attention, and we further propose Fast Voxel Query to accelerate the querying process in multi-head attention. VoTr consists of a series of sparse and submanifold voxel modules and can be applied in most voxel-based detectors. Our proposed VoTr shows consistent improvement over convolutional baselines while maintaining computational efficiency on the KITTI dataset and the Waymo Open Dataset.
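To make the Local Attention idea concrete, the sketch below shows a minimal, CPU-side version of self-attention restricted to a local window over sparse voxels. It is a hypothetical illustration rather than the authors' implementation: the function name local_voxel_attention is made up, the Python dict lookup is a toy stand-in for the paper's GPU hash-based Fast Voxel Query, and query/key/value projections, multiple heads, and Dilated Attention are omitted for brevity.

```python
# Illustrative sketch only: toy local self-attention over sparse voxels.
# Assumptions (not from the paper): dict-based lookup in place of Fast
# Voxel Query, single head, no learned projections.
import torch
import torch.nn.functional as F

def local_voxel_attention(coords, feats, window=1):
    """coords: (N, 3) integer indices of non-empty voxels.
    feats:  (N, C) voxel features.
    window: attend to non-empty voxels in a (2*window+1)^3 neighborhood."""
    n, c = feats.shape
    # Toy stand-in for Fast Voxel Query: coordinate -> row index.
    table = {tuple(xyz.tolist()): i for i, xyz in enumerate(coords)}
    # All offsets in the local window (the Local Attention range).
    offsets = torch.stack(torch.meshgrid(
        *[torch.arange(-window, window + 1)] * 3, indexing="ij"),
        dim=-1).reshape(-1, 3)
    out = torch.empty_like(feats)
    for i in range(n):
        # Gather only non-empty neighbors, preserving sparsity.
        nbrs = [table[key] for off in offsets
                if (key := tuple((coords[i] + off).tolist())) in table]
        k = feats[nbrs]                                    # (M, C) keys/values
        attn = F.softmax(feats[i] @ k.t() / c ** 0.5, -1)  # scaled dot-product
        out[i] = attn @ k                                  # weighted sum
    return out

# Example: 4 non-empty voxels with 8-dim features; the isolated voxel
# at (5, 5, 5) attends only to itself.
coords = torch.tensor([[0, 0, 0], [0, 0, 1], [0, 1, 0], [5, 5, 5]])
feats = torch.randn(4, 8)
print(local_voxel_attention(coords, feats).shape)  # torch.Size([4, 8])
```

Because each query gathers keys only from the occupied voxels inside its window, the cost scales with the number of non-empty neighbors rather than the full voxel grid, which is the property that lets attention replace sparse convolution at comparable overhead.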