Over the last decades, bioinformatics has flourished, with new algorithms and refinements of existing ones proposed continually. Sequence alignment is an integral step in this field, contributing directly to the identification of DNA, RNA, and proteins. Despite undeniable advances in algorithms and computing architectures in recent years, sequence alignment is still far from achieving ideal performance. In this work, we focus on one of the most justifiable targets for acceleration in a state-of-the-art DNA/RNA alignment pipeline: the seed extension step of the BWA-MEM algorithm. We propose a high-speed, low-power FPGA-based IP core designed as a pipeline and implemented on various FPGA technologies. The core operates at more than 200 MHz on almost all FPGA architectures, reaching up to 529 MHz on a Xilinx Virtex-6 device, and provides speed-ups of up to 350× compared with an Intel Core i5 general-purpose processor.
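To make the accelerated step concrete, the sketch below shows a simplified Smith-Waterman-style seed extension in C: starting from the score of an exact-match seed, it extends the alignment over the flanking query and reference sequences and returns the best local score. The scoring values (match +1, mismatch -4, linear gap -6) and the single-row dynamic-programming layout are illustrative assumptions; the actual BWA-MEM extension kernel uses affine gap penalties, banding, and a Z-dropoff cutoff, and the paper's contribution is a pipelined FPGA realization of this kind of recurrence rather than this software loop.

    #include <stddef.h>

    /* Simplified seed extension: extend past the end of an exact-match seed
     * scored h0 and return the best local score and its query end position.
     * Scoring constants are assumptions for illustration only. */
    int extend_seed(const char *query, int qlen, const char *ref, int rlen,
                    int h0, int *best_qend)
    {
        int H[qlen + 1];                     /* one DP row over query positions */
        int best = h0;

        for (int j = 0; j <= qlen; ++j)      /* row 0: gaps in the reference    */
            H[j] = h0 - 6 * j;
        *best_qend = 0;

        for (int i = 1; i <= rlen; ++i) {    /* walk along the reference        */
            int diag = H[0];                 /* H[i-1][0]                       */
            H[0] = h0 - 6 * i;               /* column 0: gaps in the query     */
            for (int j = 1; j <= qlen; ++j) {
                int up = H[j], left = H[j - 1];
                int score = diag + ((query[j - 1] == ref[i - 1]) ? 1 : -4);
                if (up - 6 > score)   score = up - 6;
                if (left - 6 > score) score = left - 6;
                if (score < 0)        score = 0;   /* local-alignment floor     */
                diag = up;
                H[j] = score;
                if (score > best) { best = score; *best_qend = j; }
            }
        }
        return best;
    }

The inner cell update depends only on the previous row and the cell to the left, which is why a hardware pipeline can compute one anti-diagonal of cells per clock cycle.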
The development of machine learning has revolutionized applications such as object detection, image/video recognition, and semantic segmentation. Neural networks, a class of machine learning models, play a crucial role in this progress because of their remarkable improvements over traditional algorithms. However, neural networks are growing deeper and require a significant amount of computation, so they usually perform poorly on edge devices with limited resources. In this paper, we investigate a solution for accelerating the neural network inference phase on FPGA-based platforms. We analyze neural network models, their mathematical operations, and the inference phase on various platforms, and we profile the characteristics that affect inference performance. Based on this analysis, we propose an architecture that exploits parallelism, data reuse, and memory management to accelerate the convolution operation, which is used in most neural networks and accounts for most of their computation. We conduct experiments to validate the FPGA-based convolution core architecture and to compare its performance. Experimental results show that the core is platform-independent and that it outperforms a quad-core ARM processor running at 1.2 GHz and a 6-core Intel CPU with speed-ups of up to 15.69× and 2.78×, respectively.
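For reference, the C loop nest below is one common way to express the 2-D convolution layer that such a core targets. The tensor layouts, names (in/wt/out, OC/IC/OH/OW/K), unit stride, and lack of padding are illustrative assumptions rather than details from the paper; an FPGA design of the kind described would typically unroll the channel loops for parallelism, tile the output loops and buffer weights on-chip for data reuse, and stream feature maps to manage memory bandwidth.

    /* Reference 2-D convolution: out[oc][oh][ow] = sum over ic, kh, kw of
     * in[ic][oh+kh][ow+kw] * wt[oc][ic][kh][kw].  Layout and parameter names
     * are assumptions for illustration. */
    void conv2d(const float *in,   /* [IC][OH+K-1][OW+K-1] */
                const float *wt,   /* [OC][IC][K][K]       */
                float *out,        /* [OC][OH][OW]         */
                int OC, int IC, int OH, int OW, int K)
    {
        int IH = OH + K - 1, IW = OW + K - 1;
        for (int oc = 0; oc < OC; ++oc)
            for (int oh = 0; oh < OH; ++oh)
                for (int ow = 0; ow < OW; ++ow) {
                    float acc = 0.0f;
                    for (int ic = 0; ic < IC; ++ic)
                        for (int kh = 0; kh < K; ++kh)
                            for (int kw = 0; kw < K; ++kw)
                                acc += in[(ic * IH + oh + kh) * IW + ow + kw]
                                     * wt[((oc * IC + ic) * K + kh) * K + kw];
                    out[(oc * OH + oh) * OW + ow] = acc;
                }
    }

The multiply-accumulate in the innermost loops has no cross-iteration dependence apart from the accumulator, which is what lets a hardware implementation evaluate many of these products in parallel each cycle.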