these machines offer computational capabilities on the petaflop scale, making the brain an extraordinarily efficient device. [1] One of the major causes of this disparity in energy usage is what is referred to as the von Neumann bottleneck. [3] In modern computing systems, the dedicated central processing units (CPUs) are physically separated from the main memory. In addition, these CPUs execute operations sequentially, so the relevant information must be shuttled back and forth between the CPU and the memory. [4] This shuttling of bits places an inherent cap on the speed of computation and drastically increases energy usage.

For this reason, researchers are motivated to develop neuromorphic computing systems that can rival or even exceed the cognitive capabilities and energy efficiency of the human brain. Just as biological systems rely on intricate networks that work together to form the nervous system, [5] a similarly multidisciplinary effort will be required for neuromorphic computing to evolve to the point of emulating or even surpassing the human brain, with concerted contributions from materials scientists, device engineers, circuit designers, and computer architects. One particularly exciting facet of this grand endeavor is the synapse used in the neural network. These synapses are capable of both storing information and performing complex operations at the same location, allowing networks to carry out computations in a massively parallel fashion and reducing the energy cost per operation. [6] In this pursuit, artificial neural networks (ANNs) have been developed and successfully applied in various fields, including image and pattern recognition, [7] speech recognition, [8] machine translation, [9] and beating humans at chess and, recently, Go.
[10] Despite these recent strides in neuromorphic computing, the hardware implementation of these ANNs has been hampered by the fact that digital transistors, the basic computing unit of modern computers, do not behave in the same manner as analog synapses, the basic building block of the biological neural network. In this paper, we will review a number of different approaches currently being investigated that aim to improve the performance of synaptic devices toward the hardware acceleration of ANNs. First, we will discuss phase change memory (PCM) based synaptic devices, followed by three types

In today's era of big data, a new computing paradigm beyond today's von Neumann architecture is needed to process these large-scale datasets efficiently. Inspired by the brain, which outperforms even supercomputers at complex tasks while operating at far greater efficiency, the field of neuromorphic computing has recently attracted immense research interest and can have a profound impact on next-generation computing. Unlike modern computers that use digital "0" and "1" for computation, biological neural networks exhibit analog changes in synaptic connections during the decision-making and learning processes....