This paper reports 2-bit/cell ferroelectric FET (FeFET) devices written with 500 ns pulses of maximum amplitude 4.5 V for inference-engine applications. The FeFET devices were fabricated using the GlobalFoundries 28 nm high-k metal-gate (HKMG) process flow on a 300 mm wafer. The devices were characterized, and statistical modeling of the variations in the fabricated devices was carried out based on experimental data. The model was then applied to multi-layer perceptron (MLP) neural network (NN) simulations using the CIMulator software platform. The neural network was trained offline, and the weights were transferred to the synaptic devices for an inference-only operation. Device-to-device (D2D) and cycle-to-cycle (C2C) variations are limited by optimal process conditions and do not impact inference accuracy. However, owing to short-term retention loss, read-to-read (R2R) variation significantly affects the inference operation. This work proposes a synergistic READ-optimization approach to mitigate the impact of short-term retention and device-variation issues. The optimized READ scheme makes the MLP-NN immune to R2R variations, and the network maintains an inference accuracy of 97.01%, against a software baseline of 98%.
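As a rough illustration of the evaluation flow described above, the sketch below maps offline-trained MLP weights onto four (2 bits/cell) conductance levels and injects Gaussian read-to-read variation at READ time; the layer size, noise magnitude (`r2r_sigma`), and conductance window are hypothetical placeholders, not the values reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conductance window and 2-bit (4-level) quantization.
G_MIN, G_MAX = 1e-6, 4e-6            # siemens, assumed range
LEVELS = np.linspace(G_MIN, G_MAX, 4)

def weights_to_conductance(w):
    """Map offline-trained weights to the nearest of four conductance levels."""
    w_norm = (w - w.min()) / (w.max() - w.min() + 1e-12)        # scale to 0..1
    idx = np.round(w_norm * (len(LEVELS) - 1)).astype(int)
    return LEVELS[idx]

def read_with_r2r(g, r2r_sigma=0.05):
    """Each READ returns the stored conductance plus Gaussian R2R variation."""
    return g * (1.0 + r2r_sigma * rng.standard_normal(g.shape))

# Toy offline-trained layer (random stand-in for real MNIST weights).
w_trained = rng.standard_normal((784, 10))
g_stored = weights_to_conductance(w_trained)

x = rng.random((1, 784))                 # one input vector
ideal = x @ g_stored                     # noise-free MAC result
noisy = x @ read_with_r2r(g_stored)      # MAC result with R2R variation
print("relative MAC error:", np.abs(noisy - ideal).mean() / np.abs(ideal).mean())
```

Repeating the noisy READ many times and averaging the resulting accuracy is one simple way to reproduce the kind of R2R sensitivity study the abstract describes.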
This article reports an improvement in the performance of hafnium oxide-based (HfO2) ferroelectric field-effect transistors (FeFETs) achieved by a synergistic approach of interfacial layer (IL) engineering and READ-voltage optimization. FeFET devices with silicon dioxide (SiO2) and silicon oxynitride (SiON) as the IL were fabricated and characterized. Although the FeFETs with SiO2 interfaces demonstrated better low-frequency noise characteristics than the FeFETs with SiON interfaces, the latter demonstrated better WRITE endurance and retention. Finally, neuromorphic simulation was conducted to evaluate the performance of FeFETs with SiO2 and SiON ILs as synaptic devices. We observed that the WRITE endurance in both types of FeFETs was insufficient (<10⁸ cycles) to carry out online neural network training. Therefore, we consider an inference-only operation with offline neural network training. The system-level simulation reveals that the impact of systematic degradation via retention loss is much more significant for inference-only operation than that of low-frequency noise. The neural network with SiON-IL FeFETs in the synaptic core shows 96% accuracy for inference on handwritten digits from the Modified National Institute of Standards and Technology (MNIST) data set in the presence of flicker noise and retention degradation, which is only a 2.5% deviation from the software baseline.
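To make the two non-idealities in this comparison concrete, the sketch below perturbs stored synaptic conductances with a systematic retention-driven drift toward a relaxed level and with zero-mean read noise standing in for flicker (1/f) noise; the drift law, time scale, and noise amplitude are illustrative assumptions, not extracted device parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def retention_drift(g, t_seconds, g_relaxed=2.0e-6, decay_per_decade=0.05):
    """Systematic drift of conductance toward an assumed relaxed level.
    Assumed log-time law: a fixed fraction is lost per decade of elapsed time."""
    decades = np.log10(max(t_seconds, 1.0))
    frac = min(decay_per_decade * decades, 1.0)
    return g + frac * (g_relaxed - g)

def flicker_read_noise(g, sigma_rel=0.02):
    """Zero-mean multiplicative read noise as a simple stand-in for 1/f noise."""
    return g * (1.0 + sigma_rel * rng.standard_normal(g.shape))

g0 = rng.uniform(1e-6, 4e-6, size=(784, 10))   # stored synaptic conductances
x = rng.random((1, 784))                       # one input vector

ideal = x @ g0
degraded = x @ flicker_read_noise(retention_drift(g0, 1e6))
print("MAC shift after drift + noise:",
      np.abs(degraded - ideal).mean() / np.abs(ideal).mean())
```

Because the drift term moves every device in the same direction while the noise term averages out across a column, a toy model like this already shows why the systematic retention component dominates the accuracy loss in inference-only operation.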
Reliability is a central aspect of hafnium oxide-based ferroelectric field-effect transistors (FeFETs), which are promising candidates for embedded non-volatile memories. Besides the device performance, understanding the evolution of the ferroelectric behaviour of hafnium oxide over its lifetime in FeFETs is of major importance for further improvements. Here, we present the impact of the interface layer in FeFETs on the cycling behaviour and retention of ferroelectric silicon-doped hafnium oxide. Thicker interfaces are demonstrated to reduce the presence of antiferroelectric-like wake-up effects and to improve endurance. However, they show a strong destabilisation of one polarisation state in terms of retention. In addition, measurements of the Preisach density revealed additional insight into the wake-up effect of these metal-ferroelectric-insulator-semiconductor (MFIS) capacitors.
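Since the abstract above refers to Preisach density measurements, a brief sketch of the classical Preisach picture may help: polarization is modelled as a weighted sum of elementary hysterons, each switching up at a field α and down at a field β, with the weight (Preisach density) μ(α, β) describing how switching is distributed over the field plane. The Gaussian density and field sweep below are purely illustrative, not the measured distributions.

```python
import numpy as np

# Discretize the Preisach plane (alpha >= beta) with an assumed Gaussian density.
alpha, beta = np.meshgrid(np.linspace(-3, 3, 120),
                          np.linspace(-3, 3, 120), indexing="ij")
valid = alpha >= beta
mu = np.exp(-((alpha - 1.0) ** 2 + (beta + 1.0) ** 2)) * valid   # assumed density
state = -np.ones_like(mu)                                        # all hysterons start "down"

def apply_field(E):
    """Switch each hysteron according to its (alpha, beta) thresholds and
    return the normalized polarization."""
    state[(E >= alpha) & valid] = 1.0
    state[(E <= beta) & valid] = -1.0
    return float(np.sum(mu * state) / np.sum(mu))

# Trace an up-down field sweep: the asymmetry of mu shows up as a shifted loop,
# which is the kind of feature wake-up cycling redistributes.
for E in np.concatenate([np.linspace(-3, 3, 13), np.linspace(3, -3, 13)]):
    print(f"E = {E:+.2f}  P = {apply_field(E):+.3f}")
```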
This article reports a novel ferroelectric field-effect transistor (FeFET)-based crossbar array cascaded with an external resistor. The external resistor is shunted with the column of the FeFET array, acting as a current limiter and reducing the impact of variations in drain current (Id), especially in the low-threshold-voltage (LVT) state. We have designed crossbar arrays of 8 × 8 size and performed multiply-and-accumulate (MAC) operations. Furthermore, we have evaluated the performance of the current-limited FeFET crossbar array in system-level applications through neuromorphic simulation of the resistor-shunted FeFET crossbar array. The crossbar array achieved software-comparable inference accuracy (∼97%) on the Modified National Institute of Standards and Technology (MNIST) data set with a multilayer perceptron (MLP) neural network, whereas crossbar arrays built solely with FeFETs failed to learn, yielding only 9.8% accuracy.
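For intuition about the current-limiting effect, one plausible reading of the scheme above is a column line read through a single resistor to ground. The sketch below compares an ideal crossbar MAC against that configuration, treating each FeFET as a linear conductance (a deliberate simplification) and solving the column node with Kirchhoff's current law; the 8 × 8 size matches the abstract, but the conductance values and resistor choice are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 8                                           # 8 x 8 array, as in the abstract
G = rng.uniform(1e-6, 50e-6, size=(N, N))       # assumed FeFET conductances (S)
V = rng.uniform(0.0, 0.2, size=N)               # row read voltages (V)
R_COL = 10e3                                    # assumed column resistor (ohm)

# Ideal MAC: columns held at virtual ground, so I_j = sum_i G_ij * V_i.
I_ideal = V @ G

# With a resistor from each column node to ground, KCL at the node gives
#   sum_i G_ij * (V_i - Vc_j) = Vc_j / R  =>  Vc_j = (V @ G)_j / (1/R + sum_i G_ij)
G_col = G.sum(axis=0)
Vc = (V @ G) / (1.0 / R_COL + G_col)
I_limited = Vc / R_COL

print("ideal column currents   (uA):", np.round(I_ideal * 1e6, 2))
print("limited column currents (uA):", np.round(I_limited * 1e6, 2))
# High-conductance (LVT-heavy) columns are compressed the most, which is the
# spread-reducing effect the abstract attributes to the external resistor.
```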