The figures of merit of reservoir computing (RC) using spintronics devices called magnetic tunnel junctions (MTJs) are evaluated. RC is a type of recurrent neural network in which the input information is stored in certain parts of the reservoir, and computation is performed by optimizing a linear transform matrix for the output. Whereas all the network weights must be trained in a general recurrent neural network, such optimization is unnecessary for RC; the reservoir only has to possess a non-linear response with a memory effect. In this paper, a macromagnetic simulation of the spin dynamics in MTJs is conducted for reservoir computing. It is found that the MTJ system possesses the memory effect and non-linearity required for RC. With RC using 5-7 MTJs, performance comparable to that of an echo-state network with 20-30 nodes is obtained, even when there are no magnetic and/or electrical interactions between the magnetizations.
I. INTRODUCTION

The magnetization direction of a ferromagnetic metallic film is determined by the magnetic anisotropy energy, which gives rise to non-volatility. This property can be exploited in magnetic random-access memory devices [1]. In magnetic tunnel junction (MTJ) devices, which consist of ferromagnetic and dielectric thin films, the magnetization direction in the ferromagnet can be detected through the change in device resistance originating from the tunneling magnetoresistance (TMR) effect [2-5]. Moreover, the magnetization direction can be electrically controlled by the spin torque [6-9]. MTJ devices are therefore well suited to constructing non-volatile high-density memory. In addition to this long-term memory effect, the precessional magnetization dynamics appear to possess a short-term memory effect with non-linear behavior. Such additional dynamical properties may make MTJ devices suitable for computation.

The recurrent neural network (RNN) [10,11] is a machine learning method: a mathematical model that emulates the nervous system of the human brain. The RNN concept is depicted in Fig. 1(a). The model consists of three layers: input, middle (node), and output. In an RNN, the information of the middle layer propagates recursively within itself, so the middle-layer state is determined by the present input and the past middle-layer state; that is, the middle layer possesses a memory effect. All the weight matrices for the input (Win), middle (W), and output (Wout) must be precisely trained to obtain the desired output. However, when the middle layer has sufficient memory effect and non-linearity, computation can be performed by optimizing only the output matrix (Wout). This type of simple RNN is called reservoir computing (RC) [12-14]. In RC, the system training is simple.
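As a concrete illustration of this training scheme, the following minimal echo-state-network sketch in Python trains only the output matrix Wout, while the input weights Win and the internal weights W remain fixed random matrices. The reservoir size, the tanh non-linearity, the ridge-regression readout, and the one-step-recall target are illustrative assumptions, not the MTJ model simulated in this paper.

```python
import numpy as np

# Minimal echo-state network sketch (illustrative assumptions throughout;
# this is NOT the MTJ reservoir simulated in this paper).
rng = np.random.default_rng(0)
n_in, n_res = 1, 30                    # input and reservoir sizes (assumed)

# Fixed, untrained weights: only the readout W_out is optimized below.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with an input sequence u and collect the states.
    Each state depends on the present input and the past state (memory effect)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))  # non-linear update
        states.append(x.copy())
    return np.array(states)

# Toy short-term-memory task: reproduce the previous input value.
u_train = rng.uniform(-1.0, 1.0, 500)
y_train = np.roll(u_train, 1)

# Train ONLY W_out, by ridge regression on the collected reservoir states.
X = run_reservoir(u_train)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y_train)

y_pred = X @ W_out                     # linear readout of the reservoir states
```

In the physical realization discussed in this paper, the fixed tanh reservoir above would be replaced by the spin dynamics of the MTJs, while training would still optimize only the linear readout.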