This paper presents the Flipped (F)-2T2R RRAM compute cell, which enhances the performance of RRAM-based mixed-signal accelerators for deep neural networks (DNNs) in machine learning (ML) applications. The F-2T2R cell is designed to exploit the features of FD-SOI technology and achieves a large increase in cell output impedance compared to the standard 1T1R cell. The paper also describes the modelling of an F-2T2R-based accelerator and its transistor-level implementation in a 22-nm FD-SOI technology. The modelling results and the accelerator performance are validated by simulation. The proposed design achieves an energy efficiency of up to 1260 1b-TOPS/W with a memory array of 256 rows and columns. According to our analytical framework, a ResNet-18 mapped on the accelerator incurs an accuracy degradation of less than 2% on the CIFAR-10 dataset, with respect to the floating-point baseline.