One of the main challenges in building a quantum processor is characterizing the environmental noise. Noise characterization can be achieved with several techniques, such as randomization, in which sequences of random quantum gates are applied to the qubit under test in order to derive statistical properties of the noise affecting it. A scalable and robust algorithm that benchmarks the full set of Clifford gates through randomization is known as randomized benchmarking. In this study, we simulated randomized benchmarking protocols on a semiconducting all-electrical three-electron double-quantum-dot qubit, i.e. the hybrid qubit, under different error models for the input controls, including quasi-static Gaussian noise and the more realistic 1/f noise. The average error of specific computational gates is extracted through interleaved randomized benchmarking, in which the gate of interest is interleaved within sequences of random Clifford gates. This procedure provides an estimate of the gate fidelity as well as theoretical bounds on the average error of the gate under test.
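To make the protocol concrete, the following is a minimal sketch of a standard single-qubit randomized benchmarking simulation with quasi-static Gaussian noise. It is not the paper's hybrid-qubit model: the qubit is a generic two-level system, the control error is a toy coherent Z over-rotation whose angle is drawn once per sequence from a Gaussian distribution, and the helper names (`close_group`, `rb_survival`) are illustrative. The sketch only shows the generic structure of the protocol: random Clifford sequences, a recovery gate, and an exponential fit $F(m) = A\,p^m + B$ from which the average error per Clifford is extracted.

```python
import numpy as np
from scipy.optimize import curve_fit

# Elementary generators of the single-qubit Clifford group
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def close_group(generators):
    """Build the 24-element single-qubit Clifford group (up to global phase)
    by closing the generator set under matrix multiplication."""
    def key(U):
        # Remove the global phase so equivalent unitaries compare equal
        idx = np.argmax(np.abs(U))
        V = U / (U.flat[idx] / np.abs(U.flat[idx]))
        return np.round(V, 6).tobytes()
    group = {key(np.eye(2)): np.eye(2, dtype=complex)}
    frontier = list(group.values())
    while frontier:
        new = []
        for U in frontier:
            for G in generators:
                W = G @ U
                k = key(W)
                if k not in group:
                    group[k] = W
                    new.append(W)
        frontier = new
    return list(group.values())

cliffords = close_group([H, S])  # 24 single-qubit Cliffords

def noisy(U, eps):
    """Apply U followed by a small erroneous Z rotation of angle eps,
    a toy stand-in for a quasi-static control error."""
    Rz = np.array([[np.exp(-1j * eps / 2), 0], [0, np.exp(1j * eps / 2)]])
    return Rz @ U

def rb_survival(m, sigma, n_seq=200, rng=np.random.default_rng(0)):
    """Average ground-state survival probability after m random Cliffords
    plus the recovery gate; the error angle is frozen within each sequence
    and drawn from N(0, sigma^2) across sequences (quasi-static noise)."""
    probs = []
    for _ in range(n_seq):
        eps = rng.normal(0.0, sigma)
        seq = rng.integers(len(cliffords), size=m)
        psi = np.array([1.0, 0.0], dtype=complex)
        U_ideal = np.eye(2, dtype=complex)
        for i in seq:
            psi = noisy(cliffords[i], eps) @ psi
            U_ideal = cliffords[i] @ U_ideal
        psi = noisy(U_ideal.conj().T, eps) @ psi  # recovery gate
        probs.append(np.abs(psi[0]) ** 2)
    return np.mean(probs)

# Fit the decay F(m) = A * p**m + B and extract the average error per Clifford
lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
fidelities = np.array([rb_survival(m, sigma=0.05) for m in lengths])
model = lambda m, A, p, B: A * p**m + B
(A, p, B), _ = curve_fit(model, lengths, fidelities, p0=[0.5, 0.99, 0.5])
r_clifford = (1 - p) / 2  # single-qubit average error per Clifford
print(f"decay p = {p:.4f}, average error per Clifford r = {r_clifford:.2e}")
```

The interleaved variant runs a second set of sequences with the gate of interest inserted after every random Clifford; the ratio of the two decay constants yields the error of that specific gate, with the theoretical bounds mentioned above accounting for the imperfection of the reference Cliffords.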