Quantile estimation is a fundamental task in big data analysis. To achieve high-speed estimation with low memory consumption, especially in streaming big data processing, data sketches that provide approximate estimates at low overhead are commonly used, and the Karnin-Lang-Liberty (KLL) sketch is one of the most popular options. However, soft errors in the KLL memory may significantly degrade estimation accuracy. In this paper, the influence of soft errors on the KLL sketch is considered for the first time. First, the reliability of KLL against soft errors is studied through theoretical analysis and fault injection experiments. The evaluation results show that errors injected during the KLL construction phase may cause large deviations in the estimated quantiles. Then, two protection schemes are proposed, based on a single parity check (SPC) and on the incremental property (IP) of the KLL memory. Further evaluation shows that the proposed schemes significantly improve the reliability of KLL and even remove the effect of single event upsets (SEUs) on the highest bit positions. In particular, the SPC scheme, which requires additional memory, provides better protection for the middle bit positions than the IP scheme, which introduces no memory overhead.
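To illustrate the single-parity-check idea underlying the SPC scheme, the following is a minimal generic sketch (not the paper's exact design; the 32-bit word width and storage layout are assumptions): one even-parity bit per stored word detects any single bit flip caused by an SEU.

```python
# Illustrative single-parity-check (SPC) protection of a memory word.
# Generic example of the SPC concept, not the paper's exact scheme;
# word width and layout are assumptions.

WORD_BITS = 32  # assumed word width

def parity(word: int) -> int:
    """Even parity over WORD_BITS bits of the word."""
    p = 0
    for i in range(WORD_BITS):
        p ^= (word >> i) & 1
    return p

def store(word: int):
    """Store a word together with its parity bit."""
    return (word, parity(word))

def check(stored) -> bool:
    """Recompute parity; a mismatch flags a single bit flip (SEU)."""
    word, p = stored
    return parity(word) == p

stored = store(0b1011)
flipped = (stored[0] ^ (1 << 7), stored[1])  # simulate an SEU at bit 7
# check(stored) -> True ; check(flipped) -> False
```

A single parity bit detects (but cannot correct or locate) any odd number of bit flips, which is why detection-only schemes like this are typically paired with recomputation or discarding of the affected entry.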