The first large-scale deployment of private federated learning uses differentially private counting in the continual release model as a subroutine (see the Google AI blog post "Federated Learning with Formal Differential Privacy Guarantees", February 28, 2022). For this and several other applications, it is crucial to use a continual counting mechanism with small mean squared error. In this setting, a concrete (i.e., non-asymptotic) bound on the error is highly relevant, as it allows the privacy parameter ε to be reduced as much as possible; hence, it is important to improve the constant factor in the error term. The standard mechanism for continual counting, and the one used in the above deployment, is the binary mechanism. We present a novel mechanism and show that its mean squared error is both asymptotically optimal and a factor 10 smaller than the error of the binary mechanism.
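To make the point of comparison concrete, the following is a minimal Python sketch of the classic binary (tree) mechanism for releasing all prefix sums of a 0/1 stream under ε-differential privacy. It is an illustrative sketch, not the deployed implementation: the Laplace-noise variant is shown here for simplicity, and the function name and parameters are our own.

```python
import numpy as np

def binary_mechanism(stream, epsilon, rng=None):
    """Release all prefix sums of a 0/1 stream with epsilon-DP via the
    binary (tree) mechanism. Illustrative Laplace-noise sketch only."""
    rng = rng or np.random.default_rng()
    T = len(stream)
    levels = T.bit_length()       # each item lies in at most `levels` dyadic intervals
    scale = levels / epsilon      # Laplace scale from basic composition over the levels
    alpha = np.zeros(levels)      # true dyadic interval sums, one per tree level
    alpha_hat = np.zeros(levels)  # their noisy counterparts
    out = np.empty(T)
    for t in range(1, T + 1):
        i = (t & -t).bit_length() - 1  # level whose dyadic interval closes at time t
        # Fold the lower levels and the new item into level i, then reset them.
        alpha[i] = alpha[:i].sum() + stream[t - 1]
        alpha[:i] = 0.0
        alpha_hat[:i] = 0.0
        alpha_hat[i] = alpha[i] + rng.laplace(scale=scale)
        # The prefix-sum estimate at time t adds the noisy interval sums
        # at the levels corresponding to the set bits of t.
        out[t - 1] = sum(alpha_hat[j] for j in range(levels) if (t >> j) & 1)
    return out

# Example: noisy running counts of a random 0/1 stream of length 1000.
counts = binary_mechanism(np.random.default_rng(0).integers(0, 2, size=1000),
                          epsilon=1.0)
```

Each output at time t aggregates at most ⌈log₂(t+1)⌉ noisy terms, each of variance 2(levels/ε)², so the per-step mean squared error of this variant grows polylogarithmically in T; the constant in front of such terms is exactly what the discussion above targets.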