Growing applications of deep learning to sensitive genomic and biomedical data raise challenging privacy and security problems. Homomorphic encryption (HE) is a cryptographic technique well suited to secure machine learning evaluation: it computes directly over encrypted data, allowing data owners and model owners to outsource the processing of sensitive data to an untrusted server without leaking any information about the data. However, most current HE schemes support only a limited set of arithmetic operations, which significantly hinders their use in secure deep learning. Motivated by the performance loss introduced by approximating activation functions, in this paper we develop a novel HE-friendly deep network, named the Residue Activation Network (ResActNet), which implements precise privacy-preserving machine learning with a non-approximated activation under an HE scheme. We adopt a residue activation strategy built on a scaled power activation function. In particular, the scaled power activation (SPA) function is expressible within the HE scheme and can therefore be deployed directly in HE computation. Moreover, we propose a residue activation strategy that constrains the latent space during training to alleviate the optimization difficulty. We comprehensively evaluate ResActNet on diverse genomics datasets and widely used image datasets. Our results demonstrate that ResActNet outperforms alternative HE-based secure machine learning solutions and achieves low approximation errors on classification and regression tasks.
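To illustrate why a power-based activation is HE-friendly, the following is a minimal plaintext sketch. The abstract names a scaled power activation (SPA) but does not give its exact form, so the definition `spa(x) = (alpha * x) ** p` and the parameter names `alpha` and `p` are assumptions for illustration only; the key point is that a low-degree polynomial uses only additions and multiplications, which leveled HE schemes (e.g. CKKS) support natively, unlike ReLU or sigmoid, which must be approximated.

```python
import numpy as np

def spa(x, alpha=0.5, p=2):
    """Hypothetical scaled power activation: a low-degree polynomial,
    hence computable directly under HE without approximation.
    alpha (scale) and p (power) are illustrative parameters, not
    the paper's actual settings."""
    return (alpha * x) ** p

# Plaintext behavior: with alpha=0.5 and p=2, SPA acts as a scaled square.
x = np.array([-2.0, 0.0, 2.0])
print(spa(x))  # [1. 0. 1.]
```

Under an HE scheme, the same expression would be evaluated on ciphertexts using the scheme's homomorphic multiply and scalar-multiply operations, consuming one multiplicative level per squaring.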