Collaborative filtering recommenders provide effective personalization services at the cost of sacrificing the privacy of their end users. Due to growing societal concern and stricter privacy regulations, designing privacy-preserving and yet robust recommenders that serve privacy-aware users is an urgent research challenge. Our analysis shows that existing solutions fall short in several aspects, including paying insufficient attention to what is precisely output to end users and ignoring the related robustness issues. In this paper, we provide a general system structure for latent factor based collaborative filtering recommenders by formulating them into a model training stage and a prediction computing stage, and we also describe a new security model. Aiming at pragmatic solutions, we first show how to construct a privacy-preserving and yet robust model training stage on top of existing solutions. We then propose two cryptographic protocols to realize a privacy-preserving prediction computing stage, depending on whether or not an extra proxy is involved. Departing from standard Top-k recommendations, we instead let the end user retrieve the unrated items whose predictions exceed a threshold, as a result of our privacy-by-design strategy. Experimental results show that our new protocols are quite efficient.
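To make the prediction computing stage and the threshold-based retrieval concrete, the following is a minimal sketch in standard latent factor notation; the symbols $\mathbf{p}_u$, $\mathbf{q}_i$, and $\tau$ are illustrative assumptions and are not fixed by the abstract itself:

\[
\hat{r}_{u,i} = \mathbf{p}_u^{\top}\mathbf{q}_i,
\qquad
\mathcal{R}_u(\tau) = \{\, i \mid i \text{ unrated by } u,\ \hat{r}_{u,i} \ge \tau \,\},
\]

where $\mathbf{p}_u$ and $\mathbf{q}_i$ denote the user and item latent factor vectors produced by the model training stage, $\hat{r}_{u,i}$ is the prediction computed in the prediction computing stage, and $\mathcal{R}_u(\tau)$ is the set of unrated items returned to user $u$ in place of a Top-k list.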