Protecting a machine learning model and its inference inputs with secure computation is important when offering services built on a valuable model. In this paper, we study how quantizing a model's parameters helps protect both the model and its inference inputs. To this end, we present an exploratory protocol called MOTUS, based on ternary neural networks, i.e., networks whose parameters are ternarized. Through extensive experiments with MOTUS, we obtained three key insights. First, ternary neural networks avoid the accuracy degradation that secure computation with modulo operations otherwise causes. Second, increasing the number of candidate values for model parameters improves accuracy more than batch normalization, an existing technique for accuracy improvement. Third, protecting both the model and the inference inputs reduces inference throughput by a factor of four to seven, at the same accuracy level, compared with existing protocols that protect only inference inputs. We have released our source code on GitHub.