AI technology, with its powerful data-search and computation capabilities, has been widely applied to assist human decision-makers across industries such as healthcare, business management, and public policy. As a crucial factor influencing the performance of human-AI interaction, trust has received growing research attention in recent years. Previous studies have identified multiple factors that significantly affect trust between human decision-makers and AI assistants; yet little attention has been paid to building a systematic model of trust in the human-AI collaboration context. To address this gap, this paper reviews recent research, analyzes and synthesizes the significant factors of trust in AI-assisted decision-making, and establishes a theoretical ternary interaction model spanning three major aspects: human decision-maker-related, AI-related, and scenario-related. Together, the factors from these three aspects constitute the major elements of trust and can be used to evaluate trust in the assisted decision-making process. This systematic trust model fills a theoretical gap in current studies of trust in human-AI interaction and offers implications for further research on AI trust-related topics.