A trustworthy reinforcement learning algorithm should be competent in solving challenging real-world problems: it should robustly handle uncertainties, satisfy safety constraints to avoid catastrophic failures, and generalize to unseen scenarios during deployment. This study provides an overview of these main perspectives of trustworthy reinforcement learning, considering its intrinsic vulnerabilities in robustness, safety, and generalizability. For each perspective, we give rigorous formulations, categorize the corresponding methodologies, and discuss benchmarks. We also provide an outlook section to highlight promising future directions, together with a brief discussion of extrinsic vulnerabilities arising from human feedback. We hope this survey brings separate threads of study together in a unified framework and promotes the trustworthiness of reinforcement learning.

CCS Concepts: • Computing methodologies → Reinforcement learning; Markov decision processes; • Security and privacy → Social aspects of security and privacy; • Computer systems organization → Robotics; • Hardware → Safety critical systems.