The percentage of failures in late-stage pharmaceutical development due to toxicity has increased dramatically over the last decade or so, resulting in growing demand for new methods to rapidly and reliably predict the toxicity of compounds. In this review we discuss the challenges involved both in building in silico models for toxicology endpoints and in their practical use in decision making. In particular, we reflect upon the predictive strength of a number of different in silico models for a range of endpoints, the different approaches used to generate the models or rules, and the limitations of the methods and of the data used in model generation. Given that there is no unique definition of a 'good' model, we furthermore highlight the need to balance model complexity/interpretability with predictability, particularly in light of OECD/REACH guidelines. Special emphasis is placed on the data and methods used to generate the in silico toxicology models, and their strengths and weaknesses are discussed. Turning to the applied side, we next review a number of toxicity endpoints, discussing the methods available to predict them and their general level of predictability (which depends very much on the endpoint considered). We conclude that, while in silico toxicology is a valuable tool for drug discovery scientists, much still needs to be done: first, to understand more completely the biological mechanisms of toxicity and, second, to generate more rapid in vitro models to screen compounds. With this biological understanding, and with additional data available, our ability to generate more predictive in silico models should improve significantly in the future.
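As a minimal sketch of the kind of in silico model building surveyed in this review (not a method described by the authors), the example below fits a simple, interpretable classifier to a hypothetical binary toxicity endpoint using a handful of molecular descriptors. The choice of RDKit and scikit-learn, the SMILES strings, and the toxicity labels are all assumptions made purely for illustration; a toy model of this size says nothing about real predictive strength, but the inspectable descriptor weights hint at the complexity/interpretability trade-off discussed above.

```python
# Illustrative sketch only: a hypothetical, minimal in silico toxicity model.
# SMILES strings and 0/1 "toxic" labels below are placeholders, not data
# taken from the review.
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.linear_model import LogisticRegression

# Hypothetical training set of (SMILES, toxic? 1/0) pairs.
train = [
    ("CCO", 0),
    ("c1ccccc1N", 1),
    ("CC(=O)Oc1ccccc1C(=O)O", 0),
    ("Nc1ccc(N)cc1", 1),
]

def descriptors(smiles):
    """Compute a small, interpretable descriptor vector for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]

X = [descriptors(smiles) for smiles, _ in train]
y = [label for _, label in train]

# A linear model keeps the per-descriptor weights directly inspectable,
# favouring interpretability over raw predictive power.
model = LogisticRegression().fit(X, y)
print(dict(zip(["MolWt", "MolLogP", "TPSA"], model.coef_[0])))
```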