Systems biology has emerged as a research field within the last few decades. Often defined as the antithesis of the reductionist approach, systems biology integrates information about the individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict the effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables the identification of important pathways and molecules from large data sets, a task that can be extremely laborious when performed as a classical literature search. However, computational systems biology offers more than a high-throughput literature search; it can form the basis for hypotheses on potential links between environmental chemicals and human diseases that would be very difficult to establish experimentally. This is possible because comprehensive databases exist that contain networks of human protein-protein interactions and protein-disease associations. Experimentally determined targets of a chemical of interest can be fed into these networks to obtain additional information from which hypotheses on links between the chemical and human diseases can be derived. Such information can also be used to design more intelligent animal and cell experiments that test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method in the hypothesis-generating phase of toxicological research.
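To make the network-based, hypothesis-generating step described above concrete, the sketch below walks through a toy version of it in Python using the networkx library. All protein, target, and disease identifiers are invented for illustration, and the two-interaction-step proximity rule is a deliberately crude assumption; a real analysis would draw interactions and disease associations from curated databases and apply more rigorous network statistics.

import networkx as nx

# Hypothetical protein-protein interaction edges (identifiers invented
# for illustration; real analyses would use curated interaction databases).
ppi_edges = [
    ("TARGET_A", "PROT_1"), ("TARGET_A", "PROT_2"),
    ("TARGET_B", "PROT_2"), ("PROT_1", "PROT_3"),
    ("PROT_2", "PROT_4"),
]

# Hypothetical protein-disease associations.
disease_links = {"PROT_3": ["disease X"], "PROT_4": ["disease Y"]}

# Experimentally determined targets of the chemical of interest.
chemical_targets = ["TARGET_A", "TARGET_B"]

G = nx.Graph(ppi_edges)

# Collect diseases associated with proteins within two interaction steps
# of any target; network proximity serves here as a crude proxy for a
# potential chemical-disease link.
candidate_diseases = set()
for target in chemical_targets:
    for protein in nx.single_source_shortest_path_length(G, target, cutoff=2):
        candidate_diseases.update(disease_links.get(protein, []))

print(sorted(candidate_diseases))  # candidate links for follow-up testing

Each disease printed by this sketch is merely a hypothesis to be prioritised and then tested in targeted cell or animal experiments, as discussed in the following sections.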
What is Computational Systems Biology?

Systems biology is often defined as the antithesis of the reductionist approach. Although the reductionist approach has identified many important individual components of biological systems, it often fails to capture the interconnections between those components. Systems biology, on the other hand, integrates information from a wealth of individual components, creating a holistic view of the biological system [1]. Information on each component comes from data sets of varying size. For instance, analysis of microarray experiments gives quantitative estimates of genome-wide changes in gene expression. High-throughput screening and high-content screening provide biological activity data on large numbers of chemicals (e.g. the U.S. Environmental Protection Agency's ToxCast programme [2]). Additionally, low- to medium-throughput assays provide high-quality measurements of the biological function of one or a few targets after chemical exposure. Combining such complementary data types may add value in creating realistic models of the potential toxic or adverse effects of chemicals [3,4]. A key strategy for handling multiple data sources is data integration, whose use has been demonstrated in several applications, as reviewed by Mitra et al. [5], for example in the identification of new biomarkers for Alzheimer's disease [6].

Computational toxicology integrates molecular...