Constraint-Based Reconstruction and Analysis (COBRA) provides a molecular mechanistic framework for integrative analysis of experimental data and quantitative prediction of physicochemically and biochemically feasible phenotypic states. The COBRA Toolbox is a comprehensive software suite of interoperable COBRA methods. It has found widespread applications in biology, biomedicine, and biotechnology because its functions can be flexibly combined to implement tailored COBRA protocols for any biochemical network. Version 3.0 includes new methods for quality-controlled reconstruction, modelling, topological analysis, strain and experimental design, network visualisation as well as network integration of chemoinformatic, metabolomic, transcriptomic, proteomic, and thermochemical data. New multi-lingual code integration also enables an expansion in COBRA application scope via high-precision, high-performance, and nonlinear numerical optimisation solvers for multi-scale, multi-cellular and reaction kinetic modelling, respectively. This protocol can be adapted for the generation and analysis of a constraint-based model in a wide variety of molecular systems biology scenarios. This protocol is an update to the protocols for the COBRA Toolbox 1.0 and 2.0. The COBRA Toolbox 3.0 provides an unparalleled depth of constraint-based reconstruction and analysis methods.

Protocol excerpt (steps 61-65, optForce MUST sets):

61. The MUST sets are the sets of reactions that must increase or decrease their flux in order to achieve the desired phenotype in the mutant strain. As shown in Figure 6, the first-order MUST sets are MustU and MustL, while the second-order MUST sets are denoted MustUU, MustLL, and MustUL. After parameters and constraints are defined, the functions findMustL and findMustU are run to determine the MustL and MustU sets, respectively. Define an ID for the run, e.g.: >> runID = 'TestoptForceL'; Each time the MUST sets are determined, folders are generated to read inputs and store outputs, i.e., reports. These folders are located in the directory defined by the uniquely defined runID.

62. In order to find the first-order MUST sets, the constraints should be defined: >> constrOpt = struct('rxnList', {{'EX_gluc', 'R75', 'EX_suc'}}, 'values', [-100; 0; 155.5]);

63. The first-order MUST set MustL is determined by running: >> [mustLSet, pos_mustL] = findMustL(model, minFluxesW, maxFluxesW, 'constrOpt', constrOpt, 'runID', runID); If runID is set to 'TestoptForceL', a folder TestoptForceL is created, in which two additional folders, InputsMustL and OutputsMustL, are created. The InputsMustL folder contains all the inputs required to run the function findMustL, while the OutputsMustL folder contains the MustL set found and a report that summarises all the inputs and outputs. To maintain a chronological order of computational experiments, the report is timestamped.

64. Display the reactions that belong to the MustL set using: >> disp(mustLSet)

65. The first-order MUST set MustU is determined by running: >> [mustUSet, pos_mustU] = findMustU(model, minFluxesW, maxFluxesW, 'constrOpt', constrOpt, 'runID', runID); ...
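The excerpt breaks off before the second-order sets. As a hedged sketch only, assuming the second-order functions findMustUU, findMustLL, and findMustUL accept the same parameter-value arguments as their first-order counterparts, plus (presumably) an exclusion list of reactions already found in the first-order sets, the computation would continue roughly along these lines:

% Sketch: second-order MUST sets. Function signatures are assumed to
% mirror findMustL/findMustU; verify against the COBRA Toolbox optForce
% tutorial before use.
excludedRxns = unique([mustLSet; mustUSet]);   % skip first-order hits
[mustUU, pos_mustUU] = findMustUU(model, minFluxesW, maxFluxesW, ...
    'constrOpt', constrOpt, 'excludedRxns', excludedRxns, 'runID', runID);
[mustLL, pos_mustLL] = findMustLL(model, minFluxesW, maxFluxesW, ...
    'constrOpt', constrOpt, 'excludedRxns', excludedRxns, 'runID', runID);
[mustUL, pos_mustUL] = findMustUL(model, minFluxesW, maxFluxesW, ...
    'constrOpt', constrOpt, 'excludedRxns', excludedRxns, 'runID', runID);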
Systems biology has experienced dramatic growth in the number, size, and complexity of computational models. To reproduce simulation results and reuse models, researchers must exchange unambiguous model descriptions. We review the latest edition of the Systems Biology Markup Language (SBML), a format designed for this purpose. A community of modelers and software authors developed SBML Level 3 over the past decade. Its modular form consists of a core suited to representing reaction-based models and packages that extend the core with features suited to other model types including constraint-based models, reaction-diffusion models, logical network models, and rule-based models. The format leverages two decades of SBML and a rich software ecosystem that transformed how systems biologists build and interact with models. More recently, the rise of multiscale models of whole cells and organs, and new data sources such as single-cell measurements and live imaging, has precipitated new ways of integrating data with models. We provide our perspectives on the challenges presented by these developments and how SBML Level 3 provides the foundation needed to support this evolution.
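In practice, SBML files are consumed through software libraries rather than read by hand. As a minimal sketch, assuming a working COBRA Toolbox installation and a placeholder SBML Level 3 (core + fbc) file named 'ecoli_core.xml', loading and optimising a constraint-based model in MATLAB looks roughly like this:

% Sketch: loading an SBML-encoded constraint-based model with the COBRA
% Toolbox. 'ecoli_core.xml' is a placeholder file name.
initCobraToolbox(false);                  % initialise without updating
model = readCbModel('ecoli_core.xml');    % parse SBML into a COBRA model struct
solution = optimizeCbModel(model, 'max'); % FBA: maximise the model objective
fprintf('Objective value: %f\n', solution.f);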
The mutualistic association between leguminous plants and endosymbiotic rhizobial bacteria is a paradigmatic example of a symbiosis driven by metabolic exchanges. Here, we report the reconstruction and modelling of a genome-scale metabolic network of Medicago truncatula (plant) nodulated by Sinorhizobium meliloti (bacterium). The reconstructed nodule tissue contains five spatially distinct developmental zones and encompasses the metabolism of both the plant and the bacterium. Flux balance analysis (FBA) suggests that the metabolic costs associated with symbiotic nitrogen fixation are primarily related to supporting nitrogenase activity, and increasing N2-fixation efficiency is associated with diminishing returns in terms of plant growth. Our analyses support that differentiating bacteroids have access to sugars as major carbon sources, ammonium is the main nitrogen export product of N2-fixing bacteria, and N2 fixation depends on proton transfer from the plant cytoplasm to the bacteria through acidification of the peribacteroid space. We expect that our model, called 'Virtual Nodule Environment' (ViNE), will contribute to a better understanding of the functioning of legume nodules, and may guide experimental studies and engineering of symbiotic nitrogen fixation.
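For readers unfamiliar with the method, FBA computes a flux distribution by solving a linear programme over the stoichiometric matrix. In standard notation, with S the stoichiometric matrix, v the flux vector, c the objective weights (here, e.g., nodule biomass production), and v_min, v_max the flux bounds:

\[
\max_{v} \; c^{\top} v
\quad \text{subject to} \quad
S v = 0, \qquad v_{\min} \le v \le v_{\max}
\]

The steady-state constraint Sv = 0 enforces mass balance, while the bounds encode thermodynamic directionality and measured exchange rates.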
Genome-scale metabolic network models can be used for various analyses, including the prediction of metabolic responses to changes in the environment. Legumes are well known for their rhizobial symbiosis that introduces nitrogen into the global nutrient cycle. Here, we describe a fully compartmentalised, mass- and charge-balanced, genome-scale model of the clover Medicago truncatula, which has been adopted as a model organism for legumes. We employed flux balance analysis to demonstrate that the network is capable of producing biomass components in experimentally observed proportions, during day and night. By connecting the plant model to a model of its rhizobial symbiont, Sinorhizobium meliloti, we were able to investigate the effects of the symbiosis on metabolic fluxes and plant growth and could demonstrate how oxygen availability influences metabolic exchanges between plant and symbiont, thus elucidating potential benefits of inter-organism amino acid cycling. We thus provide a modelling framework in which the interlinked metabolism of plants and nodules can be studied from a theoretical perspective.
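As an illustrative sketch only (the reaction identifier 'EX_photon(e)' is hypothetical, not taken from the paper), day and night conditions of this kind are typically simulated in the COBRA Toolbox by switching the bounds of a light-uptake exchange reaction and re-running FBA:

% Sketch: day vs. night FBA; 'EX_photon(e)' is a hypothetical exchange ID.
dayModel   = changeRxnBounds(model, 'EX_photon(e)', -100, 'l'); % light available
nightModel = changeRxnBounds(model, 'EX_photon(e)',    0, 'l'); % no light uptake
solDay   = optimizeCbModel(dayModel,   'max');
solNight = optimizeCbModel(nightModel, 'max');
fprintf('Biomass flux: day %f, night %f\n', solDay.f, solNight.f);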
Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate the collected information. Context-specific reconstruction based on generic genome-scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last 10 years, only a fraction of them are suitable for model building based on human high-throughput data. Besides other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. This review describes and analyses common validation methods used for testing model-building algorithms. Two major methods can be distinguished: consistency testing and comparison-based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific binding of probes in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, or comparing with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms.
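To make the thresholding problem concrete, the following plain-MATLAB sketch (variable names are illustrative, not any specific algorithm's API) shows how many of the reviewed algorithms reduce expression data to a binary 'core' reaction set via an arbitrary cut-off:

% Sketch: naive expression thresholding into a 'core' reaction set.
% expressionRxns holds one expression value per model reaction (NaN where
% no gene maps); the upper-quartile cut-off below is exactly the kind of
% arbitrary threshold the review criticises.
vals      = sort(expressionRxns(~isnan(expressionRxns)));
threshold = vals(ceil(0.75 * numel(vals)));          % 75th-percentile cut-off
coreRxns  = model.rxns(expressionRxns >= threshold); % reactions treated as active
fprintf('%d of %d reactions in the core set\n', numel(coreRxns), numel(model.rxns));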