2019
DOI: 10.1530/jme-18-0055
Integrated omics: tools, advances and future approaches

Abstract: With the rapid adoption of high-throughput omic approaches to analyze biological samples such as genomics, transcriptomics, proteomics, and metabolomics, each analysis can generate tera- to peta-byte sized data files on a daily basis. These data file sizes, together with differences in nomenclature among these data types, make the integration of these multi-dimensional omics data into biologically meaningful context challenging. Variously named as integrated omics, multi-omics, poly-omics, trans-omics, pan-omi…

Cited by 392 publications (267 citation statements)
References 162 publications
“…In the early-integration approach, also known as juxtaposition-based, the multi-omics datasets are first concatenated into one matrix. To deal with the high-dimensionality of the joint dataset, these methods generally adopt matrix factorization (68,53,55,52), statistical (46,69,70,59,57,44,71,72,73,55), and machine learning tools (74,73,55). Although the dimensionality reduction procedure is necessary and may improve the predictive performance, it can also cause the loss of key information (66).…”
Section: Background and Related Work
confidence: 99%
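The early-integration (juxtaposition) approach described above can be sketched in a few lines: two omics feature matrices sharing the same samples are concatenated column-wise, and the high-dimensional joint matrix is then reduced by matrix factorization. The sketch below uses synthetic data and PCA via SVD purely for illustration; the matrix sizes and component count are arbitrary assumptions, not values from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 samples measured on two omics layers,
# e.g. 200 transcript features and 80 metabolite features.
transcriptomics = rng.normal(size=(50, 200))
metabolomics = rng.normal(size=(50, 80))

# Early integration ("juxtaposition"): concatenate the feature
# matrices sample-wise into one joint matrix.
joint = np.concatenate([transcriptomics, metabolomics], axis=1)  # (50, 280)

# Dimensionality reduction by matrix factorization (PCA via SVD):
# center each feature, then keep the top-k principal components.
k = 10
centered = joint - joint.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
reduced = U[:, :k] * S[:k]  # (50, 10) low-dimensional embedding

print(joint.shape, reduced.shape)
```

As the excerpt notes, the truncation to k components is what risks discarding key information: variance-based factorization keeps dominant directions, which need not be the biologically relevant ones.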
“…This is also unlike the case of genomics where tools and methods for both de novo and reference‐guided assembly and annotations are more or less standardized. More extensive discussions on the advantages and disadvantages of ‐omics methods and workflows toward an integrated ‐omics approach are discussed elsewhere. To this end, newer large‐scale approaches, such as fishing for protein‐binding metabolites either using in vitro techniques or in silico approaches, have shown tremendous potential for the capture of the metabolite counterparts of macromolecules (i.e., proteins, and nucleic acids) and have helped pinpoint metabolite binding sites on a proteome‐wide scale.…”
Section: Coverage and Comparability With Other‐omics For A Systems View
confidence: 99%
“…In particular, omics research (genomics, proteomics, metabolomics, etc.) is leading the charge in the growth of Big data [6,7]. The challenges in omics research are data cleaning, normalization, biomolecule identification, data dimensionality reduction, biological contextualization, statistical validation, data storage and handling, sharing and data archiving.…”
Section: Big Medic Data
confidence: 99%
“…Data analytics requirements include several tasks like those of data cleaning, normalization, biomolecule identification, data dimensionality reduction, biological contextualization, statistical validation, data storage and handling, sharing and data archiving. These tasks are required for the Big data in some of the omics datasets like genomics, transcriptomics, proteomics, metabolomics, metagenomics, phenomics [6].…”
Section: Big Medic Data
confidence: 99%
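Of the analytics tasks listed in these excerpts, normalization is the most mechanical to illustrate. A minimal sketch, assuming per-feature z-scoring as the normalization step (one common choice among many; the function name and data are hypothetical): each feature column is rescaled to zero mean and unit variance so that layers measured on very different scales, such as read counts versus metabolite intensities, become comparable before integration.

```python
import numpy as np

def zscore_normalize(X, eps=1e-8):
    """Rescale each feature (column) of X to zero mean, unit variance.

    eps guards against division by zero for constant features.
    """
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / (sigma + eps)

# Hypothetical matrix: 100 samples, 20 features on an arbitrary scale.
rng = np.random.default_rng(1)
X = rng.normal(loc=5.0, scale=3.0, size=(100, 20))
Xn = zscore_normalize(X)
```

This is only one of the listed tasks; steps such as biomolecule identification or biological contextualization depend on domain databases and cannot be reduced to a one-liner.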