2018
DOI: 10.1111/age.12655
Lessons learnt on the analysis of large sequence data in animal genomics

Abstract: The 'omics revolution has made a large amount of sequence data available to researchers and the industry. This has had a profound impact in the field of bioinformatics, stimulating unprecedented advancements in this discipline. Mostly, this is looked at from the perspective of human 'omics, in particular human genomics. Plant and animal genomics, however, have also been deeply influenced by next-generation sequencing technologies, with several genomics applications now popular among researchers and the…

Cited by 10 publications (9 citation statements)
References 65 publications
“…Other new variant identification strategies such as the GATK HaplotypeCaller or support vector machines (O'Fallon et al., 2013) require extra inputs such as lists of true variants, false-positive variants, or estimates of machine- or allele-specific bias that can improve call rate but often with even longer run times. Researchers in animal genetics often need different computing strategies than those developed in human genetics (Biscarini et al., 2018), primarily because of differing goals and limited budgets. Animal genetics focuses on genomic prediction, low-density genotyping, lower-coverage sequencing, and deep pedigrees, whereas human genetics often focuses on disease treatment, higher-density genotyping, higher-coverage sequencing, unrelated individuals, and discovering genetic origins.…”
Section: Discussion (mentioning, confidence: 99%)
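The excerpt above notes that SVM-based variant filtering needs lists of known true and false-positive variants as training input. Below is a minimal, hypothetical sketch of that general idea using scikit-learn, with toy data and invented per-variant feature names (depth, quality, strand bias, allele balance); it illustrates the approach only and is not the O'Fallon et al. (2013) pipeline or GATK's own filtering.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-variant QC features: depth, quality, strand bias, allele balance
X_known = rng.normal(size=(200, 4))
y_known = rng.integers(0, 2, size=200)  # 1 = known true variant, 0 = known false positive

# Scale the features and fit an RBF-kernel SVM on variants of known status
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_known, y_known)

# Score new candidate calls and keep only those the model deems likely true
X_new = rng.normal(size=(10, 4))
keep = clf.predict_proba(X_new)[:, 1] > 0.9
print(f"{keep.sum()} of {len(keep)} candidate calls retained")
```

With real data the training labels would come from a curated truth set and the features from the annotations emitted by the variant caller, which is exactly the "extra input" cost the excerpt describes.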
“…Among-groups (BM vs. WM) and pairwise Bray–Curtis dissimilarities were evaluated non-parametrically using the permutational analysis of variance (999 permutations) (42). Details on the calculation of the mentioned alpha and beta diversity indices can be found in Supplementary File 1 and in Biscarini et al. (43).…”
Section: Methods (mentioning, confidence: 99%)
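As a companion to the excerpt above, here is a minimal sketch of computing pairwise Bray–Curtis dissimilarities and running a permutational analysis of variance (999 permutations) between two groups such as BM vs. WM. The count table, sample names, and group sizes are invented, and it assumes scipy and scikit-bio are installed; it only illustrates the kind of analysis described, not the cited study's actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

rng = np.random.default_rng(42)

# Toy count table: 8 samples x 50 taxa, split into two groups (BM vs. WM)
counts = rng.poisson(10, size=(8, 50)).astype(float)
ids = [f"sample{i}" for i in range(8)]
grouping = ["BM"] * 4 + ["WM"] * 4

# Pairwise Bray-Curtis dissimilarities between samples
bc = squareform(pdist(counts, metric="braycurtis"))
dm = DistanceMatrix(bc, ids)

# Non-parametric among-groups test with 999 permutations (PERMANOVA)
result = permanova(dm, grouping, permutations=999)
print(result)
```

With real microbiome data the counts would come from the study's OTU/ASV table rather than random numbers, typically after normalisation or rarefaction.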
“…Irrespective of the chosen "omics," the need to manage large amounts of molecular data to correctly infer and validate biological hypotheses imposes a rigorous step-by-step evaluation of the work, including experimental design, sampling protocols, and data processing (e.g., statistics, bioinformatics pipelines, visualization, and database matching). 32,33 At the cutting edge of "omics" integration and multiplatform research, the analysis of host-associated microbiota by metabarcoding of marker genes, shotgun metagenomics, and metabolomics has provided diagnostic and predictive capabilities that can now help elucidate complex interspecies relationships and perturbations, such as symbiotic, commensalistic, opportunistic, and pathogenic. 34,35 Despite the steadily increasing number of transcriptomics- and proteomics-based studies, not many papers refer to the in vivo analysis of host-pathogen interactions.…”
Section: Dual Analysis of Virus-Host Interactions (mentioning, confidence: 99%)
“…Irrespective of the chosen "omics," the need to manage large amounts of molecular data to correctly infer and validate biological hypotheses imposes a rigorous step-by-step evaluation of the work, including experimental design, sampling protocols, and data processing (e.g., statistics, bioinformatics pipelines, visualization, and database matching). 32,33…”
Section: Introduction (mentioning, confidence: 99%)