2008
DOI: 10.1074/mcp.m700240-mcp200

Statistical Similarities between Transcriptomics and Quantitative Shotgun Proteomics Data

Abstract: If the large collection of microarray-specific statistical tools was applicable to the analysis of quantitative shotgun proteomics datasets, it would certainly foster an important advancement of proteomics research. Here we analyze two large multidimensional protein identification technology datasets, one containing eight replicates of the soluble fraction of a yeast whole-cell lysate and one containing nine replicates of a human immunoprecipitate, to test whether normalized spectral abundance factor (NSAF) va…

Cited by 152 publications (223 citation statements)
References 57 publications
“…The additive-multiplicative error structure has also been reported with quantitation by other MS-based methodologies, and the additive component may arise from the integration of count-based signal inherent with the majority of MS instrumentation (29,31,32) and/or the presence of a small basal unspecific background signal. As a consequence, heterogeneity of variance is, to varying degrees, likely to be an inherent feature of such data. iTRAQ, like other MS-based quantitation techniques, faces the problem of how to combine readings from multiple peptides to estimate an abundance ratio for the parent protein.…”
Section: Discussion
confidence: 99%
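The additive-multiplicative error structure described in this citation statement can be illustrated with a small simulation. This is a hypothetical sketch, not the cited authors' model: the variance is taken as sigma² = a² + (b·µ)², with made-up parameters `a` (additive noise floor) and `b` (multiplicative component), so the coefficient of variation is inflated at low signal and flattens toward `b` at high signal.

```python
import numpy as np

# Hypothetical additive-multiplicative error model:
# variance = a^2 + (b * mu)^2. The parameters a and b below are
# illustrative, not values taken from the cited work.
rng = np.random.default_rng(0)
a, b = 5.0, 0.1  # additive noise floor; 10% multiplicative component

def cv(mu, n=100_000):
    """Empirical coefficient of variation of a signal with true mean mu."""
    sigma = np.sqrt(a**2 + (b * mu) ** 2)
    x = rng.normal(mu, sigma, size=n)
    return x.std() / x.mean()

# The additive component dominates at low signal (high CV); at high
# signal the CV approaches the multiplicative floor of about 10%.
low, high = cv(10.0), cv(1000.0)
print(f"CV at mu=10: {low:.2f}, CV at mu=1000: {high:.2f}")
```

This heterogeneity of variance (high CV at low counts) is exactly what makes naive per-protein statistics unreliable for low-abundance proteins.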
“…Pavelka et al. (31) used a power law global error model in conjunction with quantitation data derived from spectral counts. Other authors have proposed that the higher CV at low signal arises from the majority of MS instrumentation measuring ion counts as whole numbers (32).…”
confidence: 99%
“…Protein abundance was evaluated according to their normalized spectral abundance factors, calculated as the spectral count relative to the length of a given protein normalized by the sum of the relative spectral counts in the sample (34). We used the open source software package "plgem" (written in R and maintained by the BioConductor project) to establish some statistical significance on this data set (39). Briefly, the runs with the most replicates (negative controls) were used to fit a power law global error model on the mean versus S.D.…”
Section: Methods
confidence: 99%
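The NSAF definition quoted above (each protein's spectral count divided by its length, normalized by the sum of all such ratios in the sample) can be sketched directly. The spectral counts and sequence lengths below are made-up illustration values:

```python
def nsaf(spectral_counts, lengths):
    """Normalized spectral abundance factors: each protein's
    length-normalized spectral count (SpC / L) divided by the sum of
    the length-normalized counts of all proteins in the sample."""
    saf = [c / l for c, l in zip(spectral_counts, lengths)]
    total = sum(saf)
    return [s / total for s in saf]

# Hypothetical sample: three proteins with spectral counts and
# sequence lengths (in residues). NSAF values sum to 1 by construction,
# so they are comparable across runs with different total counts.
print(nsaf([20, 5, 50], [400, 250, 1000]))
```

The length normalization matters because longer proteins yield more tryptic peptides, and hence more spectral counts, at the same molar abundance.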
“…It has been shown that the use of PLGEM-based standard deviations to calculate signal-to-noise ratios in an NSAF data set improves determination of protein expression changes, as it is more conservative with proteins of low abundance than with proteins of high abundance. The goodness of fit of the model to the NSAF data and the relevant algorithmic details of the PLGEM method are explained in detail elsewhere (Pavelka et al., 2008). Principal component analysis of exported normalized FT-IR spectra was done using SIMCA-P 12.0.1.…”
Section: Statistics
confidence: 99%
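As a rough illustration of the PLGEM idea referenced in these citation statements (a hand-rolled Python sketch, not the plgem R/Bioconductor package): a power law is fit between per-protein replicate means and standard deviations, and the modeled standard deviations then replace the noisy per-protein estimates when computing signal-to-noise ratios.

```python
import numpy as np

def fit_plgem(replicates):
    """Fit log10(sd) = slope * log10(mean) + intercept across proteins,
    given a (proteins x replicates) array of abundance values."""
    mean = replicates.mean(axis=1)
    sd = replicates.std(axis=1, ddof=1)
    keep = (mean > 0) & (sd > 0)
    slope, intercept = np.polyfit(np.log10(mean[keep]), np.log10(sd[keep]), 1)
    return slope, intercept

def signal_to_noise(mean_a, mean_b, slope, intercept):
    """Signal-to-noise ratio between two conditions using model-based
    standard deviations rather than per-protein sample estimates."""
    sd_a = 10 ** (slope * np.log10(mean_a) + intercept)
    sd_b = 10 ** (slope * np.log10(mean_b) + intercept)
    return (mean_a - mean_b) / (sd_a + sd_b)

# Synthetic check: 500 proteins, 8 replicates, generated with true
# sd = mean**0.7, so the fitted slope should come out near 0.7.
rng = np.random.default_rng(1)
means = rng.uniform(10, 1000, size=500)
data = rng.normal(means[:, None], (means ** 0.7)[:, None], size=(500, 8))
slope, intercept = fit_plgem(data)
```

Pooling all proteins into one global fit is what makes the modeled standard deviation more conservative for low-abundance proteins than a sample standard deviation estimated from only a handful of replicates.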