2000
DOI: 10.1002/1098-2272(200012)19:4<333::aid-gepi5>3.0.co;2-#
Ascertainment issues in variance components models

Abstract: One of the main concerns in family studies of complex diseases is the effect that ascertainment, and correction for it, may have on test procedures and estimators. Elston and Sobel [1979] and Hopper and Mathews [1982] proposed two ways to correct for ascertainment in the study of quantitative trait data. For single ascertainment, using a variance components approach, we present results of simulation studies comparing estimates from these two methods under different selection criteria. We also show results from…
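For orientation, here is a minimal sketch of the two single-ascertainment corrections as they are usually framed in this literature (the notation is ours, not the paper's). Let y be the trait vector of a pedigree with density f(y; θ) under the variance components model, ascertained through a single proband whose trait value y_p exceeds a threshold t. Conditioning on the event of ascertainment (the Elston and Sobel [1979] approach) and conditioning on the proband's observed trait value (the Hopper and Mathews [1982] approach) give, respectively,

\[
L_{\mathrm{ES}}(\theta) = \frac{f(\mathbf{y};\theta)}{\Pr(y_p > t;\,\theta)},
\qquad
L_{\mathrm{HM}}(\theta) = \frac{f(\mathbf{y};\theta)}{f(y_p;\theta)} .
\]

Both amount to maximizing a conditional likelihood; they differ in how much of the proband's information is conditioned away, which is what drives the differences between the estimators compared in the paper's simulations.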

Cited by 44 publications (17 citation statements) · References 13 publications
“…On the other hand, Sham et al (2000b) showed that standard variance components analysis of selected samples has inflated type-1 error rate, whether the trait follows a normal distribution or not. Appropriate ascertainment corrections can nonetheless be used to control the type-1 error rates of VC (Andrade & Amos, 2000; Sham et al, 2000a). Similarly, Allison et al (1999) and Blangero et al (2001) showed in a range of simulations that standard VC has inflated type-1 error rate when analyzing nonnormal data from a random sample.…”
Section: Type-1 Error Rate (supporting)
confidence: 85%
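To make the role of the correction concrete, below is a small simulation sketch (our illustration, not code from the paper or any of the citing papers; the variable names, threshold, and the reduction to a single sib-correlation parameter rather than a full variance components fit are all assumptions). It shows, for sib pairs under single ascertainment, that the naive correlation estimate from a selected sample is biased toward zero, while a likelihood conditioned on the proband's observed value, in the spirit of the Hopper and Mathews [1982] correction, recovers the true value.

```python
# Hypothetical sketch: effect of single ascertainment on a naive estimator,
# and a proband-conditioned likelihood that corrects for it.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
rho_true, threshold, n_pairs = 0.5, 1.0, 200_000  # assumed values

# Simulate standard bivariate normal sib-pair traits with correlation rho_true.
cov = np.array([[1.0, rho_true], [rho_true, 1.0]])
y = rng.multivariate_normal([0.0, 0.0], cov, size=n_pairs)

# Single ascertainment: keep a pair only if the proband (column 0) exceeds t.
sel = y[y[:, 0] > threshold]
y_proband, y_sib = sel[:, 0], sel[:, 1]

# Naive estimate: Pearson correlation in the selected sample (attenuated).
rho_naive = np.corrcoef(y_proband, y_sib)[0, 1]

# Corrected estimate: maximize the likelihood of the sib's trait conditional
# on the proband's observed value, f(y_sib | y_proband; rho). Selection on
# the proband leaves this conditional density intact.
def neg_cond_loglik(rho):
    resid_var = 1.0 - rho ** 2
    resid = y_sib - rho * y_proband
    return 0.5 * np.sum(np.log(2 * np.pi * resid_var) + resid ** 2 / resid_var)

rho_corrected = minimize_scalar(neg_cond_loglik, bounds=(-0.99, 0.99),
                                method="bounded").x

print(f"true rho          = {rho_true:.3f}")
print(f"naive (selected)  = {rho_naive:.3f}")      # biased toward zero
print(f"conditional MLE   = {rho_corrected:.3f}")  # close to the true value
```

With these settings the naive estimate comes out near 0.25 against a true value of 0.5, which is the same attenuation-and-correction pattern, in miniature, that the ascertainment-corrected variance components likelihoods address.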
“…The problem is exacerbated in datasets comprising small kindreds such as sib pairs or nuclear families, and our experience suggests that linkage analysis of discrete traits in extended pedigrees can also benefit from sample ascertainment, especially for traits having a prevalence much lower than 10%. Nonrandom sampling can significantly increase recruitment costs (Todorov et al 1997), and complicate the inferential aspects of statistical linkage analysis (Fisher, 1935; Cannings & Thompson, 1977; Hanis & Chakraborty, 1984; Thompson, 1993; Comuzzie & Williams, 1999; de Andrade & Amos, 2000), but these difficulties may necessarily remain secondary to the goal of ensuring that a given study design will have the greatest possible power simply to detect the existence of linkage to a disease-predisposing QTL.…”
Section: Discussion (supporting)
confidence: 68%
“…However, such designs are not optimal for studying quantitative endophenotypic variation, where contrasts between individuals with high and low endophenotype values generate power. This fact, especially as it bears on linkage analysis, has been explored in great detail (e.g., [66-75]). Thus, sib pair and multiplex families would not be optimal for studying “quantitative” phenotypes underlying or contributing to “qualitative” disease outcomes like schizophrenia, because, if high scores on these phenotypes truly associate with schizophrenia, then by having families with exclusively multiply affected individuals (and the consequent lack of unaffecteds in those families) one would actually “reduce” the variation exhibited by this trait in the samples and thereby reduce power for studying the determinants of the quantitative phenotypes.…”
Section: Ascertainment Issues (mentioning)
confidence: 55%
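The variance-reduction point in this statement can be quantified with a standard truncated-normal calculation (a textbook result, not taken from the citing paper). If a standard normal trait is selected above a threshold t, the residual variance is

\[
\operatorname{Var}(Y \mid Y > t) = 1 - \delta(\delta - t),
\qquad
\delta = \frac{\phi(t)}{1 - \Phi(t)},
\]

so for t = 1 (selecting roughly the top 16% of the distribution), δ ≈ 1.525 and the variance falls to about 0.20, one fifth of its population value. Ascertaining exclusively high scorers therefore leaves little quantitative variation to analyze, which is exactly the loss of power the statement describes.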