2010
DOI: 10.1007/s11136-010-9740-3
Missing data methods for dealing with missing items in quality of life questionnaires. A comparison by simulation of personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques applied to the SF-36 in the French 2003 decennial health survey

Abstract: Whereas multiple imputation and full information maximum likelihood are confirmed as reference methods, the personal mean score appears nonetheless appropriate for dealing with items missing from completed SF-36 questionnaires in most situations of routine use. These results can reasonably be extended to other questionnaires constructed according to classical test theory.
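
The personal mean score approach referenced in the abstract and in the citing papers below replaces a missing item with the mean of the respondent's own answered items on the same subscale, typically only when at most half of the subscale's items are missing. The following is a minimal illustrative sketch of that rule in Python; the DataFrame, subscale name, and item columns are hypothetical and are not taken from the SF-36 scoring manual.

```python
import numpy as np
import pandas as pd

def personal_mean_score_impute(df, subscale_items):
    """Replace a missing item with the respondent's mean over the answered
    items of the same subscale, but only when at most half of that
    subscale's items are missing; otherwise the items are left missing."""
    out = df.copy()
    for items in subscale_items.values():
        block = out[items]
        n_missing = block.isna().sum(axis=1)
        # Eligible respondents: at least one item missing, at most half missing.
        eligible = (n_missing > 0) & (n_missing <= len(items) / 2)
        person_mean = block.mean(axis=1, skipna=True)  # mean of answered items
        for item in items:
            fill = eligible & block[item].isna()
            out.loc[fill, item] = person_mean[fill]
    return out

# Hypothetical 4-item subscale; respondent 0 skipped one item.
answers = pd.DataFrame({
    "pf_1": [3, 2],
    "pf_2": [np.nan, 2],
    "pf_3": [4, 1],
    "pf_4": [3, 2],
})
print(personal_mean_score_impute(
    answers, {"physical_functioning": ["pf_1", "pf_2", "pf_3", "pf_4"]}))
```
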

Cited by 212 publications (122 citation statements); references 31 publications.
“…Applying the interview version of a questionnaire instead of postal self-administration can improve response rates and reduce missing data [41]. In addition, methods to handle missing data, such as multiple imputation techniques, were recommended in order to quantify potential biases [41,47]. In terms of the analysis of psychometric properties we showed that the 12-item WHODAS 2.0 fulfilled the assumptions of Rasch modeling.…”
Section: Discussion (mentioning)
confidence: 89%
“…Unfortunately, there is no way to determine whether data are NMAR because the information required to make this determination is missing (Allison 2012). Research indicates that MI, FIML, and EM are significantly less likely to lead to bias than traditional approaches to handling missing data at all three levels of missingness (Peyre et al 2011). Consequently, the current analyses were based on the assumption that data were missing at random.…”
Section: Missing Data (mentioning)
confidence: 99%
“…Missing data of the SF-36 were computed per subscale by imputation of personal mean scores, in case half or less of questions within the subscale were missing (24). Imputation of personal mean scores per subscale was also used if one item or less was missing per subscale in the HADS and MFI-20.…”
Section: Missing Data (mentioning)
confidence: 99%