Background
Digital technologies have changed how people manage their health, and eHealth literacy is needed to engage effectively with health technologies. Any eHealth strategy would be ineffective if users’ eHealth literacy needs are not addressed; a robust measure of eHealth literacy is therefore essential for understanding these needs. On the basis of the eHealth Literacy Framework, which identified 7 dimensions of eHealth literacy, the eHealth Literacy Questionnaire (eHLQ) was developed. The tool has demonstrated robust psychometric properties in the Danish setting, but validity testing should be an ongoing and cumulative process.
Objective
This study aims to evaluate validity evidence based on test content, response process, and internal structure of the eHLQ in the Australian community health setting.
Methods
A mixed methods approach was used: cognitive interviewing was conducted to examine evidence on test content and response process, and a cross-sectional survey was undertaken to gather evidence on internal structure. Data were collected at 3 diverse community health sites in Victoria, Australia. Psychometric testing drew on both classical test theory and item response theory approaches. Methods included Bayesian structural equation modeling for confirmatory factor analysis, internal consistency and test-retest analyses for reliability, and a Bayesian multiple-indicators, multiple-causes (MIMIC) model for testing differential item functioning.
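To illustrate two of the reliability statistics named above, the following minimal Python sketch computes composite scale reliability from standardized factor loadings and an ICC(2,1) for test-retest agreement. This is not the study's analysis code; the function names and all data values are illustrative assumptions only.

import numpy as np

def composite_reliability(loadings):
    # Composite reliability from standardized loadings, assuming
    # uncorrelated residuals with variances 1 - loading**2.
    loadings = np.asarray(loadings, dtype=float)
    residuals = 1.0 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + residuals.sum())

def icc_2_1(scores):
    # ICC(2,1): two-way random effects, absolute agreement, single measure.
    # `scores` is an (n_subjects, k_occasions) array, e.g. test and retest.
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between occasions
    sse = ((scores - scores.mean(axis=1, keepdims=True)
            - scores.mean(axis=0) + grand) ** 2).sum()
    ms_err = sse / ((n - 1) * (k - 1))  # residual mean square
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Example with made-up numbers: a hypothetical 4-item scale and a
# simulated test-retest administration for 50 respondents.
print(composite_reliability([0.71, 0.64, 0.80, 0.58]))
rng = np.random.default_rng(0)
test = rng.normal(3.0, 0.5, size=50)
retest = test + rng.normal(0.0, 0.2, size=50)
print(icc_2_1(np.column_stack([test, retest])))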
Results
Cognitive interviewing identified only 1 confusing term, which was clarified. All items were easy to read and were understood as intended. A total of 525 questionnaires were included in the psychometric analysis. All scales were homogeneous, with composite scale reliability ranging from 0.73 to 0.90. The intraclass correlation coefficients for test-retest reliability of the 7 scales ranged from 0.72 to 0.95. A 7-factor Bayesian structural equation model using small variance priors for cross-loadings and residual covariances was fitted to the data, and the model of interest produced a satisfactory fit (posterior predictive P=.49; 95% CI for the difference between observed and replicated chi-square values −101.40 to 108.83; prior-posterior predictive P=.92). All items loaded on their intended factors, with loadings ranging from 0.36 to 0.94, and no significant cross-loadings were found. There was no evidence of differential item functioning across administration format, site area, or health setting. However, discriminant validity was not well established for scales 1, 3, 5, 6, and 7. Item response theory analysis found that all items except 1 provided precise information at different trait levels. Items differed in their sensitivity to different trait levels and represented a range of difficulty levels.
Conclusions
The evidence suggests that the eHLQ is a tool with robust psychometric properties, although further investigation of discriminant validity is recommended. It is ready to be used to identify eHealth literacy strengths and challenges and to assist in the development of digital health interventions, ensuring that people with limited digital access and skills are not left behind.