Purpose: To develop and psychometrically evaluate a visual functioning questionnaire (VFQ) in an ultra-low vision (ULV) population.

Methods: Questionnaire items, based on visual activities self-reported by a ULV population, were categorized by functional visual domain (e.g., mobility) and visual aspect (e.g., contrast) to ensure a representative distribution. In Round 1, an initial set of 149 items was generated and administered to 90 participants with ULV (visual acuity [VA] ≤ 20/500; mean [SD] age, 61 [15] years), including six patients with a retinal implant. Psychometric properties were evaluated through Rasch analysis, and a revised set of 150 items was administered to 80 participants in Round 2.

Results: In Round 1, the person measure distribution (range, 8.6 logits) was centered at −1.50 logits relative to the item measures. In Round 2, the person measure distribution (range, 9.5 logits) was centered at −0.86 logits relative to the item mean. In both rounds, the reliability index was 0.97 for items and 0.99 for persons. Infit analysis identified four underfit items in Round 1 and five in Round 2, using a z-score cutoff of 4. Principal component analysis of the residuals showed 69.9% explained variance; the largest component of the unexplained variance accounted for less than 3%.

Conclusions: The ULV-VFQ, developed with content generated from a ULV population, showed excellent psychometric properties as well as superior measurement validity in a ULV population.

Translational Relevance: The ULV-VFQ, part of the Prosthetic Low Vision Rehabilitation (PLoVR) development program, is a new VFQ developed for assessment of functional vision in ULV populations.
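For readers unfamiliar with the Rasch framework referenced above, the following is a minimal, self-contained Python (NumPy/SciPy) sketch of that kind of analysis: a dichotomous Rasch model fit by joint maximum likelihood, person and item measures expressed in logits, item infit mean-squares, and a principal component analysis of the standardized residuals. The data are simulated and only loosely sized to Round 2; the published study analyzed polytomous ULV-VFQ ratings with dedicated Rasch software, so everything below is an illustrative assumption rather than the authors' pipeline.

```python
# A minimal sketch, assuming simulated dichotomous (0/1) responses; the ULV-VFQ
# itself uses polytomous ratings and was analyzed with dedicated Rasch software.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_persons, n_items = 80, 150                         # loosely Round-2-sized (hypothetical)
theta_true = rng.normal(-0.9, 2.0, n_persons)        # person measures, in logits
b_true = rng.normal(0.0, 1.5, n_items)               # item measures, in logits
p_true = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.random((n_persons, n_items)) < p_true).astype(float)

def nll_and_grad(params):
    """Joint negative log-likelihood of the Rasch model and its gradient."""
    theta, b = params[:n_persons], params[n_persons:]
    eta = theta[:, None] - b[None, :]
    P = 1.0 / (1.0 + np.exp(-eta))
    nll = -(X * eta - np.log1p(np.exp(eta))).sum()
    resid = X - P
    return nll, np.concatenate([-resid.sum(axis=1), resid.sum(axis=0)])

# Bounds keep estimates finite for persons/items with extreme (all-0 or all-1) scores.
res = minimize(nll_and_grad, np.zeros(n_persons + n_items), jac=True,
               method="L-BFGS-B", bounds=[(-8.0, 8.0)] * (n_persons + n_items))
shift = res.x[n_persons:].mean()                      # anchor item measures at mean 0 logits
theta_hat, b_hat = res.x[:n_persons] - shift, res.x[n_persons:] - shift

# Infit mean-square per item (information-weighted fit); the study reports the
# standardized (z) form with a cutoff of 4, which adds a normalizing transform.
P_hat = 1.0 / (1.0 + np.exp(-(theta_hat[:, None] - b_hat[None, :])))
W = P_hat * (1.0 - P_hat)
infit = ((X - P_hat) ** 2).sum(axis=0) / W.sum(axis=0)

# PCA of standardized residuals: a small leading component supports
# unidimensionality (cf. the <3% unexplained-variance component reported above).
Z = (X - P_hat) / np.sqrt(W)
eigvals = np.linalg.eigvalsh(np.cov(Z, rowvar=False))[::-1]

print("person measure range (logits):", round(float(np.ptp(theta_hat)), 2))
print("person mean relative to item mean (logits):", round(float(theta_hat.mean()), 2))
print("items with infit mean-square > 1.3:", int((infit > 1.3).sum()))
print("largest residual PC (% of residual variance):",
      round(100 * eigvals[0] / eigvals.sum(), 1))
```

In a real analysis, a polytomous (rating scale or partial credit) model and standardized fit statistics would replace the simplifications above.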
Purpose: To understand how individuals with profound visual impairment (ultra-low vision, ULV) use their remaining vision.

Methods: Forty-six participants with ULV (visual acuity ≤ 20/500 in the better-seeing eye) were divided into nine focus groups (4–6 individuals per group) that met either in person (2 groups) or by phone (7 groups). Discussions were guided by the Massof Activity Inventory. Audio recordings were transcribed and analyzed for visual activities, which were then classified along two dimensions: functional domains and visual aspects. The latter classification was based on a Grounded Theory analysis of participants' descriptions.

Results: Seven hundred sixty activities were reported. By functional domain, they were classified as reading/shape recognition (10%), mobility (17%), visual motor (24%), and visual information gathering (49%). By visual aspect, they were classified as contrast (43%), luminance (17%), environmental lighting (9%), familiarity (3%), motion perception (5%), distance (7%), size (9%), eccentricity (5%), depth perception (1%), and other/miscellaneous (1%). More than one visual aspect may be critical for an activity: participants reported that contrast plays a role in 68% of visual activities, followed by luminance (27%), environmental lighting (14%), and size (14%).

Conclusions: Visual aspects, primarily contrast, were found to be critical factors enabling individuals with ULV to perform visual activities.

Translational Relevance: This inventory, part of the Prosthetic Low Vision Rehabilitation (PLoVR) curriculum development study, provides a unique perspective into the visual world of the nearly blind and can be used in the development of a visual functioning questionnaire (VFQ) and visual performance measures suited for ULV populations.
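To make the two-way classification concrete, here is a small hypothetical sketch of the tallying step: each reported activity carries one functional domain and one or more visual aspects, which is why the aspect percentages in the final sentence (e.g., contrast at 68%) can sum to more than 100%. The activity names and tags below are illustrative, not drawn from the study's transcripts.

```python
# Hypothetical tally of focus-group activities by functional domain and visual aspect.
from collections import Counter

activities = [
    {"activity": "locating a doorway", "domain": "mobility",
     "aspects": ["contrast", "environmental lighting"]},
    {"activity": "finding a cup on a table", "domain": "visual motor",
     "aspects": ["contrast", "size"]},
    {"activity": "noticing a person approaching", "domain": "visual information gathering",
     "aspects": ["motion perception", "distance"]},
]

domain_counts = Counter(a["domain"] for a in activities)
aspect_counts = Counter(aspect for a in activities for aspect in a["aspects"])

n = len(activities)
# Domain percentages sum to 100%; aspect percentages can exceed 100% because
# one activity may depend on several aspects (e.g., contrast AND lighting).
print({d: f"{100 * c / n:.0f}%" for d, c in domain_counts.items()})
print({asp: f"{100 * c / n:.0f}%" for asp, c in aspect_counts.items()})
```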