Introduction
For many reasons, it is important for audiologists and consumers to document the improvement and benefit gained from an amplification device at various stages of its use. Professionals are also interested in the impact of an amplification device on the consumer's auditory performance at different stages, i.e., immediately after fitting and over several months of use.
Objective
The objective of the study was to measure hearing aid benefit after 6 months to 1 year, 1 year to 1.5 years, and 1.5 to 2 years of use.
Methods
A total of 45 subjects participated in the study and were divided equally into three groups: hearing aid users of 6 months to 1 year, 1 year to 1.5 years, and 1.5 to 2 years. All subjects responded to the Hearing Aid Benefit Questionnaire (63 questions), which assesses six domains of listening skills.
Result
Results showed that, for all groups, the mean scores obtained were higher in all domains in the aided condition compared with the unaided condition. Results also showed a significant improvement in the overall score between first-time users with 6 months to 1 year of hearing aid experience and users with 1.5 to 2 years of experience.
Conclusion
It is possible to conclude that measuring hearing aid benefit with self-assessment questionnaires will assist clinicians in identifying the everyday listening environments in which a patient experiences the most difficulty, and in reconsidering the technologies fitted.
In this paper, we focus on one facet of visual recognition in computer vision: image captioning, which aims to generate captions for an image automatically using deep learning techniques. First, a Convolutional Neural Network (InceptionV3) is used to detect the objects in the image. A Recurrent Neural Network (RNN), specifically a Long Short-Term Memory (LSTM) network with an attention mechanism, then generates a syntactically and semantically correct caption for the image based on the detected objects. In our project, we work with a traffic sign dataset that has been captioned using the process described above. This model is particularly useful for visually impaired people who need to cross roads safely.
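The attention step in such an encoder-decoder pipeline can be illustrated with a small numerical sketch. The snippet below is not the paper's implementation: the region count, feature and hidden dimensions, and the randomly initialised projection matrices are all illustrative assumptions. It shows Bahdanau-style additive attention, where the decoder's hidden state is scored against each image-region feature (such as those from an InceptionV3 feature map) to produce a weighted context vector for the next caption word.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 64 image regions with 256-dim features (e.g. from a
# CNN feature map), and a 512-dim LSTM decoder hidden state.
num_regions, feat_dim, hidden_dim, attn_dim = 64, 256, 512, 128

features = rng.normal(size=(num_regions, feat_dim))  # encoder output
h = rng.normal(size=(hidden_dim,))                   # current decoder state

# Learned projections (randomly initialised here purely for illustration).
W_f = rng.normal(size=(feat_dim, attn_dim)) * 0.1
W_h = rng.normal(size=(hidden_dim, attn_dim)) * 0.1
v = rng.normal(size=(attn_dim,)) * 0.1

# Additive attention: score each region against the decoder state.
scores = np.tanh(features @ W_f + h @ W_h) @ v       # shape (num_regions,)

# Softmax turns scores into attention weights over regions.
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# The context vector is the attention-weighted sum of region features;
# the decoder would consume it when predicting the next caption token.
context = weights @ features                          # shape (feat_dim,)
```

At each decoding step the weights shift toward the image regions most relevant to the word being generated, which is what lets the model ground each caption token in a specific part of the image.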