This research introduces an intelligent model for predicting and analyzing sentiment in audio feedback from students with visual impairments in a virtual learning environment. Sentiment is classified into five categories: high positive, positive, neutral, negative, and high negative. The model draws its data from an educational platform (Microsoft Teams) used after the COVID-19 outbreak and provides automated evaluation and visualization of audio feedback, which helps improve student performance. It also gives educators better insight into the sentiment of visually impaired e-learning students. Using support vector machine (SVM) and artificial neural network (ANN) algorithms, the sentiment responses from the assessment were used successfully to identify deficiencies in computer literacy and to forecast performance. The model predicted student performance well when ANN algorithms were applied to combined structured and unstructured data, notably by the 9th week, outperforming predictions based on unstructured data alone. Overall, the research findings carry inclusive policy implications for educating students with visual impairments and highlight the role of technology in enhancing the learning experience for these students.
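To make the classification setup concrete, the sketch below shows five-class sentiment classification of transcribed feedback with an SVM, one of the two algorithm families named above. This is a minimal illustration, not the paper's pipeline: the TF-IDF features, scikit-learn's `LinearSVC`, and the toy transcripts are all assumptions, since the actual corpus and feature engineering are not described here.

```python
# Illustrative sketch (assumption): five-class sentiment classification of
# transcribed audio feedback using TF-IDF features and a linear SVM.
# The texts below are hypothetical stand-ins for student feedback.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

LABELS = ["high positive", "positive", "neutral", "negative", "high negative"]

texts = [
    "I absolutely loved this lesson, it was excellent",
    "The class was good and helpful",
    "The session was okay, nothing special",
    "I struggled and found the material confusing",
    "This was a terrible experience, I am very frustrated",
]
labels = LABELS  # one toy example per class, for illustration only

# Pipeline: raw text -> TF-IDF vectors -> linear SVM classifier.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)

prediction = model.predict(["the lesson was okay"])[0]
print(prediction)
```

In a real deployment, the audio feedback would first be transcribed to text, and the classifier would be trained on a labeled corpus far larger than this toy example.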