In the modern era of technology, many accessibility challenges are addressed with the help of smart devices and cutting-edge gadgets. Smartphones play a crucial role in this regard, supporting voice recognition, sign language detection and interpretation, navigation systems, and speech-to-text (and text-to-speech) conversion, among other capabilities. They are computationally powerful enough to run numerous machine learning and deep learning applications. Among the various accessibility challenges, speech disorders are a disability in which individuals struggle to communicate verbally. Similarly, hearing loss impairs an individual's ability to hear, necessitating reliance on gestures for communication. A significant challenge for people with speech disorders, hearing loss, or both is the difficulty of effectively conveying messages to, and receiving messages from, others. Hence, these individuals depend heavily on sign language, a gesture-based method of communication that typically involves hand movements and expressions. To the best of our knowledge, no comprehensive review or survey articles currently cover the literature on speech disabilities and on sign language detection and interpretation via smartphones using machine learning and/or deep learning approaches. This study fills that gap by analyzing research publications on speech disabilities published from 2012 to July 2023. A rigorous, systematic search strategy was used to compile the literature, together with a well-defined theoretical framework for organizing the results and findings. The paper has implications for practitioners and researchers working on accessibility in general, and on smart/intelligent gadgets and applications for speech-disabled people in particular.