Failure to consider the characteristics, limitations, and abilities of diverse end-users during mobile app development can lead to problems for end-users, such as accessibility and usability issues. We refer to this class of problems as human-centric issues. Despite their importance, there is limited understanding of the types of human-centric issues that end-users encounter and that mobile app developers take into account. In this paper, we examine what human-centric issues end-users report through Google App Store reviews, which human-centric issues developers discuss on GitHub, and whether end-users and developers discuss the same human-centric issues. We then investigate whether an automated tool could help detect such human-centric issues and whether developers would find such a tool useful. To do this, we conducted an empirical study in which we extracted and manually analysed a random sample of 1,200 app reviews and 1,200 issue comments from 12 diverse projects available on both the Google App Store and GitHub. Our analysis led to a taxonomy that categorises human-centric issues into three high-level categories: App Usage, Inclusiveness, and User Reaction. We then developed machine learning and deep learning models that show promise in automatically identifying and classifying human-centric issues from app reviews and developer discussions. A survey of mobile app developers shows that the automated detection of human-centric issues has practical applications. Guided by our findings, we highlight implications and possible future work to further understand and incorporate human-centric issues into mobile app development.
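
The abstract does not specify which models were used; purely as an illustrative sketch, the Python snippet below shows how app reviews or issue comments might be classified into the taxonomy's three high-level categories using a TF-IDF plus logistic regression baseline with scikit-learn. The example texts, labels, and pipeline choices are hypothetical assumptions, not the paper's actual method or data.

    # Minimal sketch (not the paper's actual models): a TF-IDF + logistic
    # regression baseline for classifying app reviews / issue comments into
    # the taxonomy's high-level categories. Texts and labels are hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    # Hypothetical labelled data: (text, category) pairs drawn from app
    # reviews or GitHub issue comments.
    texts = [
        "The font is too small for me to read without zooming",
        "Screen reader skips the confirmation button entirely",
        "Love the new update, works great for my workflow",
    ]
    labels = ["App Usage", "Inclusiveness", "User Reaction"]

    # TF-IDF features feeding a linear classifier; a deep learning variant
    # would swap this pipeline for, e.g., a fine-tuned transformer encoder.
    clf = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("model", LogisticRegression(max_iter=1000)),
    ])
    clf.fit(texts, labels)

    # Predict the category of an unseen review (hypothetical example).
    print(clf.predict(["The buttons are too close together for one-handed use"]))

In practice, such a baseline would be trained on the manually labelled reviews and issue comments described above and compared against deep learning alternatives; the sketch only illustrates the classification task itself.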