The use of websites and mobile applications has become essential for numerous daily activities. However, not everyone has full access to such services and content, since many websites and applications remain inaccessible to people with disabilities, such as people with vision impairments. In this context, even though developers may make an effort to create more accessible content, there is limited information about the characteristics of the different accessibility assessment methods applied to websites and mobile applications. Thus, the present study aimed to perform a meta-analysis of 38 types of accessibility problems on websites and mobile applications, extracted from 38 studies in the literature selected from an initial search of 304 articles. The studies carried out automated assessments using tools, expert-based inspections, and user testing involving people with disabilities. The results confirm other observations made in the literature, showing that automated evaluation methods have significant limitations in their coverage of accessibility problems, covering less than 40% of the types of problems found on websites and less than 20% of those found on mobile apps. A significant percentage of problems on both mobile and web platforms was encountered only in studies involving users. Expert inspection showed higher coverage of the problems encountered by users, both on mobile apps and on websites, although it still did not cover all of them. Thus, the article concludes by consolidating literature data to reinforce that effective accessibility evaluations of web and mobile applications should include expert-based inspections and user tests involving people with disabilities.