Can persons without disabilities be good evaluators of accessibility? This question, often posed by persons with disabilities when examining crowdsourced accessibility maps, touches on one of the most important unresolved issues in crowdsourcing: data quality control. Many recent ground-breaking advances in machine learning depend on data annotation performed by humans. Existing approaches for managing inaccuracies in crowdsourcing validate output against preset gold standards, but they are unsuitable for subjective contexts such as sentiment analysis, semantic annotation, or measuring accessibility. While existing accessibility maps are largely centered on Europe and the United States, we built the largest database of its kind in Latin America. We detail the techniques used to engage over 27,000 volunteers who generated more than 300,000 data points over the course of 90 months, and a novel method for validating data quality in a context that lacks a definite ground truth. We tested it by applying concepts from serious games to expose biases across different demographic profiles, and by crowdsourcing a separate dataset to validate data quality. We found that persons without disabilities did not perform worse than persons with disabilities, strong evidence that crowdsourcing can be a reliable source of accessibility data.