This article discusses the role of technological myths in the development of Artificial Intelligence (AI) technologies from the 1950s to the early 1970s. It shows how the rise of AI was accompanied by the construction of a powerful cultural myth: the creation of a thinking machine that would be able to perfectly simulate the cognitive faculties of the human mind. Based on a content analysis of articles on Artificial Intelligence published in two magazines, Scientific American and New Scientist, which were aimed at a broad readership of scientists, engineers, and technologists, three dominant patterns in the construction of the AI myth are identified: (1) the recurrence of analogies and discursive shifts, by which ideas and concepts from other fields were employed to describe the functioning of AI technologies; (2) a rhetorical use of the future, imagining that present shortcomings and limitations would shortly be overcome; (3) the relevance of controversies around the claims of AI, which we argue should be considered an integral part of the discourse surrounding the AI myth.
Abstract. Assessing the quality of volunteered geographic information (VGI) is a cornerstone of understanding the fitness for purpose of datasets in many application domains. While most analyses focus on geometric and positional quality, only sporadic attention has been devoted to the interpretation of the data, i.e., the communication process through which consumers try to reconstruct the meaning of the information intended by its producers. Interpretability is a notoriously ephemeral, culturally rooted, and context-dependent property of the data that concerns the conceptual quality of the vocabularies, schemas, ontologies, and documentation used to describe and annotate the geographic features of interest. To operationalize conceptual quality in VGI, we propose a multi-faceted framework comprising accuracy, granularity, completeness, consistency, compliance, and richness, with proxy measures for each dimension. The application of the framework is illustrated in a case study on a European sample of OpenStreetMap, focused specifically on conceptual compliance.