Increasing attention is paid to ethical issues and values when designing and deploying artificial intelligence (AI). However, we do not know how those values are embedded in artificial artefacts or how relevant they are to the populations exposed to and interacting with AI applications. Based on literature engaging with ethical principles and moral values in AI, we designed an original survey instrument, comprising 15 value components, to estimate the importance of these values to people in the general population. The article draws on representative surveys conducted in Estonia, Germany, and Sweden (n = 4501), three countries with varying experience in implementing AI. Factor analysis revealed four underlying dimensions of values embedded in the design and use of AI: (1) protection of personal interests to ensure social benefit, (2) general monitoring to ensure universal solidarity, (3) ensuring social diversity and social sustainability, and (4) efficiency. We found that these value types can be ordered along two dimensions: resources and change. The cross-country comparison revealed that some dimensions, such as evaluations of social diversity and sustainability, are valued more universally across individuals, countries, and domains. Based on our analysis, we argue for the need for, and propose a framework for developing, basic values in AI.