Generative AI is increasingly presented as a potential substitute for humans in many areas, including research. Large language models (LLMs) are often said to be able to replace human subjects, whether as agents in simulation models, economic actors, subjects in experimental psychology, survey respondents, or potential consumers. Yet there is no scientific consensus on how closely these in-silico clones represent their human counterparts. Another body of research points to the models’ inaccuracies, attributing them to bias: according to this skeptical view, LLMs adopt the views of certain social groups, or of the groups that predominate in certain countries. Through a targeted experiment on survey questionnaires, we demonstrate that these critics are right to be wary of generative AI, but probably not for the right reasons. Our results i) confirm that, to date, the models cannot replace humans for opinion or attitudinal research; and ii) show that they also display a strong bias. Yet we also show that this bias is iii) both specific (it has a very low variance) and not easily related to any given social group. We devise a simple strategy to test for ‘social biases’ in models, from which we conclude that the social-bias perspective is not adequate. We call machine bias the large, dominant part of the prediction error, attributing it to the nature of current large language models rather than to their training data alone.