The much-investigated future of human-machine relationships, ranging from cooperative partners at work to social bots in care homes and personal assistants at home, rests on an often implicit technological requirement of robotics and artificial intelligence (AI): the ability of those machines to communicate with us in a form familiar and comfortable to us. Those machines will thus have to learn how to communicate, whether through intuitively understandable signs, through text, or through audio. This issue deals mostly with the last of these.

We may assume that machines 'speak', or rather, that they have to become speakers. However, this simple statement is laden with philosophical questions, both about the conditions of what it means to 'speak', and thus to become a speaker, and about whether machines will ever be able to meet these conditions. One may argue that the key challenge is to teach machines to use language according to its rules. But what distinguishes 'natural' speakers from artificial ones? There is more to human language use than mere linguistic rule-following: what do humans do, beyond following the rules of language and the requirements of communication in general, that machines currently cannot do, and perhaps never will? Could machines perform speech acts? If so, which ones can be performed in the absence of underlying conditions such as human intentionality? Should machines count as agents, or should we reconstruct their actions as "quasi-action"? What could be functional equivalents to speech, and where exactly do they