In the past few years, the subject of AI rights (the thesis that AIs, robots, and other artefacts, hereafter simply 'AIs', ought to be included in the sphere of moral concern) has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress: the lack of a solution to the 'Hard Problem' of consciousness, the problem of explaining why certain brain states give rise to experience. To motivate this claim, I consider three ways of grounding AI rights: superintelligence, empathy, and a capacity for consciousness. I argue that appeals to superintelligence and empathy are problematic, and that consciousness should be our central focus, as it is in the case of animal rights. However, I also argue that AI rights is disanalogous to animal rights in an important respect: animal rights can proceed without a solution to the 'Hard Problem' of consciousness. Not so with AI rights, I argue, for there we cannot make the same kinds of assumptions that we make about animal consciousness, since we still do not understand why brain states give rise to conscious mental states in humans.