Implementationalism maintains that conventional, silicon‐based artificial systems are not conscious because they fail to satisfy certain substantive constraints on computational implementation. In this article, we argue that several recently proposed substantive constraints are implausible, or at least not well‐supported, insofar as they conflate intuitions about computational implementation in general with intuitions about consciousness in particular. We argue instead that the mechanistic account of computation can explain several of the intuitions driving implementationalism and non‐computationalism in a manner that is consistent with artificial consciousness. Our argument provides indirect support for computationalism about consciousness and for the view that conventional artificial systems can be conscious.