Developers of artificial intelligence (AI) often cannot explain the inferences their neural networks make, at least not in ways that satisfy user needs. Explainable artificial intelligence (XAI) aims to develop techniques for providing such explanations. XAI researchers have adopted techniques that, to philosophers, seem representationalist–internalist, leading some philosophers to call for more externalist alternatives. But is it feasible to explain AI models through causally related external factors? I suggest comparing the idea of externalist XAI to so-called functionalist XAI. Two common arguments against functionalist XAI are the Epistemic Argument and the Trust Argument. I discuss whether externalists can meet these arguments, and I present a new, previously unnoticed challenge from random initialization.