Problem-solving software that is not necessarily infallible is central to AI. Such software whose correctness and incorrectness properties are deducible by agents raises an issue at the foundations of AI. The Comprehensibility Theorem, which appeared in a journal for specialists in formal mathematical logic, might provide a limitation concerning this issue and might be applicable to any agent, whether artificial or natural. The present article, aimed at researchers interested in the foundations of AI, addresses many questions related to that theorem, including how it differs from results of Gödel and Turing that have sometimes played key roles in Minds and Machines articles. This study also suggests that, if one is willing to assume a thesis due to Donald Knuth, the Comprehensibility Theorem is the first mathematical theorem implying the impossibility of any AI agent or natural agent (including a not necessarily infallible human agent) satisfying a rigorous and deductive interpretation of the self-comprehensibility challenge. The difficulty of self-comprehensibility, even under a presumably less rigorous interpretation, has been noted by others, including Socrates, who considered it among the most important of intellectual tasks. Self-comprehensibility in some form might be essential for a kind of self-reflection useful for self-improvement, which in turn might enable some agents to increase their success. We use the methods of applied mathematics rather than those of philosophy, although some of the topics considered could be of interest to philosophers.