The expanding integration of artificial intelligence (AI) into various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles in understanding our own minds, and now we must also find ways to make sense of the minds of AI. The question of whether AI is capable of independent thinking deserves special attention. When dealing with such an unfamiliar concept, people may rely on familiar human attributes, such as the desire for survival, to make assessments. Employing information-processing-based Bayesian Mindsponge Framework (BMF) analytics on a dataset of 266 United States residents, we found that the more people believe an AI agent seeks continued functioning, the more they believe that the AI agent is capable of having a mind of its own. We also found that this association becomes stronger among people with more personal experience interacting with AI. These results suggest a directional pattern of value reinforcement in perceptions of AI. As the information processing of AI becomes even more sophisticated in the future, it will become much harder to draw clear boundaries around what it means to have an autonomous mind.
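To make the moderation effect described above concrete, the sketch below fits a hypothetical Bayesian regression in which belief in an AI agent's desire for continued functioning predicts belief in its mind, with familiarity with AI as a moderator (the interaction term). All variable names, priors, and the simulated data are illustrative assumptions; this is not the study's actual BMF model specification or dataset.

```python
import numpy as np
import pymc as pm
import arviz as az

# Simulated stand-in data; the real study used survey responses from 266 US residents.
rng = np.random.default_rng(42)
n = 266
survival_belief = rng.normal(size=n)  # perceived AI desire for continued functioning (assumed scale)
familiarity = rng.normal(size=n)      # personal experience interacting with AI (assumed scale)
mind_belief = rng.normal(size=n)      # belief that the AI agent has a mind of its own (assumed scale)

with pm.Model() as model:
    # Weakly informative priors (an assumption, not the paper's prior choices)
    alpha = pm.Normal("alpha", 0, 1)
    b_surv = pm.Normal("b_surv", 0, 1)
    b_fam = pm.Normal("b_fam", 0, 1)
    b_int = pm.Normal("b_int", 0, 1)  # moderation: survival belief x familiarity
    sigma = pm.HalfNormal("sigma", 1)

    mu = (alpha
          + b_surv * survival_belief
          + b_fam * familiarity
          + b_int * survival_belief * familiarity)
    pm.Normal("y", mu=mu, sigma=sigma, observed=mind_belief)

    trace = pm.sample(2000, tune=1000, chains=4, random_seed=42)

# A positive posterior for b_int would correspond to the reported pattern:
# the survival-belief/mind-belief association strengthens with familiarity.
print(az.summary(trace, var_names=["b_surv", "b_fam", "b_int"]))
```

In this formulation, the finding that the association strengthens with familiarity corresponds to a credibly positive interaction coefficient (b_int), inspected here via its posterior summary.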