As ChatGPT emerges as a potential ally in healthcare decision-making, it is imperative to investigate how users leverage and perceive it. Repurposing technology in this way is innovative but carries risks, especially since an AI system's effectiveness depends on the data it is fed. In healthcare, where accuracy is critical, ChatGPT may provide sound advice grounded in current medical knowledge, yet that advice could become misinformation if erroneous information later enters its data sources. Our study assessed user perceptions of ChatGPT, particularly among those who used it for healthcare-related queries.

By examining factors such as the competence, reliability, transparency, trustworthiness, security, and persuasiveness of ChatGPT, the research aimed to understand how users rely on ChatGPT for health-related decision-making. A web-based survey was distributed to U.S. adults who used ChatGPT at least once a month; data were collected from February to March 2023. Bayesian linear regression was used to estimate how much ChatGPT aids informed decision-making, and the analysis was conducted separately on respondents who used ChatGPT for healthcare decisions and those who did not. Qualitative data from open-ended questions were analyzed using content analysis, with thematic coding applied to extract participants' opinions on their use of ChatGPT. The coding process was validated through inter-coder reliability assessment, achieving a Cohen's kappa coefficient of 0.75.

Six hundred and seven individuals responded to the survey. Respondents were distributed across 306 U.S. cities, 20 of them rural. Of all respondents, 44 used ChatGPT for health-related queries and decision-making. While all users valued the content quality, privacy, and trustworthiness of ChatGPT across contexts, those using it for healthcare information placed greater emphasis on safety, trust, and the depth of information.
Conversely, users engaging with ChatGPT for non-healthcare purposes prioritized usability, human-like interaction, and unbiased content. In conclusion, our findings suggest a clear demarcation in user expectations and requirements of AI systems based on the context of their use.
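The inter-coder reliability statistic reported in the methods (Cohen's kappa) can be computed directly from two coders' labels. A minimal pure-Python sketch, using hypothetical theme labels that merely stand in for the study's actual codebook:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is agreement expected by chance from the coders' marginals.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each coder's marginal label frequencies.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[c] * count_b[c]
              for c in set(labels_a) | set(labels_b)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes from two coders (not the study's data):
coder1 = ["trust", "safety", "trust", "privacy", "trust", "safety"]
coder2 = ["trust", "safety", "privacy", "privacy", "trust", "trust"]
print(round(cohens_kappa(coder1, coder2), 3))  # 0.478
```

A kappa of 0.75, as achieved in the study, is conventionally read as substantial agreement beyond chance.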