Smart assistants have rapidly proliferated across smartphones, vehicles, and smart home devices. Establishing comfortable personal spaces in smart cities requires that these assistants be transparent in design and implementation, a fundamental trait for their validation and accountability. In this article, we take the case of Google Assistant (GA), a state-of-the-art smart assistant, and perform a diagnostic analysis of it from the perspectives of transparency and accountability. We compare our findings for GA with corresponding analyses of four other leading smart assistants. Through two online user studies (N = 100 and N = 210) conducted with students from four universities in three countries (China, Italy, and Pakistan), we examine whether GA's risk communication is transparent to its potential users and how it affects them. We found that GA requires unusual permissions, uses sensitive Application Programming Interfaces (APIs), and does not communicate its privacy requirements transparently to smartphone users. This lack of transparency makes the risk assessment and accountability of GA difficult, threatening the establishment of private and secure personal spaces in a smart city. Following the principle of separation of concerns, we suggest that autonomous bodies develop standards for the design and development of smart city products and services.