Voice-controlled systems have revolutionized user interactions, making technology more accessible and intuitive across various settings. In multi-user environments such as households, voice assistants like Amazon Alexa are favored because they enable seamless interaction with devices and services. However, the convenience these systems offer comes with challenges, especially concerning privacy and security. In environments where multiple users interact with the same voice assistant, sophisticated access control mechanisms are needed to prevent unauthorized access to sensitive information. This study assesses the effectiveness of voice access control mechanisms within these multi-user contexts, shedding light on the privacy risks inherent in shared voice-controlled systems. First, we show that current access control mechanisms are vulnerable when handling users' private data. Second, we develop an automated testing framework that explores these access control weaknesses and determines whether the exposed data is consequential, since not all information is equally sensitive or vital to users. Third, we identify two flaws in the access control mechanisms offered by the voice system and show that existing access controls are susceptible to unauthorized access. Finally, our study shows that while some operations on the system are protected, other unprotected operations still reveal users' private information. These findings underscore the need for enhanced privacy safeguards and improved access control systems in multi-user environments. We offer recommendations to mitigate the risks of unauthorized access and to secure users' private data on voice assistants.