Voice Activated Personal Assistants (VAPAs) differ from other Information Systems (IS) in their personalized, intelligent, and human-like behavior. Given these unique characteristics, current technology adoption models are not comprehensive enough to explain the usage of such systems. While trust and privacy have been identified as relevant issues affecting the adoption of VAPAs, both have been treated in a simplistic fashion that does not do justice to their complex nature. Moreover, being "always on", VAPAs are intrusive by nature, an aspect that current research has overlooked. Drawing on current findings in IS and artificial intelligence, we propose two distinct types of trust (cognitive and emotional) together with their antecedents (anthropomorphism, intelligence, VAPA privacy concern, household privacy concern, vendor & third-party privacy concern, and government privacy concern). The moderating effect of perceived intrusiveness on usage behavior is also examined. The proposed research model is empirically validated with data obtained from 466 VAPA users in India using a Structural Equation Modelling approach. We observe that perceived anthropomorphism does not affect emotional trust, whereas the effect of perceived intelligence on cognitive trust is significant. Social privacy concerns, namely VAPA and household privacy concerns, affect both forms of trust, whereas the effect of the institutional privacy category is weak, with only vendor & third-party privacy concern affecting emotional trust. Additionally, the findings establish the moderating role of perceived intrusiveness in dampening the usage of VAPAs, with a stronger effect for larger households.