With the rise of emotional AI, citizens are increasingly subjected to non-conscious emotional data harvesting, i.e., the opaque ways in which their emotions are analyzed, categorized, and responded to by algorithms. Nowadays, AI technologies not only feel but also feed our emotions, often operating silently in the background of our physical and virtual infrastructure without our knowledge, and quietly moving across cultural and national borders. These facts necessitate a more culturally sensitive way of quantitatively studying technology acceptance. Thus, this study provides the first attempt to extend the Technology Acceptance Model (TAM) (Davis, 1989) with insights from the Mindsponge model of information filtering (Vuong and Napier, 2015) and the Bayesian statistical approach. Analyzing a multinational dataset of 1,015 young adults (ages 18-27) within this new framework, this study aims to identify the behavioral factors determining attitudes toward non-conscious data harvesting by the government and the private sector. First, we find that in the data-fitting results, the mindsponge-based TAM models receive greater Bayesian model weights than the pure TAM models, in both the public- and private-sector cases. This suggests fertile ground for future studies that further explore the intersection of psychology, culture, and technology. Second, the analyses indicate that autonomy and self-efficacy appear to resolve the so-called privacy-personalization paradox, in which people state a strong preference for privacy yet willingly give up their personal data for personalized benefits. Concretely, we find that attitudes toward non-conscious data harvesting by either the governmental or the private sector correlate positively with familiarity with AI technologies, the perceived utility of AI technologies, emotional control when engaging in social media discussions, and the use of social media for public messaging. Finally, this study finds an indicator of a lack of trust in the government's engagement with non-conscious dataveillance, concurring with the literature. These results carry important implications for governance and ethical living in the age of emotional AI.
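To make the model-comparison step concrete, the sketch below illustrates one common way of computing such Bayesian model weights: fitting a pure TAM-style regression and a mindsponge-extended one, then comparing them via PSIS-LOO stacking weights. This is a minimal illustration on synthetic data with hypothetical variable names (familiarity, utility, emo_control), assuming PyMC and ArviZ; it is not necessarily the software or model specification used in the study itself.

```python
import arviz as az
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
n = 1015  # sample size reported in the study

# Hypothetical standardized survey scores (synthetic stand-ins).
familiarity = rng.normal(size=n)
utility = rng.normal(size=n)
emo_control = rng.normal(size=n)
attitude = (0.4 * familiarity + 0.3 * utility
            + 0.3 * emo_control + rng.normal(scale=0.8, size=n))

def fit(X):
    """Fit a simple Bayesian linear model of attitude on predictors X."""
    with pm.Model():
        beta = pm.Normal("beta", mu=0, sigma=1, shape=X.shape[1])
        sigma = pm.HalfNormal("sigma", sigma=1)
        pm.Normal("attitude", mu=pm.math.dot(X, beta), sigma=sigma,
                  observed=attitude)
        return pm.sample(1000, tune=1000, random_seed=42, progressbar=False,
                         idata_kwargs={"log_likelihood": True})

# The "pure TAM" model uses only classic TAM-style predictors; the
# mindsponge-extended model adds an information-filtering variable.
idata_tam = fit(np.column_stack([familiarity, utility]))
idata_ms = fit(np.column_stack([familiarity, utility, emo_control]))

# PSIS-LOO stacking weights: the better-predicting model receives
# more weight under Bayesian model averaging.
print(az.compare({"TAM": idata_tam, "TAM+mindsponge": idata_ms},
                 ic="loo", method="stacking")["weight"])
```

Stacking weights sum to one across the candidate models, so a larger weight assigned to the mindsponge-based model mirrors the pattern reported above.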