The authentic self is believed to be morally good. The current research proposes that the authentic self is also environmentally good. Across two studies, we tested the link between authenticity and pro-environmental attitudes and behavior. In Study 1 (N=2,646), dispositional authenticity predicted pro-environmental behavior (PEB). In Study 2 (N=474), participants in the authentic condition (recalling experiences of being authentic) were more willing to donate money to protect the environment than those in the inauthentic condition (recalling experiences of being inauthentic) or the neutral condition (recalling experiences of a typical day). Participants in the authentic condition also reported stronger intentions to engage in PEB than their peers in the other conditions. The present research provides initial evidence that people are more likely to endorse pro-environmental attitudes and to behave pro-environmentally when they feel authentic.
Continual Test-Time Adaptation (CTTA) aims to adapt a source model to continually changing unlabeled target domains without access to the source data. Existing methods mainly focus on model-based adaptation in a self-training manner, such as predicting pseudo labels for new domain data. Since pseudo labels are noisy and unreliable, these methods suffer from catastrophic forgetting and error accumulation when dealing with dynamic data distributions. Motivated by prompt learning in NLP, in this paper we propose to learn an image-level visual domain prompt for target domains while keeping the source model parameters frozen. During testing, the changing target data can be adapted to the source model by reformulating the input with the learned visual prompts. Specifically, we devise two types of prompts, i.e., domain-specific prompts and domain-agnostic prompts, to extract current-domain knowledge and to maintain domain-shared knowledge during continual adaptation. Furthermore, we design a homeostasis-based adaptation strategy that suppresses domain-sensitive parameters in the domain-agnostic prompts so that domain-shared knowledge is learned more effectively. This transition from a model-dependent paradigm to a model-free one enables us to bypass the catastrophic forgetting and error accumulation problems. Experiments show that our proposed method achieves significant performance gains over state-of-the-art methods on four widely used benchmarks: CIFAR-10C, CIFAR-100C, ImageNet-C, and VLCS.
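To make the input-reformulation idea concrete, the following is a minimal PyTorch-style sketch of an image-level visual prompt: a small learnable tensor added onto each incoming image while the source model's parameters stay frozen. The class name, patch size, stand-in model, and the entropy-minimization objective are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualDomainPrompt(nn.Module):
    """Hypothetical image-level visual prompt: a small learnable patch
    added onto each input image while the source model stays frozen.
    Names and sizes are illustrative, not the paper's released code."""

    def __init__(self, prompt_size=8, image_size=32, channels=3):
        super().__init__()
        # Zero-init so the prompt starts as a no-op on the source domain.
        self.prompt = nn.Parameter(torch.zeros(channels, prompt_size, prompt_size))
        self.pad = image_size - prompt_size

    def forward(self, x):
        # Zero-pad the patch to full image size, then add it to the input:
        # the data are reformulated instead of the model weights being touched.
        return x + F.pad(self.prompt, (0, self.pad, 0, self.pad))

# Frozen source model (a stand-in here); only the prompt is optimized.
source_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
for p in source_model.parameters():
    p.requires_grad_(False)

prompt = VisualDomainPrompt()
optimizer = torch.optim.Adam(prompt.parameters(), lr=1e-3)

x = torch.rand(4, 3, 32, 32)  # unlabeled target-domain batch
logits = source_model(prompt(x))
probs = logits.softmax(dim=1)
# Entropy minimization as one possible unsupervised test-time objective.
loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
loss.backward()
optimizer.step()
```

Because only the prompt parameters are updated, the source model itself never drifts, which is the mechanism by which a model-free paradigm of this kind can sidestep catastrophic forgetting and error accumulation.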