Designing and constructing bifunctional electrocatalysts is vital for water splitting. In particular, rational interface engineering can effectively modify the active sites and promote electron transfer, leading to improved splitting efficiency. Herein, free-standing and defect-rich heterogeneous MoS₂/NiS₂ nanosheets for overall water splitting are designed. The abundant heterogeneous interfaces in MoS₂/NiS₂ not only provide rich electroactive sites but also facilitate electron transfer, and these effects cooperate synergistically toward the electrocatalytic reactions. Consequently, the optimal MoS₂/NiS₂ nanosheets show enhanced performance as bifunctional electrocatalysts for overall water splitting. This study may open up a new route for rationally constructing heterogeneous interfaces to maximize electrochemical performance, which may help accelerate the development of nonprecious electrocatalysts for overall water splitting.
Convolutional Neural Networks (CNNs) perform image classification by activating dominant features that correlate with labels. When the training and testing data follow similar distributions, their dominant features are similar, leading to decent test performance. Performance degrades, however, when the test data come from a different distribution, which is the central challenge in cross-domain image classification. We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data. RSC iteratively challenges (discards) the dominant features activated on the training data and forces the network to activate the remaining features that correlate with labels. This process appears to activate feature representations applicable to out-of-domain data without prior knowledge of the new domain and without learning extra network parameters. We present theoretical properties and conditions under which RSC improves cross-domain generalization. Experiments endorse the simple, effective, and architecture-agnostic nature of our RSC method.
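To make the "challenge" step concrete, below is a minimal PyTorch-style sketch of feature-level self-challenging as the abstract describes it: score each feature by the gradient of the ground-truth logit, then mute the top fraction. The helper name `rsc_mask`, the pooled-feature shapes, and the `drop_pct` value are illustrative assumptions, not the authors' exact implementation (which also includes spatial- and batch-level variants).

```python
import torch

def rsc_mask(features, labels, classifier, drop_pct=0.33):
    """Sketch of Representation Self-Challenging (RSC).

    features:   (B, C) pooled representations (assumed shape)
    labels:     (B,) ground-truth class indices
    classifier: final linear layer producing (B, num_classes) logits
    Returns a (B, C) mask that zeroes the most label-correlated features.
    """
    feats = features.clone().detach().requires_grad_(True)
    logits = classifier(feats)
    # Gradient of each sample's ground-truth logit w.r.t. its features
    # approximates how much each feature contributes to the prediction.
    score = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(score, feats)
    # Challenge (zero out) the top drop_pct fraction of features per sample.
    k = max(1, int(drop_pct * feats.size(1)))
    thresh = grads.topk(k, dim=1).values[:, -1:]
    return (grads < thresh).float()
```

In a training step, one would multiply this mask into the features, recompute the logits, and backpropagate the cross-entropy loss through the masked representation, forcing the network to rely on the remaining features.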
Statistical natural language inference (NLI) models are susceptible to learning dataset bias: superficial cues that happen to associate with the label on a particular dataset but are not useful in general, e.g., negation words indicating contradiction. As exposed by several recent challenge datasets, these models perform poorly when such associations are absent, e.g., predicting that "I love dogs." contradicts "I don't love cats.". Our goal is to design learning algorithms that guard against known dataset bias. We formalize the concept of dataset bias under the framework of distribution shift and present a simple debiasing algorithm based on residual fitting, which we call DRiFt. We first learn a biased model that uses only features known to relate to dataset bias. Then, we train a debiased model that fits the residual of the biased model, focusing on examples that cannot be predicted well by biased features alone. We use DRiFt to train three high-performing NLI models on two benchmark datasets, SNLI and MNLI. Our debiased models achieve significant gains over baseline models on two challenge test sets, while maintaining reasonable performance on the original test sets.
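The residual-fitting idea can be illustrated with a short sketch: the debiased model is trained so that its logits, added to the frozen biased model's logits, explain the labels. The helper name `drift_loss` is hypothetical, and this snippet only shows the loss; how the biased model and its bias-only features are built follows the paper, not this code.

```python
import torch.nn.functional as F

def drift_loss(debiased_logits, biased_logits, labels):
    """Residual fitting in log space, in the spirit of DRiFt.

    The biased model is frozen (detached), so gradients flow only into
    the debiased model. Examples the biased model already predicts
    confidently contribute little gradient, concentrating learning on
    examples that biased features alone cannot explain.
    """
    combined = debiased_logits + biased_logits.detach()
    return F.cross_entropy(combined, labels)
```

At test time, only the debiased model's logits are used, so predictions no longer depend on the bias-only features.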