Attention-based Transformer models have achieved state-of-the-art results in natural language processing (NLP). However, recent work shows that the underlying attention mechanism can be exploited by adversaries to craft malicious inputs designed to induce spurious outputs, thereby harming model performance and trustworthiness. Unlike in the vision domain, the literature examining neural networks under adversarial conditions in NLP is limited, and most of it focuses on the English language. In this paper, we first analyze the adversarial robustness of Bidirectional Encoder Representations from Transformers (BERT) models on German datasets. Second, we introduce two novel NLP attacks: a character-level and a word-level attack, both of which use attention scores to determine where to inject character-level and word-level noise, respectively. Finally, we present two defense strategies against these attacks. The first, an implicit character-level defense, is a variant of adversarial training that trains a new classifier capable of abstaining from (rejecting) certain, ideally adversarial, inputs. The second, an explicit character-level defense, learns a latent representation of the complete training-data vocabulary and then maps all tokens of an input example into the same latent space, enabling the replacement of all out-of-vocabulary tokens with the most similar in-vocabulary tokens based on cosine similarity.
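To make the attention-guided attack idea concrete, the following is a minimal sketch (not the paper's exact implementation) of using BERT's attention scores to select the most-attended tokens and inject character-level noise there. The model name "bert-base-german-cased", the layer/head averaging, and the adjacent-character-swap noise are all assumptions for illustration.

```python
# Hedged sketch: attention-guided character-level perturbation.
# Assumptions: bert-base-german-cased, swap-based noise, averaging attention over layers and heads.
import random
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModel.from_pretrained("bert-base-german-cased", output_attentions=True)
model.eval()

def attention_scores(sentence):
    """Return subword tokens and a per-token importance score (attention received)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # out.attentions: one (batch, heads, seq, seq) tensor per layer.
    att = torch.stack(out.attentions).mean(dim=(0, 2))   # average over layers and heads
    scores = att[0].sum(dim=0)                            # total attention each token receives
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return tokens, scores

def char_level_attack(sentence, n_perturb=1):
    """Swap two adjacent characters inside the most-attended non-special tokens."""
    tokens, scores = attention_scores(sentence)
    words = sentence.split()
    perturbed = 0
    for idx in scores.argsort(descending=True).tolist():
        tok = tokens[idx].replace("##", "")
        if tok in ("[CLS]", "[SEP]") or len(tok) < 3:
            continue
        for w_i, w in enumerate(words):
            if tok in w:
                pos = random.randrange(len(w) - 1)
                words[w_i] = w[:pos] + w[pos + 1] + w[pos] + w[pos + 2:]
                perturbed += 1
                break
        if perturbed >= n_perturb:
            break
    return " ".join(words)

print(char_level_attack("Der Film war wirklich großartig und sehenswert."))
```

The word-level variant would follow the same scheme but replace or insert whole words at the highest-attention positions; the explicit defense would analogously embed every input token and substitute out-of-vocabulary tokens with their nearest in-vocabulary neighbors under cosine similarity.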
Sentiment analysis typically refers to using natural language processing, text analysis, and computational linguistics to extract affect- and emotion-based information from text data. Our work explores how deep neural networks can be used effectively in transfer learning and joint dual input learning settings to classify sentiments and detect hate speech in Hindi and Bengali data. We start by training Word2Vec word embeddings for the Hindi HASOC dataset and the Bengali hate speech dataset [1], then train LSTM classifiers on these embeddings, and subsequently employ parameter-sharing-based transfer learning for the Bengali sentiment classifier by reusing and fine-tuning the trained weights of the Hindi classifier; both classifiers serve as baselines in our study. Finally, we use a BiLSTM with self-attention in a joint dual input learning setting, where we train a single neural network on the Hindi and Bengali datasets simultaneously using their respective embeddings.
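The following is a minimal sketch of the joint dual input setup described above, assuming a shared BiLSTM-with-self-attention encoder fed by language-specific Word2Vec embedding tables for Hindi and Bengali. The class name, layer sizes, and vocabulary sizes are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: one shared BiLSTM + self-attention classifier, two language-specific
# embedding tables, trained jointly on Hindi and Bengali batches.
import torch
import torch.nn as nn

class JointBiLSTMAttention(nn.Module):
    def __init__(self, hindi_vectors, bengali_vectors, hidden_dim=128, num_classes=2):
        super().__init__()
        # Language-specific embeddings initialized from pretrained Word2Vec vectors.
        self.embed = nn.ModuleDict({
            "hi": nn.Embedding.from_pretrained(hindi_vectors, freeze=False),
            "bn": nn.Embedding.from_pretrained(bengali_vectors, freeze=False),
        })
        emb_dim = hindi_vectors.size(1)
        # Shared encoder: its parameters are updated by both languages' batches.
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)            # self-attention scorer
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids, lang):
        x = self.embed[lang](token_ids)                      # (batch, seq, emb_dim)
        h, _ = self.bilstm(x)                                 # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)          # (batch, seq, 1)
        context = (weights * h).sum(dim=1)                    # attention-weighted pooling
        return self.classifier(context)

# Usage: alternate Hindi and Bengali batches so both tasks update the shared encoder.
hindi_emb = torch.randn(5000, 300)     # placeholders for trained Word2Vec matrices
bengali_emb = torch.randn(6000, 300)
model = JointBiLSTMAttention(hindi_emb, bengali_emb)
logits_hi = model(torch.randint(0, 5000, (8, 40)), lang="hi")
logits_bn = model(torch.randint(0, 6000, (8, 40)), lang="bn")
```

In this setup the parameter sharing happens in the encoder and attention layers, while each language keeps its own embedding table; the transfer-learning baselines instead copy and fine-tune the Hindi classifier's weights for the Bengali task.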
The purpose of this study is to ascertain pre-marital and post-marital expectations, the differences between them, and the impact of post-marital expectations on current marital satisfaction. The study employs a mixed design with a correlational technique and includes 164 married young adults (n=75 males and n=89 females) in Pakistan, all of whom have been married for at least six months and are between the ages of 19 and 40. A convenience sampling technique was employed. The Couples Satisfaction Index (CSI-16) and the Marital Scales (Pre and Post Forms) are used as measures. The demographic information sheet was presented first, followed by the CSI-16 and then the Marital Scales, with the pre-marital form presented before the post-marital form. According to the findings, there is a significant difference between pre- and post-marriage expectations. In addition, post-marriage expectations were found to impact participants' current marital satisfaction. The majority of the participants were female, employed, parents, upper middle class, and part of a joint family. This study contributes to the existing literature on pre-marital and post-marital expectations and marital satisfaction, can be used in marital therapy, can be applied to the culture and context of Pakistan, and offers an explanation of certain marital expectations and their impact on marital satisfaction.