Rights and duties are essential features of legal documents. Machine learning algorithms have been increasingly applied to extract information from such texts; currently, their main focus is on named entity recognition, sentiment analysis, and the classification of court cases to predict court outcomes. In this paper it is argued that until the essential features of such texts are captured, their analysis can remain bottlenecked by the very technology being used to assess them. The use of legal theory to identify the most pertinent dimensions of such texts is therefore proposed, specifically the interest theory of rights and the first-order Hohfeldian taxonomy of legal relations. These principal legal dimensions allow for a stratified representation of knowledge, making them ideal for the abstractions needed for machine learning. This study considers how such dimensions may be identified by implementing a novel heuristic grounded in philosophy and coupled with language models. Hohfeldian relations of ‘rights-duties’ vs. ‘privileges-no-rights’ are found to be identifiable, and each type of relation is classified with an accuracy of 92.5% using Sentence Bidirectional Encoder Representations from Transformers (SBERT). Testing is carried out on religious discrimination policy texts in the United Kingdom.
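The pipeline described above (sentence embeddings fed to a binary classifier separating ‘rights-duties’ from ‘privileges-no-rights’ clauses) can be sketched as follows. This is a minimal illustration, not the paper's implementation: random vectors stand in for the SBERT embeddings so the sketch runs offline, and the classifier, dimensionality, and sample counts are all assumptions.

```python
# Hedged sketch: binary classification over sentence embeddings.
# Random vectors stand in for SBERT embeddings (in practice one would call
# sentence_transformers.SentenceTransformer(...).encode on the clause texts).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, dim = 200, 384                    # 384 dims matches common SBERT models
labels = rng.integers(0, 2, size=n)  # 0 = rights-duties, 1 = privileges-no-rights

# Shift the two classes apart so the toy data is separable.
offsets = np.where(labels[:, None] == 0, 0.5, -0.5)
embeddings = rng.normal(size=(n, dim)) + offsets

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, embeddings, labels, cv=5)
print(round(scores.mean(), 3))
```

On real SBERT embeddings the reported 92.5% accuracy would depend on the policy-text corpus and labeling; the toy data here is deliberately easy.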
We aim to measure the postintervention effects of anodal tDCS (A-tDCS) on brain potentials commonly used in BCI applications, namely Event-Related Desynchronization (ERD), Event-Related Synchronization (ERS), and P300. Ten subjects were given sham and 1.5 mA A-tDCS for 15 minutes in two separate sessions, in a double-blind, randomized order. Postintervention EEG was recorded while subjects performed a spelling task based on the “oddball paradigm”, from which P300 power was measured. Additionally, ERD and ERS were measured while subjects performed mental motor imagery tasks. ANOVA results showed that absolute P300 power exhibited a statistically significant difference between sham and A-tDCS when measured over channel Pz (p = 0.0002). However, the difference in ERD and ERS power was statistically insignificant, contrary to the mainstay of the literature on the subject. The outcomes confirm a possible postintervention effect of tDCS on the P300 response. Heightening the P300 response using A-tDCS may help improve the accuracy of P300 spellers for neurologically impaired subjects, and may aid the development of neurorehabilitation methods targeting the parietal lobe.
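The statistical test described above, a one-way ANOVA comparing absolute P300 power at channel Pz between sham and A-tDCS conditions, can be sketched as below. The power values are synthetic placeholders, not the study's data; group means, variances, and the per-condition sample size of ten are assumptions for illustration.

```python
# Hedged sketch: one-way ANOVA on P300 power at Pz, sham vs. A-tDCS.
# The values below are synthetic; real input would be per-subject P300
# band power extracted from the postintervention EEG recordings.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
sham_p300 = rng.normal(loc=2.0, scale=0.4, size=10)   # 10 subjects, sham
atdcs_p300 = rng.normal(loc=3.1, scale=0.4, size=10)  # 10 subjects, A-tDCS

f_stat, p_value = f_oneway(sham_p300, atdcs_p300)
print(f"F = {f_stat:.2f}, p = {p_value:.6f}")
```

With only two groups, a one-way ANOVA is equivalent to an independent-samples t-test (F equals t squared), so either test would support the reported comparison.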
In this paper we investigate the use of EEG to detect the affective state of humor. The EEG of five subjects was recorded while they recalled humorous videos. Extracted frequency features were compared to a control state in which subjects were asked to remain in a neutral mental state. An ANOVA test performed on the two groups (neutral and humor recall) found a statistically significant difference in the 28-32 Hz frequency range for a number of channels, including T7 and P7, which presented the greatest statistically significant results with p values of 0.009 and 0.0, respectively. Furthermore, we demonstrate that these mental states can be classified using Principal Component Analysis followed by a 3-feature Linear Discriminant Analysis, resulting in a leave-one-out classification accuracy of 95%.
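The classification pipeline described above (PCA for dimensionality reduction, the first three components fed to LDA, evaluated with leave-one-out cross-validation) can be sketched as follows. Synthetic band-power features replace the real EEG data, and the trial and channel counts are illustrative assumptions.

```python
# Hedged sketch: PCA -> 3-component LDA with leave-one-out evaluation.
# Synthetic features stand in for 28-32 Hz band power per channel.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_trials, n_channels = 40, 14          # assumed trial/channel counts
y = np.repeat([0, 1], n_trials // 2)   # 0 = neutral, 1 = humor recall
X = rng.normal(size=(n_trials, n_channels)) + y[:, None] * 1.5

pipe = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
print(round(acc, 3))
```

Leave-one-out is a sensible choice at this sample size: with few trials per subject, every observation is needed for training, and the estimator's variance is tolerable.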
Programming artificial intelligence (AI) to make fairness assessments of texts through top-down rules, bottom-up training, or hybrid approaches has presented the challenge of defining cross-cultural fairness. In this paper a simple method is presented which uses vectors to discover whether a verb is unfair (e.g., slur, insult) or fair (e.g., thank, appreciate). It uses already existing relational social ontologies inherent in Word Embeddings and thus requires no training. The plausibility of the approach rests on two premises: first, that individuals consider fair those acts that they would be willing to accept if done to themselves; second, that such a construal is ontologically reflected in Word Embeddings, by virtue of their ability to reflect the dimensions of such a perception. These dimensions are responsibility vs. irresponsibility, gain vs. loss, reward vs. sanction, and joy vs. pain, combined as a single vector (FairVec). The paper finds it possible to quantify and qualify a verb as fair or unfair by calculating the cosine similarity of the said verb’s embedding vector against FairVec, which represents the above dimensions. We apply this to GloVe and Word2Vec embeddings. Testing on a list of verbs produces an F1 score of 95.7, which is improved to 97.0. Lastly, a demonstration of the method’s applicability to sentence measurement is carried out.
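The scoring step described above can be sketched as follows: FairVec is composed from the four positive-minus-negative dimension pairs, and a verb is scored by the cosine similarity of its embedding against that vector. Tiny hand-made 4-d vectors stand in for GloVe/Word2Vec embeddings so the sketch is self-contained; the exact composition of FairVec here is an assumption based on the dimensions named in the abstract.

```python
# Hedged sketch of the FairVec idea: compose a fairness direction from the
# four dimension pairs, then score verbs by cosine similarity against it.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-d embeddings, one axis per fairness dimension. Real use would load
# pretrained GloVe or Word2Vec vectors; all values here are illustrative.
emb = {
    "responsibility":   np.array([1.0, 0.0, 0.0, 0.0]),
    "irresponsibility": np.array([-1.0, 0.0, 0.0, 0.0]),
    "gain":             np.array([0.0, 1.0, 0.0, 0.0]),
    "loss":             np.array([0.0, -1.0, 0.0, 0.0]),
    "reward":           np.array([0.0, 0.0, 1.0, 0.0]),
    "sanction":         np.array([0.0, 0.0, -1.0, 0.0]),
    "joy":              np.array([0.0, 0.0, 0.0, 1.0]),
    "pain":             np.array([0.0, 0.0, 0.0, -1.0]),
    "thank":            np.array([0.5, 0.4, 0.6, 0.7]),
    "insult":           np.array([-0.5, -0.3, -0.6, -0.4]),
}

# FairVec: sum of the positive-minus-negative difference for each pair.
pairs = [("responsibility", "irresponsibility"), ("gain", "loss"),
         ("reward", "sanction"), ("joy", "pain")]
fair_vec = sum(emb[p] - emb[n] for p, n in pairs)

fairness = {v: cosine(emb[v], fair_vec) for v in ("thank", "insult")}
print(fairness)  # 'thank' scores positive, 'insult' negative
```

Because no classifier is fit, the method indeed needs no training: all the work is done by the geometry already present in the pretrained embedding space.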