A major factor contributing to the success of modern representation learning is the ease of performing various vector operations. Recently, objects with geometric structures (e.g., distributions, complex or hyperbolic vectors, or regions such as cones, disks, or boxes) have been explored for their alternative inductive biases and additional representational capacities. In this work, we introduce Box Embeddings, a Python library that enables researchers to easily apply and extend probabilistic box embeddings. Fundamental geometric operations on boxes are implemented in a numerically stable way, as are modern approaches to training boxes which mitigate gradient sparsity. The library is fully open-source and compatible with both PyTorch and TensorFlow, which allows existing neural network layers to be replaced with or transformed into boxes effortlessly. In this work, we present the implementation details of the fundamental components of the library, and the concepts required to use box representations alongside existing neural network architectures.
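To make the core ideas concrete, the following is a minimal sketch of two fundamental box operations the abstract alludes to: box intersection (elementwise max of min corners, elementwise min of max corners) and a softened volume that keeps gradients nonzero even for disjoint boxes. This is an illustrative NumPy sketch, not the library's actual API; the function names, the softplus-based smoothing, and the temperature parameter are assumptions for exposition.

```python
import numpy as np

def soft_volume(z, Z, temp=1.0):
    # Soft volume of a box with min corner z and max corner Z.
    # softplus replaces the hard max(Z - z, 0); computed stably as
    # temp * log(1 + exp((Z - z) / temp)) via logaddexp.
    side = temp * np.logaddexp(0.0, (Z - z) / temp)
    return np.prod(side, axis=-1)

def intersection(z1, Z1, z2, Z2):
    # The intersection of two axis-aligned boxes is itself a box:
    # elementwise max of the min corners, elementwise min of the max corners.
    return np.maximum(z1, z2), np.minimum(Z1, Z2)

# A containment score P(B | A) can be modeled as Vol(A ∩ B) / Vol(A).
z1, Z1 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
z2, Z2 = np.array([1.0, 1.0]), np.array([3.0, 3.0])
zi, Zi = intersection(z1, Z1, z2, Z2)
p = soft_volume(zi, Zi) / soft_volume(z2, Z2)  # a value in (0, 1)
```

Because the softened side lengths are strictly positive, the volume ratio provides a usable training signal even when two boxes do not overlap, which is one way libraries of this kind mitigate gradient sparsity.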
Skin diseases are becoming increasingly prevalent all over the world due to a multitude of factors, including disparity in income groups, lack of access to primary health care, poor levels of hygiene, varied climate, and different cultural factors. The ratio of dermatologists to the number of people affected is very low, and hence there is a need for expedited and accurate diagnosis of skin diseases. Prurigo Nodularis can be a bothersome-to-enervating disease, and its treatment requires a multifaceted approach depending on the severity and underlying etiology of the disease. Often, once patients are diagnosed with Prurigo Nodularis, they are also advised to undergo a complete work-up to rule out any underlying systemic disease. Given the advantages of early detection of the disease in facilitating quick and suitable treatment, this paper proposes the use of deep learning for accurate and early detection of Prurigo Nodularis. Different architectures of convolutional neural networks were used on the dataset of diseased skin images, and the results were compared to ascertain the best-performing architecture.
Rithvika Iyer and Tejas Chheda: joint second authorship.
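The comparison protocol described above can be sketched as follows: several CNN architectures are evaluated on the same labeled image data and ranked by accuracy. This is an illustrative PyTorch sketch under stated assumptions, not the paper's actual models or data: the tiny `make_cnn` architectures and the random tensors are placeholders standing in for real CNN families trained on dermatological images.

```python
import torch
import torch.nn as nn

def make_cnn(channels):
    # A minimal convolutional classifier; varying `channels` stands in
    # for comparing genuinely different CNN architectures.
    return nn.Sequential(
        nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(channels, 2),  # binary: Prurigo Nodularis vs. other
    )

images = torch.randn(8, 3, 64, 64)   # placeholder batch of skin images
labels = torch.randint(0, 2, (8,))   # placeholder diagnoses

scores = {}
for name, ch in [("cnn-small", 8), ("cnn-large", 16)]:
    model = make_cnn(ch)
    with torch.no_grad():
        logits = model(images)
    # In a real experiment, accuracy would be measured on a held-out
    # validation split after training; here the models are untrained.
    scores[name] = (logits.argmax(1) == labels).float().mean().item()

best = max(scores, key=scores.get)  # best-performing architecture
```

In practice such comparisons typically fine-tune pretrained backbones rather than train from scratch, since dermatological datasets are usually small.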
Explanation methods have emerged as an important tool for highlighting the features responsible for the predictions of neural networks. There is mounting evidence that many explanation methods are rather unreliable and susceptible to malicious manipulations. In this paper, we particularly aim to understand the robustness of explanation methods in the context of the text modality. We provide initial insights and results towards devising a successful adversarial attack against text explanations. To our knowledge, this is the first attempt to evaluate the adversarial robustness of an explanation method in the text domain. Our experiments show that the explanation method can be largely disturbed for up to 86% of the tested samples with small changes to both the input sentence and its semantics.
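The kind of fragility described above can be illustrated with a toy example: a small, meaning-preserving edit to the input changes which token a gradient-times-input explanation ranks highest, while the model's prediction barely moves. Everything here is an assumption for exposition: a bag-of-words logistic model with hand-picked weights and a one-word synonym swap, not the paper's actual attack or models.

```python
import numpy as np

vocab = ["good", "great", "movie", "plot", "boring"]
w = np.array([1.2, 1.1, 0.1, 0.05, -1.5])   # assumed model weights

def predict(x):
    # Sigmoid score of a bag-of-words logistic classifier.
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def saliency(x):
    # Gradient-times-input explanation; for a linear model, s_i = w_i * x_i.
    return w * x

x = np.array([1.0, 0.0, 1.0, 1.0, 0.0])      # "good movie plot"
x_adv = np.array([0.0, 1.0, 1.0, 1.0, 0.0])  # "great movie plot" (synonym swap)

top = int(np.argmax(saliency(x)))        # "good" is the top-attributed token
top_adv = int(np.argmax(saliency(x_adv)))  # now "great" is top-attributed
# The prediction shifts only slightly, yet the explanation's top feature changes.
```

An actual attack would search over such candidate edits to maximize the change in the explanation (e.g., top-k overlap) subject to the prediction and semantics staying close to the original.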
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which a citation appears and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.