Discriminatively trained neural classifiers can be trusted only when the input data come from the training distribution (in-distribution). Detecting out-of-distribution (OOD) samples is therefore important for avoiding classification errors. In the context of OOD detection for image classification, one recent approach trains a classifier called a "confident-classifier" by minimizing the standard cross-entropy loss on in-distribution samples while minimizing the KL divergence between the predictive distribution on OOD samples (drawn from low-density regions around the in-distribution) and the uniform distribution, i.e., maximizing the entropy of the outputs. Samples can then be flagged as OOD if the classifier assigns them low confidence or high entropy. In this paper, we analyze this setting both theoretically and experimentally. We conclude that the resulting confident-classifier still yields arbitrarily high confidence for OOD samples far away from the in-distribution. We instead suggest training a classifier with an explicit "reject" class for OOD samples.
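For concreteness, the training objective described above can be written as follows, where $p_\theta(y\mid x)$ is the classifier's predictive distribution, $\mathcal{U}$ is the uniform distribution over the $K$ classes, and $P_{\text{in}}$, $P_{\text{out}}$, and the trade-off weight $\beta$ are our notation rather than symbols from the abstract:

$$
\min_\theta \; \mathbb{E}_{(x,y)\sim P_{\text{in}}}\big[-\log p_\theta(y\mid x)\big] \;+\; \beta\, \mathbb{E}_{x\sim P_{\text{out}}}\Big[\mathrm{KL}\big(p_\theta(\cdot\mid x)\,\big\|\,\mathcal{U}\big)\Big].
$$

Since $\mathrm{KL}\big(p_\theta(\cdot\mid x)\,\|\,\mathcal{U}\big) = \log K - H\big(p_\theta(\cdot\mid x)\big)$, minimizing the second term is equivalent to maximizing the entropy of the outputs on OOD samples, as the parenthetical above states.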
Traits are primitive units of code reuse that serve as building blocks of classes. In this research, we enhance reuse by extending the capabilities of traits; in particular, we add modeling abstractions to them.

Traits have a variety of benefits, including facilitating reuse and separation of concerns. They have appeared in several programming languages, particularly derivatives of Smalltalk. However, there is still no support for traits that contain modeling abstractions, and no straightforward support for them in general-purpose programming languages. The latter is due to structural concerns that exist for traits at runtime, especially traits that contain modeling abstractions.

Model-driven technologies are making inroads into the development community, albeit slowly. Modeling abstractions such as state machines and associations provide new opportunities for reuse, and can be combined with inheritance for even greater reusability. However, the known issues with inheritance also apply when these new abstractions are inheritable units. This suggests that traits and models ought to be combinable synergistically.

We perform a comprehensive analysis of using modeling elements in traits. We implement such traits in Umple, a model-oriented programming language that permits embedding of programming concepts into models. A minimal sketch of the core idea appears after the list of contributions below.

The contributions of the thesis are: a) adding new elements, including state machines and associations, to traits, hence bringing more reusability, modularity, and applications to traits; b) developing an algorithm that allows reusing, extending, and composing state machines through traits; c) extending traits with required interfaces so that dependencies at the semantic level become part of their usage, rather than simple syntactic capture; d) adding template parameters with associations in traits, offering new applications for traits in which it is possible to define design patterns and to build a library of commonly used functionality; e) implementing all of the above concepts, including generating code in multiple general-purpose programming languages through automatic model transformation.
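As a minimal illustration of the core idea, the following is a hypothetical sketch in Umple syntax; the trait, class, state, and event names are invented for illustration and are not taken from the thesis. It shows a trait carrying a small state machine that two otherwise unrelated classes reuse via `isA`, without duplicating the state machine definition:

```
// Hypothetical sketch: a trait that bundles a state machine,
// reused by two unrelated classes via isA.
trait TSwitchable {
  status {
    Off { turnOn -> On; }
    On  { turnOff -> Off; }
  }
}

class Lamp {
  isA TSwitchable;  // Lamp acquires the status state machine
}

class Motor {
  isA TSwitchable;  // so does Motor, with no duplication
}
```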