Understanding open-domain text is one of the primary challenges in natural language processing (NLP). Machine comprehension benchmarks evaluate a system's ability to understand text based on the text content alone. In this work, we investigate machine comprehension on MCTest, a question answering (QA) benchmark. Prior work is mainly based on feature engineering approaches. We propose a neural network framework, named hierarchical attention-based convolutional neural network (HABCNN), to address this task without any manually designed features. Specifically, we explore HABCNN for this task via two routes: one through traditional joint modeling of document, question, and answer, the other through textual entailment. HABCNN employs an attention mechanism to detect key phrases, key sentences, and key snippets that are relevant to answering the question. Experiments show that HABCNN outperforms prior deep learning approaches by a large margin.
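The attention idea in the abstract can be illustrated generically: score each candidate sentence against the question and normalize the scores into weights. The sketch below is an assumption for illustration only; the actual HABCNN scores CNN-derived phrase, sentence, and snippet representations rather than raw vectors.

```python
import numpy as np

def attention_weights(question_vec, sentence_vecs):
    """Softmax attention over sentences, scored by cosine similarity
    to the question vector. Illustrative only: HABCNN applies this
    kind of weighting to learned convolutional representations."""
    q = question_vec / np.linalg.norm(question_vec)
    S = sentence_vecs / np.linalg.norm(sentence_vecs, axis=1, keepdims=True)
    scores = S @ q                       # cosine similarity per sentence
    e = np.exp(scores - scores.max())    # numerically stable softmax
    return e / e.sum()                   # weights sum to 1

rng = np.random.default_rng(0)
q = rng.normal(size=50)                  # toy question embedding
sents = rng.normal(size=(6, 50))         # toy sentence embeddings
w = attention_weights(q, sents)          # one weight per sentence
```

Sentences with weights near 1 would be treated as the question-relevant evidence; the rest are effectively ignored downstream.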
Embeddings are generic representations that are useful for many NLP tasks. In this paper, we introduce DENSIFIER, a method that learns an orthogonal transformation of the embedding space that focuses the information relevant for a task in an ultradense subspace of a dimensionality that is smaller by a factor of 100 than the original space. We show that ultradense embeddings generated by DENSIFIER reach state of the art on a lexicon creation task in which words are annotated with three types of lexical information: sentiment, concreteness, and frequency. On the SemEval-2015 Task 10B sentiment analysis task we show that no information is lost when the ultradense subspace is used, but training is an order of magnitude more efficient due to the compactness of the ultradense space.
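The mechanics of the ultradense subspace can be sketched in a few lines: an orthogonal matrix Q rotates the embedding space, and the first k rotated coordinates form the ultradense representation. In the sketch below Q is a random orthogonal matrix, an assumption for illustration; DENSIFIER instead learns Q so that task-relevant information (e.g. sentiment) concentrates in those k dimensions.

```python
import numpy as np

d, k = 300, 3                    # original dim vs. ultradense dim (d/k = 100)
rng = np.random.default_rng(1)

# Random orthogonal matrix via QR decomposition (stand-in for the learned one).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

def densify(E, Q, k):
    """Rotate embeddings E (n x d) by Q and keep the first k coordinates."""
    return E @ Q[:k].T           # (n, d) @ (d, k) -> (n, k)

E = rng.normal(size=(5, d))      # five toy word embeddings
U = densify(E, Q, k)             # ultradense representations
```

Because Q is orthogonal, the full rotation E @ Q.T is lossless (it preserves norms and distances); keeping only k coordinates discards information unless training has pushed the task-relevant signal into exactly those dimensions, which is the point of the method.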
Numerous theoretical predictions, such as precautionary saving or preventive behavior, have been derived for prudent decision makers. Further, prudence can be characterized as downside risk aversion and plays a key role in preference for skewness. We use a simple experimental method to test for prudence and skewness preference in the laboratory and compare the two. To this end, we introduce a novel graphical representation of compound lotteries that is easily accessible to subjects and test it for robustness, using a factorial design. Prudence is observed at the aggregate and individual level. We find that prudence does not boil down to skewness seeking. We further provide some theoretical explanations for this result. This paper was accepted by Peter Wakker, decision analysis. Keywords: decision making under risk, precautionary savings, prudence, downside risk, skewness seeking, laboratory experiment.
Cumulative prospect theory (CPT, Tversky and Kahneman 1992) is arguably the most prominent alternative to expected utility theory (EUT; Bernoulli 1738/1954; von Neumann and Morgenstern 1944). EUT is well studied in static and dynamic settings, ranging from game theory over investment problems to institutional economics. In contrast, work on CPT with probability weighting (the assumption that individuals overweight unlikely and extreme events) has mostly focused on the static case. This paper studies the dynamic investment and gambling behavior of CPT agents who are naive, i.e., unaware of being time-inconsistent. Our main result shows that naive CPT agents never stop gambling when the set of gambling or investment opportunities is not too restrictive. This never-stopping result applies to highly unfavorable gambles and investments with arbitrarily large expected losses per time. It follows from a static result on skewness preference under CPT that we label skewness preference in the small: a CPT agent always wants to take a simple, small, lottery-like risk, even if it has negative expectation. At any point in time the naive CPT investor reasons, "If I lose just a little bit more, I will stop. And if I gain, I will continue." This simple strategy results in a right-skewed gambling experience that is attractive due to skewness preference in the small. Once a loss has occurred, however, a new skewed gambling strategy comes to the naive CPT investor's mind, and, as long as such a strategy is feasible, he continues gambling. No "malicious" third party is responsible for manipulating the CPT investor into this behavior. Never stopping arises naturally in numerous prominent economic and financial decision situations, thereby yielding predictions that are arguably too extreme. In a casino gambling model in the spirit of Barberis (2012), the naive CPT gambler may gamble until the bitter end, i.e., until bankruptcy.
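Skewness preference in the small can be checked numerically with the standard Tversky-Kahneman (1992) parameterization (value-function curvature 0.88, loss aversion 2.25, weighting exponents 0.61 for gains and 0.69 for losses). The specific lottery below is an assumption chosen for illustration: win 100 with probability 0.01, otherwise lose 1.5. It is right-skewed and has negative expected value, yet its CPT value is positive, so a CPT agent accepts it.

```python
def v(x, alpha=0.88, lam=2.25):
    """TK92 value function: concave for gains, convex and steeper for losses."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def w(p, c):
    """TK92 probability weighting: overweights small probabilities."""
    return p**c / (p**c + (1 - p)**c)**(1 / c)

gain, p_gain, loss = 100.0, 0.01, -1.5        # illustrative two-outcome lottery

ev = p_gain * gain + (1 - p_gain) * loss      # expected value: -0.485
cpt = w(p_gain, 0.61) * v(gain) + w(1 - p_gain, 0.69) * v(loss)
# cpt > 0: the rare large gain is overweighted enough to outweigh the loss term
```

The naive investor's "stop after a small further loss, continue after gains" plan manufactures exactly this kind of right-skewed payoff distribution, which is why the argument delivers the never-stopping result.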