We describe an ACT-R model for sentence memory that extracts both a parsed surface representation and a propositional representation. In addition, when possible, pointers are added for each sentence to a long-term memory referent that reflects past experience with the situation described in the sentence. This system accounts for basic results in sentence memory without assuming different retention functions for surface, propositional, or situational information. Retention is better for gist than for surface information because of the greater complexity of the surface representation and because of the greater practice of the sentence's referent. The model's only inference during sentence comprehension is to insert a pointer to an existing referent. Nonetheless, by this means it is capable of modeling many effects attributed to inferential processing. The ACT-R architecture also provides a mechanism for mixing the various memory strategies that participants bring to bear in these experiments.
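To make the layered representation concrete, the minimal sketch below shows the three kinds of information the model is described as storing per sentence and the single pointer-insertion inference. The class names, fields, and the practice counter are our own illustration under those assumptions, not the model's actual ACT-R chunk types.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Referent:
    """Long-term memory chunk for a familiar situation; its strength grows
    with past practice, which is one reason gist outlives surface form."""
    label: str
    practice: int = 0

@dataclass
class SentenceMemory:
    """The three pieces of information stored for one sentence."""
    surface: List[str]                   # parsed surface form, word by word
    proposition: tuple                   # e.g. ("clucked", "hens", "noisily")
    referent: Optional[Referent] = None  # pointer added only if a referent exists

def encode(words: List[str], proposition: tuple,
           known_referents: dict) -> SentenceMemory:
    """Store surface form and gist; add a referent pointer when one is
    available. This pointer insertion is the only comprehension-time inference."""
    ref = known_referents.get(proposition[0])  # hypothetical lookup key
    if ref is not None:
        ref.practice += 1                      # extra practice strengthens gist
    return SentenceMemory(words, proposition, ref)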
We present interpretation-based processing, a theory of sentence processing that builds a syntactic and a semantic representation for a sentence and assigns an interpretation to the sentence as soon as possible. That interpretation can further participate in comprehension and in lexical processing and is vital for relating the sentence to the prior discourse. Our theory offers a unified account of the processing of literal sentences, metaphoric sentences, and sentences containing semantic illusions. It also explains how text can prime lexical access. We show that word literality is a matter of degree and that the speed and quality of comprehension depend both on how similar words are to their antecedents in the preceding text and on how salient the sentence is with respect to the preceding text. Interpretation-based processing also reconciles superficially contradictory findings about the difference in processing times for metaphors and literals.

Ambiguity is one feature of human language that often frustrates attempts to automate its understanding by computers: not only can words have multiple meanings, but sometimes the meaning of a word is not taken at face value. Everyday language is often nonliteral; figurative devices such as irony, indirect request, metaphor, metonymy, and hyperbole are common and are understood easily. Metaphor is a particularly pervasive device: it is a rich source of new words (recent examples include web and couch potato) and, moreover, according to researchers such ...
Many studies have suggested that people understand metaphors as easily as they understand literal sentences. For instance, in a 1978 experiment, Ortony, Schallert, Reynolds, and Antos showed participants a passage either about a women's club meeting or about chickens on a farm and followed each passage with a target sentence such as The hens clucked noisily. When the sentence came after the first passage, it had a metaphoric interpretation; after the second passage, it was literal. Participants in Ortony et al.'s experiment read this sentence just as fast in both conditions. This result was interpreted as evidence that when context is rich and supportive, people process metaphoric sentences as fast as literal sentences, contradicting Searle's (1979) theory of metaphor comprehension. Searle's theory asserts that to understand a metaphoric utterance, people first need to compute its literal interpretation, and only if it does not make sense do they proceed to search for a metaphoric interpretation. Further studies (Glucksberg, Gildea, & Bookin, 1982; Goldvarg & Glucksberg, 1998; Inhoff, Lima, & Carroll, 1984; Keysar, 1989; Shinjo & Myers, 1987) supported the assumption that similar processes are involved in the comprehension of both literal and metaphoric sentences and that metaphoric interpretation is not optional (i.e., people access it even when they do not need it for performing the task).

Janus and Bever (1985) replicated Ortony et al.'s (1978) findings for metaphors embedded within a rich context; however, besides measuring sentence-reading times, they looked at the reading times (RTs) for the metaphoric nouns. Even though, like Ortony et al., they found no significant difference between RTs for metaphoric and literal sentences, the RTs for metaphoric nouns were longer than those for literal nouns. This result cast some doubt on the idea that the same mechanism is involved in the comprehension of metaphoric and literal language.

A later study by Gibbs (1990) also provided some support for Searle's (1979) model of metaphor comprehension. Gibbs showed participants short passages followed by either a metaphoric or a literal sentence. For instance, one such passage was about a boxing match and ended either with a metaphoric sentence such as The creampuff did not show up for the match or with its literal equivalent, The boxer did not show up for the match. Gibbs did find a reading-time disadvantage for metaphoric sentences with respect to literals, but attributed this result to the type of metaphors used: anaphoric in his study versus predicative in the studies that had provided evidence for similar literal- and metaphor-comprehension processes (Glucksberg et al., 1982; Inhoff et al., 1984; Keysar, 1989; Shinjo & Myers, 1987). Predicative metaphors are of the form A is B (e.g., marriages are iceboxes, time is money). In contrast, in sentences that contain anaphoric metaphors, the metaphoric term (vehicle) is used to refer anaphorically to some previously introduced concept (e.g., in the sentence The creampuff did...
We present an experiment that compares how people perform search tasks in a degree-of-interest (DOI) browser and in a Windows-Explorer-like browser. Our results show that, although users attend to more information in the DOI browser, they do not complete the task faster than in the Explorer-like browser. However, in both types of browser, users complete high-information-scent search tasks faster than low-information-scent tasks. We present an ACT-R computational model of the search task in the DOI browser. The model describes how a visual search strategy may combine with the semantic aspects of processing captured by information scent. We also describe a way of automatically estimating information scent in an ontological hierarchy by querying a large corpus (in our case, Google's corpus).
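The abstract does not spell out the estimation procedure. One common way to approximate the word association underlying information scent from corpus hit counts is a pointwise-mutual-information score; the sketch below is only an assumed illustration, with a stand-in hit_count function and made-up counts in place of real queries against Google's corpus.

import math

def hit_count(query: str) -> int:
    """Stand-in for the number of corpus documents matching the query.
    The study queried Google's corpus; any large corpus index would do."""
    fake_counts = {"jaguar": 5_000_000, "animal": 90_000_000,
                   "jaguar animal": 800_000}
    return fake_counts.get(query, 1)

def scent(goal_word: str, node_label: str,
          corpus_size: int = 1_000_000_000) -> float:
    """Pointwise-mutual-information style estimate of information scent:
    how strongly a hierarchy node's label is associated with the search goal."""
    p_goal = hit_count(goal_word) / corpus_size
    p_node = hit_count(node_label) / corpus_size
    p_joint = hit_count(f"{goal_word} {node_label}") / corpus_size
    return math.log(p_joint / (p_goal * p_node))

# Higher values suggest a stronger scent trail from the node toward the goal.
print(scent("jaguar", "animal"))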
Tagging systems such as del.icio.us and Diigo have become important ways for users to organize information gathered from the Web. However, despite their popularity among early adopters, tagging still incurs a relatively high interaction cost for general users. We introduce a new tagging system called SparTag.us, which uses an intuitive Click2Tag technique to provide in situ, low-cost tagging of web content. SparTag.us also lets users highlight text snippets and automatically collects tagged or highlighted paragraphs into a system-created notebook, which can later be browsed and searched. We report several user studies aimed at evaluating Click2Tag and SparTag.us.
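As a rough illustration of the interaction the abstract describes, the sketch below models click-to-tag, highlighting, and notebook collection as plain data operations. All class and method names here are hypothetical stand-ins, not SparTag.us internals.

from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class AnnotatedParagraph:
    """One web-page paragraph together with the user's in situ annotations."""
    url: str
    text: str
    tags: Set[str] = field(default_factory=set)
    highlights: List[str] = field(default_factory=list)

class Notebook:
    """System-created notebook that collects tagged or highlighted paragraphs."""
    def __init__(self):
        self.entries: List[AnnotatedParagraph] = []

    def click2tag(self, para: AnnotatedParagraph, word: str) -> None:
        """Clicking a word in the paragraph turns it into a tag (no retyping),
        and the paragraph is collected into the notebook."""
        para.tags.add(word.lower())
        self._collect(para)

    def highlight(self, para: AnnotatedParagraph, snippet: str) -> None:
        para.highlights.append(snippet)
        self._collect(para)

    def _collect(self, para: AnnotatedParagraph) -> None:
        if para not in self.entries:
            self.entries.append(para)

    def search(self, tag: str) -> List[AnnotatedParagraph]:
        return [p for p in self.entries if tag.lower() in p.tags]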