Subjectivity in natural language refers to aspects of language used to express opinions, evaluations, and speculations. There are numerous natural language processing applications for which subjectivity analysis is relevant, including information extraction and text categorization. The goal of this work is to learn subjective language from corpora. Clues of subjectivity are generated and tested, including low-frequency words, collocations, and adjectives and verbs identified using distributional similarity. The features are also examined working in concert. The features, generated from different data sets using different procedures, exhibit consistent performance in that they all do better and worse on the same data sets. In addition, this article shows that the density of subjectivity clues in the surrounding context strongly affects how likely it is that a word is subjective, and it provides the results of an annotation study assessing the subjectivity of sentences with high-density features. Finally, the clues are used to perform opinion piece recognition (a type of text categorization and genre detection) to demonstrate the utility of the knowledge acquired in this article.
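To make the clue-density idea concrete, the following is a minimal sketch that counts how many subjectivity clues fall within a fixed token window around a target word. The clue lexicon, the window size, and the function name are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of a clue-density measure: the fraction of tokens in a
# fixed window around a target word that belong to a subjectivity lexicon.
# The lexicon and window size below are illustrative, not from the article.
def clue_density(tokens, index, clues, window=5):
    """Return the fraction of context tokens around `index` that are clues."""
    start = max(0, index - window)
    end = min(len(tokens), index + window + 1)
    context = tokens[start:index] + tokens[index + 1:end]
    if not context:
        return 0.0
    hits = sum(1 for t in context if t.lower() in clues)
    return hits / len(context)


clues = {"love", "apparently", "horrible", "criticize"}
tokens = "I love this apparently horrible but oddly charming film".split()
print(clue_density(tokens, tokens.index("horrible"), clues))  # ~0.22
```

Under such a scheme, a word surrounded by many clue instances would receive a high density score, matching the article's finding that dense subjective context raises the likelihood that a word is itself subjective.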
Cross-level effects suggest that measurements could be taken at one level (e.g., neural) to assess team experience (or skill) on another level (e.g., cognitive-behavioral).
Some have argued that the most appropriate measure of team cognition is a holistic measure directed at the entire team. In particular, communication data are useful for measuring team cognition because of the holistic nature of the data and because of the connection between communication and declarative cognition. To circumvent the logistical difficulties of analyzing communication data, the present paper proposes several relatively automatic methods of analysis. Four data types are identified, with low-level physical data vs. content data as one dimension and sequential vs. static data as the other. Methods are proposed for three of these four data types; static physical data are not addressed. Latent Semantic Analysis is an automatic method used to assess content, either statically or sequentially. PRONET is useful for addressing either physical or content-based sequential data, and we propose CHUMS to address sequential physical data. The usefulness of each method for predicting team performance is assessed.
This paper presents a corpus study of evaluative and speculative language. Knowledge of such language would be useful in many applications, such as text categorization and summarization. Analyses of annotator agreement and of characteristics of subjective language are performed. This study yields knowledge needed to design effective machine learning systems for identifying subjective language.
Team process is thought to mediate team member inputs and team performance. Among the team behaviors identified as process variables, team communications have been widely studied. We view team communications as a team behavior and also as team information processing, or team cognition. Within the context of a Predator Uninhabited Air Vehicle (UAV) synthetic task, we have developed several methods of communications content assessment based on Latent Semantic Analysis (LSA). These methods include Communications Density (CD), the average task relevance of a team's communications; Lag Coherence (LC), which measures task-relevant topic shifting over UAV missions; and Automatic Tagging (AT), which categorizes team communications. Each method is described in detail. CD and LC are related to UAV team performance, and AT-human agreement is comparable to human-human agreement on content coding. The results are promising for the assessment of teams based on LSA applied to communication content.
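To illustrate the flavor of the Communications Density measure, the sketch below scores each utterance by its cosine similarity to a task-relevant reference text in a small LSA-style space (TF-IDF followed by truncated SVD) and averages the scores over a mission. The toy corpus, the reference text, and the use of scikit-learn are illustrative assumptions rather than the authors' actual pipeline.

```python
# Minimal sketch of a Communications Density style score: average task
# relevance of utterances, with relevance measured as cosine similarity to a
# task-relevant reference text in an LSA-style space. Data are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

task_reference = ["adjust altitude and airspeed to photograph the target waypoint"]
utterances = [
    "pilot climb to three thousand feet for the next waypoint",
    "camera is locked on the target taking photos now",
    "did anyone watch the game last night",
]

# Build a shared semantic space from the reference and the team's utterances.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(task_reference + utterances)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Task relevance of each utterance = cosine similarity to the reference;
# the density-style score is the mean relevance over the mission.
relevance = cosine_similarity(lsa[1:], lsa[:1]).ravel()
print(relevance.mean())
```

In this sketch, off-task chatter pulls the average down while task-relevant exchanges raise it, mirroring the intuition behind relating CD to team performance.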