Legislative language exhibits some characteristics typical of languages of administration that are particularly prone to eliciting ambiguities. However, ambiguity is generally undesirable in legislative texts and can pose problems for the interpretation and application of codified law. In this paper, we demonstrate how methods of controlled natural languages can be applied to prevent ambiguities in legislative texts. We investigate what types of ambiguities are frequent in legislative language and therefore important to control, and we examine which ambiguities are already controlled by existing drafting guidelines. For those not covered by the guidelines, we propose additional control mechanisms. Wherever possible, the devised mechanisms reflect existing conventions and frequency distributions and exploit domain-specific means to make ambiguities explicit.
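The abstract does not spell out any concrete control mechanism, but a typical target of drafting guidelines is coordination-scope ambiguity of the form "A and B or C". As a purely hypothetical illustration (not one of the paper's proposed mechanisms), a drafting checker could flag such patterns and ask the drafter to restate the condition as an enumerated list; the function name and pattern below are invented for this sketch.

```python
import re

# Hypothetical checker: flags mixed "and"/"or" coordination in one sentence,
# a pattern that drafting guidelines often ask to be rewritten as an
# enumerated list so that the scope of each connective is explicit.
MIXED_COORDINATION = re.compile(r"\band\b.*\bor\b|\bor\b.*\band\b", re.IGNORECASE)

def flag_ambiguous_coordination(sentence: str) -> bool:
    """Return True if the sentence mixes 'and' and 'or' coordination."""
    return bool(MIXED_COORDINATION.search(sentence))

# "citizens of Canada and residents of Ontario or teachers" is ambiguous:
# does "or teachers" offer an alternative to the whole conjunction or only
# to "residents of Ontario"?
print(flag_ambiguous_coordination(
    "Trustees must be citizens of Canada and residents of Ontario or teachers."))
# -> True: the drafter would be asked to restate the condition as a list.
```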
Among the many software applications used in the social sciences for data entry and management, only a few apply natural language processing to identify semantic concepts such as issue categories or political statements by actors. Although such procedures usually allow efficient data collection, most struggle to achieve sufficient accuracy because of the high complexity and interdependence of the variables used in the social sciences. To address these shortcomings, we suggest a (semi-)automatic annotation approach that implements an innovative coding method, Core Sentence Analysis, using computational linguistic techniques (mainly entity recognition, concept identification, and dependency parsing). Although such computational linguistic tools have long been readily available, social scientists have made astonishingly little use of them. The principal aim of this article is to gather data on party-issue relationships from newspaper articles. In the first stage, we try to recognize relations between parties and issues with a fully automated system. This recognition is extensively tested against manually annotated data from the coverage in the tabloid newspaper
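The abstract names the building blocks (entity recognition, concept identification, dependency parsing) but not how they are combined. The sketch below is a hypothetical approximation in Python with spaCy, not the system described in the article; the lexicons PARTIES and ISSUES and the function extract_party_issue_pairs are invented for illustration.

```python
# Hypothetical sketch, not the article's system: it pairs party mentions with
# issue keywords in the same sentence, using spaCy for sentence splitting,
# entity recognition, and dependency parsing.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

# Invented lexicons; the article's actual party and issue vocabularies are
# not given in the abstract.
PARTIES = {"SPD", "CDU", "FDP"}
ISSUES = {
    "immigration": {"immigration", "asylum", "migrant"},
    "economy": {"tax", "budget", "economy"},
}

nlp = spacy.load("en_core_web_sm")

def extract_party_issue_pairs(text):
    """Return (party, issue_category) pairs for sentences mentioning both."""
    pairs = []
    for sent in nlp(text).sents:
        # Entity recognition plus a direct lexicon match, since small models
        # may miss party acronyms.
        parties = {tok.text for tok in sent if tok.text in PARTIES}
        parties |= {ent.text for ent in sent.ents if ent.text in PARTIES}
        # Concept identification approximated by a keyword/lemma lookup.
        issues = {cat for cat, words in ISSUES.items()
                  if any(tok.lemma_.lower() in words for tok in sent)}
        # Dependency parsing: prefer parties acting as grammatical subjects,
        # a rough proxy for "statement by an actor".
        subjects = {tok.text for tok in sent
                    if tok.dep_ in ("nsubj", "nsubjpass") and tok.text in PARTIES}
        actors = subjects or parties
        pairs.extend((p, c) for p in actors for c in issues)
    return pairs

print(extract_party_issue_pairs("The SPD demanded a higher tax on top incomes."))
# e.g. [('SPD', 'economy')]
```

In a real setting, the keyword lookup would be replaced by the article's concept identification step and the output would be validated against manually coded sentences, as the abstract describes.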