This article builds on the notion of ‘sousveillance’, coined by Steve Mann to describe the present state of modern technological societies, in which anybody may take photos or videos of any person or event and then disseminate the information freely all over the world. The article shows how sousveillance can be generalized both to the physical world and to the virtual world of the Infosphere using modern information technologies. As a consequence, the separation between the public and private spheres tends to disappear. We believe that generalized sousveillance may transform society as a whole; for instance, public transport operators such as the Paris subway might have to change the way they disseminate information, since it becomes impossible to manage the flow of information originating from their infrastructures. To elucidate what a society based on generalized sousveillance might look like, the article introduces the notion of the ‘Catopticon’, derived from Bentham’s Panopticon: while the architecture of the Panopticon was designed to facilitate surveillance by prohibiting communication among inmates and by placing observers in a watchtower, the architecture of the Catopticon allows everybody to communicate with everybody and removes the observers from the watchtower. The article goes on to explore the opportunities the Catopticon might offer if extended to the whole planet. It also shows the limitations of the extended Catopticon. Some are extrinsic, consisting of various forms of resistance that restrict access to the Internet; others are intrinsic: for instance, we can exchange simultaneously with only a few people, even though we may have millions of contacts. As a consequence, the new ‘regimes of distinction’ that arise from these limitations play a key role in modern societies.
This paper proposes a graph-based Named Entity Linking (NEL) algorithm, named REDEN, for the disambiguation of authors' names in French literary criticism texts and scientific essays from the 19th and early 20th centuries. The algorithm is described and evaluated according to the two phases of NEL reported in the current state of the art, namely candidate retrieval and candidate selection. REDEN leverages knowledge from different Linked Data sources in order to retrieve candidates for each author mention, subsequently crawls data from other Linked Data sets using equivalence links (e.g., owl:sameAs), and finally fuses graphs of homologous individuals into a non-redundant graph well suited for graph centrality calculation; the resulting graph is used for choosing the best referent. The REDEN algorithm is distributed as open source and follows current standards in digital editions (TEI) and the Semantic Web (RDF), so it can plausibly be integrated into the editorial workflow of digital-edition projects in the digital humanities and cultural heritage. Experiments are conducted, along with the corresponding error analysis, in order to test our approach and to identify the weaknesses and strengths of our algorithm, thereby guiding further improvements of REDEN.
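The centrality-based selection step can be illustrated with a minimal sketch. This is not the REDEN implementation: the entity URIs, the fused graph, and the use of plain degree centrality are all simplifying assumptions for illustration.

```python
# Illustrative sketch (not the REDEN implementation): choose the best
# referent for a mention by degree centrality in a fused knowledge graph.
# The URIs and edges below are hypothetical.
from collections import defaultdict

def degree_centrality(edges):
    """Degree centrality for each node: degree / (n - 1)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(neigh) / (n - 1) for node, neigh in adj.items()}

def select_referent(candidates, edges):
    """Pick the candidate URI with the highest centrality score."""
    scores = degree_centrality(edges)
    return max(candidates, key=lambda c: scores.get(c, 0.0))

# Hypothetical fused graph: two candidate URIs for the mention "Hugo",
# linked to other entities appearing in the same text.
edges = [
    ("dbpedia:Victor_Hugo", "dbpedia:Les_Miserables"),
    ("dbpedia:Victor_Hugo", "dbpedia:Paris"),
    ("dbpedia:Victor_Hugo", "dbpedia:Romanticism"),
    ("dbpedia:Hugo_Ball", "dbpedia:Dadaism"),
]
candidates = ["dbpedia:Victor_Hugo", "dbpedia:Hugo_Ball"]
print(select_referent(candidates, edges))  # -> dbpedia:Victor_Hugo
```

The intuition is the one described in the abstract: the referent that is most densely connected to the other entities mentioned in the text wins.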
In this paper, we investigate the use of high-level action languages for representing and reasoning about ethical responsibility in goal specification domains. First, we present a simplified Event Calculus, formulated as a logic program under the stable model semantics, in order to represent situations within Answer Set Programming. Second, we introduce a model of causality that allows us to use an answer set solver to reason about the agent's ethical responsibility. We then extend and test this framework against the Trolley Problem and the Doctrine of Double Effect. The overarching aim of the paper is to propose a general and adaptable formal language that may be employed over a variety of ethical scenarios in which the agent's responsibility must be examined and its choices determined. Our fundamental ambition is to shift the burden of moral reasoning from the programmer to the program itself, moving away from current approaches to computational ethics that too easily embed moral reasoning within computational engines, thereby producing atomic answers that fail to represent the underlying dynamics.
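The Doctrine of Double Effect criteria can be sketched as follows. This is only a simplified illustration, not the paper's framework: the paper reasons in the Event Calculus under Answer Set Programming, whereas this Python sketch merely mirrors the classic DDE conditions with a hypothetical encoding of effects.

```python
# Simplified, hypothetical encoding of the Doctrine of Double Effect (DDE):
# an action with a harmful side effect may be permissible only if the harm
# is neither intended as an end nor used as a means, and the intended good
# outweighs the harm. Effect dictionaries and magnitudes are illustrative.

def dde_permissible(action):
    # 1. The harm must not be intended as an end.
    if any(e["harm"] for e in action["intended"]):
        return False
    # 2. The harm must not be the means by which the good is achieved.
    if any(e["harm"] and e["is_means"] for e in action["side_effects"]):
        return False
    # 3. Proportionality: the intended good must outweigh the harm.
    good = sum(e["magnitude"] for e in action["intended"])
    harm = sum(e["magnitude"] for e in action["side_effects"] if e["harm"])
    return good > harm

# Classic trolley case: diverting the trolley saves five, with one death
# as an unintended side effect that is not the means of saving them.
divert = {
    "intended": [{"harm": False, "magnitude": 5}],
    "side_effects": [{"harm": True, "is_means": False, "magnitude": 1}],
}
# Footbridge variant: pushing the man uses his death as the means.
push = {
    "intended": [{"harm": False, "magnitude": 5}],
    "side_effects": [{"harm": True, "is_means": True, "magnitude": 1}],
}
print(dde_permissible(divert))  # -> True
print(dde_permissible(push))    # -> False
```

The contrast between the two cases is exactly the distinction the DDE is meant to capture: the same casualty count yields different verdicts depending on whether the harm is instrumental.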
With the rapid growth of clinical data and knowledge, manual sleep staging becomes a complex decision-making task. In this process, there is a need to integrate and analyze information from heterogeneous data sources with high accuracy. This paper proposes a novel decision support algorithm, Symbolic Fusion, for sleep staging. The proposed algorithm achieves high accuracy by combining data from heterogeneous sources such as EEG, EOG, and EMG. It is designed for implementation in portable embedded systems, enabling automatic sleep staging at low complexity and cost. The proposed algorithm proved to be an efficient decision support method, achieving an overall agreement rate of up to 76% on our database of 12 patients.
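The general idea of symbolic fusion can be sketched as follows, assuming each channel has first been reduced to symbolic features per epoch. The features, rules, thresholds, and stage labels here are hypothetical illustrations; the paper's actual decision rules are not reproduced.

```python
# Hypothetical sketch of symbolic fusion for sleep staging: each channel
# (EEG, EOG, EMG) is abstracted into a symbol, and simple rules fuse the
# symbols into a stage label. Rules and labels are illustrative only.

def fuse_epoch(eeg, eog, emg):
    """Classify one 30-second epoch from symbolic channel features.

    eeg: dominant rhythm, e.g. "alpha", "theta", "delta", "spindle"
    eog: eye movements,   e.g. "rapid", "slow", "none"
    emg: muscle tone,     e.g. "high", "low", "atonia"
    """
    if emg == "high" and eeg == "alpha":
        return "Wake"
    if emg == "atonia" and eog == "rapid":
        return "REM"
    if eeg == "delta":
        return "N3"            # deep (slow-wave) sleep
    if eeg == "spindle":
        return "N2"
    if eeg == "theta" and eog in ("slow", "none"):
        return "N1"
    return "Unscored"          # this sketch's rules are not exhaustive

print(fuse_epoch("alpha", "none", "high"))     # -> Wake
print(fuse_epoch("theta", "rapid", "atonia"))  # -> REM
```

Working on symbols rather than raw signals is what keeps the fusion step cheap enough for the portable embedded targets the abstract mentions.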