This article explores techniques for using supervised machine learning to study discourse quality in large datasets. We explain and illustrate the computational techniques that we have developed to facilitate a large-scale study of deliberative quality in Canada’s three northern territories: Yukon, Northwest Territories, and Nunavut. This larger study involves conducting comparative analyses of hundreds of thousands of parliamentary speech acts since the creation of Nunavut 20 years ago. Without computational techniques, we would be unable to conduct such an ambitious and comprehensive analysis of deliberative quality. The purpose of this article is to demonstrate the machine learning techniques that we have developed with the hope that they might be used and improved by other communications scholars who are interested in conducting textual analyses using large datasets. Other possible applications of these techniques might include analyses of campaign speeches, party platforms, legislation, judicial rulings, online comments, newspaper articles, and television or radio commentaries.
The launch of OpenAI’s GPT-3 model in June 2020 began a new era for conversational chatbots. While some chatbots do not use artificial intelligence (AI), conversational chatbots integrate AI language models that allow for back-and-forth conversation between an AI system and a human user. GPT-3, since superseded by GPT-4, draws on natural language processing techniques such as sentence embedding to support conversations with users that are more nuanced and realistic than before. The model was launched in the first few months of the COVID-19 pandemic, when rising health care needs worldwide, combined with social distancing measures, made virtual medicine more relevant than ever. GPT-3 and other conversational models have since been used for a wide variety of medical purposes, from providing basic COVID-19 guidance to offering personalized medical advice and even prescriptions. The line between medical professionals and conversational chatbots has become blurred, notably in hard-to-reach communities where chatbots have replaced face-to-face health care. Considering these blurred lines and the circumstances accelerating the global adoption of conversational chatbots, we analyze the use of these tools from an ethical perspective. In particular, we map the risks of using conversational chatbots in medicine onto the principles of medical ethics. In doing so, we propose a framework for better understanding the effects of these chatbots on both patients and the medical field more broadly, with the hope of informing safe and appropriate future developments.
This article proposes an automated methodology for the analysis of online political discourse. Drawing from the discourse quality index (DQI) by Steenbergen et al., it applies a machine learning–based quantitative approach to measuring the discourse quality of political discussions online. The DelibAnalysis framework aims to provide an accessible, replicable methodology for the measurement of discourse quality that is both platform and language agnostic. The framework uses a simplified version of the DQI to train a classifier, which can then be used to predict the discourse quality of any non-coded comment in a given political discussion online. The objective of this research is to provide a systematic framework for the automated discourse quality analysis of large datasets and, in applying this framework, to yield insight into the structure and features of political discussions online.
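As a rough illustration of the workflow this abstract describes, the sketch below trains a classifier on comments that have been hand-coded with a simplified DQI label and then predicts labels for uncoded comments. It is a minimal sketch, not the authors' implementation: the file names, the "text" and "dqi_label" columns, the 0–2 label scheme, and the choice of a TF-IDF plus random-forest pipeline are all illustrative assumptions.

```python
# Illustrative sketch of a DelibAnalysis-style workflow: fit a classifier on a
# hand-coded sample of comments, then predict the discourse quality of every
# uncoded comment in a discussion. File names, column names, and model choice
# are assumptions for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Hand-coded sample: each comment carries a simplified DQI label,
# e.g. 0 = low, 1 = medium, 2 = high discourse quality (hypothetical files).
coded = pd.read_csv("coded_comments.csv")
uncoded = pd.read_csv("uncoded_comments.csv")

model = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=5000, ngram_range=(1, 2))),
    ("clf", RandomForestClassifier(n_estimators=300, random_state=42)),
])

# Hold out part of the coded sample to check how well the classifier
# reproduces the human coding before trusting its predictions.
X_train, X_test, y_train, y_test = train_test_split(
    coded["text"], coded["dqi_label"], test_size=0.2, random_state=42
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Label every uncoded comment in the discussion with a predicted DQI class.
uncoded["predicted_dqi"] = model.predict(uncoded["text"])
```

Because the features here are simple word and bigram frequencies rather than anything tied to a particular platform or language, a pipeline of this shape stays platform and language agnostic in the sense the abstract describes; only the coded training sample needs to change.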
This article sets out the rationale for a United Nations Regulation for Artificial Intelligence, which is needed to define the modes of engagement of the organisation when it uses artificial intelligence technologies in pursuit of its mission. It argues that, given the United Nations' increasing use of artificial intelligence, including in some activities considered high risk by the European Commission, such a regulation is urgent. It also contends that rules of engagement for artificial intelligence at the United Nations would support the development of ‘good artificial intelligence’ by giving developers clear pathways to authorisation, which would build trust in these technologies. Finally, it argues that an internal regulation would build upon the work on artificial intelligence ethics and best practices already initiated within the organisation and, like the Brussels Effect, could set an important precedent for regulation in other countries.