In recent years, researchers have gravitated to Twitter and other social media platforms as fertile ground for empirical analysis of social phenomena. Social media provides researchers with access to trace data of interactions and discourse that once went unrecorded in the offline world. Researchers have sought to use these data to explain social phenomena both particular to social media and applicable to the broader social world. This paper offers a minireview of Twitter-based research on political crowd behavior. This literature offers insight into particular social phenomena on Twitter, but often fails to use standardized methods that permit interpretation beyond individual studies. Moreover, the literature fails to ground methodologies and results in social or political theory, divorcing empirical research from the theory needed to interpret it. Instead, investigations focus primarily on methodological innovations for social media analyses, but these too often fail to sufficiently demonstrate the validity of such methodologies. This minireview considers a small number of selected papers; we analyse their theoretical approaches (or, frequently, the lack thereof), review their methodological innovations, and offer suggestions as to the relevance of their results for political scientists and sociologists.
Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty of securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial, and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs, and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial, but locking in an inadequate structure may pose a fate worse than fragmentation. Second, fragmentation will likely persist for now. This should be closely monitored to see if it is self-organising or simply inadequate.
Corporations play a major role in artificial intelligence (AI) research, development, and deployment, with profound consequences for society. This paper surveys opportunities to improve how corporations govern their AI activities so as to better advance the public interest. The paper focuses on the roles of and opportunities for a wide range of actors inside the corporation—managers, workers, and investors—and outside the corporation—corporate partners and competitors, industry consortia, nonprofit organizations, the public, the media, and governments. Whereas prior work on multistakeholder AI governance has proposed dedicated institutions to bring together diverse actors and stakeholders, this paper explores the opportunities these actors have even in the absence of dedicated multistakeholder institutions. The paper illustrates these opportunities with many cases, including the participation of Google in the U.S. Department of Defense Project Maven; the publication of potentially harmful AI research by OpenAI, with input from the Partnership on AI; and the sale of facial recognition technology to law enforcement by corporations including Amazon, IBM, and Microsoft. These and other cases demonstrate the wide range of mechanisms to advance AI corporate governance in the public interest, especially when diverse actors work together.
The international governance of artificial intelligence (AI) is at a crossroads: should it remain fragmented or be centralised? We draw on the history of environment, trade, and security regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak for centralisation. The risk of creating a slow and brittle institution, and the difficulty of pairing deep rules with adequate participation, speak against it. Other considerations depend on the specific design. A centralised body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial, and fragmented institutions could self-organise. In sum, these trade-offs should inform development of the AI governance architecture, which is only now emerging. We apply the trade-offs to the case of the potential development of high-level machine intelligence. We conclude with two recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial, but locking in an inadequate structure may pose a fate worse than fragmentation. Second, fragmentation will likely persist for now. The developing landscape should be monitored to see if it is self-organising or simply inadequate.
Policy Implications
• Secretariats of emerging AI initiatives (for example, the OECD AI Policy Observatory, the Global Partnership on AI, the UN High-level Panel on Digital Cooperation, and the UN System Chief Executives Board (CEB)) should coordinate to halt and reduce further regime fragmentation.
• There is an important role for academia to play in providing objective monitoring and assessment of the emerging AI regime complex, assessing its conflicts, coordination, and catalysts and addressing governance gaps without vested interests. Secretariats of emerging AI initiatives should be similarly empowered to monitor the emerging regime. The CEB appears particularly well placed and mandated to address this challenge, but other options exist.
• Which AI issues and applications need to be tackled in tandem is an open question on which the centralisation debate sensitively turns. We encourage scholars working across AI issues, from privacy to military applications, to organise venues to consider this vital question more closely.
• Non-state actors, especially those with technical expertise, will have a potent influence in either a fragmented or centralised regime. Their contributions need to be used, but safeguards against regulatory capture also need to be in place.
• The AI regime complex is at an embryonic stage, where informed interventions may be expected to have an outsized impact. The effect of academics as norm entrepreneurs should not be underestimated at this point.
As artificial intelligence (AI) systems are increasingly deployed, principles for ethical AI are also proliferating. Certification offers a method both to incentivize adoption of these principles and to substantiate that they have been implemented in practice. This paper draws from management literature on certification and reviews current AI certification programs and proposals. Successful programs rely on both emerging technical methods and specific design considerations. In order to avoid two common failures of certification, program designs should ensure that the symbol of the certification is substantially implemented in practice and that the program achieves its stated goals. The review indicates that the field currently focuses on self-certification and third-party certification of systems, individuals, and organizations, to the exclusion of process management certifications. Additionally, the paper considers prospects for future AI certification programs. Ongoing changes in AI technology suggest that AI certification regimes should be designed to emphasize governance criteria of enduring value, such as ethics training for AI developers, and to adjust technical criteria as the technology changes. Overall, certification can play a valuable role in the portfolio of AI governance tools.