A Study of Twitter and Clickers as Audience Response Systems in International Relations Courses

Steven B. Rothman, Ritsumeikan Asia Pacific University

ABSTRACT
This study conducted experiments using clickers and Twitter in international relations courses to evaluate the effectiveness of audience-response tools on students' experiences and their performance. The study used both within-group and between-group experimental designs and evaluated the results primarily through descriptive and inferential statistical methods. The results show that clickers outperformed Twitter, that students enjoyed using clickers in class, and that the use of these tools had little impact on grade performance.

The importance of educational technology continues to grow for teachers, students, and administrators. This study examined the use of audience-response systems (ARS) in diverse undergraduate classes in the International Relations and Peace Studies cluster at Ritsumeikan Asia Pacific University (APU). The study specifically compared Twitter and Turning Technologies clickers on both academic performance and survey measures of student interactivity and attentiveness. The study found that clickers outperformed Twitter in student satisfaction; however, neither had a strong impact on grade performance. This article describes how clickers hold an advantage over Twitter in the classroom, but a limited one. The success of in-class technology depends on students' technological culture, the methods of use, and the available logistical resources.
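To make the design concrete: the between-group comparison described in the abstract reduces to testing whether outcomes differ between two sections. The following Python sketch runs an independent-samples t-test on hypothetical scores for a clicker section and a Twitter section; the numbers, group sizes, and variable names are invented for illustration and are not the study's data.

# A minimal sketch of a between-group comparison, assuming two course
# sections with hypothetical exam scores (invented for illustration;
# these are not the study's data).
from scipy import stats

clicker_scores = [78, 85, 82, 90, 74, 88, 81, 79]  # hypothetical clicker section
twitter_scores = [75, 83, 80, 86, 72, 84, 78, 77]  # hypothetical Twitter section

# Independent-samples t-test: do mean grades differ between the tools?
t_stat, p_value = stats.ttest_ind(clicker_scores, twitter_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A within-group design would instead compare the same students across
# conditions, e.g., stats.ttest_rel(scores_with_tool, scores_without).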
AUDIENCE RESPONSE SYSTEMS

ARS refers to any system in which an audience interacts with a speaker or speakers during a presentation. ARS can involve low-technology tools, such as colored cards held up during a presentation, or high-technology dials used to indicate favorability toward a speech (widely used, for example, in political-campaign analysis). The purpose of an ARS varies with the goals of the presenter; however, all ARS involve interaction between the audience and the presenter.

Twitter is an online system in which individuals send messages of up to 140 characters to any number of subscribers (i.e., followers). Audiences use Twitter for back-channel communication, that is, short messages sent to one another or to the presenter (Atkinson 2010). Sometimes the Twitter stream is broadcast on an overhead display to increase transparency and communication between the audience and the presenter.
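The clicker workflow in particular amounts to collecting one response per handset and tallying the results for the presenter. The Python sketch below models that mechanic with a hypothetical ClickerPoll class; the class name, question, and responses are invented for illustration and do not reflect Turning Technologies' actual software.

# A minimal sketch of a clicker-style poll, assuming a hypothetical
# ClickerPoll class (not Turning Technologies' actual software).
from collections import Counter

class ClickerPoll:
    """Collect one response per device and tally results for display."""

    def __init__(self, question, choices):
        self.question = question
        self.choices = set(choices)
        self.responses = {}  # device_id -> choice

    def submit(self, device_id, choice):
        if choice in self.choices:
            self.responses[device_id] = choice  # last response per device wins

    def tally(self):
        return Counter(self.responses.values())

# Example use: three students answer a multiple-choice question.
poll = ClickerPoll("Is deterrence a credible strategy?", ["A", "B", "C"])
poll.submit("device-1", "A")
poll.submit("device-2", "B")
poll.submit("device-3", "A")
print(poll.question, dict(poll.tally()))  # -> {'A': 2, 'B': 1}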
Twitter attempts to take education beyond the classroom by allowing students to engage with topics among networks of professionals and peers and to increase communication among students, although the effects are unclear. One study that used Twitter to democratize student involvement found both positive and negative effects (Blair 2013). Other research points to a number of challenges in using online communication tools, owing to the different social-interactive mechanisms to which we are accustomed (Blair 2013). For example, in face-to-face communication, a person who is asked a question feels more pressure to answer than in online communication (Middleton 2010).
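In implementation terms, the back-channel described above is little more than a filter over incoming short messages, selecting those tagged for the course and fitting Twitter's length limit for projection on an overhead display. A minimal Python sketch follows; the hashtag, authors, and messages are invented for illustration, and no real Twitter API calls are made.

# A minimal sketch of a classroom back-channel display; the hashtag,
# authors, and messages are hypothetical, and no Twitter API is used.
COURSE_TAG = "#apu_ir"  # hypothetical course hashtag
MAX_LEN = 140           # Twitter's character limit at the time of the study

def backchannel_feed(messages):
    """Keep on-topic messages that fit the length limit, formatted
    for projection on an overhead display."""
    return [f"@{author}: {text}"
            for author, text in messages
            if COURSE_TAG in text and len(text) <= MAX_LEN]

messages = [
    ("student1", "Does deterrence reasoning apply to cyber conflict? #apu_ir"),
    ("student2", "lunch plans anyone?"),  # filtered out: no course tag
]
for line in backchannel_feed(messages):
    print(line)

Whether such a display actually helps depends, as the study argues, less on the tool itself than on students' technological culture, the methods of use, and the available logistical resources.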