We present an overview of the second shared task on language identification in code-switched data. For the shared task, we had code-switched data from two different language pairs: Modern Standard Arabic–Dialectal Arabic (MSA-DA) and Spanish–English (SPA-ENG). We had a total of nine participating teams, with all teams submitting a system for SPA-ENG and four submitting for MSA-DA. Through evaluation, we found that, once again, language identification is more difficult for the language pair that is more closely related. We also found that this year's systems performed better overall than the systems from the previous shared task, indicating progress in the state of the art for this task.
Code-switching, where a speaker switches between languages mid-utterance, is frequently used by multilingual populations worldwide. Despite its prevalence, limited effort has been devoted to developing computational approaches or even basic linguistic resources to support research into the processing of such mixed-language data. We present a user-centric approach to collecting code-switched utterances from social media posts, and develop language-universal guidelines for the annotation of code-switched data. We also present results for several baseline language identification models on our corpora and demonstrate that language identification in code-switched text is a difficult task that calls for deeper investigation.
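The abstract above does not specify how its baseline language identification models are built. As a purely illustrative sketch (not the authors' method), the snippet below shows one common way to frame word-level language identification for code-switched text: character n-gram features fed to a logistic regression classifier. The toy tokens and the label names (lang1, lang2, other) are invented for the example and are not drawn from the corpora described.

```python
# Hypothetical baseline for token-level language ID in code-switched text:
# character n-gram features + logistic regression (illustrative toy data only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training tokens with word-level labels (Spanish-English style mix).
train_tokens = ["quiero", "comer", "pizza", "tonight", "with", "friends",
                "vamos", "later", "casa", "house"]
train_labels = ["lang1", "lang1", "other", "lang2", "lang2", "lang2",
                "lang1", "lang2", "lang1", "lang2"]

# Character 1-3 grams capture sub-word cues that help separate the languages.
model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_tokens, train_labels)

# Label each token of a new code-switched utterance independently.
utterance = "vamos to the fiesta tonight".split()
print(list(zip(utterance, model.predict(utterance))))
```

A real system would train on annotated corpora and add context features (neighboring tokens, capitalization, named-entity cues), but even this simple setup conveys why closely related language pairs are harder: their character n-gram distributions overlap heavily.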
Stories of cyber-attacks have been prevalent in the public media, and the cyber security market has grown greatly to help meet this demand. However, much of the effort has been focused on developing better hardware and software solutions, with little thought to the human factors of cyber security. This investigation sought to gain a better understanding of the influence cyber-attacks have on the decision-making and collaboration of distributed team members working together to solve a complex logic problem. Eight three-person teams worked together to piece together bits of information to identify a potential terrorist attack. The time and outcome scores were evaluated for the three experimental conditions, which varied the levels of information injected. The goal of the injected statements was to disrupt the decision-making and collaborative process. Injects that explicitly negated true facts had the more detrimental effect on team performance, while performance in the condition with injects that were more suggestive in nature was no different from the no-inject condition. These results shed light on the breakdown in team decision-making when confronted with a contradictory fact, adding to the knowledge needed to build robust collaborative tools.
Over the last several years, teams that collaborate across geographic, temporal, and cultural boundaries have become common in the modern workplace. While these "distributed teams" provide organizations with numerous benefits from an operational and cost standpoint, they also pose significant challenges. One such issue that has yet to be explored is how well-known cognitive biases may impact distributed decision making. In this paper, we present an initial exploration of confirmation bias and its propagation in distributed team collaborations. Using the ELICIT task environment, we manipulated the order of information provided to teams. Our hypothesis was that serial order would influence the significance teams placed on information, such that they would place too much weight on incorrect information presented early in the task, thus inducing a team cognitive bias. Our results conformed to that hypothesis: when incorrect information was presented early, teams appeared to focus more heavily on that information in subsequent discussions, and they reported incorrect answers more often, suggesting the influence of a confirmation bias in their deliberations. These results highlight the need for continued research on team cognition and team cognitive biases, particularly in complex, distributed environments.