Access to clean water is a critical challenge and opportunity for community-level collaboration. People rely on local water sources, but awareness of water quality and participation in water management are often limited. Lack of community engagement can increase the risk of water catastrophes, such as those in Flint, Michigan, and Cape Town, South Africa. We investigated water quality practices in a watershed system serving approximately 100,000 people in the United States. We identified a range of entities, including government agencies and nonprofit citizen groups, that gather water quality data. Many of these data are accessible in principle to citizens. However, the data are scattered and diverse; information infrastructures are primitive and not integrated. Water quality data and data practices are hidden in plain sight. Based on fieldwork, we consider sociotechnical courses of action, drawing on best practices in human–computer interaction, community informatics, and data and environmental systems management.
This lecture-style tutorial, which mixes in an interactive literature browsing component, is intended for the many researchers and practitioners working with text data and on applications of natural language processing (NLP) in data science and knowledge discovery. The focus of the tutorial is on the issues of transparency and interpretability as they relate to building models for text and their applications to knowledge discovery. As black-box models have gained popularity for a broad range of tasks in recent years, both the research and industry communities have begun developing new techniques to render them more transparent and interpretable. Reporting from an interdisciplinary team of social science, human–computer interaction (HCI), and NLP/knowledge management researchers, our tutorial has two components: an introduction to explainable AI (XAI) in the NLP domain and a review of the state-of-the-art research; and findings from a qualitative interview study of individuals working on real-world NLP projects as they are applied to various knowledge extraction and discovery tasks at a large, multinational technology and consulting corporation. The first component will introduce core concepts related to explainability in NLP. Then, we will discuss explainability for NLP tasks and report on a systematic review of state-of-the-art literature from AI, NLP, and HCI conferences. The second component reports on our qualitative interview study, which identifies practical challenges and concerns that arise in real-world development projects that require the modeling and understanding of text data.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and describe whether it provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.