We compared the accuracy and precision of low-dose insulin administration using various devices including, for the first time, an insulin pump. We dispensed 1, 2, and 5 units of soluble insulin (100 units/mL) 15 times each from a NovoPen (3.0 mL), a BD-Mini Pen (1.5 mL), a Humalog Pen (100 units/mL), 30G Precision Sure-Dose Insulin Syringes, 30G BD Ultra-Fine II Short Needle Syringes, and an H-TRON-plus V100 insulin pump. Each dose was weighed on an analytical scale, and the delivered and target doses were compared. Accuracy was defined as the absolute percent difference from the target dose; precision was defined as the absolute percent difference from the group sample mean. Overall, we found that the pen and pump devices were more accurate, and the pump more precise, than the syringes at the 1- and 2-unit doses. At the 1-unit dose, the syringes were inaccurate to a clinically dangerous degree. Pens and syringes with very fine increment markings (1/2 unit) did not improve accuracy or precision. Earlier researchers used multiple individuals to draw and weigh the samples. To eliminate this potential source of error, our study used only 2 investigators: 1 to draw up the doses and 1 to weigh them. Our conclusions were similar to those of prior studies.
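For reference, the accuracy and precision measures described above can be written out explicitly. The notation below is ours, and the choice of denominators (the target dose and the group mean, respectively) is an assumption consistent with, but not spelled out in, the abstract.

```latex
% Accuracy and precision of a single delivered dose, as defined above.
% d_i: the i-th delivered dose (inferred from its weight), t: the target dose,
% \bar{d}: the mean of the 15 delivered doses in that device/dose group.
\[
\mathrm{accuracy}_i = \frac{\lvert d_i - t \rvert}{t} \times 100\%,
\qquad
\mathrm{precision}_i = \frac{\lvert d_i - \bar{d} \rvert}{\bar{d}} \times 100\%,
\qquad
\bar{d} = \frac{1}{15} \sum_{i=1}^{15} d_i .
\]
```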
Many applications of computational social science aim to infer causal conclusions from nonexperimental data. Such observational data often contain confounders: variables that influence both potential causes and potential effects. Unmeasured or latent confounders can bias causal estimates, which has motivated interest in measuring potential confounders from observed text. For example, an individual's entire history of social media posts or the content of a news article could provide a rich measurement of multiple confounders. Yet methods and applications for this problem are scattered across different communities, and evaluation practices are inconsistent. This review is the first to gather and categorize these examples and provide a guide to data-processing and evaluation decisions. Despite increased attention to adjusting for confounding using text, many open problems remain, which we highlight in this paper.
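As a deliberately simplified illustration of what adjusting for confounding using text can look like in practice, the sketch below treats a TF-IDF representation of each individual's text as a proxy for confounders and uses inverse propensity weighting to estimate a treatment effect. The toy data, variable names, and choice of estimator are our own illustrative assumptions, not a pipeline prescribed by the review.

```python
# Sketch: adjust for text-measured (proxy) confounders via inverse propensity
# weighting. Data and estimator choice are illustrative assumptions only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy data: documents (e.g., post histories), a binary treatment, an outcome.
docs = ["exercise tips and healthy recipes", "late night fast food reviews",
        "marathon training log", "gaming and snack hauls"]
treatment = np.array([1, 0, 1, 0])        # e.g., joined a fitness program
outcome = np.array([1.0, 0.2, 0.9, 0.1])  # e.g., later health score

# 1) Measure proxy confounders from the observed text.
X = TfidfVectorizer().fit_transform(docs)

# 2) Estimate propensity scores P(treatment = 1 | text).
propensity = LogisticRegression().fit(X, treatment).predict_proba(X)[:, 1]
propensity = np.clip(propensity, 0.05, 0.95)  # guard against extreme weights

# 3) Inverse-propensity-weighted estimate of the average treatment effect.
w1 = treatment / propensity
w0 = (1 - treatment) / (1 - propensity)
ate = np.sum(w1 * outcome) / np.sum(w1) - np.sum(w0 * outcome) / np.sum(w0)
print(f"IPW estimate of the ATE: {ate:.3f}")
```

This only removes bias to the extent that the text actually captures the relevant confounders, which is one of the open problems the review highlights.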
A fundamental goal of scientific research is to learn about causal relationships. Yet, despite its critical role in the life and social sciences, causality has not received the same attention in Natural Language Processing (NLP), which has traditionally emphasized predictive tasks.
We propose a new, socially impactful task for natural language processing: extracting, from a news corpus, the names of persons who have been killed by police. We present a newly collected police fatality corpus, which we release publicly, along with a model for the task that uses EM-based distant supervision with logistic regression and convolutional neural network classifiers. Our model outperforms two off-the-shelf event extraction systems, and in some cases it can suggest candidate victim names faster than one of the major manually collected police fatality databases.
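To make the EM-based distant supervision idea concrete, here is a minimal, hypothetical sketch: names from an external victim database provide noisy initial labels for individual mentions, and EM alternates between training a mention-level classifier and re-estimating soft labels before aggregating scores per name. The toy data, the bag-of-words features, the plain logistic regression (no CNN), and the specific EM and aggregation details are all our simplifications, not the paper's actual system.

```python
# Simplified sketch: distant supervision + EM over latent mention labels.
# Toy data and modeling choices are illustrative, not the paper's system.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Mention-level data: (candidate name, sentence containing the mention).
mentions = [
    ("john doe", "john doe was fatally shot by officers on tuesday"),
    ("john doe", "john doe coached the local youth team for years"),
    ("jane roe", "police said jane roe died after officers opened fire"),
    ("alex poe", "alex poe spoke at the city council meeting"),
]
known_victims = {"john doe"}  # distant supervision from an external database

X = CountVectorizer(ngram_range=(1, 2)).fit_transform([s for _, s in mentions])
n = len(mentions)

# Soft labels q_i = P(mention i describes a police fatality), initialized
# from the distant supervision signal.
q = np.array([0.9 if name in known_victims else 0.1 for name, _ in mentions])

clf = LogisticRegression()
for _ in range(5):
    # M-step: train on each mention twice (once as positive, once as negative),
    # weighted by the current soft labels.
    clf.fit(vstack([X, X]),
            np.concatenate([np.ones(n), np.zeros(n)]),
            sample_weight=np.concatenate([q, 1.0 - q]))
    # E-step: re-estimate soft labels, keeping mentions of database-confirmed
    # victims anchored toward the positive class.
    q = clf.predict_proba(X)[:, 1]
    for i, (name, _) in enumerate(mentions):
        if name in known_victims:
            q[i] = max(q[i], 0.9)

# Aggregate mention scores into a per-name score (max over mentions here)
# and rank candidate victim names.
scores = {}
for (name, _), p in zip(mentions, clf.predict_proba(X)[:, 1]):
    scores[name] = max(scores.get(name, 0.0), p)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```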