Objective: The International Association of Diabetes and Pregnancy Study Groups (IADPSG) has proposed new criteria for the diagnosis of gestational diabetes mellitus (GDM). The aim of this study was to compare the prevalence of GDM when IADPSG criteria were used with the prevalence when the current Australasian Diabetes in Pregnancy Society (ADIPS) criteria were used.
Design, setting and participants: This was a prospective study over a 6‐month period, examining the results of all glucose tolerance tests (GTTs) conducted for the diagnosis of GDM in Wollongong, covering both the public and private sectors.
Main outcome measures: The prevalence of GDM using the existing (ADIPS) and the proposed (IADPSG) criteria.
Results: There were 1275 evaluable GTTs (571 public and 704 private). Using the current ADIPS diagnostic criteria, the prevalence of GDM was 8.6% (public), 10.5% (private) and 9.6% (overall). Using the proposed IADPSG criteria, the prevalence of GDM was 9.1% (public), 16.2% (private) and 13.0% (overall).
Conclusions: The proposed IADPSG criteria would increase the prevalence of GDM from 9.6% to 13.0% (P < 0.001). In our study in the Wollongong area, whose population is predominantly of white background, this increase came mainly from older women attending a private pathology provider. Data from both the public and private sectors need to be included in any discussion of a change in the prevalence of GDM.
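As a quick consistency check, the overall prevalences follow from the sector prevalences weighted by the number of GTTs in each sector. The short Python sketch below back-calculates approximate case counts from the reported percentages (these counts are inferred for illustration, not taken from the paper's raw data) and confirms the overall figures of 9.6% and 13.0%.

```python
# Back-of-the-envelope check of the reported prevalences (illustrative only;
# the case counts below are back-calculated from the published percentages).

gtts = {"public": 571, "private": 704}

# Approximate GDM case counts implied by the reported sector prevalences.
cases = {
    "ADIPS":  {"public": round(0.086 * gtts["public"]),    # ~49
               "private": round(0.105 * gtts["private"])},  # ~74
    "IADPSG": {"public": round(0.091 * gtts["public"]),    # ~52
               "private": round(0.162 * gtts["private"])},  # ~114
}

total_gtts = sum(gtts.values())  # 1275 evaluable GTTs

for criteria, sector_cases in cases.items():
    overall = sum(sector_cases.values()) / total_gtts
    print(f"{criteria}: overall prevalence ≈ {overall:.1%}")
# ADIPS  -> ~9.6%  (matches the reported overall prevalence)
# IADPSG -> ~13.0% (matches the reported overall prevalence)
```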
BERT (Bidirectional Encoder Representations from Transformers) and related pre-trained Transformers have provided large gains across many language understanding tasks, achieving a new state of the art (SOTA). BERT is pre-trained on two auxiliary tasks: Masked Language Model and Next Sentence Prediction. In this paper we introduce a new pre-training task, inspired by reading comprehension, that shifts pre-training from memorization towards understanding. Span Selection Pre-Training (SSPT) poses cloze-like training instances, but rather than drawing the answer from the model's parameters, the model must select it from a relevant passage. We find significant and consistent improvements over both BERT-BASE and BERT-LARGE on multiple Machine Reading Comprehension (MRC) datasets. Specifically, our proposed model obtains SOTA results on Natural Questions, a new benchmark MRC dataset, outperforming BERT-LARGE by 3 F1 points on short answer prediction. We also show significant gains on HotpotQA, improving answer prediction F1 by 4 points and supporting fact prediction F1 by 1 point, outperforming the previous best system. Moreover, we show that our pre-training approach is particularly effective when training data is limited, substantially improving the learning curve.
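To make the span selection setup concrete, the sketch below builds a single cloze-style training instance in the spirit described above: a term in a query sentence is blanked out, and the target is the character span of that term in a relevant passage. This is an illustrative reconstruction under stated assumptions, not the authors' actual data pipeline; the helper names and the [BLANK] token are our own.

```python
from dataclasses import dataclass

@dataclass
class SpanSelectionInstance:
    query: str         # cloze-style query with the answer term blanked out
    passage: str       # relevant passage that contains the answer
    answer_start: int  # character offset of the answer span in the passage
    answer_end: int    # exclusive end offset of the answer span

def make_instance(sentence: str, answer: str, passage: str,
                  blank_token: str = "[BLANK]") -> SpanSelectionInstance:
    """Turn (sentence, answer, passage) into a span selection instance.

    The answer is removed from the query sentence, so the model cannot
    simply memorise it; it must instead locate the span inside the passage.
    """
    if answer not in sentence or answer not in passage:
        raise ValueError("answer must occur in both the sentence and the passage")
    query = sentence.replace(answer, blank_token, 1)
    start = passage.index(answer)
    return SpanSelectionInstance(query, passage, start, start + len(answer))

# Toy example (not from the paper's corpus).
inst = make_instance(
    sentence="BERT was introduced by researchers at Google.",
    answer="Google",
    passage="BERT is a Transformer-based model released by Google in 2018.",
)
print(inst.query)                                       # "... researchers at [BLANK]."
print(inst.passage[inst.answer_start:inst.answer_end])  # "Google"
```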
Mechanism design, an important tool in microeconomics, has found widespread applications in modelling and solving decentralized design problems in many branches of engineering, notably computer science, electronic commerce, and network economics. Mechanism design is concerned with settings where a social planner faces the problem of aggregating the announced preferences of multiple agents into a collective decision when the agents exhibit strategic behaviour. The objective of this paper is to provide a tutorial introduction to the foundations and key results in mechanism design theory. The paper is in two parts. Part 1 focuses on basic concepts and classical results which form the foundation of mechanism design theory. Part 2 presents key advanced concepts and deeper results in mechanism design.
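As a minimal illustration of the kind of problem mechanism design addresses, the sketch below implements a second-price sealed-bid (Vickrey) auction, a classical mechanism in which truthfully announcing one's valuation is a dominant strategy. The example and function names are ours, chosen for illustration rather than drawn from the paper.

```python
from typing import Dict, Tuple

def vickrey_auction(bids: Dict[str, float]) -> Tuple[str, float]:
    """Second-price sealed-bid (Vickrey) auction.

    The highest bidder wins but pays only the second-highest bid, which
    makes truthful bidding a dominant strategy for every agent.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest announced valuation
    return winner, price

# Three agents announce their (possibly strategic) valuations.
winner, price = vickrey_auction({"alice": 10.0, "bob": 8.0, "carol": 6.5})
print(winner, price)  # alice wins and pays 8.0, not her own bid of 10.0
```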