Many epidemiological studies are undertaken using large epidemiological databases, which involves the simultaneous evaluation of a large number of variables. Epidemiologists face a number of problems when dealing with large data sets: multicollinearity (when variables are correlated with each other), confounding (when a risk factor is correlated with both the exposure and the outcome variable), and interactions (when the direction or magnitude of an association between two variables differs due to the effect of a third variable). Correct variable selection helps to address these issues and to obtain unbiased results. Selecting relevant variables is a complicated and time-consuming task. Flawed variable selection methods still prevail in the scientific literature, so there is a need to demonstrate the usability of new algorithms on real data. In this paper we propose to use a novel machine learning method, k-support regularized logistic regression, to discover predictors of mental health service utilization in the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). We show that k-support regularized logistic regression yields better prediction accuracy than ℓ1- or ℓ2-regularized logistic regression as well as several baseline methods on this task, and we qualitatively evaluate the top-weighted variates. The selected variables are supported by related epidemiological research and give important cues for public policy.
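To make the comparison concrete, the following is a minimal sketch of regularized logistic regression used for variable selection. It uses an ℓ1 penalty in scikit-learn on synthetic data as a stand-in for the baseline; the k-support penalty itself is not available in scikit-learn and would require a custom proximal-gradient solver, and every dataset dimension and hyperparameter below is illustrative rather than drawn from NESARC.

```python
# Sketch: variable selection with l1-regularized logistic regression as a
# baseline. The k-support regularizer from the paper is NOT implemented here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a wide epidemiological table
# (rows = respondents, columns = candidate predictors).
X, y = make_classification(n_samples=2000, n_features=200,
                           n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The l1 penalty drives most coefficients to exactly zero, performing selection.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_train, y_train)

selected = np.flatnonzero(clf.coef_[0])
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
print(f"variables kept: {len(selected)} of {X.shape[1]}")

# Top-weighted variates, analogous to the qualitative inspection in the paper.
top = np.argsort(-np.abs(clf.coef_[0]))[:10]
print("top indices by |weight|:", top)
```

In practice the regularization strength (here `C`) would be chosen by cross-validation, and the retained variables inspected against domain knowledge, as the abstract describes.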
Federated learning is used for the decentralized training of machine learning models on a large number (millions) of edge mobile devices. It is challenging because mobile devices often have limited communication bandwidth and local computation resources. Therefore, improving the efficiency of federated learning is critical for scalability and usability. In this paper, we propose to leverage partially trainable neural networks, which freeze a portion of the model parameters during the entire training process, to reduce the communication cost with little impact on model performance. Through extensive experiments, we empirically show that Federated learning of Partially Trainable neural networks (FedPT) can result in superior communication-accuracy trade-offs, with up to a 46× reduction in communication cost, at a small accuracy cost. Our approach also enables faster training, a smaller memory footprint, and better utility under strong differential privacy guarantees. The proposed FedPT can be particularly interesting for pushing the limits of overparameterization in on-device learning.
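As an illustration of the partially trainable idea, the sketch below freezes a subset of layers in a small PyTorch model so that only the remaining trainable parameters would need to be communicated each round. The architecture, the choice of which layer to freeze, and the resulting freezing ratio are assumptions for illustration, not the configuration used in the paper.

```python
# Sketch: freeze part of a model so only trainable parameters are communicated.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),
)

# Freeze the middle block for the entire training process. Frozen weights can
# be regenerated from a shared random seed, so they never travel over the network.
for p in model[2].parameters():
    p.requires_grad_(False)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"parameters communicated per round: {trainable}/{total} "
      f"({100 * trainable / total:.1f}%)")

# After local training, a client would upload only the trainable tensors, e.g.
# {name: p for name, p in model.named_parameters() if p.requires_grad}.
```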