Estimates from infectious disease models have constituted a significant part of the scientific evidence used to inform the response to the COVID-19 pandemic in the UK. These estimates can vary strikingly in their bias and variability. Epidemiological forecasts should be consistent with the observations that eventually materialize. We use simple scoring rules to refine the forecasts of a novel statistical model for multisource COVID-19 surveillance data by tuning its smoothness hyperparameter. This article is part of the theme issue ‘Technical challenges of modelling real-life epidemics and examples of overcoming these’.
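The abstract does not name the scoring rules used to tune the smoothness hyperparameter; as a minimal sketch of the idea, a sample-based continuous ranked probability score (CRPS) compares forecast draws against the observation that eventually materializes, and the hyperparameter value minimizing the average score would be retained (forecast distribution and values below are hypothetical):

```python
import numpy as np

def crps(samples, observed):
    """Sample-based estimator of the continuous ranked probability
    score (lower is better): E|X - y| - 0.5 * E|X - X'|."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - observed))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

rng = np.random.default_rng(42)
forecast = rng.normal(100.0, 10.0, size=2000)  # hypothetical forecast draws
# An observation near the forecast centre scores better than a distant one.
print(crps(forecast, 102.0) < crps(forecast, 160.0))  # -> True
```

Proper scoring rules such as the CRPS reward forecasts that are both calibrated and sharp, which is the sense in which forecasts are made "consistent with the observations".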
The emergence of the novel coronavirus (COVID-19) has generated a need to quickly and accurately assemble up-to-date information related to its spread. While deaths provide a reliable information feed, the latency of death-derived data is significant. Confirmed cases derived from positive test results potentially provide a lower-latency feed; however, the sampling of those tested varies over time and the reason for testing is often not recorded. Hospital admissions typically occur around 1-2 weeks after infection and so are also out of date relative to the time of initial infection. The extent to which these issues are problematic is likely to vary over time and between countries. We use a machine learning algorithm for natural language processing, trained in multiple languages, to identify symptomatic individuals from social media, in particular Twitter, in real time. We then use an extended SEIRD epidemiological model to fuse combinations of low-latency feeds, including the symptomatic counts from Twitter, with death data to estimate the parameters of the model and nowcast the number of people in each compartment. The model is implemented in the probabilistic programming language Stan and uses a bespoke numerical integrator. We present results showing that using specific low-latency data feeds along with death data provides more consistent and accurate forecasts of COVID-19-related deaths than using death data alone.
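The abstract states that the model is implemented in Stan with a bespoke numerical integrator; as a deliberately simplified illustration of the SEIRD compartmental dynamics it describes, a forward-Euler update might look like the following (the parameterization and all values are hypothetical, not the authors' model):

```python
import numpy as np

def seird_step(state, beta, sigma, gamma, mu, N, dt=1.0):
    """One forward-Euler step of a deterministic SEIRD model.
    beta: transmission rate, sigma: incubation rate (E -> I),
    gamma: exit rate from I, mu: fraction of exits that are deaths."""
    S, E, I, R, D = state
    new_exposed = beta * S * I / N          # S -> E
    new_infectious = sigma * E              # E -> I
    new_recovered = (1 - mu) * gamma * I    # I -> R
    new_deaths = mu * gamma * I             # I -> D
    dS = -new_exposed
    dE = new_exposed - new_infectious
    dI = new_infectious - gamma * I
    dR = new_recovered
    dD = new_deaths
    return state + dt * np.array([dS, dE, dI, dR, dD])

N = 1_000_000.0
state = np.array([N - 10.0, 0.0, 10.0, 0.0, 0.0])  # seed 10 infectious
for _ in range(120):
    state = seird_step(state, beta=0.4, sigma=1 / 5, gamma=1 / 7, mu=0.01, N=N)
print(abs(state.sum() - N) < 1.0)  # compartments conserve the population -> True
```

Nowcasting then amounts to reporting the inferred current size of each compartment, with low-latency feeds (such as symptomatic tweet counts) informing the likelihood alongside deaths.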
The emergence of the novel coronavirus (COVID-19) generated a need to quickly and accurately assemble up-to-date information related to its spread. In this research article, we propose two ways in which Twitter is useful when modelling the spread of COVID-19: (1) machine learning algorithms trained in English, Spanish, German, Portuguese and Italian are used to identify symptomatic individuals from Twitter. Using the geo-location attached to each tweet, we map users to a geographic location to produce a time series of potential symptomatic individuals. We calibrate an extended SEIRD epidemiological model with combinations of low-latency data feeds, including the symptomatic tweets, and death data, and infer the parameters of the model. We then evaluate the usefulness of the data feeds when predicting daily deaths in 50 US states, 16 Latin American countries, 2 European countries and 7 NHS (National Health Service) regions in the UK. We show that using symptomatic tweets yields a 6% and 17% improvement in mean-squared-error accuracy, on average, when predicting COVID-19 deaths in US states and the rest of the world, respectively, compared with using death data alone. (2) Origin/destination (O/D) matrices for movements between the seven NHS regions are constructed by determining when a user has tweeted twice within a 24 h period in two different locations. We show that increasing and decreasing a social-connectivity parameter within an SIR model affects the rate at which a disease spreads.
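The O/D construction in method (2) can be sketched as follows; the record format, region indexing and example data are assumptions for illustration only:

```python
import numpy as np
from collections import defaultdict

# Hypothetical tweet records: (user_id, timestamp_hours, region_index)
tweets = [
    ("u1", 0.0, 0), ("u1", 10.0, 2),   # u1 moved region 0 -> 2 within 24 h
    ("u2", 1.0, 1), ("u2", 30.0, 3),   # outside the 24 h window: ignored
    ("u3", 2.0, 0), ("u3", 20.0, 0),   # same region: counted on the diagonal
]

def build_od_matrix(tweets, n_regions, window_h=24.0):
    """Count movements between regions whenever the same user tweets
    twice within `window_h` hours (sketch of the O/D construction)."""
    by_user = defaultdict(list)
    for user, t, region in tweets:
        by_user[user].append((t, region))
    od = np.zeros((n_regions, n_regions), dtype=int)
    for recs in by_user.values():
        recs.sort()
        for (t0, r0), (t1, r1) in zip(recs, recs[1:]):
            if t1 - t0 <= window_h:
                od[r0, r1] += 1
    return od

od = build_od_matrix(tweets, n_regions=4)
print(od[0, 2], od[1, 3], od[0, 0])  # -> 1 0 1
```

Such a matrix, normalized appropriately, could then scale a social-connectivity parameter coupling the regional SIR compartments.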
State-space models have been widely used to model the dynamics of communicable diseases in populations of interest by fitting to time-series data. Particle filters have enabled these models to incorporate stochasticity and so better reflect the true nature of population behaviours. Relevant parameters, such as the spread of the disease, Rt, and recovery rates, can be inferred using particle MCMC. The standard method uses a Metropolis-Hastings random-walk proposal, which can struggle to reach the stationary distribution in a reasonable time when there are multiple parameters. In this paper, we obtain full Bayesian parameter estimates using gradient information and the No-U-Turn Sampler (NUTS) when proposing new parameters of stochastic non-linear Susceptible-Exposed-Infected-Recovered (SEIR) and SIR models. Although NUTS makes more than one target evaluation per iteration, we show that it can provide more accurate estimates in a shorter run time than Metropolis-Hastings.
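As a minimal sketch of the machinery this abstract describes, a bootstrap particle filter estimates the marginal likelihood of a stochastic discrete-time SIR model, which a PMCMC sampler would then target when proposing new parameters. The binomial transmission step, Poisson observation model and all values below are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

def particle_filter_loglik(beta, gamma, obs, N, n_particles=500, seed=0):
    """Bootstrap particle filter log-likelihood for a stochastic
    discrete-time SIR model with Poisson-observed daily infections."""
    rng = np.random.default_rng(seed)
    S = np.full(n_particles, N - 5.0)
    I = np.full(n_particles, 5.0)
    loglik = 0.0
    for y in obs:
        p_inf = 1.0 - np.exp(-beta * I / N)            # per-susceptible infection prob.
        new_inf = rng.binomial(S.astype(int), p_inf)   # stochastic transmission
        new_rec = rng.binomial(I.astype(int), 1.0 - np.exp(-gamma))
        S = S - new_inf
        I = I + new_inf - new_rec
        # Poisson log-pmf of the observed count given each particle's new infections
        logw = y * np.log(new_inf + 1e-9) - new_inf - np.sum(np.log(np.arange(1, y + 1)))
        w = np.exp(logw - logw.max())
        loglik += np.log(w.mean()) + logw.max()        # running log-likelihood estimate
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())  # resample
        S, I = S[idx], I[idx]
    return loglik

obs = [3, 5, 8, 12, 20, 30, 45, 60]                    # synthetic daily case counts
ll = particle_filter_loglik(beta=0.5, gamma=1 / 7, obs=obs, N=10_000)
print(np.isfinite(ll))  # -> True
```

A random-walk Metropolis-Hastings PMCMC chain would accept or reject proposed (beta, gamma) pairs using this noisy log-likelihood; the paper's contribution is to drive the proposals with gradient information via NUTS instead.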