Introduction

Low-dose CT (LDCT) screening of high-risk smokers reduces lung cancer (LC)-specific mortality. Determining screening eligibility using individualised risk may improve screening effectiveness and reduce harm. Here, we compare the performance of two risk prediction models (PLCOM2012 and the Liverpool Lung Project model (LLPv2)) and National Lung Screening Trial (NLST) eligibility criteria in a community-based screening programme.

Methods

Ever-smokers aged 55–74, from deprived areas of Manchester, were invited to a Lung Health Check (LHC). Individuals at higher risk (PLCOM2012 score ≥1.51%) were offered annual LDCT screening over two rounds. The LLPv2 score was calculated but not used for screening selection; thresholds of ≥2.5% and ≥5% were used for analysis.

Results

PLCOM2012 ≥1.51% selected 56% (n=1429) of LHC attendees for screening. LLPv2 ≥2.5% also selected 56% (n=1430), whereas NLST criteria (47%, n=1188) and LLPv2 ≥5% (33%, n=826) selected fewer. Over two screening rounds, 62 individuals were diagnosed with LC, representing 87% (n=62/71) of the 6-year incidence predicted by the mean PLCOM2012 score (5.0%). Of individuals with LC, 26% (n=16/62) were not eligible for screening using LLPv2 ≥5%, 18% (n=11/62) using NLST criteria and 7% (n=5/62) using LLPv2 ≥2.5%. NLST-eligible Manchester attendees had 2.5 times the LC detection rate of NLST participants after two annual screens (≈4.3% (n=51/1188) vs 1.7% (n=438/26 309); p<0.0001). Adverse measures of health, including airflow obstruction, respiratory symptoms and cardiovascular disease, were positively correlated with LC risk. Coronary artery calcification was predictive of LC (adjOR 2.50, 95% CI 1.11 to 5.64; p=0.028).

Conclusion

Prospective comparisons of risk prediction tools are required to optimise screening selection in different settings. The PLCOM2012 model may underestimate risk in deprived UK populations; further research focused on model calibration is required.
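The detection-rate comparison in the Results (≈4.3% vs 1.7%; p<0.0001) can be checked arithmetically from the reported counts. The sketch below is a minimal illustration, not the authors' analysis: it assumes the raw counts given in the abstract (51/1188 and 438/26 309) and uses a chi-squared test on the 2×2 table as one plausible two-proportion comparison; the study itself may have used a different test.

```python
# Minimal sketch, assuming the counts reported in the abstract.
from scipy.stats import chi2_contingency

manchester_lc, manchester_n = 51, 1188    # NLST-eligible Manchester attendees with LC / screened
nlst_lc, nlst_n = 438, 26309              # NLST participants with LC / screened

rate_manchester = manchester_lc / manchester_n   # ~0.043 (4.3%)
rate_nlst = nlst_lc / nlst_n                     # ~0.017 (1.7%)
# ~2.6 on raw counts; the rounded rates 4.3%/1.7% give the reported ~2.5
print(f"Detection-rate ratio: {rate_manchester / rate_nlst:.2f}")

# 2x2 contingency table: [LC detected, not detected] per cohort
table = [[manchester_lc, manchester_n - manchester_lc],
         [nlst_lc, nlst_n - nlst_lc]]
chi2, p, _, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p well below 0.0001, consistent with the abstract
```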