The aim was to undertake an initial study of the relationship between texture features in computed tomography (CT) images of non-small cell lung cancer (NSCLC) and tumour glucose metabolism and stage. This retrospective pilot study comprised 17 patients with 18 pathologically confirmed NSCLCs. Non-contrast-enhanced CT images of the primary pulmonary lesions underwent texture analysis in two stages: (a) image filtration using a Laplacian of Gaussian filter to differentially highlight fine to coarse textures, followed by (b) texture quantification using the mean grey intensity (MGI), entropy (E) and uniformity (U) parameters. Texture parameters were compared with tumour fluorodeoxyglucose (FDG) uptake (standardised uptake value, SUV) and stage, as determined from the clinical reports of the CT and FDG-positron emission tomography imaging. Tumour SUVs ranged between 2.8 and 10.4. The numbers of NSCLCs with tumour stages I, II, III and IV were 4, 4, 4 and 6, respectively. Coarse texture features correlated with tumour SUV (E: r = 0.51, p = 0.03; U: r = −0.52, p = 0.03), whereas fine texture features correlated with tumour stage (MGI: rs = 0.71, p = 0.001; E: rs = 0.55, p = 0.02; U: rs = −0.49, p = 0.04). Fine texture predicted tumour stage with a kappa of 0.7, demonstrating 100% sensitivity and 87.5% specificity for detecting tumours above stage II (p = 0.0001). This study provides initial evidence for a relationship between texture features in NSCLC on non-contrast-enhanced CT and tumour metabolism and stage. Texture analysis warrants further investigation as a potential method for obtaining prognostic information for patients with NSCLC undergoing CT.
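The two-stage pipeline described above (LoG filtration, then texture quantification) can be sketched as follows. This is a minimal illustration assuming a NumPy/SciPy environment, not the study's implementation; the `texture_parameters` function, the histogram bin count, and the sigma values are illustrative choices, and the study's exact parameter definitions may differ in detail.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def texture_parameters(image, sigma, bins=64):
    """Stage (a): filter with a Laplacian of Gaussian at scale `sigma`.
    Stage (b): quantify the filtered map with mean grey intensity (MGI),
    entropy (E) and uniformity (U), computed from the grey-level histogram."""
    filtered = gaussian_laplace(image.astype(float), sigma=sigma)
    mgi = filtered.mean()
    hist, _ = np.histogram(filtered, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    entropy = -np.sum(p * np.log2(p))  # E: high for heterogeneous texture
    uniformity = np.sum(p ** 2)        # U: high for homogeneous texture
    return mgi, entropy, uniformity

# Small sigma highlights fine texture; larger sigma highlights coarse texture.
rng = np.random.default_rng(0)
roi = rng.normal(size=(64, 64))        # stand-in for a tumour region of interest
for sigma in (0.5, 1.5, 2.5):          # illustrative fine-to-coarse scales
    mgi, e, u = texture_parameters(roi, sigma)
    print(f"sigma={sigma}: MGI={mgi:.3f}, E={e:.3f}, U={u:.3f}")
```

Note that entropy and uniformity move in opposite directions by construction, which is consistent with the reported correlations having opposite signs.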
Supplemental material: http://radiology.rsnajnls.org/cgi/content/full/2502071879/DC1
Bone, Peter, Young, Rupert and Chatwin, Chris (2006) Position-, rotation-, scale-, and orientation-invariant multiple object recognition from cluttered scenes. Optical Engineering, 45 (7). ISSN 0091-3286. Accepted Version, available from Sussex Research Online: http://sro.sussex.ac.uk/28111/

ABSTRACT: A method of tracking objects in video sequences despite any kind of perspective distortion is demonstrated. Moving objects are initially segmented from the scene using a background subtraction method to minimize the search area of the filter. A variation on the Maximum Average Correlation Height (MACH) filter is used to create invariance to orientation while giving high tolerance to background clutter and noise.
A log r-θ mapping is employed to give invariance to in-plane rotation and scale by transforming rotations and scalings of the target object into vertical and horizontal shifts, respectively. The MACH filter is trained on the log r-θ map of the target over a range of orientations and applied sequentially to the regions of movement in successive video frames. Areas of movement producing a strong correlation response indicate an in-class target and can then be used to determine the position, in-plane rotation and scale of the target object in the scene and to track it over successive frames.
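The key property of the log r-θ mapping can be sketched as follows. This is a minimal illustration assuming NumPy/SciPy, not the authors' implementation; the `log_polar` function, its output shape, and the interpolation order are assumptions made for the sketch. An in-plane rotation of the input becomes a cyclic shift along the θ axis of the output, and a change of scale becomes a shift along the log-r axis, so a shift-invariant correlation filter applied to the map tolerates both.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(image, output_shape=(64, 64)):
    """Resample `image` onto a log r-theta grid centred on the image centre.
    Rows index the angle theta; columns index log-spaced radii, so rotation
    about the centre becomes a vertical shift and scaling a horizontal shift."""
    rows, cols = output_shape
    cy, cx = (np.asarray(image.shape) - 1) / 2.0
    r_max = min(cy, cx)
    log_r = np.linspace(0.0, np.log(r_max), cols)   # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, rows, endpoint=False)
    r = np.exp(log_r)
    y = cy + np.outer(np.sin(theta), r)             # sample coordinates
    x = cx + np.outer(np.cos(theta), r)
    return map_coordinates(image, [y, x], order=1, mode='constant')

# A 90-degree rotation of the input appears as a cyclic shift of a quarter
# of the rows in the log r-theta map.
rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
lp = log_polar(img)
lp_rot = log_polar(np.rot90(img))
```

Training the correlation filter on such maps for a range of out-of-plane orientations, as the abstract describes, then covers in-plane rotation and scale through the mapping itself.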