Over the last two decades, radiologists have used multi-view images to detect tumors, and Computed Tomography (CT) is considered one of the most reliable imaging techniques. Many medical-image-processing techniques have been developed to diagnose lung cancer at early or later stages from CT images; however, improving the accuracy and sensitivity of these algorithms remains a major challenge. In this paper, we propose an image-fusion-based lung segmentation algorithm to optimize lung cancer diagnosis. The fusion technique combines Laplacian Pyramid (LP) decomposition with Adaptive Sparse Representation (ASR). The proposed technique decomposes medical images into layers of different sizes using the LP, and the four decomposed layers are then fused. The Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset was used to evaluate the proposed technique. The results showed that the Dice Similarity Coefficient (DSC) of our proposed method was 0.9929, which is better than recently published results. Furthermore, the sensitivity, specificity, and accuracy were 89%, 98%, and 99%, respectively, which are also competitive with recently published results.
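The Laplacian-pyramid fusion step described in this abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the crude 2×2-average downsampling, the max-absolute-value rule for fusing detail layers, and the averaging of the base layers are all assumptions made for the sketch.

```python
import numpy as np

def down(img):
    # 2x downsample by averaging 2x2 blocks (crude low-pass stand-in)
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img, shape):
    # nearest-neighbour upsample back to `shape`
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=4):
    # each level stores the detail lost by downsampling; the last level is the residual
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        low = down(cur)
        pyr.append(cur - up(low, cur.shape))
        cur = low
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    # invert the pyramid: upsample and add back each detail layer
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = lap + up(cur, lap.shape)
    return cur

def fuse(img_a, img_b, levels=4):
    # keep the stronger detail coefficient at each level; average the base layers
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2)
    return reconstruct(fused)
```

Because each level stores exactly the residual lost by downsampling, `reconstruct(laplacian_pyramid(x))` recovers the input exactly; a real system would plug the ASR-based fusion rule in place of the max-absolute selection used here.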
Wearable electronics capable of recording and transmitting biosignals can provide convenient and pervasive health monitoring. A typical EEG recording produces a large amount of data, and conventional compression methods cannot reduce the sampling rate below the Nyquist rate, so the data remain large even after compression, requiring substantial storage and long transmission times. Compressed sensing offers a solution to this problem by allowing data to be acquired below the Nyquist rate. In this paper, a double temporal sparsity based reconstruction algorithm is applied to recover compressively sampled EEG data. The results are further improved by modifying the double temporal sparsity based reconstruction algorithm with the Schatten-p norm, along with a decorrelation transformation of the EEG data before processing. The proposed modified double temporal sparsity based reconstruction algorithm outperforms Block Sparse Bayesian Learning and rakeness-based compressed sensing algorithms in terms of SNDR and NMSE. Simulation results further show that the proposed algorithm has a better convergence rate and a shorter execution time.
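The compressed-sensing pipeline the abstract builds on (sub-Nyquist acquisition followed by sparse recovery) can be illustrated with a generic sketch. This is not the double temporal sparsity algorithm itself: the DCT-sparse surrogate signal, the Bernoulli sensing matrix, and the use of plain iterative soft thresholding (ISTA) as the recovery solver are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 64, 4                    # signal length, measurements, sparsity

# Orthonormal DCT-II basis; EEG segments are roughly sparse in such bases
t = np.arange(n)
Psi = np.cos(np.pi * (t[:, None] + 0.5) * t[None, :] / n)
Psi /= np.linalg.norm(Psi, axis=0)

# k-sparse coefficient vector -> time-domain surrogate "EEG" segment
s = np.zeros(n)
s[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = Psi @ s

# Sub-Nyquist sampling: m < n random Bernoulli projections
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x

# ISTA: gradient step on ||A s - y||^2 followed by soft thresholding
A = Phi @ Psi
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
lam = 0.01
s_hat = np.zeros(n)
for _ in range(500):
    z = s_hat - A.T @ (A @ s_hat - y) / L
    s_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

x_hat = Psi @ s_hat
nmse = np.linalg.norm(x - x_hat) ** 2 / np.linalg.norm(x) ** 2
```

The paper's contribution sits in the recovery stage: replacing the generic sparsity penalty above with the double temporal sparsity model (and the Schatten-p norm in the modified version) after decorrelating the data.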
A 2-D Adaptive Trimmed Mean Autoregressive (ATMAR) model is proposed for denoising medical images corrupted with Poisson noise. Unfiltered images are divided into smaller chunks, and the ATMAR model is applied to each chunk separately. In this paper, two 5×5 windows with 40% overlap are used to predict the center pixel value of the central row. The AR coefficients are updated by sliding both windows forward with a 60% shift, and the same process is repeated across the entire image to predict a new denoised image. The Adaptive Trimmed Mean Filter (ATMF) then removes the lowest and highest variations in pixel values of the ATMAR-denoised image and averages out the remaining neighborhood pixel values. Finally, a power-law transformation is applied to the resulting image for contrast stretching. Image quality is compared with the latest denoising techniques in terms of correlation, Mean Squared Error (MSE), Structural Similarity Index Measure (SSIM), and Peak Signal-to-Noise Ratio (PSNR). The proposed technique provides an efficient way to scale down Poisson noise in scintigraphic images on a pixel-by-pixel basis.
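The trimmed-mean filtering stage (ATMF) described above amounts to an alpha-trimmed mean over a sliding window: sort the neighbourhood, discard the extremes, and average the rest. The sketch below illustrates only that stage, not the autoregressive prediction; the window size, the 20% trim fraction, and the reflect padding are assumptions.

```python
import numpy as np

def trimmed_mean_filter(img, win=5, trim=0.2):
    """Slide a win x win window over the image; at each position, drop the
    lowest and highest `trim` fraction of pixel values and average the rest
    (an alpha-trimmed mean, the idea behind the ATMF stage)."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    cut = int(trim * win * win)          # number of extremes dropped on each side
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            block = np.sort(padded[i:i + win, j:j + win].ravel())
            out[i, j] = block[cut:block.size - cut].mean()
    return out
```

Because isolated extreme values land in the trimmed tails, a single impulse in an otherwise flat region is removed entirely, which is why the trim step precedes the neighbourhood averaging.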
Digital surveillance systems are ubiquitous and continuously generate massive amounts of data, and manual monitoring is required in order to recognise human activities in public areas. Intelligent surveillance systems that can automatically identify normal and abnormal activities are highly desirable, as these would allow for efficient monitoring by selecting only those camera feeds in which abnormal activities are occurring. This paper proposes an energy-efficient camera prioritisation framework that intelligently adjusts the priority of cameras in a vast surveillance network using feedback from the activity recognition system. The proposed system addresses the limitations of existing manual monitoring surveillance systems using a three-step framework. In the first step, the salient frames are selected from the online video stream using a frame differencing method. A lightweight 3D convolutional neural network (3DCNN) architecture is applied to extract spatio-temporal features from the salient frames in the second step. Finally, the probabilities predicted by the 3DCNN network and the metadata of the cameras are processed using a linear threshold gate sigmoid mechanism to control the priority of the camera. The proposed system performs well compared to state-of-the-art violent activity recognition methods in terms of efficient camera prioritisation in large-scale surveillance networks. Comprehensive experiments and an evaluation of activity recognition and camera prioritisation showed that our approach achieved an accuracy of 98% with an F1-score of 0.97 on the Hockey Fight dataset, and an accuracy of 99% with an F1-score of 0.98 on the Violent Crowd dataset.
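The first and third steps of the framework (frame-differencing saliency and the sigmoid priority gate) are simple enough to sketch; the 3DCNN itself is omitted. This is an illustrative sketch, not the authors' code: the mean-absolute-difference threshold and the gate weights are assumed values.

```python
import numpy as np

def select_salient_frames(frames, thresh=10.0):
    """Step 1: keep frames whose mean absolute difference from the
    previously kept frame exceeds `thresh` (frame differencing)."""
    salient = [0]                        # always keep the first frame
    ref = frames[0].astype(float)
    for idx in range(1, len(frames)):
        cur = frames[idx].astype(float)
        if np.abs(cur - ref).mean() > thresh:
            salient.append(idx)
            ref = cur
    return salient

def camera_priority(prob_violent, weight=6.0, bias=-3.0):
    """Step 3: linear threshold gate followed by a sigmoid, mapping the
    3DCNN's violence probability to a camera priority in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(weight * prob_violent + bias)))
```

In the full system the gate would also fold in camera metadata; here only the predicted probability drives the priority, so a feed with probability 0.5 sits exactly at the midpoint priority of 0.5.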
Noise in signals and images can be removed through various de-noising techniques such as mean filtering, median filtering, total variation, and filtered variation. Wavelet-based de-noising is one of the major techniques for noise removal. In the first part of our work, a wavelet-transform-based logarithmic shrinkage technique is used to de-noise images corrupted by noise introduced during under-sampling in the frequency domain. The logarithmic shrinkage technique is applied to the under-sampled Shepp-Logan phantom image, and experimental results show that it achieves 7-10% better PSNR values than the existing classical techniques. In the second part of our work, we de-noise the noisy, under-sampled phantom image, corrupted by salt-and-pepper, Gaussian, speckle, and Poisson noise, using the four thresholding techniques and compute their correlations with the original image; the resulting correlation values remain close to those of the noisy image. By applying a median or Wiener filter in parallel with the thresholding techniques, we obtain 30-35% better results than applying the thresholding techniques alone. Thus, in the second part, we recover and de-noise the sparse under-sampled images by combining shrinkage functions with median or Wiener filtering.
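Wavelet shrinkage de-noising, the core operation in both parts of this abstract, can be sketched with a one-level Haar transform: transform, shrink the detail coefficients, invert. This is a minimal 1-D illustration with plain soft thresholding, not the paper's logarithmic shrinkage rule; the Haar wavelet and the threshold value are assumptions.

```python
import numpy as np

def haar_1d(x):
    # one level of the orthonormal 1-D Haar transform: (approximation, detail)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar_1d(a, d):
    # exact inverse of haar_1d
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(x, t):
    # soft thresholding: shrink toward zero by t, zeroing small coefficients
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(x, t):
    # shrink only the detail coefficients, where noise dominates
    a, d = haar_1d(x)
    return ihaar_1d(a, soft(d, t))
```

For a piecewise-constant signal the clean detail coefficients are mostly zero, so thresholding them removes roughly the half of the noise energy that lands in the detail band; stacking a median or Wiener filter after this step is the combination the second part of the work exploits.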