Tissue assessment for chronic wounds is the basis of wound grading and the selection of treatment approaches. While several image processing approaches have been proposed for automatic wound tissue analysis, these approaches fall short of clinical practice. In particular, to our knowledge, all previous approaches have assumed only 3 tissue types in chronic wounds, while these wounds commonly exhibit 7 distinct tissue types, the presence of each of which changes the treatment procedure. In this paper, for the first time, we investigate the classification of 7 wound tissue types. We worked with wound-care professionals to build a new database covering the 7 types of wound tissue. We propose to use pre-trained deep neural networks for feature extraction and classification at the patch level. Our experiments demonstrate that our approach outperforms state-of-the-art methods. We will make our database publicly available to facilitate research in wound assessment.
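The patch-level pipeline described above starts by tiling the wound image into fixed-size patches, each of which is then passed to a pre-trained network for feature extraction and classification. A minimal sketch of the tiling step, assuming a 2-D grayscale image represented as a list of rows (the patch size and stride are illustrative choices, not the paper's):

```python
def extract_patches(image, patch_size, stride):
    """Slide a window over a 2-D image (list of rows) and collect
    patch_size x patch_size patches. In a patch-level classifier,
    each patch would then be fed to a pre-trained CNN to obtain a
    feature vector and a tissue-type prediction."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append([row[left:left + patch_size]
                            for row in image[top:top + patch_size]])
    return patches
```

For example, a 4x4 image tiled with `patch_size=2` and `stride=2` yields four non-overlapping patches; the per-patch predictions can then be mapped back to their grid positions to form a tissue map of the wound.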
Recently, convolutional neural networks have shown promising performance for single-image super-resolution. In this paper, we propose the Deep Artifact-Free Residual (DAFR) network, which combines the merits of residual learning with the use of the ground-truth image as the training target. Our framework uses a deep model to extract the high-frequency information that is necessary for high-quality image reconstruction. We use a skip connection to feed the low-resolution image to the network before the image reconstruction. In this way, we are able to use the ground-truth images as the target and avoid misleading the network with artifacts in the difference image. In order to extract clean high-frequency information, we train the network in two steps. The first step is traditional residual learning, which uses the difference image as the target. Then, the trained parameters of this step are transferred to the main training in the second step. Our experimental results show that the proposed method achieves better quantitative and qualitative image quality than existing methods.
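The key structural idea in the abstract above is the skip connection: the residual branch only has to predict high-frequency detail, while the addition of the low-resolution input lets the loss be computed against the ground-truth image rather than a difference image. A toy sketch of that arrangement, assuming images flattened to 1-D lists and an arbitrary callable standing in for the residual branch (function names are illustrative, not the paper's):

```python
def dafr_output(lr_upsampled, residual_branch):
    """Skip connection: add the (upsampled) low-resolution input to the
    residual branch's output, so training can target the ground-truth
    high-resolution image directly."""
    return [lo + r for lo, r in zip(lr_upsampled, residual_branch(lr_upsampled))]

def l2_loss(pred, target):
    """Sum of squared errors between two flattened images."""
    return sum((p - t) ** 2 for p, t in zip(pred, target))

# Two-step training schedule, in outline:
#   step 1: train residual_branch with l2_loss(residual_branch(lr), hr - lr)
#           (traditional residual learning against the difference image)
#   step 2: transfer those parameters, then train with
#           l2_loss(dafr_output(lr, residual_branch), hr)
#           (ground-truth image as target, via the skip connection)
```

If the branch predicts the residual exactly, the skip connection reconstructs the ground truth: for `lr = [1, 2, 3]` and a branch returning `[1, 2, 3]`, the output is `[2, 4, 6]`.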
Image fusion in visual sensor networks (VSNs) aims to combine information from multiple images of the same scene into a single image with more information. Image fusion methods based on the discrete cosine transform (DCT) are less complex and save time when images and videos are coded in DCT-based standards, which makes them more suitable for VSN applications. In this paper, an efficient algorithm for the fusion of multi-focus images in the DCT domain is proposed. The sum of modified Laplacian (SML) of corresponding blocks of the source images is used as the contrast criterion, and blocks with the larger SML value are passed to the output image. Experimental results on several images show the improvement of the proposed algorithm, in terms of both subjective and objective quality of the fused image, relative to other DCT-based techniques.
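The block-selection rule described above can be sketched in a few lines. This is a simplified spatial-domain illustration, not the paper's DCT-domain method: it computes the SML of each candidate block directly from pixel values and keeps the block with the larger value (the sharper, in-focus one):

```python
def sml(block):
    """Sum of modified Laplacian over a 2-D block (list of rows).
    Sums |2*I(x,y) - I(x-1,y) - I(x+1,y)| + |2*I(x,y) - I(x,y-1) - I(x,y+1)|
    over interior pixels; larger values indicate sharper content."""
    h, w = len(block), len(block[0])
    total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total += abs(2 * block[y][x] - block[y][x - 1] - block[y][x + 1])
            total += abs(2 * block[y][x] - block[y - 1][x] - block[y + 1][x])
    return total

def fuse_blocks(block_a, block_b):
    """Select the corresponding block with the larger SML for the output."""
    return block_a if sml(block_a) >= sml(block_b) else block_b
```

For instance, a flat (defocused) block has SML 0, while a high-contrast checkerboard block scores much higher, so `fuse_blocks` keeps the checkerboard.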