Foot ulceration is the most common complication of diabetes and represents a major health problem worldwide. If these ulcers are not adequately treated at an early stage, they may lead to lower limb amputation. Considering the low cost and prevalence of smartphones with high-resolution cameras, Diabetic Foot Ulcer (DFU) healing assessment by image analysis has become an attractive option for helping clinicians manage ulcers more accurately and objectively. In this work, we performed DFU segmentation using deep learning methods for semantic segmentation. Our aim was to find an accurate fully convolutional neural network suited to our small database. Three different fully convolutional networks were tested to perform the ulcer area segmentation. The U-Net network obtained a Dice Similarity Coefficient of 97.25% and an Intersection over Union (IoU) index of 94.86%. These preliminary results demonstrate the power of fully convolutional neural networks in diabetic foot ulcer segmentation using a limited number of training samples.
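The two metrics reported above have standard definitions for binary masks. As a minimal sketch (not code from the paper; the function and variable names are illustrative), Dice and IoU between a predicted ulcer mask and a manual annotation can be computed as follows:

import numpy as np

def dice_and_iou(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7):
    """Dice Similarity Coefficient and IoU for two binary masks of the same shape."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    dice = 2.0 * intersection / (pred.sum() + gt.sum() + eps)
    iou = intersection / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou

# Toy example: predicted ulcer pixels vs. ground-truth annotation
pred = np.array([[0, 1, 1], [0, 1, 0]])
gt   = np.array([[0, 1, 1], [1, 1, 0]])
print(dice_and_iou(pred, gt))  # Dice ~= 0.857, IoU = 0.75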
Wound area segmentation has progressed considerably with the emergence of deep learning, owing to its robustness under uncontrolled lighting and the absence of hand-crafted feature design, but two limitations remain. First, its performance relies on the size and quality of the training dataset, and in the medical field data annotation is costly and time-consuming. Second, the accuracy of the segmentation depends strongly on camera distance and angle, and perspective effects prevent measuring real surface areas from single views. To address these two issues jointly, we propose to apply multi-view modeling: an image sequence is acquired around the wound site and enables 3D reconstruction of the wound. A segmentation step then roughly extracts the wound from the background in each view and selects the best view with an original strategy. This view provides the most accurate segmentation and the real wound bed area, even for non-planar wounds. Finally, this segmentation is backprojected into every view to generate a complete set of well-annotated real images that reinforce the training of the neural network. In our experiments, we compare several strategies for selecting the best view in the image sequence. The proposed method, tested on a dataset of 270 images, outperforms a standard single-view deep learning approach: the DICE index and IoU score for the wound class rise from 36.53% to 86.3% and from 29.48% to 77.09%, respectively, reaching overall DICE and IoU scores of 93.04% and 86.61% when the background class is included. These results attest to the robustness of our method and its improved accuracy in the wound segmentation task.
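To make the backprojection step concrete, the sketch below shows one plausible way it could work under simple assumptions: a pinhole camera model with known intrinsics K and per-view extrinsics (R, t) from the 3D reconstruction, and reconstructed surface points already labeled wound/background from the best view. This is not the authors' implementation; all names and the rasterization strategy are assumptions for illustration only.

import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole model."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T   # world frame -> camera frame
    uv = (K @ cam.T).T                            # camera frame -> image plane
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide
    return uv, cam[:, 2]                          # pixel coordinates and depths

def backproject_wound_mask(points_3d, labels, K, R, t, height, width):
    """Rasterize wound-labeled 3D points into a binary mask for one view."""
    uv, depth = project_points(points_3d, K, R, t)
    mask = np.zeros((height, width), dtype=np.uint8)
    valid = (depth > 0) & (labels > 0)            # keep points in front of camera, labeled wound
    px = np.round(uv[valid]).astype(int)
    inside = (px[:, 0] >= 0) & (px[:, 0] < width) & (px[:, 1] >= 0) & (px[:, 1] < height)
    px = px[inside]
    mask[px[:, 1], px[:, 0]] = 1                  # mark projected wound pixels
    return mask

Running backproject_wound_mask once per view would yield the per-view pseudo-annotations that, in the paper's pipeline, reinforce the training set of the segmentation network.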