Image-based plant phenotyping is a growing application domain of computer vision in agriculture. A key task is the segmentation of all individual leaves in images. Here we focus on the most common rosette model plants, Arabidopsis and young tobacco. Although leaves do share appearance and shape characteristics, the presence of occlusions and variability in leaf shape and pose, as well as imaging conditions, render this problem challenging. The aim of this paper is to compare several leaf segmentation solutions on a unique, first-of-its-kind dataset containing images from typical phenotyping experiments. In particular, we report and discuss methods and findings of a collection of submissions for the first Leaf Segmentation Challenge (LSC) of the Computer Vision Problems in Plant Phenotyping (CVPPP) workshop in 2014. Four methods are presented: three segment leaves by processing the distance transform in an unsupervised fashion, while the fourth uses optimal template selection and Chamfer matching. Overall, we find that although separating plant from background can be achieved with satisfactory accuracy (>90% Dice score), individual leaf segmentation and counting remain challenging when leaves overlap. In addition, accuracy is lower for younger leaves. We also find that variability across datasets affects outcomes. Our findings motivate further investigation and the development of specialized algorithms for this particular application, and suggest that challenges of this form are ideally suited for advancing the state of the art. Data are publicly available (http://www.plantphenotyping.org/CVPPP2014-dataset) to support future challenges beyond segmentation within this application domain.
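The Dice score used above to quantify plant/background separation is the standard overlap measure 2|A∩B| / (|A| + |B|) between a predicted and a ground-truth binary mask. A minimal NumPy sketch (the function name and example masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Two overlapping 4x4 masks, 12 foreground pixels each,
# sharing an 8-pixel overlap: Dice = 2*8 / (12+12) = 2/3.
a = np.zeros((4, 4), dtype=bool); a[:, :3] = True
b = np.zeros((4, 4), dtype=bool); b[:, 1:] = True
```

A score above 0.9, as reported for plant/background separation, means the masks agree on the large majority of foreground pixels.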
Automated surgical workflow analysis and understanding can assist surgeons to standardize procedures and enhance post-surgical assessment and indexing, as well as interventional monitoring. Computer-assisted interventional (CAI) systems based on video can perform workflow estimation through surgical instrument recognition while linking it to an ontology of procedural phases. In this work, we adopt a deep learning paradigm to detect surgical instruments in cataract surgery videos, which in turn feeds a surgical phase inference recurrent network that encodes temporal aspects of phase steps within the phase classification. Our models present results comparable to the state of the art for surgical tool detection and phase recognition, with accuracies of 99% and 78%, respectively.
Segmentation of biological volumes is a crucial step needed to fully analyse their scientific content. Not having access to convenient tools with which to segment or annotate the data means many biological volumes remain under-utilised. Automatic segmentation of biological volumes is still a very challenging research field, and current methods usually require a large amount of manually-produced training data to deliver a high-quality segmentation. However, the complex appearance of cellular features and the high variance from one sample to another, along with the time-consuming work of manually labelling complete volumes, make the required training data very scarce or non-existent. Thus, fully automatic approaches are often infeasible for many practical applications. With the aim of unifying the segmentation power of automatic approaches with the user's expertise and ability to manually annotate biological samples, we present a new workbench named SuRVoS (Super-Region Volume Segmentation). Within this software, a volume to be segmented is first partitioned into hierarchical segmentation layers (named Super-Regions) and is then interactively segmented with the user's knowledge input in the form of training annotations. SuRVoS first learns from and then extends user inputs to the rest of the volume, while using Super-Regions for quicker and easier segmentation than when using a voxel grid. These benefits are especially noticeable on noisy, low-dose, biological datasets.
Background: Intrapapillary capillary loops (IPCLs) represent an endoscopically visible feature of early squamous cell neoplasia (ESCN) which correlates with invasion depth, an important factor in the success of curative endoscopic therapy. IPCLs visualised on magnification endoscopy with Narrow Band Imaging (ME-NBI) can be used to train convolutional neural networks (CNNs) to detect the presence of ESCN lesions and classify their staging. Methods: A total of 7046 sequential high-definition ME-NBI images from 17 patients (10 ESCN, 7 normal) were used to train a CNN. IPCL patterns were classified by three expert endoscopists according to the Japanese Endoscopic Society classification. Normal IPCLs were defined as type A, abnormal as types B1-3. Matched histology was obtained for all imaged areas. Results: This CNN differentiates abnormal from normal IPCL patterns with 93.7% accuracy (86.2% to 98.3%), and with sensitivity and specificity for classifying abnormal IPCL patterns of 89.3% (78.1% to 100%) and 98% (92% to 99.7%), respectively. Our CNN operates in real time, with diagnostic prediction times between 26.17 ms and 37.48 ms. Conclusion: Our novel, proof-of-concept application of computer-aided endoscopic diagnosis shows that a CNN can accurately classify IPCL patterns as normal or abnormal. This system could be used as an in vivo, real-time clinical decision support tool for endoscopists assessing and directing local therapy of ESCN.
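The sensitivity and specificity reported above follow the standard confusion-matrix definitions: sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). A minimal sketch (the function name and example counts are illustrative, not the paper's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity (true positive rate) and specificity
    (true negative rate) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of abnormal cases correctly flagged
    specificity = tn / (tn + fp)  # fraction of normal cases correctly cleared
    return sensitivity, specificity

# Illustrative counts: 90 abnormal detected of 100, 95 normal cleared of 100
# -> sensitivity 0.90, specificity 0.95.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=95, fp=5)
```

High specificity, as reported here (98%), matters clinically because it limits how often normal IPCL patterns are flagged for unnecessary intervention.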