Mapping is a fundamental application of remote sensing imagery, and accurate evaluation of the information extracted from remote sensing images by artificial intelligence is critical. However, the existing evaluation method based on Intersection over Union (IoU) is limited in assessing the boundary accuracy of the extracted information and is therefore insufficient for determining mapping accuracy. Furthermore, traditional remote sensing mapping methods struggle to match the inflection points produced by artificial intelligence contour extraction. To address these issues, we propose the mean inflection point distance (MPD) as a new segmentation evaluation method. MPD calculates error values accurately and solves the problem of matching multiple inflection points, which traditional remote sensing mapping cannot handle. We tested three algorithms on the Vaihingen dataset: Mask R-CNN, Swin Transformer, and PointRend. The results show that MPD is highly sensitive to mapping accuracy, calculates error values accurately, and is applicable across different mapping scales while maintaining high visual consistency. This study helps to assess the accuracy of automatic mapping with remote sensing artificial intelligence.
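To make the idea concrete, here is a minimal sketch of an MPD-style computation. The abstract does not specify the matching procedure, so the nearest-point matching, the function name, and the example coordinates below are illustrative assumptions, not the authors' exact definition.

```python
import numpy as np

def mean_inflection_point_distance(gt_points, pred_points):
    """Illustrative MPD-style metric (matching rule is assumed).

    gt_points, pred_points: (N, 2) and (M, 2) arrays of contour
    inflection-point coordinates in pixels. For each ground-truth
    inflection point, take the distance to the nearest predicted
    point, then average over all ground-truth points.
    """
    gt = np.asarray(gt_points, dtype=float)
    pred = np.asarray(pred_points, dtype=float)
    # Pairwise Euclidean distances between all GT and predicted points.
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=-1)
    # Nearest predicted point for each ground-truth point, then mean.
    return d.min(axis=1).mean()

# Example: corners of a square vs. a slightly shifted prediction.
gt = [(0, 0), (0, 10), (10, 10), (10, 0)]
pred = [(1, 0), (0, 9), (11, 11), (10, 1)]
print(mean_inflection_point_distance(gt, pred))  # ~1.1 px
```

Unlike IoU, which saturates for large regions with slightly wrong outlines, a point-distance formulation of this kind reports boundary error directly in pixels, which is what mapping accuracy standards are expressed in.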
Background: The application of artificial intelligence (AI) to whole slide images has the potential to improve research reliability and, ultimately, diagnostic efficiency and service capacity. Image annotation plays a key role in AI and digital pathology. However, the work-streams required for tissue-specific (skin) and immunostain-specific annotation have not been studied as extensively as the development of AI algorithms.
Objectives: To develop a common workflow for annotating whole slide images of biopsies from inflammatory skin disease, immunostained with a variety of epidermal and dermal markers, prior to developing the AI-assisted analysis pipeline.
Methods: A total of 45 slides containing 3-5 sections each were scanned using an Aperio AT2 slide scanner (Leica Biosystems). These slides were annotated by hand using a commonly used image analysis tool, which resulted in more than 4000 image blocks. We used deep learning (DL) to first segment the epidermis and upper dermis, excluding common artefacts, and then to quantify the immunostained signal in those two compartments of skin biopsies and the ratio of positive cells.
Results: We validated two DL models using 10-fold validation runs and by comparison with manually annotated ground-truth data. The models achieved an average (global) accuracy of 95.0% for the segmentation of epidermis and dermis and 86.1% for the segmentation of positive/negative cells.
Conclusions: The application of two DL models in sequence enables accurate segmentation of epidermal and dermal structures, exclusion of common artefacts, and quantitative analysis of the immunostained signal. However, inaccurate annotation of the slides used to train the DL models can decrease the accuracy of the output. Our open-source code will facilitate further external validation across different immunostaining platforms and slide scanners.
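As an illustration of the second-stage quantification, the sketch below computes a positive-cell ratio inside one segmented compartment. The label encoding, function name, and the use of pixel counts as a proxy for cell counts are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def positive_cell_ratio(compartment_mask, cell_labels):
    """Hypothetical sketch of the quantification step.

    compartment_mask: boolean array marking one skin compartment
        (e.g. epidermis) output by the first segmentation model.
    cell_labels: integer array from the second model; assumed
        encoding: 0 = background, 1 = negative cell, 2 = positive cell.
    Returns the fraction of immunostain-positive signal within the
    compartment, by pixel count.
    """
    pos = np.logical_and(compartment_mask, cell_labels == 2).sum()
    neg = np.logical_and(compartment_mask, cell_labels == 1).sum()
    total = pos + neg
    return pos / total if total else 0.0
```

Restricting the ratio to the compartment mask is what makes the two-model sequence useful: positivity can be reported separately for epidermis and upper dermis while artefact regions are excluded by construction.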
Surgical context inference has recently garnered significant attention in robot-assisted surgery, as it can facilitate workflow analysis, skill assessment, and error detection. However, runtime context inference is challenging: it requires timely and accurate detection of the interactions among the tools and objects in the surgical scene based on segmentation of video data. Existing state-of-the-art video segmentation methods are often biased against infrequent classes and fail to provide temporal consistency for segmented masks, which can negatively impact context inference and the accurate detection of critical states. In this study, we address these challenges using a Space-Time Correspondence Network (STCN). STCN is a memory network that performs binary segmentation and minimizes the effects of class imbalance. Its memory bank allows past image and segmentation information to be reused, ensuring consistency of the masks. Our experiments on the publicly available JIGSAWS dataset demonstrate that STCN achieves superior segmentation performance for objects that are difficult to segment, such as needle and thread, and improves context inference compared to the state-of-the-art. We also show that segmentation and context inference can be performed at runtime without compromising performance.
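For readers unfamiliar with STCN, the simplified PyTorch sketch below shows the core memory-read step: an affinity between the current frame's keys and the memory-bank keys (negative squared L2 similarity, as in the original STCN paper) is used to aggregate the mask-bearing value features. Tensor shapes and the function name are assumptions chosen for illustration.

```python
import torch
import torch.nn.functional as F

def memory_read(query_key, mem_keys, mem_values):
    """Simplified STCN-style memory read (shapes assumed).

    query_key:  (C, H*W)   key features of the current frame
    mem_keys:   (C, T*H*W) keys of past frames in the memory bank
    mem_values: (D, T*H*W) value features carrying mask information

    Computes an affinity between current-frame and memory locations,
    then aggregates memory values for every query location.
    """
    # Negative squared L2 distance as similarity; the ||query||^2 term
    # is constant per query location and cancels in the softmax.
    a = 2 * mem_keys.t() @ query_key                # (T*H*W, H*W)
    a -= (mem_keys ** 2).sum(0, keepdim=True).t()   # subtract ||k_mem||^2
    affinity = F.softmax(a, dim=0)                  # normalize over memory
    return mem_values @ affinity                    # (D, H*W)

# Toy shapes: 3 memorized frames of a 24x24 feature map.
C, D, HW, T = 64, 512, 24 * 24, 3
out = memory_read(torch.randn(C, HW),
                  torch.randn(C, T * HW),
                  torch.randn(D, T * HW))
print(out.shape)  # torch.Size([512, 576])
```

Because every query location reads from all stored frames, masks stay consistent over time, which is the property the context-inference stage depends on.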