Segmentation, or the classification of pixels (grid cells) in imagery, is ubiquitously applied in the natural sciences. Manual methods are often prohibitively time-consuming, especially for images containing small objects and/or significant spatial heterogeneity of colors or textures. Labeling complicated transitional regions, which in Earth surface imagery are represented by collections of mixed pixels, textures, and spectral signatures, is especially error-prone because such regions are difficult to reliably unmix, identify, and delineate consistently. However, the success of supervised machine learning (ML) approaches is entirely dependent on good label data. We describe a fast, semi-automated method for interactive segmentation of N-dimensional (x, y, N) images into two-dimensional (x, y) label images. It uses human-in-the-loop ML to achieve consensus between the labeler and a model in an iterative workflow. The technique is reproducible; the sequence of decisions made by the human labeler and the ML algorithms can be encoded to file, so the entire process can be played back and new outputs generated with alternative decisions and/or algorithms. We illustrate the scientific potential of the method using six case studies from river, estuarine, and open-coast environments, spanning diverse settings and image types. This photographic and non-photographic imagery consists of one- and three-band data on regular and irregular grids with resolutions ranging from centimeters to tens of meters. We demonstrate high levels of agreement in label images generated by several labelers on the same imagery, and make suggestions for achieving consensus and measuring uncertainty, ideal for widespread application in training supervised ML for image segmentation.

Plain Language Summary

Labeling pixels in scientific images by hand is time-consuming and error-prone, so we would like to train computers to do it for us. We can use automated techniques from Artificial Intelligence (AI), such as Deep Learning, but these need many example images and corresponding labels that have been made by hand. So, we still need to label quite a lot of images at the pixel level, a process called image segmentation. We made a computer program called Doodler that speeds up the process: you label some pixels, and it labels the rest. It is the fastest method we know of for image segmentation because it is semi-automated. It also produces accurate and precise labeling, as we demonstrated by having multiple people use it to label the same images. Because it is so fast and accurate, it allows us to generate enough data to train Deep Learning models to segment all the images we have, from the past and in the future. Doodler therefore enables geoscientists to use AI to extract much more information from their imagery, in service of the geosciences in general.
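As a rough illustration of the human-in-the-loop idea described in the abstract, the sketch below fits a classifier to the sparsely annotated ("doodled") pixels of an (x, y, N) image and predicts a dense (x, y) label image for the labeler to review and correct in the next iteration. The random forest model, per-pixel band features, and class names are assumptions for illustration only, not the actual Doodler implementation.

```python
# Minimal sketch of one human-in-the-loop iteration: a classifier is fit to the
# sparsely annotated ("doodled") pixels of an (x, y, N) image and then predicts
# a dense (x, y) label image. Model and features are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def propagate_labels(image, sparse_labels, n_estimators=100):
    """image: (H, W, N) array; sparse_labels: (H, W) int array where
    0 = unlabeled and 1..K = class doodled by the human labeler.
    Returns a dense (H, W) label estimate for review in the next iteration."""
    H, W, N = image.shape
    features = image.reshape(-1, N)             # one feature vector per pixel
    labels = sparse_labels.reshape(-1)
    annotated = labels > 0                      # pixels the human has doodled

    clf = RandomForestClassifier(n_estimators=n_estimators, n_jobs=-1)
    clf.fit(features[annotated], labels[annotated])
    return clf.predict(features).reshape(H, W)  # dense label image to review

# Example with synthetic data: a 3-band image with two sparsely doodled classes.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
doodles = np.zeros((64, 64), dtype=int)
doodles[2:6, 2:6] = 1       # a few strokes of hypothetical class 1
doodles[50:54, 50:54] = 2   # a few strokes of hypothetical class 2
dense = propagate_labels(img, doodles)
```

In the full workflow, the labeler would inspect the predicted label image, add doodles where it is wrong, and rerun the step until labeler and model agree; each round of doodles and model settings can be written to file so the sequence can be replayed.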
The world’s coastlines are spatially highly variable, coupled human-natural systems that comprise a nested hierarchy of component landforms, ecosystems, and human interventions, each interacting over a range of space and time scales. Understanding and predicting coastline dynamics necessitates frequent observation from imaging sensors on remote sensing platforms. Machine Learning models that carry out supervised (i.e., human-guided) pixel-based classification, or image segmentation, have transformative applications in spatio-temporal mapping of dynamic environments, including transient coastal landforms, sediments, habitats, waterbodies, and water flows. However, these models require large and well-documented training and testing datasets consisting of labeled imagery. We describe “Coast Train,” a multi-labeler dataset of orthomosaic and satellite images of coastal environments and corresponding labels. These data include imagery that are diverse in space and time, and contain 1.2 billion labeled pixels, representing over 3.6 million hectares. We use a human-in-the-loop tool especially designed for rapid and reproducible Earth surface image segmentation. Our approach permits image annotation by multiple labelers, in turn enabling quantification of pixel-level agreement over individual images and collections of images.
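To make "quantification of pixel-level agreement" concrete, the sketch below computes two common pairwise measures, percent agreement and Cohen's kappa, between two labelers' (x, y) label images of the same scene. The function names and synthetic example are hypothetical; the specific agreement metrics reported for Coast Train are described in the paper itself.

```python
# Minimal sketch of pairwise pixel-level agreement between two labelers'
# (x, y) label images: percent agreement and Cohen's kappa. Illustrative only.
import numpy as np

def pixel_agreement(labels_a, labels_b):
    """Fraction of pixels assigned the same class by both labelers."""
    assert labels_a.shape == labels_b.shape
    return float(np.mean(labels_a == labels_b))

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two label images."""
    a, b = labels_a.ravel(), labels_b.ravel()
    classes = np.union1d(a, b)
    po = np.mean(a == b)                                           # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in classes)   # chance agreement
    return float((po - pe) / (1.0 - pe)) if pe < 1.0 else 1.0

# Example: two labelers' label images of the same scene (synthetic here).
rng = np.random.default_rng(1)
labels_a = rng.integers(1, 4, size=(128, 128))
labels_b = labels_a.copy()
labels_b[rng.random(labels_b.shape) < 0.1] = rng.integers(1, 4)    # ~10% disagree
print(pixel_agreement(labels_a, labels_b), cohens_kappa(labels_a, labels_b))
```

Averaging such per-image scores over all images labeled by multiple people gives a collection-level measure of label consistency.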