2022
DOI: 10.3389/fpls.2022.893140

A workflow for segmenting soil and plant X-ray computed tomography images with deep learning in Google’s Colaboratory

Abstract: X-ray micro-computed tomography (X-ray μCT) has enabled the characterization of the properties and processes that take place in plants and soils at the micron scale. Despite the widespread use of this advanced technique, major limitations in both hardware and software limit the speed and accuracy of image processing and data analysis. Recent advances in machine learning, specifically the application of convolutional neural networks to image analysis, have enabled rapid and accurate segmentation of image data. …
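Purely as an illustration of the segmentation idea summarized in the abstract (not the authors' published model or Colab notebooks), a per-pixel CNN classifier for a grayscale μCT slice might be sketched as follows; the class count, layer sizes, and input array are placeholders:

```python
import numpy as np
import tensorflow as tf

n_classes = 3  # placeholder classes, e.g. background, soil matrix, particulate organic matter

# Tiny fully convolutional stand-in: maps a grayscale slice to per-pixel class probabilities.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, 1)),                      # grayscale slice, any size
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(n_classes, 1, activation="softmax"),        # per-pixel class scores
])

ct_slice = np.random.rand(1, 256, 256, 1).astype("float32")  # stand-in for a normalized CT slice
probs = model.predict(ct_slice)        # shape (1, 256, 256, n_classes)
labels = probs.argmax(axis=-1)         # per-pixel class map, shape (1, 256, 256)
print(labels.shape)
```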

Cited by 13 publications (11 citation statements); References 62 publications.

Citation statements (ordered by relevance):
“…We can compare our results with Rippner et al. (2022), despite the fact that they used alternative segmentation quality metric, the F1 score. For this reason, IOU was recalculated to the F1 score based on the methodology of Olczak et al.…”
Section: Discussion (mentioning)
confidence: 79%
“…In soil, a similar approach was implemented in the multiclass segmentation of microtomographic images of soil aggregates with particulate organic matter (POM; Rippner et al, 2022). We can compare our results with Rippner et al (2022), despite the fact that they used alternative segmentation quality metric, the F1 score. For this reason, IOU was recalculated to the F1 score based on the methodology of Olczak et al (2021).…”
Section: Discussion (mentioning)
confidence: 99%
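For binary masks, the recalculation described above follows from a fixed algebraic identity between the Jaccard index (IoU) and the Dice coefficient (F1). A minimal sketch of that standard relation is given below; whether Olczak et al. (2021) use exactly this form is an assumption here:

```python
# Standard binary-segmentation identity: with IoU = TP / (TP + FP + FN) and
# F1 (Dice) = 2*TP / (2*TP + FP + FN), it follows that F1 = 2*IoU / (1 + IoU).
def iou_to_f1(iou: float) -> float:
    """Convert an intersection-over-union score to the equivalent F1 (Dice) score."""
    return 2.0 * iou / (1.0 + iou)

def f1_to_iou(f1: float) -> float:
    """Inverse conversion: F1 (Dice) back to IoU (Jaccard)."""
    return f1 / (2.0 - f1)

print(iou_to_f1(0.80))  # ~0.889; both scores describe the same overlap
```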
“…Models were trained on two image feature classes, hops and background. Model performance was evaluated based on accuracy (the percentage of all correctly classified observations) and F1 score (the harmonic mean of precision and recall) calculated from evaluating the six image/annotation pairs that were not included in the training or validation datasets (Rippner et al., 2022). In this study, accuracy is defined as the percentage of all correctly classified pixels, while the F1 score is defined as the harmonic mean of precision (the % of correct positive predictions out of all positive predictions) and recall (the % of correct positive predictions out of all possible positive predictions).…”
Section: Methods (mentioning)
confidence: 99%
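As a minimal sketch of the metric definitions in the statement above (accuracy over all pixels, F1 as the harmonic mean of precision and recall), assuming binary NumPy masks with 1 = foreground and 0 = background; the array names and example values are placeholders:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, ref: np.ndarray) -> dict:
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.sum(pred & ref)             # correct positive predictions
    fp = np.sum(pred & ~ref)            # false positives
    fn = np.sum(~pred & ref)            # missed positives
    tn = np.sum(~pred & ~ref)           # correct background
    accuracy = (tp + tn) / pred.size                 # % of all correctly classified pixels
    precision = tp / (tp + fp) if tp + fp else 0.0   # correct positives / predicted positives
    recall = tp / (tp + fn) if tp + fn else 0.0      # correct positives / actual positives
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean of precision and recall
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Tiny worked example
pred = np.array([[1, 1], [0, 0]])
ref  = np.array([[1, 0], [0, 0]])
print(segmentation_metrics(pred, ref))  # accuracy 0.75, precision 0.5, recall 1.0, f1 ~0.667
```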
“…Ten models were trained for 250 epochs using a binary cross‐entropy loss function. An Adam optimizer for stochastic optimization with the learning rate set to 0.001 was used; 80% of the image/annotation pairs were used for training and 20% were used for the validation of the model during training (Rippner et al., 2022). The batch size was set to 1, and images were scaled to 0.5 size in both the x and y dimensions for training due to graphics processing unit VRAM limitations when working with large images (5184 × 3456 pixels).…”
Section: Methods (mentioning)
confidence: 99%
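The hyperparameters quoted above (binary cross-entropy loss, Adam with learning rate 0.001, 250 epochs, batch size 1, a train/validation split with 20% held out, images downscaled by half) can be sketched in Keras as follows; the tiny stand-in network and random arrays are assumptions for illustration, not the cited study's architecture or data:

```python
import numpy as np
import tensorflow as tf

def build_binary_segmenter() -> tf.keras.Model:
    # Placeholder fully convolutional model; the cited study's architecture is not reproduced here.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(None, None, 3)),
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),  # two classes: hops vs. background
    ])

# Hypothetical stand-in data; the quoted study scaled 5184 x 3456 images by 0.5 in x and y
# (to 2592 x 1728) before training. Small random arrays are used here so the sketch runs quickly.
images = np.random.rand(10, 256, 256, 3).astype("float32")
masks = np.random.randint(0, 2, size=(10, 256, 256, 1)).astype("float32")

model = build_binary_segmenter()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # Adam, lr = 0.001
              loss="binary_crossentropy",                               # binary cross-entropy loss
              metrics=["accuracy"])
model.fit(images, masks,
          epochs=250,            # 250 epochs, as quoted
          batch_size=1,          # batch size 1, to fit GPU VRAM with large images
          validation_split=0.2)  # 20% held out for validation during training
```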