2022
DOI: 10.1002/essoar.10506807.2
Preprint
U-Net Segmentation Methods for Variable-Contrast XCT Images of Methane-Bearing Sand Using Small Training Datasets

Cited by 1 publication (2 citation statements)
References 31 publications
“…Even though these 2D models are quicker to train and require fewer computational resources than their 3D counterparts (Alvarez-Borges et al, 2022), when predicting a segmentation for a volume, the lack of 3D context available to these models can lead to striping artifacts in the 3D output, especially when viewed in planes other than the one used for prediction. To overcome this, a multi-axis prediction method is used, and the multiple predictions are merged by using maximum probability voting.…”
Section: Statement of Need (mentioning)
confidence: 99%
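The multi-axis prediction and maximum-probability-voting scheme described in the statement above can be sketched as follows. This is a minimal illustration, not the cited authors' implementation: `predict_slice` is a hypothetical stand-in for a trained 2D U-Net forward pass (here a simple intensity sigmoid), and plain NumPy replaces the actual deep learning stack.

```python
import numpy as np

N_CLASSES = 2  # e.g. background vs. gas phase

def predict_slice(sl):
    # Hypothetical stand-in for a trained 2D U-Net: returns per-pixel
    # class probabilities of shape (H, W, N_CLASSES). A sigmoid on raw
    # intensity plays the role of the network here.
    p_fg = 1.0 / (1.0 + np.exp(-(sl - 0.5) * 10.0))
    return np.stack([1.0 - p_fg, p_fg], axis=-1)

def multi_axis_segmentation(vol):
    """Predict slice-by-slice along each of the three orthogonal axes,
    then merge the per-axis probability volumes by maximum probability
    voting to suppress single-axis striping artifacts."""
    axis_probs = []
    for axis in range(3):
        moved = np.moveaxis(vol, axis, 0)               # slice along `axis`
        probs = np.stack([predict_slice(sl) for sl in moved])
        axis_probs.append(np.moveaxis(probs, 0, axis))  # restore layout
    # Maximum probability voting: for each voxel and class, keep the
    # highest probability any axis assigned, then take the argmax class.
    merged = np.maximum.reduce(axis_probs)
    return merged.argmax(axis=-1)

vol = np.zeros((8, 8, 8))
vol[2:6, 2:6, 2:6] = 1.0  # bright cube standing in for a gas bubble
labels = multi_axis_segmentation(vol)
```

Because each axis contributes an independent probability volume, a voxel mislabelled in one prediction plane can be outvoted by the higher-confidence predictions from the other two axes.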
“…In a completely different context, SXCT datasets were collected on a soil system in which methane bubbles were forming in brine amongst sand particles. The utility of a pre-trained 2D U-Net was investigated to segment these variable-contrast image volumes in comparison to a 3D U-Net with no prior training (Alvarez-Borges et al, 2022). In this case, the training data ranged in size from 384³ to 572³ pixels.…”
Section: Real-World Usage (mentioning)
confidence: 99%