2021
DOI: 10.1007/s11837-021-04865-x

The Alignment and Fusion of Multimodal 3D Serial Sectioning Datasets

Cited by 6 publications (1 citation statement)
References 47 publications
“…Since the end goal of image registration here is data fusion, maximizing the similarity between the reference and sensed images leads to a more correct multimodal dataset. This reasoning was used by Nguyen and Rowenhorst [35] in image registration and data fusion of pores from BSE images with EBSD data in an additively manufactured steel sample, where a control-point-free transformation function was found iteratively by optimizing the similarity of binary maps of pores detected in the BSE images and EBSD maps. By assuming that the BSE image and EBSD intensity map in Fig.…”
Section: Evaluation of the Multimodal Data Fusion Workflow
confidence: 99%
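The registration strategy described in the statement above (iteratively optimizing the similarity of binary pore maps rather than matching control points) can be sketched as follows. This is a minimal illustration, not the implementation from Nguyen and Rowenhorst [35]: the Dice coefficient as the similarity metric, a rigid (rotation + translation) transform model, and a Nelder-Mead search are assumptions made here for brevity, and `apply_rigid`, `register_binary_maps`, and their parameters are hypothetical names.

```python
# Minimal sketch of control-point-free registration: search for a rigid
# transform of a binary pore map (e.g., from BSE images) that maximizes its
# overlap with a binary pore map from another modality (e.g., EBSD).
# Metric, transform model, and optimizer are illustrative assumptions.
import numpy as np
from scipy import ndimage, optimize


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-12)


def apply_rigid(image: np.ndarray, theta: float, tx: float, ty: float) -> np.ndarray:
    """Resample a 2D binary image under a rigid transform (theta, tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    centre = (np.array(image.shape) - 1) / 2.0
    # affine_transform maps each output coordinate o to input coordinate
    # rot @ o + offset; this offset rotates about the image centre and shifts.
    offset = centre - rot @ (centre + np.array([ty, tx]))
    warped = ndimage.affine_transform(image.astype(float), rot, offset=offset, order=1)
    return warped > 0.5


def register_binary_maps(sensed: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Iteratively search rigid parameters that maximize Dice similarity."""
    def cost(p):
        theta, tx, ty = p
        return -dice(apply_rigid(sensed, theta, tx, ty), reference)

    # Derivative-free simplex search starting from the identity transform.
    result = optimize.minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
    return result.x  # best-fit (theta, tx, ty)
```

In practice a smoother objective (for instance, overlap of distance-transformed maps) or a coarse-to-fine, multi-start search is usually needed, since binary overlap is piecewise constant in the transform parameters and a simplex search can stall for large initial misalignments.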