2023
DOI: 10.3389/fnbot.2022.1042429

An improved adaptive triangular mesh-based image warping method

Abstract: Stitching two images into a panorama is of vital importance in many computer vision applications, including motion detection and tracking, virtual reality, panoramic photography, and virtual tours. To preserve more local detail and produce panoramas with fewer artifacts, this article presents an improved mesh-based joint-optimization image stitching model. Since mesh-based warps usually use uniform vertices, we take both the matched feature points and the uniform points as grid vertices to strengthen constr…
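
The abstract's key idea is to mix matched feature points with uniform grid points as mesh vertices. The sketch below illustrates that vertex-mixing idea with a plain piecewise-affine triangular-mesh warp; it is not the paper's joint-optimization model (which solves an energy for the vertex positions), and the function name and `uniform_step` parameter are our own assumptions.

```python
import numpy as np
import cv2
from scipy.spatial import Delaunay

def warp_with_adaptive_mesh(img, src_feats, dst_feats, uniform_step=64):
    """Piecewise-affine warp over a triangular mesh whose vertices mix a
    uniform grid with matched feature points (hypothetical sketch)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h + 1:uniform_step, 0:w + 1:uniform_step]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    # Grid vertices map to themselves; feature points map to their matches.
    src = np.vstack([grid, np.float32(src_feats)])
    dst = np.vstack([grid, np.float32(dst_feats)])
    tri = Delaunay(src)                      # triangulate the source vertices
    out = np.zeros_like(img)
    for simplex in tri.simplices:
        s, d = src[simplex], dst[simplex]
        # Skip degenerate (near-collinear) triangles.
        v1, v2 = s[1] - s[0], s[2] - s[0]
        if abs(v1[0] * v2[1] - v1[1] * v2[0]) < 1e-6:
            continue
        A = cv2.getAffineTransform(s, d)     # per-triangle affine map
        warped = cv2.warpAffine(img, A, (w, h))
        mask = np.zeros((h, w), np.uint8)
        cv2.fillConvexPoly(mask, d.astype(np.int32), 255)
        out[mask > 0] = warped[mask > 0]
    return out
```
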

Cited by 5 publications (5 citation statements) · References 32 publications

“…In the overlapping area Ωₒ of I₁ and I₂, the SC method is used to generate a virtual camera whose viewpoint gradually transitions from the viewpoint of I₁ to that of I₂. S₄ and S₅ are the intersection points of the back-projection lines of u₄ and u₅ in the virtual camera with the projection plane n, respectively. The virtual camera's image is generated from images I₁ and I₂ using a perspective transformation.…”
Section: SC Stitching Process
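
A hedged illustration of the transition the quote describes: warp I₁ with a transform interpolated between the identity (I₁'s own view) and a homography toward I₂'s view, strip by strip across the overlap. This uses a naive per-entry matrix blend for illustration only; SC-style methods interpolate actual camera parameters, and `H_1to2`, `out_size`, and `n_strips` are our own names.

```python
import numpy as np
import cv2

def virtual_view_overlap(img1, H_1to2, out_size, n_strips=32):
    """Warp img1 with a homography blended between the identity (I1's
    viewpoint) and H_1to2 (I2's viewpoint), one vertical strip at a time,
    so the synthesized view transitions gradually across the overlap."""
    w, h = out_size
    out = np.zeros((h, w) + img1.shape[2:], dtype=img1.dtype)
    strip_w = max(w // n_strips, 1)
    for k in range(n_strips):
        t = k / max(n_strips - 1, 1)               # 0 at I1's side, 1 at I2's
        H_t = (1.0 - t) * np.eye(3) + t * H_1to2   # naive entry-wise blend
        warped = cv2.warpPerspective(img1, H_t, (w, h))
        x0 = k * strip_w
        out[:, x0:x0 + strip_w] = warped[:, x0:x0 + strip_w]
    return out
```
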
“…Many solutions have been proposed to solve the problems of parallax and perspective deformation in image stitching, so as to improve the quality of stitched images. But most state-of-the-art mesh-based [3][4][5] and multi-plane [6][7][8] methods are time-consuming and vulnerable to false matches.…”
Section: Introduction
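
The false-match vulnerability is commonly mitigated by robust estimation. A generic sketch (not the cited papers' specific strategies), using OpenCV's RANSAC-based homography fit to keep only inlier correspondences:

```python
import numpy as np
import cv2

def filter_false_matches(kpts1, kpts2, matches, reproj_thresh=3.0):
    """Fit a homography with RANSAC and discard outlier matches."""
    src = np.float32([kpts1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kpts2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    inliers = [m for m, ok in zip(matches, inlier_mask.ravel()) if ok]
    return H, inliers
```
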
“…This is time-consuming and does not allow for emergency response. Strategies based on simultaneous localization and mapping (SLAM) [2][3][4] and inter-frame transformation [5][6][7][8][9][10] offer significant speed advantages but suffer from serious cumulative-error problems. Currently, rectification and geographic-coordinate acquisition typically rely on global navigation satellite systems (GNSS) and a position and orientation system (POS) [5,6], but this method is less reliable for emergency mapping tasks in extreme environments, such as GNSS denial.…”
Section: Introduction
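
To make the cumulative-error point concrete: inter-frame strategies compose per-frame transforms, so each estimate's small error is multiplied into every later global transform. A minimal sketch, with names of our own choosing:

```python
import numpy as np

def chain_interframe_homographies(per_frame_Hs):
    """Compose per-frame homographies H_{i->i+1} into global transforms
    H_{0->k}; estimation noise in each factor compounds down the chain,
    which is the drift problem described above."""
    H_global = np.eye(3)
    chained = [H_global.copy()]
    for H in per_frame_Hs:
        H_global = H @ H_global                 # accumulate the motion
        H_global = H_global / H_global[2, 2]    # keep scale normalized
        chained.append(H_global.copy())
    return chained
```
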
“…Another approach [7][8][9][10] is to minimize cumulative error through a keyframe selection strategy and multiple optimization strategies to achieve greater robustness. In addition, with the rapid development of deep learning, many researchers have attempted to use end-to-end deep neural networks to learn frame-to-frame transformation relationships and so avoid error accumulation [11][12][13].…”
Section: Introduction
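
A hypothetical sketch of the keyframe-selection idea: register each frame against the current keyframe and promote a new keyframe when matching quality degrades, so errors chain through a few keyframes rather than through every consecutive frame. `registration_quality` (e.g. a RANSAC inlier ratio) and the threshold are assumptions, not taken from the cited papers:

```python
def select_keyframes(frames, registration_quality, min_quality=0.6):
    """Promote frame i to keyframe once it registers too poorly against
    the current keyframe; returns the keyframe indices."""
    keyframe_ids = [0]                 # the first frame seeds the chain
    for i in range(1, len(frames)):
        if registration_quality(frames[keyframe_ids[-1]], frames[i]) < min_quality:
            keyframe_ids.append(i)     # frame i becomes the new keyframe
    return keyframe_ids
```
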