2023
DOI: 10.48550/arxiv.2302.10663
Preprint

RealFusion: 360° Reconstruction of Any Object from a Single Image

Abstract: RealFusion generates a full 360° reconstruction of any object given a single image of it (Figure 1, left column). It does so by leveraging an existing diffusion-based 2D image generator. From the given image, it synthesizes a prompt that causes the diffusion model to "dream up" other views of the object. It then extracts a neural radiance field from the original image and the diffusion-model-based prior, thereby reconstructing the object in full. Both appearance and geom… (Project page: https://lukemelas.github.io/realfusion)
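The distillation step the abstract describes follows Score Distillation Sampling (SDS), introduced by DreamFusion and adopted by RealFusion: render the 3D representation, noise the rendering, and use the diffusion model's noise prediction as a gradient on the 3D parameters. A minimal numerical sketch under toy assumptions — the "renderer" is an identity map over a parameter vector and the "denoiser" is the analytically optimal predictor for a prior concentrated on one target image; all names here are illustrative, not the authors' code:

```python
import numpy as np

# Toy sketch of Score Distillation Sampling (SDS): the optimization signal
# that lets a 2D diffusion prior supervise a 3D representation.
# Everything here is a stand-in: the "renderer" is an identity map over a
# parameter vector, and the "denoiser" is the analytically optimal
# eps-predictor for a prior concentrated on a single target image.

rng = np.random.default_rng(0)
target = np.array([1.0, -0.5, 2.0])   # the image the toy prior prefers

def render(theta):
    """Differentiable renderer g(theta); identity for simplicity."""
    return theta

def predict_noise(x_noisy, alpha_bar):
    """Optimal eps-prediction if the data distribution were exactly {target}."""
    return (x_noisy - np.sqrt(alpha_bar) * target) / np.sqrt(1.0 - alpha_bar)

theta = rng.normal(size=3)   # parameters being optimized (a NeRF, in the paper)
lr = 0.05
for step in range(500):
    x = render(theta)
    t = rng.integers(1, 1000)                          # random timestep
    alpha_bar = np.cos(0.5 * np.pi * t / 1000) ** 2    # simple cosine schedule
    eps = rng.normal(size=3)
    x_noisy = np.sqrt(alpha_bar) * x + np.sqrt(1.0 - alpha_bar) * eps
    # SDS gradient: w(t) * (eps_hat - eps), skipping the diffusion model's
    # Jacobian; the renderer's Jacobian is the identity here.
    eps_hat = predict_noise(x_noisy, alpha_bar)
    theta -= lr * (1.0 - alpha_bar) * (eps_hat - eps)

print(np.round(theta, 2))   # theta has been pulled toward `target`
```

The key property SDS exploits is visible even in this toy: the expected update direction is proportional to the prior's score at the noised rendering, so gradient descent drags the rendered output toward images the diffusion model considers likely, without ever backpropagating through the denoiser itself.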

Cited by 3 publications (9 citation statements)
References 38 publications
“…Diffusion models are a class of generative models that make use of a Markovian noising process to iteratively reverse the noise. In recent years, several researchers [28]-[31] have explored the use of diffusion models in conjunction with radiance fields and have demonstrated excellent results in tasks such as conditional synthesis, completion, and other related tasks [32], [33].…”
Section: B. Novel View Synthesis
confidence: 99%
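The "Markovian noising process" the quoted statement refers to is the DDPM-style forward chain, which the generative model learns to reverse. A minimal illustration — the linear beta schedule and all variable names are common conventions assumed here, not taken from the cited papers:

```python
import numpy as np

# Sketch of the DDPM-style Markovian forward-noising process: each step adds
# Gaussian noise, q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I),
# and the generative model is trained to reverse this chain step by step.

rng = np.random.default_rng(1)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # per-step noise variances (assumed schedule)
alpha_bars = np.cumprod(1.0 - betas)      # \bar{alpha}_t = prod_s (1 - beta_s)

x0 = rng.normal(size=4)                   # a toy "image"

def q_sample(x0, t):
    """Closed-form marginal q(x_t | x_0) implied by the Markov chain."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

x_early = q_sample(x0, 10)     # still close to x0
x_late = q_sample(x0, T - 1)   # nearly pure Gaussian noise
```

Because the chain is Markovian with Gaussian steps, the marginal at any timestep collapses to the single closed-form `q_sample` above; by the final step `alpha_bars[-1]` is essentially zero, so `x_late` is indistinguishable from pure noise — the starting point the reverse (generative) process is trained from.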
“…The most related to our work is DreamFusion [26], which introduced Score Distillation Sampling (SDS) for creation of 3D assets, leveraging the power of text-to-image diffusion models. Despite the flexible merit of SDS to enable the optimization of arbitrary differentiable operators, most works mainly focus on applying SDS to enhance the synthesis quality of 3D scenes by introducing 3D specific frameworks [48,49,50,51,52]. Although there exists some work to apply SDS for visual domains other than 3D scenes, they have limited their scope to image editing [53], or image generation [54].…”
Section: Related Work
confidence: 99%
“…To leverage the capability of 2D generative models, one line of approaches [Melas-Kyriazi et al. 2023; Poole et al. 2022] propose Distillation Sampling to generate 3D assets from texts. These approaches usually suffer from low diversity, oversaturation, and multi-face problems.…”
Section: Multi-view/3D Generation With 2D Generation
confidence: 99%
“…Diffusion models are a class of generative models that make use of a Markovian noising process to iteratively reverse the noise. In recent years, several researchers [Lin et al. 2023b; Liu et al. 2023b; Melas-Kyriazi et al. 2023; Poole et al. 2022; Shi et al. 2023a] have explored the use of diffusion models in conjunction with radiance fields and have demonstrated excellent results in tasks such as conditional synthesis, completion, and other related tasks [Zeng et al. 2022; Zhou et al. 2021].…”
Section: Novel View Synthesis
confidence: 99%