dreamfusion3d.github.io - DreamFusion: Text-to-3D using 2D Diffusion


Abstract

Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs. Adapting this approach to 3D synthesis would require large-scale datasets of labeled 3D assets and efficient architectures for denoising 3D data, neither of which currently exists. In this work, we circumvent these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis. We introduce a loss based on probability density distillation that enables the use of a 2D diffusion model as a prior for the optimization of a parametric image generator.
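The distillation loss described above (Score Distillation Sampling in the paper) can be sketched in a few lines: noise a rendered image according to the diffusion schedule, ask the pretrained denoiser for its noise estimate, and push the renderer's parameters in the direction that reduces the residual. The denoiser, noise schedule, and weighting below are toy stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dummy_denoiser(x_noisy, t, text_embedding):
    # Stand-in for a pretrained 2D diffusion model's noise prediction.
    # A real model would condition on the text prompt embedding.
    return 0.1 * x_noisy

def sds_gradient(render, t, alpha_bar_t, text_embedding):
    # Score Distillation Sampling gradient:
    #   grad ~ w(t) * (eps_hat(x_t, t; y) - eps)
    # where the U-Net Jacobian is omitted, as in DreamFusion.
    eps = rng.standard_normal(render.shape)
    x_t = np.sqrt(alpha_bar_t) * render + np.sqrt(1.0 - alpha_bar_t) * eps
    eps_hat = dummy_denoiser(x_t, t, text_embedding)
    w_t = 1.0 - alpha_bar_t  # one possible weighting choice (assumption)
    return w_t * (eps_hat - eps)

# Toy optimization loop: 'params' stands in for the 3D scene parameters;
# its "rendering" here is just the parameter array itself.
params = rng.standard_normal((8, 8))
lr = 0.01
for step in range(100):
    t = rng.uniform(0.02, 0.98)
    alpha_bar_t = 1.0 - t  # toy linear schedule (assumption)
    g = sds_gradient(params, t, alpha_bar_t, text_embedding=None)
    params -= lr * g
```

In the actual method, `params` are the weights of a Neural Radiance Field and `render` is a differentiable rendering from a random camera, so the same 2D loss shapes a 3D asset.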

DreamFusion generates objects and scenes from diverse captions. Search through hundreds of generated assets in our full gallery.

Composing objects into a scene
