zzfive's Collections
TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion
Paper • 2401.09416 • Published • 10
SHINOBI: Shape and Illumination using Neural Object Decomposition via BRDF Optimization In-the-wild
Paper • 2401.10171 • Published • 13
DMV3D: Denoising Multi-View Diffusion using 3D Large Reconstruction Model
Paper • 2311.09217 • Published • 21
GALA: Generating Animatable Layered Assets from a Single Scan
Paper • 2401.12979 • Published • 7
ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields
Paper • 2401.17895 • Published • 15
Advances in 3D Generation: A Survey
Paper • 2401.17807 • Published • 17
AToM: Amortized Text-to-Mesh using 2D Diffusion
Paper • 2402.00867 • Published • 10
GaussianObject: Just Taking Four Images to Get A High-Quality 3D Object with Gaussian Splatting
Paper • 2402.10259 • Published • 13
MVDiffusion++: A Dense High-resolution Multi-view Diffusion Model for Single or Sparse-view 3D Object Reconstruction
Paper • 2402.12712 • Published • 17
FlashTex: Fast Relightable Mesh Texturing with LightControlNet
Paper • 2402.13251 • Published • 13
Consolidating Attention Features for Multi-view Image Editing
Paper • 2402.14792 • Published • 7
MVD^2: Efficient Multiview 3D Reconstruction for Multiview Diffusion
Paper • 2402.14253 • Published • 5
ViewFusion: Towards Multi-View Consistency via Interpolated Denoising
Paper • 2402.18842 • Published • 13
TripoSR: Fast 3D Object Reconstruction from a Single Image
Paper • 2403.02151 • Published • 12
ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models
Paper • 2403.01807 • Published • 7
CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model
Paper • 2403.05034 • Published • 20
3D-VLA: A 3D Vision-Language-Action Generative World Model
Paper • 2403.09631 • Published • 7
GVGEN: Text-to-3D Generation with Volumetric Representation
Paper • 2403.12957 • Published • 5
GaussianFlow: Splatting Gaussian Dynamics for 4D Content Creation
Paper • 2403.12365 • Published • 10
TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation
Paper • 2403.12906 • Published • 5
Compress3D: a Compressed Latent Space for 3D Generation from a Single Image
Paper • 2403.13524 • Published • 8
VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation
Paper • 2403.17001 • Published • 6
Gamba: Marry Gaussian Splatting with Mamba for single view 3D reconstruction
Paper • 2403.18795 • Published • 18
GaussianCube: Structuring Gaussian Splatting using Optimal Transport for 3D Generative Modeling
Paper • 2403.19655 • Published • 18
Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation
Paper • 2403.19319 • Published • 12
FlexiDreamer: Single Image-to-3D Generation with FlexiCubes
Paper • 2404.00987 • Published • 21
PointInfinity: Resolution-Invariant Point Diffusion Models
Paper • 2404.03566 • Published • 13
Robust Gaussian Splatting
Paper • 2404.04211 • Published • 8
Magic-Boost: Boost 3D Generation with Mutli-View Conditioned Diffusion
Paper • 2404.06429 • Published • 6
MonoPatchNeRF: Improving Neural Radiance Fields with Patch-based Monocular Guidance
Paper • 2404.08252 • Published • 5
CompGS: Efficient 3D Scene Representation via Compressed Gaussian Splatting
Paper • 2404.09458 • Published • 6
Taming Latent Diffusion Model for Neural Radiance Field Inpainting
Paper • 2404.09995 • Published • 6
MeshLRM: Large Reconstruction Model for High-Quality Mesh
Paper • 2404.12385 • Published • 26
Interactive3D: Create What You Want by Interactive 3D Generation
Paper • 2404.16510 • Published • 18
CAT3D: Create Anything in 3D with Multi-View Diffusion Models
Paper • 2405.10314 • Published • 45
Dual3D: Efficient and Consistent Text-to-3D Generation with Dual-mode Multi-view Latent Diffusion
Paper • 2405.09874 • Published • 16
Dreamer XL: Towards High-Resolution Text-to-3D Generation via Trajectory Score Matching
Paper • 2405.11252 • Published • 12
CraftsMan: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner
Paper • 2405.14979 • Published • 15
HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting
Paper • 2405.15125 • Published • 5
Vidu4D: Single Generated Video to High-Fidelity 4D Reconstruction with Dynamic Gaussian Surfels
Paper • 2405.16822 • Published • 11
Part123: Part-aware 3D Reconstruction from a Single-view Image
Paper • 2405.16888 • Published • 11
GFlow: Recovering 4D World from Monocular Video
Paper • 2405.18426 • Published • 15
3DitScene: Editing Any Scene via Language-guided Disentangled Gaussian Splatting
Paper • 2405.18424 • Published • 7
NPGA: Neural Parametric Gaussian Avatars
Paper • 2405.19331 • Published • 10
GECO: Generative Image-to-3D within a SECOnd
Paper • 2405.20327 • Published • 9
PLA4D: Pixel-Level Alignments for Text-to-4D Gaussian Splatting
Paper • 2405.19957 • Published • 9
4Diffusion: Multi-view Video Diffusion Model for 4D Generation
Paper • 2405.20674 • Published • 12
Ouroboros3D: Image-to-3D Generation via 3D-aware Recursive Diffusion
Paper • 2406.03184 • Published • 19
4Real: Towards Photorealistic 4D Scene Generation via Video Diffusion Models
Paper • 2406.07472 • Published • 11
Physics3D: Learning Physical Properties of 3D Gaussians via Video Diffusion
Paper • 2406.04338 • Published • 34
3D-GRAND: A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination
Paper • 2406.05132 • Published • 27
Real3D: Scaling Up Large Reconstruction Models with Real-World Images
Paper • 2406.08479 • Published • 6
LRM-Zero: Training Large Reconstruction Models with Synthesized Data
Paper • 2406.09371 • Published • 4
GaussianSR: 3D Gaussian Super-Resolution with 2D Diffusion Priors
Paper • 2406.10111 • Published • 6
MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers
Paper • 2406.10163 • Published • 32
L4GM: Large 4D Gaussian Reconstruction Model
Paper • 2406.10324 • Published • 13
ClotheDreamer: Text-Guided Garment Generation with 3D Gaussians
Paper • 2406.16815 • Published • 7
YouDream: Generating Anatomically Controllable Consistent Text-to-3D Animals
Paper • 2406.16273 • Published • 40
GaussianDreamerPro: Text to Manipulable 3D Gaussians with Highly Enhanced Quality
Paper • 2406.18462 • Published • 11
Tailor3D: Customized 3D Assets Editing and Generation with Dual-Side Images
Paper • 2407.06191 • Published • 12
RodinHD: High-Fidelity 3D Avatar Generation with Diffusion Models
Paper • 2407.06938 • Published • 23
Controlling Space and Time with Diffusion Models
Paper • 2407.07860 • Published • 16
StyleSplat: 3D Object Style Transfer with Gaussian Splatting
Paper • 2407.09473 • Published • 11
CharacterGen: Efficient 3D Character Generation from Single Images with Multi-View Pose Canonicalization
Paper • 2402.17214 • Published • 1
DreamCatalyst: Fast and High-Quality 3D Editing via Controlling Editability and Identity Preservation
Paper • 2407.11394 • Published • 11
Animate3D: Animating Any 3D Model with Multi-view Video Diffusion
Paper • 2407.11398 • Published • 8
Click-Gaussian: Interactive Segmentation to Any 3D Gaussians
Paper • 2407.11793 • Published • 3
Splatfacto-W: A Nerfstudio Implementation of Gaussian Splatting for Unconstrained Photo Collections
Paper • 2407.12306 • Published • 5
Shape of Motion: 4D Reconstruction from a Single Video
Paper • 2407.13764 • Published • 19
PlacidDreamer: Advancing Harmony in Text-to-3D Generation
Paper • 2407.13976 • Published • 5
BoostMVSNeRFs: Boosting MVS-based NeRFs to Generalizable View Synthesis in Large-scale Scenes
Paper • 2407.15848 • Published • 17
HoloDreamer: Holistic 3D Panoramic World Generation from Text Descriptions
Paper • 2407.15187 • Published • 12
Temporal Residual Jacobians For Rig-free Motion Transfer
Paper • 2407.14958 • Published • 5
F-HOI: Toward Fine-grained Semantic-Aligned 3D Human-Object Interactions
Paper • 2407.12435 • Published • 14
SV4D: Dynamic 3D Content Generation with Multi-Frame and Multi-View Consistency
Paper • 2407.17470 • Published • 14
DreamCar: Leveraging Car-specific Prior for in-the-wild 3D Car Reconstruction
Paper • 2407.16988 • Published • 7
Floating No More: Object-Ground Reconstruction from a Single Image
Paper • 2407.18914 • Published • 20
Cycle3D: High-quality and Consistent Image-to-3D Generation via Generation-Reconstruction Cycle
Paper • 2407.19548 • Published • 25
Expressive Whole-Body 3D Gaussian Avatar
Paper • 2407.21686 • Published • 7
Improving 2D Feature Representations by 3D-Aware Fine-Tuning
Paper • 2407.20229 • Published • 7
NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields
Paper • 2404.01300 • Published • 4
SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement
Paper • 2408.00653 • Published • 28
TexGen: Text-Guided 3D Texture Generation with Multi-view Sampling and Resampling
Paper • 2408.01291 • Published • 11
MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization
Paper • 2408.02555 • Published • 28
An Object is Worth 64x64 Pixels: Generating 3D Object via Image Diffusion
Paper • 2408.03178 • Published • 38
RayGauss: Volumetric Gaussian-Based Ray Casting for Photorealistic Novel View Synthesis
Paper • 2408.03356 • Published • 9
Compact 3D Gaussian Splatting for Static and Dynamic Radiance Fields
Paper • 2408.03822 • Published • 14
Sketch2Scene: Automatic Generation of Interactive 3D Game Scenes from User's Casual Sketches
Paper • 2408.04567 • Published • 24
FruitNeRF: A Unified Neural Radiance Field based Fruit Counting Framework
Paper • 2408.06190 • Published • 17
HeadGAP: Few-shot 3D Head Avatar via Generalizable Gaussian Priors
Paper • 2408.06019 • Published • 13
SlotLifter: Slot-guided Feature Lifting for Learning Object-centric Radiance Fields
Paper • 2408.06697 • Published • 14
3D Gaussian Editing with A Single Image
Paper • 2408.07540 • Published • 10
MVInpainter: Learning Multi-View Consistent Inpainting to Bridge 2D and 3D Editing
Paper • 2408.08000 • Published • 8
MeshFormer: High-Quality Mesh Generation with 3D-Guided Reconstruction Model
Paper • 2408.10198 • Published • 32
SpaRP: Fast 3D Object Reconstruction and Pose Estimation from Sparse Views
Paper • 2408.10195 • Published • 12
ShapeSplat: A Large-scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining
Paper • 2408.10906 • Published • 3
DreamCinema: Cinematic Transfer with Free Camera and 3D Character
Paper • 2408.12601 • Published • 28
Subsurface Scattering for 3D Gaussian Splatting
Paper • 2408.12282 • Published • 6
LayerPano3D: Layered 3D Panorama for Hyper-Immersive Scene Generation
Paper • 2408.13252 • Published • 24
T3M: Text Guided 3D Human Motion Synthesis from Speech
Paper • 2408.12885 • Published • 10
FLoD: Integrating Flexible Level of Detail into 3D Gaussian Splatting for Customizable Rendering
Paper • 2408.12894 • Published • 4
MagicMan: Generative Novel View Synthesis of Humans with 3D-Aware Diffusion and Iterative Refinement
Paper • 2408.14211 • Published • 10
Towards Realistic Example-based Modeling via 3D Gaussian Stitching
Paper • 2408.15708 • Published • 7
ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model
Paper • 2408.16767 • Published • 30
SAM2Point: Segment Any 3D as Videos in Zero-shot and Promptable Manners
Paper • 2408.16768 • Published • 26
3D Reconstruction with Spatial Memory
Paper • 2408.16061 • Published • 14
GST: Precise 3D Human Body from a Single Image with Gaussian Splatting Transformers
Paper • 2409.04196 • Published • 14
UniDet3D: Multi-dataset Indoor 3D Object Detection
Paper • 2409.04234 • Published • 7
Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models
Paper • 2409.07452 • Published • 20
FlashSplat: 2D to 3D Gaussian Splatting Segmentation Solved Optimally
Paper • 2409.08270 • Published • 9
Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion
Paper • 2409.11406 • Published • 25
SplatFields: Neural Gaussian Splats for Sparse 3D and 4D Reconstruction
Paper • 2409.11211 • Published • 8
Vista3D: Unravel the 3D Darkside of a Single Image
Paper • 2409.12193 • Published • 9
3DTopia-XL: Scaling High-quality 3D Asset Generation via Primitive Diffusion
Paper • 2409.12957 • Published • 18
FlexiTex: Enhancing Texture Generation with Visual Guidance
Paper • 2409.12431 • Published • 11
3DGS-LM: Faster Gaussian-Splatting Optimization with Levenberg-Marquardt
Paper • 2409.12892 • Published • 5
Portrait Video Editing Empowered by Multimodal Generative Priors
Paper • 2409.13591 • Published • 16
DreamWaltz-G: Expressive 3D Gaussian Avatars from Skeleton-Guided 2D Diffusion
Paper • 2409.17145 • Published • 13
Game4Loc: A UAV Geo-Localization Benchmark from Game Data
Paper • 2409.16925 • Published • 6
TalkinNeRF: Animatable Neural Fields for Full-Body Talking Humans
Paper • 2409.16666 • Published • 5
LLaVA-3D: A Simple yet Effective Pathway to Empowering LMMs with 3D-awareness
Paper • 2409.18125 • Published • 33
Disco4D: Disentangled 4D Human Generation and Animation from a Single Image
Paper • 2409.17280 • Published • 9
MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion
Paper • 2410.03825 • Published • 18
RoCoTex: A Robust Method for Consistent Texture Synthesis with Diffusion Models
Paper • 2409.19989 • Published • 17
Semantic Score Distillation Sampling for Compositional Text-to-3D Generation
Paper • 2410.09009 • Published • 14
GS^3: Efficient Relighting with Triple Gaussian Splatting
Paper • 2410.11419 • Published • 11
Long-LRM: Long-sequence Large Reconstruction Model for Wide-coverage Gaussian Splats
Paper • 2410.12781 • Published • 5
FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors
Paper • 2410.16271 • Published • 81
SpectroMotion: Dynamic 3D Reconstruction of Specular Scenes
Paper • 2410.17249 • Published • 41
3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors
Paper • 2410.16266 • Published • 4
DynamicCity: Large-Scale LiDAR Generation from Dynamic Scenes
Paper • 2410.18084 • Published • 13
LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias
Paper • 2410.17242 • Published • 3
MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms
Paper • 2410.18977 • Published • 14
Dynamic 3D Gaussian Tracking for Graph-Based Neural Dynamics Modeling
Paper • 2410.18912 • Published • 6
MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D
Paper • 2411.02336 • Published • 23
GenXD: Generating Any 3D and 4D Scenes
Paper • 2411.02319 • Published • 20
AutoVFX: Physically Realistic Video Editing from Natural Language Instructions
Paper • 2411.02394 • Published • 17
DreamPolish: Domain Score Distillation With Progressive Geometry Generation
Paper • 2411.01602 • Published • 10
GarVerseLOD: High-Fidelity 3D Garment Reconstruction from a Single In-the-Wild Image using a Dataset with Levels of Details
Paper • 2411.03047 • Published • 8
DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion
Paper • 2411.04928 • Published • 48
StdGEN: Semantic-Decomposed 3D Character Generation from Single Images
Paper • 2411.05738 • Published • 14
KMM: Key Frame Mask Mamba for Extended Motion Generation
Paper • 2411.06481 • Published • 4
SAMPart3D: Segment Any Part in 3D Objects
Paper • 2411.07184 • Published • 26
Wavelet Latent Diffusion (Wala): Billion-Parameter 3D Generative Model with Compact Wavelet Encodings
Paper • 2411.08017 • Published • 11
LLaMA-Mesh: Unifying 3D Mesh Generation with Language Models
Paper • 2411.09595 • Published • 71
GaussianAnything: Interactive Point Cloud Latent Diffusion for 3D Generation
Paper • 2411.08033 • Published • 22
VeGaS: Video Gaussian Splatting
Paper • 2411.11024 • Published • 6
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation
Paper • 2411.14384 • Published • 9
Material Anything: Generating Materials for Any 3D Object via Diffusion
Paper • 2411.15138 • Published • 42
SplatFlow: Multi-View Rectified Flow Model for 3D Gaussian Splatting Synthesis
Paper • 2411.16443 • Published • 9
Paper • 2411.13550 • Published • 4
TEXGen: a Generative Diffusion Model for Mesh Textures
Paper • 2411.14740 • Published • 15
Learning 3D Representations from Procedural 3D Programs
Paper • 2411.17467 • Published • 8
SAR3D: Autoregressive 3D Object Generation and Understanding via Multi-scale 3D VQVAE
Paper • 2411.16856 • Published • 11
CAT4D: Create Anything in 4D with Multi-View Video Diffusion Models
Paper • 2411.18613 • Published • 50
MARVEL-40M+: Multi-Level Visual Elaboration for High-Fidelity Text-to-3D Content Creation
Paper • 2411.17945 • Published • 24
Make-It-Animatable: An Efficient Framework for Authoring Animation-Ready 3D Characters
Paper • 2411.18197 • Published • 14
DisCoRD: Discrete Tokens to Continuous Motion via Rectified Flow Decoding
Paper • 2411.19527 • Published • 10
SOLAMI: Social Vision-Language-Action Modeling for Immersive Interaction with 3D Autonomous Characters
Paper • 2412.00174 • Published • 22
World-consistent Video Diffusion with Explicit 3D Modeling
Paper • 2412.01821 • Published • 4
Imagine360: Immersive 360 Video Generation from Perspective Anchor
Paper • 2412.03552 • Published • 26
Distilling Diffusion Models to Efficient 3D LiDAR Scene Completion
Paper • 2412.03515 • Published • 25
Video-3D LLM: Learning Position-Aware Video Representation for 3D Scene Understanding
Paper • 2412.00493 • Published • 16
MIDI: Multi-Instance Diffusion for Single Image to 3D Scene Generation
Paper • 2412.03558 • Published • 15
Structured 3D Latents for Scalable and Versatile 3D Generation
Paper • 2412.01506 • Published • 50
MV-Adapter: Multi-view Consistent Image Generation Made Easy
Paper • 2412.03632 • Published • 23
Momentum-GS: Momentum Gaussian Self-Distillation for High-Quality Large Scene Reconstruction
Paper • 2412.04887 • Published • 16
2DGS-Room: Seed-Guided 2D Gaussian Splatting with Geometric Constrains for High-Fidelity Indoor Scene Reconstruction
Paper • 2412.03428 • Published • 10
You See it, You Got it: Learning 3D Creation on Pose-Free Videos at Scale
Paper • 2412.06699 • Published • 11
MAtCha Gaussians: Atlas of Charts for High-Quality Geometry and Photorealism From Sparse Views
Paper • 2412.06767 • Published • 6
Turbo3D: Ultra-fast Text-to-3D Generation
Paper • 2412.04470 • Published • 3
Neural LightRig: Unlocking Accurate Object Normal and Material Estimation with Multi-Light Diffusion
Paper • 2412.09593 • Published • 17
PIG: Physics-Informed Gaussians as Adaptive Parametric Mesh Representations
Paper • 2412.05994 • Published • 17
GenEx: Generating an Explorable World
Paper • 2412.09624 • Published • 87
IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations
Paper • 2412.12083 • Published • 12
GaussianProperty: Integrating Physical Properties to 3D Gaussians with LMMs
Paper • 2412.11258 • Published • 13
DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation
Paper • 2412.15200 • Published • 9
Sequence Matters: Harnessing Video Models in 3D Super-Resolution
Paper • 2412.11525 • Published • 10
3DGraphLLM: Combining Semantic Graphs and Large Language Models for 3D Scene Understanding
Paper • 2412.18450 • Published • 32
DepthLab: From Partial to Complete
Paper • 2412.18153 • Published • 33
PartGen: Part-level 3D Generation and Reconstruction with Multi-View Diffusion Models
Paper • 2412.18608 • Published • 11
Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models
Paper • 2412.18605 • Published • 17