Check out our Project Page for videos!
High-quality 3D assets are essential for various applications in computer graphics and 3D vision but remain scarce due to significant acquisition costs. To address this shortage, we introduce Elevate3D, a novel framework that transforms readily accessible low-quality 3D assets into higher-quality ones. At the core of Elevate3D is HFS-SDEdit, a specialized texture enhancement method that significantly improves texture quality while preserving the appearance and geometry of the input and repairing its degradations. Furthermore, Elevate3D operates in a view-by-view manner, alternating between texture and geometry refinement. Unlike previous methods that have largely overlooked geometry refinement, our framework leverages geometric cues from images refined with HFS-SDEdit by employing state-of-the-art monocular geometry predictors. This approach ensures detailed and accurate geometry that aligns seamlessly with the enhanced texture. Elevate3D outperforms recent competitors by achieving state-of-the-art quality in 3D model refinement, effectively addressing the scarcity of high-quality open-source 3D assets.
2025.07.23
- Initial code release.
2025.08.01
- Uploaded data pre-processing code.
To Do
- Add Hugging Face demo.
- OS: Tested only on Linux.
- Hardware: We recommend an NVIDIA GPU with at least 48GB of memory due to FLUX's memory requirements. The code has been verified on NVIDIA A6000 GPUs.
- Software:
- NVIDIA Driver & CUDA Toolkit 12.0 or later.
- Conda for environment management.
- Python 3.10 or higher.
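Given the 48GB recommendation above, it can be worth verifying available GPU memory up front, e.g. with `nvidia-smi --query-gpu=memory.total --format=csv,noheader`. A small helper for parsing that output (hypothetical, not part of this repo):

```python
def gpu_memory_mib(nvidia_smi_csv: str) -> list[int]:
    """Parse the output of
    `nvidia-smi --query-gpu=memory.total --format=csv,noheader`
    (one line per GPU, e.g. "49140 MiB") into a list of sizes in MiB."""
    sizes = []
    for line in nvidia_smi_csv.strip().splitlines():
        value, unit = line.split()
        if unit != "MiB":  # guard against unexpected nvidia-smi formats
            raise ValueError(f"unexpected unit: {unit}")
        sizes.append(int(value))
    return sizes
```

For example, `gpu_memory_mib("49140 MiB")` returns `[49140]`.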
- Clone the repo:

  ```shell
  git clone --recurse-submodules https://github.com/ryunuri/Elevate3D.git
  cd Elevate3D
  ```
- Set up the environment: Create and activate the `elevate3d` conda environment using the provided file.

  ```shell
  conda env create -f environment.yml --name elevate3d
  conda activate elevate3d
  ```
- Download Models & Dependencies: You need to download pre-trained models and build one external dependency.

  A. Download Checkpoints: Our framework relies on several off-the-shelf models. Some will be downloaded automatically from Hugging Face, but others need to be placed manually.

  ```shell
  # Create directories for checkpoints
  mkdir -p Checkpoints/sam
  # Download the Segment Anything Model (SAM) checkpoint
  wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth -P Checkpoints/sam/
  ```

  B. Build PoissonRecon: The geometry refinement step uses Poisson Surface Reconstruction. You need to build the executable from source.

  ```shell
  # Clone the PoissonRecon repository
  git clone https://github.com/mkazhdan/PoissonRecon.git
  # Navigate and build the executable
  cd PoissonRecon/
  make
  cd ..
  ```
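Elevate3D invokes the compiled PoissonRecon binary from the path set in your config. As an illustration of how such a call is typically composed (`--in`, `--out`, and `--depth` are standard PoissonRecon flags; the paths and the depth value here are hypothetical, and the pipeline's actual invocation may use different options):

```python
def poisson_recon_cmd(binary: str, points_ply: str, mesh_out: str,
                      depth: int = 10) -> list[str]:
    """Compose an argv list for the PoissonRecon CLI.
    `--in` expects an oriented point cloud, `--out` the mesh to write,
    and `--depth` controls the octree depth (reconstruction detail)."""
    return [binary, "--in", points_ply, "--out", mesh_out,
            "--depth", str(depth)]
```

The resulting list can be passed directly to `subprocess.run(...)`.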
Before running, make sure you have downloaded the necessary example data and configured your .yaml file with the correct paths to the checkpoints and PoissonRecon executable.
Before running the examples, you need to download the sample 2D images and low-quality 3D models.
- Download the file: Click the link below to download `Inputs.zip` from Google Drive.
- Unzip the file: Move the downloaded `Inputs.zip` to the root of this project directory (e.g., `Elevate3D/`). Then, run the following command in your terminal to create an `./Inputs` folder and extract the files into it:

  ```shell
  # Make sure Inputs.zip is in the current project directory
  unzip Inputs.zip
  ```
Before running the examples, the directory structure should look like this:

```
Elevate3D/
├── Checkpoints/            <-- For pre-trained models
│   └── sam/
│       └── sam_vit_h_4b8939.pth
├── Inputs/                 <-- Example data you just downloaded
│   ├── 2D/
│   └── 3D/
├── PoissonRecon/           <-- For geometry processing
│   └── Bin/
│       └── Linux/
│           └── PoissonRecon   <-- The compiled executable
├── ... (other project files)
└── README.md
```
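As a quick sanity check against the tree above, a small helper like the following (hypothetical, not shipped with the repo) can report anything missing before you run:

```python
from pathlib import Path

# Paths taken from the expected directory tree above
REQUIRED = [
    "Checkpoints/sam/sam_vit_h_4b8939.pth",
    "Inputs/2D",
    "Inputs/3D",
    "PoissonRecon/Bin/Linux/PoissonRecon",
]

def missing_paths(root: str, required=REQUIRED) -> list[str]:
    """Return the required paths (relative to `root`) that do not exist."""
    root_path = Path(root)
    return [p for p in required if not (root_path / p).exists()]
```

Running `missing_paths(".")` from the project root should return an empty list when setup is complete.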
An example of using HFS-SDEdit for 2D image refinement. This runs our texture enhancement module on a single image.

```shell
python -m FLUX.flux_HFS-SDEdit
```
This script runs the complete Elevate3D pipeline on an example model. It will perform iterative texture and geometry refinement.

```shell
bash run_3d_refine_script_example.sh
```
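At a high level, the pipeline alternates texture and geometry refinement per view, as described in the abstract. A heavily simplified sketch of that loop with hypothetical stand-in callables (the actual implementation lives in `main_refactor.py`):

```python
def refine(model, views, refine_texture, refine_geometry):
    """View-by-view refinement loop: for each rendered view, first enhance
    the texture (HFS-SDEdit in the paper), then update the geometry using
    cues from the refined image (monocular geometry predictors in the
    paper). `refine_texture`/`refine_geometry` are hypothetical callables
    standing in for the real modules."""
    for view in views:
        model = refine_texture(model, view)
        model = refine_geometry(model, view)
    return model
```

This is only a conceptual sketch; the real pipeline also handles texture baking, remeshing, and view selection.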
The process is broken into two stages: Data Pre-processing and Model Refinement.
We will use a sample `knight` model, generated with TRELLIS, as a running example.
This step takes your input 3D model and prepares it for the refinement stage.
1. Get the Example Files: Download the example data from this link
After unzipping, you will have:

- `knight.glb`: The low-quality input 3D model.
- `knight.txt`: A text file describing the target appearance.
Organize the inputs as follows:

```shell
# Create the directory structure
mkdir -p Inputs/3D/MyData/knight
# Move the downloaded files into it
mv knight.glb Inputs/3D/MyData/knight
mv knight.txt Inputs/3D/MyData/knight
```
2. Execute the Pre-processing Script
Run the provided shell script, passing the path to your object's directory. This script handles model normalization and multi-view rendering.

```shell
bash run_render_script.sh
```
The script will populate your directory with new files:
```
Inputs/3D/MyData/
└── knight/
    ├── knight_normalized.obj
    ├── prompt.txt
    ├── train_0_0.png
    └── ... (many more rendered images)
```
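The normalization step typically centers the mesh and rescales it to a canonical size. A minimal sketch of one common convention (unit-cube normalization on the vertex array; `run_render_script.sh` may use a different convention):

```python
import numpy as np

def normalize_vertices(vertices: np.ndarray) -> np.ndarray:
    """Center an (N, 3) vertex array on the origin and scale it so the
    longest side of its bounding box is 1. This is one common
    normalization convention, shown for illustration only."""
    vmin, vmax = vertices.min(axis=0), vertices.max(axis=0)
    center = (vmin + vmax) / 2.0     # bounding-box center
    scale = (vmax - vmin).max()      # longest bounding-box side
    return (vertices - center) / scale
```

After this transform, the mesh fits in a cube of side 1 centered at the origin, which makes camera placement for multi-view rendering predictable.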
Now that the data is ready, you can run the main refinement algorithm.
Use the following command, ensuring the `dataset` and `obj_name` arguments match the directory structure you just created.
```shell
python main_refactor.py \
    --obj_name="knight" \
    --conf_name="config_my" \
    --bake_mesh \
    --device_idx=0
```
The default `config_my.yaml` provides a good starting point. For best results, we recommend adjusting this configuration for your own dataset.
After refinement, the results will be available in `Outputs/3D/MyData/knight`.
This work builds upon the fantastic research and open-source contributions from the community. We extend our sincere thanks to the authors of the following projects:
- FLUX
- BiNI
- Segment Anything (SAM)
- Marigold
- Marigold E2E
- PoissonRecon
- continuous-remeshing
- InTeX
- TRELLIS
If you find this work helpful, please consider citing our paper:
```bibtex
@inproceedings{10.1145/3721238.3730701,
  author    = {Ryu, Nuri and Won, Jiyun and Son, Jooeun and Gong, Minsu and Lee, Joo-Haeng and Cho, Sunghyun},
  title     = {Elevating 3D Models: High-Quality Texture and Geometry Refinement from a Low-Quality Model},
  year      = {2025},
  isbn      = {9798400715402},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3721238.3730701},
  doi       = {10.1145/3721238.3730701},
  booktitle = {Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers},
  articleno = {165},
  numpages  = {12},
  keywords  = {3D Asset Refinement, Diffusion models},
  location  = {},
  series    = {SIGGRAPH Conference Papers '25}
}
```