# Surgical Scene Reconstruction with Gaussian Splatting

This application demonstrates real-time 3D surgical scene reconstruction by combining **Holoscan SDK** for high-performance streaming, **3D Gaussian Splatting** for neural 3D representation, and **temporal deformation networks** for accurate modeling of dynamic tissue.

## Overview

The application provides a complete end-to-end pipeline—from raw surgical video to real-time 3D reconstruction. Researchers and developers can use it to train custom models on their own endoscopic data and visualize results with GPU-accelerated rendering.

Features of this application include:

- **Real-time Visualization:** Stream surgical scene reconstruction at >30 FPS using Holoscan
- **Temporal Deformation:** Accurate per-frame tissue modeling as it deforms over time
- **Two Operation Modes:** Inference-only (with pre-trained checkpoint) OR train-then-render
- **Production Ready:** Tested and optimized Holoscan pipeline with complete Docker containerization

It takes input from EndoNeRF surgical datasets (RGB images, stereo depth, camera poses, and tool masks), processes the input using multi-frame Gaussian Splatting with a 4D spatiotemporal deformation network, and outputs a real-time 3D tissue reconstruction without surgical instruments.

It is ideal for use cases such as:

- Surgical scene understanding and visualization
- Tool-free tissue reconstruction for analysis

## Quick Start

### Step 1: Clone the HoloHub Repository

```bash
git clone https://github.com/nvidia-holoscan/holohub.git
cd holohub
```

### Step 2: Read and Agree to the Terms and Conditions of the EndoNeRF Sample Dataset

1. Read and agree to the [Terms and Conditions](https://docs.google.com/document/d/1P6q2hXoGpVMKeD-PpjYYdZ0Yx1rKZdJF1rXxpobbFMY/edit?usp=share_link) for the EndoNeRF dataset.
1. The EndoNeRF sample dataset is downloaded automatically when the application is built.
1. Optionally, to download the dataset manually, refer to the [Data](#pulling-soft-tissues-dataset) section below.
1. Optionally, if you do not agree to the terms and conditions, set the `HOLOHUB_DOWNLOAD_DATASETS` environment variable to `OFF`, then manually download the dataset and place it in the correct location by following the instructions in the [Data](#pulling-soft-tissues-dataset) section below.

   ```bash
   export HOLOHUB_DOWNLOAD_DATASETS=OFF
   ```

### Step 3: Run Training

To run the model training:

```bash
./holohub run surgical_scene_recon train
```

### Step 4: Dynamic Rendering with a Trained Model

After training completes, visualize your results in real time by running the renderer:

```bash
./holohub run surgical_scene_recon render
```

## Pulling Soft Tissues Dataset

This application uses the **EndoNeRF "pulling_soft_tissues" dataset**, which contains:

- Tool segmentation masks for instrument removal
- Camera poses and bounds (poses_bounds.npy)

### Download the Dataset

You can download the dataset from one of the following locations:

* 📦 Direct Google Drive: <https://drive.google.com/drive/folders/1zTcX80c1yrbntY9c6-EK2W2UVESVEug8?usp=sharing>

  1. In the Google Drive folder, you'll see:

     - `cutting_tissues_twice`
     - `pulling_soft_tissues`

  1. Download `pulling_soft_tissues`.

* Visit the [EndoNeRF repository](https://github.com/med-air/EndoNeRF).

### Dataset Setup

The dataset will be automatically used by the application when placed in the correct location. Refer to the [HoloHub glossary](../../README.md#Glossary) for definitions of HoloHub-specific directory terms used below.

To place the dataset at `<HOLOHUB_ROOT>/data/EndoNeRF/pulling/`:

1. From the HoloHub root directory, create the dataset directory:

   ```bash
   mkdir -p data/EndoNeRF
   ```

1. Extract and move (or copy) the downloaded dataset:

   ```bash
   mv /path/to/pulling_soft_tissues data/EndoNeRF/pulling
   ```

**Important:** The dataset MUST be physically at the path above; do NOT use symlinks. Docker containers cannot follow symlinks outside mounted volumes.

### Verify the Dataset Structure

Verify that your dataset has this structure:

```text
<HOLOHUB_ROOT>/
└── data/
    └── EndoNeRF/
        └── pulling/
            ├── ...                  # RGB images, depth maps, and tool masks (elided here)
            └── poses_bounds.npy     # Camera poses (8.5 KB)
```

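As a quick sanity check after the download, you can inspect `poses_bounds.npy` with NumPy. This is a minimal sketch, assuming the standard LLFF-style layout (one flattened 3×5 pose matrix plus near/far bounds per frame); the exact shape may differ in your dataset release.

```python
import numpy as np

# Assumes the LLFF-style layout commonly used by EndoNeRF-format datasets.
poses_bounds = np.load("data/EndoNeRF/pulling/poses_bounds.npy")  # expected shape: (num_frames, 17)

poses = poses_bounds[:, :15].reshape(-1, 3, 5)  # 3x4 camera-to-world pose + [H, W, focal] column
bounds = poses_bounds[:, 15:]                   # near/far depth bounds per frame

print(f"frames: {poses.shape[0]}, depth bounds: {bounds.min():.2f}-{bounds.max():.2f}")
```
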
## Models Used by the `surgical_scene_recon` Application

The `surgical_scene_recon` application uses a **3D Gaussian Splatting** model with a **temporal deformation network** for dynamic scene reconstruction. Each portion of the application makes use of different aspects of these models.

### Gaussian Splatting Model

- Architecture: 3D Gaussians with learned position, scale, rotation, opacity, and color (see the sketch after this list)
- Initialization: Multi-frame point cloud (~30,000-50,000 points from all frames)
- Renderer: `gsplat` library (CUDA-accelerated differentiable rasterization)
- Spherical Harmonics: Degree 3 (16 coefficients per gaussian for view-dependent color)
- Resolution: 640×512 pixels (RGB, three channels)

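To make this parameterization concrete, the following is a minimal sketch of how a set of Gaussians with these attributes could be laid out as PyTorch tensors. The tensor names are hypothetical and do not mirror the application's internal classes.

```python
import torch

num_points = 50_000  # multi-frame point cloud initialization (~30k-50k points)
sh_degree = 3        # (3 + 1)^2 = 16 spherical-harmonic coefficients per gaussian

gaussians = {
    "means":     torch.randn(num_points, 3),                        # 3D positions
    "scales":    torch.rand(num_points, 3),                         # per-axis extent (stored in log-space in practice)
    "quats":     torch.randn(num_points, 4),                        # rotations as quaternions
    "opacities": torch.rand(num_points),                            # per-gaussian alpha
    "sh_coeffs": torch.zeros(num_points, (sh_degree + 1) ** 2, 3),  # view-dependent RGB color
}
```
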
### Temporal Deformation Network Model

The Temporal Deformation Network deforms the 3D Gaussians over time to model dynamic tissue movement during surgery.

- Architecture: HexPlane 4D spatiotemporal grid + MLP decoder (see the sketch after this list)
- Input: 3D position + normalized time value [0, 1]
- Output: Deformed position, scale, rotation, and opacity changes
- Training: Two-stage process (coarse: static, fine: with deformation)
- Inference: Direct PyTorch (no conversion, full precision)

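The deformation step can be pictured as a function mapping each Gaussian's position plus a normalized timestamp to per-Gaussian changes. The sketch below substitutes a plain MLP for the HexPlane grid encoder, so it illustrates only the input/output contract, not the actual architecture.

```python
import torch
import torch.nn as nn

class DeformationMLP(nn.Module):
    """Toy stand-in for the HexPlane + MLP decoder: (xyz, t) -> per-gaussian deltas."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 3 + 4 + 1),  # position, scale, rotation, opacity changes
        )

    def forward(self, xyz: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # t is a normalized time value in [0, 1], broadcast to every gaussian
        t = t.expand(xyz.shape[0], 1)
        return self.net(torch.cat([xyz, t], dim=-1))

deltas = DeformationMLP()(torch.randn(1000, 3), torch.tensor([[0.5]]))  # -> (1000, 11)
```
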
## About the Model Training Process

The application trains in two stages:

1. The Coarse Stage, where the application learns the base static Gaussians without deformation.
2. The Fine Stage, where a temporal deformation network is added for dynamic tissue modeling.

The training uses:

- Multi-modal Data: RGB images, depth maps, tool segmentation masks
- Loss Functions: RGB loss, depth loss, TV loss, masking losses (combined as in the sketch below)
- Optimization: Adam optimizer with batch-size scaled learning rates
- Tool Removal: Segmentation masks applied during training for tissue-only reconstruction

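As an illustration of how such losses might be combined, here is a minimal sketch of a masked composite objective. The weights and function name are hypothetical and are not taken from `gsplat_train.py`.

```python
import torch

def total_loss(pred_rgb, gt_rgb, pred_depth, gt_depth, tool_mask,
               w_depth=0.1, w_tv=0.01):
    """Composite training loss; tool_mask is 1 on tissue, 0 on instruments."""
    m = tool_mask.unsqueeze(-1)
    rgb_loss = (m * (pred_rgb - gt_rgb).abs()).mean()                # masked L1 on color
    depth_loss = (tool_mask * (pred_depth - gt_depth).abs()).mean()  # masked depth supervision
    # Total-variation regularizer on the rendered depth map
    tv = (pred_depth[1:, :] - pred_depth[:-1, :]).abs().mean() \
       + (pred_depth[:, 1:] - pred_depth[:, :-1]).abs().mean()
    return rgb_loss + w_depth * depth_loss + w_tv * tv
```
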
The **training pipeline** (`gsplat_train.py`) runs in the following order:

1. Data loading: the EndoNeRF parser loads RGB images, depth maps, masks, and poses.
2. Initialization: a multi-frame point cloud (~30k points) seeds the Gaussians.
3. Training: the model trains in two stages:
   - Coarse
   - Fine
4. Optimization: the Adam (Adaptive Moment Estimation) optimizer runs with batch-size scaled learning rates (see the sketch below).
5. Regularization: depth loss, TV loss, and masking losses are applied to the data.

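A common convention for batch-size scaled learning rates (used, for example, in the `gsplat` training examples) is to multiply a base rate by the square root of the batch size. The sketch below assumes that convention; the application's actual scaling rule may differ.

```python
import math
import torch

batch_size = 4
base_lr = 1.6e-4  # hypothetical base rate for gaussian positions

# Scale the learning rate with the batch size before constructing Adam.
optimizer = torch.optim.Adam(
    [torch.zeros(50_000, 3, requires_grad=True)],  # stand-in for gaussian means
    lr=base_lr * math.sqrt(batch_size),
)
```
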
The default training command trains a model on all 63 frames with 2000 iterations, producing smooth temporal deformation and high-quality reconstruction.

The render pipeline is structured as follows:

```text
EndoNeRFLoaderOp → GsplatLoaderOp → GsplatRenderOp → HolovizOp
                                                     ImageSaverOp
```

**Components:**

- **EndoNeRFLoaderOp:** Streams camera poses and timestamps
- **GsplatLoaderOp:** Loads checkpoint and deformation network
- **GsplatRenderOp:** Applies temporal deformation and renders
- **HolovizOp:** Real-time GPU-accelerated visualization
- **ImageSaverOp:** Optional frame saving

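For readers new to Holoscan, a graph like the one above is typically wired up in an application's `compose()` method with `add_flow`. The sketch below is schematic: the `Application`/`add_flow` pattern is standard Holoscan SDK API, but the operator constructor arguments shown are hypothetical.

```python
from holoscan.core import Application
from holoscan.operators import HolovizOp

# EndoNeRFLoaderOp, GsplatLoaderOp, GsplatRenderOp, and ImageSaverOp are this
# application's custom operators; HolovizOp ships with the Holoscan SDK.

class SurgicalSceneReconApp(Application):
    def compose(self):
        loader = EndoNeRFLoaderOp(self, name="endonerf_loader")  # camera poses + timestamps
        gsplat = GsplatLoaderOp(self, name="gsplat_loader")      # checkpoint + deformation net
        render = GsplatRenderOp(self, name="gsplat_render")      # deform + rasterize
        viz = HolovizOp(self, name="holoviz")                    # GPU-accelerated display
        saver = ImageSaverOp(self, name="image_saver")           # optional frame saving

        self.add_flow(loader, gsplat)
        self.add_flow(gsplat, render)
        self.add_flow(render, viz)
        self.add_flow(render, saver)
```
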
## Requirements for the `surgical_scene_recon` Application

- **Hardware:**
  - NVIDIA GPU (RTX 3000+ series recommended, tested on RTX 6000 Ada Generation)
  - ~2 GB free disk space (for the dataset)
  - ~30 GB free disk space (for Docker containers)
- **Software:**
  - Docker with NVIDIA GPU support
  - X11 display server (for visualization)
  - Holoscan SDK 3.7.0 or later (automatically provided in containers)

## Application Integration Testing

We provide integration tests. To test the application for training and inference, run:

```bash
./holohub test surgical_scene_recon --verbose
```

## Performance

Tested Configuration:

- GPU: NVIDIA RTX 6000 Ada Generation
- Container: Holoscan SDK 3.7.0
- Training Time: ~5 minutes (63 frames, 2000 iterations)
- Rendering: Real-time >30 FPS

Quality Metrics (train mode):

- PSNR: ~36-38 dB
- SSIM: ~0.80
- Gaussians: ~50,000 splats
- Deformation: Smooth temporal consistency

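For reference, PSNR relates to mean squared error in the standard way. The helper below is a minimal sketch of the metric, not the application's evaluation code.

```python
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```
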
## Troubleshooting

### Citation

If you use this work, cite the following:

* EndoNeRF:

  ```bibtex
  @inproceedings{wang2022endonerf,
    title={EndoNeRF: Neural Rendering for Stereo 3D Reconstruction of Deformable Tissues in Robotic Surgery},
    author={Wang, Yuehao and Yifan, Wang and Tao, Rui and others},
    booktitle={MICCAI},
    year={2022}
  }
  ```

* 3D Gaussian Splatting:

  ```bibtex
  @article{kerbl20233d,
    title={3d gaussian splatting for real-time radiance field rendering},
    author={Kerbl, Bernhard and Kopanas, Georgios and Leimk{\"u}hler, Thomas and Drettakis, George},
    journal={ACM Transactions on Graphics},
    year={2023}
  }
  ```

* `gsplat` Library:

  ```bibtex
  @software{ye2024gsplat,
    title={gsplat},
    author={Ye, Vickie and Turkulainen, Matias and others},
    year={2024},
    url={https://github.com/nerfstudio-project/gsplat}
  }
  ```

### License