
Commit 7d481c1

Adds data collection pipeline (#40)
# Description

Adds data collection pipeline. Fixes #7 #14 #36

## Type of change

- New feature (non-breaking change which adds functionality)

## Checklist

- [x] I have run the [`pre-commit` checks](https://pre-commit.com/) with `./formatter.sh`
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
1 parent 5fc8a3e commit 7d481c1

20 files changed (+1680, -41 lines)

README.md

Lines changed: 2 additions & 2 deletions
@@ -92,10 +92,10 @@ Here an overview of the steps involved in training the policy.
 For more detailed instructions, please refer to [TRAINING.md](TRAINING.md).
 
 0. Training Data Generation <br>
-Training data is generated from the [Matterport 3D](https://github.com/niessner/Matterport), [Carla](https://carla.org/) and [NVIDIA Warehouse](https://docs.omniverse.nvidia.com/isaacsim/latest/tutorial_static_assets.html) environments using a developed Isaac Sim extension; the extensions are part of a new internal project (``isaac-nav-suite``) and will be open-sourced with that project. In case you require earlier access, please contact us via mail.
+Training data is generated from the [Matterport 3D](https://github.com/niessner/Matterport), [Carla](https://carla.org/) and [NVIDIA Warehouse](https://docs.omniverse.nvidia.com/isaacsim/latest/tutorial_static_assets.html) environments using IsaacLab. For detailed instructions on how to install the extension and run the data collection script, please see [here](omniverse/README.md).
 
 1. Build Cost-Map <br>
-The first step in training the policy is to build a cost-map from the available depth and semantic data. A cost-map is a representation of the environment where each cell is assigned a cost value indicating its traversability. The cost-map guides the optimization and is therefore required to be differentiable. Cost-maps are built using the [cost-builder](viplanner/cost_builder.py) with configs [here](viplanner/config/costmap_cfg.py), given a pointcloud of the environment with semantic information (either from simulation or real-world information).
+The first step in training the policy is to build a cost-map from the available depth and semantic data. A cost-map is a representation of the environment where each cell is assigned a cost value indicating its traversability. The cost-map guides the optimization and is therefore required to be differentiable. Cost-maps are built using the [cost-builder](viplanner/cost_builder.py) with configs [here](viplanner/config/costmap_cfg.py), given a pointcloud of the environment with semantic information (either from simulation or real-world information). The point-cloud of the simulated environments can be generated with the [reconstruction-script](viplanner/depth_reconstruct.py) with config [here](viplanner/config/costmap_cfg.py).
 
 2. Training <br>
 Once the cost-map is constructed, the next step is to train the policy. The policy is a machine learning model that learns to make decisions based on the depth and semantic measurements. An example training script can be found [here](viplanner/train.py) with configs [here](viplanner/config/learning_cfg.py).
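
Regarding the differentiability requirement mentioned in the README text above: a minimal, self-contained sketch of the idea (not the viplanner implementation) is to query a cost grid at continuous waypoint coordinates with bilinear interpolation, so the summed path cost has gradients with respect to the waypoints.

```python
# Minimal sketch (assumption: not the viplanner implementation) of a
# differentiable cost-map lookup via bilinear interpolation.
import torch
import torch.nn.functional as F

# Dummy cost-map: (N, C, H, W) grid of traversability costs.
cost_map = torch.rand(1, 1, 128, 128)

# Waypoints in normalized grid coordinates [-1, 1]; they are optimized, so they need gradients.
waypoints = torch.tensor([[0.1, -0.3], [0.5, 0.2]], requires_grad=True)

# grid_sample expects the sampling grid as (N, H_out, W_out, 2) in (x, y) order.
grid = waypoints.view(1, 1, -1, 2)
costs = F.grid_sample(cost_map, grid, mode="bilinear", align_corners=True)

# The summed path cost is differentiable w.r.t. the waypoint positions.
loss = costs.sum()
loss.backward()
print(waypoints.grad)  # non-zero gradients can guide a trajectory optimization
```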

TRAINING.md

Lines changed: 10 additions & 4 deletions
@@ -2,6 +2,12 @@
 
 Here an overview of the steps involved in training the policy is provided.
 
+
+## Data Generation
+
+For the data generation, please follow the instructions given [here](omniverse/README.md).
+
+
 ## Cost-Map Building
 
 Cost-Map building is an essential step in guiding optimization and representing the environment.
@@ -28,13 +34,14 @@ If depth and semantic images of the simulation are available, then first 3D reco
 ├── xxxx.png # images saved with 4 digits, e.g. 0000.png
 ```
 
-when both depth and semantic images are available, then define sem_suffix and depth_suffix in ReconstructionCfg to differentiate between the two with the following structure:
+In the case that the semantic and depth images have an offset in their position (as is typical on some robotic platforms),
+define a `sem_suffix` and `depth_suffix` in `ReconstructionCfg` to differentiate between the two with the following structure:
 
 ``` graphql
 env_name
 ├── camera_extrinsic{depth_suffix}.txt # format: x y z qx qy qz qw
 ├── camera_extrinsic{sem_suffix}.txt # format: x y z qx qy qz qw
-├── intrinsics.txt # P-Matrix for intrinsics of depth and semantic images
+├── intrinsics.txt # P-Matrix for intrinsics of depth and semantic images (depth first)
 ├── depth # either png and/or npy; if both, npy is used
 | ├── xxxx{depth_suffix}.png # images saved with 4 digits, e.g. 0000.png
 | ├── xxxx{depth_suffix}.npy # arrays saved with 4 digits, e.g. 0000.npy
@@ -49,7 +56,7 @@ If depth and semantic images of the simulation are available, then first 3D reco
 
 3. **Cost-Building** <br>
 
-Fully automated, either a geometric or semantic cost map can be generated by running the following command:
+Either a geometric or semantic cost map can be generated by running the following command:
 
 ```
 python viplanner/cost_builder.py
@@ -72,7 +79,6 @@ If depth and semantic images of the simulation are available, then first 3D reco
 ```
 
 
-
 ## Training
 
 Configurations of the training are given in [TrainCfg](viplanner/config/learning_cfg.py). Training can be started using the example training script [train.py](viplanner/train.py).
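
As a side note on the data layout shown in this diff: the `camera_extrinsic{...}.txt` files store one pose per line as `x y z qx qy qz qw`. A minimal sketch of loading them into homogeneous 4x4 transforms (the file name below is an example, and treating the poses as camera-to-world matrices is an assumption):

```python
# Sketch: load camera extrinsics stored as "x y z qx qy qz qw" per line
# into homogeneous 4x4 transforms (file name below is only an example).
import numpy as np
from scipy.spatial.transform import Rotation as R

def load_extrinsics(path: str) -> np.ndarray:
    data = np.atleast_2d(np.loadtxt(path))              # shape (N, 7)
    poses = np.tile(np.eye(4), (data.shape[0], 1, 1))   # (N, 4, 4) identity stack
    poses[:, :3, 3] = data[:, :3]                        # translation x, y, z
    poses[:, :3, :3] = R.from_quat(data[:, 3:7]).as_matrix()  # scalar-last qx, qy, qz, qw
    return poses

poses = load_extrinsics("env_name/camera_extrinsic_depth.txt")
print(poses.shape)  # (N, 4, 4)
```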

omniverse/README.md

Lines changed: 27 additions & 13 deletions
@@ -12,7 +12,7 @@
 [![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://pre-commit.com/)
 [![License](https://img.shields.io/badge/license-BSD--3-yellow.svg)](https://opensource.org/licenses/BSD-3-Clause)
 
-The ViPlanner Omniverse Extension offers a testing environment for ViPlanner.
+The ViPlanner Omniverse Extension offers a testing environment for ViPlanner and includes the data collection pipeline.
 Built on NVIDIA Isaac Sim as a photorealistic simulator and using [IsaacLab](https://isaac-sim.github.io/IsaacLab/), this extension provides an assessment tool for ViPlanner's performance across diverse environments.
 
 
@@ -62,16 +62,9 @@ It is necessary to comply with PEP660 for the install. This requires the followi
 ./isaaclab.sh -p -m pip install --upgrade setuptools
 ```
 
-## Usage
-
-A demo script is provided to run the planner in three different environments: [Matterport](https://niessner.github.io/Matterport/), [Carla](https://carla.org/), and [NVIDIA Warehouse](https://docs.omniverse.nvidia.com/isaacsim/latest/features/environment_setup/assets/usd_assets_environments.html#warehouse).
-In each scenario, the goal is represented as a movable cube within the environment.
-
-To run the demo, download the model: [[checkpoint](https://drive.google.com/file/d/1PY7XBkyIGESjdh1cMSiJgwwaIT0WaxIc/view?usp=sharing)] [[config](https://drive.google.com/file/d/1r1yhNQAJnjpn9-xpAQWGaQedwma5zokr/view?usp=sharing)] and the environment files. Then adjust the paths (marked as `${USER_PATH_TO_USD}`) in the corresponding config files.
+## Download the Simulation Environments
 
 ### Matterport
-[Config](./extension/omni.viplanner/omni/viplanner/config/matterport_cfg.py)
-
 To download Matterport datasets, please refer to the [Matterport3D](https://niessner.github.io/Matterport/) website. The dataset should be converted to USD format using Isaac Sim by executing the following steps:
 
 1. Run the `convert_mesh.py` script to convert the `.obj` file (located under `matterport_mesh`) to `USD`. With the recent update of the asset converter script, use the resulting `*_non_metric.usd` file.
@@ -92,6 +85,20 @@ To download Matterport datasets, please refer to the [Matterport3D](https://nies
 top left corner, select `Show by Type -> Physics -> Colliders` and set the value to `All`). The colliders should be visible as pink lines. In case no colliders are present, select the mesh in the stage,
 go to the `Property` section and click `Add -> Physics -> Colliders Preset`. Then save the asset.
 
+### Carla
+We provide an already converted asset of Carla's `Town01`. It can be downloaded as a USD asset: [Download USD Link](https://drive.google.com/file/d/1wZVKf2W0bSmP1Wm2w1XgftzSBx0UR1RK/view?usp=sharing)
+
+
+## Planner Demo
+
+A demo script is provided to run the planner in three different environments: [Matterport](https://niessner.github.io/Matterport/), [Carla](https://carla.org/), and [NVIDIA Warehouse](https://docs.omniverse.nvidia.com/isaacsim/latest/features/environment_setup/assets/usd_assets_environments.html#warehouse).
+In each scenario, the goal is represented as a movable cube within the environment.
+
+To run the demo, download the model: [[checkpoint](https://drive.google.com/file/d/1PY7XBkyIGESjdh1cMSiJgwwaIT0WaxIc/view?usp=sharing)] [[config](https://drive.google.com/file/d/1r1yhNQAJnjpn9-xpAQWGaQedwma5zokr/view?usp=sharing)] and the environment files. Then adjust the paths (marked as `${USER_PATH_TO_USD}`) in the corresponding config files.
+
+### Matterport
+[Config](./extension/omni.viplanner/omni/viplanner/config/matterport_cfg.py)
+
 The demo uses the **2n8kARJN3HM** scene from the Matterport dataset. A preview is available [here](https://aspis.cmpt.sfu.ca/scene-toolkit/scans/matterport3d/houses).
 
 ```
@@ -100,7 +107,7 @@ cd IsaacLab
 ```
 
 ### Carla
-[Download USD Link](https://drive.google.com/file/d/1wZVKf2W0bSmP1Wm2w1XgftzSBx0UR1RK/view?usp=sharing) | [Config](./extension/omni.viplanner/omni/viplanner/config/carla_cfg.py)
+[Config](./extension/omni.viplanner/omni/viplanner/config/carla_cfg.py)
 
 ```
 cd IsaacLab
@@ -115,7 +122,14 @@ cd IsaacLab
 ./isaaclab.sh -p <path-to-viplanner-repo>/omniverse/standalone/viplanner_demo.py --scene warehouse --model_dir <path-to-model-download-dir>
 ```
 
-## Data Collection and Evaluation
+## Data Collection
+
+The training data is generated from the different simulation environments. After they have been downloaded and converted to USD, adjust the paths (marked as `${USER_PATH_TO_USD}`) in the corresponding config files ([Carla](./extension/omni.viplanner/omni/viplanner/config/carla_cfg.py) and [Matterport](./extension/omni.viplanner/omni/viplanner/config/matterport_cfg.py)).
+The rendered viewpoints are collected by executing
+
+```
+cd IsaacLab
+./isaaclab.sh -p <path-to-viplanner-repo>/omniverse/standalone/viplanner_demo.py --scene <matterport/carla/warehouse> --num_samples <how-many-viewpoints>
+```
 
-The data collection is currently included in a new internal project and will be released with this project in the future.
-If you require the code, please contact us per mail.
+To test that the data has been correctly extracted, please run the 3D reconstruction and check that the result fits the simulated environment.
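
As a quick consistency check before running the reconstruction, a small sketch (assuming the dataset layout documented in TRAINING.md; the folder and file names below are examples and depend on the configured suffixes) can compare the number of rendered images against the number of recorded poses:

```python
# Sketch (assumption: layout as documented in TRAINING.md, names are examples)
# to verify that the number of rendered images matches the recorded poses.
import glob
import os

import numpy as np

env_dir = "path/to/env_name"  # placeholder for the collected environment folder

poses = np.atleast_2d(np.loadtxt(os.path.join(env_dir, "camera_extrinsic.txt")))
depth = sorted(glob.glob(os.path.join(env_dir, "depth", "*.npy")))
semantics = sorted(glob.glob(os.path.join(env_dir, "semantics", "*.png")))

print(f"poses: {len(poses)}, depth: {len(depth)}, semantics: {len(semantics)}")
assert len(poses) == len(depth) == len(semantics), "collected data is inconsistent"
```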

omniverse/extension/omni.isaac.matterport/omni/isaac/matterport/domains/matterport_raycast_camera.py

Lines changed: 1 addition & 0 deletions
@@ -62,6 +62,7 @@ def _initialize_impl(self):
         # More Information: https://github.com/niessner/Matterport/blob/master/data_organization.md#house_segmentations
         mapping = pd.read_csv(DATA_DIR + "/mappings/category_mapping.tsv", sep="\t")
         self.mapping_mpcat40 = torch.tensor(mapping["mpcat40index"].to_numpy(), device=self._device, dtype=torch.long)
+        self.classes_mpcat40 = pd.read_csv(DATA_DIR + "/mappings/mpcat40.tsv", sep="\t")["mpcat40"].to_numpy()
         self._color_mapping()
 
     def _color_mapping(self):
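
For context on this change: the tensors built here map raw Matterport category ids to the mpcat40 class set, and the newly added array provides the readable class names. A minimal sketch of that lookup (not the extension's full pipeline; the per-pixel id image is a dummy stand-in, and indexing the table directly without an offset is an assumption):

```python
# Sketch of combining the two mappings used above (dummy id image; a direct
# table index without offset is an assumption about the camera output).
import pandas as pd
import torch

DATA_DIR = "path/to/viplanner/data"  # placeholder for the repository data dir

mapping = pd.read_csv(DATA_DIR + "/mappings/category_mapping.tsv", sep="\t")
mapping_mpcat40 = torch.tensor(mapping["mpcat40index"].to_numpy(), dtype=torch.long)
classes_mpcat40 = pd.read_csv(DATA_DIR + "/mappings/mpcat40.tsv", sep="\t")["mpcat40"].to_numpy()

# Dummy per-pixel raw category ids, standing in for the raycast camera output.
raw_ids = torch.randint(0, len(mapping_mpcat40), (480, 640))

mpcat40_ids = mapping_mpcat40[raw_ids]               # reduce to the mpcat40 class set
class_names = classes_mpcat40[mpcat40_ids.numpy()]   # human-readable label per pixel
print(class_names[0, 0])
```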

omniverse/extension/omni.viplanner/data/warehouse/keyword_mapping.yml

Lines changed: 2 additions & 0 deletions
@@ -5,6 +5,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 
 floor:
+  - SM_Floor0
   - SM_Floor1
   - SM_Floor2
   - SM_Floor3
@@ -31,6 +32,7 @@ ceiling:
 
 static:
   - LampCeiling
+  - Section
   - SM_FloorDecal
   - SM_FireExtinguisher
 
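The keywords above are matched against warehouse mesh (prim) names to assign semantic classes. A minimal sketch of how such a mapping file could be consumed (an assumption about usage, not the extension code; the file path and prim names are examples):

```python
# Sketch (assumption about usage, not the extension code): assign a semantic
# class to a mesh/prim by checking whether any class keyword occurs in its name.
import yaml

with open("keyword_mapping.yml") as f:      # path is an example
    keyword_mapping = yaml.safe_load(f)     # {class_name: [keyword, ...]}

def classify(prim_name, mapping):
    for class_name, keywords in mapping.items():
        if any(keyword in prim_name for keyword in keywords):
            return class_name
    return None  # no keyword matched

print(classify("/World/Warehouse/SM_Floor1_01", keyword_mapping))            # expected: floor
print(classify("/World/Warehouse/SM_FireExtinguisher_03", keyword_mapping))  # expected: static
```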
Binary file not shown.
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+# Copyright (c) 2023-2024, ETH Zurich (Robotics Systems Lab)
+# Author: Pascal Roth
+# All rights reserved.
+#
+# SPDX-License-Identifier: BSD-3-Clause
+
+from .viewpoint_sampling import ViewpointSampling
+from .viewpoint_sampling_cfg import ViewpointSamplingCfg
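
The new collectors package exposes `ViewpointSampling` and `ViewpointSamplingCfg`; their interfaces are not shown in this commit. Purely as a hypothetical illustration of what viewpoint sampling involves (rejection-sampling camera positions in free space with random yaw; none of the names below come from the repository):

```python
# Hypothetical sketch of viewpoint sampling (NOT the ViewpointSampling API,
# whose interface is not part of this diff): sample camera positions inside
# given bounds and reject those that fall inside obstacles.
import numpy as np

def sample_viewpoints(num_samples, bounds, occupied, height=0.5, rng=None):
    """bounds: ((x_min, x_max), (y_min, y_max)); occupied(x, y) -> bool."""
    rng = rng or np.random.default_rng()
    viewpoints = []
    while len(viewpoints) < num_samples:
        x = rng.uniform(*bounds[0])
        y = rng.uniform(*bounds[1])
        if occupied(x, y):
            continue  # rejection sampling: skip positions inside obstacles
        yaw = rng.uniform(-np.pi, np.pi)
        viewpoints.append((x, y, height, yaw))
    return np.array(viewpoints)

# Example with a dummy circular obstacle at the origin.
vps = sample_viewpoints(10, ((-5, 5), (-5, 5)), lambda x, y: x * x + y * y < 1.0)
print(vps.shape)  # (10, 4): x, y, z, yaw
```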
