
Commit 3b49851

Updates the citation (#54)
# Description

Update the citation and resolve some spelling mistakes in the README.

## Type of change

- Bug fix (non-breaking change which fixes an issue)

## Checklist

- [ ] I have run the [`pre-commit` checks](https://pre-commit.com/) with `./formatter.sh`
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
1 parent 35a3bea commit 3b49851

File tree: 1 file changed (+18, -17 lines)


README.md

Lines changed: 18 additions & 17 deletions
@@ -6,15 +6,15 @@
 <a href="https://youtu.be/8KO4NoDw6CM">Video</a> •
 <a href="#citing-viplanner">BibTeX</a>

-Click on image for demo video!
+Click on the image for the demo video!
 [![Demo Video](./assets/crosswalk.jpg)](https://youtu.be/8KO4NoDw6CM)

 </p>

 ViPlanner is a robust learning-based local path planner based on semantic and depth images.
-Fully trained in simulation, the planner can be applied in dynamic indoor as well outdoor environments.
+Fully trained in simulation, the planner can be applied in dynamic indoor as well as outdoor environments.
 We provide it as an extension for [NVIDIA Isaac-Sim](https://developer.nvidia.com/isaac-sim) within the [IsaacLab](https://isaac-sim.github.io/IsaacLab/) project (details [here](./omniverse/README.md)).
-Furthermore, a ready to use [ROS Noetic](http://wiki.ros.org/noetic) package is available within this repo for direct integration on any robot (tested and developed on ANYmal C and D).
+Furthermore, a ready-to-use [ROS Noetic](http://wiki.ros.org/noetic) package is available within this repo for direct integration on any robot (tested and developed on ANYmal C and D).

 **Keywords:** Visual Navigation, Local Planning, Imperative Learning

@@ -52,7 +52,7 @@ Furthermore, a ready to use [ROS Noetic](http://wiki.ros.org/noetic) package is

 **Extension**

-This work includes the switch from semantic to direct RGB input for the training pipeline, to facilitate further research. For RGB input, an option exist to employ a backbone with mask2former pre-trained weights. For this option, include the github submodule, install the requirements included there and build the necessary cuda operators. These steps are not necessary for the published planner!
+This work includes the switch from semantic to direct RGB input for the training pipeline to facilitate further research. For RGB input, an option exists to employ a backbone with Mask2Former pre-trained weights. For this option, include the GitHub submodule, install the requirements included there, and build the necessary CUDA operators. These steps are not necessary for the published planner!

 ```bash
 pip install git+https://github.com/facebookresearch/detectron2.git
@@ -64,7 +64,7 @@ sh make.sh

 **Remark**

-Note that for an editable install for packages without setup.py, PEP660 has to be fulfilled. This requires the following versions (as described [here](https://stackoverflow.com/questions/69711606/how-to-install-a-package-using-pip-in-editable-mode-with-pyproject-toml) in detail)
+Note that for an editable installation of packages without a setup.py, PEP 660 has to be fulfilled. This requires the following versions (as described in detail [here](https://stackoverflow.com/questions/69711606/how-to-install-a-package-using-pip-in-editable-mode-with-pyproject-toml)):

 - [pip >= 21.3](https://pip.pypa.io/en/stable/news/#v21-3)
 ```
 python3 -m pip install --upgrade pip
@@ -79,7 +79,7 @@ Note that for an editable install for packages without setup.py, PEP660 has to b

 1. Real-World <br>

-ROS-Node is provided to run the planner on the LeggedRobot ANYmal, for details please see [ROS-Node-README](ros/README.md).
+A ROS node is provided to run the planner on the legged robot ANYmal; for details, please see the [ROS-Node-README](ros/README.md).

 2. NVIDIA Isaac-Sim <br>

@@ -88,33 +88,34 @@ Note that for an editable install for packages without setup.py, PEP660 has to b

 ## Training

-Here an overview of the steps involved in training the policy.
+Here is an overview of the steps involved in training the policy.
 For more detailed instructions, please refer to [TRAINING.md](TRAINING.md).

 0. Training Data Generation <br>
-Training data is generated from the [Matterport 3D](https://github.com/niessner/Matterport), [Carla](https://carla.org/) and [NVIDIA Warehouse](https://docs.omniverse.nvidia.com/isaacsim/latest/tutorial_static_assets.html) using IsaacLab. For detailed instruction on how to install the extension and run the data collection script, please see [here](omniverse/README.md)
+Training data is generated from the [Matterport 3D](https://github.com/niessner/Matterport), [Carla](https://carla.org/), and [NVIDIA Warehouse](https://docs.omniverse.nvidia.com/isaacsim/latest/tutorial_static_assets.html) environments using IsaacLab. For detailed instructions on how to install the extension and run the data collection script, please see [here](omniverse/README.md).

 1. Build Cost-Map <br>
-The first step in training the policy is to build a cost-map from the available depth and semantic data. A cost-map is a representation of the environment where each cell is assigned a cost value indicating its traversability. The cost-map guides the optimization, therefore, is required to be differentiable. Cost-maps are built using the [cost-builder](viplanner/cost_builder.py) with configs [here](viplanner/config/costmap_cfg.py), given a pointcloud of the environment with semantic information (either from simultion or real-world information). The point-cloud of the simulated environments can be generated with the [reconstruction-script](viplanner/depth_reconstruct.py) with config [here](viplanner/config/costmap_cfg.py).
+The first step in training the policy is to build a cost-map from the available depth and semantic data. A cost-map is a representation of the environment where each cell is assigned a cost value indicating its traversability. The cost-map guides the optimization and is therefore required to be differentiable. Cost-maps are built using the [cost-builder](viplanner/cost_builder.py) with configs [here](viplanner/config/costmap_cfg.py), given a point-cloud of the environment with semantic information (either from simulation or real-world data). The point-cloud of the simulated environments can be generated with the [reconstruction-script](viplanner/depth_reconstruct.py) with config [here](viplanner/config/costmap_cfg.py).

 2. Training <br>
-Once the cost-map is constructed, the next step is to train the policy. The policy is a machine learning model that learns to make decisions based on the depth and semantic measurements. An example training script can be found [here](viplanner/train.py) with configs [here](viplanner/config/learning_cfg.py)
+Once the cost-map is constructed, the next step is to train the policy. The policy is a machine learning model that learns to make decisions based on depth and semantic measurements. An example training script can be found [here](viplanner/train.py) with configs [here](viplanner/config/learning_cfg.py).

 3. Evaluation <br>
-Performance assessment can be performed on simulation and real-world data. The policy will be evaluated regarding multiple metrics such as distance to goal, average and maximum cost, path length. In order to let the policy be executed on anymal in simulation, please refer to [Omniverse Extension](./omniverse/README.md)
+Performance assessment can be performed on simulation and real-world data. The policy is evaluated with multiple metrics such as distance to the goal, average and maximum cost, and path length. To run the policy on ANYmal in simulation, please refer to the [Omniverse Extension](./omniverse/README.md).
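
Editor's note: the evaluation metrics named in step 3 above (distance to the goal, average and maximum cost, path length) can be sketched as below. This is an illustrative computation over a hypothetical 2-D path and cost grid, not the repository's evaluation code; the function name, the grid layout (origin at the map corner), and the `resolution` parameter are all assumptions.

```python
import numpy as np

def path_metrics(path, goal, costmap, resolution=0.1):
    """Illustrative evaluation metrics for a 2-D path (not ViPlanner's code).

    path:       (N, 2) array of x/y waypoints in meters
    goal:       (2,) goal position in meters
    costmap:    2-D array of per-cell traversability costs
    resolution: meters per cost-map cell (assumed: map origin at (0, 0))
    """
    path = np.asarray(path, dtype=float)
    # Path length: sum of segment lengths between consecutive waypoints.
    segments = np.diff(path, axis=0)
    path_length = float(np.linalg.norm(segments, axis=1).sum())
    # Distance to goal: gap between the final waypoint and the goal.
    dist_to_goal = float(np.linalg.norm(path[-1] - np.asarray(goal)))
    # Look up the cost of each cell a waypoint falls into.
    idx = (path / resolution).astype(int)
    idx = np.clip(idx, 0, np.array(costmap.shape) - 1)
    costs = costmap[idx[:, 0], idx[:, 1]]
    return {
        "path_length": path_length,
        "distance_to_goal": dist_to_goal,
        "avg_cost": float(costs.mean()),
        "max_cost": float(costs.max()),
    }
```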


 ### Model Download
 The latest model is available to download: [[checkpoint](https://drive.google.com/file/d/1PY7XBkyIGESjdh1cMSiJgwwaIT0WaxIc/view?usp=sharing)] [[config](https://drive.google.com/file/d/1r1yhNQAJnjpn9-xpAQWGaQedwma5zokr/view?usp=sharing)]

 ## <a name="CitingViPlanner"></a>Citing ViPlanner
 ```
-@article{roth2023viplanner,
-title ={ViPlanner: Visual Semantic Imperative Learning for Local Navigation},
-author ={Pascal Roth and Julian Nubert and Fan Yang and Mayank Mittal and Marco Hutter},
-journal = {2024 IEEE International Conference on Robotics and Automation (ICRA)},
-year = {2023},
-month = {May},
+@inproceedings{roth2024viplanner,
+title={ViPlanner: Visual Semantic Imperative Learning for Local Navigation},
+author={Roth, Pascal and Nubert, Julian and Yang, Fan and Mittal, Mayank and Hutter, Marco},
+booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
+pages={5243--5249},
+year={2024},
+organization={IEEE}
 }
 ```
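
Editor's note on the cost-map described in training step 1: the idea of a grid in which each cell carries a traversability cost, smooth enough to guide a gradient-based optimization, can be sketched as below. This is a conceptual illustration with made-up semantic classes and cost values, not the cost-builder from `viplanner/cost_builder.py`.

```python
import numpy as np

# Hypothetical per-class traversability costs (illustrative values only):
# low cost = preferred terrain, high cost = obstacle-like.
SEMANTIC_COSTS = {0: 0.0, 1: 0.5, 2: 2.0}  # 0=road, 1=grass, 2=obstacle

def build_costmap(semantic_grid, smooth=True):
    """Map a 2-D grid of semantic class IDs to per-cell costs.

    Averaging each cell with its neighbors smooths the map; smoothness is
    the kind of property needed when the cost-map must guide a
    differentiable optimization (hard steps give no useful gradient).
    """
    lookup = np.vectorize(SEMANTIC_COSTS.get)
    cost = lookup(semantic_grid).astype(float)
    if smooth:
        # Simple 3x3 box blur via edge padding + neighbor averaging.
        padded = np.pad(cost, 1, mode="edge")
        cost = sum(
            padded[1 + dy : padded.shape[0] - 1 + dy,
                   1 + dx : padded.shape[1] - 1 + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        ) / 9.0
    return cost
```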