Merge pull request #14 from bit-bots/fix/dependency_conflict

Fix/dependency conflict

Flova authored Jan 9, 2024
2 parents 9325d91 + 89c7020 commit 8cb3413

Showing 3 changed files with 1,099 additions and 617 deletions.
130 changes: 76 additions & 54 deletions README.md
# TORSO-21 Dataset: Typical Objects in RoboCup Soccer 2021

This repository contains the scripts and additional information for the TORSO-21 Dataset.
This is a dataset for the RoboCup Humanoid Soccer domain consisting of images of the Humanoid League as well as the Standard Platform League. We provide two image collections. The first one consists of images from various real-world locations, recorded by different robots. It includes annotations for the ball, goalposts, robots (including team color and player number), lines, field edge, and three types of line intersections. The second collection is generated in the Webots simulator, which is used for the official RoboCup Virtual Humanoid Soccer Competition. In addition to the labels of the first collection, labels for the complete goal, depth images, 6D poses for all labels, as well as the camera location in the field of play, are provided.

## Meta Data
### Real World
| # of Images | 10464 |
|--------------------------|-------|
| # of Balls | 6081 |
| # of Robots | 7641 |
| # of Goalposts | 7888 |
| # of L-Intersections | 10375 |
| # of T-Intersections | 8659 |
| # of X-Intersections | 7268 |
| # of Field Segmentations | 10464 |
| # of Line Segmentations | 10464 |

| Robot Team Colors | |
|--------------------------|-------|
| # of Blue Robots | 1277 |
| # of Red Robots | 1917 |
| # of Unknown Robots | 4447 |

| Robot Player Numbers | |
|--------------------------|-------|
| # of Robots w/out # | 6526 |
| # of Robots with #1 | 229 |
| # of Robots with #2 | 162 |
| # of Robots with #3 | 276 |
| # of Robots with #4 | 309 |
| # of Robots with #5 | 42 |
| # of Robots with #6 | 97 |
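As a quick consistency check, the per-color and per-number counts in the two tables above each sum to the total robot count of the real-world collection; a small sketch:

```python
# Team-color and player-number counts from the tables above; each breakdown
# should sum to the total number of robot annotations (7641).
colors = {"blue": 1277, "red": 1917, "unknown": 4447}
numbers = {None: 6526, 1: 229, 2: 162, 3: 276, 4: 309, 5: 42, 6: 97}

total_robots = 7641
assert sum(colors.values()) == total_robots
assert sum(numbers.values()) == total_robots
```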

### Simulation
**\# of simulated images: 24,000**

## Example images

### Real-World

<img src="example_images/viz_689-1530140457.8499236frame09549.jpg" alt="example_image" height="120"/><img src="example_images/viz_614-frame1586.jpg" alt="example_image" height="120"/><img src="example_images/viz_627-frame0089.jpg" alt="example_image" height="120"/><img src="example_images/viz_615-frame8359.jpg" alt="example_image" height="120"/>

With segmentations of lines and field

<img src="example_images/segmentation_1_1.png" alt="example_image" width="200"/><img src="example_images/segmentation_1_0.png" alt="example_image" width="200"/><img src="example_images/segmentation_2_1.png" alt="example_image" width="200"/><img src="example_images/segmentation_2_0.png" alt="example_image" width="200"/>

### Simulation
<img src="example_images/img_fake_cam_000800.PNG" alt="example_image" width="200"/><img src="example_images/img_fake_cam_000737.PNG" alt="example_image" width="200"/><img src="example_images/img_fake_cam_000530.PNG" alt="example_image" width="200"/><img src="example_images/img_fake_cam_000096.PNG" alt="example_image" width="200"/>

With annotations, segmentation mask and depth image

<img src="example_images/img_fake_cam_000059.PNG" alt="example_image" width="200"/><img src="example_images/viz_img_fake_cam_000059.PNG" alt="example_image" width="200"/><img src="example_images/img_fake_cam_000059_seg.PNG" alt="example_image" width="200"/><img src="example_images/img_fake_cam_000059_depth.PNG" alt="example_image" width="200"/>

## Download Dataset and Labels

### Manual Download

The images and annotations can be downloaded here: <https://data.bit-bots.de/TORSO-21/>

### Automated Download

The data can also be downloaded with the following script (use `--help` for further options):

```shell
./scripts/download_dataset.py --all
```

### YOLO Label Format

If you want to train a YOLO model, you can use the script provided in [this repository](https://github.com/bit-bots/AutoImageLabeler#create-labels-for-training) to generate the labels.
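For reference, YOLO-style label files typically store one normalized `class x_center y_center width height` line per object. A minimal sketch of the conversion from a pixel-space bounding box (the class-id mapping used by the linked labeler is an assumption):

```python
def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space bounding box to a normalized YOLO label line."""
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# e.g. a hypothetical ball bounding box in a 1920x1080 image
print(to_yolo_line(0, 900, 500, 1020, 620, 1920, 1080))
```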

## Structure

The repository structure is as follows:

```raw
├── data # contains the annotations and images
│ ├── reality # the images recorded in reality
│ │ ├── train # the training set
│ │ │ ...
```

The annotation files follow this format (excerpt):

```yaml
images:
  ...
    height: 1080
    annotations:
    - blurred: true
      color: unknown # possible values {blue, red, unknown}
      concealed: false
      in_image: true
      number: null # possible values {null, 1, 2, 3, 4, 5, 6}
      type: robot
      vector:
      - - 42 # x value
        ...
```
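The annotation format shown above can be read with a standard YAML parser; a hedged sketch with PyYAML (the field names follow the excerpt, but the exact top-level layout of the real files is an assumption):

```python
import yaml

# A minimal inline annotation excerpt in the format shown above
example = """
images:
  frame0001.jpg:
    width: 1920
    height: 1080
    annotations:
    - blurred: true
      color: unknown
      concealed: false
      in_image: true
      number: null
      type: robot
      vector:
      - - 42
        - 26
"""

data = yaml.safe_load(example)
for name, image in data["images"].items():
    for ann in image.get("annotations", []):
        # select robot annotations that are visible in the image
        if ann["type"] == "robot" and ann["in_image"]:
            print(name, ann["color"], ann["number"])
```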

Follow these instructions to set up the dependencies for the scripts used for visualization and creation of the dataset.

1. Install the package manager Poetry as described [here](https://python-poetry.org/docs/#installation). This prevents dependency conflicts and ensures that the correct versions of the dependencies are installed.

2. Clone the repository:

   ```shell
   git clone https://github.com/bit-bots/TORSO_21_dataset.git
   ```

3. Move into the repository and install the dependencies:

   - without optional dependencies:

     ```shell
     cd TORSO_21_dataset && poetry install --without=dev --no-root
     ```

   - with optional dependencies (for dataset creation):

     ```shell
     cd TORSO_21_dataset && poetry install --no-root
     ```

## Usage

To run the tools you need to enter the poetry environment:

```shell
poetry shell
```

Alternatively, you can use `poetry run ./scripts/<file>` to run the scripts without sourcing.

### Visualize annotations

To visualize the annotations, run the following two commands to pickle and show the annotations in
the poetry environment:

```shell
./scripts/pickle_annotations.py data/reality/train/annotations.yaml
./scripts/viz_annotations.py data/reality/train/annotations.pkl
```
...occurs per image. Its first argument is the annotation file to generate annotations for.
Sanity-checks the annotations, i.e., checks whether any labels are marked as both in image and not in image, and whether the field boundary is contained.
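A sketch of what such a consistency check might look like (the field names follow the annotation format above; the concrete checks in the actual script are assumptions):

```python
def find_contradictions(annotations):
    """Flag annotations that claim to be in the image but carry no
    coordinates, or claim to be out of the image while carrying some."""
    issues = []
    for ann in annotations:
        has_vector = bool(ann.get("vector"))
        if ann["in_image"] and not has_vector:
            issues.append((ann["type"], "in image but no coordinates"))
        if not ann["in_image"] and has_vector:
            issues.append((ann["type"], "not in image but has coordinates"))
    return issues

print(find_contradictions([{"type": "ball", "in_image": True, "vector": []}]))
# [('ball', 'in image but no coordinates')]
```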


### YOLO Evaluation

Simple script that runs a YOLO model against the test dataset and calculates the IOU metrics.
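The IOU (intersection over union) metric compares a predicted box with a ground-truth box; a minimal sketch (the `(x_min, y_min, x_max, y_max)` box format here is an assumption, not necessarily what the script uses):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    intersection = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-overlapping boxes -> 1/3
```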

### Further scripts

To use these scripts, make sure to install all dependencies with `poetry install` (see [Set up environment](#set-up-environment)).

#### `download_and_merge_data.py`

More details are available by running `vae/train.py -h`.
#### `vae/reconstruct.py`
This script runs the autoencoder on a given input and shows the reconstruction of the image.
More details are available by running `vae/reconstruct.py -h`.
#### `vae/embeddings.py`
This script runs the VAE recursively on all images inside a given folder and saves their latent space representations and reconstruction errors to a file.
More details are available by running `vae/embeddings.py -h`.
#### `vae/distances.py`
Plots the n-th neighbors in the latent space of a given image.
More details are available by running `vae/distances.py -h`.
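Conceptually, the neighbor lookup boils down to sorting the stored embeddings by distance to the query embedding; a minimal NumPy sketch (the actual script's interface is an assumption):

```python
import numpy as np

def nearest_neighbors(query, embeddings, n):
    """Return the indices of the n embeddings closest to the query vector."""
    distances = np.linalg.norm(embeddings - query, axis=1)
    return np.argsort(distances)[:n]

# Hypothetical 2D latent vectors for three images
embeddings = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
print(nearest_neighbors(np.array([0.9, 0.9]), embeddings, 2))  # [1 0]
```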

#### `vae/duplicates.py`

Creates a yaml file containing three lists:
* The images that were selected due to a high reconstruction error
* The images that will be removed from the dataset

More details are available by running `vae/duplicates.py -h`.

#### `vae/plot_error.py`

The PyTorch dataset definition.

### Generation of Simulation Data

The code for generating the simulation data can be found here: <https://github.com/bit-bots/wolfgang_robot/blob/feature/recognition/wolfgang_webots_sim/src/wolfgang_webots_sim/webots_camera_controller.py>

## Evaluation

Visualization of the position density of the respective annotations in the image space over all images of the real-world collection:

<img src="example_images/heatmaps.png" alt="heatmaps"/>
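Such density maps can be produced by accumulating annotation center points into a 2D histogram over the image plane; a sketch of the idea (not the script that produced the figure above; the centers are hypothetical):

```python
import numpy as np

# Hypothetical annotation centers (x, y) in pixel coordinates of a 1920x1080 image
centers = np.array([[100, 200], [105, 210], [800, 400]])

# Bin the centers over the image plane to obtain a position density map
heatmap, _, _ = np.histogram2d(
    centers[:, 0], centers[:, 1],
    bins=(32, 32), range=[[0, 1920], [0, 1080]],
)
print(heatmap.sum())  # all 3 points fall inside the image -> 3.0
```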

_NOTE: You can get various citation types in the right sidebar on GitHub "Cite this repository"._

### July 12, 2021

- Replacement of the segmentations in the reality collection (using the `fix_segmentations.py` script). The update for the publication had introduced incorrect color values for the field class in the segmentation images.

### June 27, 2021

- Publication