
Track Generation #6

Open · wants to merge 78 commits into base: master

78 commits
73786a2
added more signs according to new CC2018 rules
fabolhak Nov 4, 2017
428e7ec
Extend schema for new signs and traffic isle, ramp
MaslinuPoimal Jan 4, 2019
a298e21
Add models for traffic signs (everything but the traffic velocity zones)
MaslinuPoimal Mar 2, 2019
606fc22
Add speed zone signs
MaslinuPoimal Mar 2, 2019
4630818
Working sign mesh spawn
MaslinuPoimal Mar 2, 2019
bba76e1
Move meshes and materials to drive_gazebo_worlds
MaslinuPoimal Mar 4, 2019
186de2d
Adjust output structure to match worlds package, remove keyframe plug…
MaslinuPoimal Mar 4, 2019
eaabbb1
Spawn ramp model
MaslinuPoimal Mar 5, 2019
3838696
Wip Ramp sign addition
MaslinuPoimal Mar 14, 2019
45552cd
WIP changes
MaslinuPoimal May 31, 2019
6857b2c
Working speed limit markings
MaslinuPoimal Jun 2, 2019
6546314
Add turn direction markers to rendering, add font file for easy insta…
Jun 10, 2019
1d3de62
Update README.md
MaslinuPoimal Jun 10, 2019
9de949c
Add option to place signs on opposite side, spawn signs on both sides…
Jun 11, 2019
8a77925
Added all directional signs at intersection, added additional case of…
Jun 11, 2019
be6c8f3
Cleanup, add 10-zone to driving-extended.xml
Jun 11, 2019
757a094
Fix mirrored speed limit text, add 50 zone to driving_extended.xml
Jun 12, 2019
5086e8b
Merge branch 'master' of https://github.com/tum-phoenix/drive_sim_roa…
Jun 12, 2019
2630eb3
Driving-extended.xml fixes, some testing for visual improvements
MaslinuPoimal Jun 12, 2019
e317d2c
Initial commit, very WIP
MaslinuPoimal Nov 18, 2019
90aeb8e
Set up basic structure
MaslinuPoimal Nov 20, 2019
d9441e4
Working initial
MaslinuPoimal Nov 20, 2019
50d488c
Working environment generation
Nov 21, 2019
0e589eb
Working blender renderer
Nov 26, 2019
bafa96d
Readme, some convenience adjustments
Nov 26, 2019
0558e46
Render blocked areas with the corresponding class
Nov 26, 2019
2ee778b
Copy commonroad file to output
Nov 26, 2019
0578945
Improve default render quality
Nov 27, 2019
190c3e2
Fixed midline bleeding in curves, add instance ID pass for signs, mov…
Nov 28, 2019
3c5deca
Fix midline bleeding, add color to svg turn markers
Nov 29, 2019
dcf3c8f
Fix center segmentation gap, fix black lines at seams, render keyfram…
Nov 29, 2019
b5cb2c4
Fix output configurations, correct camera position, remove some compr…
Dec 4, 2019
c766add
Render all frames
Dec 4, 2019
c41bbd1
Fix missing object id for first sign, add smoothing to traffic signs
Dec 7, 2019
6c8ffe1
Instance ID only for the actual sign
Dec 7, 2019
8b6b851
Add lane segmentation layer
Dec 7, 2019
0b7f215
Added lane segmentation generation pass
Dec 7, 2019
5bfdbce
Object detection label extraction from segmentation + instances
Dec 7, 2019
ee25d28
Fix rotation offset with car mesh, fix incorrect ground in segmentat…
Dec 7, 2019
1376084
Append python paths by default
MaslinuPoimal Dec 7, 2019
cde544c
Wooden Lounge env config
MaslinuPoimal Dec 7, 2019
29a7dc1
Add machine shop config
MaslinuPoimal Dec 8, 2019
973ee9e
Do not smooth segmentation signs
Dec 8, 2019
36622aa
Merge branch 'blender_renderer' of https://github.com/tum-phoenix/dri…
Dec 8, 2019
c1612b8
name files correctly if continuing to render
Dec 8, 2019
f31e886
Fix wrong groundplane color space, fix wrong turn marking labels
Dec 8, 2019
63924d7
Speed up segmentation rendering by adding car mask, extend blender pr…
Dec 12, 2019
78552b2
Fix broken signs on binary blender 2.79
MaslinuPoimal Dec 12, 2019
523663b
Adjust default config
MaslinuPoimal Dec 12, 2019
58bf33e
Adjust default config
MaslinuPoimal Dec 12, 2019
d5c6d52
Adjust preset, cleanup
MaslinuPoimal Dec 12, 2019
d55add2
Add correct arrows, correct 40 zone sign
MaslinuPoimal Dec 14, 2019
6697990
Fix mirrored arrows in segmentation
MaslinuPoimal Dec 14, 2019
dd2648a
Working segmentation groundplane with different output folder
Dec 17, 2019
5d3adf7
Merge branch 'blender_renderer' of https://github.com/tum-phoenix/dri…
Dec 17, 2019
510fca9
Update env configs
Dec 17, 2019
bceb88a
Merge branch 'blender_renderer' of https://github.com/tum-phoenix/dri…
Dec 17, 2019
319a4b4
Add multi-camera support, add top camera
MaslinuPoimal Dec 22, 2019
acf03ba
Improve camera settings
Dec 22, 2019
ed379f5
Adjust render default settings
Dec 22, 2019
22cb44a
Fix instance map on Ubuntu 18, assign correct sign class label
Dec 23, 2019
af064f2
Align object detection labels to hand-labeled ones
Dec 30, 2019
315d4a4
Fix label assignment by assigning the dominant label, add segmentatio…
Jan 1, 2020
c9d98f0
Disable object label viz by default, modify output
MaslinuPoimal Jan 1, 2020
cb4f47c
Fix incorrect label for up/down
MaslinuPoimal Jan 5, 2020
b3c7934
Add missing requirements
MaslinuPoimal Jan 6, 2020
db635e6
Merge branch 'blender_renderer' of https://github.com/tum-phoenix/dri…
MaslinuPoimal Jan 6, 2020
9aac8fc
Remove color mapping from GT passes
MaslinuPoimal Jan 19, 2020
0d293a4
Fix camera rotation transform
MaslinuPoimal Jan 19, 2020
3be1c57
Enable GPU rendering by default
MaslinuPoimal Jan 19, 2020
df9850d
Fix orientation
MaslinuPoimal Jan 20, 2020
a1a0532
Fix orientation of camera, fix segmentation masks
MaslinuPoimal Jan 20, 2020
7a92f1b
Enable rgb and seg mask by default
Jan 20, 2020
0af0f48
add industrial env config
MaslinuPoimal Jan 23, 2020
214c3a3
Add random jitter
MaslinuPoimal Jan 23, 2020
fce26aa
Adjust orientation
MaslinuPoimal Jan 24, 2020
abd9311
Fix jitter transform
MaslinuPoimal Jan 24, 2020
7b11ae8
Add env config
MaslinuPoimal Jan 26, 2020
4 changes: 4 additions & 0 deletions .gitignore
100644 → 100755
@@ -2,3 +2,7 @@
__pycache__
keyframe_plugin/build
keyframe_plugin/CMakeLists.txt.user
world/
driving-scenario.xml
blender-output
.idea
39 changes: 38 additions & 1 deletion README.md
100644 → 100755
@@ -24,14 +24,18 @@ make
export GAZEBO_PLUGIN_PATH=`pwd`:$GAZEBO_PLUGIN_PATH
```

Additionally, install the font located in the commonroad/renderer/fonts subfolder, which is needed to draw speed limit numbers.

## Usage
Generate a random scenario in CommonRoad XML from Template XML:

```
./road-generator.py presets/driving.xml -o driving-scenario.xml
```

Attention: for rendering the advanced cup track, you will need the "DIN 1451 Std" font installed.

Render CommonRoad XML for Gazebo:

```
./gazebo-renderer.py driving-scenario.xml -o world
@@ -50,3 +54,36 @@ Road generation and rendering can also be done in a single step:
```
./road-generator.py presets/driving.xml | ./gazebo-renderer.py -o world
```

## Blender Renderer

The Blender renderer automatically converts the intermediate CommonRoad representation and renders keyframe images (currently RGB color and Semantic Segmentation) while driving along the generated roads.

Configuration is set via variables in the file ```blender-renderer.py```, as Blender unfortunately does not allow passing arguments to a script.

Before running, the following packages have to be cloned as well:
- drive_gazebo_worlds
- drive_gazebo_sim

Afterwards, the user has to adjust the ```GAZEBO_WORLDS_PATH``` and ```GAZEBO_SIM_PATH``` variables in ```blender-renderer.py``` to point to the root folders of the respective packages.

The variable ```INPUT_FILE``` has to point to the intermediate CommonRoad file generated by ```road-generator.py``` as described above.

Additional configuration options are available:
- ```OUTPUT_DIR``` : output directory (default: ```./blender-output```)
- ```FORCE_OUTPUT``` : overwrite existing files if output folder exists (default: ```True```)
- ```ADD_VEHICLE``` : render ego-vehicle (default: ```True```)
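
For reference, the corresponding variable block at the top of ```blender-renderer.py``` looks roughly like this (paths are illustrative placeholders):

```
OUTPUT_DIR = './blender-output'          # output directory
FORCE_OUTPUT = True                      # overwrite output if True
ADD_VEHICLE = True                       # render ego-vehicle in the frames
INPUT_FILE = 'driving-scenario.xml'      # input CommonRoad file
GAZEBO_WORLDS_PATH = '../drive_gazebo_worlds'
GAZEBO_SIM_PATH = '../drive_gazebo_sim'
```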

Finally, the renderer can then be run using:
```
bash ./generate_blender_env.sh
```
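
The shell script is not reproduced here; presumably it wraps a headless Blender invocation along the lines of the following (an assumption, not quoted from the script):

```
blender --background --python blender-renderer.py
```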

We have tested it with Blender 2.79. Images are rendered at a resolution of 1280 × 800 (corresponding to the Intel RealSense D435 camera). Output images are uncompressed .png files (this might require a large amount of free disk space for longer sequences). The RGB images are found in ```OUTPUT_DIR/rgb```, the Semantic Segmentation images in ```OUTPUT_DIR/semseg_color```. The label mapping for the Semantic Segmentation images can be found in ```./blender/renderer/segmentation_colormap.py```.
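
As a hedged sketch of how that mapping can be used (the frame filename is a placeholder; that ```SIGN_TO_COLOR``` maps class names to RGB tuples is taken from the converter script in this PR):

```
# Minimal sketch: count per-class pixels in one semantic segmentation frame.
import cv2
import numpy as np
from blender.renderer.segmentation_colormap import SIGN_TO_COLOR

img = cv2.imread('blender-output/semseg_color/frame_0000.png')  # OpenCV loads BGR
for name, rgb in SIGN_TO_COLOR.items():
    mask = np.all(img == np.array(rgb[::-1], dtype=np.uint8), axis=-1)
    if mask.any():
        print(name, int(mask.sum()), 'px')
```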

Two separate scenes are created within the Blender file, one for the RGB image and one for the Semantic Segmentation ground truth.

The RGB image is rendered using the Cycles renderer (warning: this can be quite computationally expensive, especially when done on the CPU). It uses an HDRI skydome image to provide reasonably realistic ambient lighting and a background. The background is configured via dicts found in ```blender/renderer/env_configs.py```.
The user has to specify the HDRI image filepath (we have rendered using 2K images) as well as the default orientation and scale of the skydome (this can be calibrated empirically using the Cycles background view in the viewport and the background node editor).
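
The exact schema of these dicts is not reproduced here, so the following entry is a hypothetical sketch based on the description above (key names are assumptions):

```
# Hypothetical entry in blender/renderer/env_configs.py -- key names are
# assumptions based on the description above, not the verbatim schema.
'industrial_pipe': {
    'hdri_path': '/path/to/industrial_pipe_2k.hdr',  # HDRI skydome image (2K)
    'rotation': 90.0,  # default skydome orientation in degrees
    'scale': 1.0,      # skydome scale
}
```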

The Semantic Segmentation ground truth images are rendered using the internal Blender renderer. Mipmaps are disabled on the ground plane materials in order to prevent blurry artifacts from appearing on faraway road sections.
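
For reference, a minimal sketch of disabling mipmaps via the Blender 2.79 Python API (the material name is a placeholder; the renderer's actual implementation may differ):

```
import bpy

# Disable mipmaps on every texture of a ground-plane material.
# 'Ground' is a placeholder material name, not taken from this PR.
mat = bpy.data.materials['Ground']
for slot in mat.texture_slots:
    if slot is not None and slot.texture is not None:
        slot.texture.use_mipmap = False
```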
Empty file added __init__.py
Empty file.
Empty file modified bachelor-thesis/main.pdf
100644 → 100755
Empty file.
Empty file modified bachelor-thesis/presentation.pdf
100644 → 100755
Empty file.
Empty file modified bachelor-thesis/videos/driving_static_obstacles.mp4
100644 → 100755
Empty file.
Empty file modified bachelor-thesis/videos/parking.mp4
100644 → 100755
Empty file.
78 changes: 78 additions & 0 deletions blender-renderer.py
@@ -0,0 +1,78 @@
#!/usr/bin/env python3
import sys
import os
import shutil
import json
# Blender's bundled Python does not pick up the CWD or system dist-packages otherwise -__-
sys.path.append(os.getcwd())
sys.path += ['/usr/local/lib/python3.5/dist-packages', '/usr/lib/python3/dist-packages',
'/usr/lib/python3.5/dist-packages']
from blender.renderer.blender import generate_blend

OUTPUT_DIR = '/data_2/synthetic_data/rendered_scenes/new_cameras/industrial-pipe-1-with-jitter' # output directory
FORCE_OUTPUT = True # overwrite output if True
ADD_VEHICLE = True # render ego-vehicle in the frames
INPUT_FILE = 'driving-scenario.xml' # input CommonRoad file
GAZEBO_WORLDS_PATH = '../drive_gazebo_worlds' # location of the drive_gazebo_worlds package
GAZEBO_SIM_PATH = '../drive_gazebo_sim' # location of the drive_gazebo_sim package


# Blender does not let us pass command-line arguments, so configuration lives in this dict
config = {'render_interval_distance': 0.05,
'groundplane_shader_type': 'ShaderNodeBsdfGlossy',
'env_config': 'industrial_pipe',
'texture_padding_ratio': 1.0,
          # render passes to generate; available: rgb (Cycles render image),
          # semseg_color (semantic segmentation colormap),
          # instances (ID map of traffic signs, including poles),
          # lanes (DRIVABLE lane segmentation, only left/right)
'render_passes': ['rgb', 'semseg_color', 'instances', 'lanes'],
# resolution provided separately for each camera
'frame_range': (0, -1),
# use a .png to render the vehicle -> has to be re-generated for each camera position
'use_vehicle_mask': True,
'add_random_jitter': True,
"random_jitter": {'x': 0.05, 'y': 0.05, 'angle': 25}, # angle in degrees
          # camera(s) to render from; available: realsense, top
          'cameras': [{'name': 'top',
'position_offset': {'x': -0.126113, 'y': 0, 'z': 0.231409},
# rotation of the cameras around the Y axis (lateral car axis) in degrees
'rotation': 31,
'image_resolution': (2048, 1536),
# used if camera_mask is set to True
'segmentation_mask': 'top_segmentation_mask.png',
'sensor_width': 7.18, # 1/1.8 inch on IDS camera
'sensor_height': 5.32, # 1/1.8 inch on IDS camera
'focal_length': 1.7, # 1.7 mm on Theia
},
# http://robotsforroboticists.com/wordpress/wp-content/uploads/2019/09/realsense-sep-2019.pdf
{'name': 'realsense',
'position_offset': {'x': -0.220317, 'y': 0.0325, 'z': 0.11},
# rotation of the cameras around the Y axis (lateral car axis) in degrees
'rotation': 0,
'image_resolution': (1280, 960),
# used if camera_mask is set to True
'segmentation_mask': 'realsense_segmentation_mask.png',
'sensor_width': 6.4, # 1/2 inch, see above for source
'sensor_height': 4.8, # 1/2 inch, see above for source
# 'focal_length': 1.93, -> documentation above incorrect, use horizontal FOV
'horizontal_fov': 69.4 # unit: degrees
}
]
}


if __name__ == "__main__":
os.makedirs(OUTPUT_DIR, exist_ok=True)
    if os.listdir(OUTPUT_DIR) and not FORCE_OUTPUT:
        print("Output directory is not empty.")
        print("Set FORCE_OUTPUT = True to overwrite.")
sys.exit(1)

shutil.copy2(INPUT_FILE, OUTPUT_DIR)

with open(os.path.join(OUTPUT_DIR, 'config.json'), 'w', encoding='utf-8') as config_file:
json.dump(config, config_file, ensure_ascii=False, indent=4)

with open(INPUT_FILE) as input_file:
xml = input_file.read()

generate_blend(xml, OUTPUT_DIR, ADD_VEHICLE, OUTPUT_DIR, GAZEBO_WORLDS_PATH, GAZEBO_SIM_PATH, config)
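
A note on the ```horizontal_fov``` entry in the realsense camera config above: under a pinhole model, a horizontal FOV is equivalent to a focal length of f = sensor_width / (2 · tan(FOV/2)). A minimal sketch using the values from that config entry:

```
import math

# Pinhole relation: focal length from horizontal FOV, using
# sensor_width = 6.4 mm and horizontal_fov = 69.4 degrees
# from the realsense entry above.
sensor_width_mm = 6.4
horizontal_fov_deg = 69.4
focal_mm = sensor_width_mm / (2 * math.tan(math.radians(horizontal_fov_deg) / 2))
print(round(focal_mm, 2))  # ~4.62 mm
```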
Empty file added blender/__init__.py
Empty file.
101 changes: 101 additions & 0 deletions blender/convert_segmentation_to_object_labels.py
@@ -0,0 +1,101 @@
from glob import glob
from tqdm import tqdm
import os
import logging
import csv
import cv2
import numpy as np
from blender.renderer.segmentation_colormap import SIGN_TO_COLOR, SIGN_TO_CLASSID
import argparse


_LOGGER = logging.getLogger(__name__)


def match_input_target_files(a, b):
# match base filenames only as formats differ between label files
a_mask = set(os.path.basename(fp).split('.')[0] for fp in a)
b_mask = set(os.path.basename(fp).split('.')[0] for fp in b)

uncommon_mask = set.symmetric_difference(a_mask, b_mask)
if len(uncommon_mask) == 0:
_LOGGER.debug('No mismatch found between input and target files.')
return a, b
else:
_LOGGER.warning("Mismatching Image Ids found: {}".format(', '.join(s for s in uncommon_mask)))
        return [fp for fp in a if os.path.basename(fp).split('.')[0] not in uncommon_mask], \
               [fp for fp in b if os.path.basename(fp).split('.')[0] not in uncommon_mask]


def convert_dataset_trafficsignid_only(base_path, draw_debug=False, min_pixel_size=50):
"""
Reads segmentation images and converts them to .csv file with annotations and visualizes it optionally

Note: makes the assumption that two labels of the same type are not connected to each other, will treat them as
one label if that happens for some reason
"""
semseg_image_path = os.path.join(base_path, 'semseg_color')
instance_image_path = os.path.join(base_path, 'traffic_sign_id')
gt_path = os.path.join(base_path, 'signs_ground_truth.csv')

segmentation_images = sorted(glob(os.path.join(semseg_image_path, '*.png'), recursive=False))
instance_images = sorted(glob(os.path.join(instance_image_path, '*.exr'), recursive=False))

COLOR_TO_SIGN = {color: name for name, color in SIGN_TO_COLOR.items()}

segmentation_images, instance_images = match_input_target_files(segmentation_images, instance_images)

with open(gt_path, 'w') as csvfile:
gt_writer = csv.writer(csvfile, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
gt_writer.writerow(['Filename', 'Roi.X1', 'Roi.X2', 'Roi.Y1', 'Roi.Y2', 'ClassId'])
for semantic_name, instance_name in tqdm(zip(segmentation_images, instance_images)):
semantic_image = cv2.imread(semantic_name)
instance_image = cv2.imread(instance_name)[..., 0]

if draw_debug:
color_image = cv2.imread(os.path.join(base_path, 'rgb',
os.path.basename(semantic_name)))

# find unique combinations of traffic sign id labels and cluster them, put bbs around and write
for traffic_sign_id in np.sort(np.unique(instance_image))[1:]:
location_mask = (instance_image == traffic_sign_id)
colors, counts = np.unique(semantic_image[location_mask], axis=0, return_counts=True)
dominant_color = colors[np.argsort(-counts)][0]

if tuple(dominant_color)[::-1] in COLOR_TO_SIGN:
traffic_sign = SIGN_TO_CLASSID[COLOR_TO_SIGN[tuple(dominant_color)[::-1]]]
unique_positions = np.argwhere(location_mask)

x1 = np.min(unique_positions[:, 0])
x2 = np.max(unique_positions[:, 0])

y1 = np.min(unique_positions[:, 1])
y2 = np.max(unique_positions[:, 1])

if (x2-x1) + (y2-y1) > min_pixel_size:
# we switch around the y and x coordinates here as this was done when hand-labeling
gt_writer.writerow([os.path.basename(semantic_name), y1, y2, x1, x2, traffic_sign])

if draw_debug:
cv2.rectangle(color_image, (y1, x1), (y2, x2), [int(val) for val in dominant_color])
cv2.putText(color_image, str(traffic_sign), (y1, x1), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
[int(val) for val in dominant_color])
else:
                    _LOGGER.warning('Image: {}, dominant color {} of sign with id {} is not in the color'
                                    ' dict, skipping!'.format(os.path.basename(semantic_name), dominant_color,
                                                              traffic_sign_id))

if draw_debug:
cv2.destroyAllWindows()
cv2.imshow('Traffic signs in image {}'.format(os.path.basename(semantic_name)), color_image)
cv2.waitKey(0)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Generate phoenix-style annotations from synthetic segmentation & '
                                                 'sign id')
parser.add_argument('path', type=str, nargs=1,
help='Filepath, should contain the \'semseg_color\' and \'traffic_sign_id\' sub-paths')
args = parser.parse_args()

    # adjust min_pixel_size to drop signs that are too small
    # (unit: sum of bounding-box width and height in pixels)
convert_dataset_trafficsignid_only(args.path[0], draw_debug=False, min_pixel_size=50)
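
A minimal usage sketch, assuming the output directory layout described above (```blender-output``` is a placeholder path; the CSV column names come from the writer above):

```
# Invoke the converter, then read back the generated CSV:
#
#   python3 blender/convert_segmentation_to_object_labels.py blender-output
import csv

with open('blender-output/signs_ground_truth.csv') as f:
    for row in csv.DictReader(f):
        print(row['Filename'], row['ClassId'],
              (row['Roi.X1'], row['Roi.Y1'], row['Roi.X2'], row['Roi.Y2']))
```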
Empty file added blender/renderer/__init__.py
Empty file.