diff --git a/CHANGELOG.md b/CHANGELOG.md
index 4f3eb0970..757d354d0 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,6 @@
 ## Latest changes
+
+## CARLA Scenario_Runner 0.9.5.1
 * Added initial support for OpenScenario v0.9.1
 * Added support for multiple ego vehicles plus an example
 * Added commandline option for output directory
diff --git a/Docs/agent_evaluation.md b/Docs/agent_evaluation.md
index 484fd4eb4..64b3b38a9 100644
--- a/Docs/agent_evaluation.md
+++ b/Docs/agent_evaluation.md
@@ -1,8 +1,8 @@
-#### Setting up your agent for evaluation.
+#### Setting up your agent for evaluation

-To have your agent evaluated by the challenge evaluation system
-you must define an Agent class that inherits the
-[AutonomousAgent](../srunner/challenge/autoagents/autonomous_agent.py) base class.
+To have your agent evaluated by the challenge evaluation system
+you must define an Agent class that inherits from the
+[AutonomousAgent](../srunner/challenge/autoagents/autonomous_agent.py) base class.
 In addition, you need to set up your environment as described in [the Challenge evaluator tutorial](challenge_evaluation.md).
 On your agent class there are three main functions that need to be overwritten in order to get your agent running.
@@ -11,7 +11,6 @@ initially set as a variable.

 ##### The "setup" function:
-
 This is the function where you should perform all the necessary setup for your agent.
@@ -21,9 +20,9 @@ file to be parsed by the user. When executing the "challenge_evaluator.py" you should pass the
 configuration file path as a parameter. For example:
-
-    python srunner/challenge/challenge_evaluator.py -a --config myconfigfilename.format
-
+```
+python srunner/challenge/challenge_evaluator_routes.py --agent= --config=myconfigfilename.format
+```

 ##### The "sensors" function:
@@ -31,37 +30,27 @@ configuration file path as a parameter. For example:

 This function is where you set all the sensors required by your agent.
 For instance, on the [dummy agent sample class](../srunner/challenge/agents/DummyAgent.py) the following sensors are defined:
-```
-    def sensors(self):
-
-        sensors = [{'type': 'sensor.camera.rgb', 'x':0.7, 'y':0.0, 'z':1.60, 'roll':0.0, 'pitch':0.0, 'yaw':0.0,
-                    'width':800, 'height': 600, 'fov':100, 'id': 'Center'},
-                   {'type': 'sensor.camera.rgb', 'x':0.7, 'y':-0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0,
-                    'yaw': -45.0, 'width': 800, 'height': 600, 'fov': 100, 'id': 'Left'},
-                   {'type': 'sensor.camera.rgb', 'x':0.7, 'y':0.4, 'z':1.60, 'roll':0.0, 'pitch':0.0, 'yaw':45.0,
-                    'width':800, 'height':600, 'fov':100, 'id': 'Right'},
-                   {'type': 'sensor.lidar.ray_cast', 'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0,
-                    'yaw': -45.0, 'id': 'LIDAR'},
-                   {'type': 'sensor.other.gnss', 'x': 0.7, 'y': -0.4, 'z': 1.60, 'id': 'GPS'},
-                   {'type': 'sensor.speedometer','reading_frequency': 25, 'id': 'speed'}
-
-                   ]'],
-        ]
-        return sensors
+```Python
+def sensors(self):
+    sensors = [{'type': 'sensor.camera.rgb', 'x':0.7, 'y':0.0, 'z':1.60, 'roll':0.0, 'pitch':0.0, 'yaw':0.0, 'width':800, 'height': 600, 'fov':100, 'id': 'Center'},
+               {'type': 'sensor.camera.rgb', 'x':0.7, 'y':-0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': -45.0, 'width': 800, 'height': 600, 'fov': 100, 'id': 'Left'},
+               {'type': 'sensor.camera.rgb', 'x':0.7, 'y':0.4, 'z':1.60, 'roll':0.0, 'pitch':0.0, 'yaw':45.0, 'width':800, 'height':600, 'fov':100, 'id': 'Right'},
+               {'type': 'sensor.lidar.ray_cast', 'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': -45.0, 'id': 'LIDAR'},
+               {'type': 'sensor.other.gnss', 'x': 0.7, 'y': -0.4, 'z': 1.60, 'id': 'GPS'},
+               {'type': 'sensor.speedometer', 'reading_frequency': 25, 'id': 'speed'}
+              ]
+    return sensors
 ```
 Every sensor is a dictionary where you should specify:
-    * type: basically which is the sensor to be added, for example: 'sensor.camera.rgb' for
-      an rgb camera or 'sensor.lidar.ray_cast' for a ray casting lidar.
-    * id: the label that will be given to the sensor in order for it
-      to be accessed later.
-    * other parameters: these are sensor dependent, such as position, 'x' and 'y',
-      or the field of view for a camera, 'fov'
-
-
+* type: the sensor to be added, for example 'sensor.camera.rgb' for an RGB camera or 'sensor.lidar.ray_cast' for a ray-casting LIDAR.
+* id: the label that will be given to the sensor so that it can be accessed later.
+* other parameters: these are sensor dependent, such as the position, 'x' and 'y', or the field of view of a camera, 'fov'.
+
+
 ##### The "run_step" function:
@@ -83,13 +72,13 @@ At the beginning of the execution, the entire route that the hero agent should
 travel is set in the "self.global_plan" variable:

 ```
-[({'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002271601998707}, ),
+[({'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002271601998707}, ),
 ({'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002709765148996}, ),
- ...
+ ...
 ({'z': 0.0, 'lat': 48.99822679980298, 'lon': 8.002735250105061}, )]
 ```
-
+
 It is represented as a list of tuples containing the route waypoints,
 expressed in latitude and longitude, and the currently recommended road option. For an intersection, the option can be go straight,
 turn left or turn right. For the rest of the route, the recommended option
-    is lane follow.
\ No newline at end of file
+    is lane follow.
diff --git a/Docs/challenge_evaluation.md b/Docs/challenge_evaluation.md
index beede63b7..94adc2b87 100644
--- a/Docs/challenge_evaluation.md
+++ b/Docs/challenge_evaluation.md
@@ -2,57 +2,43 @@
 Challenge evaluator
 =================
-    *This tutorial shows how to put some sample agent to run the
+    *This tutorial shows how to run a sample agent through the
 challenge evaluation.*
-
-The idea of the evaluation for the challenge is to put
+
+The idea of the challenge evaluation is to have
 the hero agent perform on several scenarios described in an XML file.
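The "self.global_plan" waypoint list above is an ordinary Python list of tuples, so it can be post-processed directly inside your agent. As an illustrative sketch (not code from the scenario_runner repository, and with the elided road-option objects stood in for by plain strings), the approximate length of such a route can be computed from consecutive latitude/longitude pairs using the haversine formula:

```python
import math

def route_length_m(global_plan):
    """Approximate route length in meters from (waypoint_dict, road_option) tuples."""
    earth_radius_m = 6371000.0
    total = 0.0
    # Walk over consecutive waypoint pairs and sum haversine distances.
    for (wp1, _), (wp2, _) in zip(global_plan, global_plan[1:]):
        lat1, lon1 = math.radians(wp1['lat']), math.radians(wp1['lon'])
        lat2, lon2 = math.radians(wp2['lat']), math.radians(wp2['lon'])
        a = (math.sin((lat2 - lat1) / 2.0) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2.0) ** 2)
        total += 2.0 * earth_radius_m * math.asin(math.sqrt(a))
    return total

# Two waypoints taken from the example output above; 'LANEFOLLOW' is a
# placeholder for the road-option object the evaluator actually provides.
plan = [({'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002271601998707}, 'LANEFOLLOW'),
        ({'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002709765148996}, 'LANEFOLLOW')]
```

The two sample waypoints differ only in longitude, so the sketch reduces to a single segment of a few tens of meters.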
 A scenario is defined by a certain trajectory that the hero
-agent has to follow and certain events
+agent has to follow and certain events
 that are going to take effect during this trajectory.
 The scenario also controls the termination criteria and the scoring criteria.
-
-    At the end of a route, the system gives a result (fail or success)
-    and a final score (numeric).
-
+At the end of a route, the system gives a result (fail or success)
+and a final score (numeric).

 ### Getting started
-
-
 #### Installation

-Please, to install the system, follow the general [installation steps for
+To install the system, please follow the general [installation steps for
 the scenario runner repository](getting_started.md/#install_prerequisites)
-
-Run the setup environment script in order to point where the root folder of
-the CARLA latest release is located:
-
-    ./setup_environment.sh --carla-root
-
+To run the challenge, several environment variables need to be provided:
+```
+export CARLA_ROOT=/path/to/your/carla/installation
+export ROOT_SCENARIO_RUNNER=`pwd`
+export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla/:${ROOT_SCENARIO_RUNNER}:`pwd`:${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.5-py2.7-linux-x86_64.egg:${CARLA_ROOT}/PythonAPI/carla/agents
+```

 #### Running sample agents

+In general, both Python 2 and Python 3 are supported. In the following we simply write "python"; please replace it with your Python executable.
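A common failure mode is launching the evaluator with one of the variables above unset. The helper below is hypothetical (it is not part of scenario_runner); it merely checks that each expected variable is present and non-empty before you start:

```python
import os

# Names assumed from the export block in the tutorial above.
REQUIRED_VARS = ('CARLA_ROOT', 'ROOT_SCENARIO_RUNNER', 'PYTHONPATH')

def missing_challenge_vars(environ=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]
```

A launch script could abort early with `sys.exit` whenever `missing_challenge_vars()` returns a non-empty list.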
-You can see the list of supported scenarios before you run the evaluation:
-
-    python3 srunner/challenge/challenge_evaluator.py --list
-
+You can run the challenge evaluation as follows:
+```
+python srunner/challenge/challenge_evaluator_routes.py --scenarios=${ROOT_SCENARIO_RUNNER}/srunner/challenge/all_towns_traffic_scenarios1_3_4.json --agent=${ROOT_SCENARIO_RUNNER}/srunner/challenge/autoagents/DummyAgent.py
+```
-To control, using the keyboard, an agent running a basic scenario run:
-
-    bash srunner/challenge/run_evaluator.sh
-
-
-You can also execute the challenge evaluator manually, the following
-example runs a dummy agent that basically goes forward:
-
-    python srunner/challenge/challenge_evaluator.py --scenario group:ChallengeBasic -a srunner/challenge/autoagents/DummyAgent.py
-
-
-After running the evaluator, either manually or using the script, you should see the CARLA simulator being started
+After running the evaluator, you should see the CARLA simulator start,
 and the following type of output should continuously appear on the terminal screen:

 =====================>
@@ -77,4 +63,3 @@ Finally, you can also add your own agent into the system by following [this tuto
 ### ROS-based Agent
 Implementing an agent for a ROS-based stack is described [here](ros_agent.md).
-
diff --git a/Docs/getting_started.md b/Docs/getting_started.md
index df6f4349d..cb10603fb 100644
--- a/Docs/getting_started.md
+++ b/Docs/getting_started.md
@@ -35,6 +35,7 @@ Note: py-trees newer than v0.8 is *NOT* supported.
 First of all, you need to get the latest master branch from CARLA.
 Then you have to include the CARLA Python API in the Python path:
 ```
+export CARLA_ROOT=/path/to/your/carla/installation
 export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla/dist/carla-.egg:${CARLA_ROOT}/PythonAPI/carla/agents:${CARLA_ROOT}/PythonAPI/carla
 ```
 NOTE: ${CARLA_ROOT} needs to be replaced with your CARLA installation directory,
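The NOTE above asks you to substitute the egg file name in PYTHONPATH by hand. As a hypothetical convenience (this helper is not part of the repository), the egg can also be located programmatically at runtime instead of hard-coding its version and architecture:

```python
import glob
import os

def find_carla_egg(carla_root):
    """Locate a carla-*.egg under the standard dist directory of a CARLA tree."""
    pattern = os.path.join(carla_root, 'PythonAPI', 'carla', 'dist', 'carla-*.egg')
    matches = sorted(glob.glob(pattern))
    if not matches:
        raise FileNotFoundError('no CARLA egg found under %s' % carla_root)
    # If several eggs are present, pick the last one in sorted order
    # (typically the newest version string).
    return matches[-1]
```

A script could then call `sys.path.append(find_carla_egg(os.environ['CARLA_ROOT']))` before `import carla`.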