This is a series of projects where I solve OpenAI Gym environments by building RL algorithms from scratch using Python, PyTorch, and TensorFlow.

Exercise

Use the Q-Learning algorithm to solve the MountainCar-v0 environment by discretizing its continuous state space.
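The core idea can be sketched as follows: map each continuous observation (position, velocity) onto a fixed grid of bins, then run the tabular Q-learning update over the resulting indices. This is a minimal illustration, not the repository's code; the bin count and learning-rate values are assumptions you would tune.

```python
import numpy as np

# Observation bounds from the MountainCar environment:
# position in [-1.2, 0.6], velocity in [-0.07, 0.07].
LOW = np.array([-1.2, -0.07])
HIGH = np.array([0.6, 0.07])
N_BINS = 20      # bins per dimension (an assumption; tune as needed)
N_ACTIONS = 3    # MountainCar-v0 actions: push left, no push, push right

def discretize(obs):
    """Map a continuous observation to a pair of integer bin indices."""
    ratios = (obs - LOW) / (HIGH - LOW)
    idx = (ratios * N_BINS).astype(int)
    return tuple(np.clip(idx, 0, N_BINS - 1))

# Tabular Q-function over the discretized grid: one row of action-values per cell.
Q = np.zeros((N_BINS, N_BINS, N_ACTIONS))

def q_update(s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * Q[s_next].max()
    Q[s + (a,)] += alpha * (td_target - Q[s + (a,)])
```

In a training loop you would call `discretize` on every observation returned by the environment and feed the resulting indices to `q_update` after each step.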

Mountain-Car v0


Environment:

A car is on a one-dimensional track, positioned between two "mountains". The goal is to drive up the mountain on the right; however, the car's engine is not strong enough to scale the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum. Here, the reward is greater if you spend less energy to reach the goal.

State space:

The agent (a car) starts at the bottom of a valley. The observation consists of the car's position along the track and its velocity.

| Num | Observation  | Min   | Max  |
|-----|--------------|-------|------|
| 0   | Car Position | -1.2  | 0.6  |
| 1   | Car Velocity | -0.07 | 0.07 |

Action space:

At any given state the agent may choose to accelerate to the left, accelerate to the right, or apply no acceleration.

| Num | Action                | Min  | Max |
|-----|-----------------------|------|-----|
| 0   | The power coefficient | -1.0 | 1.0 |
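During training the agent has to balance exploring these actions against exploiting what it has already learned. A common choice, and a reasonable sketch here (not necessarily the repository's exact policy), is epsilon-greedy selection over the row of Q-values for the current discretized state:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_row, epsilon=0.1):
    """With probability epsilon pick a uniformly random action,
    otherwise pick the action with the highest Q-value."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))
```

Epsilon is typically decayed over episodes so the policy explores early and becomes greedy as the Q-table converges.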

Rewards:

A reward of 100 is awarded if the agent reaches the flag (position = 0.45) on top of the mountain. The reward is decreased based on the amount of energy consumed at each step.

Starting State:

The position of the car is assigned a uniform random value in [-0.6, -0.4]. The starting velocity of the car is always 0.

Episode Termination:

The episode terminates when the car position exceeds 0.45, or when the episode length exceeds 200 steps.
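These two stopping conditions are easy to express as a small helper, which a training loop can check after every step (the function name and defaults are illustrative, not from the repository):

```python
def episode_done(position, step, goal=0.45, max_steps=200):
    """Return True when the car has passed the flag or the step limit is reached."""
    return position > goal or step >= max_steps
```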