nslyubaykin/relax_double_dqn_example

Repository files navigation

Example Double DQN implementation with ReLAx

This repository contains an implementation of Double Deep Q-Network (DDQN) with ReLAx.

The DDQN agent was trained on the Kangaroo-v0 Atari Gym environment for 3M environment steps.

Note: For demonstration purposes, training was run for only 3M steps. In the original papers, DQN and its variants are trained for about 200M steps, which may require several days of compute. This is why the performance here is lower than reported in the literature.
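The core idea of Double DQN is to decouple action selection from action evaluation when forming bootstrap targets: the online network picks the greedy next action, and the target network evaluates it, which reduces the overestimation bias of vanilla DQN. A minimal sketch of that target computation (standalone NumPy, not ReLAx's actual API; the function and argument names are illustrative):

```python
import numpy as np

def ddqn_targets(rewards, next_obs, dones, q_online, q_target, gamma=0.99):
    """Double DQN bootstrap targets.

    q_online / q_target are assumed to map a batch of observations to an
    array of per-action Q-values with shape (batch, n_actions).
    """
    # Action selection: greedy actions under the *online* network
    greedy_actions = q_online(next_obs).argmax(axis=1)
    # Action evaluation: those actions scored by the *target* network
    next_q = q_target(next_obs)[np.arange(len(greedy_actions)), greedy_actions]
    # Standard TD target, zeroing the bootstrap term at episode ends
    return rewards + gamma * (1.0 - dones) * next_q
```

Vanilla DQN would instead take `q_target(next_obs).max(axis=1)`, letting the same network both select and evaluate, which systematically picks upward noise.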

The graph of average return vs. environment step is shown below (logged every 50k steps):

[figure: ddqn_training — average return vs. environment step]

The distribution of estimated Q-values vs. empirical (data) Q-values is shown below:

[figure: dqn_q_func — estimated vs. data Q-values]
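The "data Q-values" in a plot like this are typically Monte-Carlo discounted returns computed from logged rollouts, which serve as an unbiased reference for the network's estimates. A minimal sketch of computing them for one episode (standalone NumPy, assumed here rather than taken from ReLAx):

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Per-step discounted returns ("data Q-values") for a single episode.

    returns[t] = rewards[t] + gamma * rewards[t+1] + gamma^2 * rewards[t+2] + ...
    computed in a single backward pass over the episode.
    """
    returns = np.zeros(len(rewards), dtype=float)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns
```

Plotting these returns against the Q-network's predictions at the same states makes overestimation visible: a well-calibrated critic's estimates cluster around the diagonal, while vanilla DQN's typically sit above it.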

Resulting policy:

[video: ddqn_run.mp4]