This project is a variant of the multi-agent game platform TripleSumo (Publication, Repository). A live demo of this game can be found in This Video. You're welcome to visit the author's YouTube page to find more of her work. Contact her at [email protected] if you have any inquiries.

Steps to install Ant_racer:
- Download MuJoCo 2.0, rename the package to `mujoco200`, and extract it in `/home/your_username/.mujoco/`; then download the license into the same directory.
- Add `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/your_username/.mujoco/mujoco200/bin` to your `~/.bashrc`, then `source ~/.bashrc`.
- Use Anaconda to create a virtual environment `ant_racer` with `conda env create -f ant_racer_env.yml`; then `conda activate ant_racer`.
- `git clone https://github.com/niart/Ant_racer.git` and `cd Ant_racer`.
- Use the `gym` folder of this repository to replace the `gym` installed in your conda environment `ant_racer`.
- To test the demo, run `python chase_demo.py`. If you meet the error `Creating window glfw ... ERROR: GLEW initalization error: Missing GL version`, you may add `export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so` to `~/.bashrc`, then `source ~/.bashrc`.
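The MuJoCo-related steps above can be sanity-checked with a small script. This is a sketch under two assumptions not stated in the original: the license file uses MuJoCo's conventional name `mjkey.txt`, and the function name `check_mujoco_setup` is illustrative.

```python
import os
from pathlib import Path

def check_mujoco_setup(home: str) -> list:
    """Return a list of problems with the MuJoCo 2.0 setup under `home`."""
    problems = []
    mujoco_dir = Path(home) / ".mujoco" / "mujoco200"
    if not mujoco_dir.is_dir():
        problems.append(f"missing directory: {mujoco_dir}")
    # Assumes the license was saved as mjkey.txt (MuJoCo's usual filename).
    if not (Path(home) / ".mujoco" / "mjkey.txt").is_file():
        problems.append("missing license key: ~/.mujoco/mjkey.txt")
    # LD_LIBRARY_PATH must include mujoco200/bin so the library can be loaded.
    bin_dir = str(mujoco_dir / "bin")
    if bin_dir not in os.environ.get("LD_LIBRARY_PATH", ""):
        problems.append(f"LD_LIBRARY_PATH does not include {bin_dir}")
    return problems
```

Run it before `python chase_demo.py`; an empty list means the directory layout and environment variable match the steps above.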
A simple RL algorithm interface implementing DDPG is provided in `chase_runmanin.py`. Important training steps are in `/gym/envs/mujoco/chase.py`.
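Two core pieces of DDPG machinery, a replay buffer and Polyak-averaged target networks, can be sketched in plain Python. This is a minimal illustration of the algorithm in general, not the code in `chase_runmanin.py`; the class and parameter names are illustrative.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer of (state, action, reward, next_state, done) tuples.

    DDPG is off-policy: it samples random minibatches of past transitions,
    which breaks the temporal correlation of consecutive environment steps.
    """
    def __init__(self, capacity: int = 100_000):
        self.buffer = deque(maxlen=capacity)  # old transitions are evicted

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size: int):
        return random.sample(self.buffer, batch_size)

def soft_update(target: list, source: list, tau: float = 0.005) -> list:
    """Polyak averaging: target <- tau * source + (1 - tau) * target.

    DDPG applies this element-wise to the target actor/critic weights after
    every gradient step, which stabilises the bootstrapped Q-learning targets.
    """
    return [tau * s + (1.0 - tau) * t for t, s in zip(target, source)]
```

In a full implementation the same two ideas appear with tensors instead of Python lists; the training loop alternates environment steps, buffer sampling, gradient updates, and a `soft_update` of both target networks.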
To cite this platform:
```
@misc{Ant_racer,
  howpublished = {\href{https://github.com/niart/Ant_racer}{N. Wang, Ant_racer: a multi-agent pursuit-evasion platform. Github Repository, 2021, https://github.com/niart/Ant_racer}},
}
```
An overview of the Ant_racer game: