rlgameauthors is the replication package of the work “Using Reinforcement Learning for Load Testing of Video Games”.
The purpose of this repository is to provide code and data to replicate our results.
- `requirements_study1.txt` provides the list of libraries needed for Study 1.
- `requirements_study2.txt` provides the list of libraries needed for Study 2.

To install the requirements, run the following command in your virtual environment (replacing `<x>` with the study number):

```
pip install -r requirements_study<x>.txt
```
The `CartPole` folder contains everything needed to replicate Study 1 on the CartPole game:

- `CartPole_RELINE_train.py` is the script to train the agent using the RELINE approach for 200 steps. It provides as output `model_RELINE` (i.e., the trained model).
- `CartPole_RL-baseline_train.py` is the script to train the agent using the RL-baseline for 200 steps. It provides as output `model_RL-baseline`.
- `CartPole_RELINE_1k_episodes.py` is the script to play 1000 additional episodes (still training) with `model_RELINE`. It provides as output information about the injected bugs spotted.
- `CartPole_RL-baseline_1k_episodes.py` is the script to play 1000 additional episodes (still training) with `model_RL-baseline`. It provides as output information about the injected bugs spotted.
- `CartPole_Random_1k_episodes.py` is the script to play 1000 episodes with a random agent. It provides as output information about the injected bugs spotted.
- `results.xlsx` contains our results obtained with the three different approaches.
- `RL-baseline_agent_playing.mp4` is a video of the RL-baseline agent playing CartPole.
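The random-agent runs above boil down to an episode loop that samples actions uniformly and records whether an injected-bug state was visited. The following is a minimal sketch of that loop; `StubEnv` and `is_bug_state` are placeholders standing in for the real gym CartPole environment and the scripts' actual bug-detection logic:

```python
import random

class StubEnv:
    """Placeholder for gym's CartPole-v0: 2 actions, short fixed-length episodes."""
    n_actions = 2

    def reset(self):
        self.t = 0
        return [0.0, 0.0, 0.0, 0.0]  # cart position/velocity, pole angle/velocity

    def step(self, action):
        self.t += 1
        obs = [random.uniform(-1, 1) for _ in range(4)]
        done = self.t >= 10
        return obs, 1.0, done, {}

def is_bug_state(obs):
    """Placeholder: the real scripts check whether the agent reached an injected-bug location."""
    return obs[0] > 0.9

def play_random_episodes(env, n_episodes=1000):
    """Play n_episodes with a uniform random policy; count episodes where a bug was spotted."""
    episodes_with_bug = 0
    for _ in range(n_episodes):
        obs = env.reset()
        done = False
        spotted = False
        while not done:
            action = random.randrange(env.n_actions)  # uniform random action
            obs, reward, done, info = env.step(action)
            spotted = spotted or is_bug_state(obs)
        episodes_with_bug += spotted
    return episodes_with_bug

print(play_random_episodes(StubEnv(), n_episodes=100))
```

Swapping `StubEnv` for `gym.make('CartPole-v0')` (plus the real bug check) recovers the structure of `CartPole_Random_1k_episodes.py`; the trained-agent scripts replace the random action with the model's greedy choice.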
The `MsPacman` folder contains everything needed to replicate Study 1 on the MsPacman game:

- `msPacman_RELINE.py` is the script to train the agent using the RELINE approach for 1000 steps and to play 1000 additional episodes (still training), used to create a report of the injected bugs spotted. It provides as output the best (`MsPacmanNoFrameskip-v4-best_RELINE.dat`) and the last (`MsPacmanNoFrameskip-v4-last_RELINE.dat`) models trained, plus information about the injected bugs spotted.
- `msPacman_RL-baseline.py` is the script to train the agent using the RL-baseline for 1000 steps and to play 1000 additional episodes (still training), used to create a report of the injected bugs spotted. It provides as output the best (`MsPacmanNoFrameskip-v4-best_RL-baseline.dat`) and the last (`MsPacmanNoFrameskip-v4-last_RL-baseline.dat`) models trained, plus information about the injected bugs spotted.
- `msPacman_Random.py` is the script to play 1000 episodes with a random agent. It provides as output information about the injected bugs spotted.
- The `bug_left` and `bug_right` folders contain images used to check whether the agent is in one of the injected bug locations.
- The `lib` folder contains files used by the Python scripts to define the DQN model used by the RELINE approach and the RL-baseline, as well as the game environment.
- `results.xlsx` contains our results obtained with the three different approaches.
- `RL-baseline_agent_playing.mp4` is a video of the RL-baseline agent playing MsPacman.
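The bug check in the MsPacman scripts is frame-based: a region of the current observation is compared against the reference images in `bug_left`/`bug_right`. Here is a hedged sketch of that idea with frames represented as flat grayscale pixel lists; the crop coordinates, region size, and match threshold are illustrative assumptions, not the scripts' actual values:

```python
def crop_region(frame, width, x0, y0, w, h):
    """Extract a w x h sub-region from a flat, row-major grayscale frame."""
    return [frame[(y0 + y) * width + (x0 + x)] for y in range(h) for x in range(w)]

def matches_reference(region, reference, max_pixel_diff=5):
    """True if every pixel is within max_pixel_diff of the reference image."""
    return all(abs(a - b) <= max_pixel_diff for a, b in zip(region, reference))

def in_bug_location(frame, width, references, x0=0, y0=0, w=2, h=2):
    """True if the cropped region matches any of the reference bug images."""
    region = crop_region(frame, width, x0, y0, w, h)
    return any(matches_reference(region, ref) for ref in references)

# Tiny 4x4 frame; its top-left 2x2 patch is close to the first reference image.
frame = [10, 12, 0, 0,
         11, 10, 0, 0,
         0,  0,  0, 0,
         0,  0,  0, 0]
refs = [[10, 11, 10, 11], [200, 200, 200, 200]]
print(in_bug_location(frame, 4, refs))  # → True
```

The real scripts operate on the Atari observation arrays (typically NumPy), but the matching logic is the same shape: crop, compare against each stored bug image, report a hit on any match.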
The `SuperTuxKart` folder contains everything needed to replicate Study 2 on the SuperTuxKart game:

- `300episodes_RL-baseline_agent.xlsx` and `300episodes_same_actions.csv` contain information about the rendering time of the frames collected during 300 episodes played by the best RL-baseline agent and 300 episodes played by always performing the same actions, respectively. We used this information to verify the stability of the metric used and to estimate the threshold needed for the RELINE approach's reward function.
- `300episodes_RL-baseline_agent.png` and `300episodes_same_actions.png` graphically show the rendering-time information contained in the `300episodes_RL-baseline_agent.csv` and `300episodes_same_actions.csv` files.
- `FPS_info_RELINE.xlsx`, `FPS_info_RL-baseline.xlsx`, and `FPS_info_Random.xlsx` contain information about the rendering time of the frames collected during episodes played by the agents trained with the three different approaches (RELINE, RL-baseline, Random).
- `RELINE_training.png` and `RL-baseline_training.png` show the mean reward (over the last 100 games) during the training of the agents using the RELINE approach and the RL-baseline.
- `RL-baseline_agent_playing.avi` is a video of the RL-baseline agent playing SuperTuxKart.
- The `code/agents` folder contains:
  - `SuperTuxKart_RELINE.py`, the script to train the agent using the RELINE approach for 2300 steps and to play 1000 additional episodes (still training), used to create the report of the low-FPS positions spotted. It provides as output the best (`SuperTuxKart-best_RELINE.dat`) and the last (`SuperTuxKart-last_RELINE.dat`) models trained, plus information about the rendering time of the frames collected during the episodes.
  - `SuperTuxKart_RL-baseline.py`, the script to train the agent using the RL-baseline for 2300 steps and to play 1000 additional episodes (still training), used to create the report of the low-FPS positions spotted. It provides as output the best (`SuperTuxKart-best_RL-baseline.dat`) and the last (`SuperTuxKart-last_RL-baseline.dat`) models trained, plus information about the rendering time of the frames collected during the episodes.
  - `SuperTuxKart_Random.py`, the script to play 1000 episodes with a random agent. It provides as output information about the rendering time of the frames collected during the episodes.
  - `dqn_model.py`, the Python file defining the DQN model used by the RELINE approach and the RL-baseline.
- `gym-master` is the repository containing the wrapper of the SuperTuxKart game. Follow the instructions in its README file to install it.
Once the wrapper has been installed, move the `agents` folder into `gym-master/gym/`. To obtain reliable data (rendering time of frames), run the scripts with the `chrt --rr 1` option, which on Linux maximizes the priority of the process. Also make sure that no processes other than those run by the OS are running on the machine.
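As described above, RELINE's reward combines the game reward with a bonus for reaching low-FPS positions, using the rendering-time threshold estimated from the 300-episode baseline data. A minimal sketch of that reward shaping follows; the threshold and bonus values here are illustrative assumptions, not the values used in `SuperTuxKart_RELINE.py`:

```python
# Illustrative values: the actual threshold is estimated from the
# 300-episode baseline data, and the bonus is defined in the RELINE script.
RENDER_TIME_THRESHOLD_MS = 30.0
LOW_FPS_BONUS = 1.0

def shaped_reward(game_reward, frame_render_time_ms):
    """Add a bonus to the game reward when the frame took longer than the
    threshold to render, i.e., when the agent exposed a low-FPS position."""
    if frame_render_time_ms > RENDER_TIME_THRESHOLD_MS:
        return game_reward + LOW_FPS_BONUS
    return game_reward

print(shaped_reward(0.5, 45.0))  # → 1.5 (slow frame: bonus granted)
print(shaped_reward(0.5, 12.0))  # → 0.5 (fast frame: plain game reward)
```

This shaping is what steers the RELINE agent toward performance-problem areas of the track, while the RL-baseline trains on the plain game reward only.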