JaxRobotarium: Train and Deploy Multi-Robot Policies in 10 Minutes

TLDR

JaxRobotarium is an end-to-end, open-source, Jax-based platform designed to significantly accelerate the training and deployment of multi-robot reinforcement learning (MRRL) algorithms while maintaining ease of use and open access to hardware-based evaluations.

Team: Shalin Anand Jain, Jiazhen Liu, Siva Kailas, Harish Ravichandar

Preprint: https://arxiv.org/abs/2505.06771

Installation (assumes integration with JaxMARL as the platform of choice)

Install Jax and JaxMARL

  1. Create a conda env
    conda create -n "jaxrobotarium" python=3.10 ipython
    conda activate jaxrobotarium
    
  2. Clone and install JaxRobotarium (from the top of the repo)
    pip install -e .
    
  3. Clone JaxMARL into JaxMARL-Robotarium and install it (from the top of that repo) [WIP: add JaxRobotarium as a submodule in JaxMARL]
    pip install -e .
    
  4. Install the CUDA runtime
    pip install nvidia-cuda-runtime-cu12
    
  5. Install Jax (use the current version supported by JaxMARL)
    pip install -U "jax[cuda12]==0.4.38"
    
  6. Install Flax (use the version compatible with the Jax version installed above)
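
Optionally, you can sanity-check that Jax was installed with CUDA support:

    python -c "import jax; print(jax.devices())"

A GPU-enabled install should list a CUDA device (e.g., CudaDevice(id=0)); if only CPU devices appear, the CUDA wheels did not install correctly.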

Install Robotarium Simulator

  1. Initialize the Robotarium Simulator submodule
    git submodule init
    git submodule update
    
  2. Install (within jaxrobotarium/robotarium_python_simulator)
    pip install -e .
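
Optionally, verify that the non-jax simulator is importable (rps is the package name used by the upstream robotarium_python_simulator; adjust if your checkout differs):

    python -c "import rps.robotarium"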
    

Training

Config

See baselines for example configs. We call out a few key parameters below; see the example configs, robotarium_env.py, and the scenario definitions for the full set of parameters.

"ENV_NAME": ...,
"ENV_KWARGS": {
  "num_agents": ...,          # must match the number of robots
  "max_steps": 100,
  "action_type": ...,         # Discrete or Continuous
  "robotarium": {
    "number_of_robots": ...,  # must match num_agents
  },
  "controller": {
    "controller": ...,        # controller that drives a robot to its goal
    "barrier_fn": ...,        # collision-avoidance barrier function
  },
  "update_frequency": 30
}

Callouts:

  • "action_type": if Discrete, actions are interpreted as waypoints to follow; if Continuous, actions are interpreted as unicycle velocity commands and "controller" should be set to null (see the illustrative config after this list)
  • see constants.py for the available controllers and barrier functions; set either to null to disable it
  • "update_frequency" controls how many Robotarium steps are taken per environment step
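
For concreteness, a filled-in environment block might look like the sketch below. This is illustrative only: the environment name, controller, and barrier-function values are placeholders, so check constants.py and the configs under baselines for the names that actually exist.

"ENV_NAME": "navigation",
"ENV_KWARGS": {
  "num_agents": 4,
  "max_steps": 100,
  "action_type": "Discrete",
  "robotarium": {
    "number_of_robots": 4,
  },
  "controller": {
    "controller": "<controller name from constants.py>",
    "barrier_fn": "<barrier function name from constants.py>",
  },
  "update_frequency": 30
}

For Continuous actions, set "action_type" to "Continuous" and "controller" to null, since the policy then outputs unicycle velocity commands directly.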

Run Training

Example: train a multi-robot navigation policy

python baselines/QLearning/qmix_rnn.py +alg=ql_rnn_navigation

Deployment to Robotarium

See README.md in deploy/ for detailed instructions on how to deploy to the Robotarium.

Dev Notes

Unit Testing

Each folder has a test folder for unit tests covering the files in that folder. For example, to run the unit tests for navigation.py:

python -m unittest jaxrobotarium/scenarios/test/test_navigation.py
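
New test files follow the standard unittest layout. Below is a minimal hypothetical sketch of such a file (the class and test names are invented for illustration and are not the contents of test_navigation.py):

import unittest

class TestNavigationSmoke(unittest.TestCase):
    def test_scenario_module_imports(self):
        # the scenario module should import cleanly once the package is installed
        import jaxrobotarium.scenarios.navigation  # noqa: F401

if __name__ == "__main__":
    unittest.main()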

Scenario Cross-Compatibility

In order to be compatible with the Robotarium hardware platform, developed scenarios must also run on the non-jax Robotarium simulator. To achieve this, every scenario should contain only the following import:

# wrap import statement in try-except block to allow for correct import during deployment
try:
    from jaxrobotarium.robotarium_env import *
except Exception as e:
    from robotarium_env import *

And the following logic within __init__:

if self.backend == 'jax':
    super().__init__(num_agents, max_steps, **kwargs)
else:
    # deployment backend: build the initial state up front and hand the
    # robot poses to the non-jax Robotarium simulator as initial conditions
    self.num_agents = num_agents
    self.initial_state = self.initialize_robotarium_state(kwargs.get("seed", 0))
    kwargs['initial_conditions'] = self.initial_state.p_pos[:self.num_agents, :].T
    super().__init__(num_agents, max_steps, **kwargs)

where initialize_robotarium_state is specific to the scenario being developed. See scenarios/navigation.py for an example.
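
Putting the import pattern and the __init__ logic together, a new scenario file might be structured like the sketch below. The base-class name (RobotariumEnv), the backend keyword argument, and the State fields are assumptions for illustration; mirror scenarios/navigation.py for the actual API.

# my_scenario.py -- illustrative skeleton of a cross-compatible scenario
# wrap import statement in try-except block to allow for correct import during deployment
try:
    from jaxrobotarium.robotarium_env import *
except Exception as e:
    from robotarium_env import *

class MyScenario(RobotariumEnv):  # base-class name assumed for illustration
    def __init__(self, num_agents, max_steps=100, **kwargs):
        self.backend = kwargs.get("backend", "jax")  # assumed keyword; see navigation.py
        if self.backend == 'jax':
            super().__init__(num_agents, max_steps, **kwargs)
        else:
            # deployment backend: build the initial state up front and hand the
            # robot poses to the non-jax Robotarium simulator
            self.num_agents = num_agents
            self.initial_state = self.initialize_robotarium_state(kwargs.get("seed", 0))
            kwargs['initial_conditions'] = self.initial_state.p_pos[:self.num_agents, :].T
            super().__init__(num_agents, max_steps, **kwargs)

    def initialize_robotarium_state(self, seed=0):
        # scenario-specific initial state (robot poses, goals, ...);
        # see scenarios/navigation.py for a concrete implementation
        raise NotImplementedError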

Citing

If you use JaxRobotarium, please cite:

  1. Our work
@article{jain2025jaxrobotarium,
  title={JaxRobotarium: Training and Deploying Multi-Robot Policies in 10 Minutes},
  author={Jain, Shalin Anand and Liu, Jiazhen and Kailas, Siva and Ravichandar, Harish},
  journal={arXiv preprint arXiv:2505.06771},
  year={2025}
}
  2. The Robotarium
@inproceedings{pickem2017robotarium,
  title={The robotarium: A remotely accessible swarm robotics research testbed},
  author={Pickem, Daniel and Glotfelter, Paul and Wang, Li and Mote, Mark and Ames, Aaron and Feron, Eric and Egerstedt, Magnus},
  booktitle={2017 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={1699--1706},
  year={2017},
  organization={IEEE}
}

If you integrate with JaxMARL, please cite:

@inproceedings{
    flair2024jaxmarl,
    title={JaxMARL: Multi-Agent RL Environments and Algorithms in JAX},
    author={Alexander Rutherford and Benjamin Ellis and Matteo Gallici and Jonathan Cook and Andrei Lupu and Gar{\dh}ar Ingvarsson and Timon Willi and Ravi Hammond and Akbir Khan and Christian Schroeder de Witt and Alexandra Souly and Saptarashmi Bandyopadhyay and Mikayel Samvelyan and Minqi Jiang and Robert Tjarko Lange and Shimon Whiteson and Bruno Lacerda and Nick Hawes and Tim Rockt{\"a}schel and Chris Lu and Jakob Nicolaus Foerster},
    booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
    year={2024},
}

Acknowledgements

We would like to recognize JaxMARL and MARBLER for heavily inspiring the design of this repository.
