Iris is a library for performing synchronous and distributed zeroth-order optimization at scale. It is designed primarily for training large neural networks with evolutionary methods, but it can be applied to optimize any high-dimensional blackbox function.
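To make the zeroth-order idea concrete, here is a minimal pure-Python sketch of the general technique (antithetic evolution-strategies gradient estimation), not Iris's API; the names `es_gradient` and `sphere` and all hyperparameters below are illustrative:

```python
import random

def es_gradient(f, theta, sigma=0.1, num_samples=50):
    """Antithetic ES estimate of the gradient of f at theta.

    Only evaluates f (blackbox); no analytic derivatives needed.
    """
    dim = len(theta)
    grad = [0.0] * dim
    for _ in range(num_samples):
        eps = [random.gauss(0.0, 1.0) for _ in range(dim)]
        # Evaluate the objective at mirrored perturbations.
        plus = f([t + sigma * e for t, e in zip(theta, eps)])
        minus = f([t - sigma * e for t, e in zip(theta, eps)])
        for i in range(dim):
            grad[i] += (plus - minus) * eps[i] / (2.0 * sigma * num_samples)
    return grad

def sphere(x):
    # Toy blackbox objective: reward is highest at the origin.
    return -sum(v * v for v in x)

theta = [1.0, -2.0]
for _ in range(200):
    g = es_gradient(sphere, theta)
    theta = [t + 0.05 * gi for t, gi in zip(theta, g)]  # gradient ascent
```

In a distributed setup, the `num_samples` perturbed evaluations are what the workers compute in parallel; the coordinator only aggregates the scalar returns.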
To launch a local optimization, run:

```bash
python3 -m launch \
  --lp_launch_type=local_mp \
  --experiment_name=iris_example \
  --config=iris/configs/simple_example_config.py \
  --logdir=/tmp/bblog \
  --num_workers=16 \
  --num_eval_workers=10 \
  --alsologtostderr
```
Iris has been used in the following publications:

- SARA-RT: Scaling up Robotics Transformers with Self-Adaptive Robust Attention (ICRA 2024 - Best Robotic Manipulation Award)
- Embodied AI with Two Arms: Zero-shot Learning, Safety and Modularity (IROS 2024 - Robocup Best Paper Award)
- Agile Catching with Whole-Body MPC and Blackbox Policy Learning (L4DC 2023)
- Discovering Adaptable Symbolic Algorithms from Scratch (IROS 2023, Best Paper Finalist)
- Visual-Locomotion: Learning to Walk on Complex Terrains with Vision (CoRL 2022)
- ES-ENAS: Efficient Evolutionary Optimization for Large Hybrid Search Spaces (arXiv, 2021)
- Hierarchical Reinforcement Learning for Quadruped Locomotion (RSS 2021)
- Rapidly Adaptable Legged Robots via Evolutionary Meta-Learning (IROS 2020)
- Robotic Table Tennis with Model-Free Reinforcement Learning (IROS 2020)
- ES-MAML: Simple Hessian-Free Meta Learning (ICLR 2020)
- Provably Robust Blackbox Optimization for Reinforcement Learning (CoRL 2019)
- Structured Evolution with Compact Architectures for Scalable Policy Optimization (ICML 2018)
- Optimizing Simulations with Noise-Tolerant Structured Exploration (ICRA 2018)
- On Blackbox Backpropagation and Jacobian Sensing (NeurIPS 2017)
Disclaimer: This is not an officially supported Google product.