An Evolutionary Strategies toolkit for high-speed blackbox optimization. Evokit currently supports Classic ES (μ, λ), Natural Evolution Strategies, and Canonical Evolution Strategies optimization. It requires a processor supporting AVX operations.
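
For readers unfamiliar with the (μ, λ) family, the sketch below shows a minimal, self-contained (μ/μ, λ)-style Evolution Strategy loop in Rust: sample λ offspring around the current search point, keep the best μ, and recombine them into the next search point. This is illustrative only and is not Evokit's API; the toy objective `sphere`, the tiny `XorShift` RNG, and all hyperparameter values are stand-ins chosen to keep the example dependency-free.

```rust
// Minimal (mu, lambda) Evolution Strategy sketch. Not Evokit's API.

/// Tiny xorshift RNG so the sketch needs no external crates.
struct XorShift(u64);

impl XorShift {
    fn next_u64(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
    /// Uniform sample in [0, 1).
    fn uniform(&mut self) -> f64 {
        (self.next_u64() >> 11) as f64 / (1u64 << 53) as f64
    }
    /// Standard normal sample via Box-Muller.
    fn gaussian(&mut self) -> f64 {
        let (u1, u2) = (self.uniform().max(1e-12), self.uniform());
        (-2.0 * u1.ln()).sqrt() * (2.0 * std::f64::consts::PI * u2).cos()
    }
}

/// Toy objective to minimize: the sphere function.
fn sphere(x: &[f64]) -> f64 {
    x.iter().map(|v| v * v).sum()
}

fn main() {
    let (dim, mu, lambda, sigma, generations) = (10usize, 5usize, 20, 0.1, 200);
    let mut rng = XorShift(0x9E3779B97F4A7C15);
    let mut parent = vec![1.0; dim];

    for g in 0..generations {
        // Sample lambda offspring by perturbing the current parent with Gaussian noise.
        let mut offspring: Vec<(f64, Vec<f64>)> = (0..lambda)
            .map(|_| {
                let child: Vec<f64> = parent
                    .iter()
                    .map(|p| p + sigma * rng.gaussian())
                    .collect();
                (sphere(&child), child)
            })
            .collect();

        // Rank by fitness (lower is better) and recombine the top mu into the new parent.
        offspring.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
        for d in 0..dim {
            parent[d] = offspring[..mu].iter().map(|(_, c)| c[d]).sum::<f64>() / mu as f64;
        }

        if g % 50 == 0 {
            println!("gen {:3}: best fitness {:.6}", g, offspring[0].0);
        }
    }
}
```

The NES and Canonical ES optimizers differ in how they weight and aggregate the offspring, but follow the same sample-evaluate-update loop.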
This toolkit has two binaries:
evo-rank
: an out-of-the-box ES ranker, similar to the papers listed below

mulberry
: a framework that supports custom optimization goals, including market-level metrics. Most of our work focuses on this framework, so this README documents it.
Please see docs/DEVELOPMENT.md for how to run this code locally and edit the code in this repo.
The flowchart above gives a high-level view of how Mulberry learns a new model.
A user must provide:
- Scoring & policy config: specifies which metrics to optimize and how to weight each metric in the final fitness computation (see the sketch after this list). These weights are hyperparameters and must be provided by the user; they are NOT learned by the framework. Please see the tuning section for suggestions on how to select these values.
- Train/validation data: separate train and validation data, provided in LibSVM format
- Model & optimizer config: currently passed as separate arguments at train time
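
To make the weighted fitness computation concrete, here is a hypothetical sketch in Rust. The metric names (`ndcg@10`, `purchase_rate`), the weight values, and the helper `weighted_fitness` are invented for illustration and are not Mulberry's actual config schema; the real configuration is described in docs/MULBERRY.md.

```rust
use std::collections::HashMap;

// Hypothetical sketch: combine per-metric scores into a single fitness value
// using user-supplied weights. Names and values are illustrative only.
fn weighted_fitness(metrics: &HashMap<&str, f64>, weights: &HashMap<&str, f64>) -> f64 {
    weights
        .iter()
        .map(|(name, w)| w * metrics.get(name).copied().unwrap_or(0.0))
        .sum()
}

fn main() {
    // Per-candidate metric values produced by evaluating a policy on validation data.
    let metrics = HashMap::from([("ndcg@10", 0.42), ("purchase_rate", 0.031)]);
    // User-chosen hyperparameter weights (NOT learned by the framework).
    let weights = HashMap::from([("ndcg@10", 1.0), ("purchase_rate", 25.0)]);

    println!("fitness = {:.4}", weighted_fitness(&metrics, &weights));
}
```

In practice the weights come from the scoring & policy config rather than being hard-coded; see the tuning section for guidance on choosing them.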
For a more detailed view of each component of Mulberry, please see docs/MULBERRY.md. Additional information is available in the cargo docs.
To build your own pipeline using Mulberry, please see docs/BUILDING_PIPELINE.md.
- Paper on Evokit/Mulberry
- ES-Rank: Evolution Strategy Learning to Rank Approach
- An Evolutionary Strategy with Machine Learning for Learning to Rank in Information Retrieval
- Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari
- Natural Evolution Strategies
- Evolution Strategies as a Scalable Alternative to Reinforcement Learning
- From Complexity to Simplicity: Adaptive ES-Active Subspaces for Blackbox Optimization
- Limited Evaluation Cooperative Co-evolutionary Differential Evolution for Large-scale Neuroevolution