Question: How can I set up a custom environment? #198
Comments
Greetings! We have prepared documentation on how to customize environments and algorithms within the LightZero framework, which can be accessed through the following links: Although these documents provide fundamental guidance, they may not cover every detail. Should you encounter any issues or have questions during the customization process, please do not hesitate to reach out to us. We are eager to help ensure a smooth customization experience. Best wishes!
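As a rough illustration of what a gym-style custom environment usually needs, here is a minimal sketch. The class name, the toy counting task, and the 4-tuple `step` return follow the classic gym convention; none of this is the exact LightZero wrapper API, which the linked documentation describes.

```python
# Minimal sketch of a gym-style environment (illustrative, not the
# exact LightZero env interface): reset/step/render/close methods.

class MyCustomEnv:
    """Toy environment: the agent counts up to a target value."""

    def __init__(self, target=3):
        self.target = target
        self.state = 0

    def reset(self):
        # Return the initial observation.
        self.state = 0
        return self.state

    def step(self, action):
        # Apply one action; return (obs, reward, done, info).
        if action == 1:
            self.state += 1
        done = self.state >= self.target
        reward = 1.0 if done else 0.0
        return self.state, reward, done, {}

    def render(self):
        print(f"state={self.state}")

    def close(self):
        pass


# Usage: roll out one episode with a fixed policy.
env = MyCustomEnv(target=3)
obs = env.reset()
done = False
total = 0.0
while not done:
    obs, reward, done, info = env.step(1)
    total += reward
```

A framework wrapper would typically add observation/action space definitions and seeding on top of these methods.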
Thanks for getting back to me, and thank you for the clarification!
Certainly, an environment that allows two players to make any legal moves can be created. Such environments are quite common in Multi-Agent Reinforcement Learning (MARL); the PettingZoo library, for example, provides many of them, and you can browse PettingZoo's GitHub repository for more information. In the LightZero project, we have some ongoing pull requests, such as PR#149, PR#153, and PR#171. You can follow the updates on these pull requests, or contribute your own insights.

Regarding encoding "no action" as an embedding vector with a value of 0: this is technically feasible. However, it requires that the environment's design explicitly clarifies how this condition is interpreted and how agents can recognize and learn a "no action" strategy.

For self-play algorithms seeking data efficiency, you might refer to research papers that focus on data-efficient reinforcement learning. For instance, data utilization can be enhanced through methods like model-based reinforcement learning, representation learning, and so on. Relevant resources can be found in awesome-model-based-RL.
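The "no action" encoding mentioned above can be sketched as follows. The sentinel value, embedding dimension, and one-hot placeholder are all hypothetical choices for illustration; in a real model the embedding for legal moves would be learned.

```python
# Illustrative sketch (not LightZero code): mapping a "no action" /
# pass move to an all-zero embedding vector in a two-player setting.

EMBED_DIM = 4   # hypothetical embedding size
NO_ACTION = -1  # hypothetical sentinel for "no action"


def encode_action(action, dim=EMBED_DIM):
    # The pass move maps to the zero vector, which the network can
    # learn to treat as a distinct condition; real moves get a
    # placeholder one-hot "embedding" here for demonstration.
    if action == NO_ACTION:
        return [0.0] * dim
    vec = [0.0] * dim
    vec[action % dim] = 1.0
    return vec


pass_vec = encode_action(NO_ACTION)
move_vec = encode_action(1)
```

The important design point is exactly what the answer notes: the zero vector only works if the environment and network agree on its meaning, so it must never collide with the encoding of a legal move.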
Hello,
I came across this repository and was wondering what the steps and requirements would be to set up a custom environment and use the algorithm with it. For example, which functions would a gym environment need to implement?
Thanks