Several issues about training a no-limit hold'em agent #314

Open
hzwudi2014 opened this issue Mar 19, 2024 · 0 comments

  1. How can I modify the default env, e.g. each player's starting chip count, or game settings such as an ante (where every player has to put some chips into the pot before the deal, which differs from a standard game)? (See the first sketch after this list.)
  2. As I understand it, if I want to train a strong no-limit hold'em agent that can reach human level, I should initialize the env with 8 agents and have all of them train with the CFR algorithm (in the example code a CFR agent plays against a random agent, but as I see it that is just a quick demonstration that the RL algorithm runs correctly). (See the self-play sketch below.)
  3. After the model is trained, can I integrate it into my own code by feeding it a given game situation in the expected format? (See the inference sketch below.)
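
For question 1, a minimal sketch, assuming the library is RLCard (the CFR-vs-random example matches RLCard's example scripts) and that your version exposes the `game_num_players` and `chips_for_each` config keys. As far as I can tell an ante is not a built-in config option, so the final comment only outlines where it would go:

```python
# A minimal sketch, assuming RLCard's 'no-limit-holdem' env and that this
# version supports the 'game_num_players' and 'chips_for_each' config keys.
import rlcard

env = rlcard.make(
    'no-limit-holdem',
    config={
        'game_num_players': 8,   # table size
        'chips_for_each': 200,   # each player's starting stack
        'seed': 42,
    },
)

# An ante is not exposed as a config key, so posting one would mean
# subclassing the game logic: deduct a fixed amount from every player's
# remaining chips and add it to the pot at the start of each hand.
```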
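For question 2, the reading of the example is mostly right: evaluating against a random agent is just a cheap progress check, not the full training recipe. But note that RLCard's CFR agent is tabular (it traverses the whole game tree and needs `allow_step_back=True`), which is only feasible for tiny games such as Leduc Hold'em; for 8-player no-limit hold'em the practical route is deep RL in self-play (DQN/NFSP/DMC-style). A hedged sketch using RLCard's DQN agent, one learner per seat:

```python
# A self-play training sketch, assuming RLCard's PyTorch DQNAgent.
# Tabular CFR will not scale to 8-player no-limit hold'em, so deep RL
# agents train against each other here instead.
import rlcard
from rlcard.agents import DQNAgent
from rlcard.utils import reorganize

env = rlcard.make('no-limit-holdem', config={'game_num_players': 8})

# One learning agent per seat (they could also share weights).
agents = [
    DQNAgent(
        num_actions=env.num_actions,
        state_shape=env.state_shape[0],
        mlp_layers=[128, 128],
    )
    for _ in range(env.num_players)
]
env.set_agents(agents)

for episode in range(10000):
    # Roll out one hand of self-play and collect per-player trajectories.
    trajectories, payoffs = env.run(is_training=True)
    trajectories = reorganize(trajectories, payoffs)
    # Feed each seat's transitions to its own learner.
    for agent, traj in zip(agents, trajectories):
        for ts in traj:
            agent.feed(ts)
```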
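For question 3, yes: a trained RLCard-style agent can be queried directly via its `eval_step(state)` method, as long as the state you feed it matches the env's state format. A sketch, assuming the agent object was saved earlier with `torch.save(agent, path)` (the checkpoint filename is hypothetical):

```python
# An inference sketch, assuming an RLCard-style agent pickled with torch.save.
import rlcard
import torch

agent = torch.load('dqn_nlh_agent.pth')  # hypothetical checkpoint path

env = rlcard.make('no-limit-holdem', config={'game_num_players': 8})
state, player_id = env.reset()

# eval_step expects a state dict in RLCard's format, roughly
# {'obs': np.ndarray, 'legal_actions': OrderedDict, 'raw_obs': ...}.
# If you encode a hand from your own code, it must match this layout.
action, _info = agent.eval_step(state)
print('agent chooses action', action)
```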