Dynamic programming #12

Open
lamare3423 opened this issue Jan 8, 2022 · 6 comments
Comments

@lamare3423

How can we add a dynamic Bellman equation for the reward function? It would give us a more sensitive reward signal.
Thank you.

@lamare3423
Author

lamare3423 commented Jan 8, 2022

So how can we implement policy iteration and value iteration?
If you have any ideas, please email me: [email protected]

@gbartyzel
Owner

gbartyzel commented Jan 16, 2022

@lamare3423 What do you mean by "add a dynamic Bellman equation for the reward function"? Do you want to customize the reward function?
This environment is ready to use with any RL algorithm. Just create an env: the reset() method restores the environment to its initial state and returns that initial state, and the step() method returns the next state, the reward, and the termination information. That is sufficient to perform value or policy iteration, or even more complicated algorithms such as SAC, PPO, etc.
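For reference, a minimal interaction-loop sketch along those lines, assuming a Gym-compatible environment (the env id below is a placeholder, not the actual id from this repo):

```python
import gym

env = gym.make("MobileRobotEnv-v0")  # placeholder id; use the env provided by this repo
state = env.reset()                  # restores the env and returns the initial state

done = False
episode_return = 0.0
while not done:
    action = env.action_space.sample()                 # replace with your policy/agent
    next_state, reward, done, info = env.step(action)  # next state, reward, termination info
    episode_return += reward
    state = next_state

print("episode return:", episode_return)
```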

@lamare3423
Author

@Souphis I want to understand something. If we apply a dynamic reward function, will our reward function be more successful? Is that true? For example, how can we customize the reward function for your work? Should we write the code that builds the reward function with dynamic programming in our main function, or code it into the agent we will use? Do you have any examples? For instance, how can the reward function be converted into a dynamic reward function with the DDPG algorithm, and does it help?
Thanks,

@gbartyzel
Owner

@lamare3423 Oh, okay, so you want to change the reward function during learning? There are two solutions:
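As one illustration of customizing the reward without touching the agent code (not necessarily one of the two solutions referred to above), a minimal sketch using a Gym-style reward wrapper, assuming a Gym-compatible environment; the wrapper name and scaling term are placeholders:

```python
import gym

class ScaledReward(gym.RewardWrapper):
    """Placeholder wrapper: modifies the env reward without changing the agent."""

    def __init__(self, env, scale=1.0):
        super().__init__(env)
        self.scale = scale  # could be adjusted between episodes during learning

    def reward(self, reward):
        # any shaping or scaling logic goes here; plain scaling is just an example
        return self.scale * reward

# usage: env = ScaledReward(gym.make("MobileRobotEnv-v0"), scale=0.5)
```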

@lamare3423
Author

lamare3423 commented Jan 22, 2022

@Souphis First of all, thank you for all the information.
You said to modify the reward function during learning in my agent. I have read a lot about this, but I still don't understand it well: how can I implement it in my environment and agent in code? My biggest problem is the following.
When I have a scene like the one in the figure, with the robot (hexagon) and the target (star), my results are successful.
(Click here for the successful environment)
My results fail when I have a scene like the one in the figure, again with the robot (hexagon) and the target (star). In other words, when the distance between the target and the robot is small, the robot constantly crashes into obstacles.
(Click here for the failure environment)

I'm working on a mobile robot that can avoid obstacles and reach the target. The situations I described above are what I have run into and am trying to solve with what I've built so far. I've used PyRep and I'm working with a DDPG agent. I don't know how to make the changes you suggest for these situations. What should I change in the agent itself and in its network updates? For example, I created a "build critic train method" function in my DDPG agent code; do I need to make changes related to the reward function in that part?
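For context, a typical DDPG critic update looks roughly like the sketch below (all names are illustrative, not taken from this repo or the question above). The reward only enters through the TD target, so customizing the reward usually means changing what the environment or a wrapper returns, rather than the critic-training code itself:

```python
import torch
import torch.nn.functional as F

def critic_update(critic, critic_target, actor_target, critic_opt, batch, gamma=0.99):
    # batch of tensors sampled from a replay buffer
    state, action, reward, next_state, done = batch

    with torch.no_grad():
        next_action = actor_target(next_state)
        target_q = critic_target(next_state, next_action)
        # the reward returned by env.step() enters here, and only here
        y = reward + gamma * (1.0 - done) * target_q

    q = critic(state, action)
    loss = F.mse_loss(q, y)

    critic_opt.zero_grad()
    loss.backward()
    critic_opt.step()
    return loss.item()
```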

@alisalim70

Hi sir, I am preparing a master's degree study ("Deep reinforcement learning approach based on dynamic path planning for mobile robot")
and I found research close to my study:
https://github.com/dranaju/project
Since I am new to programming, I couldn't run the code; there were some errors, as you can see in the attached file. Sir, could you help me with that (running the code)? It would be a great favor.
