This repository has been archived by the owner on May 6, 2021. It is now read-only.

Support CommonRLInterface #70

Closed
findmyway opened this issue Jun 16, 2020 · 3 comments

@findmyway
Member

Although all the environments are written against ReinforcementLearningBase.jl, it is relatively easy to also support the minimal interface provided by CommonRLInterface:

struct CommonEnvWrapper{T<:RLBase.AbstractEnv} <: CommonRL.AbstractCommonEnv
    env::T
end

Base.convert(::Type{CommonRL.AbstractCommonEnv}, env::RLBase.AbstractEnv) = CommonEnvWrapper(env)

function CommonRL.step!(env::CommonEnvWrapper, action)
    env.env(action)  # apply the action to the wrapped environment
    obs = RLBase.observe(env.env)
    RLBase.get_state(obs), RLBase.get_reward(obs), RLBase.get_terminal(obs), obs
end

function CommonRL.reset!(env::CommonEnvWrapper)
    RLBase.reset!(env.env)
    obs = RLBase.observe(env.env)
    RLBase.get_state(obs), RLBase.get_reward(obs), RLBase.get_terminal(obs), obs
end

CommonRL.actions(env::CommonEnvWrapper) = RLBase.get_action_space(env.env)

What do you think? @zsunberg

@zsunberg
Member

Yep, this is what I had in mind! Except there should also be a convert(::Type{<:RLBase.AbstractEnv}, env::CommonRLInterface.CommonEnv) for the other direction.
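A wrapper for that reverse direction might look like the following. This is only a sketch mirroring the snippet above: the name RLBaseEnvWrapper and the exact set of forwarded methods are assumptions, not an agreed design, and the return convention of CommonRL.step! is taken from the proposal above.

```julia
# Sketch (assumed names): wrap a CommonRL environment as an RLBase environment.
struct RLBaseEnvWrapper{T<:CommonRL.AbstractCommonEnv} <: RLBase.AbstractEnv
    env::T
end

Base.convert(::Type{RLBase.AbstractEnv}, env::CommonRL.AbstractCommonEnv) =
    RLBaseEnvWrapper(env)

# RLBase environments are called with an action; forward to CommonRL.step!.
(env::RLBaseEnvWrapper)(action) = CommonRL.step!(env.env, action)

RLBase.reset!(env::RLBaseEnvWrapper) = CommonRL.reset!(env.env)
RLBase.get_action_space(env::RLBaseEnvWrapper) = CommonRL.actions(env.env)
```

Observation accessors (get_state, get_reward, get_terminal) would still need to be defined on whatever CommonRL.step! returns, which is the part most dependent on how the RLBase API cleanup turns out.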

@findmyway
Member Author

I'll clean up the APIs in RLBase a little and then work on this issue, together with JuliaReinforcementLearning/CommonRLInterface.jl#18 (comment), next week.

@findmyway
Member Author

This is now supported in RLBase directly: JuliaReinforcementLearning/ReinforcementLearningBase.jl-Archive#58
