Though all the environments here are built on ReinforcementLearningBase.jl, it's relatively easy to also support the minimal interface provided by CommonRLInterface. Something like:
import CommonRLInterface
import ReinforcementLearningBase

const CommonRL = CommonRLInterface
const RLBase = ReinforcementLearningBase

# Wrap any RLBase environment so it can be driven through the CommonRL interface.
struct CommonEnvWrapper{T<:RLBase.AbstractEnv} <: CommonRL.AbstractCommonEnv
    env::T
end

Base.convert(::Type{CommonRL.AbstractCommonEnv}, env::RLBase.AbstractEnv) = CommonEnvWrapper(env)
function CommonRL.step!(env::CommonEnvWrapper, action)
    env.env(action)                    # advance the wrapped RLBase environment
    obs = RLBase.observe(env.env)
    RLBase.get_state(obs), RLBase.get_reward(obs), RLBase.get_terminal(obs), obs
end
function CommonRL.reset!(env::CommonEnvWrapper)
    RLBase.reset!(env.env)
    obs = RLBase.observe(env.env)
    RLBase.get_state(obs), RLBase.get_reward(obs), RLBase.get_terminal(obs), obs
end
CommonRL.actions(env::CommonEnvWrapper) = RLBase.get_action_space(env.env)
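For concreteness, a rollout could then be written purely against the CommonRL side of the wrapper. The sketch below is only illustrative: RLBase.CartPoleEnv() stands in for whatever RLBase environment one actually wants to wrap, and it assumes the action space returned by get_action_space supports rand and that step!/reset! keep the (state, reward, done, info) convention used above.

# Hypothetical usage sketch: a random-policy rollout through the CommonRL side
# of the wrapper. `RLBase.CartPoleEnv()` is only an assumed stand-in environment,
# and we assume the RLBase action space supports `rand`.
function random_rollout(env::CommonRL.AbstractCommonEnv)
    s, r, done, _ = CommonRL.reset!(env)
    total_reward = r
    while !done
        a = rand(CommonRL.actions(env))           # sample a random legal action
        s, r, done, _ = CommonRL.step!(env, a)    # advance one step via CommonRL
        total_reward += r
    end
    return total_reward
end

env = convert(CommonRL.AbstractCommonEnv, RLBase.CartPoleEnv())
random_rollout(env)

Once the wrapper is constructed, nothing in the rollout touches RLBase directly, which is the point of the exercise.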
What do you think? @zsunberg