Master classic RL, deep RL, distributional RL, inverse RL, and more using OpenAI Gym and TensorFlow with extensive Math
Clean, robust, and unified PyTorch implementation of popular DRL algorithms (Q-learning, Dueling DDQN, PER, C51, Noisy DQN, PPO, DDPG, TD3, SAC, ASL)
A PyTorch tutorial for DRL (Deep Reinforcement Learning)
C51-DDQN in Keras
Deep reinforcement learning code for study. Currently covers only these algorithms: DQN, C51, QR-DQN, IQN, and QUOTA.
DQN-Atari-Agents: Modularized & parallel PyTorch implementation of several DQN agents, including DDQN, Dueling DQN, Noisy DQN, C51, Rainbow, and DRQN
Paddle-RLBooks is a reinforcement learning code study guide based on pure PaddlePaddle.
🍰 Experiments for the 8051 (C51) microcontroller
🐳 Implementation of various Distributional Reinforcement Learning algorithms using TensorFlow 2.
An implementation of an Autonomous Vehicle Agent in CARLA simulator, using TF-Agents
A collection of Deep Reinforcement Learning algorithms implemented with PyTorch to solve Atari games and classic control tasks like CartPole, LunarLander, and MountainCar.
A minimal implementation of a Mealy finite state machine, written in ANSI C, easy to extend and learn from, and well suited to resource-constrained scenarios. It works as follows: 1. Initialize the machine with a specified start state and final state, setting the current state to the start state; the machine begins running. 2. When a relevant event occurs, pass the event's associated variable value to the machine and perform the state transition. 3. If the machine enters the final state (determined by checking whether the current state equals the final state), it halts; otherwise it keeps running.
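As a rough illustration of that three-step workflow, here is a minimal sketch of such a Mealy machine in ANSI C. All names (fsm_t, fsm_init, fsm_step, fsm_finished) and the example states, events, and outputs are hypothetical, not taken from the repository.

```c
#include <stdio.h>

/* Hypothetical minimal Mealy FSM following the three-step workflow
 * above; names and transitions are illustrative only. */

typedef enum { ST_IDLE, ST_RUNNING, ST_DONE } state_t;

typedef struct {
    state_t final;   /* final (halting) state */
    state_t current; /* current state */
} fsm_t;

/* Step 1: initialize with a start and a final state;
 * the current state is set to the start state. */
void fsm_init(fsm_t *m, state_t start, state_t final)
{
    m->current = start;
    m->final = final;
}

/* Step 2: on an event, pass its associated value to the machine and
 * perform the transition; being a Mealy machine, the output depends
 * on both the current state and the input event. */
int fsm_step(fsm_t *m, int event)
{
    int output = 0;

    switch (m->current) {
    case ST_IDLE:
        if (event == 1) { m->current = ST_RUNNING; output = 10; }
        break;
    case ST_RUNNING:
        if (event == 2) { m->current = ST_DONE; output = 20; }
        break;
    default:
        break;
    }
    return output;
}

/* Step 3: the machine halts once current == final. */
int fsm_finished(const fsm_t *m)
{
    return m->current == m->final;
}

int main(void)
{
    fsm_t m;
    int events[] = { 1, 2 };
    int i;

    fsm_init(&m, ST_IDLE, ST_DONE);
    for (i = 0; i < 2 && !fsm_finished(&m); i++)
        printf("event %d -> output %d\n", events[i], fsm_step(&m, events[i]));

    printf("halted: %s\n", fsm_finished(&m) ? "yes" : "no");
    return 0;
}
```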