Lightweight, Efficient and Stable DRL Implementation Using PyTorch




ElegantRL is lightweight, efficient, and stable, designed for researchers and practitioners.

  • Lightweight: the core code has fewer than 1,000 lines (see elegantrl/tutorial) and uses PyTorch (training), OpenAI Gym (environments), NumPy, and Matplotlib (plotting).

  • Efficient: performance is comparable to Ray RLlib.

  • Stable: as stable as Stable Baselines3.

The following model-free deep reinforcement learning (DRL) algorithms are currently implemented (a usage sketch follows the list):

  • DDPG, TD3, SAC, A2C, PPO, PPO(GAE) for continuous actions
  • DQN, DoubleDQN, D3QN for discrete actions
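
As a rough usage sketch of choosing between these families, assuming the Agent&lt;Algorithm&gt; class naming suggested by the file structure below (check elegantrl/agent.py for the actual names):

```python
# Hypothetical sketch: pick an agent class that matches the action space.
# Class names and the import path are assumptions; see elegantrl/agent.py.
from elegantrl.agent import AgentPPO, AgentD3QN

def choose_agent(continuous_actions: bool):
    # PPO handles continuous actions; D3QN handles discrete actions.
    return AgentPPO() if continuous_actions else AgentD3QN()
```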

For DRL algorithms, please check out the educational webpage OpenAI Spinning Up.

Check out the ElegantRL documentation.

Table of Contents

  • File Structure
  • Training Pipeline
  • Experimental Results
  • Requirements

File Structure

[figure: file structure of ElegantRL]

An agent in agent.py uses networks in net.py and is trained in run.py by interacting with an environment in env.py.

-----kernel files-----

  • elegantrl/net.py # Neural networks.
    • Q-Net,
    • Actor Network,
    • Critic Network,
  • elegantrl/agent.py # RL algorithms.
    • AgentBase
  • elegantrl/run.py # runs Demos 1~4
    • Parameter initialization,
    • Training loop,
    • Evaluator.

-----utility files-----

  • elegantrl/envs/ # gym envs or custom envs, including FinanceStockEnv.
    • gym_utils.py: a PreprocessEnv class for gym-environment modification.
    • Stock_Trading_Env: a self-created stock trading environment, as an example for user customization (a minimal skeleton follows this list).
  • eRL_demo_BipedalWalker.ipynb # BipedalWalker-v2 in a Jupyter notebook
  • eRL_demos.ipynb # Demos 1~4 in Jupyter notebooks, showing how to use the tutorial version and the advanced version.
  • eRL_demo_SingleFilePPO.py # trains PPO from a single file; simpler than the tutorial version
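
As the Stock_Trading_Env item above suggests, a custom environment only needs the standard Gym interface: reset, step, and the two space attributes. Below is a minimal hypothetical skeleton (not the actual Stock_Trading_Env), written against the gym 0.17 API listed under Requirements:

```python
import numpy as np
import gym

class MyCustomEnv(gym.Env):
    """Minimal Gym-style environment skeleton for user customization."""

    def __init__(self):
        # 4-dimensional observation, 1-dimensional continuous action
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(4,))
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(1,))
        self.step_count = 0

    def reset(self):
        self.step_count = 0
        return np.zeros(4, dtype=np.float32)

    def step(self, action):
        self.step_count += 1
        state = np.random.randn(4).astype(np.float32)  # placeholder dynamics
        reward = float(-np.abs(action).sum())          # placeholder reward
        done = self.step_count >= 200                  # fixed episode length
        return state, reward, done, {}
```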

As a high-level overview, the files relate as follows. Initialize an environment from env.py and an agent from agent.py; the agent is constructed with the Actor and Critic networks from net.py. In each training step in run.py, the agent interacts with the environment, generating transitions that are stored in a Replay Buffer. The agent then fetches a batch of transitions from the Replay Buffer to train its networks. After each update, an evaluator measures the agent's performance and saves the agent if the performance is good.
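
The Replay Buffer mentioned above is essentially a fixed-size ring buffer of transitions. A minimal NumPy sketch (not ElegantRL's actual ReplayBuffer) might look like this:

```python
import numpy as np

class MinimalReplayBuffer:
    """Ring buffer storing (state, action, reward, done, next_state) rows."""

    def __init__(self, max_size, state_dim, action_dim):
        self.buf = np.empty((max_size, state_dim * 2 + action_dim + 2),
                            dtype=np.float32)
        self.max_size = max_size
        self.ptr = 0   # next write position
        self.size = 0  # number of stored transitions

    def append(self, state, action, reward, done, next_state):
        # Flatten one transition into a single row; overwrite the oldest row
        # once the buffer is full.
        self.buf[self.ptr] = np.hstack(
            (state, action, (reward,), (float(done),), next_state))
        self.ptr = (self.ptr + 1) % self.max_size
        self.size = min(self.size + 1, self.max_size)

    def sample(self, batch_size):
        idx = np.random.randint(0, self.size, size=batch_size)
        return self.buf[idx]  # the agent slices this back into tensors
```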

Training Pipeline

Initialization (a sketch follows this list):

  • args : the hyper-parameters.
  • env = PreprocessEnv() : creates an environment (in the OpenAI Gym format).
  • agent = agent.XXX() : creates an agent for a DRL algorithm.
  • evaluator = Evaluator() : evaluates and stores the trained model.
  • buffer = ReplayBuffer() : stores the transitions.
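
A hypothetical initialization sketch; the Arguments container, class names, and import paths are assumptions, so check run.py for the real signatures:

```python
import gym
from elegantrl.run import Arguments                  # assumed hyper-parameter container
from elegantrl.agent import AgentSAC                 # assumed agent class name
from elegantrl.envs.gym_utils import PreprocessEnv   # assumed import path

args = Arguments()                                   # hyper-parameters
env = PreprocessEnv(env=gym.make('LunarLanderContinuous-v2'))
agent = AgentSAC()                                   # a DRL algorithm
# evaluator = Evaluator(...) and buffer = ReplayBuffer(...) follow the same pattern.
```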

Then, the training process is controlled by a while-loop:

  • agent.explore_env(…): the agent explores the environment within target steps, generates transitions, and stores them into the ReplayBuffer.
  • agent.update_net(…): the agent uses a batch from the ReplayBuffer to update the network parameters.
  • evaluator.evaluate_save(…): evaluates the agent's performance and keeps the trained model with the highest score.

The while-loop terminates when a condition is met, e.g., reaching the target score or the maximum number of steps, or breaking manually.
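
Putting the pieces together, the while-loop might be sketched as follows. The method names come from the list above, but the argument lists and stopping thresholds are assumptions:

```python
target_score = 200          # stop once the evaluator reports this score
max_total_steps = 2 ** 20   # hard cap on environment interactions
total_steps = 0

while True:
    # 1. Explore: collect transitions and store them in the buffer.
    total_steps += agent.explore_env(env, buffer, target_step=1024)

    # 2. Learn: sample batches from the buffer and update the networks.
    agent.update_net(buffer, batch_size=256)

    # 3. Evaluate: keep the trained model with the highest score.
    score = evaluator.evaluate_save(agent)

    if score >= target_score or total_steps >= max_total_steps:
        break  # target score reached or step budget exhausted
```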

Experimental Results

Results using ElegantRL

  • LunarLanderContinuous-v2 [training curve figure]
  • LunarLanderTwinDelay3 [training curve figure]
  • BipedalWalkerHardcore-v2 [training curve figure]

BipedalWalkerHardcore is a difficult task with a continuous action space. Only a few RL implementations can reach the target reward.

Check out a video on bilibili: Crack the BipedalWalkerHardcore-v2 with total reward 310 using IntelAC.

Requirements

Necessary:

| Python 3.6+     |                                                                  |
| PyTorch 1.6+    |                                                                  |

Not necessary:

| NumPy 1.18+     | For ReplayBuffer. NumPy is installed along with PyTorch.         |
| gym 0.17.0      | For envs. Gym provides tutorial envs for DRL training. (env.render() has a bug with gym==0.18 and pyglet==1.6; use gym==0.17.0 and pyglet==1.5.) |
| pybullet 2.7+   | For envs. We use PyBullet (free) as an alternative to MuJoCo (not free). |
| box2d-py 2.3.8  | For gym. Use pip install Box2D (instead of box2d-py).            |
| matplotlib 3.2  | For plots, to evaluate agent performance.                        |

pip3 install gym==0.17.0 pybullet Box2D matplotlib
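
A quick sanity check after installation (assumes the gym, pyglet, and Box2D versions above):

```python
import gym

env = gym.make('LunarLanderContinuous-v2')   # requires Box2D
state = env.reset()
for _ in range(100):
    action = env.action_space.sample()       # random policy, just to test the env
    state, reward, done, info = env.step(action)
    env.render()                             # verifies the gym/pyglet rendering path
    if done:
        state = env.reset()
env.close()
```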
