Updated on 2022.12.13 DI-engine-v0.4.5
DI-engine is a generalized decision intelligence engine. It supports various deep reinforcement learning algorithms.
DI-engine aims to standardize different decision intelligence environments and applications. Various training pipelines and customized decision AI applications are also supported.
DI-engine also provides system optimizations and designs for efficient and robust large-scale RL training.
Have fun with exploration and exploitation.
You can simply install DI-engine from PyPI with the following command:
pip install DI-engine
If you use Anaconda or Miniconda, you can also install DI-engine with conda through the following command:
conda install -c opendilab di-engine
For more information about installation, you can refer to installation.
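After installing, a quick sanity check from Python (a minimal sketch; it assumes the `ding` package exposes a `__version__` attribute, which recent releases do):

```python
# Minimal post-install check: import the library and print its version.
import ding

print(ding.__version__)  # e.g. "0.4.5" if the matching release is installed
```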
Our DockerHub repo can be found here. We provide a base image and an env image with common RL environments.
The detailed documentation is hosted at doc (English) | 中文文档 (Chinese).
How to migrate a new RL Env
How to customize the neural network model used by a policy (see the sketch below)
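As a rough sketch of the second how-to above (not the official tutorial), a custom `torch.nn.Module` can be plugged into a training pipeline through the optional `model` argument of the `serial_pipeline` entry; for DQN-style policies the module is assumed to return a dict with a `logit` key. The class name `MyQNetwork` and the layer sizes below are purely illustrative:

```python
# Hedged sketch: swap the default DQN network for a custom torch module.
# Assumes serial_pipeline accepts an optional `model` argument and that the
# DQN policy consumes a forward output of the form {'logit': Tensor}.
import torch
import torch.nn as nn
from ding.entry import serial_pipeline
from dizoo.classic_control.cartpole.config.cartpole_dqn_config import (
    cartpole_dqn_config,
    cartpole_dqn_create_config,
)


class MyQNetwork(nn.Module):  # illustrative name, not part of DI-engine
    def __init__(self, obs_dim: int = 4, action_dim: int = 2, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, obs: torch.Tensor) -> dict:
        # The DQN policy expects per-action Q-value logits under the 'logit' key.
        return {'logit': self.net(obs)}


if __name__ == "__main__":
    serial_pipeline(
        (cartpole_dqn_config, cartpole_dqn_create_config),
        seed=0,
        model=MyQNetwork(),
    )
```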
Bonus: train an RL agent with one line of code:
ding -m serial -e cartpole -p dqn -s 0
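For reference, a rough Python counterpart of this one-liner (a sketch rather than the official entry script; it assumes `ding.entry.serial_pipeline` also accepts a path to a dizoo config file, and the path shown is indicative, relative to the DI-engine repo root):

```python
# Hedged sketch: programmatic counterpart of `ding -m serial -e cartpole -p dqn -s 0`.
from ding.entry import serial_pipeline

if __name__ == "__main__":
    # serial_pipeline is assumed to accept either a config-file path or an
    # imported (main config, create config) pair, plus a random seed.
    serial_pipeline('dizoo/classic_control/cartpole/config/cartpole_dqn_config.py', seed=0)
```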
The Label column of the algorithm table uses the following tags:
- discrete: discrete action space, the only label used for standard DRL algorithms (Nos. 1-18)
- continuous: continuous action space (Nos. 1-18)
- hybrid: hybrid (discrete + continuous) action space (Nos. 1-18)
- Distributed Reinforcement Learning
- Multi-Agent Reinforcement Learning
- Exploration Mechanisms in Reinforcement Learning
- Offline Reinforcement Learning
- Model-Based Reinforcement Learning
- other: other sub-direction algorithms, usually used as plug-ins in the whole pipeline
P.S.: The .py files in the Runnable Demo column can be found in dizoo.
No. | Algorithm | Label | Doc and Implementation | Runnable Demo
---|---|---|---|---
1 | DQN | | DQN doc, DQN doc (Chinese), policy/dqn | python3 -u cartpole_dqn_main.py / ding -m serial -c cartpole_dqn_config.py -s 0
2 | C51 | | C51 doc, policy/c51 | ding -m serial -c cartpole_c51_config.py -s 0
3 | QRDQN | | QRDQN doc, policy/qrdqn | ding -m serial -c cartpole_qrdqn_config.py -s 0
4 | IQN | | IQN doc, policy/iqn | ding -m serial -c cartpole_iqn_config.py -s 0
5 | FQF | | FQF doc, policy/fqf | ding -m serial -c cartpole_fqf_config.py -s 0
6 | Rainbow | | Rainbow doc, policy/rainbow | ding -m serial -c cartpole_rainbow_config.py -s 0
7 | SQL | | SQL doc, policy/sql | ding -m serial -c cartpole_sql_config.py -s 0
8 | R2D2 | | R2D2 doc, policy/r2d2 | ding -m serial -c cartpole_r2d2_config.py -s 0
9 | PG | | PG doc, policy/pg | ding -m serial -c cartpole_pg_config.py -s 0
10 | A2C | | A2C doc, policy/a2c | ding -m serial -c cartpole_a2c_config.py -s 0
11 | PPO/MAPPO | | PPO doc, policy/ppo | python3 -u cartpole_ppo_main.py / ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0
12 | PPG | | PPG doc, policy/ppg | python3 -u cartpole_ppg_main.py
13 | ACER | | ACER doc, policy/acer | ding -m serial -c cartpole_acer_config.py -s 0
14 | IMPALA | | IMPALA doc, policy/impala | ding -m serial -c cartpole_impala_config.py -s 0
15 | DDPG/PADDPG | | DDPG doc, policy/ddpg | ding -m serial -c pendulum_ddpg_config.py -s 0
16 | TD3 | | TD3 doc, policy/td3 | python3 -u pendulum_td3_main.py / ding -m serial -c pendulum_td3_config.py -s 0
17 | D4PG | | D4PG doc, policy/d4pg | python3 -u pendulum_d4pg_config.py
18 | SAC/MASAC | | SAC doc, policy/sac | ding -m serial -c pendulum_sac_config.py -s 0
19 | PDQN | | policy/pdqn | ding -m serial -c gym_hybrid_pdqn_config.py -s 0
20 | MPDQN | | policy/pdqn | ding -m serial -c gym_hybrid_mpdqn_config.py -s 0
21 | HPPO | | policy/ppo | ding -m serial_onpolicy -c gym_hybrid_hppo_config.py -s 0
22 | QMIX | | QMIX doc, policy/qmix | ding -m serial -c smac_3s5z_qmix_config.py -s 0
23 | COMA | | COMA doc, policy/coma | ding -m serial -c smac_3s5z_coma_config.py -s 0
24 | QTran | | policy/qtran | ding -m serial -c smac_3s5z_qtran_config.py -s 0
25 | WQMIX | | WQMIX doc, policy/wqmix | ding -m serial -c smac_3s5z_wqmix_config.py -s 0
26 | CollaQ | | CollaQ doc, policy/collaq | ding -m serial -c smac_3s5z_collaq_config.py -s 0
27 | MADDPG | | MADDPG doc, policy/ddpg | ding -m serial -c ant_maddpg_config.py -s 0
28 | GAIL | | GAIL doc, reward_model/gail | ding -m serial_gail -c cartpole_dqn_gail_config.py -s 0
29 | SQIL | | SQIL doc, entry/sqil | ding -m serial_sqil -c cartpole_sqil_config.py -s 0
30 | DQFD | | DQFD doc, policy/dqfd | ding -m serial_dqfd -c cartpole_dqfd_config.py -s 0
31 | R2D3 | | R2D3 doc, R2D3 doc (Chinese), policy/r2d3 | python3 -u pong_r2d3_r2d2expert_config.py
32 | Guided Cost Learning | | Guided Cost Learning doc (Chinese), reward_model/guided_cost | python3 lunarlander_gcl_config.py
33 | TREX | | TREX doc, reward_model/trex | python3 mujoco_trex_main.py
34 | Implicit Behavioral Cloning (DFO+MCMC) | | policy/ibc, model/template/ebm | python3 d4rl_ibc_main.py -s 0 -c pen_human_ibc_mcmc_config.py
35 | BCO | | entry/bco | python3 -u cartpole_bco_config.py
36 | HER | | HER doc, reward_model/her | python3 -u bitflip_her_dqn.py
37 | RND | | RND doc, reward_model/rnd | python3 -u cartpole_rnd_onppo_config.py
38 | ICM | | ICM doc, ICM doc (Chinese), reward_model/icm | python3 -u cartpole_ppo_icm_config.py
39 | CQL | | CQL doc, policy/cql | python3 -u d4rl_cql_main.py
40 | TD3BC | | TD3BC doc, policy/td3_bc | python3 -u d4rl_td3_bc_main.py
41 | Decision Transformer | | policy/dt | python3 -u d4rl_dt_main.py
42 | MBSAC (SAC+MVE+SVG) | | policy/mbpolicy/mbsac | python3 -u pendulum_mbsac_mbpo_config.py / python3 -u pendulum_mbsac_ddppo_config.py
43 | STEVESAC (SAC+STEVE+SVG) | | policy/mbpolicy/mbsac | python3 -u pendulum_stevesac_mbpo_config.py
44 | MBPO | | MBPO doc, world_model/mbpo | python3 -u pendulum_sac_mbpo_config.py
45 | DDPPO | | world_model/ddppo | python3 -u pendulum_mbsac_ddppo_config.py
46 | PER | | worker/replay_buffer | rainbow demo
47 | GAE | | rl_utils/gae | ppo demo
48 | ST-DIM | | torch_utils/loss/contrastive_loss | ding -m serial -c cartpole_dqn_stdim_config.py -s 0
49 | PLR | | PLR doc, data/level_replay/level_sampler | python3 -u bigfish_plr_config.py -s 0
50 | PCGrad | | torch_utils/optimizer_helper/PCGrad | python3 -u multi_mnist_pcgrad_main.py -s 0
51 | BDQ | | policy/bdq | python3 -u hopper_bdq_config.py
Environment labels:
- discrete action space
- continuous action space
- hybrid (discrete + continuous) action space
- multi-agent RL environment
- environment related to exploration and sparse reward
- offline RL environment
- Imitation Learning or Supervised Learning dataset
- environment that allows agent-vs-agent battle
P.S.: Some environments in Atari, such as MontezumaRevenge, are also of the sparse-reward type.
File an issue on GitHub
Open or participate in our forum
Discuss on DI-engine slack communication channel
Discuss on DI-engine's QQ group (700157520) or add us on WeChat
Contact our email (opendilab@pjlab.org.cn)
Contribute to our future plan in the Roadmap
We appreciate all feedback and contributions that improve DI-engine, in both algorithms and system design. CONTRIBUTING.md offers the necessary information.
@misc{ding,
title={{DI-engine: OpenDILab} Decision Intelligence Engine},
author={DI-engine Contributors},
publisher = {GitHub},
howpublished = {\url{https://github.com/opendilab/DI-engine}},
year={2021},
}
DI-engine is released under the Apache 2.0 license.