adept is a reinforcement learning framework designed to accelerate research by providing baseline agents, environments, and network modules, plus local and distributed training scripts and registries for plugging in custom components.
This code is early access; expect rough edges. Interfaces are subject to change. We're happy to accept feedback and contributions.
Dependencies include gym and PyTorch; the optional extras add mpi4py (for IMPALA), NCCL (for distributed mode), and PySC2 (for StarCraft 2).
From source:

```shell
git clone https://github.com/heronsystems/adeptRL
cd adeptRL
# Remove mpi, sc2, profiler if you don't plan on using these features
# (quotes keep shells like zsh from expanding the brackets):
pip install ".[mpi,sc2,profiler]"
```
From docker:
Train an Agent
Logs go to `/tmp/adept_logs/` by default. The log directory contains the TensorBoard file, saved models, and other metadata.
```shell
# Local Mode (A2C)
# We recommend 4GB+ GPU memory, 8GB+ RAM, 4+ cores
python -m adept.app local --env BeamRiderNoFrameskip-v4

# Distributed Mode (A2C, requires NCCL)
# We recommend 2+ GPUs, 8GB+ GPU memory, 32GB+ RAM, 4+ cores
python -m adept.app distrib --env BeamRiderNoFrameskip-v4

# IMPALA (requires mpi4py and is resource intensive)
# We recommend 2+ GPUs, 8GB+ GPU memory, 32GB+ RAM, 4+ cores
python -m adept.app impala --agent ActorCriticVtrace --env BeamRiderNoFrameskip-v4

# StarCraft 2 (IMPALA not supported yet)
# Warning: much more resource intensive than Atari
python -m adept.app local --env CollectMineralShards

# To see a full list of options:
python -m adept.app -h
python -m adept.app help <command>
```
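The `adept.app` entry point dispatches subcommands (`local`, `distrib`, `impala`) and supports `help <command>`. As a rough illustration of that dispatch pattern in plain Python (hypothetical names, not adept's actual code):

```python
# Minimal subcommand dispatcher sketch: a decorator registers each command
# by name, and "help <command>" returns the command's docstring.
COMMANDS = {}


def command(fn):
    """Register a function as a subcommand under its own name."""
    COMMANDS[fn.__name__] = fn
    return fn


@command
def local(args):
    """Train an agent on a single node."""
    return ('local', tuple(args))


def dispatch(argv):
    # "help <command>" path: look up the command and return its docstring
    if argv[0] == 'help':
        return COMMANDS[argv[1]].__doc__
    name, *rest = argv
    return COMMANDS[name](rest)
```

Here `dispatch(['local', '--env', 'Pong'])` routes to the registered `local` function with the remaining arguments.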
Use your own Agent, Environment, Network, or SubModule
"""
my_script.py
Train an agent on a single GPU.
"""
from adept.scripts.local import parse_args, main
from adept.networks import NetworkModule, NetworkRegistry, SubModule1D
from adept.agents import AgentModule, AgentRegistry
from adept.environments import EnvModule, EnvRegistry
class MyAgent(AgentModule):
pass # Implement
class MyEnv(EnvModule):
pass # Implement
class MyNet(NetworkModule):
pass # Implement
class MySubModule1D(SubModule1D):
pass # Implement
if __name__ == '__main__':
agent_registry = AgentRegistry()
agent_registry.register_agent(MyAgent)
env_registry = EnvRegistry()
env_registry.register_env(MyEnv, ['env-id-1', 'env-id-2'])
network_registry = NetworkRegistry()
network_registry.register_custom_net(MyNet)
network_registry.register_submodule(MySubModule1D)
main(
parse_args(),
agent_registry=agent_registry,
env_registry=env_registry,
net_registry=network_registry
)
```shell
python my_script.py --agent MyAgent --env env-id-1 --custom-network MyNet
```
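The registries in the script above map string ids to classes so the CLI can resolve `--agent MyAgent` or `--env env-id-1` at launch. A plain-Python sketch of that pattern (illustrative only, not adept's actual implementation):

```python
# Minimal registry sketch: classes are registered under one or more string
# ids and looked up by id at launch time.
class SimpleRegistry:
    def __init__(self):
        self._lookup = {}

    def register(self, cls, ids=None):
        # Register a class under each given id (defaults to the class name).
        for key in (ids or [cls.__name__]):
            self._lookup[key] = cls
        return cls

    def lookup(self, key):
        return self._lookup[key]


class MyEnv:
    pass


registry = SimpleRegistry()
registry.register(MyEnv, ['env-id-1', 'env-id-2'])
```

Registering one class under several ids is how a single environment module can serve multiple `--env` values, as in `register_env(MyEnv, ['env-id-1', 'env-id-2'])` above.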
- Local (single-node, single-GPU)
- Distributed (multi-node, multi-GPU)
- IMPALA: Importance Weighted Actor-Learner Architectures (single-node, multi-GPU)
```shell
python -m adept.app local --logdir ~/local64_benchmark --eval -y --nb-step 50e6 --env <env-id>
```
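Note that `--nb-step 50e6` gives the step count in scientific notation; a parser would presumably convert it along these lines (hypothetical helper, not adept's code):

```python
def parse_steps(value):
    # Accept plain integers ("1000") or scientific notation ("50e6")
    # and return an integer step count.
    return int(float(value))
```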
We borrow pieces of OpenAI's gym and baselines code. We indicate where this is done.