nnU-Net has been tested on Linux (Ubuntu 18.04, 20.04, 22.04; CentOS, RHEL), Windows and macOS! It should work out of the box!
We support GPU (recommended), CPU and Apple M1/M2 as devices (currently Apple's MPS backend does not implement 3D convolutions, so you might have to fall back to the CPU on those devices).
We recommend you use a GPU for training, as training will take a really long time on CPU or MPS (Apple M1/M2). For training, a GPU with at least 10 GB of VRAM (popular non-datacenter options are the RTX 2080 Ti, RTX 3080/3090 or RTX 4080/4090) is required. We also recommend a strong CPU to go along with the GPU. 6 cores (12 threads) are the bare minimum! CPU requirements are mostly related to data augmentation and scale with the number of input channels and target structures. Plus, the faster the GPU, the better the CPU should be!
Again we recommend a GPU to make predictions as this will be substantially faster than the other options. However, inference times are typically still manageable on CPU and MPS (Apple M1/M2). If using a GPU, it should have at least 4 GB of available (unused) VRAM.
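If you do run inference on CPU or MPS, the device can be selected on the command line. A minimal sketch, assuming your installed version exposes the `-device` option of `nnUNetv2_predict`; the folders, dataset and configuration below are placeholders:

```bash
# Hypothetical example: run inference on the CPU instead of a GPU.
# INPUT_FOLDER, OUTPUT_FOLDER, the dataset (1) and the configuration (3d_fullres) are placeholders.
nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -d 1 -c 3d_fullres -device cpu
```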
Example workstation configuration for training:
- CPU: Ryzen 5800X - 5900X or 7900X would be even better! We have not yet tested Intel Alder/Raptor Lake but they will likely work as well.
- GPU: RTX 3090 or RTX 4090
- RAM: 64 GB
- Storage: SSD (M.2 PCIe Gen 3 or better!)

Example server configuration for training:
- CPU: 2x AMD EPYC 7763 for a total of 128 cores / 256 threads. 16 cores per GPU are highly recommended for fast GPUs such as the A100!
- GPU: 8x A100 PCIe (better price/performance than the SXM variant, and they use less power)
- RAM: 1 TB
- Storage: local SSD storage (PCIe Gen 3 or better) or ultra-fast network storage

(nnU-Net uses one GPU per training by default. The server configuration above can run up to 8 model trainings simultaneously; see the sketch below.)
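As a sketch of how such a server is typically used, each training can be pinned to its own GPU with CUDA_VISIBLE_DEVICES. The dataset, configuration and fold numbers below are placeholders:

```bash
# Hypothetical example: two independent trainings running in parallel, one per GPU.
# The dataset (1), configuration (3d_fullres) and folds (0 and 1) are placeholders.
CUDA_VISIBLE_DEVICES=0 nnUNetv2_train 1 3d_fullres 0 &
CUDA_VISIBLE_DEVICES=1 nnUNetv2_train 1 3d_fullres 1 &
wait
```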
Note that you will need to manually set the number of processes nnU-Net uses for data augmentation according to your CPU/GPU ratio. For the server above (256 threads for 8 GPUs), a good value would be 24-30. You can do this by setting the `nnUNet_n_proc_DA` environment variable (`export nnUNet_n_proc_DA=XX`). Recommended values (assuming a recent CPU with good IPC) are 10-12 for an RTX 2080 Ti, 12 for an RTX 3090, 16-18 for an RTX 4090 and 28-32 for an A100. Optimal values may vary depending on the number of input channels/modalities and the number of classes.
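For example, on a workstation with a single RTX 3090 you could persist the recommended value in your shell profile. A small sketch, assuming a bash shell (`~/.bashrc`):

```bash
# Persist the data augmentation worker count across sessions (bash shell assumed).
echo 'export nnUNet_n_proc_DA=12' >> ~/.bashrc
source ~/.bashrc
# Verify that the variable is picked up:
echo "$nnUNet_n_proc_DA"
```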
We strongly recommend that you install nnU-Net in a virtual environment! Pip or anaconda are both fine. If you choose to compile PyTorch from source (see below), you will need to use conda instead of pip.
Use a recent version of Python! 3.9 or newer is guaranteed to work!
nnU-Net v2 can coexist with nnU-Net v1! Both can be installed at the same time.
Install PyTorch first, as described on the PyTorch website (conda/pip), and pick the latest version with support for your hardware (cuda, mps, cpu). DO NOT JUST `pip install nnunetv2` WITHOUT PROPERLY INSTALLING PYTORCH FIRST. For maximum speed, consider compiling PyTorch yourself (experienced users only!).

Then install nnU-Net depending on your use case. For use as a standardized baseline, an out-of-the-box segmentation algorithm, or for running inference with pretrained models:
pip install nnunetv2
For use as an integrative framework (this will create a copy of the nnU-Net code on your computer so that you can modify it as needed):
git clone https://github.com/MIC-DKFZ/nnUNet.git
cd nnUNet
pip install -e .
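Either way, a quick sanity check (a sketch; it only assumes the package installed correctly) is to import the package and confirm where it was loaded from:

```bash
# Confirm that the nnunetv2 package is importable and show where it was loaded from
# (for the editable install this should point into your cloned repository).
python -c "import nnunetv2; print(nnunetv2.__file__)"
```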
(Optional) Install hiddenlayer, which enables nnU-Net to generate plots of the network topologies it generates. To install it, run:
pip install --upgrade git+https://github.com/julien-blanchon/hiddenlayer.git
Installing nnU-Net will add several new commands to your terminal. These commands are used to run the entire nnU-Net pipeline. You can execute them from any location on your system. All nnU-Net commands have the prefix `nnUNetv2_` for easy identification.
Note that these commands simply execute python scripts. If you installed nnU-Net in a virtual environment, this environment must be activated when executing the commands. You can see what scripts/functions are executed by checking the entry_points in the setup.py file.
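If you installed from a cloned repository, one way to see that mapping is to look at the entry_points declaration directly (a sketch; it assumes you are in the repository root and that the declaration lives in setup.py, as described above):

```bash
# Show the console-script declarations that map the nnUNetv2_* commands to python functions.
grep -A 30 "entry_points" setup.py
```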
All nnU-Net commands have a `-h` option which gives information on how to use them.
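For example, to see all arguments of the training command:

```bash
# Print usage information for the training command; every nnUNetv2_* command supports -h.
nnUNetv2_train -h
```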