Does your system run like it should? Is your epoch time longer than expected? What epoch times should you expect?
Look no further for we have the solution here!
nnU-Net's benchmark trains models for 5 epochs. At the end, the fastest epoch will be noted down, along with the GPU name, torch version and cudnn version. You can find the benchmark output in the corresponding nnUNet_results subfolder (see example below). Don't worry, we also provide scripts to collect your results. Or you just start a benchmark and look at the console output. Everything is possible. Nothing is forbidden.
The benchmark implementation revolves around two trainers:

- `nnUNetTrainerBenchmark_5epochs` runs a regular training for 5 epochs. When completed, it writes a .json file with the fastest epoch time as well as the GPU used and the torch and cudnn versions. Useful for speed testing the entire pipeline (data loading, augmentation, GPU training).
- `nnUNetTrainerBenchmark_5epochs_noDataLoading` is the same, but it doesn't do any data loading or augmentation. It just presents dummy arrays to the GPU. Useful for checking pure GPU speed.

Running the benchmark is quite simple, actually. It looks just like a regular nnU-Net training.
We provide reference numbers for some of the Medical Segmentation Decathlon datasets because they are easily accessible: download here. If it needs to be quick and dirty, focus on Tasks 2 and 4. Download and extract the data, convert them to the nnU-Net format with `nnUNetv2_convert_MSD_dataset`, and then run `nnUNetv2_plan_and_preprocess` for them.
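
For reference, this is roughly what those two steps look like on the command line. This is only a sketch, assuming Task04_Hippocampus was downloaded and extracted and its MSD id (4) is kept during conversion; check `-h` of both commands for the exact options of your nnU-Net version:

```bash
# Convert the extracted MSD task folder into the nnU-Net v2 dataset format
nnUNetv2_convert_MSD_dataset -i /path/to/Task04_Hippocampus

# Plan experiments and preprocess the converted dataset (id 4 here)
nnUNetv2_plan_and_preprocess -d 4 --verify_dataset_integrity
```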
Then, for each dataset, run the following commands (only one per GPU! Or one after the other):

```bash
nnUNetv2_train DATASET_ID 2d 0 -tr nnUNetTrainerBenchmark_5epochs
nnUNetv2_train DATASET_ID 3d_fullres 0 -tr nnUNetTrainerBenchmark_5epochs
nnUNetv2_train DATASET_ID 2d 0 -tr nnUNetTrainerBenchmark_5epochs_noDataLoading
nnUNetv2_train DATASET_ID 3d_fullres 0 -tr nnUNetTrainerBenchmark_5epochs_noDataLoading
```
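
If you benchmark several datasets on the same GPU, a small loop keeps the runs sequential. This is just a convenience sketch, assuming the MSD ids 2 and 4 were kept as dataset ids during conversion:

```bash
# Run all four benchmark configurations for datasets 2 and 4, one after the other
for DATASET_ID in 2 4; do
    for TRAINER in nnUNetTrainerBenchmark_5epochs nnUNetTrainerBenchmark_5epochs_noDataLoading; do
        for CONFIG in 2d 3d_fullres; do
            nnUNetv2_train "$DATASET_ID" "$CONFIG" 0 -tr "$TRAINER"
        done
    done
done
```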
If you want to inspect the outcome manually, check (for example!) your `nnUNet_results/DATASET_NAME/nnUNetTrainerBenchmark_5epochs__nnUNetPlans__3d_fullres/fold_0/` folder for the `benchmark_result.json` file.
Note that there can be multiple entries in this file if the benchmark was run on different GPU types, torch versions or cudnn versions!
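
If you just want a quick look from the command line, pretty-printing the file works. A minimal sketch, assuming the `nnUNet_results` environment variable is set and the path is adjusted to your dataset and configuration:

```bash
# Pretty-print the recorded benchmark entries for fold 0 of the 3d_fullres run
python -m json.tool \
    "$nnUNet_results/DATASET_NAME/nnUNetTrainerBenchmark_5epochs__nnUNetPlans__3d_fullres/fold_0/benchmark_result.json"
```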
If you want to summarize your results like we did in our results spreadsheet, check the summary script. Here you need to change the torch version, cudnn version and dataset you want to summarize, then execute the script. You can find the exact values you need to put there in one of your `benchmark_result.json` files.
We have tested a variety of GPUs and summarized the results in a spreadsheet. Note that you can select the torch and cudnn versions at the bottom! There may be comments in this spreadsheet. Read them!
Results are shown as epoch time in seconds. Lower is better (duh). Epoch times can fluctuate between runs, so as long as you are within like 5-10% of the numbers we report, everything should be dandy.
If not, here is how you can try to find the culprit!
The first thing to do is to compare the performance between the `nnUNetTrainerBenchmark_5epochs_noDataLoading` and `nnUNetTrainerBenchmark_5epochs` trainers. If the difference is about the same as we report in our spreadsheet, but both your numbers are worse, the problem is with your GPU:

- Don't just `pip install torch`! Go to the PyTorch installation page, select the most recent cuda version your system supports and only then copy and execute the correct command! Either pip or conda should work (see the sketch after this list).
- Check your GPU temperature with `nvidia-smi`. Hot GPUs throttle performance in order to not self-destruct.
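
A sketch of both checks. The exact install command depends on the CUDA version you select on the PyTorch page; `cu121` below is only an example, so prefer the command the page generates for you:

```bash
# Install PyTorch from a CUDA-specific wheel index instead of the default PyPI package
pip3 install torch --index-url https://download.pytorch.org/whl/cu121

# Check GPU name, temperature and utilization while a benchmark is running
nvidia-smi --query-gpu=name,temperature.gpu,utilization.gpu --format=csv
```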
If you see a large performance difference between `nnUNetTrainerBenchmark_5epochs_noDataLoading` (fast) and `nnUNetTrainerBenchmark_5epochs` (slow) then the problem might be related to data loading and augmentation. As a reminder, nnU-Net does not use pre-augmented images (offline augmentation) but instead generates augmented training samples on the fly during training (no, you cannot switch it to offline). This requires that your system can do partial reads of the image files fast enough (SSD storage required!) and that your CPU is powerful enough to run the augmentations.
Check the following:
- Check whether the number of processes used for data augmentation is high enough for your CPU via the `nnUNet_n_proc_DA` environment variable (Linux: `export nnUNet_n_proc_DA=24`). Read here on how to do this. If your CPU does not support more processes (setting more processes than your CPU has threads makes no sense!) you are out of luck and in desperate need of a system upgrade! (See the sketch after this list.)
- Run `top` (sorry, Windows users. I don't know how to do this on Windows) and look at the value left of 'wa' in the row that begins with '%Cpu(s)'. If this is >1.0 (arbitrarily set threshold here, essentially look for unusually high 'wa'. In a healthy training 'wa' will be almost 0) then your storage cannot keep up with data loading. Make sure to set `nnUNet_preprocessed` to a folder that is located on an SSD. nvme is preferred over SATA. PCIe3 is enough. 3000MB/s sequential read recommended.
- Add `export nnUNet_keep_files_open=True` to your .bashrc and the problem goes away. The neat part: it causes new problems if you are not allowed to have enough open files. You may have to increase the number of allowed open files. `ulimit -n` gives your current limit (Linux only). It should not be something like 1024. Increasing that to 65535 works well for me. See here for how to change these limits: Link (works for Ubuntu 18, google for your OS!).
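
A minimal sketch of these checks on Linux, assuming a bash shell. The 24 workers and the 65535 limit are just the example values from above; adjust them to your machine:

```bash
# How many threads does this CPU have? nnUNet_n_proc_DA should not exceed this
nproc

# Raise the number of data augmentation workers (persist this in your .bashrc)
export nnUNet_n_proc_DA=24

# One-shot look at CPU stats; check the 'wa' value in the '%Cpu(s)' row
top -b -n 1 | grep "Cpu(s)"

# Keep image files open between reads (persist this in your .bashrc as well)
export nnUNet_keep_files_open=True

# Check and (within the hard limit) raise the open-file limit for this shell
ulimit -n
ulimit -n 65535
```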