We hope that the new structure of nnU-Net v2 makes it much more intuitive to modify! We cannot give an extensive tutorial on how each and every bit of it can be changed. Instead, search the repository for the place where the behavior you intend to change is implemented and work your way through the code from there. Setting breakpoints and debugging into nnU-Net really helps in understanding it and thus will help you make the necessary modifications!
Here are some things you might want to read before you start:
- To use your own network architecture, overwrite the `build_network_architecture` function (in a custom nnUNetTrainer). Make sure your architecture is compatible with deep supervision (if not, use `nnUNetTrainerNoDeepSupervision` as basis!) and that it can handle the patch sizes that are thrown at it! Your architecture should NOT apply any nonlinearities at the end (softmax, sigmoid etc.). nnU-Net does that!
- If you want your architecture to be dynamically configured like the `PlainConvUNet` class used by default, it needs to have some sort of GPU memory estimation method that can be used to evaluate whether certain patch sizes and topologies fit into a specified GPU memory target. Build a new `ExperimentPlanner` that can configure your new class and communicate with its memory budget estimation. Run `nnUNetv2_plan_and_preprocess` while specifying your custom `ExperimentPlanner` and a custom `plans_name`. Implement a nnUNetTrainer that can use the plans generated by your `ExperimentPlanner` to instantiate the network architecture. Specify your plans and trainer when running `nnUNetv2_train`.
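The memory-budget check that an `ExperimentPlanner` performs can be sketched as a simple feature-map accounting exercise: sum up how many voxels of feature maps a given patch size and topology would produce, and compare that against a budget. The following is a hedged, self-contained illustration in plain Python, not nnU-Net's actual estimator; all names and the halving/doubling heuristic are hypothetical:

```python
# Hypothetical sketch of a GPU-memory proxy for a U-Net-like topology.
# Each encoder stage halves every spatial axis and doubles the channel
# count (capped at max_features). The returned value is only a relative
# proxy, useful for comparing candidate patch sizes/topologies, not an
# absolute byte count.
from math import prod


def feature_map_memory_proxy(patch_size, base_features=32, max_features=320,
                             num_stages=None):
    """Sum of feature-map voxel counts over all encoder stages."""
    if num_stages is None:
        # rough default: stop once the smallest axis is exhausted
        num_stages = min(patch_size).bit_length()
    total = 0
    size = list(patch_size)
    features = base_features
    for _ in range(num_stages):
        total += features * prod(size)
        if min(size) < 2:
            break  # cannot downsample any further
        size = [s // 2 for s in size]
        features = min(features * 2, max_features)
    return total


def fits_in_budget(patch_size, budget, **kwargs):
    """Would this patch size/topology fit into the given memory target?"""
    return feature_map_memory_proxy(patch_size, **kwargs) <= budget
```

A planner could use such a function to iteratively shrink the patch size (or reduce the number of stages) until the estimate fits the configured GPU memory target.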
It always pays off to first read and understand the corresponding nnU-Net code and use it as a template for your implementation!
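The planner/trainer workflow above maps onto the command line roughly as follows. This is a hedged sketch: `MyPlanner`, `MyTrainer`, `my_plans`, and the dataset ID are hypothetical, and the exact flag names may differ between nnU-Net versions, so check the `-h` output of your installed commands:

```bash
# 1) Plan and preprocess with a custom ExperimentPlanner and plans name
#    (-pl selects the planner class; the plans-name flag may vary by
#    version -- verify with: nnUNetv2_plan_and_preprocess -h)
nnUNetv2_plan_and_preprocess -d 42 -pl MyPlanner -overwrite_plans_name my_plans

# 2) Train with the matching trainer and plans
nnUNetv2_train 42 3d_fullres 0 -tr MyTrainer -p my_plans
```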