
Using a custom operator for an arbitrary function similar to TensorFlow PyFunc

RFC · ACCEPTED
Created on 2022-04-20 19:43

Background(背景信息)

Our team tried to implement the DETR model, which uses the Hungarian algorithm to match predicted boxes to ground-truth boxes during training.

This implementation of the Hungarian algorithm uses three nested while loops and operates on tensors whose shapes change dynamically during the computation. This makes it impossible to use in GRAPH_MODE, and it runs extremely slowly in PYNATIVE_MODE.
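For context, the original DETR reference implementation performs this matching on the host with SciPy. A minimal sketch (assuming a plain L1 box cost only, a simplification of DETR's full matching cost) shows why the computation resists a static graph: the number of ground-truth boxes, and hence the shape of the cost matrix, varies from image to image.

import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_boxes, gt_boxes):
    # pred_boxes: (num_queries, 4); gt_boxes: (num_gt, 4), num_gt varies per image.
    cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)
    # Host-side Python; cannot be traced into a static MindSpore graph.
    return linear_sum_assignment(cost)  # optimal (row_ind, col_ind) assignment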

Currently, the only solution is to compute the losses manually using NumPy and then assign the manually computed gradients as the sense parameter of the GradientOp operator.

The link to the DETR implementation: !2266:Models: DETR

The current solution can be described as follows:

net = MyNetwork(...)
one_step_cell = CustomTrainOneStepCellWithSense(net)

# ...
# Training cycle
input_data, gt = next(dataset_iterator)

# Perform a forward step to obtain the network predictions.
pred = net(input_data)

# Manually compute the loss and provide the back-propagated gradients for the loss.
loss, loss_grad = my_loss_function(pred, gt)

# Update the sense parameter of the one-step cell so that its
# back-propagation starts from these gradient values.
one_step_cell.sense_param.set_data(loss_grad)

# Perform one more forward step, compute the gradients and update the weights.
one_step_cell(input_data)
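For reference, here is a minimal sketch of what a cell such as CustomTrainOneStepCellWithSense could look like (the class name comes from the snippet above; the constructor arguments are assumptions), built on GradOperation with sens_param=True:

import numpy as np
import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops

class CustomTrainOneStepCellWithSense(nn.Cell):
    """Sketch: a training step whose back-propagation starts from an
    externally supplied sensitivity (output-gradient) tensor."""

    def __init__(self, network, optimizer, sense_shape):
        super().__init__(auto_prefix=False)
        self.network = network
        self.network.set_grad()
        self.optimizer = optimizer
        self.weights = optimizer.parameters
        # Holds d(loss)/d(pred); written from the training loop via set_data.
        self.sense_param = ms.Parameter(
            ms.Tensor(np.zeros(sense_shape), ms.float32),
            name="sense", requires_grad=False)
        self.grad = ops.GradOperation(get_by_list=True, sens_param=True)

    def construct(self, *inputs):
        pred = self.network(*inputs)
        # Back-propagate starting from sense_param instead of a computed loss.
        grads = self.grad(self.network, self.weights)(*inputs, self.sense_param)
        self.optimizer(grads)
        return pred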

The extra forward pass takes additional time, so this solution is somewhat slower than the original PyTorch model.

Benefit / Necessity (价值/作用)

Support for arbitrary functions will widen the class of models that MindSpore can support.

Design(设计方案)

The design of a possible solution:

Add a PythonCell class in which a user can define the forward and backward propagation logic, or at least a CustomLossCell operator that accepts a tensor and returns both the loss value and the computed gradients. The internal computation of the CustomLossCell should support arbitrary functions.
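As a minimal sketch of the desired interface, the existing custom-bprop mechanism of nn.Cell already hints at a possible shape. CustomLossCell and my_loss_function below are the hypothetical names from this RFC, not existing APIs, and caching the gradient on self during construct only works in PYNATIVE_MODE:

import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops

class CustomLossCell(nn.Cell):
    """Sketch: run an arbitrary Python function forward and return the
    manually computed gradients from a custom bprop."""

    def construct(self, pred, gt):
        # Arbitrary host-side Python/NumPy logic (e.g. the Hungarian matcher).
        loss, self.loss_grad = my_loss_function(pred.asnumpy(), gt.asnumpy())
        return ms.Tensor(loss, ms.float32)

    def bprop(self, pred, gt, out, dout):
        # Start back-propagation from the manually computed gradients
        # instead of differentiating the Python code itself.
        return ms.Tensor(self.loss_grad, pred.dtype) * dout, ops.zeros_like(gt)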

Comments (5)

adenisov created the RFC

Please assign a maintainer to check this issue.
@fangwenyi @chengxiaoli


adenisov modified the description
fangwenyi changed the task status from TODO to ACCEPTED
fangwenyi set the assignee to chenhaozhe
adenisov modified the description (×5)

Thank you for your suggestion. This is a good idea for supporting such arbitrary functions.

With the current solution, which calculates the loss and the scale_sense manually, we have to run forward propagation twice: once for the prediction and once for the gradients.

If we could create a special Cell for this, the full TrainOneStepCell would be able to eliminate this redundant computation.

We will discuss it further and produce a complete design and plan.
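For illustration, the single-pass training step described here could look like the following, assuming the hypothetical CustomLossCell sketched in the RFC description above:

# One forward pass; the custom bprop supplies the loss gradients.
loss_net = nn.WithLossCell(net, CustomLossCell())
train_cell = nn.TrainOneStepCell(loss_net, optimizer)
loss = train_cell(input_data, gt)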

The operator performance issue from the Russian project will be reviewed and handled by 成辉.
