Adding a Parameter check in the construct of an nn.Cell causes a parallel compilation error

Status: TODO
Type: Bug-Report
Created: 2024-05-23 16:29

name: Bug Report
about: Use this template for reporting a bug
labels: kind/bug

Describe the current behavior / 问题描述 (Mandatory / 必填)

In the PanGu-Alpha 2.6B single-node 8-card model, adding a Parameter check to the construct of an nn.Cell raises a parallel compilation error during static graph compilation.

Environment / 环境信息 (Mandatory / 必填)

  • Hardware Environment (Ascend/GPU/CPU) / 硬件环境:

/device ascend (single node, 8 devices, per the description above)

  • Software Environment / 软件环境 (Mandatory / 必填):
    -- MindSpore version (e.g., 1.7.0.Bxxx) :
    -- Python version (e.g., Python 3.7.5) : Python 3.9 (per the traceback paths below)
    -- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux
    -- GCC/Compiler version (if compiled from source):

  • Execute Mode / 执行模式 (Mandatory / 必填) (PyNative/Graph):

/mode graph

Related testcase / 关联用例 (Mandatory / 必填)

Steps to reproduce the issue / 重现步骤 (Mandatory / 必填)

  1. Adding the check self.step_num == self.step_config inside the nn.Cell's construct causes a compilation error; step_num and step_config are Parameter instances. A minimal sketch of the pattern follows.
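For reference, a minimal sketch of the pattern being described. It is illustrative only: the class name, shapes, and Dense backbone are placeholders rather than the reporter's actual wr_cell_dump.py, and the reported failure occurs under the single-node 8-card parallel configuration, not necessarily on a single device.

import numpy as np
import mindspore as ms
from mindspore import nn, Parameter, Tensor

class StepGate(nn.Cell):
    """Wraps a backbone and branches on a Parameter comparison in construct."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        # Non-trainable step counters held as Parameters, as in the report.
        self.step_num = Parameter(Tensor([0], ms.int32), name="step_num", requires_grad=False)
        self.step_config = Parameter(Tensor([100], ms.int32), name="step_config", requires_grad=False)

    def construct(self, x):
        # Using an element-wise Parameter comparison as an `if` condition is
        # the pattern that reportedly breaks parallel graph compilation.
        if self.step_num[0] == self.step_config[0]:
            x = x * 1  # placeholder branch body (e.g. a dump/print hook)
        return self.backbone(x)

ms.set_context(mode=ms.GRAPH_MODE)
net = StepGate(nn.Dense(16, 16))
out = net(Tensor(np.ones((2, 16), np.float32)))
print(out.shape)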

Describe the expected behavior / 预期结果 (Mandatory / 必填)

Graph compilation succeeds and the model runs normally.

Related log / screenshot / 日志 / 截图 (Mandatory / 必填)

[INFO] CORE(3873834,ffff934dd020,python):2024-05-23-14:26:57.289.529 [mindspore/core/ir/func_graph_base.cc:135] CleanUnusedFuncGraphs] Drop 0xfffaa424c080/↓list_append_59140, use_count: 3, type: FuncGraph
[INFO] PIPELINE(3873834,ffff934dd020,python):2024-05-23-14:26:57.505.028 [mindspore/ccsrc/pipeline/jit/ps/pipeline.cc:838] DelOneNetRes] Delete one net resource end. 1
2024-05-23 14:26:57,552 - mindformers[mindformers/tools/cloud_adapter/cloud_monitor.py:43] - ERROR - Traceback (most recent call last):
  File "/home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
    result = run_func(*args, **kwargs)
  File "/home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/run_mindformer.py", line 39, in main
    trainer.train()
  File "/home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/_checkparam.py", line 1372, in wrapper
    return func(*args, **kwargs)
  File "/home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/trainer.py", line 411, in train
    self.trainer.train(
  File "/home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 113, in train
    self.training_process(
  File "/home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/base_trainer.py", line 782, in training_process
    model.train(#config.runner_config.epochs, dataset,
  File "/home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/train/model.py", line 1082, in train
    self._train(epoch,
  File "/home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/train/model.py", line 115, in wrapper
    func(self, *args, **kwargs)
  File "/home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/train/model.py", line 636, in _train
    self._train_dataset_sink_process(epoch, train_dataset, list_callback,
  File "/home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/train/model.py", line 721, in _train_dataset_sink_process
    outputs = train_network(*inputs)
  File "/home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/nn/cell.py", line 696, in __call__
    out = self.compile_and_run(*args, **kwargs)
  File "/home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/nn/cell.py", line 1014, in compile_and_run
    self.compile(*args, **kwargs)
  File "/home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/nn/cell.py", line 997, in compile
    _cell_graph_executor.compile(self, *self._compile_args, phase=self.phase,
  File "/home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/common/api.py", line 1643, in compile
    result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
ValueError: For 'Add', input1.shape and input2.shape need to broadcast. The value of input1.shape[2] or input2.shape[1] must be 1 or -1 when they are not the same, but got input1.shape = [2048, 320] and input2.shape = [2560]

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/core/ops/op_utils.cc:83 CalBroadCastShape

----------------------------------------------------
- The Traceback of Net Construct Code:
----------------------------------------------------
# 0 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/wrapper/wrapper.py:106
        loss = self.network(*inputs)
               ^
# 1 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 2 In file /home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/nn/wrap/cell_wrapper.py:517
        return self._backbone(*output)
               ^
# 3 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 4 In file /home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/nn/wrap/cell_wrapper.py:517
        return self._backbone(*output)
               ^
# 5 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 6 In file /home/q30056987/anaconda3/envs/wr_pangualpha_py3.9/lib/python3.9/site-packages/mindspore/nn/wrap/cell_wrapper.py:517
        return self._backbone(*output)
               ^
# 7 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 8 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/models/pangualpha/pangualpha.py:496
        output_states, word_table = self.backbone(tokens, input_position, attention_mask,
                                    ^
# 9 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 10 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/models/pangualpha/pangualpha.py:496
        output_states, word_table = self.backbone(tokens, input_position, attention_mask,
                                    ^
# 11 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 12 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/models/pangualpha/pangualpha.py:496
        output_states, word_table = self.backbone(tokens, input_position, attention_mask,
                                    ^
# 13 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 14 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/models/pangualpha/pangualpha.py:343
                hidden_state, _ = self.blocks[i](hidden_state, encoder_masks, init_reset, batch_valid_length)
                                  ^
# 15 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 16 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/models/pangualpha/pangualpha.py:343
                hidden_state, _ = self.blocks[i](hidden_state, encoder_masks, init_reset, batch_valid_length)
                                  ^
# 17 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 18 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/modules/transformer/transformer.py:2208
        attention, layer_present = self.attention(input_x, input_x, input_x, input_mask,
                                   ^
# 19 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 20 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/modules/transformer/transformer.py:2208
        attention, layer_present = self.attention(input_x, input_x, input_x, input_mask,
                                   ^
# 21 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 22 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/modules/transformer/transformer.py:2208
        attention, layer_present = self.attention(input_x, input_x, input_x, input_mask,
                                   ^
# 23 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 24 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/modules/transformer/transformer.py:2208
        attention, layer_present = self.attention(input_x, input_x, input_x, input_mask,
                                   ^
# 25 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/trainer/wr_cell_dump.py:37
            if isinstance(arg, Tensor) and self.step_num[0] == self.step_config[0]:
            ^
# 26 In file /home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/mindformers/modules/layers.py:500
            x = self.bias_add(x, self.cast(self.bias, self.dtype))
                ^
 (See file '/home/q30056987/wr_pangualpha/mindformers-dev/scripts/mf_parallel0/rank_0/om/analyze_fail.ir' for more details. Get instructions about `analyze_fail.ir` at https://www.mindspore.cn/search?inputValue=analyze_fail.ir)


Special notes for this issue/备注 (Optional / 选填)
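One observation from the log that may help triage: the failing Add reports input1.shape = [2048, 320] and input2.shape = [2560], and 320 × 8 = 2560 matches the model's hidden size split across the 8 devices. This suggests the activation reaching bias_add was sharded along the hidden dimension by the parallel strategy while the bias was not, i.e. the inserted Parameter comparison seems to disturb parallel strategy propagation rather than failing on its own.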

Comments (2)

吴瑞 created the Bug-Report

Please assign a maintainer to check this issue.
@fangwenyi @chengxiaoli @Shawny

Thanks for your question. You can comment //mindspore-assistant to get help faster:

  1. If you are new to MindSpore, you may find the answer in the tutorials
  2. If you are an experienced PyTorch user, you may need:
  3. If you hit a PyNative (dynamic graph) problem, you can set set_context(pynative_synchronize=True) to get an error stack that helps locate the issue
  4. For model accuracy tuning, refer to the tuning guide on the official website
  5. If you are reporting a framework bug, please make sure the issue provides the MindSpore version, the backend used (CPU, GPU, Ascend), the environment, an official link to the training code, and the launch method for code that reproduces the error
  6. If you have already identified the root cause, you are welcome to submit a PR to the MindSpore open-source community; we will review it as soon as possible
