
LLaMA2 70B training on 24 nodes fails with "Exec graph failed"

VALIDATION
Bug-Report
Created on 2024-05-13 10:39
name: Bug Report
about: Use this template for reporting a bug
labels: kind/bug

Describe the current behavior (Mandatory)

Environment (Mandatory)

  • Hardware Environment (Ascend/GPU/CPU): Ascend 910B2

/device ascend

  • Software Environment (Mandatory):
    -- MindSpore version: 2.2.11
    -- Python version: 3.9.18
    -- OS platform and distribution (e.g., Linux Ubuntu 16.04):
    -- GCC/Compiler version (if compiled from source):

  • Execute Mode (Mandatory) (PyNative/Graph): Graph

/mode graph

Related testcase (Mandatory)

Steps to reproduce the issue (Mandatory)

  1. Configure the distributed training parallelism as DP=6, TP=8, PP=4
  2. Train the LLaMA2 70B model on 24 nodes using the official configuration file (see the layout sketch below)
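For reference, DP × TP × PP = 6 × 8 × 4 = 192 devices, which matches 24 nodes × 8 NPUs per node. The following is a minimal sketch, assuming the layout is declared through the standard MindSpore auto-parallel context; the reporter's run actually uses the official MindFormers YAML, and the constant names below are invented for the illustration:

    import mindspore as ms

    # Assumed device layout for the reported run: 24 nodes x 8 NPUs = 192 devices.
    DATA_PARALLEL = 6      # DP
    MODEL_PARALLEL = 8     # TP
    PIPELINE_STAGES = 4    # PP
    device_num = DATA_PARALLEL * MODEL_PARALLEL * PIPELINE_STAGES  # 192

    ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")
    # Semi-auto parallel with 4 pipeline stages; the DP/TP split itself is driven
    # by the model's shard strategies (parallel_config in the MindFormers YAML).
    ms.set_auto_parallel_context(
        parallel_mode="semi_auto_parallel",
        device_num=device_num,
        pipeline_stages=PIPELINE_STAGES,
        full_batch=True,
    )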

Describe the expected behavior (Mandatory)

Related log / screenshot (Mandatory)

[WARNING] DISTRIBUTED(49,ffffa68ec1c0,python):2024-05-11-14:05:27.362.881 [mindspore/ccsrc/distributed/collective/collective_manager.cc:220] CreateCommunicationGroup] Start to create communication group: 4-5042664273772422699 [const vector]{0, 48, 96, 144}
[WARNING] DISTRIBUTED(49,ffffa68ec1c0,python):2024-05-11-14:05:27.501.955 [mindspore/ccsrc/distributed/collective/collective_manager.cc:220] CreateCommunicationGroup] Start to create communication group: 4-5042664273772422699backward [const vector]{0, 48, 96, 144}
[WARNING] DISTRIBUTED(49,ffffa68ec1c0,python):2024-05-11-14:05:29.631.692 [mindspore/ccsrc/distributed/collective/collective_manager.cc:220] CreateCommunicationGroup] Start to create communication group: 48-4265225556798242533 [const vector]{[0]:{0}, [1]:{1}, [2]:{2}, [3]:{3}, [4]:{4}, [5]:{5}, [6]:{6}, [7]:{7}, [8]:{8}, [9]:{9}, [10]:{10}, [11]:{11}, [12]:{12}, [13]:{13}, [14]:{14}, [15]:{15}, [16]:{16}, [17]:{17}, [18]:{18}, [19]:{19}, [20]:{20}, [21]:{21}, [22]:{22}, [23]:{23}, [24]:{24}, [25]:{25}, [26]:{26}, [27]:{27}, [28]:{28}, [29]:{29}, [30]:{30}, [31]:{31}, [32]:{32}, [33]:{33}, [34]:{34}, [35]:{35}, [36]:{36}, [37]:{37}, [38]:{38}, [39]:{39}, [40]:{40}, [41]:{41}, [42]:{42}, [43]:{43}, [44]:{44}, [45]:{45}, [46]:{46}, [47]:{47}}
[WARNING] DISTRIBUTED(49,ffffa68ec1c0,python):2024-05-11-14:05:29.740.958 [mindspore/ccsrc/distributed/collective/collective_manager.cc:220] CreateCommunicationGroup] Start to create communication group: 8-12158923167718251343 [const vector]{0, 1, 2, 3, 4, 5, 6, 7}
[WARNING] DISTRIBUTED(49,ffffa68ec1c0,python):2024-05-11-14:05:29.835.374 [mindspore/ccsrc/distributed/collective/collective_manager.cc:220] CreateCommunicationGroup] Start to create communication group: 6-459793778647030493 [const vector]{0, 8, 16, 24, 32, 40}
[WARNING] PARALLEL(49,ffffa68ec1c0,python):2024-05-11-14:05:32.614.529 [mindspore/ccsrc/frontend/parallel/step_parallel.cc:2297] GetSensLossPairs] Can not find the loss cnode
[WARNING] RUNTIME_FRAMEWORK(49,ffffa68ec1c0,python):2024-05-11-14:13:59.450.259 [mindspore/ccsrc/runtime/graph_scheduler/control_node_parser.cc:552] FetchInputNodeByNode] Empty abstract for node:ValueNode<FuncGraph> Send1_13206
[WARNING] RUNTIME_FRAMEWORK(49,ffffa68ec1c0,python):2024-05-11-14:13:59.468.569 [mindspore/ccsrc/runtime/graph_scheduler/control_node_parser.cc:2088] ParseAllRealParameterByFormalParameter] Invalid formal parameter:@61589_61158_60727_22263_22227_22072_✓3↓mindformers_wrapper_wrapper_MFPipelineWithLossScaleCell_construct_62000:param_u, maybe there is no call node for funcgraph:61589_61158_60727_22263_22227_22072_✓3↓mindformers_wrapper_wrapper_MFPipelineWithLossScaleCell_construct_62000
[WARNING] RUNTIME_FRAMEWORK(49,ffffa68ec1c0,python):2024-05-11-14:13:59.468.765 [mindspore/ccsrc/runtime/graph_scheduler/control_node_parser.cc:2088] ParseAllRealParameterByFormalParameter] Invalid formal parameter:@61589_61158_60727_22263_22227_22072_✓3↓mindformers_wrapper_wrapper_MFPipelineWithLossScaleCell_construct_62000:param_u, maybe there is no call node for funcgraph:61589_61158_60727_22263_22227_22072_✓3↓mindformers_wrapper_wrapper_MFPipelineWithLossScaleCell_construct_62000
[WARNING] RUNTIME_FRAMEWORK(49,ffffa68ec1c0,python):2024-05-11-14:14:00.485.909 [mindspore/ccsrc/runtime/graph_scheduler/control_node_parser.cc:1740] FetchDeviceContextByNode] Cannot find device context for call node:@61588_61157_60726_22262_22226_1_mindspore_train_dataset_helper__DataWrapper_construct_61996:13208{[0]: 13208} device contexts size:0 index:0
[WARNING] RUNTIME_FRAMEWORK(49,ffffa68ec1c0,python):2024-05-11-14:14:00.508.584 [mindspore/ccsrc/runtime/graph_scheduler/control_node_parser.cc:552] FetchInputNodeByNode] Empty abstract for node:ValueNode<FuncGraph> Send1_13206
[WARNING] RUNTIME_FRAMEWORK(49,ffffa68ec1c0,python):2024-05-11-14:14:00.536.309 [mindspore/ccsrc/runtime/graph_scheduler/control_node_parser.cc:552] FetchInputNodeByNode] Empty abstract for node:ValueNode<FuncGraph> Send1_13206
[WARNING] RUNTIME_FRAMEWORK(49,ffffa68ec1c0,python):2024-05-11-14:14:01.056.956 [mindspore/ccsrc/runtime/graph_scheduler/control_node_parser.cc:552] FetchInputNodeByNode] Empty abstract for node:ValueNode<FuncGraph> Send1_13206
[ERROR] GE_ADPT(49,fffca7fff1e0,python):2024-05-11-14:22:11.908.529 [mindspore/ccsrc/transform/graph_ir/graph_runner.cc:397] RunGraphWithStreamAsync] Call GE RunGraphWithStreamAsync Failed, ret is: 4294967295
2024-05-11 14:23:11,916 - mindformers[mindformers/tools/cloud_adapter/cloud_monitor.py:43] - ERROR - Traceback (most recent call last):
  File "/workspace/alg-product/mindformers/scripts/mf_parallel0/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
    result = run_func(*args, **kwargs)
  File "/workspace/alg-product/mindformers/scripts/mf_parallel0/run_mindformer.py", line 147, in main
    create_task_trainer(config)
  File "/workspace/alg-product/mindformers/scripts/mf_parallel0/run_mindformer.py", line 86, in create_task_trainer
    trainer.train(config, is_full_config=True)
  File "/workspace/alg-product/mindformers/scripts/mf_parallel0/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 97, in train
    self.training_process(
  File "/workspace/alg-product/mindformers/scripts/mf_parallel0/mindformers/trainer/base_trainer.py", line 763, in training_process
    model.train(config.runner_config.epochs, dataset,
  File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 1068, in train
    self._train(epoch,
  File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 114, in wrapper
    func(self, *args, **kwargs)
  File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 623, in _train
    self._train_dataset_sink_process(epoch, train_dataset, list_callback,
  File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/train/model.py", line 708, in _train_dataset_sink_process
    outputs = train_network(*inputs)
  File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 680, in __call__
    out = self.compile_and_run(*args, **kwargs)
  File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 1023, in compile_and_run
    return _cell_graph_executor(self, *new_args, phase=self.phase)
  File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 1589, in __call__
    return self.run(obj, *args, phase=phase)
  File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 1628, in run
    return self._exec_pip(obj, *args, phase=phase_real)
  File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 121, in wrapper
    results = fn(*arg, **kwargs)
  File "/root/miniconda3/envs/mindspore2.2.11_py39/lib/python3.9/site-packages/mindspore/common/api.py", line 1608, in _exec_pip
    return self._graph_executor(args, phase)
RuntimeError: Exec graph failed

Special notes for this issue (Optional)

Comments (5)

hewu2008 created the Bug-Report

Please assign a maintainer to check this issue.
@fangwenyi @chengxiaoli @Shawny

Thanks for your question. You can comment //mindspore-assistant to get help faster:

  1. If you are new to MindSpore, you may find the answer in the tutorials.
  2. If you are an experienced PyTorch user, you may need:
  3. If you run into a dynamic-graph (PyNative) problem, you can set set_context(pynative_synchronize=True) to view the error stack and help locate the issue (a minimal sketch follows after this list).
  4. For model accuracy tuning, refer to the tuning guide on the official website.
  5. If you are reporting a framework bug, please confirm that the issue provides the MindSpore version, the backend type used (CPU, GPU, Ascend), the environment, the official link to the training code, and the way to launch code that reproduces the error.
  6. If you have already identified the root cause, you are welcome to submit a PR to the MindSpore open-source community, and we will review it as soon as possible.
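For the PyNative tip in item 3 above, a minimal sketch, assuming the training script owns its own context setup; the flag only takes effect in PyNative mode, where it makes device-side errors surface at the Python call that launched the failing operator:

    import mindspore as ms

    # Synchronize after every operator in PyNative (dynamic graph) mode so that a
    # device-side error is raised at the Python line that launched the op,
    # rather than at a later, asynchronous point.
    ms.set_context(mode=ms.PYNATIVE_MODE, pynative_synchronize=True)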

Hello, we suggest moving to the mindformers issue tracker for more support: https://gitee.com/mindspore/mindformers/issues

i-robot added the gitee label
Shawny set the assignee to lzy0920232
Shawny set the associated project to MindSpore Issue Assistant
Shawny set the planned start date to 2024-05-13
Shawny set the planned due date to 2024-06-13
Shawny added the mindspore-assistant label
Shawny removed the gitee label
Shawny added the sig/mindformers label
Shawny changed the task status from TODO to VALIDATION
hewu2008 edited the description
i-robot added the gitee label

Hello, since there has been no reply on this issue, we will close it later. If you still have questions, please provide the specific information and change the issue status to WIP, and we will follow up further. Thank you.
