MindSpore / mindspore

[ST][MS][MF][baichuan2_13b][910B1 8/16p] Fine-tuning on the B1 environment fails with an out-of-memory error

Status: VALIDATION · Bug-Report · Created 2024-05-23 10:45

Describe the current behavior (Mandatory)

Fine-tuning the baichuan2_13b network on the 910B1 environment fails with an out-of-memory error; the same fine-tuning run succeeds on the B3 environment.
Model config in the repository: https://gitee.com/mindspore/mindformers/blob/dev/research/baichuan2/finetune_baichuan2_13b.yaml

Environment (Mandatory)

  • Hardware Environment (Ascend/GPU/CPU): Ascend

  • Software Environment (Mandatory):
    -- CANN version: Milan_C18/20240517
    -- MindSpore version: master_108dc9c05
    -- MindFormers version: dev_5b4a973b2425d
    -- Python version: Python 3.9 (from the conda env `large_model_39` in the log below)

  • Execution Mode (Mandatory) (PyNative/Graph):

/mode pynative
/mode graph

Related testcase (Mandatory)

Test case repository path:
MindFormers_Test/cases/baichuan2/13b/train/
Test case:
test_mf_baichuan2_13b_train_belle_8p_0001/

Steps to reproduce the issue (Mandatory)

  1. get code from mindformers
  2. cd mindformers/research
  3. bash run_singlenode.sh '''python ./baichuan2/run_baichuan2.py --config ./research/baichuan2/finetune_baichuan2_13b.yaml --load_checkpoint /home/workspace/large_model_ckpt//baichuan2/13b/train/ --auto_trans_ckpt True --use_parallel True --run_mode finetune --train_data /home/workspace/large_model_dataset//baichuan2/belle_4096.mindrecord''' /home/workspace/config/hccl_8p.json [0,8] 8
  4. Verify that training succeeds on the B1 environment

Describe the expected behavior (Mandatory)

The network trains successfully.

Related log / screenshot (Mandatory)

Traceback (most recent call last):
 File "/home/miniconda3/envs/large_model_39/lib/python3.9/site-packages/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
   result = run_func(*args, **kwargs)
 File "/home/jenkins/workspace/TDT_deployment/MindFormers_Test/cases/baichuan2/13b/train/test_mf_baichuan2_13b_train_belle_16p_0001/research/./baichuan2/run_baichuan2.py", line 139, in main
   build_context(config)
 File "/home/miniconda3/envs/large_model_39/lib/python3.9/site-packages/mindformers/core/context/build_context.py", line 47, in build_context
   local_rank, device_num = init_context(use_parallel=config.use_parallel,
 File "/home/miniconda3/envs/large_model_39/lib/python3.9/site-packages/mindformers/core/context/build_context.py", line 122, in init_context
   raise RuntimeError("Notice: if you are trying to run with a single device, please set "
RuntimeError: Notice: if you are trying to run with a single device, please set use_parallel=False. If not, please check the error message above.

Traceback (most recent call last):
 File "/home/miniconda3/envs/large_model_39/lib/python3.9/site-packages/mindformers/core/context/build_context.py", line 120, in init_context
   init()
 File "/home/miniconda3/envs/large_model_39/lib/python3.9/site-packages/mindspore/communication/management.py", line 188, in init
   init_hccl()
RuntimeError: Ascend kernel runtime initialization failed. The details refer to 'Ascend Error Message'.

----------------------------------------------------
- Framework Error Message:
----------------------------------------------------
Malloc device memory failed, size[64424509440], ret[207001], Device 7 Available HBM size:65464696832 free size:65157582848 may be other processes occupying this card, check as: ps -ef|grep python

----------------------------------------------------
- Ascend Error Message:
----------------------------------------------------
EL0004: 2024-05-22-22:47:53.613.358 Failed to allocate memory.
       Possible Cause: Available memory is insufficient.
       Solution: Close applications not in use.
       TraceBack (most recent call last):
       rtMalloc execute failed, reason=[driver error:out of memory][FUNC:FuncErrorReason][FILE:error_message_manage.cc][LINE:53]
       alloc device memory failed, runtime result = 207001[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161]

(Please search "CANN Common Error Analysis" at https://www.mindspore.cn for error code description)

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_kernel_runtime.cc:357 Init
mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_memory_adapter.cc:293 MallocFromRts
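The byte counts in the malloc failure line above already tell the story; a quick sanity check (plain arithmetic on the logged numbers, nothing MindSpore-specific) shows the requested allocation is exactly 60 GiB while the card's reported free HBM leaves well under 1 GiB of headroom for everything else, including HCCL's communication buffers:

```python
# Byte counts copied from the "Malloc device memory failed" log line above.
requested = 64_424_509_440   # size[...] requested by the framework
free_hbm = 65_157_582_848    # "free size" reported for Device 7

GIB = 1024 ** 3
print(f"requested: {requested / GIB:.2f} GiB")               # 60.00 GiB, i.e. max_device_memory=60GB
print(f"free HBM:  {free_hbm / GIB:.2f} GiB")                # 60.68 GiB
print(f"headroom:  {(free_hbm - requested) / GIB:.2f} GiB")  # 0.68 GiB left over
```

With less than 0.7 GiB remaining after the framework's memory pool is carved out, the HCCL initialization triggered by `init()` cannot get the device memory it needs, which is why the failure surfaces during context/communication setup rather than mid-training.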

Special notes for this issue (Optional)

Route to 张森镇 (Zhang Senzhen).

Comments (3)

zhangjie18 created this Bug-Report and added the labels: kind/bug, master, stage/func-debug, sig/mindformers, attr/function, device/ascend.

Please assign a maintainer to check this issue.
@zhangjie18

Thanks for your question. You can comment //mindspore-assistant to get help faster:

  1. If you are new to MindSpore, you may find the answer in the tutorials.
  2. If you are an experienced PyTorch user, you may need:
     1. For dynamic-graph (PyNative) issues, set set_context(pynative_synchronize=True) to get a clearer error stack for debugging.
     2. For model accuracy tuning, refer to the official tuning guide.
  3. If you are reporting a framework bug, please make sure the issue includes the MindSpore version, the backend type (CPU, GPU, Ascend), the environment, an official link to the training code, and the launch command that reproduces the error.
  4. If you have already located the root cause, you are welcome to submit a PR to the MindSpore open-source community; we will review it as soon as possible.

Root cause: max_device_memory was set too large, leaving insufficient device memory for HCCL.
Fix: reduce max_device_memory from 60GB to 59GB.
Fixed; see PR: https://gitee.com/mindspore/mindformers/pulls/3151
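The fix is a one-line change in the training YAML. A sketch of the relevant fragment, assuming the usual mindformers `context` layout (the surrounding keys are illustrative; only the max_device_memory change comes from this issue, so verify against the actual finetune_baichuan2_13b.yaml):

```yaml
# finetune_baichuan2_13b.yaml (fragment; surrounding keys are illustrative)
context:
  mode: 0                     # GRAPH_MODE
  device_target: "Ascend"
  max_device_memory: "59GB"   # was "60GB"; frees ~1GB of HBM for HCCL on 910B1
```

The 910B1 cards report slightly less usable HBM than the B3 environment, so a pool size that fits on B3 can starve HCCL on B1; shrinking the pool by 1GB restores the needed headroom.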

i-robot added the gitee label. 森镇 changed the status from TODO to VALIDATION, added the labels rca/others, rct/oldrelease, ctl/rdselftest, and moved the milestone from B-SIG-MindFormers to B-SolutionTest. fangwenyi set the related branch to master and the backend type to Ascend. 于明浩 reassigned the issue from 森镇 to zhangjie18.
