MindSpore / mindspore
[ST][MS][MF][glm3-6b-32k] KBK network batch inference fails: ValueError: Unable to create tensor

DONE
Bug-Report
Created on 2024-04-26 16:11
name: Bug Report
about: Use this template for reporting a bug
labels: kind/bug

Describe the current behavior (Mandatory)

[ST][MS][MF][glm3-6b-32k] KBK network batch inference fails with ValueError: Unable to create tensor.
Model repository: https://gitee.com/mindspore/mindformers/blob/dev/docs/model_cards/glm3.md

Environment (Mandatory)

  • Hardware Environment (Ascend/GPU/CPU):

Please delete the backend not involved:
/device ascend

  • Software Environment (Mandatory):
    -- MindSpore version (e.g., 1.7.0.Bxxx) :
    -- Python version (e.g., Python 3.7.5) :
    -- OS platform and distribution (e.g., Linux Ubuntu 16.04):
    -- GCC/Compiler version (if compiled from source):

CANN version: Milan_C17/20240414
MindSpore version: master_B020
MindFormers version: master_B020

  • Execute Mode (Mandatory) (PyNative/Graph):

Please delete the mode not involved:
/mode pynative
/mode graph

Related testcase (Mandatory)

Test case repository path: /MindFormers_Test/cases/llama2/7b/train/
Test case: Not applicable

Steps to reproduce the issue (Mandatory)

  1. get code from mindformers

  2. cd mindformers/scripts

  3. python run_mindformer.py --config research/glm32k/predict_glm32k_seq_256_bs_8.yaml --run_mode predict --use_parallel False --predict_data "使用python编写快速排序代码" "晚上睡不着应该怎么办" "请介绍一下华为" "平板有什么用" "使用python编写快速排序代码" "晚上睡不着应该怎么办" "请介绍一下华为" "平板有什么用" --auto_trans_ckpt False --predict_batch_size 8 --device_id 0 > 32k_infer_seq256_bs8.log 2>&1 &

  4. Verify that the network inference succeeds.

Describe the expected behavior (Mandatory)

Network inference succeeds.

Related log / screenshot (Mandatory)

Traceback (most recent call last):
 File "/home/jenkins0/sjw/mindformers/run_mindformer.py", line 268, in <module>
   main(config_)
 File "/home/jenkins0/sjw/mindformers/mindformers/tools/cloud_adapter/cloud_monitor.py", line 44, in wrapper
   raise exc
 File "/home/jenkins0/sjw/mindformers/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
   result = run_func(*args, **kwargs)
 File "/home/jenkins0/sjw/mindformers/run_mindformer.py", line 43, in main
   trainer.predict(predict_checkpoint=config.load_checkpoint, input_data=config.input_data,
 File "/home/jenkins0/.conda/envs/wxy39/lib/python3.9/site-packages/mindspore/_checkparam.py", line 1371, in wrapper
   return func(*args, **kwargs)
 File "/home/jenkins0/sjw/mindformers/mindformers/trainer/trainer.py", line 684, in predict
   output_result = self.trainer.predict(
 File "/home/jenkins0/sjw/mindformers/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 338, in predict
   return self.predict_process(config=config,
 File "/home/jenkins0/sjw/mindformers/mindformers/trainer/base_trainer.py", line 937, in predict_process
   output_results = self.pipeline_task(input_data, top_k=top_k)
 File "/home/jenkins0/sjw/mindformers/mindformers/pipeline/base_pipeline.py", line 149, in __call__
   outputs = self.run_multi(inputs, batch_size, preprocess_params, forward_params, postprocess_params)
 File "/home/jenkins0/sjw/mindformers/mindformers/pipeline/text_generation_pipeline.py", line 187, in run_multi
   outputs.extend(self.run_single(item, preprocess_params,
 File "/home/jenkins0/sjw/mindformers/mindformers/pipeline/base_pipeline.py", line 236, in run_single
   model_inputs = self.preprocess(inputs, **preprocess_params)
 File "/home/jenkins0/sjw/mindformers/mindformers/pipeline/text_generation_pipeline.py", line 135, in preprocess
   return self.tokenizer.build_batch_input(inputs)
 File "/home/jenkins0/sjw/mindformers/mindformers/models/glm3/glm3_tokenizer.py", line 248, in build_batch_input
   return self.batch_encode_plus(batch_inputs, return_tensors="np", is_split_into_words=True)
 File "/home/jenkins0/sjw/mindformers/mindformers/models/tokenization_utils_base.py", line 3352, in batch_encode_plus
   return self._batch_encode_plus(
 File "/home/jenkins0/sjw/mindformers/mindformers/models/tokenization_utils.py", line 871, in _batch_encode_plus
   batch_outputs = self._batch_prepare_for_model(
 File "/home/jenkins0/sjw/mindformers/mindformers/models/tokenization_utils.py", line 951, in _batch_prepare_for_model
   batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors)
 File "/home/jenkins0/sjw/mindformers/mindformers/models/tokenization_utils_base.py", line 273, in __init__
   self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
 File "/home/jenkins0/sjw/mindformers/mindformers/models/tokenization_utils_base.py", line 796, in convert_to_tensors
   raise ValueError(
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`input_ids` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
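
The error originates in the tokenizer trying to pack token-id rows of unequal length into a single NumPy tensor. A minimal standalone sketch (plain NumPy, with hypothetical token ids) that reproduces the same failure mode and shows why padding resolves it:

```python
import numpy as np

# Token-id rows of unequal length, as produced when a batch of prompts is
# tokenized without padding (the values here are hypothetical).
ragged = [[64790, 64792, 30910],
          [64790, 64792, 30910, 13, 54532]]

# Rows of different lengths cannot form a rectangular 2-D integer array;
# this is the condition BatchEncoding reports as "Unable to create tensor".
try:
    np.array(ragged, dtype=np.int64)
except ValueError as err:
    print("conversion failed:", err)

# Right-padding every row to the longest length makes the batch rectangular.
pad_id = 0  # hypothetical pad token id
max_len = max(len(row) for row in ragged)
padded = np.array([row + [pad_id] * (max_len - len(row)) for row in ragged],
                  dtype=np.int64)
print(padded.shape)  # (2, 5)
```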

Special notes for this issue (Optional)

Route this to 吴致远.

Comments (5)

sunjiawei999 created the Bug-Report
sunjiawei999 copied this issue from task I9JWYN
sunjiawei999 added labels: kind/bug, v2.3.0.rc2, attr/function, stage/func-debug, sig/mslite, device/ascend

Please assign a maintainer to check this issue.
@sunjiawei999

Thanks for your question. You can comment //mindspore-assistant to get help faster:

  1. If you are new to MindSpore, you may find the answer in the tutorials.
  2. If you are an experienced PyTorch user, you may need the following:
     1. For dynamic-graph (PyNative) issues, set set_context(pynative_synchronize=True) to see the error stack and help locate the problem.
     2. For model accuracy tuning, refer to the tuning guide on the official website.
  3. If you are reporting a framework bug, please confirm that the issue provides the MindSpore version, the backend used (CPU, GPU, Ascend), the environment, an official link to the training code, and the launch command that reproduces the error.
  4. If you have already located the root cause, you are welcome to submit a PR to the MindSpore open-source community; we will review it as soon as possible.
sunjiawei999 changed the title
sunjiawei999 changed the assignee from wangxingyan to xiangminshan
sunjiawei999 edited the description (twice)
wangxingyan changed the assignee from xiangminshan to 吴致远
wangxingyan added wangxingyan as a collaborator
wuzhiyuan1996 changed the status from TODO to VALIDATION
wuzhiyuan1996 added labels: rct/newfeature, rca/codelogic, ctl/rdselftest

Appearance & Root Cause
Problem: batch inference fails.
Root cause: the multi-batch inputs are not padded, so the tokenized rows have different lengths and cannot be packed into a single tensor.

Fix Solution
Pad the multi-batch data to a uniform length.
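
A minimal sketch of the fix direction, in line with the error message's own hint and the root cause above; the names below are illustrative, not the actual MindFormers patch:

```python
# Illustrative only. One option suggested by the error message is to let the
# tokenizer pad during batch encoding, assuming the batch_encode_plus call in
# glm3_tokenizer.build_batch_input accepts a HuggingFace-style `padding` flag:
#   self.batch_encode_plus(batch_inputs, return_tensors="np",
#                          is_split_into_words=True, padding=True)

# Equivalent manual padding before the tensor is built:
def pad_batch_to_max_length(batch_input_ids, pad_token_id=0):
    """Right-pad each token-id list to the longest sequence in the batch."""
    max_len = max(len(ids) for ids in batch_input_ids)
    return [ids + [pad_token_id] * (max_len - len(ids)) for ids in batch_input_ids]
```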

Self-test Report & DT Review
Additional ST/UT needed: No
Reason: models under the research directory are not covered.

i-robot added the gitee label
wuzhiyuan1996 changed the assignee from 吴致远 to sunjiawei999
wuzhiyuan1996 added 吴致远 as a collaborator
zhongjicheng changed the status from VALIDATION to DONE
