
[ST][MS][MF][llama2_13b_lora] Inference fails after fine-tuning: For Operator[ReshapeAndCache], slot_mapping's type 'None' does not match expected type 'Tensor'.

DONE
Bug-Report
Created on 2024-05-20 15:37

Describe the current behavior / 问题描述 (Mandatory / 必填)

After fine-tuning llama2_13b_lora on this environment, inference with the fine-tuned network fails.
Model config: https://gitee.com/mindspore/mindformers/blob/dev/configs/llama2/lora_llama2_13b.yaml

Environment / 环境信息 (Mandatory / 必填)

  • Hardware Environment(Ascend/GPU/CPU) / 硬件环境: Ascend

  • Software Environment / 软件环境 (Mandatory / 必填):
    -- CANN version: Milan_C18/20240517/
    -- MindSpore version: master_B521
    -- MindFormers version: master_B521

  • Execute Mode / 执行模式 (Mandatory / 必填)(PyNative/Graph):
    /mode pynative
    /mode graph

Related testcase / 关联用例 (Mandatory / 必填)

Test case repository path: MindFormers_Test/cases/llama2/13b/train/
Test case:
test_mf_llama2_13b_lora_train_alpaca_8p_0001

Steps to reproduce the issue / 重现步骤 (Mandatory / 必填)

  1. Get the code from mindformers
  2. cd mindformers/scripts
  3. python run_mindformer.py --config ./configs/llama2/lora_llama2_13b.yaml --run_mode predict --load_checkpoint ./target_checkpoint/rank_0/llama2_13b_lora0.ckpt (the merged single checkpoint converted after distributed fine-tuning) --use_parallel False --device_id 0 --predict_data "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: what is Monetary Policy? ASSISTANT:" (an equivalent API-level call is sketched after this list)
  4. Verify whether the network inference succeeds
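For reference, a minimal sketch of the API-level call that step 3 drives. The Trainer construction arguments below are assumptions; the predict() keywords (predict_checkpoint, input_data) mirror the traceback further down:

```python
# Hedged sketch, not the official launch script: the task/model identifiers
# passed to Trainer are assumptions, while predict_checkpoint/input_data
# follow the call shown in the traceback below.
from mindformers import Trainer

trainer = Trainer(task="text_generation", model="llama2_13b")  # assumed identifiers
result = trainer.predict(
    predict_checkpoint="./target_checkpoint/rank_0/llama2_13b_lora0.ckpt",
    input_data="A chat between a curious user and an artificial intelligence assistant. "
               "The assistant gives helpful, detailed, and polite answers to the user's "
               "questions. USER: what is Monetary Policy? ASSISTANT:",
)
print(result)
```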

Describe the expected behavior / 预期结果 (Mandatory / 必填)

The network inference succeeds.

Related log / screenshot / 日志 / 截图 (Mandatory / 必填)

2024-05-19 15:21:09,478 - mindformers[mindformers/generation/text_generator.py:252] - INFO - The generation mode will be **GREEDY_SEARCH**.
2024-05-19 15:21:09,478 - mindformers[mindformers/generation/text_generator.py:93] - INFO - Set kbk infer :False
[CRITICAL] ANALYZER(2776580,ffff816e6020,python):2024-05-19-15:21:16.615.136 [mindspore/ccsrc/pipeline/jit/ps/static_analysis/prim.cc:1263] CheckArgsSizeAndType] For Operator[ReshapeAndCache], slot_mapping's type 'None' does not match expected type 'Tensor'.
The reason may be: lack of definition of type cast, or incorrect type when creating the node.
This exception is caused by framework's unexpected error. Please create an issue at https://gitee.com/mindspore/mindspore/issues to get help.
2024-05-19 15:21:21,535 - mindformers[mindformers/tools/cloud_adapter/cloud_monitor.py:43] - ERROR - Traceback (most recent call last):
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/tools/cloud_adapter/cloud_monitor.py", line 34, in wrapper
   result = run_func(*args, **kwargs)
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/run_mindformer.py", line 43, in main
   trainer.predict(predict_checkpoint=config.load_checkpoint, input_data=config.input_data,
 File "/home/miniconda3/envs/large_model_39/lib/python3.9/site-packages/mindspore/_checkparam.py", line 1372, in wrapper
   return func(*args, **kwargs)
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/trainer/trainer.py", line 692, in predict
   output_result = self.trainer.predict(
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/trainer/causal_language_modeling/causal_language_modeling.py", line 343, in predict
   return self.predict_process(config=config,
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/trainer/base_trainer.py", line 937, in predict_process
   output_results = self.pipeline_task(input_data, top_k=top_k)
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/pipeline/base_pipeline.py", line 149, in __call__
   outputs = self.run_multi(inputs, batch_size, preprocess_params, forward_params, postprocess_params)
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/pipeline/text_generation_pipeline.py", line 181, in run_multi
   outputs.extend(self.run_single(item, preprocess_params,
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/pipeline/base_pipeline.py", line 237, in run_single
   model_outputs = self.forward(model_inputs, **forward_params)
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/pipeline/base_pipeline.py", line 303, in forward
   return self._forward(model_inputs, **forward_params)
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/pipeline/text_generation_pipeline.py", line 197, in _forward
   output_ids = self.network.generate(input_ids, **forward_params)
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/generation/text_generator.py", line 830, in generate
   target_list, is_finished = self.infer(input_ids=input_ids,
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/generation/text_generator.py", line 952, in infer
   res, current_index = self.forward(input_ids=input_ids,
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/generation/text_generator.py", line 1056, in forward
   res = self._incremental_infer(
 File "/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/generation/text_generator.py", line 306, in _incremental_infer
   res = self(
 File "/home/miniconda3/envs/large_model_39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 696, in __call__
   out = self.compile_and_run(*args, **kwargs)
 File "/home/miniconda3/envs/large_model_39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 1014, in compile_and_run
   self.compile(*args, **kwargs)
 File "/home/miniconda3/envs/large_model_39/lib/python3.9/site-packages/mindspore/nn/cell.py", line 997, in compile
   _cell_graph_executor.compile(self, *self._compile_args, phase=self.phase,
 File "/home/miniconda3/envs/large_model_39/lib/python3.9/site-packages/mindspore/common/api.py", line 1643, in compile
   result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
TypeError: For Operator[ReshapeAndCache], slot_mapping's type 'None' does not match expected type 'Tensor'.
The reason may be: lack of definition of type cast, or incorrect type when creating the node.

----------------------------------------------------
- Framework Unexpected Exception Raised:
----------------------------------------------------
This exception is caused by framework's unexpected error. Please create an issue at https://gitee.com/mindspore/mindspore/issues to get help.
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/pipeline/jit/ps/static_analysis/prim.cc:1263 CheckArgsSizeAndType

----------------------------------------------------
- The Traceback of Net Construct Code:
----------------------------------------------------
# 0 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/pet/pet_model.py:79
       return self.pet_model(input_ids, labels, input_position, position_ids, attention_mask, input_embeds,
# 1 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/pet/models/lora.py:72
       return self.lora_model(input_ids=input_ids,
# 2 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:357
       if self.use_past:
# 3 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/pet/models/lora.py:72
       return self.lora_model(input_ids=input_ids,
              ^
# 4 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:358
           if not isinstance(batch_valid_length, Tensor):
# 5 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:360
       if self.training:
# 6 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:363
           tokens = input_ids
           ^
# 7 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/pet/models/lora.py:72
       return self.lora_model(input_ids=input_ids,
              ^
# 8 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:366
       if not self.is_first_iteration:
# 9 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:370
       if pre_gather:
# 10 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:368
       output = self.model(tokens, batch_valid_length, batch_index, zactivate_len, block_tables, slot_mapping)
                ^
# 11 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:193
       if self.use_past:
# 12 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:194
           if self.is_first_iteration:
# 13 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:195
               freqs_cis = self.freqs_mgr.prefill(bs, seq_len)
               ^
# 14 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:194
           if self.is_first_iteration:
# 15 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:368
       output = self.model(tokens, batch_valid_length, batch_index, zactivate_len, block_tables, slot_mapping)
                ^
# 16 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:206
       for i in range(self.num_layers):
# 17 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:207
           h = self.layers[i](h, freqs_cis, mask, batch_valid_length=batch_valid_length, block_tables=block_tables,
               ^
# 18 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama_transformer.py:501
       if not self.use_past:
# 19 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama.py:207
           h = self.layers[i](h, freqs_cis, mask, batch_valid_length=batch_valid_length, block_tables=block_tables,
               ^
# 20 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama_transformer.py:508
       h = self.attention(input_x, freqs_cis, mask, batch_valid_length, block_tables, slot_mapping)
           ^
# 21 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama_transformer.py:248
       if self.qkv_concat:
# 22 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama_transformer.py:258
           query = self.cast(self.wq(x), self.dtype)  # dp, 1 -> dp, mp
           ^
# 23 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama_transformer.py:263
       if self.use_past:
# 24 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama_transformer.py:264
           freqs_cos, freqs_sin, _ = freqs_cis
# 25 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/models/llama/llama_transformer.py:265
           context_layer = self.infer_attention(query, key, value, batch_valid_length, block_tables, slot_mapping,
                           ^
# 26 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/modules/infer_attention.py:281
       if self.use_rope_rotary_emb:
# 27 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/modules/infer_attention.py:283
           freqs_cos = self.cast(freqs_cos, mstype.float16)
# 28 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/modules/infer_attention.py:289
       if self.is_first_iteration:
# 29 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/modules/infer_attention.py:290
           if self.input_layout == "BSH":
           ^
# 30 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/modules/infer_attention.py:291
               context_layer = self.flash_attention(query, key, value, attn_mask, alibi_mask)
               ^
# 31 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/modules/infer_attention.py:290
           if self.input_layout == "BSH":
           ^
# 32 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/modules/infer_attention.py:286
       key_out = self.paged_attention_mgr(key, value, slot_mapping)
                 ^
# 33 In file /data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/mindformers/modules/paged_attention_mgr.py:61
       return self.reshape_and_cache(key, value, self.key_cache, self.value_cache, slot_mapping)
              ^
(See file '/data/jenkins_workspace/TDT_deployment/MindFormers_Test/cases/llama2/13b/train/test_mf_llama2_13b_lora_train_alpaca_8p_0001/rank_0/om/analyze_fail.ir' for more details. Get instructions about `analyze_fail.ir` at https://www.mindspore.cn/search?inputValue=analyze_fail.ir)

Special notes for this issue/备注 (Optional / 选填)

Hand this over to 倪钰鑫.

Comments (4)

zhangjie18 created the Bug-Report and added the kind/bug, master, stage/func-debug, sig/mindformers, device/ascend, and attr/function labels.

Please assign a maintainer to check this issue.
@zhangjie18

Thanks for your question. You can comment //mindspore-assistant to get help faster:

  1. If you are new to MindSpore, you may find the answer in the tutorials.
  2. If you are an experienced PyTorch user, you may need the following:
     1. For PyNative (dynamic graph) issues, you can set set_context(pynative_synchronize=True) to get a synchronous error stack that helps locate the problem (a one-line example follows this list).
     2. For model accuracy tuning, refer to the tuning guide on the official website.
  3. If you are reporting a framework bug, please confirm that the issue provides the MindSpore version, the backend in use (CPU, GPU, Ascend), the environment, the official link to the training code, and the way to launch code that reproduces the error.
  4. If you have already located the root cause, you are welcome to submit a PR to the MindSpore open-source community; we will review it as soon as possible.
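As a quick illustration of item 2.1 (this uses MindSpore's documented set_context API and is shown here only as a debugging aid, not as part of the fix):

```python
import mindspore as ms

# Make PyNative execution synchronous so an error surfaces at the Python line
# that launched the failing operator, instead of asynchronously later.
ms.set_context(pynative_synchronize=True)
```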

Root cause: LoRA inference did not go through the KBK (kernel-by-kernel) inference flow.
Fixed; see PR https://gitee.com/mindspore/mindformers/pulls/3113
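The reported TypeError is consistent with that root cause: ReshapeAndCache declares slot_mapping as a Tensor input, so compiling a path that leaves it as None fails the graph-mode type check. Below is a minimal sketch of the invariant that was violated, using a hypothetical guard helper; it is an illustration only, not the merged fix, which instead routes LoRA inference through the KBK flow as stated above.

```python
# Hypothetical guard, for illustration only: ReshapeAndCache (and the other
# paged-attention inputs) must receive a Tensor, never None, once the graph
# is compiled for incremental inference.
import numpy as np
import mindspore as ms

def ensure_slot_mapping(slot_mapping, seq_len):
    """Fall back to a trivial int32 slot mapping when none was prepared."""
    if slot_mapping is None:
        slot_mapping = ms.Tensor(np.arange(seq_len, dtype=np.int32))
    return slot_mapping

# Passing the result of ensure_slot_mapping(...) instead of a bare None would
# satisfy the operator's type check; the actual fix is to have LoRA inference
# take the KBK flow, per the maintainer comment above.
```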

i-robot added the gitee label.
niyuxin94520 added the rct/bugfix, rca/algorithm, and ctl/solutiontest labels, changed the assignee from niyuxin94520 to zhangjie18, and changed the task status from TODO to VALIDATION.

Regression versions:
  CANN: Milan_C18/20240517/
  MindSpore: MindSpore_master_659b2536 (MindSporeDaily)
  MindFormers: dev_20240521121521_2df1290c073e4
Regression steps: follow the reproduction steps in this issue.
Basic problem: the functional issue is resolved.
[Screenshot attached in the original issue]
Note: the garbled inference output is because the weights were not loaded for this run; it only verifies that the functional issue is fixed.
Test conclusion: regression passed.
Regression date: 2024-05-22

i-robot added the foruda label.
zhangjie18 changed the task status from VALIDATION to DONE.
