
MindSpore / mindformers


Abnormal loss values: loss is 0 for the entire run, the final train_loss is on the order of 1e-6, and the fine-tuned model performs very poorly

REJECTED
Question
Created on 2024-04-01 01:14

Abnormal loss values

Command used to run the code

!bash /root/sft/lora/finetune_lora_single_gpu_01.sh  -m /root/autodl-tmp/Qwen-14B-Chat -d /root/sft/lora/plan_data.json

Code output

[2024-03-31 22:39:01,920] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
The model is automatically converting to bf16 for faster inference. If you want to disable the automatic precision, please manually add bf16/fp16/fp32=True to "AutoModelForCausalLM.from_pretrained".
Try importing flash-attention for faster inference...
Loading checkpoint shards: 100%|████████████████| 15/15 [00:01<00:00,  7.57it/s]
trainable params: 223,150,080 || all params: 14,390,440,960 || trainable%: 1.5506827109764953
Loading data...
Formatting inputs...Skip in lazy mode
/root/miniconda3/lib/python3.8/site-packages/accelerate/accelerator.py:432: FutureWarning: Passing the following arguments to `Accelerator` is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches']). Please pass an `accelerate.DataLoaderConfiguration` instead: 
dataloader_config = DataLoaderConfiguration(dispatch_batches=None)
  warnings.warn(
Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
  0%|                                                   | 0/111 [00:00<?, ?it/s]Token indices sequence length is longer than the specified maximum sequence length for this model (1546 > 512). Running this sequence through the model will result in indexing errors
{'loss': 0.0, 'learning_rate': 0.00015, 'epoch': 0.03}                          
{'loss': 0.0, 'learning_rate': 0.0003, 'epoch': 0.05}                           
{'loss': 0.0, 'learning_rate': 0.0002999377014485777, 'epoch': 0.08}            
{'loss': 0.0, 'learning_rate': 0.00029975085754243745, 'epoch': 0.11}           
{'loss': 0.0, 'learning_rate': 0.00029943962348297535, 'epoch': 0.13}           
{'loss': 0.0, 'learning_rate': 0.0002990042577959387, 'epoch': 0.16}            
{'loss': 0.0, 'learning_rate': 0.00029844512211668286, 'epoch': 0.19}           
{'loss': 0.0, 'learning_rate': 0.0002977626808897792, 'epoch': 0.21}            
{'loss': 0.0, 'learning_rate': 0.0002969575009832261, 'epoch': 0.24}            
{'loss': 0.0, 'learning_rate': 0.000296030251217581, 'epoch': 0.27}             
{'loss': 0.0, 'learning_rate': 0.0002949817018104066, 'epoch': 0.29}            
{'loss': 0.0, 'learning_rate': 0.0002938127237364918, 'epoch': 0.32}            
{'loss': 0.0, 'learning_rate': 0.00029252428800437854, 'epoch': 0.35}           
...
{'train_runtime': 1832.2161, 'train_samples_per_second': 1.958, 'train_steps_per_second': 0.061, 'train_loss': 1.8821106090735493e-06, 'epoch': 2.97}
100%|█████████████████████████████████████████| 111/111 [30:32<00:00, 16.51s/it]

Contents of the bash script

#!/bin/bash
export CUDA_DEVICE_MAX_CONNECTIONS=1

MODEL="Qwen/Qwen-7B" # Set the path if you do not want to load from huggingface directly
# ATTENTION: specify the path to your training data, which should be a json file consisting of a list of conversations.
# See the section for finetuning in README for more information.
DATA="path_to_data"

function usage() {
    echo '
Usage: bash finetune/finetune_lora_single_gpu.sh [-m MODEL_PATH] [-d DATA_PATH]
'
}

while [[ "$1" != "" ]]; do
    case $1 in
        -m | --model )
            shift
            MODEL=$1
            ;;
        -d | --data )
            shift
            DATA=$1
            ;;
        -h | --help )
            usage
            exit 0
            ;;
        * )
            echo "Unknown argument ${1}"
            exit 1
            ;;
    esac
    shift
done

export CUDA_VISIBLE_DEVICES=0

python finetune.py \
  --model_name_or_path $MODEL \
  --data_path $DATA \
  --bf16 True \
  --output_dir /root/autodl-tmp/output_qwen \
  --num_train_epochs 3 \
  --per_device_train_batch_size 4 \
  --per_device_eval_batch_size 1 \
  --gradient_accumulation_steps 8 \
  --evaluation_strategy "no" \
  --save_strategy "steps" \
  --save_steps 1000 \
  --save_total_limit 10 \
  --learning_rate 3e-4 \
  --weight_decay 0.1 \
  --adam_beta2 0.95 \
  --warmup_ratio 0.01 \
  --lr_scheduler_type "cosine" \
  --logging_steps 1 \
  --report_to "none" \
  --model_max_length 512 \
  --lazy_preprocess True \
  --gradient_checkpointing \
  --use_lora

# If you use fp16 instead of bf16, you should use deepspeed
# --fp16 True --deepspeed finetune/ds_config_zero2.json

Runtime environment:

A40

Comments (2)

nzr created the Question

This is most likely a problem with the precision format of the loss output, which only prints one decimal place.
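A minimal Python sketch of that effect (an assumption for illustration: the Hugging Face Trainer rounds the logged per-step loss to a few decimal places before printing, so a tiny but non-zero loss can show up as 0.0):

# Illustration only: a very small but non-zero loss rounds to 0.0 when truncated for logging.
final_train_loss = 1.8821106090735493e-06   # final train_loss reported in the output above
print(round(final_train_loss, 4))           # prints 0.0, matching the per-step 'loss': 0.0 lines
print(f"{final_train_loss:.1f}")            # prints 0.0 with one decimal place, as described above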

Hello, this is the mindformers suite; please raise PyTorch-related questions in the relevant community. Thank you.

zyw_hw changed the task status from TODO to REJECTED
