Chinese-LLaMA-Alpaca: A Detailed Tutorial on Chinese Vocabulary Extension, Model Pretraining, and Fine-tuning

This article walks through the entire Chinese-LLaMA-Alpaca workflow from scratch: Chinese vocabulary extension, model pretraining, and fine-tuning, including environment setup and dependency installation.

In the earlier article on SentencePiece, an essential tool for extending LLM vocabularies, we noted that LLaMA is arguably the brightest star among today's open-source large models. However, unlike ChatGLM-6B and BLOOM, which support Chinese natively, LLaMA was trained mainly on Latin and Cyrillic scripts, so its Chinese support is far from ideal. The original LLaMA vocabulary contains 32K tokens, whereas multilingual models such as XLM-R and BLOOM use vocabularies of roughly 250K tokens. Taking Chinese as an example, the LLaMA vocabulary contains only a few hundred Chinese tokens, which leads to two problems:

  • The native LLaMA tokenizer contains only a small number of Chinese characters, so a single Chinese character is often split into multiple tokens (typically 2-3 byte-level tokens per character), which significantly reduces encoding and decoding efficiency (see the quick check after this list).
  • Languages that appear rarely or not at all during pretraining are learned poorly.
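
To see this concretely, here is a quick check (a minimal sketch, assuming the HF-format LLaMA weights used later in this article are available under /workspace/model/llama-7b-hf):

from transformers import LlamaTokenizer

# Tokenize a short Chinese sentence with the original LLaMA tokenizer and
# count the tokens; most characters fall back to 2-3 byte-level tokens.
tokenizer = LlamaTokenizer.from_pretrained("/workspace/model/llama-7b-hf")
text = "白日依山尽,黄河入海流。"
tokens = tokenizer.tokenize(text)
print(len(text), "characters ->", len(tokens), "tokens")
print(tokens)  # e.g. '依' becomes '<0xE4>', '<0xBE>', '<0x9D>'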

To address these problems, we need to extend the Chinese vocabulary: for example, train a Chinese tokenizer on a Chinese corpus, then merge it with the native LLaMA tokenizer by combining their vocabularies, producing a single merged tokenizer model.
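
A minimal sketch of training such a Chinese tokenizer with sentencepiece (the corpus path and settings below are assumptions for illustration; the official chinese_sp.model used later in this article was trained on general Chinese text with a 20K vocabulary):

import sentencepiece as spm

# Train a BPE tokenizer on a plain-text Chinese corpus (one sentence per line).
# "corpus_zh.txt" and the hyperparameters are placeholders.
spm.SentencePieceTrainer.train(
    input="corpus_zh.txt",
    model_prefix="chinese_sp",
    vocab_size=20000,
    model_type="bpe",
    character_coverage=0.9995,
)
# Produces chinese_sp.model and chinese_sp.vocab in the working directory.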

The open-source Chinese-LLaMA-Alpaca project documents the full process of vocabulary extension, model pretraining, and instruction fine-tuning in detail. This article walks through how Chinese-LLaMA-Alpaca performs Chinese vocabulary extension, pretraining, and fine-tuning from scratch.

Environment Setup

The base environment is configured as follows:

  • OS: Ubuntu 18.04
  • CPUs: a single node with 384 GB RAM, 2 physical Intel CPUs, 20 cores each
  • GPUs: 4 × A800 80GB
  • Python: 3.10 (upgrade OpenSSL to 1.1.1t first, then compile and install Python)
  • NVIDIA driver: 525.105.17 (choose the driver matching your GPU model)
  • CUDA toolkit: 11.6
  • cuDNN: 8.8.1.3 (CUDA 11)

For reproducibility, this article uses a Docker image to set up the environment.

First, pull the matching PyTorch image.

docker pull pytorch/pytorch:1.13.1-cuda11.6-cudnn8-devel

Once the image has been downloaded, create a container.

docker run -dt --name pytorch1131_cu116 --restart=always --gpus all --network=host \
-v /home/gdong/workspace:/workspace \
-w /workspace \
--shm-size 5g \
pytorch/pytorch:1.13.1-cuda11.6-cudnn8-devel \
/bin/bash

Enter the container.

docker exec -it pytorch1131_cu116 bash

Install the dependencies.

pip install transformers==4.28.1 sentencepiece==0.1.97 google protobuf deepspeed -i https://pypi.tuna.tsinghua.edu.cn/simple  --trusted-host pypi.tuna.tsinghua.edu.cn

Install Peft from source. Because the Peft library changes frequently, install it from commit 13e53fc.

git clone https://github.com/huggingface/peft.git
cd peft
git checkout 13e53fc
pip install . -i https://pypi.tuna.tsinghua.edu.cn/simple  --trusted-host pypi.tuna.tsinghua.edu.cn

Preparing the Code, Model, and Dataset

Code Preparation

Download the official Chinese-LLaMA-Alpaca code.

# 3e2f2529
git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca.git

Model Weights and Tokenizer Preparation

Convert the original LLaMA weights into the Transformers model format. For details, see the earlier article on reproducing Stanford Alpaca 7B from scratch.

Note: if you prefer not to convert the weights yourself, you can download a pre-converted model from Hugging Face (yahma/llama-7b-hf):

git lfs clone https://huggingface.co/yahma/llama-7b-hf

Dataset Preparation

The pretraining data in this article consists of a few open-source books. Download a portion of the data in advance for pretraining or for training the extended vocabulary, then clean it, for example by removing blank lines.

Vocabulary Extension

Next we extend the vocabulary. Because the original LLaMA has very limited Chinese support, Chinese-LLaMA-Alpaca extends the original LLaMA vocabulary with Chinese tokens.

Chinese-LLaMA-Alpaca trained a 20K-token Chinese vocabulary with sentencepiece on a general Chinese corpus and merged it with the original 32K LLaMA vocabulary. After removing duplicate tokens, the final Chinese LLaMA vocabulary contains 49,953 tokens.

Note

During fine-tuning, Alpaca adds one pad token on top of LLaMA, so the Chinese Alpaca vocabulary size is 49,954. Keep this mismatch between the Chinese LLaMA and Chinese Alpaca vocabularies in mind when later merging the LoRA weights back into the base model.
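
To see where the extra token comes from, here is a minimal sketch (the path and pad-token string are hypothetical; the SFT stage performs the equivalent internally):

from transformers import LlamaTokenizer

# Point this at the merged Chinese-LLaMA tokenizer produced below.
tokenizer = LlamaTokenizer.from_pretrained("merged_tokenizer_hf")
print(len(tokenizer))                                # 49953 (Chinese LLaMA)

# Adding a pad token yields the Chinese Alpaca vocabulary size.
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
print(len(tokenizer))                                # 49954 (Chinese Alpaca)
# When resizing model embeddings, use len(tokenizer) rather than a hard-coded size.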

To merge the extended Chinese vocabulary with the original 32K LLaMA vocabulary, we use the officially trained vocabulary file chinese_sp.model directly (a sketch of the underlying merge logic follows the parameter list below). You can also train a domain-specific vocabulary on your own corpus; see the earlier article on SentencePiece for details.

Run the command:

cd Chinese-LLaMA-Alpaca/scripts/

python merge_tokenizers.py \
  --llama_tokenizer_dir /workspace/model/llama-7b-hf-tokenizer \
  --chinese_sp_model_file /workspace/code/Chinese-LLaMA-Alpaca/scripts/chinese_sp.model

Parameters:

  • llama_tokenizer_dir: directory containing the original LLaMA tokenizer
  • chinese_sp_model_file: the Chinese vocabulary file trained with sentencepiece
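
Under the hood, merge_tokenizers.py appends the Chinese pieces that are not already in the LLaMA vocabulary to the LLaMA sentencepiece model. A simplified sketch of that idea (hypothetical paths; the real script also exports a HF-format tokenizer):

import sentencepiece as spm
from sentencepiece import sentencepiece_model_pb2 as sp_pb2_model

# Load both sentencepiece models as protobuf objects.
llama_spm = sp_pb2_model.ModelProto()
llama_spm.ParseFromString(
    spm.SentencePieceProcessor(model_file="llama/tokenizer.model").serialized_model_proto())
chinese_spm = sp_pb2_model.ModelProto()
chinese_spm.ParseFromString(
    spm.SentencePieceProcessor(model_file="chinese_sp.model").serialized_model_proto())

# Append Chinese pieces that LLaMA does not already have (duplicates are skipped,
# which is why 32000 + 20000 ends up as 49953 rather than 52000).
existing = {p.piece for p in llama_spm.pieces}
for p in chinese_spm.pieces:
    if p.piece not in existing:
        new_p = sp_pb2_model.ModelProto().SentencePiece()
        new_p.piece, new_p.score = p.piece, 0
        llama_spm.pieces.append(new_p)

with open("chinese_llama.model", "wb") as f:
    f.write(llama_spm.SerializeToString())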

Run log:

python merge_tokenizers.py 
>   --llama_tokenizer_dir /workspace/model/llama-7b-hf-tokenizer 
>   --chinese_sp_model_file /workspace/code/Chinese-LLaMA-Alpaca/scripts/chinese_sp.model
32000 20000
['<s>', '</s>', '<unk>']
[1, 2, 0]
{'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>'}
32000
Before:32000
New model pieces: 49953
Chinese-LLaMA tokenizer has been saved to merged_tokenizer_hf
['<s>', '</s>', '<unk>']
[1, 2, 0]
{'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>'}

Test text:
 白日依山尽,黄河入海流。欲穷千里目,更上一层楼。
The primary use of LLaMA is research on large language models, including

Tokenized by LLaMA tokenizer:['▁', '白', '日', '<0xE4>', '<0xBE>', '<0x9D>', '山', '<0xE5>', '<0xB0>', '<0xBD>', ',', '黄', '河', '入', '海', '流', '。', '<0xE6>', '<0xAC>', '<0xB2>', '<0xE7>', '<0xA9>', '<0xB7>', '千', '里', '目', ',', '更', '上', '一', '<0xE5>', '<0xB1>', '<0x82>', '<0xE6>', '<0xA5>', '<0xBC>', '。', '<0x0A>', 'The', '▁primary', '▁use', '▁of', '▁L', 'La', 'MA', '▁is', '▁research', '▁on', '▁large', '▁language', '▁models', ',', '▁including']

Tokenized by Chinese-LLaMA tokenizer:['▁白', '日', '依', '山', '尽', ',', '黄河', '入', '海', '流', '。', '欲', '穷', '千里', '目', ',', '更', '上', '一层', '楼', '。', '<0x0A>', 'The', '▁primary', '▁use', '▁of', '▁L', 'La', 'MA', '▁is', '▁research', '▁on', '▁large', '▁language', '▁models', ',', '▁including']

Inspect the output after vocabulary extension: merged_tokenizer_sp contains the trained vocabulary in sentencepiece format, while merged_tokenizer_hf contains the same vocabulary in HF format:

ls -al merged_tokenizer_hf merged_tokenizer_sp
merged_tokenizer_hf:
total 760
drwxr-xr-x 2 root root   4096 May 13 15:32 .
drwxrwxr-x 5 1001 1001   4096 May 17 09:13 ..
-rw-r--r-- 1 root root    411 May 17 09:41 special_tokens_map.json
-rw-r--r-- 1 root root 757958 May 17 09:41 tokenizer.model
-rw-r--r-- 1 root root    727 May 17 09:41 tokenizer_config.json

merged_tokenizer_sp:
total 752
drwxr-xr-x 2 root root   4096 May 13 15:32 .
drwxrwxr-x 5 1001 1001   4096 May 17 09:13 ..
-rw-r--r-- 1 root root 757958 May 17 09:41 chinese_llama.model

Model Training Details

Experimental Settings

The full training pipeline consists of three parts: stage-1 pretraining, stage-2 pretraining, and instruction fine-tuning. The settings of each stage are listed below:

Setting                     Pretraining Stage 1   Pretraining Stage 2   Instruction Fine-tuning
Batch Size                  1024                  1024                  512
Initial Learning Rate       2e-4                  1e-4                  1e-4
Training Steps              3K                    6K                    6K-10K
Max Length                  512                   512                   512
Trainable Parameters (%)    2.97%                 6.06%                 6.22%
Training Device             8 × A100              16 × A100             16 × A100
Distributed Training        DeepSpeed ZeRO-2      DeepSpeed ZeRO-2      DeepSpeed ZeRO-2

The pretraining itself is split into two stages:

  • Stage 1: freeze the transformer parameters and train only the embeddings, adapting the newly added Chinese word vectors while disturbing the original model as little as possible.
  • Stage 2: add LoRA adapter weights to the model and train the LoRA parameters together with the embeddings.

Stage-1 Pretraining

Stage-1 pretraining freezes the transformer parameters and trains only the embeddings, so it converges slowly. Unless you have particularly ample time and compute, the authors recommend skipping this stage. The repository does not provide code for it, so if you want to run stage-1 pretraining you have to modify the code yourself.

  • Step 1: before training, set param.requires_grad = False for every layer except the embeddings, as shown below:
for name, param in model.named_parameters():
    if "model.embed_tokens" not in name:
        param.requires_grad = False
  • Step 2: during training, add a filter to the optimizer so that parameters with requires_grad = False are excluded and never updated, as shown below:
optimizer = AdamW(filter(lambda p: p.requires_grad, model.parameters()))

Stage-2 Pretraining

Stage-2 pretraining uses LoRA: adapter weights are added to the model, and the LoRA parameters are updated together with the embeddings.
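
Conceptually, the training script wraps the base model with a peft LoRA configuration along these lines (a simplified sketch using the peft API rather than the project's exact code; the hyperparameters mirror the values set in run_pt.sh below):

from peft import LoraConfig, TaskType, get_peft_model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    # attach LoRA adapters to all attention and MLP projections
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    # keep the (resized) embeddings and lm_head fully trainable
    modules_to_save=["embed_tokens", "lm_head"],
)
model = get_peft_model(model, lora_config)  # `model` is the loaded LlamaForCausalLM
model.print_trainable_parameters()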

First, modify the launch script run_pt.sh. The parameters that need to be changed are:

  • --model_name_or_path: directory containing the original HF-format LLaMA model
  • --tokenizer_name_or_path: directory containing the Chinese-LLaMA tokenizer
  • --dataset_dir: directory with the pretraining data; it may contain multiple plain-text files ending in .txt
  • --data_cache_dir: directory for data cache files
  • --output_dir: output path for the model weights

Adjust other parameters (e.g. per_device_train_batch_size, training_steps) as needed for your setup.

lr=2e-4
lora_rank=8
lora_alpha=32
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05

pretrained_model=/workspace/model/llama-7b-hf
chinese_tokenizer_path=/workspace/code/Chinese-LLaMA-Alpaca/scripts/merged_tokenizer_hf
dataset_dir=/workspace/data/book
data_cache=/workspace/cache/book
per_device_train_batch_size=1
per_device_eval_batch_size=1
training_steps=100
gradient_accumulation_steps=1
output_dir=/workspace/output/book
RANDOM=100


deepspeed_config_file=ds_zero2_no_offload.json

CUDA_VISIBLE_DEVICES=0 torchrun --nnodes 1 --nproc_per_node 1 run_clm_pt_with_peft.py \
    --deepspeed ${deepspeed_config_file} \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --data_cache_dir ${data_cache} \
    --validation_split_percentage 0.001 \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --per_device_eval_batch_size ${per_device_eval_batch_size} \
    --do_train \
    --seed $RANDOM \
    --fp16 \
    --max_steps ${training_steps} \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.05 \
    --weight_decay 0.01 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy steps \
    --save_total_limit 3 \
    --save_steps 500 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --block_size 512 \
    --output_dir ${output_dir} \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --modules_to_save ${modules_to_save} \
    --lora_dropout ${lora_dropout} \
    --torch_dtype float16 \
    --gradient_checkpointing \
    --ddp_find_unused_parameters False

Then modify the training code run_clm_pt_with_peft.py, replacing trainer.save_model() with model.save_pretrained(training_args.output_dir + "/lora"). trainer.save_model() saves the weights in HF format, which would then have to be renamed into the LoRA file layout (adapter_model.bin and adapter_config.json); using model.save_pretrained(training_args.output_dir + "/lora") saves them directly in LoRA format.

    if training_args.do_train:
        checkpoint = None
        if training_args.resume_from_checkpoint is not None:
            checkpoint = training_args.resume_from_checkpoint
        elif last_checkpoint is not None:
            checkpoint = last_checkpoint
        train_result = trainer.train(resume_from_checkpoint=checkpoint)
        
        # trainer.save_model()
        tokenizer.save_pretrained(training_args.output_dir)
        model.save_pretrained(training_args.output_dir + "/lora")
        
        metrics = train_result.metrics

        max_train_samples = (
            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
        )
        metrics["train_samples"] = min(max_train_samples, len(train_dataset))

        trainer.log_metrics("train", metrics)
        trainer.save_metrics("train", metrics)
        trainer.save_state()

The run looks like this:

> sh run_pt.sh 

[2023-05-18 06:40:55,101] [INFO] [comm.py:622:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
05/18/2023 06:40:55 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: True
[INFO|configuration_utils.py:666] 2023-05-18 06:40:55,815 >> loading configuration file /workspace/model/llama-7b-hf/config.json
...
05/18/2023 06:41:01 - INFO - __main__ - Num train_samples  2706
05/18/2023 06:41:01 - INFO - __main__ - training example:
05/18/2023 06:41:01 - INFO - __main__ -  金簪雪里埋。<s> 宝玉看了仍不解。待要问时,情知她必不肯泄漏;待要丢下,又不舍。遂又往后看时,只见画着一张弓,弓上挂着香橼。也有一首歌词云:<s> 二十年来辨是非,榴花开处照宫闱。三春争及初春景?虎兔相逢大梦归。<s> 后面又画着两人放风筝,一片大海,一只大船,船中有一女子掩面泣涕之状。也有四句写云:<s> 才自精明志自高,生于末世运偏消。清明涕送江边望,千里东风一梦遥。<s> 后面又画几缕飞云,一湾逝水。其词曰:<s> 富贵又何为,襁褓之间父母违。展眼吊斜晖,湘江水逝楚云飞。<s> 后面又画着一块美玉,落在泥垢之中。其断语云:<s> 欲洁何曾洁,云空未必空。可怜金玉质,终陷淖泥中。<s> 后面忽见画着个恶狼,追扑一美女,欲啖之意。其书云:<s> 子系中山狼,得志便猖狂。金闺花柳质,一载赴黄粱。<s> 后面便是一所古庙,里面有一美人在内看经独坐。其判云:<s> 勘破三春景不长,缁衣顿改昔年妆。可怜绣户侯门女,独卧青灯古佛旁。<s> 后面便是一片冰山,上面有一只雌凤。其判曰:<s> 凡鸟偏从末世来,都知爱慕此生才。一从二令三人木,哭向金陵事更哀。<s> 后面又是一座荒村野店,有一美人在那里纺绩。其判云:<s> 势败休云贵,家亡莫论亲。偶因济刘氏,巧得遇恩人。<s> 后面又画着一盆茂兰,旁有一位凤冠霞帔的美人。也有判云:<s> 桃李春风结子完,到头谁似一盆兰。如冰水好空相妒,枉与他人作笑谈。<s>
[INFO|modeling_utils.py:2531] 2023-05-18 06:41:01,800 >> loading weights file /workspace/model/llama-7b-hf/pytorch_model.bin.index.json
[INFO|modeling_utils.py:1176] 2023-05-18 06:41:01,829 >> Instantiating LlamaForCausalLM model under default dtype torch.float16.
...                                                               
{'loss': 6.516, 'learning_rate': 3.7282364152646297e-05, 'epoch': 0.03}                                                                       
{'loss': 5.7281, 'learning_rate': 1.5390474757906446e-05, 'epoch': 0.03}                                                                      
{'loss': 6.0016, 'learning_rate': 2.667340275199426e-06, 'epoch': 0.04}                                                                       
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [01:20<00:00,  1.65it/s][INFO|trainer.py:2039] 2023-05-18 06:47:17,503 >> 

Training completed. Do not forget to share your model on huggingface.co/models =)

{'train_runtime': 80.1304, 'train_samples_per_second': 1.248, 'train_steps_per_second': 1.248, 'train_loss': 6.7801171875, 'epoch': 0.04}     
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [01:20<00:00,  1.25it/s]
[INFO|trainer.py:2868] 2023-05-18 06:47:17,526 >> Saving model checkpoint to /workspace/output/book
[INFO|trainer.py:2880] 2023-05-18 06:47:17,540 >> Trainer.model is not a `PreTrainedModel`, only saving its state dict.
[INFO|tokenization_utils_base.py:2171] 2023-05-18 06:47:19,324 >> tokenizer config file saved in /workspace/output/book/tokenizer_config.json
[INFO|tokenization_utils_base.py:2178] 2023-05-18 06:47:19,324 >> Special tokens file saved in /workspace/output/book/special_tokens_map.json
***** train metrics *****
  epoch                    =       0.04
  train_loss               =     6.7801
  train_runtime            = 0:01:20.13
  train_samples            =       2706
  train_samples_per_second =      1.248
  train_steps_per_second   =      1.248

Model output files:

> ls -al --block-size=M /workspace/output/book/lora
total 819M
drwxr-xr-x 2 root root   1M May 18 07:45 .
drwxr-xr-x 3 root root   1M May 18 07:45 ..
-rw-r--r-- 1 root root   1M May 18 07:45 adapter_config.json
-rw-r--r-- 1 root root 819M May 18 07:45 adapter_model.bin

Merging the LoRA Weights into the Base Model

Modify the merge script merge_llama_with_chinese_lora.py to add a tokenizer_path argument that receives the tokenizer directory; like lora_model, it can take multiple directories separated by commas.

...

parser = argparse.ArgumentParser()
parser.add_argument('--base_model', default=None, required=True,
                    type=str, help="Please specify a base_model")
 # TODO
parser.add_argument('--tokenizer_path', default=None, required=True,
                    type=str, help="Please specify a tokenizer_path")
parser.add_argument('--lora_model', default=None, required=True,
                    type=str,
                    help="Please specify LoRA models to be merged (ordered); use commas to separate multiple LoRA models.")
parser.add_argument('--offload_dir', default=None, type=str,
                    help="(Optional) Please specify a temp folder for offloading (useful for low-RAM machines). Default None (disable offload).")
parser.add_argument('--output_type', default='pth', choices=['pth', 'huggingface'], type=str,
                    help="save the merged model in pth or huggingface format.")
parser.add_argument('--output_dir', default='./', type=str)

...

if __name__ == '__main__':

    args = parser.parse_args()
    base_model_path = args.base_model
    
    # TODO
    lora_model_paths = [s.strip() for s in args.lora_model.split(',') if len(s.strip()) != 0]
    tokenizer_paths = [s.strip() for s in args.tokenizer_path.split(',') if len(s.strip()) != 0]
    
    output_dir = args.output_dir
    output_type = args.output_type
    offload_dir = args.offload_dir

    print(f"Base model: {base_model_path}")
    print(f"LoRA model(s) {lora_model_paths}:")

    ...
    
    ## infer the model size from the checkpoint
    embedding_size = base_model.get_input_embeddings().weight.size(1)
    model_size = emb_to_model_size[embedding_size]
    print(f"Peft version: {peft.__version__}")
    print(f"Loading LoRA for {model_size} model")

    lora_model = None
    lora_model_sd = None
    for lora_index, lora_model_path in enumerate(lora_model_paths):
    
        # TODO
        tokenizer_path = tokenizer_paths[lora_index]
        print(f"Loading LoRA: {lora_model_path} , tokenizer path: {tokenizer_path} ")
        # tokenizer = LlamaTokenizer.from_pretrained(lora_model_path)
        tokenizer = LlamaTokenizer.from_pretrained(tokenizer_path)

Run the command:

python merge_llama_with_chinese_lora.py \
    --base_model /workspace/model/llama-7b-hf \
    --tokenizer_path /workspace/code/Chinese-LLaMA-Alpaca/scripts/merged_tokenizer_hf \
    --lora_model /workspace/output/book/lora \
    --output_type huggingface \
    --output_dir /workspace/output/book-merge-hf

Parameters:

  • --base_model: directory containing the HF-format LLaMA weights and configuration files
  • --lora_model: directory containing the unpacked Chinese LLaMA/Alpaca LoRA files
  • --tokenizer_path: directory containing the tokenizer
  • --output_type: output format, pth or huggingface; defaults to pth if not specified
  • --output_dir: directory where the full merged model weights are saved; defaults to ./
  • (optional) --offload_dir: offload cache path for machines with limited RAM
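
Numerically, merging a LoRA adapter means folding each low-rank update back into the corresponding base weight, roughly W_new = W + (alpha / r) * B @ A. A toy sketch of that operation (hypothetical tensors, not the script's actual code):

import torch

def merge_lora_weight(W, lora_A, lora_B, lora_alpha=32, lora_r=8):
    # Fold the LoRA update into the base weight matrix in place.
    # lora_B: (out_features, r), lora_A: (r, in_features)
    scaling = lora_alpha / lora_r
    W += scaling * (lora_B @ lora_A)
    return W

# Toy shapes for illustration only.
W = torch.randn(4096, 4096)
A = torch.randn(8, 4096) * 0.01
B = torch.randn(4096, 8) * 0.01
merged = merge_lora_weight(W.clone(), A, B)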

Run log:

python merge_llama_with_chinese_lora.py 
>     --base_model /workspace/model/llama-7b-hf 
>     --tokenizer_path /workspace/code/Chinese-LLaMA-Alpaca/scripts/merged_tokenizer_hf 
>     --lora_model /workspace/output/book/lora 
>     --output_type huggingface 
>     --output_dir /workspace/output/book-merge-hf
Base model: /workspace/model/llama-7b-hf
LoRA model(s) ['/workspace/output/book/lora']:
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████| 2/2 [00:11<00:00,  5.65s/it]
Peft version: 0.3.0.dev0
Loading LoRA for 7B model
Loading LoRA /workspace/output/book/lora
Extended vocabulary size to 49953
merging base_model.model.model.embed_tokens.weight
merging base_model.model.lm_head.weight
merging base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.0.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.0.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.0.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.0.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.0.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.0.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.1.self_attn.q_proj.lora_A.weight
...
merging base_model.model.model.layers.30.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.30.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.31.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.31.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.31.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.31.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.31.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.31.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.31.mlp.up_proj.lora_A.weight
Saving to Hugging Face format...

Comparing the weights before and after merging:

# merged LLaMA + LoRA weights
ls -al --block-size=K /workspace/output/book-merge-hf
total 13449140K
drwxr-xr-x 2 root root       4K May 18 08:54 .
drwxrwxrwx 8 root root       4K May 18 08:53 ..
-rw-r--r-- 1 root root       1K May 18 08:53 config.json
-rw-r--r-- 1 root root       1K May 18 08:53 generation_config.json
-rw-r--r-- 1 root root 9710286K May 18 08:54 pytorch_model-00001-of-00002.bin
-rw-r--r-- 1 root root 3738047K May 18 08:54 pytorch_model-00002-of-00002.bin
-rw-r--r-- 1 root root      27K May 18 08:54 pytorch_model.bin.index.json
-rw-r--r-- 1 root root       1K May 18 08:53 special_tokens_map.json
-rw-r--r-- 1 root root     741K May 18 08:53 tokenizer.model
-rw-r--r-- 1 root root       1K May 18 08:53 tokenizer_config.json

# original LLaMA weights
> ls -al --block-size=K /workspace/model/llama-7b-hf
total 13161660K
drwxr-xr-x  3 root root       4K May 11 05:37 .
drwxrwxrwx 21 root root       4K May 18 08:33 ..
drwxr-xr-x  9 root root       4K May 11 05:36 .git
-rw-r--r--  1 root root       2K May 11 05:36 .gitattributes
-rw-r--r--  1 root root       9K May 11 05:36 README.md
-rw-r--r--  1 root root       1K May 11 05:36 config.json
-rw-r--r--  1 root root       1K May 11 05:36 generation_config.json
-rw-r--r--  1 root root 9742808K May 11 05:37 pytorch_model-00001-of-00002.bin
-rw-r--r--  1 root root 3418277K May 11 05:37 pytorch_model-00002-of-00002.bin
-rw-r--r--  1 root root      27K May 11 05:37 pytorch_model.bin.index.json
-rw-r--r--  1 root root       1K May 11 05:37 special_tokens_map.json
-rw-r--r--  1 root root     489K May 11 05:37 tokenizer.model
-rw-r--r--  1 root root       1K May 11 05:37 tokenizer_config.json

Instruction Fine-tuning

The task format for instruction fine-tuning is essentially the same as Stanford Alpaca. Training again uses LoRA for parameter-efficient fine-tuning and further increases the number of trainable parameters. For prompt design, both fine-tuning and inference use the original Stanford Alpaca template without the input field; for data that does contain an input field, the two fields are concatenated as f"{instruction}+\n+{input}" (a sketch of this template is shown after the format example below).

The Stanford Alpaca format looks like this:

[
  {"instruction" : ...,
   "input" : ...,
   "output" : ...},
  ...
]
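
A minimal sketch of how one record is turned into a training prompt (illustrative only; the surrounding text is the standard Stanford Alpaca no-input template, and the input field is appended to the instruction with a newline as described above):

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response: "
)

def build_prompt(example):
    # If an input field is present, append it to the instruction with a newline.
    instruction = example["instruction"]
    if example.get("input"):
        instruction = instruction + "\n" + example["input"]
    return PROMPT_TEMPLATE.format(instruction=instruction)

example = {"instruction": "我们如何在日常生活中减少用水?", "input": "", "output": "..."}
print(build_prompt(example))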

First, modify the fine-tuning launch script run_sft.sh. The parameters that need to be changed are:

  • --model_name_or_path: directory containing the model after vocabulary extension, pretraining, and weight merging
  • --tokenizer_name_or_path: directory containing the Chinese-Alpaca tokenizer
  • --dataset_dir: directory with the instruction data, containing one or more Stanford-Alpaca-format files ending in .json
  • --validation_file: a single Stanford-Alpaca-format .json file used as the validation set
  • --output_dir: output path for the model weights

Adjust other parameters (e.g. per_device_train_batch_size, training_steps) as needed for your setup.

lr=1e-4
lora_rank=8
lora_alpha=32
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05

pretrained_model=/workspace/output/book-merge-hf
chinese_tokenizer_path=/workspace/code/Chinese-LLaMA-Alpaca/scripts/merged_tokenizer_hf
dataset_dir=/workspace/code/Chinese-LLaMA-Alpaca/data
per_device_train_batch_size=1
per_device_eval_batch_size=1
training_steps=100
gradient_accumulation_steps=1
output_dir=/workspace/output/llama-book-alpace-zh
#peft_model=path/to/peft/model/dir
validation_file=/workspace/data/alpaca_valid.json
RANDOM=1000
deepspeed_config_file=ds_zero2_no_offload.json

CUDA_VISIBLE_DEVICES=0 torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
    --deepspeed ${deepspeed_config_file} \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --validation_split_percentage 0.001 \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --per_device_eval_batch_size ${per_device_eval_batch_size} \
    --do_train \
    --do_eval \
    --seed $RANDOM \
    --fp16 \
    --max_steps ${training_steps} \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.03 \
    --weight_decay 0 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy steps \
    --save_total_limit 3 \
    --evaluation_strategy steps \
    --eval_steps 250 \
    --save_steps 500 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --max_seq_length 512 \
    --output_dir ${output_dir} \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --modules_to_save ${modules_to_save} \
    --lora_dropout ${lora_dropout} \
    --torch_dtype float16 \
    --validation_file ${validation_file} \
    --gradient_checkpointing \
    --ddp_find_unused_parameters False

Then, as in stage-2 pretraining, modify the fine-tuning code run_clm_sft_with_peft.py, replacing trainer.save_model() with model.save_pretrained(training_args.output_dir + "/lora").

The run looks like this:

> sh run_sft.sh 
[2023-05-18 10:01:20,052] [INFO] [comm.py:622:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
05/18/2023 10:01:21 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: True
[INFO|configuration_utils.py:666] 2023-05-18 10:01:21,095 >> loading configuration file /workspace/output/book-merge-hf/config.json

...

### Instruction:
我们如何在日常生活中减少用水?

### Response:  1. 使用节水装置,如节水淋浴喷头和水龙头。 
2. 使用水箱或水桶收集家庭废水,例如洗碗和洗浴。 
3. 在社区中提高节水意识。 
4. 检查水管和灌溉系统的漏水情况,并及时修复它们。 
5. 洗澡时间缩短,使用低流量淋浴头节约用水。 
6. 收集雨水,用于园艺或其他非饮用目的。 
7. 刷牙或擦手时关掉水龙头。 
8. 减少浇水草坪的时间。 
9. 尽可能多地重复使用灰水(来自洗衣机、浴室水槽和淋浴的水)。 
10. 只购买能源效率高的洗碗机和洗衣机。</s>

...

[INFO|trainer.py:1769] 2023-05-18 10:04:40,115 >> ***** Running training *****
[INFO|trainer.py:1770] 2023-05-18 10:04:40,115 >>   Num examples = 51,179
[INFO|trainer.py:1771] 2023-05-18 10:04:40,116 >>   Num Epochs = 1
[INFO|trainer.py:1772] 2023-05-18 10:04:40,116 >>   Instantaneous batch size per device = 1
[INFO|trainer.py:1773] 2023-05-18 10:04:40,116 >>   Total train batch size (w. parallel, distributed & accumulation) = 1
[INFO|trainer.py:1774] 2023-05-18 10:04:40,116 >>   Gradient Accumulation steps = 1
[INFO|trainer.py:1775] 2023-05-18 10:04:40,116 >>   Total optimization steps = 100
[INFO|trainer.py:1776] 2023-05-18 10:04:40,121 >>   Number of trainable parameters = 429,211,648
  0%|                                                                                                                 | 0/100 [00:00<?, ?it/s][WARNING|logging.py:295] 2023-05-18 10:04:40,308 >> `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
[2023-05-18 10:04:49,143] [INFO] [loss_scaler.py:188:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, but hysteresis is 2. Reducing hysteresis to 1
{'loss': 3.9824, 'learning_rate': 0.0, 'epoch': 0.0}                                                                                          
  1%|█                                                                                                        | 1/100 [00:09<14:53,  9.02s/it][2023-05-18 10:04:49,673] [INFO] [loss_scaler.py:181:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 32768
  2%|██                                                                                                       | 2/100 [00:09<06:34,  4.03s/it][2023-05-18 10:04:50,191] [INFO] [loss_scaler.py:181:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768, reducing to 16384
  4%|████▏                                                                                                    | 4/100 [00:10<02:46,  1.74s/it][2023-05-18 10:04:51,386] [INFO] [loss_scaler.py:181:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16384, reducing to 8192
  6%|██████▎                                                                                                  | 6/100 [00:11<01:41,  1.08s/it][2023-05-18 10:04:52,557] [INFO] [loss_scaler.py:181:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 8192, reducing to 4096
{'loss': 9.0204, 'learning_rate': 9.989514131188559e-05, 'epoch': 0.0}                                                                        
...                                                                 
{'loss': 6.959, 'learning_rate': 8.25859734853645e-06, 'epoch': 0.0}                                                                          
{'loss': 7.5645, 'learning_rate': 1.6689574843694433e-06, 'epoch': 0.0}                                                                       
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [01:09<00:00,  1.62it/s][INFO|trainer.py:2039] 2023-05-18 10:05:49,601 >> 

Training completed. Do not forget to share your model on huggingface.co/models =)


{'train_runtime': 69.4802, 'train_samples_per_second': 1.439, 'train_steps_per_second': 1.439, 'train_loss': 7.189541015625, 'epoch': 0.0}    
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [01:09<00:00,  1.44it/s]
[INFO|trainer.py:2868] 2023-05-18 10:05:49,603 >> Saving model checkpoint to /workspace/output/llama-book-alpace-zh
[INFO|trainer.py:2880] 2023-05-18 10:05:49,612 >> Trainer.model is not a `PreTrainedModel`, only saving its state dict.
[INFO|tokenization_utils_base.py:2171] 2023-05-18 10:05:51,389 >> tokenizer config file saved in /workspace/output/llama-book-alpace-zh/tokenizer_config.json
[INFO|tokenization_utils_base.py:2178] 2023-05-18 10:05:51,389 >> Special tokens file saved in /workspace/output/llama-book-alpace-zh/special_tokens_map.json
[INFO|tokenization_utils_base.py:2228] 2023-05-18 10:05:51,390 >> added tokens file saved in /workspace/output/llama-book-alpace-zh/added_tokens.json
***** train metrics *****
  epoch                    =        0.0
  train_loss               =     7.1895
  train_runtime            = 0:01:09.48
  train_samples            =      51179
  train_samples_per_second =      1.439
  train_steps_per_second   =      1.439
05/18/2023 10:05:51 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:3129] 2023-05-18 10:05:51,398 >> ***** Running Evaluation *****
[INFO|trainer.py:3131] 2023-05-18 10:05:51,398 >>   Num examples = 8
[INFO|trainer.py:3134] 2023-05-18 10:05:51,398 >>   Batch size = 1
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:00<00:00, 10.28it/s]
***** eval metrics *****
  epoch                   =        0.0
  eval_loss               =     7.6797
  eval_runtime            = 0:00:00.90
  eval_samples            =          8
  eval_samples_per_second =      8.866
  eval_steps_per_second   =      8.866
  perplexity              =  2163.9434

GPU memory usage:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17   Driver Version: 525.105.17   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A800 80G...  Off  | 00000000:3B:00.0 Off |                    0 |
| N/A   44C    P0    88W / 300W |  25343MiB / 81920MiB |     17%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+

Model output files:

> ls -al --block-size=M /workspace/output/llama-book-alpace-zh/lora/
total 819M
drwxr-xr-x 2 root root   1M May 18 10:20 .
drwxr-xr-x 3 root root   1M May 18 10:20 ..
-rw-r--r-- 1 root root   1M May 18 10:20 adapter_config.json
-rw-r--r-- 1 root root 819M May 18 10:20 adapter_model.bin

Merging Multiple LoRA Weights into the Base Model

Next, merge the pretrained and fine-tuned LoRA weights back into the base model. Note that multiple LoRA weights and their tokenizers are separated by commas, and the order of the two LoRA models matters and must not be reversed: list the pretraining LoRA weights first, then the fine-tuning LoRA weights. The run looks like this:

python merge_llama_with_chinese_lora.py 
>     --base_model /workspace/model/llama-7b-hf 
>     --tokenizer_path /workspace/output/book,/workspace/output/llama-book-alpace-zh 
>     --lora_model /workspace/output/book/lora,/workspace/output/llama-book-alpace-zh/lora 
>     --output_type huggingface 
>     --output_dir /workspace/output/book-alpaca-merge-hf

Base model: /workspace/model/llama-7b-hf
LoRA model(s) ['/workspace/output/book/lora', '/workspace/output/llama-book-alpace-zh/lora']:
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████| 2/2 [00:08<00:00,  4.41s/it]
Peft version: 0.3.0.dev0
Loading LoRA for 7B model
Loading LoRA: /workspace/output/book/lora , tokenizer path: /workspace/output/book 
Extended vocabulary size to 49953
merging base_model.model.model.embed_tokens.weight
merging base_model.model.lm_head.weight
merging base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight
merging 
...
base_model.model.model.layers.31.mlp.up_proj.lora_A.weight
Loading LoRA: /workspace/output/llama-book-alpace-zh/lora , tokenizer path: /workspace/output/llama-book-alpace-zh 
Extended vocabulary size to 49954
merging base_model.model.model.embed_tokens.weight
merging base_model.model.lm_head.weight
merging base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.0.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.0.self_attn.v_proj.lora_A.weight
merging 
...
base_model.model.model.layers.31.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.31.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.31.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.31.mlp.up_proj.lora_A.weight
Saving to Hugging Face format...

Alternatively, you can first merge the original weights with the pretraining LoRA weights, and then merge the result with the fine-tuning LoRA weights.

Model Inference

Finally, test inference using the weights obtained after merging the LoRA adapters into the base model.

> python scripts/inference_hf.py 
>     --base_model /workspace/output/book-alpaca-merge-hf 
>     --with_prompt 
>     --interactive
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████| 2/2 [00:12<00:00,  6.41s/it]
Vocab of the base model: 49954
Vocab of the tokenizer: 49954
Start inference with instruction mode.
=====================================================================================
+ 该模式下仅支持单轮问答,无多轮对话能力。
+ 如要进行多轮对话,请使用llama.cpp或llamachat工具。
-------------------------------------------------------------------------------------
+ This mode only supports single-turn QA.
+ If you want to experience multi-turn dialogue, please use llama.cpp or llamachat.
=====================================================================================
Input:who are you?
Response:  I am Sara, 20 years old and in my second year of college studying to become a nurse practitioner.

Input:I hava a dream 
Response:  "I have a dream, too!"

Because this walkthrough is only a demonstration trained and validated on a very small amount of Chinese text, the results here are poor; add more data for pretraining and fine-tuning to get a usable model.
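
If you prefer to load the merged weights directly instead of using the project's inference script, a minimal generation sketch could look like this (assuming the merged directory above and the Alpaca prompt template; generation parameters are illustrative):

import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_dir = "/workspace/output/book-alpaca-merge-hf"
tokenizer = LlamaTokenizer.from_pretrained(model_dir)
model = LlamaForCausalLM.from_pretrained(model_dir, torch_dtype=torch.float16).cuda()
model.eval()

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n我们如何在日常生活中减少用水?\n\n### Response: "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256,
                            do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))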

Conclusion

At present, vocabulary extension + pretraining + instruction fine-tuning does bring a clear performance gain, but the whole pipeline is quite heavyweight. Unless you have particularly ample time and compute, it is hard to recommend. If you want Chinese vocabulary support but lack large-scale compute, you can use ChatGLM-6B directly, or take the models that BELLE or Chinese-LLaMA-Alpaca have already trained after Chinese vocabulary extension as the base model for fine-tuning.

