
[Question]: When running chatglm2 LoRA fine-tuning with pipeline parallel: 4, error: module 'paddlenlp.transformers.chatglm_v2.modeling' has no attribute 'ChatGLMv2ForCausalLMPipe' #8593

Open
shanyuaa opened this issue Jun 12, 2024 · 3 comments

@shanyuaa

Please describe your question

  • Preconditions: the ChatGLM2 LoRA fine-tuning code already runs on a single GPU, and LLaMA multi-GPU pipeline-parallel (PP) training also runs successfully.

  • Scenario: to try single-machine multi-GPU training, I set "pipeline_parallel_degree": 4 in the ./chatglm2/lora_argument.json config file and, following the official example, launched: srun --gres=gpu:4 python3 -u -m paddle.distributed.launch --gpus "0,1,2,3" finetune_generation.py ./chatglm2/lora_argument.json

  • Problem: running ChatGLM2 with multi-GPU pipeline parallelism fails because the ChatGLMv2ForCausalLMPipe class cannot be found. The error points to:

File "/home/LAB/wangzy/paddle/PaddleNLP/llm/finetune_generation.py", line 183, in main
   model = AutoModelForCausalLMPipe.from_pretrained(

Q: Does PaddleNLP support pipeline-parallel (PP) or tensor-parallel (TP) training for chatglm2, and how can this error be resolved? (Error screenshots below.) Thanks!

[Error screenshots: 截屏2024-06-13 00 20 09, 截屏2024-06-13 00 20 26]
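The traceback suggests that AutoModelForCausalLMPipe resolves a `<Model>ForCausalLMPipe` class from the corresponding modeling module. Below is a minimal sketch of that lookup, assuming only the module paths shown in the traceback; `LlamaForCausalLMPipe` is inferred from the fact that the LLaMA PP run works:

```python
# Minimal sketch: check whether a modeling module defines a pipeline-parallel
# (*ForCausalLMPipe) class before setting pipeline_parallel_degree > 1.
# Assumption: module paths follow the ones shown in the traceback above.
import importlib

def has_pipe_class(module_path: str, class_name: str) -> bool:
    """Return True if the modeling module at module_path defines class_name."""
    module = importlib.import_module(module_path)
    return hasattr(module, class_name)

# ChatGLMv2 does not define ChatGLMv2ForCausalLMPipe, hence the AttributeError.
print(has_pipe_class("paddlenlp.transformers.chatglm_v2.modeling",
                     "ChatGLMv2ForCausalLMPipe"))  # False

# The LLaMA pipeline-parallel run works, so its modeling module presumably
# defines LlamaForCausalLMPipe.
print(has_pipe_class("paddlenlp.transformers.llama.modeling",
                     "LlamaForCausalLMPipe"))
```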
shanyuaa added the question label on Jun 12, 2024
@DrownFish19 (Collaborator)

chatglm_v2 is not currently integrated with TP/PP.
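Until a ChatGLMv2ForCausalLMPipe implementation lands, one possible workaround (a sketch, assuming ./chatglm2/lora_argument.json uses the standard pipeline_parallel_degree / tensor_parallel_degree keys) is to keep both degrees at 1 so the existing 4-GPU launch falls back to data parallelism:

```python
# Workaround sketch: ChatGLMv2 has no *ForCausalLMPipe class, so PP/TP cannot
# be enabled. Reset both degrees to 1; the existing 4-GPU launch command can
# then be reused with data parallelism instead.
# Assumption: these keys already exist in ./chatglm2/lora_argument.json.
import json

config_path = "./chatglm2/lora_argument.json"

with open(config_path, "r", encoding="utf-8") as f:
    args = json.load(f)

args["pipeline_parallel_degree"] = 1  # PP not supported for chatglm_v2
args["tensor_parallel_degree"] = 1    # TP not supported for chatglm_v2

with open(config_path, "w", encoding="utf-8") as f:
    json.dump(args, f, ensure_ascii=False, indent=2)
```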


This issue is stale because it has been open for 60 days with no activity.

github-actions bot added the stale label on Aug 13, 2024
@shanyuaa (Author)

shanyuaa commented Aug 13, 2024 via email

github-actions bot removed the stale label on Aug 14, 2024