provide the docker file #7
Comments
I found that the runners provided by GitHub Actions do not include GPU capacity. A self-hosted runner can solve this problem if we need continuous integration.
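A self-hosted runner is routed to in the workflow file via the `runs-on` labels. A minimal sketch, assuming the runner was registered with a `gpu` label (the workflow filename, job name, and test entry point are illustrative, not taken from this repo):

```yaml
# .github/workflows/ci-gpu.yml (hypothetical filename)
name: CI (GPU)

on: [push, pull_request]

jobs:
  gpu-test:
    # Route the job to our own machine instead of a GitHub-hosted runner;
    # the "gpu" label must match what the runner was registered with.
    runs-on: [self-hosted, gpu]
    steps:
      - uses: actions/checkout@v3
      - name: Run tests on GPU
        run: python -m pytest tests/   # assumed test entry point
```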
Can @LianxinGao look at this and provide the self-hosted runner in our dev environment?
@merlintang gpu_runner launched on the gpu01 machine. @mikecovlee 4 GPUs, numbered 0 to 3 (4090: 0, 2, 3; 3090: 1).
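On a shared machine like gpu01, one common way to pin a run to specific devices is the `CUDA_VISIBLE_DEVICES` environment variable. A sketch using the device numbering described above (the `finetune.py` entry point is an assumption):

```python
import os

# On gpu01 the physical devices are numbered 0-3 (4090s: 0, 2, 3; 3090: 1).
# Restricting a run to the 3090 only:
env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")

# Inside the restricted process, frameworks such as PyTorch see a single
# device re-numbered as cuda:0, regardless of its physical index.
# import subprocess
# subprocess.run(["python", "finetune.py"], env=env)  # hypothetical entry point
print(env["CUDA_VISIBLE_DEVICES"])
```

This keeps concurrent CI jobs from competing for the same card without any code changes in the training script.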
The main entrance
@mikecovlee Where do I configure the GPU device? Not found in finetune.json.
Use
Error @mikecovlee:
@merlintang done #18
Can you check the vicuna-7b vocab size?
@mikecovlee vicuna-7b and llama-7b have different lm_head / embedding sizes; we need to adapt.
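This kind of mismatch can be caught before loading weights by comparing the vocab size reported in each model's config. A minimal sketch; the concrete numbers below are illustrative (llama-7b ships a 32000-token vocab, and some vicuna deltas add extra tokens), so the real values should be read from each model's config.json:

```python
def check_vocab_compat(cfg_a: dict, cfg_b: dict) -> bool:
    """Return True when both models report the same vocab size.

    In HuggingFace-style configs the lm_head / embedding row count
    is given by the "vocab_size" field.
    """
    return cfg_a["vocab_size"] == cfg_b["vocab_size"]

llama_cfg = {"vocab_size": 32000}
vicuna_cfg = {"vocab_size": 32001}  # assumed value; check the actual config.json

print(check_vocab_compat(llama_cfg, llama_cfg))   # True
print(check_vocab_compat(llama_cfg, vicuna_cfg))  # False
```

When the sizes differ, the embedding and lm_head matrices must be resized (or the extra rows handled explicitly) before one checkpoint's weights can be reused with the other tokenizer.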
My tests pass on both
@mikecovlee Which version of vicuna-7b are you using?
@LianxinGao vicuna-7b-delta-v1.1
I'll change the version of vicuna in CI and retest it.
You can directly commit to
Now I have split the CI checks on GPU into two separate jobs. Tests on LLaMA-7B pass while they fail on Vicuna-7B. #21
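Splitting the GPU checks per base model means one model's failure no longer masks the other's result. A sketch of the two-job layout, assuming a self-hosted GPU runner; the job names, paths, and the `--model` flag are assumptions for illustration:

```yaml
jobs:
  # One independent job per base model, so a Vicuna failure does not
  # hide the LLaMA result in the checks view.
  test-llama:
    runs-on: [self-hosted, gpu]
    steps:
      - uses: actions/checkout@v3
      - run: python -m pytest tests/ --model llama-7b   # hypothetical flag
  test-vicuna:
    runs-on: [self-hosted, gpu]
    steps:
      - uses: actions/checkout@v3
      - run: python -m pytest tests/ --model vicuna-7b  # hypothetical flag
```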
Please check the local models, referring to the CI runs. Local machine tests pass.
Btw, for later commits please create a new branch rather than
The models on the machine seemed buggy 😭, now all fixed.
I also found this; I will fix it.
@LianxinGao can you send a PR with a Dockerfile?
OK, I'll do it.
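A GPU-capable image for this kind of fine-tuning setup could start from an official CUDA runtime base. A sketch only; the base image tag, dependency file, and `finetune.py` entry point are assumptions, not the Dockerfile actually merged for this issue:

```dockerfile
# Sketch: CUDA runtime base so the container can see the host GPUs
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .

# Run with the NVIDIA runtime so GPUs are visible, e.g.:
#   docker run --gpus all <image> python3 finetune.py
CMD ["python3", "finetune.py"]
```

Note that GPU access at run time still requires the NVIDIA Container Toolkit on the host and the `--gpus` flag; the image alone is not enough.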