When I used the DocOwl1.5/scripts/finetune_docowl_lora.sh script to train on my own data, I increased per_device_train_batch_size to 4, expecting parallelism to speed up training. However, the time taken with batch_size=4 is four times that of batch_size=1, so there is no speedup at all. Why is that?
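One way to narrow this down is to compare seconds per *sample* rather than seconds per step or per run: if a step with batch_size=4 takes exactly 4x as long as a step with batch_size=1, the per-sample cost is unchanged, which usually means the GPU was already saturated at batch_size=1 (or the bottleneck is elsewhere, e.g., data loading or image preprocessing). Below is a minimal, self-contained sketch of that measurement; the toy model, tensor sizes, and iteration counts are placeholders, not the DocOwl training code.

```python
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
# Placeholder model standing in for the real network.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def seconds_per_sample(batch_size: int, iters: int = 20) -> float:
    """Average seconds per sample for one forward/backward/step."""
    x = torch.randn(batch_size, 1024, device=device)
    y = torch.randn(batch_size, 1024, device=device)
    # Warm-up so one-time CUDA kernel/allocator costs don't skew the timing.
    for _ in range(3):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / (iters * batch_size)

for bs in (1, 4):
    print(f"batch_size={bs}: {seconds_per_sample(bs) * 1e3:.3f} ms per sample")
```

If the per-sample times come out roughly equal across batch sizes, a larger batch cannot make each step cheaper; it only changes how many optimizer updates you take per epoch.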