Hi! Thank you for porting MiniGPT. This looks a lot faster than running the PyTorch version on Metal, which is very poorly supported indeed (the CPU is faster in my tests!). I see that generation is implemented in the model itself, but it is not used anywhere, nor is the model saved after training or tested just to output some text. I wonder if inference would work if I save and reload the model. Thanks.
Hey @antirez, thanks! Just a heads up: I am working on a PR that adds saving/loading of pre-trained models for inference. You can expect the feature sometime this week.