Does it make sense to change the # hidden layers of a pretrained transformer? #12264
Answered by rmitsch · bhartm3 asked this question in Help: Model Advice
Hello,
Answered by rmitsch on Feb 9, 2023:
Hi @bhartm3, no, it doesn't make sense to do that. At this point the model architecture is fixed.
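To make the "architecture is fixed" point concrete, here is a minimal sketch using Hugging Face `transformers` (an assumption; the thread does not name the underlying library, though spaCy's transformer components are built on it, and `bert-base-uncased` is just a stand-in model). A pretrained checkpoint stores one set of weights per layer, so the depth is baked in: requesting a different number of hidden layers does not resize the pretrained weights, it only leaves the extra layers randomly initialized.

```python
# Minimal sketch (assumes Hugging Face `transformers` is installed and that
# `bert-base-uncased` stands in for whichever pretrained model is in use).
from transformers import AutoConfig, AutoModel

# The checkpoint fixes the depth: bert-base-uncased ships 12 hidden layers.
model = AutoModel.from_pretrained("bert-base-uncased")
print(model.config.num_hidden_layers)  # -> 12

# Overriding the depth does not "stretch" the pretrained weights. The first
# 12 layers load from the checkpoint; layers 12-23 are freshly (randomly)
# initialized, and transformers logs a warning about newly initialized
# weights. Those added layers carry none of the pretraining, which is why
# changing the layer count of a pretrained transformer defeats its purpose.
config = AutoConfig.from_pretrained("bert-base-uncased", num_hidden_layers=24)
deeper = AutoModel.from_pretrained("bert-base-uncased", config=config)
print(deeper.config.num_hidden_layers)  # -> 24, but layers 12-23 are untrained
```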
Answer selected by bhartm3