Add finetuning exercise to transfer learning episode #584
Conversation
Thank you for your pull request 😃

🤖 This automated message can help you check the rendered files in your submission for clarity. If you have any questions, please feel free to open an issue in {sandpaper}. If you have files that automatically render output (e.g. R Markdown), then you should check for the following:

Rendered changes

🔍 Inspect the changes: https://github.com/carpentries-lab/deep-learning-intro/compare/md-outputs..md-outputs-PR-584

The following changes were observed in the rendered markdown documents.

What does this mean? If you have source files that require output and figures to be generated (e.g. R Markdown), then it is important to make sure the generated figures and output are reproducible. This output provides a way for you to inspect the output in a diff-friendly manner so that it's easy to see the changes that occur due to new software versions or randomisation.

⏱️ Updated at 2025-05-15 20:12:38 +0000
carschno left a comment:
This looks like a very valuable addition indeed!
I have left a comment about the added figure.
The image does not look very convincing: both curves end at a similar validation accuracy (~0.6). This seems to require some explanation.
It would be good to have alt descriptions for images everywhere, e.g.:

{alt="A comparison of the accuracy on the validation set for both the frozen and the fine-tuned setup."}
Finetuning is a critical component of transfer learning, and I think this episode really needs to explore its impact. The added exercise has learners unfreeze one layer of the pretrained model and compare the result with the original frozen model's performance. We see a noticeable improvement with finetuning, as expected. The solution discusses the pros and cons of unfreezing layers, and how to balance overfitting against training time.
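
For readers who want a concrete picture of the unfreezing step the exercise describes, here is a minimal sketch, assuming a Keras workflow similar to the episode's. The model choice (`DenseNet121`), input shape, class count, and exactly which layers are unfrozen are illustrative assumptions, not the pull request's actual code.

```python
# Illustrative sketch only: model, shapes, and layer choice are assumptions,
# not the episode's or the pull request's actual code.
from tensorflow import keras

# Pretrained base without its classification head.
base_model = keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(32, 32, 3), pooling="avg"
)

# Fine-tuned setup: allow updates in the base, but keep everything frozen
# except the last few layers. (Which of these layers actually carry weights
# depends on the architecture.)
base_model.trainable = True
for layer in base_model.layers[:-5]:
    layer.trainable = False

# New classification head for the target task (10 classes assumed).
inputs = keras.Input(shape=(32, 32, 3))
x = base_model(inputs, training=False)  # keep batch normalization in inference mode
outputs = keras.layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)

# A small learning rate keeps the unfrozen pretrained weights from changing too abruptly.
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# For the frozen baseline, set base_model.trainable = False instead, then train
# both setups and compare their validation-accuracy curves:
# history = model.fit(train_images, train_labels, epochs=30,
#                     validation_data=(val_images, val_labels))
```

Comparing the validation accuracy of this partially unfrozen setup against the fully frozen baseline is what the figure added in this pull request is meant to show.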