
Progressive learning of EfficientNetV2 #718

Open
pawopawo opened this issue Jun 25, 2021 · 2 comments
Labels
enhancement New feature or request

Comments

@pawopawo

Will you try to reproduce the progressive learning from EfficientNetV2?

@pawopawo pawopawo added the enhancement New feature or request label Jun 25, 2021
@liu-zhenhua

+1

@fffffgggg54
Contributor

I'm interested in writing a PR for this, since I use it in my own training scripts. I have it implemented by modifying the dataset transforms at the start of every epoch.

IME the main issue is that the start of training uses far less VRAM than the end of training. Additional throughput can be gained by adjusting batch size/gradient accumulation to maximize VRAM usage, but implementing that adjustment is nightmarish. I tried halving/doubling the respective values, but the VRAM would not deallocate. It might work better with the timm training script, since it's set up differently.
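For reference, the epoch-wise schedule described above can be sketched as a simple linear ramp over image size and augmentation magnitude, as in the EfficientNetV2 paper. This is a minimal illustration, not timm's API; the function name and the default ranges are assumptions:

```python
def progressive_schedule(epoch, total_epochs,
                         min_size=128, max_size=300,
                         min_magnitude=5.0, max_magnitude=15.0):
    """Linearly interpolate image size and augmentation magnitude
    from the easy (small/weak) settings at epoch 0 to the hard
    (large/strong) settings at the final epoch."""
    t = epoch / max(total_epochs - 1, 1)  # progress in [0, 1]
    size = int(min_size + t * (max_size - min_size))
    magnitude = min_magnitude + t * (max_magnitude - min_magnitude)
    return size, magnitude

# Rebuild the dataset transforms with these values at each epoch boundary,
# e.g. dataset.transform = build_transform(size, magnitude)  # hypothetical helper
for epoch in (0, 25, 50, 75, 99):
    size, mag = progressive_schedule(epoch, 100)
    print(epoch, size, round(mag, 1))
```

A staged (step-wise) schedule, as used in the paper's four training stages, just quantizes `t` before interpolating; the epoch-wise linear ramp above is the continuous variant.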
