OCPNeb loading of fine-tuned models is broken (#981)
This can be reproduced by training or finetuning any new model and passing it to OCPNeb. The reason is that our loader uses the 'trainer' field in the config to query the registry.
However, we populate the 'trainer' field with the task_name here and here.
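A minimal sketch of the failing lookup, assuming the fairchem registry pattern (the import path and exact failure behaviour are assumptions):

```python
from fairchem.core.common.registry import registry

# The checkpoint config stores the task name in the 'trainer' field...
config = {"trainer": "s2ef"}

# ...so the registry lookup fails, because 's2ef' is a task name,
# not a registered trainer name.
trainer_cls = registry.get_trainer_class(config["trainer"])
```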
To fix this, we normally load models using OCPCalculator and pass the trainer name in explicitly as an argument, overriding the one present in the checkpoint.
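For example (a sketch only; the checkpoint path is a placeholder and the keyword names may differ slightly between versions):

```python
from fairchem.core.common.relaxation.ase_utils import OCPCalculator

calc = OCPCalculator(
    checkpoint_path="checkpoints/my_finetuned_model.pt",  # placeholder path
    trainer="ocp",  # an explicit, registered trainer name overrides the checkpoint's 'trainer' field
    cpu=True,
)
```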
However, OCPNeb does not have an option to pass in the trainer explicitly and only uses the one present in the checkpoint, which has trainer='s2ef'; since 's2ef' is not a valid trainer, loading fails.
Additionally, when OCPNeb initializes the trainer it does not pass through loss_functions, but loss_functions is used as an early-return condition in update_config. If it is missing, already-updated configs fail because update_config tries to convert them again but cannot.
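An illustrative sketch of that guard pattern (not the actual fairchem code; the key names used in the legacy branch are hypothetical):

```python
def update_config(config: dict) -> dict:
    if "loss_functions" in config:
        # Config is already in the new format; nothing to convert.
        return config
    # Legacy conversion path, which assumes old-style keys are present.
    # If a config that was already converted reaches this point (because
    # 'loss_functions' was not passed through), the lookups below raise
    # KeyError and trainer initialization fails.
    config["loss_functions"] = [
        {"energy": {"fn": config["task"]["loss_energy"], "coefficient": 1}},
        {"forces": {"fn": config["task"]["loss_force"], "coefficient": 100}},
    ]
    return config
```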
This PR (hopefully) fixes both issues and unblocks #981.