
Add train and eval mode to taskmodules #101

Open
ChristophAlt opened this issue Mar 4, 2022 · 2 comments
ChristophAlt (Collaborator) commented Mar 4, 2022

Problem: Some taskmodules require different behavior during training than during inference.
Solution: Introduce a training flag, controlled by .train(..) and .eval(..), analogous to PyTorch's Module, since users are already familiar with that usage.
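A minimal sketch of what this could look like, assuming a simplified TaskModule base class; the names mirror torch.nn.Module and are illustrative, not existing pytorch-ie API:

```python
# Hypothetical, simplified sketch of the proposed train/eval flag on a taskmodule.
class TaskModule:
    def __init__(self):
        self.training = True  # default to training mode, like torch.nn.Module

    def train(self, mode: bool = True) -> "TaskModule":
        # switch between training and evaluation behavior
        self.training = mode
        return self

    def eval(self) -> "TaskModule":
        return self.train(False)


class RelationTaskModule(TaskModule):
    def encode_input(self, document):
        if self.training:
            # e.g. sample negative candidate pairs only during training
            ...
        else:
            # e.g. enumerate all candidate pairs for inference
            ...
```

Usage would then be explicit, e.g. `taskmodule.eval()` before prediction and `taskmodule.train()` before building the training dataset.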

ChristophAlt changed the title from "Add train and eval model to taskmodules" to "Add train and eval mode to taskmodules" on Mar 4, 2022
ArneBinder (Owner) commented:

After thinking about this for a while, I'm not much in favor of it, since it introduces a flag that can be set in arbitrary locations and whose state may not be easy to track. I would propose another solution: a method Taskmodule.prepare_documents that is called before Taskmodule.encode_input and by default does nothing. However, it gets a boolean parameter ground_truth_available, which is set to the value of encode_target. This method can be used to, e.g., add missing annotation layers or some metadata to the document. It can also be used to, e.g., add candidate entity pairs for RE with a bit more sophisticated logic (this would facilitate some parts of the SAM models). See #102 for the required changes.
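A sketch of how such a hook could fit into encoding, assuming a simplified encode entry point; only prepare_documents, ground_truth_available, encode_input, and encode_target come from the comment above, the rest is illustrative:

```python
# Hypothetical, simplified sketch of the proposed prepare_documents hook.
class TaskModule:
    def prepare_documents(self, documents, ground_truth_available: bool) -> None:
        # Default: do nothing. Subclasses may add missing annotation layers,
        # metadata, or candidate annotations to the documents here.
        pass

    def encode(self, documents, encode_target: bool = False):
        # ground_truth_available mirrors encode_target, as proposed above.
        self.prepare_documents(documents, ground_truth_available=encode_target)
        inputs = [self.encode_input(doc) for doc in documents]
        if encode_target:
            return inputs, [self.encode_target(doc) for doc in documents]
        return inputs

    def encode_input(self, document):
        raise NotImplementedError

    def encode_target(self, document):
        raise NotImplementedError


class RelationExtractionTaskModule(TaskModule):
    def prepare_documents(self, documents, ground_truth_available: bool) -> None:
        # e.g. add candidate entity pairs for RE; when gold annotations are
        # available, negatives can be sampled relative to the gold relations.
        for document in documents:
            ...
```

This keeps the training/inference distinction tied to the encoding call instead of a mutable flag on the taskmodule.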

ArneBinder (Owner) commented:

This requires bugfix PR #108 in order to be functional.
