add qwen2vl for sequence classification #34086
base: main
Conversation
Maybe @zucchini-nlp can take a look as well!

@ArthurZucker @zucchini-nlp Any of you who could take a look at this?
Sorry I missed this one. LGTM in general, but I am not sure if we add dedicated classes for specific tasks when there is no checkpoint for that model. I'll leave that question for @ArthurZucker.
I left a couple comments, mostly nits. Thanks for working on this!
```python
@unittest.skip("LM test not for VLM")
def test_attention_outputs(self):
    pass
```
Did all the tests start failing after SequenceClassification was added? These tests should be okay with VLMs, so I don't think it is a good idea to skip them. We should rather try to fix it.
Yes, these tests started failing after adding SequenceClassification! I can try to look into it.
Looks like it is a problem with the pooling of logits that was copied from LlamaForSequenceClassification. Will investigate further.
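For context, the Llama-style pooling being referenced scores every position and then keeps only the logits at the last non-padding token of each sequence; if `pad_token_id` is misconfigured, the wrong position gets picked, which is consistent with the failures described above. A simplified, dependency-free sketch of that pooling (function and variable names here are illustrative, not the actual transformers implementation):

```python
def pool_last_token_logits(logits, input_ids, pad_token_id):
    """Keep, for each sequence, the logits at its last non-pad position.

    logits:    per-position logits, shape [batch][seq_len][num_labels]
    input_ids: token ids, shape [batch][seq_len]
    """
    pooled = []
    for seq_logits, seq_ids in zip(logits, input_ids):
        # Index of the last token that is not padding.
        last = max(i for i, tok in enumerate(seq_ids) if tok != pad_token_id)
        pooled.append(seq_logits[last])
    return pooled
```

Note that the real implementation derives the index from the first pad position, which assumes right padding; that assumption is one common source of silently wrong pooling.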
In general, we add these classes if:
Sorry for not being active here, I have had some busy weeks. I have not seen any implementation of or discussion about this, but that is probably because multimodality is still not very mainstream given the increased compute requirements. We have seen big releases recently for retrieval (col-pali, col-qwen, dse). I currently have this need, and would much prefer if this were part of the standard library.
For anyone else looking for a vision-language reranker/cross-encoder, check out this model released by lightonai, which takes Qwen2-VL 2B and fine-tunes it on vidore.
I am using the Qwen2-VL-2B model for a classification task, and I want to modify the Qwen2VLForConditionalGeneration by adding a linear classification head to adapt it to the task. I am unsure whether my modification is correct, as the test results show that the model has almost no classification ability after adding the classification head. Below is my training code.
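The training code itself is not shown above. For readers wondering what "adding a linear classification head" amounts to numerically, here is a dependency-free sketch (names are hypothetical; in practice this would be a `torch.nn.Linear` applied to a pooled hidden state):

```python
def linear_head(pooled_hidden, weight, bias):
    """Project a pooled hidden-state vector to per-label logits.

    pooled_hidden: list[float] of length hidden_size
    weight:        num_labels rows, each of length hidden_size
    bias:          list[float] of length num_labels
    Computes logits = W @ h + b, the usual classification head.
    """
    return [
        sum(w * h for w, h in zip(row, pooled_hidden)) + b
        for row, b in zip(weight, bias)
    ]
```

A head like this starts from random weights, so near-chance accuracy before fine-tuning is expected; the relevant signal is whether it improves during training.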
What does this PR do?
Adds sequence classification for qwen2-vl. This work was done because there is currently no way to do text-image classification in transformers. This is useful for rerankers, reward models, etc. Mostly copied and stitched together from Qwen2VLForConditionalGeneration and LlamaForSequenceClassification.
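One way the reranker use case mentioned above could consume such a class (a hypothetical sketch, not part of this PR): treat reranking as two-label classification and turn the model's logits into a relevance probability with a softmax:

```python
import math

def relevance_score(logits):
    """Softmax over two-label classification logits.

    Assumes (hypothetically) that label index 1 means "relevant";
    returns P(relevant) in [0, 1], usable as a reranking score.
    """
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return exps[1] / sum(exps)
```

Candidate documents would then be sorted by this score for each query.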
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker I saw you reviewed the original qwen2-vl PR, so I would love for you to review this as well. Let me know if you think I need to do some refactoring or other things.