
How to reproduce the performance on CUHK-PEDES? #10

Open
liyingGao opened this issue Jul 7, 2020 · 2 comments

Comments

@liyingGao

I use Python 3.6.4, PyTorch 1.0.0, torchvision 0.2.1, and scipy 1.2.1.
The results reported in the paper 'Deep Cross-Modal Projection Learning for Image-Text Matching' on CUHK-PEDES are {top-1 = 49.37%, top-10 = 79.27%}, but I only get {top-1 = 38.35%, top-10 = 63.39%} using MobileNetV1 as the backbone, and {top-1 = 41.44%, top-10 = 65.66%} using ResNet-152. I wonder if anyone has been able to reproduce these results; if it is convenient, please share the training details and hyperparameters.
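As a sanity check on the evaluation protocol, below is a minimal sketch of how top-k text-to-image retrieval accuracy is commonly computed on CUHK-PEDES. It assumes pre-computed, L2-normalized image and text embeddings plus person-identity labels; the function name `topk_accuracy` and the identity-level matching rule are assumptions for illustration, not the repository's actual evaluation code.

```python
import torch

def topk_accuracy(text_feats, image_feats, text_labels, image_labels, ks=(1, 10)):
    """Text-to-image retrieval: for each query caption, rank all gallery images
    by cosine similarity and check whether a correct identity appears in the top k.
    Assumes both feature matrices are L2-normalized (rows are unit vectors)."""
    # Cosine similarity between every caption and every image: [n_text, n_image]
    sims = text_feats @ image_feats.t()
    # Gallery indices sorted by descending similarity for each query caption
    ranked = sims.argsort(dim=1, descending=True)
    # True where the retrieved image shares the query caption's person identity
    matches = image_labels[ranked] == text_labels.unsqueeze(1)
    results = {}
    for k in ks:
        # A query counts as correct if any of its top-k retrievals match
        results[f"top-{k}"] = matches[:, :k].any(dim=1).float().mean().item()
    return results
```

Small differences in this protocol (for example, instance-level vs. identity-level matching, or how multiple captions per image are handled) can shift top-1 by several points, so it may be worth confirming the evaluation matches the paper's before tuning hyperparameters.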

@npumazehong

I have similar results to yours. Have you reproduced the paper's results using PyTorch? Can we share with each other?

@yyll1998

yyll1998 commented Feb 4, 2021


I have the same problem as you. Have you solved it? If you have, could we share with each other? Thank you!
