I use Python 3.6.4, PyTorch 1.0.0, torchvision 0.2.1, and scipy 1.2.1.
The results reported in the paper 'Deep Cross-Modal Projection Learning for Image-Text Matching' on CUHK-PEDES are {top-1 = 49.37%, top-10 = 79.27%}, but I only get {top-1 = 38.35%, top-10 = 63.39%} using MobileNetV1 as the backbone, and {top-1 = 41.44%, top-10 = 65.66%} using ResNet-152. I wonder if anyone has been able to reproduce these results; if so, please share the training details and hyperparameters.
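In case the gap comes partly from differing evaluation protocols, here is a minimal sketch of how I understand the standard text-to-image top-k accuracy on CUHK-PEDES: each text query ranks the whole image gallery by cosine similarity, and a query counts as a hit if any of its top-k retrieved images shares the query's person ID. The function name and signature below are illustrative, not taken from the paper's code:

```python
import torch
import torch.nn.functional as F

def topk_accuracy(text_feats, image_feats, query_labels, gallery_labels, ks=(1, 10)):
    """Text-to-image retrieval accuracy at the given ranks.

    text_feats:     (N, D) text embeddings (queries)
    image_feats:    (M, D) image embeddings (gallery)
    query_labels:   (N,) person IDs of the text queries
    gallery_labels: (M,) person IDs of the gallery images
    """
    # Cosine similarity between every text query and every gallery image.
    sims = F.normalize(text_feats, dim=1) @ F.normalize(image_feats, dim=1).t()  # (N, M)

    # Sort the gallery for each query, best match first, and look up the
    # person ID at each rank.
    ranked = sims.argsort(dim=1, descending=True)  # (N, M) gallery indices
    ranked_labels = gallery_labels[ranked]         # (N, M) person IDs in rank order

    results = {}
    for k in ks:
        # A query is a hit if any of its top-k images shares its person ID.
        hits = (ranked_labels[:, :k] == query_labels.unsqueeze(1)).any(dim=1)
        results[f"top-{k}"] = hits.float().mean().item() * 100.0
    return results
```

If your evaluation differs from this (for example, a different gallery construction or a different similarity measure), that alone could account for part of the gap.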
I get similar results to yours. Have you managed to reproduce the paper's results using PyTorch? Could we share with each other?
I have the same problem. Have you solved it? If so, could we share with each other? Thank you!