*WORK IN PROGRESS ...*

The implementation of the paper [**CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval**](https://arxiv.org/abs/2104.08860).

CLIP4Clip is a video-text retrieval model based on [CLIP (ViT-B/32)](https://github.com/openai/CLIP). In this work, we investigate three similarity calculation approaches: parameter-free type, sequential type, and tight type. The model achieves SOTA results on MSR-VTT, MSVD, and LSMDC by a significant margin.

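As a rough illustration (not the repository code), the parameter-free type can be sketched as mean pooling the per-frame CLIP embeddings into a single video embedding and scoring it against the caption embedding with cosine similarity. The function name and tensor shapes below are assumptions for the example:

```python
# Minimal sketch of a parameter-free (mean pooling) similarity, assuming
# frame features come from CLIP's image encoder and the text feature from
# CLIP's text encoder.
import torch
import torch.nn.functional as F

def parameter_free_similarity(frame_embeds: torch.Tensor,
                              text_embed: torch.Tensor) -> torch.Tensor:
    # frame_embeds: (num_frames, dim) per-frame CLIP features
    # text_embed:   (dim,) CLIP feature of the query caption
    video_embed = frame_embeds.mean(dim=0)            # mean pooling over frames
    video_embed = F.normalize(video_embed, dim=-1)    # unit-normalize video embedding
    text_embed = F.normalize(text_embed, dim=-1)      # unit-normalize text embedding
    return video_embed @ text_embed                   # cosine similarity score
```

The sequential and tight types instead learn temporal interaction over the frame features (e.g. with a Transformer) before or during the matching step; see the paper for details.
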
# Citation
If you find CLIP4Clip useful in your work, you can cite the following paper:
```
@Article{Luo2021CLIP4Clip,
  author  = {Huaishao Luo and Lei Ji and Ming Zhong and Yang Chen and Wen Lei and Nan Duan and Tianrui Li},
  title   = {CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval},
  journal = {arXiv preprint arXiv:2104.08860},
  year    = {2021},
}
```

# Acknowledgments | ||
Our code is based on [CLIP (ViT-B/32)](https://github.com/openai/CLIP) and [UniVL](https://github.com/microsoft/UniVL).