# COVER

Official code for the [CVPR Workshop 2024] paper *"COVER: A Comprehensive Video Quality Evaluator"*.
Official code, demo, and weights for the Comprehensive Video Quality Evaluator (COVER).

# Todo:: update the dates and Hugging Face model links below
- xx xxx, 2024: We upload the weights of [COVER](https://github.com/vztu/COVER/release/Model/COVER.pth) and [COVER++](TobeContinue) to Hugging Face models.
- xx xxx, 2024: We upload the code of [COVER](https://github.com/vztu/COVER).
- 12 Apr, 2024: COVER has been accepted by the CVPR Workshop 2024.

# Todo:: update the [visitors](link) badges below
[](https://github.com/vztu/COVER)
[](https://github.com/QualityAssessment/COVER)
<a href="https://colab.research.google.com/github/taskswithcode/COVER/blob/master/TWCCOVER.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>

# Todo:: update the predicted scores for the YT-UGC challenge dataset specified by AIS
**COVER** pseudo-labelled quality scores for [YT-UGC](https://media.withyoutube.com): [CSV](https://github.com/QualityAssessment/COVER/raw/master/cover_predictions/kinetics_400_1.csv)

[](https://paperswithcode.com/sota/video-quality-assessment-on-youtube-ugc?p=disentangling-aesthetic-and-technical-effects)

## Introduction
# Todo:: Add Introduction here

### The proposed COVER

*This inspires us to*

## Install

The repository can be installed via the following commands:

```shell
git clone https://github.com/vztu/COVER
cd COVER
pip install -e .

# download the pretrained COVER weights
mkdir pretrained_weights
cd pretrained_weights
wget https://github.com/vztu/COVER/release/Model/COVER.pth
cd ..
```

## Evaluation: Judge the Quality of Any Video

### Try on Demos
You can run a single command to judge the quality of the demo videos in comparison with videos in VQA datasets.

```shell
python evaluate_one_video.py -v ./demo/video_1.mp4
```

or

```shell
python evaluate_one_video.py -v ./demo/video_2.mp4
```

Or choose any video you like to predict its quality:

```shell
python evaluate_one_video.py -v $YOUR_SPECIFIED_VIDEO_PATH$
```

### Outputs

#### ITU-Standardized Overall Video Quality Score

The script can directly score the video's overall quality (considering all perspectives):

```shell
python evaluate_one_video.py -v $YOUR_SPECIFIED_VIDEO_PATH$
```

The final output score is averaged among all perspectives.
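
As a rough illustration of that fusion step, the sketch below simply averages three per-perspective scores. The branch names and values are placeholder assumptions for illustration, not the repository's actual API.

```python
# Minimal sketch of the score fusion described above.
# The three branch scores are made-up placeholder values;
# substitute the per-perspective scores printed by the script.
semantic_score = 0.12
technical_score = -0.05
aesthetic_score = 0.31

# The overall quality is the plain mean of the per-perspective scores.
overall_score = (semantic_score + technical_score + aesthetic_score) / 3.0
print(f"Overall quality score: {overall_score:.4f}")
```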

## Evaluate on an Existing Video Dataset

```shell
python evaluate_one_dataset.py -in $YOUR_SPECIFIED_DIR$ -out $OUTPUT_CSV_PATH$
```

## Evaluate on a Set of Unlabelled Videos

```shell
python evaluate_a_set_of_videos.py -in $YOUR_SPECIFIED_DIR$ -out $OUTPUT_CSV_PATH$
```

The results are stored as `.csv` files under `cover_predictions` in your `OUTPUT_CSV_PATH`.

Please feel free to use COVER to pseudo-label your video datasets that lack quality labels.
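
For instance, the prediction CSV can be used to filter an unlabelled corpus before training. Below is a minimal sketch; the file name and the `path`/`overall` column names are assumptions for illustration, so check the actual header of your output CSV first.

```python
import pandas as pd

# Load COVER's pseudo-labels. The file name and column names here are
# illustrative assumptions -- inspect your own CSV before using them.
preds = pd.read_csv("cover_predictions/your_dataset.csv")

# Drop the lowest-scoring quartile of clips as a simple quality filter.
threshold = preds["overall"].quantile(0.25)
kept = preds[preds["overall"] > threshold]
kept.to_csv("filtered_videos.csv", index=False)
print(f"Kept {len(kept)} of {len(preds)} videos")
```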

## Data Preparation

We have already converted the labels for the most popular datasets you will need for Blind Video Quality Assessment,
and the download links for the **videos** are as follows:

:book: LSVQ: [Github](https://github.com/baidut/PatchVQ)

:book: KoNViD-1k: [Official Site](http://database.mmsp-kn.de/konvid-1k-database.html)

:book: LIVE-VQC: [Official Site](http://live.ece.utexas.edu/research/LIVEVQC)

:book: YouTube-UGC: [Official Site](https://media.withyoutube.com)

*(Please contact the original authors if the download links are unavailable.)*

After downloading, put the videos under `../datasets` (or anywhere you like), but remember to update the corresponding `data_prefix` entries in the [config file](cover.yml).
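
To sanity-check the paths after editing the config, you can print every `data_prefix` it contains. The snippet below is a sketch under the assumption that `cover.yml` keeps per-dataset options (each with a `data_prefix` field) under a top-level `data` section; adjust the keys to the file's actual structure.

```python
import yaml

# List the data_prefix of every dataset entry in cover.yml so the paths
# can be verified. The "data" / "data_prefix" nesting is an assumption
# about the config layout, not a guaranteed schema.
with open("cover.yml") as f:
    cfg = yaml.safe_load(f)

for name, opts in cfg.get("data", {}).items():
    if isinstance(opts, dict) and "data_prefix" in opts:
        print(f"{name}: {opts['data_prefix']}")
```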

# Training: Adapt COVER to your video quality dataset!

Now you can employ ***head-only/end-to-end transfer*** of COVER to get dataset-specific VQA prediction heads.

We still recommend **head-only** transfer. As evaluated in the paper, it achieves performance very similar to *end-to-end transfer* (usually within a 1%~2% difference) while requiring **much less** GPU memory:

```shell
python transfer_learning.py -t $YOUR_SPECIFIED_DATASET_NAME$
```

For the existing public datasets, use the respective commands:

- `python transfer_learning.py -t val-kv1k` for KoNViD-1k.
- `python transfer_learning.py -t val-ytugc` for YouTube-UGC.
- `python transfer_learning.py -t val-cvd2014` for CVD2014.
- `python transfer_learning.py -t val-livevqc` for LIVE-VQC.

As the backbone is not updated here, the checkpoint only saves the regression heads, at about `398KB` (compared with the `200+MB` full model). To use it, load the official weights [COVER.pth](https://github.com/vztu/COVER/release/Model/COVER.pth) and replace their head weights with your transferred ones.
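
One simple way to perform that replacement offline is to load the full official checkpoint, overwrite the head parameters with the transferred ones, and save the result. The sketch below assumes both checkpoints are plain PyTorch `state_dict`s (possibly nested under a `state_dict` key) and that head parameters share the same key names in both files; the transferred-head file name is a placeholder.

```python
import torch

# Merge head-only transferred weights into the full official checkpoint.
# "transferred_heads.pth" is a placeholder name for the file produced by
# transfer_learning.py; the unwrapping below is an assumption about how
# the checkpoints are stored.
full = torch.load("pretrained_weights/COVER.pth", map_location="cpu")
heads = torch.load("transferred_heads.pth", map_location="cpu")

full_sd = full.get("state_dict", full)
heads_sd = heads.get("state_dict", heads)

# Keep every backbone parameter from the full model; any key that also
# appears in the head-only checkpoint is taken from the transferred weights.
merged = {k: heads_sd.get(k, v) for k, v in full_sd.items()}
torch.save(merged, "pretrained_weights/COVER_transferred.pth")
```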

We also support ***end-to-end*** fine-tuning (change `num_epochs: 0` to `num_epochs: 15` in `./cover.yml`). It requires more GPU memory and more storage for the saved weights (full parameters), but yields the best accuracy.

The authors' fine-tuning curves are available as [Official Curves](https://wandb.ai/timothyhwu/COVER) for reference.

## Visualization

### WandB Training and Evaluation Curves

You can monitor your results on WandB!
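
If you log your own runs, a minimal `wandb` sketch looks like the following; the project name and the logged metric are placeholders rather than what the training script actually records.

```python
import wandb

# Placeholder project and metric names, for illustration only.
run = wandb.init(project="COVER-transfer", name="head-only-konvid-1k")
for epoch in range(10):
    val_srocc = 0.0  # replace with the validation SROCC of this epoch
    wandb.log({"epoch": epoch, "val/SROCC": val_srocc})
run.finish()
```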

## Acknowledgement

Thanks to every participant of the subjective studies!

## Citation

If you find our work interesting and would like to cite it, please feel free to add the following to your references!

# Todo:: add the BibTeX of COVER below
```bibtex
%cover
```