Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet, arXiv

PaddlePaddle training/validation code and pretrained models for T2T-ViT.

The official PyTorch implementation is here.

This implementation is developed by PaddleViT.

[Figure] T2T-ViT Model Overview

Update

  • Update (2022-03-24): Code is refactored and bugs are fixed.
  • Update (2021-09-27): Model FLOPs and # params are uploaded.
  • Update (2021-08-18): Code is released and ported weights are uploaded.

Model Zoo

Model Acc@1 Acc@5 #Params FLOPs Image Size Crop_pct Interpolation Link
t2t_vit_7 71.68 90.89 4.3M 1.0G 224 0.9 bicubic google/baidu
t2t_vit_10 75.15 92.80 5.8M 1.3G 224 0.9 bicubic google/baidu
t2t_vit_12 76.48 93.49 6.9M 1.5G 224 0.9 bicubic google/baidu
t2t_vit_14 81.50 95.67 21.5M 4.4G 224 0.9 bicubic google/baidu
t2t_vit_19 81.93 95.74 39.1M 7.8G 224 0.9 bicubic google/baidu
t2t_vit_24 82.28 95.89 64.0M 12.8G 224 0.9 bicubic google/baidu
t2t_vit_t_14 81.69 95.85 21.5M 4.4G 224 0.9 bicubic google/baidu
t2t_vit_t_19 82.44 96.08 39.1M 7.9G 224 0.9 bicubic google/baidu
t2t_vit_t_24 82.55 96.07 64.0M 12.9G 224 0.9 bicubic google/baidu
t2t_vit_14_384 83.34 96.50 21.5M 13.0G 384 1.0 bicubic google/baidu

* The results are evaluated on the ImageNet2012 validation set.

Data Preparation

The ImageNet2012 dataset is used with the following file structure:

│imagenet/
├──train_list.txt
├──val_list.txt
├──train/
│  ├── n01440764
│  │   ├── n01440764_10026.JPEG
│  │   ├── n01440764_10027.JPEG
│  │   ├── ......
│  ├── ......
├──val/
│  ├── n01440764
│  │   ├── ILSVRC2012_val_00000293.JPEG
│  │   ├── ILSVRC2012_val_00002138.JPEG
│  │   ├── ......
│  ├── ......
  • train_list.txt: list of relative paths and labels of training images (line format illustrated below). You can download it from: google/baidu
  • val_list.txt: list of relative paths and labels of validation images. You can download it from: google/baidu
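As a rough illustration, each line in these list files is expected to pair an image path (relative to the imagenet/ root) with an integer class label; the exact separator and zero-based labels shown below are assumptions, so check the downloaded files for the authoritative format:

train/n01440764/n01440764_10026.JPEG 0
train/n01440764/n01440764_10027.JPEG 0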

Usage

To use the model with pretrained weights, download the .pdparams weight file and update the related file paths in the following Python scripts. The model config files are located in ./configs/.

For example, assuming the weight file is downloaded to ./t2t_vit_7.pdparams, use the t2t_vit_7 model in Python as follows:

import paddle
from config import get_config
from t2t_vit import build_t2t_vit as build_model
# config files in ./configs/
config = get_config('./configs/t2t_vit_7.yaml')
# build model
model = build_model(config)
# load pretrained weights
model_state_dict = paddle.load('./t2t_vit_7.pdparams')
model.set_state_dict(model_state_dict)
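After the weights are loaded, a quick forward pass can be used to sanity-check the model. This is a minimal sketch: the random dummy input and the expected 1000-class output shape are assumptions for illustration, not part of the official usage.

model.eval()
# dummy batch: one 224x224 RGB image with random values (not a real ImageNet sample)
dummy = paddle.randn([1, 3, 224, 224])
with paddle.no_grad():
    logits = model(dummy)
print(logits.shape)  # expected [1, 1000] for ImageNet-1k classification (assumption)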

Evaluation

To evaluate model performance on ImageNet2012, run the following script from the command line:

sh run_eval_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python main_multi_gpu.py \
-cfg='./configs/t2t_vit_7.yaml' \
-dataset='imagenet2012' \
-batch_size=256 \
-data_path='/dataset/imagenet' \
-eval \
-pretrained='./t2t_vit_7.pdparams' \
-amp

Note: if you have only one GPU, set the device number to CUDA_VISIBLE_DEVICES=0 to run the evaluation on a single GPU.
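As a concrete example of the note above, a single-GPU evaluation keeps all other flags from the multi-GPU command unchanged:

CUDA_VISIBLE_DEVICES=0 \
python main_multi_gpu.py \
-cfg='./configs/t2t_vit_7.yaml' \
-dataset='imagenet2012' \
-batch_size=256 \
-data_path='/dataset/imagenet' \
-eval \
-pretrained='./t2t_vit_7.pdparams' \
-amp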

Training

To train the model on ImageNet2012, run the following script from the command line:

sh run_train_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python main_multi_gpu.py \
-cfg='./configs/t2t_vit_7.yaml' \
-dataset='imagenet2012' \
-batch_size=256 \
-data_path='/dataset/imagenet' \
-amp

Note: it is highly recommended to run the training on multiple GPUs or multi-node GPUs.

Reference

@article{yuan2021tokens,
  title={Tokens-to-token vit: Training vision transformers from scratch on imagenet},
  author={Yuan, Li and Chen, Yunpeng and Wang, Tao and Yu, Weihao and Shi, Yujun and Jiang, Zihang and Tay, Francis EH and Feng, Jiashi and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2101.11986},
  year={2021}
}