The image tagging task involves predicting the attributes of each image. We provide classification models for weather condition prediction and scene type prediction.
The BDD100K dataset contains image tagging annotations for 100K diverse images (70K/10K/20K for train/val/test). Each annotation contains image attributes regarding the weather condition (6 classes), the type of scene (6 classes), and the time of day (3 classes). For details about downloading the data and the annotation format for this task, see the official documentation.
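To get a feel for the annotation format, the sketch below tallies the attribute values in a label file. It assumes the commonly used BDD100K label layout, in which each image entry carries an `attributes` dictionary with `weather`, `scene`, and `timeofday` keys; the file name is a placeholder, so check the official documentation for the exact files and keys.

```python
import json
from collections import Counter

# Tally the tagging attributes in a BDD100K label file.
# The file name and attribute keys below assume the commonly used BDD100K
# label layout; consult the official docs for the authoritative format.
with open("bdd100k_labels_images_val.json") as f:
    annotations = json.load(f)

for key in ("weather", "scene", "timeofday"):
    counts = Counter(
        frame["attributes"].get(key, "undefined") for frame in annotations)
    print(key, dict(counts))
```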
For training the models listed below, we follow the common settings used by MMClassification. All models are trained on 4 GeForce RTX 2080 Ti GPUs. Training parameters can be found in the config files.
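The configs follow the standard MMClassification layout: Python files defining `model`, `data`, and `optimizer` dictionaries. Purely as an illustration of that layout, the fragment below sketches a ResNet-50 classifier for a 6-class tagging task; the depth, class count, learning rate, and other values are placeholders rather than the settings used for the released models.

```python
# Illustrative MMClassification-style config fragment. All values here are
# placeholders -- the config files shipped with this repo are authoritative.
model = dict(
    type='ImageClassifier',
    backbone=dict(type='ResNet', depth=50, num_stages=4, out_indices=(3, )),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=6,  # e.g. the six weather classes
        in_channels=2048,
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        topk=(1, ),
    ))
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
```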
Weather tagging covers six classes: rainy, snowy, clear, overcast, partly cloudy, and foggy (plus undefined).
Very Deep Convolutional Networks for Large-Scale Image Recognition [ICLR 2015]
Authors: Karen Simonyan, Andrew Zisserman
Abstract
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

Backbone | Input | Acc-val | Scores-val | Acc-test | Scores-test | Config | Weights | MD5 | Preds | Visuals |
---|---|---|---|---|---|---|---|---|---|---|
VGG-11 | 224 * 224 | 80.92 | scores | 80.62 | scores | config | model | MD5 | preds | visuals |
VGG-13 | 224 * 224 | 80.84 | scores | 80.71 | scores | config | model | MD5 | preds | visuals |
VGG-16 | 224 * 224 | 80.77 | scores | 80.70 | scores | config | model | MD5 | preds | visuals |
Deep Residual Learning for Image Recognition [CVPR 2016]
Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Abstract
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

Backbone | Input | Acc-val | Scores-val | Acc-test | Scores-test | Config | Weights | MD5 | Preds | Visuals |
---|---|---|---|---|---|---|---|---|---|---|
ResNet-18 | 640 * 640 | 81.57 | scores | 81.39 | scores | config | model | MD5 | preds | visuals |
ResNet-34 | 640 * 640 | 81.48 | scores | 81.05 | scores | config | model | MD5 | preds | visuals |
ResNet-50 | 640 * 640 | 81.94 | scores | 81.56 | scores | config | model | MD5 | preds | visuals |
ResNet-101 | 640 * 640 | 81.73 | scores | 81.22 | scores | config | model | MD5 | preds | visuals |
ResNet-18 | 224 * 224 | 81.66 | scores | 81.14 | scores | config | model | MD5 | preds | visuals |
ResNet-34 | 224 * 224 | 81.61 | scores | 81.06 | scores | config | model | MD5 | preds | visuals |
ResNet-50 | 224 * 224 | 81.78 | scores | 81.24 | scores | config | model | MD5 | preds | visuals |
ResNet-101 | 224 * 224 | 81.59 | scores | 81.12 | scores | config | model | MD5 | preds | visuals |
Deep Layer Aggregation [CVPR 2018]
Authors: Fisher Yu, Dequan Wang, Evan Shelhamer, Trevor Darrell
Abstract
Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at [this https URL](https://github.com/ucbdrive/dla).

Backbone | Input | Acc-val | Scores-val | Acc-test | Scores-test | Config | Weights | MD5 | Preds | Visuals |
---|---|---|---|---|---|---|---|---|---|---|
DLA-34 | 224 * 224 | 81.35 | scores | 81.24 | scores | config | model | MD5 | preds | visuals |
DLA-60 | 224 * 224 | 79.99 | scores | 79.65 | scores | config | model | MD5 | preds | visuals |
DLA-X-60 | 224 * 224 | 80.22 | scores | 79.80 | scores | config | model | MD5 | preds | visuals |
Scene tagging covers six classes: tunnel, residential, parking lot, city street, gas stations, and highway (plus undefined).
Very Deep Convolutional Networks for Large-Scale Image Recognition [ICLR 2015]
Authors: Karen Simonyan, Andrew Zisserman
Abstract
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

Backbone | Input | Acc-val | Scores-val | Acc-test | Scores-test | Config | Weights | MD5 | Preds | Visuals |
---|---|---|---|---|---|---|---|---|---|---|
VGG-11 | 224 * 224 | 77.22 | scores | 77.01 | scores | config | model | MD5 | preds | visuals |
VGG-13 | 224 * 224 | 77.37 | scores | 77.10 | scores | config | model | MD5 | preds | visuals |
VGG-16 | 224 * 224 | 77.57 | scores | 77.23 | scores | config | model | MD5 | preds | visuals |
Deep Residual Learning for Image Recognition [CVPR 2016]
Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Abstract
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

Backbone | Input | Acc-val | Scores-val | Acc-test | Scores-test | Config | Weights | MD5 | Preds | Visuals |
---|---|---|---|---|---|---|---|---|---|---|
ResNet-18 | 640 * 640 | 78.07 | scores | 77.48 | scores | config | model | MD5 | preds | visuals |
ResNet-34 | 640 * 640 | 77.47 | scores | 77.40 | scores | config | model | MD5 | preds | visuals |
ResNet-50 | 640 * 640 | 77.92 | scores | 77.35 | scores | config | model | MD5 | preds | visuals |
ResNet-101 | 640 * 640 | 77.51 | scores | 77.06 | scores | config | model | MD5 | preds | visuals |
ResNet-18 | 224 * 224 | 77.84 | scores | 77.11 | scores | config | model | MD5 | preds | visuals |
ResNet-34 | 224 * 224 | 77.77 | scores | 77.34 | scores | config | model | MD5 | preds | visuals |
ResNet-50 | 224 * 224 | 77.66 | scores | 77.17 | scores | config | model | MD5 | preds | visuals |
ResNet-101 | 224 * 224 | 77.47 | scores | 77.14 | scores | config | model | MD5 | preds | visuals |
Deep Layer Aggregation [CVPR 2018]
Authors: Fisher Yu, Dequan Wang, Evan Shelhamer, Trevor Darrell
Abstract
Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes. The code is at [this https URL](https://github.com/ucbdrive/dla).

Backbone | Input | Acc-val | Scores-val | Acc-test | Scores-test | Config | Weights | MD5 | Preds | Visuals |
---|---|---|---|---|---|---|---|---|---|---|
DLA-34 | 224 * 224 | 77.64 | scores | 77.13 | scores | config | model | MD5 | preds | visuals |
DLA-60 | 224 * 224 | 75.14 | scores | 74.80 | scores | config | model | MD5 | preds | visuals |
DLA-X-60 | 224 * 224 | 75.80 | scores | 75.69 | scores | config | model | MD5 | preds | visuals |
a. Create a conda virtual environment and activate it.

```shell
conda create -n bdd100k-mmcls python=3.8
conda activate bdd100k-mmcls
```
b. Install PyTorch and torchvision following the official instructions, e.g.,

```shell
conda install pytorch torchvision -c pytorch
```

Note: Make sure that your compilation CUDA version and runtime CUDA version match. You can check the supported CUDA version for precompiled packages on the PyTorch website.
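A quick sanity check that the installed PyTorch build matches your CUDA setup, as a minimal sketch:

```python
import torch

# Report the PyTorch version, the CUDA version it was compiled against,
# and whether a GPU is visible at runtime.
print("PyTorch:", torch.__version__)
print("Compiled with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```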
c. Install mmcv and mmclassification.

```shell
pip install mmcv-full
pip install mmcls
```

You can also refer to the official instructions.
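To confirm that both packages import correctly, you can print their versions and compare them against the compatibility table in the official documentation:

```python
import mmcv
import mmcls

# Versions to check against the mmcv / mmcls compatibility table.
print("mmcv:", mmcv.__version__)
print("mmcls:", mmcls.__version__)
```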
Single-GPU inference:

```shell
python ./test.py ${CONFIG_FILE} --out ${OUTPUT_DIR} [--options]
```

Multi-GPU inference:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch \
    --nproc_per_node=4 --master_port=12000 ./test.py ${CONFIG_FILE} \
    --out ${OUTPUT_DIR} [--options] --launcher pytorch
```
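If you want a quick offline check of the written predictions, the sketch below compares predicted tags against ground-truth tags by image name. It assumes Scalabel-format outputs as in the visualization example below; the file paths and the `weather` attribute key are hypothetical, so adjust them to the task and output directory you actually use.

```python
from scalabel.label.io import load

# Hypothetical paths and attribute key -- adjust to your own setup.
preds = {f.name: (f.attributes or {}) for f in load("pred/tagging.json").frames}
gts = {f.name: (f.attributes or {}) for f in load("gt/tagging_val.json").frames}

correct = sum(
    1 for name, attrs in gts.items()
    if name in preds and preds[name].get("weather") == attrs.get("weather"))
print("accuracy: %.4f" % (correct / len(gts)))
```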
For visualization, you can use the visualization tool provided by Scalabel.
Below is an example:
```python
import os

import numpy as np
from PIL import Image
from scalabel.label.io import load
from scalabel.vis.label import LabelViewer

# Load the prediction frames written by test.py.
frames = load('$OUTPUT_DIR/tagging.json').frames

viewer = LabelViewer()
for frame in frames:
    # Draw the predictions on top of the corresponding image and save it.
    img = np.array(Image.open(os.path.join('$IMG_DIR', frame.name)))
    viewer.draw(img, frame)
    viewer.save(os.path.join('$VIS_DIR', frame.name))
```
You can include your models in this repo as well! Please follow the contribution instructions.