
Commit 0416011 ("update")

1 parent 617782e

File tree

4 files changed: +24 -1 lines changed


assets/cifar10.png (218 KB)
assets/cifar100.png (239 KB)
assets/tinyimagenet.png (152 KB)

index.md (+24 -1)

@@ -84,4 +84,27 @@ For a 1D regression task (above) and a 2D classification task (below), FoRDEs ca
 - <span class="my_blue">Blue lines</span> show accuracies of FoRDEs, while <span class="my_orange">dotted orange lines</span> show accuracies of Deep ensembles for comparison.
 - When moving from the identity lengthscale \\(\mathbf{I}\\) to the PCA lengthscale \\(\color[RGB]{68,114,196}\boldsymbol{\Lambda}\\):
   - FoRDEs exhibit small performance degradations on clean images of CIFAR-100;
-  - while becoming more robust against the natural corruptions of CIFAR-100-C.
+  - while becoming more robust against the natural corruptions of CIFAR-100-C.
+
+# Benchmark comparison
+
+<img src="./assets/cifar100.png" alt="drawing" width="100%" max-width="1000px">
+
+<img src="./assets/cifar10.png" alt="drawing" width="100%" max-width="1000px">
+
+<img src="./assets/tinyimagenet.png" alt="drawing" width="100%" max-width="1000px">
+
+# Main takeaways
+
+<div class="my_box">
+1. Input-gradient-space repulsion can perform better than weight- and function-space repulsion.
+2. Better corruption robustness can be achieved by configuring the repulsion kernel using the eigen-decomposition of the training data.
+</div>
+
+## References
+<p style="font-size: small;">
+[1] F. D’Angelo and V. Fortuin, “Repulsive deep ensembles are Bayesian,” Advances in Neural Information Processing Systems, vol. 34, pp. 3451–3465, 2021.
+</p>
+<p style="font-size: small;">
+[2] C. Liu, J. Zhuo, P. Cheng, R. Zhang, and J. Zhu, “Understanding and Accelerating Particle-Based Variational Inference,” in International Conference on Machine Learning, 2019.
+</p>
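
For concreteness, here is a minimal sketch of the second takeaway added in the diff above: derive per-component lengthscales for the repulsion kernel from the eigendecomposition (PCA) of the training inputs, then measure similarity between ensemble members' input gradients with an RBF kernel in that basis. This is an illustration under assumptions (a PyTorch setup, flattened inputs, a median-heuristic bandwidth), not the commit's or the paper's implementation; the helper names `pca_basis` and `input_gradient_kernel` are hypothetical.

```python
# Minimal sketch (an illustration, not the commit's or the paper's code) of
# configuring a repulsion kernel from the eigendecomposition of the training
# data and evaluating it on ensemble members' input gradients.
# Assumptions: PyTorch, flattened inputs, median-heuristic bandwidth.
import torch


def pca_basis(x_train: torch.Tensor):
    """Eigenvectors and eigenvalues of the training-data covariance (x_train: (N, D))."""
    x_centered = x_train - x_train.mean(dim=0, keepdim=True)
    cov = x_centered.T @ x_centered / (x_train.shape[0] - 1)   # (D, D)
    eigvals, eigvecs = torch.linalg.eigh(cov)                  # ascending eigenvalues
    return eigvecs, eigvals.clamp_min(1e-6)                    # guard against zero lengthscales


def input_gradient_kernel(grads: torch.Tensor, eigvecs: torch.Tensor, eigvals: torch.Tensor) -> torch.Tensor:
    """Mean pairwise RBF similarity of input gradients (grads: (M, D), one row per member).

    Minimising this term pushes members' input gradients apart, i.e. repulsion
    in input-gradient space rather than in weight or function space.
    """
    proj = grads @ eigvecs                                     # gradients in the PCA basis
    scaled = proj / eigvals.sqrt()                             # per-component lengthscale Λ
    sq_dists = torch.cdist(scaled, scaled).pow(2)              # (M, M) squared distances
    bandwidth = sq_dists.median().clamp_min(1e-6)              # median heuristic (assumption)
    return torch.exp(-sq_dists / bandwidth).mean()


# Example: 5 ensemble members on flattened 32x32x3 inputs (CIFAR-sized).
if __name__ == "__main__":
    x_train = torch.randn(256, 3 * 32 * 32)
    member_grads = torch.randn(5, 3 * 32 * 32)
    eigvecs, eigvals = pca_basis(x_train)
    print(input_gradient_kernel(member_grads, eigvecs, eigvals))
```

Replacing the eigenvalues with a vector of ones recovers the identity-lengthscale setting \\(\mathbf{I}\\) that the diff contrasts with the PCA lengthscale \\(\boldsymbol{\Lambda}\\).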
