Update talk slides just to introduce neos
matthewfeickert committed Jan 24, 2024
1 parent 3fdb9cf commit f65a778
Showing 1 changed file with 30 additions and 18 deletions: talk.md
@@ -765,29 +765,41 @@ $$
.bold.center[Having access to the gradients can make the fit orders of magnitude faster than finite difference]

---
-# Enable new techniques with autodiff
-
-.kol-2-3[
-* Familiar (toy) example: Optimizing selection "cut" for an analysis.<br>
-Place discriminate selection cut on observable $x$ to maximize significance.
-* Traditionally, step along values in $x$ and calculate significance at each selection. Keep maximum.
-* Need differentiable analogue to non-differentiable "cut".<br>
-Weight events using activation function of sigmoid
-
-.center[$w=\left(1 + e^{-\alpha(x-c)}\right)^{-1}$]
-
-* Most importantly though, with the differentiable model we have access to the gradient $\partial_{x} f(x)$
-* So can find the maximum significance at the point where the gradient of the significance is zero $\partial_{x} f(x) = 0$
-* With a simple gradient descent algorithm can easily automate the significance optimization
+# Enabling new tools with autodiff [TODO: CLARIFY]
+
+.kol-1-1[
+.kol-1-3[
+<p style="text-align:center;">
+<img src="figures/signal_background_stacked.png" width=100%>
+</p>
]
-.kol-1-3.center[
+.kol-1-3[
+<p style="text-align:center;">
+<img src="figures/significance_scan_compare.png" width=100%>
+</p>
+]
+.kol-1-3[
<p style="text-align:center;">
<img src="figures/signal_background_stacked.png"; width=72%>
<img src="figures/significance_scan_compare.png"; width=72%>
<img src="figures/automated_optimization.png"; width=72%>
<img src="figures/automated_optimization.png"; width=100%>
</p>
]
+]
+<!-- -->
+.kol-1-3[
+* Counting experiment for presence of signal process
+* Place discriminating selection cut on observable $x$ to maximize significance $f(x)$
+* Step along cut values in $x$ and calculate significance at each (sketched below)
+]
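
For concreteness, a minimal sketch of such a cut scan in JAX, assuming toy Gaussian samples for signal and background and the common $s/\sqrt{s+b}$ approximation for the significance (sample sizes, distributions, and all names here are illustrative, not from the slides):

```python
# Sketch: step a hard selection cut along x and keep the cut that
# maximizes the approximate significance s / sqrt(s + b).
import jax.numpy as jnp
from jax import random

key_s, key_b = random.split(random.PRNGKey(0))
signal = 1.0 + 0.5 * random.normal(key_s, (1000,))       # toy signal events in x
background = -1.0 + 1.0 * random.normal(key_b, (5000,))  # toy background events in x

def significance(cut):
    # Hard cut: count the events passing x > cut
    s = jnp.sum(signal > cut)
    b = jnp.sum(background > cut)
    return s / jnp.sqrt(s + b + 1e-9)  # guard against an empty selection

cuts = jnp.linspace(-2.0, 2.5, 50)
scan = jnp.array([significance(c) for c in cuts])
best_cut = cuts[jnp.argmax(scan)]  # keep the maximum
```
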
+.kol-1-3[
+* Need differentiable analogue to non-differentiable cut
+* Weight events using a sigmoid activation function (sketched below)

+.center[$w=\left(1 + e^{-\alpha(x-c)}\right)^{-1}$]
+]
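
A minimal sketch of the sigmoid weighting, reusing the toy `signal` and `background` arrays from the scan sketch above ($\alpha=5$ is an illustrative sharpness, not a value from the slides):

```python
# Sketch: replace the hard cut with sigmoid event weights so the
# selection is differentiable in the cut position c.
def weights(x, c, alpha=5.0):
    # w = (1 + exp(-alpha * (x - c)))^-1: ~0 well below the cut, ~1 well above it
    return 1.0 / (1.0 + jnp.exp(-alpha * (x - c)))

def soft_significance(c):
    # Weighted yields replace the hard counts in s / sqrt(s + b)
    s = jnp.sum(weights(signal, c))
    b = jnp.sum(weights(background, c))
    return s / jnp.sqrt(s + b)
```

As $\alpha \to \infty$ the weights approach the original hard cut, so $\alpha$ trades smoothness (differentiability) against fidelity to the hard selection.
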
+.kol-1-3[
+* With a simple gradient descent algorithm, can easily automate the significance optimization (sketched below)
+* Allows the "cut" to become a parameter that can be differentiated through for the larger analysis
+]
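
And a minimal sketch of the automated optimization: `jax.grad` supplies $\partial_{c} f(c)$, and simple gradient ascent (descent on $-f$) steps the cut toward the point where the gradient vanishes (learning rate and step count are illustrative):

```python
# Sketch: gradient ascent on the soft significance defined above,
# driving the cut toward the point where its gradient is zero.
from jax import grad

grad_f = grad(soft_significance)  # d(significance)/d(cut position)

c = 0.0  # initial guess for the cut position
for _ in range(200):
    c = c + 0.1 * grad_f(c)  # step uphill until grad_f(c) is ~0
```
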

---
# New Art: Analysis as a Differentiable Program