\documentclass[12pt,letterpaper]{article}
\usepackage{jheppub}
%\usepackage[hmargin=1.0in,vmargin=1.0in]{geometry}
%\usepackage{cite}
\usepackage[usenames,dvipsnames]{xcolor} % For colors and names for color boxed links
% hyperref included through jheppub
\hypersetup{
colorlinks=false, % Surround the links by color frames (false) or colors the text of the links (true)
citecolor=blue, % Color of citation links
filecolor=black, % Color of file links
linkcolor=red, % Color of internal links (sections, pages, etc.)
urlcolor=black, % Color of url hyperlinks
linkbordercolor=red, % Color of links to bibliography
citebordercolor=blue, % Color of file links
urlbordercolor=blue % Color of external links
}
% c.f.:
% http://inspirehep.net/info/faq/general#utf8
% https://tex.stackexchange.com/questions/172421/how-to-easily-use-utf-8-with-latex
%\usepackage{fontspec}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Document body
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{\boldmath A Living Review of Machine Learning \\ for Particle and Nuclear Physics}
\abstract{
Modern machine learning techniques, including deep learning, are rapidly being applied, adapted, and developed for high energy particle and nuclear physics. The goal of this document is to provide a nearly comprehensive list of citations for those developing and applying these approaches to experimental, phenomenological, or theoretical analyses. As a living document, it will be updated as often as possible to incorporate the latest developments. A list of proper (unchanging) reviews can be found within. Papers are grouped into a small set of topics to be as useful as possible. Suggestions are most welcome.
}
\begin{document}
\maketitle
The purpose of this note is to collect references for modern machine learning as applied to particle and nuclear physics. A minimal number of categories is chosen in order to be as useful as possible. Note that papers may be referenced in more than one category. The fact that a paper is listed in this document does not endorse or validate its content - that is for the community (and for peer-review) to decide. Furthermore, the classification here is a best attempt and may have flaws - please let us know if (a) we have missed a paper you think should be included, (b) a paper has been misclassified, or (c) a citation for a paper is not correct or the journal information is now available. In order to be as useful as possible, this document will continue to evolve, so please check back\footnote{See \href{https://github.com/iml-wg/HEPML-LivingReview}{https://github.com/iml-wg/HEPML-LivingReview}.} before you write your next paper. You can simply download the .bib file to get all of the latest references. Please consider citing Ref.~\cite{Feickert:2021ajf} when referring to this living review.
This review was built with the help of the HEP-ML community, the INSPIRE REST API~\cite{Moskovic:2021zjs}, and the moderators Benjamin Nachman, Matthew Feickert, Claudius Krause, and Ramon Winterhalder.
\begin{itemize}
\item \textbf{Reviews}
\\\textit{Below are links to many (static) general and specialized reviews. The third bullet contains links to classic papers that applied shallow learning methods many decades before the deep learning revolution.}
\begin{itemize}
\item Modern reviews~\cite{Shanahan:2022ifi,Boehnlein:2021eym,Karagiorgi:2021ngt,Schwartz:2021ftp,Bourilkov:2019yoi,Carleo:2019ptp,Radovic:2018dip,Albertsson:2018maf,Guest:2018yhq,Larkoski:2017jix}
\item Specialized reviews~\cite{Krause:2024avx,Malara:2024zsj,Duarte:2024lsg,Sahu:2024fzi,Halverson:2024hax,Larkoski:2024uoc,Barman:2024wfx,Ahmad:2024dql,Huetsch:2024quz,Mondal:2024nsa,Bardhan:2024zla,Kheddar:2024osf,Gooding:2024wpi,Araz:2023mda,Belis:2023mqs,Hashemi:2023rgo,Allaire:2023fgp,Du:2023qst,DeZoort:2023vrm,Zhou:2023pti,Huber:2022lpm,Huerta:2022kgj,Cheng:2022idp,Plehn:2022ftl,Chen:2022pzc,Benelli:2022sqn,Coadou:2022nsh,Harris:2022qtm,Thais:2022iok,Adelmann:2022ozp,Dvorkin:2022pwo,Butter:2022rso,Bogatskiy:2022hub,Viren:2022qon,Baldi:2022okj,Alanazi:2021grv,deLima:2021fwm,Guan:2020bdl,Kagan:2020yrm,Rousseau:2020rnz,Cranmer:2019eaq,Vlimant:2020enz,Duarte:2020ngm,Nachman:2020ccu,Brehmer:2020cvb,Forte:2020yip,Butter:2020tvl,Psihas:2020pby,Shlomi:2020gdn,1807719,Kasieczka:2019dbj}
\item Classical papers~\cite{Lonnblad:1990bi,Denby:1987rk}
\item Datasets~\cite{Krause:2024avx,Bhimji:2024bcd,Zoch:2024eyp,Rusack:2023pob,Eller:2023myr,Qu:2022mxj,Chen:2021euv,Govorkova:2021hqu,Benato:2021olt,Aarrestad:2021oeb,Kasieczka:2021xcg}
\end{itemize}
\item \textbf{Classification}
\\\textit{Given a feature space $x\in\mathbb{R}^n$, a binary classifier is a function $f:\mathbb{R}^n\rightarrow [0,1]$, where $0$ corresponds to features that are more characteristic of the zeroth class (e.g. background) and $1$ corresponds to features that are more characteristic of the first class (e.g. signal). Typically, $f$ will be a function specified by some parameters $w$ (e.g. weights and biases of a neural network) that are determined by minimizing a loss of the form $L[f]=\sum_{i}\ell(f(x_i),y_i)$, where $y_i\in\{0,1\}$ are labels. The function $\ell$ is smaller when $f(x_i)$ and $y_i$ are closer. Two common loss functions are the mean squared error $\ell(x,y)=(x-y)^2$ and the binary cross entropy $\ell(x,y)=-y\log(x)-(1-y)\log(1-x)$. Exactly what `more characteristic of' means depends on the loss function used to determine $f$. It is also possible to make a multi-class classifier. A common strategy for the multi-class case is to represent each class as a different basis vector in $\mathbb{R}^{n_\text{classes}}$ and then $f(x)\in[0,1]^{n_\text{classes}}$. In this case, $f(x)$ is usually restricted to have its $n_\text{classes}$ components sum to one and the loss function is typically the cross entropy $\ell(x,y)=-\sum_{\text{classes }i} y_i\log(x_i)$.}
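\\\textit{As an illustration (not drawn from any cited paper), the following minimal sketch trains such a binary classifier with the binary cross entropy loss; it assumes PyTorch, and the toy data and network shape are arbitrary choices.}
\begin{verbatim}
# Illustrative sketch only: a binary classifier f: R^2 -> [0,1].
import torch

# Toy data: two Gaussian blobs, labeled 0 (background) and 1 (signal).
x = torch.cat([torch.randn(1000, 2) - 1.0, torch.randn(1000, 2) + 1.0])
y = torch.cat([torch.zeros(1000), torch.ones(1000)])

# f is specified by parameters w (weights and biases of the network).
f = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1), torch.nn.Sigmoid(),
)
opt = torch.optim.Adam(f.parameters(), lr=1e-2)

# Minimize L[f] = sum_i ell(f(x_i), y_i), ell = binary cross entropy.
for epoch in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.binary_cross_entropy(f(x).squeeze(1), y)
    loss.backward()
    opt.step()
\end{verbatim}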
\begin{itemize}
\item \textbf{Parameterized classifiers}~\cite{Chen:2023ind,Nachman:2021yvi,Cranmer:2015bka,Baldi:2016fzo}
\\\textit{A classifier that is conditioned on model parameters $f(x|\theta)$ is called a parameterized classifier.}
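\\\textit{A common construction, used e.g. in Ref.~\cite{Baldi:2016fzo}, is to append the parameters $\theta$ to the input features so that a single network interpolates across parameter values. A minimal sketch of this idea, assuming PyTorch (the shapes are arbitrary):}
\begin{verbatim}
# Illustrative sketch only: f(x | theta) via input concatenation.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2 + 1, 32), torch.nn.ReLU(),  # 2 features + 1 parameter
    torch.nn.Linear(32, 1), torch.nn.Sigmoid(),
)

def f(x, theta):
    """Classifier output at features x, conditioned on parameter theta."""
    theta_col = torch.full((x.shape[0], 1), float(theta))
    return net(torch.cat([x, theta_col], dim=1))
\end{verbatim}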
\item \textbf{Representations}
\\\textit{There is no unique way to represent high energy physics data. It is often natural to encode $x$ as an image or another one of the structures listed below.}
\begin{itemize}
\item \textbf{Jet images}~\cite{Kheddar:2024osf,Han:2023djl,Choi:2023slq,Filipek:2021qbe,Du:2020pmp,collado2021learning,Lee:2019cad,li2020attention,li2020reconstructing,Macaluso:2018tck,Kasieczka:2017nvn,Komiske:2016rsd,Barnard:2016qma,Komiske:2018oaa,Lin:2018cin,ATL-PHYS-PUB-2017-017,deOliveira:2015xxd,Almeida:2015jua,Cogan:2014oua,Pumplin:1991kc}
\\\textit{Jets are collimated sprays of particles. They have a complex radiation pattern and, as such, have been a prototypical example for many machine learning studies. See the next item for a specific description of the image representation.}
\item \textbf{Event images}~\cite{He:2024ppu,Ban:2023jfo,Yang:2023djv,Bae:2022dnw,Pol:2021iqw,Andrews:2021ejw,Du:2019civ,Chung:2020ysf,Andrews:2018nwy,Lin:2018cin,ATL-PHYS-PUB-2019-028,Nguyen:2018ugw}
\\\textit{A grayscale image is a regular grid with a scalar value at each grid point. `Color' images have a fixed-length vector at each grid point. Many detectors are analogous to digital cameras and thus images are a natural representation. In other cases, images can be created by discretizing. Convolutional neural networks are natural tools for processing image data. One downside of the image representation is that high energy physics data tend to be sparse, unlike natural images.}
\item \textbf{Sequences}~\cite{ATL-PHYS-PUB-2017-003,deLima:2021fwm,goto2021development,Bols:2020bkb,Nguyen:2018ugw,Guest:2016iqz}
\\\textit{Variable-length data with an intrinsic ordering may be represented as a sequence. Recurrent neural networks are natural tools for processing sequence data.}
\item \textbf{Trees}~\cite{Choudhury:2024crp,Matousek:2024vpa,Finke:2023ltw,Belfkir:2023vpo,Dutta:2023jbz,Jercic:2021bfc,Cheng:2017rdo,Louppe:2017ipp}
\\\textit{Recursive neural networks are natural tools for processing data in a tree structure.}
\item \textbf{Graphs}~\cite{Kakati:2024dun,BESIII:2024mgg,Ma:2024qoa,CMS:2024xzb,Calafiura:2024qhv,Correia:2024ogc,Soybelman:2024mbv,Aamir:2024lpz,Kobylianskii:2024sup,Aurisano:2024uvd,Pfeffer:2024tjl,Belle-II:2024lwr,Birch-Sykes:2024gij,Lu:2024qrc,Mo:2024dru,Heinrich:2024tdf,Chatterjee:2024pbp,Konar:2023ptv,Murnane:2023ksa,Bhattacherjee:2023evs,Holmberg:2023rfr,BelleII:2023egc,Duperrin:2023elp,GarciaPardinas:2023pmx,Liu:2023siw,McEneaney:2023vwp,Wang:2023cac,Neu:2023sfh,Yu:2023juh,Murnane:2023kfm,Ehrke:2023cpn,Anisha:2023xmh,Forestano:2023fpj,Huang:2023ssr,Mokhtar:2022pwm,DiBello:2022iwf,Builtjes:2022usj,Bogatskiy:2022czk,Ma:2022bvt,Qasim:2022rww,Gong:2022lye,Pata:2022wam,Elabd:2021lgo,Tsan:2021brw,Atkinson:2021jnj,Konar:2021zdg,Atkinson:2021nlt,Belavin:2021bxb,Hariri:2021clz,Verma:2021ceh,Dezoort:2021kfk,Thais:2021qcb,Hewes:2021heg,Rossi:2021tjf,Biscarat:2021dlj,Pata:2021oez,Qian:2021vnh,Dreyer:2020brq,Verma:2020gnq,Heintz:2020soy,guo2020boosted,alonsomonsalve2020graph,Choma:2020cry,1811770,Iiyama:2020wap,Shlomi:2020gdn,1801423,1797439,Chakraborty:2020yfc,DiBello:2020bas,Chakraborty:2019imr,Qasim:2019otl,Moreno:2019bmu,Ren:2019xhp,Martinez:2018fwc,Abdughani:2018wrw,Ju:2020xty,Henrion:DLPS2017}
\\\textit{A graph is a collection of nodes and edges. Graph neural networks are natural tools for processing data in a graph structure.}
\item \textbf{Sets (point clouds)}~\cite{Araz:2024bom,Leigh:2024ked,Gambhir:2024dtf,Odagiu:2024bkp,Hammad:2023sbd,Mondal:2023law,Acosta:2023nuw,Buhmann:2023zgc,Badea:2023jdb,Kach:2023rqw,Athanasakos:2023fhq,Onyisi:2022hdh,Kach:2022uzq,Qu:2022mxj,ATL-PHYS-PUB-2020-014,Shimmin:2021pkm,Shmakov:2021qdz,Mikuni:2021pou,collado2021learning,Lee:2020qil,Fenton:2020woz,Dolan:2020qkr,Shlomi:2020ufi,Mikuni:2020wpr,Qu:2019gqs,Komiske:2018cqr}
\\\textit{A point cloud is a (potentially variable-size) set of points in space. Sets are distinguished from sequences in that there is no particular order (i.e. permutation invariance). Sets can also be viewed as graphs without edges and so graph methods that can parse variable-length inputs may also be appropriate for set learning, although there are other methods as well.}
\item \textbf{Physics-inspired basis}~\cite{Vatellis:2024vjl,Hallin:2024gmt,Farrell:2024aah,Ramirez-Morales:2024krk,Matchev:2024ash,Diaz:2023otq,Romero:2023hrk,Witkowski:2023xmx,Munoz:2023csn,Larkoski:2023nye,Kishimoto:2022eum,Grojean:2020ech,Butter:2017cot,Komiske:2017aww,Datta:2017lxt,Datta:2017rhs,Datta:2019}
\\\textit{This is a catch-all category for learning using other representations that use some sort of manual or automated physics-preprocessing.}
\end{itemize}
\item \textbf{Targets}
\begin{itemize}
\item \textbf{$W/Z$ tagging}~\cite{Ma:2024qoa,Bose:2024pwc,Bogatskiy:2023nnw,Baron:2023yhw,Grossi:2023fqq,Athanasakos:2023fhq,Aguilar-Saavedra:2023pde,Subba:2022czw,Kim:2021gtv,Dreyer:2020brq,1811770,Chen:2019uar,Sirunyan:2020lcu,Louppe:2017ipp,Barnard:2016qma,deOliveira:2015xxd}
\\\textit{Boosted, hadronically decaying $W$ and $Z$ bosons form jets that are distinguished from generic quark and gluon jets by their mass near the boson mass and their two-prong substructure.}
\item \textbf{$H\rightarrow b\bar{b}$}~\cite{Tagami:2024gtc,Ma:2024qoa,Khosa:2021cyk,Jang:2021eph,Abbas:2020khd,guo2020boosted,Tannenwald:2020mhq,Chung:2020ysf,Sirunyan:2020lcu,Chakraborty:2019imr,Moreno:2019neq,Lin:2018cin,Datta:2019ndh}
\\\textit{Due to the fidelity of $b$-tagging, boosted, hadronically decaying Higgs bosons (predominantly decaying to $b\bar{b}$) pose unique challenges and opportunities compared with $W/Z$ tagging.}
\item \textbf{quarks and gluons}~\cite{Geuskens:2024tfo,Brehmer:2024yqw,Tagami:2024gtc,Wu:2024thh,Sandoval:2024ldp,Blekman:2024wyf,Dolan:2023abg,Shen:2023ofd,He:2023cfc,Athanasakos:2023fhq,CrispimRomao:2023ssj,Bright-Thonney:2022xkx,Dreyer:2021hhr,Filipek:2021qbe,Romero:2021qlf,Dreyer:2020brq,Lee:2019cad,Lee:2019ssx,1806025,Kasieczka:2018lwf,Moreno:2019bmu,Chien:2018dfn,Stoye:DLPS2017,Cheng:2017rdo,Komiske:2016rsd,ATL-PHYS-PUB-2017-017}
\\\textit{Quark jets tend to be narrower and have fewer particles than gluon jets. This classification task has been a benchmark for many new machine learning models.}
\item \textbf{top quark} tagging~\cite{Brehmer:2024yqw,Larkoski:2024hfe,Kvita:2024ooa,Sahu:2024fzi,Dong:2024xsg,Cai:2024xnt,Ngairangbam:2023cps,Furuichi:2023vdx,Batson:2023ohn,Liu:2023dio,Bogatskiy:2023fug,Baron:2023yhw,Sahu:2023uwb,Isildak:2023dnf,Shen:2023ofd,Bogatskiy:2023nnw,He:2023cfc,Keicher:2023mer,Choi:2023slq,Bhattacherjee:2022gjq,Munoz:2022gjq,Ahmed:2022hct,Dreyer:2022yom,Andrews:2021ejw,Aguilar-Saavedra:2021rjk,Dreyer:2020brq,Lim:2020igi,Bhattacharya:2020vzu,Macaluso:2018tck,Kasieczka:2017nvn,Butter:2017cot,Diefenbacher:2019ezd,Chakraborty:2020yfc,Kasieczka:2019dbj,Stoye:DLPS2017,Almeida:2015jua}
\\\textit{Boosted top quarks form jets that have a three-prong substructure ($t\rightarrow Wb,W\rightarrow q\bar{q}$).}
\item \textbf{strange jets}~\cite{Greljo:2024ytg,Tagami:2024gtc,Kats:2024eaq,Subba:2023rpm,Erdmann:2020ovh,Erdmann:2019blf,Nakai:2020kuu}
\\\textit{Strange quarks have a very similar fragmentation to generic quark and gluon jets, so this is a particularly challenging task.}
\item \textbf{$b$-tagging}~\cite{Malara:2024zsj,Song:2024aka,VanStroud:2023ggs,Tamir:2023aiz,ATLAS:2023gog,Stein:2023cnt,Liao:2022ufk,ATL-PHYS-PUB-2020-014,ATL-PHYS-PUB-2017-003,Bols:2020bkb,bielkov2020identifying,Keck:2018lcd,Guest:2016iqz,Sirunyan:2017ezt}
\\\textit{Due to their long (but not too long) lifetime, $B$-hadrons travel a macroscopic distance before decaying, and $b$-jet tagging has been one of the earliest adopters of modern machine learning tools.}
\item \textbf{Flavor physics}~\cite{Nishimura:2024apb,Mansouri:2024uwc,Chen:2024epd,Malekhosseini:2024eot,Co:2024bfl,Chang:2024ksq,Tian:2024yfz,Smith:2023ssh,Nishimura:2023wdu,Zhang:2023czx,Bahtiyar:2022une,1811097}
\\\textit{This category is for studies related to exclusive particle decays, especially with bottom and charm hadrons.}
\item \textbf{BSM particles and models}~\cite{Richter-Was:2024jxn,Cornell:2024dki,Arganda:2024tqo,Verma:2024kdx,Grosso:2024wjt,Wojcik:2024lfy,Bickendorf:2024ovi,Esmail:2024gdc,Ahmed:2024iqx,Birch-Sykes:2024gij,Chiang:2024pho,Jurciukonis:2024hlg,Ma:2024deu,Zhang:2024bld,Hammad:2023sbd,Hammad:2023wme,Zhang:2023ykh,Wang:2023pqx,Grefsrud:2023dad,Bhattacherjee:2023evs,Choudhury:2023eje,Esmail:2023axd,Cremer:2023gne,Aguilar-Saavedra:2023pde,Bardhan:2023mia,Flacke:2023eil,Lu:2023gjk,Guo:2023jkz,Dong:2023nir,MB:2023edk,Pedro:2023sdp,Liu:2023gpt,Palit:2023dvs,ATLAS:2023mcc,Ballabene:2022fms,CMS:2022idi,ATLAS:2022ihe,Bhattacharyya:2022umc,Bardhan:2022sif,Bhattacharya:2022kje,Faucett:2022zie,Hall:2022bme,Chiang:2022lsn,Barbosa:2022mmw,Alasfar:2022vqw,Yang:2022fhw,Ai:2022qvs,Lv:2022pme,Goodsell:2022beo,Freitas:2022cno,Badea:2022dzb,Konar:2022bgc,Feng:2021eke,Beauchesne:2021qrw,Vidal:2021oed,Cornell:2021gut,Drees:2021oew,Jung:2021tym,Morais:2021ead,Alvestad:2021sje,Yang:2021gge,Barron:2021btf,Ren:2021prq,Jorge:2021vpo,Arganda:2021azw,Stakia:2021pvp,Freitas:2019hbk,Khosa:2019kxd,Freitas:2020ttd,Englert:2020ntw,Ngairangbam:2020ksz,Grossi:2020orx,Cogollo:2020afo,Chang:2020rtc,1801423,1792136,10.1088/2632-2153/ab9023,Chakraborty:2019imr,Baldi:2014kfa,Datta:2019ndh}
\\\textit{There are many proposals to train classifiers to enhance the presence of particular new physics models.}
\item \textbf{Particle identification}~\cite{VanStroud:2024fau,Ai:2024mkl,Kasak:2023hhr,Song:2023ceh,Karwowska:2023dhl,NA62:2023wzm,Charan:2023ldg,Novosel:2023cki,Lange:2023gbe,Prasad:2023zdd,Wu:2023pzn,Kushawaha:2023dms,Ryzhikov:2022lbu,Dimitrova:2022uum,Fanelli:2022ifa,Graczykowski:2022zae,Graziani:2021vai,Verma:2021ixg,Collado:2020fwm,Qasim:2019otl,Belayneh:2019vyx,Keck:2018lcd,Hooberman:DLPS2017,Paganini:DLPS2017,deOliveira:2018lqd}
\\\textit{This is a generic category for direct particle identification and categorization using various detector technologies. Direct means that the particle directly interacts with the detector (in contrast with $b$-tagging).}
\item \textbf{Neutrino Detectors}~\cite{Leon:2024zfk,Yu:2024eog,Migala:2024ael,Yu:2024ldv,Kopp:2024lch,Cai:2024bpv,IceCube:2024xjj,Aurisano:2024uvd,Bat:2024gln,Mo:2024dru,Yu:2023ehc,Biassoni:2023lih,Bai:2022lbv,IceCube:2022njh,Sogaard:2022qgg,Bachlechner:2022cvf,Chappell:2022yxd,Lutkus:2022eou,DUNE:2022fiy,Elkarghli:2020owr,MicroBooNE:2021ojx,MicroBooNE:2021nss,Carloni:2021zbc,Garcia-Mendez:2021vts,Gavrikov:2021ktt,Maksimovic:2021dmz,Belavin:2021bxb,Acciarri:2021oav,Hewes:2021heg,Rossi:2021tjf,Drielsma:2021jdv,abbasi2021convolutional,Qian:2021vnh,Chen:2020zkj,Abratenko:2020ocq,Liu:2020pzv,Clerbaux:2020ttg,Abratenko:2020pbp,alonsomonsalve2020graph,Psihas:2020pby,Yu:2020wxu,Koh:2020snv,DeepLearnPhysics:2020hut,DUNE:2020gpm,Domine:2020tlx,Adams:2020vlj,Aiello:2020orq,Domine:2019zhm,Adams:2018bvi,Hertel:DLPS2017,Acciarri:2016ryt,Aurisano:2016jvx}
\\\textit{Neutrino detectors are very large in order to have a sizable rate of neutrino detection. The entire neutrino interaction can be characterized to distinguish different neutrino flavors.}
\item \textbf{Direct Dark Matter Detectors}~\cite{Cerdeno:2024uqt,Ghrear:2024rku,XENONCollaboration:2023dar,Biassoni:2023lih,Li:2022tvg,Liang:2021nsz,Herrero-Garcia:2021goa,Coarasa:2021fpv,McDonald:2021hus,Golovatiuk:2021lqn,Khosa:2019qgp,Akerib:2020aws,Ilyasov_2020}
\\\textit{Dark matter detectors are similar to neutrino detectors, but aim to achieve `zero' background.}
\item \textbf{Cosmology, Astro Particle, and Cosmic Ray physics}~\cite{Takahashi:2024bxx,Heisig:2024jkk,Ahn:2024lkh,Hatefi:2024asc,Yoon:2024pbz,Riehn:2024prp,Kalaczynski:2024wxa,Thakur:2024mxs,Guo:2023mhf,Hatefi:2023gpj,Krastev:2023fnh,Cai:2023gol,Carvalho:2023ele,Zhou:2023cfs,Kim:2023wuk,Goriely:2022upe,Nguyen:2022ldb,Zhang:2022djp,Abel:2022nje,Sun:2022djj,Glauch:2022xth,Montel:2022fhv,De:2022sde,Chen:2019avc,Bister:2021arb,Mishra-Sharma:2021oxe,Mishra-Sharma:2021nhh,Sabiu:2021aea,Kahlhoefer:2021sha,List:2021aer,Vago:2021grx,Aizpuru:2021vhd,Ikeda:2021sxm,Shih:2021kbt,1853992,Arjona:2021hmg,Han:2021kjx,Droz:2021wnh,huang2021convolutionalneuralnetwork,Conceicao:2021xgn,gonzalez2021tackling,Balazs:2021uhg,Aab:2021rcn,Verma:2020gnq,Tsai:2020vcx,Brehmer:2019jyt,Ostdiek:2020cqz}
\\\textit{Machine learning is often used in astrophysics and cosmology in different ways than terrestrial particle physics experiments due to a general divide between Bayesian and Frequentist statistics. However, there are many similar tasks and a growing number of proposals designed for one domain that apply to the other. See also \href{https://github.com/georgestein/ml-in-cosmology}{https://github.com/georgestein/ml-in-cosmology}.}
\item \textbf{Tracking}~\cite{Caron:2024cyo,Guiang:2024qzk,Gavalian:2024icb,Huang:2024voo,Allaire:2023llb,Allaire:2023dfg,Mieskolainen:2023hkz,Akar:2023zhd,Knipfer:2023zrv,Bae:2023eec,Abidi:2022ogh,Sun:2022bxx,Akram:2022zmj,Bakina:2022mhs,Alonso-Monsalve:2022zlm,Wang:2022oer,Goncharov:2021wvd,Huth:2021zcm,Lavrik:2021zgt,Edmonds:2021lzd,Dezoort:2021kfk,Ju:2021ayy,Thais:2021qcb,Akar:2021gns,Biscarat:2021dlj,goto2021development,Amrouche:2021tlm,Fox:2020hfm,Siviero:2020tim,Choma:2020cry,Shlomi:2020ufi,Akar:2020jti,Ju:2020xty,Amrouche:2019wmx,Farrell:2018cjr,Farrell:DLPS2017}
\\\textit{Charged particle tracking is a challenging pattern recognition task. This category is for various classification tasks associated with tracking, such as seed selection.}
\item \textbf{Heavy Ions / Nuclear Physics}~\cite{He:2024ppu,Graczyk:2024pjm,JETSCAPE:2024cqe,Liuti:2024zkc,Hirvonen:2024zne,Santos:2024bqr,Goswami:2024xrx,Hirvonen:2024ycx,Mengel:2024fcl,Wang:2024gjz,Lay:2023boz,Bedaque:2023udu,Allaire:2023fgp,Wen:2023oju,Hizawa:2023plv,Liu:2023xgl,Yoshida:2023wrb,Lasseri:2023dhi,Karmakar:2023mhy,Yiu:2023ido,Ai:2023azx,Wang:2023kcg,Wang:2023muv,AlHammal:2023svo,Dellen:2023avd,Lin:2023bmy,Soleymaninia:2023dds,Shi:2023xfz,Basak:2023wzq,CrispimRomao:2023ssj,Zhou:2023pti,He:2023zin,Biro:2023kyx,Hirvonen:2023lqy,Escher:2023oyy,Mumpower:2023lch,Kanwar:2023otc,Xu:2023fbs,He:2023urp,Mallick:2023vgi,Steffanic:2023cyx,Fore:2022ljl,Mallick:2022alr,Goriely:2022upe,Munoz:2022slm,Yang:2022rlw,Rigo:2022ces,Yang:2022eag,Zhang:2022hjh,Biro:2022zhl,Lee:2022kdn,Saha:2022skj,Chen:2022shj,Fanelli:2022kro,Liu:2022hzd,Liyanage:2022byj,Boglione:2022gpv,Rahman:2022tfq,Soma:2022qnv,Xiang:2021ssj,Du:2021brx,Du:2021qwv,Lai:2021ckt,Biro:2021zgm,Habashy:2021qku,Ng:2021ibr,Mishra:2021eqb,Zepeda:2021tzp,Habashy:2021orz,He:2021uko,Shokr:2021ouh,Huang:2021iux,Kuttan:2021npg,Du:2021pqa,Brown:2021upr,Apolinario:2021olp,Zhou:2021bvw,Sombillo:2021ifs,Zhao:2021yjo,Nagu:2021zho,Mallick:2021wop,Du:2019civ,Du:2020pmp,Chien:2018dfn,Pang:2016vdc}
\\\textit{Many tools in high energy nuclear physics are similar to those in high energy particle physics. The physics target of these studies is to understand collective properties of the strong force.}
\end{itemize}
\item \textbf{Learning strategies}
\\\textit{There is no unique way to train a classifier and designing an effective learning strategy is often one of the biggest challenges for achieving optimality.}
\begin{itemize}
\item \textbf{Hyperparameters}~\cite{Allaire:2023dfg,Schroff:2023see,DeZoort:2023dvb,Bevan:2017stx,Dudko:2021cie,Tani:2020dyi}
\\\textit{In addition to learnable weights $w$, classifiers have a number of non-differentiable parameters like the number of layers in a neural network. These parameters are called hyperparameters.}
\item \textbf{Weak/Semi supervision}~\cite{Lieberman:2024hga,Beauchesne:2023vie,Witkowski:2023htt,Bardhan:2023mia,Dolan:2022ikg,LeBlanc:2022bwd,Finke:2022lsu,Li:2022omf,Komiske:2022vxg,Lieberman:2021krq,Lee:2019ssx,Dahbi:2020zjw,Brewer:2020och,Amram:2020ykb,collaboration2020dijet,Metodiev:2018ftz,Komiske:2018vkc,Cohen:2017exh,Borisyak:2019vbz,Collins:2019jip,Collins:2018epr,Komiske:2018oaa,Metodiev:2017vrx,Dery:2017fap}
\\\textit{For supervised learning, the labels $y_i$ are known. If the labels are noisy or only known with some uncertainty, the learning is called weak supervision. Semi-supervised learning is the related case where labels are known for only a fraction of the training examples. An example of the weakly supervised setup is sketched below.}
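\\\textit{For example, in the `classification without labels' setup of Ref.~\cite{Metodiev:2017vrx}, a classifier is trained to distinguish two mixed samples $p_{M_1}=f_1\,p_S+(1-f_1)\,p_B$ and $p_{M_2}=f_2\,p_S+(1-f_2)\,p_B$ with different signal fractions $f_1\neq f_2$; the optimal such classifier is a monotonic function of $p_S/p_B$ and is therefore also optimal for separating signal from background.}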
\item \textbf{Unsupervised}~\cite{Sheldon:2024sbe,Cai:2024xnt,Lu:2024ict,Kishimoto:2023cys,Badea:2023jdb,Kitouni:2023rct,Huang:2023kgs,Dillon:2021gag,Howard:2021pos,Cai:2020vzx,Dillon:2019cqt,1797846,Komiske:2019fks,Mackey:2015hwa}
\\\textit{When no labels are provided, the learning is called unsupervised.}
\item \textbf{Reinforcement Learning}~\cite{Nishimura:2024apb,Angloher:2023oya,Alvestad:2023jgl,Nishimura:2023wdu,Dersy:2022bym,Windisch:2021mem,Cranmer:2021gdt,Harvey:2021oue,John:2020sak,Brehmer:2020brs,Carrazza:2019efs}
\\\textit{Instead of learning to distinguish different types of examples, the goal of reinforcement learning is to learn a strategy (policy). The prototypical example of reinforcement learning is learning a strategy to play video games using some kind of score as feedback during the learning.}
\item \textbf{Quantum Machine Learning}~\cite{Scott:2024txs,Yang:2024bqw,Nelakurti:2024xol,Zhang:2024ebl,Lazar:2024luq,Chen:2024rna,Hoque:2023zjt,Hammad:2023wme,Rehm:2023ovj,Schuhmacher:2023pro,Wozniak:2023xbe,Rousselot:2023pcj,Duckett:2022ccc,Araz:2022zxk,Peixoto:2022zzk,Alvi:2022fkk,Delgado:2022aty,Araz:2022haf,Abel:2022lqr,Gianelle:2022unu,Ngairangbam:2021yma,Kim:2021wrr,Bravo-Prieto:2021ehz,Araz:2021ifk,Belis:2021zqi,Wu:2021xsj,Heredge:2021vww,Blance:2021gcs,Chen:2021ouz,Guan:2020bdl,Wu:2020cye,Chen:2020zkj,Terashi:2020wfi,Blance:2020nhl,Zlokapa:2019lvv,Mott:2017xdb}
\\\textit{Quantum computers are based on unitary operations applied to quantum states. These states live in a vast Hilbert space which may have a usefully large information capacity for machine learning.}
\item \textbf{Feature ranking}~\cite{Das:2022cjl,Grojean:2020ech,Faucett:2020vbu}
\\\textit{It is often useful to take a set of input features and rank them based on their usefulness.}
\item \textbf{Attention}~\cite{Kach:2023rqw,Biassoni:2023lih,Qiu:2023ihi,Finke:2023veq,goto2021development}
\\\textit{This is an ML tool for helping the network to focus on particularly useful features.}
\item \textbf{Regularization}~\cite{Sforza:2013hua,Araz:2021wqm}
\\\textit{This is a term referring to any learning strategy that improves the robustness of a classifier to statistical fluctuations in the data and in the model initialization.}
\item \textbf{Optimal Transport}~\cite{Bright-Thonney:2023sqf,ATLAS:2023mny,Gouskos:2022xvn,Manole:2022bmi,Cai:2021hnn,Pollard:2021fqv,Romao:2020ojy,Cai:2020vzx,Komiske:2019fks}
\\\textit{Optimal transport is a set of tools for transporting one probability density into another and can be combined with other strategies for classification, regression, etc. The above citation list does not yet include papers using optimal transport distances as part of generative model training.}
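\\\textit{For instance, the Wasserstein-1 distance between densities $p$ and $q$ is $W_1(p,q)=\inf_{\pi\in\Pi(p,q)}\int\|x-y\|\,\mathrm{d}\pi(x,y)$, where $\Pi(p,q)$ is the set of joint densities with marginals $p$ and $q$; the Energy Mover's Distance of Ref.~\cite{Komiske:2019fks} applies this idea to pairs of collider events.}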
\end{itemize}
\item \textbf{Fast inference / deployment}
\\\textit{There are many practical issues that can be critical for the actual application of machine learning models.}
\begin{itemize}
\item \textbf{Software}~\cite{Yu:2024eog,Ragoni:2024jhg,Pratiush:2024ltm,Bierlich:2024vqo,Ivanov:2024whr,CALICE:2024imr,Held:2024gwj,Kauffman:2024bov,Bal:2023bvt,DiBello:2023kzc,DPHEP:2023blx,Tyson:2023zkx,Guo:2023nfu,Duarte:2022job,Garg:2022tal,Jiang:2022zho,Saito:2021vpp,Goncharov:2021wvd,Pol:2021iqw,Amrouche:2021tio,Mahesh:2021iph,Rehm:2021zow,Balazs:2021uhg,1792136,Bourgeois:2018nvk,Nguyen:2018ugw,Weitekamp:DLPS2017,Gligorov:2012qt,Strong:2020mge}
\\\textit{Strategies for efficient inference for a given hardware architecture.}
\item \textbf{Hardware/firmware}~\cite{CMS:2024psu,Migala:2024ael,Badea:2024zoq,Serhiayenka:2024han,Borella:2024mgs,Zhu:2024ubz,Los:2024xzl,Parpillon:2024maz,Tiras:2024yzr,Bahr:2024dzg,CMS:2024twn,Dickinson:2023yes,Delaney:2023swp,Zipper:2023ybp,Lin:2023xrw,Jin:2023xts,Grosso:2023owo,Yoo:2023lxy,Schulte:2023gtt,Yaary:2023dvw,Okabe:2023efz,Neu:2023sfh,Coccaro:2023nol,Herbst:2023lug,Cai:2023ldc,MeyerzuTheenhausen:2022ffb,Abidi:2022ogh,Carlson:2022vac,Khoda:2022dwz,Sun:2022bxx,Butter:2022lkf,Jwa:2019zlh,Elabd:2021lgo,Govorkova:2021utb,Migliorini:2021fuj,DiGuglielmo:2021ide,Hong:2021snb,Teixeira:2021yhl,Hawks:2021ruw,Aarrestad:2021zos,Rossi:2020sbh,Heintz:2020soy,Rankin:2020usv,Carrazza:2020qwu,Mohan:2020vvi,Iiyama:2020wap,1808088,Summers:2020xiy,DiGuglielmo:2020eqx,Duarte:2018ite}
\\\textit{Various accelerators have been studied for fast inference, which is very important for latency-limited applications such as the trigger systems at collider experiments.}
\item \textbf{Deployment}~\cite{Li:2024uju,Bieringer:2024pzt,Savard:2023wwi,Holmberg:2023rfr,SunnebornGudnadottir:2021nhk,Kuznetsov:2020mcj}
\\\textit{This category is for the deployment of machine learning interfaces, such as in the cloud.}
\end{itemize}
\end{itemize}
\item \textbf{Regression}
\\\textit{In contrast to classification, the goal of regression is to learn a function $f:\mathbb{R}^n\rightarrow\mathbb{R}^m$ for input features $x\in\mathbb{R}^n$ and target features $y\in\mathbb{R}^m$. The learning setup is very similar to classification, although the network architectures and loss functions may need to be adapted. For example, the mean squared error is the most common loss function for regression, but the network output is no longer restricted to be between $0$ and $1$; see the sketch below.}
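\\\textit{As an illustration (not drawn from any cited paper), a minimal regression sketch assuming PyTorch; note the unrestricted (no sigmoid) output and the mean squared error loss, in contrast to the classification example above.}
\begin{verbatim}
# Illustrative sketch only: regression f: R^2 -> R^1.
import torch

x = torch.randn(1024, 2)
y = (x[:, 0] * x[:, 1]).unsqueeze(1)  # arbitrary toy target

f = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(),
                        torch.nn.Linear(32, 1))  # output not in [0, 1]
opt = torch.optim.Adam(f.parameters(), lr=1e-2)

for step in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(f(x), y)  # mean squared error
    loss.backward()
    opt.step()
\end{verbatim}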
\begin{itemize}
\item \textbf{Pileup}~\cite{Algren:2024bqw,Lieret:2023aqg,Kim:2023koz,CRESST:2022qor,Li:2022omf,Maier:2021ymx,Carrazza:2019efs,Martinez:2018fwc,ATL-PHYS-PUB-2019-028,Komiske:2017ubm}
\\\textit{A given bunch crossing at the LHC will have many nearly simultaneous proton-proton collisions. Only one of those is usually interesting and the rest introduce a source of noise (pileup) that must be mitigated for precise final state reconstruction.}
\item \textbf{Calibration}~\cite{CMS:2024jdl,Akchurin:2024ffj,Britton:2024pdy,Hashmani:2024ykk,Zdybal:2024yzu,Kocot:2023izs,Acosta:2023nuw,Bein:2023ylt,Holmberg:2023rfr,Meyer:2023ffd,ALICETPC:2023ojd,ATLAS:2023tyv,Khozani:2023bql,Raine:2023fko,Soleymaninia:2023dds,Grosso:2023ltd,Grosso:2023jxp,Basak:2023wzq,Schwenker:2023bih,Lee:2023jew,Aad:2023ula,Guglielmi:2022ftj,Ge:2022xrv,Darulis:2022brn,Leigh:2022lpn,Valsecchi:2022rla,Gambhir:2022dut,Gambhir:2022gua,Akchurin:2022apq,Qiu:2022xvr,Alves:2022gnw,Dorigo:2022tfi,Chadeeva:2022kay,Pata:2022wam,Renteria-Estrada:2021zrd,Kronheim:2021hdb,Arratia:2021tsq,Micallef:2021src,Polson:2021kvr,Diefenthaler:2021rdj,Akchurin:2021ahx,Kieseler:2020wcq,Akchurin:2021afn,Pollard:2021fqv,Kieseler:2021jxc,Du:2020pmp,Baldi:2020hjm,Sirunyan:2019wwa,Kasieczka:2020vlh,Hooberman:DLPS2017,ATL-PHYS-PUB-2018-013,ATL-PHYS-PUB-2020-001,Cheong:2019upg}
\\\textit{The goal of calibration is to remove the bias (and reduce variance if possible) from detector (or related) effects.}
\item \textbf{Recasting}~\cite{Goodsell:2024aig,Hammad:2022wpq,1806026,Bertone:2016mdy,Caron:2017hku}
\\\textit{Even though an experimental analysis may provide a single model-dependent interpretation of the result, the results are likely to have important implications for a variety of other models. Recasting is the task of taking a result and interpreting it in the context of a model that was not used for the original analysis.}
\item \textbf{Matrix elements}~\cite{Heimel:2023ngj,Kaidisch:2023lwp,Maitre:2023dqz,Janssen:2023ahv,Badger:2022hwf,Dersy:2022bym,Alnuqaydan:2022ncd,Karl:2022jda,Winterhalder:2021ngy,Danziger:2021eeg,Maitre:2021uaa,Aylett-Bullock:2021hmo,Sombillo:2021rxv,Sombillo:2021yxe,Bury:2020ewi,1804325,Bishara:2019iwh,Badger:2020uow}
\\\textit{Regression methods can be used as surrogate models for functions that are too slow to evaluate. One important class of functions are matrix elements, which form the core component of cross section calculations in quantum field theory.}
\item \textbf{Parameter estimation}~\cite{Biro:2024tzv,Simkina:2023ztj,Dubey:2023pro,Yang:2023rbg,Schroder:2023akt,Goos:2023opq,Shi:2023xfz,AlHammal:2023svo,Qiu:2023ihi,Garg:2022tal,Meng:2022lmd,Castro:2022zpq,Craven:2021ems,Alda:2021rgt,Kim:2021pcz,Lazzarin:2020uvv,1808105,Lei:2020ucb}
\\\textit{The target features could be parameters of a model, which can be learned directly through a regression setup. Other forms of inference are described in later sections (which could also be viewed as regression).}
\item \textbf{Parton Distribution Functions (and related)}~\cite{Chowdhury:2024ymm,Kriesten:2024are,Liuti:2024umy,Yan:2024yir,Barontini:2024dyb,Ochoa-Oregon:2024zgm,Soleymaninia:2024jam,Bertone:2024taw,Costantini:2024xae,Gombas:2024rvw,DallOlio:2024vjv,NNPDF:2024dpb,NNPDF:2024djq,Kriesten:2023uoi,Rabemananjara:2023xfq,Fernando:2023obn,Wang:2023poi,Kassabov:2023hbm,Wang:2023nab,Candido:2023utz,Gao:2022srd,Gao:2022uhg,Iranipour:2022iak,Khalek:2021gon,Ball:2021xlu,Ball:2021leu,Carrazza:2021hny,Rossi:2020sbh,Grigsby:2020auv,DelDebbio:2020rgv}
\\\textit{Various machine learning models can provide flexible function approximators, which can be useful for modeling functions that cannot be determined easily from first principles such as parton distribution functions.}
\item \textbf{Lattice Gauge Theory}~\cite{Zhu:2024kiu,Gerdes:2024rjk,Wang:2024ykk,Rovira:2024aqd,Gao:2024zdz,Luo:2024iwf,Gao:2024nzg,Bachtis:2024dss,Jiang:2024vsr,Cai:2024eqa,Bachtis:2024vks,Apte:2024vwn,Xu:2024tjp,Chen:2024mmd,Bai:2024pii,Abbott:2024knk,Finkenrath:2024tdp,Kim:2024rpd,Lin:2024eiz,Bonanno:2024udh,Chu:2024swv,Boyle:2024nlh,Chen:2024ckb,Catumba:2024wxc,Holland:2024muu,Goswami:2024jlc,Kanwar:2024ujc,Lawrence:2023cft,Foreman:2023ymy,Gao:2023quv,Holland:2023lfx,Soloveva:2023tvj,Gao:2023uel,Wang:2023sry,Tomiya:2023jdy,Alvestad:2023jgl,Albandea:2023ais,Ermann:2023unw,Kashiwa:2023dfx,Detmold:2023kjm,Caselle:2023mvh,Buzzicotti:2023qdv,Riberdy:2023awf,Singha:2023xxq,Lehner:2023prf,NarcisoFerreira:2023kak,Bender:2023gwr,R:2023dcr,Hudspith:2023loy,Zhou:2023pti,Aronsson:2023rli,Nicoli:2023qsl,Albandea:2023wgd,Lehner:2023bba,Peng:2022wdl,Lawrence:2022dba,Aguilar:2022thg,Gao:2022uhg,Bacchio:2022vje,Bacchio:2022vje,Chen:2022asj,Favoni:2022mcg,Karsch:2022yka,Kim:2022rna,Sale:2022snt,Khan:2022vot,Albandea:2022fky,Kang:2022jbg,Li:2022ozl,Chen:2022ytr,Luo:2022jzl,Shi:2022yqw,Bulusu:2021njs,Chen:2021jey,Favoni:2021epq,Nguyen:2019gpo,Zhang:2019qiq,Yoon:2018krb,Hackett:2021idh,Shi:2021qri,Bulusu:2021rqz,Favoni:2020reg,Kanwar:2003.06413}
\\\textit{Lattice methods offer a complementary approach to perturbation theory. A key challenge is to create approaches that respect the local gauge symmetry (equivariant networks).}
\item \textbf{Function Approximation}~\cite{Wolf:2024zkz,Rovira:2024aqd,Hirst:2024abn,Reyes-Gonzalez:2023oei,Fernando:2023obn,Wang:2023nab,Lei:2022dvn,Kitouni:2021fkh,Wang:2021jou,Chahrour:2021eiv,Haddadin:2021mmo,Coccaro:2019lgs,1853982}
\\\textit{Approximating functions that obey certain (physical) constraints.}
\item \textbf{Symbolic Regression}~\cite{Cushman:2024jgi,Wang:2023poi,Lu:2022joy,Zhang:2022uqk,Butter:2021rvz}
\\\textit{Regression where the result is a (relatively) simple formula.}
\item \textbf{Monitoring}~\cite{AbdusSalam:2024obf,Li:2024akn,Cushman:2024jgi,Shutt:2024che,CMSECAL:2023fvz,Das:2023ktd,Harilal:2023smf,Chen:2023cim,Joshi:2023btt,CMSMuon:2023czf,Matha:2023tmf,Mukund:2023oyy}
\\\textit{Regression models can be used to monitor experimental setups and sensors.}
\end{itemize}
\item \textbf{Equivariant networks}~\cite{Brehmer:2024yqw,Maitre:2024hzp,Hendi:2024yin,Cruz:2024grk,Spinner:2024hjm,Bhardwaj:2024wrf,Sahu:2024sts,Bhardwaj:2024djv,Chatterjee:2024pbp,Bressler:2024wzc,Gu:2024lrz,Bright-Thonney:2023gdl,Bogatskiy:2023nnw,Murnane:2023kfm,Lehner:2023prf,Forestano:2023qcy,Buhmann:2023pmh,Aronsson:2023rli,Forestano:2023fpj,Lehner:2023bba,Hao:2022zns,Bogatskiy:2022czk,Favoni:2022mcg,Bogatskiy:2022hub,Shi:2022yqw,Gong:2022lye,Bulusu:2021njs,Favoni:2020reg,Dolan:2020qkr,Kanwar:2003.06413}
\\\textit{It is often the case that implementing equivariance or learning symmetries with a model better describes the physics and improves performance.}
\item \textbf{Physics-informed neural networks (PINNs)}~\cite{Terin:2024iyy,Vatellis:2024vjl,Panahi:2024sfb}
\\\textit{Physics-informed neural networks are universal function approximators that embed knowledge of the physical laws governing a given data-set, often expressed as partial differential equations (PDEs), into the learning process.}
\item \textbf{Decorrelation methods}~\cite{Algren:2023spv,Rabusov:2022woa,Das:2022cjl,Klein:2022hdv,Mikuni:2021nwn,Dolan:2021pml,Ghosh:2021hrh,Kitouni:2020xgb,Kasieczka:2020pil,clavijo2020adversarial,10.1088/2632-2153/ab9023,Rogozhnikov:2014zea,Wunsch:2019qbo,Englert:2018cfo,Xia:2018kgd,DiscoFever,ATL-PHYS-PUB-2018-014,Bradshaw:2019ipy,Shimmin:2017mfk,Stevens:2013dya,Moult:2017okx,Dolen:2016kst,Louppe:2016ylz}
\\\textit{It is sometimes the case that a classification or regression model needs to be independent of a set of features (usually a mass-like variable) in order to estimate the background or otherwise reduce the uncertainty. These techniques are related to what the machine learning literature calls model `fairness'; a schematic form of the decorrelated objective is given below.}
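\\\textit{Schematically, many of these methods minimize $L=L_\text{class}+\lambda\,R(f(x),m)$, where $m$ is the protected (mass-like) variable, $\lambda$ sets the performance/independence trade-off, and $R$ penalizes statistical dependence between the classifier output and $m$, realized e.g. with an adversary~\cite{Louppe:2016ylz} or with distance correlation~\cite{DiscoFever}.}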
\item \textbf{Generative models / density estimation}
\\\textit{The goal of generative modeling is to learn (explicitly or implicitly) a probability density $p(x)$ for the features $x\in\mathbb{R}^n$. This task is usually unsupervised (no labels).}
\begin{itemize}
\item \textbf{GANs}~\cite{Krause:2024avx,Kach:2024yxi,Wojnar:2024cbn,Dooney:2024pvt,Simsek:2024zhj,Chan:2023icm,Scham:2023usu,Scham:2023cwn,FaucciGiannelli:2023fow,Erdmann:2023ngr,Barbetti:2023bvi,Alghamdi:2023emm,Dubinski:2023fsy,Chan:2023ume,Diefenbacher:2023prl,EXO:2023pkl,Hashemi:2023ruu,Yue:2023uva,Buhmann:2023pmh,Anderlini:2022hgm,ATLAS:2022jhk,Rogachev:2022hjg,Ratnikov:2022hge,Anderlini:2022ckd,Ghosh:2022zdz,Bieringer:2022cbs,Buhmann:2021caf,Desai:2021wbb,Chisholm:2021pdn,Anderlini:2021qpm,Bravo-Prieto:2021ehz,Li:2021cbp,Mu:2021nno,Khattak:2021ndw,NEURIPS2020_a878dbeb,Kansal:2021cqp,Winterhalder:2021ave,Lebese:2021foi,Rehm:2021qwm,Carrazza:2021hny,Rehm:2021zoz,Rehm:2021zow,Choi:2021sku,Lai:2020byl,Maevskiy:2020ank,Kansal:2020svm,2008.06545,Diefenbacher:2020rna,Alanazi:2020jod,buhmann2020getting,Wang:2020tap,Belayneh:2019vyx,Hooberman:DLPS2017,Farrell:2019fsm,deOliveira:2017rwa,Oliveira:DLPS2017,Urban:2018tqv,Erdmann:2018jxd,Erbin:2018csv,Derkach:2019qfk,Deja:2019vcv,Erdmann:2018kuh,Musella:2018rdi,Datta:2018mwd,Vallecorsa:2018zco,Carminati:2018khv,Zhou:2018ill,ATL-SOFT-PUB-2018-001,Chekalina:2018hxi,Hashemi:2019fkn,DiSipio:2019imz,Lin:2019htn,Butter:2019cae,Carrazza:2019cnt,SHiP:2019gcl,Vallecorsa:2019ked,Bellagente:2019uyp,Martinez:2019jlu,Butter:2019eyo,Alonso-Monsalve:2018aqs,Paganini:2017dwg,Paganini:2017hrr,deOliveira:2017pjk}
\\\textit{Generative Adversarial Networks~\cite{Goodfellow:2014upx} learn $p(x)$ implicitly through the minimax optimization of two networks: a generator that maps noise to structure $G(z)$ and a classifier (called the discriminator) that learns to distinguish examples generated from $G(z)$ from those generated by the target process. When the discriminator is maximally `confused', the generator is effectively mimicking $p(x)$.}
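\\\textit{As an illustration (not drawn from any cited paper), a minimal sketch of the minimax training loop, assuming PyTorch, with an arbitrary one-dimensional toy target density:}
\begin{verbatim}
# Illustrative sketch only: GAN with 1D toy data.
import torch

G = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.ReLU(),
                        torch.nn.Linear(32, 1))           # noise z -> x
D = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.ReLU(),
                        torch.nn.Linear(32, 1), torch.nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = torch.nn.functional.binary_cross_entropy

for step in range(2000):
    real = 0.5 * torch.randn(128, 1) + 2.0  # samples from the target p(x)
    fake = G(torch.randn(128, 4))
    # Discriminator update: real -> 1, generated -> 0.
    opt_D.zero_grad()
    d_loss = (bce(D(real), torch.ones(128, 1))
              + bce(D(fake.detach()), torch.zeros(128, 1)))
    d_loss.backward()
    opt_D.step()
    # Generator update: try to make D label the fakes as real.
    opt_G.zero_grad()
    g_loss = bce(D(fake), torch.ones(128, 1))
    g_loss.backward()
    opt_G.step()
\end{verbatim}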
\item \textbf{(Variational) Autoencoders}~\cite{Smith:2024lxz,Krause:2024avx,Liu:2024kvv,Kuh:2024lgx,Hoque:2023zjt,Zhang:2023khv,Chekanov:2023uot,Lasseri:2023dhi,Anzalone:2023ugq,Roche:2023int,Cresswell:2022tof,AbhishekAbhishek:2022wby,Collins:2022qpr,Ilten:2022jfm,Touranakou:2022qrp,Buhmann:2021caf,Tsan:2021brw,Jawahar:2021vyu,Orzari:2021suh,Collins:2021pld,Fanelli:2019qaq,Hariri:2021clz,deja2020endtoend,Bortolato:2021zic,Buhmann:2021lxj,Howard:2021pos,1816035,Cheng:2020dal,ATL-SOFT-PUB-2018-001,Monk:2018zsb}
\\\textit{An autoencoder consists of two functions: one that maps $x$ into a latent space $z$ (encoder) and a second one that maps the latent space back into the original space (decoder). The encoder and decoder are simultaneously trained so that their composition is nearly the identity. When the latent space has a well-defined probability density (as in variational autoencoders), then one can sample from the autoencoder by applying the decoder to a randomly chosen element of the latent space.}
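\\\textit{As an illustration (not drawn from any cited paper), a minimal (non-variational) autoencoder sketch assuming PyTorch, with arbitrary dimensions:}
\begin{verbatim}
# Illustrative sketch only: autoencoder with a 2D latent space.
import torch

enc = torch.nn.Sequential(torch.nn.Linear(10, 8), torch.nn.ReLU(),
                          torch.nn.Linear(8, 2))   # encoder: x -> z
dec = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.ReLU(),
                          torch.nn.Linear(8, 10))  # decoder: z -> x
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                       lr=1e-3)

x = torch.randn(1024, 10)  # toy data
for step in range(1000):
    opt.zero_grad()
    # Train so that dec(enc(x)) is close to the identity on the data.
    loss = torch.nn.functional.mse_loss(dec(enc(x)), x)
    loss.backward()
    opt.step()
\end{verbatim}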
\item \textbf{(Continuous) Normalizing flows}~\cite{Krause:2024avx,Saito:2024fmr,Bodendorfer:2024egw,Heimel:2024wph,Quetant:2024ftg,Dreyer:2024bhs,Buss:2024orz,Favaro:2024rle,Du:2024gbp,Bai:2024pii,Abbott:2024knk,Daumann:2024kfd,Schnake:2024mip,Kelleher:2024jsh,Vaselli:2024vrx,Kelleher:2024rmb,Deutschmann:2024lml,Kanwar:2024ujc,Krause:2023uww,Ernst:2023qvn,ElBaz:2023ijr,Bierlich:2023zzd,Heimel:2023ngj,Gavranovic:2023oam,Pham:2023bnl,Albandea:2023ais,Bright-Thonney:2023sqf,Finke:2023ltw,Bickendorf:2023nej,Reyes-Gonzalez:2023oei,Golling:2023mqx,Pang:2023wfx,Buckley:2023rez,Singha:2023xxq,Xu:2023xdc,Wen:2023oju,Golling:2023yjq,Raine:2023fko,Nachman:2023clf,R:2023dcr,Nicoli:2023qsl,Diefenbacher:2023vsw,Rousselot:2023pcj,Albandea:2023wgd,Heimel:2022wyj,Backes:2022vmn,Dolan:2022ikg,Kach:2022uzq,Kach:2022qnf,Cresswell:2022tof,Krause:2022jna,Albandea:2022fky,Chen:2022ytr,Leigh:2022lpn,Verheyen:2022tov,Butter:2022lkf,Winterhalder:2021ngy,Butter:2021csz,Krause:2021wez,Bister:2021arb,Jawahar:2021vyu,Vandegar:2020yvw,NEURIPS2020_a878dbeb,Hallin:2021wme,Menary:2021tjg,Hackett:2021idh,Krause:2021ilc,Winterhalder:2021ave,Hollingsworth:2021sii,Bieringer:2020tnw,Lu:2020npg,Choi:2020bnf,Nachman:2020lpy,Gao:2020vdv,Gao:2020zvv,Bothmann:2020ywa,Brehmer:2020vwc,Kanwar:2003.06413,1800956,Albergo:2019eim}
\\\textit{Normalizing flows~\cite{pmlr-v37-rezende15} learn $p(x)$ explicitly by starting with a simple probability density and then applying a series of bijective transformations with tractable Jacobians.}
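\\\textit{Concretely, for a bijection $x=g(z)$ applied to a base density $p_z$, the modeled density is $p(x)=p_z\!\left(g^{-1}(x)\right)\left|\det\,\partial g^{-1}(x)/\partial x\right|$, which is why the transformations must be invertible with tractable Jacobians; the parameters of $g$ are then learned by maximizing $\log p(x)$ over the training data.}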
\item \textbf{Diffusion Models}~\cite{Krause:2024avx,Araz:2024bom,Algren:2024bqw,Aarts:2024rsl,Zhu:2024kiu,Wojnar:2024cbn,Quetant:2024ftg,Kita:2024nnw,Favaro:2024rle,Kobylianskii:2024sup,Jiang:2024bwr,Vaselli:2024vrx,Kobylianskii:2024ijw,Jiang:2024ohg,Sengupta:2023vtm,Butter:2023ira,Wang:2023sry,Heimel:2023ngj,Devlin:2023jzp,Buhmann:2023acn,Buhmann:2023zgc,Buhmann:2023kdg,Hunt-Smith:2023ccp,Mikuni:2023tqg,Diefenbacher:2023wec,Cotler:2023lem,Diefenbacher:2023flw,Amram:2023onf,Imani:2023blb,Leigh:2023zle,Acosta:2023zik,Mikuni:2023tok,Butter:2023fov,Buhmann:2023bwk,Shmakov:2023kjj,Mikuni:2023dvk,Leigh:2023toe,Mikuni:2022xry}
\\\textit{These approaches learn the gradient of the log density (the score) instead of the density directly.}
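\\\textit{The learned object is the score $s_w(x)\approx\nabla_x\log p(x)$, typically fit with (denoising) score matching over a family of noise levels; samples are then generated by integrating a reverse-time diffusion process (or a corresponding ordinary differential equation) whose drift involves $s_w$.}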
\item \textbf{Transformer Models}~\cite{Brehmer:2024yqw,Quetant:2024ftg,Spinner:2024hjm,Paeng:2024ary,Li:2023xhj,Tomiya:2023jdy,Raine:2023fko,Butter:2023fov,Finke:2023veq}
\\\textit{These approaches learn the density or perform generative modeling using transformer-based networks.}
\item \textbf{Physics-inspired}~\cite{Abasov:2024hyq,Larkoski:2023xam,Barenboim:2021vzh,Lai:2020byl,1808876,Andreassen:2019txo,Andreassen:2018apy}
\\\textit{A variety of methods have been proposed to use machine learning tools (e.g. neural networks) combined with physical components.}
\item \textbf{Mixture Models}~\cite{Vermunt:2023fsr,Liu:2022dem,Graziani:2021vai,Burton:2021tsd,Chen:2020uds}
\\\textit{A mixture model is a superposition of simple probability densities. For example, a Gaussian mixture model is a sum of normal probability densities. Mixture density networks are mixture models where the coefficients in front of the constituent densities as well as the density parameters (e.g. mean and variances of Gaussians) are parameterized by neural networks.}
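\\\textit{For example, a Gaussian mixture density network models $p(x|c)=\sum_{i=1}^{K}\pi_i(c)\,\mathcal{N}(x;\mu_i(c),\sigma_i^2(c))$ with $\sum_i\pi_i(c)=1$, where the mixture weights, means, and variances are all outputs of a neural network evaluated on conditioning features $c$.}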
\item \textbf{Phase space generation}~\cite{Deutschmann:2024lml,Calisto:2023vmm,Singh:2023yvj,Renteria-Estrada:2023buo,Heimel:2022wyj,Jinno:2022sbr,Maitre:2022xle,Yoon:2020zmb,Danziger:2021eeg,Backes:2020vka,Verheyen:2020bjw,Chen:2020nfb,Nachman:2020fff,Carrazza:2020rdn,Klimek:2018mza,Gao:2020vdv,Gao:2020zvv,Bothmann:2020ywa,Bendavid:2017zhk}
\\\textit{Monte Carlo event generators integrate over a phase space that needs to be generated efficiently and this can be aided by machine learning methods.}
\item \textbf{Gaussian processes}~\cite{Cisbani:2019xta,1804325,Bertone:2016mdy,Frate:2017mai}
\\\textit{These are non-parametric tools for modeling the `time'-dependence of a random variable. The `time' need not be actual time - for instance, one can use Gaussian processes to model the energy dependence of some probability density.}
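\\\textit{A Gaussian process is fully specified by a mean function $m(t)$ and a covariance kernel $k(t,t')$, e.g. the radial basis function kernel $k(t,t')=\sigma^2\exp\left(-(t-t')^2/(2\ell^2)\right)$; any finite collection of function values is then jointly Gaussian, which makes interpolation and uncertainty estimation analytic.}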
\item \textbf{Other/hybrid}~\cite{Sahu:2023uwb,Santos:2023mib,Kronheim:2023jrl,Butter:2023fov,Kansal:2022spb,Li:2022jon,DiBello:2022rss,Cresswell:2022tof}
\\\textit{Architectures that combine different network elements or otherwise do not fit into the other categories.}
\end{itemize}
\item \textbf{Anomaly detection}~\cite{Das:2024fwo,DARWIN:2024unx,Duarte:2024lsg,Zhang:2024ebl,Chekanov:2024ezm,Matos:2024ggs,Harilal:2024tqq,Leigh:2024chm,Grosso:2024nho,Li:2024htp,Cheng:2024yig,Krause:2023uww,Sengupta:2023vtm,Zipper:2023ybp,Metodiev:2023izu,Liu:2023djx,Zhang:2023khv,Bai:2023yyy,Grosso:2023owo,Freytsis:2023cjr,Buhmann:2023acn,Finke:2023ltw,Bickendorf:2023nej,CMSECAL:2023fvz,Chekanov:2023uot,ATLAS:2023azi,Vaslin:2023lig,Golling:2023yjq,Mikuni:2023tok,Sengupta:2023xqy,Golling:2023juz,Roche:2023int,Schuhmacher:2023pro,Mastandrea:2022vas,Araz:2022zxk,Kasieczka:2022naq,Hallin:2022eoq,Kamenik:2022qxs,Park:2022zov,Caron:2022wrw,Dillon:2022mkq,Verheyen:2022tov,Finke:2022lsu,Fanelli:2022xwl,Letizia:2022xbe,Raine:2022hht,Birman:2022xzu,Dillon:2022tmm,Jiang:2022sfw,Alvi:2022fkk,Buss:2022lxw,Aguilar-Saavedra:2022ejy,Bradshaw:2022qev,Ngairangbam:2021yma,Canelli:2021aps,dAgnolo:2021aun,Chekanov:2021pus,Mikuni:2021nwn,Lester:2021aks,Tombs:2021wae,Aguilar-Saavedra:2021utu,Herrero-Garcia:2021goa,Jawahar:2021vyu,Fraser:2021lxm,Ostdiek:2021bem,Hallin:2021wme,Govorkova:2021utb,Volkovich:2021txe,Kasieczka:2021tew,Govorkova:2021hqu,Caron:2021wmq,Dorigo:2021iyy,Aarrestad:2021oeb,Kahn:2021drv,Atkinson:2021nlt,Shih:2021kbt,Finke:2021sdf,Dillon:2021nxw,Collins:2021nxn,Bortolato:2021zic,Blance:2021gcs,Batson:2021agz,Chakravarti:2021svb,Kasieczka:2021xcg,Stein:2020rou,Faroughy:2020gas,Park:2020pak,vanBeekveld:2020txa,Mikuni:2020qds,pol2020anomaly,1815227,aguilarsaavedra2020mass,Alexander:2020mbx,Thaprasop:2020mzp,Khosa:2020qrz,Cheng:2020dal,Amram:2020ykb,1800445,1797846,collaboration2020dijet,knapp2020adversarially,Romao:2020ojy,Romao:2019dvs,Aguilar-Saavedra:2017rzt,Nachman:2020lpy,Andreassen:2020nkr,Dillon:2019cqt,1809.02977,Mullin:2019mmh,DeSimone:2018efk,Hajer:2018kqm,Blance:2019ibf,Cerri:2018anq,Roy:2019jae,Heimel:2018mkt,Farina:2018fyg,DAgnolo:2019vbw,Collins:2019jip,Collins:2018epr,DAgnolo:2018cun}
\\\textit{The goal of anomaly detection is to identify abnormal events. The abnormal events could be from physics beyond the Standard Model or from faults in a detector. While nearly all searches for new physics are technically anomaly detection, this category is for methods that are model-independent (broadly defined). Anomalies in high energy physics tend to manifest as over-densities in phase space (often called `population anomalies'), in contrast to off-manifold anomalies where individual examples can be flagged as anomalous.}
\item \textbf{Foundation Models, LLMs}~\cite{Leigh:2024ked,Mikuni:2024qsr,Zhang:2024kws,Fanelli:2024ktq,Harris:2024sra,Birk:2024knn,Vigl:2024lat}
\\\textit{A foundation model is a machine learning or deep learning model that is trained on broad data such that it can be applied across a wide range of use cases.}
\item \textbf{Simulation-based (`likelihood-free') Inference}
\\\textit{Likelihood-based inference is the case where $p(x|\theta)$ is known and $\theta$ can be determined by maximizing the probability of the data. In high energy physics, $p(x|\theta)$ is often not known analytically, but it is often possible to sample from the density implicitly using simulations.}
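\\\textit{A key result connecting classification and inference (see e.g. Ref.~\cite{Cranmer:2015bka}) is the likelihood ratio trick: a classifier $f$ trained with binary cross entropy to distinguish equal-sized samples drawn from $p(x|\theta_1)$ and $p(x|\theta_0)$ converges to $f(x)=p(x|\theta_1)/\left(p(x|\theta_1)+p(x|\theta_0)\right)$, so the likelihood ratio can be recovered as $p(x|\theta_1)/p(x|\theta_0)=f(x)/\left(1-f(x)\right)$.}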
\begin{itemize}
\item \textbf{Parameter estimation}~\cite{Heimel:2024drk,Maitre:2024hzp,Bahl:2024meb,JETSCAPE:2024cqe,Mastandrea:2024irf,Diaz:2024yfu,Alvarez:2024owq,Chatterjee:2024pbp,Chai:2024zyl,Heimel:2023mvw,Espejo:2023wzf,Barrue:2023ysk,Morandini:2023pwj,Erdogan:2023uws,Breitenmoser:2023tmi,Heinrich:2023bmt,Rizvi:2023mws,Neubauer:2022gbu,Butter:2022vkj,Arganda:2022zbs,Kong:2022rnd,Arganda:2022qzy,Bahl:2021dnc,Barman:2021yfh,Mishra-Sharma:2021oxe,NEURIPS2020_a878dbeb,Chatterjee:2021nms,Nachman:2021yvi,Bieringer:2020tnw,Flesher:2020kuy,Coogan:2020yux,Andreassen:2020gtw,Cranmer:2015bka,Brehmer:2018hga,Brehmer:2019xox,Brehmer:2018eca,Brehmer:2018kdj,Hollingsworth:2020kjg,Stoye:2018ovl,Andreassen:2019nnm}
\\\textit{This can also be viewed as a regression problem, but here the goal is typically to do maximum likelihood estimation in contrast to directly minimizing the mean squared error between a function and the target.}
\item \textbf{Unfolding}~\cite{Butter:2024vbx,Duarte:2024lsg,Zhu:2024drd,Desai:2024kpd,Huetsch:2024quz,Shmakov:2024gkd,Shmakov:2023kjj,Chan:2023tbf,Backes:2022vmn,Arratia:2022wny,Wong:2021zvv,Arratia:2021otl,H1:2021wkz,Komiske:2021vym,Andreassen:2021zzk,Baron:2021vvl,Howard:2021pos,Vandegar:2020yvw,1800956,Zech2003BinningFreeUB,Lindemann:1995ut,Martschei:2012pr,Glazov:2017vni,Gagunashvili:2010zw,Bellagente:2019uyp,Datta:2018mwd,Andreassen:2019cjw,Mieskolainen:2018fhf}
\\\textit{This is the task of removing detector distortions. In contrast to parameter estimation, the goal is not to infer model parameters, but instead, the undistorted phase space probability density. This is often also called deconvolution.}
\item \textbf{Domain adaptation}~\cite{Glazier:2024ogg,Kelleher:2024jsh,Kelleher:2024rmb,Zhao:2024ely,Algren:2023qnb,Schreck:2023pzs,Camaiani:2022kul,Nachman:2021opi,Diefenbacher:2020rna,Cranmer:2015bka,Andreassen:2019nnm,Rogozhnikov:2016bdp}
\\\textit{Morphing simulations to look like data is a form of domain adaptation.}
\item \textbf{BSM}~\cite{Heimel:2024drk,Maselek:2024qyp,Yang:2024bqw,Florez:2024lrr,Saito:2024fmr,Schofbeck:2024zjo,Hammad:2024hhm,Ahmed:2024uaz,Choudhury:2024mox,Baruah:2024gwy,Ahmed:2024oxg,Catena:2024fjn,Bhattacharya:2024sxl,vanBeekveld:2024cby,Barman:2024xlc,Romao:2024gjx,Arganda:2023qni,Franz:2023gic,Mandal:2023mck,Chhibra:2023tyf,vanBeekveld:2023ney,Dennis:2023kfe,Anisha:2023xmh,Castro:2022zpq,GomezAmbrosio:2022mpm,deSouza:2022uhk,Romao:2020ojy,Brehmer:2019xox,Brehmer:2018hga,Brehmer:2018eca,Brehmer:2018kdj,Hollingsworth:2020kjg,Andreassen:2020nkr}
\\\textit{This category is for parameter estimation when the parameter is the signal strength of new physics.}
\item \textbf{Differentiable Simulation}~\cite{Heller:2024onk,Chung:2024vfg,Heimel:2024wph,BarhamAlzas:2024ggt,Smith:2023ssh,Aehle:2023wwi,Kagan:2023gxz,Shenoy:2023ros,Napolitano:2023jhg,Lei:2022dvn,Nachman:2022jbj,MODE:2022znx,Heinrich:2022xfa}
\\\textit{Coding up a simulation using a differentiable programming framework such as TensorFlow, PyTorch, or JAX.}
\end{itemize}
\item \textbf{Uncertainty Quantification}
\\\textit{Estimating and mitigating uncertainty is essential for the successful deployment of machine learning methods in high energy physics. }
\begin{itemize}
\item \textbf{Interpretability}~\cite{Kriesten:2024are,Gavrikov:2024rso,Wilkinson:2024xva,Ngairangbam:2023cps,Mengel:2023mnw,Roy:2022gge,Khot:2022aky,Grojean:2022mef,Anzalone:2022hrt,Bradshaw:2022qev,Mokhtar:2021bkf,Collins:2021pld,Romero:2021qlf,Grojean:2020ech,Agarwal:2020fpt,Diefenbacher:2019ezd,Chang:2017kvc,deOliveira:2015xxd}
\\\textit{Machine learning methods that are interpretable may be more robust and thus less susceptible to various sources of uncertainty.}
\item \textbf{Estimation}~\cite{Panahi:2024sfb,Bieringer:2024nbc,Dickinson:2023yes,Golutvin:2023fle,Koh:2023wst,Cheung:2022dil,Bellagente:2021yyh,Barnard:2016qma,Nachman:2019yfl,Nachman:2019dol}
\\\textit{A first step in reducing uncertainties is estimating their size.}
\item \textbf{Mitigation}~\cite{Stein:2022nvf,Araz:2021wqm,Louppe:2016ylz,Englert:2018cfo,Estrade:DLPS2017}
\\\textit{This category is for proposals to reduce uncertainty.}
\item \textbf{Uncertainty- and inference-aware learning}~\cite{Layer:2023lwi,Simpson:2022suz,Abudinen:2021qpc,Ghosh:2021roe,Wunsch:2020iuh,deCastro:2018mgh,Bollweg:2019skg,Caron:2019xkx}
\\\textit{The usual path for inference is that a machine learning method is trained for a nominal setup. Uncertainties are then propagated in the usual way. This is suboptimal and so there are multiple proposals for incorporating uncertainties into the learning to get as close to making the final statistical test the target of the machine learning as possible.}
\end{itemize}
\item \textbf{Formal Theory and ML}
\\\textit{ML can also be utilized in formal theory.}
\begin{itemize}
\item Theory and physics for ML~\cite{Zhang:2024mcu,Halverson:2023ndu,Demirtas:2023fir,Kumar:2023hlu,Zuniga-Galindo:2023uwp,Banta:2023kqe,Zuniga-Galindo:2023hty,Erbin:2022lls}
\item ML for theory~\cite{Kawai:2024pws,Butbaia:2024xgj,Ek:2024fgd,Bhat:2024agd,Capuozzo:2024vdw,Halverson:2024axc,Bodendorfer:2024egw,Cheung:2024svk,Dao:2024zab,Gukov:2024opc,LopesCardoso:2024tol,Keita:2024skh,Hou:2024vtx,Balduf:2024gvv,Bea:2024xgv,Orman:2024mpw,Hashimoto:2024aga,Lanza:2024mqp,Gukov:2024buj,Berman:2024pax,Constantin:2024yxh,Ishiguro:2023hcv,Hirst:2023kdl,Erbin:2023ncy,Lanza:2023vee,Matchev:2023mii,Halverson:2023ndu,Choi:2023rqg,Alawadhi:2023gxa,Wojcik:2023usm,Seong:2023njx,Gnech:2023prs,Mizera:2023bsw,Cotler:2023lem,Dersy:2023job,Forestano:2023ijh,Dorrill:2023vox,Lal:2023dkj,He:2023csq,Cheung:2022itk,Chen:2022jwd,Escalante-Notario:2022fik,Gerdes:2022nzr,Erbin:2022rgx,Berglund:2022gvm}
\end{itemize}
\item \textbf{Experimental results}
\\\textit{This section is incomplete as there are many results that directly and indirectly (e.g. via flavor tagging) use modern machine learning techniques. We will try to highlight experimental results that use deep learning in a critical way for the final analysis sensitivity.}
\begin{itemize}
\item Performance studies~\cite{Kara:2024xkk,Karwowska:2024xqy,Palo:2023xnr,ATLAS:2023zca,Gronroos:2023qff,Jiang:2022zho,NEOS-II:2022mov,Yang:2022dwu,CMS:2022prd}
\item Searches and measurements where ML reconstruction is a core component~\cite{ATLAS:2024mrr,BESIII:2024mgg,CMS:2024xzb,ATLAS:2024rua,CMS:2024ddc,CALICE:2024jke,MicroBooNE:2024zhz,ATLAS:2024xxl,CMS:2024zqs,Belle-II:2024vvr,ATLAS:2024itc,CMS:2024vjn,ATLAS:2024auw,CMS:2024fkb,ATLAS:2024fdw,ATLAS:2024ett,CMS:2024trg,ATLAS:2024rcx,Vourliotis:2024bem,BOREXINO:2023pcv,Akar:2023puf,Tung:2023lkv,Belfkir:2023vpo,Dutta:2023jbz,Gravili:2023hbp,NOvA:2023uxq,ATLAS:2023dnm,ATLAS:2023sbu,ATLAS:2023bzb,ATLAS:2023qdu,ATLAS:2023vxg,ATLAS:2023hbp,ATLAS:2023mcc,CMS:2022wjc,Manganelli:2022whv,Tran:2022ago,Li:2022gpb,CMS:2022fxs,CMS:2022idi,ATLAS:2022ihe,MicroBooNE:2021jwr,MicroBooNE:2021nxr,CMS:2019dqq,Keck:2018lcd}
\item Final analysis discriminant for searches~\cite{Manganelli:2022whv,Sirunyan:2020hwz,collaboration2020dijet,Aad:2020hzm,Aad:2019yxi}
\item Measurements using deep learning directly (not through object reconstruction)~\cite{H1:2023fzk,H1:2021wkz}
\end{itemize}
\end{itemize}
\clearpage
\flushbottom
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% References
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\bibliographystyle{uiuchept}
\bibliographystyle{JHEP}
\bibliography{HEPML}
\end{document}