episode fixes
ssekmen committed Aug 1, 2024
1 parent e270e31 commit 3f0876a
Showing 4 changed files with 13 additions and 12 deletions.
6 changes: 3 additions & 3 deletions episodes/01-statinference.md
@@ -65,7 +65,7 @@ Statistical inference is the last step of an analysis and plays a crucial role i

**Statistical model** is the mathematical framework used to describe and make inferences about the underlying processes that generate observed data. It encodes the probabilistic dependence of the observed quantities (i.e. data) on parameters of the model. These parameters are not directly observable but can be inferred from experimental data. They include

- - **parameters of interest (POI), $\vec{\mu}$:** The quantities we are interested in estimating or testing. Examples are cross section, signal strength, resonance mass, ...
+ - **parameters of interest (POI), $\vec{\mu}$:** The quantities we are interested in estimating or testing. Examples are cross section, signal strength modifier, resonance mass, ...
- **nuisance parameters, $\vec{\nu}$:** parameters that are not of direct interest, but required to explain data. These could be uncertainties of experimental or theoretical origin, such as detector effects, background measurements, lumi calibration, cross-section calculation.
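To make the distinction concrete, here is a minimal sketch (with invented yields, not numbers from the lesson) of a single-bin counting likelihood in which $\mu$ is the POI and $\nu$ is a nuisance parameter with a Gaussian constraint:

```python
import math

def log_poisson(n, lam):
    # log of Poisson(n | lam); the constant -log(n!) term is dropped
    return n * math.log(lam) - lam

def log_likelihood(n_obs, mu, nu, s=10.0, b=50.0):
    # mu: POI (e.g. a signal strength modifier); nu: nuisance parameter
    # s, b: illustrative signal and background yields (assumptions)
    lam = mu * s + b * (1.0 + 0.1 * nu)  # assume a 10% background uncertainty
    constraint = -0.5 * nu**2            # unit-Gaussian constraint on nu
    return log_poisson(n_obs, lam) + constraint
```

Inference then proceeds by maximizing this likelihood over $\mu$ and $\nu$, or by profiling $\nu$ away.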

Data are also partitioned into two:
@@ -149,13 +149,13 @@ Let's give a concrete example for luminosity. Imagine a counting analysis, where

$$n_{exp} = \mu \rm{\sigma_{sig}^{eff}} L + \rm{\sigma_{bg}^{eff}} L$$

- where $\mu$ is the signal strength, $\rm{\sigma_{sig}^{eff}}$ and $\rm{\sigma_{bg}^{eff}}$ are signal and background effective cross sections and $L$ is the integrated luminosity. Suppose that, in a different study, we have measured that there is a 2.5% uncertainty on luminosity, which would directly effect the expected number of events:
+ where $\mu$ is the signal strength modifier, $\rm{\sigma_{sig}^{eff}}$ and $\rm{\sigma_{bg}^{eff}}$ are signal and background effective cross sections and $L$ is the integrated luminosity. Suppose that, in a different study, we have measured that there is a 2.5% uncertainty on luminosity, which would directly affect the expected number of events:

$$L \rightarrow L(1 + 0.025)^\nu$$

When $\nu = 0$, nothing changes in $L$, and consequently in $n_{exp}$. When $\nu = \pm 1$, we get the $\pm 1\sigma$ (2.5%) variation. We apply a Gaussian constraint on $\nu$ as

- $$\pi(\nu_0, \nu) = \pi(0, \nu) e^{-\frac{1}{2}\nu^2}$$
+ $$\pi(\nu_0 | \nu) = \pi(0 | \nu) \propto e^{-\frac{1}{2}\nu^2}$$

Hence, the luminosity uncertainty enters the model as a **log-normal** term: $\nu$ itself is normally distributed, so the multiplicative factor $(1.025)^\nu$ is log-normally distributed.
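A small numerical sketch of this scaling (the cross sections and luminosity below are made-up values; $\kappa = 1.025$ encodes the 2.5% uncertainty):

```python
def n_exp(mu, nu, sigma_sig=2.0, sigma_bg=10.0, lumi=100.0, kappa=1.025):
    # sigma_sig, sigma_bg: illustrative effective cross sections in pb
    # lumi: illustrative integrated luminosity in 1/pb
    L = lumi * kappa**nu                  # L -> L * (1.025)^nu
    return mu * sigma_sig * L + sigma_bg * L

nominal = n_exp(1.0, 0.0)   # nu = 0: 2*100 + 10*100 = 1200 events
up_1sig = n_exp(1.0, 1.0)   # nu = +1: everything scaled by 1.025
```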

5 changes: 3 additions & 2 deletions episodes/03-limits.md
@@ -26,7 +26,7 @@ exercises: 0

**Observed limits:** These are limits derived directly from the experimental data. They represent the actual constraint on the parameter of interest based on the measurements taken during the experiment.

- **Expected limits:** These are limits are based on simulated data assuming no new phenomena exist (i.e., that the null hypothesis is true). They provide a benchmark for comparing the observed limits to what would be expected if only known physics were at play.
+ **Expected limits:** These are limits based on simulated data assuming no new phenomena exist (i.e., that the null hypothesis is true). They provide a benchmark for comparing the observed limits to what would be expected if only known physics were at play.

## Calculating limits

@@ -38,10 +38,11 @@ The process of calculating limits typically involves the following steps:
- **Null hypothesis (or background-only hypothesis) ($H_0$)**: This hypothesis assumes that there is no new physics, meaning the data can be fully explained by the standard model or another established theory.
- **Alternative hypothesis or signal+background hypothesis ($H_1$)**: This hypothesis posits the presence of new physics, implying deviations from the predictions of the null hypothesis.
- **Test statistic**: Calculate a test statistic, such as the profile likelihood ratio, which compares how well the data fit under both $H_0$ and $H_1$. The profile likelihood ratio is defined as: $$\lambda(\mu) = \frac{\mathcal{L}(\mu, \hat{\hat{\nu}})}{\mathcal{L}(\hat{\mu}, \hat{\nu})}$$
- where $\mathcal{L}$ is the likelihood function, $\mu$ and $\nu$ represent the parameters of interest and nuisance parameters, $\hat{\mu}$ and $\hat{\nu}$ are the best-fit parameters, and $\hat{\hat{\nu}}$ is the conditional maximum likelihood estimator of the nuisance parameters given $\mu$. Note that in the current LHC analyses, we use more complex test statistics such as the LHC-style test statistic. However, despite the added complexity, the main idea is the same. The test statistic is evaluated for observed data or pseudo-data
+ where $\mathcal{L}$ is the likelihood function, $\mu$ and $\nu$ represent the parameters of interest and nuisance parameters, $\hat{\mu}$ and $\hat{\nu}$ are the best-fit parameters, and $\hat{\hat{\nu}}$ (or $\hat{\nu}(\mu)$) is the conditional maximum likelihood estimator of the nuisance parameters given $\mu$. Note that in the current LHC analyses, we use more complex test statistics such as the LHC-style test statistic. However, despite the added complexity, the main idea is the same. The test statistic is evaluated for observed data or pseudo-data.
- **p-value**: Determine the p-value, which quantifies the probability of obtaining data as extreme as observed under the null hypothesis. A small p-value indicates that the null hypothesis is unlikely.
- **Confidence level**: Set a confidence level (e.g., 95%) to determine the exclusion limits. The confidence level represents the probability that the true parameter values lie within the calculated limits if the experiment were repeated many times.


3. **Calculate limits**: The p-values for the background-only and signal+background hypotheses are combined in a certain way to obtain limits. At the LHC, we use the so-called **$\mathrm{CL_s}$** quantity.

- **Expected limits:** Obtained by comparing observed data with 1) signal MC + estimated BG and 2) only estimated BG. Observed limits check the consistency of the observation with the signal + BG hypothesis and compare it to the BG-only hypothesis.
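As a rough toy-Monte-Carlo illustration of the $\mathrm{CL_s}$ idea for a single counting experiment (this is *not* how Combine computes it: the test statistic here is simply the observed count, and the yields are invented):

```python
import math, random

def sample_poisson(rng, lam):
    # Knuth's multiplicative Poisson sampler; adequate for the small means here
    target, k, p = math.exp(-lam), 0, 1.0
    while p > target:
        k += 1
        p *= rng.random()
    return k - 1

def cls_toy(n_obs, s=10.0, b=50.0, mu=1.0, n_toys=5000, seed=42):
    # CLs = CL_{s+b} / CL_b, where "as or more background-like" means a toy
    # count <= n_obs; s and b are illustrative signal/background yields
    rng = random.Random(seed)
    cl_sb = sum(sample_poisson(rng, mu * s + b) <= n_obs
                for _ in range(n_toys)) / n_toys   # P(n <= n_obs | s+b)
    cl_b = sum(sample_poisson(rng, b) <= n_obs
               for _ in range(n_toys)) / n_toys    # P(n <= n_obs | b)
    return cl_sb / cl_b
```

For $n_{obs} = 50$ with $b = 50$ and $\mu s = 10$, $\mathrm{CL_s}$ comes out well below one, reflecting that the data show no excess over background; a signal point is excluded at 95% CL when $\mathrm{CL_s} < 0.05$.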
4 changes: 2 additions & 2 deletions episodes/04-combine.md
@@ -42,7 +42,7 @@ Datacard includes information such as

Combine supports building models for counting analyses, template shape analyses and parametric shape analyses. Though the main datacard syntax is similar for these three cases, there are minor differences reflecting the model input. For example, in the case of a template shape model, one needs to specify a ROOT file with input histograms, and in the case of a parametric shape model, one needs to specify the process probability distribution functions.

- Constructing a datacard is usually the level most users input information to Combine. However, there are some cases where the statistical model requires modifications. An example case is where we need a model with multiple parameters of interest associated with different signal processes (e.g. measurement of signal strengths for two different Higgs production channels, gluon-gluon fusion and vector boson fusion). Combine also allows to build custom models by introducing modified model classes.
+ Constructing a datacard is usually the level at which most users input information to Combine. However, there are some cases where the statistical model requires modifications. An example is a model with multiple parameters of interest associated with different signal processes (e.g. measurement of signal strength modifiers for two different Higgs production channels, gluon-gluon fusion and vector boson fusion). Combine also allows users to build custom models by introducing modified model classes.
Combine scales well with model complexity, and therefore is a powerful tool for combining a large number of analyses.
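For a flavour of the syntax, a minimal counting-experiment datacard might look like this (channel name, yields, and the single `lnN` nuisance are all illustrative):

```text
imax 1  number of channels
jmax 1  number of backgrounds
kmax 1  number of nuisance parameters
------------
bin          ch1
observation  52
------------
bin          ch1     ch1
process      signal  background
process      0       1
rate         10      50
------------
lumi   lnN   1.025   1.025
```

Running `combine -M AsymptoticLimits` on such a card returns limits on the signal strength modifier `r`.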


@@ -107,6 +107,6 @@ In December 2023, the CMS Collaboration took the decision to release statistical models

The first statistical model published is that for the Higgs boson discovery. The model consists of the Run 1 combination of 5 main Higgs channels ([CMS-HIG-12-028](https://cms-results.web.cern.ch/cms-results/public-results/publications/HIG-12-028/index.html)). The model can be found in [this link](https://repository.cern/records/c2948-e8875).

- You can download the model into the Combine container and see the discovery for yourself. The commands are available to combine channels, calculate the significance, measure the signal strength and build a model as Higgs-vector boson, Higgs-fermion coupling modifiers as POIs.
+ You can download the model into the Combine container and see the discovery for yourself. Commands are available to combine channels, calculate the significance, measure the signal strength modifier, and build a model with Higgs-vector boson and Higgs-fermion coupling modifiers as POIs.

More models are on their way to becoming public soon!
10 changes: 5 additions & 5 deletions episodes/05-challenge.md
@@ -22,7 +22,7 @@ exercises: 30

:::::::::::

- The goal of this exercise is to use the `Combine` tool to calculate limits from the results of the $Z'$ search studied in the previous exercises. We will use the `Zprime_hists_FULL.root` file generated during the [Uncertainties challenge](https://cms-opendata-workshop.github.io/workshop2024-lesson-uncertainties/instructor/05-challenge.html). We will build various datacards from it, add systematics, calculate limits on the signal strength and understand the output.
+ The goal of this exercise is to use the `Combine` tool to calculate limits from the results of the $Z'$ search studied in the previous exercises. We will use the `Zprime_hists_FULL.root` file generated during the [Uncertainties challenge](https://cms-opendata-workshop.github.io/workshop2024-lesson-uncertainties/instructor/05-challenge.html). We will build various datacards from it, add systematics, calculate limits on the signal strength modifier and understand the output.

:::::::: prereq

@@ -67,13 +67,13 @@ python writecountdatacard.py

:::::::::::::::::::::::::::::::::::::::::::::::

- Now run Combine over this datacard to obtain the limits on our parameter of interest, signal strength, with the simple `AsymptoticLimits` option:
+ Now run Combine over this datacard to obtain the limits on our parameter of interest, the signal strength modifier, with the simple `AsymptoticLimits` option:

```bash
combine -M AsymptoticLimits datacard_count.txt
```

- You will see some error messages concerning the computation of the observed limit, arising from numerical stability issues. In order to avoid this, let's rerun by limiting the signal strength to be maximum 2:
+ You will see some error messages concerning the computation of the observed limit, arising from numerical stability issues. In order to avoid this, let's rerun by limiting the signal strength modifier to a maximum of 2:

```bash
combine -M AsymptoticLimits datacard_count.txt --rMax=2
```

@@ -217,14 +217,14 @@ Run Combine with this datacard. How do the limits compare with respect to the c

## Limits on cross section

- So far we have worked with limits on the signal strength. How can we compute the limits on cross section?
+ So far we have worked with limits on the signal strength modifier. How can we compute the limits on cross section?
Can you calculate the upper limit on $Z'$ cross section for this model?

:::::::::::::::::::::::: solution

## Solution

- We can multiply the signal strength limit with the theoretically predicted cross section for the signal process.
+ We can multiply the signal strength modifier limit by the theoretically predicted cross section for the signal process.
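As a sketch, with placeholder numbers rather than results from this exercise:

```python
def xsec_limit(r_limit, sigma_theory):
    # Upper limit on the cross section, in the same units as sigma_theory,
    # given the upper limit r_limit on the signal strength modifier
    return r_limit * sigma_theory

# e.g. if Combine reported r < 0.5 and theory predicted sigma = 4.0 pb:
print(xsec_limit(0.5, 4.0))  # 2.0
```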

:::::::::::::::::::::::::::::::::

