Fixes FO00039-63: some minor documentation typos.
Fabian Kueppers committed Jul 25, 2023
1 parent cc1a772 commit 10cebd5
Showing 2 changed files with 3 additions and 3 deletions.
2 changes: 1 addition & 1 deletion docs/source/safety-aspects/fairness.rst
@@ -39,7 +39,7 @@ rate (expectation of model predictions w.r.t. the sensitive feature), so that is
model works equally well for various groups. It is stricter than demographic parity because it not only demands
that the model's predictions are not influenced by sensitive group membership but also requires that the groups
have the same rates of false positive and true positive predictions :cite:p:`fairness-Agarwal2018`.
- More formally, it equalized odds can be described by
+ More formally, equalized odds can be described by

.. math::
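As context for the equalized-odds passage in ``fairness.rst`` (not part of this commit): equalized odds holds when every sensitive group sees the same true positive rate and false positive rate. A minimal NumPy sketch with hypothetical labels, assuming binary predictions and a binary sensitive attribute; the ``rates`` helper and all data are illustrative, not part of the documented library:

```python
import numpy as np

# Hypothetical ground truth, model predictions, and sensitive group membership.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def rates(y_true, y_pred):
    """Return (TPR, FPR) for one group of samples."""
    tpr = y_pred[y_true == 1].mean()  # fraction of positives predicted positive
    fpr = y_pred[y_true == 0].mean()  # fraction of negatives predicted positive
    return tpr, fpr

# Equalized odds demands that these pairs match across groups.
for g in np.unique(group):
    mask = group == g
    tpr, fpr = rates(y_true[mask], y_pred[mask])
    print(f"group {g}: TPR={tpr:.2f}, FPR={fpr:.2f}")
```

Demographic parity would only compare the overall positive prediction rate per group; equalized odds is stricter because both error rates must agree.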
4 changes: 2 additions & 2 deletions docs/source/safety-aspects/uncertainty.rst
@@ -11,7 +11,7 @@ Thetis analyzes and evaluates the quality and consistency of the uncertainty tha
AI prediction in most cases (e.g., score or confidence information in classification, "objectness" score in object
detection, or variance estimation in probabilistic regression settings).
The evaluation of uncertainty quality depends on the selected task. We give a brief overview about the mathematical
- background and the used metrics in the following.
+ background and the used metrics in the following section.

Classification
--------------
@@ -103,7 +103,7 @@ The uncertainty evaluation differs from standard classification evaluation in tw
IoU describes to which degree predicted and existing objects need to overlap to be considered as matching. Thus,
all evaluation results are given w.r.t. a certain IoU score.

- Furthermore, recent work have shown that the calibration error might also be position-dependent
+ Furthermore, recent work has shown that the calibration error might also be position-dependent
:cite:p:`uncertainty-Kueppers2020`, :cite:p:`uncertainty-Kueppers2022a`, i.e., the calibration properties of objects located in the center
of an image might differ from objects located at the image boundaries.
Thus, given an object detection model that estimates an object with label :math:`\hat{Y} \in \mathcal{Y}`,
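As context for the calibration discussion in ``uncertainty.rst`` (not part of this commit): a common way to quantify calibration error is a binned expected calibration error (ECE), the confidence-weighted gap between mean confidence and observed accuracy per bin. A minimal sketch under the assumption that confidences and per-sample correctness are available as NumPy arrays; the function name and binning scheme are illustrative:

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Binned expected calibration error: sum over bins of
    (fraction of samples in bin) * |accuracy - mean confidence|."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            total += mask.mean() * gap
    return total
```

A perfectly calibrated model (e.g. 75% accuracy among predictions made with confidence 0.75) yields an ECE of zero; position-dependent variants of this idea additionally condition the bins on the predicted object location.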
