One of the major tasks of the library is evaluating the quality of the models and scoring the AutoML objectives.
To that end, metrics are needed for every supported problem type.
One of these is survival analysis. The library should offer an API for computing any of these metrics, testing the predicted values against the ground truth.
The metrics should be reported at each evaluation time horizon and aggregated across horizons (mean, std).
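The per-horizon reporting with mean/std aggregation could be sketched roughly as below. The Brier score variant here ignores censoring for brevity (a full implementation would reweight censored subjects via IPCW, as in the referenced testers); the function and variable names are illustrative, not the library's API.

```python
import numpy as np

def brier_at_horizon(event_time, surv_prob, horizon):
    """Uncensored Brier score at one horizon: mean squared error between
    the predicted survival probability and the observed survival status.
    (A full implementation would reweight censored subjects via IPCW.)"""
    alive = (event_time > horizon).astype(float)
    return float(np.mean((alive - surv_prob) ** 2))

# Hypothetical data: one predicted survival probability per subject per horizon.
event_time = np.array([1.0, 3.0, 5.0, 7.0])
horizons = [2.0, 4.0, 6.0]
surv_probs = {
    2.0: np.array([0.10, 0.90, 0.95, 0.99]),
    4.0: np.array([0.05, 0.20, 0.80, 0.90]),
    6.0: np.array([0.01, 0.10, 0.30, 0.85]),
}

# Report the metric per horizon, then aggregate across horizons.
per_horizon = {t: brier_at_horizon(event_time, surv_probs[t], t) for t in horizons}
report = {
    "per_horizon": per_horizon,
    "mean": float(np.mean(list(per_horizon.values()))),
    "std": float(np.std(list(per_horizon.values()))),
}
print(report)
```

The same report shape (per-horizon values plus mean/std) would apply to any of the metrics listed below.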
Important metrics to cover here:
- [x] `c_index`: The concordance index (c-index) evaluates the ranking quality of predictions made by a survival algorithm. It is defined as the proportion of concordant pairs divided by the total number of comparable pairs.
- [x] `brier_score`: The Brier score is a strictly proper scoring rule that measures the accuracy of probabilistic predictions.
- [ ] `aucroc`: The area under the receiver operating characteristic curve (ROC AUC), computed from prediction scores.
- [ ] `sensitivity`: Sensitivity (true positive rate) is the probability of a positive test result, conditioned on the individual truly being positive.
- [ ] `specificity`: Specificity (true negative rate) is the probability of a negative test result, conditioned on the individual truly being negative.
- [ ] `PPV`: The positive predictive value (PPV) is the probability that an individual with a positive test result truly has the disease.
- [ ] `NPV`: The negative predictive value (NPV) is the probability that an individual with a negative test result truly does not have the disease.
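For the c-index, the "comparable pairs" definition can be illustrated with a minimal pure-NumPy sketch for right-censored data (Harrell's estimator; an O(n²) loop for clarity, not the library's implementation):

```python
import numpy as np

def concordance_index(event_time, event_observed, risk_score):
    """Harrell's c-index for right-censored data (minimal sketch).

    A pair (i, j) is comparable when the subject with the shorter time
    experienced the event. The pair is concordant when that subject was
    assigned the higher risk score; ties in risk count as half.
    """
    concordant, tied, comparable = 0.0, 0.0, 0
    n = len(event_time)
    for i in range(n):
        if not event_observed[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if event_time[j] > event_time[i]:
                comparable += 1
                if risk_score[i] > risk_score[j]:
                    concordant += 1
                elif risk_score[i] == risk_score[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# Toy example: risk scores perfectly anti-ordered with survival time.
times = np.array([2.0, 4.0, 6.0, 8.0])
events = np.array([1, 1, 0, 1])          # 1 = event observed, 0 = censored
risks = np.array([0.9, 0.7, 0.5, 0.1])
print(concordance_index(times, events, risks))  # → 1.0
```

A value of 0.5 corresponds to random ranking and 1.0 to a perfect ranking; production implementations (e.g. in scikit-survival or lifelines) also handle tied event times and offer IPCW-adjusted variants.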
DrShushen changed the title from "[Evaluation] Add metrics for evaluating survival analysis tasks" to "[Evaluation] Add more metrics for evaluating survival analysis tasks" on May 25, 2023.
DrShushen changed the title from "[Evaluation] Add more metrics for evaluating survival analysis tasks" to "[Enhancement] Evaluation: Add more metrics for evaluating survival analysis tasks" on Sep 13, 2023.
AP reference: https://github.com/vanderschaarlab/autoprognosis/blob/main/src/autoprognosis/utils/tester.py
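The four classification metrics above (sensitivity, specificity, PPV, NPV) all fall out of a single confusion matrix once the survival predictions are thresholded at a given horizon. A self-contained sketch (toy labels, not the library's API):

```python
def classification_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV and NPV from binary labels (sketch).

    y_true: 1 if the event occurred by the horizon, 0 otherwise.
    y_pred: 1 if the thresholded risk prediction is positive, 0 otherwise.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "PPV": tp / (tp + fp),          # positive predictive value
        "NPV": tn / (tn + fn),          # negative predictive value
    }

# Toy labels: event-by-horizon ground truth vs thresholded risk predictions.
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]
print(classification_metrics(y_true, y_pred))
```

Sensitivity and specificity condition on the true status, while PPV and NPV condition on the test result, which is why the latter pair depends on the event prevalence in the evaluation cohort.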