Accuracy, Precision and Recall

Accuracy

Accuracy is the number of correct predictions divided by the total number of predictions.

Accuracy = correct_preds / all_preds

Imagine testing a model on a dataset that consists of 90% dog images and only 10% cat images. What if every prediction it gives you is "dog"? You would easily get 90% accuracy. Even though the accuracy is high, this would still be a poor model.

Accuracy = (True Positives + True Negatives) / (True Positives + False Positives + True Negatives + False Negatives)

Accuracy = (total samples your classifier recognized correctly) / (total samples you classified)
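
To make the dog/cat example concrete, here is a minimal Python sketch; the labels and class counts are just the hypothetical numbers from the example above:

```python
# 90 dog images and 10 cat images, with a model that predicts "dog" for everything.
y_true = ["dog"] * 90 + ["cat"] * 10
y_pred = ["dog"] * 100

# Accuracy = correct predictions / all predictions
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 0.9 -> 90% accuracy even though every single cat was missed
```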

Precision

Precision (P) answers the question: of all the samples we classified as positive, how many are actually positive? In other words, it measures the ability of a classification model to return only relevant instances.

Precision is how many of the returned hits were true positives, i.e. how many of the items found were correct hits.

Precision = (True Positives) / (True Positives + False Positives)

Precision = (True Positives) / (total samples your classifier recognized as positive)

Low precision tells you that there is a high false positive rate. Precision is important when the cost of a false positive is high.
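
As an illustration, here is a small Python sketch of precision computed from raw counts; the counts are made up purely for this example:

```python
# Counts from a hypothetical classifier's predictions on the positive class.
true_positives = 30   # predicted positive and actually positive
false_positives = 10  # predicted positive but actually negative

# Precision = TP / (TP + FP)
precision = true_positives / (true_positives + false_positives)
print(precision)  # 0.75 -> 75% of the samples flagged as positive were correct
```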

Recall

Recall (R) answers the question: of all the actual positive samples, how many did we classify as positive? In other words, it measures the ability of a classification model to identify all relevant instances.

Recall is literally how many of the true positives were recalled (found), i.e. how many of the correct hits were actually retrieved.

Recall = (True Positives) / (True Positives + False Negatives)

Recall = (True Positives) / (total actual positives)

Recall is the same as sensitivity.

Recall is important when the cost of a false negative is high.
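
Likewise, a small Python sketch of recall computed from raw counts; again, the counts are illustrative only:

```python
# Counts for the actual positive samples in a hypothetical test set.
true_positives = 30   # actual positives the model found
false_negatives = 20  # actual positives the model missed

# Recall = TP / (TP + FN)
recall = true_positives / (true_positives + false_negatives)
print(recall)  # 0.6 -> the model recalled 60% of all actual positives
```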

F1 Score

The F1 score is a single metric that combines precision and recall using their harmonic mean. We use the harmonic mean instead of a simple average because it punishes extreme values: a model with very high precision but very low recall (or the other way round) still gets a low F1 score.

F1 = (2 * Precision * Recall) / (Precision + Recall)

The F1 score therefore considers both precision and recall.
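
A minimal Python sketch of the F1 score as the harmonic mean of precision and recall; the values plugged in are the illustrative ones from the sketches above:

```python
# Illustrative precision and recall values from the earlier sketches.
precision = 0.75
recall = 0.6

# F1 = 2 * P * R / (P + R), the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.667 -> pulled toward the lower of the two values
```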
