
Evaluating a Classification Model

There are several metrics we can use to evaluate our model. Some of the most popular ones include the following (a short code sketch follows the list):

  • Precision – the ratio of true positives to the total number of predicted positives.

    • $P = \frac{TP}{TP + FP}$
  • Recall – the ratio of true positives to the total number of actual positives. Also known as the True Positive Rate (TPR).

    • $R = \frac{TP}{TP + FN}$
  • Accuracy – the ratio of correctly predicted samples to the total number of samples.

    • $A = \frac{TP + TN}{TP + TN + FP + FN}$
  • AUC – the Area Under the Curve, specifically the area under the ROC curve. The higher the curve sits, the better the model's predictions. AUC is a composite measure of performance that takes all potential classification thresholds into account; equivalently, it is the probability that the model ranks a random positive example higher than a random negative example.
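
To make these formulas concrete, here is a minimal sketch using scikit-learn (one library choice among several; the labels and scores below are hypothetical values invented purely for illustration):

```python
from sklearn.metrics import (
    precision_score, recall_score, accuracy_score, roc_auc_score
)

# Hypothetical ground-truth labels and model outputs for illustration.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]                     # actual classes
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]                     # thresholded predictions
y_score = [0.1, 0.9, 0.4, 0.3, 0.8, 0.6, 0.7, 0.95]   # predicted probabilities

print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("Accuracy: ", accuracy_score(y_true, y_pred))   # (TP + TN) / total

# AUC is computed from the continuous scores rather than the thresholded
# predictions, since it sweeps over all possible thresholds.
print("AUC:      ", roc_auc_score(y_true, y_score))
```

Note that precision, recall, and accuracy all depend on the chosen decision threshold, while AUC summarizes performance across every threshold at once.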