In this talk we'll go through the following topics:
Cutting Through the Confusion of the Confusion Matrix: Techniques for Choosing Cutoffs in Order to Translate Predictions into Binary Decisions: The Probability Threshold vs. the PPCR Approach
ROC Curve: What it means and why I don't like it 🤔
Lift (& Gains) Curve: What is the Lift of the prediction model?
Precision-Recall Curve: a short description (not a big fan of this one either) 🤔
Code for Performance Metrics and Curves
All interactive plots in this presentation were created with rtichoke (I am the author).
You are also invited to explore the rtichoke blog for reproducible examples and some theory.
Motivation
Why use performance metrics? 🔥🔥🔥
Comparing different candidate models.
Selecting features.
Evaluating whether the prediction model will do more harm than good.
Categories of Performance Metrics and Curves
Discrimination: The model's ability to separate between events and non-events.
Calibration: Agreement between predicted probabilities and the observed outcomes.
Utility: The usefulness of the model in terms of decision-making.
Cutting Through the Confusion of the Confusion Matrix
Decision Tree
True Positives
Infected and Predicted as Infected - Good
💊 🤢
False Positives
Not-Infected and Predicted as Infected - Bad
💊 🤨
False Negatives
Infected and Predicted as Not-Infected - Bad
🤢
True Negatives
Not-Infected and Predicted as Not-Infected - Good
🤨
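A minimal base-R sketch (my own toy example, not rtichoke code) counting the four cells of the confusion matrix from observed outcomes and predicted classes:

```r
# Observed outcomes (1 = infected) and predicted classes (1 = predicted infected / treat)
y     <- c(1, 0, 1, 0, 0, 1)
y_hat <- c(1, 1, 0, 0, 0, 1)

tp <- sum(y == 1 & y_hat == 1)   # infected,     predicted infected     - good
fp <- sum(y == 0 & y_hat == 1)   # not infected, predicted infected     - bad
fn <- sum(y == 1 & y_hat == 0)   # infected,     predicted not infected - bad
tn <- sum(y == 0 & y_hat == 0)   # not infected, predicted not infected - good
c(TP = tp, FP = fp, FN = fn, TN = tn)
#> TP FP FN TN
#>  2  1  1  2
```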
Probability Threshold:
Most performance metrics are estimated by using a probability threshold in order to classify each predicted probability as Predicted Negative (do not treat) or Predicted Positive (treat 💊).
This type of dichotomization is used when the intervention carries a potential risk, so there is a trade-off between the risk of the intervention and the risk of the outcome.
p̂  0.11  0.15  0.18  0.29  0.31  0.33  0.45  0.47  0.63  0.72
Y   0     0     0     0     1     0     1     0     1     1
    🤨    🤨    🤨    🤨    🤢    🤨    🤢    🤨    🤢    🤢
Low Probability Threshold:
A low probability threshold means that I'm worried about the outcome:
I'm worried about Prostate Cancer
I'm worried about Heart Disease
I'm worried about Infection 🤢
Probability Threshold of 0.25

p̂  0.11  0.15  0.18  0.29   0.31   0.33   0.45   0.47   0.63   0.72
Y   0     0     0     0      1      0      1      0      1      1
Ŷ   0     0     0     1      1      1      1      1      1      1
    🤨    🤨    🤨    💊🤨   💊🤢   💊🤨   💊🤢   💊🤨   💊🤢   💊🤢
    TN    TN    TN    FP     TP     FP     TP     FP     TP     TP
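A short base-R sketch (not rtichoke code) reproducing the table above: classify the ten patients with a probability threshold of 0.25 and derive the basic metrics from the four cells:

```r
p_hat <- c(0.11, 0.15, 0.18, 0.29, 0.31, 0.33, 0.45, 0.47, 0.63, 0.72)
y     <- c(0,    0,    0,    0,    1,    0,    1,    0,    1,    1)

y_hat <- as.integer(p_hat >= 0.25)   # 0 0 0 1 1 1 1 1 1 1
tp <- sum(y_hat == 1 & y == 1)       # 4
fp <- sum(y_hat == 1 & y == 0)       # 3
fn <- sum(y_hat == 0 & y == 1)       # 0
tn <- sum(y_hat == 0 & y == 0)       # 3

sensitivity <- tp / (tp + fn)        # 1.00  (True Positive Rate / Recall)
specificity <- tn / (tn + fp)        # 0.50
ppv         <- tp / (tp + fp)        # 0.57  (Precision)
ppcr        <- mean(y_hat)           # 0.70  (share of predicted positives)
```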
High Probability Threshold:
A high probability threshold means that I'm worried about the intervention.
The PPCR Approach:
Sometimes we classify each observation according to its risk ranking, in order to prioritize high-risk patients regardless of their absolute risk.
The implied assumption is that the highest-risk patients might gain the most benefit from the treatment, and that the treatment does not carry a significant potential risk.
This type of dichotomization is used when the organization faces a resource constraint. In healthcare it is also called a Risk Percentile. (A code sketch follows the table below.)
PPCR of 0.1

p̂  0.11  0.15  0.18  0.29  0.31  0.33  0.45  0.47  0.63  0.72
R   10    9     8     7     6     5     4     3     2     1
Y   0     0     0     0     1     0     1     0     1     1
Ŷ   0     0     0     0     0     0     0     0     0     1
    🤨    🤨    🤨    🤨    🤢    🤨    🤢    🤨    🤢    💊🤢
    TN    TN    TN    TN    FN    TN    FN    TN    FN    TP
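The same ten patients, dichotomized by risk ranking instead of by an absolute threshold (base-R sketch, not rtichoke code): with PPCR = 0.1 only the top 10% highest-risk patients are treated, here a single patient:

```r
p_hat <- c(0.11, 0.15, 0.18, 0.29, 0.31, 0.33, 0.45, 0.47, 0.63, 0.72)
y     <- c(0,    0,    0,    0,    1,    0,    1,    0,    1,    1)

ppcr      <- 0.1
n_treat   <- ceiling(ppcr * length(p_hat))         # how many patients we can treat
risk_rank <- rank(-p_hat, ties.method = "first")   # 1 = highest predicted risk
y_hat     <- as.integer(risk_rank <= n_treat)      # 0 0 0 0 0 0 0 0 0 1

sum(y_hat == 1 & y == 1)   # TP = 1
sum(y_hat == 0 & y == 1)   # FN = 3 : real positives left untreated
```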
Discrimination - Performance Curves

Curve              y axis        x axis
ROC                Sensitivity   1 - Specificity
Lift               Lift          PPCR
Precision-Recall   PPV           Sensitivity
Gains              Sensitivity   PPCR
ROC Curve
The most famous form of performance metrics visualization.
Displays Sensitivity (also known as True Positive Rate or Recall) on the y axis.
Displays 1 - Specificity (also known as False Positive Rate) on the x axis.
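A base-R sketch of how the ROC curve is traced by sweeping the probability threshold (the interactive versions in this talk are drawn with rtichoke):

```r
p_hat <- c(0.11, 0.15, 0.18, 0.29, 0.31, 0.33, 0.45, 0.47, 0.63, 0.72)
y     <- c(0,    0,    0,    0,    1,    0,    1,    0,    1,    1)

thresholds <- sort(unique(c(0, p_hat, 1)), decreasing = TRUE)
roc <- t(sapply(thresholds, function(thr) {
  y_hat <- as.integer(p_hat >= thr)
  c(fpr = sum(y_hat == 1 & y == 0) / sum(y == 0),   # 1 - Specificity (x axis)
    tpr = sum(y_hat == 1 & y == 1) / sum(y == 1))   # Sensitivity     (y axis)
}))
plot(roc[, "fpr"], roc[, "tpr"], type = "s",
     xlab = "1 - Specificity", ylab = "Sensitivity")
abline(0, 1, lty = 2)   # reference line for a random guess
```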
Why I don't like the ROC Curve 🤔
Why 1 - Specificity? Why not just Specificity?
Honestly, I haven't found an explanation anywhere of why 1 - Specificity is more insightful than plain Specificity.
Why I don't like the ROC Curve 🤔
Sensitivity and Specificity do not respect the flow of time:
They condition on the outcome, which is only revealed later. What we actually know at prediction time, and can condition on, is the model's output: the number of Predicted Positives and the number of Predicted Negatives.
Why I don't like the ROC Curve 🤔
You don't care about the AUROC, you care about the c-statistic.
Generally speaking, more area under a curve built from two "good" performance metrics means a better model. Beyond that there is no context, and performance metrics without context might lead to ambiguity and bad decisions.
Another curve, Precision-Recall, is made of PPV (Precision) and Sensitivity (Recall). How much PRAUC is enough?
Why not calculate a GAINS-AUC? Or the AUC of any other combination of two good performance metrics? With Sensitivity, Specificity, NPV, and PPV we can get 6 different AUC metrics. Do they provide any meaningful insight beyond a vague "the more the better"?
What is the AUROC of the following Models?
High ink-to-information ratio:
One might suggest that the visual aspect is useful, but as human beings we are really bad at interpreting round things (that's why pie charts are considered bad practice).
Yet the AUROC is valuable because of its equivalence to the c-statistic, and it might provide good intuition about the performance of the model.
If you randomly take one event and one non-event, the probability that the event was assigned the higher predicted probability is exactly the AUROC.
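This interpretation translates directly into code; a base-R sketch computing the AUROC as the share of concordant event / non-event pairs:

```r
p_hat <- c(0.11, 0.15, 0.18, 0.29, 0.31, 0.33, 0.45, 0.47, 0.63, 0.72)
y     <- c(0,    0,    0,    0,    1,    0,    1,    0,    1,    1)

# All pairs of one event and one non-event
pairs <- expand.grid(event = p_hat[y == 1], non_event = p_hat[y == 0])
auroc <- mean(pairs$event > pairs$non_event) +
         0.5 * mean(pairs$event == pairs$non_event)   # ties count as half
auroc   # 0.875 for the ten patients above, identical to the area under the ROC curve
```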
Gains Curve
The Gains Curve displays Sensitivity on the y axis and PPCR on the x axis.
It shows the Sensitivity achieved for a given PPCR.
Reference line for a random guess: the Sensitivity is equal to the proportion of predicted positives (the PPCR).
Reference line for a perfect prediction: all Predicted Positives are real positives until there are no more real positives (from PPCR = Prevalence onward, Sensitivity = 1).
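A base-R sketch of the Gains curve (with the Lift values computed along the way), again for the ten patients used throughout:

```r
p_hat <- c(0.11, 0.15, 0.18, 0.29, 0.31, 0.33, 0.45, 0.47, 0.63, 0.72)
y     <- c(0,    0,    0,    0,    1,    0,    1,    0,    1,    1)

ord   <- order(p_hat, decreasing = TRUE)     # treat the highest-risk patients first
gains <- data.frame(
  ppcr        = seq_along(y) / length(y),    # x axis: share of patients treated
  sensitivity = cumsum(y[ord]) / sum(y)      # y axis: share of real positives captured
)
gains$lift <- gains$sensitivity / gains$ppcr # y axis of the Lift curve

plot(gains$ppcr, gains$sensitivity, type = "b",
     xlab = "PPCR", ylab = "Sensitivity")
abline(0, 1, lty = 2)   # random guess: Sensitivity equals the PPCR
```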
Precision-Recall Curve
The Precision-Recall Curve displays PPV on the y axis and Sensitivity on the x axis.
The reference line stands for a random guess: the PPV equals the Prevalence, while the Sensitivity depends on the Probability Threshold or the PPCR.
The curve is not defined if there are no Predicted Positives (the probability threshold is too high, or PPCR = 0).
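And a base-R sketch of the Precision-Recall pairs over all thresholds for the same ten patients:

```r
p_hat <- c(0.11, 0.15, 0.18, 0.29, 0.31, 0.33, 0.45, 0.47, 0.63, 0.72)
y     <- c(0,    0,    0,    0,    1,    0,    1,    0,    1,    1)

# Use the observed probabilities as thresholds, so there is always
# at least one predicted positive and PPV stays defined.
pr <- t(sapply(sort(unique(p_hat)), function(thr) {
  y_hat <- as.integer(p_hat >= thr)
  c(recall    = sum(y_hat == 1 & y == 1) / sum(y == 1),   # Sensitivity (x axis)
    precision = sum(y_hat == 1 & y == 1) / sum(y_hat))    # PPV         (y axis)
}))
plot(pr[, "recall"], pr[, "precision"], type = "b",
     xlab = "Sensitivity (Recall)", ylab = "PPV (Precision)", ylim = c(0, 1))
abline(h = mean(y), lty = 2)   # random guess: PPV equals the Prevalence
```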
Main Takeaways
Think carefully about the problem you are trying to solve when you translate probabilities into binary predictions:
Treatment harm? Use a Probability Threshold.
Resource constraint? Use PPCR.
Watch out for unusual interpretations when examining Sensitivity, Specificity, or the ROC curve: they all move backward in time.
The AUROC is equivalent to the C-Index in the binary case. It might provide some intuition about the performance of the model, but don't take the numbers too literally.
The Lift Curve is useful when facing a resource constraint.